NLP Reading Group, Dhaka

Tutorial 3 - Optimization in Deep Learning

Toshiba Zaman, Shahad Mahmud, Zannatul Naim Shanto

  1. Gradient descent: batch, stochastic, and mini-batch gradient descent (see the first sketch after this list)
  2. Optimization algorithms: SGD, Momentum, Adagrad, Adadelta, Adam (update rules sketched after this list)
  3. Notebook comparison of the optimization algorithms
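
To make the first topic concrete, here is a minimal NumPy sketch (not the tutorial's notebook; the toy regression data, learning rate, and epoch count are invented for illustration) showing that batch, stochastic, and mini-batch gradient descent differ only in how many examples each parameter update averages over:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy linear-regression problem, for illustration only.
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

def gradient(w, Xb, yb):
    # Gradient of the mean squared error on the batch (Xb, yb).
    return Xb.T @ (Xb @ w - yb) / len(yb)

def gradient_descent(batch_size=None, lr=0.1, epochs=50):
    # batch_size=None -> batch GD; 1 -> stochastic GD; otherwise mini-batch GD.
    w = np.zeros(3)
    n = len(y)
    bs = n if batch_size is None else batch_size
    for _ in range(epochs):
        idx = rng.permutation(n)          # reshuffle the data each epoch
        for start in range(0, n, bs):
            batch = idx[start:start + bs]
            w -= lr * gradient(w, X[batch], y[batch])
    return w

print(gradient_descent())                  # batch gradient descent
print(gradient_descent(batch_size=1))      # stochastic gradient descent
print(gradient_descent(batch_size=16))     # mini-batch gradient descent

All three variants recover roughly the same weights here; they trade off gradient accuracy per step against the number of updates per pass over the data.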
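
For the second topic, the functions below sketch the standard textbook update rules for the optimizers listed above (hyperparameter defaults are common values from the literature, not the speakers' settings):

import numpy as np

def sgd(w, g, state, lr=0.01):
    # Plain SGD; 'state' is unused but kept for a uniform interface.
    return w - lr * g

def momentum(w, g, state, lr=0.01, beta=0.9):
    # Accumulate a velocity that smooths successive gradients.
    state["v"] = beta * state.get("v", 0.0) + g
    return w - lr * state["v"]

def adagrad(w, g, state, lr=0.01, eps=1e-8):
    # Per-parameter step sizes from the accumulated squared gradients.
    state["s"] = state.get("s", 0.0) + g ** 2
    return w - lr * g / (np.sqrt(state["s"]) + eps)

def adadelta(w, g, state, rho=0.95, eps=1e-6):
    # Like Adagrad but with decaying averages, so no global learning rate.
    state["s"] = rho * state.get("s", 0.0) + (1 - rho) * g ** 2
    dx = -np.sqrt(state.get("d", 0.0) + eps) / np.sqrt(state["s"] + eps) * g
    state["d"] = rho * state.get("d", 0.0) + (1 - rho) * dx ** 2
    return w + dx

def adam(w, g, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Momentum plus per-parameter scaling, with bias correction.
    state["t"] = state.get("t", 0) + 1
    state["m"] = b1 * state.get("m", 0.0) + (1 - b1) * g
    state["v"] = b2 * state.get("v", 0.0) + (1 - b2) * g ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Sanity check on the toy objective f(w) = w**2 (gradient 2*w); convergence
# speed differs widely across optimizers, which is what the notebook compares.
for step_fn in (sgd, momentum, adagrad, adadelta, adam):
    w, state = np.array(5.0), {}
    for _ in range(5000):
        w = step_fn(w, 2 * w, state)
    print(step_fn.__name__, float(w))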