Implementing Gradient Descent in Python, Part 2: Extending for Any Number of Inputs
This is the second tutorial in the series; it extends the implementation so that the GD algorithm can work with any number of inputs in the input layer.
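To give a flavor of that extension, here is a minimal sketch of a forward and backward pass where the single weight becomes a weight vector. The sigmoid activation, squared-error loss, and all variable names here are illustrative assumptions, not necessarily the article's exact code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Any number of inputs: inputs and weights become vectors.
x = np.array([0.1, 0.4, 4.1])   # 3 inputs here, but any length works
target = 0.3
w = np.random.rand(x.shape[0])  # one weight per input
lr = 0.01                       # assumed learning rate

for _ in range(1000):
    # Forward pass: a dot product replaces the scalar multiplication.
    pred = sigmoid(np.dot(w, x))
    # Backward pass: the chain rule now yields one gradient entry per weight.
    grad = 2 * (pred - target) * pred * (1 - pred) * x
    w -= lr * grad
```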
Implementing Gradient Descent in Python, Part 1: The Forward and Backward Pass
In this tutorial, which is Part 1 of the series, we make a warm start by implementing GD for a specific ANN architecture: an input layer with 1 input and an output layer with 1 output.
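As a taste of Part 1's starting point, here is a minimal sketch of GD for that one-input, one-output architecture. The sigmoid activation, squared-error loss, and constants are assumptions for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative single-input, single-output network trained with GD.
x = 0.1               # the single input
target = 0.3          # desired output
w = np.random.rand()  # the single weight
lr = 0.01             # assumed learning rate

for _ in range(1000):
    # Forward pass: weighted input through the activation.
    pred = sigmoid(w * x)
    # Backward pass: chain rule for d(error)/dw with error = (pred - target)**2.
    grad = 2 * (pred - target) * pred * (1 - pred) * x
    w -= lr * grad
```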
Intro to optimization in deep learning: Momentum, RMSProp and Adam
In this post, we take a look at a problem that plagues the training of neural networks: pathological curvature.
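For a flavor of how momentum, the simplest of the three methods, tackles pathological curvature, here is a hedged sketch of the classic momentum update on a toy ill-conditioned quadratic. The loss function and hyperparameter values are assumptions for illustration:

```python
import numpy as np

# Toy ill-conditioned quadratic loss: shallow along w[0], steep along w[1].
def grad(w):
    return np.array([0.02 * w[0], 2.0 * w[1]])

w = np.array([10.0, 10.0])
velocity = np.zeros_like(w)
lr, beta = 0.5, 0.9  # assumed learning rate and momentum coefficient

for _ in range(200):
    # Momentum accumulates past gradients, damping oscillation along the
    # steep direction while building up speed along the shallow one.
    velocity = beta * velocity + grad(w)
    w -= lr * velocity
```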
Intro to optimization in deep learning: Gradient Descent
An in-depth explanation of Gradient Descent and how to avoid the problems of local minima and saddle points.
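For reference, the vanilla gradient descent update that the post builds on is just a repeated step against the gradient; a minimal sketch on a toy function (the function and learning rate are assumptions for illustration):

```python
# Vanilla gradient descent on f(w) = w**2, whose derivative is 2*w.
w = 5.0
lr = 0.1  # assumed learning rate

for _ in range(100):
    grad = 2 * w    # derivative of f at the current point
    w -= lr * grad  # step downhill, against the gradient
# w is now very close to the minimizer at 0.
```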