Implementing Gradient Descent in Python, Part 4: Applying to Any Number of Neurons
In this tutorial, we extend our gradient descent (GD) implementation to work with a single hidden layer containing any number of neurons.
In Part 1 of the series, we made a warm start by implementing GD for one specific ANN architecture: an input layer with just 1 input and an output layer with 1 output. The second tutorial extended that implementation to allow the GD algorithm to work with any number of inputs in the input layer. In the third part, the Part 2 implementation was extended again so that GD could work with a single hidden layer of 2 neurons. In this part, we generalize the hidden layer to any number of neurons.
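Before diving in, here is a minimal NumPy sketch of the idea this part builds toward: a network with one hidden layer whose size is controlled by a single parameter, trained end to end with GD. Everything in this sketch is an illustrative assumption on our side (the sample values, the sigmoid activation, the squared-error loss, and names such as `n_hidden` are our own choices), not the exact code developed below:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative setup (assumed values, not the article's exact data):
# any number of inputs, a hidden layer of n_hidden neurons, 1 output.
x = np.array([0.1, 0.4, 4.1])   # one training sample with 3 inputs
target = np.array([0.2])        # desired network output
learning_rate = 0.001
n_hidden = 5                    # change this to use any number of hidden neurons

rng = np.random.default_rng(seed=1)
w_hidden = rng.uniform(size=(x.size, n_hidden))  # input -> hidden weights
w_output = rng.uniform(size=(n_hidden, 1))       # hidden -> output weights

for iteration in range(80_000):
    # Forward pass.
    hidden_out = sigmoid(x @ w_hidden)        # shape (n_hidden,)
    prediction = sigmoid(hidden_out @ w_output)  # shape (1,)
    error = prediction - target               # dL/dprediction for L = 0.5*error^2

    # Backward pass (chain rule through sigmoid at each layer).
    grad_out_in = error * prediction * (1 - prediction)
    grad_w_output = np.outer(hidden_out, grad_out_in)       # (n_hidden, 1)
    grad_hidden_out = w_output @ grad_out_in                # (n_hidden,)
    grad_hidden_in = grad_hidden_out * hidden_out * (1 - hidden_out)
    grad_w_hidden = np.outer(x, grad_hidden_in)             # (n_inputs, n_hidden)

    # Gradient descent update.
    w_output -= learning_rate * grad_w_output
    w_hidden -= learning_rate * grad_w_hidden

print("prediction:", prediction, "target:", target)
```

The key point the sketch illustrates is that nothing in the forward or backward pass hard-codes the hidden layer size; all shapes follow from `n_hidden`, so the same loop works for 2 neurons or 200.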