How backpropagation works


Details
• What we'll do
Backpropagation is the workhorse of deep learning, used in one form or another in every deep learning model, but how does it work? In this meeting we'll work through the math behind backpropagation and come away with a full understanding of how it works.
For a neural network with one layer, we can use gradient descent to find the optimal parameters, and a bit of calculus gives us the gradients we need to do that efficiently. To push the error signal into the deeper layers of the network, we bring in the chain rule from calculus.
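If you'd like a preview, here's a rough sketch in plain NumPy (with made-up toy data, layer sizes, and learning rate, not the exact code we'll use at the meeting) of gradient descent plus the chain rule on a tiny two-layer network:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))          # toy inputs: 100 examples, 3 features
y = rng.normal(size=(100, 1))          # toy targets
w1 = 0.1 * rng.normal(size=(3, 4))     # first-layer weights
w2 = 0.1 * rng.normal(size=(4, 1))     # second-layer weights
lr = 0.1                               # learning rate

for step in range(200):
    # forward pass
    h = np.tanh(x @ w1)                # hidden activations
    y_hat = h @ w2                     # predictions
    loss = np.mean((y_hat - y) ** 2)   # mean squared error
    if step % 50 == 0:
        print(step, loss)

    # backward pass: the chain rule, one layer at a time
    d_y_hat = 2 * (y_hat - y) / len(x) # dLoss/d(y_hat)
    d_w2 = h.T @ d_y_hat               # dLoss/d(w2)
    d_h = d_y_hat @ w2.T               # dLoss/d(h)
    d_w1 = x.T @ (d_h * (1 - h ** 2))  # dLoss/d(w1); tanh'(z) = 1 - tanh(z)^2

    # gradient descent update
    w1 -= lr * d_w1
    w2 -= lr * d_w2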
We'll have a look at some backpropagation code written without TensorFlow (or any other deep learning framework), and then at the end we'll take a quick look at TensorFlow. Backpropagation is actually a special case of what's called automatic differentiation, which TensorFlow can do and makes very easy. (It's amazing!)
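As a taste of that last point, here's a tiny illustrative example (not the meeting material itself) of TensorFlow computing a derivative for us with tf.GradientTape:

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x       # y = x^2 + 2x
dy_dx = tape.gradient(y, x)    # dy/dx = 2x + 2, which is 8 at x = 3
print(dy_dx.numpy())           # 8.0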
We're going to try using Google's new Colab tool for the meeting, which gives us the equivalent of Jupyter notebooks, but capable of deep learning on Google's servers, and capable of being actively edited by more than one person at a time (!) like a Google Doc. Give it a whirl if you have a chance.
We scheduled the meeting from 6pm to 7pm, but we actually have the room from 5pm to 8pm. Feel free to show up early for meet & greet. The actual backpropagation discussion will start at 6pm. See you there!
