Demystifying Deep Learning: Part 5
Backpropagation
August 31, 2018
6 min read
This may be the most important post in the series, and also the most overlooked, both for the same reason - this is where the maths gets interesting! It is important to get to grips with it when looking at deep learning algorithms - although you may never have to implement it manually later, thanks to the power of deep learning frameworks, understanding it is crucial when debugging your model.
Don’t fret though, the principles we applied in the Learning by Gradient Descent post will help us a great deal to really understand this.
But before we dive into the maths, it makes sense to describe what the backpropagation algorithm is.
Backpropagation is an algorithm for computing the partial derivatives of the cost with respect to the parameters, by going through the network backwards, layer by layer, starting from the output layer.
Why backpropagation to learn the weights?
It first makes sense to understand why we use backprop to compute derivatives in the first place. Aren’t there better alternatives?
What about numerically approximating the gradient by seeing how much the cost changes directly when we nudge each parameter? Well, there are often tens of thousands of parameters, if not more, and computing the cost with a full forward pass individually for each of them is very expensive.
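To make that concrete, here is a minimal sketch of the finite-difference approximation, assuming a hypothetical `forward_cost` function that runs a full forward pass and returns the scalar cost. Notice it needs two extra forward passes per parameter, which is why this approach is only practical for spot-checking gradients, not for training:

```python
import numpy as np

def numerical_gradient(forward_cost, param, eps=1e-7):
    """Approximate dJ/dparam by nudging each entry of param in turn.

    forward_cost: hypothetical function that runs a full forward pass and returns the cost
    param: numpy array of parameters (nudged in place, then restored)
    """
    grad = np.zeros_like(param)
    it = np.nditer(param, flags=["multi_index"])
    while not it.finished:
        idx = it.multi_index
        original = param[idx]
        param[idx] = original + eps              # nudge up
        cost_plus = forward_cost()
        param[idx] = original - eps              # nudge down
        cost_minus = forward_cost()
        param[idx] = original                    # restore
        grad[idx] = (cost_plus - cost_minus) / (2 * eps)
        it.iternext()
    return grad
```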
So instead we analytically compute the partial derivatives using the chain rule.
If we were to go through the network forwards and consider the weights in the first layer, we’d have to consider how a nudge affects the immediate outputs - i.e. the 2nd layer - and then how that nudge in the 2nd layer propagates to the 3rd layer, and so on through to the output. Then, for the weights in the 2nd layer, we’d have to recompute all of the later layers’ calculations, even though those later layers’ partial derivatives are exactly the same as before. The same waste occurs again for the weights in the 3rd layer, and so on.
To put the maths formally:
$\frac{\partial{J}}{\partial{W^{[1]}}} = \frac{\partial{A^{[1]}}}{\partial{W^{[1]}}}*\frac{\partial{A^{[2]}}}{\partial{A^{[1]}}}*\left[\frac{\partial{A^{[3]}}}{\partial{A^{[2]}}}...\frac{\partial{A^{[L]}}}{\partial{A^{[L-1]}}}*\frac{\partial{J}}{\partial{A^{[L]}}}\right]$

$\frac{\partial{J}}{\partial{W^{[2]}}} = \frac{\partial{A^{[2]}}}{\partial{W^{[2]}}}*\left[\frac{\partial{A^{[3]}}}{\partial{A^{[2]}}}...\frac{\partial{A^{[L]}}}{\partial{A^{[L-1]}}}*\frac{\partial{J}}{\partial{A^{[L]}}}\right]$
Notice how the term in square brackets (the ripple effect of the “nudges” from the 3rd layer onwards) is the same in both equations. So what if we computed that term first, and then used it to compute the gradients for the first and second layers’ weights?
More generally, when computing the gradients for the weights in layers $1, ..., l$, we’ll be reusing the same computations involving the partial derivatives from layer $l+1$ onwards.
So it makes sense to start from the output, and make one pass backward, computing the partial derivatives layer by layer - each layer’s calculations being reused in subsequent calculations, such that we don’t compute anything twice.
Aside: Linking this to other areas of computer science as a way of looking at the underlying principles, this is an example of the more general principle of bottom-up dynamic programming.
Deriving the Backpropagation Algorithm
When deriving the algorithm’s equations, we will intersperse the intuition with the maths.
It is helpful to restate the tips given in the Learning by Gradient Descent post:
Partial Derivative Intuition: Think of $\frac{\partial{y}}{\partial{x}}$ loosely as quantifying how much $y$ would change if you gave the value of $x$ a little “nudge” at that point.
Breaking down computations - we can use the chain rule to aid us in our computation - rather than trying to compute the derivative in one fell swoop, we break up the computation into smaller intermediate steps.
Computing the chain rule - when thinking about which intermediate values to include in our chain rule expression, think about the immediate outputs of equations involving $x$ - which other values get directly affected when we slightly nudge $x$?
One element at a time - rather than worrying about the entire matrix $W$, we’ll instead look at an individual element $W_{ij}$. One equation we will refer to time and time again is the definition of matrix multiplication:

$(AB)_{ij} = \sum_{k} A_{ik}B_{kj}$
A useful tip when trying to go from one element to a matrix is to look for summations over repeated indices (here it was k) - this suggests a matrix multiplication.
Another useful equation is the element-wise product of two matrices (both identities are checked numerically in the short sketch after these tips):

$(A \odot B)_{ij} = A_{ij}B_{ij}$
Sanity check the dimensions - check that the dimensions of the matrices all match (the derivative matrix should have the same dimensions as the original matrix, and all matrices being multiplied together should have dimensions that align).
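As promised, here is a minimal numpy sketch (with arbitrary assumed shapes) checking the two identities above:

```python
import numpy as np

A = np.random.randn(3, 4)
B = np.random.randn(4, 2)
C = np.random.randn(3, 4)

# Matrix multiplication: (AB)_ij = sum over the repeated index k of A_ik * B_kj
AB_manual = np.array([[sum(A[i, k] * B[k, j] for k in range(4))
                       for j in range(2)] for i in range(3)])
assert np.allclose(AB_manual, np.dot(A, B))

# Element-wise product: (A ⊙ C)_ij = A_ij * C_ij
assert np.allclose(A * C, np.multiply(A, C))
```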
Backpropagating through layer l:
Let’s break backprop down into a smaller computation - that of one particular layer $l$. Let’s assume we have already computed $\frac{\partial{J}}{\partial{A^{[l]}}}$ - in the overall algorithm this gets propagated back from layer $l+1$.
We need to compute the relevant partial derivatives for this layer, so it makes sense to remind ourselves of the equations from the forward pass:

$Z^{[l]} = W^{[l]}A^{[l-1]} + b^{[l]}$

$A^{[l]} = g^{[l]}(Z^{[l]})$
First let’s consider how to compute $\frac{\partial{J}}{\partial{Z^{[l]}}}$. Consider a single element $Z^{[l]}_{ij}$ - since we are applying the activation function individually to each element in the matrix, a nudge in $Z^{[l]}_{ij}$ will only affect the corresponding activated output $A^{[l]}_{ij}$. The magnitude of the nudge of $A^{[l]}_{ij}$ is, by definition, the derivative of the activation function evaluated at $Z^{[l]}_{ij}$. So we have:

$\frac{\partial{A^{[l]}_{ij}}}{\partial{Z^{[l]}_{ij}}} = g^{[l]'}(Z^{[l]}_{ij})$
We can apply the chain rule:

$\frac{\partial{J}}{\partial{Z^{[l]}_{ij}}} = \frac{\partial{J}}{\partial{A^{[l]}_{ij}}} * g^{[l]'}(Z^{[l]}_{ij})$
This is just an element-wise product of two matrices, since we are multiplying the corresponding indices of both matrices together. So we have our equation:

$\frac{\partial{J}}{\partial{Z^{[l]}}} = \frac{\partial{J}}{\partial{A^{[l]}}} \odot g^{[l]'}(Z^{[l]})$
As a sanity check, $\frac{\partial{J}}{\partial{A^{[l]}}}$ has the same dimensions as $A^{[l]}$, which has dimensions $n_l$ x $m$ (where $n_l$ is the number of neurons in layer $l$ and $m$ is the number of training examples); this is also the shape of $g^{[l]'}(Z^{[l]})$, since applying a function element-wise preserves the dimensions of $Z^{[l]}$. So the dimensions of $\frac{\partial{J}}{\partial{Z^{[l]}}}$ do match those of $Z^{[l]}$.
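In numpy this step is a one-liner; here is a minimal sketch, assuming `dA` and `Z` are the $n_l \times m$ matrices above and `g_prime` is the derivative of the layer’s activation function:

```python
import numpy as np

n_l, m = 4, 5                      # assumed layer width and number of examples
dA = np.random.randn(n_l, m)       # dJ/dA[l], propagated back from layer l+1
Z = np.random.randn(n_l, m)        # weighted inputs cached from the forward pass
g_prime = lambda z: (z > 0) * 1.0  # e.g. the ReLU derivative used later in the post

dZ = dA * g_prime(Z)               # element-wise product: dJ/dZ[l]
assert dZ.shape == Z.shape         # sanity check the dimensions
```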
Brilliant! Next, let’s look at the effect of nudging the weight $W^{[l]}_{jk}$. To make it easier to conceptualise, let’s rewrite the matrix equation at the level of an individual neuron, for the $i^{th}$ training example:

$Z^{[l]}_{ji} = \sum_{k} W^{[l]}_{jk}A^{[l-1]}_{ki} + b^{[l]}_{j}$
This nudge in $W^{[l]}_{jk}$ affects the weighted input $Z^{[l]}_{ji}$ of the $j^{th}$ neuron across all $m$ examples in the training set - we will take the average gradient across the examples. The magnitude of the nudge to $Z^{[l]}_{ji}$ is $A^{[l-1]}_{ki}$ times the nudge of $W^{[l]}_{jk}$, since $W^{[l]}_{jk}$ is multiplied by $A^{[l-1]}_{ki}$ in the equation above.
Also consider a nudge in $b^{[l]}_{j}$: the magnitude of the corresponding nudge to $Z^{[l]}_{ji}$ will be the same, and just like with the weights we will average the effect of the nudge across the $m$ examples. So we have:

$\frac{\partial{J}}{\partial{W^{[l]}_{jk}}} = \frac{1}{m} \sum_{i=1}^{m}\frac{\partial{J}}{\partial{Z^{[l]}_{ji}}} * A^{[l-1]}_{ki}$

$\frac{\partial{J}}{\partial{b^{[l]}_{j}}} = \frac{1}{m} \sum_{i=1}^{m}\frac{\partial{J}}{\partial{Z^{[l]}_{ji}}}$
It’s worth noting how similar these equations are to those for logistic regression - just that instead of the inputs $x$ we have the more general activations $A^{[l-1]}$, with $\frac{\partial{J}}{\partial{Z^{[l]}}}$ playing the role of $(a - y)$. As we mentioned in the previous post, the principles for logistic regression are just scaled up and generalised for a feedforward neural network.
We can now switch our notation back to considering the matrices, having gained the insight from looking at one neuron.
$\frac{\partial{J}}{\partial{W^{[l]}_{jk}}} = \frac{1}{m} \sum_{i=1}^{m}\frac{\partial{J}}{\partial{Z^{[l]}_{ji}}} * A^{[l-1]}_{ki} = \frac{1}{m} \sum_{i=1}^{m}\frac{\partial{J}}{\partial{Z^{[l]}_{ji}}} * A^{[l-1]T}_{ik} $
This gives us our equations:

$\frac{\partial{J}}{\partial{W^{[l]}}} = \frac{1}{m}\frac{\partial{J}}{\partial{Z^{[l]}}}A^{[l-1]T}$

$\frac{\partial{J}}{\partial{b^{[l]}}} = \frac{1}{m}\sum_{i=1}^{m}\frac{\partial{J}}{\partial{Z^{[l]}_{i}}}$ (summing $\frac{\partial{J}}{\partial{Z^{[l]}}}$ across the $m$ examples, i.e. across the columns)
As a sanity check: the right-hand side multiplies a $n_l$ x $m$ matrix with a $m$ x $n_{l-1}$ matrix, giving a $n_l$ x $n_{l-1}$ matrix - the same dimensions as $W^{[l]}$ - so the dimensions do match.
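In numpy, a minimal sketch of this step (continuing the assumed shapes from the earlier snippet, with `A_prev` standing in for $A^{[l-1]}$):

```python
import numpy as np

n_l, n_prev, m = 4, 3, 5                 # assumed layer widths and number of examples
dZ = np.random.randn(n_l, m)             # dJ/dZ[l] from the previous step
A_prev = np.random.randn(n_prev, m)      # activations A[l-1] cached in the forward pass

dW = (1 / m) * np.dot(dZ, A_prev.T)      # (n_l x m) . (m x n_prev) -> n_l x n_prev
db = (1 / m) * np.sum(dZ, axis=1, keepdims=True)  # average across the examples
assert dW.shape == (n_l, n_prev) and db.shape == (n_l, 1)
```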
Finally, we just need to consider how the gradient propagates back to layer $l-1$ - we didn’t have to consider this in the specific case of logistic regression, as there was only one layer.
Again, the intuition about partial derivatives as “nudges” will help us. Let’s consider one neuron in layer $l-1$ for the $i^{th}$ example: $A^{[l-1]}_{ki}$. A nudge in this neuron affects all of the neurons’ weighted inputs $Z^{[l]}_{ji}$ for that example in the next layer, so we will have to sum the partial derivatives across those neurons. The magnitude of the nudge in the weighted input $Z^{[l]}_{ji}$ will be the original nudge multiplied by the corresponding weight $W^{[l]}_{jk}$. Indeed the equation is:

$\frac{\partial{J}}{\partial{A^{[l-1]}_{ki}}} = \sum_{j}W^{[l]}_{jk}\frac{\partial{J}}{\partial{Z^{[l]}_{ji}}}$
Again, now that we’ve got the intuition, let’s switch back to matrix notation, rewriting the sum in terms of the transpose:

$\frac{\partial{J}}{\partial{A^{[l-1]}_{ki}}} = \sum_{j}W^{[l]T}_{kj}\frac{\partial{J}}{\partial{Z^{[l]}_{ji}}}$
So as a matrix multiplication:

$\frac{\partial{J}}{\partial{A^{[l-1]}}} = W^{[l]T}\frac{\partial{J}}{\partial{Z^{[l]}}}$
Again as a sanity check: the right-hand side multiplies a $n_{l-1}$ x $n_l$ matrix with a $n_l$ x $m$ matrix, giving a $n_{l-1}$ x $m$ matrix - the same dimensions as $A^{[l-1]}$ - so the dimensions do match.
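And the corresponding one-liner in numpy (again a sketch with the same assumed shapes, `W` standing in for $W^{[l]}$):

```python
import numpy as np

n_l, n_prev, m = 4, 3, 5
W = np.random.randn(n_l, n_prev)        # weights W[l]
dZ = np.random.randn(n_l, m)            # dJ/dZ[l]

dA_prev = np.dot(W.T, dZ)               # (n_prev x n_l) . (n_l x m) -> n_prev x m
assert dA_prev.shape == (n_prev, m)     # matches A[l-1], ready to pass back to layer l-1
```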
Now having looked at the general layer case, let’s look at the final layer of the network. Some potential final layer activations are:
- no activation - for a regression task
- sigmoid - for binary classification
- softmax - for multi-class classification
For regression and binary classification, as we showed before:

$\frac{\partial{J}}{\partial{Z^{[L]}}} = A^{[L]} - Y$
It turns out that with softmax for multi-class classification that the same equation holds. As mentioned in the previous post, we will look at the softmax derivation later in the series, when we look at multi-class classification in a convolutional neural network.
In general though, for any output layer activation function, you can obtain $\frac{\partial{J}}{\partial{A^{[L]}}}$ from the loss function equation directly, since $J$ is defined in terms of $A^{[L]}$, and then, just like in the general case, you can compute $\frac{\partial{J}}{\partial{Z^{[L]}}}$ using the derivative of whichever activation function is used in the output layer and go from there.
And there you have it! You’ve successfully derived backpropagation for a neural network - no mean feat!
To recap, the key equations are:

$\frac{\partial{J}}{\partial{Z^{[l]}}} = \frac{\partial{J}}{\partial{A^{[l]}}} \odot g^{[l]'}(Z^{[l]})$

$\frac{\partial{J}}{\partial{W^{[l]}}} = \frac{1}{m}\frac{\partial{J}}{\partial{Z^{[l]}}}A^{[l-1]T}$

$\frac{\partial{J}}{\partial{b^{[l]}}} = \frac{1}{m}\sum_{i=1}^{m}\frac{\partial{J}}{\partial{Z^{[l]}_{i}}}$ (summed across the examples)

$\frac{\partial{J}}{\partial{A^{[l-1]}}} = W^{[l]T}\frac{\partial{J}}{\partial{Z^{[l]}}}$
Code:
We store the intermediate results of the forward pass in a cache, then store the gradients in another dictionary. The code for the backprop algorithm is just an implementation of the equations we have derived.
Note the activation function used for the intermediate layers is ReLU. The derivative of $ReLU(z)$ is 1 if $z > 0$, since the function is linear in that region, and 0 if $z < 0$, since the graph is flat there (we clamp all negative values to zero). NB: technically at $z = 0$ the derivative is not defined, but in practice we take it to be 0.
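The `relu` helper with a `deriv` flag (used in the code below) is defined in the accompanying notebook; a minimal version consistent with how it is called might look like this:

```python
import numpy as np

def relu(z, deriv=False):
    if deriv:
        return (z > 0).astype(float)  # gradient: 1 where z > 0, else 0 (including at z = 0)
    return np.maximum(0, z)           # forward pass: clamp negative values to zero
```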
As before, the accompanying code is in the notebook.
```python
import numpy as np

def backpropagation(cache, Y, parameters):
    L = len(parameters) // 2          # number of layers (parameters holds a W and b per layer)
    m = Y.shape[1]                    # number of training examples
    grads = {}

    # Output layer: dJ/dZ[L] = A[L] - Y
    grads["dZ" + str(L)] = cache["A" + str(L)] - Y
    grads["dW" + str(L)] = (1/m) * np.dot(grads["dZ" + str(L)], cache["A" + str(L-1)].T)
    grads["db" + str(L)] = (1/m) * np.sum(grads["dZ" + str(L)], axis=1, keepdims=True)

    # Propagate backwards through the hidden layers
    for l in range(L-1, 0, -1):
        # dJ/dA[l] = W[l+1]^T . dJ/dZ[l+1]
        grads["dA" + str(l)] = np.dot(parameters["W" + str(l+1)].T, grads["dZ" + str(l+1)])
        # dJ/dZ[l] = dJ/dA[l] ⊙ g'(Z[l]) - here g is ReLU
        grads["dZ" + str(l)] = np.multiply(grads["dA" + str(l)], relu(cache["Z" + str(l)], deriv=True))
        # dJ/dW[l] = (1/m) dJ/dZ[l] . A[l-1]^T, and dJ/db[l] averaged over the examples
        grads["dW" + str(l)] = (1/m) * np.dot(grads["dZ" + str(l)], cache["A" + str(l-1)].T)
        grads["db" + str(l)] = (1/m) * np.sum(grads["dZ" + str(l)], axis=1, keepdims=True)

    return grads
```
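As a usage sketch (the real forward pass lives in the notebook; the tiny two-layer forward pass below, using the `relu` helper above and assumed layer sizes, is only here so the cache and parameter dictionaries have the shapes `backpropagation` expects):

```python
np.random.seed(0)
n_x, n_h, n_y, m = 3, 4, 1, 5                       # assumed layer sizes and batch size

parameters = {
    "W1": np.random.randn(n_h, n_x) * 0.01, "b1": np.zeros((n_h, 1)),
    "W2": np.random.randn(n_y, n_h) * 0.01, "b2": np.zeros((n_y, 1)),
}

X = np.random.randn(n_x, m)
Y = (np.random.rand(n_y, m) > 0.5).astype(float)

# Minimal forward pass to populate the cache: A0 = X, then Z1/A1 (ReLU), Z2/A2 (sigmoid)
cache = {"A0": X}
cache["Z1"] = np.dot(parameters["W1"], cache["A0"]) + parameters["b1"]
cache["A1"] = relu(cache["Z1"])
cache["Z2"] = np.dot(parameters["W2"], cache["A1"]) + parameters["b2"]
cache["A2"] = 1 / (1 + np.exp(-cache["Z2"]))        # sigmoid output for binary classification

grads = backpropagation(cache, Y, parameters)
print(grads["dW1"].shape, grads["db1"].shape)       # (4, 3) (4, 1) - match W1 and b1
```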
Conclusion
Let’s take a step backward here and just appreciate the beauty of backpropagation.
What initially appeared to be a complicated and scary expression for the partial derivatives has been broken down into a layer-by-layer calculation. With backpropagation we didn’t have to worry about the entire network; we could just look at the local behaviour of a neuron or weight and how its immediate outputs were affected. We could consider all these moving pieces separately, layer by layer, then stitch the gradients together using the chain rule.
When we look at different neural network architectures with specialised layers that differ in layout and functionality, the same principles of backpropagation will still hold - break it down layer by layer, gain intuition about the effect of nudging each parameter and the behaviour within that layer, then finally use the chain rule to get the overall expression.