Lecture 6: Backpropagation
Roger Grosse
1 Introduction
So far, we’ve seen how to train “shallow” models, where the predictions are computed as a linear function of the inputs. We’ve also observed that deeper models are much more powerful than linear ones, in that they can compute a broader set of functions. Let’s put these two together, and see how to train a multilayer neural network. We will do this using backpropagation, the central algorithm of this course. Backpropagation (“backprop” for short) is a way of computing the partial derivatives of a loss function with respect to the parameters of a network; we use these derivatives in gradient descent, exactly the way we did with linear regression and logistic regression.
If you’ve taken a multivariate calculus class, you’ve probably encountered the Chain Rule for partial derivatives, a generalization of the Chain Rule from univariate calculus. In a sense, backprop is “just” the Chain Rule, but with some interesting twists and potential gotchas. This lecture and Lecture 8 focus on backprop. (In between, we’ll see a cool example of how to use it.) This lecture covers the mathematical justification and shows how to implement a backprop routine by hand. Implementing backprop can get tedious if you do it too often. In Lecture 8, we’ll see how to implement an automatic differentiation engine, so that derivatives even of rather complicated cost functions can be computed automatically. (And just as efficiently as if you’d done it carefully by hand!)
This will be your least favorite lecture, since it requires the most tedious derivations of the whole course.
1.1 Learning Goals
• Be able to compute the derivatives of a cost function using backprop.
1.2 Background
I would highly recommend reviewing and practicing the Chain Rule for partial derivatives. I’d suggest the Khan Academy video1, but you can also find lots of resources on Metacademy2.
1 https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/multivariable-chain-rule/v/multivariable-chain-rule
2 https://metacademy.org/graphs/concepts/chain_rule
2 The Chain Rule revisited
Before we get to neural networks, let’s start by looking more closely at an example we’ve already covered: a linear classification model. For simplicity, let’s assume we have univariate inputs and a single training example (x, t). The predictions are a linear function followed by a sigmoidal nonlinearity. Finally, we use the squared error loss function. The model and loss function are as follows:
z = wx + b    (1)
y = σ(z)    (2)
L = ½(y − t)²    (3)
Now, to change things up a bit, let’s add a regularizer to the cost function. We’ll cover regularizers properly in a later lecture, but intuitively, they try to encourage “simpler” explanations. In this example, we’ll use the regularizer (λ/2)w², which encourages w to be close to zero. (λ is a hyperparameter; the larger it is, the more strongly the weights prefer to be close to zero.) The cost function, then, is:
R = ½w²    (4)
Lreg = L + λR.    (5)
In order to perform gradient descent, we wish to compute the partial derivatives ∂Lreg/∂w and ∂Lreg/∂b.
This example will cover all the important ideas behind backprop; the only thing harder about the case of multilayer neural nets will be the cruftier notation.
2.1 How you would have done it in calculus class
Recall that you can calculate partial derivatives the same way you would calculate univariate derivatives. In particular, we can expand out the cost function in terms of w and b, and then compute the derivatives using repeated applications of the univariate Chain Rule.
Lreg = ½(σ(wx + b) − t)² + (λ/2)w²

∂Lreg/∂w = ∂/∂w [ ½(σ(wx + b) − t)² + (λ/2)w² ]
         = ½ ∂/∂w (σ(wx + b) − t)² + (λ/2) ∂/∂w w²
         = (σ(wx + b) − t) ∂/∂w (σ(wx + b) − t) + λw
         = (σ(wx + b) − t) σ′(wx + b) ∂/∂w (wx + b) + λw
         = (σ(wx + b) − t) σ′(wx + b) x + λw

∂Lreg/∂b = ∂/∂b [ ½(σ(wx + b) − t)² + (λ/2)w² ]
         = ½ ∂/∂b (σ(wx + b) − t)² + (λ/2) ∂/∂b w²
         = (σ(wx + b) − t) ∂/∂b (σ(wx + b) − t) + 0
         = (σ(wx + b) − t) σ′(wx + b) ∂/∂b (wx + b)
         = (σ(wx + b) − t) σ′(wx + b)
This gives us the correct answer, but hopefully it’s apparent from this example that this method has several drawbacks:
1. The calculations are very cumbersome. In this derivation, we had to copy lots of terms from one line to the next, and it’s easy to accidentally drop something. (In fact, I made such a mistake while writing these notes!) While the calculations are doable in this simple example, they become impossibly cumbersome for a realistic neural net.
2. The calculations involve lots of redundant work. For instance, the first three steps in the two derivations above are nearly identical.
3. Similarly, the final expressions have lots of repeated terms, which means lots of redundant work if we implement these expressions directly. For instance, wx + b is computed a total of four times between ∂Lreg/∂w and ∂Lreg/∂b. The larger expression (σ(wx + b) − t)σ′(wx + b) is computed twice. If you happen to notice these things, then perhaps you can be clever in your implementation and factor out the repeated expressions. But, as you can imagine, such efficiency improvements might not always jump out at you when you’re implementing an algorithm.
The idea behind backpropagation is to share the repeated computations wherever possible. We’ll see that the backprop calculations, if done properly, are very clean and modular.
Actually, even in this derivation, I used the “efficiency trick” of not expanding out σ′. If I had expanded it out, the expressions would be even more hideous, and would involve six copies of wx + b.
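To make the redundancy in point 3 concrete, here is a minimal Python sketch (my own illustration; the function names and the scalar setting are not part of the lecture) that implements the two final expressions verbatim. Notice that wx + b is recomputed four times, and (σ(wx + b) − t)σ′(wx + b) twice:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sigmoid_prime(u):
    s = sigmoid(u)
    return s * (1.0 - s)

def naive_gradients(w, b, x, t, lam):
    """Directly implement the closed-form expressions derived above.
    Note how w*x + b and the error term get recomputed several times."""
    dw = (sigmoid(w * x + b) - t) * sigmoid_prime(w * x + b) * x + lam * w
    db = (sigmoid(w * x + b) - t) * sigmoid_prime(w * x + b)
    return dw, db
```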
2.2 Multivariable chain rule: the easy case
We’ve already used the univariate Chain Rule a bunch of times, but it’s worth remembering the formal definition:
d/dt f(g(t)) = f′(g(t)) g′(t).    (6)
Roughly speaking, increasing t by some infinitesimal quantity h1 “causes” g to change by the infinitesimal h2 = g′(t)h1. This in turn causes f to change by f′(g(t))h2 = f′(g(t))g′(t)h1.
The multivariable Chain Rule is a generalization of the univariate one.
Let’s say we have a function f in two variables, and we want to compute d/dt f(x(t), y(t)). Changing t slightly has two effects: it changes x slightly, and it changes y slightly. Each of these effects causes a slight change to f. For infinitesimal changes, these effects combine additively. The Chain Rule, therefore, is given by:
d/dt f(x(t), y(t)) = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt).    (7)
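A quick numerical sanity check can make Eqn. 7 feel less abstract. The following Python snippet (my own illustration, with toy functions f, x, y of my own choosing) compares the chain-rule expression against a centered finite difference:

```python
import numpy as np

# Toy functions, chosen only for illustration:
# f(x, y) = x * y**2, with x(t) = sin(t) and y(t) = t**2.
def f(x, y):  return x * y**2
def x_of(t):  return np.sin(t)
def y_of(t):  return t**2

def chain_rule_derivative(t):
    x, y = x_of(t), y_of(t)
    df_dx, df_dy = y**2, 2 * x * y        # partial derivatives of f
    dx_dt, dy_dt = np.cos(t), 2 * t       # derivatives of the inner functions
    return df_dx * dx_dt + df_dy * dy_dt  # Eqn. 7

def finite_difference(t, eps=1e-6):
    return (f(x_of(t + eps), y_of(t + eps)) - f(x_of(t - eps), y_of(t - eps))) / (2 * eps)

t0 = 1.3
print(chain_rule_derivative(t0), finite_difference(t0))  # should agree to ~6 decimals
```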
2.3 An alternative notation
It will be convenient for us to introduce an alternative notation for the derivatives we compute. In particular, notice that the left-hand side in all of our derivative calculations is ∂L/∂v, where v is some quantity we compute in order to compute L. (Or substitute for L whichever variable we’re trying to compute derivatives of.) We’ll use the notation
v̄ ≜ ∂L/∂v.    (8)
This notation is less crufty, and also emphasizes that v̄ is a value we compute, rather than a mathematical expression to be evaluated. This notation is nonstandard; see the appendix if you want more justification for it.
We can rewrite the multivariable Chain Rule (Eqn. 7) using this notation:
t̄ = x̄ (dx/dt) + ȳ (dy/dt).    (9)
Here, we use dx/dt to mean we should actually evaluate the derivative algebraically in order to determine the formula for t̄, whereas x̄ and ȳ are values previously computed by the algorithm.
2.4 Using the computation graph
In this section, we finally introduce the main algorithm for this course, which is known as backpropagation, or reverse mode automatic differentiation (autodiff).3
3 Automatic differentiation was invented in 1970, and backprop in the late 80s. Originally, backprop referred to the special case of reverse mode autodiff applied to neural nets, although the derivatives were typically written out by hand (rather than using an autodiff package). But in the last few years, neural nets have gotten so diverse that we basically think of them as compositions of functions. Also, very often, backprop is now implemented using an autodiff software package. For these reasons, the distinction between autodiff and backprop has gotten blurred, and we will use the terms interchangeably in this course. Note that there is also a forward mode autodiff, but it’s rarely used in neural nets, and we won’t cover it in this course.
Figure 1: Computation graph for the regularized linear regression example in Section 2.4. The magenta arrows indicate the case which requires the multivariate chain rule because w is used to compute both z and R.
Now let’s return to our running example, written again for convenience:
z = wx + b
y = σ(z)
L = ½(y − t)²
R = ½w²
Lreg = L + λR.
Let’s introduce the computation graph. The nodes in the graph correspond to all the values that are computed, with edges to indicate which values are computed from which other values. The computation graph for our running example is shown in Figure 1.
The goal of backprop is to compute the derivatives w̄ and b̄. We do this by repeatedly applying the Chain Rule (Eqn. 9). Observe that to compute a derivative using Eqn. 9, you first need the derivatives for its children in the computation graph. This means we must start from the result of the computation (in this case, Lreg) and work our way backwards through the graph. It is because we work backward through the graph that backprop and reverse mode autodiff get their names.
Let’s start with the formal definition of the algorithm. Let v1, . . . , vN denote all of the nodes in the computation graph, in a topological ordering. (A topological ordering is any ordering where parents come before children.) We wish to compute all of the derivatives v̄i, although we may only be interested in a subset of these values. We first compute all of the values in a forward pass, and then compute the derivatives in a backward pass. As a special case, vN denotes the result of the computation (in our running example, vN = Lreg), and is the thing we’re trying to compute the derivatives of. Therefore, by convention, we set v̄N = 1. The algorithm is as follows:
For i = 1, . . . , N
    Compute vi as a function of Pa(vi)
For i = N − 1, . . . , 1
    v̄i = Σ_{j ∈ Ch(vi)} v̄j ∂vj/∂vi
Note that the computation graph is not the network architecture. The nodes correspond to values that are computed, rather than to units in the network.
L̄reg = 1 because increasing the cost by h increases the cost by h.
Here Pa(vi) and Ch(vi) denote the parents and children of vi.
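Here is a minimal Python sketch of this procedure (my own illustration; the Node class and its fields are not from the lecture). Each node stores its parents and the local derivatives ∂(node)/∂(parent) computed during the forward pass; the backward pass then follows the pseudocode above.

```python
# A sketch of reverse-mode autodiff over a topologically ordered computation
# graph. Names (Node, local_grads, bar) are illustrative assumptions.

class Node:
    def __init__(self, value, parents=(), local_grads=()):
        # value:       the number computed in the forward pass
        # parents:     the Nodes this value was computed from, Pa(v)
        # local_grads: d(this value)/d(parent), one number per parent
        self.value = value
        self.parents = parents
        self.local_grads = local_grads
        self.bar = 0.0  # will hold v-bar = d(output)/d(this node)

def backward(nodes_in_topological_order):
    """nodes_in_topological_order[-1] is the final output (the cost)."""
    for v in nodes_in_topological_order:
        v.bar = 0.0
    nodes_in_topological_order[-1].bar = 1.0  # by convention, output-bar = 1
    for v in reversed(nodes_in_topological_order):
        # v.bar is complete here: all of v's children come later in the
        # topological ordering, so they were processed before v.
        for parent, dv_dparent in zip(v.parents, v.local_grads):
            parent.bar += v.bar * dv_dparent
```

For instance, in the running example the node for z would have parents w and b (treating x as a constant), with local derivatives x and 1.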
This procedure may become clearer when we work through the example in full detail:
L̄reg = 1
R̄ = L̄reg (dLreg/dR)
   = L̄reg λ
L̄ = L̄reg (dLreg/dL)
   = L̄reg
ȳ = L̄ (dL/dy)
   = L̄ (y − t)
z̄ = ȳ (dy/dz)
   = ȳ σ′(z)
w̄ = z̄ (∂z/∂w) + R̄ (dR/dw)
   = z̄ x + R̄ w
b̄ = z̄ (∂z/∂b)
   = z̄
Since we’ve derived a procedure for computing w̄ and b̄, we’re done. Let’s write out this procedure without the mess of the derivation, so that we can compare it with the naïve method of Section 2.1:
L̄reg = 1
R̄ = L̄reg λ
L̄ = L̄reg
ȳ = L̄ (y − t)
z̄ = ȳ σ′(z)
w̄ = z̄ x + R̄ w
b̄ = z̄
The derivation, and the final result, are much cleaner than with the naïve method. There are no redundant computations here. Furthermore, the procedure is modular: it is broken down into small chunks that can be reused for other computations. For instance, if we want to change the loss function, we’d only have to modify the formula for ȳ. With the naïve method, we’d have to start over from scratch.
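As a concrete translation of this procedure into code (again my own sketch, not part of the original notes), here are the forward and backward passes for the regularized model; each line of the backward pass mirrors one equation above.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(w, b, x, t, lam):
    z = w * x + b
    y = sigmoid(z)
    L = 0.5 * (y - t) ** 2
    R = 0.5 * w ** 2
    L_reg = L + lam * R
    return z, y, L, R, L_reg

def backward(w, b, x, t, lam, z, y):
    L_reg_bar = 1.0
    R_bar = L_reg_bar * lam
    L_bar = L_reg_bar
    y_bar = L_bar * (y - t)
    z_bar = y_bar * sigmoid(z) * (1.0 - sigmoid(z))  # sigma'(z)
    w_bar = z_bar * x + R_bar * w
    b_bar = z_bar
    return w_bar, b_bar

# Example usage with made-up numbers:
w, b, x, t, lam = 0.5, -0.3, 2.0, 1.0, 0.1
z, y, L, R, L_reg = forward(w, b, x, t, lam)
w_bar, b_bar = backward(w, b, x, t, lam, z, y)
```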
3 Backprop on a multilayer net
Now we come to the prototypical use of backprop: computing the loss derivatives for a multilayer neural net. This introduces no new ideas beyond
Actually, there’s one redundant computation, since σ(z) can be reused when computing σ′(z). But we’re not going to focus on this point.
Figure 2: (a) Full computation graph for the loss computation in a multilayer neural net. (b) Vectorized form of the computation graph.
what we’ve already discussed, so think of it as simply another example to practice the technique. We’ll use a multilayer net like the one from the previous lecture, and squared error loss with multiple output units:
zi = Σj w(1)ij xj + b(1)i
hi = σ(zi)
yk = Σi w(2)ki hi + b(2)k
L = ½ Σk (yk − tk)²
As before, we start by drawing out the computation graph for the network. The case of two input dimensions and two hidden units is shown in Figure 2(a). Because the graph clearly gets pretty cluttered if we include all the units individually, we can instead draw the computation graph for the vectorized form (Figure 2(b)), as long as we can mentally convert it to Figure 2(a) as needed.
Based on this computation graph, we can work through the derivations of the backwards pass just as before.
L̄ = 1
ȳk = L̄ (yk − tk)
w̄(2)ki = ȳk hi
b̄(2)k = ȳk
h̄i = Σk ȳk w(2)ki
z̄i = h̄i σ′(zi)
w̄(1)ij = z̄i xj
b̄(1)i = z̄i
Focus especially on the derivation of h̄i, since this is the only step which actually uses the multivariable Chain Rule.
Once you get used to it, feel free to skip the step where we write down L̄.
Once we’ve derived the update rules in terms of indices, we can find the vectorized versions the same way we’ve been doing for all our other calculations. For the forward pass:
z = W(1)x + b(1)
h = σ(z)
y = W(2)h + b(2)
L = ½ ∥t − y∥²
And the backward pass:
L̄ = 1
ȳ = L̄ (y − t)
W̄(2) = ȳ h⊤
b̄(2) = ȳ
h̄ = W(2)⊤ ȳ
z̄ = h̄ ◦ σ′(z)
W̄(1) = z̄ x⊤
b̄(1) = z̄
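A possible NumPy implementation of this vectorized forward and backward pass is sketched below (my own illustration; the variable names, shapes, and random initialization are assumptions, not part of the notes).

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, t, W1, b1, W2, b2):
    z = W1 @ x + b1
    h = sigmoid(z)
    y = W2 @ h + b2
    L = 0.5 * np.sum((t - y) ** 2)
    return z, h, y, L

def backward(x, t, W1, b1, W2, b2, z, h, y):
    L_bar = 1.0
    y_bar = L_bar * (y - t)
    W2_bar = np.outer(y_bar, h)                    # y_bar h^T
    b2_bar = y_bar
    h_bar = W2.T @ y_bar
    z_bar = h_bar * sigmoid(z) * (1 - sigmoid(z))  # elementwise, sigma'(z)
    W1_bar = np.outer(z_bar, x)                    # z_bar x^T
    b1_bar = z_bar
    return W1_bar, b1_bar, W2_bar, b2_bar

# Usage with arbitrary sizes (2 inputs, 3 hidden units, 2 outputs):
rng = np.random.default_rng(0)
x, t = rng.normal(size=2), rng.normal(size=2)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)
z, h, y, L = forward(x, t, W1, b1, W2, b2)
grads = backward(x, t, W1, b1, W2, b2, z, h, y)
```

A finite-difference check on any single weight is a good way to convince yourself the formulas are right.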
4 Appendix: why the weird notation?
Recall that the partial derivative ∂Lreg/∂w means: how much does Lreg change when you make an infinitesimal change to w, holding everything else fixed? But this isn’t a well-defined notion, because it depends on what we mean by “holding everything else fixed.” In particular, Eqn. 5 defines the cost as a function of two arguments; writing this explicitly,
Lreg(L, w) = L + (λ/2)w².    (10)
Computing the partial derivative of this function with respect to w,
∂Lreg/∂w = λw.    (11)
But in the previous section, we (correctly) computed
∂Lreg/∂w = (σ(wx + b) − t)σ′(wx + b)x + λw.    (12)
What gives? Why do we get two different answers?
The problem is that mathematically, the notation ∂Lreg/∂w denotes the partial derivative of a function with respect to one of its arguments. We make an infinitesimal change to one of the arguments, while holding the rest of the arguments fixed. When we talk about partial derivatives, we need to be careful about what the arguments to the function are. When we compute the derivatives for gradient descent, we treat Lreg as a function of the parameters of the model (in this case, w and b). In this context, ∂Lreg/∂w means: how much does Lreg change if we change w while holding b fixed? By contrast, Eqn. 10 treats Lreg as a function of L and w; in Eqn. 10, we’re making a change to the second argument (which happens to be denoted w), while holding the first argument fixed.
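To make the distinction concrete, here is a small numerical illustration (my own, not from the notes): we estimate both interpretations of the partial derivative at the same point by finite differences and see that they give different numbers.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

w, b, x, t, lam = 0.5, -0.3, 2.0, 1.0, 0.1
eps = 1e-6

# Interpretation 1: cost as a function of (L, w), as in Eqn. 10.
# Perturb the second argument while holding the first argument L fixed.
def cost_of_L_and_w(L, w_arg):
    return L + 0.5 * lam * w_arg ** 2

L_value = 0.5 * (sigmoid(w * x + b) - t) ** 2
partial_holding_L_fixed = (cost_of_L_and_w(L_value, w + eps)
                           - cost_of_L_and_w(L_value, w - eps)) / (2 * eps)

# Interpretation 2: cost as a function of the parameters (w, b).
# Perturbing w also changes L through the network.
def cost_of_params(w_arg, b_arg):
    y = sigmoid(w_arg * x + b_arg)
    return 0.5 * (y - t) ** 2 + 0.5 * lam * w_arg ** 2

partial_for_gradient_descent = (cost_of_params(w + eps, b)
                                - cost_of_params(w - eps, b)) / (2 * eps)

print(partial_holding_L_fixed)       # approximately lam * w, as in Eqn. 11
print(partial_for_gradient_descent)  # approximately Eqn. 12
```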
Unfortunately, we need to refer to both of these interpretations when describing backprop, and the partial derivative notation just leaves this difference implicit. Doubly unfortunately, our field hasn’t consistently adopted any notational conventions which will help us here. There are dozens of explanations of backprop out there, most of which simply ignore this issue, letting the meaning of the partial derivatives be determined from context. This works well for experts, who have enough intuition about the problem to resolve the ambiguities. But for someone just starting out, it might be hard to deduce the meaning from context.
That’s why I picked the bar notation. It’s the least bad solution I’ve been able to come up with.