Backpropagation
Revision as of 01:43, 24 March 2018
Backpropagation has several related meanings when talking about neural networks. Here we mean the process of finding the gradient of a cost function.
Terminology confusion
- Some people use "backpropagation" to mean the process of finding the gradient of a cost function. This leads people to say things like "backpropagation is just the multivariable chain rule".
- Some people use "backpropagation" or maybe "backpropagation algorithm" to mean the entire gradient descent algorithm, where the gradient is computed using the multivariable chain rule.
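The second usage above can be sketched as a plain gradient descent loop. Here is a minimal one-dimensional illustration on the cost C(w) = (w - 3)^2, where a hand-written derivative stands in for the gradient that backpropagation would compute in a real network (the cost, learning rate, and iteration count are all illustrative):

```python
def grad_C(w):
    # dC/dw for the toy cost C(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1        # illustrative starting point and learning rate
for _ in range(100):
    w -= lr * grad_C(w)  # gradient descent update step

print(round(w, 4))       # converges toward the minimum at w = 3
```

In a real network, `grad_C` is exactly the part that backpropagation supplies; the surrounding loop is ordinary gradient descent.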
The cost function
The cost function is defined as:

:<math>C(W,b;x) = \frac{1}{2} \sum_j (y_j - a^L_j)^2</math>
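Written out directly, this cost for a single training example is half the squared distance between the desired output y and the network's output activations a^L (the numeric values below are illustrative):

```python
def quadratic_cost(y, a_L):
    # C = (1/2) * sum_j (y_j - a^L_j)^2 for one training example
    return 0.5 * sum((yj - aj) ** 2 for yj, aj in zip(y, a_L))

y = [1.0, 0.0]    # desired output (hypothetical)
a_L = [0.8, 0.1]  # network's output activations (hypothetical)
print(quadratic_cost(y, a_L))  # 0.5 * (0.04 + 0.01) ≈ 0.025
```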
The usual neural network graph
Neural networks are usually illustrated as a graph where each node is the activation <math>a^l_j</math> of neuron <math>j</math> in layer <math>l</math>. Then the weights are labeled along the edges.
[graph goes here]
Computational graphs
The multivariable chain rule can be represented as a computational graph whose nodes are variables that store values from intermediate computations. Each node can use the values given to it by its incoming edges, and sends its output along its outgoing edges.
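As a concrete sketch, here is a toy computational graph for f(x, y) = (x + y)·y: the forward pass stores a value at each node, and the backward pass sends gradients along the edges in reverse, accumulating them when a variable feeds into more than one node (the `Node` class here is illustrative, not from any particular library):

```python
class Node:
    def __init__(self, value=0.0):
        self.value = value  # forward value stored at this node
        self.grad = 0.0     # d(output)/d(this node), filled in backward

# forward pass: each node computes from its incoming edges
x, y = Node(3.0), Node(4.0)
s = Node(x.value + y.value)  # s = x + y
f = Node(s.value * y.value)  # f = s * y

# backward pass: send gradients along the edges in reverse
f.grad = 1.0                 # df/df
s.grad += y.value * f.grad   # df/ds = y
y.grad += s.value * f.grad   # df/dy through the product node = s
x.grad += 1.0 * s.grad       # ds/dx = 1
y.grad += 1.0 * s.grad       # ds/dy = 1 (second path into y)

print(x.grad, y.grad)        # df/dx = y = 4.0, df/dy = x + 2y = 11.0
```

Note how y receives gradient along two paths (through the sum and through the product), which the chain rule tells us to add.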
A different neural network graph
Naively, we might try to piece together the information in the previous sections as follows:
1. The usual neural network graph looks like a computational graph: it's got nodes storing intermediate values that feed into later layers.
2. We know computational graphs are good at representing the process of computing the gradient.
3. In order to do gradient descent, we want the gradient of the cost function.
4. Therefore, by (1)-(3), we can use the neural network graph to represent the process of computing the gradient.
The problem with the above argument is that the computational graph is only good at representing the process of computing the gradient when the variables we are differentiating with respect to are nodes on the graph. But in our case, the cost function is <math>C(W,b;x)</math> rather than <math>C(x;W,b)</math>, i.e. since we are tinkering with the weights of the network, we want to think of the cost as a function of the weights <math>W,b</math> (parameterized by the input <math>x</math>) rather than as a function of the input <math>x</math> (parameterized by the weights <math>W,b</math>).
But the usual neural network graph only has nodes for the activations <math>a^l_j</math>, so how can we compute the relevant derivatives? One solution is to perform surgery on the existing graph to add the weights as nodes.
[graph goes here]
Now that the weights are on individual nodes, we can think of the activation nodes as receiving all the weights and all the previous activations, then doing the multiplication themselves and passing the result through the sigmoid function. This is in contrast to before, where the multiplication was "done along the edges", so that the activation nodes received the products and only did the addition and sigmoid.
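The contrast can be sketched directly: after the surgery, an activation node takes both the previous activations and the weights as inputs and does the weighted sum itself before applying the sigmoid (the function names and numeric values are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def activation_node(weights, bias, prev_activations):
    # After surgery: the node itself does the multiplication and addition...
    z = sum(w * a for w, a in zip(weights, prev_activations)) + bias
    # ...then applies the sigmoid. In the usual picture, the products w*a
    # would be "done along the edges", and the node would only receive the
    # products, sum them, and apply the sigmoid.
    return sigmoid(z)

a = activation_node([0.5, -0.25], 0.1, [1.0, 2.0])
print(a)  # sigmoid(0.5*1 - 0.25*2 + 0.1) = sigmoid(0.1) ≈ 0.525
```

With the weights appearing as explicit inputs to the node, they are now variables on the graph, so the backward pass can produce the derivatives of the cost with respect to them.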