Backpropagation is a supervised learning algorithm for multi-layer neural networks. For each training example, it takes the difference between the expected and actual output at the final layer and then uses the derivative of the sigmoid activation function at each node to compute error values at earlier layers, which in turn determine how the weights on the links between nodes are updated.
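A minimal sketch of one such update, assuming a single hidden layer, sigmoid activations throughout, and a squared-error loss; the function and variable names here are illustrative, not from the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def backprop_step(x, y, W1, W2, lr=0.1):
    """One backpropagation update for a network with one hidden layer."""
    # Forward pass: compute pre-activations and activations at each layer.
    z1 = W1 @ x           # hidden-layer pre-activation
    a1 = sigmoid(z1)      # hidden-layer activation
    z2 = W2 @ a1          # output-layer pre-activation
    a2 = sigmoid(z2)      # actual network output

    # Error at the final layer: difference between actual and expected
    # output, scaled by the sigmoid derivative at the output nodes.
    delta2 = (a2 - y) * sigmoid_prime(z2)

    # Propagate the error back through the weights to the earlier layer,
    # again scaling by the sigmoid derivative at each node.
    delta1 = (W2.T @ delta2) * sigmoid_prime(z1)

    # Gradient-descent updates to the weights on links between nodes.
    W2 -= lr * np.outer(delta2, a1)
    W1 -= lr * np.outer(delta1, x)
    return W1, W2
```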
Used in Chap. 6: pages 75, 76, 78; Chap. 7: page 96; Chap. 8: pages 102, 103; Chap. 9: pages 122, 123, 127, 129; Chap. 12: page 182; Chap. 20: page 325; Chap. 21: page 337
Also known as backprop.