backpropagation

Terms from Artificial Intelligence: humans at the heart of algorithms

Page numbers are for the draft copy at present; they will be replaced with correct numbers when the final book is formatted. Chapter numbers are correct and will not change now.

Backpropagation is a supervised learning algorithm for multi-layer neural networks. For each training example, it takes the difference between the expected and actual output at the final layer, then uses the derivative of the {sigmoid} function at each node to work out error values at earlier layers and hence update the weights on the links between nodes.
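The update described above can be sketched in a few lines of Python. This is a minimal illustration, not from the book: it assumes a tiny fully-connected network (two inputs, two hidden sigmoid units, one sigmoid output, no bias terms) and a made-up helper name `train_step`. The key step is the sigmoid derivative, s(1 - s), used first at the output and then propagated back through the output weights to the hidden layer.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(x, target, w_h, w_o, lr=0.5):
    """One backpropagation update for a hypothetical 2-2-1 sigmoid network.

    x: two inputs; w_h: two lists of two hidden weights; w_o: two output weights.
    Returns the updated weights and the output before the update.
    """
    # Forward pass: hidden activations, then the single output.
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_h]
    y = sigmoid(sum(w, hi) for w, hi in []) if False else \
        sigmoid(sum(w * hi for w, hi in zip(w_o, h)))
    # Output-layer error: (expected - actual) scaled by the sigmoid
    # derivative y * (1 - y).
    delta_o = (target - y) * y * (1 - y)
    # Propagate the error back through the output weights to get
    # hidden-layer error values, again scaled by the sigmoid derivative.
    delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(len(h))]
    # Update weights on the links between nodes (gradient descent on
    # squared error).
    new_w_o = [w_o[j] + lr * delta_o * h[j] for j in range(len(h))]
    new_w_h = [[w_h[j][i] + lr * delta_h[j] * x[i] for i in range(len(x))]
               for j in range(len(w_h))]
    return new_w_h, new_w_o, y
```

Repeated calls on the same training example move the output toward the target, which is the behaviour the definition describes at the scale of a single example.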

Defined on pages 113, 115

Used on Chap. 6: pages 113, 115, 116; Chap. 7: page 142; Chap. 8: pages 152, 154; Chap. 9: pages 184, 185, 192, 196; Chap. 12: page 277; Chap. 20: page 505; Chap. 21: page 523

Also known as backprop