backpropagation

Terms from Artificial Intelligence: humans at the heart of algorithms

Backpropagation is a supervised learning algorithm for multi-layer neural networks. For each training example, it takes the difference between the expected and actual output at the final layer, then uses the derivative of the {sigmoid} function at each node to compute error values at earlier layers and hence update the weights on the links between nodes.
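The procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not the book's own code: the network shape (2 inputs, 2 hidden nodes, 1 output), the starting weights, and the learning rate are all hypothetical values chosen for the example.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical toy network: 2 inputs -> 2 hidden nodes -> 1 output,
# with sigmoid activation at every node.
w_hidden = [[0.15, 0.20], [0.25, 0.30]]  # w_hidden[j][i]: input i -> hidden j
w_out = [0.40, 0.45]                     # hidden j -> output
lr = 0.5                                 # learning rate (arbitrary choice)

def train_step(x, target):
    # Forward pass: compute hidden activations, then the output.
    h = [sigmoid(sum(w_hidden[j][i] * x[i] for i in range(2))) for j in range(2)]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(2)))

    # Error at the final layer: (actual - expected), scaled by the
    # sigmoid derivative y * (1 - y) at the output node.
    delta_out = (y - target) * y * (1 - y)

    # Propagate the error back through the output weights to get
    # error values at the hidden (earlier) layer.
    delta_h = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]

    # Update the weights on the links between nodes by gradient descent.
    for j in range(2):
        w_out[j] -= lr * delta_out * h[j]
        for i in range(2):
            w_hidden[j][i] -= lr * delta_h[j] * x[i]

    return 0.5 * (y - target) ** 2  # squared error before this update

errs = [train_step([0.05, 0.10], 0.99) for _ in range(200)]
print(errs[0] > errs[-1])  # the error shrinks as training repeats
```

Repeating `train_step` over many examples drives the squared error down, which is the sense in which backpropagation "learns" the weights.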

Defined on pages 113, 115

Used on pages 113, 115, 116, 142, 152, 154, 184, 192, 195, 277, 505, 523

Also known as backprop