Contents
- 6.1 Overview
- 6.2 Why Use Neural Networks?
- 6.3 The Perceptron
- 6.3.1 The XOR Problem
- 6.4 The Multi-layer Perceptron
- 6.5 Backpropagation
- 6.5.1 Basic Principle
- 6.5.2 Backprop for a Single-layer Network
- 6.5.3 Backprop for Hidden Layers
- 6.6 Associative Memories
- 6.6.1 Boltzmann Machines
- 6.6.2 Kohonen Self-organising Networks
- 6.7 Lower-level Models
- 6.7.1 Cortical Layers
- 6.7.2 Inhibition
- 6.7.3 Spiking Neural Networks
- 6.8 Hybrid Architectures
- 6.8.1 Hybrid Layers
- 6.8.2 Neurosymbolic AI
- 6.9 Summary
Glossary items referenced in this chapter
AlphaGo, analogy, artificial neural networks, associative memory, auto-associative memory, autoencoder, backpropagation, big data, Boltzmann machine, brain architecture, cerebral cortex, classification, clustering, cognitive architecture, computer chess, connectionist model, converge, counterfactual reasoning, data reduction, decision tree, deep fakes, deep neural network, delta, differential (calculus), disambiguation, emotion, event, expert system, fault tolerant, fMRI, Fourier analysis, frequency domain, fully connected, game playing, genetic programming, gestalt, GPT-3, graceful degradation, grey matter, heteroassociative memory, heuristic evaluation function, hill climbing algorithm, Human Brain Project, human intelligence, human labelling, human memory, hybrid, hybrid AI, hybrid architecture, hybrid layers, inhibition, knowledge base, Kohonen networks, lateral inhibition, linear discriminant analysis, linearly inseparable, linearly separable, logistic function, machine learning, memory, Monte Carlo search, multi-layer neural network, multi-layer perceptron, mutual inhibition, NETtalk, neural-network architecture, neuron, neurone, neurosymbolic AI, object recognition, OpenAI, parallel processing, pattern recognition, Pavlovian learning, perceptron, phoneme labelling, phonetic typewriter, regret, reinforcement learning, relaxation term, rescue dogs, reservoir computing, Restricted Boltzmann Machine, search, self-organising map, semantic network, sigmoid activation function, sigmoid function, similarity matrix, sleeping phase, speech recognition, spikes, spiking neural network, spin glass models, sub-symbolic systems, supervised learning, support vector machine, symbolic systems, threshold, threshold function, training phase, trigger, underdetermined, unsupervised learning, upper bound, white matter, XOR problem