From the entry "deep neural network" in the glossary of Artificial Intelligence: humans at the heart of algorithms
A deep neural network is a neural network with many layers, each typically containing many nodes. Layers can differ in size and can use different types of learning. For example, it is common for the first layer to be a Restricted Boltzmann Machine that performs dimensionality reduction.
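As a minimal sketch of the structure described above (an illustration, not code from the glossary): a deep network is a stack of layers of differing sizes, here with random, untrained weights and a ReLU nonlinearity. The layer sizes are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input layer of 8 nodes, three hidden layers of different sizes, one output.
layer_sizes = [8, 16, 12, 4, 1]

# One weight matrix and bias vector per pair of adjacent layers.
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate an input through every layer with a ReLU nonlinearity."""
    a = x
    for w, b in zip(weights, biases):
        a = np.maximum(0.0, a @ w + b)  # ReLU activation
    return a

y = forward(rng.standard_normal(8))
print(y.shape)  # one output value per output node
```

In a real system the first layer might instead be a pretrained component such as a Restricted Boltzmann Machine, with the remaining layers trained by backpropagation.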
Typically the inner layers are underdetermined (many equally good arrangements of weights); this, together with their distance from the output layer, means that backpropagation and similar learning rules need very slow learning rates to avoid instability, which in turn means many training iterations. In addition, more layers and larger layers usually require more training data. Together these mean that deep learning (the training of deep neural networks) is expensive. This combination of computational cost and data volume is one of the main reasons that deep neural networks were not widely adopted for many years.
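The trade-off between learning rate and stability can be illustrated on the simplest possible case (this toy example is mine, not the glossary's): gradient descent on f(x) = x² converges when the step size is small but oscillates and diverges when it is too large.

```python
def descend(lr, steps=50, x0=1.0):
    """Run `steps` gradient-descent updates on f(x) = x**2."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # gradient of x**2 is 2x
    return x

print(abs(descend(lr=0.01)))  # small rate: steadily approaches 0
print(abs(descend(lr=1.5)))   # too large: each step overshoots, |x| grows
```

A deep network behaves analogously but in millions of dimensions, so the slow learning rate needed for stability translates directly into many iterations over the training data.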
Also used in hcistats2e: Chap. 11: pages 128, 130; Chap. 12: page 143
