A short while ago, at a meeting, I mentioned to another computing academic the curious fact that the computational power of the complete internet is now roughly similar to that of a single human brain [[see article here]]. While this little factoid is deliberately provocative, I did not expect the strength of the response.
“That’s impossible,” he said.
“Why?” I asked. “I’m not saying they are similar, just that there is the same computational potential.”
“Computers are sequential,” he said, “brains are associative.”
Further attempts to reason with him, likening the claim to other forms of simulation or emulation, met the same flat response: a complete unwillingness to entertain the concept.
Partly this is to do with the feeling that the idea somehow diminishes us as people: what for me was a form of play with numbers was for him, perhaps, an assault on his integrity as a human. I guess as a Christian I’m used to the idea that the importance of a person is not that we are clever or anything else, but that we are loved and chosen. So, for me, this is less of an insult to my idea of who I am.
This aside, it is interesting that the reason given was about the mode of computation: “computers are sequential” vs. the massively parallel associativity of the human brain.
Of course, if the computational substrate is all the PCs connected to the internet, then this is hardly purely sequential. In fact, one of the reasons that you could not ‘run’ a brain simulation on the internet is that communication is too slow: distributed computation over hundreds of millions of PCs could not synchronise in the way that long-range synapses do within our brains.
Amongst other things, it has been suggested that our sense of consciousness is connected with the single track of synchronised activity enabled by the tight interconnections and rapid feedback loops within our brains. In contrast, the individual computers connected to the internet compute far faster than they can communicate; there could be no single thread of attention switching at the rate that our minds manage.
If the internet were to think it would be schizophrenic.
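The mismatch between computation and communication speeds can be seen with some back-of-envelope sums. A minimal sketch, where every figure is a rough assumption of mine (PC throughput, network latency, neural loop time), not a measurement from the article:

```python
# Back-of-envelope comparison; all three figures below are rough assumptions:
pc_instructions_per_sec = 1e9   # assumed throughput of one PC
internet_round_trip_ms = 100    # assumed long-haul network latency
brain_loop_ms = 10              # assumed time for a neural feedback loop

# Instructions a PC executes while waiting for one internet round trip:
idle_instructions = pc_instructions_per_sec * internet_round_trip_ms / 1000
print(f"{idle_instructions:.0e} instructions per round trip")  # 1e+08

# The brain's feedback loops close an order of magnitude faster than
# networked machines can exchange a single message:
print(internet_round_trip_ms / brain_loop_ms)  # 10.0
```

On these (assumed) numbers a PC burns a hundred million instructions waiting for each exchange, which is why tight synchronisation across the internet is implausible.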
Sequence is also important in other ways. As the man said, our brains are associative. When considering spreading activation mechanisms for intelligent internet interfaces, one of the problems is that associative stuff gets ‘mixed up’. If London has a high level of activation, why is that? In a designed computational framework it is possible to consider multiple ‘flavours’ of activation spreading through a network of concepts, but our brains do not do this, so how do they manage to separate things?
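Such a designed framework, with ‘flavoured’ activation, might be sketched as follows. The network, weights and flavour tags here are entirely invented for illustration; this is one possible toy scheme, not a description of any particular system:

```python
# Hypothetical sketch: spreading activation where each contribution carries
# a 'flavour' tag, so we can later ask *why* a node became active.
# The concept network and weights below are invented for illustration.

links = {  # concept -> [(neighbour, link_weight)]
    "holiday": [("London", 0.6), ("beach", 0.9)],
    "crime novel": [("London", 0.5), ("fog", 0.7)],
    "London": [("Big Ben", 0.8), ("fog", 0.3)],
}

def spread(sources, steps=2):
    # activation[node] maps flavour -> amount, keeping sources separate
    activation = {src: {src: 1.0} for src in sources}
    for _ in range(steps):
        new = {node: dict(f) for node, f in activation.items()}
        for node, flavours in activation.items():
            for neighbour, weight in links.get(node, []):
                slot = new.setdefault(neighbour, {})
                for flavour, amount in flavours.items():
                    slot[flavour] = slot.get(flavour, 0.0) + weight * amount
        activation = new
    return activation

act = spread(["holiday", "crime novel"])
print(act["London"])  # London is active for two distinguishable reasons
```

Because each unit of activation keeps its flavour, the framework can answer “why is London active?” – something an undifferentiated sum of activation cannot do.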
Now to some extent they don’t – we get an overall feel for things rather than seeing the world as little pieces. However, it is also important to be able to more or less accurately ascribe feelings and associations to things. Consider one of those FBI training ranges where terrorists and hostages pop out from behind windows or doors. Your aim is to shoot the terrorists and save the hostages. But if you see a terrorist holding a hostage, how do you manage to separate the ‘bad and kill’ feelings and properly ascribe them only to the terrorist and not the hostage?
The answer may well be due exactly to this switching of attention. Even when terrorist and hostage are next to each other, as mental attention shifts momentarily to one and then the other, the mental associations also shift. Rodney Cotterill in Enchanted Looms describes two levels of attention switching: one near-conscious, taking around 500ms, and one connected with lower-level visual attention (sometimes called the visual searchlight), at 20–50ms. It is probably the slower timescales that allow fuller webs of association to build and decay, but maybe there are other, intermediate timescales of attention switching as well.
If this is right then the rapid sequential shifts of attention could be essential for maintaining the individual identity of percepts and concepts.
If we look at concepts on their own, another story of sequence unfolds.
There is a bit of a joke among neuroscientists about ‘grandmother cells’ – the idea that there is a single neuron that in some way encodes or represents your grandmother.
Looking at this purely from a computing science perspective, even if there were no neurological reasons for looking for more distributed representations, there are computational ones. If concepts were stored in small local assemblies of neurons (not single ones, to allow some redundancy and robustness), then even if a reasonably large part of our brains were dedicated to concept memory, there would just seem to be too few ‘concept-slots’.
If we used 100 neurons per concept and 10% of the brain for concept memory, we would only have space for around 10 million concepts. A quick scan through the dictionary suggests I have a recognition vocabulary of around 35,000 words, so that means I’d have fewer than 300 other concepts per dictionary word. Taking into account memories of various kinds, it just seems a little small. If we count the interconnections instead, then we have plenty of potential long-term storage capacity (half a petabyte or so), but not if we try to use individual groups of neurons to represent things. Grandmother cells are simply an inefficient use of neurons!
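The sums above can be reproduced explicitly. The neuron and synapse counts here are my own assumed round figures (roughly ten billion neurons for concept memory, a hundred billion for the whole brain, ten thousand synapses per neuron, a few bits per synapse), chosen to match the estimates in the text:

```python
# Concept-slot arithmetic; all figures are assumed round numbers.
neurons = 10_000_000_000        # assumed neurons available (~1e10)
concept_fraction = 0.1          # assumed share given over to concept memory
neurons_per_concept = 100       # small assembly, allowing some redundancy

concept_slots = neurons * concept_fraction / neurons_per_concept
print(f"{concept_slots:.0e} concept slots")   # 1e+07, i.e. ~10 million

vocabulary = 35_000             # estimated recognition vocabulary
print(concept_slots / vocabulary)             # fewer than 300 per word

# Storage in the connections instead: assume ~1e11 neurons in the whole
# brain, ~1e4 synapses each, and ~4 bits of information per synapse.
synapses = 1e11 * 1e4
bytes_total = synapses * 0.5                  # 4 bits = half a byte
print(bytes_total / 1e15, "petabytes")        # 0.5 petabytes
```

So localist ‘slots’ run out quickly, while the synapses themselves offer on the order of half a petabyte – which is the contrast the paragraph above is drawing.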
Now there is also plenty of neurological evidence for more distributed storage. Walter Freeman describes how he and his team lovingly chopped the tops off rabbits’ skulls, embedded electrodes in their olfactory bulbs and then gently nursed them back to health. The rabbits were then presented with different smells, and each smell produced a distinctive pattern of neuron firings – but these patterns extended across the bulb rather than being localised to a few neurons.
If neurons had ‘continuous’ levels of activation, it would be possible to represent things like “half think it is a dog, half think it is a fox” simply as an overlay of the activation of each. However, if this were the case, and one could have in mind any blend of concepts, then an assembly of N neurons would still only be able to encode up to N concepts, as the concept patterns would form a set of basis vectors for the N-dimensional vector space of possible activation levels (a bit of standard linear algebra).
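The linear story can be made concrete. The activation patterns below are invented purely for illustration; the point is only that blends are weighted sums, and weighted sums live in an N-dimensional space:

```python
# Concepts as continuous activation vectors over N neurons; a "half dog,
# half fox" state is just the overlay. Patterns invented for illustration.

N = 4
dog = [1.0, 0.0, 1.0, 0.0]   # assumed activation pattern for 'dog'
fox = [0.0, 1.0, 1.0, 0.0]   # assumed activation pattern for 'fox'

def blend(a, b, w=0.5):
    # "half think dog, half think fox" as a weighted sum of the two patterns
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

print(blend(dog, fox))  # [0.5, 0.5, 1.0, 0.0]

# Every blend is a linear combination, so at most N linearly independent
# concept patterns can exist: N neurons buy only about N concepts.
```

The flexibility to hold any blend in mind is exactly what caps the capacity at N.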
In fact, neurons tend to behave non-linearly, and in many areas there are patterns of inhibition as well as mutual excitation and disinhibition, leading to winner-takes-all effects. If this is true of the places where we represent concepts for short-term memory, conscious attention, etc., then instead of representations that ‘add up’, each pattern is potentially completely different – similar to the way binary numbers are encoded in computer memory: 1010 is not a combination of 1000 and 0010 but something else entirely.
In principle this kind of representation allows 2^N (two to the power of N) rather than N different concepts using the same N neurons … In reality representations are almost certainly less ‘precise’, allowing some degree of similarity between them, so the real story will be more complex; but the basic principle holds that combinations of thresholding and winner-takes-all allow more distinct concepts than would be possible if blends of concepts could occur freely.
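The counting argument itself is easy to demonstrate, with the same tiny N = 4 assembly assumed above:

```python
# With all-or-nothing firing (the idealised winner-takes-all case), every
# distinct on/off pattern across N neurons is a potential concept code.
from itertools import product

N = 4
binary_patterns = list(product([0, 1], repeat=N))  # every firing pattern
print(len(binary_patterns))                        # 2**N = 16, not N = 4

# 1010 is not a blend of 1000 and 0010; it is simply a different code:
print((1, 0, 1, 0) in binary_patterns)             # True
```

Sixteen codes from four neurons versus four basis patterns in the linear case – and the gap widens exponentially as N grows.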
However, notice again that this higher capacity to deal with more concepts is potentially bought at the cost of being able to think of fewer things ‘at once’ – and the side effect is that we have to serialise.
Returning to the “computers are sequential, brains are associative” argument: whilst not denying the incredible parallel associativity of human memory, there seems to be just as much to wonder about in the mechanisms the brain ‘uses’ for sequentiality, and in what it gains because of this.