I have just started reading Andy Clark’s “Being There” [1] (maybe more on that later), but early on he reflects on the MIT COG project, a human-like robot torso with decentralised computation – coherent action emerging through interactions rather than central control.
This reminded me of the results of brain scans (sadly, I can’t recall the source), which showed that the areas of the brain where you store concepts like ‘chair’ are different from those where you store the sound of the word – and, I’m sure, the spelling of it too.
This makes sense of the “tip of the tongue” phenomenon: you know that there is a word for something, but you can’t find the exact word. Even more remarkable is that if you know words in different languages you can know this separately for each language.
So, musing on this, there seem to be very good reasons why, even within our own mind, we hold multiple representations for the “same” thing, such as chair, which are connected, but loosely coupled.
In an artificial “brain” like COG, the computational units are physically separate. In our brains things are much less discrete, but we do know there is well-defined locality for certain functions (e.g. Broca’s area for speech). Also, while there is debate about the extent to which we know what we are doing (or, perhaps more important, know what we are about to do), it is clear that at least low-level functions operate semi-autonomously, both for instant reactions (pain withdrawal) and for controlled actions (playing a guitar chord).
In particular, there are certain brain lesions that mean the patient cannot choose what to say, yet vaguely grammatical but entirely meaningless utterances are constantly produced – rather like James Joyce 🙂 In between are cases where “nearly right” words come out, perhaps ‘table’ or ‘cushion’ instead of ‘chair’. So for speech it seems a “higher level” part of the brain decides what we want to say and makes gentle suggestions as to what this should be, but a different part does the final stringing together of words, and it is this part that ‘knows’ the rules of grammar, the way words connect to each other and what the words sound like (although the grammar and aural elements may themselves be generated separately). This is rather like the difference between the intention to walk and the fine muscle movements needed to move each leg.
Obviously the representations needed for saying ‘chair’ in a sentence are about the way it fits into grammatical structure, agreement with verbs, the sound of the word, and eventually (maybe at another level again) the way the lips need to be formed and air expelled. In contrast, for choosing what to say, it is the semantics of chairs – the fact that you sit on them, that they have legs, etc. – that is important. The “planning to speak” bit needs to know that there is a suitable word, what that word means and whether it will fit with other words, but does not need to know the details of how to say it. Similarly the “planning to move” bit needs to know roughly that legs can move in the desired way, but not the details of movement. Planning needs a model of action (speech or movement), and the model needs to be close enough to reality to work most of the time, but without all the details.
So the representations at the higher level need to share, or independently represent, enough of the lower-level functions to be able to make appropriate suggestions for lower-level action, but each will also represent different things. In addition, there needs to be some linkage between the two representations. Suppose you form the intention to say something like “The chair has four legs”. (Note “something like”, because as you form the intention the exact words will probably not yet be there.) In order for this intention to lead to the words “The chair has four legs”, something has to link the planning ‘chair’ with the saying ‘chair’.
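To make the “loose coupling plus linkage” idea a little more concrete, here is a minimal, entirely illustrative sketch in Python – not a claim about how brains (or COG) actually do it, and the dictionaries, function names and the use of the bare string “chair” as the linking handle are my own assumptions. The planning level only checks that a suitable word exists and what it means; the saying level alone knows the grammar and the sound.

```python
# Planning-level representation: what a chair *means*.
semantic_lexicon = {
    "chair": {"is_a": "furniture", "used_for": "sitting", "has": ["legs", "seat"]},
}

# Saying-level representation: how the word behaves in a sentence and sounds.
speech_lexicon = {
    "chair": {"category": "noun", "number": "singular", "phonology": "/tʃɛə/"},
}

def plan_utterance(topic, fact):
    """Higher level: decide roughly what to say, checking only that a
    suitable word exists, not how it will be pronounced."""
    if topic not in semantic_lexicon:
        raise LookupError("no word available – tip of the tongue!")
    return {"topic": topic, "predicate": "has", "object": fact}

def realise_utterance(plan):
    """Lower level: string the words together using grammatical knowledge
    that the planner never sees."""
    word = plan["topic"]                 # the shared handle links the two levels
    entry = speech_lexicon[word]
    determiner = "The" if entry["category"] == "noun" else ""
    return f"{determiner} {word} {plan['predicate']} {plan['object']}."

if __name__ == "__main__":
    plan = plan_utterance("chair", "four legs")
    print(realise_utterance(plan))       # -> The chair has four legs.
```

The point of the sketch is only the shape: two separate stores of knowledge about the “same” chair, coupled by nothing more than a shared handle, with the detail kept out of the planner.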
This is not unlike human communication; we need both shared vocabulary and a level of shared meaning: so that when I say to a garage mechanic “the clutch is not working” it is the same thing we are referring to. However, we can also each have additional meanings, annotations etc.: the mechanic will know how the clutch works better than I do.
However, whereas human verbal communication has to be pushed through a discrete medium of signs, it seems more likely that the ‘concepts’ used in our brains at different levels of activity are linked by direct (but diffuse) connections.