Human-Like Computing

Last week I attended an EPSRC workshop on “Human-Like Computing”.

The delegate pack offered a tentative definition:

“offering the prospect of computation which is akin to that of humans, where learning and making sense of information about the world around us can match our human performance.” [E16]

However, the purpose of this workshop was to clarify, and expand on this, exploring what it might mean for computers to become more like humans.

It was an interdisciplinary meeting with some participants coming from more technical disciplines such as cognitive science, artificial intelligence, machine learning and robotics; others from psychology or the study of human and animal behaviour; and some, like myself, from HCI or human factors, bridging the two.

Why?

Perhaps the first question is why one might even want more human-like computing.

There are two obvious reasons:

(i) Because it is a good model to emulate — Humans are able to solve some problems, such as visual pattern finding, which computers find hard. If we can understand human perception and cognition, then we may be able to design more effective algorithms. For example, in my own work colleagues and I have used models based on spreading activation and layers of human memory when addressing ‘web scale reasoning’ [K10,D10].
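
By way of a sketch only (the toy graph, weights, decay factor and threshold below are invented for illustration, not the ontology-based models of [K10,D10]), the core of spreading activation is just repeated propagation of activation from ‘hot’ concepts to their neighbours:

```python
# Minimal spreading-activation sketch over a toy concept graph.
# The graph, weights, decay and threshold are illustrative only.

def spread_activation(graph, seeds, decay=0.7, threshold=0.05, max_steps=3):
    """graph: {node: [(neighbour, weight), ...]}, seeds: {node: initial activation}."""
    activation = dict(seeds)
    frontier = dict(seeds)
    for _ in range(max_steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbour, weight in graph.get(node, []):
                passed = act * weight * decay
                if passed < threshold:
                    continue                      # too weak to propagate further
                activation[neighbour] = activation.get(neighbour, 0.0) + passed
                next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + passed
        if not next_frontier:
            break
        frontier = next_frontier
    return activation

toy_graph = {
    "doctor":   [("hospital", 0.9), ("patient", 0.8)],
    "patient":  [("symptom", 0.7)],
    "hospital": [("ward", 0.6)],
}
print(spread_activation(toy_graph, {"doctor": 1.0}))
```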

(ii) For interacting with people — There is considerable work in HCI in making computers easier to use, but there are limitations. Often we are happy for computers to be simply ‘tools’, but at other times, such as when your computer notifies you of an update in the middle of a talk, you wish it had a little more human understanding. One example of this is recent work at Georgia Tech teaching human values to artificial agents by reading them stories! [F16]

To some extent (i) is simply the long-standing area of nature-inspired or biologically-inspired computing. However, the combination of computational power and psychological understanding means that perhaps we are at the point where new strides can be made. Certainly, the success of ‘deep learning’ and the recent computer mastery of Go suggest this. In addition, by my own calculations, for several years the internet as a whole has had more computational power than a single human brain, and we are very near the point when we could simulate a human brain in real time [D05b].

Both goals, but particularly (ii), suggest a further goal:

(iii) new interaction paradigms — We will need to develop new ways to design for interacting with human-like agents and robots, not least how to avoid the ‘uncanny valley’ and how to avoid the appearance of over-competence that has bedevilled much work in this broad area. (see more later)

Both goals also offer the potential for a fourth secondary goal:

(iv) learning about human cognition — In creating practical computational algorithms based on human qualities, we may come to better understand human behaviour, psychology and maybe even society. For example, in my own work on modelling regret (see later), it was aspects of the computational model that highlighted the important role of ‘positive regret’ (“the grass is greener on the other side”) in helping us avoid ‘local minima’, where we stick to the things we know and do not explore new options.

Human or superhuman?

Of course humans are not perfect: do we want to emulate their limitations and failings?

For understanding humans (iv), the answer is probably “yes”, and maybe by understanding human fallibility we may be in a better position to predict and prevent failures.

Similarly, for interacting with people (ii), the agents should show at least some level of human limitations (even if ‘put on’); for example, a chess program that always wins would not be much fun!

However, for simply improving algorithms, goal (i), we may want to take the ‘best bits’ from human cognition and merge them with the best aspects of artificial computation. Of course it may be that the frailties are also the strengths; for example, the need to come to decisions and act in relatively short timescales (in terms of brain ‘ticks’) may be one way in which we avoid ‘over-learning’, a common problem in machine learning.

In addition, the human mind has developed to work with the nature of neural material as a substrate, and the physical world, both of which have shaped the nature of human cognition.

Very simple animals learn purely by Skinner-like response training, effectively what AI would term sub-symbolic. However, this level of learning requires many exposures to similar stimuli. For rarer occurrences, which do not happen frequently within a lifetime, learning must proceed at the very slow pace of the genetic development of instincts. In contrast, conscious reasoning (symbolic processing) allows us to learn through a single or very small number of exposures; ideal for infrequent events or novel environments.

Big Data means that computers effectively have access to vast amounts of ‘experience’, and researchers at Google have remarked on the ‘Unreasonable Effectiveness of Data’ [H09] that allows problems, such as translation, to be tackled in a statistical or sub-symbolic way which previously would have been regarded as essentially symbolic.

Google are now starting to recombine statistical techniques with more knowledge-rich techniques in order to achieve better results again. As humans we continually employ both types of thinking, so there are clear human-like lessons to be learnt, but the eventual system will not have the same ‘balance’ as a human.

If humans had developed with access to vast amounts of data and maybe other people’s experience directly (rather than through culture, books, etc.), would we have developed differently? Maybe we would do more things unconsciously that we do consciously. Maybe with enough experience we would never need to be conscious at all!

More practically, we need to decide how to make use of this additional data. For example, learning analytics is becoming an important part of educational practice. If we have an automated tutor working with a child, how should we make use of the vast body of data about other tutors’ interactions with other children? Should we have a very human-like tutor that effectively ‘reads’ learning analytics just as a human tutor would look at a learning ‘dashboard’? Alternatively, we might have a more loosely human-inspired ‘hive-mind’ tutor that ‘instinctively’ makes pedagogic choices based on the overall experience of all tutors, but maybe in an unexplainable way?

What could go wrong …

There have been a number of high-profile statements in the last year about the potential coming ‘singularity’ (when computers are clever enough to design new computers leading to exponential development), and warnings that computers could become sentient, Terminator-style, and take over.

There was general agreement at the workshop that this kind of risk is overblown and that, despite breakthroughs such as the mastery of Go, these systems are still very domain-limited. It will be many years before we have to worry about even general intelligence in robots, let alone sentience.

A far more pressing problem is that of incapable computers, which make silly mistakes, and the way in which people, maybe because of the media attention to the success stories, assume that computers are more capable than they are!

Indeed, over-confidence in algorithms is not just a problem for the general public, but also for computing academics, as I found in my personal experience on the REF panel.

There are of course many ethical and legal issues raised as we design computer systems that are more autonomous. This is already being played out with driverless cars, with issues of insurance and liability. Some legislators are suggesting allowing driverless cars, but only if there is a driver there to take control … but if the car relinquishes control, how do you safely manage the abrupt change?

Furthermore, while the vision of autonomous robots taking over the world is still far-fetched, more surreptitious control is already with us. Whether it is Uber cabs called by algorithm, or simply Google’s ranking of search results prompting particular holiday choices, we are all, to varying extents, doing “what the computer tells us”. I recall that in the Dalek Invasion of Earth, the very un-human-like Daleks could not move easily amongst the rubble of war-torn London. Instead they used ‘hypnotised men’ controlled by some form of neural headset. If the Daleks had landed today and simply taken over or digitally infected a few cloud computing services, would we know?

Legibility

Sometimes it is sufficient to have a ‘black box’ that makes decisions and acts. So long as it works we are happy. However, a key issue for many ethical and legal questions, but also for practical interaction, is the ability to interrogate a system, to seek explanations of why a decision has been made.

Back in 1992 I wrote about these issues [D92], in the early days when neural networks and other forms of machine learning were being proposed for a variety of tasks, from controlling nuclear fusion reactions to credit scoring. One particular scenario was if an algorithm were used to pre-sort large numbers of job applications. How could you know whether the algorithms were being discriminatory? How could a company using such algorithms defend itself if such an accusation were brought?

One partial solution then, as now, was to accept that underlying learning mechanisms may involve emergent behaviour from statistical, neural-network or other forms of opaque reasoning. However, this opaque initial learning process should give rise to an intelligible representation. This is rather akin to a judge who might have a gut feeling that a defendant is guilty or innocent, but needs to explicate that in a reasoned legal judgement.

This approach was exemplified by Query-by-Browsing, a system that creates queries from examples (using a variant of ID3), but then converts these into SQL queries. This was subsequently implemented [D94], and is still running as a web demonstration.
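
To give a flavour of the idea (a toy sketch, not the actual QbB code: the table, columns and single-threshold ‘rule’ are invented, standing in for the ID3 variant), one can learn a rule from rows the user has marked as wanted or unwanted and then express that rule as SQL:

```python
# Toy illustration of the Query-by-Browsing idea: induce a rule from rows the
# user marks as wanted/unwanted, then express it as SQL.  Table, columns and
# the single-threshold rule are invented for illustration.

rows = [  # (salary, years_service, wanted?)
    (32000, 2, False), (45000, 5, True),
    (51000, 7, True),  (28000, 1, False),
]
columns = ["salary", "years_service"]

def best_threshold(rows, col):
    """Find a cut point on one column that best separates wanted from unwanted."""
    values = sorted({r[col] for r in rows})
    best = None
    for lo, hi in zip(values, values[1:]):
        cut = (lo + hi) / 2
        errors = sum((r[col] > cut) != r[-1] for r in rows)
        if best is None or errors < best[1]:
            best = (cut, errors)
    return best

splits = [(best_threshold(rows, i), columns[i]) for i in range(len(columns))]
(cut, _), column = min(splits, key=lambda s: s[0][1])   # fewest misclassified rows
print(f"SELECT * FROM employees WHERE {column} > {cut};")
```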

For many years I have argued that it is likely that our ‘logical’ reasoning arises precisely from this need to explain our own tacit judgement to others. While we simply act individually, or by observing the actions of others, this can be largely tacit, but as soon as we want others to act in planned, collaborative ways, for example to kill a large animal, we need to convince them. Once we have the mental mechanisms to create these explanations, they become internalised, so that we end up with internal means to question our own thoughts and judgement, and even use them constructively to tackle problems more abstract and complex than found in nature. That is, dialogue leads to logic!

Scenarios

We split into groups and discussed scenarios as a means to understand the potential challenges for human-like computing. Over multiple sessions the group I was in discussed one main scenario and then a variant.

Paramedic for remote medicine

The main scenario consisted of a patient far from a central medical centre, with an intelligent local agent communicating intermittently and remotely with a human doctor. Surprisingly the remote aspect of the scenario was not initially proposed by me thinking of Tiree, but by another member of the group thinking about some of the remote parts of the Scottish mainland.

The local agent would need to be able to communicate with the patient, be able to express a level of empathy, be able to physically examine (needing touch sensing, vision), and discuss symptoms. On some occasions, like a triage nurse, the agent might be sufficiently certain to be able to make a diagnosis and recommend treatment. However, at other times it may need to pass the patient on to the remote doctor, being able to describe what had been done in terms of examination, symptoms observed and information gathered from the patient, in the same way that a paramedic does when handing over a patient to the hospital. However, even after the handover of responsibility, the local agent may still form part of the remote diagnosis, and may be able to take over again once the doctor has determined an overall course of action.

The scenario embodied many aspects of human-like computing:

  • The agent would require a level of emotional understanding to interact with the patient
  • It would require fine and situation-contingent robotic features to allow physical examination
  • Diagnosis and decisions would need to be guided by rich human-inspired algorithms based on large corpora of medical data, case histories and knowledge of the particular patient.
  • The agent would need to be able to explain its actions both to the patient and to the doctor. That is, it would not only need to transform its own internal representations into forms intelligible to a human, but do so in multiple ways depending on the inferred knowledge and nature of the person.
  • Ethical and legal responsibility are key issues in medical practice
  • The agent would need to be able to manage handovers of control.
  • The agent would need to understand its own competencies in order to know when to call in the remote doctor.

The scenario could be in physical or mental health. The latter is particularly important given recent statistics, which suggested that only 10% of people in the UK suffering from mental health problems receive suitable help.

Physiotherapist

As a more specific scenario still, one of the group related how he had been to an experienced physiotherapist after a failed diagnosis by a previous physician. Rather than jumping straight into a physical examination, or even apparently watching the patient’s movement, the physiotherapist proceeded to chat for 15 minutes about aspects of the patient’s life, work and exercise. At the end of this process, the physiotherapist said, “I think I know the problem”, and proceeded to administer a directed test, which correctly diagnosed the problem and led to successful treatment.

Clearly the conversation had given the physiotherapist a lot of information about potential causes of injury, aided by many years observing similar cases.

To do this using an artificial agent would suggest some level of:

  • theory/model of day-to-day life

Thinking about the more conversational aspects of this I was reminded of the PhD work of Ramanee Peiris [P97]. This concerned consultations on sensitive subjects such as sexual health. It was known that when people filled in (initially paper) forms prior to a consultation, they were more forthcoming and truthful than if they had to provide the information face-to-face. This was even if the patient knew that the person they were about to see would read the forms prior to the consultation.

Ramanee’s work extended this first to electronic forms and then to chat-bot style discussions which were semi-scripted, but used simple textual matching to determine which topics had been covered, including those spontaneously introduced by the patient. Interestingly, the more human-like the system became, the more truthful and forthcoming the patients were, even though they were less so with a real human.

As well as revealing lessons for human interactions with human-like computers, this also showed that human-like computing may be possible with quite crude technologies. Indeed, even Eliza was treated (to Weizenbaum’s alarm) as if it really were a counsellor, even though people knew it was ‘just a computer’ [W66].

Cognition or Embodiment?

I think it fair to say that the overall balance, certainly in the group I was in, was towards the cognitivist: that is, a more Cartesian approach starting with understanding and models of internal cognition, and then seeing how these play out in external action. Indeed, the term ‘representation’ was used repeatedly as an assumed central aspect of any human-like computing, and there was even talk of resurrecting Newell’s project for a ‘unified theory of cognition’ [N90].

There did not appear to be any hard-core embodiment theorists at the workshop, although several people had sympathies in that direction. This was perhaps just as well, as we could easily have degenerated into well-rehearsed arguments for and against embodiment- versus cognition-centred explanations … not least about the critical word ‘representation’.

However, I did wonder whether a path that deliberately took embodiment as central would be valuable. How many human-like behaviours could be modelled in this way, taking external perception–action as central and only taking on internal representations when they were absolutely necessary (Andy Clark’s 007 principle) [C98]?

Such an approach would meet limits, not least the physiotherapist’s 15-minute chat, but I would guess it would be more successful over a wider range of behaviours and scenarios than we might at first think.

Human–Computer Interaction and Human-Like Computing

Both Russell and I were partly there representing our own research interests, but also more generally as part of the HCI community, looking at the way human-like computing would intersect existing HCI agendas, or maybe create new challenges and opportunities (see poster). It was certainly clear during the workshop that there is a substantial role for human factors, from fine motor interactions, to conversational interfaces, to socio-technical systems design.

Russell and I presented a poster, which largely focused on these interactions.


There are two sides to this:

  • understanding and modelling for human-like computing — HCI studies and models complex, real world, human activities and situations. Psychological experiments and models tend to be very deep and detailed, but narrowly focused and using controlled, artificial tasks. In contrast HCI’s broader, albeit more shallow, approach and focus on realistic or even ‘in the wild’ tasks and situations may mean that we are in an ideal position to inform human-like computing.

  • human interfaces for human-like computing — As noted in goal (iii), we will need paradigms for humans to interact with human-like computers.

As an illustration of the first of these, the poster used my work on making sense of the apparently ‘bad’ emotion of regret [D05].

An initial cognitive model of regret was formulated involving a rich mix of imagination (in order to pull past events and actions to mind), counterfactual modal reasoning (in order to work out what would have happened), emotion (which is modified to feel better or worse depending on the possible alternative outcomes), and Skinner-like low-level behavioural learning (the eventual purpose of regret).


This initial descriptive and qualitative cognitive model was then realised in a simplified computational model, which had a separate ‘regret’ module that could be plugged into a basic behavioural learning system. Both the basic system and the system with regret learnt, but the system with regret did so with between 5 and 10 times fewer exposures. That is, regret made a major improvement to the machine learning.
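
To give a feel for the shape of such an architecture (a toy sketch with an invented bandit task and scaling, not the model described in [D05]), a ‘regret’ module can be bolted onto a simple reward learner so that the gap between the obtained reward and what the learner believes the alternative would have paid modulates the size of the learning step:

```python
import random

# Toy sketch: a basic behavioural learner (two-armed bandit) with a pluggable
# 'regret' module.  Task, update rule and regret scaling are illustrative only.

def run(trials=500, use_regret=True, seed=0):
    rng = random.Random(seed)
    true_reward = [0.3, 0.7]      # arm 1 really is better
    estimate = [0.5, 0.5]         # the learner's current reward estimates
    base_rate = 0.05
    for _ in range(trials):
        # mostly exploit the better-looking arm, occasionally explore
        arm = estimate.index(max(estimate)) if rng.random() > 0.1 else rng.randrange(2)
        reward = 1.0 if rng.random() < true_reward[arm] else 0.0
        rate = base_rate
        if use_regret:
            # counterfactual: what the learner believes the other arm would have paid
            regret = estimate[1 - arm] - reward
            rate = base_rate * (1 + 5 * abs(regret))   # bigger update when surprised
        estimate[arm] += rate * (reward - estimate[arm])
    return estimate

print("without regret:", run(use_regret=False))
print("with regret:   ", run(use_regret=True))
```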


Turning to the second of these: direct manipulation has been at the heart of interaction design since the PC revolution in the 1980s. Prior to that, command-line interfaces (or worse, job-control interfaces) suggested a mediated paradigm, where operators ‘asked’ the computer to do things for them. Direct manipulation changed that, turning the computer into a passive virtual world of computational objects on which you operated with the aid of tools.

To some extent we need to shift back to the 1970s mediated paradigm, but renewed, where the computer is no longer a severe bureaucrat demanding precise grammatical and procedural requests, but instead a helpful and understanding aide. For this we can draw upon existing areas of HCI such as human–human communications, intelligent user interfaces, conversational agents and human–robot interaction.

References

[C98] Clark, A. 1998. Being There: Putting Brain, Body and the World Together Again. MIT Press. https://mitpress.mit.edu/books/being-there

[D92] A. Dix (1992). Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451. http://www.hcibook.com/alan/papers/neuro92/

[D94] A. Dix and A. Patrick (1994). Query By Browsing. Proceedings of IDS’94: The 2nd International Workshop on User Interfaces to Databases, Ed. P. Sawyer. Lancaster, UK, Springer Verlag. 236-248.

[D05] A. Dix (2005). The adaptive significance of regret. Unpublished essay. https://alandix.com/academic/essays/regret.pdf

[D05b] A. Dix (2005). the brain and the web – a quick backup in case of accidents. Interfaces, 65, pp. 6-7. Winter 2005. https://alandix.com/academic/papers/brain-and-web-2005/

[D10] A. Dix, A. Katifori, G. Lepouras, C. Vassilakis and N. Shabir (2010). Spreading Activation Over Ontology-Based Resources: From Personal Context To Web Scale Reasoning. International Journal of Semantic Computing, Special Issue on Web Scale Reasoning: scalable, tolerant and dynamic. 4(1) pp.59-102. http://www.hcibook.com/alan/papers/web-scale-reasoning-2010/

[E16] EPSRC (2016). Human-Like Computing Handbook. Engineering and Physical Sciences Research Council. 17–18 February 2016.

[F16] Alison Flood (2016). Robots could learn human values by reading stories, research suggests. The Guardian, Thursday 18 February 2016 http://www.theguardian.com/books/2016/feb/18/robots-could-learn-human-values-by-reading-stories-research-suggests

[H09] Alon Halevy, Peter Norvig, and Fernando Pereira. 2009. The Unreasonable Effectiveness of Data. IEEE Intelligent Systems 24, 2 (March 2009), 8-12. DOI=10.1109/MIS.2009.36

[K10] A. Katifori, C. Vassilakis and A. Dix (2010). Ontologies and the Brain: Using Spreading Activation through Ontologies to Support Personal Interaction. Cognitive Systems Research, 11 (2010) 25–41. https://alandix.com/academic/papers/Ontologies-and-the-Brain-2010/

[N90] Allen Newell. 1990. Unified Theories of Cognition. Harvard University Press, Cambridge, MA, USA. http://www.hup.harvard.edu/catalog.php?isbn=9780674921016

[P97] DR Peiris (1997). Computer interviews: enhancing their effectiveness by simulating interpersonal techniques. PhD Thesis, University of Dundee. http://virtual.inesc.pt/rct/show.php?id=56

[W66] Joseph Weizenbaum. 1966. ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (January 1966), 36-45. DOI=http://dx.doi.org/10.1145/365153.365168

Holiday Reading

Early in the summer Fiona and I took 10 days holiday, first touring on the West Coast of Scotland, south from Ullapool, and then over the Skye Road Bridge to spend a few days on Skye.  As well as visiting various wool-related shops on the way and a spectacular drive over the pass from Applecross, I managed a little writing and some work on regret modelling1. And, as well as the writing and regret modelling, quite a lot of reading.

This was my holiday reading:

The Talking Ape: How Language Evolved, Robbins Burling (see my booknotes and review)

In Praise of the Garrulous, Allan Cameron (see my booknotes)

A Mind So Rare, Merlin Donald (see my booknotes and review)

Wanderlust, Rebecca Solnit (see my booknotes)

  1. At last!  It has been something like 6 years since I first did some initial, and very promising, computational regret modelling, and I have at last got back to it, writing driver code so that I now have data from a systematic spread of different parameters.  Happily this verified the early evidence that the cognitive model of regret I first wrote about in 2003 really does seem to aid learning.  However, the value of more comprehensive simulation was proved, as the early indications that positive regret (the ‘grass is greener’ feeling) was more powerful than negative regret do not seem to have been borne out.[back]

book: The Singing Neanderthals, Mithin

One of my birthday presents was Steven Mithin’s “The Singing Neanderthals” and, having been on holiday, I have already read it! I read Mithin’s “The Prehistory of the Mind” some years ago and have referred to it repeatedly over the years1, so was excited to receive this book, and it has not disappointed. I like his broad approach taking evidence from a variety of sources, as well as his own discipline of prehistory; in times when everyone claims to be cross-disciplinary, Mithin truly is.

“The Singing Neanderthals”, as its title suggests, is about the role of music in the evolutionary development of the modern human. We all seem to be born with an element of music in our heart, and Mithin seeks to understand why this is so, and how music is related to, and part of the development of, language. Mithin argues that elements of music developed in various later hominids as a form of primitive communication2, but separated from language in homo sapiens when music became specialised to the communication of emotion and language to more precise actions and concepts.

The book ‘explains’ various known musical facts, including the universality of music across cultures and the fact that most of us do not have perfect pitch … even though young babies do (p77). The hard facts of how things were for humans or related species tens or hundreds of thousands of years ago are sparse, so there is inevitably an element of speculation in Mithin’s theories, but he shows how many, otherwise disparate pieces of evidence from palaeontology, psychology and musicology make sense given the centrality of music.

Whether or not you accept Mithin’s thesis, the first part of the book provides a wide ranging review of current knowledge about the human psychology of music. Coincidentally, while reading the book, there was an article in the Independent reporting on evidence for the importance of music therapy in dealing with depression and aiding the rehabilitation of stroke victims3, reinforcing messages from Mithin’s review.

The topic of “The Singing Neanderthals” is particularly close to my own heart, as my first personal forays into evolutionary psychology (long before I knew the term, or discovered Cosmides and Tooby’s work) were in attempting to make sense of human limits to delays and rhythm.

Those who have been to my lectures on time since the mid 1990s will recall being asked first to clap in time and then to swing their legs ever faster … sometimes until they fall over! The reason for this is to demonstrate the fact that we cannot keep beats much slower than one per second4, and then to explain this in terms of our need for a mental ‘beat keeper’ for walking and running. The leg swinging is to show how our legs, as simple pendulums, have a natural frequency of around 1Hz, determining our slowest walk and hence our need for rhythm.
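
The arithmetic behind the leg-as-pendulum figure is a standard physical-pendulum estimate (the leg length below is just an assumed round number, and a leg is of course not really a uniform rod):

```python
import math

# Rough check of the leg-as-pendulum estimate: treat the leg as a uniform rod
# of length L pivoted at the hip (T = 2*pi*sqrt(2L/3g)).  L = 0.9 m is assumed.
g, L = 9.81, 0.9
period = 2 * math.pi * math.sqrt(2 * L / (3 * g))
print(f"full swing ~{period:.2f} s, one step (half swing) ~{period / 2:.2f} s")
# ~1.6 s per full swing, i.e. a step roughly every 0.8 s -- close to a 1 Hz walking rhythm
```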

Mithin likewise points to walking and running as crucial in the development of rhythm, in particular the additional demands of bipedal motion (p150). Rhythm, he argues, is not just about music, but also a shared skill needed for turn-taking in conversation (p17), and for emotional bonding.

In just the last few weeks, at the HCI conference in Newcastle, I learnt that entrainment, when we keep time with others, is a rare skill amongst animals, almost uniquely human. Mithin also notes this (p206), with exceptions, in particular one species of frog, where the males gather in groups to sing/croak in synchrony. One suggested reason for this is that the louder sound can attract females from a larger distance. This cooperative behaviour of course acts against each frog’s own interest to ‘get the girl’, so they also seek to out-perform each other when a female frog arrives. Mithin imagines that similar pressures may have sparked early hominid music making. As well as the fact that synchrony makes the frogs louder and so easier to hear, I wonder whether the discerning female frogs also realise that if they go to a frog choir they get to choose amongst them, whereas if they follow a single frog croak they get stuck with the frog they find; a form of frog speed dating?

Mithin also suggests that the human ability to synchronise rhythm is about ‘boundary loss’, seeing oneself less as an individual and more as part of a group, important for early humans about to engage in risky collaborative hunting expeditions. He cites evidence of this from the psychology of music and anthropology, and it is part of many people’s personal experience, for example, in a football crowd, or the Last Night of the Proms.

This reminds me of the experiments where a rubber hand is touched in time with touching a person’s real hand; after a while the subject starts to feel as if the rubber hand is his or her own hand. Effectively our brain assumes that this thing that correlates with feeling must be part of oneself5. Maybe a similar thing happens in choral singing: I voluntarily make a sound and simultaneously everyone makes the sound, so it is as if the whole choir is an extension of my own body?

Part of the neurological evidence for the importance of group music making concerns the production of oxytocin. In experiments, female prairie voles that have had oxytocin production inhibited engage in sex as freely as normal voles, but fail to pair-bond (p217). The implication is that oxytocin’s role in bonding applies equally to social groups. While this explains a mechanism by which collaborative rhythmic activities create ‘boundary loss’, it doesn’t explain why oxytocin is created through rhythmic activity in the first place. I wonder if this is perhaps to do with bipedalism and the need for synchronised movement during face-to-face copulation, which would explain why humans can do synchronised rhythms whereas apes cannot. That is, rhythmic movement and oxytocin production become associated for sexual reasons and then this generalises to the social domain. Think again of that chanting football crowd?

I should note that Mithin also discusses at length the use of music in bonding with infants, as anyone who has sung to a baby knows, so this offers an alternative route to rhythm & bonding … but not one that is particular to humans, so I will stick with my hypothesis 😉

Sexual selection is a strong theme in the book, the kind of runaway selection that leads to the peacock tail. Changing lifestyles of early humans, in particular longer periods looking after immature young, led to a greater degree of female control in the selection of partners. As human size came close to the physical limits of the environment (p185), Mithin suggests that other qualities had to be used by females to choose their mate, notably male singing and dance – prehistoric Saturday Night Fever.

As one piece of evidence for female mate choice, Mithin points to the overly symmetric nature of hand axes and imagines hopeful males demonstrating their dexterity by knapping ever more perfect axes in front of admiring females (p188). However, this brings to mind Calvin’s “Ascent of Mind“, which argues that these symmetric, ovoid axes were used like a discus, thrown into the midst of a herd of prey to bring one down. The two theories for axe shape are not incompatible. Calvin suggests that the complex physical coordination required by axe throwing would have driven general brain development. In fact these forms of coordination are not so far from those needed for musical movement, and indeed expert flint knapping, so maybe it was these skills that were demonstrated by the shaping of axes beyond that immediately necessary for purpose.

Mithin’s description of the musical nature of mother–child interactions also brought to mind Broomhall’s “Eternal Child“. Broomhall’s central thesis is that humans are effectively in a sort of arrested development, with many features, not least our near nakedness, characteristic of infants. Although it was not one of the points Broomhall makes, his arguments made sense to me in terms of the mental flexibility that characterises childhood, and the way this is necessary for advanced human innovation; I am always encouraging students to think in a more childlike way. If Broomhall’s theories were correct, then this would help explain how some of the music making more characteristic of mother–infant interactions became generalised to adult social interactions.

I do notice an element of mutual debunking amongst those writing about richer cognitive aspects of early human and hominid development. I guess this is a common trait in disciplines where evidence is thin and theories have to fill a lot of blanks. So maybe Mithin, Calvin and Broomhall would not welcome me bringing their respective contributions together! However, as in other areas where data is necessarily scant (such as sub-atomic physics), one does feel a developing level of methodological rigour, and the fact that these quite different theoretical approaches have points of connection does suggest that a deeper understanding of early human cognition, while not yet definitive, is developing.

In summary, and as part of this wider unfolding story, “The Singing Neanderthals” is an engaging and entertaining book to read, whether you are interested in the psychological and social impact of music itself, or the development of the human mind.

… and I have another of Mithin’s books in the birthday pile, so looking forward to that too!

  1. See particularly my essay on the role of imagination in bringing together our different forms of ‘specialised intelligence’. “The Prehistory of the Mind” highlighted the importance of this ‘cognitive fluidity’, linking social, natural and technological thought, but lays this largely in the realm of language. I would suggest that imagination also has this role, creating a sort of ‘virtual world’ on which different specialised cognitive modules can act (see “imagination and rationality“).[back]
  2. He calls this musical communication system Hmmmm in its early form – Holistic, Multiple-Modal, Manipulative and Musical, p138 – and later Hmmmmm – Holistic, Multiple-Modal, Manipulative, Musical and Mimetic, p221.[back]
  3. “NHS urged to pay for music therapy to cure depression”, Nina Lakhani, The Independent, Monday, 1 August 2011[back]
  4. Professional conductors say 40 beats per minute is the slowest reliable beat without counting between beats.[back]
  5. See also my previous essay on “driving as a cyborg experience“.[back]

language, dreams and the Jabberwocky circuit

If life is always a learning opportunity, then so are dreams.

Last night I both learnt something new about language and cognition, and also developed a new trick for creativity!

In the dream in question I was in a meeting. I know, a sad topic for a dream, and perhaps even sadder, it had started with me filling in forms!  The meeting was clearly one after I’d given a talk somewhere, as a person across the table said she’d been wanting to ask me (obviously as a sort of challenge) if there was a relation between … and here I’ll expand later … something like evolutionary and ecological something.  Ever one to think on my feet, I said something like “that’s an interesting question”, but it was also clear that the question arose partly because the terms sounded somewhat similar, so it had some of the sense of a rhyming riddle: “what’s the difference between a jeweller and a jailor”.  So I went on to mention random metaphors as a general creativity technique and then, so as to give practical advice, suggested choosing two words next to each other in a dictionary and then trying to link them.

Starting with the last of these, the two-words-in-a-dictionary method is one I have never suggested to anyone before, nor even thought about. It was clearly prompted by the specific example where the words had an alliterative nature, and so was a sensible generalisation, and after I woke I realised it was worth suggesting in future as an exercise.  But it was entirely novel to me: I had effectively done exactly the sort of thinking / problem solving that I would have done in the real-life situation, but while dreaming.

One of the reasons I find dreams fascinating is that in some ways they are so normal — we clearly have little or no sensory input, and certain parts of our brain shut down (e.g. motor control, to stop us thrashing about too much in our sleep) — but other parts seem to function perfectly as normal.  I have written before about the cognitive nature of dreams (including maybe how to model dreaming) and what we may be able to learn about cognitive function because not everything is working, rather like running an engine when it is out of the car.

In this dream clearly the ‘conscious’ (I know an oxymoron) problem-solving part of the mind was operating just the same as when awake.  Which is an interesting fact about dreaming, but  I was already aware of it from previous dreams.

In this dream it was the language that was interesting, the original conundrum I was given.  The problem came as I woke up and tried to reconstruct exactly what my interlocutor had asked me.  The words clearly *meant* evolutionary and ecological, but in the dream had ‘sounded’ even closer aurally, more like evolution and elocution (interesting to consider, images of God speaking forth creation).

So how had the two words sounded more similar in my dream than in real speech?

For this we need the Jabberwocky circuit.

There is a certain neurological condition that arises, I think due to tumours or damage in particular areas of the brain, which disrupts particular functions of language.   The person speaks interminably; the words make sense and the grammar is flawless, but there is no overall sense.  Each small snippet of speech is fine, just there is no larger-scale linkage.

When explaining this phenomenon to people I often evoke the Jabberwocky circuit.  Now I should note that this is not a word used by linguists, neurolinguists, or cognitive scientists, and is a gross simplification, but I think captures the essence of what is happening.  Basically there is a part of your mind (the conscious, thinking bit) that knows what to say and it asks another bit, the Jabberwocky circuit, to actually articulate the words.  The Jabberwocky circuit knows about the sound form of words and how to string them together grammatically, but basically does what it is told.  The thinking bit needs to know enough about what can be said, but doesn’t have time to deal with precisely how they are strung together and leaves that to Jabberwocky.

Even without brain damage we can see occasional slips in this process.  For example, if you are talking to someone (and even more if typing) and there is some other speech audible (maybe radio in the background), occasionally a word intrudes into your own speech that isn’t part of what you meant to say, but is linked to the background intruding sound.

Occasionally too, you find yourself stopping in mid sentence when the words don’t quite make sense, for example, when what would be reasonable grammar overlaps with a colloquialism, so that it no longer makes sense.  Or you may simply not be able to say a word that you ‘know’ is there and insert “thingy” or “what’s it called” where you should say “spanner”.

The relationship between the two is rather like a manager and someone doing the job: the manager knows pretty much what is possible and can give general directions, but the person doing the job knows the details.  Occasionally, the instructions get confused (when there is intruding background speech) or the manager thinks something is possible which turns out not to be.

Going back to the dream I thought I ‘heard’ the words, but examining more closely after I woke I realised that no word would actually fit.  I think what is happening is that during dreaming (and maybe during imagined dialogue while awake), the Jabberwocky circuit is not active, or not being attended to.  It is like I am hearing the intentions to speak of the other person, not articulated words.  The pre-Jabberwocky bit of the mind does know that there are two words, and knows what they *mean*.  It also knows that they sound rather similar at the beginning (“eco”, “evo”), but not exactly what they sound like throughout.

I have noticed a similar thing with the written word.  Often in dreams I am reading a book, sheet of paper or poster, and the words make sense, but if I try to look more closely at the precise written form of the text, I cannot focus, and indeed often wake at that point1.  That is, the dream is creating the interpretation of the text, but not the actual sensory form; although if asked I would normally say that I had ‘seen’ the words on the page in the dream, it is more that I ‘see’ that there are words.

Fiona does claim to be able to see actual letters in dreams, so maybe it is possible to recreate more precise sensory images, or maybe this is just the difference between simply writing and reading, and more conscious spelling-out or attending to words, as in the well known:

Paris in the
the spring

Anyway, I am awake now and the wiser.  I know a little more about dreaming, which cognitive functions are working and which are not;  I know a little more about the brain and language; and I know a new creativity technique.

Not bad for a night in bed.

What do you learn from your dreams?

  1. The waking is interesting, I have often noticed that if the ‘logic’ of the dream becomes irreconcilable I wake.  This is a long story in itself, but I think similar to the way you get a ‘breakdown’ situation when things don’t work as expected and are forced to think about what you are doing.  It seems like the ‘kick’ that changes your mode of thinking often wakes you up![back]

Descartes: Principles of Philosophy

I have just read Descartes‘ “Principles of Philosophy” – famous for “Cogito ergo sum“.  I have read commentaries on Descartes before, but never the original (or at least a translation1 – I don’t read Latin!).  Nowadays “Cartesian thinking” is often used in a derogatory way, symbolising a narrow, reductionist and simplistic world-view.  However, reading “Principles” in full reveals a man with a rich and deep insight, of which his rational and analytic philosophy forms a part.


  1. René Descartes, 1644, Principles of Philosophy, trans. George MacDonald Ross, 1998–1999[back]

tech talks: brains, time and no time

Just scanning a few Google Tech Talks on YouTube.  I don’t visit it often, but followed a link from Rob Style‘s twitter.  I find the videos a bit slow, so tend to flick through with the sound off, really wishing they had fast-forward buttons like a DVD, as it is quite hard to pull the little slider back and forth.

One talk was by Stuart Hameroff on A New Marriage of Brain and Computer.  He is the guy who works with Penrose on the possibility that quantum effects in microtubules may be the source of consciousness.  I noticed that he used calculations for computational capacity based on traditional neuron-based models that are very similar to my own calculations some years ago in “the brain and the web”, when I worked out that the memory and computational capacity of a single human brain are very similar to those of the entire web. Hameroff then went on to say that there are an order of magnitude more microtubules (sub-cellular structures, with many per neuron), so the traditional calculations do not hold!
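
For anyone who wants to reproduce the flavour of that back-of-envelope comparison, the sum goes something like this (the figures are common order-of-magnitude estimates plugged in for illustration, not the precise numbers from the original article):

```python
import math

# Back-of-envelope brain vs web comparison.  All figures are rough
# order-of-magnitude estimates for illustration, not those from the article.
neurons    = 1e11               # ~10^11 neurons in a human brain
synapses   = neurons * 1e4      # ~10^4 synapses per neuron
brain_bits = synapses * 1       # crudely, ~1 bit per synapse

web_pages  = 1e10               # order of 10^10 indexed pages (assumed)
page_bits  = 1e4 * 8            # ~10 kB of text per page
web_bits   = web_pages * page_bits

print(f"brain ~10^{round(math.log10(brain_bits))} bits, web ~10^{round(math.log10(web_bits))} bits")
```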

Microtubules are fascinating things; they are like little Meccano sets inside each cell.  It is these microtubules that, during cell division, straighten out the chromosomes, which are normally tangled up in the nucleus.  Even stranger, those fluid movements of an amoeba gradually pushing out pseudopodia are actually made by mechanical structures composed of microtubules, only looking so organic because of the cell membrane – rather like a robot covered in latex.


The main reason for going to the tech talks was one by Steve Souders, “Life’s Too Short – Write Fast Code”, that has lots of tips on speeding up web pages, including allowing Javascript files to download in parallel.  I was particularly impressed by the quantification of the costs of delays on web pages down to 100ms!

This is great.  Partly because of my long interest in time and delays in HCI. Partly because I want my own web scripts to be faster and I’ve already downloaded the Yahoo! YSlow plugin for Firefox that helps diagnose causes of slow pages.  And partly because I get so frustrated waiting for things to happen, both on the web and on the desktop … and why oh why does it take a good minute to get a WiFi connection … and why doesn’t YouTube introduce better controls for skimming videos.

… and finally, because I’d already spent too much time skimming the tech talks, I looked at one last talk: David Levy, “No Time To Think” … how we are all so rushed that we have no time to really think about problems, not to mention life1.  At least that’s what I think it said, because I skimmed it rather fast.

  1. see also my own discussion of Slow Time[back]

Coast to coast: St Andrews to Tiree

A week ago I was in St Andrews on the east coast of Scotland delivering three lectures on “Human Computer Interaction: as it was, as it is and as it may be” as part of their distinguished lecture series and now I am in Tiree in the wild western ocean off the west coast.

I had a great time in St Andrews and was well looked after by some I knew already – Ian, Gordan, John and Russell – and also met many new people. I ate good food and stayed in a lovely hotel overlooking the sea (and golf course) and full of pictures of golfers (well, what do you expect in St Andrews).

For the lectures, I was told the general pattern was one lecture about the general academic area, one on the ‘state of the art’ and one about my own stuff … hence the three parts of the title!  Ever one for cutesy titles, I then called the individual lectures “Whose Computer Is It Anyway”, “The Great Escape” and “Connected, but Under Control, Big, but Brainy?”.

The first lecture was about the fact that computers are always ultimately for people (surprise surprise!) and I used Ian’s slight car accident on the evening before the lecture as a running example (sorry Ian).

The second lecture was about the way computers have escaped the office desktop and found their way into the physical world of ubiquitous computing, the digital world of the web, and into our everyday lives in our homes – and increasingly the hub of our social lives too.  Matt Oppenheim did some great cartoons for this and I’m going to use them again in a few weeks when I visit Dublin to do the inaugural lecture for SIGCHI Ireland.

[cartoon: for 20 years the computer is chained to the office desktop (© Matt Oppenheim)]

[cartoon: ... now escapes: out into the world, spreading across the net, in the home, in our social lives (© Matt Oppenheim)]

The last lecture was about intelligent internet stuff, similar to the lecture I gave at Aveiro a couple of weeks back … mentioning again the fact that the web now has the same information storage and processing capacity as a human brain1 … always makes people think … well at least it always makes ME think about what it means to be human.

… and now … in Tiree … sun, wild wind, horizontal hail, and paddling in the (rather chilly) sea at dawn

  1. see the brain and the web[back]

matterealities and the physical embodiment of code

Last Tuesday morning I had the pleasure of entertaining a group of attendees to the Matterealities workshop @ Lancaster. Hans and I had organised a series of demos in the dept. during the morning (physiological gaming, Firefly (intelligent fairylights), VoodooIO, something to do with keyboards) … but as computer scientists are nocturnal the demos did not start until 10am, and so I got to talk with them for around an hour beforehand :-/

The people there included someone who studied people coding about DNA, someone interested in text, anthropologists, artists and an ex-AI man. We talked about embodied computation1, the human body as part of computation, the physical nature of code, the role of the social and physical environment in computation … and briefly over lunch I even strayed onto the modelling of regret … but actually a little off topic.

[photo: Alan driving]

physicality – Played a little with sticks and stones while talking about properties of physical objects: locality of effect, simplicity of state, proportionality and continuity of effect2.

physical interaction – Also talked about the DEPtH project and previous work with Masitah on natural interaction. Based on the piccie, I may have acted out driving when talking about natural inverse actions.

ubiquity of computation – I asked the question I often do “How many computers do you have in your house” … one person admitted to over 10 … and she meant real computers3. However, as soon as you count the computer in the TV and HiFi, the washing machine and microwave, central heating and sewing machine the count gets bigger and bigger. Then there is the number you carry with you: mobile phone, camera, USB memory stick, car keys (security codes), chips on credit cards.

However, at the Firefly demo later in the morning, they got to see what may be the greatest concentration of computers in the UK … and all on a Christmas tree. Behind each tiny light (over 1000 of them) is a tiny computer, each as powerful as the first PC I owned, allowing them to act together as a single three-dimensional display.

embodiment of computation – Real computation always happens in the physical world: electrons zipping across circuit boards and transistors routing signals in silicon. For computation to happen, the code (the instructions of what needs to happen) and the data (what it needs to happen with and to) need to be physically together.

The Turing Machine, Alan Turing’s thought experiment, is a lovely example of this. Traditionally the tape in the Turing machine is thought of as being dragged across a read-write head on the little machine itself.

However … if you were really to build one … the tape would get harder and harder to move as you used longer and longer tapes. In fact it makes much more sense to think of the little machine as moving over the tape … the Turing machine is really a touring machine (ouch!). Whichever way it goes, the machine that knows what to do and the tape that it must do it to are brought physically together4.
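
A toy Turing machine makes the point concrete: the ‘machine’ is just a small transition table, but at every step it has to be brought together with one particular cell of the tape. (The machine below, which simply appends a 1 to a unary number, is purely illustrative.)

```python
# Minimal Turing machine: at each step the control (transition table) must be
# brought together with one cell of the tape.  The example machine just
# appends a '1' to a unary number -- purely illustrative.

def run_turing(tape, rules, state="scan", blank="_"):
    tape, head = list(tape), 0
    while state != "halt":
        if head == len(tape):
            tape.append(blank)                 # extend the tape on demand
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

rules = {
    ("scan", "1"): ("1", "R", "scan"),         # skip over the existing 1s
    ("scan", "_"): ("1", "R", "halt"),         # write an extra 1, then stop
}
print(run_turing("111", rules))                # -> "1111"
```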

This is also of crucial importance in real computers and one of the major limits on fast computers is the length of the copper tracks on circuit boards – the data must come to the processor, and the longer the track the longer it takes … 10 cm of PCB is a long distance for an electron in a hurry.

brain as a computer – We talked about the way each age reinvents humanity in terms of its own technology: Pygmalion in stone, clockwork figures, pneumatic theories of the nervous system, steam robots, electricity in Shelley’s Frankenstein, and now seeing all life through the lens of computation.

This notwithstanding … I did sort of mention the weird fact (or is it a factoid) that the human brain has similar memory capacity to the web5 … this is always a good point to start discussion 😉

While on the topic I did just sort of mention the socio-organisational Church–Turing hypothesis … but that is another story

more … I recall counting the number of pairs of people and the number of seat orderings to see quadratic (n squared) and exponential effects, the importance of interpretation, why computers are more than and less than numbers, the Java Virtual Machine, and more, more, more … it was a very full hour.
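
For the record, the pairs-versus-orderings contrast is easy to reproduce: pairs grow as n(n−1)/2 (quadratic), while seat orderings grow as n!, factorial, which outpaces even the exponential:

```python
from math import factorial

# Pairs of people grow quadratically; seat orderings grow factorially.
for n in (4, 8, 16, 32):
    pairs = n * (n - 1) // 2
    print(f"n={n:2d}  pairs={pairs:4d}  orderings={factorial(n)}")
```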


  1. I just found notes I’d made for a web page on embodied computation 5 years ago … so have put the notes online[back]
  2. see preface to Physicality 2006 proceedings[back]
  3. I just found an online survey on How many computers in your house[back]
  4. Yep, I know that the Universal Turing machine has the code on the tape, but there the ‘instructions’ to be executed are basically temporarily encoded into the UTM’s state while it zips off to the data part of the tape.[back]
  5. A. Dix (2005). the brain and the web – a quick backup in case of accidents. Interfaces, 65, pp. 6-7. Winter 2005.
    http://www.hcibook.com/alan/papers/brain-and-web-2005/[back]

Single-track minds – centralised thinking and the evidence of bad models

Another post related to Clark’s “Being There” (see previous post on this). The central thesis of Clark’s book is that we should look at people as reactive creatures acting in the environment, not as disembodied minds acting on it. I agree wholeheartedly with this non-dualist view of mind/body, but every so often Clark’s enthusiasm leads a little too far – but then this forces reflection on just what is too far.

In this case the issue is the distributed nature of cognition within the brain and the inadequacy of central executive models. In support of this, Clark (p.39) cites Mitchel Resnick at length and I’ll reproduce the quote:

“people tend to look for the cause, the reason, the driving force, the deciding factor. When people observe patterns and structures in the world (for example, the flocking patterns of birds or foraging patterns of ants), they often assume centralized causes where none exist. And when people try to create patterns or structure in the world (for example, new organizations or new machines), they often impose centralized control where none is needed.” (Resnick 1994, p.124)1

The take home message is that we tend to think in terms of centralised causes, but the world is not like that. Therefore:

(i) the way we normally think is wrong

(ii) in particular we should expect non-centralised understanding of cognition

However, if our normal ways of thinking are so bad, why is it that we have survived as a species so long? The very fact that we have this tendency to think and design in terms of centralised causes, even when it is a poor model of the world, suggests some advantage to this way of thinking.


  1. Mitchel Resnick (1994). Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds. MIT Press.[back]

multiple representations – many chairs in the mind

I have just started reading Andy Clark’s “Being There”1 (maybe more on that later), but early on he reflects on the MIT COG project, which is a human-like robot torso with decentralised computation – coherent action emerging through interactions, not central control.

This reminded me of results of brain scans (sadly, I can’t recall the source), which showed that the areas in the brain where you store concepts like ‘chair’ are different from those where you store the sound of the word – and, I’m sure, the spelling of it too.

This makes sense of the “tip of the tongue” phenomenon: you know that there is a word for something, but can’t find the exact word. Even more remarkable is that if you know words in different languages you can know this separately for each language.

So, musing on this, there seem to be very good reasons why, even within our own mind, we hold multiple representations for the “same” thing, such as chair, which are connected, but loosely coupled.


  1. Andy Clark. Being There. MIT Press. 1997. ISBN 0-262-53156-9. book@MIT[back]