Human-Like Computing

Last week I attended an EPSRC workshop on “Human-Like Computing”.

The delegate pack offered a tentative definition:

“offering the prospect of computation which is akin to that of humans, where learning and making sense of information about the world around us can match our human performance.” [E16]

However, the purpose of this workshop was to clarify and expand on this, exploring what it might mean for computers to become more like humans.

It was an interdisciplinary meeting with some participants coming from more technical disciplines such as cognitive science, artificial intelligence, machine learning and robotics; others from psychology or the study of human and animal behaviour; and some, like myself, from HCI or human factors, bridging the two.

Why?

Perhaps the first question is why one might even want more human-like computing.

There are two obvious reasons:

(i) Because it is a good model to emulate — Humans are able to solve some problems, such as visual pattern finding, which computers find hard. If we can understand human perception and cognition, then we may be able to design more effective algorithms. For example, in my own work colleagues and I have used models based on spreading activation and layers of human memory when addressing ‘web scale reasoning’ [K10,D10].
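The flavour of spreading activation can be sketched in a few lines of Python. This is an illustrative toy only, not the algorithm from [K10,D10]: the graph, decay factor and threshold are all invented for the example, but it shows the general idea of activation flowing out from seed concepts and fading with distance.

```python
# Toy spreading activation over a concept graph (illustrative only;
# graph, decay and threshold are invented, not the model from [K10,D10]).

def spread_activation(graph, seeds, decay=0.5, threshold=0.05, max_steps=3):
    """graph: node -> list of neighbours; seeds: node -> initial activation."""
    activation = dict(seeds)
    frontier = dict(seeds)
    for _ in range(max_steps):
        next_frontier = {}
        for node, act in frontier.items():
            passed = act * decay
            if passed < threshold:
                continue  # activation too weak to propagate further
            for neighbour in graph.get(node, []):
                activation[neighbour] = activation.get(neighbour, 0.0) + passed
                next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + passed
        frontier = next_frontier
    return activation

graph = {
    "holiday": ["beach", "flight"],
    "beach": ["sea"],
    "flight": ["airport"],
}
print(spread_activation(graph, {"holiday": 1.0}))
```

Seeding ‘holiday’ with activation 1.0 leaves directly linked concepts (‘beach’, ‘flight’) half-activated and concepts two hops away a quarter-activated, which is the basic ‘related things come to mind’ effect the human-memory models exploit.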

(ii) For interacting with people — There is considerable work in HCI on making computers easier to use, but there are limitations. Often we are happy for computers to be simply ‘tools’, but at other times, such as when your computer notifies you of an update in the middle of a talk, you wish it had a little more human understanding. One example of this is recent work at Georgia Tech teaching human values to artificial agents by reading them stories! [F16]

To some extent (i) is simply the long-standing area of nature-inspired or biologically-inspired computing. However, the combination of computational power and psychological understanding means that perhaps we are at the point where new strides can be made. Certainly, the success of ‘deep learning’ and the recent computer mastery of Go suggest this. In addition, by my own calculations, for several years the internet as a whole has had more computational power than a single human brain, and we are very near the point where we could simulate a human brain in real time [D05b].
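The sort of back-of-envelope sum involved can be sketched as follows; every number below is a loudly approximate, invented order of magnitude for illustration, not the actual calculation from [D05b].

```python
# Back-of-envelope only: round numbers are illustrative orders of
# magnitude chosen for this sketch, not the figures from [D05b].

brain_synapses = 1e14      # ~10^11 neurons x ~10^3 synapses each
brain_rate_hz = 100        # rough synaptic update rate
brain_ops = brain_synapses * brain_rate_hz     # ~1e16 'ops'/sec for one brain

devices = 1e9              # order of magnitude of internet-connected computers
device_ops = 1e9           # ~GHz-scale instructions/sec per device
internet_ops = devices * device_ops            # ~1e18 'ops'/sec overall

print(internet_ops / brain_ops)   # the net outpaces a single brain
```

With these particular round numbers the internet comes out around a hundred brains’ worth of raw operations; the point is not the exact ratio but that the crossover happened some years ago.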

Both goals, but particularly (ii), suggest a further goal:

(iii) new interaction paradigms — We will need to develop new ways to design for interacting with human-like agents and robots, not least how to avoid the ‘uncanny valley’ and how to avoid the appearance of over-competence that has bedevilled much work in this broad area. (see more later)

Both goals also offer the potential for a fourth secondary goal:

(iv) learning about human cognition — In creating practical computational algorithms based on human qualities, we may come to better understand human behaviour, psychology and maybe even society. For example, in my own work on modelling regret (see later), it was aspects of the computational model that highlighted the important role of ‘positive regret’ (“the grass is greener on the other side”) in helping us avoid ‘local minima’, where we stick to the things we know and do not explore new options.

Human or superhuman?

Of course humans are not perfect, so do we want to emulate their limitations and failings?

For understanding humans (iv), the answer is probably “yes”, and maybe by understanding human fallibility we may be in a better position to predict and prevent failures.

Similarly, for interacting with people (ii), the agents should show at least some level of human limitations (even if ‘put on’); for example, a chess program that always wins would not be much fun!

However, for simply improving algorithms, goal (i), we may want to take the ‘best bits’ of human cognition and merge them with the best aspects of artificial computation. Of course it may be that the frailties are also the strengths; for example, the need to come to decisions and act in relatively short timescales (in terms of brain ‘ticks’) may be one way in which we avoid ‘over-learning’, a common problem in machine learning.

In addition, the human mind has developed to work with the nature of neural material as a substrate, and the physical world, both of which have shaped the nature of human cognition.

Very simple animals learn purely by Skinner-like response training, effectively what AI would term sub-symbolic. However, this level of learning requires many exposures to similar stimuli. For rarer occurrences, which do not happen frequently within a lifetime, learning must be at the very slow pace of the genetic development of instincts. In contrast, conscious reasoning (symbolic processing) allows us to learn through a single or very small number of exposures; ideal for infrequent events or novel environments.

Big Data means that computers effectively have access to vast amounts of ‘experience’, and researchers at Google have remarked on the ‘Unreasonable Effectiveness of Data’ [H09] that allows problems, such as translation, to be tackled in a statistical or sub-symbolic way which previously would have been regarded as essentially symbolic.

Google are now starting to recombine statistical techniques with more knowledge-rich techniques in order to achieve still better results. As humans we continually employ both types of thinking, so there are clear human-like lessons to be learnt, but the eventual system will not have the same ‘balance’ as a human.

If humans had developed with access to vast amounts of data and maybe other people’s experience directly (rather than through culture, books, etc.), would we have developed differently? Maybe we would do more things unconsciously that we do consciously. Maybe with enough experience we would never need to be conscious at all!

More practically, we need to decide how to make use of this additional data. For example, learning analytics is becoming an important part of educational practice. If we have an automated tutor working with a child, how should we make use of the vast body of data about other tutors’ interactions with other children? Should we have a very human-like tutor that effectively ‘reads’ learning analytics just as a human tutor would look at a learning ‘dashboard’? Alternatively, we might have a more loosely human-inspired ‘hive-mind’ tutor that ‘instinctively’ makes pedagogic choices based on the overall experience of all tutors, but maybe in an unexplainable way?

What could go wrong …

There have been a number of high-profile statements in the last year about the potential coming ‘singularity’ (when computers are clever enough to design new computers leading to exponential development), and warnings that computers could become sentient, Terminator-style, and take over.

There was general agreement at the workshop that this kind of risk was overblown and that, despite breakthroughs such as the mastery of Go, these systems are still very domain-limited. It will be many years before we have to worry about even general intelligence in robots, let alone sentience.

A far more pressing problem is that of incapable computers, which make silly mistakes, and the way in which people, maybe because of the media attention to the success stories, assume that computers are more capable than they are!

Indeed, over-confidence in algorithms is not just a problem for the general public, but also among computing academics, as I found from personal experience on the REF panel.

There are of course many ethical and legal issues raised as we design computer systems that are more autonomous. This is already being played out with driverless cars, with issues of insurance and liability. Some legislators are suggesting allowing driverless cars, but only if there is a driver there to take control … but if the car relinquishes control, how do you safely manage the abrupt change?

Furthermore, while the vision of autonomous robots taking over the world is still far-fetched, more surreptitious control is already with us. Whether it is Uber cabs called by algorithm, or simply Google’s ranking of search results prompting particular holiday choices, we are all, to varying extents, doing “what the computer tells us”. I recall that in the Dalek Invasion of Earth, the very un-human-like Daleks could not move easily amongst the rubble of war-torn London. Instead they used ‘hypnotised men’ controlled by some form of neural headset. If the Daleks had landed today and simply taken over or digitally infected a few cloud computing services, would we know?

Legibility

Sometimes it is sufficient to have a ‘black box’ that makes decisions and acts. So long as it works we are happy. However, a key issue for many ethical and legal questions, but also for practical interaction, is the ability to interrogate a system, to seek explanations of why a decision has been made.

Back in 1992 I wrote about these issues [D92], in the early days when neural networks and other forms of machine learning were being proposed for a variety of tasks from controlling nuclear fusion reactions to credit scoring. One particular scenario was an algorithm used to pre-sort large numbers of job applications. How could you know whether the algorithms were being discriminatory? How could a company using such algorithms defend itself if such an accusation were brought?

One partial solution then, as now, was to accept that the underlying learning mechanisms may involve emergent behaviour from statistical, neural network or other forms of opaque reasoning. However, this opaque initial learning process should give rise to an intelligible representation. This is rather akin to a judge who might have a gut feeling that a defendant is guilty or innocent, but needs to explicate that in a reasoned legal judgement.

This approach was exemplified by Query-by-Browsing, a system that creates queries from examples (using a variant of ID3), but then converts these into SQL queries. This was subsequently implemented [D94], and is still running as a web demonstration.
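The core move, turning an opaquely learned decision tree into an intelligible SQL query, can be sketched as below. To be clear, the tree, table and field names here are invented for illustration, and Query-by-Browsing itself induces its tree from selected examples with an ID3 variant rather than using a hand-built one.

```python
# Sketch of the Query-by-Browsing idea: an opaque learning step yields a
# decision tree, which is then rendered as an intelligible SQL query.
# Tree, table and field names are invented; QbB uses a variant of ID3.

# A node is either ("leaf", selected?) or
# ("split", field, value, below_branch, at_or_above_branch)
tree = ("split", "salary", 30000,
        ("leaf", False),
        ("split", "age", 40, ("leaf", True), ("leaf", False)))

def tree_to_sql(node, conditions=()):
    """Collect the condition paths that end at a 'selected' leaf."""
    if node[0] == "leaf":
        if node[1] and conditions:
            return [" AND ".join(conditions)]
        return []
    _, field, value, low, high = node
    return (tree_to_sql(low, conditions + (f"{field} < {value}",)) +
            tree_to_sql(high, conditions + (f"{field} >= {value}",)))

clauses = tree_to_sql(tree)
sql = "SELECT * FROM people WHERE " + " OR ".join(f"({c})" for c in clauses)
print(sql)
```

The learning step may be as opaque as you like; what the user sees is the readable `SELECT * FROM people WHERE (salary >= 30000 AND age < 40)`, which they can inspect, defend or reject.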

For many years I have argued that it is likely that our ‘logical’ reasoning arises precisely from this need to explain our own tacit judgement to others. While we simply act individually, or by observing the actions of others, this can remain largely tacit, but as soon as we want others to act in planned, collaborative ways, for example to kill a large animal, we need to convince them. Once we have the mental mechanisms to create these explanations, they become internalised, so that we end up with internal means to question our own thoughts and judgement, and even use them constructively to tackle problems more abstract and complex than those found in nature. That is, dialogue leads to logic!

Scenarios

We split into groups and discussed scenarios as a means to understand the potential challenges for human-like computing. Over multiple sessions the group I was in discussed one main scenario and then a variant.

Paramedic for remote medicine

The main scenario consisted of a patient far from a central medical centre, with an intelligent local agent communicating intermittently and remotely with a human doctor. Surprisingly, the remote aspect of the scenario was not initially proposed by me thinking of Tiree, but by another member of the group thinking about some of the remote parts of the Scottish mainland.

The local agent would need to be able to communicate with the patient, express a level of empathy, physically examine the patient (needing touch sensing and vision), and discuss symptoms. On some occasions, like a triage nurse, the agent might be sufficiently certain to make a diagnosis and recommend treatment. However, at other times it may need to pass the case on to the remote doctor, describing what had been done in terms of examination, symptoms observed and information gathered from the patient, in the same way that a paramedic does when handing over a patient at the hospital. However, even after the handover of responsibility, the local agent may still form part of the remote diagnosis, and may be able to take over again once the doctor has determined an overall course of action.

The scenario embodied many aspects of human-like computing:

  • The agent would require a level of emotional understanding to interact with the patient
  • It would require fine, situation-contingent robotic control to allow physical examination
  • Diagnosis and decisions would need to be guided by rich human-inspired algorithms based on large corpora of medical data, case histories and knowledge of the particular patient.
  • The agent would need to be able to explain its actions both to the patient and to the doctor. That is, it would not only need to transform its own internal representations into forms intelligible to a human, but do so in multiple ways depending on the inferred knowledge and nature of the person.
  • Ethical and legal responsibility are key issues in medical practice
  • The agent would need to be able to manage handovers of control.
  • The agent would need to understand its own competencies in order to know when to call in the remote doctor.

The scenario could concern physical or mental health. The latter is particularly important given recent statistics suggesting that only 10% of people in the UK suffering mental health problems receive suitable help.

Physiotherapist

As a still more specific scenario, one of the group related how he had been to an experienced physiotherapist after a failed diagnosis by a previous physician. Rather than jumping straight into a physical examination, or even apparently watching the patient’s movement, the physiotherapist proceeded to chat for 15 minutes about aspects of the patient’s life, work and exercise. At the end of this process, the physiotherapist said, “I think I know the problem”, and proceeded to administer a directed test, which correctly diagnosed the problem and led to successful treatment.

Clearly the conversation had given the physiotherapist a lot of information about potential causes of injury, aided by many years observing similar cases.

To do this using an artificial agent would suggest some level of:

  • theory/model of day-to-day life

Thinking about the more conversational aspects of this, I was reminded of the PhD work of Ramanee Peiris [P97]. This concerned consultations on sensitive subjects such as sexual health. It was known that when people filled in (initially paper) forms prior to a consultation, they were more forthcoming and truthful than if they had to provide the information face-to-face. This was so even if the patient knew that the person they were about to see would read the forms before the consultation.

Ramanee’s work extended this first to electronic forms and then to chat-bot style discussions which were semi-scripted, but used simple textual matching to determine which topics had been covered, including those spontaneously introduced by the patient. Interestingly, the more human-like the system became, the more truthful and forthcoming the patients were, even though they were less so with a real human.
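The kind of simple textual matching involved might look something like the sketch below; the topics and keyword lists are invented for illustration, and this is emphatically not Ramanee’s actual implementation.

```python
# Sketch of simple keyword-based topic tracking for a semi-scripted
# consultation chat-bot. Topics and keywords are invented examples.

topics = {
    "smoking":  {"smoke", "smoking", "cigarettes"},
    "alcohol":  {"drink", "drinking", "alcohol", "wine"},
    "exercise": {"exercise", "gym", "running"},
}

def topics_mentioned(utterance, topics):
    """Return the topics whose keywords appear in the utterance."""
    words = set(utterance.lower().replace(",", " ").replace(".", " ").split())
    return {t for t, keys in topics.items() if words & keys}

covered = set()
for line in ["I gave up smoking last year", "maybe a glass of wine at weekends"]:
    covered |= topics_mentioned(line, topics)

remaining = set(topics) - covered   # topics the script still needs to raise
print(sorted(remaining))
```

Crude as it is, this is enough to let the script skip topics the patient has already raised spontaneously and steer the conversation to the ones still outstanding, which was a large part of what made the semi-scripted dialogue feel responsive.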

As well as revealing lessons for human interactions with human-like computers, this also showed that human-like computing may be possible with quite crude technologies. Indeed, even Eliza was treated (to Weizenbaum’s alarm) as if it really were a counsellor, even though people knew it was ‘just a computer’ [W66].

Cognition or Embodiment?

I think it fair to say that the overall balance, certainly in the group I was in, was towards the cognitivist: that is, a more Cartesian approach, starting with understanding and models of internal cognition, and then seeing how these play out in external action. Indeed, the term ‘representation’ was used repeatedly as an assumed central aspect of any human-like computing, and there was even talk of resurrecting Newell’s project for a ‘unified theory of cognition’ [N90].

There did not appear to be any hard-core embodiment theorists at the workshop, although several people had sympathies. This was perhaps just as well, as we could easily have degenerated into well-rehearsed arguments for and against embodiment- versus cognition-centred explanations … not least about the critical word ‘representation’.

However, I did wonder whether a path that deliberately took embodiment as central would be valuable. How many human-like behaviours could be modelled in this way, taking external perception–action as central and only taking on internal representations when absolutely necessary (Andy Clark’s 007 principle) [C98]?

Such an approach would meet limits, not least the physiotherapist’s 15-minute chat, but I would guess it would be more successful over a wider range of behaviours and scenarios than we would at first think.

Human–Computer Interaction and Human-Like Computing

Russell and I were there partly representing our own research interests, but also more generally as part of the HCI community, looking at the way human-like computing would intersect existing HCI agendas, or maybe create new challenges and opportunities. It was certainly clear during the workshop that there is a substantial role for human factors, from fine motor interactions, to conversational interfaces and socio-technical systems design.

Russell and I presented a poster, which largely focused on these interactions.


There are two sides to this:

  • understanding and modelling for human-like computing — HCI studies and models complex, real-world human activities and situations. Psychological experiments and models tend to be very deep and detailed, but narrowly focused, using controlled, artificial tasks. In contrast, HCI’s broader, albeit shallower, approach and its focus on realistic or even ‘in the wild’ tasks and situations may mean that we are in an ideal position to inform human-like computing.

  • human interfaces for human-like computing — As noted in goal (iii), we will need paradigms for humans to interact with human-like computers.

As an illustration of the first of these, the poster used my work on making sense of the apparently ‘bad’ emotion of regret [D05].

An initial cognitive model of regret was formulated involving a rich mix of imagination (in order to bring past events and actions to mind), counterfactual modal reasoning (in order to work out what would have happened), emotion (which is modified to feel better or worse depending on the possible alternative outcomes), and Skinner-like low-level behavioural learning (the eventual purpose of regret).


This initial descriptive and qualitative cognitive model was then realised in a simplified computational model, which had a separate ‘regret’ module that could be plugged into a basic behavioural learning system. Both the basic system and the system with regret learnt, but the addition of regret did so with between 5 and 10 times fewer exposures. That is, regret made a major improvement to the machine learning.
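The architecture can be loosely illustrated as below. This is not the computational model from [D05]: it is a toy two-armed bandit learner with a pluggable counterfactual ‘regret’ step, and the regret signal is crudely simplified to imagining the unchosen action’s typical outcome.

```python
import random

# Toy illustration (not the model from [D05]) of plugging a 'regret'
# module into a simple Skinner-style reward learner: a two-armed bandit
# whose counterfactual step also updates the estimate for the action
# it did NOT take.

def run(trials, use_regret, seed=0):
    rng = random.Random(seed)
    payoff = {"a": 0.2, "b": 0.8}   # true expected rewards (hidden from learner)
    value = {"a": 0.0, "b": 0.0}    # learner's estimates
    alpha = 0.2                     # learning rate
    for _ in range(trials):
        if rng.random() < 0.1:
            choice = rng.choice(["a", "b"])     # occasional exploration
        else:
            choice = max(value, key=value.get)  # otherwise greedy
        reward = payoff[choice]
        value[choice] += alpha * (reward - value[choice])
        if use_regret:
            # counterfactual 'grass is greener' signal: imagine the
            # other action's outcome and learn from that too
            other = "b" if choice == "a" else "a"
            value[other] += alpha * (payoff[other] - value[other])
    return value

print(run(50, use_regret=True))
```

Even in this caricature, the regret step means every trial teaches the learner about both actions rather than just the one taken, which is the intuition behind the regret module needing far fewer exposures.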


Turning to the second, direct manipulation has been at the heart of interaction design since the PC revolution in the 1980s. Prior to that, command-line interfaces (or worse, job-control interfaces) suggested a mediated paradigm, where operators ‘asked’ the computer to do things for them. Direct manipulation changed that, turning the computer into a passive virtual world of computational objects on which you operated with the aid of tools.

To some extent we need to shift back to the mediated paradigm of the 1970s, but renewed: the computer is no longer like a severe bureaucrat demanding precisely grammatical and procedural requests, but instead a helpful and understanding aide. For this we can draw upon existing areas of HCI such as human–human communication, intelligent user interfaces, conversational agents and human–robot interaction.

References

[C98] Clark, A. 1998. Being There: Putting Brain, Body and the World Together Again. MIT Press. https://mitpress.mit.edu/books/being-there

[D92] A. Dix (1992). Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451. http://www.hcibook.com/alan/papers/neuro92/

[D94] A. Dix and A. Patrick (1994). Query By Browsing. Proceedings of IDS’94: The 2nd International Workshop on User Interfaces to Databases, Ed. P. Sawyer. Lancaster, UK, Springer Verlag. 236-248.

[D05] A. Dix (2005). The adaptive significance of regret. Unpublished essay. http://alandix.com/academic/essays/regret.pdf

[D05b] A. Dix (2005). the brain and the web – a quick backup in case of accidents. Interfaces, 65, pp. 6-7. Winter 2005. http://alandix.com/academic/papers/brain-and-web-2005/

[D10] A. Dix, A. Katifori, G. Lepouras, C. Vassilakis and N. Shabir (2010). Spreading Activation Over Ontology-Based Resources: From Personal Context To Web Scale Reasoning. International Journal of Semantic Computing, Special Issue on Web Scale Reasoning: scalable, tolerant and dynamic. 4(1) pp.59-102. http://www.hcibook.com/alan/papers/web-scale-reasoning-2010/

[E16] EPSRC (2016). Human Like Computing Handbook. Engineering and Physical Sciences Research Council, 17–18 February 2016.

[F16] Alison Flood (2016). Robots could learn human values by reading stories, research suggests. The Guardian, Thursday 18 February 2016 http://www.theguardian.com/books/2016/feb/18/robots-could-learn-human-values-by-reading-stories-research-suggests

[H09] Alon Halevy, Peter Norvig, and Fernando Pereira. 2009. The Unreasonable Effectiveness of Data. IEEE Intelligent Systems 24, 2 (March 2009), 8-12. DOI=10.1109/MIS.2009.36

[K10] A. Katifori, C. Vassilakis and A. Dix (2010). Ontologies and the Brain: Using Spreading Activation through Ontologies to Support Personal Interaction. Cognitive Systems Research, 11 (2010) 25–41. http://alandix.com/academic/papers/Ontologies-and-the-Brain-2010/

[N90] Allen Newell. 1990. Unified Theories of Cognition. Harvard University Press, Cambridge, MA, USA. http://www.hup.harvard.edu/catalog.php?isbn=9780674921016

[P97] DR Peiris (1997). Computer interviews: enhancing their effectiveness by simulating interpersonal techniques. PhD Thesis, University of Dundee. http://virtual.inesc.pt/rct/show.php?id=56

[W66] Joseph Weizenbaum. 1966. ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (January 1966), 36-45. DOI=http://dx.doi.org/10.1145/365153.365168

Holiday Reading

Early in the summer Fiona and I took 10 days holiday, first touring on the West Coast of Scotland, south from Ullapool, and then over the Skye Road Bridge to spend a few days on Skye.  As well as visiting various wool-related shops on the way and a spectacular drive over the pass from Applecross, I managed a little writing, some work on regret modelling1. And, as well as writing and regret modelling, quite a lot of reading.

This was my holiday reading:

The Talking Ape: How Language Evolved, Robbins Burling (see my booknotes and review)

In Praise of the Garrulous, Allan Cameron (see my booknotes)

A Mind So Rare, Merlin Donald (see my booknotes and review)

Wanderlust, Rebecca Solnit (see my booknotes)

  1. At last!  It has been something like 6 years since I first did initial, and very promising, computational regret modelling, and I have at last got back to it, writing driver code so that I now have data from a systematic spread of different parameters.  Happily this verified the early evidence that the cognitive model of regret I first wrote about in 2003 really does seem to aid learning.  However, the value of more comprehensive simulation was proved, as early indications that positive regret (the grass-is-greener feeling) was more powerful than negative regret do not seem to have been borne out.[back]

intellectual property issues in dreams

Had an active night of dreams last night, but my favourite point was in some sort of workshop, where we had clearly put slides on the web and someone said that we had had a ‘cease and desist’ request concerning one of the slides.  They showed me the web page with the comment below.  Unfortunately, I never seem to be able to read text on the web in dreams, so the first two words of the comment are interpolated, but the last part is verbatim:

1 Comment »
.      . Prior art  O  : – )

If the person who left the comment on the blog in my dreams is out there — good on you!

language, dreams and the Jabberwocky circuit

If life is always a learning opportunity, then so are dreams.

Last night I both learnt something new about language and cognition, and also developed a new trick for creativity!

In the dream in question I was in a meeting. I know, a sad topic for a dream, and perhaps even sadder it had started with me filling in forms!  The meeting was clearly one after I’d given a talk somewhere as a person across the table said she’d been wanting to ask me (obviously as a sort of challenge) if there was a relation between … and here I’ll expand later … something like evolutionary and ecological something.  Ever one to think on my feet I said something like “that’s an interesting question”, but it was also clear that the question arose partly because the terms sounded somewhat similar, so had some of the sense of a rhyming riddle “what’s the difference between a jeweller and a jailor”.  So I went on to mention random metaphors as a general creativity technique and then, so as to give practical advice, suggested choosing two words next to each other in a dictionary and then trying to link them.

Starting with the last of these, the two-words-in-a-dictionary method is one I have never suggested to anyone before, nor even thought about. It was clearly prompted by the specific example, where the words had an alliterative nature, and so was a sensible generalisation; after I woke I realised it was worth suggesting in future as an exercise.  But it was entirely novel to me: I had effectively done exactly the sort of thinking / problem solving that I would have done in the real-life situation, but while dreaming.
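For what it is worth, the dream’s exercise is trivially easy to mechanise; here is a toy sketch, where the short word list stands in for a real dictionary file.

```python
import random

# Toy version of the 'two adjacent dictionary words' creativity prompt:
# pick a random adjacent pair from a sorted word list and try to link
# them. The short list here is a stand-in for a real dictionary.

words = sorted(["jailor", "jam", "jar", "jeweller", "jig", "jovial"])

def adjacent_pair(word_list, rng=random):
    """Return two words that sit next to each other alphabetically."""
    i = rng.randrange(len(word_list) - 1)
    return word_list[i], word_list[i + 1]

w1, w2 = adjacent_pair(words, random.Random(0))
print(f"creativity prompt: link '{w1}' and '{w2}'")
```

Adjacent dictionary entries often share a sound but not a meaning, which is exactly the jeweller/jailor flavour of riddle that prompted the technique.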

One of the reasons I find dreams fascinating is that in some ways they are so normal — we clearly have no or little sensory input, and certain parts of our brain shut down (e.g. motor control to stop us thrashing about too much in our sleep) — but other parts seem to function perfectly as normal.  I have written before about the cognitive nature of dreams (including maybe how to model dreaming) and what we may be able to learn about cognitive function because not everything is working, rather like running an engine when it is out of the car.

In this dream clearly the ‘conscious’ (I know an oxymoron) problem-solving part of the mind was operating just the same as when awake.  Which is an interesting fact about dreaming, but  I was already aware of it from previous dreams.

In this dream it was the language that was interesting: the original conundrum I was given.  The problem came as I woke up and tried to reconstruct exactly what my interlocutor had asked me.  The words clearly *meant* evolutionary and ecological, but in the dream had ‘sounded’ even closer aurally, more like evolution and elocution (interesting to consider, images of God speaking forth creation).

So how had the two words sounded more similar in my dream than in real speech?

For this we need the Jabberwocky circuit.

There is a certain neurological condition, arising I think from tumours or damage in particular areas of the brain, which disrupts particular functions of language.  The person speaks interminably; the words make sense and the grammar is flawless, but there is no overall sense.  Each small snippet of speech is fine, just there is no larger-scale linkage.

When explaining this phenomenon to people I often evoke the Jabberwocky circuit.  Now I should note that this is not a word used by linguists, neurolinguists, or cognitive scientists, and is a gross simplification, but I think captures the essence of what is happening.  Basically there is a part of your mind (the conscious, thinking bit) that knows what to say and it asks another bit, the Jabberwocky circuit, to actually articulate the words.  The Jabberwocky circuit knows about the sound form of words and how to string them together grammatically, but basically does what it is told.  The thinking bit needs to know enough about what can be said, but doesn’t have time to deal with precisely how they are strung together and leaves that to Jabberwocky.

Even without brain damage we can see occasional slips in this process.  For example, if you are talking to someone (and even more if typing) and there is some other speech audible (maybe radio in the background), occasionally a word intrudes into your own speech that isn’t part of what you meant to say, but is linked to the background intruding sound.

Occasionally too, you find yourself stopping in mid-sentence when the words don’t quite make sense, for example when what would be reasonable grammar collides with a colloquialism so that the sentence no longer works.  Or you may simply not be able to say a word that you ‘know’ is there and insert “thingy” or “what’s it called” where you should say “spanner”.

The relationship between the two is rather like a manager and someone doing the job: the manager knows pretty much what is possible and can give general directions, but the person doing the job knows the details.  Occasionally, the instructions get confused (when there is intruding background speech) or the manager thinks something is possible which turns out not to be.

Going back to the dream: I thought I ‘heard’ the words, but examining more closely after I woke I realised that no actual word would fit.  I think what is happening is that during dreaming (and maybe during imagined dialogue while awake), the Jabberwocky circuit is not active, or not being attended to.  It is as if I am hearing the other person’s intention to speak, not articulated words.  The pre-Jabberwocky part of the mind does know that there are two words, and knows what they *mean*.  It also knows that they sound rather similar at the beginning (“eco”, “evo”), but not exactly what they sound like throughout.

I have noticed a similar thing with the written word.  Often in dreams I am reading a book, sheet of paper or poster, and the words make sense, but if I try to look more closely at the precise written form of the text, I cannot focus, and indeed often wake at that point1.  That is, the dream is creating the interpretation of the text, but not the actual sensory form; although if asked I would normally say that I had ‘seen’ the words on the page in the dream, it is more that I ‘see’ that there are words.

Fiona does claim to be able to see actual letters in dreams, so maybe it is possible to recreate more precise sensory images, or maybe this is just the difference between simply writing and reading, and more conscious spelling-out or attending to words, as in the well known:

Paris in the
the spring

Anyway, I am awake now and the wiser.  I know a little more about dreaming, which cognitive functions are working and which are not;  I know a little more about the brain and language; and I know a new creativity technique.

Not bad for a night in bed.

What do you learn from your dreams?

  1. The waking is interesting, I have often noticed that if the ‘logic’ of the dream becomes irreconcilable I wake.  This is a long story in itself, but I think similar to the way you get a ‘breakdown’ situation when things don’t work as expected and are forced to think about what you are doing.  It seems like the ‘kick’ that changes your mode of thinking often wakes you up![back]

understanding others and understanding ourselves: intention, emotion and incarnation

One of the wonders of the human mind is the way we can get inside one another’s skin; understand what each other is thinking, wanting, feeling. I’m thinking about this now because I’m reading The Cultural Origins of Human Cognition by Michael Tomasello, which is about the way understanding intentions enables cultural development. However, this also connects to a hypothesis of my own from many years back: that our idea of self is a sort of ‘accident’ of being social beings. Also at the heart of Christmas is empathy, feeling for and with people, and the very notion of incarnation.


bookshelf in Rome

I posted a few weeks ago about books I had got to bring to Rome.  Since then I got another small collection because I had done some reviewing for Routledge.

Mostly philosophy of the mind and materiality … the latter to help as we work on the DEPtH book on Physicality, TouchIT

  • Shaun Gallagher and Dan Zahavi. The Phenomenological Mind: An Introduction to Philosophy of Mind and Cognitive Science, Routledge, 2007.
  • John Lechte. Fifty Key Contemporary Thinkers: From Structuralism to Post-Humanism, 2nd Edition, Routledge, 2007.
  • Jean-Paul Sartre. Being and Nothingness: An Essay on Phenomenological Ontology, 1943. Routledge Classics, 2nd Edition, 2003.
  • Jay Friedenberg. Artificial Psychology, Routledge, 2008.
  • Max Velmans. Understanding Consciousness, Routledge, 2009.
  • Peter Carruthers. The Nature of the Mind, Routledge, 2003.

In fact, with these and the previous set I had far too many even for a month of evenings, and below you can see the books I actually brought.

As well as a selection from the academic books, there is also some fiction/leisure reading, some old favourites and some new ones:

  • How Green was My Valley, Richard Llewellyn – a Welshman has to read this :-/
  • The Catcher in the Rye, J.D. Salinger – a classic I’ve never read
  • More of the Good Life – the TV series was formative for me as a child, but 40 seemed so far away
  • Lark Rise to Candleford, Flora Thompson – it is some years since I last read it, and I have been loving the TV series, but I don’t think it has stayed very close to the book!
  • Nella Last’s War – this is the book that was the basis for the TV drama Housewife 49, and part of the Mass Observation project that collected diaries from ordinary people across Britain during the Second World War.
  • Ruth, Elizabeth Gaskell – another classic that I’ve not read yet!
  • As I Walked Out One Midsummer Morning.  Laurie Lee’s account of travelling in Spain in the run-up to the Civil War.  I read it in school for O-level.
  • Swallowdale, Arthur Ransome – couldn’t find Swallows and Amazons; I think one of the girls might have it on their shelves!
  • The Shining Company, Rosemary Sutcliff – we have loads of her historical novels for children.  I find that good children’s writing is so much better than most adult books, which often feel they need to be incomprehensible to be good.
  • The Growing Summer, Noel Streatfeild – lovely story about children visiting a quirky old lady on the west coast of Ireland.
  • Hovel in the Hills, Elizabeth West – another book I’ve read many times, but not for many years.  A true story about a couple who buy an old house on a Welsh hillside.

In addition, but missing from the picture, is one I borrowed from my daughter, Tamora Pierce’s The Healing in the Vine, and one I’ve borrowed from Tiziana Catarci during my visit, the Languages of Art.

So, two weeks in and how far have I got …

Well, been a little busy: two journal papers, a book chapter, an Interfaces article, two 3-hour lectures to the masters students here, a seminar, reading thesis chapters and helping with two grant proposals … so not got very far through the bookshelf.

In fact, to be brutally honest, so far only finished the Tamora Pierce and nearly finished Gibson (just conclusions to go):

As you can see, LOTS of notes on Gibson; I will write a very long blog post about this sometime, but several others are in line first!

But next week several train journeys, so may get through a few more books 🙂

tech talks: brains, time and no time

Just scanning a few Google Tech Talks on YouTube.  I don’t visit it often, but followed a link from Rob Style’s twitter.  I find the videos a bit slow, so tend to flick through with the sound off, really wishing they had fast-forward buttons like a DVD, as it is quite hard to pull the little slider back and forth.

One talk was by Stuart Hameroff on A New Marriage of Brain and Computer.  He is the guy that works with Penrose on the possibility that quantum effects in microtubules may be the source of consciousness.  I notice that he used calculations for computational capacity based on traditional neuron-based models that are very similar to my own calculations some years ago in “the brain and the web” when I worked out that the memory and computational capacity of a single human brain is very similar to those of the entire web. Hameroff then went on to say that there are an order of magnitude more microtubules (sub-cellular structures, with many per neuron), so the traditional calculations do not hold!
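The traditional neuron-based estimate referred to above can be sketched as a back-of-envelope calculation.  All the constants below are rough order-of-magnitude assumptions of my own for illustration, not figures taken from Hameroff’s talk or from “the brain and the web”:

```python
# Back-of-envelope comparison: neuron-level vs microtubule-level estimates
# of brain computational capacity.  All constants are rough
# order-of-magnitude assumptions, not measured figures.

NEURONS = 1e11          # neurons in a human brain (assumed)
SYNAPSES_PER = 1e4      # synapses per neuron (assumed)
FIRING_HZ = 100         # upper-bound firing rate in Hz (assumed)

# Traditional neuron-level estimate: one 'operation' per synapse event.
brain_ops = NEURONS * SYNAPSES_PER * FIRING_HZ   # roughly 1e17 ops/sec

# Hameroff's point: with many more microtubule-level elements per neuron,
# the estimate scales up by at least an order of magnitude (factor assumed).
MICROTUBULE_FACTOR = 10
revised_ops = brain_ops * MICROTUBULE_FACTOR     # roughly 1e18 ops/sec

print(f"neuron-level estimate:      {brain_ops:.0e} ops/sec")
print(f"microtubule-level estimate: {revised_ops:.0e} ops/sec")
```

The exact numbers matter far less than the exponents: an extra order of magnitude per neuron shifts the whole brain-versus-web comparison.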

Microtubules are fascinating things; they are like little Meccano sets inside each cell.  It is these microtubules that, during cell division, stretch out straight the chromosomes, which are normally tangled up in the nucleus.  Even stranger, the fluid movements of an amoeba gradually pushing out pseudopodia are actually made by mechanical structures composed of microtubules, only looking so organic because of the cell membrane – rather like a robot covered in latex.

The main reason for going to the tech talks was one by Steve Souders, “Life’s Too Short – Write Fast Code”, which has lots of tips on speeding up web pages, including allowing Javascript files to download in parallel.  I was particularly impressed by the quantification of the costs of delays on web pages down to 100ms!
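Why parallel downloads cut total delay can be seen in a toy simulation (a sketch in Python rather than the browser; the three 100ms ‘script files’ and their delays are invented for illustration):

```python
# Toy simulation of serial vs parallel resource downloads.
# The file delays are made up; real browsers and networks differ.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(delay):
    """Simulate downloading one script file that takes `delay` seconds."""
    time.sleep(delay)
    return delay

delays = [0.1, 0.1, 0.1]  # three scripts, 100ms each

# Serial: traditionally browsers blocked on each script in turn.
start = time.perf_counter()
for d in delays:
    fetch(d)
serial_time = time.perf_counter() - start

# Parallel: fetch all three at once.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    list(pool.map(fetch, delays))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

With three 100ms files the serial total is about 300ms but the parallel total stays near 100ms, which is exactly the scale of delay Souders quantifies as mattering to users.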

This is great.  Partly because of my long interest in time and delays in HCI.  Partly because I want my own web scripts to be faster, and I’ve already downloaded the Yahoo! YSlow plugin for Firefox that helps diagnose causes of slow pages.  And partly because I get so frustrated waiting for things to happen, both on the web and on the desktop … and why oh why does it take a good minute to get a WiFi connection … and why doesn’t YouTube introduce better controls for skimming videos?

… and finally, because I’d already spent too much time skimming the tech talks, I looked at one last talk: David Levy, “No Time To Think” … how we are all so rushed that we have no time to really think about problems, not to mention life1.  At least that’s what I think it said, because I skimmed it rather fast.

  1. see also my own discussion of Slow Time[back]

Why did the dinosaur cross the road?

A few days ago our neighbour told us this joke:

“Why did the dinosaur cross the road?”

It reminded me yet again of the incredible richness of apparently trivial day-to-day thought.  Not the stuff of Wittgenstein or Einstein, but the ordinary things we think as we make our breakfast or chat to a friend.

There is a whole field of study looking at computational humour, including its use in user interfaces1, and also work on the psychology of humour dating back certainly as far as Freud, often focusing on the way humour involves breaking the rules of internal ‘censors’ (logical, social or sexual) but in a way that is somehow safe.

Of course, breaking things is often the best way to understand them.  As Graeme Ritchie wrote2:

“If we could develop a full and detailed theory of how humour works, it is highly likely that this would yield interesting insights into human behaviour and thinking.”

In this case the joke starts to work, even before you hear the answer, because of the associations with its obvious antecedent3, as well as a whole genre of question/answer jokes: “how did the elephant get up the tree?”4, “how did the elephant get down from the tree?”5.  We recall past humour (and so are neurochemically set in a humorous mood), we know it is a joke (so are socially prepared to laugh), and we know it will be silly in a perverse way (so are cognitively prepared).

The actual response was, however, far more complex and rich than is typical for such jokes.  In fact, so complex that I felt an almost palpable delay before recognising its funniness; the incongruity of the logic is close to the edge of what we can recognise without the aid of formal ‘reasoned’ arguments.  And perhaps more interesting, the ‘logic’ of the joke (and of most jokes), and the way that logic ‘fails’, is recognised not in calm reflection but in an instant, revealing complexity below the level of immediate conscious thought.

Indeed, in listening to any language, not just jokes, we are constantly involved in incredibly rich, multi-layered and typically modal thinking6. Modal thinking is at the heart of simple planning and decision making: “if I have another cake I will have a stomach ache”.  When I have studied and modelled regret7, the interaction of complex “what if” thinking with emotion is central … just as in much humour.  In this case we have to do an extraordinary piece of counterfactual thought even to hear the question, positing a state of the world where a dinosaur could be right there, crossing the road before our eyes.  Instead of asking the question “how on earth could a dinosaur be alive today?”, we are instead asked to ponder the relatively trivial question of why it is doing what would be, in that situation, a perfectly ordinary act.  We are drawn into a set of incongruous assumptions before we even hear the punch line … just like the way an experienced orator will draw you along to the point where you forget how you got there and accept conclusions that would be otherwise unthinkable.

In fact, in this case the punch line draws some of its strength from forcing us to rethink even this counterfactual assumption of the dinosaur now, and reframe it into a road back then … and once it has done so, it simply states the obvious.

But the most marvellous and complex part of the joke is its reliance on perverse causality at two levels:

temporal – things in the past being in some sense explained by things in the future8.

reflexive – the explanation being based on the need to fill roles in another joke9.

… and all of this multi-level, modal and counterfactual cognitive richness in 30 seconds chatting over the garden gate.

So, why did the dinosaur cross the road?

“Because there weren’t any chickens yet.”

  1. Anton Nijholt in Twente has studied this extensively and I was on the PC for a workshop he organised on “Humor modeling in the interface” some years ago, but in the end I wasn’t able to attend :-([back]
  2. Graeme Ritchie (2001) “Current Directions in Computer Humor”, Artificial Intelligence Review. 16(2): pages 119-135[back]
  3. … and in case you haven’t ever heard it: “why did the chicken cross the road?” – “because it wanted to get to the other side”[back]
  4. “Sit on an acorn and wait for it to grow”[back]
  5. “Stand on a leaf and wait until autumn”[back]
  6. Modal logic is any form of reasoning that includes thinking about other possible worlds, including the way the world is at different times, beliefs about the world, or things that might be or might have been.  For further discussion of the modal complexity of speech and writing, see my Interfaces article about “writing as third order experience“[back]
  7. See “the adaptive significance of regret” in my essays and working papers[back]
  8. The absence of chickens in prehistoric times is sensible logic, but the dinosaur’s action is ‘because’ they aren’t there – not just violating causality, but based on the absence.  However, writing about history, we might happily say that Roman cavalry was limited because they hadn’t invented the stirrup.  Why isn’t that a ridiculous sentence?[back]
  9. In this case the dinosaur is in some way taking the role of the absent chicken … and crossing the Jurassic road ‘because’ of the need to fill the role in the joke.  Our world of the joke has to invade the dinosaur’s world within the joke.  Such complex modal thinking … yet so everyday.[back]

Single-track minds – centralised thinking and the evidence of bad models

Another post related to Clark’s “Being There” (see previous post on this). The central thesis of Clark’s book is that we should look at people as reactive creatures acting in the environment, not as disembodied minds acting on it. I agree wholeheartedly with this non-dualist view of mind/body, but every so often Clark’s enthusiasm leads him a little too far – but then this forces reflection on just what is too far.

In this case the issue is the distributed nature of cognition within the brain and the inadequacy of central executive models. In support of this, Clark (p.39) cites Mitchel Resnick at length and I’ll reproduce the quote:

“people tend to look for the cause, the reason, the driving force, the deciding factor. When people observe patterns and structures in the world (for example, the flocking patterns of birds or foraging patterns of ants), they often assume centralized causes where none exist. And when people try to create patterns or structure in the world (for example, new organizations or new machines), they often impose centralized control where none is needed.” (Resnick 1994, p.124)1

The take home message is that we tend to think in terms of centralised causes, but the world is not like that. Therefore:

(i) the way we normally think is wrong

(ii) in particular we should expect non-centralised understanding of cognition

However, if our normal ways of thinking are so bad, why is it that we have survived as a species so long? The very fact that we have this tendency to think and design in terms of centralised causes, even when it is a poor model of the world, suggests some advantage to this way of thinking.


  1. Mitchel Resnick (1994). Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds. MIT Press.[back]

multiple representations – many chairs in the mind

I have just started reading Andy Clark’s “Being There”1 (maybe more on that later), but early on he reflects on the MIT COG project, which is a human-like robot torso with decentralised computation – coherent action emerging through interactions not central control.

This reminded me of results of brain scans (sadly, I can’t recall the source), which showed that the areas in the brain where you store concepts like ‘chair’ are different from those where you store the sound of the word – and, I’m sure, the spelling of it too.

This makes sense of the “tip of the tongue” phenomenon: you know that there is a word for something, but can’t find the exact word. Even more remarkable, if you know words in different languages you can know this separately for each language.

So, musing on this, there seem to be very good reasons why, even within our own mind, we hold multiple representations for the “same” thing, such as chair, which are connected, but loosely coupled.


  1. Andy Clark. Being There. MIT Press. 1997. ISBN 0-262-53156-9. book@MIT[back]