In December I’m going to be out in the Far East talking at the First International Conference on User Science and Engineering (i-USEr) 2010, a new conference being launched in Malaysia. Looking forward to seeing old friends there and meeting new ones. If you fancy a break from the winter rain, the call for papers is out, with a deadline of 1st July.
Category Archives: HCI and usability
language, dreams and the Jabberwocky circuit
If life is always a learning opportunity, then so are dreams.
Last night I both learnt something new about language and cognition, and also developed a new trick for creativity!
In the dream in question I was in a meeting. I know, a sad topic for a dream, and perhaps sadder still, it had started with me filling in forms! The meeting was clearly one after I’d given a talk somewhere, because a person across the table said she’d been wanting to ask me (obviously as a sort of challenge) whether there was a relation between … and here I’ll expand later … something like evolutionary and ecological something. Ever one to think on my feet, I said something like “that’s an interesting question”, but it was also clear that the question arose partly because the terms sounded somewhat similar, so it had some of the sense of a rhyming riddle: “what’s the difference between a jeweller and a jailor”. So I went on to mention random metaphors as a general creativity technique and then, so as to give practical advice, suggested choosing two words next to each other in a dictionary and trying to link them.
Starting with the last of these, the two-words-in-a-dictionary method is one I have never suggested to anyone before, nor even thought about. It was clearly prompted by the specific example, where the words had an alliterative nature, and so was a sensible generalisation; after I woke I realised it was worth suggesting in future as an exercise. But it was entirely novel to me: I had effectively done exactly the sort of thinking / problem solving that I would have done in the real-life situation, but while dreaming.
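If you want to try the dictionary exercise without a paper dictionary to hand, here is a minimal sketch in Python; it assumes a word list at /usr/share/dict/words (common on Unix-like systems, but any alphabetised word list would do):

```python
import random

# Read a word list; /usr/share/dict/words is common on Unix-like
# systems (an assumption -- substitute any alphabetised word list).
with open("/usr/share/dict/words") as f:
    words = [w.strip() for w in f if w.strip().isalpha()]

# Pick a random entry and its neighbour, mimicking two words that
# sit next to each other on a dictionary page.
i = random.randrange(len(words) - 1)
print(f"Try to link: {words[i]} and {words[i + 1]}")
```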
One of the reasons I find dreams fascinating is that in some ways they are so normal — we clearly have little or no sensory input, and certain parts of our brain shut down (e.g. motor control, to stop us thrashing about too much in our sleep) — but other parts seem to function perfectly normally. I have written before about the cognitive nature of dreams (including maybe how to model dreaming) and what we may be able to learn about cognitive function because not everything is working, rather like running an engine when it is out of the car.
In this dream, clearly the ‘conscious’ (I know, an oxymoron) problem-solving part of the mind was operating just the same as when awake. Which is an interesting fact about dreaming, but one I was already aware of from previous dreams.
In this dream it was the language that was interesting, the original conundrum I was given. The problem came as I woke and tried to reconstruct exactly what my interlocutor had asked me. The words clearly *meant* evolutionary and ecological, but in the dream had ‘sounded’ even closer aurally, more like evolution and elocution (interesting to consider: images of God speaking forth creation).
So how could the two words have sounded more similar in my dream than in real speech?
For this we need the Jabberwocky circuit.
There is a certain neurological condition, arising I think from tumours or damage in particular areas of the brain, which disrupts particular functions of language. The person speaks interminably; the words make sense and the grammar is flawless, but there is no overall sense. Each small snippet of speech is fine; there is just no larger-scale linkage.
When explaining this phenomenon to people I often invoke the Jabberwocky circuit. Now I should note that this is not a term used by linguists, neurolinguists, or cognitive scientists, and it is a gross simplification, but I think it captures the essence of what is happening. Basically, there is a part of your mind (the conscious, thinking bit) that knows what to say, and it asks another bit, the Jabberwocky circuit, to actually articulate the words. The Jabberwocky circuit knows about the sound forms of words and how to string them together grammatically, but basically does what it is told. The thinking bit needs to know enough about what can be said, but doesn’t have time to deal with precisely how the words are strung together, and leaves that to Jabberwocky.
Even without brain damage we can see occasional slips in this process. For example, if you are talking to someone (and even more so if typing) and there is other speech audible (maybe a radio in the background), occasionally a word intrudes into your own speech that isn’t part of what you meant to say, but is linked to the intruding background sound.
Occasionally, too, you find yourself stopping in mid-sentence when the words don’t quite make sense: for example, when what would be reasonable grammar overlaps with a colloquialism so that it no longer makes sense. Or you may simply be unable to say a word that you ‘know’ is there, and insert “thingy” or “what’s it called” where you should say “spanner”.
The relationship between the two is rather like that between a manager and someone doing the job: the manager knows pretty much what is possible and can give general directions, but the person doing the job knows the details. Occasionally the instructions get confused (as with intruding background speech), or the manager thinks something is possible which turns out not to be.
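To make the division of labour concrete, here is a deliberately toy sketch in Python, entirely my own construction rather than any model from the literature: a ‘thinking’ stage chooses meanings, and an articulation stage owns the surface forms and occasionally fails to retrieve one, producing exactly the “thingy” slip described above.

```python
import random

# Entirely illustrative: a tiny lexicon standing in for the sound
# forms the Jabberwocky circuit knows about.
LEXICON = {"GREETING": "hello", "TOOL": "spanner", "FASTENER": "bolt"}

def think():
    """The 'manager': decides what to say, as a sequence of meanings."""
    return ["GREETING", "TOOL", "FASTENER"]

def jabberwocky(meaning, failure_rate=0.2):
    """The articulation stage: maps meanings to sound forms, but
    occasionally fails to retrieve one, giving the 'thingy' slip."""
    if random.random() < failure_rate:
        return "thingy"
    return LEXICON.get(meaning, "what's it called")

print(" ".join(jabberwocky(m) for m in think()))
```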
Going back to the dream: I thought I ‘heard’ the words, but examining them more closely after I woke, I realised that no actual word would fit. I think what is happening is that during dreaming (and maybe during imagined dialogue while awake) the Jabberwocky circuit is not active, or not being attended to. It is as if I am hearing the other person’s intentions to speak, not articulated words. The pre-Jabberwocky part of the mind does know that there are two words, and knows what they *mean*. It also knows that they sound rather similar at the beginning (“eco”, “evo”), but not exactly what they sound like throughout.
I have noticed a similar thing with the written word. Often in dreams I am reading a book, a sheet of paper, or a poster, and the words make sense; but if I try to look more closely at the precise written form of the text, I cannot focus, and indeed often wake at that point [1]. That is, the dream is creating the interpretation of the text, but not the actual sensory form. Although if asked I would normally say that I had ‘seen’ the words on the page in the dream, it is more that I ‘see’ that there are words.
Fiona does claim to be able to see actual letters in her dreams, so maybe it is possible to recreate more precise sensory images; or maybe this is just the difference between simply writing and reading, and more consciously spelling out or attending to words, as in the well-known:
Paris in the
the spring
Anyway, I am awake now, and the wiser for it. I know a little more about dreaming (which cognitive functions are working and which are not); I know a little more about the brain and language; and I know a new creativity technique.
Not bad for a night in bed.
What do you learn from your dreams?
[1] The waking is interesting. I have often noticed that if the ‘logic’ of the dream becomes irreconcilable, I wake. This is a long story in itself, but I think it is similar to the way you get a ‘breakdown’ situation when things don’t work as expected and you are forced to think about what you are doing. It seems the ‘kick’ that changes your mode of thinking often wakes you up!
microsoft makes things easy
I’m so glad that Microsoft’s conference management service allows you to prepare reviews offline, in order to make reviewing easy …
Reflection in practice: Schön and science
I have just finished reading Schön’s “The Reflective Practitioner”. It is one of those books you feel you ought to have read years ago, resonating as it does with much of my own thinking and writing about creativity and innovation. However, I found myself slightly at odds with its adversarial dualism between science and practice, though I realise this is partly because it is a book of its time. I will return to this later.
names – a file by any other name
Naming things seems relatively unproblematic until you try to do it — ask any couple with a baby on the way. Naming files is no easier.
Earlier today Fiona @lovefibre was using the Mac OS X Time Machine to retrieve an old version of a file (let’s call it “fisfile.doc”). She wanted to extract a part that she knew she had deleted, in order to use it in the current version. Of course, the file you are retrieving has the same name as the current file, and the default is to overwrite the current version; that is a simple backup restore. However, you can ask Time Machine to retain both versions, at which point you end up with two files called, for example, “fisfile.doc” and “fisfile-original.doc”. In this case ‘original’ means ‘the most recent version’, and the unlabelled one is the old version you have just restored. This was not too confusing, but personally I would have been tempted to call the restored file something like “fisfile-2010-01-17-10-33.doc”, in particular because one wonders what will happen if you try to restore several copies of the same file, for example, to work out when an error slipped into a document.
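If I were scripting such a restore myself, I might derive the name from the backup file’s own modification time, along the lines of the following Python sketch; the naming scheme and the helper are of course my own suggestion, not anything Time Machine actually does:

```python
import datetime
import pathlib
import shutil

def restore_with_timestamp(backup: pathlib.Path, dest_dir: pathlib.Path) -> pathlib.Path:
    """Copy a backed-up file into dest_dir, embedding the backup's
    modification time in the name, e.g. 'fisfile-2010-01-17-10-33.doc',
    so several restored versions can coexist unambiguously."""
    mtime = datetime.datetime.fromtimestamp(backup.stat().st_mtime)
    stamped = f"{backup.stem}-{mtime:%Y-%m-%d-%H-%M}{backup.suffix}"
    target = dest_dir / stamped
    shutil.copy2(backup, target)
    return target
```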
OK, just a single incident, but only a few minutes later I had another example of problematic naming.
not for itself
While writing the last post and searching for a reference, I noticed that I’d never made available the notes of a talk I gave at the “Design and Non-Place Workshop” in Edinburgh back in 2005. So I have just put “Not for itself: insider/outsider orientation of place and signage and systolic flows?” online. The talk reflects on some of the events of the exciting non-place network, including a meeting at B&Q in Edinburgh and another at Stansted Airport.
I pick up just a few of the threads from those visits, looking particularly at the way ‘place’ transforms over time, the way signage addresses itself, and the different kinds of flow in populated space. At B&Q especially, I was fascinated by the back of the store, the place that gets ignored and yet which is critical for services and the actual delivery of goods.
I can’t recall why (five years ago now!), but the talk slides connect only tenuously to the text of the notes; I think maybe because I was touching on too many issues in the brief notes.
now part-time!
Many people already knew this was happening, but for those that don’t — I am now officially a part-time university academic.
Now this does not mean I’m going to be a part-time academic, quite the opposite. The reason for moving to part-time work at the University is to give me the freedom to do the things I’d like to do as an academic but never have time for. Including writing more, reading, and probably cutting some code!
Reading especially, and I don’t mean novels (although that would be nice), but journal papers and academic books. Like most academics I know, for years I have only read things that I needed to review, assess, or comment on — or have, in a fretful rush the day before a paper is due, scurried to find additional related literature that I should have known about anyway. That is, I’d like some time for scholarship!
I guess many people would find this odd: working full time, half of it unpaid, to do what sounds like your job anyway, but most academics will understand perfectly!
Practically, I will work at Lancaster in spurts of a few weeks, travel for meetings and things, sometimes from Lancs and sometimes direct from home, and when I am at home do a day a week on ‘normal’ academic things.
This does NOT mean I have more time to review, work on papers, or do other academic things, but actually the opposite — this sort of thing needs to fit into my 50% paid time … so please don’t be offended or surprised if I say ‘no’ a little more. The 50% of my time that is not paid will be only for the special things I choose to do — I have another employer: me 🙂
Watch my calendar to see what I am doing, but for periods marked @home, I may only pick up mail once a week on my ‘office day’.
Really doing this, and keeping my normal academic commitments down to a manageable amount, is going to be tough. I have not managed to keep to 100% of a sensible working week for years (usually more like 200%!). However, I am hoping that the sight of the first few half-pay cheques may strengthen my resolve 😉
In the immediate future I am travelling or in Lancs for most of February and March, with only about two weeks at home in between; however, in April and the first half of May I intend to be in Tiree, watching the waves and mainly writing about physicality for the new Touch IT book.
not quite everywhere
I’ve been (belatedly) reading Adam Greenfield‘s Everyware: The Dawning Age of Ubiquitous Computing. By ‘everyware’ he means the pervasive insinuation of inter-connected computation into all aspects of our lives — ubiquitous/pervasive computing, but seen in terms of lives, not artefacts. Published in 2006, and so I guess written in 2004 or 2005, Adam confidently predicts that everyware technology will have “significant and meaningful impact on the way you live your life and will do so before the first decade of the twenty-first century is out“, but one month into 2010 I’ve not really noticed it yet. I am not one of those people who fill their house with gadgets, so I guess I’m unlikely to be an early adopter of ‘everyware’, but even in the most techno-loving houses, the best I’ve seen is the HiFi controlled through an iPhone.
Devices are clearly everywhere, but the connections between them seem infrequent and poor.
Why is ubiquitous technology still so … well, un-ubiquitous?
Recognition vs classification
While putting away the cutlery I noticed that I always do it one kind at a time: all the knives, then all the forks, etc. While this may simply be a sign of an obsessive personality, I realised there was a general psychological principle at work here.
We often make the distinction between recognition and recall, and know, as interface designers, that the former is easier, especially for infrequently used features, e.g. menus rather than commands.
In the cutlery-tray task the trade-off is between a classification task (“here is an item, what kind is it?”) and a visual recognition one (“where is the next knife?”). The former requires a level of mental processing and is subject to Hick’s law, whereas the latter depends purely on lower-level visual processing, a pop-out effect.
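For reference, Hick’s law says decision time grows logarithmically with the number of equally likely alternatives, T = a + b log2(n + 1). A quick Python sketch, where the constants a and b are purely illustrative (in practice they are fitted empirically):

```python
import math

def hick_decision_time(n, a=0.2, b=0.15):
    """Hick's law: T = a + b * log2(n + 1) seconds for a choice among
    n equally likely alternatives; a and b are empirically fitted
    constants, and the values here are illustrative only."""
    return a + b * math.log2(n + 1)

# Classification time grows with the number of cutlery categories,
# whereas visual 'pop-out' search stays roughly constant.
for n in (2, 4, 8):
    print(n, round(hick_decision_time(n), 3))
```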
I am wondering whether this has user-interface equivalents. I am thinking about times when one is sorting things: bookmarks, photos, even my own snip!t. Sometimes you work by classification: select an item, then choose where to put it; at other times you choose a category (or an album) and then select what to put in it. Here the ‘recognition’ task is more complex and not just visual, but I wonder if the same principle applies?
Sounds like a good dissertation project!
the plague of bugs
Like some Biblical locust swarm, every attempt to do anything is thwarted by the dead weight of innumerable bugs! This time I was trying … and failing … to upload a Word file into Google Docs. I uploaded the docx file and it said the file was unreadable; I tried saving it as .doc, and when that failed, created an RTF file. Amazingly, from a 1 Meg Word file the RTF was 66 Meg, but very, very slowly Google Docs did upload the file, and when it was eventually all uploaded …
To be fair, the same document imports pretty badly into Pages too (all the headings disappear). I think this is because it was originally a 2003 Word file and gets corrupted when the new Word reads it.
Now, I have griped before about backward-compatibility issues in Word, and in general about the lack of robustness in many leading products. To add to my woes, for the last month or so (I guess since a software update) Word has decided not to show its formatting menus on an opened document unless I first hide them, then show them, and then maximise the window. Mostly these things are merely annoying, sometimes they really block work, and they always waste time and destroy the flow of work.
However, rather than grousing once again (well, I already have a bit), I am trying to make sense of this. For some time it has been apparent that software is fundamentally breaking down: with every new version there is minimal new useful functionality, but more bugs. This may simply be an issue of scale, of the training of programmers, or of the nature of development processes. Indeed, in the talk I gave a bit over a year ago to PPIG, “as we may code“, I noted that coding in the 21st century seems to be radically different: more about finding tricks and community know-how, and less about problem solving.
Whatever the reason, I don’t think the Biblical plague of bugs is simply due to laziness or indifference on the part of large vendors such as Microsoft and Adobe; rather, it is symptomatic of a deeper crisis in software development, certainly where there is a significant user interface.
Maybe this is simply an inevitable consequence of scale, but more optimistically I wonder if there are new ways of coding, new paradigms or new architectural models. Can 2010 be the decade when software is reborn?