Dublin, Guinness and the future of HCI

I wrote the title of this post on 5th December just after I got back from Dublin.  I had been in Dublin for the SIGCHI Ireland Inaugural Lecture “Human–Computer Interaction in the early 21st century: a stable discipline, a nascent science, and the growth of the long tail” and was going to write a bit about it (including my first flight on and off Tiree) … then thought I’d write a short synopsis of the talk … so parked this post until the synopsis was written.

One month and 8000 words later – the ‘synopsis’ sort of grew … but it is now finished and on the web as either an HTML version or a PDF. Basically a sort of ‘state of the nation’ about the current state of, and challenges for, HCI as a discipline …

And although it now fades a little, I had a great time in Dublin: meeting people, talking research, good company, good food … and yes … the odd pint of Guinness too.

Coast to coast: St Andrews to Tiree

A week ago I was in St Andrews on the east coast of Scotland, delivering three lectures on “Human Computer Interaction: as it was, as it is and as it may be” as part of their distinguished lecture series, and now I am in Tiree in the wild western ocean off the west coast.

I had a great time in St Andrews and was well looked after by some I knew already – Ian, Gordan, John and Russell – and also met many new people. I ate good food and stayed in a lovely hotel overlooking the sea (and golf course) and full of pictures of golfers (well, what do you expect in St Andrews).

For the lectures, I was told the general pattern was one lecture about the general academic area, one ‘state of the art’ and one about my own stuff … hence the three parts of the title!  Ever one for cutesy titles, I then called the individual lectures “Whose Computer Is It Anyway”, “The Great Escape” and “Connected, but Under Control, Big, but Brainy?”.

The first lecture was about the fact that computers are always ultimately for people (surprise surprise!) and I used Ian’s slight car accident on the evening before the lecture as a running example (sorry Ian).

The second lecture was about the way computers have escaped the office desktop and found their way into the physical world of ubiquitous computing, the digital world of the web and into our everyday lives in our homes, increasingly becoming the hub of our social lives too.  Matt Oppenheim did some great cartoons for this and I’m going to use them again in a few weeks when I visit Dublin to do the inaugural lecture for SIGCHI Ireland.

for 20 years the computer is chained to the office desktop (image © Matt Oppenheim)


... now escapes: out into the world, spreading across the net, in the home, in our social lives (image © Matt Oppenheim)


The last lecture was about intelligent internet stuff, similar to the lecture I gave at Aveiro a couple of weeks back … mentioning again the fact that the web now has the same information storage and processing capacity as a human brain1 … it always makes people think … well, at least it always makes ME think about what it means to be human.

… and now … in Tiree … sun, wild wind, horizontal hail, and paddling in the (rather chilly) sea at dawn

  1. see the brain and the web[back]

From raw experience to personal reflection

Just a week to go until the deadline for the workshop on Designing for Reflection on Experience that Corina and I are organising at CHI. Much of the time discussions of user experience are focused on trivia, and even social networking often appears to stop at superficial levels.  While throwing a virtual banana at a friend may serve to maintain relationships, and is perhaps less trivial than it at first appears, there is still little support for deeper reflection on life, with the possible exception of the many topic-focused chat groups.  However, in researching social networks we have found, amongst the flotsam, clear moments of poignancy and conflict, traces of major life events … even divorce by Facebook. Too much navel gazing would not be a good thing, but some attention to expressing deeper issues to others and to ourselves seems overdue.

PPIG2008 and the twenty first century coder

Last week I was giving a keynote at the annual workshop PPIG2008 of the Psychology of Programming Interest Group.   Before I went I was politely pronouncing this pee-pee-eye-gee … however, when I got there I found the accepted pronunciation was pee-pig … hence the logo!

My own keynote at PPIG2008 was “as we may code: the art (and craft) of computer programming in the 21st century”, an exploration of the changes in coding since 1968, when Knuth published the first of his books on “The Art of Computer Programming”.  On the web site for the talk I’ve made a relatively unstructured list of some of the distinctions I’ve noticed between 20th and 21st century coding (C20 vs. C21), and in my slides I have started to add some more structure.  In general we have a move from a more mathematical, analytic, problem-solving approach to something more akin to a search task: finding the right bits to fit together, with a greater need for information management and social skills. Both this characterisation and the list are, of course, gross simplifications, but they seem to capture some of the change of spirit.  These changes suggest different cognitive issues to be explored and maybe different personality types involved – as one of the attendees, David Greathead, pointed out, rather like the judging vs. perceiving personality distinction in Myers-Briggs1.

One interesting comment on this came from Marian Petre, who has studied many professional programmers.  Her impression, echoed by others, was that the heavy-hitters were the more experienced programmers who had adapted to newer styles of programming, whereas the younger programmers found it harder to adapt the other way when they hit difficult problems.  Another attendee suggested that perhaps I was focused more on application coding, and that system coding and system programmers were still operating in the C20 mode.

The social nature of modern coding came out in several papers about agile methods and pair programming.  As well as being an important phenomenon in its own right, pair programming gives a level of think-aloud ‘for free’, so maybe this will also cast light on individual coding.

Margaret-Anne Storey gave a fascinating keynote about the use of comments and annotations in code.  This again picks up the social nature of code, as she was studying open-source coding where comments are often for other people in the community, maybe explaining actions or suggesting improvements.  She reviewed a lot of material in the area and I was especially interested in one result showing that novice programmers working with small pieces of code found method comments more useful than class comments.  Given my own frequent complaint that code is inadequately documented at the class or higher level, this appeared to disagree with my own impressions.  However, in discussion it seemed this was probably accounted for by differences in context: novice vs. expert programmers, small vs. large code bases, internal comments vs. external documentation.  One of the big problems I find is that the way different classes work together to produce effects is particularly poorly documented.  Margaret-Anne described one system her group had worked on2 that allowed you to write a tour of your code, opening windows, highlighting sections, etc.

I sadly missed some of the presentations as I had to go to other meetings (the danger of a conference at your home site!), but I did get to some, and was particularly fascinated by the more theoretical/philosophical session, including one paper addressing the psychological origins of the notions of objects and another focused on (the dangers of) abstraction.

The latter, presented by Luke Church, critiqued Jeanette Wing‘s 2006 CACM paper on Computational Thinking.  This is evidently a ‘big thing’ with loads of funding and hype … but one that I had entirely missed :-/ Basically the idea is to translate the ways that one thinks about computation to problems other than computers – nerds rule OK. The tenets of computational thinking seem to overlap a lot with management thinking, and they also reminded me of the way my own HCI community, and parts of the Design (with capital D) community, are in different ways trying to say that we/they are the universal discipline … well, if we don’t say it about our own discipline who will … the physicists have been getting away with it for years 😉

Luke’s (and his co-authors’) argument is that abstraction can be dangerous (although of course it is also powerful).  It would be interesting, perhaps, to look at this argument alongside Jeff Kramer’s 2007 CACM article “Is abstraction the key to computing?”, rather than Wing’s paper – I recall liking Kramer’s article because it says computer scientists ought to know more mathematics 🙂 🙂

I also sadly missed some of Adrian Mackenzie‘s closing keynote … although this time not due to competing meetings but because I had been up since 4:30am reading a PhD thesis and, after lunch on a Friday, had begun to flag!  However, this was no reflection on Adrian’s talk, and the bits I heard were fascinating, looking at the way bio-tech is using the language of software engineering.  This sparked a debate relating back to the overuse of abstraction, especially in the case of the genome, where interactions between parts are strong and so the software component analogy is weak.  It also reminded me of yet another relatively recent paper3 on the way computation can be seen in many phenomena and should not be construed solely as a science of computers.

As well as the academic content it was great to be with the PPIG crowd; they are a small but very welcoming and accepting community – I don’t recall anything but constructive and friendly debate … and next year they have PPIG09 in Limerick – PPIG and Guinness, what could be better!

  1. David has done some really interesting work on the relationship between personality types and different kinds of programming tasks.  I’ve seen him present before about debugging and unfortunately had to miss his talk at PPIG on comprehension.  Given his work has shown clearly that there are strong correlations between certain personality attributes and coding, it would be good to see more qualitative work investigating the nature of the differences.  I’d like to know whether strategies change between personality types: for example, between systematic debugging and more insight-based ‘scan and see it’ bug finding. [back]
  2. … but I can’t find it on their website :-( [back]
  3. Perhaps 2006/2007 in either CACM or Computer Journal, if anyone knows the one I mean please remind me![back]

eprints: relaxed and scalable interfaces

A story, a bit of a moan … and then, I hope, some constructive ideas.

It is time for the University annual report, which includes a list of all publications across the University. In previous years this was an easy job. I keep an up-to-date web page with all my publications for each year, so I simply gave our secretaries a link to the web publication list; they cut and pasted it into Word, tidied the format a little … and job done. However, this year things are different … a short while ago the department installed an EPrints server. This year the department is making its submission to the University by downloading from the EPrints server, which means we have to upload to it :-/

The citation-adding page runs to several screenfuls, including breaking author names down into surname and forename … the thought of that was somewhat daunting.

Fortunately you can import into EPrints from BibTeX and EndNote bibliographies … unfortunately mine is in plain HTML 🙁

Now, the 10 million AKT project, in which Southampton was a lead partner, developed a free-text bibliography server … but, unfortunately, it is not included in EPrints 🙁

So, a few regular expression substitutions and a lot of hand edits later, I converted my 2007 publication list into BibTeX (actually a couple of hours in total, including ‘bug fixing’ syntax errors in the BibTeX).
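For the curious, here is a minimal sketch of the kind of regular-expression substitution involved. It assumes an unrealistically regular `<li>` entry format (my real list was far messier, hence all the hand edits), and the BibTeX it produces is only a stub:

```python
import re

# Hypothetical sketch: turn very regular <li> publication entries into BibTeX
# stubs. Assumes entries like:
#   <li>A. Author, B. Author (2007). Title of the paper. Venue.</li>
# Real lists are messier, hence the subsequent hand edits and 'bug fixing'.

ENTRY = re.compile(
    r"<li>\s*(?P<authors>.+?)\s*\((?P<year>\d{4})\)\.\s*"
    r"(?P<title>.+?)\.\s*(?P<venue>.+?)\.?\s*</li>",
    re.DOTALL,
)

def html_to_bibtex(html: str) -> str:
    entries = []
    for i, m in enumerate(ENTRY.finditer(html), start=1):
        entries.append(
            "@misc{pub%d,\n  author = {%s},\n  year = {%s},\n"
            "  title = {%s},\n  howpublished = {%s}\n}"
            % (i, m["authors"], m["year"], m["title"], m["venue"])
        )
    return "\n\n".join(entries)

print(html_to_bibtex("<li>A. Dix (2007). An example paper. Some Conference.</li>"))
```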

Then I upload the clean .bib file … beautiful – I get a list of all the uploaded items … but they are in my ‘user workspace’ and not properly deposited. That I have to do one-by-one, and I am not allowed to until I have filled in various additional fields, scattered liberally over several forms, including one form for adding subjects that requires several clicks to open up a lovely tree browser that in the end has only 2 leaves.

Now, after the grouching, the lessons.

There seem to be a few key problems:

(1) First, the standard usability issues: the inclusion pages are oriented around the data in the system, not the user; there are no shortcuts for previously entered authors; and so on.

(2) The system will not allow data to be entered if it is not complete. Of course the institution wants full data (e.g. whether an item is refereed, etc.), but making it difficult to enter data makes it likely that users will not bother. That is, the alternative to perfect data may be no data!

(3) The interface to enter and edit entries is fine for a small number, but becomes a pain when processing a complete publication list. Conversely, the page for setting the subject categories is designed for handling large trees of categories but does not gracefully handle a small one.

Both (2) and (3) are also common problems, but they are not so well considered in the usability literature.

A useful information systems heuristic that I often advocate is:

“don’t enforce consistency, but highlight inconsistency”

In this case, why not allow me to deposit incomplete records and then leave me a ‘to do list’ page … yes, and maybe even badger me periodically with automatic emails to check it.
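A sketch of what that might look like (field names and record structure are purely illustrative, not the real EPrints schema): accept the deposit whatever its state, note what is missing, and surface it later as the ‘to do list’.

```python
# Sketch of "don't enforce consistency, but highlight inconsistency".
# Field names and record structure are illustrative, not the real EPrints schema.

REQUIRED = ["title", "authors", "year", "refereed", "subject"]

def deposit(record, repository):
    """Accept the record regardless of completeness; note what is missing."""
    missing = [field for field in REQUIRED if field not in record]
    record["missing"] = missing
    repository.append(record)
    return missing

def todo_list(repository):
    """The 'to do list' page: incomplete records and the fields they still need."""
    return [(r["title"], r["missing"]) for r in repository if r["missing"]]

repo = []
deposit({"title": "An example paper", "authors": "A. Dix", "year": 2007}, repo)
print(todo_list(repo))   # [('An example paper', ['refereed', 'subject'])]
```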

Another maxim that applies to (2) is:

“Make it easy for the user to do what you want”

If you want people to upload references, make it as easy as possible to do so. Now, I’m sure the designers intend this to be the case, but it is easy sometimes to focus on the usability of individual screens and interactions rather than on the wider context.

In fact, this was the second time I was faced with problem (3) today. Fiona had accidentally double-clicked a large number of archived files when she was trying to drag them to the Trash. She had to kill the application as it blindly started to open dozens of files (why not ask?). However, it was clearly coded resiliently and kept backup copies of the files it had started to open, so, when she tried to re-open it, InDesign started to ask her whether she wanted to recover the files … but did so one-by-one and wouldn’t let her do anything else until she had laboriously answered every dialogue box.

In this case the solution is fairly obvious: if there are many (or even more than one) files to be recovered, why not list them and ask about them all, perhaps with check boxes so you can recover some but not others. In general, tabular or list-style views tend to work better with large numbers of items, allowing you to perform edits to many items in a single transaction.

Similarly in EPrints, after the import there were just a few fields required for each entry; some form of tabular view would have allowed me to scan down the list and select ‘refereed/not refereed’ for each entry.
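Something along these lines would do – a toy command-line sketch, nothing like the real EPrints code, with ‘refereed’ as an assumed field name – where one interaction covers the whole list rather than one multi-page form per item:

```python
# Toy sketch of a tabular 'fix them all at once' pass, instead of
# one multi-page form per item. The 'refereed' flag is an assumed field name.

def batch_set_refereed(items):
    for i, item in enumerate(items, start=1):
        print(f"{i:3}. {item['title']}")
    answer = input("Numbers of refereed items (comma separated, blank for none): ")
    refereed = {int(n) for n in answer.split(",") if n.strip().isdigit()}
    for i, item in enumerate(items, start=1):
        item["refereed"] = i in refereed

items = [{"title": "Paper A"}, {"title": "Paper B"}, {"title": "Paper C"}]
# batch_set_refereed(items)   # one interaction covers the whole publication list
```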

With the subject categories it was in a sense the opposite problem, but a symptom of the way we, as designers, often have some idea in our heads about how large a particular set is likely to be and then design around that idea. However, if you notice this tendency, you can often produce variant interaction styles depending on the size of the set. For example, in web-based systems to browse hierarchies I have often (but not always!) added code that effectively says, “if the number of entries at this level is not too great, then show this level as headings with the next level as well.”
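A minimal sketch of that rule, with an assumed threshold of 8 entries (a real interface would tune the cut-off):

```python
# Sketch of size-dependent hierarchy browsing: if a level is small enough,
# show its children inline instead of hiding them behind an extra click.
# MAX_INLINE is an assumed threshold; a real interface would tune it.

MAX_INLINE = 8

def render(node, depth=0):
    indent = "  " * depth
    print(indent + node["name"])
    children = node.get("children", [])
    if len(children) <= MAX_INLINE:
        for child in children:
            render(child, depth + 1)          # small level: expand in place
    else:
        print(f"{indent}  ... {len(children)} subcategories (click to expand)")

subjects = {
    "name": "Subjects",
    "children": [
        {"name": "Computer Science", "children": [{"name": "HCI"}, {"name": "AI"}]},
        {"name": "Mathematics"},
    ],
}
render(subjects)
```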


fully expanded EPrints subjects menu

The EPrints server clearly expects that the subject tree will be far bigger, as it would be on a University-wide installation. However, even if the list is very large, the number of items used by an individual is likely to be small.

So as general design advice, if there is some form of collection:

  • are there any absolute lower or upper bounds on the size?
  • check, within these absolute bounds, what the interface would be like with 1, 3, 10, 100 or 1000 items in the collection
  • if the potential collection is large, is the likely size needed for a particular user or situation smaller?

To be fair, I am an unusual user with my pretty complete HTML publication lists; if I had no systematic way of keeping my own publications then I would appreciate EPrints more. However, there will be many people with word-processor lists, so maybe I’m not so unusual. I assume other people just knuckle down and get on with it. So the real problem is that I am an impatient user!

Which brings us to the last and most valuable piece of advice. When it comes to user testing, cussed users are worth their weight in gold. Users that are too nice are useless; they cope, they manage, and they would hate to hurt your feelings by telling you your system is not perfect. So find the nasty users, the impatient users, the ones who complain at the slightest thing … they are true treasure.

escape from distraction

Last week I was away in Cornwall and lost (but later found) my phone, so I was both without a phone and without an internet connection … and it was amazingly liberating. My life is driven by the never-ending stream of incoming mails and, while in principle I could ignore them, in fact I find myself constantly breaking off what I am doing to see what has come in.

This reminded me of a Times article Haliyana pointed out to me a couple of weeks ago, “Stoooopid … why the Google generation isn’t as smart as it thinks“. We make a virtue of the never-ending stream of interruptions that assail us; “multi-tasking” we call it, but in fact the interruptions not only mean we are less focused, but that we are possibly losing the ability to concentrate at all.

While reading the article itself I found myself fighting the urge to follow the numerous links to other stories that littered the Times online page … and I would like to tell you more about it, but I never managed to read to the end before succumbing to the next interruption.

why software need never hang

Over 20 years ago I wrote “The Myth of the Infinitely Fast Machine“, about the way software developers effectively assume that everything on the machine side of human interaction happens instantly. Often interaction is programmed in a turn-taking style:

  1. wait for user action
  2. process the event
  3. display changes
  4. back to step 1

This assumption of an instant (or at least infinitely fast) response at step 2 often ignores network delays, disk IO or heavy computation. It tends to work fine on a high-spec development or test machine, with a fast network and a clean install of all system software … but when the software hits a real machine – a few years old, untidy system, slow network – things fall to pieces.
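As a toy illustration of the turn-taking pattern (Python used purely for brevity; the sleep in step 2 stands in for any slow disk, network or printer call), everything runs on one thread, so while step 2 is busy the interface cannot respond at all:

```python
import time

# Toy illustration of the turn-taking loop above: everything happens on one
# thread, so while step 2 is busy (the sleep stands in for a slow disk,
# network or printer call) the interface cannot respond to anything at all.

def wait_for_user_action():
    return input("> ")                 # step 1: wait for user action

def process_event(event):
    time.sleep(5)                      # step 2: stand-in for a slow operation
    return f"done: {event}"

def display(result):
    print(result)                      # step 3: display changes

def main_loop():
    while True:                        # step 4: back to step 1
        event = wait_for_user_action()
        display(process_event(event))  # UI is frozen for the whole of step 2
```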

So, 20 years later (as I described in my post last week), I am sitting watching the spinning rainbow ball as Word struggles to save a document (over an hour now; I think I will need to kill it). To be fair, I think the root ’cause’ of the problem … or at least one problem … may be the printer, as the Canon printer driver has never worked properly on an Intel Mac (maybe a new driver when I upgrade to Leopard?) and perhaps some change in the rest of the system (maybe the Office install) has tipped it over into not working at all.

As far as I can tell, Word then decides to ask the printer things in order to set the margins properly when saving the document, and then gets stuck. I found a post on a Microsoft forum about a different print-related problem, and the ‘helpful’ tech support from MS simply said “not our fault, re-install everything”.

So to recap:

  • the user asks Word to save – probably the most critical operation in the system – or the system auto-saves, again to ensure safety against crashes, so really critical
  • Word decides it needs information from the printer (although it has been displaying the page to the user using some existing information on page properties)
  • Word asks for info from the printer driver of the currently selected printer
  • if the printer doesn’t respond Word hangs and blocks all user interaction

However, the printer driver may be third party, may be connecting to a shared printer hanging off a different network, or, in the case of a laptop, to a network from which the computer is currently disconnected … and any resulting delay is not the fault of the developers of Word??!

The annoying thing is that such ‘hanging’ delays need never happen.

Basically there are four main causes for delays:

  1. ordinary computation takes a long time due to it being too complex for the available hardware
  2. unbounded internal computation – for example, iterative algorithms
  3. waiting for external resources (disk, network, etc.)
  4. bugs that lead to the system going crazy (effectively case 2 by accident!)

Type 1 will surface during testing and may require re-design of the interaction, but is simply ‘slow’ rather than ‘hanging’. Typically it leads to things gradually getting slower as the document or data gets larger or more complicated. This requires standard profiling and optimisation.

Type 4 is hard to deal with – bugs do happen. However, the majority of the problems I’m experiencing in Word at the moment are not a failure of this kind as Word does, most of the time, eventually complete without crashing.

Types 2 and 3, especially the latter, should be detected and then dealt with in the design of the user interface.

Some real-time programming languages have ways of automatically working out how long code will take to run, in order to be able to assert “this will respond within a 10 ms interrupt cycle”. However, this is hard even for relatively simple embedded systems, so it is not practical for complex operating systems or user interfaces.

However, a simpler version of the above is possible. Certain system functions invoke external resources such as the disk, or the network. If any function or method in your own application invokes one of these system functions, then it could potentially hang – and should be documented to say so or return some sort of ‘promise’: “I’ve started to do X, please check back later to see if it is ready”. Of course the methods that call these themselves need to be documented as potentially hanging … and so forth.
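As a rough sketch of that ‘promise’ idea (using Python futures purely for illustration; the printer query is a made-up stand-in, not any real driver API), the potentially hanging call returns immediately and the caller checks back later:

```python
from concurrent.futures import ThreadPoolExecutor, Future
import time

# Sketch of the 'promise' idea using Python futures. query_printer_settings is
# a made-up stand-in for a call that may hang on a broken or absent driver.

_executor = ThreadPoolExecutor(max_workers=2)

def query_printer_settings(printer_name: str) -> dict:
    time.sleep(10)                                   # pretend the driver is slow
    return {"printer": printer_name, "margins": (25, 25, 25, 25)}

def query_printer_settings_async(printer_name: str) -> Future:
    """'I've started to do X, please check back later to see if it is ready.'"""
    return _executor.submit(query_printer_settings, printer_name)

promise = query_printer_settings_async("office-laser")
# ... carry on with the save; poll (or add a callback) rather than block ...
if promise.done():
    margins = promise.result()["margins"]
else:
    margins = (20, 20, 20, 20)        # fall back to cached page properties for now
```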

If the response to any form of user interaction ends up calling a potentially hanging function, then it is in danger of having a delay of type 3 above. However, so long as this is known, it can be dealt with at the user interface level by spawning a thread to do the work so that some form of progress indicator or at least “Cancel” button can be active – it should never ‘hang’.
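A sketch of that interface-level handling, again illustrative only (write_chunk stands in for the real save code): the possibly hanging work runs on a worker thread, leaving the main loop free to show progress and honour a Cancel.

```python
import threading
import time

# Sketch of interface-level handling: the possibly hanging save runs on a
# worker thread, so the 'UI loop' below stays responsive enough to show
# progress and act on a Cancel. write_chunk stands in for the real save code.

def write_chunk(chunk):
    time.sleep(0.5)                       # each chunk may stall (printer, network, ...)

def save_with_cancel(chunks, cancel: threading.Event) -> bool:
    done = threading.Event()

    def worker():
        for chunk in chunks:
            if cancel.is_set():           # user pressed Cancel: stop cleanly
                return
            write_chunk(chunk)
        done.set()

    threading.Thread(target=worker, daemon=True).start()
    while not (done.is_set() or cancel.is_set()):
        print("saving ... (Cancel still responsive)")   # stand-in progress indicator
        done.wait(timeout=1.0)            # a real UI event loop would run here
    return done.is_set()

cancel = threading.Event()                # a Cancel button would call cancel.set()
print("saved" if save_with_cancel([f"page {i}" for i in range(14)], cancel) else "cancelled")
```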

This marking of functions as potentially ‘hanging’ could be done by programmers themselves, but equally can be automated as a form of static analysis, simply starting with a known set of hanging system functions and recursively ‘colouring’ functions that call them. This kind of automated checking should be standard practice in any large software project.
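A toy version of this ‘colouring’, assuming the call graph has already been extracted and starting from a made-up set of known hanging system calls:

```python
# Toy 'colouring' pass: start from known hanging system calls and propagate
# the 'may hang' property backwards through a pre-extracted call graph.
# Both the graph and the set of known calls are made up for illustration.

KNOWN_HANGING = {"query_printer", "read_socket", "read_disk"}

CALL_GRAPH = {                       # function -> functions it calls
    "measure_text":  set(),
    "format_pages":  {"measure_text"},
    "save_document": {"format_pages", "query_printer"},
    "autosave":      {"save_document"},
    "on_menu_click": {"autosave"},
}

def may_hang(call_graph, known):
    hanging = set(known)
    changed = True
    while changed:                   # iterate to a fixed point
        changed = False
        for fn, callees in call_graph.items():
            if fn not in hanging and callees & hanging:
                hanging.add(fn)
                changed = True
    return hanging

print(sorted(may_hang(CALL_GRAPH, KNOWN_HANGING) - KNOWN_HANGING))
# ['autosave', 'on_menu_click', 'save_document'] -- all need async handling in the UI
```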

The type 2 hanging is a little more complicated. The Ada programming language has a ‘safe’ subset that only allows loops where the bounds are fixed at compile time. This is probably too restrictive for complex software, but certainly any loop with unknown limits could be flagged. If, as part of a code walk-through or similar practice, it is decided that the loop is ‘safe’, it can be annotated as such; otherwise, just like the case of system calls, the system can propagate the fact that certain functions may have unbounded computation and the UI can be adjusted accordingly.
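A toy static check in this spirit (using Python’s ast module for convenience, and crudely treating every while loop as potentially unbounded; a real tool would respect the ‘safe’ annotations agreed in a walk-through):

```python
import ast

# Toy static check: flag 'while' loops, whose bounds are not fixed at compile
# time, as potentially unbounded. A real tool would respect a 'safe'
# annotation agreed in a code walk-through, as described above.

SOURCE = '''
def converge(x):
    while abs(x) > 1e-6:      # iterative algorithm: no compile-time bound
        x = x / 2
    return x

def copy_items(items):
    for item in items:        # bounded by the length of items
        print(item)
'''

class LoopChecker(ast.NodeVisitor):
    def __init__(self):
        self.flagged = []
    def visit_While(self, node):
        self.flagged.append(("possibly unbounded while loop", node.lineno))
        self.generic_visit(node)

checker = LoopChecker()
checker.visit(ast.parse(SOURCE))
print(checker.flagged)        # [('possibly unbounded while loop', 3)]
```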

For small bespoke software development I can be forgiving, but for large vendors like Microsoft, Apple or Adobe, there is no excuse for this form of culpable failure.

… but I have a bad feeling that in 20 years time I may be writing again …

[[ News flash – 1.5 hours later Word has finished saving the document! … 14 pages obviously hard work. … but then it has hung again 🙁 ]]

Tags and Tagging: from semiology to scatology

I’ve just been at a two-day workshop on “Tags and Tagging” organised by the “Branded Meeting Places” project.

Tags are of course becoming ubiquitous in the digital world: Flickr photos, del.icio.us bookmarks; at the digital/physical boundary: RFID and barcodes; and in the physical world: supermarket price stickers, luggage labels and images of Paddington Bear or wartime evacuees each with a brown paper label round their necks. Indeed we started off the day being given just such brown paper tags to design labels for ourselves.

Alan's tag

As well as being labels so we knew each other, the tags were also used as digital identifiers by a mobile-phone-based image-recognition system, which has been used in a number of projects by the project team at Edinburgh (see some student projects here). We could photograph each other’s tags with our own phones, MMS the picture to a special phone number, and then a few moments later an SMS message would arrive with the other person’s profile.

Being focused on a single topic, and even a single word, ‘tag’, everything soon begins to be seen through the lens of “tagging”, so that when we left the building and saw a traffic warden at work outside, the thought instantly came: “tagging the car”!

Vocal Thumbs logo

The workshop covered loads of ground and included the design and then construction of a real application – part of the project’s methodology of research through design. However, there are two things I want to write about. The first is the way the workshop made me think about the ontology, or maybe semiology, of tags and tagging, and the second is a particular tag (or maybe label, or notice?) … on a toilet door … yes, the good old British scatological obsession.


mobile design workshop

A couple of weeks ago I attended a mobile design workshop at the Microsoft labs in Cambridge. A great two days … people from academia and industry (and no, not just MS – also Google, Yahoo, Sony, Nokia, …!)
Alan in Helmet

Most of the time was spent splitting into small working groups then coming back together for plenaries. I took part in groups discussing:

(1) tools to make it easier for those in developing countries to design mobile phone applications that suit their needs [session notes], rather than simply passing on applications and designs fitted for very different needs and infrastructure. During the discussion various applications of phone technology were cited that were completely different from those we would expect in the UK, US or Europe, but that fitted the situations of the people using them. These included a trader using the address book as a ‘who owes what’ list … the ‘telephone numbers’ were in fact amounts of money! This is a use of ‘ancillary’ parts of the phone, rather than the phone simply being a glorified communication device. Although the context here was Africa, it also echoes studies of domestic phone use by Malay women in the UK by Fariza (who has just had her PhD viva :-)). She found the alarm, the calculator and things like that at least as important as phone & text for the people she studied.

(2) ‘mindfulness’ and mobile phones … and of course the fact that normally they do the opposite, interrupting us, etc. … but, just so we did not all agree too much, I said that mindfulness sounded like we should all become like rabbits; it is the looking forward and back, with all its stress, that is one of the things that makes us human.

(3) task/data-oriented interaction … escaping from the ‘application’. This was particularly relevant to me, given that onCue at aQtive was in this space, as are Snip!t and work on the TIM project with colleagues in Rome, Athens and, recently, new collaborators in Madrid … with whom I had a short but lovely visit after CHI.

Task session