not quite everywhere

I’ve been (belatedly) reading Adam Greenfield’s Everyware: The Dawning Age of Ubiquitous Computing. By ‘everyware’ he means the pervasive insinuation of inter-connected computation into all aspects of our lives — ubiquitous/pervasive computing, but seen in terms of lives rather than artefacts. Published in 2006, and so I guess written in 2004 or 2005, Adam confidently predicts that everyware technology will have “significant and meaningful impact on the way you live your life and will do so before the first decade of the twenty-first century is out”, but one month into 2010 I’ve not really noticed yet. I am not one of those people who fill their house with gadgets, so I guess I’m unlikely to be an early adopter of ‘everyware’, but even in the most techno-loving house the best I’ve seen is the HiFi controlled through an iPhone.

Devices are clearly everywhere, but the connections between them seem infrequent and poor.

Why is ubiquitous technology still so … well un-ubiquitous?

Why bother

One reason for the lack of connectivity is that, for each production and purchase decision, there is little need for it. It is possible to wire up your whole home with sensors, as Andy Stanford-Clark‘s twittering house demonstrates. But for most people, why put sensors in your home unless they are for something? Why bother to buy things unless they give you value now? Home security, central heating control, warning you when your fuel is running low — these are reasons for sensors, but in general we tend to buy things that do things for us, not things that might possibly plug together to do something interesting. This is maybe even more true of the new generation of digital natives, brought up with single-function toys, and even Lego sold as pre-determined models rather than generic kits.

Of course, we do buy things with sensors: the electric kettle has a temperature sensor, the TV knows what channel is tuned — but these are single purpose. Even SCART connections for TV/DVD/satellite boxes are pretty limited unless the devices expect to be chained. For the manufacturer, why bother to make stand-alone devices connectible? It will cost more in terms of a wireless chip, not to mention complex interfaces to manage privacy. Even software rarely takes this step to make itself easy to interconnect, despite the mash-up culture of Web 2.0. The barcode reader app on the iPhone is a rare exception; here the cost was relatively low and purely in software (an up-front cost, but no cost per ‘device’), and yet even this kind of openness is rare to see. Once there are hardware costs, even generic devices such as phones only include sensors for which they can see an immediate use: for example, despite the presence of GPS and compass in the iPhone, the new iPad, with slightly different expected uses, has neither. For physical devices, connectivity may come when individual devices require external interfaces anyway, maybe to manage complex installations, or for monitoring via mini-webservers that cost less than dedicated displays, just as with most home routers. Then the hardware cost of connectivity has a purpose for the dedicated device, and the additional cost of genericity is software only.
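To make that last point concrete, here is a minimal sketch (my own illustration, not from the post; the device name, readings and port are invented) of the kind of status ‘mini-webserver’ a sensor-equipped appliance might expose in place of a dedicated display, using only the Python standard library:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical current state of the appliance; a real device would read its sensors here.
STATUS = {"device": "kettle", "temperature_c": 92.5, "switched_on": True}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the device state as JSON, whatever path is requested.
        body = json.dumps(STATUS).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Point a browser (or another device) at http://<device-ip>:8080/
    HTTPServer(("", 8080), StatusHandler).serve_forever()
```

Once a device exposes something like this for its own purposes, letting other devices read it is largely a matter of agreeing on formats; the marginal cost of genericity really is software only.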

What’s there

It is interesting to look back at Mark Weiser‘s original vision of ubiquitous computing. It is less connected than Adam’s ‘everyware’, but equally rooted in research prototypes and potential, and yet it has certainly come to pass: computation is ‘ubiquitous’ in many ways. Like others, I have always been struck by the disconnect between the opening words of Weiser’s The Computer for the 21st Century, “The most profound technologies are those that disappear”, and the rest of the paper, which focuses on displays, the visible face of computation. But, with the exception of miniature MP3 players, it is precisely these visible displays that are part of our lives.

Weiser classified displays into tabs, pads and boards, with associated scales (inch, foot, yard), and indeed the size of displays is crucial to their uses[1]. However, it is perhaps better to look at Weiser’s definitions of where and how these are used: tabs are ‘on things’: microwaves, some TV remotes, and old mobile phones (the kind that were primarily for telephoning people!); pads are held and carried, and while decidedly smaller than a foot, the iPhone and similar smartphones are clearly in this category; and finally boards are in the environment, stuck on walls (digital photo frames) and furniture (Microsoft Surface). All can be seen in many homes and offices, although the last have perhaps had the least impact so far, largely because of production costs (foreseen by Weiser), but they are clearly coming on stream.

The crucial thing is that all of these displays do something for us.  They are ubiquitous because they are not invisible.

How to use them

Adam’s book describes the profoundly different user experience of ‘everyware’ interactions (pp. 27–39). Rather than being task-driven by the user, in situations such as automatic room lights “the system precedes the user”: “Whether or not you walk into the room in pursuance of a particular aim or goal, the system’s reaction to your arrival is probably tangential to that goal.” This is what I have previously called ‘incidental interaction’: interactions not driven by the user’s immediate intentions, but by the system’s actions for the user.

Somewhat strangely, the discussion in Everyware then turns to Bellotti et al.’s “Making sense of sensing systems: five questions for designers and researchers”, which summarises some of the key issues of pervasive systems dating back to the 1984 Phone Slave project[2], such as knowing how to address a system that has no keyboard and knowing whether it is attending. These are really important issues, but they are about intentional use: getting the system, albeit embedded in the environment, to do something that the user has decided. At a more device level, Albrecht Schmidt‘s concept of implicit interaction[3] is also in this area, making the interaction to achieve some purpose as natural and invisible as possible using sensors such as accelerometers. This is now common in devices such as the iPhone, but it is still essentially about intentional interaction — again getting the device to do something.
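As a toy illustration of that accelerometer-style implicit interaction (the function, thresholds and axis conventions below are my own assumptions, not Schmidt’s nor any particular phone API): the user never asks for the screen to rotate, the device simply infers it from how it is being held.

```python
def infer_orientation(ax: float, ay: float, az: float) -> str:
    """Guess how the device is held from one accelerometer sample (gravity, in g)."""
    if abs(az) > max(abs(ax), abs(ay)):
        return "flat"                      # lying on a table: gravity mostly along z
    if abs(ay) >= abs(ax):
        # Assumed sign convention: upright portrait reads roughly (0, -1, 0).
        return "portrait" if ay < 0 else "portrait-upside-down"
    return "landscape-left" if ax < 0 else "landscape-right"

# Example: a phone held upright in the hand.
print(infer_orientation(0.02, -0.98, 0.10))   # -> portrait
```

The interaction is implicit, but still intentional in the sense above: the user turns the phone because they want the display a particular way round.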

In contrast, we seem to still have little understanding of the user experience of incidental interactions, where the system spontaneously does things for or to us. These are still rare in the home and office: a few specific devices such as automatic lights, doors and central heating controllers, with little sign of widespread connectivity. The main exceptions seem to be at a very large scale: Amazon book recommendations or traffic management systems.
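The logic of such an incidental interaction is almost trivial to write down; the hard part is the user experience, not the code. A sketch, using entirely hypothetical sensor and lamp objects (no real library is assumed):

```python
import time

def incidental_light(motion_sensor, light_sensor, lamp, idle_timeout=120):
    """Switch the lamp on when someone arrives in a dark room, off again once they leave.

    motion_sensor, light_sensor and lamp are imaginary device objects with
    triggered(), lux(), on(), off() and is_on() methods.
    """
    last_motion = 0.0
    while True:
        now = time.time()
        if motion_sensor.triggered():                  # someone is moving in the room
            last_motion = now
            if not lamp.is_on() and light_sensor.lux() < 50:
                lamp.on()                              # act for the user, unasked
        elif lamp.is_on() and now - last_motion > idle_timeout:
            lamp.off()                                 # no movement for a while
        time.sleep(1)                                  # poll roughly once a second
```

The rule itself is a few lines; what we understand far less well is when this kind of unasked-for behaviour feels helpful rather than intrusive.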

Traffic management is perhaps an interesting comment on ubiquity as, for the ordinary user, the display is simply a red light.

  1. See the recent papers with Corina Sas, “Mobile Personal Devices meet Situated Public Displays: Synergies and Opportunities”, and with Lucia Terrenghi and Aaron Quigley, “A taxonomy for and analysis of multi-person-display ecosystems”, which extend this size classification to larger and smaller displays.
  2. See “Phone Slave: A Graphical Telecommunications Interface” and the YouTube video, although it was the longer envisionment video that raised the user interaction design questions.
  3. Schmidt, A. Implicit Human Computer Interaction Through Context. Personal Technologies, 4(2&3), Springer-Verlag, 191–199, 2000.

One thought on “not quite everywhere”

  1. My feeling with Ubiquitous Computing research is that there has been too much concentration on making the technology work “everywhere” but less effort on figuring out and studying what people would or could do in all these places. And if we want this to be semi-intentional or unwitting, we need some way of discerning the patterns of what people are doing in sensor-rich spaces.
