

Physigrams get their own micro-site!
See it now at physicality.org/physigrams
Appropriate physical design can make the difference between an intuitively obvious device and one that is inscrutable. Physigrams are a way of modelling and analysing the interactive physical characteristics of devices from TV remotes to electric kettles, filling the gap between foam prototypes and code.
Sketches or CAD allow you to model the static physical form of the device, which can be realised in moulded blue foam, 3D printing or cardboard mock-ups. Prototypes of the internal digital behaviour can be produced using tools such as Adobe Animate, proto.io or atomic, or hand-coded using standard web-design tools. The digital behaviour can also be modelled using industry-standard techniques such as UML.
Physigrams allow you to model the ‘device unplugged’ – the pure physical interaction potential of the device: the ways you can interact with buttons, dials and knobs, and how you can open, slide or twist movable elements. These physigrams can be attached to models of the digital behaviour to understand how well the physical and digital design complement one another.
Physigrams were developed some years ago as part of the DEPtH project, a collaboration between product designers at Cardiff School of Art and Design and computer scientists at Lancaster University. Physigrams have been described in various papers over the years. However, with TouchIT, our book on physicality and design, (eventually!) reaching completion and due out next year, it seemed that physigrams deserved a home of their own on the web.
The physigram micro-site, part of physicality.org, includes descriptions of physical interaction properties, a complete key to the physigram notation, and many examples of physigrams in action, from light switches to complete control panels and novel devices.
How long is an instant? The answer, of course, is ‘it depends’, but I’ve been finding it fascinating playing with the demo page for AngularJS tooltips and seeing what feels like ‘instant’ for a tooltip.
The demo allows you to adjust the md-delay property so you can change the delay between hovering over a button and the tooltip appearing, and then instantly see what that feels like.
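For reference, the markup involved is roughly as follows; this is only a sketch, assuming the demo is built on the Angular Material md-tooltip directive (the button text and the 300ms value are just for illustration):

<md-button>
  Submit
  <!-- md-delay is the wait, in milliseconds, before the tooltip appears -->
  <md-tooltip md-delay="300">Saves your changes</md-tooltip>
</md-button>

Nudging that one number up and down is all it takes to move a tooltip between feeling ‘instant’ and feeling sluggish.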
I was recently asked to clarify the difference between usability principles and guidelines. Having written a page-full of answer, I thought it was worth popping on the blog.
As with many things the boundary between the two is not absolute … and also the term ‘guidelines’ tends to get used differently at different times!
However, as a general rule of thumb, principles tend to be broad and abstract, while guidelines are more specific and often tied to a particular platform or manufacturer.
As an example of the latter, look at the iOS Human Interface Guidelines on “Adaptivity and Layout”. It starts with a general principle:
“People generally want to use their favorite apps on all their devices and in multiple contexts”,
but then rapidly turns that into more mobile-specific, and then iOS-specific, guidelines, talking first about different screen orientations, and then about specific iOS screen size classes.
I note that the definition on page 259 of Chapter 7 of the HCI textbook is slightly ambiguous. When it says that guidelines are less authoritative and more general in application, it means in comparison to standards … although I’d now add a few caveats for the latter too!
Basically in terms of ‘authority’, from low to high:
| authority | | |
|---|---|---|
| lowest | principles | agreed by community, but not mandated |
| | guidelines | proposed by manufacturer, but rarely enforced |
| highest | standards | mandated by standards authority |
In terms of general applicability, high to low:
| generality | | |
|---|---|---|
| highest | principles | very broad, e.g. ‘observability’ |
| | guidelines | more specific, but still allowing interpretation |
| lowest | standards | very tight |
This ‘generality of application’ dimension is a little more complex, as guidelines are often manufacturer-specific and so arguably less ‘generally applicable’ than standards, but the range of situations that standards apply to is usually much tighter.
On the whole the more specific the rules, the easier they are to apply. For example, the general principle of observability requires that the designer think about how it applies in each new application and situation. In contrast, a more specific rule that says, “always show the current editing state in the top right of the screen” is easy to apply, but tells you nothing about other aspects of system state.
We have become used to being able to zoom into every document, picture and map, but part of the cartographer’s skill is putting the right information at the right level of detail. If you took area maps and simply scaled them down, they would not make a good road atlas: the main motorways would hardly be visible, and the rest would look as if a spider had walked all over it. Similarly, if you zoomed into a road atlas you would discover that the narrow blue line of each motorway is in fact half a mile wide on the ground.
Nowadays we all use online maps that try to do this automatically. Sometimes this works … and sometimes it doesn’t.
Here are three successive views of Google maps focused on Bournemouth on the south coast of England.
On the first view we see Bournemouth clearly marked, and on the next, zooming in a little, Poole, Christchurch and some smaller places also appear. So far, so good: as we zoom in further, more local names are shown as well as the larger places.
However, zoom in one more level and something weird happens: Bournemouth disappears. Poole and Christchurch are there, but no Bournemouth.
However, looking at the same zoom level in another browser, Bournemouth is still there:
The difference between the two is the Hotel Miramar. In the first browser I am logged into Google mail, and so Google ‘knows’ I am booked to stay at the Hotel Miramar (presumably by scanning my email), and decides to display this too. The labels for Bournemouth and the hotel overlap, so Google simply omitted the Bournemouth one as less important than the hotel I am due to stay in.
A human map maker would undoubtedly have simply shifted the name ‘Bournemouth’ up a bit, knowing that it refers to the whole town. In principle, Google maps could do the same, but typically geocoding (e.g. GeoNames) simply gives a point for each location rather than an area, so it is not easy for the software to make such adjustments … except that Google clearly knows Bournemouth is ‘big’, as it is displayed on the first, zoomed-out view; so maybe it could have done better.
This problem of overlapping legends will be familiar to anyone involved in visualisation whether map based or more abstract.
The image above is the original Cone Tree hierarchy browser developed by Xerox PARC in the early 1990s1. This was in the early days of interactive 3D visualisation, and the Cone Tree exploited many of its advantages, such as a larger effective ‘space’ in which to place objects, and shadows giving both depth perception and a level of overview. However, there was no room for text labels without them all running over each other.
Enter the Cam Tree:
The Cam Tree is identical to the Cone Tree, except that, because it is on its side, it is easier to place labels without them overlapping 🙂
Of course, with the Cam Tree the regularity of the layout makes it easy to have a single solution. The problem with maps is that labels can appear anywhere.
This is an image of a particularly cluttered part of the Frasan mobile heritage app developed for the An Iodhlann archive on Tiree. Multiple labels overlap, making them unreadable. I should note that this large number of names only appears when the map is zoomed in, but when they do appear, there are clearly too many.
It is far from clear how best to deal with this. The Google solution was simply not to show some things, but as we’ve seen that can be confusing.
Another option would be to make the level of detail that appears depend not just on the zoom, but also on the local density. In the Frasan map the locations of artefacts are not shown when zoomed out and only appear when zoomed in; they could instead appear, at first, only in the less cluttered areas, and in busier areas only when the map is zoomed in sufficiently for them to space out. This would trade clutter for inconsistency, but might be worthwhile. The bigger problem would be knowing whether there were more things to see.
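As a rough illustration of the idea (my own sketch, not the Frasan code), labels could be thinned greedily wherever a more important label is already shown too close by; here each label is assumed to have a position and an importance:

// density-dependent labelling: a label is shown only if nothing more important is already shown too close
function visibleLabels(labels, zoom) {
  const minGap = 200 / Math.pow(2, zoom);          // required spacing in map units; shrinks as you zoom in
  const shown = [];
  for (const label of [...labels].sort((a, b) => b.importance - a.importance)) {
    const crowded = shown.some(s => Math.hypot(s.x - label.x, s.y - label.y) < minGap);
    if (!crowded) shown.push(label);               // sparse areas show everything, busy areas thin out
  }
  return shown;
}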
Another solution is to group things in busy areas. The two maps below are from house-listing sites. The first is Rightmove, which uses a Google map in its map view; note how the house icons all overlap one another. Of course, the nature of houses means that if you zoom in sufficiently they start to separate, but the initial view is very cluttered. The second is daft.ie; note how some houses are shown individually, but when they get too close they are grouped together and just the number of houses in the group is shown.
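The grouping itself can be quite simple. Here is a minimal sketch (hypothetical, not daft.ie’s actual code) that buckets markers into grid cells sized by the zoom level, keeping an individual icon for a lone house and a count for a crowded cell:

// hypothetical grouping sketch: bucket markers into grid cells that shrink as the user zooms in
function groupMarkers(markers, zoom) {
  const cell = 360 / Math.pow(2, zoom + 2);        // rough cell size in degrees
  const buckets = new Map();
  for (const m of markers) {
    const key = Math.floor(m.lng / cell) + ':' + Math.floor(m.lat / cell);
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key).push(m);
  }
  return [...buckets.values()].map(group =>
    group.length === 1
      ? { type: 'icon', marker: group[0] }         // a lone house keeps its own icon
      : { type: 'count',                           // a crowded cell shows just the number of houses
          count: group.length,
          lat: group.reduce((s, m) => s + m.lat, 0) / group.length,
          lng: group.reduce((s, m) => s + m.lng, 0) / group.length });
}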
A few years ago, Geoff Ellis and I reviewed a number of clutter reduction techniques2, each with advantages and disadvantages; there is no single ‘best’ answer. The daft.ie grouping solution works for icons, which are small and of fixed size; the text label layout problem is far harder!
Maybe someday these automatic tools will be able to cope with the full variety of layout problems that arise, but for the time being this is one area where human cartographers still know best.
Recently, I was asked for any tips or suggestions for stakeholder interviews. I realised it was going to be more than would fit in the response to an IM message!
I’ll assume that this is purely for requirements gathering. For participatory or co-design, many of the same things hold, but there would be additional activities.
See also HCI book chapter 5: interaction design basics and chapter 13: socio-organizational issues and stakeholder requirements.
First remember:
People also find it easier to articulate ‘what’ compared with ‘why’ knowledge:
Most of us think best when we have concrete examples or situations to draw on, even if we are using these to describe more abstract concepts.
As noted, the stakeholder’s tacit knowledge may be the most important. By seeking out or deliberately creating odd or unusual situations, we may be able to break out of this blindness to the normal.
Of course some of these, notably fantasy scenarios, may work better in some organisations than others!
You need to make sense of all that interview data!
If possible, present these back to those involved; even if people are unaware of certain things they do or think, once these are presented to them the floodgates open! If your stakeholders are hard to interview, perhaps because they are senior, or far away, or because you only have limited access, then if possible do some level of analysis midway so that you can adjust later interviews based on earlier ones.
Neither you nor your interviewees have unlimited time; you need to have a clear idea of the most important things to learn – whilst of course keeping an open ear for things that are unexpected!
If possible, plan time for a second round with some or all of the interviewees after you have had a chance to analyse the first round. This is especially important as you may not know what is important until that stage!
You may not have total freedom in who you see, what you ask or how it is reported, but in so far as is possible (and perhaps refuse to proceed unless it is), respect the privacy and personhood of those with whom you interact.
This is partly about good professional practice, but it is also about efficacy: if interviewees know that what they say will only be reported anonymously, they are more likely to tell you about the unofficial as well as the official practices! If you need to argue for good practice, the latter argument may hold more sway than the former!
In your reporting, do try to make sure that any accounts you give of individuals are ones they would be happy to hear. There may be humorous or strange stories, but make sure you laugh with not at your subjects. Even if no one else recognises them, they may well recognise themselves.
Of course, do ensure that you are totally honest from the outset in explaining what will and will not be relayed to management, colleagues, external publications, etc. Depending on the circumstances, you may allow interviewees to redact parts of an interview transcript, and/or to review and approve parts of a report pertaining to them.
I was looking at Coca-Cola’s Rugby World Cup site1. On the all-red web page a tooltip stood out, with the uninformative text, “headimg”.
Peeking in the HTML, this is in both the title and alt attributes of the image.
<img title="headimg" alt="headimg" class="cq-dd-image" src="/content/promotions/nwen/....png">
I am guessing that the web designer was aware of the need for an alt tag for accessibility, and may even have been prompted to fill it in by the design software (Dreamweaver does this). However, perhaps they just couldn’t think of an alternative text and so put anything in (although, as the image consists of text, this does betray a certain lack of imagination!); they probably planned to come back later to do it properly.
As the micro-site is predominantly targeted at the UK, Coca-Cola are legally bound to make it accessible and so may well have run it through WCAG accessibility-checking software. As the alt tag was present, the page will have passed W3C validation, even though the text is meaningless. Indeed, the web designer might have added the unhelpful text just to get the page to validate.
The eventual alt text is worse than useless: a blank alt tag would have meant the image was simply skipped, and at least the text “header image” would have been read as words, whereas “headimg” will be spelt out letter by letter.
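For comparison, either of the following would have been preferable; the second uses invented wording, since I can only guess what the image actually showed:

<!-- treat the image as purely decorative: an empty alt means screen readers skip it entirely -->
<img alt="" class="cq-dd-image" src="/content/promotions/nwen/....png">

<!-- or give a genuine description (wording invented here for illustration) -->
<img alt="Coca-Cola Rugby World Cup promotion" class="cq-dd-image" src="/content/promotions/nwen/....png">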
Perhaps I am being unfair; I’m sure many of my own pages are worse than this … but then again I don’t have the budget of Coca-Cola!
More seriously, there are important lessons for process. In particular, it is very likely that at the point the designer uploads an image they are prompted for the alt tag; this certainly happens with Dreamweaver. However, at that point the designer’s focus is on getting the page looking right, and the client looking at the initial designs is unlikely to be using a screen reader.
Good design software should prompt not just for the right information, but at the right time. It would be far better to make it easy to say “ask me later” and build up a to-do list, rather than demand the information when the system wants it and risk the user entering anything just to ‘keep the system quiet’.
I call this the Micawber principle2, and it is a good general principle for any notification requiring user action. Always allow the user to put things off, but have the application keep track of pending work, and make it easy for the user to see what needs to be done at a more suitable time.
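In code the pattern is tiny; the sketch below is entirely hypothetical (including the askUserNow dialog helper), but shows the shape of it: a prompt can be deferred onto a pending list instead of forcing an answer on the spot.

// Micawber principle: let the user say "ask me later", but keep track of what is pending
const pendingTasks = [];

function requestInfo(prompt, onComplete) {
  const answer = askUserNow(prompt);               // hypothetical dialog that offers an "ask me later" option
  if (answer === 'LATER') {
    pendingTasks.push({ prompt, onComplete });     // remember the task rather than nagging or accepting junk
  } else {
    onComplete(answer);
  }
}

function pendingToDoList() {
  return pendingTasks;                             // surfaced when the user chooses, e.g. before publishing
}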
Spring has definitely come to Tiree and in the sunshine I took my second run of the year. On Soroby beach I met someone else out running and we chatted as we ran. It reminded me of another run two years ago …
It was the spring of 2013 and a busy Tiree Tech Wave, with the launch of Frasan on the Saturday evening. A group had come from the Catalyst project in Lancaster, including Maria Ferrario, and as she had mentioned running when she arrived, I said I’d do a run with her. Only later did I discover that her level of running was somewhat daunting: she competes in marathons with times that made me wonder if I’d survive the outing.
Happily, Maria modified her pace to reflect my abilities, and we took a short run from the Rural Centre to Chocolates and Charms (good to have a destination), indirectly via Soroby Beach, where I ran today.
Running across the sand we talked about smart grids, and the need to synchronise energy use with renewable supply, and from the conversation the seeds of an idea grew.
I started my walk round Wales almost immediately after (with the small matter of my daughter’s wedding in between), but Maria went back to Lancaster and talked to Adrian Friday, who put together a project proposal (with the occasional, very slow email interchange when I could get Internet connections). Towards the end of the summer we heard we had been short-listed and I joined Adrian via Skype for an interview in July.
… and we were successful 🙂
The OnSupply project was born.
OnSupply was a sub-project of the Lancaster Catalyst project. The wider Catalyst project’s aims were to understand better the processes by which advanced technology could be used by communities. OnSupply was the main activity for nine months of the last year of Catalyst.
OnSupply itself was focused on how people can better understand the availability of renewable energy. Our current model of energy production assumes electricity is always available ‘on demand’, and that the power generation companies’ job is to provide it when wanted. However, renewable energy does not come when we want it, but when the wind blows, the tides run and the sun shines. That is, in the future we need to shift to a model where energy is used when it is available: ‘on supply’ rather than ‘on demand’.
The Lancaster team, led by Adrian, consisted of four full-time researchers: Will, Steve, Peter and, of course, Maria. The other project partners were Tiree Tech Wave, the Tiree Development Trust, Goldsmiths University, and Rory Gianni, an independent developer based in Scotland specialising in environmental issues.
The choice of Tiree was of course partly because of Tiree Tech Wave and my presence here, but also because of Tilly, the Tiree community wind turbine, and the slightly parlous state of the electricity cable between Tiree and the mainland. In many ways being on the island is just like being on the mainland: you flick the switch and electricity is there. While Tilly can provide nearly a megawatt at full capacity, this simply feeds into the grid, just like the wind farms you see over many hillsides.
However, there is also an extent to which we, as an island population, are more sensitised to issues of electricity and renewable energy.
First is the presence of Tilly, which can be seen from much of the island; while the power goes into the grid, when she turns this generates income, which funds various island projects and groups.
But the same wind that drives Tilly (incidentally the most productive land-based turbine in the UK) shakes power lines and, at its wildest, causes shorts and breakages. The fragile power supply shortens the lifetime of the sophisticated wireless routers that provide broadband to half the island, and damages fridge compressors.
Furthermore, the ageing sea cable (now happily replaced) frequently broke, so that island power was provided for months at a time by a backup diesel generator. As well as filling the ferry with oil tankers, the generator cannot cope with the fluctuating power from Tilly, and so for months she is braked, meaning no electricity and so no money.
So, in some ways, this is a community perfect for investigating issues of awareness of energy production: sensitised enough that it will be easier to see impact, but similar enough to those on the mainland that lessons learnt can be transferred.
The project itself proceeded through a number of workshops and iterative stages, with prototypes designed to provoke discussions and engagement. My favourites were machines that delivered brightly coloured ping-pong balls as part of a game to explore energy uses, and wonderful self-assembly kits for the children, incorporating a wind and solar energy gauge.
The project culminated in a display at the Tiree Agricultural Show.
While OnSupply finished last summer, the reporting continues, and a few weeks ago a paper about the project, to be presented at the CHI’2015 conference in South Korea in April, was given a best paper award.
… and all this from a run on the beach.
The eighth Tiree Tech Wave is just over two weeks away. We have some participants coming from GRAND NCE, Canada’s Digital Media Research Network, as well as those closer to home, including the Code for Europe Fellows working on Nesta’s Open Data Scotland project.
There will be the normal open agenda, and also a few special activities. Jacqui Bennet has a little friendly competition planned and Steve Foreshaw from Lancaster will run a workshop on using low-cost 3D scanners, which we hope to then use to scan some of the lug boats around the island in collaboration with the Tiree Maritime Trust.
FabLab Cardiff are bringing a sort of mini-FabLab-in-a-van. During the Tech Wave they will be making things themselves, including re-installing the Tiree touchable in a glorious new enclosure. They will also run some short tutorial/workshops on using some of the equipment for TTW attendees and Tiree locals.
Although time is getting tight, I am hoping we might also have a couple of MicroViews, a miniature Arduino with built in OLED display. I ordered a Learning Kit through their Kickstarter campaign with two MicroViews (Blinking Eyes), so looking forward to some winking teddy bears 🙂 After being ahead of schedule, they had a slight production problem with their second batch, and TTW is in the third batch, so keeping fingers crossed, but, if not this time, certainly at the spring 2015 TTW.
I recently wrote about problems with a slightly too smart scroll bar, and Google periodically change something in Gmail which means you have to horizontally scroll the page to get hold of the vertical scroll bar.
I just came across another beautiful (read terrible) example today.
I was looking at the “Learning Curve”, a blogspot blog, so presumably using a blogspot theme option. On the right-hand side was a funky pull-out navigation (below left), but unfortunately, look what it does to the scroll bar (below right)!
This is an example of the ‘inaccessible scrollbar’ that I mention in “CSS considered harmful”, and I explain there the reason it arises.
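I haven’t dissected the actual theme, but the effect typically arises from CSS along these lines (a sketch with invented class names): the content scrolls inside an inner wrapper rather than the page itself, so its scrollbar is drawn within the page area, and the pull-out is fixed over the right edge on top of it.

/* the page content scrolls in an inner wrapper, so its scrollbar sits inside the page area */
.content-wrapper {
  height: 100vh;
  overflow-y: auto;
}

/* the pull-out hugs the right edge with a higher z-index, covering that scrollbar */
.pullout-nav {
  position: fixed;
  top: 20%;
  right: 0;
  z-index: 100;
}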
The amazing thing is that this fails equally in all (MacOS) browsers: Safari, Firefox and Chrome; yet it must be a standard blogspot feature.
One last vignette: as I looked at the above screenshots I realised that there is in fact a one-pixel sliver of the scroll handle still visible to the left of the pull-out navigation. I went back to the web page and tried to select it … unfortunately, I guess in order to create a larger and easier-to-select ‘hot area’, the pull-out pops out as you move your mouse towards the scroll bar … so that one pixel of scrollbar tantalises, but is unselectable 🙁