If you do accessibility, please do it properly

I was looking at Coca-Cola’s Rugby World Cup site1.

On the all-red web page the tooltip stood out, with the uninformative text, “headimg”.

[screenshot: zoomed view of the Coca-Cola rugby page showing the “headimg” tooltip]

Peeking at the HTML, the same text appears in both the title and alt attributes of the image.

<img title="headimg" alt="headimg" class="cq-dd-image" 
     src="/content/promotions/nwen/....png">

I am guessing that the web designer was aware of the need for an alt tag for accessibility, and may even have been prompted to fill in the alt tag by the design software (Dreamweaver does this).  However, perhaps they just couldn’t think of an alternative text and so put anything in (although as the image consists of text, this does betray a certain lack of imagination!); they probably planned to come back later to do it properly.

As the micro-site is predominantly targeted at the UK, Coca-Cola are legally bound to make it accessible and so may well have run it through WCAG accessibility checking software.  As the alt tag was present it will have passed W3C validation, even though the text is meaningless.  Indeed the web designer might have added the unhelpful text just to get the page to validate.

The eventual page is worse than useless: a blank alt tag would have meant the image was simply skipped, and at least the text “header image” would have been read as words, whereas “headimg” will be spelt out letter by letter.
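By way of illustration, here are the two better options (a sketch only; I am guessing that the header image carries the site’s promotional strapline, and the elided src path is as in the original):

    <!-- Option 1: treat the image as decorative; an empty alt means
         screen readers skip it entirely -->
    <img alt="" class="cq-dd-image" src="/content/promotions/nwen/....png">

    <!-- Option 2: describe the text in the image, so screen-reader users
         get the same message as everyone else -->
    <img alt="Win one of up to 1 million exclusive Gilbert rugby balls"
         class="cq-dd-image" src="/content/promotions/nwen/....png">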

Perhaps I am being unfair; I’m sure many of my own pages are worse than this … but then again I don’t have the budget of Coca-Cola!

More seriously, there are important lessons for process.  In particular it is very likely that at the point the designer uploads an image they are prompted for the alt tag — this certainly happens with Dreamweaver.  However, at that point your focus is on getting the page looking right, as the client looking at the initial designs is unlikely to be using a screen reader.

Good design software should not just prompt for the right information, but at the right time.  It would be far better to make it easy to say “ask me later” and build up a to-do list, rather than demand the information when the system wants it, and risk the user entering anything just to ‘keep the system quiet’.

I call this the Micawber principle2 and it is a good general principle for any notifications requiring user action.  Always allow the user to put things off, but also have the application keep track of pending work, and then make it easy for the user to see what needs to be done at a more suitable time.
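As a concrete sketch of the principle (a hypothetical API in TypeScript, not any particular product’s): a prompt the user defers goes onto a visible pending list instead of blocking the workflow or forcing a junk answer.

    interface PendingTask {
      id: string;
      prompt: string;    // e.g. "Add alt text for header image"
      createdAt: Date;
    }

    class MicawberQueue {
      private tasks: PendingTask[] = [];

      // Called when the app wants information; the user may answer now or defer.
      ask(id: string, prompt: string, answer?: string): void {
        if (answer !== undefined) {
          console.log(`Recorded ${id}: ${answer}`);
        } else {
          this.tasks.push({ id, prompt, createdAt: new Date() }); // "ask me later"
        }
      }

      // Surface outstanding work when the user chooses a suitable time.
      pending(): PendingTask[] {
        return [...this.tasks];
      }

      // Resolve a deferred item from the to-do list.
      complete(id: string, answer: string): void {
        this.tasks = this.tasks.filter(t => t.id !== id);
        console.log(`Recorded ${id}: ${answer}`);
      }
    }

    // Usage: the designer uploads an image but defers the alt text ...
    const queue = new MicawberQueue();
    queue.ask("header-image-alt", "Provide alt text for the header image");
    // ... and later reviews the to-do list at a more suitable time.
    console.log(queue.pending());
    queue.complete("header-image-alt", "Win one of up to 1 million Gilbert rugby balls");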

  1. Largely because I was fascinated by the semantically questionable statement “Win one of up to 1 million exclusive Gilbert rugby balls.” (my emphasis).[back]
  2. From Dickens’s Mr Micawber, who was an arch procrastinator.  See Learning Analytics for the Academic: An Action Perspective, where I discuss this principle in the context of academic use of learning analytics.[back]

REF Redux 1 – UK research assessment for computing; what it means and is it right?

REF is the five-yearly exercise to assess the quality of UK university research, the results of which are crucial for both funding and prestige. In 2014, I served on the sub-panel that assessed computing submissions. Since the publication of the results, I have been using public domain data from the REF process in order to validate the results using citation data.

The results have been alarming, suggesting that, despite the panel’s best efforts to be fair, there was in fact significant bias, both in terms of areas of computer science and types of universities.  Furthermore, the first of these is also likely to have led to unintentional emergent gender bias.

I’ve presented results of this at a bibliometrics workshop at WebSci 2015 and at a panel at the British HCI conference a couple of weeks ago. However, I am aware that the full data and spreadsheets can be hard to read, so in a couple of posts I’ll try to bring out the main issues. A report and mini-site describe the methods used in detail, so in these posts I will concentrate on the results and implications, starting in this post by setting the scene: how REF ranked sub-areas of computing, and the use of citations to validate the process. The next post will look at how UK computing sits amongst world research, and whether this agrees with the REF assessment.

Few in UK computing departments will not have seen the ranking list produced as part of the final report of the computing REF panel.

[table: REF computing topic areas ranked by percentage of 4* outputs, from the sub-panel’s final report]

Here topic areas are ranked by the percentage of 4* outputs (the highest rank). Top of the list is Cryptography, with over 45% of outputs ranked 4*. The top of the list is dominated by theoretical computing areas, with 30-40% 4*, whilst the more applied and human areas are at the lower end with less than 20% 4*. Human-centred computing and collaborative computing, the areas where most HCI papers would be placed, are pretty much at the bottom of the list, with 10% and 8.8% of 4* papers respectively.

Even before this list was formally published I had a phone call from someone in an institution where knowledge of it had obviously leaked.  Their department was interviewing for a lectureship, and the question being asked was whether they should be recruiting candidates from HCI at all, as this would clearly not look good come REF 2020.

Since then I have heard of numerous institutions who are questioning the value of supporting these more applied areas, due to their apparent poor showing under REF.

In fact, even taken at face value, the data says nothing at all about the value in particular departments, and the sub-panel report includes the warning “These data should be treated with circumspection”.

There are three possible reasons, any or all of which could give rise to the data:

  1. the best applied work is weak — including HCI :-/
  2. long tail — weak researchers choose applied areas
  3. latent bias — despite the panel’s efforts to be fair

I realised that citation data could help disentangle these.

There has been understandable resistance against using metrics as part of research assessment. However, that is about their use to assess individuals or small groups. There is general agreement that citation-based metrics are a good measure of research quality en masse; indeed I believe HEFCE are using citations to verify between-panel differences in 4* allocations, and in Morris Sloman’s post-REF analysis slides (where the table above first appeared), he also uses the overall correlation between citations and REF scores as a positive validation of the process.

The public domain REF data does not include the actual scores given to each output, but does include citation data provided by Scopus in 2013. In addition, for Morris’ analysis in late 2014, Richard Mortier (then at Nottingham, now at Cambridge) collected Google Scholar citations for all REF outputs.

Together, these allow detailed citation-based analysis to verify (or otherwise) the validity of the REF outputs for computer science.
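To make the en-masse idea concrete, here is a minimal sketch (TypeScript, with made-up placeholder numbers rather than the real REF data) of the kind of rank correlation involved: if sub-areas that score well under REF also attract more citations, the rank correlation will be high.

    // Spearman rank correlation between per-area REF scores and citations.
    // Simplified: ties in the data are ignored.
    function ranks(xs: number[]): number[] {
      const order = xs.map((v, i) => [v, i] as [number, number])
                      .sort((a, b) => a[0] - b[0]);
      const r = new Array<number>(xs.length).fill(0);
      order.forEach(([, idx], rank) => { r[idx] = rank + 1; });
      return r;
    }

    function spearman(xs: number[], ys: number[]): number {
      const rx = ranks(xs), ry = ranks(ys);
      const n = xs.length;
      const d2 = rx.reduce((sum, r, i) => sum + (r - ry[i]) ** 2, 0);
      return 1 - (6 * d2) / (n * (n * n - 1));
    }

    // Placeholder per-area aggregates: %4* outputs vs median citation counts.
    const pct4star    = [45, 35, 30, 20, 10, 8.8];
    const medianCites = [40, 30, 28, 22, 12, 9];
    console.log(spearman(pct4star, medianCites)); // near 1 => REF and citations agree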

I’ll go into details in following posts, but suffice it to say the results were alarming, showing that, whatever other effects may have played a part, and despite the very best efforts of all involved, very large latent bias clearly emerged during the process.

If the light is on, they can hear (and now see) you

[image: Hello Barbie doll, from the Guardian]

Following Samsung’s warning that its television sets can listen in to your conversations1, and Mattel’s even scarier Barbie doll that listens to children in their homes and broadcasts this to the internet2, the latest ‘advances’ make it possible to be seen even when the curtains are closed and you thought you were private.

For many years it has been possible for security services, or for that matter sophisticated industrial espionage, to pick up sound using incandescent light bulbs.

The technology itself is not that complicated: vibrations in the room are transmitted to the filament, which minutely changes its electrical characteristics. The only complication is extracting the high-frequency signal from the power line.
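To give a flavour of that extraction (a toy TypeScript sketch on synthetic data, emphatically not real surveillance code): a simple high-pass filter suppresses the 50 Hz mains fundamental, leaving the faint audio-band modulation.

    // One-pole high-pass filter: removes the mains fundamental, keeps the audio band.
    function highPass(samples: number[], sampleRate: number, cutoffHz: number): number[] {
      const rc = 1 / (2 * Math.PI * cutoffHz);
      const dt = 1 / sampleRate;
      const alpha = rc / (rc + dt);
      const out: number[] = [samples[0]];
      for (let i = 1; i < samples.length; i++) {
        out.push(alpha * (out[i - 1] + samples[i] - samples[i - 1]));
      }
      return out;
    }

    // Synthetic test: strong 50 Hz mains hum plus a faint 1 kHz 'voice' component.
    const rate = 8000;
    const mixed = Array.from({ length: rate }, (_, i) =>
      Math.sin(2 * Math.PI * 50 * (i / rate)) +           // mains hum
      0.001 * Math.sin(2 * Math.PI * 1000 * (i / rate))); // tiny audio signal
    const recovered = highPass(mixed, rate, 300);         // hum largely removed

A real system would also have to notch out the mains harmonics and cope with far weaker signals; the principle, though, is just filtering and demodulation.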

However, this is a fairly normal challenge for high-end listening devices. Years ago, when I was working with submarine designers at Slingsby, we were using the magnetic signature of power running through undersea cables to detect where they were for repair. The magnetic signatures were up to 10,000 times weaker than the ‘noise’ from the Earth’s own magnetic field, but we were able to detect the cables with pin-point accuracy3. Military technology for this is far more advanced.

The main problem is the raw computational power needed to process the mass of data coming from even a single lightbulb, but that has never been a barrier for GCHQ or the NSA, and indeed, with cheap RaspberryPi-based super-computers, it is now not far from the hobbyist’s budget4.

The fact that each lightbulb reacts slightly differently to sound means that it is, in principle, possible not only to listen in to conversations, but to work out which house and room they come from, simply by adding listening equipment at a neighbourhood sub-station.

The benefits of this to security services are obvious. Whereas planting bugs involves access to a building, and all other techniques involve at least some level of targeting, lightbulb-based monitoring could simply be installed, for example, in a neighbourhood known for extremist views and programmed to listen for key words such as ‘explosive’.

For a while, it seemed that the increasing popularity of LED lightbulbs might end this. This is not because LEDs do not have an electrical response to vibrations, but because of the 12V step-down transformers between the light and the mains.

Of course, there are plenty of other ways to listen in to someone in their home, from obvious bugs to laser beams bounced off glass (you can even get plans to build one of your own at Instructables), or even, as MIT researchers recently demonstrated at SIGGRAPH, recovering sound from video of the vibrations of a glass of water, a crisp packet, and even the leaves of a potted plant5. However, these are all much more active and involve having an explicit suspect.

Similarly, blanket internet and telephone monitoring have applications, as was used for a period to track Osama bin Laden’s movements6, but net-savvy terrorists and criminals are able to use encryption or bypass the net entirely by exchanging USB sticks.

However, while the transformer attenuates the acoustic back-signal from LEDs, overcoming this only takes more sensitive listening equipment and more computation, still a lot easier than a vibrating pot-plant on video!

So you might just think to turn up the radio, or talk in a whisper. Of course, as you’ve guessed by now, and as with all these surveillance techniques, defeating that takes simply yet more computation.

Once the barriers of LEDs are overcome, they hold another surprise. Every LED not only emits light, but acts as a tiny, albeit inefficient, light detector (there’s even an Arduino project that uses this principle).  The output is a small change in DC current, which is hard to localise, but ambient sound vibrations act as a modulator, allowing, again in principle, the signal to be both detected remotely and localised.

[image: 60-LED 3W spotlight bulb]

If you have several LEDs, they can be used to make a rudimentary camera7. Each LED lightbulb uses a small array of LEDs to create a bright enough light. So, this effectively becomes a very-low-resolution video camera, a bit like a fly’s compound eye.

While each image is of very low quality, any movement, either of the light itself (hanging pendant lights are especially good) or of objects in the room, can improve the image. This is rather like the principle we used in the FireFly display8, where text mapped onto a very low-resolution LED pixel display is unreadable when stationary, but absolutely clear when moving.

[images: four simulated very-low-resolution frames]
LEDs produce multiple very-low-resolution image views due to small vibrations and movement9.

[images: original scene and reconstructed result]
Sufficient images and processing can recover an image.
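The recovery step is essentially the classic ‘shift-and-add’ super-resolution idea; here is a toy TypeScript sketch (assuming, unrealistically, that the sub-pixel shifts between frames are known exactly; real processing would have to estimate them):

    // Combine low-res frames, each offset by a known shift measured in
    // high-res pixels, onto a finer grid, averaging where samples overlap.
    type Frame = { pixels: number[][]; dx: number; dy: number };

    function shiftAndAdd(frames: Frame[], factor: number): number[][] {
      const h = frames[0].pixels.length * factor;
      const w = frames[0].pixels[0].length * factor;
      const sum = Array.from({ length: h }, () => new Array<number>(w).fill(0));
      const count = Array.from({ length: h }, () => new Array<number>(w).fill(0));
      for (const { pixels, dx, dy } of frames) {
        pixels.forEach((row, y) => row.forEach((v, x) => {
          const hy = y * factor + dy, hx = x * factor + dx;
          if (hy >= 0 && hy < h && hx >= 0 && hx < w) {
            sum[hy][hx] += v;
            count[hy][hx] += 1;
          }
        }));
      }
      // Average overlapping samples; unvisited cells stay 0.
      return sum.map((row, y) => row.map((v, x) => (count[y][x] ? v / count[y][x] : 0)));
    }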

So far MI5 has not commented on whether it uses, or plans to use, this technology itself, nor whether it has benefited from information gathered using it by other agencies. Of course its usual response is to ‘neither confirm nor deny’ such things, so without another Edward Snowden, we will probably never know.

So, next time you sit with a coffee in your living room, be careful what you do: the light is watching you.

  1. Not in front of the telly: Warning over ‘listening’ TV. BBC News, 9 Feb 2015. http://www.bbc.co.uk/news/technology-31296188[back]
  2. Privacy fears over ‘smart’ Barbie that can listen to your kids. Samuel Gibbs, The Guardian, 13 March 2015. http://www.theguardian.com/technology/2015/mar/13/smart-barbie-that-can-listen-to-your-kids-privacy-fears-mattel[back]
  3. “Three DSP tricks”, Alan Dix, 1998. https://alandix.com/academic/papers/DSP99/DSP99-full.html[back]
  4. “Raspberry Pi at Southampton: Steps to make a Raspberry Pi Supercomputer”, http://www.southampton.ac.uk/~sjc/raspberrypi/[back]
  5. A. Davis, M. Rubinstein, N. Wadhwa, G. Mysore, F. Durand and W. Freeman (2014). The Visual Microphone: Passive Recovery of Sound from Video. ACM Transactions on Graphics (Proc. SIGGRAPH), 33(4):79:1–79:10 http://people.csail.mit.edu/mrub/VisualMic/[back]
  6. Tracking Use of Bin Laden’s Satellite Phone, Evan Perez, Wall Street Journal, 28th May 2008. http://blogs.wsj.com/washwire/2008/05/28/tracking-use-of-bin-ladens-satellite-phone/[back]
  7. Blinkenlight, LED Camera. http://blog.blinkenlight.net/experiments/measurements/led-camera/[back]
  8. Angie Chandler, Joe Finney, Carl Lewis, and Alan Dix. 2009. Toward emergent technology for blended public displays. In Proceedings of the 11th international conference on Ubiquitous computing (UbiComp ’09). ACM, New York, NY, USA, 101-104. DOI=10.1145/1620545.1620562[back]
  9. Note using simulated images; getting some real ones may be my next Tiree Tech Wave project.[back]

CHI Academy … a Faustian bargain?

I am on a short excursion from walking Wales to CHI 2013 in Paris.

Last night I was inducted into the SIGCHI Academy. No great fanfares or anything, just a select dinner and a short ceremony. I thought Gerrit, the current SIGCHI chair, would look good with a sword dubbing each person, but instead there was just a plaque, a handshake and a photo or two by Ben Shneiderman, amanuensis of CHI.

I feel in two minds. On the one hand there are many lovely people in the CHI community, and I spent a great afternoon and evening chatting to folk including Philippe Palanque, Mary Czerwinski and Hiroshi Ishii over dinner. However, ACM, and to some extent SIGCHI, often appear like the Star Wars imperial forces, intent on global domination.

The CEO of ACM did little to dispel this at the opening ceremony this morning.  He spoke of ACM’s international aspirations and praised CHI for regularly having its conferences outside of the US.

Now ACM is the de facto international computing organisation and CHI is the de facto international conference in human–computer interaction, but by virtue of the fact that they are the US ones.  In principle, IFIP and Interact are the international computing organisation and HCI conference respectively, as IFIP is the UNESCO-founded body of which ACM and other national computing bodies, such as the BCS in the UK, are members.  Interact, the HCI conference sponsored by IFIP, is truly international, having been held in numerous countries over the years (but I think never yet the US!); in contrast, having approximately two out of three conferences in the US is laudable, but hardly the sign of a truly international organisation.

So, is the ACM an originally US organisation that is in the process of slowly becoming truly international, or is it part of more general US cultural domination?  Although probably neither is completely accurate, at present there seem to be significant aspects of the latter under the guise of the former.  In a way this is, in microcosm, an image of the same difficult relationship between the USA and UN in other areas of international affairs.

And by joining the SIGCHI Academy am I increasing the European presence in CHI, and thus part of the process of making it truly international, or selling my academic soul in a Faustian bargain?

more on disappearing scrollbars

I recently wrote about problems with a slightly too smart scroll bar, and Google periodically change something in Gmail which means you have to horizontally scroll the page to get hold of the vertical scroll bar.

I just came across another beautiful (read terrible) example today.

I was looking at the “Learning Curve”, a Blogspot blog, so presumably using a standard Blogspot theme option.  On the right-hand side was a funky pull-out navigation (below left), but unfortunately, look what it does to the scroll bar (below right)!

[screenshots: the pull-out navigation (left); the scroll bar hidden beneath it (right)]

This is an example of the ‘inaccessible scrollbar’ that I mention in “CSS considered harmful”, and I explain there the reason it arises.
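I can only guess at the theme’s actual CSS, but the pattern is roughly this: a fixed element pinned flush to the window’s right edge, sitting above everything else, including MacOS’s overlay scrollbars, which are drawn inside the viewport.

    <!-- Hypothetical reconstruction of the inaccessible-scrollbar pattern -->
    <style>
      .pullout-nav {
        position: fixed;   /* pinned to the viewport ...           */
        top: 0;
        right: 0;          /* ... flush with the right edge,       */
        width: 40px;       /*     exactly where the scrollbar sits */
        height: 100%;
        z-index: 9999;     /* above the page content, and above the
                              overlay scrollbar drawn inside the
                              viewport                             */
      }
    </style>
    <nav class="pullout-nav">…</nav>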

The amazing thing is that this fails equally across all (MacOS) browsers (Safari, Firefox, Chrome), yet it must be a standard Blogspot feature.

One last vignette: as I looked at the above screenshots I realised that in fact there is a 1 pixel part of the scroll handle still visible to the left of the pull-out navigation.  I went back to the web page and tried to select it … unfortunately, I guess to make the ‘hot area’ larger and easier to select, as you move your mouse towards the scroll bar, the pull-out pops out … so that the one pixel of scrollbar tantalises, but is unselectable 🙁

Action Research in HCI

Recently Daniel Tetteroo asked if I knew about publications on action research in HCI, prompted partly by the fact that I have described my Wales walk next year as a form of action research.

I realised that all my associations with the term were in information science rather than HCI, and in other non-computer disciplines such as education and medicine. I also realised that the “don’t design for yourself” guidance, which was originally about taking a user-centred rather than technologist-centred approach, makes action research ideologically inimical to many.

I posted a question to Twitter, Facebook and the exec mailing list of Interaction yesterday and got a wonderful set of responses.

Here are some of the pointers in the replies (and no, I have not read them all overnight!):

Many thanks to the following for responses, suggestions and comments: Michael Massimi, Nathan Matias, Charlie DeTar, Gilbert Cockton, Russell Beale, David England, Dianne Murray, Jon Rogers, Ramesh Ramloll, Maria Wolters, Alan Chamberlain, Beki Grinter, Susan Dray, Daniel Cunliffe, and any others if I have missed you!

details matter: infinite scrolling and feature interaction

Many sites now dynamically add content to a page as you scroll down; this includes both Facebook and Twitter feeds, which add content as you get near the bottom.  In many ways this is a good thing: if users have to click to get to another page, they often never bother1.  However, there can be unfortunate side effects … sometimes making sites un-navigable on certain devices.  There are particular problems on MacOS, due to the removal of scrollbar arrows, a usability disaster anyway, but confounded by feature interactions with other effects.

A recent example was when I visited the SimoleonSense blog in order to find an article corresponding to an image about human sensory illusions.  The image had been shared on Facebook, and I found, when I tried to search for it, that it was also widely pinned on Pinterest, but the Facebook shares only linked back to the image url and Pinterest to the overall site (one reason why some artists hate Pinterest).  However, I wanted to find the actual post on the site that mentioned the image.

Happily, the image url, http://www.simoleonsense.com/wp-content/uploads/2009/02/hacking-your-brain1.jpg, made it clear that it was a WordPress blog and that the image had been uploaded in February 2009, so I edited the url to http://www.simoleonsense.com/2009/02/ and started to browse.  The site is basically a weekly digest and so the page returned was already long.  I must have missed the image on my first scan down, so I hit the bottom of the page, it dynamically added more content, and I continued to scroll.  Before long the scrollbar handle looked very small, the page very big, and every time I tried to scroll up and down the page appeared to go crazy, randomly scrolling anywhere but where I wanted.

It took me a while to realise that the problem was that the scrollbar had been ‘enhanced’ by the website (using the WordPress infinite scroll plugin), which not only added infinite scrolling, but also ‘smart scrolling’, where a click on the scrollbar makes an animated jump to that location.  Now many early scrollbars worked in this way, and the ‘smart scroll’ option is inspired by the fact that Apple rediscovered this in iOS for touch-screen interaction.  The method gives rapid interaction, especially if the scrollbar is augmented by ‘tips’ (see the jQuery smartscroll demo page).

Unfortunately, this is different from the normal Mac behaviour when you click above or below the handle on a scrollbar, which effectively does screen up/down.  So, I was trying to navigate up/down the web page a screen at a time to find the relevant post, not caring where I clicked above the scroll handle, hence the apparently random movements.

This was compounded by two things.  The first is a slight bug in the scrolling extension which means that sometimes it doesn’t notice your mouse release and carries on scrolling the page as you move your mouse around.  This is a bug I’ve seen in scrolling systems for many years, caused by not taking into account all the combinations of mouse down/up, enter/leave region, etc., and it is present even in Google Maps.
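The classic version of this bug looks something like the following (an illustrative TypeScript sketch, not the plugin’s actual code): the mouse release is only caught on the scrollbar element itself, so a release anywhere else leaves the drag state stuck on.

    const bar = document.getElementById("scrollbar")!; // hypothetical element id
    let dragging = false;

    bar.addEventListener("mousedown", () => { dragging = true; });
    bar.addEventListener("mouseup", () => { dragging = false; }); // BUG: never fires
                                                                  // if released off the bar

    // Fix: catch the release at the document level, wherever the pointer is.
    document.addEventListener("mouseup", () => { dragging = false; });

    document.addEventListener("mousemove", (e) => {
      if (dragging) {
        window.scrollTo(0, positionFor(e.clientY)); // page follows the mouse
      }
    });

    // Hypothetical helper: map pointer height to a document scroll offset.
    function positionFor(clientY: number): number {
      const frac = clientY / window.innerHeight;
      return frac * (document.body.scrollHeight - window.innerHeight);
    }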

The second compounding factor is that since MacOS got rid of the scrollbar arrows (why? Why? WHY?!!), this is now the only way to reliably do small up/down movements if you don’t have a scroll wheel mouse or similar.

Now, in fact, my Air has a trackpad and I think Apple assumes you will use this for scrolling, but I have single-finger ‘Tap to click’ turned off to prevent accidental selections, and (I assume due to a persistent bug) this turns off the two-finger scrolling gesture as well (even though it is shown as on in the preferences), so no scrolling from the trackpad.

Since near the beginning of my career I have been fascinated by these fine design decisions and have written previously about scrollbars, buttons, etc.  They are often overlooked as they form part of the backdrop to more significant applications and information.  However, the very fact that they are the persistent backdrop of interaction makes their fluid usability crucial, like the many mundane services, buses, rubbish collection, etc., that make cities work, but are often unseen and unnoticed until they fail.

Also note that this failure was not due to any single feature or bug, but to the way these work together: what the telephony industry originally named ‘feature interaction’, but which is common across all technological systems.  There is no easy fix, apart from (i) thinking of all possible scenarios (reach for your formal methods in HCI!) and (ii) testing across different devices.  And certainly (Apple please listen!) if it ain’t broke, don’t fix it.

Happily, I did manage to find the post in the end (I forget how, maybe random clicking) and it is “5 Ways To Hack Your Brain”.  The individual post page has no dynamic additions, so it is only two screens big on my display (phew), but it still scrolled all over the place as I tried to select the page title to paste above!

  1. To my mind, early web guidance was always wrong about this, as it usually suggested making pages fit a screen to improve download speed, whereas my feeling, when using a slow connection, was that it was usually better to wait a little longer for one big page (you were going to have to wait anyway!) and then be able to scroll up and down quickly.[back]

First version of Tiree Mobile Archive app goes live at Wave Classic

The first release version of the Tiree Mobile Archive app (see “Tiree Going Mobile”) is seeing real use this coming week at the Tiree Wave Classic. As well as historical information, and parts customised for the wind-surfers, it already embodies some interesting design features, including the use of a local map.  There’s a lot of work to do before the full launch next March, but it is an important step.

The mini-site for this Wave Classic version has a simulator, so you can see what it is like online, or download to your mobile … although GPS tracking only works when you are on Tiree 😉

Currently it still has only a small proportion of the archive material from An Iodhlann, so still to come are some of the issues of volume that will surely emerge as more of the data comes into the app.

Of course those coming for the Wave Classic will be more interested in the sea than local history, so we have deliberately included features relevant to them: Twitter and news feeds from the Wave Classic site and also pertinent tourist info (beaches, campsites and places to eat … and drink!).  This will still be true for the final version of the app when it is released in the spring — visitors come for a variety of reasons, so we need to offer a broad experience, without overlapping too much with a more tourism-focused app that is due to be created for the island in another project.

One crucial feature of the app is the use of local maps.  The booklet for the Wave Classic (below left) uses the Discover Tiree tourist map, designed by Colin Woodcock and used on the island community website and various island information leaflets.  The online map (below right) uses the same base layer.  The app deliberately uses this rather than OS or Google maps (although the final version will swap to OS for the most detailed views), as it will be familiar to visitors as they move between paper leaflets and the interactive map.

[images: Wave Classic booklet using the Discover Tiree map (left); online map with the same base layer (right)]
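For the technically curious, swapping in a local base layer is straightforward with a tile-based map library; here is a hypothetical sketch (assuming Leaflet, which may not be what the app actually uses, and an invented tile path):

    import * as L from "leaflet";

    // Centre roughly on Tiree; zoom range limited to what the artwork supports.
    const map = L.map("map").setView([56.5, -6.88], 11);

    // Serve the Discover Tiree artwork, pre-cut into tiles, instead of OS/Google.
    L.tileLayer("tiles/discover-tiree/{z}/{x}/{y}.png", { // hypothetical tile path
      maxZoom: 14,
      attribution: "Map artwork © Colin Woodcock",
    }).addTo(map);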

In “from place to PLACE”, a collection developed as part of Common Ground’s ‘Parish Maps’ project in the 1990s, Barbara Bender writes about the way:

“Post-Renaissance maps cover the surface of the world with an homogeneous Cartesian grip”

Local maps have their own logic, not driven by satellite imagery or military cartography1; they emphasise certain features, de-emphasise others, and are driven spatially less by the compass and ruler and more by the way things feel ‘on the ground’.  These issues of space and mapping have been an interest of mine for many years2, so both here and in my walk around Wales next year I will be aiming to ‘reclaim the local map within technological space’.

In fact, the Discover Tiree map, while stylised and deliberately omitting roads that are not suitable for tourists, is very close to a ‘standard map’ in shape, albeit at a slightly different angle to OS maps, as it is oriented3 to true North whereas OS maps are oriented to ‘Grid North’ (the problems of representing a round earth on flat sheets!).  In the future I’d like us to be able to deal with more interpretative maps, such as the mural map found on the outside of MacLeod’s shop, or even the map of Cardigan knitted onto a cardigan as part of the town’s 900-year anniversary.

[images: the mural map on MacLeod’s shop; the knitted map of Cardigan]

Technically this is put together as an HTML5 site to be cross-platform, but … well, let’s say some tweaks were needed4.  Later on we’ll look at wrapping this in PhoneGap or one of the other HTML5-to-native frameworks, but for the time being, once you have bookmarked the home page, on iOS it looks pretty much like an app — on Android a little less so, but still easy access … and crucially it works off-line — Tiree is not known for high availability of mobile signal!
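For the record, offline working in the HTML5 of the time meant the application cache; a minimal sketch (all file names hypothetical):

    <!-- index.html: point the page at a cache manifest -->
    <html manifest="tma.appcache">

    # tma.appcache (served as text/cache-manifest)
    CACHE MANIFEST
    # v1 - pages, scripts and map tiles stored for offline use
    index.html
    app.js
    tiles/discover-tiree-base.png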

  1. The ‘ordnance‘ in ‘Ordnance Survey‘ was originally about things that go bang![back]
  2. For example, see “Welsh Mathematician walks in Cyberspace” and  “Paths and Patches – patterns of geognosy and gnosis”.[back]
  3. A lovely word; it originally meant to face East, as early Mappa Mundi were all arranged with the East at the top.[back]
  4. There’s a story here: going cross-browser on mobile platforms reminds me so much of desktop web design 10 years ago; on the whole iOS Safari behaves pretty much like desktop browsers, but Android is a law unto itself![back]

Death by design

Wonderful image and set of slides describing some of the reasons multitasking is a myth and how the interfaces we design may be literally killing people (during a mobile outage in Dubai, car accidents dropped by 20%).

Thanks to Ian Sommerville for sharing this on Twitter.

“lost in hyperspace” – do we care?

I have rarely heard the phrase “lost in hyperspace” in the last 10 years, although it used to be a recurrent theme in hypertext and HCI literature.  For some time this has bothered me.  We don’t seem less lost, so maybe we are just more laid back about control, or maybe we are simply relinquishing it?

Recently Lisa Tweedie posted a Pinterest link on Facebook to Angela Morelli’s dynamic infographic on water.  This is a lovely vertically scrolling page showing how the majority of the water we use is indirectly consumed via the food we eat … especially if you are a meat eater (1 kilo of beef = 15,400 litres of water!).  The graphic was great, except that it took me ages to actually get to it.  In fact the first time I only found a single large graphic produced by Angela as a download; it was only when I returned to it that I found the full dynamic infographic.

Every time I go to Pinterest I feel like I’ve been dropped into a random part of Hampton Court Maze, it is so hard to find the actual source … this is why a lot of artists get annoyed at Pinterest!  Now for Pinterest this is probably part of their design philosophy … after all, they want people to stay on their site.  What is amazing is that this kind of design is so acceptable to users … Facebook is slightly less random, but it still takes me ages to find pages I’ve liked; each time I start the search through my profile afresh.

In the early days of hypertext everyone used to talk about the “lost in hyperspace” problem … now we are more lost … but don’t care any more.  In the Mediaeval world you put your trust in your ‘betters’, lords, kings, and priests, and assumed they knew best … now we put our trust in Pinterest and Facebook.