First version of Tiree Mobile Archive app goes live at Wave Classic

The first release version of the Tiree Mobile Archive app (see “Tiree Going Mobile“) is seeing real use this coming week at the Tiree Wave Classic. As well as historical information, and parts customised for the wind-surfers, it already embodies some interesting design features, including the use of a local map.  There’s a lot of work to do before the full launch next March, but it is an important step.

The mini-site for this Wave Classic version has a simulator, so you can see what it is like online, or download it to your mobile … although GPS tracking only works when you are on Tiree 😉

Currently it includes only a small proportion of the archive material from An Iodhlann, so the issues of volume that will surely emerge as more of the data comes into the app are still to come.

Of course those coming for the Wave Classic will be more interested in the sea than local history, so we have deliberately included features relevant to them: Twitter and news feeds from the Wave Classic site and also pertinent tourist info (beaches, campsites and places to eat … and drink!).  This will still be true for the final version of the app when it is released in the spring — visitors come for a variety of reasons, so we need to offer a broad experience, without overlapping too much with a more tourism-focused app that is due to be created for the island in another project.

One crucial feature of the app is the use of local maps.  The booklet for the Wave Classic (below left) uses the Discover Tiree tourist map, designed by Colin Woodcock and used on the island community website and various island information leaflets.  The online map (below right) uses the same base layer.  The app deliberately uses this rather than the OS or Google maps (although the final version will swap to OS maps for the most detailed views), as it will be familiar to visitors as they move between paper leaflets and the interactive map.

[images: the Wave Classic booklet map (left) and the app’s online map (right)]
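Purely to illustrate the idea (this is my own sketch, not the app’s actual code – the library choice, file name and corner coordinates are all assumptions), a hand-drawn map image can serve as the base layer of an interactive map using something like the Leaflet library:

    // Illustrative sketch only: use a scanned local map image, rather than OS or
    // Google tiles, as the base layer.  Assumes a <div id="map"></div> on the page
    // and the Leaflet script and stylesheet loaded.
    var map = L.map('map').setView([56.5, -6.88], 12);        // rough centre of Tiree

    // Overlay the scanned artwork, georeferenced by the lat/long of its corners
    // (corner coordinates here are made up for the example).
    var corners = [[56.42, -7.10], [56.60, -6.75]];
    L.imageOverlay('discover-tiree-map.png', corners).addTo(map);

    // GPS tracking can then place a marker in ordinary lat/long coordinates.
    navigator.geolocation.watchPosition(function (pos) {
      L.marker([pos.coords.latitude, pos.coords.longitude]).addTo(map);
    });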

In “from place to PLACE“, a collection developed as part of Common Ground‘s ‘Parish Maps‘ project in the 1990s, Barbara Bender writes about the way:

“Post-Renaissance maps cover the surface of the world with an homogeneous Cartesian grip”

Local maps have their own logic not driven by satellite imagery, or military cartography1; they emphasise certain features, de-emphasise others, and are driven spatially less by the compass and ruler and more by the way things feel ‘on the ground’.  These issues of space and mapping have been an interest for many years2, so both here and in my walk around Wales next year I will be aiming to ‘reclaim the local map within technological space’.

In fact, the Discover Tiree map, while stylised and deliberately not including roads that are not suitable for tourists, is very close to a ‘standard map’ in shape, albeit at a slightly different angle to OS maps as it is oriented3 to true North whereas OS maps are oriented to ‘Grid North’ (the problems of representing a round earth on flat sheets!).  In the future I’d like us to be able to deal with more interpretative maps, such as the mural map found on the outside of MacLeod’s shop, or even the map of Cardigan knitted onto a cardigan as part of the 900-year anniversary of the town.

[images: the mural map on MacLeod’s shop and the knitted map of Cardigan]

Technically this is put together as an HTML5 site to be cross-platform, but … well, let’s say some tweaks were needed4.  Later on we’ll look to wrapping this in PhoneGap or one of the other HTML5-to-native frameworks, but for the time being, once you have bookmarked the home page it looks pretty much like an app on iOS – on Android a little less so, but still easy to access … and crucially it works off-line — Tiree is not known for high availability of mobile signal!
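For the off-line side, the standard HTML5 mechanism at the time was the application cache; I’m assuming that (or something like it) is what is used here, so the sketch below is illustrative rather than the app’s actual code. The page declares a manifest listing the files to keep on the phone, and a little script can watch for newer versions:

    // Illustrative sketch, assuming the HTML5 application cache is used for off-line
    // support.  The page would declare a (hypothetical) manifest, e.g.
    //   <html manifest="tma.appcache">
    // listing the pages, scripts and map images to store on the phone.
    window.applicationCache.addEventListener('updateready', function () {
      if (window.applicationCache.status === window.applicationCache.UPDATEREADY) {
        window.applicationCache.swapCache();     // switch to the freshly downloaded files
        if (window.confirm('A newer version has been downloaded. Reload?')) {
          window.location.reload();
        }
      }
    }, false);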

  1. The ‘ordnance‘ in ‘Ordnance Survey‘ was originally about things that go bang![back]
  2. For example, see “Welsh Mathematician walks in Cyberspace” and  “Paths and Patches – patterns of geognosy and gnosis”.[back]
  3. A lovely word; it originally meant to face East, as early Mappae Mundi were all arranged with the East at the top.[back]
  4. There’s a story here: going cross-browser on mobile platforms reminds me so much of desktop web design 10 years ago; on the whole iOS Safari behaves pretty much like desktop browsers, but Android is a law unto itself![back]

Phoenix rises – vfridge online again

vfridge is back!

I mentioned ‘Project Phoenix’ in my previous post, and this was it – getting vfridge up and running again.

Ten years ago I was part of a dot.com company aQtive1 with Russell Beale, Andy Wood and others.  Just before it folded in the aftermath of the dot.com crash, aQtive spawned a small spin-off, vfridge.com.  The virtual fridge was a social networking web site before the term existed, and while vfridge the company went the way of most dot.coms, for some time after I kept the vfridge web site running on Fiona’s servers, until it gradually ‘decayed’, partly due to Javascript/DOM changes and partly due to Java’s interactions with MySQL becoming unstable (note: very, very old Java code!).  But it is now back online 🙂

The core idea of vfridge is placing small notes, photos and ‘magnets’ in a shareable web area that can be moved around and arranged like you might with notes held by magnets to a fridge door.

Underlying vfridge was what we called the websharer vision, which looked towards a web of user-generated content.  Now this is passé, but at the time it was directly counter to accepted wisdom and looking back seems prescient – remember this was written in 1999:

Although everyone isn’t a web developer, it is likely that soon everyone will become an Internet communicator — email, PC-voice-comms, bulletin boards, etc. For some this will be via a PC, for others using a web-phone, set-top box or Internet-enabled games console.

The web/Internet is not just a medium for publishing, but a potential shared place.

Everyone may be a web sharer — not a publisher of formal public ‘content’, but personal or semi-private sharing of informal ‘bits and pieces’ with family, friends, local community and virtual communities such as fan clubs.

This is not just a future for the cognoscenti, but for anyone who chats in the pub or wants to show granny in Scunthorpe the baby’s first photos.

Just over a year ago I thought it would be good to write a retrospective about vfridge in the light of the social networking revolution.  We did a poster “Designing a virtual fridge” about vfridge years ago at a Computers and Fun workshop, but have never written at length about its design and development.  In particular it would be good to analyse the reasons, technical, social and commercial, why it did not ‘take off’ at the time.  However, it is hard to write about it without good screen shots, and could I find any? (Although now I have.)  So I thought it would be good to revive it, and now you can try it out again. I started with a few days’ effort last year at Christmas and Easter time (leisure activity), but over the last week I have at last used the fact that I have half my time unpaid and so free for my own activities … and it is done 🙂

The original vfridge was implemented using Java Servlets, but I have rebuilt it in PHP.  While the original development took over a year (starting down in Cornwall while on holiday watching the solar eclipse), this re-build took about 10 days’ effort, although of course with no design decisions needed.  The reason it took so much development back then is one of the things I want to consider when I write the retrospective.

As far as possible the actual behaviour and design is exactly as it was back in 2000 … and yes it does feel clunky, with lots of refreshing (remember, no AJAX or web2.0 in those days) and of course loads of frames!  In fact there is a little cleverness that allowed some client-end processing pre-AJAX2.  Also the new implementation uses the same templates as the original one, although the expansion engine had to be rewritten in PHP.  In fact this template engine was one of our most re-used bits of Java code, although now of course there are many alternatives.  Maybe I will return to a discussion of that in another post.

I have even resurrected the old mobile interface.  Yes there were WAP phones even in 2000, albeit with tiny green and black screens.  I still recall the excitement I felt the first time I entered a note on the phone and saw it appear on a web page 🙂  However, this was one place I had to extensively edit the page templates as nothing seems to process WML anymore, so the WML had to be converted to plain-text-ish HTML, as close as possible to those old phones!  Looks rather odd on the iPhone :-/

So, if you were one of those who had an account back in 2000 (Panos Markopoulos used it to share his baby photos 🙂 ), then everything is still there just as you left it!

If not, then you can register now and play.

  1. The old aQtive website is still viewable at aqtive.org, but don’t try to install onCue, it was developed in the days of Windows NT.[back]
  2. One trick used the fact that you can get Javascript to pre-load images.  When the front-end Javascript code wanted to send information back to the server it preloaded an image URL that was really just there to activate a back-end script.  The frames used a change-propagation system, so that only those frames that were dependent on particular user actions were refreshed.  All of this is preserved in the current system – peek at the Javascript on the pages (a small sketch of the image trick follows below).  Maybe I’ll write about the details of these another time.[back]
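A minimal sketch of the image-preload trick described in note 2 (my reconstruction for illustration – the script name and parameters are made up, not the original vfridge code):

    // Loading an Image whose URL is really a server-side script smuggles data back
    // to the server without refreshing any frame.
    function sendToServer(params) {
      var beacon = new Image();                      // never displayed; used only for its request
      beacon.src = '/fridge/update?' + params +
                   '&t=' + new Date().getTime();     // timestamp defeats the browser cache
    }

    // e.g. tell the server that note 42 has been dragged to a new position
    sendToServer('action=move&note=42&x=120&y=85');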

What’s wrong with dynamic binding?

Dynamic scoping/binding of variables has a bad name, rather like GOTO and other remnants of the Bad Old Days before Structured Programming saved us all1.  But there are times when dynamic binding is useful and, looking around, it is very common in web scripting languages, event propagation, meta-level programming, and document styles.
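As a quick flavour (my own illustrative example, not taken from the full post), JavaScript’s this is bound by how a function is called rather than where it is defined – call-time binding rather than lexical, much in the spirit of dynamic scope:

    // 'this' is resolved only when the call happens, not when the function is written.
    function whoAmI() {
      return this.name;            // which 'name'?  decided at the call site
    }

    var cat = { name: 'the cat', whoAmI: whoAmI };
    var dog = { name: 'the dog', whoAmI: whoAmI };

    console.log(cat.whoAmI());     // "the cat"
    console.log(dog.whoAmI());     // "the dog"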

So is it really so bad?

Continue reading

  1. Strangely also the days when major advances in substance seemed to be more important than minor advances in nomenclature[back]

Some lessons in extended interaction, courtesy Adobe

I use various Adobe products, especially Dreamweaver and want to get the newest version of Creative Suite.  This is not cheap, even at academic prices, so you might think Adobe would want to make it easy to buy their products, but life on the web is never that simple!

As you can guess a number of problems ensued, some easily fixable, some demonstrating why effective interaction design is not trivial and apparently good choices can lead to disaster.

There is a common thread.  Most usability is focused on the time we are actively using a system – yes obvious – however, most of the problems I faced were about the extended use of the system, the way individual periods of use link together.  Issues of long-term interaction have been an interest of mine for many years1 and have recently come to the fore in work with Haliyana, Corina and others on social networking sites and the nature of ‘extended episodic experience’.  However, there is relatively little in the research literature or practical guidelines on such extended interaction, so problems are perhaps to be expected.

First the good bit – the Creative ‘Suite’ includes various individual Adobe products and there are several variants (Design/Web, Standard/Premium); however, there is a great page comparing them all … I was able to choose which version I needed, go to the academic purchase page, and then send a link to the research administrator at Lancaster so she could order it.  So far so good, 10 out of 10 for Adobe …

To purchase as an academic you quite reasonably have to send proof of academic status.  In the past a letter from the dept. on headed paper was deemed sufficient, but now they ask for a photo ID.  I am still not sure why this is needed; I wasn’t going in person, so how could a photo ID help?  My only photo ID is my passport, and with security issues and identity theft constantly in the news, I was reluctant to send a fax of that (do US homeland security know that Adobe, a US company, are demanding this and thus weakening border controls?).

After double checking all the information and FAQs in the site, I decided to contact customer support …

Phase 1 customer support

The site had a “contact us” page and under “Customer service online”, there is an option “Open new case/incident”:

… not exactly everyday language, but I guessed this meant “send us a message” and proceeded. After a few more steps, I got to the enquiry web form, asked whether there was an alternative or, if I sent a fax of the passport, whether I could blot out the passport number, and submitted the form.

Problem 1: The confirmation page did not say what would happen next.  In fact they send an email when the query is answered, but as I did not know that, I had to periodically check the site during the rest of the day and the following morning.

Lesson 1: Interactions often include ‘breaks’, when things happen over a longer period.  When there is a ‘break’ in interaction, explain the process.

Lesson 1 can be seen as a long-term equivalent of standard usability principles to offer feedback, or in Nielsen’s Heuristics “Visibility of system status”, but this design advice is normally taken to refer to immediate interactions and what has already happened, not to what will happen in the longer term.  Even principles of ‘predictability’ are normally phrased in terms of knowing what I can do to the system and how it will respond to my actions, but not formulated clearly for when the system takes autonomous action.

In terms of status-event analysis, they quite correctly generated an interaction event for me (the mail arriving) to notify me of the change of status of my ‘case’.  It was just that they hadn’t explained that this is what they were going to do.

Anyway the next day the email arrived …

Problem 2: The mail’s subject was “Your customer support case has been closed”.  Within the mail there was no indication that the enquiry had actually been answered (it had), nor a link to the location on the site to view the ‘case’ (I had to login and navigate to it by hand), just a general link to the customer ‘support’ portal and a survey to convey my satisfaction with the service (!).

Lesson 2.1: The email is part of the interaction. So apply ‘normal’ interaction design principles, such as Nielsen’s “speak the users’ language” – in this case “case has been closed” does not convey that it has been dealt with, but sounds more like it has been ignored.

Lesson 2.2: Give clear information in the email – don’t demand a visit to the site. The eventual response to my ‘case’ on the web site was entirely textual, so why not simply include it in the email?  In fact, the email included a PDF attachment that started off identical to the email body, and so I assumed it was a copy of the same information … but it turned out to have the response in it.  So they had given the information, just not told me they had!

Lesson 2.3: Except where there is a security risk – give direct links not generic ones. The email could easily have included a direct link to my ‘case’ on the web site, instead I had to navigate to it.  Furthermore the link could have included an authentication key so that I wouldn’t have to look up my Adobe user name and password (I of course needed to create a web site login in order to do a query).

In fact there are sometimes genuine security reasons for NOT doing this.  One is if you are uncertain of the security of the email system or recipient address, but in this case Adobe are happy to send login details by email, so clearly trust the recipient. Another is to avoid establishing user behaviours that are vulnerable to ‘phishing’ attacks.  In fact I get annoyed when banks send me emails with direct links to their site (some still do!), rather than asking you to visit the site and navigate: if users get used to navigating via email links and then entering login credentials, this makes it easy for malicious emails to harvest personal details. Again in this case Adobe had other URLs in the email, so this was not their reason.  However, if they had been …

Lesson 2.4: If you are worried about security of the channel, give clear instructions on how to navigate the site instead of a link.

Lesson 2.5: If you wish to avoid behaviour liable to phishing, do not include direct links to your site in emails.  However, do give the user a fast-access reference number to cut-and-paste into the site once they have navigated to it manually.

Lesson 2.6: As a more general lesson understand security and privacy risks.  Often systems demand security procedures that are unnecessary (forcing me to re-authenticate), but omit the ones that are really important (making me send a fax of my passport).

Eventually I re-navigated the Adobe site and found the details of my ‘case’ (which were also in the PDF in the email, if only I had realised).

Problem 3: The ‘answer’ to my query was a few sections cut-and-pasted from the academic purchase FAQ … which I had already read before making the enquiry.  In particular it did not answer my specific question even to say “no”.

Lesson 3.1: The FAQ sections could easily have been identified automatically the day before. If there is going to be a delay in human response, where possible offer an immediate automatic response. If this includes a means for the user to say whether it has answered the query, then a human response may not be needed (saving money!), or can at least take into account what the user already knows.
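A hypothetical sketch of what such an immediate automatic response might look like (names and structure made up for illustration): score each FAQ entry by how many words it shares with the customer’s query and offer the best matches straight away.

    // Suggest up to three FAQ entries that share words with the query.
    function suggestFaq(query, faqEntries) {
      var words = query.toLowerCase().split(/\W+/);
      return faqEntries
        .map(function (entry) {
          var text = (entry.question + ' ' + entry.answer).toLowerCase();
          var score = words.filter(function (w) {
            return w && text.indexOf(w) >= 0;
          }).length;
          return { entry: entry, score: score };
        })
        .filter(function (match) { return match.score > 0; })
        .sort(function (a, b) { return b.score - a.score; })
        .slice(0, 3)
        .map(function (match) { return match.entry; });
    }

If none of the suggestions satisfy the customer, the query goes on to a human – who then at least knows which FAQ entries have already been seen.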

Lesson 3.2: For human interactions – read what the user has said. Seems like basic customer service … This is a training issue for human operators, but reminds us that:

Lesson 3.3: People are part of the system too.

Lesson 3.4: Do not ‘close down’ an interaction until the user says they are satisfied. Again basic customer service, but whereas 3.2 is a human training issue, this is about the design of the information system: the user needs some way to say whether or not the answer is sufficient.  In this case, the only way to re-open the case is to ring a full-cost telephone support line.

Phase 2 customer feedback survey

As I mentioned, the email also had a link to a web survey:

In an effort to constantly improve service to our customers, we would be very
interested in hearing from you regarding our performance.  Would you be so
kind to take a few minutes to complete our survey?   If so, please click here:

Yes I did want to give Adobe feedback on their customer service! So I clicked the link and was taken to a personalised web survey.  I say ‘personalised’ in that the link included a reference to the customer support case number, but thereafter the form was completely standard and had numerous multi-choice questions completely irrelevant to an academic order.  I lost count of the pages, each with dozens of tick boxes – I think around 10, but there may have been more … and it certainly felt like more.  Only on the last page was there a free-text area where I could say what the real problem was. I only persevered because I was already so frustrated … and was more so by the time I got to the end of the survey.

Problem 4: Lengthy and largely irrelevant feedback form.

Lesson 4.1: Adapt surveys to the user, don’t expect the user to adapt to the survey! The ‘case’ originated in the education part of the web site, the selections I made when creating the ‘case’ narrowed this down further to a purchasing enquiry; it would be so easy to remove many of the questions based on this. Actually if the form had even said in text “if your support query was about X, please answer …” I could then have known what to skip!

Lesson 4.2: Make surveys easy for the user to complete: limit length and offer fast paths. If a student came to me with a questionnaire or survey that long I would tell them to think again.  If you want someone to complete a form it has to be easy to do so – by all means have longer sections so long as the user can skip them and get to the core issues. I guess cynically making customer surveys difficult may reduce the number of recorded complaints 😉

Phase 3 the order arrives

Back to the story: the customer support answer told me no more than I knew before, but I decided to risk faxing the passport (with the passport number obscured) as my photo ID, and (after some additional phone calls by the research administrator at Lancaster!), the order was placed and accepted.

When I got back home on Friday, the box from Adobe was waiting 🙂

I opened the plastic shrink-wrap … and only then noticed that on the box it said “Windows” 🙁

I had sent the research administrator a link to the product, so had I accidentally sent a link to the Windows version rather than the Mac one?  Or was there a point later in the purchasing dialogue where she had had to say which OS was required and not realised I used a Mac?

I went back to my mail to her and clicked the link:

The “Platform” field clearly says “Mac”, but it is actually a selection field:

It seemed odd that the default value was “Mac” … why not “CHOOSE A PLATFORM”?  I wondered if it was remembering a previous selection I had made, so I tried the URL in Safari … and it looked the same.

… then I realised!

The web form was being ‘intelligent’ and had detected that I was on a Mac, and so set the field to “Mac”.  When I sent the URL to the research administrator, on her Windows machine it would have defaulted to “Windows”.  She quite sensibly assumed that the URL I sent her was for the product I wanted and ordered it.
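Something like the following is presumably going on (a guess at the kind of ‘smart default’ involved – the field id and detection rule are my assumptions, not Adobe’s actual code):

    // Pre-select the platform field from the visitor's browser, leaving it blank
    // if no sensible guess can be made.
    function guessPlatform() {
      var ua = navigator.userAgent;
      if (ua.indexOf('Mac') >= 0) return 'Mac';
      if (ua.indexOf('Windows') >= 0) return 'Windows';
      return '';                                 // no guess: force an explicit choice
    }

    var platformField = document.getElementById('platform');   // hypothetical select element
    if (platformField) {
      platformField.value = guessPlatform();
    }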

In fact offering smart defaults is good web design advice, so what went wrong here?

Problem 5: What I saw and what the research administrator saw were different, leading to ordering the wrong product.

Lesson 5.1: Defaults are also dangerous. If there are defaults the user will probably agree to them without realising there was a choice.  We are talking about a £600 product here, that is a lot of room for error.  For very costly decisions, this may mean not having defaults and forcing the user to choose, but maybe making it easy to choose the default (e.g. putting default at top of a menu).

Lesson 5.2: If we decide the advantages of the default outweigh the disadvantages then we need to make defaulted information obvious (e.g. highlight, special colour) and possibly warn the user (one of those annoying “did you really mean” dialogue boxes! … but hey, for £600 it may be worth it).  In the case of an e-commerce system we could even track this through the system and keep inferred information highlighted (unless explicitly confirmed) all the way through to the final order form. Leading to …

Lesson 5.3: Retain provenance.  Automatic defaults are relatively simple ‘intelligence’, but as more forms of intelligent interaction emerge it will become more and more important to retain the provenance of information – what came explicitly from the user, what was inferred and how.  Neither current database systems nor emerging semantic web infrastructure make this easy to achieve internally, so new information architectures are essential.  Even if we retain this information, we do not yet fully understand the interaction and presentation mechanisms needed for effective user interaction with inferred information, as this story demonstrates!
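A minimal sketch of what retaining provenance might look like (my own illustration, not an existing system): every field carries a record of whether its value was typed by the user or inferred, so later screens can keep inferred values flagged until they are explicitly confirmed.

    // Wrap each value with its provenance: 'user', 'inferred' or 'confirmed'.
    function field(value, source) {
      return { value: value, source: source };
    }

    var order = {
      product:  field('Creative Suite Design Premium', 'user'),
      platform: field('Mac', 'inferred')         // guessed from the browser, never confirmed
    };

    // The final order form can then flag anything that is still only inferred.
    for (var name in order) {
      if (order[name].source === 'inferred') {
        console.log('Please confirm ' + name + ': ' + order[name].value);
      }
    }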

Lesson 5.4: The URL is part of the interaction2.  I mailed a URL believing it would be interpreted the same everywhere, but in fact its meaning was relative to context.  This can be problematic even for ‘obviously’ personalised pages: a Facebook home page URL always comes out as the viewer’s own home page, so looks different to each person.  However, it is essential when someone might want to bookmark or mail the link.

This last point has always been one of the problems with framed sites and is getting more problematic with AJAX.  Ideally when dynamic content changes on the web page the URL should change to reflect it.  I had mistakenly thought this impossible without forcing a page reload, until I noticed that the multimap site does this.

The map location at the end of the URL changes as you move around the map.  It took me still longer to work out that this is accomplished because changing the part of the URL after the hash (sometimes called the ‘fragment’ and accessed in Javascript via location.hash) does not force a page reload.

If this is too complicated then it is relatively easy to use Javascript to update some sort of “use this link” or “link to this page” URL, both for frame-based sites and for those using web form elements or even AJAX. In fact, multimap does this as well!
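A small sketch of the trick (my illustration, not multimap’s actual code): assigning to location.hash changes the URL without reloading the page, so the fragment can carry the current state for bookmarking or mailing.

    // Record the current map position in the URL fragment.
    function updateUrlState(lat, lon, zoom) {
      window.location.hash = 'lat=' + lat + '&lon=' + lon + '&zoom=' + zoom;
    }

    // On page load (or when a mailed or bookmarked link is followed) read it back.
    window.onload = function () {
      var fragment = window.location.hash.substring(1);   // the text after the '#'
      if (fragment) {
        // parsing and restoring the view is application-specific, so just log it here
        console.log('restore view from: ' + fragment);
      }
    };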

Lesson 5.5: When you have dynamic page content update the URL or provide a “link to this page” URL.

Extended interaction

Some of these problems should have been picked up by normal usability testing. It is reasonable to expect problems with individual web sites or low-budget sites of small companies or charities.  However, large corporate sites like Adobe or central government have large budgets and a major impact on many people.  It amazes and appals me how often even the simplest things are done so sloppily.

However, as mentioned at the beginning, many of the problems and lessons above are about extended interaction: multiple visits to the site, emails between the site and the customer, and emails between individuals.  None of my interactions with the site were unusual or complex, and yet there seems to be a systematic lack of comprehension of this longer-term picture of usability.

As noted also at the beginning, this is partly because there is scant design advice on such interactions.  Norman has discussed “activity centred design“, but he still focuses on the multiple interactions within a single session with an application.  Activity theory takes a broader and longer-term view, but tends to focus more on the social and organisational context, whereas the story here shows there is also a need for detailed interaction design advice.  The work I mentioned with Haliyana and Corina has been about the experiential aspects of extended interaction, but the problems on the Adobe site were largely at a functional level (I never got so far as appreciating an ‘experience’, except a bad one!). So there is clearly much more work to be done here … any budding PhD students looking for a topic?

However, as with many things, once one thinks about the issue, some strategies for effective design start to become obvious.

So as a last lesson:

Overall Lesson: Think about extended interaction.

[ See HCI Book site for other ‘War Stories‘ of problems with popular products. ]

  1. My earliest substantive work on long-term interaction was papers at HCI 1992 and 1994 on “Pace and interaction” and “Que sera sera – The problem of the future perfect in open and cooperative systems“, mostly focused on communication and work flows.  The best summative work on this strand is in a 1998 journal paper “Interaction in the Large” and a more recent book chapter “Trigger Analysis – understanding broken tasks“.[back]
  2. This is of course hardly new, although neither addresses the particular problems here; see Nielsen’s “URL as UI” and James Gardner’s “Best Practice for Good URL Structures” for expositions of the general principle. Many sites still violate even simple design advice like W3C’s “Cool URIs don’t change“.  For example, even the BCS’ eWIC series of electronic proceedings have URLs of the form “www.bcs.org/server.php?show=nav.10270“; it is hard to believe that “show=nav.10270” will persist beyond the next web site upgrade 🙁 [back]

What is Computing? The centrality of systemics

Recently I was in a meeting where the issue of ‘core’ computer science came up. One person listed a few areas, but then this was challenged by another member of the group who said (to be fair, partly in jest), that core computer science should certainly include computer architecture, but not the ‘human stuff’.

I felt a little like a teenager, complete with T-shirt and iPod, dropped into Jurassic Park – arguments that I thought had been put to bed in the 1980s suddenly resurfacing.  How do you explain this white thing that makes sounds from its earphones to a caveman wearing skins?

However, I also felt a certain sympathy, as I often wonder about computer science as a whole; indeed it had its own arguments in the 1960s and ’70s as to whether it was a ‘discipline’, as opposed to just an application domain for maths or electronics, or just a tool for business. Maybe one of the clinchers was the theoretical foundations of computing in the work of Church and Turing … but strangely enough at Lancaster the closest course to this, the one on algorithmic complexity, is taught by an HCI person!

One of my worries about computing is that these theoretical foundations are still weak; there is a black hole in the theoretical centre of computer science1. However, these theoretical issues were certainly not what was bothering my colleague. To answer his challenge and my own worries about the discipline we really need to know – what is computing?

Continue reading

  1. This demands a discussion of its own, but the basic problem is that while Church and Turing gave us an understanding of disembodied computation, we still do not have a clear understanding of generic computation when embodied in devices in general, only in particular architectures. [back]

omnignorance, the future of the web

The dream of the web seems a form of omniscience: unlimited and universal knowledge available at the click of a button, or at least at the click of a Google ‘Search’ button. However, last night I was on a site that epitomised the problems of the web.

I was pointed to a blog entry about a presentation that the blog said was “Simply the best presentation I’VE EVER SEEN!“.  I was intrigued and followed this to the Identity2.0 site and in particular a page about a keynote at OSCON 2005.

The entry about the talk mentioned ‘Identity2.0’ and ‘digital identity’, so I guessed this was something to do with single logins (like MS Passport) or open authentication, which has long been an open issue (with many ‘solutions’ but so far little success). However, was this person and this site talking about one of these, such as plain open authentication, or something deeper?

Well, it is fine for the page about the talk not to say this clearly, as it is written within the context of the Identity2.0 site, so I looked for an ‘about’ link or something like that … nothing.  I stripped the URL back to plain identity2.0.com and of course simply got a standard blog front page (I guess rather like this one), with the latest news, but nothing to give overall context or background.

In fact, by chasing yet more links to other sites and half guessing from various abstracts, blog entries, etc., I managed to piece together half a story about what this was all about … but why so hard?

This reminds me of the problem I recorded a month or so back when trying to find last week’s (as opposed to yesterday’s) news. The web is really easy for finding the latest item, or even the hottest item, but really bad at getting to the background that gives context and turns buzz words into meaning.

At the risk of sounding like a codger at a cafe table, the same is true of much software documentation. If you have seen the software grow and develop over the years it makes sense, but to students trying to make sense of Java packages, AJAX, or Mac/Windows APIs, it is like fumbling in the dark. Good signposting at every street corner, but no roadmaps.

In all these cases there are real questions we want to ask; this is not like meandering around Flickr or YouTube, travelling just for the journey. However, definitive statements (I’ll not say answers) give way to half-overheard conversations in a coffee shop.

It is often said that experts know more and more about less and less until eventually they know everything about nothing. It seems we are turning into a generation who know less and less about more and more until we know nothing about everything – omnignorance rules.