Backwards compatibility on the web

I just noticed the following excerpt in the web page describing a rich-text editing component:

Supported Browsers (Confirmed)
… list …

Note: This list is now out of date and some new browsers such as Safari 3.0+ and Opera 9.5+ suffer from some issues.
(Free Rich Text Editor – www.freerichtexteditor.com)

In odd moments I have recently been working on bringing vfridge back to life.  Partly this is necessary because the original Java Servlet code was such a pig[1], but partly because the dynamic HTML code had ‘died’. To be fair, vfridge was produced in the early days of DHTML, so one might expect things to change between then and now. However, reading the above web page about a component produced much more recently, I wonder why it is that on the web, and elsewhere, we are so bad at being backward compatible … and I recall my own ‘pain and tears’ struggling with broken backward compatibility in Office 2008.

I’d started looking at current rich text editors after seeing Paul James’ “Small, standards compliant, Javascript WYSIWYG HTML control“.  Unlike many of the controls that seem to produce MS-like output with <font> tags littered randomly around, Paul’s control emphasises standards compliance in HTML, and uses the emerging de facto designMode[2] support in browsers.
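
For anyone who has not met it, the sketch below shows roughly how a designMode-based editor hangs together. It is a minimal illustration only, not Paul James’ code: the element ids are invented, and browsers differ (sometimes wildly) in the markup that execCommand generates.

```typescript
// A minimal sketch of editing via designMode (not Paul James' code).
// The element ids "editor" and "boldButton" are invented for illustration.
const frame = document.getElementById("editor") as HTMLIFrameElement;
const editDoc = frame.contentDocument!;

// Put the iframe's document into editing mode.
editDoc.designMode = "on";

// Apply formatting to the current selection, e.g. from a toolbar button.
document.getElementById("boldButton")!.addEventListener("click", () => {
  editDoc.execCommand("bold", false);  // browsers differ in the markup this generates
});

// The edited content is simply the body's innerHTML.
function getContent(): string {
  return editDoc.body.innerHTML;
}
```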

This seems good, but one wonders how long these standards will survive, especially the de facto one, given past history; will Paul James’ page have a similar notice in a year or two?

The W3C approach … and a common institutional one … is to define unique standards that are (intended to be) universal and unchanging, so that if we all use them everything will still work in 10,000 years’ time.  This is a grand vision, but only works if the standards are sufficiently:

  1. expressive so that everything you want to do now can be done (e.g. not deprecating the use of tables for layout in the absence of design grids, which leads to many horrible CSS ‘hacks’)
  2. omnipotent so that everyone (MS, Apple) does what they are told
  3. simple so that everyone implements it right
  4. prescient so that all future needs are anticipated before multiple differing de facto ‘standards’ emerge

The last of those is the reason why vfridge’s DHTML died: we wanted rich client-side interaction when the stable standards were not much beyond transactions; and this looks like the reason many rich-text editors are struggling now.

A completely different approach (requiring a degree of humility from standards bodies) would be to accept that standards always fall behind practice, and design this into the standards themselves.  There need to be simple (and so consistently supported) ways of specifying:

  • which versions of which browsers a page was designed to support – so that browsers can be backward or cross-browser compliant
  • alternative content for different browsers and versions … and no, the DTD does not do this, as different versions of browsers have different interpretations of, and bugs in, different HTML variants.  W3C groups looking at cross-device mark-up already have work in this area … although it may fail the simplicity test.

Perhaps more problematically, browsers need to commit to being backward compatible where at all possible … I am thinking especially of the way IE fixed its own broken CSS implementation, but did so in a way that broke all the standard hacks that had been developed to work around the old bugs!  Currently this would mean fossilising old design choices and even old bugs, but if web-page meta information specified the intended browser version, the browser could selectively operate on older pages in ways compatible with the older browsers whilst offering improved behaviour for newer pages.
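
To make the idea concrete, here is a purely illustrative sketch of what such ‘intended browser’ meta information might look like and how it might be read. The meta name and version strings are invented, not any real standard (IE’s X-UA-Compatible header is the closest real-world relative of the idea).

```typescript
// Purely illustrative - no such standard exists. The meta name and the version
// strings below are invented; this just shows the sort of declaration the post
// argues for, and how it might be read.
//
// In the page's <head>:
//   <meta name="intended-browsers" content="ie=6, firefox=2, safari=2">

function intendedVersions(): Map<string, number> {
  const meta = document.querySelector('meta[name="intended-browsers"]');
  const versions = new Map<string, number>();
  for (const part of (meta?.getAttribute("content") ?? "").split(",")) {
    const [browser, version] = part.trim().split("=");
    if (browser && version) versions.set(browser, parseFloat(version));
  }
  return versions;
}

// A browser (or a compatibility shim) could then keep old, bug-compatible behaviour
// for pages that declare an older intended version, while newer pages get the fixes.
const ieVersion = intendedVersions().get("ie");
const useLegacyQuirks = ieVersion !== undefined && ieVersion < 7;
```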

  1. The vfridge Java Servlets used to run fine, but over time got worse and worse; as machines got faster and JVM versions improved with supposedly faster byte-code compilers, strangely the same code got slower and slower until it now only produces results intermittently … another example of backward compatibility failing.
  2. I would give a link to designMode except that I notice everyone else’s links seem to be broken … presumably MSDN URLs are also not backwards compatible 🙁 Best bet is just to Google “designMode”.

Dublin, Guinness and the future of HCI

I wrote the title of this post on 5th December just after I got back from Dublin.  I had been in Dublin for the SIGCHI Ireland Inaugural Lecture “Human–Computer Interaction in the early 21st century: a stable discipline, a nascent science, and the growth of the long tail” and was going to write a bit about it (including my first flight on and off Tiree) … then thought I’d write a short synopsis of the talk … so parked this post until the synopsis was written.

One month and 8000 words later – the ‘synopsis’ sort of grew … but it is just finished and now on the web as either an HTML version or a PDF. Basically it is a sort of ‘state of the nation’ about the current state and challenges for HCI as a discipline …

And although it now fades a little, I had a great time in Dublin: meeting, talking research, good company, good food … and yes … the odd pint of Guinness too.

Steve’s bin

This is Steve’s bin that I mentioned in my last post.

Glasdon UK: Plaza® Litter Bin

It had to be drunk proof, dustman proof, and bomb proof.  It also had to be emptied without needing a key, yet be difficult to open if you don’t know how (to prevent Saturday-night vandalism).  To top it all, it had to be designed so that when it is replaced after emptying it self-locks, and yet it is made by a moulding process that means there may be up to a couple of centimetres’ movement from the design spec.  I am very impressed.

strength in weakness – Judo design

Steve Gill is visiting so that we can work together on a new book on physicality.  Last night, over dinner, Steve was telling us about a litter-bin lock that he once designed.  The full story linked creative design, the structural qualities of materials, and the social setting in which it was placed … a story well worth hearing, but I’ll leave that to Steve.

One of the critical things about the design was that while earlier designs used steel, his design needed to be made out of plastic.  Steel is an obvious material for a lock: strong, unyielding; however the plastic lock worked because the lock and the bin around it were designed to yield, to give a little, and in so doing to absorb the shock if kicked by a drunken passer-by.

This is a sort of Judo principle of design: rather than trying to be the strongest or toughest, you yield in the right way and so use the strength of your opponent.

This reminded me of trees that bend in the wind and stand the toughest storms (the wind howling down the chimney maybe helps the image), whereas those that are stiffer may break.  Also of old wooden pit-props that would moan and screech when they grew weak and gave slightly under the strain of rock, whereas the stronger steel replacements would stand firm and unbending until the day they catastrophically broke.

Years ago I also read about a programme to strengthen bridges as lorries got heavier.  The old arch bridges had an infill of loose rubble, so the engineers simply replaced this with concrete.  In a short time the bridges began to fall down.  When analysed more deeply, the reason became clear.  When an area of the loose infill loses strength, it gives a little, so the strain on it is relieved and the areas around take the strain instead.  However, the concrete is unyielding, and instead the weakest point takes more and more strain until eventually cracks form and the bridge collapses.  Twisted ropes work on the same principle.  Although now an old book, “The New Science of Strong Materials” opened my eyes to the wonderful way many natural materials, such as bone, make use of the relative strengths, and weaknesses, of their constituents, and how this is emulated in many composite materials such as glass fibre or carbon fibre.

In contrast both software and bureaucratic procedures are more like chains – if any link breaks the whole thing fails.

Steve’s lock design shows that it is possible to use the principle of strength in weakness when using modern materials, not only in organic elements like wood, or traditional bridge design.  For software also, one of the things I often try to teach is to design for failure – to make sure things work when they go wrong.  In particular, for intelligent user interfaces there is the idea of appropriate intelligence – making sure that when intelligent algorithms get things wrong, the user experience does not suffer.  It is easy to want to design the cleverest algorithms, the most complex systems – to design for everything, to make it all perfect.  While it is of course right to seek the best, often it is the knowledge that what we produce will not be ‘perfect’ that in fact enables us to make it better.
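
As a small illustration of what ‘appropriate intelligence’ can mean in code, here is a hedged sketch: the clever algorithm is allowed to fail or to be unsure, and the interface quietly falls back to something dumb but dependable. All of the names here are hypothetical.

```typescript
// A sketch of 'appropriate intelligence': the clever algorithm is allowed to fail,
// and the interface is designed so that failure degrades gracefully.
// All names here (suggestCompletions, recentlyUsed) are hypothetical.

interface Suggestion { text: string; confidence: number; }

function suggestCompletions(prefix: string): Suggestion[] {
  // Stand-in for some clever, but possibly unreliable, predictive algorithm.
  throw new Error("model unavailable");
}

function recentlyUsed(prefix: string): string[] {
  // Dumb but dependable fallback: previously typed items that match the prefix.
  return ["example item"].filter(item => item.startsWith(prefix));
}

function completionsForUser(prefix: string): string[] {
  try {
    const smart = suggestCompletions(prefix)
      .filter(s => s.confidence > 0.8)   // only show guesses we are fairly sure of
      .map(s => s.text);
    if (smart.length > 0) return smart;
  } catch {
    // Intelligence failed: say nothing clever rather than something wrong.
  }
  return recentlyUsed(prefix);
}
```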

web ephemera and web privacy

Yesterday I was twittering about a web page I’d visited on the BBC[1] and the tweet also became my Facebook status[2].  Yanni commented on it, not because of the content of the link, but because he noticed the ‘is.gd’ url was very compact.  Thinking about this has some interesting implications for privacy/security and the kind of things you might want to use different url shortening schemes for, but it also led me to develop an interesting time-wasting application ‘LuckyDip‘ (well, if ‘develop’ is the right word, as it was just 20-30 mins hacking!).

I used the ‘is.gd’ shortening because it was one of three schemes offered by twirl, the twitter client I use.  I hadn’t actually noticed that it was significantly shorter than the others or indeed tinyurl, which is what I might have thought of using without twirl’s interface.

Here is the url of this blog <http://www.alandix.com/blog/> shortened by is.gd and three other services:

snurl:   http://snurl.com/5ot5k
twurl:  http://twurl.nl/ftgrwl
tinyurl:  http://tinyurl.com/5j98ao
is.gd:  http://is.gd/7OtF

The is.gd link is small for two reasons:

  1. ‘is.gd’ is about as short as you can get with a domain name!
  2. the ‘key’ bit after the domain is only four characters as opposed to 5 (snurl) or 6 (twurl, tinyurl)

The former is just clever domain choice; it is hard to get something short at all, let alone short and meaningful[3].

The latter, however, is the result of a design choice at is.gd.  The is.gd urls are allocated sequentially: the ‘key’ bit (7OtF) is simply an encoding of the sequence number that was allocated.  In contrast tinyurl seems to do some sort of hash, either of the address or maybe of a sequence number.
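
To see how little space a sequential scheme needs, here is a rough sketch of the kind of encoding involved; the base and alphabet are assumptions for illustration, as is.gd’s actual scheme is not documented here.

```typescript
// A sketch of the kind of sequential scheme described above: the short 'key' is just
// the allocation number written in a compact base. The exact alphabet and base that
// is.gd uses are not stated here; base 62 is an assumption for illustration.
const ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

function encodeKey(sequenceNumber: number): string {
  let n = sequenceNumber, key = "";
  do {
    key = ALPHABET[n % 62] + key;
    n = Math.floor(n / 62);
  } while (n > 0);
  return key;
}

function decodeKey(key: string): number {
  return [...key].reduce((n, ch) => n * 62 + ALPHABET.indexOf(ch), 0);
}

// Four base-62 characters cover only 62^4 (about 14.8 million) urls, and because
// they are allocated in order, every key below the current counter points at a real page.
console.log(encodeKey(1_000_000));  // "4C92"
```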

The side effect of this is that if you simply type in a random key (below the last allocated sequence number) for an is.gd url it will be a valid url.  In contrast, the space of tinyurl keys is bigger, so ‘in principle’ only about one in a hundred keys will represent real pages … now I say ‘in principle’ because, experimenting with tinyurl, I find every six-character sequence I type as a key gets me to a valid page … so maybe they do some sort of ‘closest’ match.

Whatever url shortening scheme you use, by its nature the shortened url will be less redundant than a full url – more ‘random’ permutations will represent meaningful items.  This is a natural result of any ‘language’: the more concise you are, the less redundant the language.

At a practical level this means that if you use a shortened url, it is more likely that someone  typing in a random is.gd (or tinyurl) key will come across your page than if they just type a random url.  Occasionally I upload large files I want to share to semi-private urls, ones that are publicly available, but not linked from anywhere.  Because they are not linked they cannot be found through search engines and because urls are long it would be highly unlikely that someone typing randomly (or mistyping) would find them.

If however, I use url shortening to tell someone about it, suddenly my semi-private url becomes a little less private!

Now of course this only matters if people are randomly typing in urls … and why would they do such a thing?

Well, a random url on the web is not very interesting in general: there are 100s of millions, and most turn out to be poor product or hotel listing sites.  However, people are only likely to share interesting urls … so random choices of shortened urls are actually a lot more interesting than random web pages.

So, just for Yanni, I spent a quick 1/2 hour[4] and made a web page/app ‘LuckyDip‘.  This randomly chooses a new page from is.gd every 20 seconds – try it!


successive pages from LuckyDip
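
For the curious, something in the spirit of LuckyDip can be sketched in a few lines. This is not the original code; the key length, character set and frame id are all guesses.

```typescript
// A rough re-creation of the LuckyDip idea (not the original code): pick a random
// short key, hope it falls below is.gd's allocation counter, and load it into a
// frame every 20 seconds.
const CHARS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

function randomKey(length: number): string {
  let key = "";
  for (let i = 0; i < length; i++) {
    key += CHARS[Math.floor(Math.random() * CHARS.length)];
  }
  return key;
}

function showRandomPage(): void {
  const frame = document.getElementById("luckydip") as HTMLIFrameElement; // hypothetical frame id
  frame.src = "http://is.gd/" + randomKey(4);
}

showRandomPage();
setInterval(showRandomPage, 20_000);  // a new random page every 20 seconds
```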

Some of the pages are in languages I can’t read, occasionally you get a broken link, and the ones that are readable are … well … random … but oddly compelling.  They are not the permanently interesting pages you choose to bookmark for later, but the odd page you want to send to someone … often trivia, news items, even (given is.gd is in a twitter client) the odd tweet page on the twitter site.  These are not like the top 20 sites ever, but the ephemera of the web – things that someone at some point thought worth sharing, like overhearing the odd raised voice during a conversation in a train carriage.

Some of the pages shown are map pages, including ones with addresses on … it feels odd, voyeuristic, web curtain twitching – except you don’t know the person, the reason for the address; so maybe more like sitting watching people go by in a crowded town centre, a child cries, lovers kiss, someone’s newspaper blows away in the wind … random moments from unknown lives.

In fact most things we regard as private are not private from everyone.  It is easy to see privacy like an onion skin: the inner sanctum, then those further away, and then complete strangers – the further away someone is from ‘the secret’, the more private something is.  This is certainly the classic model in military security.  However, think further and there are many things you would be perfectly happy for a complete stranger to know, but maybe not those a little closer: your work colleagues, your commercial competitors.  The onion sort of reverses: apart from those that you explicitly want to know, the further out of the onion someone is, the safer it is.  Of course this can go wrong sometimes, as Peter Mandelson found out chatting to a stranger in a taverna (see BBC blog).

So I think LuckyDip is not too great a threat to the web’s privacy … but do watch what you share with short urls … maybe the world needs a url lengthening service too …

And as a postscript … last night I was trying out the different shortening schemes available from twirl, and accidentally hit return, which created a tweet with the ‘test’ short url in it.  Happily you can delete tweets, and so I thought I had eradicated the blunder unless any twitter followers happened to be watching at that exact moment … but I forgot that my twitter feed also goes to my Facebook status and that deleting the tweet on twitter did not remove the status, so overnight the slip was my Facebook status and at least one person noticed.

On the web nothing stays secret for long, and if anything is out there, it is there for ever … and will come back to haunt you someday.

  1. This is the tweet: “Just saw http://is.gd/7Irv Sad state of the world is that it took me several paragraphs before I realised it was a joke.”
  2. I managed to link them up some time ago, but cannot find again the link on twitter that enabled this, so would be stuck if I wanted to stop it!
  3. anyone out there registering Bangladeshi domains … if ‘is’ is available!!
  4. yes, it should have been less, but I had to look up how to access frames in javascript, etc.

Coast to coast: St Andrews to Tiree

A week ago I was in St Andrews on the east coast of Scotland delivering three lectures on “Human Computer Interaction: as it was, as it is and as it may be” as part of their distinguished lecture series and now I am in Tiree in the wild western ocean off the west coast.

I had a great time in St Andrews and was well looked after by some I knew already (Ian, Gordan, John and Russell), and also met many new people. I ate good food and stayed in a lovely hotel overlooking the sea (and golf course) and full of pictures of golfers (well, what do you expect in St Andrews).

For the lectures, I was told the general pattern was one lecture about the general academic area, one on the ‘state of the art’ and one about my own stuff … hence the three parts of the title!  Ever one for cutesy titles, I then called the individual lectures “Whose Computer Is It Anyway”, “The Great Escape” and “Connected, but Under Control, Big, but Brainy?”.

The first lecture was about the fact that computers are always ultimately for people (surprise surprise!) and I used Ian’s slight car accident on the evening before the lecture as a running example (sorry Ian).

The second lecture was about the way computers have escaped the office desktop and found their way into the physical world of ubiquitous computing, the digital world of the web and into our everyday lives in our homes, increasingly becoming the hub of our social lives too.  Matt Oppenheim did some great cartoons for this and I’m going to use them again in a few weeks when I visit Dublin to do the inaugural lecture for SIGCHI Ireland.

for 20 years the computer is chained to the office desktop (image © Matt Oppenheim)

... now escapes: out into the world, spreading across the net, in the home, in our social lives (image © Matt Oppenheim)

The last lecture was about intelligent internet stuff, similar to the lecture I gave at Aveiro a couple of weeks back … mentioning again the fact that the web now has the same information storage and processing capacity as a human brain[1] … it always makes people think … well, at least it always makes ME think about what it means to be human.

… and now … in Tiree … sun, wild wind, horizontal hail, and paddling in the (rather chilly) sea at dawn

  1. see the brain and the web

web of data practitioners days

I am at the Web of Data Practitioners Days (WOD-PD 2008) in Vienna.  A mixture of talks and guided hands-on sessions.  I presented the first half of a session on “Using the Web of Data” this morning, with a focus (surprise) on the end user.  Learnt loads about some of the applications out there – in fact Richard Cyganiak …  Interesting talk from a guy at the BBC about the way they are using RDF to link the currently disconnected parts of their web and also archives.  Jana Herwig from the Semantic Web Company has been live blogging the event.

Being here has made me think about the different elements of SemWeb technology and how they individually contribute to the ‘vision’ of Linked Data.  The aim is to be able to link different data sources together.  For this, having some form of shared/public vocabulary or ‘data definitions’ is essential, as is some relatively uniform way of accessing data.  However, the implementation using RDF or the use of SPARQL etc. seems to be secondary: useful for some data, but not for other forms of data where tabular representations may be more appropriate.  Linking these different representations together seems far more important than the specific internal representations.  So I am wondering whether there is a route to linked data that allows a more flexible interaction with existing data and applications as well as ‘sucking’ this data into the SemWeb.  Can the vocabularies generated for the SemWeb be used as meta information for other forms of information, and can query/access protocols be designed that leverage this, but include a broader range of data types?
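
As a thought experiment, the sketch below shows one way shared vocabularies could act as meta information over plain tabular data. The table, the column mapping and the helper function are all invented for illustration, though the FOAF and Dublin Core term URIs are real.

```typescript
// A sketch of the idea above: keep the data tabular, but attach shared vocabulary
// terms (FOAF and Dublin Core) as column-level meta information, so the table can
// be lined up against RDF sources without itself being stored as triples.

interface ColumnMeta {
  name: string;      // column name in the table
  property: string;  // URI of a term from a shared vocabulary
}

const columns: ColumnMeta[] = [
  { name: "author",   property: "http://xmlns.com/foaf/0.1/name" },
  { name: "title",    property: "http://purl.org/dc/elements/1.1/title" },
  { name: "homepage", property: "http://xmlns.com/foaf/0.1/homepage" },
];

const rows = [
  { author: "A. N. Other", title: "An example report", homepage: "http://example.org/" },
];

// Another application (or a SPARQL wrapper) can ask which column carries a given
// vocabulary term, rather than needing to know this table's local column names.
function columnFor(propertyUri: string): string | undefined {
  return columns.find(c => c.property === propertyUri)?.name;
}

console.log(columnFor("http://purl.org/dc/elements/1.1/title"), rows[0].title);  // "title" "An example report"
```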

From raw experience to personal reflection

Just a week to go to the deadline for the workshop on Designing for Reflection on Experience that Corina and I are organising at CHI.  Much of the time discussions of user experience are focused on trivia, and even social networking often appears to stop at superficial levels.  While throwing a virtual banana at a friend may serve to maintain relationships, and is perhaps less trivial than it at first appears, there is still little support for deeper reflection on life, with the possible exception of the many topic-focused chat groups.  However, in researching social networks we have found, amongst the flotsam, clear moments of poignancy and conflict, traces of major life events … even divorce by Facebook.  Too much navel gazing would not be a good thing, but some attention to expressing deeper issues to others and to ourselves seems overdue.

Comics and happy problem solving

I am in Eindhoven doing CSCW, silly ideas and other things with the USI students here.  On the book shelf here is Scott McCloud’s “Understanding Comics”.  I picked this up last year and couldn’t put it down until I had read it all.  There is another book on the shelves this year, “Reinventing Comics”, and I daren’t pick it up until I’ve done all the work I want to do today!

Understanding Comics is both an apologetic for comics as an art form and also an exploration into what makes a comic a comic and how comics manage to captivate and give a sense of narrative and action through what are basically static images. As well as being a good read about comics and about art there seem to be many lessons there for other forms of narrative and animation especially on the web.

As far as I can see (without starting to read it and not being able to stop), Reinventing Comics seems to be about the way online delivery through the web is giving new opportunities for comic art … but maybe when I finish everything today I will find out.

Less graphic and less fun, but no less fascinating, I have been dipping into chapters of “The Psychology of Problem Solving“, which was also sitting on the USI shelves.  I was particularly enthralled by descriptions of experiments where subjects were asked to accomplish divergent thinking tasks whilst either pushing their palms upwards from under a table, or pushing down from on top.  The former, a positive ‘come to me’ gesture, elicited more diverse ideas than the latter, negative ‘go away’ gesture, even though the only difference was the muscle groups in tension.  I’ve seen other research that shows how our brains monitor our body state to ‘see how we feel’ (like smiling therapy), but this was one of the most subtle and conclusive.

During the week I have had the USI students work through a design brief, starting with silly ideas then moving through structured analysis to good ideas.  Perhaps I should have had them pushing up on tables in the first part and down in the second?