Some lessons in extended interaction, courtesy Adobe

I use various Adobe products, especially Dreamweaver, and want to get the newest version of Creative Suite.  This is not cheap, even at academic prices, so you might think Adobe would want to make it easy to buy their products, but life on the web is never that simple!

As you can guess, a number of problems ensued: some easily fixable, some demonstrating why effective interaction design is not trivial and why apparently good choices can lead to disaster.

There is a common thread.  Most usability advice focuses on the time we are actively using a system – obviously enough – however, most of the problems I faced were about the extended use of the system, the way individual periods of use link together.  Issues of long-term interaction have been an interest of mine for many years1 and have recently come to the fore in work with Haliyana, Corina and others on social networking sites and the nature of ‘extended episodic experience’.  However, there is relatively little in the research literature or in practical guidelines on such extended interaction, so problems are perhaps to be expected.

First the good bit – the Creative ‘Suite’ includes various individual Adobe products and comes in several variants (Design/Web, Standard/Premium); however, there is a great page comparing them all … I was able to choose which version I needed, go to the academic purchase page, and then send a link to the research administrator at Lancaster so she could order it.  So far so good, 10 out of 10 for Adobe …

To purchase as an academic you quite reasonably have to send proof of academic status.  In the past a letter from the dept. on headed paper was deemed sufficient, but now they ask for a photo ID.  I am still not sure why this is needed; I wasn’t applying in person, so how could a photo ID help?  My only photo ID is my passport and, with security issues and identity theft constantly in the news, I was reluctant to send a fax of that (do US homeland security know that Adobe, a US company, are demanding this and thus weakening border controls?).

After double checking all the information and FAQs on the site, I decided to contact customer support …

Phase 1 customer support

The site had a “contact us” page and under “Customer service online”, there is an option “Open new case/incident”:

… not exactly everyday language, but I guessed this meant “send us a message” and proceeded. After a few more steps, I got to the enquiry web form, asked whether there was an alternative or, if I sent a fax of the passport, whether I could blot out the passport number, and submitted the form.

Problem 1: The confirmation page did not say what would happen next.  In fact they send an email when the query is answered, but as I did not know that, I had to check the site periodically during the rest of the day and the following morning.

Lesson 1: Interactions often include ‘breaks’, when things happen over a longer period.  When there is a ‘break’ in interaction, explain the process.

Lesson 1 can be seen as a long-term equivalent of standard usability principles to offer feedback, or of Nielsen’s heuristic “Visibility of system status”, but this design advice is normally taken to refer to immediate interactions and what has already happened, not to what will happen in the longer term.  Even principles of ‘predictability’ are normally phrased in terms of knowing what I can do to the system and how it will respond to my actions, and are not formulated clearly for when the system takes autonomous action.

In terms of status-event analysis, they quite correctly generated an interaction event for me (the mail arriving) to notify me of the change of status of my ‘case’.  It was just that they hadn’t explained that this is what they were going to do.

Anyway the next day the email arrived …

Problem 2: The mail’s subject was “Your customer support case has been closed”.  Within the mail there was no indication that the enquiry had actually been answered (it had), nor a link to the location on the site to view the ‘case’ (I had to login and navigate to it by hand), just a general link to the customer ‘support’ portal and a survey to convey my satisfaction with the service (!).

Lesson 2.1: The email is part of the interaction. So apply ‘normal’ interaction design principles, such as Nielsen’s “speak the users’ language” – in this case “case has been closed” does not convey that it has been dealt with, but sounds more like it has been ignored.

Lesson 2.2: Give clear information in the email – don’t demand a visit to the site. The eventual response to my ‘case’ on the web site was entirely textual, so why not simply include it in the email?  In fact, the email included a PDF attachment that started off identically to the email body, so I assumed it was a copy of the same information … but it turned out to have the response in it.  So they had given me the information, just not told me they had!

Lesson 2.3: Except where there is a security risk – give direct links not generic ones. The email could easily have included a direct link to my ‘case’ on the web site; instead I had to navigate to it.  Furthermore, the link could have included an authentication key so that I wouldn’t have to look up my Adobe user name and password (I of course needed to create a web site login in order to make the query).
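To make this concrete, here is a minimal sketch of the kind of email I have in mind, in TypeScript – the names, URL and token scheme are all hypothetical, certainly not Adobe’s actual system:

import { randomBytes } from "crypto";

// Hypothetical sketch: build a notification email with a direct,
// pre-authenticated link to the user's support case.
function caseUpdateEmail(caseId: string, userEmail: string): string {
  // A short-lived, single-purpose token, stored server-side against
  // the case; it grants access to this one page, not the whole account.
  const token = randomBytes(16).toString("hex");
  // saveToken(caseId, token, expiry) would go here in a real system.
  return [
    `To: ${userEmail}`,
    `Subject: Your support case ${caseId} has been answered`,
    ``,
    `We have replied to your enquiry. Read the response here:`,
    `https://support.example.com/cases/${caseId}?key=${token}`,
  ].join("\n");
}

Note that the subject line also speaks the user’s language (Lesson 2.1): ‘answered’, not ‘closed’.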

In fact there are sometimes genuine security reasons for NOT doing this.  One is if you are uncertain of the security of the email system or recipient address, but in this case Adobe are happy to send login details by email, so clearly trust the recipient. Another is to avoid establishing user behaviours that are vulnerable to ‘phishing’ attacks.  In fact I get annoyed when banks send me emails with direct links to their site (some still do!), rather than asking you to visit the site and navigate: if users get used to navigating via email links and then entering login credentials, this is an easy way for malicious emails to harvest personal details. Again, in this case Adobe had other URLs in the email, so this was not their reason.  However, if it had been …

Lesson 2.4: If you are worried about security of the channel, give clear instructions on how to navigate the site instead of a link.

Lesson 2.5: If you wish to avoid behaviour liable to phishing, do not include direct links to your site in emails.  However, do give the user a fast-access reference number to cut-and-paste once they have navigated to the site manually.

Lesson 2.6: As a more general lesson, understand security and privacy risks.  Often systems demand security procedures that are unnecessary (forcing me to re-authenticate), but omit the ones that are really important (making me send a fax of my passport).

Eventually I re-navigated the Adobe site and found the details of my ‘case’ (which, had I realised, was also in the PDF attached to the email).

Problem 3: The ‘answer’ to my query was a few sections cut-and-pasted from the academic purchase FAQ … which I had already read before making the enquiry.  In particular it did not answer my specific question even to say “no”.

Lesson 3.1: The FAQ sections could easily have been identified automatically the day before. If there is going to be a delay in human response, where possible offer an immediate automatic response. If this includes a means to say whether it has answered the query, then a human response may not be needed (saving money!), or can at least take into account what the user already knows.
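As a sketch of what such an immediate automatic response might look like – deliberately naive keyword matching, in TypeScript with invented types; a real system could of course do better:

interface FaqEntry { question: string; answer: string; }

// Naive sketch: score each FAQ entry by how many of the enquiry's
// words it contains, and suggest the best matches straight away.
function suggestFaq(enquiry: string, faq: FaqEntry[], topN = 3): FaqEntry[] {
  const words = new Set(enquiry.toLowerCase().match(/[a-z]+/g) ?? []);
  const score = (e: FaqEntry): number => {
    const text = (e.question + " " + e.answer).toLowerCase();
    let hits = 0;
    for (const w of words) if (text.includes(w)) hits++;
    return hits;
  };
  // The confirmation page would show these at once, with a
  // "did this answer your question?" yes/no underneath.
  return [...faq].sort((a, b) => score(b) - score(a)).slice(0, topN);
}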

Lesson 3.2: For human interactions – read what the user has said. Seems like basic customer service … This is a training issue for human operators, but reminds us that:

Lesson 3.3: People are part of the system too.

Lesson 3.4: Do not ‘close down’ an interaction until the user says they are satisfied. Again basic customer service, but whereas 3.2 is a human training issue, this is about the design of the information system: the user needs some way to say whether or not the answer is sufficient.  In this case, the only way to re-open the case is to ring a full-cost telephone support line.

Phase 2 customer feedback survey

As I mentioned, the email also had a link to a web survey:

In an effort to constantly improve service to our customers, we would be very
interested in hearing from you regarding our performance.  Would you be so
kind to take a few minutes to complete our survey?   If so, please click here:

Yes I did want to give Adobe feedback on their customer service! So I clicked the link and was taken to a personalised web survey.  I say ‘personalised’ in that the link included a reference to the customer support case number, but thereafter the form was completely standard and had numerous multi-choice questions completely irrelevant to an academic order.  I lost count of the pages, each with dozens of tick boxes – I think there were around 10, but there may have been more … and it certainly felt like more.  Only on the last page was there a free-text area where I could say what the real problem was. I only persevered because I was already so frustrated … and was more so by the time I got to the end of the survey.

Problem 4: Lengthy and largely irrelevant feedback form.

Lesson 4.1: Adapt surveys to the user, don’t expect the user to adapt to the survey! The ‘case’ originated in the education part of the web site, and the selections I made when creating the ‘case’ narrowed this down further to a purchasing enquiry; it would be so easy to remove many of the questions based on this. Actually, if the form had even said in text “if your support query was about X, please answer …” I would at least have known what to skip!
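A sketch of how little machinery this needs (TypeScript, with hypothetical types): tag each survey question with the case categories it applies to, and filter on the category the ‘case’ already carries:

interface Question {
  text: string;
  categories: string[]; // case categories this applies to; empty = all
}

// The 'case' already carries a category such as "education-purchasing"
// from the enquiry form, so the survey can simply drop questions
// that do not apply to it.
function questionsFor(caseCategory: string, all: Question[]): Question[] {
  return all.filter(q =>
    q.categories.length === 0 || q.categories.includes(caseCategory));
}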

Lesson 4.2: Make surveys easy for the user to complete: limit length and offer fast paths. If a student came to me with a questionnaire or survey that long I would tell them to think again.  If you want someone to complete a form it has to be easy to do so – by all means have longer sections so long as the user can skip them and get to the core issues. I guess cynically making customer surveys difficult may reduce the number of recorded complaints 😉

Phase 3 the order arrives

Back to the story: the customer support answer told me no more than I knew before, but I decided to risk faxing the passport (with the passport number obscured) as my photo ID, and (after some additional phone calls by the research administrator at Lancaster!), the order was placed and accepted.

When I got back home on Friday, the box from Adobe was waiting 🙂

I opened the plastic shrink-wrap … and only then noticed that on the box it said “Windows” 🙁

I had sent the research administrator a link to the product, so had I accidentally sent a link to the Windows version rather than the Mac one?  Or was there a point later in the purchasing dialogue where she had had to say which OS was required and had not realised I used a Mac?

I went back to my mail to her and clicked the link:

The “Platform” field clearly says “Mac”, but it is actually a selection field:

It seemed odd that the default value was “Mac” … why not “CHOOSE A PLATFORM”?  I wondered if it was remembering a previous selection I had made, so I tried the URL in Safari … and it looked the same.

… then I realised!

The web form was being ‘intelligent’ and had detected that I was on a Mac and so set the field to “Mac”.  I had then sent the URL to the research administrator, and on her Windows machine it would have defaulted to “Windows”.  She quite sensibly assumed that the URL I sent her was for the product I wanted and ordered it.
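Presumably something like the following lay behind the form – a guess at the technique, certainly not Adobe’s actual code:

// Sniff the visitor's OS from the browser and pre-select it in the form.
function detectedPlatform(): "Mac" | "Windows" {
  return /mac/i.test(navigator.platform) ? "Mac" : "Windows";
}

const platformField =
  document.querySelector<HTMLSelectElement>("#platform"); // hypothetical id
if (platformField) {
  platformField.value = detectedPlatform();
  // The catch: two people following the same URL now see different
  // 'defaults', so the URL no longer identifies a single product.
}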

In fact, offering smart defaults is good web design advice, so what went wrong here?

Problem 5: What I saw and what the research administrator saw were different, leading to ordering the wrong product.

Lesson 5.1: Defaults are also dangerous. If there are defaults the user will probably agree to them without realising there was a choice.  We are talking about a £600 product here; that is a lot of room for error.  For very costly decisions, this may mean not having defaults and forcing the user to choose, but perhaps making it easy to choose the default (e.g. putting the default at the top of the menu).
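For example, a sketch (not a prescription) of a menu that refuses a silent default but still puts the detected platform first:

// No silent default for a costly choice: the user must pick, but the
// detected platform is offered first so the common case stays easy.
function platformOptions(detected: "Mac" | "Windows"): string[] {
  const other = detected === "Mac" ? "Windows" : "Mac";
  return ["-- choose a platform --", detected, other];
}
// A form using this would refuse to submit while the first,
// sentinel entry is still selected.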

Lesson 5.2: If we decide the advantages of the default outweigh the disadvantages, then we need to make defaulted information obvious (e.g. highlight, special colour) and possibly warn the user (one of those annoying “did you really mean” dialogue boxes! … but hey, for £600 it may be worth it).  In the case of an e-commerce system we could even track this through the system and keep inferred information highlighted (unless explicitly confirmed) all the way through to the final order form. Leading to …

Lesson 5.3: Retain provenance.  Automatic defaults are relatively simple ‘intelligence’, but as more forms of intelligent interaction emerge it will become more and more important to retain the provenance of information – what came explicitly from the user, what was inferred and how.  Neither current database systems nor emerging semantic web infrastructure make this easy to achieve internally, so new information architectures are essential.  Even if we retain this information, we do not yet fully understand the interaction and presentation mechanisms needed for effective user interaction with inferred information, as this story demonstrates!
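As a hint of what retaining provenance might look like in code – a TypeScript sketch with invented names:

// Every field remembers whether it came explicitly from the user
// or was inferred, and from what.
type Provenance =
  | { source: "user" }
  | { source: "inferred"; from: string };

interface TrackedValue<T> {
  value: T;
  provenance: Provenance;
}

const platform: TrackedValue<string> = {
  value: "Mac",
  provenance: { source: "inferred", from: "browser platform sniffing" },
};

// At the final order form anything still inferred is highlighted and
// must be explicitly confirmed before the purchase goes through.
function needsConfirmation<T>(v: TrackedValue<T>): boolean {
  return v.provenance.source === "inferred";
}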

Lesson 5.4: The URL is part of the interaction2.  I mailed a URL believing it would be interpreted the same everywhere, but in fact its meaning was relative to context.  This can be problematic even for ‘obviously’ personalised pages like a Facebook home page, which always comes out as your own home page and so looks different to each viewer.  However, a stable interpretation is essential when someone might want to bookmark or mail the link.

This last point has always been one of the problems with framed sites and is getting more problematic with AJAX.  Ideally when dynamic content changes on the web page the URL should change to reflect it.  I had mistakenly thought this impossible without forcing a page reload, until I noticed that the multimap site does this.

The map location at the end of the URL changes as you move around the map.  It took me still longer to work out that this is accomplished because changing the part of the URL after the hash (sometimes called the ‘fragment’ and accessed in Javascript via location.hash) does not force a page reload.
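Here is a minimal sketch of the technique (the restoreView function is a hypothetical application-specific hook):

// The multimap-style trick: writing to location.hash updates the URL
// (so it can be bookmarked or mailed) without forcing a page reload.
function reflectStateInUrl(state: string): void {
  location.hash = encodeURIComponent(state);
}

declare function restoreView(state: string): void; // app-specific, hypothetical

// On arrival, or whenever the hash changes, rebuild the dynamic content
// from the fragment so a shared link reproduces what the sender saw.
window.addEventListener("hashchange", () => {
  const state = decodeURIComponent(location.hash.slice(1));
  if (state) restoreView(state);
});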

If this is too complicated, then it is relatively easy to use Javascript to update some sort of “use this link” or “link to this page” URL, whether for frame-based sites, those using web form elements, or even AJAX. In fact, multimap does this as well!

Lesson 5.5: When you have dynamic page content update the URL or provide a “link to this page” URL.

Extended interaction

Some of these problems should have been picked up by normal usability testing. It is reasonable to expect problems with individual web sites or low-budget sites of small companies or charities.  However, large corporate sites like Adobe or central government have large budgets and a major impact on many people.  It amazes and appals me how often even the simplest things are done so sloppily.

However, as mentioned at the beginning, many of the problems and lessons above are about extended interaction: multiple visits to the site, emails between the site and the customer, and emails between individuals.  None of my interactions with the site were unusual or complex, and yet there seems to be a systematic lack of comprehension of this longer-term picture of usability.

As noted also at the beginning, this is partly because there is scant design advice on such interactions.  Norman has discussed “activity centred design”, but he still focuses on the multiple interactions within a single session with an application.  Activity theory takes a broader and longer-term view, but tends to focus more on the social and organisational context, whereas the story here shows there is also a need for detailed interaction design advice.  The work I mentioned with Haliyana and Corina has been about the experiential aspects of extended interaction, but the problems on the Adobe site were largely at a functional level (I never got so far as appreciating an ‘experience’, except a bad one!). So there is clearly much more work to be done here … any budding PhD students looking for a topic?

However, as with many things, once one thinks about the issue, some strategies for effective design start to become obvious.

So as a last lesson:

Overall Lesson: Think about extended interaction.

[ See HCI Book site for other ‘War Stories‘ of problems with popular products. ]

  1. My earliest substantive work on long-term interaction was papers at HCI 1992 and 1994 on “Pace and interaction” and “Que sera sera – The problem of the future perfect in open and cooperative systems”, mostly focused on communication and work flows.  The best summative work on this strand is in a 1998 journal paper “Interaction in the Large” and a more recent book chapter “Trigger Analysis – understanding broken tasks”.[back]
  2. This is of course hardly new: see Nielsen’s “URL as UI” and James Gardner’s “Best Practice for Good URL Structures” for expositions of the general principle, although neither addresses the particular problems here. Many sites still violate even simple design advice like W3C’s “Cool URIs don’t change”.  For example, even the BCS’ eWIC series of electronic proceedings have URLs of the form “www.bcs.org/server.php?show=nav.10270”; it is hard to believe that “show=nav.10270” will persist beyond the next web site upgrade 🙁 [back]

Why did the dinosaur cross the road?

A few days ago our neighbour told us this joke:

“Why did the dinosaur cross the road?”

It reminded me yet again of the incredible richness of apparently trivial day-to-day thought.  Not the stuff of Wittgenstein or Einstein, but the ordinary things we think as we make our breakfast or chat to a friend.

There is a whole field of study looking at computational humour, including its use in user interfaces1, and also on the psychology of humour dating back certainly as far as Freud, often focusing on the way humour involves breaking the rules of internal  ‘censors’ (logical, social or sexual) but in a way that is somehow safe.

Of course, breaking things is often the best way to understand them.  As Graeme Ritchie wrote2:

“If we could develop a full and detailed theory of how humour works, it is highly likely that this would yield interesting insights into human behaviour and thinking.”

In this case the joke starts to work, even before you hear the answer, because of the associations with its obvious antecessor3 as well as a whole genre of question/answer jokes: “how did the elephant get up the tree?”4, “how did the elephant get down from the tree?”5.  We recall past humour (and so neurochemically are set in a humorous mood), we know it is a joke (so are socially prepared to laugh), and we know it will be silly in a perverse way (so are cognitively prepared).

The actual response was, however, far more complex and rich than is typical for such jokes.  In fact so complex that I felt an almost palpable delay before recognising its funniness; the incongruity of the logic is close to the edge of what we can recognise without the aid of formal ‘reasoned’ arguments.  And perhaps more interesting, the ‘logic’ of the joke (and of most jokes) and the way that logic ‘fails’ is not recognised in calm reflection, but in an instant, revealing complexity below the level of immediate conscious thought.

Indeed in listening to any language, not just jokes, we are constantly involved in incredibly rich, multi-layered and typically modal thinking6. Modal thinking is at the heart of simple planning and decision making (“if I have another cake I will have a stomach ache”), and when I have studied and modelled regret7, the interaction of complex “what if” thinking with emotion is central … just as in much humour.  In this case we have to do an extraordinary piece of counterfactual thought even to hear the question, positing a state of the world where a dinosaur could be right there, crossing the road before our eyes.  Instead of asking the question “how on earth could a dinosaur be alive today?”, we are asked to ponder the relatively trivial question of why it is doing what would be, in the situation, a perfectly ordinary act.  We are drawn into a set of incongruous assumptions before we even hear the punch line … just like the way an experienced orator will draw you along to the point where you forget how you got there and accept conclusions that would otherwise be unthinkable.

In fact, in this case the punch line draws some of its strength from forcing us to rethink even this counterfactual assumption of the dinosaur now and reframe it into a road then … and once it has done so, simply stating the obvious.

But the most marvellous and complex part of the joke is its reliance on perverse causality at two levels:

temporal – things in the past being in some sense explained by things in the future8.

reflexive – the explanation being based on the need to fill roles in another joke9.

… and all of this multi-level, modal and counterfactual cognitive richness in 30 seconds chatting over the garden gate.

So, why did the dinosaur cross the road?

“Because there weren’t any chickens yet.”

  1. Anton Nijholt in Twente has studied this extensively and I was on the PC for a workshop he organised on “Humor modeling in the interface” some years ago, but in the end I wasn’t able to attend :-([back]
  2. Graeme Ritchie (2001) “Current Directions in Computer Humor”, Artificial Intelligence Review. 16(2): pages 119-135[back]
  3. … and in case you haven’t ever heard it: “why did the chicken cross the road?” – “because it wanted to get to the other side”[back]
  4. “Sit on an acorn and wait for it to grow”[back]
  5. “Stand on a leaf and wait until autumn”[back]
  6. Modal logic is any form of reasoning that includes thinking about other possible worlds, including the way the world is at different times, beliefs about the world, or things that might be or might have been.  For further discussion of the modal complexity of speech and writing, see my Interfaces article about “writing as third order experience“[back]
  7. See “the adaptive significance of regret” in my essays and working papers[back]
  8. The absence of chickens in prehistoric times is sensible logic, but the dinosaur’s action is ‘because’ they aren’t there – not just violating causality, but based on the absence.  However, writing about history, we might happily say that Roman cavalry was limited because they hadn’t invented the stirrup. Why isn’t that a ridiculous sentence?[back]
  9. In this case the dinosaur is in some way taking the role of the absent chicken … and crossing the Jurassic road ‘because’ of the need to fill the role in the joke.  Our world of the joke has to invade the dinosaur’s world within the joke.  Such complex modal thinking … yet so everyday.[back]

programming as it could be: part 1

Over a cup of tea in bed I was pondering the future of business data processing and also general programming. Many problems of power-computing, like web programming or complex algorithmics, and also of end-user programming, seem to stem from assumptions embedded in the heart of what we consider a programming language, many of which effectively date from the days of punch cards.

Often the most innovative programming/scripting environments, Smalltalk, Hypercard, Mathematica, humble spreadsheets, even (for those with very long memories) Filetab, have broken these assumptions, as have whole classes of ‘non-standard’ declarative languages.  More recently Yahoo! Pipes and Scratch have re-introduced more graphical and lego-block style programming to end-users (albeit in the case of Pipes slightly techie ones).

[Images: Yahoo! Pipes (from the Wikipedia article); Scratch programming using blocks]

What would programming be like if it were more incremental, more focused on live data, less focused on the language and more on the development environment?

Two things have particularly brought this to mind.

First was the bootcamp team I organised at the Winter School on Interactive Technologies in Bangalore1.  At the bootcamp we were considering “content development through the keyhole”, inspired by a working group at the Mobile Design Dialog conference last April in Cambridge.  The core issue was how one could enable near-end-use development in emerging markets where the dominant, or only, available computation is the mobile phone.  The bootcamp designs focused on media content development, but one of the things we briefly discussed was full code development on a mobile screen (not so impossible – after all, home computers used to be 40×25 chars!), and where literate programming might offer some solutions, not for its original aim of producing code readable by others2, but instead to allow very succinct code that is readable by the author.

if ( << input invalid >> )
    << error handling code >>
else
    << update data >>

(example of simple literate programming)

The second is that I was doing a series of spreadsheets to produce some Fitts’ Law related modelling.  I could have written the code in Java and run it to produce outputs, but the spreadsheets were more immediate, allowed me to get the answers I needed when I needed them, and didn’t separate the code from the outputs (there were few inputs, just a number of variable parameters).  However, complex spreadsheets get unmanageable quickly, notably because the only way to abstract is to drop into the level of complex spreadsheet formulae (not the most readable code!) or VB scripting.  But when I have made spreadsheets that embody calculations, why can’t I ‘abstract’ them rather than writing fresh code?

I have entitled this blog ‘part 1’ as there is more to discuss than I can manage in one entry!  However, I will return and focus on each of the above in turn, but in particular questioning some of those assumptions embodied in current programming languages:

(a) code comes before data

(b) you need all the code in place before you can run it

(c) abstraction is about black boxes

(d) the programming language and environment are separate

In my PPIG keynote last September I noted how programming as an activity has changed: it has become more dynamic, more incremental, but probably also less disciplined.  Through discussions with friends, I am also aware of some of the architectural and efficiency problems of web programming due to the opacity of code, and long-standing worries about the dominance of limited models of objects3.

So what would programming be like if it supported these practices, but in ways that used the power of the computer itself to help address some of the problems that arise when these practices address issues of substantial complexity?

And can we allow end-users to more easily move seamlessly from filling in a spreadsheet, to more complex scripting?

  1. The winter school was part of the UK-India Network on Interactive Technologies for the End-User.  See also my blog “From Anzere in the Alps to the Taj Bangalore in two weeks“[back]
  2. such as Knuth‘s “TeX: the program” book consisting of the full source code for TeX presented using Knuth’s original literate programming system WEB.[back]
  3. I have often referred to object-oriented programming as ‘western individualism embodied in code’.[back]

Searle’s wall, computation and representation

I have been reading a bit more of Brian Cantwell Smith’s “On the Origin of Objects” and he refers (p.30-31) to Searle‘s wall that, according to Searle, can be interpreted as implementing a word processor.  This all hinges on predicates introduced by Goodman such as ‘grue’, meaning “green if examined before time t or blue if examined after”:

grue(x) = if ( now() < t ) green(x)
          else blue(x)

The problem is that an emerald apparently changes state from grue to not grue at time t, without any work being done.  Searle’s wall is just an extrapolation of this so that you can interpret the state of the wall at a time to be something arbitrarily complex, but without it ever changing at all.

This issue of the fundamental nature of computation has long seemed to me the ‘black hole’ at the heart of our discipline (I’ve alluded to this before in “What is Computing?“).  Arguably we don’t understand information much either, but at least we can measure it – we have a unit, the bit; but with computation we cannot even measure it except with reference to a specific implementation architecture, whether Turing machine or Intel Core.  Common sense (or at least programmer’s common sense) tells us that any given computational device has only so much computational ‘power’ and that any problem has a minimum amount of computational effort needed to solve it, but we find this hard to quantify precisely.  However, by Searle’s argument we can do arbitrary amounts of computation with a brick wall.

For me, a defining moment came about 10 years ago.  I recall I was in Loughborough for an examiner’s meeting, and clearly looking through MSc scripts had lost its thrill as I was daydreaming about computation (as one does).  I was thinking about the relationship between computation and representation, and in particular about a fast (I think the fastest) way to do multiplication of very large numbers, the Schönhage–Strassen algorithm.

If you’ve not come across this, the algorithm hinges on the fact that multiplication is a form of convolution (sum of a[i] * b[n-i]) and a Fourier transform converts convolution into pointwise multiplication (simply a[i] * b[i]). The algorithm looks something like:

1. represent numbers, a and b, in base B (for suitable B)
2. perform FFT on a and b to give af and bf
3. perform pointwise multiplication on af and bf to give cf
4. perform inverse FFT on cf to give cfi
5. tidy up cfi a bit by doing carries etc. to give c
6. c is the answer (a*b) in base B
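To see why multiplication really is a convolution, here is the naive schoolbook version that steps 2–4 exist to replace – a TypeScript sketch multiplying digit arrays, least significant digit first:

// Naive O(N²) long multiplication written as an explicit convolution:
// c[n] = sum over i of a[i] * b[n-i], followed by the carry tidy-up of
// step 5. The FFT route replaces this double loop with O(n) pointwise
// multiplications between the two changes of representation.
function multiply(a: number[], b: number[], base: number): number[] {
  const c: number[] = new Array(a.length + b.length).fill(0);
  for (let i = 0; i < a.length; i++)
    for (let j = 0; j < b.length; j++)
      c[i + j] += a[i] * b[j]; // the convolution
  for (let n = 0; n < c.length - 1; n++) { // carries, as in step 5
    c[n + 1] += Math.floor(c[n] / base);
    c[n] %= base;
  }
  return c; // digits of a*b in base B, least significant first
}
// e.g. multiply([3,2,1], [5,4], 10) gives [5,3,5,5,0] – 123 × 45 = 5535.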

In this the heart of the computation is the pointwise multiplication at step 3; this is what ‘makes it’ multiplication.  However, this is a particularly extreme case where the change of representation (steps 2 and 4) makes the computation easier. What had been a quadratic O(N²) convolution is now a linear O(N) number of pointwise multiplications (strictly O(n), where n = N/log(B)). This change of representation is in fact so extreme that the ‘real work’ of the algorithm in step 3 now takes significantly less time (O(n) multiplications) than the change of representation at steps 2 and 4 (the FFT is O( n log(n) ) multiplications).

Forgetting the mathematics, this means the majority of the computational time in doing this multiplication is taken up by the change of representation.

In fact, if the data had been presented for multiplication already in FFT form and the result were expected in FFT representation, then the computational ‘cost’ of multiplication would have been linear … or, to be even more extreme, if instead of ‘representing’ two numbers as a and b we instead ‘represent’ them as a*b and a/b, then multiplication is free.  In general, computation lies as much in the complexity of putting something into a representation as in the manipulation of it once it is represented.  Computation is change of representation.

In a letter to CACM in 1966 Knuth said1:

When a scientist conducts an experiment in which he is measuring the value of some quantity, we have four things present, each of which is often called “information”: (a) The true value of the quantity being measured; (b) the approximation to this true value that is actually obtained by the measuring device; (c) the representation of the value (b) in some formal language; and (d) the concepts learned by the scientist from his study of the measurements. It would seem that the word “data” would be most appropriately applied to (c), and the word “information” when used in a technical sense should be further qualified by stating what kind of information is meant.

In these terms problems are about information, whereas algorithms are operating on data … but the ‘cost’ of computation has to also include the cost of turning information into data and back again.

Back to Searle’s wall and Goodman’s emerald.  The emerald ‘changes’ state from grue to not grue with no cost or work, but in order to ask the question “is this emerald grue?” the answer will involve computation (if (now()<t) …).  Similarly, if we have rules like this but so complicated that Searle’s wall ‘implements’ a word processor, that is fine; but in order to work out what is on the word processor ‘screen’ based on the observation of the (unchanging) wall, the computation involved in making that observation would be equivalent to running the word processor.

At a theoretical computation level this reminds us that when we look at the computation in a Turing machine, vs. an Intel processor or lambda calculus, we need to consider the costs of change of representations between them.  And at a practical level, we all know that 90% of the complexity of any program is in the I/O.

  1. Donald Knuth, “Algorithm and Program; Information and Data”, Letters to the editor. Commun. ACM 9, 9, Sep. 1966, 653-654. DOI= http://doi.acm.org/10.1145/365813.858374 [back]

databases as people think – dabble DB

I was just looking at Enrico Bertini‘s blog Visuale for the first time in ages. In particular at his December entry on DabbleDB & Magic/Replace. Dabble DB allows web-based databases and in some ways sits in similar ground to Freebase, Swivel or even the Google Docs spreadsheet – all ways to share data of different forms on/through the web.

The USP for Dabble DB amongst other online data sharing apps is that it appears to really be a complete database solution online … and its USP amongst conventional databases is the way they seem to have really thought about real use.  This focus on real use by ordinary users includes dynamically altering the structure of the data as you gradually understand it more.  The model they have is that you start with plain table data from a spreadsheet or other document and gradually add structure, as opposed to the “first analyse and then enter” model of traditional DBs.

As I read Enrico’s blog I remembered that he had mailed me about the ‘magic/replace‘ feature ages ago.  This lets you tidy up data during import (but apparently not data already imported … wonder why?), using a ‘by example’ approach, and is a really nice example of all that ‘programming by example‘ and related work that was so hot 15 years ago eventually finding its way into real products.

The downside to Dabble DB is that editing is via forms only … it is often so much easier to enter data in a spreadsheet view; the API is quite limited; and while they have a ‘Dabble DB Commons‘ for public data (rather like Swivel), there is no directory or other way to see what people have put up 🙁

I was particularly hoping the API was better, as it would have been nice to link it into my web version of Query-by-Browsing, or even integrate with the Query-through-Drilldown approach for constructing complex table joins that Damon Oram implemented more recently.

In general, while the DB and (many) UI features are strong, it is not really looking outwards to creating shared linked data (in the broadest sense of the term, not just pure SemWeb-world linked data) … so still room there for the absolute killer shared data app!

nice quote: Auden on language

Was thumbing through Brian Cantwell Smith’s “On the Origin of Objects”1, and came across the following quote:

One notices, if one will trust one’s eyes, the shadow cast by language upon truth.
Auden, “Kairos & Logos

This reminded me of my own ponderings as a school child (I can still hear the clank of china as I was washing cups in the church at the time!) as to whether I would be able to think more freely if I knew more languages and thus had more words and concepts, or whether, on the contrary, my mind would be most clear if I knew no language and was thus free of the conceptual straitjacket of English vocabulary. All shades of Sapir-Whorf, of course (although I didn’t know the term at the time), and now I hold a somewhere-in-between view – language shapes thought but does not totally contain it2.  Is that the moderation of maturity, or the compromise of age?

  1. Trying to decide whether to start it again, as Luke Church, who I met at the PPIG meeting in September, told me it was worthwhile persevering with even though somewhat oddly written![back]
  2. I discuss this a bit in my transarticulation essay and paths and patches book chapter.[back]

MS Office and the new digital dark age

I’ve just spent the best part of 2 hours simply trying to print some Powerpoint slides as PDF, only to discover it is yet more of the incompetence in Office 2008 that I have previously blogged about (pain, tears and office 2008).   I was trying to get a small PDF for the web, so was printing to a postscript file and then converting to PDF using Adobe Distiller, but Distiller kept crashing on broken postscript commands (I assume it would also have failed to print on a printer).  Strangely, if I printed straight to PDF it would view OK, but would again crash if I asked Acrobat to process it to reduce the file size.

After doing a lengthy ‘binary chop’ on the file, printing smaller and smaller segments, I narrowed it down to one slide, and then to a single element on the slide that, if deleted, made it all work OK.

I had assumed the problem would be some big JPEG image that I had imported, but the offending element turned out to be the little patterned rectangle in the centre of the excerpt below.

The little rectangle is supposed to represent a screen and was constructed simply from two Powerpoint shapes, a plain rectangle and a rounded rectangle laid on top of one another.  I assume the complication was that I had used one of the built-in textures from the previous version of Powerpoint (yes, backward compatibility again).  I can only assume that Powerpoint encodes these textures in some unusual way and that the newer version of Powerpoint gets confused when it comes to print them (even though it appears to display them fine).

In meetings related to the UKCRC Grand Challenge on Memories for Life, there have been frequent worries, not least from the British Library, about digital preservation: how digital materials from some years ago are hard to access today.  A key example was the BBC Domesday Project, which created a two-volume interactive multimedia videodisc in 1986, but by 2002 this was virtually unreadable and was only just saved (see 2002 BBC News article). This was ‘just’ 15-year-old technology, compared to the over-900-year-old original Domesday Book that is still readable on paper.

However, with Powerpoint we are seeing digital preservation problems not just with 15-year-old technology, but between two successive versions of the same ‘industry standard’ software, on some of its most basic features (static geometric shapes).  The British Library worries about a new digital dark age … and Microsoft’s coders seem to be hell-bent on making it happen.

European working time directive 2012 – the end of the UK university?

Fiona @ lovefibre just forwarded me a link to a petition about retained firefighters, who evidently may be at risk if the right to opt out of the European working time directive is rescinded.  Checking through to the Hansard record, it seems this is really a precautionary debate, as the crunch does not come until 2012.

However, I was wondering how this is going to impact UK academia if, in 2012, the 48-hour maximum cuts in.

It may make no difference if academics are not required to work more than 48 hours, but just decide to do so voluntarily.  However, this presumably has all sorts of insurance ramifications – if we write a reference or a paper outside the ‘official hours’, would we be covered by the University’s professional indemnity?  I guess also, in considering promotions and appointments, we would have to ‘downgrade’ someone’s publications etc. to only include those that were done during paid working hours; otherwise we would effectively be making the extra hours a requirement (as we currently do).

The university system has become totally dependent on these extra hours.  In a survey in the early 1990s the average hours worked were over 55 per week, and in the 15 years since then this has gone up substantially. I would guess the average is now well over 60, with many academics getting close to double the 48-hour maximum. I recall one colleague, who had recently had a baby, mentioning how he had cut back on work: now he stops work at 5pm … and doesn’t start again until 7:30pm; his ‘cut back’ week was still way in excess of 60 hours, even with a young baby1. Worryingly, this has spread beyond the academics, and departmental administrators are often at their desks at 7 or 8 o’clock in the evening, taking piles of work home and answering email through the weekend.  While I admire and appreciate their devotion, one has to wonder at the impact on their personal lives.

So, at a human level, enforcing limited working hours would be no bad thing; certainly many companies force this, forbidding work out of office hours.  However, practically speaking, if the working time directive does become compulsory in 2012, I cannot imagine how the University system could continue to function.

And … if you are planning to do a 3 year course, start now; who knows what things will be like after 3 years!

  1. Yea, and I know I can’t talk, as an inveterate workaholic I ‘cut back’ from a high of averaging 95 hours a few years ago and now try to keep around 80 max.  I was however very fortunate in that I was doing a PhD and then personal fellowships when our children were small, so was able to spend time with them and only later got mired in the academic quicksands.[back]

bullying – training for life?

Although I have heard and read similar ideas before, it was still appalling to hear cyber-bullying being described as ‘distressing’ in the tone of voice one would use for spilt tea, and tales of beatings and broken teeth being brushed aside.

I was driving back up country and listening to Tuesday’s Woman’s Hour1.  The guest was Helene Guldberg from the Open University, who had recently published views that anti-bullying initiatives were undermining children’s ability to acquire conflict management skills for later life.

While I share her concerns that we tend towards a nanny society, I cannot imagine that she would feel that being mugged in the street was helping her to learn how to live in a world where bad things happen; yet she – and I know she voices a common prejudice in educational theory – feels that violence that would be criminal against an adult is somehow acceptable for a child.  Evidently it is all childhood innocence and any sense of cruelty is simply our adult projections.

In her own moment of exquisite cruelty, Guldberg responded to an email from a woman in her 50s who felt her life had been permanently scarred by school bullying.  The woman found it hard to trust anyone, because the instigator of the bullying had been someone she thought was her best friend.  In classic ‘blame the victim’ fashion, Guldberg explained that this was simply because, if we tell children that bullying will scar them for life, then it will.  The woman’s pain was not anything to do with the bullying when she was at school, but was effectively self-inflicted … this despite the fact that 35 years ago no-one was telling children that bullying would do harm, as the universal view then was exactly what Guldberg now expounds.

Hearing all this, I recall my own school days and in particular infant school where most of the boys belonged to a class ‘gang’.  Now I would have been perfectly happy if our class gang had fought other classes – I was never one of life’s pacifists.  However, the purpose of the class gang was not to fight other gangs, but to pick on some member of the class, often one of the peripheral members of the gang if there was no-one else.  Now I should explain I was not of a particularly high moral frame; however, I was a romantic and had been brought up with tales of King Arthur and watching Robin Hood on television; so the idea of picking on the weak was against everything I believed in2.  I refused to join in and so became, disproportionately, the one picked on.

my first school

What is particularly striking in retrospect is that those at the heart of the gang leadership, and so of course never picked on by the gang, were the more ‘respectable’ members of the class, the ones the teacher would ask to look after the class if they had to leave.   As far as I can gather, this was not out of some misguided attempt to reform the bullies through responsibility, but out of pure ignorance.  The teachers were aware of the ‘naughty’ children and those whom the gang leaders egged into fighting and hurting others, but not of those who seemed on the surface to be the good ones.

This blindness seems odd, but appears to be common.  I recall when our children were small (and home educated), someone telling us about the school their son was at – how good it was, what an excellent social environment – while seeming oblivious to the fact that each day he came back with items from his school bag missing or broken, and that he kept asking to be picked up from school rather than walk the short distance home.

Later, in high school, I recall the dynamics were different; there the bullies tended to be the more obvious candidates: big, tough and often less advantaged.  For different reasons I often found myself at the rough end of things; I would try to talk myself out of trouble (those conflict management skills!), but in the end would never back down, no matter the odds.  One of my front teeth is still a little black from a head butt, but today, with knives everywhere, I wonder whether I would have acted the same, or, if I had, what the consequence would be.

In some sense, in both earlier and later school, I ‘chose’ to be one of the victims, and perhaps, as it had an (albeit over-romanticised) ethical aspect, one could say that it may have strengthened me.  However, most of the victims were not in that position: the less clever children, the first Asian boy in school, the brothers who always had snuffles and so were labelled ‘snotty’ – and when my father died I still recall the taunts of ‘old grey hairs’.  Those who were weaker or simply cannier learnt to appease and submit, but were consequently far more likely to be repeat victims than someone who, even if hurt, would not be cowed.  I am sure the boy I knew in high school, who was learning those important life skills of appeasement and giving in to intimidation, would have developed a rounded and resilient attitude in later life – had he not committed suicide first.

The presenter, Jane Garvey, and another guest Claude Knights from anti-bullying charity ‘Kidscape‘ did an excellent job in challenging Guldberg’s views, but she seemed completely immune to any evidence.  However, I don’t recall anyone questioning the life skills learnt by the bullies themselves.  The tough but ‘respectable’ boys, who were at the centre of the gangs in early school, are just those who are likely to have become policemen or soldiers.  What did they learn?  Might is right?

And the same attitudes are prevalent in more professional settings; some years ago a team at KPMG were helping us in our search for continued funding for aQtive, our dot.com company.  All the people there were wonderful to us, but looking at their dealings with one another I was often physically sickened by the combination of fawning to superiors and bullying of juniors that I saw.  All good lessons learnt in public school.

For that matter the circle completes, and even some teachers repeat the lessons they learnt at school.  I still recall the grin on our lower-school headmaster’s face during school assemblies, when he would take some child who had committed a misdemeanour, grab him or her by the shoulders and then, in front of everyone, violently shake them in synchrony with his words.

It is not only the bullied who are the victims; the bullies themselves are victims of those like Guldberg who tell them it is alright to misuse power – and in the deeper weight of things it is perhaps more terrible to learn to be cruel than to learn to be afraid.

  1. Oddly there isn’t a “Man’s Hour” as I guess that would be sexist? … In fact thinking about men’s magazines, perhaps I can see the point.[back]
  2. Although, I didn’t take part in the systematic bullying of the class gang, I am sure there were times during my own childhood, when I hurt others. I am not writing from a moral high ground, I just want us to take all the pains and joys of childhood seriously.[back]

From Anzere in the Alps to the Taj Bangalore in two weeks

In the last two weeks I have experienced both Swiss snow and skiing and Indian sun and traffic, each for the first time. The former was in Anzere for the French-speaking Swiss universities’ annual winter school, and the latter in Bangalore for meetings (including another winter school) connected with the UK-India Network on Interactive Technologies for the End-User. Both were exciting, personally because of their novelty as experiences and professionally due to stimulating discussions … happily not dry business meetings. I will blog later in more detail about both.

I guess joy always has its pains: in the case of skiing, blisters on my shins; and in India, the nearly inevitable wobbly tummy!

People have been wonderful in both Switzerland and India, both those in the meetings themselves and those I’ve met along the way.

I knew a few of the Swiss people already – Denis and Pascal from a previous visit – but most were new, including Micheal, my ski buddy, who had been in Switzerland for a long time, but for whom it was a first time skiing too. Our ski instructor Rudy from Ecole Suisse de Ski et de Snowboard – Anzère was absolutely wonderful, with seemingly endless patience as we practised again and again (including the odd tumble) things that to him were so natural … if you want to learn to ski, ask for Rudy! In the village, the woman at the ski shop was also wonderful, helping find the right boots and equipment for someone who hardly wears shoes normally, and when she realised how bad my shins had become, she christened me “Brave Shins” :-/ I struggled to place her English accent until she explained she was brought up in Belgravia … it was just posh 🙂 However, the lady at the Anzere tourist information was my hero of the week, insisting on picking up special ‘second skin’ plasters from the pharmacy and bringing them to me at the hotel. Thanks to their ministrations my last day of skiing was blessedly pain free.

In India again there were so many wonderful people: Rama from HP who organised our demo day; the people on my bootcamp team, Ramprakash, Dinoop and Ramesh; and many, many others – not forgetting the drivers of ‘autos’, including the one who smiled all the time, but got so embarrassed when accosted by the begging transvestites at the traffic lights.

Bootcamp Team: Ramesh, Dinoop, me, Ramprakash
(photo by Ramprakash)

Bangalore dinner: me, Vijay, Dinesh, Sriram
(photo by Ramprakash)

a Bengaluru auto rickshaw
see more and movie at bengaluru-net.in