now part-time!

Many people already knew this was happening, but for those who didn’t: I am now officially a part-time university academic.

Now this does not mean I’m going to be a part-time academic, quite the opposite.  The reason for moving to working part-time at the University is to give me the freedom to do the things I’d like to do as an academic but never have time for: writing more, reading, and probably cutting some code!

Reading especially, and I don’t mean novels (although that would be nice), but journal papers and academic books.  Like most academics I know, for years I have only read things that I needed to review, assess, or comment on, or, in a fretful rush the day before a paper is due, scurried to find additional related literature that I should have known about anyway.  That is, I’d like some time for scholarship!

I guess many people would find this odd: going part-time in order to do what sounds like your job anyway, but most academics will understand perfectly!

Practically, I will work at Lancaster in spurts of a few weeks, travel for meetings and things, sometimes from Lancs and sometimes direct from home, and when I am at home do a day a week on ‘normal’ academic things.

This does NOT mean I have more time to review, work on papers, or do other academic things, but actually the opposite: this sort of thing needs to fit into my 50% paid time … so please don’t be offended or surprised if I say ‘no’ a little more.  The 50% of time that is not paid will be only for special things I choose to do: I have another employer, me 🙂

Watch my calendar to see what I am doing, but for periods marked @home, I may only pick up mail once a week on my ‘office day’.

Really doing this and keeping my normal academic things down to a manageable amount is going to be tough.  I have not managed to keep it to 100% of a sensible working week for years (usually more like 200%!).  However, I am hoping that the sight of the first few half pay cheques may strengthen my resolve 😉

In the immediate future I am travelling or in Lancs for most of February and March, with only about two weeks at home in between; however, in April and the first half of May I intend to be on Tiree watching the waves, mainly writing about Physicality for the new Touch IT book.

not quite everywhere

I’ve been (belatedly) reading Adam Greenfield‘s Everyware: The Dawning Age of Ubiquitous Computing. By ‘everyware’ he means the pervasive insinuation of inter-connected computation into all aspects of our lives: ubiquitous/pervasive computing, but seen in terms of lives, not artefacts. Published in 2006, and so I guess written in 2004 or 2005, Adam confidently predicts that everyware technology will have “significant and meaningful impact on the way you live your life and will do so before the first decade of the twenty-first century is out“, but one month into 2010 and I’ve not really noticed yet. I am not one of those people who fill their house with gadgets, so I guess I’m unlikely to be an early adopter of ‘everyware’, but even in the most techno-loving house the best I’ve seen is the HiFi controlled through an iPhone.

Devices are clearly everywhere, but the connections between them seem infrequent and poor.

Why is ubiquitous technology still so … well un-ubiquitous?


Recognition vs classification

While putting away the cutlery I noticed that I always do it one kind at a time: all the knives, then all the forks, etc. While this may simply be a sign of an obsessive personality, I realised there was a general psychological principle at work here.

We often make the distinction between recognition and recall and know, as interface designers, that the former is easier, especially for infrequently used features, e.g. menus rather than commands.

In the cutlery-tray task the trade-off is between a classification task (“here is an item; what kind is it?”) and a visual recognition one (“where is the next knife?”). The former requires a level of mental processing and is subject to Hick’s law, whereas the latter depends purely on lower-level visual processing, a pop-out effect.
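To make the contrast concrete, here is a toy sketch of the two costs. Hick’s law says decision time grows with the logarithm of the number of alternatives, while visual pop-out is roughly constant regardless of set size. The coefficients below are illustrative placeholders, not measured values:

```python
import math

def hicks_law_time(n_choices, a=0.2, b=0.15):
    """Hick's law: T = a + b * log2(n + 1).
    a and b are illustrative constants (seconds), not measured values."""
    return a + b * math.log2(n_choices + 1)

def pop_out_time(n_items, t=0.25):
    """Visual pop-out search: roughly constant regardless of set size."""
    return t

# Classifying each item among n cutlery kinds pays the logarithmic cost;
# scanning for 'the next knife' does not grow with the number of kinds.
for n in (2, 4, 8):
    print(f"{n} kinds: classify ~ {hicks_law_time(n):.2f}s, "
          f"pop-out ~ {pop_out_time(n):.2f}s")
```

So sorting one kind at a time swaps a per-item classification for a cheap visual search.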

I am wondering whether this has user interface equivalents. I am thinking about times when one is sorting things: bookmarks, photos, even my own snip!t. Sometimes you work by classification: select an item, then choose where to put it; for others you choose a category (or an album) and then select what to put in it. Here the ‘recognition’ task is more complex and not just visual, but I wonder if the same principle applies?

Sounds like a good dissertation project!

the plague of bugs

Like some Biblical locust swarm, every attempt to do anything is thwarted by the dead weight of innumerable bugs! This time I was trying … and failing … to upload a Word file into Google Docs. I uploaded the docx file and it said the file was unreadable; I tried saving it as .doc, and when that failed created an RTF file. Amazingly, from a 1 Meg Word file the RTF was 66 Meg, but very, very slowly Google Docs did upload the file and when it was eventually all uploaded …

To be fair the same document imports pretty badly into Pages (all the headings disappear).  I think this is because it is originally a 2003 Word file and gets corrupted when the new Word reads it.

Now I have griped before about backward-compatibility issues in Word, and in general about the lack of robustness in many leading products. To add to my woes, for the last month or so (I guess after a software update) Word has decided not to show its formatting menus on an opened document unless I first hide them, then show them, and then maximise the window. Mostly these things are merely annoying, sometimes they really block work, and always they waste time and destroy the flow of work.

However, rather than grousing once again (well, I already have a bit), I am trying to make sense of this.  For some time it has been apparent that software is fundamentally breaking down, in that with every new version there is minimal new useful functionality, but more bugs.  This may simply be an issue of scale, of the training of programmers, or of the nature of development processes.  Indeed, in the talk I gave a bit over a year ago to PPIG, “as we may code“, I noted that coding in the 21st century seems to be radically different: more about finding tricks and community know-how, and less about problem solving.

Whatever the reason, I don’t think the Biblical plague of bugs is simply due to laziness or indifference on the part of large vendors such as Microsoft and Adobe, but is symptomatic of a deeper crisis in software development, certainly where there is a significant user interface.

Maybe this is simply an inevitable consequence of scale, but more optimistically I wonder if there are new ways of coding, new paradigms or new architectural models.  Can 2010 be the decade when software is reborn?

understanding others and understanding ourselves: intention, emotion and incarnation

One of the wonders of the human mind is the way we can get inside one another’s skin; understand what each other is thinking, wanting, feeling. I’m thinking about this now because I’m reading The Cultural Origins of Human Cognition by Michael Tomasello, which is about the way understanding intentions enables cultural development. However, this also connects to a hypothesis of my own from many years back: that our idea of self is a sort of ‘accident’ of being social beings. Also, at the heart of Christmas is empathy, feeling for and with people, and the very notion of incarnation.


reading barcodes on the iPhone

I was wondering about scanning barcodes using the iPhone and found that pic2shop (a free app) lets third-party iPhone apps, and even web pages, access its scanning software through a simple URL-scheme interface.

Their developer documentation has an example iPhone app, but not an example web page.  However, I made a web page using their interface in about 5 minutes (see PHP source of barcode reader).
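The gist, as I remember pic2shop’s developer documentation, is that a page links to a pic2shop:// URL whose callback parameter is a URL-encoded link back to your own page, with the literal text EAN replaced by the scanned code when pic2shop calls back. Treat the exact scheme and parameter names here as assumptions and check their documentation; a sketch of building such a link:

```python
from urllib.parse import quote

def pic2shop_link(callback_url):
    """Build a pic2shop:// scan link for a web page.

    The 'pic2shop://scan?callback=' scheme and the literal 'EAN'
    placeholder are assumptions based on pic2shop's developer docs
    as I recall them -- verify against the real documentation.
    """
    return "pic2shop://scan?callback=" + quote(callback_url, safe="")

# Hypothetical callback page; pic2shop (reportedly) substitutes the
# scanned barcode for the literal text 'EAN' in this URL.
link = pic2shop_link("http://example.org/barcode.php?code=EAN")
print(link)
```

On the web page this link is what the “scan a bar code” button points at; the PHP behind the callback URL then receives the code as an ordinary query parameter.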

If you have an iPhone you can try it out and scan a bar code now, although you need to install pic2shop first (but it is free).

By allowing third-party apps to use their software they encourage downloads of their own app, which will bring them revenue through product purchases.  Through giving freely they bring themselves benefit: a good open access story.

Apple’s Model-View-Controller is Seeheim

Just reading the iPhone Cocoa developer docs and their description of Model-View-Controller. However, if you look at the diagram, rather than the model component directly notifying the view of changes as in classic MVC, in Cocoa the controller acts as mediator, more like the Dialogue component in the Seeheim architecture1 or the Control component in PAC.

MVC from Mac Cocoa development docs

The docs describing the Cocoa MVC design pattern in more detail do in fact give a detailed comparison with the Smalltalk MVC, but do not refer to Seeheim or PAC, I guess because they are less well known nowadays.  Only a few weeks ago, when discussing architecture with my students, I described Seeheim as being more a conceptual architecture, not used in actual implementations now.  I will have to update my lectures – Seeheim lives!
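The mediated flow can be sketched in a few lines. This is a toy illustration of the pattern, not Cocoa code (the class and method names are my own): the model notifies the controller, and only the controller touches the view, rather than the model updating the view directly as in classic Smalltalk MVC.

```python
class Model:
    """Holds data; notifies registered observers (here, the controller)."""
    def __init__(self):
        self._observers = []

    def add_observer(self, callback):
        self._observers.append(callback)

    def set_value(self, value):
        for callback in self._observers:
            callback(value)

class View:
    """Pure presentation: knows nothing about the model."""
    def __init__(self):
        self.text = ""

    def display(self, text):
        self.text = text

class Controller:
    """Mediator, like Seeheim's Dialogue component or PAC's Control:
    all model-to-view traffic passes through here."""
    def __init__(self, model, view):
        self.view = view
        model.add_observer(self.model_changed)

    def model_changed(self, value):
        self.view.display(f"value: {value}")

model, view = Model(), View()
controller = Controller(model, view)
model.set_value(42)
print(view.text)  # the view was updated only via the controller
```

The design point is exactly the one in the Cocoa diagram: because the view never observes the model directly, the controller can translate, filter, or sequence updates, which is what the Seeheim Dialogue layer was for.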

  1. Shocked to find no real web documentation for Seeheim, not even on Wikipedia; looks like CS memory is short.  However, it is described in chapter 8 of the HCI book and in the chapter 8 slides. [back]