Total Quality, Total Reward and Total Commitment

I’ve been reading bits of Richard Sennett’s The Craftsman1 off and on for some months. It has had many resonances, and I meant to write a post about it after reading its very first chapter. However, for now it is just part of one of the later chapters that is fresh. Sennett refers to the work of W. Edwards Deming, the originator of the term ‘total quality control’. I was surprised at some of the quotes: “The most important things cannot be measured”, “you can expect what you inspect” — in strong contrast to the metrics-based ‘quality’ that has pervaded government thinking for many years, whether it impacts health, policing or academia, and of course not unfamiliar to many in industry.


  1. Richard Sennett, The Craftsman, Penguin, 2009

Been to London to visit the Queen

… well Queen Mary, University of London anyway, where I was giving a talk on “The New Media of Digital Light”1. While there I was given interesting tours of various research groups in the School of Electronic Engineering and Computer Science at QML, including music that plays along with the drums, maps for the blind, and red dots on the heads of crowds at Covent Garden.

On the tube I noticed that if you are standing and look at the reflections in the curved tube-train windows, all the seated passengers become Siamese twins joined at the head. Also, looking down, I noticed that standing people tend to stand with their toes pointing outwards, whereas among seated people only the men do that. I feel there must be a social psychology paper in that, but it has probably already been written.

At the hotel, neo-classical statues line the way down to an underground car park, and while seated at a WiFi sweet spot I overheard a dissident group planning a protest.

A typical day out in London.

A reminder of Wales, Aberavon Road, near QML

  1. work with Joe and Angie on Firefly technology

now part-time!

Many people already knew this was happening, but for those that don’t — I am now officially a part-time university academic.

Now this does not mean I’m going to be a part-time academic, quite the opposite.  The reason for moving to working part-time at the University is to give me the freedom to do the things I’d like to do as an academic but never have time for: writing more, reading, and probably cutting some code!

Reading especially, and I don’t mean novels (although that would be nice), but journal papers and academic books.  Like most academics I know, for years I have only read things that I needed to review, assess, or comment on — or, sometimes in a fretful rush the day before a paper is due, scurried to find additional related literature that I should have known about anyway.  That is, I’d like some time for scholarship!

I guess many people would find this odd (working full time but only being paid part-time for what sounds like doing your job anyway), but most academics will understand perfectly!

Practically, I will work at Lancaster in spurts of a few weeks, travel for meetings and things, sometimes from Lancs and sometimes direct from home, and when I am at home do a day a week on ‘normal’ academic things.

This does NOT mean I have more time to review, work on papers, or other academic things, but actually the opposite — this sort of thing needs to fit in my 50% paid time … so please don’t be offended or surprised if I say ‘no’ a little more.  The 50% of time that is not paid will be for special things I choose to do only — I have another employer — me 🙂

Watch my calendar to see what I am doing, but for periods marked @home, I may only pick up mail once a week on my ‘office day’.

Really doing this and keeping my normal academic things down to a manageable amount is going to be tough.  I have not managed to keep it to 100% of a sensible working week for years (usually more like 200%!).  However, I am hoping that the sight of the first few half pay cheques may strengthen my resolve 😉

In the immediate future, I am travelling or in Lancs for most of February and March, with only about 2 weeks at home in between; however, in April and the first half of May I intend to be in Tiree watching the waves, and mainly writing about Physicality for the new Touch IT book.

not quite everywhere

I’ve been (belatedly) reading Adam Greenfield‘s Everyware: The Dawning Age of Ubiquitous Computing. By ‘everyware’ he means the pervasive insinuation of inter-connected computation into all aspects of our lives — ubiquitous/pervasive computing but seen in terms of lives not artefacts. Published in 2006, and so I guess written in 2004 or 2005, it confidently predicts that everyware technology will have “significant and meaningful impact on the way you live your life and will do so before the first decade of the twenty-first century is out“, but one month into 2010 I’ve not really noticed yet. I am not one of those people who fill their house with gadgets, so I guess I am unlikely to be an early adopter of ‘everyware’, but even in the most techno-loving house the best I’ve seen is the HiFi controlled through an iPhone.

Devices are clearly everywhere, but the connections between them seem infrequent and poor.

Why is ubiquitous technology still so … well un-ubiquitous?


Recognition vs classification

While putting away the cutlery I noticed that I always do it one kind at a time: all the knives, then all the forks, etc. While this may simply be a sign of an obsessive personality, I realised there was a general psychological principle at work here.

We often make the distinction between recognition and recall, and know, as interface designers, that the former is easier, especially for infrequently used features, e.g. menus rather than commands.

In the cutlery-tray task the trade-off is between a classification task (“here is an item; what kind is it?”) and a visual recognition one (“where is the next knife?”). The former requires a level of mental processing and is subject to Hick’s law, whereas the latter depends purely on lower-level visual processing, a pop-out effect.
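Hick’s law says that choice time grows with the logarithm of the number of alternatives, roughly T = a + b·log2(n + 1). A minimal sketch in Python (the function name and the constants a and b are illustrative, not measured values):

```python
import math

def hick_choice_time(n_alternatives, a=0.2, b=0.15):
    """Hick's law: T = a + b * log2(n + 1), in seconds.

    a and b are illustrative constants, not empirical values.
    """
    return a + b * math.log2(n_alternatives + 1)

# Classifying each piece of cutlery into one of 4 compartments is a
# 4-way choice per item; a visual pop-out search ("where is the next
# knife?") is closer to a single-alternative task.
four_way = hick_choice_time(4)
pop_out = hick_choice_time(1)
print(f"4-way choice: {four_way:.3f}s, pop-out-like: {pop_out:.3f}s")
```

On these (made-up) constants the per-item decision cost of classification is noticeably higher than the pop-out search, which is why doing all the knives, then all the forks, can come out faster overall.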

I am wondering whether this has user interface equivalents. I am thinking about times when one is sorting things: bookmarks, photos, even my own Snip!t. Sometimes you work by classification: select an item, then choose where to put it; at other times you choose a category (or an album) and then select what to put in it. Here the ‘recognition’ task is more complex and not just visual, but I wonder if the same principle applies?

Sounds like a good dissertation project!

the plague of bugs

Like some Biblical locust swarm, every attempt to do anything is thwarted by the dead weight of innumerable bugs! This time I was trying … and failing … to upload a Word file into Google Docs. I uploaded the docx file and it said the file was unreadable, tried saving it as .doc, and when that failed created an RTF file. Amazingly, from a 1 MB Word file the RTF was 66 MB, but very, very slowly Google Docs did upload the file and when it was eventually all uploaded …

To be fair, the same document imports pretty badly into Pages (all the headings disappear).  I think this is because it was originally a 2003 Word file and gets corrupted when the new Word reads it.

Now I have griped before about backward compatibility issues for Word, and in general about the lack of robustness in many leading products. To add to my woes, for the last month or so (I guess after a software update) Word has decided not to show its formatting menus on an opened document unless I first hide them, then show them, and then maximise the window. Mostly these things are annoying, sometimes they really block work, and they always waste time and destroy the flow of work.

However, rather than grousing once again (well I already have a bit), I am trying to make sense of this.  For some time it has become apparent that software is fundamentally breaking down, in that with every new version there is minimal new useful functionality, but more bugs.  This may be simply an issue of scale, of the training of programmers, or of the nature of development processes.  Indeed in the talk I gave a bit over a year ago to PPIG, “as we may code“, I noted that coding in the 21st century seems to be radically different, more about finding tricks and community know-how and less about problem solving.

Whatever the reason, I don’t think the Biblical plague of bugs is simply due to laziness or indifference on the part of large vendors such as Microsoft and Adobe, but is symptomatic of a deeper crisis in software development, certainly where there is a significant user interface.

Maybe this is simply an inevitable consequence of scale, but more optimistically I wonder if there are new ways of coding, new paradigms or new architectural models.  Can 2010 be the decade when software is reborn?

understanding others and understanding ourselves: intention, emotion and incarnation

One of the wonders of the human mind is the way we can get inside one another’s skin; understand what each other is thinking, wanting, feeling. I’m thinking about this now because I’m reading The Cultural Origins of Human Cognition by Michael Tomasello, which is about the way understanding intentions enables cultural development. However, this also connects to a hypothesis of my own from many years back: that our idea of self is a sort of ‘accident’ of being social beings. Also at the heart of Christmas is empathy, feeling for and with people, and the very notion of incarnation.
