The Great Apple Apartheid

In days gone by boarding houses and shops had notices saying "Irish and Blacks not welcome".  These days are happily long past, but today Apple effectively says "poor and rural users not welcome".

This is a story about Apple and the way its delivery policies exacerbate the digital divide and make the poor poorer.  To be fair, similar stories can be told about other software vendors, and it is hardly news that success in business is often at the expense of the weak and vulnerable.  However, Apple's decision to deliver Lion predominantly via the App Store is an iconic example of a growing problem.

I had been using Lion for a little over a week, not downloaded from the App Store, but pre-installed on a brand new MacBook Air.  However, whenever I plugged in my iPhone and tried to sync, a message appeared saying the iTunes library had been created with a newer version of iTunes and so iTunes needed to be updated.  Each time I tried to initiate the update as requested, a long slow download started, but some time later I was told that the update had failed.

This at first seemed all a little odd on a brand new machine, but I think the reason is as follows:

  1. When I first initialised the new Air I chose to have it sync data with a Time Machine backup from my previous machine.
  2. The iTunes on the old machine was totally up-to-date due to regular updates.
  3. Apple dealers do not bother to update machines before they are delivered.
  4. The hotel WiFi connection did not have sufficient throughput for a successful update.

From an engineering point of view, the fragility of the iTunes library format is worrying; many will recall the way HyperCard was able to transfer stacks back and forth between versions without loss.  Anyway, the paucity of engineering in recent software is a different story!

It is the fact that the hotel WiFi was insufficient for the update that concerns me here.  It was fast enough to browse the web without apparent delay, to check email, etc.  Part of the problem was that the hotel offered two levels of service, one (more expensive!) aimed at heavy multimedia use, so maybe that would have been sufficient.  The essential update for the brand new machine consisted of 1.46 gigabytes of data, so it is perhaps not surprising that the poor connection faltered.

I have been concerned for several years about the ever-increasing size of regular software updates, which have grown from around 100 Mbytes to now often several Gbytes1.  Usually these happen in the background, and I have reasonable broadband at home, so they don't cause me any problems personally, but I wonder about those with less good broadband, or those whose telephone exchanges do not support broadband at all.  In the UK, this mainly means those outside major urban areas, who are out of reach of cable and fibre super-broadband and reliant on old BT copper lines.  Thinking more broadly across the world, how many in less developed countries or regions will be able to update software regularly?

Of course old versions may well run better on old computers, but without updates it is not just that users cannot benefit from new features; more critically, they are missing essential security updates, leaving them vulnerable to attack.

And this is not just a problem for those directly affected, but for us all, as it creates a fertile ground for bot armies to launch denial of service attacks and other forms of cybercrime or cyberterrorism.   Each compromised machine is a future cyberwarrior or cybergangster.

However, the decision of Apple to launch Lion predominantly via App Store has significantly upped the stakes.   Those with slower broadband connections may be able to manage updates, but the full operating system is an order of magnitude larger.  Of course those with slower connections tend to be the poorer, more vulnerable, more marginalised; those without jobs, in rural areas, the elderly.  It is as if Apple has put up a big notice:

To the poor and weak
we don’t want you

To be fair, Lion is (one feels grudgingly) also made available on USB drives, but at more than twice the price of the direct download2.  So this is not entirely shutting the door on the poor, but only letting them in if they pay extra.  A tax on poverty.

Of course, this is not a deliberate act of aggression against the weak, just the normal course of business.  The cheapest and easiest way to deliver software, and one that incidentally ensures that all revenue goes to Apple, is through direct online sales.  The USB option adds complexity and cost to the distribution systems and Apple seem to be pricing to discourage use.  This, like so many other ways in which the poor pay more, is just an ‘accident’ of the market economy.

But for a company that prides itself in design, surely things could be done more creatively?

One way would be to split the software into two parts.  One part would be the 'key', essential to run it, but very small.  The second part would constitute the bulk of the software, but be unusable without the 'key'.  The 'key' would then be sold solely on the App Store, but would be small enough for anyone to download.  The rest would also be made available online, but as a free download and with a licence that allows third party distribution (and it would of course be suitably signed/encrypted to prevent tampering).  Institutions or cybercafes could download it to local networks, and entrepreneurs could sell copies on DVD or USB; competition would mean this would likely end up far cheaper than Apple's USB premium, close to the cost of the medium, with a small margin.
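
To make the idea concrete, here is a minimal sketch of how such a key-and-bulk split might work.  Everything in it is my own assumption for illustration – the use of the `cryptography` package's Fernet scheme, the function names, a bare SHA-256 digest standing in for proper code signing – not anything Apple actually does:

```python
# A sketch of the 'key and bulk' split, assuming a symmetric scheme via
# the `cryptography` package.  All names are my own inventions for
# illustration; a real system would use proper code signing as well.

import hashlib
from cryptography.fernet import Fernet

# Vendor side: encrypt the large installer once; sell only the small key.
def package(installer: bytes) -> tuple[bytes, bytes, str]:
    key = Fernet.generate_key()                # tiny: 44 bytes, trivial to download
    bulk = Fernet(key).encrypt(installer)      # huge: free to mirror on DVD or USB
    digest = hashlib.sha256(bulk).hexdigest()  # published so copies can be verified
    return key, bulk, digest

# User side: the bulk may arrive from any third party; the key from the store.
def install(bulk: bytes, digest: str, key: bytes) -> bytes:
    if hashlib.sha256(bulk).hexdigest() != digest:
        raise ValueError("bulk copy corrupted or tampered with")
    return Fernet(key).decrypt(bulk)           # unusable without the purchased key
```

The point is simply that the purchased 'key' is tiny, while the encrypted bulk can be copied and passed around by anyone without compromising either revenue or integrity.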

Of course the same method could be used for any software, not just Lion, and indeed even for software updates.

I’m sure Apple could think of alternative, maybe better, solutions.  The problem is just that Apple’s designers, despite inordinate consideration for the appearance and appeal of their products, have simply not thought beyond the kind of users they meet in the malls of Cupertino.

  1. Note, this is not an inevitable consequence of increasing complexity and (itself lamentable) code bloat.  In the past software updates were often delivered as 'deltas', the changes between old and new.  It seems that now an 'update' in fact consists of complete copies of entire major components.[back]
  2. At the time of writing, Mac OS X Lion is available from the App Store for $29.99, but the USB thumb drive version is $69.99.[back]

roots – how do we see ourselves spatially

I was just reading the chapter on Benedict Anderson in "Key Thinkers on Space and Place"1.  Anderson forged the concept of the national imagination, the way nations are as much, or more, a construct of socio-cultural imaginings as of physical topography or legal/political sovereignty.

However, this made me wonder whether this conception is itself culturally specific: to what extent do people relate to the nation as opposed to other areas?

I was reminded particularly of a conversation with the much missed Pierro Mussio. He explained to me the distinct nature of Italian cultural identity, which tends to focus on regional and local identity before national identity, partly because Italy itself is quite young as a nation state (a mere 150 years in a country which sees itself in terms of millennia). There is even a word, "campanilismo", literally relating to the "bell tower" (campanile) of a town, meaning that one's primary loyalties lie with that bell tower, that town, that community.

How do you see yourself?  Are you British or Geordie, French or Parisian, American or New Yorker?

I know I see myself as 'Welsh'.  Wales is part of Britain, but my Britishness is secondary to Welshness.  I was born and brought up in Bangor Street, Roath Park, Cardiff, but again, while the street, area and city are foci of nostalgia, it is the Welshness which seems central.  Fiona sees herself as Cumbrian (rather than Wetheral, English or British); Steve, who is visiting, is British, but says his brother would say Scottish, despite both having spent equal amounts of time in Scotland whilst growing up and since.

I asked people on Twitter and got a variety of answers2, most quite broad:

“I always think English rather than British but I don’t have a more specific area to identify with.”

“I think I primarily think of myself as both “Brit” & “northerner”. Lancastrian when differentiating myself from Yorkshire lot!”

“in decreasing granularity I’m a Devoner (south, of course!), west country-er, English, British, European, World-ean.”

Some less clear:

“I’m confused specially. I am Coloradan and American by birth, but feel more at home in England, and miss Scotland.”

“ooh, complicated. I’m British but not English. that’s as specific as I get.”

The last is perhaps particularly interesting in its focus on what he is not!

Obviously the way we see ourselves varies.

The choice of a 'level of granularity' for location reminds me a little of the way in which we have some sort of typical level in a classification hierarchy (I think Lakoff writes about this); for example you can say "look at that bird", but not "look at that mammal", you have to say "look at that dog" or "look at that cat".  This also varies culturally, including in subcultures such as dog breeders – saying "look at that dog" at Crufts would hardly sound natural.

Some cities have specific words to refer to their natives: Glaswegian, Geordie, Londoner; others do not – I was brought up in Cardiff, but 'Cardiffian' sounds odd.  Does the presence of a word (Cumbrian, Welsh) make you more likely to see yourself in those terms, or is it rather that, where cities have forged a strong sense of belonging, words naturally emerge? … I sense a Sapir-Whorf moment!

Nowadays this is even more contested, as loyalties and identities can be part of networked communities that cut across national and topographical boundaries.  In some ways these new patterns of connection reinforce those who focus on human relations rather than physical space as defining countries and communities, but of course in far newer ways.

However, it also made me think of those parts of the world where there are large numbers of people with problematic statehood.  There is how we see ourselves and how states see us.  We tend to define democracy in terms of citizenship, and laud attempts, such as the Arab Spring, that give power to the people … but where 'people' means citizens.  In Bahrain the Shi'ite majority are citizens and therefore their views should be considered in terms of democracy, whereas the migrant workers in Libya fleeing the rebels in the early days of the recent Libyan war, or the Palestinians in Kuwait during the first Gulf War, were not citizens and were therefore marginalised.

Defining citizenship then becomes one of the most powerful methods of control.  This has been used to powerful effect in Estonia, leaving some who had lived in the country for fifty years effectively stateless; and, while not leaving people stateless, in the UK new rules for electoral registration could leave up to 10 million people, principally the young and the poor, voteless.

In the days of the nation state those with loyalties not tied to geography have always been problematic: Gypsies, Jews before the establishment of Israel, the various Saharan nomad trades.  Many of these have been persecuted and continue to suffer across the world, and yet paradoxically in a networked world it seems possible that pan-national identity may one day become the norm.

  1. I've got the 1st edition, but a 2nd edition has recently come out.[back]
  2. Many thanks for those who Tweeted responses.[back]

TTW2 – the second Tiree Tech Wave is approaching

It is a little over a month (3-7 Nov)  until the next Tiree Tech Wave 🙂  However, as I’m going to be off-island most of the time until the end of October, it seems very close indeed!

The first registrations are in, including Clare flying straight here from the US1 and Alessio coming from Madrid; mind you last time Azizah had come all the way from Malaysia, so still looking very parochial in comparison!

While I don’t expect we will be oversubscribed, do ‘book early’ (before Oct 10th) if you intend to come to help us plan things and make sure you get your preferred accommodation (the tent at the end of my garden is draughty in November) and travel.

If you want to take advantage of the island's watersports, catch me in one place for more than a day, or simply hang out, do take a few extra days before or after the event.  One person has already booked to arrive a couple of days early, and others may do so too.

To see what the Tech Wave will be like see the Interfaces report … although it is the people who make the event, so I’m waiting to be surprised again this time round 🙂

Looking forward to seeing you.

  1. In fact guided over the ocean by the ‘golf ball’ on Tiree, which is the North Atlantic civil radar.[back]

Death by Satellite

The Upper Atmosphere Research Satellite (UARS) is on its way down after 20 years zipping by at 375 miles above our heads.  As the bus-sized satellite breaks up parts will reach earth and NASA reassuringly tell us that there is only a 1 in 3,200 chance that anyone will be hit.  Given being hit by a piece of satellite is likely to be painful and most likely terminal, I wonder if I should be worried.

With a world population of 6,963,070,0291, that is around a one in 20 trillion chance that I will die from UARS this year.  Given the annual risk from asteroid impact or shark attack is around one in 2 billion2, that sounds quite good for UARS (but I must buy that shark repellent from Boots).

Of course, it is a bit unfair to compare UARS, which has been up there for 20 years spinning round the world in a frenzy, with more mundane day-to-day risks like crossing the road.  For air travel the safety figures take the distance travelled into account: the industry aims for around one accident (but with a lot of people in the aeroplane) every hundred million flying miles, and achieves a figure about 10 times better than that3.

At 375 miles up, UARS will have been orbiting at 7.55978 km/s4, so it has travelled around 2.9 billion miles in the last 20 years.  That means it is causing one death per 10 trillion miles travelled … five thousand times safer than air flight, 120 million times safer than car travel5, and around a million times safer than a bicycle6.  I must cancel my KLM ticket home and get one by satellite.
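
The arithmetic is easy to check.  A quick sketch, using only the figures quoted in this post:

```python
# Back-of-envelope check of the satellite figures (all inputs from the text).
SECONDS_PER_YEAR = 365.25 * 24 * 3600
METRES_PER_MILE = 1609.344

p_hit = 1 / 3200                    # NASA: chance the debris hits anyone at all
population = 6_963_070_029
print(f"chance it is me: 1 in {population / p_hit:,.0f}")        # ~1 in 22 trillion

speed = 7.55978 * 1000              # orbital speed, m/s
miles = speed * SECONDS_PER_YEAR * 20 / METRES_PER_MILE
print(f"distance in 20 years: {miles / 1e9:.2f} billion miles")  # ~2.96 billion

print(f"one expected death per {miles / p_hit / 1e12:.1f} trillion miles")  # ~9.5
```

This is where the one in 20 trillion personal risk above comes from: the 1 in 3,200 chance spread over the whole world population.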

  1. World population of 6,963,070,029 at 5:14 UTC (EST+5) Sep 19, 2011 according to US Census Bureau World Population Clock [back]
  2. Scientific American, "Competing Catastrophes: What's the Bigger Menace, an Asteroid Impact or Climate Change?", Robin Lloyd, March 31, 2010 [back]
  3. Wikipedia Air Safety page quotes different, but close numbers: 3 deaths per 10 billion passenger miles, one death in 20 billion passenger miles and 0.05 deaths per billion passenger kilometers.[back]
  4. CalcTool Earth Orbit Calculator[back]
  5. Based on UK figures of 3,431 deaths per year (US NHTSA) and 26.7 billion miles driven in the UK per year (Admiral Insurance).[back]
  6. Wikipedia Air Safety statistics[back]

Private schools and open data

I have just read the short article "Private schools aren't doing as well as right-wingers like to think" by Rob Cowen @bobbiecowman1.  Rob analyses the data on recent GCSE results and finds that independent schools have been falling behind comprehensive schools in the last couple of years.  He uses this to refute the belief that GCSE standards are dropping, although equally it calls into question David Cameron's recent suggestion that independent schools such as Eton should be given public money to start 'Free Schools'2.

However, this is also a wonderful example of the way open data can be used to challenge unsupported views, including official ones or 'common knowledge'.  Of course, during the recent voting reform referendum, David Cameron expressed his lack of interest in data and statistics compared with gut feelings, so the availability of data is only half the battle!

[Graph showing comprehensive vs independent school performance]

  1. Thanks to Laura Cowen @lauracowen for re-tweeting this.[back]
  2. See BBC News: Cameron: ‘Eton should set up a state school’[back]

book: The Unfolding of Language, Deutscher

I have previously read Guy Deutscher's "Through the Language Glass", and have now, topsy turvy, read his earlier book "The Unfolding of Language".  Both are about language: "The Unfolding of Language" about the development of the complexity of language that we see today from simpler origins, and "Through the Language Glass" about the interaction between language and thought.  Both are full of sometimes witty and always fascinating examples drawn from languages around the world, from the Matses in the Amazon to Ancient Sumerian.

I recall that my own interest in the origins of language began young, as a seven year old over breakfast one day, asking whether 'night' was a contraction of 'no light'.  While this was an etymological red herring, it is very much the kind of change that Deutscher documents in detail, showing the way a word accretes beginnings and endings through the juxtaposition of simpler words, followed by the erosion of hard to pronounce sounds.

One of my favourite examples was the French "aujourd'hui".  The word 'hui' was Old French for 'today', but was originally Latin "hoc die", "(on) this day". Because 'hui' is not very emphatic it became "au jour d'hui", "on the day of this day", which contracted to the current 'aujourd'hui'. Except now, to add emphasis, some French speakers are starting to say "au jour d'aujourd'hui", "on the day of the day of this day"!  This reminds me of Longsleddale in the Lake District (inspiration for Postman Pat's Greendale), a contraction of "long sled dale", which literally means "long valley valley", from Old English "slaed" meaning "valley" … although I once even saw something suggesting that 'long' itself in the name was also "valley" in a different language!

Deutscher gives many more prosaic examples where words meaning ‘I’, ‘you’, ‘she’ get accreted to verbs to create the verb endings found in languages such as French, and how prepositions (themselves metaphorically derived from words like ‘back’) were merged with nouns to create the complex case endings of Latin.

However, the most complex edifice, which Deutscher returns to repeatedly, is that of the Semitic languages, with a template system of vowels around three-consonant roots, where the vowel templates change the meaning of the root.  To illustrate, he uses the (fictional!) root 'sng', meaning 'to snog', and discusses how first simple templates such as 'snug' ("I snogged") and then more complex constructions such as 'hitsunnag' ("he was made to snog himself") all arose from simple processes of combination, shortening and generalisation.
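
The template mechanism is easy to play with computationally.  Here is a toy sketch using Deutscher's fictional root; the numbered-slot notation (1, 2, 3 for the three root consonants) is my own invention, purely to show how one root can be poured into many templates, and is of course not real Semitic grammar:

```python
# Toy root-and-template morphology, after Deutscher's fictional 'sng' root.

def apply_template(root: str, template: str) -> str:
    """Fill slots 1, 2, 3 in the template with the root's three consonants."""
    for i, consonant in enumerate(root, start=1):
        template = template.replace(str(i), consonant)
    return template

root = "sng"  # 'to snog'
for template, gloss in [("12u3", "I snogged"),
                        ("hit1u22a3", "he was made to snog himself")]:
    print(apply_template(root, template), "=", gloss)   # snug, hitsunnag
```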

"The Unfolding of Language" begins with the 19th century observation that all languages seem to be in a process of degeneration in which more complex forms such as the Latin case system or early English verb endings are progressively simplified and reduced. The linguists of the day saw all languages in a state of continuous decay from an early linguistic Golden Age. Indeed one linguist, August Schleicher, suggested that language develops until it is complex enough to get things done, and only then does recorded history start, after which the effort once spent on language is instead spent on making history.

As with geology, or biological evolution, the modern linguist rejects this staged view of the past, looking instead to the Law of Uniformitarianism: things are as they have always been, so one can work out what must have happened in the pre-recorded past from what is happening now.  However, whilst generally finding this convincing, throughout the book I had a niggling feeling that there is a difference.  By definition, those languages for which we have written records are those of large developed civilisations, which moreover are based on writing. Furthermore I am aware that in biological evolution small isolated groups (e.g. on islands or cut off in valleys) are particularly important for introducing novelty into larger populations, and I assume the same would be true of languages, though somewhat stultified by mass communication.

Deutscher does deal with this briefly, but right at the very end, in a short epilogue.  I feel there is a whole additional story about the interaction between culture and the grammatical development of language.  I recall a teacher at school explaining how in Latin feminine words tended to belong to the early period linked to agriculture and the land, masculine words to later interests in war and conquest, and neuter words to the still later phase of civic and political development. There were many exceptions, but even this modicum of order helped me to make sense of what otherwise seemed an arbitrary distinction.

The epilogue also mentions that the sole exception to the ‘decline’ in linguistic complexity is Arabic with its complex template system, still preserved today.

While reading the chapters about the three letter roots, I was struck by the fact that both Hebrew and Arabic are written as consonants only, with vowels interpolated by diacritical marks or simply remembered convention (although Deutscher does not mention this himself). I had always assumed that this was like English, where t's pssble t rd txt wth n vwls t ll. However, the vowels are far more critical for Semitic languages, where the vowel-less words could make the difference between "he did it" and "it will be done to him".  Did this difference in writing stem from the root+template system, or vice versa, or maybe they simply mutually reinforced each other?

The other factor behind Arabic's remarkable complexity must surely be the Quran. Whereas the Bible was read for over a millennium in Latin, a non-spoken language, and later translations focused on the meaning, there is in contrast a great emphasis on the precise form of the Quran, together with continuous lengthy recitation.  As the King James Bible has been argued to have been a significant influence on modern English since the 17th century, it seems likely the Quran has been a factor in preserving Arabic for the last 1500 years.

Early in "The Unfolding of Language" Deutscher dismisses attempts to look at the even earlier prehistoric roots of language, as there is no direct evidence. I assume that this would include Mithin's "The Singing Neanderthals", which I posted about recently. There is of course a lot of truth in this criticism; certainly Mithin's account included a lot of guesswork, albeit founded on paleontological evidence.  However, Deutscher's own arguments include extrapolating to recent prehistory. These extrapolations are based on early written languages and subsequent recorded developments, but also include guesswork between the hard evidence, as does the whole family-tree of languages.  Deutscher was originally a Cambridge mathematician, like me, so, perhaps unsurprisingly, I found his style of argument convincing. However, given the foundations on Uniformitarianism, which, as noted above, is at best partial when moving from history to pre-history, there seems to be more of a continuum than a sharp distinction between the levels of interpretation and extrapolation in this book and Mithin's.

Deutscher's account seeks to fill in the gap between the deep prehistoric origins of protolanguage (what Deutscher calls 'me Tarzan' language) and its subsequent development in the era of media-society (starting 5000BC with extensive Sumerian writing). Rather than seeing these separately, I feel there is a rich account building across various authors, which will, in time, yield a more complete view of our current language and its past.

book: The Singing Neanderthals, Mithin

One of my birthday presents was Steven Mithin’s “The Singing Neanderthals” and, having been on holiday, I have already read it! I read Mithin’s “The Prehistory of the Mind” some years ago and have referred to it repeatedly over the years1, so was excited to receive this book, and it has not disappointed. I like his broad approach taking evidence from a variety of sources, as well as his own discipline of prehistory; in times when everyone claims to be cross-disciplinary, Mithin truly is.

"The Singing Neanderthals", as its title suggests, is about the role of music in the evolutionary development of the modern human. We all seem to be born with an element of music in our hearts, and Mithin seeks to understand why this is so, and how music is related to, and part of the development of, language. Mithin argues that elements of music developed in various later hominids as a form of primitive communication2, but separated from language in homo sapiens, when music became specialised to the communication of emotion and language to more precise actions and concepts.

The book ‘explains’ various known musical facts, including the universality of music across cultures and the fact that most of us do not have perfect pitch … even though young babies do (p77). The hard facts of how things were for humans or related species tens or hundreds of thousands of years ago are sparse, so there is inevitably an element of speculation in Mithin’s theories, but he shows how many, otherwise disparate pieces of evidence from palaeontology, psychology and musicology make sense given the centrality of music.

Whether or not you accept Mithin’s thesis, the first part of the book provides a wide ranging review of current knowledge about the human psychology of music. Coincidentally, while reading the book, there was an article in the Independent reporting on evidence for the importance of music therapy in dealing with depression and aiding the rehabilitation of stroke victims3, reinforcing messages from Mithin’s review.

The topic of "The Singing Neanderthals" is particularly close to my own heart, as my first personal forays into evolutionary psychology (long before I knew the term, or discovered Cosmides and Tooby's work) were in attempting to make sense of human limits to delays and rhythm.

Those who have been to my lectures on time since the mid 1990s will recall being asked first to clap in time and then to swing their legs ever faster … sometimes until they fall over! The reason for this is to demonstrate that we cannot keep beats much slower than one per second4, and then to explain this in terms of our need for a mental 'beat keeper' for walking and running. The leg shaking is to show how our legs, as simple pendulums, have a natural frequency of around 1Hz, hence determining our slowest walk and hence our need for rhythm.
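
The pendulum arithmetic is easy to check.  A quick sketch, treating the leg (very crudely) as a simple pendulum with natural frequency f = √(g/L)/2π:

```python
# Natural frequency of a leg modelled, crudely, as a simple pendulum.
from math import pi, sqrt

g = 9.81                            # gravitational acceleration, m/s^2
for L in (0.8, 0.9, 1.0):           # plausible leg lengths, metres
    f = sqrt(g / L) / (2 * pi)      # full swing cycle, Hz
    print(f"L = {L:.1f} m: {f:.2f} Hz, one step every {1 / (2 * f):.2f} s")
```

For leg lengths around a metre this gives a full swing of about 0.5Hz; counting each step as half a swing, that is close to one step per second, which is where the limit on our slowest comfortable beat comes from.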

Mithin likewise points to walking and running as crucial in the development of rhythm, in particular the additional demands of bipedal motion (p150). Rhythm, he argues, is not just about music, but also a shared skill needed for turn-taking in conversation (p17), and for emotional bonding.

In just the last few weeks, at the HCI conference in Newcastle, I learnt that entrainment, when we keep time with others, is a rare skill amongst animals, almost uniquely human. Mithin also notes this (p206), with exceptions, in particular one species of frog, where the males gather in groups to sing/croak in synchrony. One suggested reason for this is that the louder sound can attract females from a larger distance. This cooperative behaviour of course acts against each frog's own interest to 'get the girl', so they also seek to out-perform each other when a female frog arrives. Mithin imagines that similar pressures may have sparked early hominid music making. As well as the fact that synchrony makes the frogs louder and so easier to hear, I wonder whether the discerning female frogs also realise that if they go to a frog choir they get to choose amongst them, whereas if they follow a single frog croak they get stuck with the frog they find; a form of frog speed dating?

Mithin also suggests that the human ability to synchronise rhythm is about 'boundary loss': seeing oneself less as an individual and more as part of a group, important for early humans about to engage in risky collaborative hunting expeditions. He cites evidence for this from the psychology of music and from anthropology, and it is part of many people's personal experience, for example in a football crowd, or at the Last Night of the Proms.

This reminds me of the experiments where a rubber hand is touched in time with touching a person’s real hand; after a while the subject starts to feel as if the rubber hand is his or her own hand. Effectively our brain assumes that this thing that correlates with feeling must be part of oneself5. Maybe a similar thing happens in choral singing, I voluntarily make a sound and simultaneously everyone makes the sound, so it is as if the whole choir is an extension of my own body?

Part of the neurological evidence for the importance of group music making concerns the production of oxytocin. Female prairie voles that have had oxytocin production inhibited engage in sex as freely as normal voles, but fail to pair bond (p217). The implication is that oxytocin's role in bonding applies equally to social groups. While this explains a mechanism by which collaborative rhythmic activities create 'boundary loss', it doesn't explain why oxytocin is created through rhythmic activity in the first place. I wonder if this is perhaps to do with bipedalism and the need for synchronised movement during face-to-face copulation, which would explain why humans can do synchronised rhythms whereas apes cannot. That is, rhythmic movement and oxytocin production become associated for sexual reasons, and then this generalises to the social domain. Think again of that chanting football crowd?

I should note that Mithin also discusses at length the use of music in bonding with infants, as anyone who has sung to a baby knows, so this offers an alternative route to rhythm & bonding … but not one that is particular to humans, so I will stick with my hypothesis 😉

Sexual selection is a strong theme in the book, the kind of runaway selection that leads to the peacock's tail. The changing lifestyles of early humans, in particular longer periods spent looking after immature young, led to a greater degree of female control in the selection of partners. As human size came close to the physical limits of the environment (p185), Mithin suggests that other qualities had to be used by females to choose their mates, notably male singing and dance – prehistoric Saturday Night Fever.

As one piece of evidence for female mate choice, Mithin points to the overly symmetric nature of hand axes, and imagines hopeful males demonstrating their dexterity by knapping ever more perfect axes in front of admiring females (p188). However, this brings to mind Calvin's "Ascent of Mind", which argues that these symmetric, ovoid axes were used like a discus, thrown into the midst of a herd of prey to bring one down. The two theories of axe shape are not incompatible. Calvin suggests that the complex physical coordination required by axe throwing would have driven general brain development. In fact these forms of coordination are not so far from those needed for musical movement, and indeed expert flint knapping, so maybe it was these skills that were demonstrated by the shaping of axes beyond what was immediately necessary for purpose.

Mithin's description of the musical nature of mother-child interactions also brought to mind Broomhall's "Eternal Child". Broomhall's central thesis is that humans are effectively in a sort of arrested development, with many features, not least our near nakedness, characteristic of infants. Although it was not one of the points Broomhall makes, his arguments made sense to me in terms of the mental flexibility that characterises childhood, and the way this is necessary for advanced human innovation; I am always encouraging students to think in a more childlike way. If Broomhall's theories are correct, then this would help explain how some of the music making more characteristic of mother-infant interactions became generalised to adult social interactions.

I do notice an element of mutual debunking amongst those writing about the richer cognitive aspects of early human and hominid development. I guess this is a common trait in disciplines where evidence is thin and theories have to fill a lot of blanks. So maybe Mithin, Calvin and Broomhall would not welcome me bringing their respective contributions together! However, as in other areas where data is necessarily scant (such as sub-atomic physics), one does feel a developing level of methodological rigour, and the fact that these quite different theoretical approaches have points of connection does suggest that a deeper understanding of early human cognition, while not yet definitive, is developing.

In summary, and as part of this wider unfolding story, "The Singing Neanderthals" is an engaging and entertaining book to read, whether you are interested in the psychological and social impact of music itself, or the development of the human mind.

… and I have another of Mithin’s books in the birthday pile, so looking forward to that too!

  1. See particularly my essay on the role of imagination in bringing together our different forms of ‘specialised intelligence’. “The Prehistory of the Mind” highlighted the importance of this ‘cognitive fluidity’, linking social, natural and technological thought, but lays this largely in the realm of language. I would suggest that imagination also has this role, creating a sort of ‘virtual world’ on which different specialised cognitive modules can act (see “imagination and rationality“).[back]
  2. He calls this musical communication system Hmmmm in its early form – Holistic, Multiple-Modal, Manipulative and Musical, p138 – and later Hmmmmm – Holistic, Multiple-Modal, Manipulative, Musical and Mimetic, p221.[back]
  3. NHS urged to pay for music therapy to cure depression“, Nina Lakhani, The Independent, Monday, 1 August 2011[back]
  4. Professional conductors say 40 beats per minute is the slowest reliable beat without counting between beats.[back]
  5. See also my previous essay on “driving as a cyborg experience“.[back]

Do teachers need a 2:2

Those in the UK will have seen the recent news1 that the Education Secretary Michael Gove is planning to remove funding for teacher training from those who do not achieve a 2:2 or better. A report on the proposals suggests this will reduce the number of trainee science teachers by 25% and language teachers by a third.

An Independent article on this lists various high profile figures who got third class degrees (albeit all from prestigious universities) and who would therefore not be eligible – including Carol Vorderman, who is the Conservative Party's 'maths guru'2.

The proposed policy and the reporting of it raise three questions for me.

First is the perennial problem that the reporting only tells half the story.  Who are these one third of language trainees and one quarter of science trainees who currently do not have 2:2 degrees? Are they recent graduates who have simply not done well in their courses and treating teaching as an easy option? Are they those that maybe made poor choices in their selected courses, but nonetheless have broader talents after careful assessment by the teaching course admissions teams? Or are they mature students who did not do well in university, or maybe never went, but have been admitted based on their experience and achievements since (as we would do for any advanced degree, such as an MSc)?  If it were the first of these, then I think most parents and educators would agree with the government line, but I very much doubt this is the case.  However, with only part of the story how are we to know?  I guess I could read the full report, or maybe the THES has a more complete story, but how many parents reading about this are likely to do so?

Second is the implicit assumption that degree level study in a particular subject is likely to make you a good teacher of that subject.  Certainly in my own first subject, mathematics, many of the brightest mathematicians are unlikely to be good school teachers. In general in the sciences, I would far prefer a teacher who has a really deep understanding of GCSE and A level Physics to one who has a hazy (albeit sufficient to get a 2:2 or even 2:1 degree) knowledge at degree level. I certainly want teachers who have the interest and excitement in their topic to keep up-to-date beyond the minimum needed for their courses, but a broad 'James Gleick' style popular science is probably more useful than third year courses in a Physics degree.

Finally, the focus on degree classification suggests that Michael Gove believes in a cross-discipline, cross-department, and cross-institutional absolute grading that appears risible to anyone working in Higher Education. Does he really believe that a 2:2 from Oxford is the same as a 2:2 at every UK institution? If so then I seriously doubt his ability to hold the education portfolio in government.

To be fair this is a real problem in the Higher Education system as it is hard for those not ‘in the know’ to judge the meaning of grades, especially as it is not simply a matter of institution, often particular parts of an institution (notably music, arts and design schools) have a different profile to the institution as a whole. Indeed we have the same problem within the university system when judging grades from other countries. This has not been helped by gradual ‘grade inflation’ across the education sector from GCSE to degrees, driven in no small part by government targets and independent ‘league tables’ that use crude measures largely unrelated to real educational success. Institutions feel under constant pressure to create rules that meet various metrics to the detriment of real academic judgement3.

If the government is seriously worried about the standard of teachers entering the profession, then it should shift the funding of courses towards measures of real success and motivation – perhaps the percentage of students who subsequently obtain public-sector teaching jobs. If the funding moves, the selection will follow suit!

… and maybe at the same time this should apply across the sector.  A few weeks ago I was at the graduation at LIPA, which is still managing near 100% graduate employment despite the recession and severe cuts across the arts.  Not that employment is the only measure of success, but if metrics are to be used, then at least make them real ones. Or better still drop the metrics, targets and league tables and let students both at school and university simply learn.

  1. Hit headlines about a week ago in the UK, just catching up after holiday![back]
  2. Reforms of teacher training will bring mass shortages, report finds“, Richard Garner, The Independent, Thursday, 11 August 2011, p14-15.[back]
  3. In fact, I came very close to resigning earlier in the summer over this issue.[back]

book: The Laws of Simplicity, Maeda

Yesterday I started to read John Maeda's "The Laws of Simplicity" whilst sitting by Fiona's stall at the annual Tiree agricultural show, and finished it before breakfast today.  Maeda describes his decision to cap the book at 100 pages1 as making it something that could be read during a lunch break. To be honest, 30,000 words sounds like a very long lunch break or a very fast reader, but true to his third law, "savings in time feel like simplicity"2, it is a short read.

The shortness is a boon that I wish many writers would follow (including me). As with so many single issue books (e.g. Blink), there is a slight tendency to over-sell the main argument, but this is forgivable in a short delightful book, in a way that it isn't in 350 pages of less graceful prose.

I know I have a tendency, which can be confusing or annoying, to give, paradoxically for fear of misunderstanding, the caveat before the main point. Still, despite knowing this, in the early chapters I did find myself occasionally bristling at Maeda’s occasional overstatement (although in accordance with simplicity, never hyperbole).

One that particularly caught my eye was Maeda's contrast of the MIT engineer's RTFM (Read The F*cking Manual) with the "designer's approach" to:

marry function with form to create intuitive experiences that we understand immediately.

Although in principle I agree with the overall spirit, and am constantly chided by Fiona for not reading instructions3, the misguided idea that everything ought to be 'pick up and use' has bedevilled HCI and user interface design for at least the past 20 years. Indeed this is the core misconception about Heidegger's hammer example that I argued against in a previous post, "Struggling with Heidegger". In my own reading notes, my comment is "simple or simplistic!" … and I meant here the statement, not the resulting interfaces, although it could apply to both.

It has always been hard to get well written documentation, and the combination of single page ‘getting started’ guides with web-based help, which often disappears when the web site organisation changes, is an abrogation of responsibility by many designers. Not that I am good at this myself. Good documentation is hard work. It used to be the coders who failed to produce documentation, but now the designers also fall into this trap of laziness, which might be euphemistically labelled ‘simplicity’4.

Personally, I have found that the discipline of documenting (in the few times I have observed it!) is in fact a great driver of simple design. Indeed I recall a colleague, maybe Harold Thimbleby5, once suggested that documentation ought to be written before any code is written, precisely to ensure simple use.

Some years ago I was reading a manual (for a Unix workstation, so quite a few years ago!) that described a potentially disastrous shortcoming of the disk sync command (which could have corrupted the disk). Helpfully the manual page included a suggestion of how to wrap sync in scripts that prevented the problem. This seemed to add insult to injury; they knew there was a serious problem, they knew how to fix it … and they didn't do it. Of course, the reason is that manuals are written by technical writers after the code is frozen.

In contrast, I was recently documenting an experimental API6 so that a colleague could use it. As I wrote the documentation I found parts hard to explain. “It would be easier to change the code”, I thought, so I did so. The API, whilst still experimental, is now a lot cleaner and simpler.

Coming back to Maeda after a somewhat long digression (what was that about simplicity and brevity?): while I prickled slightly at a few statements, in fact he very clearly says that the first few concrete 'laws' are the simpler (and, if taken on their own, simplistic); the later laws are far more nuanced and suggest deeper principles. This includes law 5, "differences: simplicity and complexity need each other", which suggests that one should strive for a dynamic between simplicity and complexity. This echoes the emphasis on texture I often advocate when talking with students; whether in writing, presenting or in experience design, it is often the changes in voice, visual appearance, or style which give life.

[Unix command line prompt – the simplest interface?]

I wasn't convinced by Maeda's early claim that simple designs are simpler and cheaper to construct.  This is possibly true for physical products, but rarely so for digital interfaces, where more effort is typically needed in code to create simpler user interfaces.  However, again this was something that was revisited later, especially in the context of more computationally active systems ("law 8: in simplicity we trust"), where he contrasts "how much do you need to know about a system?" with "how much does the system know about you?".  The former is the case for more traditional passive systems, whereas more 'intelligent' systems such as Amazon recommendations (or even the Facebook news feed) favour the latter.  This is very similar to the principles for incidental and low-intention interaction that I have discussed in the past7.

Finally, "The Laws of Simplicity" is beautifully designed in itself.  It includes many gems, not least those arising from Maeda's roots in Japanese design culture, including aichaku, the "sense of attachment one can feel for an artefact" (p.69), and omakase, meaning "I leave it to you", which asks the sushi chef to create a meal especially for you (p.76).  I am perhaps too much of a controller to feel totally comfortable with the latter, but Maeda's book certainly inspires the former.

  1. In fact there are 108 pages in the main text, but 9 of these are full page ‘law/chapter’ frontispieces, so 99 real pages.  However, if you include the 8 page introduction that gives 107 … so even the 100 page cap is perhaps a more subtle concept than a strict count.[back]
  2. See his full 10 laws of simplicity at lawsofsimplicity.com[back]
  3. My guess is that the MIT engineers didn’t read the manuals either.[back]
  4. Apple is a great — read poor — example here as it relies on keen technofreaks to tell others about the various hidden ways to do things — I guess creating a Gnostic air to the devices.[back]
  5. Certainly Harold was a great proponent of ‘live’ documentation, both Knuth’s literate programming and also documentation that incorporated calculated input and output, rather like dexy, which I reported after last autumn’s Web Art/Science camp.[back]
  6. In fairness, the API had been thrown together in haste for my own use.[back]
  7. See ‘incidental interaction” and HCI book chapter 18.[back]

Six weeks on the road

I've been at home for the last week after six weeks travelling around the UK and elsewhere.  I've not kept up while on the road, so I am doing a retrospective post on it all, and need to try to catch up on other half-written posts.

As well as time at Talis offices in B'ham and at Lancs (including exam board week), travels have taken me to Pisa for a workshop on 'Supportive User Interfaces', to Koblenz for the Web Science conference giving a talk on embodiment issues and a poster on web-scale reasoning, to Newcastle for the British HCI conference doing a talk on vfridge, to Nottingham to give a talk on extended episodic experience, and back to Lancs for a session on creativity! Why can't I be like sensible folks and talk on one topic!

Supportive User Interfaces

Monday 13th June I attended a workshop in Pisa on "Supportive User Interfaces", which includes interfaces that adapt in various ways to users.  The majority of people there were involved in various forms of model-based user interfaces, in which models of the task, application and interaction are used to generate user interfaces on the fly. W3C have had a previous group in this area; Dave Raggett from W3C was at the workshop, and it sounds like there will be a new working group soon.  This clearly has strong links to various forms of 'meta-level' representations of data, tasks, etc.  My own contribution started the day, framing the area, focusing partly on the reasons for having more 'meta-level' interfaces, including social empowerment, and partly on the principles/techniques that need to be considered at a human level.
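
For those who have not met model-based UIs, the flavour is roughly this: the interface is not hand-coded, but generated from a declarative model of the task.  A toy sketch, in which the model format and the widget mapping are entirely my own inventions for illustration:

```python
# Toy model-based UI generation: abstract task model -> concrete widgets.
task_model = {
    "task": "book room",
    "fields": [
        {"name": "date", "type": "date"},
        {"name": "nights", "type": "int"},
        {"name": "smoking", "type": "bool"},
    ],
}

# One of many possible mappings; a different device or modality could map
# the same model to speech prompts or a command line instead.
WIDGET_FOR_TYPE = {"date": "calendar picker", "int": "spinner", "bool": "checkbox"}

def generate_ui(model: dict) -> list[str]:
    """Map each abstract field in the model onto a concrete widget."""
    return [f"{field['name']}: {WIDGET_FOR_TYPE[field['type']]}"
            for field in model["fields"]]

print(generate_ui(task_model))
```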

Also on Monday was a meeting of IFIP Working Group 2.7/13.4. IFIP is the UNESCO-founded pan-national agency to which national computer societies such as the BCS in the UK and the ACM and IEEE Computer Society in the US belong.  Working Group 2.7/13.4 is focused on the engineering of user interfaces.  I had been actively involved in the past, but have had many years' lapse.  However, this seemed a good thing to re-engage with, with my new Talis hat on!

Web Science Conference in Koblenz

Jaime Teevan from Microsoft gave the opening keynote at WebSci 2011.  I know her from her earlier work on personal information management, but her recent work, and the keynote, was about analysing and visualising changes in web pages.  Web page changes are analysed alongside users' re-visitation patterns; by looking at the frequency of re-visitation, Jaime and her colleagues are able to identify the parts of pages that change with similar frequency, helping them, inter alia, to improve search ranking.
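
I don't know the details of the algorithms Jaime described, but the flavour of grouping page regions by change frequency can be sketched.  Everything below – the region names, the use of the median gap, the tolerance – is my own guess at an illustration, not her method:

```python
# Group page regions whose observed change intervals are similar.
from statistics import median

def group_by_change_rate(regions: dict[str, list[float]], tol: float = 0.5):
    """regions maps a region id to the observed days between its changes."""
    rates = {rid: median(gaps) for rid, gaps in regions.items()}
    groups: list[list[str]] = []
    for rid in sorted(rates, key=rates.get):
        # start a new group when the change rate jumps by more than tol days
        if groups and abs(rates[rid] - rates[groups[-1][-1]]) <= tol:
            groups[-1].append(rid)
        else:
            groups.append([rid])
    return groups

# headlines and ads change daily; navigation almost never
print(group_by_change_rate({"headline": [1, 1, 2], "nav": [90, 120], "ads": [1, 1, 1]}))
# -> [['headline', 'ads'], ['nav']]
```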

I had many great conversations, some with people I knew previously (e.g. the Southampton folks), but also new contacts, including the group at Troy that does lots of work with data.gov.  I was particularly interested in some work using content matching to look for links between otherwise unlinked (or only partly inter-linked) datasets.  There were also lots of good presentations, including one on trust prediction and a fantastic talk by Mark Bernstein from Eastgate, which he delivered in blank verse!

My own contribution included the poster that Dave@Talis prepared, on the web-scale spreading activation work done in collaboration with Univ. Athens.  This is quite a niche area in a multi-disciplinary conference, so it didn't elicit quite the interest of the social networking posters, but it did lead to a small number of in-depth discussions.

In addition I gave a talk on the more cognitive/philosophical issues that arise when we start to use the web as an external extension to, or replacement of, memory, including its impact on education.  I got some good feedback from this.

The closing keynote was from Barry Wellman, the man who started social network analysis long before social networks were on computers.  At one point he challenged the Dunbar number1. I wondered whether this was due to cognitive extension with address books etc., but he didn't seem to think so; there is evidence that some large circles predate the web (although maybe not physical address books).  It made me wonder about itinerant tradesmen, tinkers, etc., even with no prostheses. Maybe the numbers sort of apply to any single context, but are repeated for each new context?

The HCI Conference – Newcastle

I attended the British HCI conference in Newcastle. This was the 25th conference, and as my very first academic paper in computing2 was at the first BHCI in 1984, I was pleased to be there for this anniversary.  As the paper I was presenting was a retrospective on vfridge, a social networking site dating back to 1999/2000, it seemed an historic occasion!

As is always the case, the presentations were all interesting. Strictly speaking, BHCI is a 'second tier' conference compared with CHI, but why is it that the papers are always more interesting, that I learn more?  It is likely that a fair number of papers were CHI rejects, so it should be the other way round – is it that selectivity and 'quality' inevitably become conservative and boring?

Gregory Abowd gave the closing keynote. It was great to see Gregory again; we meet too rarely.  The main focus of his keynote was on three aspects of research – novelty, value and reliability – and how his own work had moved within this space over the years.  In particular, having two autistic sons has led him in directions he would never have considered, and this immediately valuable work has also created highly novel research. Novelty and value can coexist.

Gregory also reflected on the BHCI conference, as it was his early academic 'home' when he did his PhD and postdoctoral work in the late 1980s.  He thought that, rather than being, as with many conferences, a second best to getting a CHI paper, it could instead be a place for (not getting the quote quite perfect) "papers that should get into CHI", by which he meant a proving ground for new ideas that would then go on to be in CHI.

However, I initially read the quote differently. BHCI always had a broader concept of HCI compared with CHI's quite limited scope; that is, BHCI as a place that points the way for the future of HCI, just as it was the early nurturing place of MobileHCI.  However, CHI has now become much broader in its own conception, so maybe this is no longer necessary. Indeed, at the althci session the organisers said that their only complaint was that the papers were not 'alt' enough – that maybe 'alt' had become mainstream. This prompted Russell Beale to suggest that maybe althci should now be real science, such as replication!

Gregory also noted the power of the conference as a meeting ground. It has always been proud of the breadth of its international attendance, but perhaps it is UK saturation that should be its real measure of success.  Of course the conference agenda has become so full, and international travel so much cheaper than it was, that there is a tendency to go to the more topic specific international conferences and neglect the UK scene.  This is compounded by the relative dearth of the small UK day workshops that used to be so useful in nurturing new researchers.

I feel a little guilty here, as this was the first BHCI I had been to since it was in Lancaster in 2007 … as Tom McEwan pointed out, I always apologise but never come! However, to be fair, I have also only been twice to CHI in the last 10 years, and then only when it was in Vienna and Florence. I have just felt too busy, so have been avoiding conferences that I did not absolutely have to attend.

In response to Gregory’s comments, someone, maybe Tom, mentioned that in days of metrics-based research assessment there was a tendency to submit one’s best work to those venues likely to achieve highest impact, hence the draw of CHI. However, I have hardly ever published in CHI and I think only once in TOCHI, yet, according to Microsoft Research, I am currently the most highly cited HCI researcher over the last 5 years … So you don’t have to publish in CHI to get impact!

And incidentally, the vfridge paper had NOT been submitted to CHI, but was specially written for BHCI as it seemed the fitting place to discuss a thoroughly British product 🙂

Nottingham MRL

I was at the Mixed Reality Lab in Nottingham for Joel Fischer's PhD viva, and while there gave a seminar in the afternoon on "extended episodic experience", based on Haliyana Khalid's PhD work and ideas that arose from it. Basically, whereas 'user experience' has become a big issue, most of the work focuses on individual 'experiences', whereas much of life consists of an ongoing series of experiences (episodes) which together make up the whole experience of interacting with a person or place, following a band, etc.

I had obviously not done a good enough job of wearing Joel down with difficult questions in the PhD viva in the morning, as he was there in the afternoon to ask difficult questions of his own 😉

Docfest – Digital Economy Summer School

The last major event was Docfest, which brought together the PhD students from the digital economy centres around the country. I am not sure of the exact count, but I think just short of 150 participants. They come from a wide variety of backgrounds – business, design, computing, engineering – and many are mature students with years of professional experience behind them.

This looked like being a super event; unfortunately I was only able to attend for a day 🙁  However, I had a great evening at the welcome event talking with many of the students, and even got to ride in Steve Forshaw's Sinclair C5!

My contribution to the event was running the first morning session on 'creativity'. Surprise, surprise, this started with a bad ideas session, but it was new for me too, as the largest group I've run in the past has been around 30.  There were a number of local Highwire students acting as facilitators for the groups, so I had only to set them off and observe the results :-). At the end of the morning I gave some of the theoretical background to bad ideas as a method and to understanding (aspects of) creativity more widely.

Other speakers at the event included Jane Prophet, Chris Csikszentmihalyi and Chris Bonnington, so I was sad to miss them; although I did get a fascinating chat with Jane over breakfast in the hotel, hearing about her new projects on arts and neural imaging, and on how repetitious writing induces temporary psychosis … that is why teachers give lines, to send the pupils bonkers!

  1. The idea that there are fundamental cognitive limits on the size of social groups, with different sized circles: family ~6, extended family ~20, village ~60, large village ~200.[back]
  2. I had published previously in agricultural engineering.[back]