Christmas and the Foundational Myths of Social-Anthropology

I have started to read “Unwrapping Christmas” (David Miller, ed.), a collection about the modern celebration of Christmas from an anthropological and sociological perspective.

So far I have read just the first two chapters: an attempt to synthesise ‘A Theory of Christmas‘ by Miller and a translation of Lévi-Strauss’ 1952 article on ‘Father Christmas Executed‘ (Le Père Noël supplicié).  These have been fascinating both for their intentional insights into Christmas, and for their unintentional insight into the mindset of social anthropology, the foundational myths of the field.

I came across the book as it was mentioned in an article I was reviewing, and I realised it is something I should have read when I was first writing about virtual Christmas crackers many years ago1.

It was published in 1993, and both Christmas, as a global festival, and anthropology/sociology have developed since then, so in some ways it is a snapshot from 20 years ago.  It would, not least, be interesting to see an update on post-9/11 Christmas in Islamic countries.

Miller starts his investigation into a ‘theory’ of Christmas by noting that anthropologists take a largely ‘synchronic’ view of phenomena, “detailed observations of current practices” as opposed to folklore research, which is more focused on “survivals”.  However, despite this, he allows himself an historical detour.

He starts, as is traditional, with the Roman midwinter festivals.  However, I had not realised that there were several of these, deriving it appears (based on a little wider reading, not Miller) from a variety of different pre-Roman midwinter traditions, possibly dating back to Babylonian times.  First there was Kalends2, which was known for present giving, unusual freedoms for slave and child, and holding lightly to money.  Second, there was Saturnalia, a period of feasting and gluttony, which even in Roman times was seen as “crass materialism”.  Finally, there was the latecomer Dies Natalis Solis Invicti, the Sun God’s feast, set on 25th December.

The last is the reason I have always heard for the choice of the date of Christmas: a deliberate borrowing of a popular pagan festival, then ‘Christianising’ it in the same way that many churches were built on pagan religious sites.  Except, when I started to look a little more deeply at this, I found that this account, the one I had always heard, is now challenged by a more modern account based on the earlier supposed date of the conception of Jesus (the Annunciation)3.

The evidence for either derivation appears at best circumstantial and partial, and whether or not Christmas was deliberately dated to coincide with these pagan festivals, the driving force of midwinter festivals, the need for consolation in darkness and the hope of new light, is undoubtedly also one of the reasons for the ongoing popularity of Christmas.  Furthermore, it is undoubtedly the case that many rituals connected with the Christian Christmas, such as the Christmas tree, are deliberate or accidental adoptions of pre-Christian symbolism.

However, the origins and strength of the academic myth of the pagan date of Christmas, whether or not it turns out to be true, make an interesting story in themselves.  It appears to have arisen in the 18th and 19th centuries.  During this time, and running into the early years of the 20th century, studies of folklore and myth commonly sought syncretic origins, seeing common underlying mythic stories across all cultures.  I guess Freud and Jung can be seen as part of the tail-end of this tradition.

This can be seen as a modernist agenda, albeit with a slight trace of the Victorian love of the occult: seeking singular causes, and seeing all ancient accounts as not just mythic, but entirely fictional.  This hyper-scepticism was severely challenged by Calvert and Schliemann’s discovery of Troy in the 1860s4, which had been believed to be purely mythical.  Nowadays historians take a critical but more open-minded approach to documentary evidence, but the remnants of Victorian hyper-scepticism were still evident in Biblical scholarship, certainly until the late 20th century.

Miller ends up with a two-stranded theory.

One strand is externally focused on the carnival aspects of Christmas as a way to connect to a wider world, through a temporary overturning of order with (almost Easter-like, but to my mind stretched) echoes of the killing and renewal of the Lord of Misrule.  A more cynical view of carnival might be of a periodically sanctioned inversion of establishment, in order to channel and control dissent.  However, Miller offers a more generous, life-affirming view.

The second strand is internally focused on family, and in particular the nuclear family.  Miller sees this as a response to society under threat, in modern times “the threat posed by the sheer scale of materialism“.  That is, if I have understood right, Christmas as a reaction against, rather than a slave to, market culture.

As an investigation of the global secular Christmas, the account makes virtually no reference to the religious origins of Christmas.  Indeed, the desire to distance itself from Christian theology and symbolism sometimes becomes arcane.  At one point Miller suggests that:

“The birth (sic) of Christmas is itself an attempt to anthropomorphize the divinity in the form of the domestic family unit …”

This is stated with no reference to the incarnation, or to the origins of the early celebration of Christmas, when the theology of Christ’s humanity and divinity was being contested.  I could not decide whether this was deliberate irony or culpable ignorance.

The relation between Christmas as religious rite and secular festival is not only problematic for the anthropologist.  Church writers decried the intemperate and potentially licentious festivities throughout the first millennium, and Christmas was banned entirely in Puritan England.  Within living memory, Christmas Day was a normal working day in the Calvinist Protestant areas of Scotland, and in the 1970s there was a campaign to “put Christ back in Christmas“, not least in reaction to the growing use of ‘Xmas’.  Many Christians find the figure of Father Christmas problematic.  At the very best there is a fear of confusing children, captured perfectly in ELP’s  “I believe in Father Christmas“:

They sold me a dream of Christmas
They sold me a Silent Night
And they told me a fairy story
‘Till I believed in the Israelite
And I believed in Father Christmas
And I looked to the sky with excited eyes
‘Till I woke with a yawn in the first light of dawn
And I saw him and through his disguise

At worst Father Christmas, often with attendant fairies and pixies, is seen as verging on the demonic.

Conflict between secular and religious Christmas seems particularly strong in the US, with, it appears from this side of the Atlantic, endless tales of cribs or Christmas trees being banned from schools or public places for fear of hurting the feelings of other religions (whose members almost universally seem to disclaim any offence), not to mention the anodyne ‘Happy Holidays’.  However, similar stories hit the news in the UK also, with one vicar suffering the ire of the national media for telling school children about the (somewhat gruesome) historic origins of Saint Nicholas, and in so doing undermining their faith in Father Christmas5:

“… horrified parents said the talk would give their children nightmares and make them start to disbelieve in the magic of Father Christmas and his reindeer.”

Lévi-Strauss’ ‘Father Christmas Executed‘ picks up precisely one such point of Christmas conflict: in Dijon in 1951, the local clergy publicly burnt an effigy of Father Christmas.  It appears, from the France-soir report that Lévi-Strauss quotes, that the children actively took part in this mock execution, so maybe French children in the 1950s were made of sterner stuff than UK children today.  Naughtily, I cannot help but think that the Dijon clergy may have unwittingly captured the neo-pagan spirit of the burning of wicker men.  Certainly the France-soir reporter saw some of the irony of the situation, noting that “Dijon awaits the resurrection of Father Christmas“, who was to speak from the Town Hall roof later that evening.

Writing in 1952, Lévi-Strauss is fascinated by the ostensible rejection of American culture and values in France, and yet the speed at which the American Christmas had infiltrated the French Christmas.  He attributes this to ‘stimulus diffusion’, where the external system (the American Christmas) evokes pre-existing ideas or needs.

He makes excursions across various related beliefs and ceremonies, including, inevitably, Saturnalia, but also, in passing, mentioning the display of antlers in Renaissance Christmas dances, prefiguring Santa’s reindeer (Herne and the Wild Hunt transfigured to Rudolph with his nose so bright).  However, he eventually settles on the conclusion that the “beliefs linked to Father Christmas relate to the sociology of initiation (and that is beyond doubt)“, based partly on practices of the Pueblo Indians, who placate the spirits of past dead children by giving gifts to their own live children, the lessons from which “can be extended to all initiation rites and even all occasions when society is divided into two groups.” — well, of course.

As an example of a literature-based study of homo anthropologist, I found the following statement particularly fascinating:

“Explanations in terms of survivals are always inadequate. Customs neither disappear nor survive without reason.  When they do survive, the reason is less likely to be found in the vagaries of history than in the permanence of a function which analysing the present allows us to discover.”

The discussion of the Pueblo Indians is deemed powerful precisely because there are no historical connections and therefore any (strained) parallel is connected to deep underlying human needs and aspirations, the “most general conditions of social life.”

Ignoring the validity of the analysis of Christmas itself, one cannot help but see the parallels with the Victorian folklorists’ desire for deep universal structures.  Of course, here there are, in addition, historical cultural connections within academia, but also this desire for the singular causal explanation is surely one of the “general conditions of academic life.”

As mentioned, writing in 1993, 40 years after Lévi-Strauss’ 1952 article, Miller starts by noting the ‘synchronic’ focus of anthropologists, and in the list of contributors Lévi-Strauss is described as (and undoubtedly was) “the most distinguished anthropologist of his generation“.

It almost feels as though the quote from Lévi-Strauss is a statement of a foundational myth of the discipline.

The statement is not absolutist; indeed in ‘Father Christmas Executed‘ Lévi-Strauss triangulates his synchronic analysis of Christmas with a diachronic excursion into the Abbé de Liesse, Julebok and the Lord of Misrule amongst others.  However, note how the emphasis of the statement, “less likely to be found in the vagaries of history than in the permanence of a function“, gives a primacy to ahistorical accounts.

Ethnography developed as a recording of people’s behaviours and customs without imposing external values or systems of thought, the cultural equivalent of a physical observation, but with the understanding that the observer needs to get inside the subjective experience of the participants.  This methodological focus has perhaps transposed into an ontological one: the importance, methodologically, of focusing on the present becoming a primacy of the present, and the avoidance of external value systems becoming an avoidance of historical analysis as a potential taint.

Maybe the field has also been wary of ubiquitous historical determinism, such as colonial accounts of the inevitability and superiority of 19th-century civilisation, Marxist belief in its inevitable downfall, or the more recent narratives of western democracy.

However, as rhetoric, this ahistorical focus feels rather like pre-ecological biology.  A simple view has animals evolving to fit into environmental niches, just as Lévi-Strauss sees American Christmas ideas finding a place in a French cultural niche.  However, biological systems are now seen as reflexive and co-evolutionary: the environment is shaped by the species just as the species is shaped by the environment.

This is a less comfortable and less stable world that challenges the enduring Darwinian myth of optimality.  Species (including humans) are no longer the best possible creatures for their environment but historically contingent and part of an ongoing dynamic.

Similarly cultural practices do not simply sit upon universal human and social functions as a singular causation, but those functions, the underlying human needs, change due to the practices in which we as individuals and as societies engage.

[Adoration of the Shepherds by Gerard van Honthorst]

The weakness of ignoring this can be seen in Miller’s view of Christmas in relation to the nuclear family.  To the early readers of Matthew and Luke’s gospels, the story of a woman giving birth away from her extended family must surely have been strange if not shocking.  Indeed early iconography is much more focused on mother and child than the classic modern family crib scene (although both are found, and the history of woodworking tools is indebted to depictions of Joseph the carpenter).

However, if we imagine recently industrialised and urbanised 19th century Britain, with extended family often far away in the country, or recent migrants or settlers in the US, the non-traditional nativity scene must surely have framed and helped build the very notion of nuclear family.  This is particularly obvious in the account of Christmas in Laura Ingalls Wilder’s “Little House on the Prairie“.

Similarly, is the global popularity of Christmas also in part due to the 18th- and 19th-century combination of colonisation and missionary movements that helped establish globally what we now deem to be ‘universal’ ideas of ethics and rights; ideas, of course, derived in no small part from the Christmas story?  Although, for their part, these ideas maybe had impact precisely because they touched universal needs in the human heart.

  1. To try out virtual crackers go to, to read about them see my chapter “Deconstructing Experience – pulling crackers apart”  or shorter Interactions article “Taking fun seriously“.[back]
  2. Strictly, Kalends were the end of any month, but there appeared to be particular celebrations at the end of December.[back]
  3. For more discussion of the date of Christmas see The Date of Christmas and Epiphany and Catholic Encyclopedia: Christmas.[back]
  4. The Wikipedia page on Troy describes the early archaeology, but not the previous scepticism, which is in more detailed accounts. [back]
  5. Vicar tells primary school children Santa Claus is NOT REAL and reveals gruesome legend.  Express, 12 Dec 2013. [back]

The war in the west

Just got back from the book launch for “Tiree: War among the Barley and Brine“, organised by An Iodhlann and the Islands Book Trust.

Mike Hughes, one of the authors, gave a talk and there were ex-service men connected with Tiree and their families present.  One man was the son of the pilot of one of the two Halifaxes which crashed into each other over the airfield on a cloudy day – a father he had never met as his mother was only 4 months pregnant at the time.

I hadn’t realised that it was from Tiree that the weather reports came in that set the timetable for D-Day.  The meteorological squadrons are unsung heroes of the war, flying far out into the Atlantic, in conditions where all other planes were grounded, to get the long-range weather data that is so easy to gather nowadays from satellites.  Sadly, the airman who had made the crucial weather observations for D-Day did not survive the war, dying in that same Halifax accident over Tiree.

Neither had I known that Tiree was to be the staging post for the withdrawal of Winston Churchill and the Royal Family had the worst happened and the Germans invaded Britain.  So, before it withdrew to a government in exile in Saskatchewan, the last outpost of British sovereignty would have been … Tiree.

Holiday Reading

Early in the summer Fiona and I took 10 days holiday, first touring on the West Coast of Scotland, south from Ullapool, and then over the Skye Road Bridge to spend a few days on Skye.  As well as visiting various wool-related shops on the way and a spectacular drive over the pass from Applecross, I managed a little writing, including some work on regret modelling1.  And, as well as the writing and regret modelling, quite a lot of reading.

This was my holiday reading:

The Talking Ape: How Language Evolved, Robbins Burling (see my booknotes and review)

In Praise of the Garrulous, Allan Cameron (see my booknotes)

A Mind So Rare, Merlin Donald (see my booknotes and review)

Wanderlust, Rebecca Solnit (see my booknotes)

  1. At last!  It has been something like 6 years since I first did initial, and very promising, computational regret modelling, and I have at last got back to it, writing driver code so that I have data from a systematic spread of different parameters.  Happily this verified the early evidence that the cognitive model of regret I wrote about first in 2003 really does seem to aid learning.  However, the value of more comprehensive simulation was proved, as early indications that positive regret (the grass-is-greener feeling) was more powerful than negative regret do not seem to have been borne out.[back]

books: The Nature of Technology (Arthur) and The Evolution of Technology (Basalla)

I have just finished reading “The Nature of Technology” (NoT) by W. Brian Arthur and some time ago read “The Evolution of Technology” (EoT) by George Basalla, both covering a similar topic, the way technology has developed from the earliest technology (stone axes and the wheel), to current digital technology.  Indeed, I’m sure Arthur would have liked to call his book “The Evolution of Technology” if Basalla had not already taken that title!

We all live in a world dominated by technology and so the issue of how technology develops is critical to us all.  Does technology ultimately serve human needs or does it have its own dynamics, independent of us except maybe as cogs in its wheels?  Is the arc of technology inevitable or does human creativity and invention drive it in new directions?  Is the development of technology now similar to that in previous generations (albeit a bit faster), or does digital technology fundamentally alter things?

Basalla’s book was published in 1988, Arthur’s in 2009, so Arthur has 20 years more to work on; not much compared to 2 million years for the stone axe and 5000 years for the wheel, but 20 years that included the dot-com boom (and bust!) and the growth of the internet.  In a footnote (NoT,p.17), Arthur describes Basalla as “the most complete theory to date“, although he then does not appear to reference Basalla directly again in the text – maybe because they have different styles: Basalla (a historian of technology) offers a more descriptive narrative, whilst Arthur (an engineer and economist) seeks a more analytically complete account.  However, I also suspect that Arthur discovered Basalla’s work late and included a ‘token’ reference; he says that a “theory of technology — an “ology” of technology” is missing (NoT,p.14), but, however partial, Basalla’s account cannot be seen as other than part of such a theory.

Both authors draw heavily, both explicitly and implicitly, on Darwinian analogies, but both also emphasise the differences between biological and technological evolution.  Neither is happy with what Basalla calls the “heroic theory of invention”, where “inventions emerge in a fully developed state from the minds of gifted inventors” (EoT,p.20).  In both there are numerous case studies which refute these more ‘heroic’ accounts, for example Watt’s invention of the steam engine after seeing a kettle lid rattling on the fire, and show how inventions are always built on earlier technologies and knowledge.  Arthur is more complete in eschewing explanations that depend on human ingenuity, and therein, to my mind, lies the weakness of his account.  However, Arthur does take into account, as a central mechanism, the accretion of technological complexity through the assembly of components, all but absent from Basalla’s account — indeed in my notes as I read Basalla I wrote “B is focused on components in isolation, forgets implication of combinations“.

I’ll describe the main arguments of each book, then look at what a more complete picture might look like.

(Note, very long post!)

Basalla: the evolution of technology

Basalla describes his theory of technological evolution in terms of four concepts:

  1. diversity of artefacts — acknowledging the wide variety both of different kinds of things, but also variations of the same thing — one example, dear to my heart, is his images of different kinds of hammers 🙂
  2. continuity of development — new artefacts are based on existing artefacts with small variations, there is rarely sudden change
  3. novelty — introduced by people and influenced by a wide variety of psychological, social and economic factors … not least playfulness!
  4. selection — winnowing out the less useful/efficient artefacts, and again influenced by a wide variety of human and technological factors

Basalla sets himself apart both from earlier historians of technology (Gilfillan and Ogburn), who took an entirely continuous view of development, and also from the “myths of the heroic inventors”, which saw technological change as dominated by discontinuous change.

He is a historian and his accounts of the development of artefacts are detailed and beautifully crafted.  He takes great efforts to show how standard stories of heroic invention, such as the steam engine, can be seen much more sensibly in terms of slower evolution.  In the case of steam, the basic principles had given rise to Newcomen’s steam pump some 60 years prior to Watt’s first steam engine.  However, whilst each of these stories emphasised the role of continuity, as I read them I was struck also by the role of human ingenuity.  If Newcomen’s engine had been around since 1712, why did the development of a new and far more successful form take 60 years?  The answer is surely the ingenuity of James Watt.  Newton said he saw further only because he stood on the shoulders of giants, and yet is no less a genius for that.  Similarly, the tales of invention seem to be both ones of continuity, but also often enabled by insights.

In fact, Basalla does take this human role on board, building on Usher’s earlier work, which placed insight centrally in accounts of continuous change.  This is particularly central in his account of the origins of novelty, where he considers a rich set of factors that influence the creation of true novelty.  This includes both individual factors, such as playfulness and fantasy, and also social/cultural factors, such as migration and the patent system.  It is interesting, however, that when he turns to selection, it is lumpen factors that are dominant: economic, military, social and cultural.  This brings to mind Margaret Boden’s H-creativity and also Csikszentmihalyi’s cultural views of creativity — basically something is only truly creative (or maybe innovative) when it is recognised as such by society (discuss!).

Arthur: the nature of technology

Basalla ends his book confessing that he is not happy with the account of novelty as provided from historical, psychological and social perspectives.  Arthur’s single reference to Basalla (endnote, NoT, p.17) picks up precisely this gap, quoting Basalla’s “inability to account fully for the emergence of novel artefacts” (EoT,p.210).  Arthur seeks to fill this gap in previous work by focusing on the way artefacts are made of components, novelty arising through the hierarchical organisation and reorganisation of these components, ultimately built upon natural phenomena.  In language reminiscent of proponents of ‘computational thinking‘, Arthur talks of a technology being the “programming of phenomena for our purposes” (NoT,p.51).  Although not directly on this point, I particularly liked Arthur’s quotation from Charles Babbage, “I wish to God this calculation had been executed by steam” (NoT,p.74), but did wonder whether Arthur’s computational analogy for technology was as constrained by the current digital perspective as Babbage’s was by the age of steam.

Although I’m not entirely convinced of the completeness of hierarchical composition as an explanation, it is certainly a powerful mechanism.  Indeed, Arthur views this ‘combinatorial evolution’ as the key difference between biological and technological evolution.  This assertion of the importance of components is supported by computer simulation studies as well as historical analysis.  However, this is not the only key insight in Arthur’s work.

Arthur emphasises the role of what he calls ‘domains’, in his words a “constellation of technologies” forming a “mutually supporting set” (NoT,p.71).  These are clusters of technologies/ideas/knowledge that share some common principle, such as ‘radio electronics’ or ‘steam power’.  The importance of these is such that he asserts that “design in engineering begins by choosing a domain” and that the “domain forms a language” within which a particular design is an ‘utterance’.  However, domains themselves evolve, spawned from existing domains or natural phenomena, maturing, and sometimes dying away (like steam power).

The mutual dependence of technologies can lead to these domains suddenly developing very rapidly, and this is one of the key mechanisms to which Arthur attributes more revolutionary change in technology.  Positive feedback effects are well studied in cybernetics and are one of the key mechanisms in chaos and catastrophe theory, which became popularised in the late 1970s.  However, Arthur is rare in fully appreciating the potential for these effects to give rise to sudden and apparently random changes.  It is often assumed that evolutionary mechanisms give rise to ‘optimal’ or well-fitted results.  In other areas too, you see what I have called the ‘fallacy of optimality’1; for example, in cognitive psychology it is often assumed that given sufficient practice people will learn to do things ‘optimally’ in terms of mental and physical effort.
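Arthur’s earlier economics work on increasing returns and lock-in makes just this point about apparently random outcomes.  The following is my own toy sketch (not Arthur’s actual model): adopters of two equally good technologies arrive in random order, each picking whichever gives the higher payoff, where payoff combines an innate preference with returns that grow with the number of existing adopters.

```python
import random

def lock_in_run(steps=2000, seed=None):
    """Toy increasing-returns model: two adopter types arrive at random;
    each picks the technology with the higher payoff, where
    payoff = innate preference + network returns from prior adopters."""
    rng = random.Random(seed)
    n = {"A": 0, "B": 0}                  # adopters so far
    prefs = {"R": {"A": 1.0, "B": 0.8},   # R-types innately prefer A
             "S": {"A": 0.8, "B": 1.0}}   # S-types innately prefer B
    returns = 0.01                        # increasing returns per prior adopter
    for _ in range(steps):
        agent = rng.choice(["R", "S"])
        payoff = {t: prefs[agent][t] + returns * n[t] for t in n}
        n[max(n, key=payoff.get)] += 1    # adopt the higher-payoff technology
    return max(n, key=n.get)              # which technology dominated

winners = [lock_in_run(seed=s) for s in range(100)]
print("A locked in:", winners.count("A"), "/ B locked in:", winners.count("B"))
```

Once either technology’s lead exceeds about 20 adopters, even agents who innately prefer the other switch to it, so every run locks in; but which technology wins varies from run to run purely on the early chance order of arrivals, with nothing ‘optimal’ about the outcome.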

human creativity and ingenuity

Arthur’s account is clearly more advanced than the earlier, more gradualist ones, but I feel that in pursuing the evolution of technology based on its own internal dynamics, he underplays the human element of the story.  Arthur even goes so far as to describe technology using Maturana’s term autopoietic (NoT,p.170) — something that is self-(re)producing, self-sustaining … indeed, in some sense with a life of its own.

However, he struggles with the implications of this.  If technology responds to “its own needs” rather than human needs, “instead of fitting itself to the world, fits the world to itself” (NoT,p.214), does that mean we live with, or even within, a Frankenstein’s monster, one that cares as little for the individuals of humanity as we do for our individual shedding skin cells?  Because of positive feedback effects, technology is not deterministic; however, it is rudderless, cutting its own wake, not ours.

In fact, Arthur ends his book on a positive note:

“Where technology separates us from these (challenge, meaning, purpose, nature) it brings a type of death. But where it affirms these, it affirms life. It affirms our humanness.” (NoT,p.216)

However, there is nothing in his argument to admit any of this hope; it is more a forlorn hope against hope.

Maybe Arthur should have ended his account at its logical end.  If we should expect nothing from technology, then maybe it is better to know it.  I recall as a ten-year old child wondering just these same things about the arc of history: do individuals matter?  Would the Third Reich have grown anyway without Hitler and Britain survived without Churchill?  Did I have any place in shaping the world in which I was to live?  Many years later as I began to read philosophy, I discovered these were questions that had been asked before, with opposing views, but no definitive empirical answer.

In fact, for technological development, just as for political development, things are probably far more mixed, and reconciling Basalla’s and Arthur’s accounts might suggest that there is space both for Arthur’s hope and for human input into technological evolution.

Recall there were two main places where Basalla placed human input (individual and social/cultural): novelty and selection.

The crucial role of selection in Darwinian theory is evident in its eponymous role: “Natural Selection”.  In Darwinian accounts, this is driven by the breeding success of individuals in their niche, and certainly the internal dynamics of technology (efficiency, reliability, cost effectiveness, etc.) are one aspect of technological selection.  However, as Basalla describes in greater detail, there are many human aspects to this as well, from the multiple individual consumer choices within a free market to government legislation, for example regulating genome research or establishing emissions limits for cars.  This suggests a relationship with technology less like that with an independently evolving wild beast and more like that of the farmer artificially selecting the best specimens.

Returning to the issue of novelty: as I’ve noted, even Basalla seems to underplay human ingenuity in the stories of particular technologies, and Arthur even more so.  Arthur attempts to account for “the appearance of radically novel technologies” (NoT,p.17) through the composition of components.

One example of this is the ‘invention’ of the cyclotron by Ernest Lawrence (NoT,p.114).  Lawrence knew of two pieces of previous work: (i) Rolf Wideröe’s idea to accelerate particles using AC current down a series of (very) long tubes, and (ii) the fact that magnetic fields can make charged particles swing round in circles.  He put the two together and thereby made the cyclotron, AC currents sending particles ever faster round a circular tube.  Lawrence’s first cyclotron was just a few feet across; now, in CERN and elsewhere, they are many miles in diameter, but the principle is the same.

Arthur’s take-home message from this is that the cyclotron did not spring ready-formed and whole from Lawrence’s imagination, like Athena from Zeus’ head.  Instead, it was the composition of existing parts.  However, the way in which these individual concepts or components fitted together was far from obvious.  In many of the case studies the component technology or basic natural phenomena had been around and understood for many years before they were linked together.  In each case study it seems that the vital key in putting together the disparate elements is the human one — heroic inventors after all 🙂

Some aspects of this invention are not specifically linked to composition: experimentation and trial-and-error, which effectively try out things in the lab rather than in the market place; the inventor’s imagination of fresh possibilities and their likely success, effectively trial-and-error in the head; and certainly the body of knowledge (the domains, in Arthur’s terms) on which the inventor can draw.

However, the focus on components and composition does offer additional understanding of how these ‘breakthroughs’ take place.  Randomly mixing components is unlikely to yield effective solutions.  Human inventors’ understanding of the existing component technologies allows them to spot potentially viable combinations and perhaps even more important their ability to analyse the problems that arise allow them to ‘fix’ the design.

In my own work on creativity I often talk about crocophants: the observation that arbitrarily putting two things together, even if each is good in its own right, is unlikely to lead to a good combination.  However, by deeply understanding each, and why it fits its respective environment, one is able to intelligently combine things to create novelty.

Darwinism and technology

Both Arthur and Basalla are looking for a modified version of Darwinism to understand technological evolution.  For Arthur it is the way in which technology builds upon components with ‘combinatorial evolution’.  While pointing to examples in biology he remarks that “the creation of these larger combined structures is rarer in biological evolution — much rarer — than in technological evolution” (NoT, p.188).  Strangely, it is precisely the power of sexual reproduction over simpler mutation that it allows the ‘construction’ and ‘swapping’ of components; this is why artificial evolutionary algorithms often outperform simple mutation (a form of stochastic hill-climbing algorithm, itself usually better than deterministic hill climbing).  However, combining technological components is not the same as combining biological ones.
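This contrast between mutation-only search and component-swapping crossover is easy to see in a toy genetic algorithm.  The sketch below is my own illustration, not from either book: the bit-string fitness function, mutation rate, population size and selection scheme are all arbitrary choices made for the sake of the demo.

```python
import random

GENOME_LEN = 40
random.seed(1)

def fitness(bits):
    # toy fitness: count of 1s, standing in for a design made of good 'components'
    return sum(bits)

def mutate(bits, rate=0.02):
    # flip each bit independently with small probability
    return [b ^ (random.random() < rate) for b in bits]

def hill_climb(steps=2000):
    # stochastic hill climbing: mutation only, keep any non-worse variant
    current = [random.randint(0, 1) for _ in range(GENOME_LEN)]
    for _ in range(steps):
        candidate = mutate(current)
        if fitness(candidate) >= fitness(current):
            current = candidate
    return fitness(current)

def genetic_algorithm(pop_size=40, generations=50):
    # crossover 'swaps components' between parents, as well as mutating
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)
            children.append(mutate(a[:cut] + b[cut:]))  # one-point crossover
        pop = children
    return max(fitness(ind) for ind in pop)
```

On this trivial problem both methods succeed; the point is only the mechanism, that crossover lets part-solutions discovered in different individuals be combined in a single offspring.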

A core ‘problem’ for biological evolution is the complexity of the genotype–phenotype mapping.  Indeed in “The Selfish Gene” Dawkins attacks Lamarckism precisely on the grounds that the mapping is impossibly complex and hence cannot be inverted2.  In fact, Dawkins’ argument would also ‘disprove’ Darwinian natural selection, as that too depends on the mapping not being too complex.  If the genotype–phenotype mapping were as complex as Dawkins suggested, then small changes to genotypes as gene patterns would lead to arbitrary phenotypes, and so the fitness of parents would not be a predictor of the fitness of offspring.  In fact, while not simple to invert (as is necessary for Lamarckian inheritance), the mapping is simple enough for natural selection to work!

One of the complexities of the genotype–phenotype mapping in biology is that the genotype (our chromosomes) is far simpler (less information) than our phenotype (body shape, abilities etc.).  Also the production mechanism (a mother’s womb) is no more complex than the final product (the baby).  In contrast, for technology the genotype (plans, specifications, models, sketches) is of comparable complexity to the final product.  Furthermore the production means (factory, workshop) is often far more complex than the finished item (but not always: the skilled woodsman can make a huge variety of things using a simple machete, and there is interesting work on self-fabricating machines).

The complexity of the biological mapping is particularly problematic for the kind of combinatorial evolution that Arthur argues is so important for technological development.  In the world of technology, the schematic of a component is a component of the schematic of the whole — hierarchies of organisation are largely preserved between phenotype and genotype.  In contrast, genes that code for finger length are also likely to affect toe length, and maybe other characteristics as well.

As noted, sexual reproduction does help to some extent, as chromosome crossovers mean that some combinations of genes tend to be preserved through breeding, so ‘parts’ of the whole can develop and then be passed together to future generations.  If genes are on different chromosomes this process is a bit hit-and-miss, but there is evidence that genes that code for functionally related things (and are therefore good to breed together) end up close together on the same chromosome, and hence are more likely to be passed on as a unit.

In contrast, there is little hit-and-miss about technological ‘breeding’: if you want component A from machine X and component B from machine Y, you just take the relevant parts of the plans and put them together.

Of course, getting component A and component B to work together is another matter; typically some sort of adaptation or interfacing is needed.  In biological evolution this is extremely problematic: as Arthur says, “the structures of genetic evolution” mean that each step “must produce something viable” (NoT, p.188).  In contrast, the ability to ‘fix’ the details of a composition in technology means that combinations that are initially not viable can become so.

However, as noted at the end of the last section, this is due not just to the nature of technology, but also human ingenuity.

The crucial difference between biology and technology is human design.

Technological context and infrastructure

A factor that seems to be weak or missing in both Basalla’s and Arthur’s theories is the role of infrastructure and general technological and environmental context3.  This is highlighted by the development of the wheel.

The wheel and fire are often regarded as core human technologies, but whereas fire is near universal (indeed predates modern humans), the wheel was only developed in some cultures.  It has long annoyed me when the fact that South American civilisations did not develop the wheel is seen as some kind of lack or failure of the civilisation.  It has always seemed evident that the wheel was not developed everywhere simply because it is not always useful.

It was wonderful therefore to read Basalla’s detailed case study of the wheel (EoT, p.7–11), where he backs up with hard evidence what for me had always been a hunch.  I was aware that the Aztecs had wheeled toys even though they never used wheels for transport.  Basalla quite sensibly points out that this is reasonable given the terrain and the lack of suitable draught animals.  He also notes that between 300–700 AD wheels were abandoned in the Near East and North Africa — wheels are great if you have flat hard natural surfaces, or roads, but not so useful on steep broken hillsides, thick forest, or soft sandy deserts.

In some ways these combinations: wheels and roads, trains and rails, electrical goods and electricity generation, can be seen as a form of domain in Arthur’s sense, a “mutually supporting set” of technologies (NoT, p.71); indeed he does talk about the “canal world” (NoT, p.82).  However, he is clearly thinking more about the component technologies that make up a new artefact, and less about the set of technologies that need to surround a new technology to make it viable.

The mutual interdependence of infrastructure and related artefacts forms another positive feedback loop.  In fact, in his discussion of ‘lock-in’, Arthur does talk about the importance of “surrounding structures and organisations” as a constraint often blocking novel technology, and the way some technologies are only possible because of others (e.g. complex financial derivatives are only possible because of computation).  However, the best example is Basalla’s description of the development of the railroad vs. the canal in the American Mid-West (EoT, p.195–197).  This is often seen as simply the result of the superiority of the railway, but in the 1960s the historian Robert Fogel made a detailed economic comparison and found that there was no clear financial advantage; it is just that once one began to become dominant, the positive feedback effects made it the sole winner.
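Fogel’s finding, that dominance need not imply superiority, is easy to reproduce in a toy simulation of this kind of increasing-returns dynamic.  This is my own sketch, not a model from either book: two technologies of identical intrinsic merit, where each new adopter is slightly biased toward whichever has the larger installed base.

```python
import random

def adoption_race(rounds=1000, bonus=0.02, seed=0):
    """Each new adopter picks 'rail' or 'canal'; intrinsically a 50/50
    choice, but nudged in proportion to the current lead in adopters."""
    rng = random.Random(seed)
    counts = {"rail": 0, "canal": 0}
    for _ in range(rounds):
        lead = counts["rail"] - counts["canal"]
        # clamp the adoption probability so neither side is ever impossible
        p_rail = min(0.99, max(0.01, 0.5 + bonus * lead))
        choice = "rail" if rng.random() < p_rail else "canal"
        counts[choice] += 1
    return counts
```

Different random seeds lock in to different winners, but most runs end with one side holding nearly the whole market: early chance amplified by positive feedback, with no difference in underlying merit.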

Arthur’s compositional approach focuses particularly on hierarchical composition, but these infrastructures often cut across components: the hydraulics in a plane, the electrical system in a car, or Facebook’s ‘Open Graph’.  And of course one of the additional complexities of biology is that we have many such infrastructure systems in our own bodies: the blood stream, the nervous system, food and waste management.

It is interesting that the growth of the web was made possible by a technological context of the existing internet and home PC sales (which initially were not about internet use, even though now this is often the major reason for buying computational devices).  However, maybe the key technological context for the modern web is the credit card: it is online payments and shopping, or the potential for them, that have financed the spectacular growth of the area.  There would be no web without Berners-Lee, but equally none without Barclaycard.

  1. see my WebSci’11 paper for more on the ‘fallacy of optimality’[back]
  2. Why Dawkins chose to make such an attack on Lamarckism I’ve never understood, as no-one had believed in it as an explanation for nearly 100 years.  Strangely, it was very soon after “The Selfish Gene” was published that examples of Lamarckian evolution were discovered in simple organisms, and recently in higher animals, although in the latter through epigenetic (non-DNA) means.[back]
  3. Basalla does describe the importance of “environmental influences”, but is referring principally to the natural environment.[back]

book: Nightingale, Peter Dorward

Peter Dorward’s Nightingale is a truly beautiful tale, both in language and story.  Not beautiful in a pink ribbons sense, but with a harsh, sometimes almost brutal directness.  Dorward is a Scot, so perhaps the image of whisky is pertinent.  Certainly not a liquor like limoncello, strong beneath but covered over with sweetness, like aspects of the Italy Dorward portrays, but like a South-East Islay Malt, a smoke-tar flavour that almost makes you gag and yet all the richer for its lack of compromise.

Nightingale takes us into Italy’s “Years of Lead” (Anni di piombo), the period of political terrorism from left and right that left thousands dead, and in particular the 1980 railway station bombing in Bologna, which killed eighty-five people one hot holiday morning.  This is hardly an easy topic to deal with.  The jacket describes the novel as a ‘literary thriller’, but it is at heart about people: the almost comic, but bloody, naivete of political extremism, and the tenuous glory of love.

Although the central character in the novel is Scottish, and the protagonists include a German Baader-Meinhof acolyte and an Egyptian bartender, Italians and Italy form not just the backdrop, but permeate the pages of Nightingale.  Dorward describes Italy with sensitivity and straightforwardness, and I think loves the country and the people in the same way I have come to; yet he is aware of the dark undercurrents that often underlie the Formica-tabled pizzerias and high-fashion boutiques.

I recall a few years ago seeing flowers around a plaque on the wall, just opposite the entrance to the University of Rome “La Sapienza” in via Salaria.  I had been visiting occasionally for several years, but not noticed the plaque before.  I was told it was to commemorate a Professor of the University, Massimo D’Antona, who had been assassinated some years earlier (1999) for serving on a government committee looking into the reform of labour law.  In the UK it sometimes seems we have lost our passion, that politics and life end up in a lassitude and compromise, that we need some of the passion of the south.  And yet, this passion comes at a cost.

I came to Nightingale through reading Andrew Greig’s At the Loch of the Green Corrie.  The central part of Greig’s semi-biographical, semi-autobiographical book is his journey to fish at the loch of the title, accompanied by two close friends, brothers, one of whom was Peter.  One evening, camping beside another loch, in conversation oiled with whisky drunk from camping mugs, Peter shares his early ideas for a story.  He is a GP in London at the time, dabbling in writing, but yet to write a full novel.

I was captivated by this real story, of the man and his desires, and instantly reached for the internet to find him.  It was with so much joy that I saw he had written the novel, and was now an award-winning author (and still a doctor, but now in Scotland). Greig’s account had opened up such an intimacy with these brothers, so wonderful to see those nascent ideas, on that midge-plagued, peat-mattressed shoreline, bear fruit.

roots – how do we see ourselves spatially

I was just reading the chapter on Benedict Anderson in “Key Thinkers on Space and Place”1.  Anderson forged the concept of the nation as an ‘imagined community’: the way nations are as much, or more, a construct of socio-cultural imaginings than of physical topography or legal/political sovereignty.

However, this made me wonder whether this conception is itself culturally specific: to what extent do people relate to the nation as opposed to other areas?

I was reminded particularly of a conversation with the much missed Piero Mussio.  He explained to me the distinct nature of Italian cultural identity, which tends to focus on regional and local identity before national identity, partly because Italy itself is quite young as a nation state (a mere 150 years in a country which sees itself in terms of millennia).  There is even a word, “campanilismo”, literally relating to the “bell tower” (campanile) of a town: one’s primary loyalties lie to that bell tower, that town, that community.

How do you see yourself?  Are you British or Geordie, French or Parisian, American or New Yorker?

I know I see myself as ‘Welsh’.  Wales is part of Britain, but my Britishness is secondary to my Welshness.  I was born and brought up in Bangor Street, Roath Park, Cardiff, but again, while the street, area and city are foci of nostalgia, it is the Welshness that seems central.  Fiona sees herself as Cumbrian (rather than Wetheral, English or British); Steve, who is visiting, is British, but says his brother would say Scottish, despite both having spent equal amounts of time in Scotland whilst growing up and since.

I asked people on Twitter and got a variety of answers2, most quite broad:

“I always think English rather than British but I don’t have a more specific area to identify with.”

“I think I primarily think of myself as both “Brit” & “northerner”. Lancastrian when differentiating myself from Yorkshire lot!”

“in decreasing granularity I’m a Devoner (south, of course!), west country-er, English, British, European, World-ean.”

Some less clear:

“I’m confused spatially. I am Coloradan and American by birth, but feel more at home in England, and miss Scotland.”

“ooh, complicated. I’m British but not English. that’s as specific as I get.”

The last perhaps particularly interesting in its focus on what he is not!

Obviously the way we see ourselves varies.

The choice of a ‘level of granularity’ for location reminds me a little of the way in which we have some sort of typical level in a classification hierarchy (I think Lakoff writes about this); for example you can say “look at that bird”, but not “look at that mammal”; you have to say “look at that dog” or “look at that cat”.  This also varies culturally, including in subcultures such as dog breeders: saying “look at that dog” at Crufts would hardly sound natural.

Some cities have specific words to refer to their natives: Glaswegian, Geordie, Londoner; others do not – I was brought up in Cardiff, but ‘Cardiffian’ sounds odd.  Does the presence of a word (Cumbrian, Welsh) make you more likely to see yourself in those terms, or is it rather that, where cities have forged a strong sense of belonging, words naturally emerge … I sense a Sapir-Whorf moment!

Nowadays this is even more contested, as loyalties and identities can be part of networked communities that cut across national and topographical boundaries.  In some ways these new patterns of connection reinforce those who focus on human relations rather than physical space as defining countries and communities, but of course in far newer ways.

However, it also made me think of those parts of the world where there are large numbers of people with problematic statehood.  There is how we see ourselves and how states see us.  We tend to define democracy in terms of citizenship, and laud attempts, such as the Arab Spring, that give power to the people … but where ‘people’ means citizens.  In Bahrain the Shia majority are citizens and therefore their views should be considered in terms of democracy, whereas the migrant workers in Libya fleeing the rebels in the early days of the recent Libyan war, or the Palestinians in Kuwait during the first Gulf War, were not citizens and therefore marginalised.

Defining citizenship then becomes one of the most powerful methods of control.  This has been used to powerful effect in Estonia, leaving some who had lived in the country for fifty years effectively stateless; and, while not leaving people stateless, in the UK new rules for electoral registration could leave up to 10 million people, principally the young and the poor, voteless.

In the days of the nation state, those with loyalties not tied to geography have always been problematic: Gypsies, Jews before the establishment of Israel, the various Saharan nomadic tribes.  Many of these have been persecuted and continue to suffer across the world, and yet paradoxically in a networked world it seems possible that pan-national identity may one day become the norm.

  1. I’ve got the 1st edition, but a 2nd edition has recently come out.[back]
  2. Many thanks to those who Tweeted responses.[back]

book: The Unfolding of Language, Deutscher

I have previously read Guy Deutscher‘s “Through the Language Glass“, and have now, topsy turvy, read his earlier book “The Unfolding of Language“.  Both are about language: “The Unfolding of Language” about the development of the complexity of language we see today from simpler origins, and “Through the Language Glass” about the interaction between language and thought.  Both are full of sometimes witty and always fascinating examples drawn from languages around the world, from the Matses in the Amazon to Ancient Sumerian.

I recall that my own interest in the origins of language began young, as a seven-year-old over breakfast one day, asking whether ‘night’ was a contraction of ‘no light’.  While this was an etymological red herring, it is very much the kind of change that Deutscher documents in detail, showing the way a word accretes beginnings and endings through juxtaposition of simpler words followed by erosion of hard-to-pronounce sounds.

One of my favourite examples was the French “aujourd’hui”.  The word ‘hui’ was Old French for ‘today’, but was originally Latin “hoc die”, “(on) this day”.  Because ‘hui’ is not very emphatic it became “au jour d’hui”, “on the day of this day”, which contracted to the current ‘aujourd’hui’.  Except now, to add emphasis, some French speakers are starting to say “au jour d’aujourd’hui”, “on the day of the day of this day”!  This reminds me of Longsleddale in the Lake District (inspiration for Postman Pat‘s Greendale), a contraction of “long sled dale”, which literally means “long valley valley” from Old English “slaed” meaning “valley” … although I once even saw something suggesting that ‘long’ itself in the name was also “valley” in a different language!

Deutscher gives many more prosaic examples where words meaning ‘I’, ‘you’, ‘she’ get accreted to verbs to create the verb endings found in languages such as French, and how prepositions (themselves metaphorically derived from words like ‘back’) were merged with nouns to create the complex case endings of Latin.

However, the most complex edifice, which Deutscher returns to repeatedly, is that of the Semitic languages with a template system of vowels around three-consonant roots, where the vowel templates change the meaning of the root.  To illustrate he uses the (fictional!) root ‘sng’ meaning ‘to snog’ and discusses how first simple templates such as ‘snug’ (“I snogged”) and then more complex constructions such as ‘hitsunnag’ (“he was made to snog himself”) all arose from simple processes of combination, shortening and generalisation.
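At its core, this root-and-template system is an interleaving operation, which a few lines of Python can make concrete.  This is a toy of my own devising, using Deutscher’s invented root s-n-g; real Semitic morphology is of course far richer.

```python
def apply_template(root, template):
    """Fill a vowel template with a three-consonant root: the digits
    1, 2, 3 in the template stand for the root's consonants; all other
    characters are the template's own vowels and affixes."""
    return "".join(root[int(ch) - 1] if ch in "123" else ch
                   for ch in template)

print(apply_template("sng", "12u3"))       # snug: "I snogged"
print(apply_template("sng", "hit1u22a3"))  # hitsunnag: "he was made to snog himself"
```

The same template can then be reused with any other root, which is exactly what gives the system its expressive economy.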

“The Unfolding of Language” begins with the 19th century observation that all languages seem to be in a process of degeneration where more complex forms such as the Latin case system or early English verb endings are progressively simplified and reduced.  The linguists of the day saw all languages in a state of continuous decay from an early linguistic Golden Age.  Indeed one linguist, August Schleicher, suggested that there was a process where language develops until it is complex enough to get things done, and only then recorded history starts, after which the effort spent on language is instead spent in making history.

As with geology, or biological evolution, the modern linguist rejects this staged view of the past, looking instead to the Law of Uniformitarianism: things are as they have always been, so one can work out what must have happened in the pre-recorded past from what is happening now.  However, whilst generally finding this convincing, throughout the book I had a niggling feeling that there is a difference.  By definition, those languages for which we have written records are those of large developed civilisations, which moreover are based on writing.  Furthermore, I am aware that in biological evolution small isolated groups (e.g. on islands or cut off in valleys) are particularly important for introducing novelty into larger populations, and I assume the same would be true of languages, though somewhat stultified by mass communication.

Deutscher does deal with this briefly, but only right at the very end, in a short epilogue.  I feel there is a whole additional story about the interaction between culture and the grammatical development of language.  I recall a teacher in school explaining how in Latin feminine words tended to belong to the early period linked to agriculture and the land, masculine words to later interests in war and conquest, and neuter to the still later phase of civic and political development.  There were many exceptions, but even this modicum of order helped me to make sense of what otherwise seemed an arbitrary distinction.

The epilogue also mentions that the sole exception to the ‘decline’ in linguistic complexity is Arabic with its complex template system, still preserved today.

While reading the chapters about the three-letter roots, I was struck by the fact that both Hebrew and Arabic are written as consonants only, with vowels interpolated by diacritical marks or simply remembered convention (although Deutscher does not mention this himself).  I had always assumed that this was like English, where t’s pssble t rd txt wth n vwls t ll.  However, the vowels are far more critical for Semitic languages, where the vowel-less words could make the difference between “he did it” and “it will be done to him”.  Did this difference in writing stem from the root+template system, or vice versa, or maybe they simply mutually reinforced each other?

The other factor behind Arabic’s remarkable complexity must surely be the Quran.  Whereas the Bible was read for over a millennium in Latin, a non-spoken language, and later translations focused on the meaning, there is in contrast a great emphasis on the precise form of the Quran, together with continuous lengthy recitation.  Just as the King James Bible has been argued to be a significant influence on modern English since the 17th century, it seems likely the Quran has been a factor in preserving Arabic for nearly 1400 years.

Early in “The Unfolding of Language” Deutscher dismisses attempts to look at the even earlier prehistoric roots of language, as there is no direct evidence.  I assume this would include Mithen’s “The Singing Neanderthals“, which I posted about recently.  There is of course a lot of truth in this criticism; certainly Mithen’s account includes a lot of guesswork, albeit founded on palaeontological evidence.  However, Deutscher’s own arguments include extrapolating into recent prehistory.  These extrapolations are based on early written languages and subsequent recorded developments, but also include guesswork between the hard evidence, as does the whole family tree of languages.  Deutscher was originally a Cambridge mathematician, like me, so, perhaps unsurprisingly, I found his style of argument convincing.  However, given its foundations in Uniformitarianism, which, as noted above, is at best partial when moving from history to pre-history, there seems to be a continuum, rather than a sharp distinction, between the levels of interpretation and extrapolation in this book and Mithen’s.

Deutscher’s account seeks to fill in the gap between the deep prehistoric origins of protolanguage (what Deutscher calls ‘me Tarzan’ language) and its subsequent development in the era of media society (starting around 3000 BC with extensive Sumerian writing).  Rather than seeing these separately, I feel there is a rich account building across various authors, which will, in time, yield a more complete view of our current language and its past.

book: The Singing Neanderthals, Mithen

One of my birthday presents was Steven Mithen’s “The Singing Neanderthals” and, having been on holiday, I have already read it!  I read Mithen’s “The Prehistory of the Mind” some years ago and have referred to it repeatedly over the years1, so was excited to receive this book, and it has not disappointed.  I like his broad approach, taking evidence from a variety of sources as well as his own discipline of prehistory; in times when everyone claims to be cross-disciplinary, Mithen truly is.

“The Singing Neanderthals”, as its title suggests, is about the role of music in the evolutionary development of the modern human.  We all seem to be born with an element of music in our heart, and Mithen seeks to understand why this is so, and how music is related to, and part of the development of, language.  Mithen argues that elements of music developed in various later hominids as a form of primitive communication2, but separated from language in Homo sapiens, when music became specialised to the communication of emotion and language to more precise actions and concepts.

The book ‘explains’ various known musical facts, including the universality of music across cultures and the fact that most of us do not have perfect pitch … even though young babies do (p77).  The hard facts of how things were for humans or related species tens or hundreds of thousands of years ago are sparse, so there is inevitably an element of speculation in Mithen’s theories, but he shows how many otherwise disparate pieces of evidence from palaeontology, psychology and musicology make sense given the centrality of music.

Whether or not you accept Mithen’s thesis, the first part of the book provides a wide-ranging review of current knowledge about the human psychology of music.  Coincidentally, while reading the book, there was an article in the Independent reporting evidence for the importance of music therapy in dealing with depression and aiding the rehabilitation of stroke victims3, reinforcing messages from Mithen’s review.

The topic of “The Singing Neanderthals” is particularly close to my own heart, as my first personal forays into evolutionary psychology (long before I knew the term, or discovered Cosmides and Tooby’s work) were attempts to make sense of human limits to delays and rhythm.

Those who have been to my lectures on time since the mid-1990s will recall being asked first to clap in time and then to swing their legs ever faster … sometimes until they fall over!  The reason for this is to demonstrate that we cannot keep beats much slower than one per second4, and then to explain this in terms of our need for a mental ‘beat keeper’ for walking and running.  The leg swinging is to show how our legs, as simple pendulums, have a natural frequency of around 1Hz, thus determining our slowest walk and hence our need for rhythm.
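The 1Hz figure can be sanity-checked with the standard formula for a compound pendulum.  In the sketch below the leg is modelled as a uniform rod pivoted at the hip; the rod model and the 0.9m leg length are my own assumptions, not figures from the lecture.

```python
import math

def rod_pendulum_period(length_m, g=9.81):
    # period of a uniform rod swinging about one end: T = 2*pi*sqrt(2L/(3g))
    return 2 * math.pi * math.sqrt(2 * length_m / (3 * g))

T = rod_pendulum_period(0.9)   # roughly 1.55 s per full swing
steps_per_second = 2 / T       # each step is a half-swing: roughly 1.3 Hz
```

So a relaxed leg swings at around one step per second, consistent with the claim that our slowest comfortable walking pace, and hence our sense of a beat, sits near 1Hz.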

Mithen likewise points to walking and running as crucial in the development of rhythm, in particular the additional demands of bipedal motion (p150).  Rhythm, he argues, is not just about music, but is also a shared skill needed for turn-taking in conversation (p17) and for emotional bonding.

In just the last few weeks, at the HCI conference in Newcastle, I learnt that entrainment, keeping time with others, is a rare skill amongst animals, almost uniquely human.  Mithen also notes this (p206), with exceptions, in particular one species of frog, where the males gather in groups to sing/croak in synchrony.  One suggested reason for this is that the louder sound can attract females from a greater distance.  This cooperative behaviour of course acts against each frog’s own interest to ‘get the girl’, so they also seek to out-perform each other when a female frog arrives.  Mithen imagines that similar pressures may have sparked early hominid music making.  As well as the fact that synchrony makes the frogs louder and so easier to hear, I wonder whether the discerning female frogs also realise that if they go to a frog choir they get to choose amongst them, whereas if they follow a single frog croak they get stuck with the frog they find; a form of frog speed dating?

Mithen also suggests that the human ability to synchronise rhythm is about ‘boundary loss’: seeing oneself less as an individual and more as part of a group, important for early humans about to engage in risky collaborative hunting expeditions.  He cites evidence for this from the psychology of music and from anthropology, and it is part of many people’s personal experience, for example in a football crowd, or at the Last Night of the Proms.

This reminds me of the experiments where a rubber hand is touched in time with touches to a person’s real (hidden) hand; after a while the subject starts to feel as if the rubber hand were his or her own.  Effectively our brain assumes that this thing that correlates with feeling must be part of oneself5.  Maybe a similar thing happens in choral singing: I voluntarily make a sound and simultaneously everyone makes the sound, so it is as if the whole choir were an extension of my own body?

Part of the neurological evidence for the importance of group music making concerns the production of oxytocin.  Female prairie voles that have had oxytocin production inhibited engage in sex as freely as normal voles, but fail to pair bond (p217).  The implication is that oxytocin’s role in bonding applies equally to social groups.  While this explains a mechanism by which collaborative rhythmic activities create ‘boundary loss’, it doesn’t explain why oxytocin is created through rhythmic activity in the first place.  I wonder if this is perhaps to do with bipedalism and the need for synchronised movement during face-to-face copulation, which would explain why humans can do synchronised rhythms whereas apes cannot.  That is, rhythmic movement and oxytocin production became associated for sexual reasons and then this generalised to the social domain.  Think again of that chanting football crowd?

I should note that Mithen also discusses at length the use of music in bonding with infants, as anyone who has sung to a baby knows, so this offers an alternative route from rhythm to bonding … but not one that is particular to humans, so I will stick with my hypothesis 😉

Sexual selection is a strong theme in the book, the kind of runaway selection that leads to the peacock’s tail.  Changing lifestyles of early humans, in particular longer periods looking after immature young, led to a greater degree of female control in the selection of partners.  As human size came close to the physical limits of the environment (p185), Mithen suggests that other qualities had to be used by females to choose their mates, notably male singing and dance – prehistoric Saturday Night Fever.

As one piece of evidence for female mate choice, Mithen points to the overly symmetric nature of hand axes and imagines hopeful males demonstrating their dexterity by knapping ever more perfect axes in front of admiring females (p188).  However, this brings to mind Calvin’s “Ascent of Mind“, which argues that these symmetric, ovoid axes were used like a discus, thrown into the midst of a herd of prey to bring one down.  The two theories of axe shape are not incompatible.  Calvin suggests that the complex physical coordination required by axe throwing would have driven general brain development.  In fact these forms of coordination are not so far from those needed for musical movement, and indeed expert flint knapping, so maybe it was these skills that were demonstrated by the shaping of axes beyond what was immediately necessary for their purpose.

Mithen’s description of the musical nature of mother–infant interactions also brought to mind Bromhall’s “Eternal Child“. Bromhall’s central thesis is that humans are effectively in a sort of arrested development, with many features, not least our near nakedness, characteristic of infants. Although it is not one of the points Bromhall makes, his arguments made sense to me in terms of the mental flexibility that characterises childhood, and the way this is necessary for advanced human innovation; I am always encouraging students to think in a more childlike way. If Bromhall’s theories are correct, this would help explain how some of the music making more characteristic of mother–infant interactions became generalised to adult social interactions.

I do notice an element of mutual debunking amongst those writing about the richer cognitive aspects of early human and hominid development. I guess this is a common trait in disciplines where evidence is thin and theories have to fill a lot of blanks. So maybe Mithen, Calvin and Bromhall would not welcome me bringing their respective contributions together! However, as in other areas where data is necessarily scant (such as sub-atomic physics), one does sense a developing level of methodological rigour, and the fact that these quite different theoretical approaches have points of connection suggests that a deeper understanding of early human cognition, while not yet definitive, is emerging.

In summary, and as part of this wider unfolding story, “The Singing Neanderthal” is an engaging and entertaining book to read whether you are interested in the psychological and social impact of music itself, or the development of the human mind.

… and I have another of Mithen’s books in the birthday pile, so I am looking forward to that too!

  1. See particularly my essay on the role of imagination in bringing together our different forms of ‘specialised intelligence’. “The Prehistory of the Mind” highlighted the importance of this ‘cognitive fluidity’, linking social, natural and technological thought, but lays this largely in the realm of language. I would suggest that imagination also has this role, creating a sort of ‘virtual world’ on which different specialised cognitive modules can act (see “imagination and rationality“).[back]
  2. He calls this musical communication system Hmmmm in its early form – Holistic, Multiple-Modal, Manipulative and Musical, p138 – and later Hmmmmm – Holistic, Multiple-Modal, Manipulative, Musical and Mimetic, p221.[back]
  3. “NHS urged to pay for music therapy to cure depression“, Nina Lakhani, The Independent, Monday, 1 August 2011[back]
  4. Professional conductors say 40 beats per minute is the slowest reliable beat without counting between beats.[back]
  5. See also my previous essay on “driving as a cyborg experience“.[back]

book: The Laws of Simplicity, Maeda

Yesterday I started to read John Maeda’s “The Laws of Simplicity” whilst sitting by Fiona’s stall at the annual Tiree agricultural show, then finished it before breakfast today.  Maeda describes his decision to cap the book at 100 pages1 as making it something that could be read during a lunch break. To be honest, 30,000 words sounds like a very long lunch break or a very fast reader, but true to his third law, “savings in time feel like simplicity”2, it is a short read.

The shortness is a boon that I wish many writers would follow (including me). As with so many single-issue books (e.g. Blink), there is a slight tendency to over-sell the main argument, but this is forgivable in a short, delightful book in a way that it isn’t in 350 pages of less graceful prose.

I know I have a tendency, which can be confusing or annoying, to give the caveat before the main point, paradoxically for fear of misunderstanding. Still, despite knowing this, in the early chapters I did find myself bristling at Maeda’s occasional overstatement (although, in accordance with simplicity, never hyperbole).

One that particularly caught my eye was Maeda’s contrast of the MIT engineer’s RTFM (Read The F*cking Manual) with the “designer’s approach” to:

marry function with form to create intuitive experiences that we understand immediately.

Although in principle I agree with the overall spirit, and am constantly chided by Fiona for not reading instructions3, the misguided idea that everything ought to be ‘pick up and use’ has bedevilled HCI and user-interface design for at least the past 20 years. Indeed this is the core misconception about Heidegger’s hammer example that I argued against in a previous post, “Struggling with Heidegger“. In my own reading notes, my comment is “simple or simplistic!” … and I meant the statement, not the resulting interfaces, although it could apply to both.

It has always been hard to get well-written documentation, and the combination of single-page ‘getting started’ guides with web-based help, which often disappears when the website organisation changes, is an abdication of responsibility by many designers. Not that I am good at this myself. Good documentation is hard work. It used to be the coders who failed to produce documentation, but now the designers also fall into this trap of laziness, which might be euphemistically labelled ‘simplicity’4.

Personally, I have found that the discipline of documenting (on the few occasions I have observed it!) is in fact a great driver of simple design. Indeed I recall a colleague, maybe Harold Thimbleby5, once suggesting that documentation ought to be written before any code, precisely to ensure simple use.

Some years ago I was reading a manual (for a Unix workstation, so quite a few years ago!) that described a potentially disastrous shortcoming of the disk sync command (one which could have corrupted the disk). Helpfully, the manual page included a suggestion of how to wrap sync in scripts that prevented the problem. This seemed to add insult to injury: they knew there was a serious problem, they knew how to fix it … and they didn’t do it. Of course, the reason is that manuals are written by technical writers after the code is frozen.
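I no longer have the manual, so the wrapper below is purely a hypothetical sketch of the kind of thing it suggested: a script that users call instead of the bare command, here using the old Unix folklore of repeating sync so that the writes scheduled by the first call have completed before control returns.

```shell
#!/bin/sh
# Hypothetical wrapper of the sort the manual might have suggested:
# never run the raw sync command; always go through this script.
safe_sync() {
    sync   # schedule buffered writes to disk
    sync   # folklore: repeated calls don't return until earlier flushes finish
    sync
}

safe_sync && echo "buffers flushed safely"
```

The point of the anecdote stands either way: if the workaround is simple enough to print in a manual page, it is simple enough to build into the command itself.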

In contrast, I was recently documenting an experimental API6 so that a colleague could use it. As I wrote the documentation I found parts hard to explain. “It would be easier to change the code”, I thought, so I did so. The API, whilst still experimental, is now a lot cleaner and simpler.

Coming back to Maeda after a somewhat long digression (what was that about simplicity and brevity?). While I prickled slightly at a few statements, in fact he very clearly says that the first few concrete ‘laws’ are the simplest (and, if taken on their own, simplistic); the later laws are far more nuanced and suggest deeper principles. These include law 5, “differences: simplicity and complexity need each other”, which suggests that one should strive for a dynamic between simplicity and complexity. This echoes the emphasis on texture I often advocate when talking with students: whether in writing, presenting or experience design, it is often the changes in voice, visual appearance or style that give life.

[Image: Unix command line prompt – the simplest interface?]

I wasn’t convinced by Maeda’s early claim that simple designs are simpler and cheaper to construct.  This is possibly true for physical products, but rarely so for digital interfaces, where more effort is typically needed in code to create simpler user interfaces.  However, again this was something that was revisited later, especially in the context of more computationally active systems (law 8, “in simplicity we trust”), where he contrasts “how much do you need to know about a system?” with “how much does the system know about you?”.  The former is the case for more traditional passive systems, whereas more ‘intelligent’ systems such as Amazon recommendations (or even the Facebook news feed) favour the latter.  This is very similar to the principles of incidental and low-intention interaction that I have discussed in the past7.

Finally “The Laws of Simplicity” is beautifully designed in itself.  It includes  many gems not least those arising from Maeda’s roots in Japanese design culture, including aichaku, the “sense of attachment one can feel for an artefact” (p.69) and omakase meaning “I leave it to you”, which asks the sushi chef to create a meal especially for you (p.76).  I am perhaps too much of a controller to feel totally comfortable with the latter, but Maeda’s book certainly inspires the former.

  1. In fact there are 108 pages in the main text, but 9 of these are full page ‘law/chapter’ frontispieces, so 99 real pages.  However, if you include the 8 page introduction that gives 107 … so even the 100 page cap is perhaps a more subtle concept than a strict count.[back]
  2. See his full 10 laws of simplicity at[back]
  3. My guess is that the MIT engineers didn’t read the manuals either.[back]
  4. Apple is a great — read poor — example here as it relies on keen technofreaks to tell others about the various hidden ways to do things — I guess creating a Gnostic air to the devices.[back]
  5. Certainly Harold was a great proponent of ‘live’ documentation, both Knuth’s literate programming and also documentation that incorporated calculated input and output, rather like dexy, which I reported after last autumn’s Web Art/Science camp.[back]
  6. In fairness, the API had been thrown together in haste for my own use.[back]
  7. See “incidental interaction” and HCI book chapter 18.[back]

Or … is Amazon becoming the publishing industry?

A recent Blog Kindle post asked “Is Amazon’s Kindle Destroying the Publishing Industry?“.  The post defends the Kindle, seeing traditional publishers as reactionaries whose business model depends on paper publishing and, effectively, on keeping authors from their public.

However, as an author myself (albeit an academic one), this seems to completely miss the reasons for the publishing industry.  The printing of physical volumes has long been a minimal part of the value; indeed, traditional publishers have made good use of changes in the physical print industry to outsource actual production.  The core values for the author are the things around this: marketing, distribution and payment management.

Of these, distribution is of course much easier now with the web, whether delivering electronic copies or physical copies via print-on-demand services.  However, the other core values persist – at their best, publishers do not ring-fence the public from the author, but on the contrary connect the two.

I recall as a child being in the Puffin Club and receiving the monthly magazine.  I could not afford many books at the time, but since then have read many of the books described in its pages, and I recall the excitement of reading those reviews.  A friend has a collection of the early Puffins (1–200) in their original covers; although some stories age (some better, some worse), just being a Puffin Book was still a pretty good indication it was worth reading.

The myth we are being peddled is of a dis-intermediated, networked world where customers connect directly to suppliers, authors to readers1, musicians to fans.  For me, this has some truth: I am well enough known and well enough connected to distribute effectively.  However, for most, that ‘direct’ connection is mediated by one of a small number of global sites … and a smaller number of companies: YouTube, Twitter, Google, iTunes, eBay, not to forget Amazon.

For publishing as in other areas, what matters is not physical production, the paper, but the route, the connection, the channel.

And crucially Kindle is not just the device, but the channel.

The issue is not whether Kindle kills the publishing industry, but whether Amazon becomes the publishing industry.  Furthermore, if Amazon’s standard markdown and distribution deals for small publishers are anything to go by, Amazon is hardly going to be a cuddly home for future authors.

To some extent this is an apparently inexorable path that has already played out in the traditional industries: a few large publishing conglomerates buying up the smaller publishing houses, and, on the high street, a few large bookstore chains such as Waterstones and Barnes & Noble squeezing out the small bookshops (remember “You’ve Got Mail“?).  Given this history, it is hard to have sympathy with Waterstones’ recent financial problems.

Philip Jones of the Bookseller recently blogged about these changes, noting that it is in fact bookselling, not publishing, that is struggling with profits … even at Amazon – no wonder Amazon wants more of the publishing action.  However, while Jones notes that “digital will lead to smaller book chains, stocking fewer titles”, in fact “It wasn’t digital that drove this, but it is about to deliver the coup de grâce.”

Which does seem a depressing vision both as author and reader.

  1. Maybe is actually doing this – see Guardian article, although it sounds more useful to the already successful writer than the new author.[back]