Status Code 451 and the burning of books

I was really pleased to see that Alessio Malizia has just started to blog.  An early entry is a link to a Guardian article about Tim Bray's suggestion for a new status code of 451 when a site is blocked for legal reasons.

Bray's tongue-in-cheek suggestion both honours Ray Bradbury, the author of Fahrenheit 451, and satirises the censorship implicit in IP blocking, such as the UK High Court decision in April to force ISPs to block Pirate Bay.

However, I have a feeling that perhaps the satire could be seen, so to speak, as on the other foot.

Fahrenheit 451 is about a future where books are burnt because they have increasingly been regarded as meaningless by a public focused on quick-fix entertainment and mindless media: censorship more the result than the cause of societal malaise.

Just as Huxley's Brave New World seemed to sneak up on us until science fiction was everyday life, maybe Bradbury's world is here, with the web itself not the least force in the dissolution of intellectual life.

Bradbury foresaw ‘firemen’ who burnt the forbidden books, following in a long history of biblioclasts from the destruction of the Royal Library of Ashurbanipal at Nineveh to Nazi book burnings in the 1930s.  However, today it is the availability of information on the internet which is often used as an excuse for the closure of libraries, and publishers foresee the end of paper publication in the next five years.

Paradoxically, it is the rearguard actions of publishers (albeit largely to protect profit not principle) that are one of the drivers behind IP blocking and ‘censorship’ of copyright piracy sites.  If I were to assign roles from Fahrenheit 451 to the current-day protagonists, it would be hard to decide which is more like the book-burning firemen.

Maybe Fahrenheit 451 has happened and we never noticed.

Tiree going mobile

Tiree’s Historical Centre An Iodhlann has just been awarded funding by the Scottish Digital Research and Development Fund for Arts and Culture to make historic archive material available through a mobile application whilst ‘on the ground’ walking, cycling or driving around the island.

I've been involved in bigger projects, but I can't recall being more excited than I am about this one: I think partly because it brings together academic interests and the local community.

the project

An Iodhlann (Gaelic for a stackyard) is the historical centre on the island of Tiree.  Tiree has a rich history from the Mesolithic period to the Second World War base. The archive was established in 1998, and its collection of old letters, emigrant lists, maps, photographs, stories and songs now extends to 12,000 items.  500 items are available online, but the rest of the primary data is only available at the centre itself.  A database of 3200 island place names collated by Dr Holliday, the chair of An Iodhlann, has recently been made available on the web at tireeplacenames.org.  Given the size of the island (~750 permanent residents) this is a remarkable asset.


To date, the online access at An Iodhlann is mainly targeted at archival / historical use, although the centre itself has a more visitor-centred exhibition.  However, the existing digital content has the potential to be used for a wider range of applications, particularly to enhance the island experience for visitors.

Over the next nine months we will create a mobile application allowing visitors and local historians to access geographically pertinent information, including old photographs, and interpretative maps/diagrams, while actually at sites of interest.  This will largely use visitors’ own devices such as smart phones and tablets.  Maps will be central to the application, using both OS OpenData and bespoke local maps and sketches of historical sites.

As well as adding an extra service for those who already visit An Iodhlann, we hope that this will attract new users, especially younger tourists.  In addition a ‘data layer’ using elements of semantic web technology will mean that the raw geo-coded information is available for third parties to mash-up and for digital humanities research.

the mouse that roars

The Scottish Digital Research and Development Fund for Arts and Culture is run by Nesta, Creative Scotland and the Arts and Humanities Research Council (AHRC).

This was a highly competitive process with 52 applications, of which just 6 were funded.  The other successful organisations are: The National Piping Centre; Lyceum Theatre Company and the Edinburgh Cultural Quarter; Dundee Contemporary Arts; National Galleries of Scotland; Glasgow Film Theatre and Edinburgh Filmhouse.  These are all big-city organisations, as were the projects funded by an earlier similar programme run by Nesta in England.

Ours is the only rural-based project: a great achievement for Tiree and a great challenge for us over the next nine months!

challenges

In areas of denser population or high overall tourist numbers, historical or natural sites attract sufficient visitors to justify full-time (volunteer or paid) staff.  More remote rural locations or small islands have neither sufficient people to provide volunteer cover for all, or even a significant number, of sites, nor sufficient tourist volume to justify commercial visitor centres.

A recent example of this on Tiree is the closing of the Thatched Cottage Museum.  This is one of the few remaining thatched houses on the island, and housed a collection of everyday historical artefacts.  It was owned by the Hebridean Trust and staffed by local volunteers, but was recently closed and the building sold, as it proved difficult to keep it staffed sufficiently given the visitor numbers.

At some remote sites such as the Tiree chapels, dating back to the 10th century, or Iron Age hill forts, there are simple information boards and at a few locations there are also fixed indoor displays, including at An Iodhlann itself.  However, there are practical and aesthetic limits on the amount of large-scale external signage and limits on the ongoing running and maintenance of indoor exhibits.  Furthermore, limited mobile signals mean that any mobile-based solutions cannot assume continuous access.

from challenge to experience

Providing information on visitors’ own phones or tablets will address some of the problems of lack of signage and human guides.  However, achieving this without effective mobile coverage means that simple web-based solutions will not work.

The application used whilst on the ground will need to be downloaded, but this limits the total amount of information that is available whilst mobile.  Our first app will be built using HTML5 to ensure it is available on the widest range of mobile devices (iOS, Android, Windows Mobile, ordinary laptops), but using HTML5 further reduces the local storage available1.
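As a rough sketch of how this storage budget might be handled (the item names, sizes and the `split_for_offline` helper below are all invented for illustration, not the project's actual design), one could bundle the smallest geo-coded assets offline and leave the rest on the web site:

```python
# Hypothetical sketch: fitting archive media into the ~5MB HTML5
# local storage budget.  All names and sizes are invented.
STORAGE_BUDGET = 5 * 1024 * 1024  # bytes

items = [
    ("site map tiles", 1_800_000),
    ("place-name text", 250_000),
    ("thumbnail photos", 2_200_000),
    ("full-resolution photos", 40_000_000),
    ("audio recordings", 15_000_000),
]

def split_for_offline(items, budget):
    """Greedily bundle the smallest items first; the rest stay online-only."""
    offline, online, used = [], [], 0
    for name, size in sorted(items, key=lambda item: item[1]):
        if used + size <= budget:
            offline.append(name)
            used += size
        else:
            online.append(name)
    return offline, online

offline, online = split_for_offline(items, STORAGE_BUDGET)
print(offline)  # small assets fit in the app bundle
print(online)   # large media stay on the pre/post-trip web site
```

Greedy smallest-first is just one possible policy; a real app would presumably weight items by the sites a visitor actually plans to see.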

In order to deal with this, the on-the-ground experience will be combined with a web site allowing pre-trip planning and post-trip reminiscence.  This will also be map-focused, allowing visitors to see where they have been or are about to go, and to access additional resources, such as photos and audio files, that are too large to be available on the ground (remembering the poor mobile coverage). It may also offer an opportunity to view social content, including comments or photographs of previous visitors, and then to associate one's own photographs taken during the day with the different sites and create a personal diary, which can be shared with others.

On reflection, this focus on preparation and reminiscence will create a richer and more extended experience than simply providing information on demand.  Rather than reading reams of on-screen text whilst looking at a monument, or attempting to hear an audio recording in the Tiree wind, visitors will have some information available in the field and more when they return to their holiday base, or home2.

 

  1. For some reason HTML5 applications are restricted to a maximum of 5Mb![back]
  2. This is another example of a lesson I have seen so many times before: the power of constraints to force more innovative and better designs. So many times I have heard people say about their own designs “I wanted to make X, but couldn’t for some reason so did Y instead” and almost every time it is the latter, the resource-constrained design, that is clearly so much better.[back]

September beckons: calls for Physicality and Alt-HCI

I'm co-chairing a couple of events, both with calls due in mid June: Physicality 2012, and Alt-HCI.   Both are associated with HCI 2012 in Birmingham in September, so you don't have to choose!

Physicality 2012 – 4th International Workshop on Physicality

(Sept. 11, co-located with HCI 2012)

Long awaited, the 4th in the Physicality workshop series exploring design challenges, theories and experiences in developing new forms of interactions that exploit human physical interaction with digital technology.

Position papers and research papers due 18th June.

see:

Alt-HCI

(track of HCI 2012, 12-15 Sept 2012)

A chance to present and engage with work that pushes the boundaries of HCI.  Do you investigate methods for inducing negative user experience, or for not getting things done (or is that Facebook)?  Maybe you would like to argue for the importance of Taylorism within HCI, or explore user interfaces for the neonate.

Papers due 15th June with an open review process in the weeks following.

see: HCI 2012 call for participation  (also HCI short papers and work-in-progress due 15th June: )

books: The Nature of Technology (Arthur) and The Evolution of Technology (Basalla)

I have just finished reading “The Nature of Technology” (NoT) by W. Brian Arthur and some time ago read “The Evolution of Technology” (EoT) by George Basalla, both covering a similar topic, the way technology has developed from the earliest technology (stone axes and the wheel), to current digital technology.  Indeed, I’m sure Arthur would have liked to call his book “The Evolution of Technology” if Basalla had not already taken that title!

We all live in a world dominated by technology, and so the issue of how technology develops is critical to us all.   Does technology ultimately serve human needs, or does it have its own dynamics independent of us except maybe as cogs in its wheels?  Is the arc of technology inevitable, or do human creativity and invention drive it in new directions? Is the development of technology now similar to (albeit a bit faster than) that of previous generations, or does digital technology fundamentally alter things?

Basalla was published in 1988, while Arthur is 2009, so Arthur has 20 years more to work on: not much compared to 2 million years for the stone axe and 5000 years for the wheel, but 20 years that included the dot.com boom (and bust!) and the growth of the internet.  In a footnote (NoT,p.17), Arthur describes Basalla as “the most complete theory to date“, although he then does not appear to directly reference Basalla again in the text – maybe because they have different styles, Basalla (a historian of technology) offering a more descriptive narrative whilst Arthur (an engineer and economist) seeks a more analytically complete account. However, I also suspect that Arthur discovered Basalla's work late and included a ‘token’ reference; he says that a “theory of technology — an “ology” of technology” is missing (NoT,p.14), but, however partial, Basalla's account cannot be seen as other than part of such a theory.

Both authors draw heavily, both explicitly and implicitly, on Darwinian analogies, but both also emphasise the differences between biological and technological evolution. Neither is happy with what Basalla calls the “heroic theory of invention”, where “inventions emerge in a fully developed state from the minds of gifted inventors” (EoT,p.20).  Both offer numerous case studies which refute these more ‘heroic’ accounts, for example Watt's invention of the steam engine after seeing a kettle lid rattling on the fire, and show how inventions are always built on earlier technologies and knowledge.  Arthur is more complete in eschewing explanations that depend on human ingenuity, and therein, to my mind, lies the weakness of his account.  However, Arthur does take into account, as a central mechanism, the accretion of technological complexity through the assembly of components, all but absent from Basalla's account — indeed in my notes as I read Basalla I wrote “B is focused on components in isolation, forgets implication of combinations“.

I’ll describe the main arguments of each book, then look at what a more complete picture might look like.

(Note, very long post!)

Basalla: the evolution of technology

Basalla describes his theory of technological evolution in terms of four concepts:

  1. diversity of artefacts — acknowledging the wide variety both of different kinds of things and of variations of the same thing — one example, dear to my heart, is his images of different kinds of hammers 🙂
  2. continuity of development — new artefacts are based on existing artefacts with small variations, there is rarely sudden change
  3. novelty — introduced by people and influenced by a wide variety of psychological, social and economic factors … not least playfulness!
  4. selection — winnowing out the less useful/efficient artefacts, and again influenced by a wide variety of human and technological factors

Basalla sets himself apart both from earlier historians of technology (Gilfillan and Ogburn), who took an entirely continuous view of development, and from the “myths of the heroic inventors”, which saw technological change as dominated by discontinuous change.

He is a historian and his accounts of the development of artefacts are detailed and beautifully crafted.  He takes great efforts to show how standard stories of heroic invention, such as the steam engine, can be seen much more sensibly in terms of slower evolution.  In the case of steam, the basic principles had given rise to Newcomen's steam pump some 60 years prior to Watt's first steam engine.  However, whilst each of these stories emphasises the role of continuity, as I read them I was struck also by the role of human ingenuity.  If Newcomen's engine had been around since 1712, why did the development of a new and far more successful form take 60 years? The answer is surely the ingenuity of James Watt.  Newton said he saw further only because he stood on the shoulders of giants, and yet is no less a genius for that.  Similarly, the tales of invention seem to be both ones of continuity and ones often enabled by insight.

In fact, Basalla does take this human role on board, building on Usher's earlier work, which placed insight centrally in accounts of continuous change.  This is particularly central in his account of the origins of novelty, where he considers a rich set of factors that influence the creation of true novelty.  This includes both individual factors such as playfulness and fantasy, and also social/cultural factors such as migration and the patent system.  It is interesting, however, that when he turns to selection, it is lumpen factors that are dominant: economic, military, social and cultural.  This brings to mind Margaret Boden's H-creativity and also Csikszentmihalyi's cultural views of creativity — basically something is only truly creative (or maybe innovative) when it is recognised as such by society (discuss!).

Arthur: the nature of technology

Basalla ends his book confessing that he is not happy with the account of novelty as provided from historical, psychological and social perspectives.  Arthur's single reference to Basalla (endnote, NoT,p.17) picks up precisely this gap, quoting Basalla's “inability to account fully for the emergence of novel artefacts” (EoT,p.210).  Arthur seeks to fill this gap in previous work by focusing on the way artefacts are made of components, novelty arising through the hierarchical organisation and reorganisation of these components, ultimately built upon natural phenomena.  In language reminiscent of proponents of ‘computational thinking‘, Arthur talks of a technology being the “programming of phenomena for our purposes” (NoT,p.51). Although not directly on this point, I particularly liked Arthur's quotation from Charles Babbage, “I wish to God this calculation had been executed by steam” (NoT,p.74), but did wonder whether Arthur's computational analogy for technology is as constrained by the current digital perspective as Babbage's was by the age of steam.

Although I'm not entirely convinced of the completeness of hierarchical composition as an explanation, it is certainly a powerful mechanism.  Indeed Arthur views this ‘combinatorial evolution’ as the key difference between biological and technological evolution. This assertion of the importance of components is supported by computer simulation studies as well as historical analysis. However, this is not the only key insight in Arthur's work.

Arthur emphasises the role of what he calls ‘domains’, in his words a “constellation of technologies” forming a “mutually supporting set” (NoT,p.71).  These are clusters of technologies/ideas/knowledge that share some common principle, such as ‘radio electronics’ or ‘steam power’.  Their importance is such that he asserts that “design in engineering begins by choosing a domain” and that the “domain forms a language” within which a particular design is an ‘utterance’.  However, domains themselves evolve, spawned from existing domains or natural phenomena, maturing, and sometimes dying away (like steam power).

The mutual dependence of technologies can lead to these domains suddenly developing very rapidly, and this is one of the key mechanisms to which Arthur attributes more revolutionary change in technology.  Positive feedback effects are well studied in cybernetics and are one of the key mechanisms in chaos and catastrophe theory, which became popularised in the late 1970s.  However, Arthur is rare in fully appreciating the potential for these effects to give rise to sudden and apparently random changes.  It is often assumed that evolutionary mechanisms give rise to ‘optimal’ or well-fitted results.  In other areas too, you see what I have called the ‘fallacy of optimality’1; for example, in cognitive psychology it is often assumed that given sufficient practice people will learn to do things ‘optimally’ in terms of mental and physical effort.

human creativity and ingenuity

Arthur's account is clearly more advanced than the earlier, more gradualist ones, but I feel that in pursuing the evolution of technology based on its own internal dynamics, he underplays the human element of the story.   Arthur even goes so far as to describe technology using Maturana's term autopoietic (NoT,p.170) — something that is self-(re)producing, self-sustaining … indeed, in some sense with a life of its own.

However, he struggles with the implications of this.  If technology responds to “its own needs” rather than human needs, and “instead of fitting itself to the world, fits the world to itself” (NoT,p.214), does that mean we live with, or even within, a Frankenstein's monster that cares as little for the individuals of humanity as we do for our individually shed skin cells?  Because of positive feedback effects, technology is not deterministic; however, it is rudderless, cutting its own wake, not ours.

In fact, Arthur ends his book on a positive note:

“Where technology separates us from these (challenge, meaning, purpose, nature) it brings a type of death. But where it affirms these, it affirms life. It affirms our humanness.” (NoT,p.216)

However, there is nothing in his argument to admit any of this hope; it is more a forlorn hope against hope.

Maybe Arthur should have ended his account at its logical end.  If we should expect nothing from technology, then maybe it is better to know it.  I recall as a ten-year-old child wondering just these same things about the arc of history: do individuals matter?  Would the Third Reich have grown anyway without Hitler, and would Britain have survived without Churchill?  Did I have any place in shaping the world in which I was to live?  Many years later, as I began to read philosophy, I discovered these were questions that had been asked before, with opposing views, but no definitive empirical answer.

In fact, for technological development, just as for political development, things are probably far more mixed, and reconciling Basalla's and Arthur's accounts might suggest that there is space both for Arthur's hope and for human input into technological evolution.

Recall there were two main places where Basalla placed human input (individual and social/cultural): novelty and selection.

The crucial role of selection in Darwinian theory is evident in its eponymous role: “Natural Selection”.    In Darwinian accounts, this is driven by the breeding success of individuals in their niche, and certainly the internal dynamics of technology (efficiency, reliability, cost effectiveness, etc.) are one aspect of technological selection.  However, as Basalla describes in greater detail, there are many human aspects to this as well, from the multiple individual consumer choices within a free market to government legislation, for example regulating genome research or establishing emissions limits for cars. This suggests a relationship with technology less like that with an independently evolving wild beast and more like that of the farmer artificially selecting the best specimens.

Returning to the issue of novelty: as I've noted, even Basalla seems to underplay human ingenuity in the stories of particular technologies, and Arthur even more so.  Arthur attempts to account for “the appearance of radically novel technologies” (NoT,p.17) through the composition of components.

One example of this is the ‘invention’ of the cyclotron by Ernest Lawrence (NoT,p.114).  Lawrence knew of two pieces of previous work: (i) Rolf Wideröe's idea to accelerate particles using AC current down a series of (very) long tubes, and (ii) the fact that magnetic fields can make charged particles swing round in circles.  He put the two together and thereby made the cyclotron: AC currents sending particles ever faster round a circular tube.  Lawrence's first cyclotron was just a few feet across; now, at CERN and elsewhere, they are many miles in diameter, but the principle is the same.

Arthur's take-home message from this is that the cyclotron did not spring ready-formed and whole from Lawrence's imagination, like Athena from Zeus' head.  Instead, it was the composition of existing parts.  However, the way in which these individual concepts or components fitted together was far from obvious.  In many of the case studies the component technology or basic natural phenomena had been around and understood for many years before they were linked together.  In each case study the vital key in putting together the disparate elements seems to be the human one — heroic inventors after all 🙂

Some aspects of this invention are not specifically linked to composition: experimentation and trial-and-error, which effectively try out things in the lab rather than in the marketplace; the inventor's imagination of fresh possibilities and their likely success, effectively trial-and-error in the head; and certainly the body of knowledge (the domains in Arthur's terms) on which the inventor can draw.

However, the focus on components and composition does offer additional understanding of how these ‘breakthroughs’ take place.  Randomly mixing components is unlikely to yield effective solutions.  Human inventors' understanding of the existing component technologies allows them to spot potentially viable combinations, and perhaps even more importantly their ability to analyse the problems that arise allows them to ‘fix’ the design.

In my own work in creativity I often talk about crocophants, the fact that arbitrarily putting two things together, even if each is good in its own right, is unlikely to lead to a good combination.  However, by deeply understanding each, and why they fit their respective environments, one is able to intelligently combine things to create novelty.

Darwinism and technology

Both Arthur and Basalla are looking for modified versions of Darwinism to understand technological evolution.  For Arthur it is the way in which technology builds upon components with ‘combinatorial evolution’.  While pointing to examples in biology, he remarks that “the creation of these larger combined structures is rarer in biological evolution — much rarer — than in technological evolution” (NoT,p.188).  Strangely, this is precisely the power of sexual reproduction over simple mutation: it allows the ‘construction’ and ‘swapping’ of components; this is why artificial evolutionary algorithms often outperform simple mutation (a form of stochastic hill-climbing, itself usually better than deterministic hill climbing). However, technological component combination is not the same as biological.
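A toy example (my own illustration, not from either book) of why crossover amounts to ‘swapping components’: two parents, each strong on a different half of a bitstring problem, combine their good halves in a single step, something pure point mutation would need many lucky steps to achieve.

```python
# Toy illustration of combinatorial 'component swapping' via crossover.
TARGET = "1111111111"

def fitness(genome):
    """Number of positions matching the target string."""
    return sum(g == t for g, t in zip(genome, TARGET))

def one_point_crossover(a, b, point):
    """Child takes a's genes before the cut point and b's after."""
    return a[:point] + b[point:]

parent_a = "1111100000"  # strong on the first 'component'
parent_b = "0000011111"  # strong on the second 'component'
child = one_point_crossover(parent_a, parent_b, 5)

print(fitness(parent_a), fitness(parent_b), fitness(child))  # 5 5 10
```

Neither parent can reach the target by small point changes without first getting worse, but one crossover assembles the two good ‘components’ immediately.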

A core ‘problem’ for biological evolution is the complexity of the genotype–phenotype mapping.  Indeed, in “The Selfish Gene” Dawkins attacks Lamarckism precisely on the grounds that the mapping is impossibly complex and hence cannot be inverted2.  In fact, Dawkins' arguments would also ‘disprove’ Darwinian natural selection, as it too depends on the mapping not being too complex.  If the genotype–phenotype mapping were as complex as Dawkins suggested, then small changes to genotypes as gene patterns would lead to arbitrary phenotypes, and so the fitness of parents would not be a predictor of the fitness of offspring. In fact, while not simple to invert (as is necessary for Lamarckian inheritance), the mapping is simple enough for natural selection to work!

One of the complexities of the genotype–phenotype mapping in biology is that the genotype (our chromosomes) is far simpler (less information) than our phenotype (body shape, abilities, etc.).  Also the production mechanism (a mother's womb) is no more complex than the final product (the baby).  In contrast, for technology the genotype (plans, specifications, models, sketches) is of comparable complexity to the final product.  Furthermore, the production means (factory, workshop) is often far more complex than the finished item (but not always: the skilled woodsman can make a huge variety of things using a simple machete, and there is interesting work on self-fabricating machines).

The complexity of the biological mapping is particularly problematic for the kind of combinatorial evolution that Arthur argues is so important for technological development.  In the world of technology, the schematic of a component is a component of the schematic of the whole — hierarchies of organisation are largely preserved between phenotype and genotype.  In contrast, genes that code for finger length are also likely to affect toe length, and maybe other characteristics as well.

As noted, sexual reproduction does help to some extent, as chromosome crossovers mean that some combinations of genes tend to be preserved through breeding, so ‘parts’ of the whole can develop and then be passed together to future generations.  If genes are on different chromosomes this process is a bit hit-and-miss, but there is evidence that genes that code for functionally related things (and are therefore good to breed together) end up close together on the same chromosome, and hence are more likely to be passed on as a unit.

In contrast, there is little hit-and-miss about technological ‘breeding’: if you want component A from machine X and component B from machine Y, you just take the relevant parts of the plans and put them together.

Of course, getting component A and component B to work together is another matter; typically some sort of adaptation or interfacing is needed.  In biological evolution this is extremely problematic: as Arthur says, “the structures of genetic evolution” mean that each step “must produce something viable” (NoT,p.188).  In contrast, the ability to ‘fix’ the details of composition in technology means that combinations that are initially not viable can become so.

However, as noted at the end of the last section, this is due not just to the nature of technology, but also human ingenuity.

The crucial difference between biology and technology is human design.

technological context and infrastructure

A factor that seems to be weak or missing in both Basalla's and Arthur's theories is the role of infrastructure and the general technological and environmental context3. This is highlighted by the development of the wheel.

The wheel and fire are often regarded as core human technologies, but whereas fire is near universal (indeed predates modern humans), the wheel was only developed in some cultures.  It has long annoyed me that the fact that South American civilisations did not develop the wheel is seen as some kind of lack or failure of the civilisation.  It has always seemed evident that the wheel was not developed everywhere simply because it is not always useful.

It was wonderful therefore to read Basalla's detailed case study of the wheel (EoT,p.7–11), where he backs up with hard evidence what for me had always been a hunch.  I was aware that the Aztecs had wheeled toys even though they never used wheels for transport. Basalla quite sensibly points out that this is reasonable given the terrain and the lack of suitable draught animals. He also notes that between 300 and 700 AD wheels were abandoned in the Near East and North Africa — wheels are great if you have flat hard natural surfaces, or roads, but not so useful on steep broken hillsides, thick forest, or soft sandy deserts.

In some ways these combinations (wheels and roads, trains and rails, electrical goods and electricity generation) can be seen as a form of domain in Arthur's sense, a “mutually supporting set” of technologies (NoT,p.71); indeed he does talk about the “canal world” (NoT,p.82).  However, he is clearly thinking more about the component technologies that make up a new artefact, and less about the set of technologies that need to surround a new technology to make it viable.

The mutual interdependence of infrastructure and related artefacts forms another positive feedback loop. In fact, in his discussion of ‘lock-in’, Arthur does talk about the importance of “surrounding structures and organisations” as a constraint often blocking novel technology, and about the way some technologies are only possible because of others (e.g. complex financial derivatives are only possible because of computation).  However, the best example is Basalla's description of the development of the railroad vs. the canal in the American Mid-West (EoT,p.195–197).  This is often seen as simply the result of the superiority of the railway, but in the 1960s the historian Robert Fogel made a detailed economic comparison and found that there was no clear financial advantage; it is just that once one began to become dominant, positive feedback effects made it the sole winner.
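Fogel's ‘no intrinsic advantage’ finding is exactly what a classic Pólya-urn model of positive feedback predicts. Here is a minimal simulation sketch (my own illustration, not from either book): two equally good technologies, with each new adopter choosing in proportion to existing adoption.

```python
import random

# Pólya-urn sketch of technology lock-in: each new adopter chooses a
# technology with probability proportional to its current adoption,
# even though neither is intrinsically better.
def final_share(steps, seed):
    random.seed(seed)
    rail, canal = 1, 1  # equal starting positions, equal merit
    for _ in range(steps):
        if random.random() < rail / (rail + canal):
            rail += 1
        else:
            canal += 1
    return rail / (rail + canal)

# Re-run history with different random accidents of early adoption.
shares = [final_share(10_000, seed) for seed in range(20)]
```

Each run settles to a stable share, but different runs settle at very different shares: the early accidents, not any difference in merit, pick the winner.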

Arthur's compositional approach focuses particularly on hierarchical composition, but these infrastructures often cut across components: the hydraulics in a plane, the electrical system in a car, or Facebook's ‘Open Graph’. And of course one of the additional complexities of biology is that we have many such infrastructure systems in our own bodies: the blood stream, the nervous system, food and waste management.

It is interesting that the growth of the web was made possible by a technological context of the existing internet and home PC sales (which initially were not about internet use, even though now this is often the major reason for buying computational devices).  However, maybe the key technological context for the modern web is the credit card: it is online payments and shopping, or the potential for them, that have financed the spectacular growth of the area. There would be no web without Berners-Lee, but equally none without Barclaycard.

  1. see my WebSci’11 paper for more on the ‘fallacy of optimality’[back]
  2. Why Dawkins chose to make such an attack on Lamarckism I’ve never understood, as no-one had believed in it as an explanation for nearly 100 years.  Strangely, it was very soon after “The Selfish Gene” was published that examples of Lamarckian evolution were discovered in simple organisms, and recently in higher animals, although in the latter through epigenetic (non-DNA) means.[back]
  3. Basalla does describe the importance of “environmental influences”, but is referring principally to the natural environment.[back]

One week to the next Tech Wave

Just a week to go now before the next Tiree Tech Wave starts, although the first person arrives on Sunday and one person is going to hang on for a while afterwards, getting some surfing in.

Still plenty of room for anyone who decides to come at the last minute.

Things have been a little hectic, as I’m having to do more of the local organisation this time, so I’ve been running round the island a bit, but I’m really looking forward to when people get here 🙂  The last two times I’ve felt a bit of tension leading up to the event as I feel responsible.  It is difficult planning an event without a schedule of “person A giving a talk at 9:30, person B at 10:45”; strangely it is much harder having nothing, simply trusting that good things will happen.  Hopefully this time I have had enough experience to know that if I just hang back and resist the urge to ‘do something’, then people will start to talk together, work together, make together — I just need to have the confidence to do nothing1.

At previous TTWs we have had open evenings when people from the local community have come in to see what is being done.  This time, as well as a general welcome for people to come and see, Jonnet from HighWire at Lancaster is going to run a community workshop on mending, based on her personal and PhD work on ‘Futuremenders‘. Central to this is Jonnet’s pledge not to acquire any more clothes, ever, but instead to mend and remake. This picks up on textile themes on the island, especially the ‘Rags to Riches Eco-Chic‘ fashion award and community tapestry group, but also Tech Wave themes of making, repurposing and generally taking things to pieces.   Jonnet’s work is not techno-fashion (no electroluminescent skirts, or LEDs stitched into your woolly hat), but does use social connections, both physical and through the web, to create mass participation, including mass panda knitting and an attempt on the world mass darning record.

For the past few weeks I have had an unusual (although I hope it will become usual) period of relative stability on the island, after a previous period of 8 months almost constantly on the move.  This has included some data hacking and learning HTML5 for mobile devices (hence some hacker-ish blog posts recently).  I hope to finish off one mini-project during the TTW that will be particularly pertinent the weekend the clocks ‘go forward’ an hour for British Summer Time.  I will blog about it if I do.

I hit the road last November almost immediately after the Tech Wave finished, so never got time to tidy things up.  So, before this one starts, I really should try to write up a couple of activities from last time, as I’m sure there will be plenty more this time round…

  1. Strange that I always give people the same advice when they take on management roles: “the brave manager does nothing”.  How rare that is.  In a university, a new Vice Chancellor starts and feels he/she has to change things — new faculty structure, new committees. “In the long run, it will be better”, everyone says, but I’ve always found such re-organisation is itself re-organised before we ever get to “the long run”.[back]

If Kodak had been more like Apple

Finally Kodak has crumbled; technology and the market changed, but Kodak could not keep up. Lots of memories of those bright yellow and black film spools, and memories in photographs piled in boxes beneath the bed.

But just imagine if Kodak had been more like Apple.

I’m wondering about the fallout from the Kodak collapse. I’m not an investor, nor an employee, nor even a supplier, but I have used Kodak products since childhood and I do have 40 years of memories in Kodak’s digital photo cloud. There is talk of Fuji buying up the remains of the photo cloud service, so it may be that it will re-emerge, but for the time being I can no longer stream my photos to friends’ kTV-enabled TV sets when I visit, nor view them online.

Happily, my Kodak kReader has a cache of most of my photos. But how many I’m not sure; when did I last look at the photos of those childhood holidays or my wedding? Will they be in my reader? I’ll check my kPhone as well. I’d hate to think I’d lost the snaps of the seaside holiday when my hat blew into the water; I only half remember it, but every time I look at it I remember being told and re-told the story by my dad.

The kReader is only a few months old. I usually try to put off getting a new one as they are so expensive, but even after a couple of years the software updates put a strain on the old machines.  I had to give up when my three-year-old model seemed to take about a minute to show each photo. It was annoying as this wasn’t just the new photos, but ones I recall viewing instantly on my first photo-reader more than 30 years ago (I can still remember the excitement as I unwrapped it one Christmas; I was 14 at the time, but now children seem to get their first readers when they are 4). The last straw was when the software updates would no longer work on the old processor and all my newer photos started appearing in strange colours.

Some years ago, I’d tried using a Fuji-viewer, which was much cheaper than the Kodak one. In principle you could download your photo cloud collection in an industry standard format and then import them into the Fuji cloud. However, this lost all the notes and dates on the photos and kept timing out unless I downloaded them in small batches, then I lost track of where I was. Even my brother-in-law, who is usually good at this sort of thing, couldn’t help.

But now I’m glad I’ve got the newest model of kReader, as it has 8 times the memory of the old one, so hopefully all of my old photos are in its cache. But oh no, I’ve just thought: has it only cached the things I’ve looked at since I got it?  If so I’ll have hardly anything. Please, please let the kReader have downloaded all it could.

Suddenly, I remember the days when I laughed a little when my mum was still using her reels of old Apple film and the glossy prints that would need scanning to share on the net (not that she did use the net, she’d pop them in the post!). “I know it is the future”, she used to say, “but I never really trust things I can’t hold”. Now I just wish I’d listened to her.

Wikipedia blackout and why SOPA whinging gets up my nose

Nobody on the web can be unaware of the Wikipedia blackout, and anyone who hadn’t heard of SOPA or PIPA before will have now.  Few who understand the issues would deny that SOPA and PIPA are misguided and ill-informed; even Apple and other software giants abandoned them, and Obama’s recent statement has effectively scuppered SOPA in its current form.  However, at the risk of apparently annoying everyone, am I the only person who finds some of the anti-SOPA rhetoric at best naive and at times simply arrogant?

Wikipedia Blackout screenshot

The ignorance behind SOPA and a raft of similar legislation and court cases across the world is deeply worrying.  Only recently I posted about the NLA case in the UK, which creates potential copyright issues when linking on the web, reminiscent of the Shetland Times case nearly 15 years ago.

However, that is no excuse for blinkered views on the other side.

I got particularly fed up a few days ago reading an article “Lockdown: The coming war on general-purpose computing”1 by copyright activist Cory Doctorow, based on a keynote he gave at the Chaos Computer Congress.  The argument was that attempts to limit the internet destroy the very essence of the computer as a general-purpose device and are therefore fundamentally wrong.  I know that Sweden has just recognised Kopimism as a religion, but still, an argument that relies on the inviolate nature of computation leaves one wondering.

The article also argued that elected members of Parliament and Congress are by their nature layfolk, and so quite reasonably not expert in every area:

And yet those people who are experts in policy and politics, not technical disciplines, still manage to pass good rules that make sense.

Doctorow has trust in the nature of elected democracy for every area from biochemistry to urban planning, but not information technology, which, he asserts, is in some sense special.

Now even as a computer person I find this hard to swallow, but what would a geneticist, physicist, or even a financier using the Black-Scholes model make of this?

Furthermore, Congress is chastised for finding unemployment more important than copyright, and the UN for giving first regard to health and economics — of course, any reasonable person is expected to understand this is utter foolishness.  From what parallel universe does this kind of thinking emerge?

Of course, Doctorow takes an extreme position, but the Electronic Frontier Foundation’s position statement, which Wikipedia points to, offers no alternative proposals and employs scaremongering arguments more reminiscent of the tabloid press, in particular the claim that:

venture capitalists have said en masse they won’t invest in online startups if PIPA and SOPA pass

This turns out to be from a Google-sponsored report2 and refers to “digital content intermediaries (DCIs)“, those providing “search, hosting, and distribution services for digital content“, not startups in general.

When this is the quality of argument being mustered against SOPA and PIPA is there any wonder that Congress is influenced more by the barons of the entertainment industry?

Obviously some, such as Doctorow and more fundamental anti-copyright activists, would wish to see a completely unregulated net.  Indeed, this is starting to be the case de facto in some areas, where covers are distributed pretty freely on YouTube without apparently leading to a collapse of the music industry, while offering new bands much easier ways to make an initial name for themselves.  Maybe in 20 years’ time Hollywood will have withered and we will live off a diet of YouTube videos :-/

I suspect most of those opposing SOPA and PIPA do not share this vision, indeed Google has been paying 1/2 million per patent in recent acquisitions!

I guess the idealist position sees a world of individual freedom, but it is not clear that is where things are heading.  In many areas online distribution has already resulted in a shift of power from the traditional producers, the different record companies and book publishers (often relatively large companies themselves), to often a single mega-corporation in each sector: Amazon, Apple iTunes. For the latter this was in no small part driven by the need for the music industry to react to widespread filesharing.  To be honest, however bad the legislation, I would rather trust myself to elected representatives than to unaccountable multinational corporations3.

If we do not wish to see poor legislation passed we need to offer better alternatives, both in terms of the law of the net and how we reward and fund the creative industries.  Maybe the BBC model is best, high quality entertainment funded by the public purse and then distributed freely.  However, I don’t see the US Congress nationalising Hollywood in the near future.

Of course copyright and IP are only part of a bigger picture in which the net is challenging traditional notions of national borders and sovereignty.  In the UK we have seen recent cases where Twitter was used to undermine court injunctions.  The injunctions were in place to protect a few celebrities, so were seen as ‘fair game’ and elicited little public sympathy.  However, the Leveson Inquiry has heard evidence from the editor of the Express defending his paper’s suggestion that the McCanns may have killed their own daughter; we expect and enforce (the Express paid £500,000 after a libel case) standards in the print media. Would we expect less if the Express hosted a parallel news website in the Cayman Islands?

Whether it is privacy, malware or child pornography, we do need to think of ways to limit the excesses of the web whilst preserving its strengths.  Maybe the solution is more international agreements, hopefully not yet more extra-territorial laws from the US4.

Could this day without Wikipedia be not just a call to protest, but also an opportunity to envision what a better future might be?

  1. blanked out today, see Google cache[back]
  2. By Booz&Co, which I thought at first was a wind-up, but appears to be a real company![back]
  3. As I write this, I am reminded of the  corporation-controlled world of Rollerball and other dystopian SciFi.[back]
  4. How come there is more protest over plans to shut out overseas web sites than there is over unmanned drones performing extra-judicial executions each week?[back]

changing rules of copyright on the web – the NLA case

I’ve been wondering about the broader copyright implications of a case that went through the England and Wales Court of Appeal earlier this year.  The case was brought by the NLA (Newspaper Licensing Agency) against Meltwater, who run commercial media-alert services; for example, telling you or your company when and where you have been mentioned in the press.

While the case is specifically about a news service, it appears to have broader implications for the web, not least because it makes new judgements on:

  • the use of titles/headlines — they are copyright in their own right
  • the use of short snippets (in this case no more than 256 characters) — they too potentially infringe copyright
  • whether a URL link is sufficient acknowledgement of copyright material for fair use – it isn’t!

These, particularly the last, seem to have implications for any form of publicly available lists, bookmarks, summaries, or even search results on the web.  While the NLA specifically allows free services such as Google News and Google Alerts, it appears that this is ‘grace and favour’, not use by right.   I am reminded of the Shetland case1, which led to many organisations having paranoid policies regarding external linking (e.g. seeking explicit permission for every link!).

So, in the UK at least, web copyright law changed significantly through precedent, and I didn’t even notice at the time!

In fact, the original case was heard more than a year ago, in November 2010 (full judgement), and the appeal in July 2011 (full judgement), but it is sufficiently important that the NLA are still headlining it on their home page (see below, and also their press releases (PDF) about the original judgement and appeal).  So effectively things changed at least at that point, although as this is a judgement about existing law, not new legislation, it presumably also acts retrospectively.  However, I only recently became aware of it after seeing a notice in The Times last week – I guess because it is time for annual licences to be renewed.

Newspaper Licensing Agency (home page) on 26th Dec 2011

The actual case was, in summary, as follows. Meltwater News produces commercial media monitoring services that include the title, first few words, and a short snippet of news items satisfying some criteria, for example mentioning a company name or product.  The NLA have a licence agreement for such companies and for those using such services, but Meltwater claimed it did not need such a licence and, even if it did, its clients certainly did not require any licence.  However, the original judgement and the appeal found pretty overwhelmingly in favour of the NLA.

In fact, my gut feeling in this case was with the NLA.  Meltwater were making substantial money from a service that (a) depends on the presence of news services and (b) would, for equivalent print services, require some form of licence fee to be paid.  So while I actually feel the judgement is fair in the particular case, it makes decisions that seem worrying when looked at in terms of the web in general.

Summary of the judgement

The appeal supported the original judgement, so I summarise the main points from the original here (indented text quotes from the judgement).

Headlines

The status of headlines (and I guess by extension book titles, etc.) in UK law is certainly materially changed by this ruling (paras 70/71), from previous case law (Fairfax, para. 62).

Para. 70. The evidence in the present case (incidentally much fuller than that before Bennett J in Fairfax -see her observations at [28]) is that headlines involve considerable skill in devising and they are specifically designed to entice by informing the reader of the content of the article in an entertaining manner.

Para. 71. In my opinion headlines are capable of being literary works, whether independently or as part of the articles to which they relate. Some of the headlines in the Daily Mail with which I have been provided are certainly independent literary works within the Infopaq test. However, I am unable to rule in the abstract, particularly as I do not know the precise process that went into creating any of them. I accept Mr Howe’s submission that it is not the completed work as published but the process of creation and the identification of the skill and labour that has gone into it which falls to be assessed.

Links and fair use

The ruling explicitly says that a link is not sufficient acknowledgement in terms of fair use:

Para. 146. I do not accept that argument either. The Link directs the End User to the original article. It is no better an acknowledgment than a citation of the title of a book coupled with an indication of where the book may be found, because unless the End User decides to go to the book, he will not be able to identify the author. This interpretation of identification of the author for the purposes of the definition of “sufficient acknowledgment” renders the requirement to identify the author virtually otiose.

Links as copies

Para 45 (not part of the judgement, but part of NLA’s case) says:

Para. 45. … By clicking on a Link to an article, the End User will make a copy of the article within the meaning of s. 17 and will be in possession of an infringing copy in the course of business within the meaning of s. 23.

The argument here is that the site has some terms and conditions that say it is not for ‘commercial user’.

As far as I can see the judge equivocates on this issue, but happily does not seem convinced:

Para 100. I was taken to no authority as to the effect of incorporation of terms and conditions through small type, as to implied licences, as to what is commercial user for the purposes of the terms and conditions or as to how such factors impact on whether direct access to the Publishers’ websites creates infringing copies. As I understand it, I am being asked to take a broad brush approach to the deployment of the websites by the Publishers and the use by End Users. There is undoubtedly however a tension between (i) complaining that Meltwater’s services result in a small click-through rate (ii) complaining that a direct click to the article skips the home page which contains the link to the terms and conditions and (iii) asserting that the End Users are commercial users who are not permitted to use the websites anyway.

Free use

Finally, the following extract suggests that NLA would not be seeking to enforce the full licence on certain free services:

Para. 20. The Publishers have arrangements or understandings with certain free media monitoring services such as Google News and Google Alerts whereby those services are currently licensed or otherwise permitted. It would apparently be open to the End Users to use such free services, or indeed a general search engine, instead of a paid media monitoring service without (currently at any rate) encountering opposition from the Publishers. That is so even though the End Users may be using such services for their own commercial purposes. The WEUL only applies to customers of a commercial media monitoring service.

Of course, the fact that they allow it without a licence suggests they feel the same copyright rules do apply; that is, the search collation services are subject to copyright.  The judge does not make a big point of this piece of evidence in any way that would suggest these free services do not have a right to abstract and link.  However, the fact that Meltwater (the agency the NLA is acting against) is making substantial money was clearly noted by the judge, as was the fact that users could choose to use alternative services for free.

Thinking about it

As noted, my gut feeling is that fairness lies with the newspapers involved; news gathering and reporting is costly, and openly accessible online newspapers are of benefit to us all; so, if news providers are unable to make money, we all lose.

Indeed, years ago in dot.com days, at aQtive we were very careful that onCue, our intelligent internet sidebar, did not break the business models of the services it pointed to. While we effectively pre-filled forms and submitted them silently, we did not scrape results and present them directly, but instead sent the user to the web page that provided the information.  This was partly out of a feeling that this was the right and fair thing to do, partly because if we treated others fairly they would be happy for us to provide this value-added service on top of what they provided, and partly because we relied on these third-party services for our business, so our commercial success relied on theirs.

This would all apply equally to the NLA v. Meltwater case.

However, like the Shetland case all those years ago, it is not the particulars of the case that seem significant, but the wide-ranging implications.  I, like so many others, frequently cite web materials in blog posts, web pages and resource lists by title alone, with the title text a live link pointing to the source site.  According to this judgement the title is copyright, and even if my use of it is “fair use” (as it normally would be), the live link is NOT sufficient acknowledgement.

Maybe things are not quite so bad as they seem. In the NLA vs. Meltwater case, the NLA had a specific licence model and agreement.  The NLA were not seeking retrospective damages for copyright infringement before this was in place, merely requiring that Meltwater subscribe fully to the licence.  The issue was not just that copyright had been infringed, but that it had been infringed when there was a specific commercial option in place.  In UK copyright law, I believe, it is not sufficient to show that copyright has been infringed; one must also show that the copyright owner has been materially disadvantaged by the infringement, so the existence of the licence option was probably critical to the specific judgement.   However, the general principles probably apply to any case where the owner could claim damage … and maybe claim so merely in order to seek an out-of-court settlement.

This case was resolved five months ago, and I’ve not heard of any rush of law firms creating vexatious copyright claims.  So maybe there will not be any long-lasting major repercussions from the case … or maybe the storm is still to come.

Certainly, the courts have become far more internet savvy since the 1990s, but judges can only deal with the laws they are given, and it is not at all clear that law-makers really understand the implications of their legislation for the smooth running of the web.

  1. This was the case in the late 1990s where the Shetland Times sued the Shetland News for including links to its articles.  Although the particular case involved material that appeared to be re-badged, the legal issues endangered the very act of linking at all. See NUJ Freelance “NUJ still supports Shetland News in internet case“, BBC “Shetland Internet squabble settled out of court“, The Lawyer “Shetland Internet copyright case is settled out of court“[back]

ignorance or misinformation – the press and higher education

I guess I shouldn’t be surprised at poor reporting in the Mail, but it does feel slightly more serious there than in the other tabloids.  I should explain that I have a copy of the Mail as it was the only UK paper available when I got on the Malaysian Airlines plane in Kuala Lumpur on Tuesday evening, and it is the Monday copy, as I assume it had flown out of the UK on the flight the day before!

Deepish inside, on p22, was the article “UK students lose out in sciences” by Nick Mcdermott.  The article quotes a report by Civitas showing that while the annual number of students in so-called STEM (Science, Technology, Engineering and Maths) courses rose by around 6,500 in the 10 years 1997–2007, this was largely due to an increase of 12,308 in overseas students and a fall in UK students of nearly 6,000.  Given an overall increase in student numbers of 600,000 in this period and employers “calling for more science graduates”, the STEM drop is particularly marked.

While I assume the figures are correct, the Mail article leaves the false impression that the overseas students are in some way taking places from UK students; indeed the article’s title “UK students lose out” suggests precisely this.  I can’t work out if this is simply the writer’s ignorance of the UK higher education system, or deliberate misinformation — neither is good news for British journalism.

Of course, the truth is precisely the opposite.  Overseas students are not in competition with UK students for undergraduate places in STEM or other subjects, as the number of UK students is effectively controlled by a combination of Government quotas and falling student demand in STEM subjects.  The latter, a lack of interest in the traditionally ‘hard’ subjects among university applicants, has led to the closure of several university science departments across the country.  Rather than competing with UK students, the presence of overseas students makes courses more likely to be viable and thus preserves the variety of education available to UK students.  Furthermore, the higher fees paid by overseas students, compared with the combined student fees and government monies for UK students, mean that, if anything, these overseas students subsidise their UK colleagues.

We should certainly be asking why it is that an increasing number of overseas students value the importance of a science/engineering training while their British counterparts eschew these areas.  However, the blame for the lack of UK engineering graduates does not lie with the overseas students, but closer to home.  Somehow in our school system and popular culture we have lost a sense of the value of a deep scientific education.  Until this changes and UK students begin to apply for these subjects, we cannot expect there to be more UK graduates.  In the meantime, we can only hope that more overseas students will come to study in the UK and keep the scientific and engineering expertise of universities alive until our own country finally comes to its senses.

After the Tech Wave is over

The Second Tiree Tech Wave is over.   Yesterday the last participants left by ferry and plane and after a final few hours tidying, the Rural Centre, which the day before had been a tangle of wire and felt, books and papers, cups and biscuit packets, is now as it had been before.  And as I left, the last boxes under my arm, it was strangely silent with only the memory of voices and laughter in my mind.

So is it as if it had never been?  Is there anything left behind?  There are a few sheets of Magic Whiteboard on the walls, which I left so that those visiting the Rural Centre in the coming weeks can see something of what we were doing, and there are used teabags and fish-and-chip boxes in the bin, but few other traces.

We trod lightly, like the agriculture of the island, where Corncrake and orchid live alongside sheep and cattle.

Some may have heard me talk about the way design is like a Spaghetti Western. In the beginning of the film Clint Eastwood walks into the town, and at the end walks away.  He does not stay, happily ever after, with a girl on his arm, but leaves almost as if nothing had ever happened.

But while he, like the designer, ultimately leaves, things are not the same.  The Carson brothers who had the town in fear for years lie dead in their ranch at the edge of town, the sharp tang of gunfire still in the air and the buzz of flies slowly growing over the elsewise silent bodies.  The crooked mayor, who had been in the pocket of the Carson brothers, is strapped over a mule heading across the desert towards Mexico, and not a few wooden rails and water butts need to be repaired.  The job of the designer is not to stay, but to leave, and yet to leave change: intervention more than invention.

But the deepest changes are not those visible in the bullet-pocked saloon door, but in the people.  The drunk who used to sit all day at the bar, has discovered that he is not just a drunk, but he is a man, and the barmaid, who used to stand behind the bar has discovered that she is not just a barmaid, but she is a woman.

This is true of the artefacts we create and leave behind as designers, but much more so of the events, which come and go through our lives.  It is not so much the material traces they leave in the environment, but the changes in ourselves.

I know that, as the plane and ferry left with those last participants, a little of myself left with them, and I know many, probably all, felt a little of themselves left behind on Tiree.  This is partly about the island itself; indeed I know one participant was already planning a family holiday here and another was looking at Tiree houses for sale on RightMove!  But it was also the intensity of five, sometimes relaxed, sometimes frenetic, days together.

So what did we do?

There was no programme of twenty minute talks, no keynotes or demo, indeed no plan nor schedule at all, unusual in our diary-obsessed, deadline-driven world.

Well, we talked.  Not at a podium with microphone and Powerpoint slides, but while sitting around tables, while walking on the beach, and while standing looking up at Tilly, the community wind turbine, the deep sound of her swinging blades resonating in our bones.  And we continued to talk as the sun fell and the overwhelmingly many stars came out; we talked while eating, while drinking and while playing (not so expertly) darts.

We met people from the island: those who came to the open evening on Saturday, or popped in during the days, and some at the Harvest Service on Sunday.  We met Mark, who told us about the future plans for Tiree Broadband; Jane at PaperWorks, who made everything happen; Fiona and others at the Lodge, who provided our meals; and many more. Indeed, many thanks to all those on the island who in various ways helped or made those at TTW feel welcome.

We also wrote.  We wrote on sheets of paper, notes and diagrams, and filled in TAPT forms for Clare, who was attempting to unpack our experiences of peace and calmness in the hope of designing computer systems that aid rather than assault our solitude.  Three large Magic Whiteboard sheets were entitled “I make because …”, “I make with …” and “I make …”, and were filled with comments.  And, in these days of measurable objectives, I know that at least one grant proposal, book chapter and paper were written during the long weekend; the comments on the whiteboards and experiences of the event will be used to create a methodological reflection on the role of making in research, which we’ll put into Interfaces and the TTW web site.

We moved.  Walking, throwing darts, washing dishes, and, I think, all heavily gesturing with our hands while talking.  We became more aware of those movements during Layda’s warm-up improvisation exercises, when we mirrored one another’s movements, before using our bodies in RePlay to investigate issues of creativity and act out the internal architecture of Magnus’ planned digital literature system.

We directly encountered the chill of wind and warmth of sunshine, the cattle and sheep, often on the roads as well as in the fields.  We saw on maps the pattern of settlement on the island, and on display boards the wools from the island’s different breeds. Some of us went to the local historical centre, An Iodhlann (http://www.aniodhlann.org.uk/), to see artefacts, documents and displays of the island in times past, from breadbasket of the west of Scotland to wartime airbase.

We slept.  I in my own bed, some in the Lodge, some in the B&B round the corner, Matjaz and Klem in a camper van and Magnus – brave heart – in a tent amongst the sand dunes.  Occasionally some took a break and dozed in the chairs at the Rural Centre or even nodded off over a good dinner (was that me?).

We showed things we had brought with us, including Magnus’ tangle of wires and circuit boards that almost worked, myself a small pack of FireFly units (enough to play with I hope in a future Tech Wave), Layda’s various pieces she had made in previous tech-arts workshops, Steve’s musical instrument combining Android phone and cardboard foil tube, and Alessio’s impressively modified table lamp.

And we made.  We do, after all, describe this as a making event!  Helen and Claire explored the limits of ZigBee wireless signals.  Several people contributed to an audio experience using proximity sensors and Arduino boards, and Steve built his CogWork Chip: Lego and electronics, maybe the world’s first mechanical random-signal generator.  Descriptions of many of these and other aspects of the event will appear in due course on the TTW site and participants’ blogs.


But it was a remark that Graham made as he was waiting in the ferry queue that is most telling.  It was not the doing that was central, the making, even the talking, but the fact that he didn’t have to do anything at all.  It was the lack of a plan that made space to fill with doing, or not to do so.

Is that the heart?  We need time and space for non-doing, or maybe even un-doing, unwinding tangles of self as well as wire.

There will be another Tiree Tech Wave in March/April – do come and share in some more not doing then.

Who was there:

  • Alessio Malizia – across the seas from Madrid, blurring the boundaries between information, light and space
  • Helen Pritchard – artist, student of innovation and interested in cows
  • Claire Andrews – roller girl and researching the design of assistive products
  • Clare Hooper – investigating creativity, innovation and a sprinkling of SemWeb
  • Magnus Lawrie – artist, tent-dweller and researcher of digital humanities
  • Steve Gill – designer, daredevil and (when he can get me to make time) co-author of TouchIT, a book on physicality
  • Graham Dean – ex-computer science lecturer, ex-businessman, and current student and auto-ethnographer of maker-culture
  • Steve Foreshaw – builder, artist, magician and explorer of alien artefacts
  • Matjaz Kljun – researcher of personal information and olive oil maker
  • Layda Gongora – artist, curator, studying improvisation, meditation and wild hair
  • Alan Dix – me