Web Art/Science Camp — how web killed the hypertext star and other stories

Had a great day on Saturday at the Web Art/Science Camp (twitter: #webartsci, lanyrd: web-art-science-camp). It was the first event that I went to primarily with my Talis hat on, and my first Web Science event, so I was very pleased that Clare Hooper told me about it during the DESIRE Summer School.

The event started on Friday night with a lovely meal in the restaurant at the British Museum. The museum was partially closed in the evening, but in the open galleries the Rosetta Stone, the Elgin Marbles and a couple of enormous totem poles were all very impressive. … and I notice that the BM’s website, when it describes the Parthenon Sculptures, does not waste the opportunity to tell us why they should not be returned to Greece!

Treasury of Atreus

I was fascinated too by images of the “Treasury of Atreus” (which is actually a Greek tomb, also known as the Tomb of Agamemnon). The tomb has a corbelled arch (triangular stepped stones, as visible in the photo) in order to relieve the load on the lintel. However, whilst the corbelled arch was an important technological innovation, the aesthetics of the time meant they covered up the triangular opening with thin slabs of fascia stone and made it look as though the lintel were actually supporting the wall above — rather like modern concrete buildings with decorative classical columns.

how web killed the hypertext star

On Saturday, the camp proper started with Paul de Bra from TU/e giving a sort of retrospective on pre-web hypertext research and asking whether there is any need for hypertext research anymore. The talk brought out several of the issues that have worried me too for some time; so many of the lessons of early hypertext were lost in the web1.

For me one of the most significant issues is external linkage. HTML embeds links in the document using <a> anchor tags, so that only the links the author has thought of can be present (and only one link per anchor). In contrast, mature pre-web hypertext systems, such as Microcosm2, specified links externally to the document, so that third parties could add annotations and links. I had a few great chats about this with one of the Southampton Web Science DTC students; in particular, about whether Google or Wikipedia effectively provide all the external links one needs.
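The contrast can be sketched in a few lines of code. This is a toy model of my own, not Microcosm’s actual architecture: the point is simply that links live in a separate ‘linkbase’ rather than inside the document, so third parties can contribute links, and one anchor can resolve to many destinations.

```python
# Minimal sketch of a Microcosm-style external linkbase (illustrative only:
# the names and structure here are my own invention, not Microcosm's model).
# Links are stored outside the documents, so anyone can add links to a
# document they do not own, and one anchor can carry several links.

from dataclasses import dataclass, field

@dataclass
class Link:
    source_doc: str   # document the link starts from
    anchor: str       # phrase in the source acting as the anchor
    target_doc: str   # destination document
    author: str       # third parties can contribute links too

@dataclass
class Linkbase:
    links: list[Link] = field(default_factory=list)

    def add(self, link: Link) -> None:
        self.links.append(link)

    def links_for(self, doc: str, anchor: str) -> list[Link]:
        # Unlike HTML's <a>, an anchor may resolve to many links,
        # contributed by many authors, without editing the document.
        return [l for l in self.links
                if l.source_doc == doc and l.anchor == anchor]

lb = Linkbase()
lb.add(Link("essay.html", "hypertext", "nelson-bio.html", "author"))
lb.add(Link("essay.html", "hypertext", "memex-notes.html", "third-party"))

print(len(lb.links_for("essay.html", "hypertext")))  # prints 2
```

Note that the document itself (`essay.html`) is never touched: both links, including the third-party one, exist only in the linkbase.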

Paul’s brief history of hypertext started, predictably, with Vannevar Bush‘s “As We May Think” and the Memex; however, he pointed out that Bush’s vision was based on associative connections (like the human mind) and trails (a form of narrative), not pairwise hypertext links. The latter reminded me of Nick Hammond’s bus-tour metaphor for guided educational hypertext in the 1980s — occasionally since I have seen things a little like this, and indeed narrative was an issue that arose in different guises throughout the day.

While Bush’s trails are at least related to the links of later hypertext and the web, the idea of associative connections seems to have been virtually forgotten. More recently on the web, however, IR (information retrieval) based approaches for page suggestions, like Alexa, and content-based social networking have elements of associative linking, as does the use of spreading activation in web contexts3.

It was of course Nelson who coined the term hypertext, but Paul reminded us that Ted Nelson’s vision of hypertext in Xanadu is far richer than the current web. As well as external linkage (and indeed more complex forms in his ZigZag structures, a form of faceted navigation), Xanadu’s linking was often in the form of transclusions: pieces of one document appearing, quoted, in another. Nelson was particularly keen on having only one copy of anything, hence the transclusion is not so much a copy as a reference to a portion. The idea of having exactly one copy seems a bit of a computing obsession, and in non-technical writing it is common to have quotations that are in some way edited (elision, emphasis), but the core thing to me is the fact that the target of a link, as well as the source, need not be the whole document but some fragment of it.
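The essence of transclusion can be illustrated with a toy model (mine, not Xanadu’s actual design): the quoting document holds a reference to a span of the original rather than a copy of the text, so there is only ever one master copy and the quote always resolves against it.

```python
# Toy sketch of Nelson-style transclusion (illustrative only, not Xanadu's
# real data model): a quote is a *reference* to a span of the original
# document, resolved on demand, rather than a pasted copy.

documents = {
    "origin.txt": "As We May Think imagines trails of associative links.",
}

# A transclusion is (document, start, end) rather than copied text.
quote = ("origin.txt", 0, 15)

def render(transclusion):
    doc, start, end = transclusion
    return documents[doc][start:end]

print(render(quote))  # prints "As We May Think"
```

Because the transclusion points at a fragment, both the source and the target of a link can be a portion of a document rather than the whole thing; and if the single master copy is edited, every quote of it changes too.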

Paul de Bra's keynote at Web Art/Science Camp (photo Clare Hooper)

Over a period of 30 years hypertext developed and started to mature … until in the early 1990s came the web, and so much of hypertext died with its birth … I guess a bit like the way Java all but stultified programming languages. Paul had a lovely list of bad things about the web compared with (1990s) state-of-the-art hypertext:

Key properties/limitations in the basic Web:

  1. uni-directional links between single nodes
  2. links are not objects (have no properties of their own)
  3. links are hardwired to their source anchor
  4. only pre-authored link destinations are possible
  5. monolithic browser
  6. static content, limited dynamic content through CGI
  7. links can break
  8. no transclusion of text, only of images

Note that 1, 3 and 4 are all connected with the way that HTML embeds links in pages rather than adopting some form of external linkage. However, 2 is also interesting: the fact that links are not ‘first-class objects’. This has been preserved in the semantic web, where an RDF triple is not itself easily referenced (except by complex ‘reification’), and so it is hard to add information about relationships, such as their provenance.
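The difference is easy to see in data-structure terms. The sketch below uses illustrative structures of my own (not any particular system’s API): a bare triple has nowhere to hang provenance, whereas a first-class link has an identity and properties of its own; saying the same thing with bare triples needs RDF-style reification, several extra statements about an anonymous statement node.

```python
# A web/RDF-style link is just a (source, relation, target) triple.
bare_link = ("pageA", "cites", "pageB")

# A first-class link has its own identity and properties, so we can
# record provenance, date, confidence... about the *link itself*.
first_class_link = {
    "id": "link-42",
    "source": "pageA",
    "relation": "cites",
    "target": "pageB",
    "added_by": "annotator@example.org",  # provenance of the link itself
    "added_on": "2010-11-20",
}

# With bare triples, stating who asserted the link requires 'reification':
# four statements describing the statement, plus the provenance itself.
reified = [
    ("stmt1", "rdf:type", "rdf:Statement"),
    ("stmt1", "rdf:subject", "pageA"),
    ("stmt1", "rdf:predicate", "cites"),
    ("stmt1", "rdf:object", "pageB"),
    ("stmt1", "addedBy", "annotator@example.org"),
]
print(len(reified))  # prints 5: five triples to say what one property said above
```

Five triples (and an anonymous statement node) to express what a single extra property expresses when the link is an object in its own right.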

Of course, this same simplicity (arguably even simplistic-ness) that reduced the expressivity of HTML compared with earlier hypertext is also the reason for its success compared with earlier, more heavyweight and usually centralised solutions.

However, Paul went on to describe how many of the features that were lost have re-emerged in plugins and server enhancements (this made me think of systems such as zLinks, which start to add an element of external linkage). I wasn’t totally convinced, as these features are still largely in research prototypes and have not entered the mainstream, but it made a good end to the story!

demos and documentation

There was a demo session as well as some short demos as part of talks. Lots of interesting ideas. One that particularly caught my eye (although not incredibly webby) was Ana Nelson‘s documentation generator “dexy” (not to be confused with doxygen, another documentation generator). Dexy allows you to include code and output, including screen shots, in documentation (LaTeX, HTML, even Word if you work a little) and live-updates the documentation as the code updates (at least it updates the code and output, you need to change the words!). It seems to be both a test harness and a multi-version documentation compiler all in one!
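The core idea can be sketched in a few lines (this is a toy of my own to illustrate the principle, not dexy’s real interface): run the code afresh and splice its live output into the document, so the printed output can never drift out of date, even though the surrounding words still can.

```python
# Toy sketch of the "live documentation" idea behind dexy (not its actual
# interface): execute a code snippet in a fresh interpreter and splice the
# captured output into a documentation template.

import subprocess
import sys

snippet = "print(6 * 7)"

# Run the snippet in a separate process, as a doc tool would,
# so the documented output is whatever the code *really* prints.
result = subprocess.run(
    [sys.executable, "-c", snippet],
    capture_output=True, text=True, check=True,
)

template = "Running the example gives:\n\n    {output}"
doc = template.format(output=result.stdout.strip())
print(doc)
```

If the snippet changes, rerunning the build regenerates the output in place, which is exactly the transclusion-like quality discussed below.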

I recall that many years ago, while he was still at York, Harold Thimbleby was doing something a little similar when he was working on his C version of Knuth’s WEB literate programming system. Ana’s system is language-neutral and takes advantage of recent developments, in particular the use of VMs to be able to test install scripts and to be sure of running code in a consistent environment. Also it can use browser automation for web docs — very cool 🙂

Relating back to Paul’s keynote this is exactly an example of Nelson’s transclusion — the code and outputs included in the document but still tied to their original source.

And on this same theme I demoed Snip!t as an example of both:

  1. attempting to bookmark parts of web pages, a form of transclusion
  2. using data detectors, a form of external linkage

Another talk/demo showed how Compendium could be used to annotate video (in this case regarding fashion design) and build rationale around it … yet another example of external linkage in action.

… and when looking, after the event, at some of Weigang Wang‘s work on collaborative hypermedia, it was pleasing to see that it uses a theoretical framework for shared understanding in collaborative hypermedia that builds upon my own CSCW framework from the early 1990s 🙂

sessions: narrative, creativity and the absurd

Impossible to capture in a few words, but one session included different talks and discussion about the relation of narrative to various forms of web experience — including a talk on the cognitive psychology of the Kafkaesque. Also discussion of creativity, with Nathan live-recording in IBIS!

what is web science

I guess inevitably in a new area there was some discussion about “what is web science” and even “is web science a discipline”. I recall similar discussions about the nature of HCI 25 years ago and not entirely resolved today … and, as an artist who was there reminded us, they still struggle with “what is art?”!

Whether or not there is a well defined discipline of ‘web science’, the web definitely throws up new issues for many disciplines including new challenges for computing in terms of scale, and new opportunities for the social sciences in terms of intrinsically documented social interactions. One of the themes that recurred to distinguish web science from simply web technology is the human element — joy to my ears of course as a HCI man, but I think maybe not the whole story.

Certainly the gathering of people from different backgrounds in a sort of disciplinary bohemia is exciting whether or not it has a definition.

  1. see also “Names, URIs and why the web discards 50 years of computing experience“[back]
  2. Wendy Hall, Hugh Davis and Gerard Hutchings, “Rethinking Hypermedia: The Microcosm Approach”, Springer, 1996.[back]
  3. Spreading activation is used by a number of people; some of my own work with others at Athens, Rome and Talis is reported in “Ontologies and the Brain: Using Spreading Activation through Ontologies to Support Personal Interaction” and “Spreading Activation Over Ontology-Based Resources: From Personal Context To Web Scale Reasoning“.[back]

Qualification vs unlimited education

In “Adrift in Caledonia“, Nick Thorpe is in the Shetland Isles speaking to Stuart Hill (aka ‘Captain Calamity’).  Stuart says:

“What does qualification mean? … Grammatically, a qualification limits the meaning of a sentence. And that’s what qualifications seem to do to people. When you become a lawyer it becomes impossible to think of yourself outside that definition. The whole of the education system is designed to fit people into employment, into the system. It’s not designed to realise their full creativity.”

Now Stuart may be being slightly cynical, and maybe the ‘whole of the education system’ is not like that, but sadly the general thrust often seems so.

Indeed I recently tweeted a link to @fmeawad‘s post “Don’t be Shy to #fail”, as it echoed my own long-standing worries (see “abject failures“) that we have a system that encourages students to make early, virtually unchangeable, choices about academic and career directions, and then systematically tells them how badly they do at them. Instead the whole purpose of education should be to enable people to discover their strengths and their purposes, and to help them excel in those things which are close to their heart and build on their abilities. And this may involve ‘failures’ along the way and may mean shifting areas and directions.

At a university level the very idea behind the name ‘university’ was the bringing together of disparate scholars. In “The Rise and Progress of Universities” (Chapter 2, “What is a University?”, 1854) John Henry Newman (Cardinal Newman, recently beatified) wrote:

“IF I were asked to describe as briefly and popularly as I could, what a University was, I should draw my answer from its ancient designation of a Studium Generale, or “School of Universal Learning.” This description implies the assemblage of strangers from all parts in one spot;—from all parts; else, how will you find professors and students for every department of knowledge? and in one spot; else, how can there be any school at all? Accordingly, in its simple and rudimental form, it is a school of knowledge of every kind, consisting of teachers and learners from every quarter. Many things are requisite to complete and satisfy the idea embodied in this description; but such as this a University seems to be in its essence, a place for the communication and circulation of thought, by means of personal intercourse, through a wide extent of country.”

Note the emphasis on having representatives of many fields of knowledge ‘in one spot’: the meeting and exchange, the flow across disciplines, and yet is this the experience of many students?  In the Scottish university system, students are encouraged to study a range of subjects early on, and then specialise later; however, this is as part of a four year undergraduate programme that starts at 17.  At Lancaster there is an element of this with students studying three subjects in their first year, but the three year degree programmes (normally starting at 18) means that for computing courses we now encourage students to take 2/3 of that first year in computing in order to lay sufficient ground to cover material in the rest of their course.  In most UK Universities there is less choice.

However, to be fair, the fault here is not simply that of university teaching and curricula; students seem less and less willing to take a wider view of their studies, indeed unwilling to consider anything that is not going to be marked for final assessment. A five-year-old is not like this, and I assume this student resistance is the result of so many years in school, assessed and assessed since they were tiny; one of the reasons Fiona and I opted to home educate our own children (a right that seems often under threat, see “home education – let parents alone!“). In fact, in the past there was a greater degree of cross-curricular activity in British schools, but this was made far more difficult by the combination of the National Curriculum prescribing content, SATs used for ‘ranking’ schools, and the increasingly intrusive ‘quality’ and targets bureaucracy introduced from the 1980s onwards.

Paradoxically, once a student has chosen a particular discipline, we often then force a particular form of breadth within it. Sometimes this is driven by external bodies, such as the BPS, which largely determines the curriculum of psychology courses across the UK. However, we also do it within university departments as we determine what for us is considered a suitable spread of studies, and then force students into it no matter what their leanings and inclinations, despite the fact that similar institutions may have completely different curricula. So, when a student ‘fails’ a module they must retake the topic with which they are clearly struggling in order to scrape a pass, or else ‘fail’ the entire course. Instead surely we should use this as an indication of aptitude, and perhaps allow students to take alternative modules in areas of strength.

Several colleagues at Talis are very interested in the Peer 2 Peer University (P2PU), which is attempting to create a much more student-led experience. I would guess that Stuart Hill might have greater sympathy with this endeavour than with the traditional education system. Personally, I have my doubts as to whether being virtually / digitally ‘in one spot‘ is the same as actually being co-present (but the OU manage), and whether being totally student-led loses the essence of scholarship, teaching1 and mentoring, which seems the core of what a university should be. However, P2PU and similar forms of open education (such as the Khan Academy) pose a serious intellectual challenge to the current academic system: Can we switch the balance back from assessment to education? Can we enable students to find their true potential wherever it lies?

  1. Although ‘teaching’ is almost a dirty word nowadays, perhaps I should write ‘facilitating learning’![back]

Paris dawn

Dawn in Paris from the 29th floor of Hôtel Concorde La Fayette, looking north-east towards Sacre Cœur.  End of a few days as external expert for the INRIA Evaluation Seminar on “Interaction and Visualisation”.

I was also in Paris last September and changed my view of the city (hitherto rather poor). I started a post about that then, “Paris and the redemption of the French restaurant“, but never uploaded it at the time; however, I have now done so (post-dated to Sept 2009). Also on Flickr are more photos of dawn over Sacre Cœur, and I am uploading my photos from the previous Paris trip last September too.

UK internet far from ubiquitous

On the last page of the Guardian on Saturday (13th Oct) in a sort of ‘interesting numbers’ section, they say that:

“30% of the UK population have no internet access at home”

I couldn’t find the exact source of this; however, another Guardian article, “UK internet audience rises by 1.9 million over last year”, dated Wednesday 30 June 2010, has a similar figure. This says that internet use has grown to 38.8 million users. The National Statistics office says the overall UK population is 61,792,000 with 1/5 under 16; assuming an even spread of ages, that makes roughly 2 in 16 of the whole population under 10, or around 8 million. That gives an overall population of roughly 54 million over 10 years old, which means still only around 70% actually using the web at all.
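The back-of-envelope arithmetic can be checked directly (the even spread of ages 0–15 is of course an assumption):

```python
# Reproducing the estimate from the figures quoted in the text.

population = 61_792_000          # UK population (National Statistics figure)
under_16 = population / 5        # roughly one fifth are under 16
under_10 = under_16 * 10 / 16    # assuming an even spread of ages 0-15
over_10 = population - under_10  # population aged 10 and over

internet_users = 38_800_000      # Guardian / web-audience figure
share = internet_users / over_10

print(round(under_10 / 1e6, 1))  # ~7.7 million, i.e. "around 8 million"
print(round(over_10 / 1e6, 1))   # ~54.1 million
print(round(share * 100))        # ~72%, i.e. only around 70%
```

With exact fractions the share comes out at about 72%, which still supports the point: far from everyone is online.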

My guess is that some of the people with internet at home do not use it, and some of those without home connections access it by other means (mobile, school, cyber cafés), but by both measures we are hardly a society where the web is as ubiquitous as one might have imagined.

cracks in the ceiling, windows on childhood

This bedroom.  Where she knew the pattern of cracks in the ceiling better than any other fact of her life.
Shipping News, p.54

Reading this, I realised I also remember the patterns on the ceiling above Mum and Dad’s bed in the big bedroom in Bangor Street, where my sister and I also slept in bunk beds.  As I lay, falling asleep with monkey held close, the pattern above seemed like the face and shoulders of some giant that had slept in the attic and left his impression in the plaster like the smaller dent in the sheets when I got up in the morning.

I had forgotten it, but I can see it now, patterns upon light green woodchip ceiling paper, as clear as the sky and grass before me where I am sitting now, and mornings with tea from the Goblin Teasmade and Dad bringing up marmalade-laden toast cut in triangles before, workman-fashion, he sipped hot tea from the saucer.

wisdom of the crowds goes to court

Expert witnesses often testify in court cases, whether on DNA evidence, IT security or blood-spatter patterns. However, in the days of Web 2.0, who is the ‘expert’ witness? Would a true Web 2.0 court submit evidence to public comment; maybe, like the Viking Thing or a Wild West lynch mob, a vote of the masses using Facebook ‘Like’ could determine guilt or innocence?

However, it will be a conventional judge, not the justice of social networks, who will adjudicate if the hoteliers threatening to sue TripAdvisor1 do indeed bring the case to court. When TripAdvisor seeks to defend its case, it will rely not on crowd-sourced legal opinions, but on lawyers whose advice is trusted because they are trained, examined and experienced, and who are held responsible for their advice. What is at stake is precisely the fact that TripAdvisor’s own site has none of these characteristics.

This may well, like the Shetland newspaper case in the 1990s2, become a critical precedent for many crowd-sourced sites and so is something we should all be watching.

Unlike Wikipedia or legal advice itself, ‘expertise’ is not the key issue in the case of TripAdvisor: every hotel guest is in a way the best expert as to their own experience. However, how is the reader to know that the reviews posted are really by disgruntled guests rather than business rivals? In science we are expected to declare sources of research funding, so that the reader can make judgements on the reliability of evidence funded by the tobacco or oil industry or indeed the burgeoning renewables sector. Those who flout these conventions and rules may expect their papers to be withdrawn and their careers to flounder. Similarly, if I make a defamatory public statement about a friend, colleague or public figure, then not only can the reliability of my words be judged by my own reputation for trustworthiness, but if my words turn out to be knowingly or culpably false and damaging then I can be sued for libel. In the case of TripAdvisor there are none of the checks and balances of science or the law, and yet the impact on individual hoteliers can make or break their business. Who is responsible for damage caused by any untrue or malicious reviews posted on the site: the anonymous ‘crowd’ or TripAdvisor?

Of course users of review sites are not stupid; they know (or do they?) that anonymous reviews should be taken with a pinch of salt. My guess is that a crucial aspect of the case may be the extent to which TripAdvisor itself appears to lend credence to the reviews it publishes. Indeed every page of TripAdvisor is headed with their strap line “World’s most trusted travel advice™”.

At the top of the home page there is also the phrase “Find Hotels Travelers Trust” and further down, “Whether you prefer worldwide hotel chains or cozy boutique hotels, you’ll find real hotel reviews you can trust at TripAdvisor“. The former arguably puts the issue of trust back to the reviewers, but the latter is definitely TripAdvisor attesting to the trustworthiness of the reviews.

I think if I were in TripAdvisor I would be worried!

Issues of trust and reliability, provenance and responsibility are also going to be an important strand of the work I’ll be part of at Talis: how do we assess the authority of crowd-sourced material, how do we convey to users the level of reliability of the information they view, especially if it is ‘mashed’ from different sources, and how do we track the provenance of information in order to be able to do this? Talis is interested because, as a major provider and facilitator of open data, the reliability of the information it and its clients provide is a crucial part of that information — unreliable information is not information!

However, these issues are critical for everyone involved in the new web; if those of us engaged in research and practice in IT do not address these key problems then the courts will.

  1. see The Independent, “Hoteliers to take their revenge on TripAdvisor’s critiques in court“, Saturday 11th Sept. 2010[back]
  2. The case around 1996/1997 involved the Shetland Times obtaining a copyright injunction against ‘deep linking’ by the rival Shetland News, that is, links directly to news stories bypassing the Shetland News home page. This was widely reported at the time and became an important case in Internet law: see, for example, the Nov 1996 BBC News story or the netlitigation.com article. The out-of-court settlement allowed the deep linking so long as the link was clearly acknowledged. However, while the settlement was sensible, the uncertainty left by the case pervaded the industry for years, leading to some sites abandoning link pages, or only linking after obtaining explicit permission, thus stifling the link-economy of the web. [back]

beyond books and blood

With most others, I was sickened by Pastor Terry Jones’ threat to burn copies of the Qur’an; it is directly counter to the Christian message and basic human decency.  Happily, this now seems to have been abandoned. However, while this was provocative and insensitive and may be used as an excuse for violence across the world, there seems to be a subtle and worrying shift as many have suggested that he will be responsible for any violence or even deaths.

However vile Jones’ threat was, the responsibility for violence lies with the perpetrators.

We seem to have lost the plot somehow when the burning of a book claims more news time and more  condemnation than those persecuting, maiming and killing people.

I am sure both true Muslims and Christians know that God’s dignity is not diminished one iota by the desecration of any book or building (including Ground Zero), even though our own feelings, dignity or pride may suffer. And I am certain they also know that God’s love extends to victims whatever their beliefs.

Let’s set our attention on the important things and leave those like Pastor Jones to the obscurity they deserve.

Across Ireland to Limerick: Stepping Out of Time

Early last week I had a few days external examining the iMedia course at Limerick. A wonderful course: I was impressed again by the Dawn 2010 show pieces produced by the students, who come predominantly from arts or design backgrounds, many of whom have never touched code or a soldering iron before starting the course.

As it was Bank Holiday weekend, flying would have meant spending 24 hours in an airport between flights and airport hotels in each direction, or alternatively driving south to an airport.  It seemed more sensible and more fun to drive south through Ireland itself, and in the process satisfy a little my itinerant spirit.

I didn’t manage to write as I went along, but have retrospectively made a number of post-dated photo-blogs:

Roads of the Sea — Tiree to Larne

Into the West — Larne to Westport

Serendipity and Song — Westport to Doolin

Last Day — Doolin to Limerick

Full set of photos at my Limerick-Aug-2010 Flickr photo set

Said goodbye to our little dog Tansy over the weekend. I am not one of nature’s dog lovers, but it is amazing how one gets attached to a small bundle of fur; I just wish I had been at home on Tiree with Fiona at the time. At nearly 17 she was very old in doggy years, seemed happy to the end and certainly gave a lot of happiness to others, which is a pretty good epitaph for anyone.