Roads of the Sea — Tiree to Larne

Friday morning at 9am saw me at the ferry queue in Scarinish waiting for the Tiree–Oban ferry, saying goodbye to Fiona and to Tiree as I won’t be home again for most of the next two months.  Friday is an early ferry — 5am check-in for those coming from Oban, but a more civilised time for going to the mainland, and a 1pm arrival gives plenty of time to get down to Troon for the 8:20pm ferry.

I’ve never taken the Troon–Larne ferry before, always travelling to Ireland from Stranraer in the North or Holyhead in the South.  However, arriving with a full three hours to spare, I found it is a good place to wait, eating a late picnic lunch of pork pie and salad overlooking the bay with windsurfers and kite surfers along the strand, home from home.

Long distance ‘commutes’ and remote relationships have become common amongst professional workers.  I recall Richard Bentley at one stage working in Germany while his partner was in Jersey, and in the States several West Coast–East Coast marriages.  However, this is not a recent phenomenon; on Tiree there are families of trawler men or those working on the North Sea rigs, where ‘going to work’ means many weeks or months away.  The ‘Express’ Troon–Larne ferry fairly sped along compared with Calmac’s more leisurely vessels, and as I sat and watched us pass Ailsa Craig, I struck up conversation with a man on his way home to his wife and family after three months away doing forestry work in Scotland.

Larne, like Crewe, is a place you pass through and rarely stop, but given the late ferry I ended up spending a night there in a small seafront guesthouse, Beach Vista.  The late arrival of the ferry was compounded by a wrong turning1, but despite the late hour Bob the proprietor was waiting with a warm welcome in a rich Northern accent. My childhood images of Northern Ireland are all from news stories of ‘The Troubles’; amidst these images of sectarian violence and bomb blasts, it is easy to forget the warmth of the people and the beauty of the countryside. So I spent a first peaceful night, hearing the sound of the waves lapping against the sea wall.

Walking along the seashore before breakfast, watching a freight ship glide quietly into port past the James Chaine memorial tower, I understood some of Bob’s love of the place.  “Sometimes when my wife and I go away on holiday”, he told me, “we just wonder why, when we have this at home”.  Despite being less than an hour from Belfast, the pace of life is clearly somewhat slower in Larne.  Bob, as well as running the B&B with his family, also has a taxi firm, and explained that, just like on Tiree, he doesn’t worry about locking the taxi, even leaving the keys inside.


  1. why can’t Google maps include a scale on the printed maps![back]

Names, URIs and why the web discards 50 years of computing experience

Names and naming have always been a big issue both in computer science and philosophy, and a topic I have posted on before (see “names – a file by any other name”).

In computer science, and in particular programming languages, a whole vocabulary has arisen to talk about names: scope, binding, referential transparency. As in philosophy, it is typically the association between a name and its ‘meaning’ that is of interest. Names and words, whether in programming languages or day-to-day language, are what philosophers call ‘intentional’: they refer to something else. In computer science the ‘something else’ is typically some data or code or a placeholder/variable containing data or code, and the key question of semantics or ‘meaning’ is about how to identify which variable, function or piece of data a name refers to in a particular context at a particular time.

The emphasis in computing has tended to be about:

(a) Making sure names have unambiguous meaning when looking locally inside code. Concerns such as referential transparency, avoiding dynamic binding and the deprecation of global variables are about this.

(b) Putting boundaries on where names can be seen/understood, both as a means to ensure (a) and also as part of encapsulation of semantics in object-based languages and abstract data types.

However, there has always been a tension between clarity of intention (in both the normal and philosophical sense) and abstraction/reuse. If names are totally unambiguous then it becomes impossible to say general things. Without a level of controlled ambiguity in language a legal statement such as “if a driver exceeds the speed limit they will be fined” would need to be stated separately for every citizen. Similarly in computing when we write:

function f(x) { return (x+1)*(x-1); }

The meaning of x is different when we use it in ‘f(2)’ or ‘f(3)’, and must be so to allow ‘f’ to be used generically. Crucially there is no internal ambiguity: the two ‘x’s refer to the same thing in a particular invocation of ‘f’, but the precise meaning of ‘x’ for each invocation is achieved by external binding (the argument list ‘(2)’).
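To make the binding concrete, here is a small JavaScript sketch (the outer ‘x’ is added purely for illustration) showing that each invocation binds ‘x’ afresh via the argument list, while a same-named name in an enclosing scope is untouched:

```javascript
const x = 10;            // an outer binding of the name 'x'

function f(x) {          // the parameter 'x' shadows the outer one inside f
  return (x + 1) * (x - 1);
}

// Each call binds 'x' externally, via the argument list:
console.log(f(2));  // here x means 2, so (2+1)*(2-1) = 3
console.log(f(3));  // here x means 3, so (3+1)*(3-1) = 8
console.log(x);     // the outer 'x' still means 10
```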

Come the web and URLs and URIs.

Fiona@lovefibre was recently making a test copy of a website built using WordPress. In a pure HTML website, this is easy (so long as you have used relative or site-relative links within the site): you just copy the files, put them in the new location and they work 🙂 Occasionally a more dynamic site does need to know its global name (URL), for example if you want to send a link in an email, but this can usually be achieved using a configuration file. For example, there is a development version of Snip!t at cardiff.snip!t.org (rather than www.snipit.org), and there is just one configuration file that needs to be changed between this test site and the live one.

Similarly in a pristine WordPress install there is just such a configuration file and one or two database entries. However, as soon as it has been used to create a site, the database content becomes filled with URLs. Some are in clear locations, but many are embedded within HTML fields or serialised plugin options. Copying and moving the database requires a series of SQL updates with string replacements matching the old site name and replacing it with the new — both tedious and needing extreme care not to corrupt the database in the process.
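Part of why a naive global search-and-replace needs such care is that WordPress stores some options and metadata as serialised PHP, where every string carries a byte-length prefix (e.g. s:22:"http://old.example.com";); change the URL without updating the length and PHP can no longer unserialise the value. A minimal sketch of a length-aware replacement, in JavaScript for illustration (the example URLs are invented, and unlike robust migration tools it assumes serialised strings contain no embedded quotes):

```javascript
// Replace oldUrl with newUrl inside a serialised-PHP value,
// recomputing each s:<length>:"..." prefix so the data stays valid.
function replaceInSerialised(value, oldUrl, newUrl) {
  return value.replace(/s:\d+:"([^"]*)";/g, (_, str) => {
    const replaced = str.split(oldUrl).join(newUrl);
    // PHP counts bytes, not UTF-16 code units
    const len = Buffer.byteLength(replaced, 'utf8');
    return `s:${len}:"${replaced}";`;
  });
}

console.log(replaceInSerialised(
  's:22:"http://old.example.com";',
  'http://old.example.com',
  'https://new.example.org'
));
// → s:23:"https://new.example.org";
```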

Is this just a case of WordPress being poorly engineered?

In fact I feel it is more a problem endemic to the web, and driven largely by the URL.

Recently I was experimenting with Firefox extensions. Being a good 21st century programmer I simply found an existing extension that was roughly similar to what I was after and started to alter it. First of course I changed its name, and then found I needed to make changes through pretty much every file in the extension, as knowledge of the extension name seemed to permeate to the lowest level of the code. To be fair, XUL has mechanisms to achieve a level of encapsulation, introducing local URIs through the ‘chrome:’ naming scheme, and having been through the process once I maybe understand a bit better how to design extensions to make them less reliant on the external name, and also which names need to be changed and which are more like the ‘x’ in the ‘f(x)’ example. However, despite this, the experience was very different from the levels of encapsulation I have learnt to take for granted in traditional programming.

Much of the trouble resides with the URL. Going back to the two issues of naming, the URL focuses strongly on (a) making the name unambiguous by having a single universal namespace;  URLs are a bit like saying “let’s not just refer to ‘Alan’, but ‘the person with UK National Insurance Number XXXX’ so we know precisely who we are talking about”. Of course this focus on uniqueness of naming has a consequential impact on generality and abstraction. There are many visitors on Tiree over the summer, and maybe one day I meet one at the shop and then a few days later pass the same person out walking; I don’t need to know the person’s NI number or URL in order to say it was the same person.

Back to Snip!t, over the summer I spent some time working on the XML-based extension mechanism. As soon as these became even slightly complex I found URLs sneaking in, just like the WordPress database 🙁 The use of namespaces in the XML file can reduce this by at least limiting full URLs to the XML header, but, still, embedded in every XML file are un-abstracted references … and my pride in keeping the test site and live site near identical was severely dented1.

In the years when the web was coming into being the Hypertext community had been reflecting on more than 30 years of practical experience, embodied particularly in the Dexter Model2. The Dexter model and some systems, such as Wendy Hall’s Microcosm3, incorporated external linkage; that is, the body of content had marked hot spots, but the association of these hot spots to other resources was in a separate external layer.

Sadly HTML opted for internal links in anchor and image tags in order to make HTML files self-contained, a pattern replicated across web technologies such as XML and RDF. At a practical level this is (i) why it is hard to have a single anchor link to multiple things, as was common in early Hypertext systems such as Intermedia, and (ii), as Fiona found, a real pain for maintenance!
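To see the difference, here is a toy sketch of Microcosm-style external linkage (all the names and paths are invented): the content only marks hot spots, while a separate link table, which may map one hot spot to several targets, is consulted when the link is followed:

```javascript
// Content marks hot spots by id; it knows nothing about link targets.
const content = 'The {hs:dexter} model influenced later systems.';

// The linkage lives in a separate, external layer, and a single
// hot spot may resolve to multiple destinations -- something the
// HTML anchor tag cannot express.
const linkbase = {
  dexter: ['papers/dexter-model.html', 'glossary.html#dexter'],
};

// Resolve a hot spot to its list of targets at 'follow' time.
function resolve(hotspotId) {
  return linkbase[hotspotId] || [];
}

console.log(resolve('dexter').length);  // → 2
```

Moving the site then means editing only the external layer; the content itself never embeds a URL.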

  1. I actually resolved this by a nasty ‘hack’ of having internal functions alias the full site name when encountered and treating them as if they refer to the test site — very cludgy![back]
  2. Halasz, F. and Schwartz, M. 1994. The Dexter hypertext reference model. Commun. ACM 37, 2 (Feb. 1994), 30-39. DOI= http://doi.acm.org/10.1145/175235.175237[back]
  3. Hall, W., Davis, H., and Hutchings, G. 1996 Rethinking Hypermedia: the Microcosm Approach. Kluwer Academic Publishers.[back]

Phoenix rises – vfridge online again

vfridge is back!

I mentioned ‘Project Phoenix’ in my previous post, and this was it – getting vfridge up and running again.

Ten years ago I was part of a dot.com company aQtive1 with Russell Beale, Andy Wood and others.  Just before it folded in the aftermath of the dot.com crash, aQtive spawned a small spin-off vfridge.com.  The virtual fridge was a social networking web site before the term existed, and while vfridge the company went the way of most dot.coms, for some time after I kept the vfridge web site running on Fiona’s servers until it gradually ‘decayed’ partly due to Javascript/DOM changes and partly due to Java’s interactions with mysql becoming unstable (note very, very old Java code!).  But it is now back online 🙂

The core idea of vfridge is placing small notes, photos and ‘magnets’ in a shareable web area that can be moved around and arranged like you might with notes held by magnets to a fridge door.

Underlying vfridge was what we called the websharer vision, which looked towards a web of user-generated content.  Now this is passé, but at the time it was directly counter to accepted wisdom, and looking back it seems prescient – remember this was written in 1999:

Although everyone isn’t a web developer, it is likely that soon everyone will become an Internet communicator — email, PC-voice-comms, bulletin boards, etc. For some this will be via a PC, for others using a web-phone, set-top box or Internet-enabled games console.

The web/Internet is not just a medium for publishing, but a potential shared place.

Everyone may be a web sharer — not a publisher of formal public ‘content’, but personal or semi-private sharing of informal ‘bits and pieces’ with family, friends, local community and virtual communities such as fan clubs.

This is not just a future for the cognoscenti, but for anyone who chats in the pub or wants to show granny in Scunthorpe the baby’s first photos.

Just over a year ago I thought it would be good to write a retrospective about vfridge in the light of the social networking revolution.  We did a poster “Designing a virtual fridge” about vfridge years ago at a Computers and Fun workshop, but have never written at length about its design and development.  In particular it would be good to analyse the reasons, technical, social and commercial, why it did not ‘take off’ at the time.  However, it is hard to write about it without good screen shots, and could I find any? (Although now I have)  So I thought it would be good to revive it, and now you can try it out again. I started with a few days effort last year at Christmas and Easter time (leisure activity), but over the last week have at last used the fact that I have half my time unpaid and so free for my own activities … and it is done 🙂

The original vfridge was implemented using Java Servlets, but I have rebuilt it in PHP.  While the original development took over a year (starting down in Cornwall while on holiday watching the solar eclipse), this re-build took about 10 days effort, although of course with no design decisions needed.  The reason it took so much development back then is one of the things I want to consider when I write the retrospective.

As far as possible the actual behaviour and design is exactly as it was back in 2000 … and yes it does feel clunky, with lots of refreshing (remember no AJAX or web2.0 in those days) and of course loads of frames!  In fact there is a little cleverness that allowed some client-end processing pre-AJAX2.  Also the new implementation uses the same templates as the original one, although the expansion engine had to be rewritten in PHP.  In fact this template engine was one of our most re-used bits of Java code, although now of course there are many alternatives.  Maybe I will return to a discussion of that in another post.

I have even resurrected the old mobile interface.  Yes there were WAP phones even in 2000, albeit with tiny green and black screens.  I still recall the excitement I felt the first time I entered a note on the phone and saw it appear on a web page 🙂  However, this was one place I had to extensively edit the page templates as nothing seems to process WML anymore, so the WML had to be converted to plain-text-ish HTML, as close as possible to those old phones!  Looks rather odd on the iPhone :-/

So, if you were one of those who had an account back in 2000 (Panos Markopoulos used it to share his baby photos 🙂 ), then everything is still there just as you left it!

If not, then you can register now and play.

  1. The old aQtive website is still viewable at aqtive.org, but don’t try to install onCue, it was developed in the days of Windows NT.[back]
  2. One trick used the fact that you can get Javascript to pre-load images.  When the front-end Javascript code wanted to send information back to the server it preloaded an image URL that was really just to activate a back-end script.  The frames  used a change-propagation system, so that only those frames that were dependent on particular user actions were refreshed.  All of this is preserved in the current system, peek at the Javascript on the pages.    Maybe I’ll write about the details of these another time.[back]
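The image-preload trick described in footnote 2 can be sketched roughly as follows (the URL and parameter names are invented for illustration; the original vfridge code will differ). The browser’s eagerness to fetch a ‘preloaded’ image becomes a one-way channel back to the server:

```javascript
// Encode the data to send as query parameters on an image URL.
function buildBeaconUrl(base, params) {
  const query = Object.entries(params)
    .map(([k, v]) => encodeURIComponent(k) + '=' + encodeURIComponent(v))
    .join('&');
  return base + '?' + query;
}

// 'Preload' an image whose URL is really a back-end script: the
// browser requests the URL, the server records the data and returns
// a dummy image -- a pre-AJAX way to send data without a page refresh.
function sendToServer(base, params) {
  const url = buildBeaconUrl(base, params);
  if (typeof Image !== 'undefined') {  // guard so the sketch also runs outside a browser
    new Image().src = url;
  }
  return url;
}

console.log(buildBeaconUrl('/fridge/update.cgi', { note: 'hi there', x: 40 }));
// → /fridge/update.cgi?note=hi%20there&x=40
```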

language, dreams and the Jabberwocky circuit

If life is always a learning opportunity, then so are dreams.

Last night I both learnt something new about language and cognition, and also developed a new trick for creativity!

In the dream in question I was in a meeting. I know, a sad topic for a dream, and perhaps even sadder it had started with me filling in forms!  The meeting was clearly one after I’d given a talk somewhere as a person across the table said she’d been wanting to ask me (obviously as a sort of challenge) if there was a relation between … and here I’ll expand later … something like evolutionary and ecological something.  Ever one to think on my feet I said something like “that’s an interesting question”, but it was also clear that the question arose partly because the terms sounded somewhat similar, so had some of the sense of a rhyming riddle “what’s the difference between a jeweller and a jailor”.  So I went on to mention random metaphors as a general creativity technique and then, so as to give practical advice, suggested choosing two words next to each other in a dictionary and then trying to link them.

Starting with the last of these, the two-words-in-a-dictionary method is one I have never suggested to anyone before, nor even thought about. It was clearly prompted by the specific example where the words had an alliterative nature, and so was a sensible generalisation, and after I woke I realised it was worth suggesting in future as an exercise.  But it was entirely novel to me; I had effectively done exactly the sort of thinking / problem solving that I would have done in the real life situation, but while dreaming.

One of the reasons I find dreams fascinating is that in some ways they are so normal — we clearly have no or little sensory input, and certain parts of our brain shut down (e.g. motor control to stop us thrashing about too much in our sleep) — but other parts seem to function perfectly as normal.  I have written before about the cognitive nature of dreams (including maybe how to model dreaming) and what we may be able to learn about cognitive function because not everything is working, rather like running an engine when it is out of the car.

In this dream clearly the ‘conscious’ (I know, an oxymoron) problem-solving part of the mind was operating just the same as when awake.  Which is an interesting fact about dreaming, but I was already aware of it from previous dreams.

In this dream it was the language that was interesting, the original conundrum I was given.  The problem came as I woke up and tried to reconstruct exactly what my interlocutor had asked me.  The words clearly *meant* evolutionary and ecological, but in the dream had ‘sounded’ even closer aurally, more like evolution and elocution (interesting to consider, images of God speaking forth creation).

So how had the two words sounded more similar in my dream than in real speech?

For this we need the Jabberwocky circuit.

There is a certain neurological condition that arises, I think due to tumours or damage in particular areas of the brain, which disrupts particular functions of language.   The person speaks interminably; the words make sense and the grammar is flawless, but there is no overall sense.  Each small snippet of speech is fine, just there is no larger scale linkage.

When explaining this phenomenon to people I often evoke the Jabberwocky circuit.  Now I should note that this is not a word used by linguists, neurolinguists, or cognitive scientists, and is a gross simplification, but I think captures the essence of what is happening.  Basically there is a part of your mind (the conscious, thinking bit) that knows what to say and it asks another bit, the Jabberwocky circuit, to actually articulate the words.  The Jabberwocky circuit knows about the sound form of words and how to string them together grammatically, but basically does what it is told.  The thinking bit needs to know enough about what can be said, but doesn’t have time to deal with precisely how they are strung together and leaves that to Jabberwocky.

Even without brain damage we can see occasional slips in this process.  For example, if you are talking to someone (and even more if typing) and there is some other speech audible (maybe radio in the background), occasionally a word intrudes into your own speech that isn’t part of what you meant to say, but is linked to the background intruding sound.

Occasionally too, you find yourself stopping in mid sentence when the words don’t quite make sense, for example, when what would be reasonable grammar overlaps with a colloquialism, so that it no longer makes sense.  Or you may simply not be able to say a word that you ‘know’ is there and insert “thingy” or “what’s it called” where you should say “spanner”.

The relationship between the two is rather like a manager and someone doing the job: the manager knows pretty much what is possible and can give general directions, but the person doing the job knows the details.  Occasionally, the instructions get confused (when there is intruding background speech) or the manager thinks something is possible which turns out not to be.

Going back to the dream I thought I ‘heard’ the words, but examining more closely after I woke I realised that no word would actually fit.  I think what is happening is that during dreaming (and maybe during imagined dialogue while awake), the Jabberwocky circuit is not active, or not being attended to.  It is like I am hearing the intentions to speak of the other person, not articulated words.  The pre-Jabberwocky bit of the mind does know that there are two words, and knows what they *mean*.  It also knows that they sound rather similar at the beginning (“eco”, “evo”), but not exactly what they sound like throughout.

I have noticed a similar thing with the written word.  Often in dreams I am reading a book, sheet of paper or poster, and the words make sense, but if I try to look more closely at the precise written form of the text, I cannot focus, and indeed often wake at that point1.  That is, the dream is creating the interpretation of the text, but not the actual sensory form; although if asked I would normally say that I had ‘seen’ the words on the page in the dream, it is more that I ‘see’ that there are words.

Fiona does claim to be able to see actual letters in dreams, so maybe it is possible to recreate more precise sensory images, or maybe this is just the difference between simply writing and reading, and more conscious spelling-out or attending to words, as in the well known:

Paris in the
the spring

Anyway, I am awake now and the wiser.  I know a little more about dreaming, which cognitive functions are working and which are not;  I know a little more about the brain and language; and I know a new creativity technique.

Not bad for a night in bed.

What do you learn from your dreams?

  1. The waking is interesting, I have often noticed that if the ‘logic’ of the dream becomes irreconcilable I wake.  This is a long story in itself, but I think similar to the way you get a ‘breakdown’ situation when things don’t work as expected and are forced to think about what you are doing.  It seems like the ‘kick’ that changes your mode of thinking often wakes you up![back]

names – a file by any other name

Naming things seems relatively unproblematic until you try to do it — ask any couple with a baby on the way.  Naming files is no easier.

Earlier today Fiona @lovefibre was using the MAC OS Time Machine to retrieve an old version of a file (let’s call it “fisfile.doc”).  She wanted to extract a part that she knew she had deleted in order to use in the current version.  Of course the file you are retrieving has the same name as the current file, and the default is to overwrite the current version; that is a simple backup restore.  However, you can ask Time Machine to retain both versions; at which point you end up with two files called, for example, “fisfile.doc” and “fisfile-original.doc”.  In this case ‘original’ means ‘the most recent version’ and the unlabelled one is the old version you have just restored.  This was not  too confusing, but personally I would have been tempted to call the restored file something like “fisfile-2010-01-17-10-33.doc”, in particular because one wonders what will happen if you try to restore several copies of the same file to work on, for example, to work out when an error slipped into a document.

OK, just a single incident, but only a few minutes later I had another example of problematic naming.


grammer aint wot it used two be

Fiona @ lovefibre and I have often discussed the worrying decline of language used in many comments and postings on the web. Sometimes people are using compressed txtng language or even leetspeak; both of these are reasonable alternative codes to ‘proper’ English, and potentially part of the natural growth of the language.  However, it is often clear that the cause is ignorance not choice.  One of the reasons may be that many more people are getting a voice on the Internet; it is not just the journalists, academics and professional classes.  If so, this could be a positive social sign indicating that a public voice is no longer restricted to university graduates, who, of course, know their grammar perfectly …

Earlier today I was using Google to look up the author of a book I was reading and one of the top links was a listing on ratemyprofessors.com.  For interest I clicked through and saw:

“He sucks.. hes mean and way to demanding if u wanan work your ass off for a C+ take his class1

Hmm I wonder what this student’s course assignment looked like?


  1. In case you think I’m a complete pedant, personally, I am happy with both the slang ‘sucks’ and ‘ass’ (instead of ‘arse’!), and the compressed speech ‘u’. The mistyped ‘wanan’ (for ‘wanna’) is also just a slip. It is the slightly more proper “hes mean and way to demanding” that seems to show a general lack of understanding.  Happily, the other comments were not as bad as this one, but I did find the student who wanted a “descent grade” amusing 🙂 [back]

European working time directive 2012 – the end of the UK university?

Fiona @ lovefibre just forwarded me a link to a petition about retained firefighters, who evidently may be at risk as the right to opt out of the European working time directive is rescinded.  Checking through to the Hansard record, it seems this is really a precautionary debate, as the crunch is not until 2012.

However, I was wondering how that was going to impact UK academia if, in 2012, the 48 hour maximum cuts in.

It may make no difference if academics are not required to work more than 48 hours, just decide to do so voluntarily.  However, this presumably has all sorts of insurance ramifications – if we write a reference or paper outside the ‘official hours’, would we be covered by the University’s professional indemnity?  I guess also, in considering promotions and appointments, we would have to ‘downgrade’ someone’s publications etc. to only include those that were done during paid working hours, otherwise we would effectively be making the extra hours a requirement (as we currently do).

The university system has become totally dependent on these extra hours.  In a survey in the early 1990s the average hours worked were over 55 per week, and in the 15 years since then this has gone up substantially. I would guess now the average is well over 60, with many academics getting close to double the 48 hour maximum. I recall one colleague, who had recently had a baby, mentioning how he had cut back on work; now he stops work at 5pm … and doesn’t start again until 7:30pm; his ‘cut back’ week was still way in excess of 60 hours even with a young baby1. Worryingly this has spread beyond the academics, and departmental administrators are often at their desks at 7 or 8 o’clock in the evening, taking piles of work home and answering email through the weekend.  While I admire and appreciate their devotion, one has to wonder at the impact on their personal lives.

So, at a human level, enforcing limited working hours would be no bad thing; certainly many companies force this, forbidding work out of office hours.  However, practically speaking,  if the working time directive does become compulsory in 2012, I  cannot imagine how the University system could continue to function.

And … if you are planning to do a 3 year course, start now; who knows what things will be like after 3 years!

  1. Yea, and I know I can’t talk, as an inveterate workaholic I ‘cut back’ from a high of averaging 95 hours a few years ago and now try to keep around 80 max.  I was however very fortunate in that I was doing a PhD and then personal fellowships when our children were small, so was able to spend time with them and only later got mired in the academic quicksands.[back]

eprints: relaxed and scalable interfaces

A story, a bit of a moan … and then, I hope, some constructive ideas.

It is time for the University annual report, which includes a list of all publications across the University. In previous years this was an easy job. I keep an up-to-date web page with all my publications for each year, so I simply gave our secretaries a link to the web publication list, they cut and paste it into Word, tidied the format a little … and job done. However, this year things are different … a short while ago the department installed an EPrints server. This year the department is making its submission to the University by downloading from the EPrints server, which means we have to upload to it :-/

The citation-adding page runs to several screenfuls, including breaking author names down into surname and forename … the thought of that was somewhat daunting.

Fortunately you can import into EPrints from BibTeX and EndNote bibliographies … unfortunately mine is in plain HTML 🙁

Now the 10 million AKT project that Southampton was a lead partner in developed a free text bibliography server … but, unfortunately, not included in EPrints 🙁

So a few regular expression substitutions and a lot of hand edits later, I convert my 2007 pub list into BibTeX (actually a couple of hours in total, including ‘bug fixing’ syntax errors in the BibTeX).

Then upload the clean .bib file … beautiful – I get a list of all the uploaded items … but they are in my ‘user workspace’ and not properly deposited. This I have to do one-by-one, and am not allowed to do so until I have filled in various additional fields, scattered liberally over several forms, including one form for adding subjects that requires several clicks to open up a lovely tree browser that in the end has only 2 leaves.

Now after grouching the lessons.

There seem to be a few key problems:

(1) First the standard usability issues: the inclusion pages are oriented around the data in the system not the user, there are no shortcuts for previously entered authors, etc.

(2) The system will not allow data to be entered if it is not complete. Of course the institution wants full data (e.g. whether it is refereed, etc.), but making it difficult to enter data makes it likely that users will not bother. That is, the alternative to perfect data may be no data!

(3) The interface to enter and edit is fine for a small number of entries, but becomes a pain when processing a complete publication list. Contrarily, the page for setting the subject categories is designed for handling large trees of categories but does not gracefully handle a small number.

Both (2) and (3) are also common problems, but not so well considered in the usability literature.

A useful information systems heuristic that I often advocate is:

“don’t enforce consistency, but highlight inconsistency”

In this case why not allow me to deposit incomplete records and then leave me a ‘to do list’ page … yes and maybe even badger me periodically with automatic emails to check it.
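That ‘to do list’ idea can be sketched as follows (field names are invented; a real EPrints record has many more): rather than refusing the deposit, accept the record and report what is missing:

```javascript
// Accept any record, but compute a 'to do' list of missing fields
// rather than refusing the deposit outright.
const REQUIRED = ['title', 'authors', 'year', 'refereed'];

function missingFields(record) {
  return REQUIRED.filter((f) => record[f] === undefined || record[f] === '');
}

// An incomplete record is deposited anyway; the gaps go on the to-do list.
const record = { title: 'The Dexter hypertext reference model', authors: 'Halasz & Schwartz' };
console.log(missingFields(record));  // → [ 'year', 'refereed' ]
```

The same list could drive the periodic reminder emails: badger the user only while `missingFields` is non-empty.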

Another maxim that applies to (2) is:

“Make it easy for the user to do what you want”

If you want people to upload references make it as easy as possible to do so. Now I’m sure the designers intend this to be the case, but it is easy sometimes to focus on usability of individual screens and interactions rather than the wider context.

In fact, this was the second time that I was faced with problem (3) today. Fiona had accidentally double clicked a large number of archived files when she was trying to drag them to Trash. She had to kill the application as it blindly started to open dozens of files (why not ask?). However, it was clearly coded resiliently and kept backup copies of the files it had started to open, so, when she tried to re-open it, InDesign started to ask her whether she wanted to recover the files … but did so one-by-one and wouldn’t let her do anything else until she had laboriously answered every dialogue box.

In this case the solution is fairly obvious: if there are many (or even more than one) files to be recovered, why not list them and ask about them all, perhaps with check boxes so you can recover some but not others. In general, tabular or list-style views tend to work better with large numbers of items, allowing you to perform edits to many items in a single transaction.
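A sketch of that batched alternative: one list-style decision covering all the pending files, instead of one modal dialogue per file. The file names and the `choose` callback (standing in for a checkbox list) are made up for illustration.

```python
def recover_files(pending, choose):
    """Ask about all pending files in one go.

    `choose` stands in for a single checkbox-list dialogue and returns
    the subset of files the user wants to recover.
    """
    if not pending:
        return []
    selected = choose(pending)
    # one transaction: recover the chosen files, discard the rest
    return [f for f in pending if f in selected]

pending = ["chapter1.indd", "chapter2.indd", "figures.indd"]
# user ticks the first two boxes
recovered = recover_files(pending, lambda files: set(files[:2]))
print(recovered)
```

One dialogue, however many files; the user answers once instead of dozens of times.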

Similarly in EPrints, after the import there were just a few fields required for each entry; some form of tabular view would have allowed me to scan down the list and select ‘refereed/not refereed’ for each entry.

With the subject categories, it was in a sense the opposite problem, but a symptom of the way we, as designers, often have some idea in our heads about how large a particular set is likely to be and then design around that idea. However, if you notice this tendency you can often produce variant interaction styles depending on the size of the set. For example, in web-based systems to browse hierarchies I have often (but not always!) added code that effectively says, “if the number of entries at this level is not too great, then show this level as headings with the next level as well.”
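That adaptive rule can be sketched in a few lines: when a level of the hierarchy is small, render it as headings with its children inlined; otherwise list just the level itself. The threshold and the little subject tree are illustrative.

```python
THRESHOLD = 5   # illustrative cut-off for "not too great"

def render(tree, depth=0):
    """Render a dict-of-dicts hierarchy, expanding small levels inline."""
    lines = []
    for name, children in tree.items():
        lines.append("  " * depth + name)
        # small level: show the next level down as well
        if len(tree) <= THRESHOLD and children:
            lines.extend(render(children, depth + 1))
    return lines

subjects = {"Computing": {"HCI": {}, "Systems": {}}, "Mathematics": {}}
for line in render(subjects):
    print(line)
```

With only two top-level headings the children appear immediately; a tree with dozens of siblings would stay collapsed to one level per page.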


fully expanded EPrints subjects menu

The EPrints server clearly expects that the subject tree will be far bigger, as it would be on a University-wide installation. However, even if the full list is very large, the number of items used by an individual would be small.

So as general design advice, if there is some form of collection:

  • are there any absolute lower or upper bounds on the size?
  • check, within these absolute bounds, what the interface would be like with 1, 3, 10, 100, or 1000 items in the collection
  • if the potential collection is large, is the likely size needed for a particular user or situation smaller?
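Applying the checklist above might look something like this: pick an interaction style from the expected size of the collection. The breakpoints are illustrative guesses, not empirically derived values.

```python
def widget_for(n_items):
    """Choose an interaction style for a collection of n_items (illustrative)."""
    if n_items <= 3:
        return "inline radio buttons"        # tiny: show everything at once
    if n_items <= 30:
        return "scrolling list"              # small: scan by eye
    if n_items <= 1000:
        return "searchable table"            # large: filter and batch-edit
    return "search box with autocomplete"    # huge: never show the whole set

for n in (1, 10, 100, 10000):
    print(n, widget_for(n))
```

The point is less the particular breakpoints than having the size question asked at all, rather than a single widget designed around one imagined size.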

To be fair, I am an unusual user with my pretty complete HTML publication lists; if I had no systematic way of keeping my own publications then I would appreciate EPrints more. However, there will be many with word processor lists, so maybe I’m not so unusual. I assume other people just knuckle down and get on with it. So the real problem is that I am an impatient user!

Which brings us to the last and most valuable piece of advice. When it comes to user testing, cussed users are worth their weight in gold. Users that are too nice are useless; they cope, they manage, and would hate to hurt your feelings by telling you your system is not perfect. So find the nasty users, the impatient users, the ones who complain at the slightest thing … they are true treasure.

material culture – textiles and technology

A couple of weeks ago I was at a ‘Long Table’ discussion on Technology and Democracy at [ space ] in Hackney. The ‘Long Table’ format was led by Lois Weaver (I now note the name although I didn’t at the time) and took the form of a simulated dinner party where the participants chatted about the topic. We were invited to write on the table cloth as we posed questions or espoused viewpoints – this later became part of the Not Quite Yet exhibition opening the next day.

Because of the table cloth sitting in front of me, I was reminded of the role textiles have played both as significant technology themselves and in the development of technological society. It was the spinning jenny and the cotton mills that created the industrial revolution, and it was needle manufacturing that was the inspiration for Adam Smith’s division of labour. In the context of the discussion of ‘democracy and technology’ this is particularly poignant. Before the factories, spinning and weaving were skilled cottage industries described so well in Silas Marner [G|A], and the importance of the textile factory was more about exerting control over production than about efficiency of production.

The Object of Labour – cover image

Today a book arrived from The Book Depository for Fiona: “The Object of Labour: Art, Cloth and Cultural Production”. It is a majestic tome and I’m looking forward to dipping into it sometime. It says it explores the “personal, political, social, and economic meaning of work through the lens of art and textile production”. Interestingly, its 408 pages are covered in words, and the etymology of ‘text’ itself is from Latin texere, to weave 🙂

Somehow, whilst my mind wandered over this, I came to ponder the term material culture (maybe because I have recently been re-reading Malafouris’ “The cognitive basis of material engagement” and Mike Wheeler’s response to it, “Minds, Things, and Materiality“). The word ‘material’ has many meanings: ‘raw materials’ for industry, ‘material evidence’ in law … but if you say the word to a person in the street, ‘material’ means simply cloth. So interesting that cloth has played such a strong part in material culture and is ‘material’ itself.

This led me to wonder about the words (like text and textile): when did the special meaning of material as cloth arise, or do they even have different roots? Turning to the Shorter OED, I looked up the definition of ‘material’. It is long, over half a column. The etymology is again from Latin materia – matter – and there are meanings related to that (as in material culture), the legal and philosophical meanings, the sense of documents or sources used for writing, indeed the implements of writing, ‘writing materials’ … but nowhere material as simply cloth, not even in the addenda of recent words.

The most common meaning of material, the most mundane, the one that sits next to my skin as I write – forgotten, written out of the dictionary, as the hand-loom weavers were written out of industrial production.

Physicality and Middle Ages Tech Support

Ansgarr needing help to use a book

On the forum of our MRes course at Lancaster one of the students posted a link to Middle Ages Tech Support on YouTube. It shows Ansgarr, a mediaeval monk, struggling with his first book.

I first saw this video when I was giving a talk at University of Peloponnese in Tripolis. Georgios1 showed the video before I started, just because he thought it was fun. I was talking a little bit about physicality and the video brought up some really interesting issues relating to this and usability. Although it is a comic video we can unpack it and ask which of the problems that Ansgarr has as he changes technology from scroll to book would actually happen and which are more our own anachronistic view of the past.


  1. one of my hosts there as part of the TIM project[back]