Tiree going mobile

Tiree’s Historical Centre An Iodhlann has just been awarded funding by the Scottish Digital Research and Development Fund for Arts and Culture to make historic archive material available through a mobile application whilst ‘on the ground’ walking, cycling or driving around the island.

I’ve been involved in bigger projects, but I can’t recall being more excited than I am about this one: I think partly because it brings together academic interests and the local community.

the project

An Iodhlann (Gaelic for a stackyard) is the historical centre on the island of Tiree.  Tiree has a rich history from the Mesolithic period to its Second World War base.  The archive was established in 1998, and its collection of old letters, emigrant lists, maps, photographs, stories and songs now extends to 12 000 items.  Some 500 items are available online, but the rest of the primary data is only available at the centre itself.  A database of 3200 island place names collated by Dr Holliday, the chair of An Iodhlann, has recently been made available on the web at tireeplacenames.org.  Given the size of the island (~750 permanent residents) this is a remarkable asset.


To date, the online access at An Iodhlann is mainly targeted at archival / historical use, although the centre itself has a more visitor-centred exhibition.  However, the existing digital content has the potential to be used for a wider range of applications, particularly to enhance the island experience for visitors.

Over the next nine months we will create a mobile application allowing visitors and local historians to access geographically pertinent information, including old photographs, and interpretative maps/diagrams, while actually at sites of interest.  This will largely use visitors’ own devices such as smart phones and tablets.  Maps will be central to the application, using both OS OpenData and bespoke local maps and sketches of historical sites.

As well as adding an extra service for those who already visit An Iodhlann, we hope that this will attract new users, especially younger tourists.  In addition, a ‘data layer’ using elements of semantic web technology will mean that the raw geo-coded information is available for third parties to mash up and for digital humanities research.

the mouse that roars

The Scottish Digital Research and Development Fund for Arts and Culture is run by Nesta, Creative Scotland and the Arts and Humanities Research Council (AHRC).

This was a highly competitive process, with 52 applications of which just six were funded.  The other successful organisations are: the National Piping Centre; the Lyceum Theatre Company and the Edinburgh Cultural Quarter; Dundee Contemporary Arts; the National Galleries of Scotland; and the Glasgow Film Theatre with the Edinburgh Filmhouse.  These are all big-city organisations, as were the projects funded by an earlier similar programme run by Nesta in England.

Ours is the only rural-based project to be funded, which makes this a great achievement for Tiree and a great challenge for us over the next nine months!

challenges

In areas of denser population or higher overall tourist numbers, historical or natural sites attract sufficient visitors to justify full-time (volunteer or paid) staff.  More remote rural locations and small islands have neither enough people for volunteers to cover all, or even a significant number, of sites, nor sufficient tourist volume to justify commercial visitor centres.

A recent example of this on Tiree is the closure of the Thatched Cottage Museum.  One of the few remaining thatched houses on the island, it housed a collection of everyday historical artefacts.  It was owned by the Hebridean Trust and staffed by local volunteers, but was recently closed and the building sold, as it proved difficult to keep it staffed sufficiently given the visitor numbers.

At some remote sites, such as the Tiree chapels dating back to the 10th century, or the Iron Age hill forts, there are simple information boards, and at a few locations there are also fixed indoor displays, including at An Iodhlann itself.  However, there are practical and aesthetic limits on the amount of large-scale external signage, and limits on the ongoing running and maintenance of indoor exhibits.  Furthermore, limited mobile signals mean that any mobile-based solution cannot assume continuous access.

from challenge to experience

Providing information on visitors’ own phones or tablets will address some of the problems of lack of signage and human guides.  However, achieving this without effective mobile coverage means that simple web-based solutions will not work.

The application used whilst on the ground will need to be downloaded, but this limits the total amount of information that is available whilst mobile.  Our first app will be built using HTML5 to ensure it is available on the widest range of mobile devices (iOS, Android, Windows Mobile, ordinary laptops), but using HTML5 further reduces the local storage available1.
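Working within that storage limit means deciding in advance which items are worth caching on the device.  The kind of budgeting involved might be sketched like this (the item names and priorities are purely illustrative, not the real app's data; the ~5MB figure is the typical HTML5 local storage quota mentioned in the footnote):

```javascript
// Sketch: choose which archive items to bundle for offline use,
// given HTML5 local storage of roughly 5MB per origin.
const STORAGE_BUDGET = 5 * 1024 * 1024; // ~5MB typical quota

function planOfflineBundle(items, budget = STORAGE_BUDGET) {
  // Highest-priority items first (e.g. maps before full-size photos)
  const sorted = [...items].sort((a, b) => b.priority - a.priority);
  const bundle = [];
  let used = 0;
  for (const item of sorted) {
    if (used + item.bytes <= budget) {
      bundle.push(item.name);
      used += item.bytes;
    }
  }
  return { bundle, used };
}

// Example: map tiles and thumbnails fit; large audio files are
// left for the pre/post-trip website instead
const plan = planOfflineBundle([
  { name: 'map-tiles', bytes: 2000000, priority: 3 },
  { name: 'thumbnails', bytes: 1500000, priority: 2 },
  { name: 'audio-guides', bytes: 4000000, priority: 1 },
]);
console.log(plan.bundle); // ['map-tiles', 'thumbnails']
```

The same trade-off drives the split described below: small, high-value media on the device, everything else on the companion website.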

In order to deal with this, the on-the-ground experience will be combined with a web site allowing pre-trip planning and post-trip reminiscence.  This will also be map-focused, allowing visitors to see where they have been or are about to go, and to access additional resources, such as photos and audio files, that are too large to be available on the ground (remembering the poor mobile coverage).  It may also offer an opportunity to view social content, including comments or photographs left by previous visitors, and then to associate one’s own photographs taken during the day with the different sites, creating a personal diary that can be shared with others.

On reflection, this focus on preparation and reminiscence will create a richer and more extended experience than simply providing information on demand.  Rather than reading reams of on-screen text whilst looking at a monument, or attempting to hear an audio recording in the Tiree wind, visitors will have some information available in the field and more when they return to their holiday base, or home2.


  1. For some reason HTML5 applications are restricted to a maximum of 5MB of local storage![back]
  2. This is another example of a lesson I have seen so many times before: the power of constraints to force more innovative and better designs. So many times I have heard people say about their own designs “I wanted to make X, but couldn’t for some reason so did Y instead” and almost every time it is the latter, the resource-constrained design, that is clearly so much better.[back]

mooHCIc – a massive open online HCI course

Would you like to know more about human–computer interaction, or would you like free additional resources for your students?

Hardly a week goes by without some story about the changing face of online open education: from Khan Academy to Apple’s iTunes U and a growing number of large-scale open online courses from leading academics, such as Thrun’s AI course at Stanford.

Talis is interested in the software needed to support these new forms of large-scale open learning.  So, partly because it seems a good idea, and partly to be a willing guinea pig, I am going to run a massive open online HCI course in the autumn.

This type of course is either aimed exclusively at people outside a traditional educational setting, or else, in the case of some of the university-based courses, the professor/tutor teaches their own class and then makes some of the materials connected with it available on the web.

While similarly aiming to cater for those outside mainstream education, I would also like to make it easy for those in traditional university settings to use this as part of their own courses, maybe suggesting it as additional material to recommend to their students.  Even more interesting will be if the online material is incorporated more deeply into courses, perhaps used instead of some of the lectures/labs a tutor would normally give.

If you are teaching an HCI course and would be interested in me being a ‘virtual guest lecturer’ by using this material (free!), please let me know.

I don’t intend to do a broad introductory HCI ‘101’ (although I may start with a short ‘laying out the area’ component), but more a series of components on particular sub-topics.  These will themselves be split into small units of perhaps 10-15 minutes ‘lecture style’ material (Khan-style voice over, or maybe mix of voice-over and head-and-shoulders).  Each sub-topic will have its own exercises, discussion areas, etc.  So, tutors intending to use this as part of their own courses can choose a sub-topic that fits into their curriculum, and likewise individuals not part of a formal course can pick appropriate topics for themselves.   I may also think of some integrative exercises for those doing more than one component.

I have yet to decide the final topics, but the following are things for which I have given some sort of tutorial or produced fresh materials recently:

  • introduction to information visualisation — using materials created for IR+InfoVis winter school in Zinal last January
  • emotion and experience — using new chapter written for next edition of HCI book, and tutorial I did at iUSEr last December
  • physicality — looking at the interactions between digital and physical product design — based on forthcoming TouchIT book
  • formal methods in HCI — as I have just written a chapter for Mads’ interaction-design.org open encyclopaedia of HCI
  • user interface software architecture — based on heavily updated chapter for next edition of HCI book
  • creativity and innovation — (wider than pure HCI audience) — drawing on experience of teaching technical creativity using ‘Bad Ideas’ and other methods, with both practical (doing it) and theoretical (understanding it) aspects
  • designing for use (adoption and appropriation) — understanding the factors that lead to products being adopted including the rich interplay between design and marketing; and then, the factors that can allow users to appropriate products to their own purposes.

I will not do all these, but if some seem particularly interesting to you or your students, let me know and I’ll make final decisions soon based on a balance between popularity and ease of production!

not forgotten! 1997 scrollbars paper – best tech writing of the week at The Verge

Thanks to Marcin Wichary for letting me know that my 1997/1998 Interfaces article “Hands across the Screen” was just named in “Best Tech Writing of the Week” at The Verge.  Some years ago Marcin reprinted the article in his GUIdebook: Graphical User Interface gallery, and The Verge picked it up from there.

Hands across the screen is about why we have scroll bars on the right-hand side, even though it makes more sense to have them on the left, close to our visual attention for text.  The answer, I suggested, was that we mentally ‘imagine’ our hand crossing the screen, so a left-hand scroll-bar seems ‘wrong’, even though it is better (more on this later).

Any appreciation is obviously gratifying, but this is particularly so because it is a 15-year-old article being picked up as ‘breaking’ technology news.

Interestingly, and perhaps not coincidentally, the article was itself both addressing an issue current in 1997 and looking back more than 15 years to the design of the Xerox Star and other early Xerox GUIs of the late 1970s and early 1980s, as well as to work at York in the mid-1980s.

Of course this should always be the case in academic writing: if the horizon is (only) 3–5 years, leave it to industry.  Academic research certainly can be relevant today (and the article in question was in 1997), but if it does not have the likelihood of being useful in 10–20 years, then it is not research.

At the turn of the Millennium I wrote in my regular HCI Education column for SIGCHI Bulletin:

Pick up a recent CHI conference proceedings and turn to a paper at random. Now look at its bibliography – how many references are there to papers or books written before 1990 (or even before 1995)? Where there are older references, look where they come from — you’ll probably find most are in other disciplines: experimental psychology, physiology, education. If our research papers find no value in the HCI literature more than 5 years ago, then what value has today’s HCI got in 5 years time? Without enduring principles we ought to be teaching vocational training courses not academic college degrees.
(“the past, the future, and the wisdom of fools”, SIGCHI Bulletin, April 2000)

At the time about 90% of CHI citations were either to work in the last 5 years or to the authors’ own work; to me that indicated a discipline in trouble.  I wonder if it is any better today?

When revising the HCI textbook I am always pleased at the things that do not need revising — indeed some parts have hardly needed revising since the first edition in 1992.  These parts seem particularly important in education – if something has remained valuable for 10, 15, 20 years, then it is likely to still be valuable to your students in a further 10, 15, 20 years.  Likewise the things that are out of date after 5 years, even when revised, are also likely to be useless to your students even before they have graduated.

In fact, I have always been pleased with Hands across the Screen, even though it was short and not published in a major conference or journal.  It had its roots in an experiment in my first ever academic job at York in the mid-1980s, when we struggled to understand why the ‘obvious’ position for scroll arrows (bottom right) turned out not to work well.  After a detailed analysis, we worked out that the top-left was in fact the best place (with some other manipulations), and this analysis was verified in use.

The important meta-lesson is that what looked right turned out not to be right.  User studies showed that it was wrong, but not how to put it right; it was detailed analysis that filled the vital design gap.  However, even when we knew what was right, it still looked wrong.  It was only years later (in 1997) that I realised the discrepancy arose because one mentally imagined a hand reaching across the screen, even though really one was using a mouse on the desk surface.

Visual (and other) impressions of designers and users can be wrong; as in any mature field, quite formal, detailed analysis is necessary to complement even the most experienced designer’s intuitions.

The original Interfaces article was followed by an even shorter subsidiary article, “Sinister Scrollbar in the Xerox Star Xplained”, which delved into the history of the direction of scroll arrows on a scrollbar, and how they arose partly from a mistake when Apple took over the Star designs!  This is particularly interesting today given Apple’s perverse decision to remove scroll arrows completely — scrolling now feels like a Monte Carlo exercise, hoping you end up in the right place!

However, while it is important to find underlying principles, theories and explanations that stand the test of time, the application of these will certainly change.  Whilst, for an old mouse-and-screen PC, the visual ‘hands across the screen’ impression was ‘wrong’ in terms of real use experience, touch devices such as the iPad have now changed this.  It really is a good idea to have the scrollbar on the left, so that you don’t cover up the screen as you scroll.  Or, to be precise, it is good if you are right-handed.  But look hard: there are never options to change this for left-handed users.  Is this not a discrimination issue?  To be fair, tabs and menu items are normally found at the top of the screen, equally bad for all.  As with the scroll arrows, it seems that Apple long ago gave up any pretence of caring for basic usability or ergonomics (one day those class actions will come from a crippled generation!) — if people buy because of visual and tactile design, why bother?  And where Apple leads, the rest of the market follows 🙁

Actually it is not as easy as simply moving buttons around the screen; we have expectations from large screen GUI interfaces that we bring to the small screen, so any non-standard positioning needs to be particularly clear graphically.  However, the diverse location of items on web pages and often bespoke design of mobile apps, whilst bringing their own problems of inconsistency, do give a little more flexibility.

So today, as you design, do think “hands”, and left hands as well as right hands!

And in 15 years time, who knows what we’ll have in our hands, but let’s see if the same deep principles still remain.

CSS considered harmful (the curse of floats and other scary stories)

CSS and JavaScript based sites have undoubtedly enabled experiences far richer than the grey-backgrounded days of the early web in the 1990s (and recall, the backgrounds really were grey!). However, the power and flexibility of CSS, in particular the use of floats, has led to a whole new set of usability problems on what appear to be beautifully designed sites.

I was reading a quite disturbing article on a misogynistic Dell event by Sophie Catherina Løhr at elektronista.dk.  However, I was finding it frustrating, as a line of media icons on the left of the page meant only the top few lines were unobstructed.

I was clearly not the only one with this problem as one of the comments on the page read:

That social media widget on the left made me stop reading an otherwise interesting article. Very irritating.

To be fair to the page designer, it was just on Firefox that the page rendered like this; on other browsers the left-hand page margin was wider.  Firefox is probably strictly ‘right’, in the sense that it sticks very close to the standards, but whoever is to blame, it is not helpful to readers of the blog.

For those wishing to make cross-browser styles, it is usually possible nowadays, but you often have to reset everything at the beginning of your style files — even if CSS is standard, default styles are not:

body {
    margin: 0;
    padding: 0;
    /*  etc. */
}

Sadly this is just one example of an increasingly common problem.

A short while ago I was on a site that had a large right-hand side tab.  I forget its function; maybe comments, or a table of contents.  The problem was that the tab obscured and prevented access to most of the scroll bar, making navigation of the middle portion of the page virtually impossible.  Normally it is not possible to obscure the scroll bar, as it is ‘outside’ the page.  However this site, like many, had chosen to put the main content of the site in a fixed-size scrolling <div>.  This meant that the header and footer were always visible, and the content scrolled in the middle.  Of course the scroll bar of the <div> is then part of the page and could be obscured.  I assume another cross-browser formatting difference meant the designer did not notice the problem, or perhaps (not unlikely) that they only ever tested the style with pages containing small amounts of non-scrolling text.

Some sites adopt a different strategy for providing fixed headers.  Rather than putting the main content in a fixed <div>, the header and footer are set to float above the main content, and margins are added to the content so that the page renders correctly at top and bottom.  This means that the scrollbar for the content is the main scroll bar, and therefore cannot be hidden or otherwise mangled 🙂

Unfortunately, the web page search function does not ‘know’ about these floating elements, and so when you type in a search term it will happily scroll the page to ‘reveal’ the searched-for word, but may do so in a way that leaves it underneath the header or footer, and so invisible.

This is not made any easier by the new Mac OS X Lion, where the line up/down scroll arrows have been removed.  Not only can you not fine-adjust the page to reveal those hidden searched-for terms, but also, whilst reading the page, the page-up/page-down scroll does not ‘know’ about the hidden parts and so scrolls a full screen-sized page, missing half the text 🙁

Visibility problems are not confined to the web: there has been a long history of modal dialogue boxes being lost behind other windows (which then often refuse to interact due to the modal dialogue box), windows happily resizing themselves to be partly obscured by the Apple Dock, or even disappearing onto non-existent secondary displays.

It may be that some better model of visibility could be built into CSS/DOM/JavaScript and desktop window managers.  And it may even be that CSS will fix its slightly odd model of floats and layout.  However, I would not want to discourage the use of overlays, transparencies, and other floating elements until this happens.

In the mean time, some thoughts:

  1. restraint — Recall the early days of DTP, when every newsletter sported 20 fonts.  No self-respecting designer would do this nowadays, so use floats, lightboxes and the like with consideration … and if you must have popups or tabs that open on hover rather than when clicked, do make sure it is possible to move your mouse across the page without it feeling like walking through a minefield.
  2. resizing — Do check your page with different window sizes.  Although desktop screens are now almost all at least 1024 × 768, think of laptops and pads, as these are increasingly the major form of access.
  3. defaults — Be aware that, W3C notwithstanding, browsers are different.  At the very minimum, reset all the margins and padding as a first step, so that you are not relying on browser defaults.
  4. testing — Do test (and here I mean technical testing; do user test as well!) with realistic pages, not just a paragraph of lorem ipsum.

And do my sites do this well … ?

With CSS as in all things, with great power …

P.S. Computer scientists will recognise the pun on Dijkstra’s “go to statement considered harmful”, the manifesto of structured programming.  The use of gotos in early programming languages was incredibly flexible and powerful, but, just like CSS, with many concomitant potential dangers for the careless or unwary.  Strangely, computer scientists have had little worry about other equally powerful yet dangerous techniques, not least macro languages (anyone for a spot of TeX debugging?), and Scheme programmers throw around continuations as if they were tennis balls.  It seems as though the humble goto became the scapegoat for a discipline’s sins.  It was interesting when the goto statement was introduced as a ‘new’ feature in PHP 5.3, an otherwise post-goto C-style language; very retro.


(image: xkcd.com)

The value of networks: mining and building

The value of networks or graphs underlies many of the internet (and for that read global corporate) giants.  Two of the biggest, Google and Facebook, harness this in very different ways: mining and building.

Years ago, when I was part of the dot.com startup aQtive, we found there was no effective understanding of internet marketing, and so had to create our own.  Part of this we called ‘market ecology’.  This basically involved mapping out the relationships of influence between different kinds of people within some domain, and then designing families of products that exploited that structure.

The networks we were looking at were about human relationships: for example teachers who teach children, who have other children as friends and siblings, and who go home to parents.  Effectively we were into (too) early social networking1!

The first element of this was about mining — exploiting the existing network of relationships.

However in our early white papers on the topic, we also noted that the power of internet products was that it was also possible to create new relationships, for example, adding ‘share’ links.  That is building the graph.

The two are not distinct: if a product is not able to exploit new relationships it will die, and the mining of existing networks can establish new links (e.g. Twitter suggesting whom to follow).  Furthermore, the creation of links is rarely ex nihilo: an email ‘share’ link uses an existing relationship (a contact in an address book), but brings it into a potentially different domain (e.g. bookmarking a web page).

It is interesting to see Google and Facebook against this backdrop.  Their core strengths are in different domains (web information and social relationships), but moreover they focus differently on mining and building.

Google is, par excellence, about mining graphs (the web).  While it has been augmented and modified over the years, the link structure used in PageRank is what made Google great.  Google also mines tacit relationships, for example using word collocation to understand concepts and relationships, and so in a sense builds from what it mines.
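The idea of ranking by link structure can be illustrated with a toy power-iteration sketch (this is the textbook PageRank recurrence, not Google's production algorithm; the three-page graph and the damping factor are illustrative):

```javascript
// Toy PageRank by power iteration over a small link graph.
// links[i] lists the pages that page i links to.
function pageRank(links, damping = 0.85, iterations = 50) {
  const n = links.length;
  let rank = new Array(n).fill(1 / n);
  for (let it = 0; it < iterations; it++) {
    const next = new Array(n).fill((1 - damping) / n);
    links.forEach((outs, i) => {
      if (outs.length === 0) {
        // Dangling page: share its rank across all pages
        for (let j = 0; j < n; j++) next[j] += damping * rank[i] / n;
      } else {
        for (const j of outs) next[j] += damping * rank[i] / outs.length;
      }
    });
    rank = next;
  }
  return rank;
}

// Page 2 is linked to by both other pages, so it ranks highest
const ranks = pageRank([[1, 2], [2], [0]]);
console.log(ranks.indexOf(Math.max(...ranks))); // 2
```

The point is that rank flows along existing links: the algorithm mines a graph that must already exist, which is exactly the contrast with Facebook drawn below.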

Facebook’s power, in contrast, is in the way it is building the social graph, as hundreds of millions of people tell it about their own social relationships.  As noted, this is not ex nihilo: the social relationships exist in the real world, but Facebook captures them digitally.  Of course, Facebook then mines this graph in order to derive revenue from advertisements and (although people debate this) to attempt to improve the user experience by ranking posts.

Perhaps the greatest power comes in marrying the two.   Amazon does this to great effect within the world of books and products.

As well as a long-standing academic interest, these issues are particularly germane to my research at Talis where the Education Graph is a core element.  However, they apply equally whether the core network is kite surfers, chess or bio-technology.

Between the two, it is probably building that is ultimately most critical.  When one has a graph or network it is possible to find ways to exploit it, but without the network there is nothing to mine.  Page and Brin knew this in the early days of their pre-Google project at Stanford, where a major effort was focused on simply gathering the crawl of the web on which they built their algorithms2.  Now Google is aware that, in principle, others can exploit the open resources on which much of its business depends; its strength lies in its intellectual capital.  In contrast, with a few geographical exceptions, Facebook is the social graph, which is far more defensible, as Google has discovered as it struggles with Google Plus.

  1. See our retrospective about vfridge at last year’s HCI conference and our original web sharer vision.[back]
  2. See the description of this in “In The Plex: How Google Thinks, Works and Shapes Our Lives“.[back]

using the Public Suffix list

On a number of occasions I have wanted to decompose domain names, for example in the URL recogniser in Snip!t.  However, one problem has always been the bit at the end.  It is clear that ‘com’ and ‘ac.uk’ are the principal suffixes of ‘www.alandix.com’ and ‘www.cs.bham.ac.uk’ respectively.  However, while I know that for UK domains it is the last two components that are important (second-level domains), I never knew how to work this out in general for other countries.  Happily, Mozilla and other browser vendors have an initiative called the Public Suffix List, which provides a list of just these critical second-level (and deeper-level) suffixes.

I recently found I needed this again as part of my Talis research.  There is a Ruby library and a Java SourceForge project for reading the Public Suffix List, and an implementation by the DKIM Reputation project that transforms the list into generated tables for C, PHP and Perl.  However, there was nothing for easily and automatically maintaining access to the list.  So I have written a small PHP class to parse, store and access the Public Suffix List.  There is an example in the public suffix section of the ‘code’ pages on this blog, and it also has its own microsite including more examples, documentation and a live demo to try.
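For illustration only, here is the core lookup sketched in JavaScript rather than PHP (with a tiny hand-picked sample of suffixes, and ignoring the wildcard ‘*.’ and exception ‘!’ rules of the real list, which any serious implementation must handle):

```javascript
// Simplified Public Suffix lookup: find the registrable domain,
// i.e. the longest matching public suffix plus one more label.
const suffixes = new Set(['com', 'uk', 'co.uk', 'ac.uk']); // tiny sample

function registrableDomain(host) {
  const labels = host.toLowerCase().split('.');
  // Try the longest possible suffix first, leaving at least one
  // label in front of it for the registered name itself
  for (let i = 1; i < labels.length; i++) {
    const candidate = labels.slice(i).join('.');
    if (suffixes.has(candidate)) {
      return labels.slice(i - 1).join('.');
    }
  }
  return null; // no known suffix
}

console.log(registrableDomain('www.alandix.com'));   // 'alandix.com'
console.log(registrableDomain('www.cs.bham.ac.uk')); // 'bham.ac.uk'
```

Because ‘ac.uk’ is matched before ‘uk’, the UK example correctly keeps two components, which is precisely the country-by-country knowledge the list encodes.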

spice up boring lists of web links – add favicons using jQuery

Earlier today I was laying out lists of links to web resources, initially as simple links:

However, this looked a little boring, so I thought it would be good to add each site’s favicon (the little icon it shows to the left in a web browser), and have a list like this:

  jQuery home page

  Wikipedia page on favicons

  my academic home page

The pages with the lists were being generated, and the icons could have been inserted using a server-side script, but to simplify the server-side code (for speed and maintainability) I put the fetching of favicons into a small JavaScript function using jQuery.  The page is initially written (or generated) with default images, and the script simply fills in the favicons when the page is loaded.
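The heart of such a script can be sketched as below (the class names and jQuery wiring are illustrative assumptions rather than the actual code, which is on the favicon code page): derive each site's conventional favicon location, then swap it into the placeholder image once the page loads.

```javascript
// Derive the conventional favicon location for a link's site.
// (A site can declare a different icon via <link rel="icon">;
// discovering that would need extra work, e.g. a server-side fetch.)
function faviconUrl(pageUrl) {
  const u = new URL(pageUrl);
  return u.origin + '/favicon.ico';
}

// With jQuery, fill in the placeholder images after page load
// ('iconlink' and 'favicon' are assumed class names):
//   $(function () {
//     $('a.iconlink').each(function () {
//       $(this).prev('img.favicon').attr('src', faviconUrl(this.href));
//     });
//   });

console.log(faviconUrl('https://jquery.com/download/'));
// 'https://jquery.com/favicon.ico'
```

Doing this client-side keeps the server-generated markup simple: the page ships with default images, and browsers that run the script get the icons as a progressive enhancement.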

The list above is made by hand, but look at this example page to see the script in action.

You can use this in your own web pages and applications by simply including a few JavaScript files and adding classes to certain HTML elements.

See the favicon code page for a more detailed explanation of how it works and how to use it in your own pages.

If Kodak had been more like Apple

Finally Kodak has crumbled; technology and the market changed, but Kodak could not keep up. Lots of memories of those bright yellow and black film spools, and memories in photographs piled in boxes beneath the bed.

But just imagine if Kodak had been more like Apple.

I’m wondering about the fallout from the Kodak collapse.  I’m not an investor, nor an employee, nor even a supplier, but I have used Kodak products since childhood and I do have 40 years of memories in Kodak’s digital photo cloud.  There is talk of Fuji buying up the remains of the photo cloud service, so it may be that it will re-emerge, but for the time being I can no longer stream my photos to friends’ kTV-enabled TV sets when I visit, nor view them online.

Happily, my Kodak kReader has a cache of most of my photos.  But how many, I’m not sure.  When did I last look at the photos of those childhood holidays, or my wedding?  Will they be in my reader?  I’ll check my kPhone as well.  I’d hate to think I’d lost the snaps of the seaside holiday when my hat blew into the water; I only half remember it, but every time I look at it I remember being told and re-told the story by my dad.

The kReader is only a few months old.  I usually try to put off getting a new one, as they are so expensive, but even after a couple of years the software updates put a strain on the old machines.  I had to give up when my three-year-old model seemed to take about a minute to show each photo.  It was annoying, as this wasn’t just the new photos, but ones I recall viewing instantly on my first photo-reader more than 30 years ago (I can still remember the excitement as I unwrapped it one Christmas; I was 14 at the time, but now children seem to get their first readers when they are four).  The last straw was when the software updates would no longer work on the old processor and all my newer photos were appearing in strange colours.

Some years ago, I’d tried using a Fuji-viewer, which was much cheaper than the Kodak one. In principle you could download your photo cloud collection in an industry standard format and then import them into the Fuji cloud. However, this lost all the notes and dates on the photos and kept timing out unless I downloaded them in small batches, then I lost track of where I was. Even my brother-in-law, who is usually good at this sort of thing, couldn’t help.

But now I’m glad I’ve got the newest model of kReader, as it has eight times the memory of the old one, so hopefully all of my old photos are in its cache.  But oh no, I’ve just thought: has it only cached the things I’ve looked at since I got it?  If so I’ll have hardly anything.  Please, please let the kReader have downloaded all it could.

Suddenly, I remember the days when I laughed a little at my mum still using her reels of old Apple film and the glossy prints that would need scanning to share on the net (not that she did use the net; she’d pop them in the post!).  “I know it is the future”, she used to say, “but I never really trust things I can’t hold”.  Now I just wish I’d listened to her.

Wikipedia blackout and why SOPA whinging gets up my nose

Nobody on the web can be unaware of the Wikipedia blackout, and anyone who hadn’t heard of SOPA or PIPA before will have now.  Few who understand the issues would deny that SOPA and PIPA are misguided and ill-informed; even Apple and other software giants abandoned them, and Obama’s recent statement has effectively scuppered SOPA in its current form.  However, at the risk of annoying everyone, am I the only person who finds some of the anti-SOPA rhetoric at best naive and at times simply arrogant?

[Wikipedia blackout screenshot]

The ignorance behind SOPA and a raft of similar legislation and court cases across the world is deeply worrying.  Only recently I posted about the NLA case in the UK, which creates potential copyright issues when linking on the web, reminiscent of the Shetland Times case nearly 15 years ago.

However, that is no excuse for blinkered views on the other side.

I got particularly fed up a few days ago reading an article “Lockdown: The coming war on general-purpose computing”1 by copyright activist Cory Doctorow, based on a keynote he gave at the Chaos Computer Congress.  The argument was that attempts to limit the internet destroyed the very essence of the computer as a general-purpose device and were therefore fundamentally wrong.  I know that Sweden has just recognised Kopimism as a religion, but still, an argument that relies on the inviolate nature of computation leaves one wondering.

The article also argued that elected members of Parliament and Congress are by their nature layfolk, and so quite reasonably not expert in every area:

And yet those people who are experts in policy and politics, not technical disciplines, still manage to pass good rules that make sense.

Doctorow has trust in the nature of elected democracy for every area from biochemistry to urban planning, but not information technology, which, he asserts, is in some sense special.

Now even as a computer person I find this hard to swallow, but what would a geneticist, physicist, or even a financier using the Black-Scholes model make of this?

Furthermore, Congress is chastised for finding unemployment more important than copyright, and the UN for giving first regard to health and economics — of course, any reasonable person is expected to understand this is utter foolishness.  From what parallel universe does this kind of thinking emerge?

Of course, Doctorow takes an extreme position, but the Electronic Frontier Foundation’s position statement, which Wikipedia points to, offers no alternative proposals and employs scaremongering arguments more reminiscent of the tabloid press, in particular the claim that:

venture capitalists have said en masse they won’t invest in online startups if PIPA and SOPA pass

This turns out to be a Google-sponsored report2, and refers to “digital content intermediaries (DCIs)”, those offering “search, hosting, and distribution services for digital content”, not startups in general.

When this is the quality of argument being mustered against SOPA and PIPA, is it any wonder that Congress is influenced more by the barons of the entertainment industry?

Obviously some, such as Doctorow and more fundamental anti-copyright activists, would wish to see a completely unregulated net.  Indeed, this is starting to be the case de facto in some areas, where covers are distributed pretty freely on YouTube without apparently leading to a collapse of the music industry, and offering new bands much easier ways to make an initial name for themselves.  Maybe in 20 years’ time Hollywood will have withered and we will live off a diet of YouTube videos :-/

I suspect most of those opposing SOPA and PIPA do not share this vision; indeed, Google has been paying half a million per patent in recent acquisitions!

I guess the idealist position sees a world of individual freedom, but it is not clear that is where things are heading.  In many areas online distribution has already resulted in a shift of power from the traditional producers, the many record companies and book publishers (often relatively large companies themselves), to often a single mega-corporation in each sector: Amazon, Apple’s iTunes. For the latter this was in no small part driven by the need for the music industry to react to widespread filesharing.  To be honest, however bad the legislation, I would rather trust myself to elected representatives than to unaccountable multinational corporations3.

If we do not wish to see poor legislation passed we need to offer better alternatives, both in terms of the law of the net and in how we reward and fund the creative industries.  Maybe the BBC model is best: high-quality entertainment funded by the public purse and then distributed freely.  However, I don’t see the US Congress nationalising Hollywood in the near future.

Of course copyright and IP is only part of a bigger picture, where the net is challenging traditional notions of national borders and sovereignty.  In the UK we have seen recent cases where Twitter was used to undermine court injunctions.  The injunctions were in place to protect a few celebrities, so were ‘fair game’ anyway, and elicited little public sympathy.  However, the Leveson Inquiry has heard evidence from the editor of the Express defending his paper’s suggestion that the McCanns may have killed their own daughter; we expect and enforce standards in the print media (the Express paid £500,000 after a libel case), so would we expect less if the Express hosted a parallel news website in the Cayman Islands?

Whether it is privacy, malware or child pornography, we do need to think of ways to limit the excesses of the web whilst preserving its strengths.  Maybe the solution is more international agreements, hopefully not yet more extra-territorial laws from the US4.

Could this day without Wikipedia be not just a call to protest, but also an opportunity to envision what a better future might be?

  1. blanked out today, see Google cache[back]
  2. By Booz&Co, which I thought at first was a wind-up, but appears to be a real company![back]
  3. As I write this, I am reminded of the  corporation-controlled world of Rollerball and other dystopian SciFi.[back]
  4. How come is there more protest over plans to shut out overseas web sites than there is over unmanned drones performing extra-judicial executions each week?[back]

tread lightly — controlling user experience pollution

When thinking about usability or user experience, it is easy to focus on the application in front of us, but the way it impacts its environment may sometimes be far more critical. However, designing applications that are friendly to their environment (digital and physical) may require deep changes to the low-level operating systems.

I’m writing this post effectively ‘offline’ into a word processor for later upload. I sometimes do this as I find it easier to write without the distractions of editing within a web browser, or because I am physically disconnected from the Internet. However, now I am connected, and indeed I can see that I am connected as an FTP file upload is progressing; it is just that everything else network-related is stalled.

The reason that the FTP upload is ‘hogging’ the network is, I believe, due to a quirk in the UNIX scheduling system, which was, paradoxically, originally intended to improve interactivity.

UNIX, which sits underneath Mac OS, is a multiprocessing operating system running many programs at once. Each process has a priority, called its ‘niceness’, which can be set explicitly, but is also tweaked from moment to moment by the operating system. One of the rules for tweaking it is that if a process is IO-bound, that is, if it is constantly waiting for input or output, then its niceness is decreased, meaning that it is given higher priority.
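As a rough illustration, a process can read and adjust its own niceness from user code; a minimal Python sketch (os.nice is POSIX-only, and an unprivileged process may only make itself nicer, i.e. lower priority):

```python
import os

# Read the current niceness: an increment of 0 leaves it unchanged
# and returns the present value (conventionally 0 by default).
start = os.nice(0)
print(f"current niceness: {start}")

# Politely request LOWER priority by adding 5 to our niceness.
# Decreasing niceness (raising priority) would need privileges.
new = os.nice(5)
print(f"after os.nice(5): {new}")
assert new == start + 5
```

The OS's own moment-to-moment tweaking described above happens on top of this explicit value, which is exactly why an IO-bound process can end up favoured without anyone asking for it.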

The reason for this rule is partly to enhance interactive performance in the old days of command line interfaces; an interactive program would spend lots of time waiting for the user to enter something, and so its priority would increase meaning it would respond quickly as soon as the user entered anything. The other reason is that CPU time was seen as the scarce resource, so that processes that were IO bound were effectively being ‘nicer’ to other processes as they let them get a share of the precious CPU.

The FTP program is simply sitting there shunting out data to the network, so it is almost permanently blocked waiting for the network, as it can read from the disk faster than the network can transmit data. This means UNIX regards it as ‘nice’ and ups its priority. As soon as the network clears sufficiently, the FTP program is rescheduled, puts more into the network queue, and reads the next chunk from disk until the network is again full to capacity. Nothing else gets a chance: no web, no email, not even a network trace utility.

I’ve seen the same before with a database server on one of Fiona’s machines — all my fault. The MySQL manual suggests that you disable indices before large bulk updates (e.g. ingesting a file of data) and then re-enable them once the update is finished, as indexing is more efficient done over lots of data than one record at a time. I duly did this and forgot about it until Fiona noticed something was wrong on the server and web traffic had ground to a near halt. When she opened a console on the server, she found that it seemed quiet, with very little CPU load at all, and was puzzled until I realised it was my indexing. Indexing requires a lot of reading and writing of data to and from disk, so MySQL became IO-bound, was given higher priority, and as soon as the disk was free it was rescheduled and hit the disk once more … just as FTP is now hogging the network, MySQL hogged the disk and nothing else could read or write. Of course MySQL’s own performance was fine, as it internally interleaved queries with indexing; it was just everything else on the system that failed.
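The bulk-load pattern in question can be sketched with Python’s stdlib sqlite3 standing in for MySQL (where the manual’s idiom is ALTER TABLE … DISABLE KEYS / ENABLE KEYS rather than an explicit drop and rebuild); step 3 is the IO-heavy rebuild that hogged the disk:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (id INTEGER, value REAL)")
con.execute("CREATE INDEX idx_value ON readings (value)")

rows = [(i, i * 0.5) for i in range(10_000)]

# 1. drop the index so each insert avoids a per-row index update ...
con.execute("DROP INDEX idx_value")
# 2. ... ingest the bulk data ...
con.executemany("INSERT INTO readings VALUES (?, ?)", rows)
# 3. ... then rebuild the index in one pass over all the data.
# On a busy server this pass is the disk-saturating step.
con.execute("CREATE INDEX idx_value ON readings (value)")

count = con.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 10000
```

The database’s own queries stay fast throughout, which is why the console looked quiet; the cost falls entirely on every other process competing for the same disk.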

These are hard scenarios to design for. I have written before (“why software need never hang“) about the way application designers do not think sufficiently about potential delays due to slow networks, or broken connections. However, that was about the applications that are suffering. Here the issue is not that the FTP program is badly designed for its delays, it is still responding very happily, just that it has had a knock on effect on the rest of the system. It is like cleaning your sink with industrial bleach — you have a clean house within, but pollute the watercourse without.

These kinds of issues are not related solely to network and disk: any kind of resource is limited, and profligacy causes damage in the digital world as much as in the physical environment.

Some years ago I had a Symbian smartphone, but it proved unusable as its battery life rarely exceeded 40 minutes from full charge. I thought I had a duff battery, but later realised it was because I was leaving applications on the phone ‘open’. I would go to the address book, look up a number, and that was that; I then maybe turned the phone off or switched to something else without ‘exiting’ the address book. I was treating the phone like every previous phone I had used, but this one was different: it had a ‘real’ operating system, so opening the address book launched the address book application, which then kept on running — and using power — until it was explicitly closed, a model that is maybe fine for permanently plugged-in computers, but disastrous for a mobile phone.

When the early iPhones came out, iOS was criticised for being single-threaded, that is, not having lots of things running in the ‘background’. However, this undoubtedly helped its battery life. Now, with newer versions of iOS, this has changed and there are lots of apps running at once, and I have noticed the battery life reducing: is that simply the battery wearing out with age, or the effect of all those apps running?

Power is of course not just a problem for smartphones, but for any laptop. I try to close down applications on my Mac when I am working without power, as I know some programs just eat CPU when they are apparently idle (yes, Firefox, it’s you I’m talking about). And from an environmental point of view, lower power consumption when connected would also be good. My hope was that Apple would take the lessons learnt in early iOS to change the nature of their mainstream OS, but sadly they succumbed to the pressure to make iOS a ‘proper’ OS!

Of course the FTP program could try to be friendly, perhaps deliberately throttling its network activity when it is not the selected window. But then the 4-hour upload would take 8 hours: instead of the 20 minutes left at this point, I’d be looking at another 4 hours and 20 minutes, and I’d be complaining about that.
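Such self-throttling is easy to sketch at the application level. A minimal Python rate limiter, assuming a hypothetical send function that pushes one chunk onto the network (everything here is illustrative, not the FTP client’s actual code):

```python
import time

def throttled_send(chunks, rate_bytes_per_s, send):
    """Send chunks, sleeping as needed to stay under rate_bytes_per_s.

    'send' is any callable that pushes one chunk to the network
    (hypothetical here); the throttle is purely time-based.
    """
    start = time.monotonic()
    sent = 0
    for chunk in chunks:
        send(chunk)
        sent += len(chunk)
        # If we are ahead of the byte budget, sleep until it catches up.
        due = sent / rate_bytes_per_s
        elapsed = time.monotonic() - start
        if due > elapsed:
            time.sleep(due - elapsed)

# Usage: limit a fake 10 KB upload to ~50 KB/s, so it takes ~0.2 s.
out = []
data = [b"x" * 1024 for _ in range(10)]
t0 = time.monotonic()
throttled_send(data, 50_000, out.append)
print(len(out), "chunks sent")
```

The design trade-off is exactly the one above: the throttle has no idea whether anyone else actually wants the network, so it pays the slowdown whether or not it is needed.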

The trouble is that there needs to be better communication, more knowledge shared, between application and operating system. I would like FTP to use all the network capacity that it can, except when I am interacting with some other program. Either FTP needs to say to the OS “hey here’s a packet, send it when there’s a gap”1, or the OS needs some way for applications to determine current network state and make decisions based on that. Sometimes this sort of information is easily available, more often it is either very hard to get at or not available at all.

I recall years ago, when the internet was still mainly reached through pay-per-minute dial-up connections. You could set your PC to dial automatically whenever the internet was needed. However, some programs, such as chat clients, would periodically check with a central server to see if there was activity, and this would cause the PC to dial up the ISP. If you were lucky the PC also had an auto-disconnect after a period of inactivity; if you were not lucky the PC would connect at 2am and by the morning you’d find yourself with a phone bill more than your week’s wages.

When we were designing onCue at aQtive, we wanted to be able to connect to the Internet when it was available, but avoid bankrupting our users. Clearly somewhere in the TCP/IP stack, the layers of code over the network, at some level deep down it knew whether we were connected. I recall we found a very helpful function in the Windows API called something like “isConnected”2. Unfortunately, it worked by attempting to send a network packet and returning true if it succeeded and false if it failed. Of course sending the test packet caused the PC to auto-dial …
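For what it’s worth, a connectivity probe need not generate any traffic at all: ‘connecting’ a UDP socket merely records the peer address locally, so it can ask the OS a routing question without a single packet leaving the machine. A sketch of the idea (this is a generic trick, not the Windows API in question):

```python
import socket

def probe_route(host="192.0.2.1"):
    """Return the local IP the OS would use to reach host, or None.

    connect() on a UDP socket sends nothing; it only asks the kernel
    to pick a route and source address. So unlike a 'send a test
    packet' probe, it cannot trigger an auto-dial. 192.0.2.1 is a
    reserved TEST-NET address, used purely as a routing question.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((host, 9))
        return s.getsockname()[0]
    except OSError:
        return None  # no route at all: effectively offline
    finally:
        s.close()

print(probe_route())
```

This only reveals whether a route exists, not whether it actually works end to end, which is part of why ‘am I connected?’ was, and remains, such a slippery question for applications to answer.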

And now there is just 1 minute and 53 seconds left on the upload, so time to finish this post before I get on to garbage collection.

  1. This form of “send when you can” would also be useful in cellular networks, for example when syncing photos.[back]
  2. I had a quick peek, and found that Windows CE has a function called InternetGetConnectedState.  I don’t know if this works better now.[back]