databases as people think – dabble DB

I was just looking at Enrico Bertini’s blog Visuale for the first time in ages, in particular at his December entry on DabbleDB & Magic/Replace. Dabble DB provides web-based databases and in some ways occupies similar ground to Freebase, Swivel or even Google Docs spreadsheets – all ways to share data of different forms on/through the web.

The USP for Dabble DB amongst other online data-sharing apps is that it appears to really be a complete database solution online … and its USP amongst conventional databases is the way they seem to have really thought about real use.  This focus on real use by ordinary users includes dynamically altering the structure of the data as you gradually understand it more.  The model they have is that you start with plain table data from a spreadsheet or other document and gradually add structure, as opposed to the “first analyse and then enter” model of traditional DBs.

As I read Enrico’s blog I remembered that he had mailed me about the ‘magic/replace’ feature ages ago.  This lets you tidy up data during import (but apparently not data already imported … wonder why?) using a ‘by example’ approach, and is a really nice example of all that ‘programming by example’ and related work that was so hot 15 years ago eventually finding its way into real products.

The downsides to Dabble DB are that editing is via forms only (it is often so much easier to enter data in a spreadsheet view), the API is quite limited, and, while they have a ‘Dabble DB Commons’ for public data (rather like Swivel), there is no directory or other way to see what people have put up 🙁

I was particularly hoping the API was better, as it would have been nice to link it into my web version of Query-by-Browsing, or even to integrate it with the Query-through-Drilldown approach for constructing complex table joins that Damon Oram implemented more recently.

In general, while the DB and (many of the) UI features are strong, it is not really looking outwards to creating shared linked data (in the broadest sense of the term, not just pure SemWeb-world linked data) … so there is still room for the absolute killer shared-data app!

persistent URLs … pleeeease

I was just clicking through a link from my own 2000 publication list to an ACM paper1, and the link is broken!  So what is new? The web is full of broken links … but I hate to find one on my own site.  The URL appears to be one that is semantic (not one of those CMS “?nodeid=3179” web pages):

http://www.acm.org/pubs/citations/journals/tochi/2000-7-3/p285-dix/

At the time I used the link it was valid; however, the ACM have clearly changed their structure, as this kind of material is all now in the ACM Digital Library, but they did not leave permanent redirects in place.  This would be forgivable if the URL were for a transient news item, but TOCHI is intended to be an archival publication … and yet the URL is not regarded as persistent!  If the ACM, probably the largest professional computing organisation in the world, cannot get this right, what hope is there for any of us?

I will fix the link, and nowadays I tend to use ACM’s DOI link as this is likely to be more persistent; however, I can do this only because it is my own site.

So, if you are updating a site structure yourself … please, Please, PLeeeeEASE make sure you keep all those old links alive2!

  1. BTW the paper is:

    Dix, A., Rodden, T., Davies, N., Trevor, J., Friday, A., and Palfreyman, K. 2000. Exploiting space and location as a design framework for interactive mobile systems. ACM Trans. Comput.-Hum. Interact. 7, 3 (Sep. 2000), 285-321. DOI= http://doi.acm.org/10.1145/355324.355325

    and it is all about physical and virtual location … hmmm[back]

  2. For the ACM or other large sites this would be done using some data-driven approach, but if you are simply restructuring your own site and you are using an Apache web server, just add a .htaccess file to your web directory and put Redirect directives in it mapping old URLs to new ones. For example, for the paper on the ACM site:

    Redirect /pubs/citations/journals/tochi/2000-7-3/p285-dix/ http://doi.acm.org/10.1145/355324.355325

    [back]

just hit search

For years I have heard anecdotal stories of how users are increasingly unaware of the URL itself (and certainly of the term – ‘web address’ is sometimes better).  I recall having a conversation at a university meeting (non-computing) and it soon became obvious that the term ‘browser’ was also not one they were familiar with, even though they of course used one daily.  I guess, like the mechanics of the car engine, the mechanics of the web are invisible.

I came across the Google Zeitgeist 2008 page that analyses the popular and the rising search terms of 2008.  The rising ones reveal things in the media – “sarah palin” way up there above “obama” in the global stats … if only Google searches were votes!  However, the ‘most popular’ searches reveal longer-term habits.  For the UK the 10 most popular searches are:

  1. facebook
  2. bbc
  3. youtube
  4. ebay
  5. games
  6. news
  7. hotmail
  8. bebo
  9. yahoo
  10. jobs

Some of these terms – ‘games’, ‘news’, and ‘jobs’ (no Steve, not you) – are generic categories … which suggests that people approach these from the search box, not a portal.  However, of these top 10, seven are simply domain names of popular sites.  Instead of typing these into the address bar (which certainly on Firefox autocompletes if I type anything I’ve visited before), many users just Google them (and I’m sure the same is true for LiveSearch and others).

I was told some years ago that AOL browsers swapped the relative sizes (and locations, I think) of the built-in search box and address bar on the assumption that their users rarely typed in URLs (although I knew of AOL users who accidentally typed URLs into the search box).  I also recall the company that used to sell net keywords that were used by Netscape (and possibly others) if you entered terms rather than a URL into the address bar.

… of course if I try that now … Firefox redirects me through Google “I’m Feeling Lucky” … of course

Incidentally I came to this as I was tracing back the source of the, now shown to be incorrect, Sunday Times news story that said two Google searches used the same electricity as boiling an electric kettle.  This got challenged in a TechCrunch blog, refuted by Google, and was effectively (but not explicitly) retracted in a subsequent Times Online item.  The source turns out to be a junior Harvard physicist, Alex Wissner-Gross, whose own source was a blog by Rolf Kersten, one of the Sun Green Team (Sun the computer manufacturer, not the Sun the newspaper!), so actually not an unreasonable basis.

In fact Rolf Kersten’s estimate, which was prepared for a talk in 2007, seemed to be based on sensible calculations, although he has recently posted a blog saying the figure was out by a factor of 35 … yes, it actually takes 70 Google searches to boil that kettle.  Looking deeper, the cause of the discrepancy appears to be the figure he used for the number of Google searches per day.  He took 2005 data about the size of the Google server farm and used a figure of 40 million searches per day.  Although Google did not publish their full workings in their response, it is clearly this figure of 40 million searches that was way too low for 2005, as a Feb 2001 Google press release quoted 60 million searches per day in 2000.  Actually, with a moment’s reflection it is clear that 40 million hits per day (around 500 per second) would hardly have justified a major server farm and the true figure is clearly in the billions.  However, it is surprisingly difficult to find that figure, and if you Google “google searches per day” you simply find lots of people asking the same question.  In fact, it was through looking for further Google press releases to find a more up-to-date figure that I got to the Zeitgeist page!
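
For what it is worth, the arithmetic is easy to check; here is a quick sketch of my own back-of-envelope working, using only the figures quoted above (not Google’s or Kersten’s actual calculations):

    // Back-of-envelope check of the figures quoted above (my arithmetic, not Google's workings)
    var searchesPerDay = 40e6;                          // the 2005 figure Kersten used
    var searchesPerSecond = searchesPerDay / (24 * 60 * 60);
    console.log(Math.round(searchesPerSecond));         // 463 – the "around 500 per second" above

    // If that daily figure was roughly 35 times too low, the energy attributed to each
    // search shrinks by the same factor, so 2 searches per kettle becomes 2 x 35 = 70.
    console.log(2 * 35);                                // 70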

Eamonn Fitzgerald’s Rainy Day blog nicely lays out the timeline of this story and sees it as a triumph of the power of media consumers to challenge the authority of the press, due to what Jay Rosen refers to as ‘audience atomization’.  Fitzgerald also sees the paradox that the story itself was sourced from the somewhat broken sources on the internet; in the past the press would have perhaps used more authoritative sources … and, as I noted a couple of years ago at a Memories for Life panel at the British Library, the move from BBC to YouTube could be read as mass democratisation … or simply signal the end of history.

There is another lesson though, one that I picked up in a blog “keeping track of history” not long after the Memories for Life meeting: just how hard it is to find pretty straightforward information on the web.  At that point I was after Tony Blair’s statement about the execution of Saddam Hussein; in this case I was trying to find the number of Google searches per day.  Neither is secret, proprietary or obscure, but both were difficult to track down.

… but we still trust that single hit of a search button

Backwards compatibility on the web

I just noticed the following excerpt in the web page describing a rich-text editing component:

Supported Browsers (Confirmed)
… list …

Note: This list is now out of date and some new browsers such as Safari 3.0+ and Opera 9.5+ suffer from some issues.
(Free Rich Text Editor – www.freerichtexteditor.com)

In odd moments I have recently been working on bringing vfridge back to life.  Partly this is necessary because the original Java Servlet code was such a pig1, but partly because the dynamic HTML code had ‘died’. To be fair, vfridge was produced in the early days of DHTML, and so one might expect things to change between then and now. However, reading the above web page about a component produced much more recently, I wonder why it is that on the web, and elsewhere, we are so bad at being backward compatible … and I recall my own ‘pain and tears‘ struggling with broken backward compatibility in Office 2008.

I’d started looking at current rich text editors after seeing Paul James’ “Small, standards compliant, Javascript WYSIWYG HTML control“.  Unlike many of the controls that seem to produce MS-like output with <font> tags littered randomly around, Paul’s control emphasises standards compliance in HTML, and uses the emerging de facto designMode2 support in browsers.
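
For those who have not met it, designMode turns a whole document (typically an embedded iframe) into an editable surface, with formatting applied via execCommand.  A minimal sketch of the idea is below – illustrative only, not taken from Paul James’ control or any other editor’s source, and assuming a page containing an empty <iframe id="editor"> and a toolbar button with id "boldBtn":

    // Minimal sketch of the designMode approach (not any particular editor's code)
    window.onload = function () {
      var frame = document.getElementById('editor');              // assumed: <iframe id="editor">
      var doc = frame.contentDocument || frame.contentWindow.document;
      doc.designMode = 'on';                                       // the iframe document becomes editable

      // formatting is applied via execCommand, e.g. wired to a toolbar button;
      // note that different browsers emit different markup (<b>, <strong>, styled spans …)
      document.getElementById('boldBtn').onclick = function () {   // assumed: a button with id "boldBtn"
        doc.execCommand('bold', false, null);
      };
    };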

This seems good, but one wonders how long these standards will survive, especially the de facto one, given past history; will Paul James’ page have a similar notice in a year or two?

The W3C approach … and a common institutional one … is to define unique standards that are (intended to be) universal and unchanging, so that if we all use them everything will still work in 10,000 years’ time.  This is a grand vision, but it only works if the standards are sufficiently:

  1. expressive so that everything you want to do now can be done (e.g. not deprecating the use of tables for layout in the absence of design grids leading to many horrible CSS ‘hacks’)
  2. omnipotent so that everyone (MS, Apple) does what they are told
  3. simple so that everyone implements it right
  4. prescient so that all future needs are anticipated before multiple differing de facto ‘standards’ emerge

The last of those is the reason why vfridge’s DHTML died: we wanted rich client-side interaction when the stable standards were not much beyond transactions; and this looks like the reason many rich-text editors are struggling now.

A completely different approach (requiring a degree of humility from standards bodies) would be to accept that standards always fall behind practice, and design this into the standards themselves.  There need to be simple (and so consistently supported) ways of specifying:

  • which versions of which browsers a page was designed to support – so that browsers can be backward or cross-browser compliant
  • alternative content for different browsers and versions … and no, the DTD does not do this, as different versions of browsers have different interpretations of, and bugs in, different HTML variants.  W3C groups looking at cross-device mark-up already have work in this area … although it may fail the simplicity test.  (In the absence of such declarations, pages fall back on run-time checks – see the sketch below.)
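
To illustrate the gap, today pages have to make these alternative-content decisions themselves at run time.  A small sketch of the sort of feature detection a rich-text control might do follows – purely illustrative, with the two set-up functions being hypothetical stand-ins rather than anything from a real editor:

    // Run-time feature detection standing in for the missing declarative mechanism
    // (illustrative only – the two set-up functions are hypothetical stand-ins)
    function initWysiwygEditor() { /* wire up the designMode-based control */ }
    function usePlainTextarea()  { /* degrade gracefully to an ordinary <textarea> */ }

    function richTextAvailable() {
      // designMode / contentEditable support differs between browsers and versions
      return ('designMode' in document) ||
             (document.body && ('contentEditable' in document.body));
    }

    if (richTextAvailable()) {
      initWysiwygEditor();
    } else {
      usePlainTextarea();
    }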

Perhaps more problematically, browsers need to commit to being backward compatible where at all possible … I am thinking especially of the way IE fixed its own broken CSS implementation, but did so in a way that broke all the standard hacks that had been developed to work around the old bugs!  Currently this would mean fossilising old design choices and even old bugs, but if web-page meta information specified the intended browser version, the browser could selectively operate on older pages in ways compatible with the older browsers whilst offering improved behaviour for newer pages.

  1. The vfridge Java Servlets used to run fine, but over time got worse and worse; as machines got faster and JVM versions improved with supposedly faster byte-code compilers, strangely the same code got slower and slower until it now only produces results intermittently … another example of backward compatibility failing.[back]
  2. I would give a link to designMode except that I notice everyone else’s links seem to be broken … presumably MSDN URLs are also not backwards compatible 🙁 Best bet is just to Google “designMode” [back]

web ephemera and web privacy

Yesterday I was twittering about a web page I’d visited on the BBC1 and the tweet also became my Facebook status2.  Yanni commented on it, not because of the content of the link, but because he noticed the ‘is.gd’ url was very compact.  Thinking about this has some interesting implications for privacy/security and the kind of things you might want to use different url shortening schemes for, but it also led me to develop an interesting time-wasting application, ‘LuckyDip‘ (well, if ‘develop’ is the right word, as it was just 20-30 mins hacking!).

I used the ‘is.gd’ shortening because it was one of three schemes offered by twirl, the twitter client I use.  I hadn’t actually noticed that it was significantly shorter than the others or indeed tinyurl, which is what I might have thought of using without twirl’s interface.

Here is the url of this blog <http://www.alandix.com/blog/> shortened by is.gd and three other services:

snurl:   http://snurl.com/5ot5k
twurl:  http://twurl.nl/ftgrwl
tinyurl:  http://tinyurl.com/5j98ao
is.gd:  http://is.gd/7OtF

The is.gd link is small for two reasons:

  1. ‘is.gd’ is about as short as you can get with a domain name!
  2. the ‘key’ bit after the domain is only four characters as opposed to five (snurl) or six (twurl, tinyurl)

The former is just clever domain choice; it is hard to get something short at all, let alone short and meaningful3.

The latter, however, is a result of a design choice at is.gd.  The is.gd urls are allocated sequentially: the ‘key’ bit (7OtF) is simply an encoding of the sequence number that was allocated.  In contrast, tinyurl seems to do some sort of hash, either of the address or maybe of a sequence number.
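
My guess is that the scheme looks something like the sketch below – purely illustrative of sequential ‘base 62’ encoding, and certainly not is.gd’s actual code:

    // Sequential 'base 62' keys of the kind is.gd appears to use (a guess, not their code)
    var CHARS = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';

    function encodeKey(n) {                  // sequence number -> short key like '7OtF'
      var key = '';
      do {
        key = CHARS.charAt(n % 62) + key;
        n = Math.floor(n / 62);
      } while (n > 0);
      return key;
    }

    console.log(encodeKey(1000000));         // '4c92' – only as long as the counter demands

    // Four such characters cover 62^4 (about 14.8 million) urls; a six-character key drawn
    // from the same alphabet ranges over 62^6 (about 57 billion), which is why a random
    // hash-style key is far less likely to land on a real page.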

The side effect of this is that if you simply type in a random key (below the last allocated sequence number) for an is.gd url, it will be a valid url.  In contrast, the space of tinyurl is bigger, so ‘in principle’ only about one in a hundred keys will represent real pages … now I say ‘in principle’ because, experimenting with tinyurl, I find every six-character sequence I type as a key gets me to a valid page … so maybe they do some sort of ‘closest’ match.

Whatever url shortening scheme you use, by its nature the shorter url will be less redundant than a full url – more ‘random’ permutations will represent meaningful items.  This is a natural result of any ‘language’: the more concise you are, the less redundant the language.

At a practical level this means that if you use a shortened url, it is more likely that someone typing in a random is.gd (or tinyurl) key will come across your page than if they just type a random url.  Occasionally I upload large files I want to share to semi-private urls – ones that are publicly available, but not linked from anywhere.  Because they are not linked they cannot be found through search engines, and because the urls are long it would be highly unlikely that someone typing randomly (or mistyping) would find them.

If however, I use url shortening to tell someone about it, suddenly my semi-private url becomes a little less private!

Now of course this only matters if people are randomly typing in urls … and why would they do such a thing?

Well a random url on the web is not very interesting in general, there are 100s of millions and most turn out to be poor product or hotel listing sites.  However, people are only likely to share interesting urls … so random choices of shortened urls are actually a lot more interesting than random web pages.

So, just for Yanni, I spent a quick 1/2 hour4 and made a web page/app ‘LuckyDip‘.  This randomly chooses a new page from is.gd every 20 seconds – try it!


successive pages from LuckyDip
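
If you are curious, the whole thing amounts to little more than the sketch below – a reconstruction of the idea rather than LuckyDip’s actual source (the real page was a quick frames hack; here I assume an <iframe id="lucky"> and a rough upper bound on how many is.gd keys have been allocated):

    // A reconstruction of the LuckyDip idea, not its actual source
    var CHARS = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
    var MAX_ALLOCATED = 10000000;        // assumption: somewhere below is.gd's current counter

    function randomKey() {
      var n = Math.floor(Math.random() * MAX_ALLOCATED);
      var key = '';
      do {                                // same base-62 encoding as the sketch above
        key = CHARS.charAt(n % 62) + key;
        n = Math.floor(n / 62);
      } while (n > 0);
      return key;
    }

    function nextPage() {
      // because is.gd keys are allocated sequentially, almost any key below the
      // counter resolves to a page someone has shared
      document.getElementById('lucky').src = 'http://is.gd/' + randomKey();  // assumed: <iframe id="lucky">
    }

    setInterval(nextPage, 20000);         // a new random page every 20 seconds
    nextPage();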

Some of the pages are in languages I can’t read, occasionally you get a broken link, and the ones that are readable are … well … random … but oddly compelling.  They are not the permanently interesting pages you choose to bookmark for later, but the odd page you want to send to someone … often trivia, news items, even (given is.gd is in a twitter client) the odd tweet page on the twitter site.  These are not like the top 20 sites ever, but the ephemera of the web – things that someone at some point thought worth sharing, like overhearing the odd raised voice during a conversation in a train carriage.

Some of the pages shown are map pages, including ones with addresses on … it feels odd, voyeuristic, web curtain twitching – except you don’t know the person, the reason for the address; so maybe more like sitting watching people go by in a crowded town centre, a child cries, lovers kiss, someone’s newspaper blows away in the wind … random moments from unknown lives.

In fact most things we regard as private are not private from everyone.  It is easy to see privacy like an onion skin with the inner sanctum, then those further away, and then complete strangers – the further away someone is from ‘the secret’, the more private something is.  This is certainly the classic model in military security.  However, think further and there are many things you would be perfectly happy for a complete stranger to know, but maybe not those a little closer: your work colleagues, your commercial competitors.  The onion sort of reverses; apart from those that you explicitly want to know, the further out of the onion, the safer it is.  Of course this can go wrong sometimes, as Peter Mandelson found out chatting to a stranger in a taverna (see BBC blog).

So I think LuckyDip is not too great a threat to the web’s privacy … but do watch out what you share with short urls … maybe the world needs a url lengthening service too …

And as a postscript … last night I was trying out the different shortening schemes available from twirl, and accidentally hit return, which created a tweet with the ‘test’ short url in it.  Happily you can delete tweets, and so I thought I had eradicated the blunder unless any twitter followers happened to be watching at that exact moment … but I forgot that my twitter feed also goes to my Facebook status and that deleting the tweet on twitter did not remove the status, so overnight the slip remained my Facebook status and at least one person noticed.

On the web nothing stays secret long, and if anything is out there, it is there for ever … and will come back to haunt you someday.

  1. This is the tweet “Just saw http://is.gd/7Irv Sad state of the world is that it took me several paragraphs before I realised it was a joke.”[back]
  2. I managed to link them up some time ago, but cannot find again the link on twitter that enabled this, so would be stuck if I wanted to stop it![back]
  3. anyone out there registering Bangladeshi domains … if ‘is’ is available!![back]
  4. yeah, it should have been less, but I had to look up how to access frames in javascript, etc.[back]

web of data practitioners days

I am at the Web of Data Practitioners Days (WOD-PD 2008) in Vienna, a mixture of talks and guided hands-on sessions.  I presented the first half of a session on “Using the Web of Data” this morning with a focus (surprise) on the end user. I have learnt loads about some of the applications out there – in fact Richard Cyganiak .  There was an interesting talk from a guy at the BBC about the way they are using RDF to link the currently disconnected parts of their web presence and also their archives.  Jana Herwig from Semantic Web Company has been live blogging the event.

Being here has made me think about the different elements of SemWeb technology and how they individually contribute to the ‘vision’ of Linked Data.  The aim is to be able to link different data sources together.  For this, having some form of shared/public vocabulary or ‘data definitions’ is essential, as is some relatively uniform way of accessing data.  However, the implementation using RDF, or the use of SPARQL etc., seems to be secondary – useful for some data, but not for other forms where tabular data may be more appropriate.  Linking these different representations together seems far more important than specific internal representations.  So I am wondering whether there is a route to linked data that allows a more flexible interaction with existing data and applications as well as ‘sucking’ this data into the SemWeb.  Can the vocabularies generated for the SemWeb be used as meta information for other forms of information, and can query/access protocols be designed that leverage this but include a broader range of data types?

From raw experience to personal reflection

Just a week to go until the deadline for the workshop on Designing for Reflection on Experience that Corina and I are organising at CHI. Much of the time discussions of user experience are focused on trivia, and even social networking often appears to stop at superficial levels.  While throwing a virtual banana at a friend may serve to maintain relationships, and is perhaps less trivial than it at first appears, there is still little support for deeper reflection on life, with the possible exception of the many topic-focused chat groups.  However, in researching social networks we have found, amongst the flotsam, clear moments of poignancy and conflict, traces of major life events … even divorce by Facebook. Too much navel gazing would not be a good thing, but some attention to expressing deeper issues to others and to ourselves seems overdue.

Comics and happy problem solving

I am in Eindhoven doing CSCW, silly ideas and other things with the USI students here. On the bookshelf here is Scott McCloud’s “Understanding Comics”. I picked this up last year and couldn’t put it down until I had read it all. There is another book on the shelves this year, “Reinventing Comics”, and I daren’t pick it up until I’ve done all the work I want to today!

Understanding Comics is both an apologetic for comics as an art form and an exploration of what makes a comic a comic and how comics manage to captivate and give a sense of narrative and action through what are basically static images. As well as being a good read about comics and about art, there seem to be many lessons there for other forms of narrative and animation, especially on the web.

As far as I can see (without starting to read it and not being able to stop), Reinventing Comics seems to be about the way online delivery through the web is giving new opportunities for comic art … but maybe when I finish everything today I will find out.

Less graphic and less fun, but no less fascinating, I have been dipping into chapters of “The Psychology of Problem Solving“, which was also sitting on the USI shelves. I was particularly enthralled by descriptions of experiments where subjects were asked to accomplish divergent thinking tasks whilst either pushing their palms upwards from under a table, or pushing down from on top. The former, a positive ‘come to me’ gesture, elicited more diverse ideas than the latter, negative ‘go away’ gesture, even though the only difference was the muscle groups in tension. I’ve seen other research that shows how our brains monitor our body state to ‘see how we feel’ (like smiling therapy), but this was one of the most subtle and conclusive examples.

During the week I have had the USI students work through a design brief, starting with silly ideas then moving through structured analysis to good ideas. Perhaps I should have had them pushing up on tables in the first part and down in the second?

PPIG2008 and the twenty-first century coder

Last week I was giving a keynote at PPIG2008, the annual workshop of the Psychology of Programming Interest Group.  Before I went I was politely pronouncing this pee-pee-eye-gee … however, when I got there I found the accepted pronunciation was pee-pig … hence the logo!

My own keynote at PPIG2008 was “as we may code: the art (and craft) of computer programming in the 21st century” and was an exploration of the changes in coding since 1968, when Knuth published the first of his books on “the art of computer programming“.  On the web site for the talk I’ve made a relatively unstructured list of some of the distinctions I’ve noticed between 20th and 21st century coding (C20 vs. C21); and in my slides I have started to add some more structure.  In general we have a move from a more mathematical, analytic, problem-solving approach to something more akin to a search task – finding the right bits to fit together – with a greater need for information management and social skills. Both this characterisation and the list are, of course, a gross simplification, but they seem to capture some of the change of spirit.  These changes suggest different cognitive issues to be explored and maybe different personality types involved – as one of the attendees, David Greathead, pointed out, rather like the judging vs. perceiving personality distinction in Myers-Briggs1.

One interesting comment on this was from Marian Petre, who has studied many professional programmers.  Her impression, echoed by others, was that the heavy-hitters were the more experienced programmers who had adapted to newer styles of programming, whereas the younger programmers found it harder to adapt the other way when they hit difficult problems.  Another attendee suggested that perhaps I was focused more on application coding and that system coding and system programmers were still operating in the C20 mode.

The social nature of modern coding came out in several papers about agile methods and pair programming.  As well as being an important phenomenon in its own right, pair programming gives a level of think-aloud ‘for free’, so maybe this will also cast light on individual coding.

Margaret-Anne Storey gave a fascinating keynote about the use of comments and annotations in code, and again this picks up the social nature of code, as she was studying open-source coding where comments are often for other people in the community, maybe explaining actions or suggesting improvements.  She reviewed a lot of material in the area and I was especially interested in one result that showed that novice programmers with small pieces of code found method comments more useful than class comments.  Given my own frequent complaint that code is inadequately documented at the class or higher level, this appeared to disagree with my own impressions.  However, in discussion it seemed that this was probably accounted for by differences in context: novice vs. expert programmers, small vs. large code, internal comments vs. external documentation.  One of the big problems I find is that the way different classes work together to produce effects is particularly poorly documented.  Margaret-Anne described one system her group had worked on2 that allowed you to write a tour of your code, opening windows, highlighting sections, etc.

I sadly missed some of the presentations as I had to go to other meetings (the danger of a conference at your home site!), but I did get to some and was particularly fascinated by the more theoretical/philosophical session, including one paper addressing the psychological origins of the notions of objects and another focused on (the dangers of) abstraction.

The latter, presented by Luke Church, critiqued Jeannette Wing‘s 2006 CACM paper on Computational Thinking.  This is evidently a ‘big thing’ with loads of funding and hype … but one that I had entirely missed :-/ Basically the idea is to translate the ways that one thinks about computation to problems other than computers – nerds rule OK. The tenets of computational thinking seem to overlap a lot with management thinking, and also reminded me of the way my own HCI community and parts of the Design (with a capital D) community are, in different ways, trying to say that we/they are the universal discipline … well, if we don’t say it about our own discipline, who will … the physicists have been getting away with it for years 😉

Luke’s (and his co-authors’) argument is that abstraction can be dangerous (although of course it is also powerful).  Rather than setting it against Wing’s paper, it would perhaps be interesting to look at this argument alongside Jeff Kramer’s 2007 CACM article “Is abstraction the key to computing?“, which I recall liking because it says computer scientists ought to know more mathematics 🙂 🙂

I also sadly missed some of Adrian Mackenzie‘s closing keynote … although this time not due to competing meetings but because I had been up since 4:30am reading a PhD thesis and, after lunch on a Friday, had begun to flag!  However, this was no reflection on Adrian’s talk, and the bits I heard were fascinating, looking at the way bio-tech is using the language of software engineering.  This sparked a debate relating back to the overuse of abstraction, especially in the case of the genome, where interactions between parts are strong and so the software-component analogy is weak.  It also reminded me of yet another relatively recent paper3 on the way computation can be seen in many phenomena and should not be construed solely as a science of computers.

As well as the academic content it was great to be with the PPIG crowd; they are a small but very welcoming and accepting community – I don’t recall anything but constructive and friendly debate … and next year they have PPIG09 in Limerick – PPIG and Guinness, what could be better!

  1. David has done some really interesting work on the relationship between personality types and different kinds of programming tasks.  I’ve seen him present before about debugging and unfortunately had to miss his talk at PPIG on comprehension.  Given his work has shown clearly that there are strong correlations between certain personality attributes and coding, it would be good to see more qualitative work investigating the nature of the differences.  I’d like to know whether strategies change between personality types: for example, between systematic debugging and more insight-based ‘scan and see it’ bug finding. [back]
  2. but I can’t find it on their website :-([back]
  3. Perhaps 2006/2007 in either CACM or Computer Journal, if anyone knows the one I mean please remind me![back]

Firefox 3 seems to have fixed memory problems

I had been reluctantly considering giving up using Firefox as it crawled to a halt so often on so many sites. To be fair, I think it is because I keep lots of tabs open, and Firefox did not seem to deal well with pages with many refreshing elements … many air and train ticketing sites were particular problems. However, Firefox 3 has been running continuously for some time now and, looking at ‘top’ in a terminal window, it has about 1/3 the real memory footprint compared with Firefox 2 … now it is comparable with Word, Dreamweaver, etc. I had been sticking with Firefox largely because the Firefox Snip!t bookmarklet works better than the Safari one, so now I can continue to do so without the machine crawling to a halt – well done, team Mozilla 🙂