the internet laws of the jungle

Where are the boundaries between freedom, license and exploitation, between fair use and theft?

I found myself getting increasingly angry today as the Mozilla Foundation stepped firmly beyond those limits and, moreover, with Trump-esque rhetoric attempted to dupe others into following suit.

It all started with a small text ad below the search box on the Firefox default screen:

firefox-copyright-2

Partly because of my ignorance of the web-speak ‘TFW’ (I know, showing my age!), I clicked through to a petition page on the Mozilla Foundation site (PDF archive copy here).

It starts off fine, with stories of some of the silliness of current copyright law across Europe (you can’t share photos of the Eiffel Tower at night) and problems for use in education (which, in fact, already has quite a lot of copyright exemptions in many countries).  It offers a petition to sign.

This all sounds fine: partly due to rapid change and partly due to knee-jerk reactions, internet law does seem to be a bit of a mess.

If you blink you might miss one or two odd parts:

“This means that if you live in or visit a country like Italy or France, you’re not permitted to take pictures of certain buildings, cityscapes, graffiti, and art, and share them online through Instagram, Twitter, or Facebook.”

Read this carefully: a tourist forbidden from photographing cityscapes – silly!  But note the few words at the end, “… and art”.  So if I visit an exhibition by an artist, or maybe even a photographer, and share a high-definition image (the Nokia Lumia 1020 has a 41-megapixel camera), is that OK?  Perhaps a thumbnail in the background of a selfie is fine, but does Mozilla object to any rules that prevent copying of artworks?

mozilla-dont-break-the-internet

However, it is at the end, in a section labelled “don’t break the internet”, that the cyber-fundamentalism really starts.

“A key part of what makes the internet awesome is the principle of innovation without permission — that anyone, anywhere, can create and reach an audience without anyone standing in the way.”

Again, at first this sounds like a cry for self-expression – except, that is, if you happen to be an artist or writer who would like to make a living from that self-expression.

Again, it is clear that current laws have not kept up with change and in some areas are unreasonably restrictive.  We need to be able to distinguish between a fair reference to something and a serious infringement of its IP.  Likewise, we should be able to distinguish the aspects of social media that are more like looking at holiday snaps over a coffee from pirate copies made for commercial profit.

However, in so many areas it is the other way round: our laws are struggling to restrict the excesses of the internet.

Just a few weeks ago a 14-year-old girl was given permission to sue Facebook.  Multiple times over a two-year period nude pictures of her were posted and reposted.  Facebook hides behind the argument that it is user content and that it takes down the images when they are pointed out, and yet a massive technology company that is able to recognise faces is somehow not able to identify the same photo being repeatedly posted.  Back to Mozilla: “anyone, anywhere, can create and reach an audience without anyone standing in the way” – really?

Of course this vision of the internet without boundaries is not just about self-expression, but also about freedom of speech:

“We need to defend the principle of innovation without permission in copyright law. Abandoning it by holding platforms liable for everything that happens online would have an immense chilling effect on speech, and would take away one of the best parts of the internet — the ability to innovate and breathe new meaning into old content.”

Of course, the petition is singling out EU law, which inconveniently includes various provisions to protect the privacy and rights of individuals; it does not target the laws of dictatorships or centrally controlled countries.

So, who benefits from such an open and unlicensed world?  Clearly not the small artist or the victim of cyber-bullying.

Laissez-faire has always been an aim for big business, but without constraint it is the law of the jungle and always ends up benefiting the powerful.

In the 19th century it was child labour in the mills, curtailed only after long battles.

In the age of the internet, it is the vast US social media giants who hold sway, and of course the search engines, which just happen to account for $300 million of revenue for the Mozilla Foundation annually, 90% of its income.

 

JavaScript gotcha: var scope

I have been using JavaScript for more than 15 years, with some projects running to several thousand lines, but I have only just discovered that all these years I have misunderstood the scope rules for variables.  I had assumed they were block-scoped, but in fact every variable is effectively declared at the beginning of the enclosing function.

So if you write:

function f() {
    for( var i=0; i<10; i++ ){
        var i_squared = i * i;
        // more stuff ...
    }
}

This is treated as if you had written:

function f() {
    var i, i_squared;
    for( i=0; i<10; i++ ){
        i_squared = i * i;
        // more stuff ...
    }
}

The Mozilla Developer Network describes the basic principle in detail; however, it does not include any examples with inner blocks like this.

So there is effectively a single variable that gets reused every time round the loop.  Given that you do the iterations one after another this is perfectly fine … until you need a closure.

I had a simple for loop:

function f(items) {
    for( var ix in items ){
        var item = items[ix];
        var value = get_value(item);
        do_something(item,value);
    }
}

This all worked well until I needed to get the value asynchronously (AJAX call) and so turned get_value into an asynchronous function:

get_value_async(item,callback)

which fetches the value and then calls callback(value) when it is ready.
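For illustration only, here is a minimal sketch of what such a function might look like (the real implementation is not shown in this post; the URL, query parameter and response handling are all invented):

function get_value_async(item, callback) {
    // purely illustrative AJAX fetch: assumes 'item' can be used as an id
    var request = new XMLHttpRequest();
    request.open('GET', '/get_value?item=' + encodeURIComponent(item), true);
    request.onreadystatechange = function() {
        if( request.readyState === 4 && request.status === 200 ){
            callback(request.responseText);   // hand the value to the caller
        }
    };
    request.send();
}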

The loop was then changed to

function f(items) {
    for( var ix in items ){
        var item = items[ix];
        get_value_async( item, function(value) {
                                do_something(item,value);
                           } );
    }
}

I had assumed that ‘item’ in each callback closure would be bound to the value for the particular iteration of the loop, but in fact the effective code is:

function f(items) {
    var ix, item;
    for( ix in items ){
        item = items[ix];
        get_value_async( item, function(value) {
                                do_something(item,value);
                           } );
    }
}

So all the callbacks point to the same ‘item’, which ends up as the one from the last iteration.  In this case the code is updating an onscreen menu, so only the last item got updated!

JavaScript 1.7 and ECMAScript 6 have a new ‘let’ keyword, which has precisely the semantics that I had always thought ‘var’ had, but it does not yet seem to be widely available in browsers.
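For example (a sketch, assuming a browser or engine that already supports ‘let’), the problem loop could simply become:

function f(items) {
    for( var ix in items ){
        let item = items[ix];   // block-scoped: a fresh binding each time round
        get_value_async( item, function(value) {
                                do_something(item,value);
                           } );
    }
}

Each callback then closes over the ‘item’ from its own iteration.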

As a workaround I have used the slightly hacky-looking:

function f(items) {
    for( var ix in items ){
        (function() {
            var item = items[ix];
            get_value_async( item, function(value) {
                                    do_something(item,value);
                               } );
        })();
    }
}

The anonymous function immediately inside the for loop is simply there to create a new scope for the item variable, so that there is a fresh variable for the innermost function to be bound to on each iteration.

It works, but you do need to be confident with anonymous functions!
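A slightly tidier variant of the same trick (my preference rather than the code above) passes the loop value in as a parameter, which again gives each callback its own binding:

function f(items) {
    for( var ix in items ){
        (function(item) {   // 'item' is a parameter, so fresh on every call
            get_value_async( item, function(value) {
                                    do_something(item,value);
                               } );
        })( items[ix] );
    }
}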

CSS considered harmful (the curse of floats and other scary stories)

CSS- and JavaScript-based sites have undoubtedly enabled experiences far richer than the grey-backgrounded days of the early web in the 1990s (and recall, the backgrounds really were grey!). However, the power and flexibility of CSS, in particular the use of floats, has led to a whole new set of usability problems on what appear to be beautifully designed sites.

I was reading a quite disturbing article on a misogynistic Dell event by Sophie Catherina Løhr at elektronista.dk.  However, I was finding it frustrating, as a line of media icons on the left of the page meant that only the top few lines were unobstructed.

I was clearly not the only one with this problem as one of the comments on the page read:

That social media widget on the left made me stop reading an otherwise interesting article. Very irritating.

To be fair to the page designer, it was just on Firefox that the page rendered like this; on other browsers the left-hand page margin was wider.  Probably Firefox is strictly ‘right’ in the sense that it sticks very close to the standards, but whoever is to blame, it is not helpful to readers of the blog.

For those wishing to make cross-browser styles, it is usually possible nowadays, but you often have to reset everything at the beginning of your style files; even if CSS is standard, default styles are not:

body {
    margin: 0;
    padding: 0;
    /*  etc. */
}

Sadly this is just one example of an increasingly common problem.

A short while ago I was on a site that had a large right-hand side tab; I forget its function, maybe comments or a table of contents.  The problem was that the tab obscured and prevented access to most of the scroll bar, making navigation of the middle portion of the page virtually impossible.  Normally it is not possible to obscure the scroll bar, as it is ‘outside’ the page; however this site, like many, had chosen to put the main content in a fixed-size scrolling <div>.  This meant that the header and footer were always visible and the content scrolled in the middle.  Of course the scroll bar of the <div> was then part of the page and could be obscured.  I assume it was another cross-browser formatting difference that meant the designer did not notice the problem, or perhaps (not unlikely) they only ever tested the style on pages with small amounts of non-scrolling text.

Some sites adopt a different strategy for providing fixed headers.  Rather than putting the main content in a fixed <div>, the header and footer are instead set to float above the main content, with margins added to the content so that the page renders correctly at the top and bottom.  This means that the scrollbar for the content is the main scroll bar, and therefore cannot be hidden or otherwise mangled 🙂

Unfortunately, the browser’s page-search function does not ‘know’ about these floating elements, so when you type in a search term it will happily scroll the page to ‘reveal’ the searched-for word, but may do so in a way that leaves it underneath the header or footer and so invisible.

This is not made any easier in the new MacOS Lion, where the line up/down scroll arrows have been removed.  Not only can you not fine-adjust the page to reveal those hidden searched-for terms but also, whilst reading the page, the page-up/page-down scroll does not ‘know’ about the hidden parts and so scrolls a full screen-sized page, missing half the text 🙁

Visibility problems are not confined to the web: there is a long history of modal dialogue boxes being lost behind other windows (which then often refuse to interact because of the modal dialogue box), of windows happily resizing themselves to be partly obscured by the Apple Dock, and of windows disappearing onto non-existent secondary displays.

It may be that some better model of visibility could be built into CSS/DOM/JavaScript and into desktop window managers.  And it may even be that CSS will fix its slightly odd model of floats and layout.  However, I would not want to discourage the use of overlays, transparencies and other floating elements until this happens.

In the mean time, some thoughts:

  1. restraint — Recall the early days of DTP, when every newsletter sported 20 fonts. No self-respecting designer would do this nowadays, so use floats, lightboxes and the like with consideration … and if you must have popups or tabs that open on hover rather than on click, do make sure it is possible to move your mouse across the page without it feeling like walking through a minefield.
  2. resizing — Do check your page with different window sizes. Although desktop screens are now almost all at least 1024 x 768, think of laptops and pads, as these are increasingly the major forms of access.
  3. defaults — Be aware that, W3C notwithstanding, browsers are different.  At the very minimum reset all margins and padding as a first step, so that you are not relying on browser defaults.
  4. testing — Do test (and here I mean technical testing; do user-test as well!) with realistic pages, not just a paragraph of lorem ipsum.

And do my sites do this well … ?

With CSS as in all things, with great power …

P.S. Computer scientists will recognise the pun on Dijkstra’s “go to statement considered harmful”, the manifesto of structured programming.  The use of gotos in early programming languages was incredibly flexible and powerful, but, just like CSS, carried many concomitant dangers for the careless or unwary.  Strangely, computer scientists have had little worry about other equally powerful yet dangerous techniques, not least macro languages (anyone for a spot of TeX debugging?), and Scheme programmers throw around continuations as if they were tennis balls.  It seemed as though the humble goto became the scapegoat for a discipline’s sins. It was interesting when the goto statement was introduced as a ‘new’ feature in PHP 5.3, an otherwise post-goto C-style language; very retro.


image  xkcd.com

tread lightly — controlling user experience pollution

When thinking about usability or user experience, it is easy to focus on the application in front of us, but the way it impacts its environment may sometimes be far more critical. However, designing applications that are friendly to their environment (digital and physical) may require deep changes to the low-level operating systems.

I’m writing this post effectively ‘offline’ into a word processor for later upload. I sometimes do this as I find it easier to write without the distractions of editing within a web browser, or because I am physically disconnected from the Internet. However, now I am connected, and indeed I can see that I am connected as an FTP file upload is progressing; it is just that anything else network-related is stalled.

The reason the FTP upload is ‘hogging’ the network is, I believe, a quirk in the UNIX scheduling system which was, paradoxically, originally intended to improve interactivity.

UNIX, which sits underneath Mac OS, is a multiprocessing operating system running many programs at once. Each process has a priority, called its ‘niceness’, which can be set explicitly, but which is also tweaked from moment to moment by the operating system. One of the rules for this ‘tweaking’ is that if a process is IO-bound, that is, if it is constantly waiting for input or output, then its niceness is decreased, meaning that it is given higher priority.
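As an aside, a program doing bulk work can also lower its own priority explicitly; a minimal present-day sketch in Node.js (assuming Node 10.10 or later, where os.setPriority is available) might be:

// make the current process as 'nice' (low priority) as possible, so a bulk
// job such as a long upload or re-indexing run is gentler on the rest of
// the system; on UNIX-like systems 19 is the lowest priority
const os = require('os');
os.setPriority(process.pid, 19);   // roughly what renice 19 <pid> does from the shell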

The reason for this scheduling rule is partly to enhance interactive performance, a legacy of the old days of command-line interfaces: an interactive program would spend lots of time waiting for the user to enter something, and so its priority would increase, meaning it would respond quickly as soon as the user entered anything. The other reason is that CPU time was seen as the scarce resource, so processes that were IO-bound were effectively being ‘nicer’ to other processes, as they let them get a share of the precious CPU.

The FTP program is simply sitting there shunting data out to the network, so it is almost permanently blocked waiting for the network, as it can read from the disk faster than the network can transmit data. This means UNIX regards it as ‘nice’ and ups its priority. As soon as the network clears sufficiently, the FTP program is rescheduled, puts more into the network queue, and reads the next chunk from disk until the network is again full to capacity. Nothing else gets a chance: no web, no email, not even a network trace utility.

I’ve seen the same thing before with a database server on one of Fiona’s machines — all my fault. The MySQL manual suggests that you disable indices before large bulk updates (e.g. ingesting a file of data) and then re-enable them once the update is finished, as indexing is more efficient on lots of data at once than on one record at a time. I duly did this and forgot about it until Fiona noticed something was wrong on the server and web traffic had ground to a near halt. When she opened a console on the server, she found that it seemed quiet, very little CPU load at all, and was puzzled until I realised it was my indexing. Indexing requires a lot of reading and writing of data to and from disk, so MySQL became IO-bound, was given higher priority, and as soon as the disk was free it was rescheduled and hit the disk once more … just as FTP is now hogging the network, MySQL hogged the disk and nothing else could read or write. Of course MySQL’s own performance was fine, as it internally interleaved queries with indexing; it was just everything else on the system that suffered.

These are hard scenarios to design for. I have written before (“why software need never hang“) about the way application designers do not think sufficiently about potential delays due to slow networks or broken connections. However, that was about the applications that suffer. Here the issue is not that the FTP program is badly designed for its own delays (it is still responding very happily), just that it has a knock-on effect on the rest of the system. It is like cleaning your sink with industrial bleach: you have a clean house within, but pollute the watercourse without.

These kinds of issues are not related solely to the network and disk; any kind of resource is limited, and profligacy causes damage in the digital world as much as in the physical environment.

Some years ago I had a Symbian smartphone, but it proved unusable as its battery life rarely exceeded 40 minutes from a full charge. I thought I had a duff battery, but later realised it was because I was leaving applications on the phone ‘open’. I would go to the address book, look up a number, and that was that; I then maybe turned the phone off or switched to something else without ‘exiting’ the address book. I was treating the phone like every previous phone I had used, but this one was different: it had a ‘real’ operating system, and opening the address book launched the address book application, which then kept on running, and using power, until it was explicitly closed; a model that is maybe fine for permanently plugged-in computers, but disastrous for a mobile phone.

When the early iPhones came out, iOS was criticised for being single-threaded, that is, for not having lots of things running in the ‘background’. However, this undoubtedly helped its battery life. Now, with newer versions of iOS, this has changed and there are lots of apps running at once, and I have noticed the battery life reducing. Is that simply the battery wearing out with age, or the effect of all those apps running?

Power is of course not just a problem for smartphones, but for any laptop. I try to close down applications on my Mac when I am working without power, as I know some programs just eat CPU when they are apparently idle (yes, Firefox, it’s you I’m talking about). And from an environmental point of view, lower power consumption when connected would also be good. My hope was that Apple would take the lessons learnt in early iOS and change the nature of their mainstream OS, but sadly they succumbed to the pressure to make iOS a ‘proper’ OS!

Of course the FTP program could try to be friendly, perhaps deliberately throttling its network activity when it is not the selected window. But then the four-hour upload would take eight hours; instead of the 20 minutes left at this point, I’d be looking forward to another 4 hours and 20 minutes, and I’d be complaining about that.
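To make the idea concrete, application-level throttling can be as crude as leaving a deliberate gap between chunks; here is a sketch (the chunking, the sendChunk helper and the delay are all invented for illustration):

function throttledSend(chunks, sendChunk, delayMs) {
    var ix = 0;
    function sendNext() {
        if( ix >= chunks.length ) return;    // finished
        sendChunk(chunks[ix++]);             // push one chunk to the network
        setTimeout(sendNext, delayMs);       // then pause so other traffic gets a look-in
    }
    sendNext();
}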

The trouble is that there needs to be better communication, more knowledge shared, between application and operating system. I would like FTP to use all the network capacity that it can, except when I am interacting with some other program. Either FTP needs to be able to say to the OS “hey, here’s a packet, send it when there’s a gap”1, or the OS needs some way for applications to determine the current network state and make decisions based on that. Sometimes this sort of information is easily available; more often it is either very hard to get at or not available at all.

I recall years ago, when the internet was still mainly reached through pay-per-minute dial-up connections, you could set your PC to dial automatically whenever the internet was needed. However, some programs, such as chat clients, would periodically check with a central server to see if there was any activity, and this would cause the PC to dial up the ISP. If you were lucky the PC also auto-disconnected after a period of inactivity; if you were not lucky the PC would connect at 2am and by the morning you’d find yourself with a phone bill of more than a week’s wages.

When we were designing onCue at aQtive, we wanted to be able to connect to the Internet when it was available, but to avoid bankrupting our users. Clearly somewhere in the TCP/IP stack, the layers of code over the network, something deep down knew whether we were connected. I recall we found a very helpful function in the Windows API called something like “isConnected”2. Unfortunately, it worked by attempting to send a network packet, returning true if it succeeded and false if it failed. Of course, sending the test packet caused the PC to auto-dial …

And now there is just 1 minute and 53 seconds left on the upload, so time to finish this post before I get on to garbage collection.

  1. This form of “send when you can” would also be useful in cellular networks, for example when syncing photos.[back]
  2. I had a quick peek, and found that Windows CE has a function called InternetGetConnectedState.  I don’t know if this works better now.[back]

Names, URIs and why the web discards 50 years of computing experience

Names and naming have always been a big issue both in computer science and philosophy, and a topic I have posted on before (see “names – a file by any other name“).

In computer science, and in particular programming languages, a whole vocabulary has arisen to talk about names: scope, binding, referential transparency. As in philosophy, it is typically the association between a name and its ‘meaning’ that is of interest. Names and words, whether in programming languages or day-to-day language, are, what philosophers call, ‘intentional‘: they refer to something else. In computer science the ‘something else’ is typically some data or code or a placeholder/variable containing data or code, and the key question of semantics or ‘meaning’ is about how to identify which variable, function or piece of data a name refers to in a particular context at a particular time.

The emphasis in computing has tended to be about:

(a) Making sure names have unambiguous meaning when looking locally inside code. Concerns such as referential transparency, avoiding dynamic binding and the deprecation of global variables are about this.

(b) Putting boundaries on where names can be seen/understood, both as a means to ensure (a) and also as part of encapsulation of semantics in object-based languages and abstract data types.

However, there has always been a tension between clarity of intention (in both the normal and philosophical sense) and abstraction/reuse. If names are totally unambiguous then it becomes impossible to say general things. Without a level of controlled ambiguity in language a legal statement such as “if a driver exceeds the speed limit they will be fined” would need to be stated separately for every citizen. Similarly in computing when we write:

function f(x) { return (x+1)*(x-1); }

The meaning of x is different when we use it in ‘f(2)’ or ‘f(3)’, and must be so to allow ‘f’ to be used generically. Crucially there is no internal ambiguity: the two ‘x’s refer to the same thing in a particular invocation of ‘f’, but the precise meaning of ‘x’ for each invocation is achieved by external binding (the argument list ‘(2)’).
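For example:

f(2);   // x is bound to 2, giving (2+1)*(2-1) = 3
f(3);   // x is bound to 3, giving (3+1)*(3-1) = 8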

Come the web and URLs and URIs.

Fiona@lovefibre was recently making a test copy of a website built using WordPress. In a pure HTML website this is easy (so long as you have used relative or site-relative links within the site): you just copy the files, put them in the new location and they work 🙂 Occasionally a more dynamic site does need to know its global name (URL), for example if you want to send a link in an email, but this can usually be achieved using a configuration file. For example, there is a development version of Snip!t at cardiff.snip!t.org (rather than www.snipit.org), and there is just one configuration file that needs to be changed between this test site and the live one.

Similarly, in a pristine WordPress install there is just such a configuration file and one or two database entries. However, as soon as it has been used to create a site, the database content becomes filled with URLs. Some are in clear locations, but many are embedded within HTML fields or serialised plugin options. Copying and moving the database therefore requires a series of SQL updates with string replacements matching the old site name and replacing it with the new one — both tedious and needing extreme care not to corrupt the database in the process.

Is this just a case of WordPress being poorly engineered?

In fact I feel it is more a problem endemic to the web, and driven largely by the URL.

Recently I was experimenting with Firefox extensions. Being a good 21st-century programmer, I simply found an existing extension that was roughly similar to what I was after and started to alter it. First of course I changed its name, and then found I needed to make changes through pretty much every file in the extension, as knowledge of the extension name seemed to permeate to the lowest levels of the code. To be fair, XUL has mechanisms to achieve a level of encapsulation, introducing local URIs through the ‘chrome:’ naming scheme, and having been through the process once I maybe understand a bit better how to design extensions to make them less reliant on the external name, and also which names need to be changed and which are more like the ‘x’ in the ‘f(x)’ example. However, despite this, the experience was so different from the levels of encapsulation I have learnt to take for granted in traditional programming.

Much of the trouble resides with the URL. Going back to the two issues of naming, the URL focuses strongly on (a), making the name unambiguous by having a single universal namespace;  URLs are a bit like saying “let’s not just refer to ‘Alan’, but ‘the person with UK National Insurance Number XXXX’, so we know precisely who we are talking about”. Of course this focus on uniqueness of naming has a consequential impact on generality and abstraction. There are many visitors on Tiree over the summer, and maybe one day I meet one at the shop and then a few days later pass the same person out walking; I don’t need to know the person’s NI number or URL in order to say it was the same person.

Back to Snip!t, over the summer I spent some time working on the XML-based extension mechanism. As soon as these became even slightly complex I found URLs sneaking in, just like the WordPress database 🙁 The use of namespaces in the XML file can reduce this by at least limiting full URLs to the XML header, but, still, embedded in every XML file are un-abstracted references … and my pride in keeping the test site and live site near identical was severely dented1.

In the years when the web was coming into being, the hypertext community had been reflecting on more than 30 years of practical experience, embodied particularly in the Dexter Model2. The Dexter model, and some systems such as Wendy Hall’s Microcosm3, incorporated external linkage; that is, the body of content had marked hot spots, but the association of these hot spots with other resources was held in a separate external layer.

Sadly HTML opted for internal links in anchor and image tags in order to make html files self-contained, a pattern replicated across web technologies such as XML and RDF. At a practical level this is (i) why it is hard to have a single anchor link to multiple things, as was common in early Hypertext systems such as Intermedia, and (ii), as Fiona found, a real pain for maintenance!

  1. I actually resolved this by a nasty ‘hack’ of having internal functions alias the full site name when encountered and treating them as if they refer to the test site — very cludgy![back]
  2. Halasz, F. and Schwartz, M. 1994. The Dexter hypertext reference model. Commun. ACM 37, 2 (Feb. 1994), 30-39. DOI= http://doi.acm.org/10.1145/175235.175237[back]
  3. Hall, W., Davis, H., and Hutchings, G. 1996 Rethinking Hypermedia: the Microcosm Approach. Kluwer Academic Publishers.[back]

grammer aint wot it used two be

Fiona @ lovefibre and I have often discussed the worrying decline of the language used in many comments and postings on the web. Sometimes people are using compressed txtng language or even leetspeak; both of these are reasonable alternative codes to ‘proper’ English, and potentially part of the natural growth of the language.  However, it is often clear that the cause is ignorance, not choice.  One of the reasons may be that many more people are getting a voice on the Internet; it is not just the journalists, academics and professional classes.  If so, this could be a positive social sign, indicating that a public voice is no longer restricted to university graduates, who, of course, know their grammar perfectly …

Earlier today I was using Google to look up the author of a book I was reading, and one of the top links was a listing on ratemyprofessors.com.  Out of interest I clicked through and saw:

“He sucks.. hes mean and way to demanding if u wanan work your ass off for a C+ take his class”1

Hmm I wonder what this student’s course assignment looked like?


  1. In case you think I’m a complete pedant: personally, I am happy with both the slang ‘sucks’ and ‘ass’ (instead of ‘arse’!), and the compressed speech ‘u’. These could be well-considered choices in language. The mistyped ‘wanna’ is also just a slip. It is the slightly more proper “hes mean and way to demanding” that seems to show a general lack of understanding.  Happily, the other comments were not as bad as this one, but I did find the student who wanted a “descent grade” amusing 🙂 [back]

a new version of … on downgrades and preferences

I’m wondering why people break things when they create new versions.

Firefox used to open a discreet little window when you downloaded papers.  Nowadays it opens a full-screen window, completely hiding the browser.

A minor issue, but it makes me wonder both about new versions and about defaults and personalisation in general.


tech talks: brains, time and no time

Just scanning a few Google Tech Talks on YouTube.  I don’t visit it often, but followed a link from Rob Style‘s twitter.  I find the videos a bit slow, so tend to flick through with the sound off, really wishing they had fast-forward buttons like a DVD, as it is quite hard to pull the little slider back and forth.

One talk was by Stuart Hameroff on A New Marriage of Brain and Computer.  He is the guy who works with Penrose on the possibility that quantum effects in microtubules may be the source of consciousness.  I noticed that he used calculations for computational capacity based on traditional neuron-based models that are very similar to my own calculations of some years ago in “the brain and the web”, when I worked out that the memory and computational capacity of a single human brain are very similar to those of the entire web. Hameroff then went on to say that there are an order of magnitude more microtubules (sub-cellular structures, with many per neuron), so the traditional calculations do not hold!

Microtubules are fascinating things; they are like little Meccano sets inside each cell.  It is these microtubules that, during cell division, stretch the chromosomes out straight, when they are normally tangled up in the nucleus.  Even stranger, those fluid movements of amoebae gradually pushing out pseudopodia are actually made by mechanical structures composed of microtubules, only looking so organic because of the cell membrane – rather like a robot covered in latex.

picture of amoeba

The main reason for going to the tech talks was one by Steve Souders, “Life’s Too Short – Write Fast Code”, which has lots of tips for speeding up web pages, including allowing JavaScript files to download in parallel.  I was particularly impressed by the quantification of the costs of delays on web pages down to 100ms!
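One common way of letting scripts download in parallel (a sketch of the general technique, not code from the talk; the URL is made up) is to inject them dynamically so they do not block the parser:

// create a script element and append it to the head; the browser then
// fetches it in parallel with other resources instead of blocking on it
var script = document.createElement('script');
script.src = 'http://example.com/other-script.js';
document.getElementsByTagName('head')[0].appendChild(script);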

All of this is great.  Partly because of my long interest in time and delays in HCI.  Partly because I want my own web scripts to be faster, and I’ve already downloaded the Yahoo! YSlow plugin for Firefox, which helps diagnose the causes of slow pages.  And partly because I get so frustrated waiting for things to happen, both on the web and on the desktop … and why oh why does it take a good minute to get a WiFi connection … and why doesn’t YouTube introduce better controls for skimming videos?

… and finally, because I’d already spent too much time skimming the tech talks, I looked at one last talk: David Levy, “No Time To Think”, about how we are all so rushed that we have no time to really think about problems, not to mention life1.  At least that’s what I think it said, because I skimmed it rather fast.

  1. see also my own discussion of Slow Time[back]

IE in second place!

Looked at the web stats for this blog for the first time in ages.  IE is still the top browser in raw hits, but between them the Firefox and Mozilla family have 39%, above IE at 36%.  Is this just because there are more Mac users amongst HCI people and academics, or is Mozilla winning the browser wars?

Firefox 3 seems to have fixed memory problems

I had been reluctantly considering giving up using Firefox, as it crawled to a halt so often on so many sites. To be fair, I think it is because I keep lots of tabs open, and Firefox did not seem to deal well with pages with many refreshing elements … many air and train ticketing sites were particular problems. However, Firefox 3 has been running continuously for some time now, and looking at ‘top’ in a terminal window it has about 1/3 of the real memory footprint of Firefox 2 … now it is comparable with Word, Dreamweaver, etc. … I had been sticking with Firefox largely because the Firefox Snip!t bookmarklet works better than the Safari one, so now I can continue to do so without the machine crawling to a halt – well done, team Mozilla 🙂