programming as it could be: part 1

Over a cup of tea in bed I was pondering the future of business data processing and also general programming. Many problems of power-computing, like web programming or complex algorithmics, and also of end-user programming, seem to stem from assumptions embedded in the heart of what we consider a programming language, many of which effectively date from the days of punch cards.

Often the most innovative programming/scripting environments, Smalltalk, Hypercard, Mathematica, humble spreadsheets, even (for those with very long memories) Filetab, have broken these assumptions, as have whole classes of ‘non-standard’ declarative languages.  More recently Yahoo! Pipes and Scratch have re-introduced more graphical and lego-block style programming to end-users (albeit in the case of Pipes slightly techie ones).

[images: Yahoo! Pipes (from the Wikipedia article); Scratch programming using blocks]

What would programming be like if it were more incremental, more focused on live data, less focused on the language and more on the development environment?

Two things have particularly brought this to mind.

First was the bootcamp team I organised at the Winter School on Interactive Technologies in Bangalore1.  At the bootcamp we were considering “content development through the keyhole”, inspired by a working group at the Mobile Design Dialog conference last April in Cambridge.  The core issue was how one could enable near-end-use development in emerging markets where the dominant, or only, available computation is the mobile phone.  The bootcamp designs focused more on media content development, but one of the things we briefly discussed was full code development on a mobile screen (not so impossible; after all, home computers used to be 40×25 chars!), and where literate programming might offer some solutions, not for its original aim of producing code readable by others2, but instead to allow very succinct code that is readable by the author.

if ( << input invalid >> )
    << error handling code >>
else
    << update data >>

(example of simple literate programming)
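As a purely illustrative sketch (every name below is hypothetical, not from any real system), each << chunk >> might become a small, well-named function, so that the top-level logic stays readable even on a tiny screen:

def is_valid(form):
    # inverse of << input invalid >>, stubbed validity check
    return bool(form.get("name"))

def report_error(form):
    # << error handling code >>, stubbed
    print("invalid input:", form)

def save_record(form):
    # << update data >>, stubbed
    print("saved:", form)

def process(form):
    if not is_valid(form):
        report_error(form)
    else:
        save_record(form)

process({"name": "Ada"})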

The second is that I was doing a series of spreadsheets to produce some Fitts’ Law related modelling.  I could have written the code in Java and run it to produce outputs, but the spreadsheets were more immediate, allowed me to get the answers I needed when I needed them, and didn’t separate the code from the outputs (there were few inputs, just a number of variable parameters).  However, complex spreadsheets get unmanageable quickly, notably because the only way to abstract is to drop into the level of complex spreadsheet formulae (not the most readable code!) or VB scripting.  But when I have made spreadsheets that embody calculations, why can’t I ‘abstract’ them rather than writing fresh code?
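For instance (a minimal sketch, assuming the Shannon formulation of Fitts’ Law and purely illustrative parameter values), the calculation that a column of spreadsheet formulae repeats could be captured once as a named, reusable function:

from math import log2

def fitts_time(distance, width, a=0.1, b=0.15):
    # predicted movement time T = a + b * log2(D/W + 1)
    # (a and b are illustrative constants, not fitted values)
    return a + b * log2(distance / width + 1)

# the sort of thing a row of spreadsheet formulae would otherwise repeat
for d, w in [(128, 16), (256, 16), (512, 32)]:
    print(d, w, round(fitts_time(d, w), 3))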

I have entitled this blog ‘part 1’ as there is more to discuss than I can manage in one entry!  However, I will return, and focus on each of the above in turn, but in particular questioning some of those assumptions embodied in current programming languages:

(a) code comes before data

(b) you need all the code in place before you can run it

(c) abstraction is about black boxes

(d) the programming language and environment are separate

In my PPIG keynote last September I noted how programming as an activity has changed, becoming more dynamic, more incremental, but probably also less disciplined.  Through discussions with friends, I am also aware of some of the architectural and efficiency problems of web programming due to the opacity of code, and long-standing worries about the dominance of limited models of objects3.

So what would programming be like if it supported these practices, but in ways that used the power of the computer itself to help address some of the problems that arise when these practices are applied to issues of substantial complexity?

And can we allow end-users to move more seamlessly from filling in a spreadsheet to more complex scripting?

  1. The winter school was part of the UK-India Network on Interactive Technologies for the End-User.  See also my blog “From Anzere in the Alps to the Taj Bangelore in two weeks“[back]
  2. such as Knuth‘s “TeX: The Program”, a book consisting of the full source code for TeX presented using Knuth’s original literate programming system WEB.[back]
  3. I have often referred to object-oriented programming as ‘western individualism embodied in code’.[back]

Searle’s wall, computation and representation

I have been reading a bit more of Brian Cantwell Smith’s “On the Origin of Objects” and he refers (p.30-31) to Searle‘s wall which, according to Searle, can be interpreted as implementing a word processor.  This all hinges on predicates introduced by Goodman such as ‘grue’, meaning “green if examined before time t or blue if examined after”:

grue(x) = if ( now() < t ) green(x)
          else blue(x)

The problem is that an emerald apparently changes state from grue to not grue at time t, without any work being done.  Searle’s wall is just an extrapolation of this so that you can interpret the state of the wall at a time to be something arbitrarily complex, but without it ever changing at all.

This issue of the fundamental nature of computation has long seemed to me the ‘black hole’ at the heart of our discipline (I’ve alluded to this before in “What is Computing?“).  Arguably we don’t understand information much either, but at least we can measure it – we have a unit, the bit; but with computation we cannot even measure it except with reference to a specific implementation architecture, whether Turing machine or Intel Core.  Common sense (or at least programmer’s common sense) tells us that any given computational device has only so much computational ‘power’ and that any problem has a minimum amount of computational effort needed to solve it, but we find it hard to quantify precisely.  However, by Searle’s argument we can do arbitrary amounts of computation with a brick wall.

For me, a defining moment came about 10 years ago.  I recall I was in Loughborough for an examiners’ meeting and clearly looking through MSc scripts had lost its thrill, as I was daydreaming about computation (as one does).  I was thinking about the relationship between computation and representation and in particular the fast (I think fastest) way to do multiplication of very large numbers, the Schönhage–Strassen algorithm.

If you’ve not come across this, the algorithm hinges on the fact that multiplication is a form of convolution (sum of a[i] * b[n-i]) and a Fourier transform converts convolution into pointwise multiplication  (simply a[i] * b[i]). The algorithm looks something like:

1. represent numbers, a and b, in base B (for suitable B)
2. perform FFT in a and b to give af and bf
3. perform pointwise multiplication on af and bf to give cf
4. perform inverse FFT on cf to give cfi
5. tidy up cfi a bit by doing carries etc. to give c
6. c is the answer (a*b) in base B

In this the heart of the computation is the pointwise multiplication at step 3; this is what ‘makes it’ multiplication.  However, this is a particularly extreme case where the change of representation (steps 2 and 4) makes the computation easier. What had been a quadratic O(N²) convolution is now a linear O(N) number of pointwise multiplications (strictly O(n) where n = N/log(B) ). This change of representation is in fact so extreme that now the ‘real work’ of the algorithm in step 3 takes significantly less time (O(n) multiplications) compared to the change in representation at steps 2 and 4 (FFT is O( n log(n) ) multiplications).

Forgetting the mathematics this means the majority of the computational time in doing this multiplication is taken up by the change of representation.
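To make this concrete, here is a minimal sketch of steps 1–6 using a floating-point FFT (the real Schönhage–Strassen algorithm uses an exact number-theoretic transform, so treat this purely as an illustration of where the work goes):

import numpy as np

def fft_multiply(a_digits, b_digits, base=10):
    # digits are least-significant first, i.e. step 1: numbers in base B
    n = 1
    while n < len(a_digits) + len(b_digits):
        n *= 2
    af = np.fft.fft(a_digits, n)               # step 2: FFT of a and b
    bf = np.fft.fft(b_digits, n)
    cf = af * bf                               # step 3: pointwise multiplication
    c = np.rint(np.real(np.fft.ifft(cf)))      # step 4: inverse FFT
    digits, carry = [], 0                      # step 5: tidy up by doing carries
    for d in c.astype(int).tolist():
        carry, digit = divmod(d + carry, base)
        digits.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        digits.append(digit)
    while len(digits) > 1 and digits[-1] == 0: # drop padding zeros at the high end
        digits.pop()
    return digits                              # step 6: a*b in base B, least significant first

print(fft_multiply([4, 2, 1], [2, 1]))         # 124 * 12 = 1488 -> [8, 8, 4, 1]

Even in this toy version, steps 2 and 4 (the FFTs) dominate the running time; the ‘real’ multiplication at step 3 is a single linear pass.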

In fact, if the data had been presented for multiplication already in FFT form and the result were expected in FFT representation, then the computational ‘cost’ of multiplication would have been linear … or to be even more extreme, if instead of ‘representing’ two numbers as a and b we instead ‘represent’ them as a*b and a/b, then multiplication is free.  In general, computation lies as much in the complexity of putting something into a representation as in the manipulation of it once it is represented.  Computation is change of representation.

In a letter to CACM in 1966 Knuth said1:

When a scientist conducts an experiment in which he is measuring the value of some quantity, we have four things present, each of which is often called “information”: (a) The true value of the quantity being measured; (b) the approximation to this true value that is actually obtained by the measuring device; (c) the representation of the value (b) in some formal language; and (d) the concepts learned by the scientist from his study of the measurements. It would seem that the word “data” would be most appropriately applied to (c), and the word “information” when used in a technical sense should be further qualified by stating what kind of information is meant.

In these terms problems are about information, whereas algorithms are operating on data … but the ‘cost’ of computation has to also include the cost of turning information into data and back again.

Back to Searle’s wall and Goodman’s emerald.  The emerald ‘changes’ state from grue to not grue with no cost or work, but in order to answer the question “is this emerald grue?” some computation is involved (if (now()<t) …).  Similarly, if we have rules like this, but so complicated that Searle’s wall ‘implements’ a word processor, that is fine; but in order to work out what is on the word processor ‘screen’ based on the observation of the (unchanging) wall, the computation involved in making that observation would be equivalent to running the word processor.

At a theoretical computation level this reminds us that when we look at the computation in a Turing machine, vs. an Intel processor or lambda calculus, we need to consider the costs of change of representations between them.  And at a practical level, we all know that 90% of the complexity of any program is in the I/O.

  1. Donald Knuth, “Algorithm and Program; Information and Data”, Letters to the editor. Commun. ACM 9, 9, Sep. 1966, 653-654. DOI= http://doi.acm.org/10.1145/365813.858374 [back]

databases as people think – dabble DB

I was just looking at Enrico Bertini‘s blog Visuale for the first time in ages. In particular at his December entry on DabbleDB & Magic/Replace. Dabble DB allows web-based databases and in some ways sits in similar ground to Freebase, Swivel or even Google Docs spreadsheets, all ways to share data of different forms on/through the web.

The USP for Dabble DB amongst other online data-sharing apps is that it appears to really be a complete database solution online … and its USP amongst conventional databases is the way they seem to have really thought about real use.  This focus on real use by ordinary users includes dynamically altering the structure of the data as you gradually understand it more.  The model they have is that you start with plain table data from a spreadsheet or other document and gradually add structure, as opposed to the “first analyse and then enter” model of traditional DBs.

As I read Enrico’s blog I remembered that he had mailed me about the ‘magic/replace‘ feature ages ago.  This lets you tidy up data during import (but apparently not data already imported … wonder why?), using a ‘by example’ approach and is a really nice example of all that ‘programming by example‘ and related work that was so hot 15 years ago eventually finding its way into real products.

The downside to Dabble DB is that editing is via forms only (it is often so much easier to enter data in a spreadsheet view), the API is quite limited, and while they have a ‘Dabble DB Commons‘ for public data (rather like Swivel), there is no directory or other way to see what people have put up 🙁

I was particularly hoping the API was better as it would have been nice to link it into my web version of Query-by-Browsing, or even integrate with the Query-through-Drilldown approach for constructing complex table joins that Damon Oram implemented more recently.

In general, while the DB and (many) UI features are strong, it is not really looking outwards to creating shared linked data (in the broadest sense of the term, not just pure SemWeb-world linked data) … so still room there for the absolute killer shared-data app!

nice quote: Auden on language

Was thumbing through Brian Cantwell Smith’s “On the Origin of Objects”1, and came across the following quote:

One notices, if one will trust one’s eyes, the shadow cast by language upon truth.
Auden, “Kairos & Logos”

This reminded me of my own ponderings as a school child (I can still hear the clank of china as I was washing cups in the church at the time!) as to whether I would be able to think more freely if I knew more languages and thus had more words and concepts, or whether, on the contrary, my mind would be most clear if I knew no language and was thus free of the conceptual straitjacket of English vocabulary. Of course all shades of Sapir-Whorf (although I didn’t know the term at the time), and now I hold a view somewhere in between – language shapes thought but does not totally contain it2.  Is that the moderation of maturity, or the compromise of age?

  1. Trying to decide whether to start it again, as Luke Church, who I met at the PPIG meeting in September, told me it was worthwhile persevering with even though somewhat oddly written![back]
  2. I discuss this a bit in my transarticulation essay and paths and patches book chapter.[back]

MS Office and the new digital dark age

I’ve just spent the best part of 2 hours simply trying to print some Powerpoint slides as PDF, only to discover it is yet more of the incompetence in Office 2008 that I have previously blogged about (pain, tears and office 2008).   I was trying to get a small PDF for the web and so was printing to a postscript file and then converting to PDF using Adobe Distiller, but Distiller kept crashing with broken postscript commands (I assume it would also have failed to print on a printer).  Strangely, if I printed straight to PDF it would view OK, but would again crash if I asked Acrobat to process it to reduce the file size.

After doing a lengthy ‘binary chop’ on the file, printing smaller and smaller segments, I narrowed it down to one slide, and then to a single element on the slide that, if deleted, made it all work OK.

I had assumed the problem would be some big JPEG image that I had imported, but the offending element turned out to be the little patterned rectangle in the center of the excerpt below.

The little rectangle is supposed to represent a screen and was constructed simply from two Powerpoint shapes, a plain rectangle and a rounded rectangle laid on top of one another.  I assume the complication was that I had used one of the built-in textures in the previous version of Powerpoint (yes, backward compatibility again).  I can only assume that Powerpoint encodes these textures in some unusual way and that the newer version of Powerpoint gets confused when it comes to printing them (even though it appears to display them fine).

In meetings related to the UKCRC Grand Challenge on Memories for Life, there have been frequent worries, not least from the British Library, about digital preservation, how digital materials from some years ago are hard to access today.  A key example was the BBC ‘Domesday’ project that created a two-volume interactive multimedia videodisc in 1986, but by 2002 this was virtually unreadable and was only just saved (see 2002 BBC News article). This was ‘just’ 15-year-old technology compared to the 1000-year-old original Domesday Book that is still readable on paper.

However, with Powerpoint we are not just seeing digital preservation problems from 15-year-old technology, but between two successive versions of the same ‘industry standard’ software on some of its most basic features (static geometric shapes).  The British Library worries about a new digital dark age … and Microsoft’s coders seem to be hell-bent on making it happen.

European working time directive 2012 – the end of the UK university?

Fiona @ lovefibre just forwarded me a link to a petition about retained firefighters, who evidently may be at risk when the right to opt out of the European working time directive is rescinded.  Checking through to the Hansard record, it seems this is really a precautionary debate as the crunch is not until 2012.

However, I was wondering how that was going to impact UK academia if, in 2012, the 48 hour maximum cuts in.

It may make no difference if academics are not required to work more than 48 hours, just decide to do so voluntarily.  However, this presumably has all sorts of insurance ramifications – if we do a reference or paper outside the ‘official hours’, would we be covered by the University’s professional indemnity?  I guess also, in considering promotions and appointments, we would have to ‘downgrade’ someone’s publications etc. to only include those that were done during paid working hours, otherwise we would effectively be making the extra hours a requirement (as we currently do).

The university system has become totally dependent on these extra hours.  In a survey in the early 1990s the average hours worked were over 55 per week, and in the 15 years since then this has gone up substantially. I would guess now the average is well over 60, with many academics getting close to double the 48-hour maximum. I recall one colleague, who had recently had a baby, mentioning how he had cut back on work; now he stops work at 5pm … and doesn’t start again until 7:30pm, so his ‘cut back’ week was still way in excess of 60 hours even with a young baby1. Worryingly this has spread beyond the academics, and departmental administrators are often at their desks at 7 or 8 o’clock in the evening, taking piles of work home and answering email through the weekend.  While I admire and appreciate their devotion, one has to wonder at the impact on their personal lives.

So, at a human level, enforcing limited working hours would be no bad thing; certainly many companies force this, forbidding work out of office hours.  However, practically speaking, if the working time directive does become compulsory in 2012, I cannot imagine how the University system could continue to function.

And … if you are planning to do a 3 year course, start now; who knows what things will be like after 3 years!

  1. Yea, and I know I can’t talk, as an inveterate workaholic I ‘cut back’ from a high of averaging 95 hours a few years ago and now try to keep around 80 max.  I was however very fortunate in that I was doing a PhD and then personal fellowships when our children were small, so was able to spend time with them and only later got mired in the academic quicksands.[back]

bullying – training for life?

Although I have heard and read similar ideas before, it was still appalling to hear cyber-bullying being described as ‘distressing’ in the tone of voice one would use for spilt tea, and tales of beatings and broken teeth being brushed aside.

I was driving back up country and listening to Tuesday’s Woman’s Hour1.  The guest was Helene Guldberg from the Open University, who had recently published views that anti-bullying initiatives were undermining children’s ability to acquire conflict management skills for later life.

While I share her concerns that we tend towards a nanny society, I cannot imagine that she would feel that being mugged in the streets was helping her to learn how to live in a world where bad things happen,  yet she, and I know she voices a common prejudice in educational theory, feels that violence that would be criminal against an adult is somehow acceptable for a child.  Evidently it is all childhood innocence and any sense of cruelty is simply our adult projections.

In her own moment of exquisite cruelty, Guldberg responded to an email from a woman in her 50s, who felt her life permanently scarred by school bullying.  The woman found it hard to trust anyone, because the instigator of the bullying had been someone whom she thought to be her best friend.  In classic ‘blame the victim’ fashion, Guldberg explained that this was simply the fact that if we tell children that bullying will scar them for life, then it will.  The woman’s pain was not anything to do with the bullying when she was at school, but effectively self-inflicted … this despite the fact that 35 years ago no-one was telling children that bullying would do harm, as the universal view then was exactly what Guldberg now expounds.

Hearing all this, I recall my own school days and in particular infant school where most of the boys belonged to a class ‘gang’.  Now I would have been perfectly happy if our class gang had fought other classes – I was never one of life’s pacifists.  However, the purpose of the class gang was not to fight other gangs, but to pick on some member of the class, often one of the peripheral members of the gang if there was no-one else.  Now I should explain I was not of a particularly high moral frame; however, I was a romantic and had been brought up with tales of King Arthur and watching Robin Hood on television; so the idea of picking on the weak was against everything I believed in2.  I refused to join in and so became, disproportionately, the one picked on.

my first school

What is particularly striking in retrospect is that those at the heart of the gang leadership, and so of course never picked on by the gang, were the more ‘respectable’ members of the class, the ones the teacher would ask to look after the class if they had to leave.   As far as I can gather, this was not out of some misguided attempt to reform the bullies through responsibility, but purely ignorance.  The teachers were aware of the ‘naughty’ children and those that the gang leaders egged into fighting and hurting others, but not those who seemed on the surface to be the good ones.

This blindness seems odd, but appears to be common.  I recall, when our children were small (and home educated), someone telling us about the school their son was at, how good it was and what an excellent social environment it had, while seeming oblivious to the fact that each day he came back with items from his school bag missing or broken and that he kept asking to be picked up from school rather than walk the short distance home.

Later in high school I recall the dynamics were different; there the bullies tended to be the more obvious candidates: big, tough and often less advantaged.  For different reasons I often found myself at the rough end of things; I would try to talk myself out of trouble (those conflict management skills!), but in the end would never back down, no matter the odds.  One of my front teeth is still a little black from a head butt, but today, with knives everywhere, I wonder whether I would have acted the same, or, if I had, what the consequence would be.

In some sense, in both earlier and later school, I ‘chose’ to be one of the victims, and perhaps as it had an, albeit over romanticised, ethical aspect one could say that it may have strengthened me.  However, most of the victims were not in that position: the less clever children, the first Asian boy in school, the brothers who always had snuffles and so were labelled ‘snotty’, and when my father had died I still recall the taunts of ‘old grey hairs’.  Those who were weaker or simply cannier learnt to appease and submit, but were consequently far more likely to be repeat victims than someone who, even if hurt, would not be cowed.  I am sure the boy I knew in high school, who was learning these important life skills of appeasement and giving in to intimidation, would have developed a rounded and resilient attitude in his later life if he had not committed suicide first.

The presenter, Jane Garvey, and another guest Claude Knights from anti-bullying charity ‘Kidscape‘ did an excellent job in challenging Guldberg’s views, but she seemed completely immune to any evidence.  However, I don’t recall anyone questioning the life skills learnt by the bullies themselves.  The tough but ‘respectable’ boys, who were at the centre of the gangs in early school, are just those who are likely to have become policemen or soldiers.  What did they learn?  Might is right?

And the same attitudes are prevalent in more professional settings; some years ago a team at KPMG were helping us in our search for continued funding for aQtive, our dot.com company.  All the people there were wonderful to us, but looking at their dealings with one another I was often physically sickened by the combination of fawning to superiors and bullying of juniors that I saw.  All good lessons learnt in public school.

For that matter the circle completes and even some teachers repeat the lessons they learnt at school.  I still recall the grin on our lower-school headmaster’s face during school assemblies, when  he would take some child who had committed a misdemeanour, grab him or her by the shoulders and then, in front of everyone, violently shake them in synchrony with his words.

It is not only the victims of school bullying that are the victims; the bullies themselves are victims of those like Guldberg who tell them it is alright to misuse power – and in the deeper weight of things it is perhaps more terrible to learn to be cruel than to learn to be afraid.

  1. Oddly there isn’t a “Man’s Hour” as I guess that would be sexist? … In fact thinking about men’s magazines, perhaps I can see the point.[back]
  2. Although I didn’t take part in the systematic bullying of the class gang, I am sure there were times during my own childhood when I hurt others. I am not writing from a moral high ground; I just want us to take all the pains and joys of childhood seriously.[back]

From Anzere in the Alps to the Taj Bangelore in two weeks

In the last two weeks I have experienced both Swiss snow and skiing and Indian sun and traffic for the first time. The former was in Anzere for the French-speaking Swiss Universities’ annual winter school and the latter in Bangalore for meetings (including another winter school) connected with the UK-India Network on Interactive Technologies for the End-User. Both have been exciting, personally because of their novelty as experiences and professionally due to stimulating discussions … happily not dry business meetings. I will blog later in more detail about both.

I guess joy always has its pains: in the case of skiing, blisters on my shins; and in India, the nearly inevitable wobbly tummy!

People have been wonderful in both Switzerland and India, both those in the meetings themselves and those I’ve met along the way.

I knew a few of the Swiss people already, Denis and Pascal from a previous visit, but most were new, including Micheal, my ski buddy, who had been in Switzerland for a long time, but it was his first skiing too. Our ski instructor Rudy from Ecole Suisse de Ski et de Snowboard – Anzère was absolutely wonderful with seemingly endless patience as we practised again and again (including the odd tumble) things that to him were so natural … if you want to learn to ski, ask for Rudy! In the village the woman at the ski shop was also wonderful, helping to find the right boots and equipment for someone who hardly wears shoes normally, and when she realised how bad my shins had become, she christened me ‘Brave Shins’ :-/ I struggled to recognise her English accent until she explained she was brought up in Belgravia … it was just posh 🙂 However, the lady at the Anzere tourist information was my hero of the week, insisting on picking up special ‘second skin’ plasters from the pharmacy and bringing them to me at the hotel. Thanks to their ministrations my last day of skiing was blessedly pain free.

In India again so many wonderful people: Rama from HP who organized our demo day, the people on my Bootcamp team Ramprakash, Dinoop and Ramesh, and many, many others, not forgetting the drivers of ‘autos’, including the one who smiled all the time, but got so embarrassed when accosted by the begging transvestites at the traffic lights.

Bootcamp Team: Ramesh, Dinoop, me, Ramprakash
(photo by Ramprakash)

Bangalore dinner: me, Vijay, Dinesh, Sriram
(photo by Ramprakash)

a Bengaluru auto rickshaw
see more and movie at bengaluru-net.in

persistent URLs … pleeeease

I was just clicking through a link from my own 2000 publication list to an ACM paper1, and the link is broken!  So what is new? The web is full of broken links … but I hate to find one on my own site.  The URL appears to be one that is semantic (not one of those CMS “?nodeid=3179” web pages):

http://www.acm.org/pubs/citations/journals/tochi/2000-7-3/p285-dix/

At the time I used the link it was valid; however, the ACM have clearly changed their structure, as this kind of material is all now in the ACM digital library, but they did not leave permanent redirects in place.  This would be forgivable if the URL were for a transient news item, but TOCHI is intended to be an archival publication … and yet the URL is not regarded as persistent!  If ACM, probably the largest professional computing organisation in the world, cannot get this right, what hope is there for any of us?

I will fix the link, and nowadays tend to use ACM’s DOI link as this is likely to be more persistent; however, I can do this only because it is my own site.

So, if you are updating a site structure yourself … please, Please, PLeeeeEASE make sure you keep all those old links alive2!

  1. BTW the paper is:

    Dix, A., Rodden, T., Davies, N., Trevor, J., Friday, A., and Palfreyman, K. 2000. Exploiting space and location as a design framework for interactive mobile systems. ACM Trans. Comput.-Hum. Interact. 7, 3 (Sep. 2000), 285-321. DOI= http://doi.acm.org/10.1145/355324.355325

    and it is all about physical and virtual location … hmmm[back]

  2. For the ACM or other large sites this would be done using some data-driven approach, but if you are simply restructuring your own site and you are using an Apache web server, just add a .htaccess file to your web root and add Redirect directives in it mapping old URLs to new ones. For example, for the paper on the ACM site:

    Redirect /pubs/citations/journals/tochi/2000-7-3/p285-dix/ http://doi.acm.org/10.1145/355324.355325

    [back]

just hit search

For years I have heard anecdotal stories of how users are increasingly unaware of the URL itself (and certainly the term ‘web address’ is sometimes better).  I recall having a conversation at a university meeting (non-computing) and it soon became obvious that the term ‘browser’ was also not one they were familiar with, even though they of course used it daily.  I guess, like the mechanics of the car engine, the mechanics of the web are invisible.

I came across the Google Zeitgeist 2008 page that analyses the popular and the rising search terms of 2008.  The rising ones reveal things in the media: “sarah palin” well above “obama” in the global stats … if only Google searches were votes!  However, the ‘most popular’ searches reveal longer term habits.  For the UK the 10 most popular searches are:

  1. facebook
  2. bbc
  3. youtube
  4. ebay
  5. games
  6. news
  7. hotmail
  8. bebo
  9. yahoo
  10. jobs

Some of these terms, ‘games’, ‘news’ and ‘jobs’ (no Steve, not you), are generic categories … and suggest that people approach these from the search box, not a portal.  However, of these top 10, seven are simply the domain names of popular sites.  Instead of typing these into the address bar (which certainly on Firefox autocompletes if I type any I’ve visited before), many users just Google them (and I’m sure the same is true for LiveSearch and others).

I was told some years ago that AOL browsers swapped the relative sizes (and locations, I think) of the built-in search box and address bar on the assumption that their users rarely typed in URLs (although I knew of AOL users who accidentally typed URLs into the search box).  I also recall the company that used to sell net keywords that were used by Netscape (and possibly others) if you entered terms rather than a URL into the address bar.

… of course if I try that now … Firefox redirects me through Google “I’m Feeling Lucky” … of course

Incidentally, I came to this as I was trailing back the source of the, now shown to be incorrect, Sunday Times news story that said two Google searches used the same electricity as boiling an electric kettle.  This got challenged in a TechCrunch blog, refuted by Google, and was effectively (but not explicitly) retracted in a subsequent Times Online item.  The source turns out to be a junior Harvard physicist, Alex Wissner-Gross, whose own source was a blog by Rolf Kersten, one of the Sun Green Team (Sun the computer manufacturer, not Sun the newspaper!), so actually not an unreasonable basis.

In fact Rolf Kersten’s estimate, which was prepared for a talk in 2007, seemed to be based on sensible calculations, although he has recently posted a blog saying the figure was out by a factor of 35 … yes it actually takes 70 Google searches to boil that kettle.  Looking deeper the cause of the discrepancy appears to be the figure he used for the number of Google searches per day.  He took 2005 data about the size of the Google server farm and used a figure of 40 million searches per day.  Although Google did not publish their full workings in their response, it is clearly this figure of 40 million hits that was way too low for 2005 as a Feb 2001 Google press release quoted 60 million searches per day in 2000.  Actually with a moment’s reflection it is clear that 40 million hits per day (500 per second) would hardly have justified a major server farm and the figure is clearly in the billions.  However, it is surprisingly difficult to find the true figure and if you Google “google searches per day” you simply find lots of people asking the same question.  In fact, it was through looking for further Google press releases to find a more up-to-date figure that got me to the Zeitgeist page!
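As a quick back-of-envelope check on those figures (all the numbers below are the ones quoted above, none independently verified):

searches_per_day = 40_000_000                  # Kersten's 2005 assumption
per_second = searches_per_day / (24 * 60 * 60)
print(round(per_second))                       # ~463, i.e. roughly 500 per second

searches_per_kettle = 2 * 35                   # original claim of 2, corrected by a factor of 35
print(searches_per_kettle)                     # 70 searches to boil the kettle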

Eamonn Fitzgerald’s Rainy Day blog nicely lays out the timeline of this story and sees it as a triumph of the power of media consumers to challenge the authority of the press due to what Jay Rosen refers to as ‘audience atomization‘.   Fitzgerald also sees the paradox that the story itself was sourced from the somewhat broken sources on the internet; in the past the press would have perhaps used more authoritative sources … and as I noted a couple of years ago at a Memories for Life panel at the British Library, the move from BBC to YouTube could be read as mass democratisation … or simply signal the end of history.

There is another lesson though, one that I picked up in a blog “keeping track of history” not long after the Memories for Life meeting: just how hard it is to find pretty straightforward information on the web.  At that point I was after Tony Blair’s statement about the execution of Saddam Hussein; in this case I was trying to find out the number of Google search hits.  Neither is secret, proprietary or obscure, but both are difficult to track down.

… but we still trust that single hit of a search button