fixing hung iCal

iCal hung on a sync with Google calendars and kept hanging every time I restarted it, even after restarting the whole machine.

I found some advice on this in a few posts.

One, “Fix an iCal ‘application not responding’ occasional hang”, was more about occasional long pauses and suggested selecting “Reset Sync History” in “iSync » Preferences”. Another, “Fix an iCal hang due to system date reset”, suggested resetting the ‘lastHeartBeatDate’ in Library/Preferences/com.apple.iCal.plist. Neither worked, but prompted by the latter I used Time Machine (yawn, yawn, how do they make it sooooo sloooow) to restore copies of all the iCal plist files in Library/Preferences/, but again to no avail.

So several good suggestions, but none worked.

Happily I saw a comment lower down on “Fix an iCal hang due to system date reset” which suggested moving the complete ~/Library/Calendars folder out to the desktop and then copying the calendar files back in one by one after restarting iCal. I didn’t do this as such; instead, in ~/Library/Calendars there are a number of ‘Calendar Cache’ files and also a folder labelled ‘Calendar Sync Changes’. I removed these, restarted and … it works 🙂
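
For anyone wanting to script the same fix, here is a rough Python sketch of the steps I did by hand; it assumes the standard ~/Library/Calendars layout, moves things to the Desktop rather than deleting them (so they can be put back), and of course quit iCal first.

```python
from pathlib import Path
import shutil

# Quit iCal first; it rebuilds these caches on the next launch.
calendars = Path.home() / "Library" / "Calendars"
backup = Path.home() / "Desktop" / "Calendars-backup"
backup.mkdir(exist_ok=True)

# Move the 'Calendar Cache' files rather than deleting them.
for item in calendars.glob("Calendar Cache*"):
    shutil.move(str(item), str(backup / item.name))

# ... and the 'Calendar Sync Changes' folder, if present.
sync_changes = calendars / "Calendar Sync Changes"
if sync_changes.exists():
    shutil.move(str(sync_changes), str(backup / sync_changes.name))
```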

Hardly easy for the end user though :-/

Italian conferences: PPD10, AVI2010 and Search Computing

I got back from a trip to Rome and Milan last Tuesday. This included the PPD10 workshop that Aaron, Lucia, Sri and I had organised and the AVI 2010 conference, both at the University of Rome “La Sapienza”, and a day workshop on Search Computing at Milan Polytechnic.

PPD10

The PPD10 workshop on Coupled Display Visual Interfaces[1] followed on from a previous event, PPD08 at AVI 2008, and also a workshop on “Designing And Evaluating Mobile Phone-Based Interaction With Public Displays” at CHI 2008. The linking of public and private displays is something I’ve been interested in for some years, and it was exciting to see some of the kinds of scenarios discussed at Lancaster as potential futures some years ago now being implemented over a range of technologies. Many of the key issues and problems proposed then are still to be resolved, and new ones are arising, but certainly it seems the technology is ‘coming of age’. As well as much work filling in the space of interactions, there were also papers that pushed some of the existing dimensions/classifications; in particular, Rasmus Gude’s paper on “Digital Hospitality” stretched the public/private dimension by considering the appropriation of technology in the home by house guests. The full proceedings are available at the PPD10 website.

AVI 2010

AVI is always a joy, and AVI 2010 was no exception; a biennial, single-track conference with high-quality papers (20% acceptance rate this year), and always in lovely places in Italy with good food and good company! I first went to AVI in 1996, when it was in Gubbio, to give a keynote “Closing the Loop: modelling action, perception and information”, and have gone every time since — I always say that Stefano Levialdi is a bit like a drug pusher, the first experience for free and ever after you are hooked! The high spot this year was undoubtedly Hitomi Tsujita’s “Complete Fashion Coordinator”[2], a system for using social networking to help choose clothes to wear — partly just fun with a wonderful video, but also a very thoughtful mix of physical and digital technology.


[images from Complete Fashion Coordinator]

The keynotes were all great. Daniel Keim gave a really lucid state of the art in Visual Analytics (more later), and Patrick Lynch gave a fresh view of visual understanding based on many years’ experience, highlighting particularly some of the more immediate ‘gut’ reactions we have to interfaces. Daniel Wigdor gave an almost blow-by-blow account of work at Microsoft on developing interaction methods for next-generation touch-based user interfaces. His paper is a great methodological exemplar for researchers, combining very practical considerations, more principled design space analysis and targeted experimentation.

Looking more at the detail of Daniel’s work at Microsoft, it is interesting that he has a harder job than Apple’s interaction developers. While Apple can design the hardware and interaction together, MS as system providers need to deal with very diverse hardware, leading to a ‘least common denominator’ approach at the level of quite basic touch interactions. For walk-up-and-use systems, such as Microsoft Surface in bar tables, this means that users have a consistent experience across devices. However, I did wonder whether this approach, which is basically at the presentation/lexical level of Seeheim, was best, or whether it would be better to settle on some higher-level primitives more at the Seeheim dialogue level, thinking particularly of the way the iPhone turns pull-down menus from web pages into spinning selectors. For devices that people own, it may be that these more device-specific variants of common logical interactions allow a richer user experience.

The complete AVI 2010 proceedings (in colour or B&W) can be found at the conference website.

The very last session of AVI was a panel I chaired on “Visual Analytics: people at the heart of data” with Daniel Keim, Margit Pohl, Bob Spence and Enrico Bertini (in the order they sat at the table!).  The panel was prompted largely because the EU VisMaster Coordinated Action is producing a roadmap document looking at future challenges for visual analytics research in Europe and elsewhere.  I had been worried that it could be a bit dead at 5pm on the last day of the conference, but it was a lively discussion … and Bob served well as the enthusiastic but also slightly sceptical outsider to VisMaster!

As I write this, there is still time (just, literally weeks!) for final input into the VisMaster roadmap; if you would like a draft I’ll be happy to send you a PDF, and even happier if you give some feedback 🙂

Search Computing

I was invited to go to this one-day workshop and had the joy of travelling up on the train from Rome with Stu Card and his daughter Gwyneth.

The search computing workshop was organised by the SeCo project. This is a large single-site project (around 25 people for 5 years) funded as one of the EU’s ‘IDEAS Advanced Grants’ supporting ‘investigation-driven frontier research’. It is really good to see the EU funding work at the bleeding edge, as so many national and European projects end up being ‘safe’.

The term ‘search computing’ was entirely new to me, although it instantly brought several concepts to mind. In fact the principal focus of SeCo is the bringing together of information in deep web resources, including combining result rankings; in database terms, a form of distributed join over heterogeneous data sources.

The work had many personal connections, including work on concept classification using ODP data dating back to aQtive days, as well as onCue itself and Snip!t. It also has similarities with linked data in the semantic web world, though with crucial differences. SeCo’s service approach uses meta-descriptions of the services to add semantics, whereas linked data in principle includes a degree of semantics in the RDF data itself. Also the ‘join’ on services is on values and so uses a degree of run-time identity matching (Stu Card’s example was how to know that LA = ‘Los Angeles’), whereas linked data relies on URIs, so (again in principle) matching has already been done during data preparation. My feeling is that linking the two paradigms would be very powerful, and even for certain kinds of raw data, such as tables, external semantics seems sensible.

One of the real opportunities for both is to harness user interaction with data as an extra source of semantics. For example, for the identity matching issue, if a user linking two data sources notices that ‘LA’ and ‘Los Angeles’ are not recognised as the same, this can be added as part of the interaction to serve the user’s own purposes at that time, but in so doing it adds a special case that can be used for the benefit of future users.
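
To make the idea concrete, here is a toy sketch (the data, names and alias table are all invented for illustration; this is not SeCo’s actual machinery) of a value-based join in which identity matching happens at run time via an alias table, and in which a user-confirmed match such as LA = ‘Los Angeles’ can be folded back in for the benefit of later queries.

```python
# Two made-up 'services' whose results we want to join on a value.
flights = [{"dest": "LA", "price": 320}, {"dest": "NYC", "price": 180}]
hotels = [{"city": "Los Angeles", "stars": 4}, {"city": "New York", "stars": 3}]

# Known identity matches; user interaction can extend this table.
aliases = {"LA": "Los Angeles", "NYC": "New York"}

def same_entity(a: str, b: str) -> bool:
    """Run-time identity matching on values, via the alias table."""
    return a == b or aliases.get(a) == b or aliases.get(b) == a

def join(flights, hotels):
    """A naive value-based join across the two result sets."""
    return [(f, h) for f in flights for h in hotels
            if same_entity(f["dest"], h["city"])]

def add_user_match(value_a: str, value_b: str) -> None:
    """Record a match a user has confirmed, e.g. LA = 'Los Angeles'."""
    aliases[value_a] = value_b

print(join(flights, hotels))
```

The point is simply that the matching knowledge lives alongside the join and grows through use, rather than being baked into URIs beforehand.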

While SeCo is predominantly focused on search federation, the broader issue of using search as part of algorithmics is also fascinating. Traditional algorithmics assumes that knowledge is basically in code or rules and is applied to data. In contrast, we are seeing the rise of web algorithmics, where knowledge is garnered from vast volumes of data. For example, Gianluca Demartini at the workshop mentioned that his group had used the Google suggest API to extend keywords, and I’ve seen the same trick used previously[3]. To some extent this is like classic techniques of information retrieval, but whereas IR is principally focused on a closed document set, here the document set is being used to establish knowledge that can be used elsewhere. In work I’ve been involved with, both the concept classification and the folksonomy mining with Alessio apply this same broad principle.
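
The keyword-extension trick itself is roughly the pattern below; suggest() is a stand-in for whichever suggest/autocomplete API is being called (this is not Gianluca’s actual code, just the general shape of the idea).

```python
from typing import Callable, List, Set

def extend_keywords(seed: str, suggest: Callable[[str], List[str]]) -> Set[str]:
    """Extend a seed keyword using a suggestion service as a source of knowledge."""
    seed_words = set(seed.lower().split())
    extended = set(seed_words)
    for completion in suggest(seed):
        for word in completion.lower().split():
            if word not in seed_words:
                extended.add(word)
    return extended

# A canned stand-in for the suggestion service, purely for illustration.
def fake_suggest(prefix: str) -> List[str]:
    canned = {"visual analytics": ["visual analytics tools",
                                   "visual analytics software",
                                   "visual analytics conference"]}
    return canned.get(prefix, [])

print(extend_keywords("visual analytics", fake_suggest))
# e.g. {'visual', 'analytics', 'tools', 'software', 'conference'}
```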

The slides from the workshop are appearing (but not all there yet!) at the workshop web page on the SeCo site.

  1. yes, I know this doesn’t give ‘PPD’; it stands for “public and private displays”
  2. Hitomi Tsujita, Koji Tsukada, Keisuke Kambara, Itiro Siio, Complete Fashion Coordinator: A support system for capturing and selecting daily clothes with social network, Proceedings of the Working Conference on Advanced Visual Interfaces (AVI 2010), pp. 127–132.
  3. The Yahoo! Related Suggestions API offers a similar service.

the plague of bugs

Like some Biblical locust swarm, every attempt to do anything is thwarted by the dead weight of innumerable bugs! This time I was trying … and failing … to upload a Word file into Google Docs. I uploaded the docx file and it said the file was unreadable; I tried saving it as .doc, and when that failed I created an RTF file. Amazingly, from a 1 Meg Word file the RTF was 66 Meg, but very, very slowly Google Docs did upload the file and, when it was eventually all uploaded …

To be fair, the same document imports pretty badly into Pages (all the headings disappear). I think this is because it is originally a 2003 Word file and gets corrupted when the new Word reads it.

Now I have griped before about backward compatibility issues for Word, and in general about the lack of robustness in many leading products, and to add to my woes, for the last month or so (I guess after a software update) Word has decided not to show its formatting menus on an opened document unless I first hide them, then show them, and then maximise the window. Mostly these things are merely annoying, sometimes they really block work, and they always waste time and destroy the flow of work.

However, rather than grousing once again (well, I already have a bit), I am trying to make sense of this. For some time it has been apparent that software is fundamentally breaking down, in that with every new version there is minimal new useful functionality, but more bugs. This may be simply an issue of scale, of the training of programmers, or of the nature of development processes. Indeed, in the talk I gave a bit over a year ago to PPIG, “as we may code”, I noted that coding in the 21st century seems to be radically different, more about finding tricks and community know-how and less about problem solving.

Whatever the reason, I don’t think the Biblical plague of bugs is simply due to laziness or indifference on the part of large vendors such as Microsoft and Adobe, but is symptomatic of a deeper crisis in software development, certainly where there is a significant user interface.

Maybe this is simply an inevitable consequence of scale, but more optimistically I wonder if there are new ways of coding, new paradigms or new architectural models.  Can 2010 be the decade when software is reborn?

tech talks: brains, time and no time

Just scanning a few Google Tech Talks on YouTube. I don’t visit it often, but followed a link from Rob Style’s Twitter. I find the videos a bit slow, so I tend to flick through with the sound off, really wishing they had fast-forward buttons like a DVD, as it is quite hard to pull the little slider back and forth.

One talk was by Stuart Hameroff on A New Marriage of Brain and Computer. He is the guy who works with Penrose on the possibility that quantum effects in microtubules may be the source of consciousness. I noticed that he used calculations for computational capacity based on traditional neuron-based models that are very similar to my own calculations some years ago in “the brain and the web”, where I worked out that the memory and computational capacity of a single human brain is very similar to those of the entire web. Hameroff then went on to say that there are an order of magnitude more microtubules (sub-cellular structures, with many per neuron), so the traditional calculations do not hold!

Microtubules are fascinating things; they are like little Meccano sets inside each cell. It is these microtubules that, during cell division, stretch out straight the chromosomes, which are normally tangled up in the nucleus. Even stranger, those fluid movements of an amoeba gradually pushing out pseudopodia are actually made by mechanical structures composed of microtubules, only looking so organic because of the cell membrane – rather like a robot covered in latex.

[picture of an amoeba]

The main reason for going to the tech talks was one by Steve Souders, “Life’s Too Short – Write Fast Code”, which has lots of tips on speeding up web pages, including allowing JavaScript files to download in parallel. I was particularly impressed by the quantification of the costs of delays on web pages, down to 100 ms!

This is great. Partly because of my long interest in time and delays in HCI. Partly because I want my own web scripts to be faster, and I’ve already downloaded the Yahoo! YSlow plugin for Firefox, which helps diagnose causes of slow pages. And partly because I get so frustrated waiting for things to happen, both on the web and on the desktop … and why oh why does it take a good minute to get a WiFi connection … and why doesn’t YouTube introduce better controls for skimming videos.

… and finally, because I’d already spent too much time skimming the tech talks, I looked at one last talk: David Levy, “No Time To Think” … how we are all so rushed that we have no time to really think about problems, not to mention life[1]. At least that’s what I think it said, because I skimmed it rather fast.

  1. see also my own discussion of Slow Time

just hit search

For years I have heard anecdotal stories of how users are increasingly unaware of the URL itself (and certainly the term ‘web address’ is sometimes better). I recall having a conversation at a university meeting (non-computing) and it soon became obvious that the term ‘browser’ was also not one they were familiar with, even though they of course used one daily. I guess, like the mechanics of the car engine, the mechanics of the web are invisible.

I came across the Google Zeitgeist 2008 page, which analyses the popular and the rising search terms of 2008. The rising ones reveal things in the media: “sarah palin” is way up there above “obama” in the global stats … if Google searches were votes! However, the ‘most popular’ searches reveal longer-term habits. For the UK the 10 most popular searches are:

  1. facebook
  2. bbc
  3. youtube
  4. ebay
  5. games
  6. news
  7. hotmail
  8. bebo
  9. yahoo
  10. jobs

Some of these terms, ‘games’, ‘news’ and ‘jobs’ (no Steve, not you), are generic categories … which suggests that people approach these from the search box, not a portal. However, of this top 10, seven are simply the domain names of popular sites. Instead of typing the domain name into the address bar (which certainly on Firefox autocompletes if I type any I’ve visited before), many users just Google it (and I’m sure the same is true for Live Search and others).

I was told some years ago that AOL browsers swapped the relative sizes (and locations, I think) of the built-in search box and address bar on the assumption that their users rarely typed in URLs (although I knew of AOL users who accidentally typed URLs into the search box). I also recall the company that used to sell ‘net keywords’, which were used by Netscape (and possibly others) if you entered terms rather than a URL into the address bar.

… of course, if I try that now … Firefox redirects me through Google’s “I’m Feeling Lucky” … of course

Incidentally, I came to this as I was tracing back the source of the now-shown-to-be-incorrect Sunday Times news story that said two Google searches used the same electricity as boiling an electric kettle. This was challenged in a TechCrunch blog, refuted by Google, and effectively (but not explicitly) retracted in a subsequent Times Online item. The source turns out to be a junior Harvard physicist, Alex Wissner-Gross, whose own source was a blog by Rolf Kersten, one of the Sun Green Team (Sun the computer manufacturer, not the Sun the newspaper!), so actually not an unreasonable basis.

In fact Rolf Kersten’s estimate, which was prepared for a talk in 2007, seemed to be based on sensible calculations, although he has recently posted a blog saying the figure was out by a factor of 35 … yes, it actually takes 70 Google searches to boil that kettle. Looking deeper, the cause of the discrepancy appears to be the figure he used for the number of Google searches per day. He took 2005 data about the size of the Google server farm and used a figure of 40 million searches per day. Although Google did not publish their full workings in their response, it is clearly this figure of 40 million hits that was way too low for 2005, as a Feb 2001 Google press release quoted 60 million searches per day in 2000. Actually, with a moment’s reflection it is clear that 40 million hits per day (about 500 per second) would hardly have justified a major server farm, and the figure is clearly in the billions. However, it is surprisingly difficult to find the true figure, and if you Google “google searches per day” you simply find lots of people asking the same question. In fact, it was looking for further Google press releases to find a more up-to-date figure that got me to the Zeitgeist page!
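
Just to make the back-of-envelope arithmetic explicit (the figures are the ones quoted above, so this is only as rough as they are):

```python
SECONDS_PER_DAY = 24 * 60 * 60              # 86,400

# The searches-per-day figure Kersten used, which was too low for 2005.
low_estimate = 40_000_000
print(low_estimate / SECONDS_PER_DAY)       # ~463, i.e. roughly 500 per second

# If the whole factor-of-35 error lies in that figure, the implied rate is:
corrected = 35 * low_estimate               # 1.4 billion searches per day
print(corrected / SECONDS_PER_DAY)          # ~16,000 per second

# ... and applied to the original claim of 2 searches per kettle:
print(2 * 35)                               # 70 searches to boil the kettle
```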

Eamonn Fitzgerald’s Rainy Day blog nicely lays out the timeline of this story and sees it as a triumph of the power of media consumers to challenge the authority of the press, due to what Jay Rosen refers to as ‘audience atomization’. Fitzgerald also sees the paradox that the story itself was sourced from the somewhat broken sources on the internet; in the past the press would perhaps have used more authoritative sources … and, as I noted a couple of years ago at a Memories for Life panel at the British Library, the move from BBC to YouTube could be read as mass democratisation … or simply signal the end of history.

There is another lesson though, one that I picked up in a blog post, “keeping track of history”, not long after the Memories for Life meeting: just how hard it is to find pretty straightforward information on the web. At that point I was after Tony Blair’s statement about the execution of Saddam Hussein; in this case I was trying to find the number of Google searches per day. Neither is secret, proprietary or obscure, but both are difficult to track down.

… but we still trust that single hit of a search button

Google’s Vint Cerf avoiding responsibility

Yesterday morning I was on my way into Lancaster and listening to the Today programme. Google’s ‘internet evangelist’ Vint Cerf was being interviewed by John Humphrys, and the topic was ‘should the internet be regulated like other media’.[1]

Not surprisingly, Vint Cerf thought not, but I was surprised at how well he avoided actually saying so. John Humphrys is experienced, and politicians fear him in these early-morning interviews, but to be honest he was completely outclassed by Vint Cerf, who sidestepped, avoided and generally never addressed the question.

Web 2.0 was at the heart of the issue. With end-user content now dominating the internet, do service providers such as YouTube (of course owned by Google) have any responsibility for the kinds of material hosted?

This was in the context of videos of ‘happy slappers’ and other violent attacks being posted, but more generally the point that TV in many countries is limited in the kinds of material it can show, particularly early in the evening when children are more likely to be watching, by a mixture of voluntary and statutory codes. Why not the internet?

Vint Cerf repeatedly reiterated the same message, “Google is law-abiding”: if content is not legal, it is removed. Implicitly the message was “if it is not illegal it is OK”, but, as I said, he carefully avoided saying so.

The closest he came to actually addressing the question was when John Humphrys suggested that technologies could be misused, like research for atomic power being used for nuclear weapons (strange, I thought it went the other way round?). Vint Cerf’s response was the standard neutrality-of-technology stance: that the makers of roads are not responsible for car deaths or strip development … the same argument used by arms dealers, manufacturers of gas-guzzling cars, and scientists in every repressive regime in recent history.

According to Cerf, if you are a worried parent you need to buy good filtering software; the solution is at the edges of the net … and of course does not involve the likes of Google … which, it appears from the context, is at the centre?

Now there are very good arguments against regulation, both ethical (freedom of expression) and practical (volume of material, international access). The disappointing, and worrying, aspect of this interview was that Google’s key public face was unwilling or unable to constructively enter the debate at all.

  1. “The 0810 Interview: Godfather of the Internet”, BBC Radio 4, Today Programme, Wednesday 29th August 2007