fixing hung iCal

iCal hung on a sync with Google calendars and kept hanging every time I restarted it, even after restarting the whole machine.

I found some advice on this in a few posts.

One, “Fix an iCal ‘application not responding’ occasional hang”, was more about occasional long pauses and suggested selecting “Reset Sync History” in “iSync » Preferences”.  Another, “Fix an iCal hang due to system date reset”, suggested resetting the ‘lastHeartBeatDate’ in Library/Preferences/com.apple.iCal.plist.  Neither worked, but prompted by the latter I used Time Machine (yawn yawn, how do they make it sooooo sloooow) to restore copies of all the iCal plist files in Library/Preferences/, but again to no avail.

So several good suggestions, but none worked.

Happily I saw a comment lower down on “Fix an iCal hang due to system date reset” which suggested moving the complete ~/Library/Calendars folder out to the desktop and then recopying the calendar files in one by one after restarting iCal.  I didn’t do exactly this; instead, I noticed that in ~/Library/Calendars there are a number of ‘Calendar Cache’ files and also a folder labelled ‘Calendar Sync Changes’.  I removed these, restarted and … it works 🙂

Hardly easy for the end user though :-/

Struggling with Heidegger

Heidegger and hammers have been part of HCI’s conceptualisation for pretty much as long as I can recall.  I think I first heard the words at some sort of day workshop in the late 1980s; the hammer example as used in HCI annoyed me even then, so let’s start with hammers.

hammers

I should explain that problems with the hammer example are not my current struggles with Heidegger!  For the hammer it is just that Heidegger’s ‘ready-to-hand’ is often confused with ‘walk up and use’.  In Heidegger, ready-to-hand refers to the way one is focused on the nail, or the wood to be joined, not the hammer itself:

“The work to be produced is the “towards which” of such things as the hammer, the plane, and the needle” (Being and Time1, p.70/99)

To be ‘ready-to-hand’ like this typically requires familiarity with the equipment (another big Heidegger word!), and is very different from the way a cash machine or tourist information system should be in some ways accessible independent of prior knowledge (or at least require only generic knowledge and skills).

My especial annoyance with the hammer example stems from the fact that my father was a carpenter and I reckon it took me around ten years to learn how to use a hammer properly2!  Even holding it properly is not obvious; look at the picture.

There is a hand-sized depression in the middle.  If you have read Norman’s POET you will think, “ah yes, perceptual affordance”, and grasp it like this:

But no, that is not the way to hold it!  If you try to use it like this you end up using the strength of your arm to knock in the nail and not the weight of the hammer.

Give it to a child, surely the ultimate test of ‘walk up and use’, and they often grasp the head like this.

In fact this is quite sensible for a child, as a ‘proper’ grip would put too much strain on their wrist.  Recall that Gibson’s definition of affordance was relational3, about the ecological fit between the object and the potential actions, and the actions depend on who is doing the acting.  For a small child with weaker arms the hammer probably only affords use at all with this grip.

In fact the ‘proper’ grip is to hold it quite near the end, where you can use the maximum swing of the hammer to make most use of its weight and angular momentum:

Anyway, I think maybe Heidegger knew this even if many who quote him don’t!

Heidegger

OK, so it’s all right me complaining about other people misusing Heidegger, but I am in the middle of writing one of the chapters for TouchIT and so need to make sure I don’t get it wrong myself … and there my struggles begin.  I need to write about ready-to-hand and present-at-hand.  I thought I understood them, but always it has been from secondary sources, and as I sat with Being and Time in one hand, my Oxford Companion to Philosophy in the other and various other books in my teeth … I began to doubt.

First of all what I thought the distinction was:

  • ready-to-hand — when you are using the tool it is invisible to you; you just focus on the work to be done with it
  • present-at-hand — when there is some sort of breakdown (the hammer head is loose, or you don’t have the right tool to hand) and so you start to focus on the tools themselves rather than on the job at hand

Scanning the internet, this is certainly what others think, for example blog posts at 251 philosophy and Matt Webb at Berg4.  Koschmann, Kuutti and Hickman produced an excellent comparison of breakdown in Heidegger, Leont’ev and Dewey5, and from this it looks as though the above distinction maybe comes from Dreyfus’ summary of Heidegger — but again I don’t have a copy of Dreyfus’ “Being-in-the-World”, so I’m not certain.

Now this is an important distinction, and one that Heidegger certainly makes.  The first part is very clearly what Heidegger means by ready-to-hand:

“The peculiarity of what is proximally to hand is that, in its readiness-to-hand, it must, as it were, withdraw … that with which we concern ourselves primarily is the work …” (B&T, p.69/99)

The second point Heidegger also makes at length, distinguishing at least three kinds of breakdown situation.  It just seems a lot less clear whether ‘present-at-hand’ is really the right term for it.  Certainly the ‘present-at-hand’ quality of an artefact becomes foregrounded during breakdown:

“Pure presence-at-hand announces itself in such equipment, but only to withdraw to the readiness-to-hand with which one concerns oneself — that is to say, of the sort of thing we find when we put it back into repair.” (B&T, p.73/103)

But the preceding sentence says:

“it shows itself as an equipmental Thing which looks so and so, and which, in its readiness-to-hand as looking that way, has constantly been present-at-hand too.” (B&T, p.73/103)

That is, present-at-hand is not so much in contrast to ready-to-hand as, in a sense, ‘there all along’; the difference is that during breakdown the presence-at-hand becomes foregrounded.  Indeed, when ‘present-at-hand’ is first introduced, Heidegger appears to be using it as a binary distinction between Dasein ((human) entities that exist and ponder their existence) and other entities such as a table, rock or tree (p.42/67).  The contrast is not so much between ready-to-hand and present-at-hand, but between ready-to-hand and ‘just present-at-hand’ (p.71/101) or ‘Being-just-present-at-hand-and-no-more’ (p.73/103).  For Heidegger it seems not so much that ‘ready-to-hand’ stands in opposition to ‘present-at-hand’; it is just more significant.

To put this in context, traditional philosophy had focused exclusively on the more categorically defined aspects of things as they are in the world (‘existentia’/present-at-hand), whilst ignoring the primary way they are encountered by us (Dasein, real knowing existence) as ready-to-hand, invisible in their purposefulness.  Heidegger seeks to redress this.

“If we look at Things just ‘theoretically’, we can get along without understanding readiness-to-hand.” (B&T p.69/98)

Heidegger wants to avoid the speculation of previous science and philosophy.  Although it is not a Heidegger word, I use ‘speculation’ here with all of its connotations: pondering at a distance, without commitment, like spectators at a sports stadium looking in at something distant and other.  In contrast, ready-to-hand suggests commitment, being actively ‘in the world’, and even when Heidegger talks about those moments when an entity ceases to be ready-to-hand and is seen as present-at-hand, he uses the term circumspection — a casting of the eye around, so that the Dasein, the person, is in the centre.

So present-at-hand is simply the mode of being of the entities that are not Dasein (not aware of their own existence); but our primary mode of experience of them, and thus in a sense the essence of their real existence, is when they are ready-to-hand.  I note Roderick Munday’s useful “Glossary of Terms in Being and Time” highlights just this broader sense of present-at-hand.

Maybe the confusion arises because Heidegger’s concern is phenomenological, and so when an artefact is ready-to-hand and its presence-to-hand ‘withdraws’, in a sense it is no longer present-to-hand as it is no longer a phenomenon; and yet he also seems to hold a foot in realism, and so in another sense it is still present-to-hand.  In discussing this tension between realism and idealism in Heidegger, Stepanich6 distinguishes present-at-hand and ready-to-hand from presence-to-hand and readiness-to-hand — however no-one else does this, so maybe that is a little too subtle!

To end this section (almost) with Heidegger’s words, a key statement, often quoted, seems to say precisely what I have argued above, or maybe precisely the opposite:

“Yet only by reason of something present-at-hand ‘is there’ anything ready-to-hand.  Does it follow, however, granting this thesis for the nonce, that readiness-to-hand is ontologically founded upon presence-at-hand?” (B&T, p.71/101)

What sort of philosopher makes a key point through a rhetorical question?

So, for TouchIT, maybe my safest course is to follow the example of the Oxford Companion to Philosophy, which describes ready-to-hand but circumspectly never mentions present-at-hand at all?

and anyway what’s wrong with …

On a last note, there is another confusion, or maybe mistaken attitude, that seems to be common when referring to ready-to-hand.  Heidegger’s concern was ontology, understanding the nature of being, and so he asserted the ontological primacy of the ready-to-hand, especially in light of the previously dominant concerns of philosophy.  However, in HCI, where we are interested not in the philosophical question but in the pragmatic one of utility, usability and experience, Heidegger is often misapplied as a kind of fetishism of engagement, as if everything should be ready-to-hand all the time.

Of course for many purposes this is correct: as I type I do not want to be aware of the keys I press, nor even of the pages of the book that I turn.

Yet there is also merit in breaking this engagement, to encourage reflection and indeed the circumspection that Heidegger discusses.  Gaver et al.’s focus on ambiguity in design7 is often precisely to encourage that reflection and questioning, bringing to the foreground things that were once background.

Furthermore, as HCI practitioners and academics we need both to take seriously the ready-to-hand-ness of effective design and also (just as Heidegger is doing) to actually look at the ready-to-hand-ness of things, seeing them and their use rather than taking them for granted.  I constantly strive to find ways to become aware of the mundane, and offer students tools for estrangement to look at the world askance8.

“To lay bare what is just present-at-hand and no more, cognition must first penetrate beyond what is ready-to-hand in our concern.” (B&T, p.71/101)

This ability to step out and be aware of what we are doing is precisely the quality that Schön recognises as being critical for the ‘Reflective Practitioner’.  Indeed, my practical advice on using the hammer in the footnotes below comes precisely through reflection on hammering, and on breakdowns in hammering, not through the times when the hammer was ready-to-hand.

Heidegger is indeed right that our primary existence is being in the world, not abstractly viewing it from afar.  And yet, sometimes also, just as Heidegger himself did as he pondered and wrote about these issues, one of our crowning glories as human beings is precisely that we are able also in a sense to step outside ourselves and look in wonder.

  1. In common with much of the literature, the page references to Being and Time are of the form p.70/99, where the first number refers to the page in the original German edition (which I have not read!) and the second to the page in Macquarrie and Robinson’s translation of Being and Time, published by Blackwell.[back]
  2. Practical hammering – a few tips: the key thing is to focus on making sure the face of the hammer is perpendicular to the nail; if there is a slight angle the nail will bend.  For thin oval wire nails, if one does bend, do not knock the nail back upright; most likely it will simply bend again and then snap.  Instead, simply hit the head of the nail while it is still bent, but keeping the hammer face perpendicular to the nail, not the hole.  So long as the nail has cut any depth of hole it will simply follow its own path and straighten of its own accord.[back]
  3. James Gibson. The Ecological Approach to Visual Perception[back]
  4. Matt Webb’s post appears to be quoting Paul Dourish’s “Where the Action Is”, but I must have lent my copy to someone, so I’m not sure if this is really what Paul thinks.[back]
  5. Koschmann, T., Kuutti, K. & Hickman, L. (1998). The Concept of Breakdown in Heidegger, Leont’ev, and Dewey and Its Implications for Education. Mind, Culture, and Activity, 5(1), 25-41. doi:10.1207/s15327884mca0501_3[back]
  6. Lambert Stepanich. “Heidegger: Between Idealism and Realism”, The Harvard Review of Philosophy, Vol. 1, Spring 1991.[back]
  7. Bill Gaver, Jacob Beaver, and Steve Benford, 2003. Ambiguity as a resource for design. CHI ’03.[back]
  8. see previous posts on “mirrors and estrangement” and “the ordinary and the normal”[back]

Names, URIs and why the web discards 50 years of computing experience

Names and naming have always been a big issue both in computer science and philosophy, and a topic I have posted on before (see “names – a file by any other name”).

In computer science, and in particular programming languages, a whole vocabulary has arisen to talk about names: scope, binding, referential transparency.  As in philosophy, it is typically the association between a name and its ‘meaning’ that is of interest.  Names and words, whether in programming languages or day-to-day language, are what philosophers call ‘intentional’: they refer to something else.  In computer science the ‘something else’ is typically some data or code or a placeholder/variable containing data or code, and the key question of semantics or ‘meaning’ is about how to identify which variable, function or piece of data a name refers to in a particular context at a particular time.

The emphasis in computing has tended to be about:

(a) Making sure names have unambiguous meaning when looking locally inside code. Concerns such as referential transparency, avoiding dynamic binding and the deprecation of global variables are about this.

(b) Putting boundaries on where names can be seen/understood, both as a means to ensure (a) and also as part of encapsulation of semantics in object-based languages and abstract data types.

However, there has always been a tension between clarity of intention (in both the normal and philosophical sense) and abstraction/reuse. If names are totally unambiguous then it becomes impossible to say general things. Without a level of controlled ambiguity in language a legal statement such as “if a driver exceeds the speed limit they will be fined” would need to be stated separately for every citizen. Similarly in computing when we write:

function f(x) { return (x+1)*(x-1); }

The meaning of x is different when we use it in ‘f(2)’ or ‘f(3)’, and must be so to allow ‘f’ to be used generically.  Crucially, there is no internal ambiguity: the two ‘x’s refer to the same thing within a particular invocation of ‘f’, but the precise meaning of ‘x’ for each invocation is achieved by external binding (the argument list ‘(2)’).

Come the web and URLs and URIs.

Fiona@lovefibre was recently making a test copy of a website built using WordPress.  In a pure HTML website this is easy (so long as you have used relative or site-relative links within the site): you just copy the files and put them in the new location and they work 🙂  Occasionally a more dynamic site does need to know its global name (URL), for example if you want to send a link in an email, but this can usually be achieved using a configuration file.  For example, there is a development version of Snip!t at cardiff.snip!t.org (rather than www.snipit.org), and there is just one configuration file that needs to be changed between this test site and the live one.
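
As an illustration, such a configuration file can be as simple as defining the site’s global name in one place (a minimal sketch; the constant and helper names are hypothetical, not the actual Snip!t code):

<?php
// config.php: the only place the site's global name appears
// (SITE_URL and absolute_url are illustrative names, not Snip!t's own)
define('SITE_URL', 'http://www.snipit.org');

// everywhere else builds absolute links from the one constant
function absolute_url($path) {
    return SITE_URL . '/' . ltrim($path, '/');
}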

Similarly, in a pristine WordPress install there is just such a configuration file and one or two database entries.  However, as soon as it has been used to create a site, the database content becomes filled with URLs.  Some are in clear locations, but many are embedded within HTML fields or serialised plugin options.  Copying and moving the database requires a series of SQL updates with string replacements matching the old site name and replacing it with the new — both tedious and needing extreme care not to corrupt the database in the process.
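
To give a flavour, the updates look something like this sketch (table and column names are from a standard WordPress schema; note that a naive textual replace like this will corrupt serialised plugin options, since PHP serialised strings embed their byte length):

<?php
// sketch only: blanket search-and-replace of the site URL in a WordPress database
$old = 'http://www.example.org/blog';   // hypothetical old site name
$new = 'http://test.example.org/blog';  // hypothetical new site name
$db  = new mysqli('localhost', 'user', 'password', 'wordpress');
$db->query("UPDATE wp_options SET option_value = REPLACE(option_value, '$old', '$new')");
$db->query("UPDATE wp_posts   SET post_content = REPLACE(post_content, '$old', '$new')");
$db->query("UPDATE wp_posts   SET guid         = REPLACE(guid,         '$old', '$new')");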

Is this just a case of WordPress being poorly engineered?

In fact I feel it is more a problem endemic in the web, and driven largely by the URL.

Recently I was experimenting with Firefox extensions.  Being a good 21st-century programmer I simply found an existing extension that was roughly similar to what I was after and started to alter it.  First of course I changed its name, and then found I needed to make changes through pretty much every file in the extension, as knowledge of the extension name seemed to permeate to the lowest level of the code.  To be fair, XUL has mechanisms to achieve a level of encapsulation, introducing local URIs through the ‘chrome:’ naming scheme, and, having been through the process once, I maybe understand a bit better how to design extensions to make them less reliant on the external name, and also which names need to be changed and which are more like the ‘x’ in the ‘f(x)’ example.  However, despite this, the experience was very different from the levels of encapsulation I have learnt to take for granted in traditional programming.

Much of the trouble resides with the URL.  Going back to the two issues of naming, the URL focuses strongly on (a): making the name unambiguous by having a single universal namespace.  URLs are a bit like saying “let’s not just refer to ‘Alan’, but ‘the person with UK National Insurance Number XXXX’ so we know precisely who we are talking about”.  Of course this focus on uniqueness of naming has a consequential impact on generality and abstraction.  There are many visitors on Tiree over the summer, and maybe one day I meet one at the shop and then a few days later pass the same person out walking; I don’t need to know the person’s NI number or URL in order to say it was the same person.

Back to Snip!t: over the summer I spent some time working on the XML-based extension mechanism.  As soon as these extensions became even slightly complex I found URLs sneaking in, just like the WordPress database 🙁  The use of namespaces in the XML file can reduce this by at least limiting full URLs to the XML header, but, still, embedded in every XML file are un-abstracted references … and my pride in keeping the test site and live site near identical was severely dented1.
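
For example, an extension file might declare the URL once as a namespace and use a short prefix thereafter (purely illustrative; this is not the actual Snip!t extension format):

<?xml version="1.0"?>
<!-- the full URL appears once, in the namespace declaration -->
<extension xmlns:sn="http://www.snipit.org/2010/extension">
  <sn:service name="example">
    <!-- elements inside use the short sn: prefix, not full URLs -->
  </sn:service>
</extension>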

In the years when the web was coming into being the Hypertext community had been reflecting on more than 30 years of practical experience, embodied particularly in the Dexter Model2. The Dexter model and some systems, such as Wendy Hall’s Microcosm3, incorporated external linkage; that is, the body of content had marked hot spots, but the association of these hot spots to other resources was in a separate external layer.

Sadly HTML opted for internal links in anchor and image tags in order to make HTML files self-contained, a pattern replicated across web technologies such as XML and RDF.  At a practical level this is (i) why it is hard to have a single anchor link to multiple things, as was common in early hypertext systems such as Intermedia, and (ii), as Fiona found, a real pain for maintenance!

  1. I actually resolved this by a nasty ‘hack’: internal functions alias the full site name when it is encountered, treating it as if it referred to the test site — very cludgy![back]
  2. Halasz, F. and Schwartz, M. 1994. The Dexter hypertext reference model. Commun. ACM 37, 2 (Feb. 1994), 30-39. DOI= http://doi.acm.org/10.1145/175235.175237[back]
  3. Hall, W., Davis, H., and Hutchings, G. 1996 Rethinking Hypermedia: the Microcosm Approach. Kluwer Academic Publishers.[back]

Apache: pretty URLs and rewrite loops

[another techie post – a problem I had and can see that other people have had too]

It is common in various web frameworks to pass pretty much everything through a central script using an Apache .htaccess file and mod_rewrite.  For example, enabling permalinks in a WordPress blog generates an .htaccess file like this:

RewriteEngine On
RewriteBase /blog/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
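
Inside the central script, the originally requested path is still available to dispatch on; in PHP it can be recovered something like this (a sketch, not the actual WordPress code):

<?php
// index.php: hypothetical front controller
// after the rewrite, the original request is still visible to the script
$path = isset($_SERVER['PATH_INFO'])
      ? $_SERVER['PATH_INFO']                              // e.g. "/an/example"
      : parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);  // fallback
// ... look up $path and generate the corresponding page ...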

I use similar patterns for various sites such as vfridge (see recent post “Phoenix rises”) and Snip!t.  For Snip!t, however, I was using not a local .htaccess file, but an AliasMatch in httpd.conf, which meant I needed to ask Fiona every time I wanted to make a change (as I can never remember the root passwords!).  It seemed easier (even if slightly less efficient) to move this to a local .htaccess file:

RewriteEngine On
RewriteBase /
RewriteRule ^(.*)$ code/top.php/$1 [L]

The intention is to map “/an/example?args” into “/code/top.php/an/example?args”.

Unfortunately this resulted in a “500 internal server error” page, and messages in the Apache error log saying there were too many internal redirects.  This seems to be a common problem reported in forums (see here, here and here).  The reason for it is that .htaccess files are encountered very late in Apache’s processing, and so anything rewritten by the rules gets thrown back into Apache’s processing almost as if it were a fresh request.  While the “[L]” (last) flag says “don’t execute any more rules”, this means “no more rules on this pass”; when Apache gets back to the .htaccess on the fresh pass the rule is encountered again and again, leading to an infinite loop “/code/top.php/code/top.php/…/code/top.php/an/example?args”.

Happily, the mod_rewrite designers thought of this, and there is an additional “[NS]” (nosubreq) flag that says “only use this rule on the first pass”.  The mod_rewrite documentation for RewriteRule in Apache 1.3, 2.0 and 2.3 says:

Use the following rule for your decision: whenever you prefix some URLs with CGI-scripts to force them to be processed by the CGI-script, the chance is high that you will run into problems (or even overhead) on sub-requests. In these cases, use this flag.

I duly added the flag:

RewriteRule ^(.*)$ code/top.php/$1 [L,NS]

This should work, but doesn’t.  I’m not sure why except that the Apache 2.2 documentation for NS|nosubreq reads:

NS|nosubreq

Use of the [NS] flag prevents the rule from being used on subrequests. For example, a page which is included using an SSI (Server Side Include) is a subrequest, and you may want to avoid rewrites happening on those subrequests.

Images, javascript files, or css files, loaded as part of an HTML page, are not subrequests – the browser requests them as separate HTTP requests.

This is identical to the documentation for 1.3, 2.0 and 2.3 except that the quote about “URLs with CGI-scripts” is singularly missing.  I can’t find anything that says so, but my guess is that there was some bug (feature?) introduced in 2.2 that is being fixed in 2.3.

WordPress is immune from the infinite loop because the directive “RewriteCond %{REQUEST_FILENAME} !-f” says “if the file exists, use that without rewriting”.  As “index.php” is a file, the rule does not rewrite a second time.  However, the layout of my files means that I sometimes have an actual file in the pseudo location (e.g. /an/example really exists).  I could have reorganised the complete directory structure … but then I would still be fixing all the broken links now!

Instead I simply added an explicit “please don’t rewrite my top.php script” condition:

RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_URI}  !^/code/top.php/.*
RewriteRule ^(.*)$ code/top.php/$1 [L,NS]

I suspect that this will be unnecessary when Apache upgrades to 2.3, but for now … it works 🙂

fix for toString error in PHPUnit

I was struggling to get PHPUnit to run under PHP 5.2.9. I’ve only used PHPUnit a little, so may have simply got something wrong, but I kept getting the error:

Catchable fatal error: Object of class AbcTest could not be converted to string in {dir}/PHPUnit/Framework/TestFailure.php on line 98

The error happens in the PHPUnit_Framework_TestFailure::toString method, which tries to implicitly convert a test case to a string.

The class AbcTest is my test case, which it is trying to display following a test failure.  PHPUnit test cases all extend PHPUnit_Framework_TestCase, and while this has a toString method it does not have the ‘magic method’ __toString required by PHP 5.2 onwards.
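
A minimal illustration of the PHP 5.2 behaviour (standalone PHP, nothing to do with PHPUnit itself):

<?php
class Plain   { public function toString()   { return 'plain'; } }
class Magical { public function __toString() { return 'magical'; } }

echo new Magical();  // prints "magical"
echo new Plain();    // Catchable fatal error: Object of class Plain
                     // could not be converted to string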

To fix the problem I simply added the following method to the class PHPUnit_Framework_TestCase in PHPUnit/Framework/TestCase.php:

public function __toString()
{
    return $this->toString();
}

I am using PHPUnit 3.4.9, but peeking at 3.5.0beta it looks the same.  I’m guessing the PHPUnit_Framework_TestFailure::toString method is not used much, so this got missed in the change to PHP 5.2.x.

PHPUnit is now on GitHub, so I really ought to work out how to submit corrections to it … but another day I think.

Time Machine – when it goes wrong and how to fix it

Unfortunately only fixing Mac OS X backup, not the Tardis 🙁 … but, nonetheless, critical.

What bit of software do you really need to be reliable?  If anything else goes really wrong you have the backup — but if the backup fails you really are lost.

And Mac OS X Time Machine, while it does have a very pretty interface, is inclined to get stuck sometimes.

This is my own story of how it goes wrong … and how to put it right.

… and throughout I’ve dropped in a few lessons for anyone implementing critical system software — maybe the odd Apple engineer is reading.

how to tell when things are wrong

Occasionally Time Machine seems to be stuck, but isn’t really.  When you first do a backup, or when you haven’t backed up to a particular disk for ages (perhaps because you have been away on a trip), it can spend several hours ‘preparing’.  You can tell it is ‘preparing’ because when you open the Time Machine preferences there is the little barber’s pole saying ‘preparing’ 😉

This is when it is running over the disk working out what it needs to back up.  This always seems to be the lengthiest operation (actually backing up the disk is often quite fast), and yet, for some reason, there is no indication of how far through the ‘preparing’ process it has got.

Lesson 1: make sure you include progress indicators for anything that can take a while, not just the obvious ‘slow’ things.

So, when you see ‘preparing’, just be patient!

However, at least half-a-dozen times over the last year, my Time Machine has got completely stuck.  I have seen this happen in three ways:

(i)  it is still saying ‘preparing’ after leaving it overnight!

(ii)  it starts to transfer to disk, but then gets stuck part way:

(iii)  if you look in the Time Machine preferences it says the backup has failed

This last time the first sign was in fact (iii), but it doesn’t actually tell you (if you don’t look) until it has failed for ten days, by which time I was travelling.  In the days before Time Machine I always did a manual backup before travelling, as I knew that was when things were most likely to go wrong, but nowadays I have got used to relying on it and forget to check it is working OK … so if you are paranoid about your data, do peek occasionally at Time Machine to check it is still working!

When I got home I told Time Machine to back up to the Time Capsule here rather than my office disk (why can’t it remember that I have two backup disks??).  Then (after being very, very patient while it was ‘preparing’ for four hours) I saw it get stuck in step (ii) at 1.4 GB of 4.2 GB.  Of course progress indicators are never very good for very slow operations; when transferring several GB of data there may be several minutes before the bar even moves a pixel … but I was very, very patient and it definitely did not move!

Lesson 2: for very long processes supplement the progress indicator with some other indication that things are still working, in this case perhaps the amount transferred in the last minute

At this point I did the normal things: turn Time Machine off and on, restart the machine a couple of times, etc.  When the problem persists after that, you know something is deeply wrong.

so why does it go wrong?

In fact Fiona@lovefibre has found Time Machine flawless for her desktop machine, backing up to exactly the same Time Capsule.  I am guessing the problems I have are because I use a laptop, so possible reasons:

  • it may go to sleep occasionally, breaking the connection to the Time Capsule
  • maybe the WiFi aerial on a laptop is not as good as the one in a desktop

However, if every laptop failed this often, surely Apple would have fixed it by now, so I am guessing there is an additional factor:

  • my disk has 196 GB of data, much of it in smaller document files (Word docs, code files, etc.), not just a few giant movies.

The software will be designed to withstand a certain amount of external failure, especially when connecting to disks over WiFi as the Time Capsule is designed to do.  However, I imagine that there are places in the code where there are race conditions, or critical portions where external failure really makes a difference.  If the external connections are reliable and the backup is quite fast, the likelihood of hitting one of the nasty spots in the code is low.  However, if you have a lot of data to check and then transfer, and the external failures are more frequent, then the likelihood of hitting one increases and things start to go wrong.

I see similar problems with other software, Dreamweaver in particular, which has got better but can still crash if the Internet connection is poor (see also “Why software need never hang”).  What happens is that during testing the test machines often have minimal data and little software (maybe just the operating system and what is being tested), and operate in perfect conditions.  In such circumstances these hidden flaws never become apparent.

Lesson 3: make sure your test machine is fully loaded with data and applications, and operates in an unreliable environment, so that testing is realistic

However, this is not like Word crashing and losing your most recent edits to one document.  When Time Machine fails it seems occasionally to leave something corrupt on the backup disk, so that subsequent attempts to back up also fail.  There is no excuse for this: the techniques for dealing with potential disk-writing failures are well established in both databases and low-level disk management.  For example, one can save a timestamp file at the end of each successful operation so that, when returning to the data, if the timestamp file is not there the software knows something went wrong last time.
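
For instance, here is a sketch of the kind of pattern I mean (generic PHP, nothing Time Machine specific): write to a temporary file, rename it into place, and only then write a success marker.

<?php
// sketch: atomic-ish save with a 'last success' marker file
function save_state($path, $data) {
    $tmp = $path . '.tmp';
    if (file_put_contents($tmp, $data) === false) return false;
    if (!rename($tmp, $path)) return false;  // rename is atomic on POSIX filesystems
    // the marker records the last success; on restart, a missing or stale
    // marker means the previous run failed and we should fall back
    return file_put_contents($path . '.ok', date('c')) !== false;
}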

Maybe Time Machine is trying to be too clever, picking up where it left off when, for example, the connection to the disk is broken.  If so, it clearly needs some additional mechanism to notice “I’ve tried this several times and it keeps going wrong; maybe I need to back off to the last successful state”.  Perhaps not something to worry about in less critical software, but not difficult to get right when it is really needed … as in backups!

Lesson 4: build critical software defensively in layers so that errors in one part do not affect the whole; and if saving to disk ensure there is some sort of atomic transaction

The aim during testing should be what I call “fail-fast programming”: trying to make sure that failures happen during testing, not in real use!

One thing I found particularly disturbing about my most recent Time Machine hang is that when I looked at the system console it had regular spats of “unknown SIGSEGV” several times a minute … in the kernel!  If you don’t know UNIX internals, the ‘kernel’ is the heart of the operating system of the Mac, where all the lowest-level work is done and where, if something goes wrong, everything fails.  SIGSEGV means that some bit of software is trying to access a memory location that doesn’t exist.  In fact, as long as this is caught it is not so bad; the greater worry is that software that tries to access non-existent memory may also corrupt other memory … and the kernel has access to everything. Not good.

Please, please Apple if you cannot get Time Machine to work properly, do not let it affect the kernel!

how to put it right

One might hope that, even if Time Machine cannot itself notice that something is wrong, there would at least be an option to say “restart yourself”.  One might hope, but there is not.  However, you can do it yourself by digging around a little in the backup disk.

First problem is to stop the Time Machine backup if it has hung.

In the Time Machine control panel, you can simply slide the OFF-ON button to OFF.  The status should change to ‘stopping’ and after a while stop.  Then you can restart the machine and try to fix things.

This is the ideal thing to do, but I find that when Time Machine is really hung this rarely works.  I do turn it to OFF, but either the status never changes to ‘stopping’ and stays ‘preparing’, or it changes to ‘stopping’ but never actually stops.  When this happens a restart typically fails to complete, as Time Machine won’t stop running.  Then, always with much trepidation, I reach for the on/off button on the Mac itself :-/

After doing a hard on/off like this, I usually do another restart from the Apple menu … not sure if this is necessary, but just to be on the safe side!

Occasionally I skip to the next step before the hard restart.

Then you can start to fix the problem properly.

Find the backup disk.  If it is not obvious in the Finder, use the ‘Go’ menu and select “Computer”; this shows all the locally connected disks (or the backup disk may simply appear in the left-hand favourites pane of each Finder window).

If you skipped the restart stage (or if you just peek now to see what things are like when they haven’t gone wrong), you will see something like “Backup of Alan Dix’s MacBook Pro” (obviously for you it will not be “Alan Dix’s MacBook Pro”!).  This is the Time Machine backup.  However, if you have restarted the machine with Time Machine off, you will have to find the actual disk that you chose as your backup disk and look on it for a file called something like “Alan Dix’s MacBook Pro_0039fc56f8a2.sparsebundle”.  This is some form of compressed disk image.  In older versions of Time Machine there was simply a folder with all the backups in it — I felt much more secure.  Now it is a single opaque file, and I worry what happens if one day it gets corrupted :-/

Having found the ‘sparsebundle’, double click it and it will display a little pop-up window that says ‘checking volumes’.  I keep meaning to see if this ever stops, but I am not patient enough; I press the button that says to skip this stage and then (after a while) it mounts the disk image and the disk “Backup of Alan Dix’s MacBook Pro” appears.

Double click “Backup of Alan Dix’s MacBook Pro”, look inside, and then inside the folder “Backups.backupdb” you will find loads of dated folders, which are the actual backups of your system; you can browse these if you prefer instead of using the Time Machine interface.  In addition there may be one file ending “.inProgress”, which is some sort of internal file created while it is in the middle of doing the backup.

Delete the “.inProgress” file.

In addition, I usually delete the last of the dated folders (sort by “Date Modified” to find it).  However, if you don’t want to lose the last backup, you can try just deleting the “.inProgress” file and only delete the last dated backup if Time Machine still gets stuck.

Important: only delete the latest of the dated backup folders (e.g. “2010-06-09-225547” in the screen shot above), NOT the entire “Alan Dix’s MacBook Pro” folder.  If you do that you lose all your backups!

I recall doing all this with extreme trepidation the first time, but I had got to the point where I couldn’t do backups or access them anyway, so had nothing to lose.  Actually it seems pretty OK getting in here and doing this sort of thing; the nice thing about Time Machine is that it uses ordinary folder structures that you can peek around in and see that everything is there, all secure.  I am much happier with this than the kind of backup where you only know if it is working the day you try to restore something!  At least half the times I have used such backups over the years I’ve found the backup was in some way corrupt or incomplete.  So actually, one up for Time Machine 🙂

Now reboot again (for luck).  Turn Time Machine back on in the control panel and wait … a long time … it will start ‘preparing’ as if for the first backup … and several hours later hopefully all will be well.

But do remember to set the power save options not to go to sleep in the middle!

In fact the above has always worked for me except this last time when, for some reason (maybe I missed something on the way?), it hung again and I had to go through the whole process a second time.  This time I waited until yesterday evening before turning Time Machine back on, so that I could leave it to do the long four-hour ‘preparing’ stage without doing anything else on the machine.

And then:

Joy!

The Book Thief – Zusak

I have just finished reading Markus Zusak’s “The Book Thief“, about a small girl in wartime Germany and narrated by Death, as in the one who comes to take souls.  Amongst the hatred of Nazism and falling bombs, the story is of despair and love, cruelty and courage, hard words and big hearts, but told with wry humour and in a dry matter-of-fact prose so that it was only on the last few pages I wept.

Phoenix rises – vfridge online again

vfridge is back!

I mentioned ‘Project Phoenix’ in my previous post, and this was it – getting vfridge up and running again.

Ten years ago I was part of a dot.com company, aQtive1, with Russell Beale, Andy Wood and others.  Just before it folded in the aftermath of the dot.com crash, aQtive spawned a small spin-off, vfridge.com.  The virtual fridge was a social networking web site before the term existed, and while vfridge the company went the way of most dot.coms, for some time afterwards I kept the vfridge web site running on Fiona’s servers, until it gradually ‘decayed’, partly due to JavaScript/DOM changes and partly due to Java’s interactions with MySQL becoming unstable (note: very, very old Java code!).  But it is now back online 🙂

The core idea of vfridge is placing small notes, photos and ‘magnets’ in a shareable web area that can be moved around and arranged like you might with notes held by magnets to a fridge door.

Underlying vfridge was what we called the websharer vision, which looked towards a web of user-generated content.  Now this is passé, but at the time it was directly counter to accepted wisdom, and looking back it seems prescient – remember this was written in 1999:

Although everyone isn’t a web developer, it is likely that soon everyone will become an Internet communicator — email, PC-voice-comms, bulletin boards, etc. For some this will be via a PC, for others using a web-phone, set-top box or Internet-enabled games console.

The web/Internet is not just a medium for publishing, but a potential shared place.

Everyone may be a web sharer — not a publisher of formal public ‘content’, but personal or semi-private sharing of informal ‘bits and pieces’ with family, friends, local community and virtual communities such as fan clubs.

This is not just a future for the cognoscenti, but for anyone who chats in the pub or wants to show granny in Scunthorpe the baby’s first photos.

Just over a year ago I thought it would be good to write a retrospective about vfridge in the light of the social networking revolution.  We did a poster, “Designing a virtual fridge”, about vfridge years ago at a Computers and Fun workshop, but have never written at length about its design and development.  In particular it would be good to analyse the reasons, technical, social and commercial, why it did not ‘take off’ at the time.  However, it is hard to write about it without good screen shots, and could I find any?  (Although now I have.)  So I thought it would be good to revive it, and now you can try it out again.  I started with a few days’ effort last year at Christmas and Easter time (leisure activity), but over the last week I have at last used the fact that I have half my time unpaid and so free for my own activities … and it is done 🙂

The original vfridge was implemented using Java Servlets, but I have rebuilt it in PHP.  While the original development took over a year (starting down in Cornwall while on holiday watching the solar eclipse), this re-build took about ten days’ effort, although of course with no design decisions needed.  The reason it took so much development back then is one of the things I want to consider when I write the retrospective.

As far as possible the actual behaviour and design is exactly as it was back in 2000 … and yes, it does feel clunky, with lots of refreshing (remember, no AJAX or web2.0 in those days) and of course loads of frames!  In fact there is a little cleverness that allowed some client-end processing pre-AJAX2.  Also the new implementation uses the same templates as the original one, although the expansion engine had to be rewritten in PHP.  In fact this template engine was one of our most re-used bits of Java code, although now of course there are many alternatives.  Maybe I will return to a discussion of that in another post.

I have even resurrected the old mobile interface.  Yes there were WAP phones even in 2000, albeit with tiny green and black screens.  I still recall the excitement I felt the first time I entered a note on the phone and saw it appear on a web page 🙂  However, this was one place I had to extensively edit the page templates as nothing seems to process WML anymore, so the WML had to be converted to plain-text-ish HTML, as close as possible to those old phones!  Looks rather odd on the iPhone :-/

So, if you were one of those who had an account back in 2000 (Panos Markopoulos used it to share his baby photos 🙂 ), then everything is still there just as you left it!

If not, then you can register now and play.

  1. The old aQtive website is still viewable at aqtive.org, but don’t try to install onCue; it was developed in the days of Windows NT.[back]
  2. One trick used the fact that you can get Javascript to pre-load images.  When the front-end Javascript code wanted to send information back to the server, it preloaded an image URL that really just served to activate a back-end script.  The frames used a change-propagation system, so that only those frames that were dependent on particular user actions were refreshed.  All of this is preserved in the current system; peek at the Javascript on the pages.  Maybe I’ll write about the details of these another time.[back]

PHP syntax checker updated

Took a quick break today from Project Phoenix1.

I’ve had a PHP syntax checker on meandeviation for several years, but it only checked PHP 4, as that is what is running on the server.  However, I had an email asking about PHP 5, so now there is a PHP 5 version too 🙂

The syntax checker is a pretty simple layer over the PHP command-line option “php -l”, and also uses the PHP highlight_file function.  The main complication is parsing the HTML output of both, as it changes between versions of PHP!
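
The core of such a checker is only a few lines (a sketch of the approach, not the actual meandeviation code):

<?php
$file = 'example.php';   // hypothetical file to check
// lint pass: "php -l" prints "No syntax errors detected in ..." on success
$output = shell_exec('php -l ' . escapeshellarg($file) . ' 2>&1');
$ok = (strpos($output, 'No syntax errors') !== false);
echo $ok ? "syntax OK\n" : $output;
// pretty-printed source, returned as an HTML string
$html = highlight_file($file, true);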

There is also a download archive so you can also have it running locally on your own system.

  1. watch this space …[back]