the plague of bugs

Like some Biblical locust swarm, every attempt to do anything is thwarted by the dead weight of innumerable bugs! This time I was trying … and failing … to upload a Word file into Google docs. I uploaded the docx file and was told the file was unreadable; I tried saving it as .doc, and when that failed created an rtf file. Amazingly, from a 1 Meg Word file the rtf was 66 Meg, but very, very slowly Google docs did upload the file and when it was eventually all uploaded …

To be fair the same document imports pretty badly into Pages (all the headings disappear).  I think this is because it is originally a 2003 Word file and gets corrupted when the new Word reads it.

Now I have griped before about backward compatibility issues for Word, and in general about the lack of robustness in many leading products.  To add to my woes, for the last month or so (I guess after a software update) Word has decided not to show its formatting menus on an opened document unless I first hide them, then show them, and then maximise the window.  Mostly these things are merely annoying, sometimes they really block work, and always they waste time and destroy the flow of work.

However, rather than grousing once again (well, I already have a bit), I am trying to make sense of this.  For some time it has been apparent that software is fundamentally breaking down: with every new version there is minimal new useful functionality, but more bugs.  This may simply be an issue of scale, of the training of programmers, or of the nature of development processes.  Indeed, in the talk I gave a bit over a year ago to PPIG, “as we may code”, I noted that coding in the 21st Century seems to be radically different: more about finding tricks and community know-how and less about problem solving.

Whatever the reason, I don’t think the Biblical plague of bugs is simply due to laziness or indifference on the part of large vendors such as Microsoft and Adobe, but is symptomatic of a deeper crisis in software development, certainly where there is a significant user interface.

Maybe this is simply an inevitable consequence of scale, but more optimistically I wonder if there are new ways of coding, new paradigms or new architectural models.  Can 2010 be the decade when software is reborn?

understanding others and understanding ourselves: intention, emotion and incarnation

One of the wonders of the human mind is the way we can get inside one another’s skin; understand what each other is thinking, wanting, feeling. I’m thinking about this now because I’m reading The Cultural Origins of Human Cognition by Michael Tomasello, which is about the way understanding intentions enables cultural development. However, this also connects with a hypothesis of my own from many years back, that our idea of self is a sort of ‘accident’ of being social beings. Also at the heart of Christmas is empathy, feeling for and with people, and the very notion of incarnation.

Continue reading

reading barcodes on the iPhone

I was wondering about scanning barcodes using the iPhone and found that pic2shop (a free app) lets third-party iPhone apps and even web pages access its scanning software through a simple URL scheme interface.

Their developer documentation has an example iPhone app, but not an example web page.  However, I made a web page using their interface in about 5 minutes (see PHP source of barcode reader).
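For flavour, here is a minimal sketch of what such a page can look like (the pic2shop scheme details, such as the scan action, the callback parameter and the EAN placeholder, are assumptions from memory, so check their developer documentation for the exact interface):

<?php
// minimal barcode page sketch - pic2shop scheme details are assumptions
if (isset($_GET['ean'])) {
    // pic2shop has re-invoked this page with the scanned code
    echo 'You scanned: ' . htmlspecialchars($_GET['ean']);
} else {
    // hand off to the pic2shop app, asking it to come back here;
    // 'EAN' in the callback is (I believe) replaced by the barcode
    $callback = urlencode('http://www.example.com/scan.php?ean=EAN');
    echo '<a href="pic2shop://scan?callback=' . $callback . '">'
       . 'scan a barcode with pic2shop</a>';
}
?>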

If you have an iPhone you can try it out and scan a bar code now, although you need to install pic2shop first (but it is free).

By allowing third-party apps to use their software they encourage downloads of their own app, which will bring them revenue through product purchases.  Through free giving they bring themselves benefit: a good open access story.

Apple’s Model-View-Controller is Seeheim

Just reading the iPhone Cocoa developer docs and their description of Model-View-Controller. However, if you look at the diagram, rather than the model component directly notifying the view of changes as in classic MVC, in Cocoa the controller acts as mediator, more like the Dialogue component in the Seeheim architecture1 or the Control component in PAC.

MVC from Mac Cocoa development docs
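To see the mediated pattern in code terms, here is a tiny sketch (PHP for brevity, all names invented): the model notifies only the controller, which then updates the view, playing the Seeheim Dialogue / PAC Control role.

<?php
// sketch of controller-as-mediator: the model never talks to the view
class Model {
    private $listeners = array();
    public function onChange(callable $f) { $this->listeners[] = $f; }
    public function setValue($v) {
        foreach ($this->listeners as $f) { $f($v); }  // notify controller only
    }
}

class View {
    public function render($v) { echo "value is $v\n"; }
}

class Controller {  // the Seeheim Dialogue / PAC Control role
    public function __construct(Model $m, View $view) {
        $m->onChange(function ($v) use ($view) { $view->render($v); });
    }
}

$model = new Model();
new Controller($model, new View());
$model->setValue(42);  // prints "value is 42", relayed via the controller
?>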

The docs describing the Cocoa MVC design pattern in more depth do in fact make a detailed comparison with the Smalltalk MVC, but do not refer to Seeheim or PAC, I guess because they are less well known nowadays.  Only a few weeks ago, when discussing architecture with my students, I described Seeheim as being more a conceptual architecture, not used in actual implementations now.  I will have to update my lectures – Seeheim lives!

  1. Shocked to find no real web documentation for Seeheim, not even on Wikipedia; looks like CS memory is short.  However, it is described in chapter 8 of the HCI book and in the chapter 8 slides[back]

big file?

Encountered the following when downloading a file for proof checking from the Elsevier “E-Proofing” System web site1.  Happily it turned out to be a mere 950 K, not a gigabyte!

  1. I was going to add a link to the system web site itself at http://elsevier.sps.co.in/ but instead of some sort of home page you get redirected to the proofs of someone’s article in the Journal of Computational Physics![back]

data types and interpretation in RDF

After following a link from one of Nad’s tweets, I read Jeni Tennison’s “SPARQL & Visualisation Frustrations: RDF Datatyping”.  Jeni had been having problems processing RDF of MPs’ expense claims, because the amounts were plain RDF strings rather than typed numbers.  She suggests some best practice rules for data types in RDF based on the underlying philosophy of RDF that it should be self-describing:

  • if the literal is XML, it should be an XML literal
  • if the literal is in a particular language (such as a description or a name), it should be a plain literal with that language
  • otherwise it should be given an appropriate datatype

These seem pretty sensible for simple data types.
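To make the three rules concrete, a fragment following them might look like this (the ex: properties are invented for the example):

<rdf:Description rdf:about="http://example.org/expenses/claim42">
  <!-- rule 1: XML content as an XML literal -->
  <ex:note rdf:parseType="Literal">claim <em>under review</em></ex:note>
  <!-- rule 2: natural language as a plain literal with a language tag -->
  <ex:description xml:lang="en">Second home allowance</ex:description>
  <!-- rule 3: otherwise an appropriately datatyped literal -->
  <ex:amount rdf:datatype="http://www.w3.org/2001/XMLSchema#decimal">12345.00</ex:amount>
</rdf:Description>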

In work on the TIM project with colleagues in Athens and Rome, we too had issues with representing data types in ontologies, but more to do with the status of a data type.  Is a date a single thing “2009-08-03T10:23+01:00”, or is it a compound [[date year="2009" month="8" …]]?

I just took a quick peek at how Dublin Core handles dates and see that the closest-to-standard references1 still include dates as ‘bare’ strings with implied semantics only, although one of the most recent docs does say:

“It is recommended that RDF applications use explicit rdf:type triples …”

and David McComb’s “An OWL version of the Dublin Core” gives an alternative OWL ontology for DC that does include an explicit type for dc:date:

<owl:DatatypeProperty rdf:about="#date">
  <rdfs:domain rdf:resource="#Document"/>
  <rdfs:range rdf:resource="http://www.w3.org/2001/XMLSchema#dateTime"/>
</owl:DatatypeProperty>

Our solution to the compound types has been to have “value classes” which do not represent ‘things’ in the world, similar to the way the RDF for vCard represents complex elements such as names using blank nodes:

<vCard:N rdf:parseType="Resource">
  <vCard:Family> Crystal </vCard:Family>
  <vCard:Given> Corky </vCard:Given>
  ...
</vCard:N>

From2
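In the same spirit, a compound date as a value class might be encoded with a blank node along these lines (the property names are invented for illustration):

<ex:submitted rdf:parseType="Resource">
  <ex:year rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">2009</ex:year>
  <ex:month rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">8</ex:month>
  <ex:day rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">3</ex:day>
</ex:submitted>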

This is fine, and we can have rules for parsing and formatting dates as compound objects to and from, say, W3C datetime strings.  However, this conflicts with the desire to have self-describing RDF as these formatting and parsing rules have to be available to any application or be present as reasoning rules in RDF stores.  If Jeni had been trying to use RDF data coded like this she would be cursing us!

This tension between representations of things (dates, names) and more semantic descriptions is also evident in other areas.  Looking again at Dublin Core, the metamodel allows a property such as “subject” to have a complex object with a URI and possibly several string values.

Very semantic, but hardly mashes well with sources that just say <dc:subject>Biology</dc:subject>.  Again a reasoning store could infer one from the other, but we still have issues about where the knowledge for such transformations resides.
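For illustration, the richer form the metamodel allows might be encoded something like this (an invented encoding, purely to show the contrast with the bare string):

<dc:subject>
  <rdf:Description rdf:about="http://example.org/subjects/biology">
    <rdfs:label xml:lang="en">Biology</rdfs:label>
    <rdfs:label xml:lang="fr">Biologie</rdfs:label>
  </rdf:Description>
</dc:subject>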

Part of the problem is that the ‘self-describing’ nature of RDF is a bit illusory.  In (Peircean) semiotics the interpretant of a sign is crucial: representations are interpreted by an agent in a particular context assuming a particular language, etc.  We do not expect human language to be ‘self-describing’ in the sense of being totally acontextual.  Similarly in philosophy words and ideas are treated as intentional, in the (not standard English) sense that they refer out to something else; however, the binding of the idea to the thing it refers to is not part of the word, but separate from it.  Effectively the desire to be self-describing runs the risk of ignoring this distinction3.

Leigh Dodds commented on Jeni’s post to explain that the reason the expense amounts were not numbers was that some were published in non-standard ways, such as “12345 (2004)”.  As an example this captures succinctly the perpetual problem between representation and abstracted meaning.  If a journal article was printed in the “Autumn 2007” issue of a quarterly magazine, do we express this as <dc:date>2007</dc:date> or <dc:date>2007-10-01</dc:date>, attempting to give an approximation of, or inference from, the actual represented date?

This makes one wonder whether what is really needed here is a meta-description of the RDF source (not simply the OWL, as one wants to talk about the use of dc:date or whatever in a particular context) that can say things like “mainly numbers, but also occasionally non-standard forms”, or “amounts sometimes refer to different years”.  Of course, to be machine mashable there would need to be an ontology for such annotation …

  1. see “Expressing Simple Dublin Core in RDF/XML”, “Expressing Dublin Core metadata using HTML/XHTML meta and link elements” and Stanford DC OWL[back]
  2. Renato Iannella, Representing vCard Objects in RDF/XML, W3C Note, 22 February 2001.[back]
  3. Doing a quick web seek, these issues are discussed in several places, for example: Glaser, H., Lewy, T., Millard, I. and Dowling, B. (2007) On Coreference and the Semantic Web, (Technical Report, Electronics & Computer Science, University of Southampton) and Legg, C. (2007). Peirce, meaning and the semantic web (Paper presented at Applying Peirce Conference, University of Helsinki, Finland, June 2007). [back]

On the edge: universities bureaucratised to death?

Just took a quick peek at the new JISC report “Edgeless University: why higher education must embrace technology” prompted by a blog about it by Sarah Bartlett at Talis.

The report is set in the context of both an increasing number of overseas students, attracted by the UK’s educational reputation, and the desire to widen access to universities.  I am not convinced that technology is necessarily the way to go for either of these goals, as it is just so much harder and more expensive to produce good quality learning materials without massive economies of scale (which the OU has).  Also the report seems to mix up open access to research outputs and open access to learning.

However, it was not these issues that caught my eye, but a quote from Terence Kealey, vice-chancellor of the University of Buckingham, the UK’s only private university.  For three years Buckingham has come top of UK student satisfaction surveys, and Kealey says:

This is the third year that we’ve come top because we are the only university in Britain that focuses on the student rather than on government or regulatory targets. (Edgeless University, p. 21)

Of course, those in the relevant departments of government would say that the regulations and targets are intended to deliver educational quality, but as so often, this centralising of control (started, paradoxically, in the UK during the Thatcher years) serves instead to constrain the real quality that comes from people, not rules.

In 1992 we saw the merging of the polytechnic and university sectors in the UK.  As well as differences in level of education, the former were traditionally under the auspices of local government, whereas the latter were independent educational institutions.  Those in the ex-polytechnic sector hoped to emulate the levels of attainment and ethos of the older universities.  Instead, in recent years the whole sector seems to have been dragged down into a bureaucratic mire where paper trails take precedence over students and scholarship.

Obviously private institutions, as Kealey suggests, can escape this, but I hope that current and future governments have the foresight and humility to let go of some of this centralised control, or risk destroying the very system they wish to grow.

the more things change …

I’ve been reading Jeni (Tennison)’s Musings about techie web stuff: XML, RDF, etc.  Two articles particularly caught my eye.  One was Versioning URIs, about URIs for real-world and conceptual objects (schools, towns), and in particular how to deal with the fact that these change over time.  The other was Working With Fragmented Overlapping Markup, all about managing multiple hierarchies of structure for the same underlying data.

In the past I’ve studied issues both of versioning and of multiple structures on the same data1, and Jeni lays out the issues for both really clearly.  However, both topics gave a sense of déjà vu, not just because of my own work, but because they reminded me of similar issues that go way back before the web was even thought of.

Versioning URIs and unique identifiers2

In my very first computing job (COBOL programming for Cumbria County Council) many many years ago, I read an article in Computer Weekly about choice of keys (I think for ISAM not even relational DBs). The article argued that keys should NEVER contain anything informational as it is bound to change. The author gave an example of standard maritime identifiers for a ship’s journey (rather like a flight number) that were based on destination port and supposed to never change … except when the ship maybe moved to a different route. There is always an ‘except’, so, the author argued, keys should be non-informational.

Just a short while after reading this I was working on a personnel system for the Education Dept. and was told emphatically that every teacher had a DES code given to them by government and that this code never changed. I believed them … they were my clients. However, sure enough, after several rounds of testing and demoing, when they were happy with everything, I tried a first mass import from the council’s main payroll file. Validations failed on a number of the DES numbers. It turned out that every teacher had a DES number except for new teachers, where the Education Dept. issued a sort of ‘pretend’ one … and of course the DES number never changed except when the real number came through. Of course, the uniqueness of the key was core to lots of the system … major rewrite :-/

The same issues occurred in many relational DBs where the spirit (rather like RDF triples) was that the record was defined by values, not by identity … but look at most SQL DBs today and everywhere you see unique but arbitrary identifying ids. DOIs, ISBNs, the BBC programme ids – we relearn the old lessons.

Unfortunately, once one leaves the engineered world of databases or SemWeb, neither arbitrary ids nor versioned ones entirely solve things as many real world entities tend to evolve rather than metamorphose, so for many purposes http://persons.org/2009/AlanDix is the same as http://persons.org/1969/AlanDix, but for others different: ‘nearly same as’ only has limited transitivity!

  1. e.g. Modelling Versions in Collaborative Work and Collaboration on different document processing platforms; quite a few years ago now![back]
  2. edited version of comments I left on Jeni’s post[back]

fix for WordPress shortcode bug

I’m starting to use shortcodes heavily in WordPress1 as we are using it internally on the DEPtH project to coordinate our new TouchIT book.  There was a minor bug which meant that HTML tags came out unbalanced (e.g. “<p></div></p>”).

I’ve just been fixing it and posting a patch2.  Interestingly, the bug was partly due to the fact that back-references in regular expressions count from the beginning of the regular expression, making them impossible to use if the expression may be ‘glued’ into a larger one … lack of referential transparency!
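A tiny illustration of the problem (not the actual WordPress code):

<?php
// alone, \1 refers to the group capturing the quote character
$fragment = '(["\'])hello\1';
var_dump(preg_match('/' . $fragment . '/', '"hello"'));   // int(1) - matches

// glued after another capturing group, the groups renumber,
// so \1 now refers to "(start)" and the match fails
var_dump(preg_match('/(start)' . $fragment . '/', 'start"hello"'));  // int(0)

// PCRE's relative back-references survive the gluing
$relative = '(["\'])hello\g{-1}';
var_dump(preg_match('/(start)' . $relative . '/', 'start"hello"'));  // int(1)
?>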

For anyone having similar problems, full details and patch below (all WP and PHP techie stuff).

Continue reading

  1. see section “using dynamic binding” in What’s wrong with dynamic binding?[back]
  2. TRAC ticket #10490[back]

What’s wrong with dynamic binding?

Dynamic scoping/binding of variables has a bad name, rather like GOTO and other remnants of the Bad Old Days before Structured Programming saved us all1.  But there are times when dynamic binding is useful and, looking around, it is very common in web scripting languages, event propagation, meta-level programming, and document styles.
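As a reminder of what is at stake, here is a small sketch emulating dynamically bound variables in PHP (PHP itself is lexically scoped, and all the names here are invented):

<?php
$dyn = array();  // variable name => stack of live bindings

function dyn_let($name, $value, callable $body) {
    global $dyn;
    $dyn[$name][] = $value;   // bind for the dynamic extent of $body
    $result = $body();
    array_pop($dyn[$name]);   // unbind on the way out
    return $result;
}

function dyn_get($name) {
    global $dyn;
    return end($dyn[$name]);  // the most recent caller's binding wins
}

// show() picks up whatever binding is live in its callers,
// not where it was defined: the essence of dynamic binding
function show() { echo dyn_get('indent') . "hello\n"; }

dyn_let('indent', '>> ', function () { show(); });  // prints ">> hello"
dyn_let('indent', '.. ', function () { show(); });  // prints ".. hello"
?>

Each call sees the binding established by its caller, something lexical scope cannot express directly.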

So is it really so bad?

Continue reading

  1. Strangely also the days when major advances in substance seemed to be more important than minor advances in nomenclature[back]