UK internet far from ubiquitous

On the last page of the Guardian on Saturday (13th Oct) in a sort of ‘interesting numbers’ section, they say that:

“30% of the UK population have no internet access at home”

I couldn’t find the exact source of this; however, another Guardian article, “UK internet audience rises by 1.9 million over last year”, dated Wednesday 30 June 2010, has a similar figure. This says that the UK internet audience has grown to 38.8 million. The Office for National Statistics puts the overall UK population at 61,792,000, with about 1 in 5 under 16; scaling down, call that 2 in 16 under 10, or around 8 million. That leaves a population of a little under 54 million over 10 years old, of whom still only around 70% actually use the web at all.
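To make that back-of-the-envelope arithmetic explicit, here it is as a few lines of PHP (the figures are just those quoted above, so the result is only approximate):

<?php
$population = 61792000;                 // National Statistics estimate of the UK population
$under_10   = $population * 2/16;       // "2 in 16" under 10, roughly 8 million
$over_10    = $population - $under_10;  // a little under 54 million
$web_users  = 38800000;                 // the 38.8 million quoted by the Guardian
printf("%.0f%%\n", 100 * $web_users / $over_10);  // prints about 72%, i.e. roughly the 70% figure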

My guess is that some of the people with internet at home do not use it, and some of those without home connections access it by other means (mobile, school, cyber cafés), but on both measures we are hardly a society where the web is as ubiquitous as one might have imagined.

wisdom of the crowds goes to court

Expert witnesses often testify in court cases, whether on DNA evidence, IT security or blood spatter patterns.  However, in the days of Web 2.0, who is the ‘expert’ witness?  Would the true Web 2.0 court submit evidence to public comment?  Maybe, like the Viking Thing or a Wild West lynch mob, a vote of the masses using Facebook ‘Like’ could determine guilt or innocence.

However, it will be a conventional judge, not the justice of social networks, who will adjudicate if the hoteliers threatening to sue TripAdvisor1 do indeed bring the case to court. When TripAdvisor seeks to defend its case, it will not rely on crowd-sourced legal opinion, but on lawyers whose advice is trusted because they are trained, examined and experienced, and who are held responsible for their advice.  What is at stake is precisely the fact that TripAdvisor’s own site has none of these characteristics.

This may well, like the Shetland newspaper case in the 1990s2, become a critical precedent for many crowd-sourced sites and so is something we should all be watching.

Unlike Wikipedia or legal advice itself, ‘expertise’ is not the key issue in the case of TripAdvisor: every hotel guest is in a way the best expert on their own experience.  However, how is the reader to know that the reviews posted are really by disgruntled guests rather than business rivals?  In science we are expected to declare sources of research funding, so that the reader can make judgements on the reliability of evidence funded by the tobacco or oil industry, or indeed the burgeoning renewables sector.  Those who flout these conventions and rules may expect their papers to be withdrawn and their careers to founder.  Similarly, if I make a defamatory public statement about a friend, colleague or public figure, then not only can the reliability of my words be judged against my own reputation for trustworthiness, but if my words turn out to be knowingly or culpably false and damaging then I can be sued for libel.  In the case of TripAdvisor there are none of the checks and balances of science or the law, and yet the impact on individual hoteliers can make or break their business.  Who is responsible for damage caused by any untrue or malicious reviews posted on the site: the anonymous ‘crowd’ or TripAdvisor?

Of course users of review sites are not stupid; they know (or do they?) that anonymous reviews should be taken with a pinch of salt.  My guess is that a crucial aspect of the case may be the extent to which TripAdvisor itself appears to lend credence to the reviews it publishes.  Indeed, every page of TripAdvisor is headed with its strapline “World’s most trusted travel advice™”.

At the top of the home page there is also the phrase “Find Hotels Travelers Trust” and, further down, “Whether you prefer worldwide hotel chains or cozy boutique hotels, you’ll find real hotel reviews you can trust at TripAdvisor”.  The former arguably puts the issue of trust back onto the reviewers, but the latter is definitely TripAdvisor attesting to the trustworthiness of the reviews.

I think if I were in TripAdvisor I would be worried!

Issues of trust and reliability, provenance and responsibility are also going to be an important strand of the work I’ll be part of myself at Talis: how do we assess the authority of crowd-sourced material?  How do we convey to users the level of reliability of the information they view, especially if it is ‘mashed’ from different sources?  How do we track the provenance of information in order to be able to do this?  Talis is interested because, as a major provider and facilitator of open data, the reliability of the information it and its clients provide is a crucial part of that information — unreliable information is not information!

However, these issues are critical for everyone involved in the new web; if those of us engaged in research and practice in IT do not address these key problems then the courts will.

  1. see The Independent, “Hoteliers to take their revenge on TripAdvisor’s critiques in court“, Saturday 11th Sept. 2010[back]
  2. The case around 1996/1997 involved the Shetland Times obtaining a copyright injunction against ‘deep linking’ by the rival Shetland News, that is, links directly to news stories bypassing the Shetland Times home page.  This was widely reported at the time and became an important case in Internet law: see, for example, the Nov 1996 BBC News story or the netlitigation.com article.  The out-of-court settlement allowed the deep linking so long as the link was clearly acknowledged.  However, while the settlement was sensible, the uncertainty left by the case pervaded the industry for years, leading to some sites abandoning link pages, or only linking after obtaining explicit permission, thus stifling the link-economy of the web. [back]

Names, URIs and why the web discards 50 years of computing experience

Names and naming have always been a big issue both in computer science and philosophy, and a topic I have posted on before (see “names – a file by any other name“).

In computer science, and in particular programming languages, a whole vocabulary has arisen to talk about names: scope, binding, referential transparency. As in philosophy, it is typically the association between a name and its ‘meaning’ that is of interest. Names and words, whether in programming languages or day-to-day language, are what philosophers call ‘intentional’: they refer to something else. In computer science the ‘something else’ is typically some data or code, or a placeholder/variable containing data or code, and the key question of semantics or ‘meaning’ is about how to identify which variable, function or piece of data a name refers to in a particular context at a particular time.

The emphasis in computing has tended to be about:

(a) Making sure names have unambiguous meaning when looking locally inside code. Concerns such as referential transparency, avoiding dynamic binding and the deprecation of global variables are about this.

(b) Putting boundaries on where names can be seen/understood, both as a means to ensure (a) and also as part of encapsulation of semantics in object-based languages and abstract data types.
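As a small sketch of my own (not part of the original argument) of both points in PHP: an object keeps its names out of the global namespace, so they can only be reached, and hence only be misinterpreted, through a well-defined boundary.

<?php
class Counter
{
    private $count = 0;          // (b) the name 'count' is visible only inside the class

    public function increment()
    {
        $this->count++;          // (a) unambiguous: always this object's own count
    }

    public function value()
    {
        return $this->count;
    }
}

$a = new Counter();
$b = new Counter();
$a->increment();
echo $a->value();   // 1
echo $b->value();   // 0 -- the same name, bound separately in each object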

However, there has always been a tension between clarity of intention (in both the normal and philosophical sense) and abstraction/reuse. If names are totally unambiguous then it becomes impossible to say general things. Without a level of controlled ambiguity in language a legal statement such as “if a driver exceeds the speed limit they will be fined” would need to be stated separately for every citizen. Similarly in computing when we write:

function f(x) { return (x+1)*(x-1); }

The meaning of x is different when we use it in ‘f(2)’ or ‘f(3)’, and must be so to allow ‘f’ to be used generically. Crucially there is no internal ambiguity: the two ‘x’s refer to the same thing in a particular invocation of ‘f’, but the precise meaning of ‘x’ for each invocation is achieved by external binding (the argument list ‘(2)’).

Come the web and URLs and URIs.

Fiona@lovefibre was recently making a test copy of a website built using WordPress. In a pure HTML website this is easy (so long as you have used relative or site-relative links within the site): you just copy the files, put them in the new location and they work 🙂 Occasionally a more dynamic site does need to know its global name (URL), for example if you want to send a link in an email, but this can usually be achieved using a configuration file. For example, there is a development version of Snip!t at cardiff.snip!t.org (rather than www.snipit.org), and there is just one configuration file that needs to be changed between this test site and the live one.
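The kind of configuration I mean is tiny; a minimal sketch (the constant name is made up, not Snip!t’s actual code) is just:

<?php
// the only absolute URL in the code base; change this one line on the test server
define('SITE_BASE_URL', 'http://www.snipit.org');

// used wherever a full link is needed, e.g. in outgoing email
$link = SITE_BASE_URL . '/some/page';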

Similarly in a pristine WordPress install there is just such a configuration file and one or two database entries. However, as soon as it has been used to create a site, the database content becomes filled with URLs. Some are in clear locations, but many are embedded within HTML fields or serialised plugin options. Copying and moving the database requires a series of SQL updates with string replacements matching the old site name and replacing it with the new — both tedious and needing extreme care not to corrupt the database in the process.
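For the obvious columns the updates look something like the sketch below (assuming the standard WordPress table names and a made-up pair of URLs; note that serialised plugin options embed string lengths, so they need a serialisation-aware tool rather than a plain REPLACE):

<?php
$old = 'http://www.example.org/blog';   // old site URL (made-up example)
$new = 'http://test.example.org/blog';  // new site URL (made-up example)

$db = new PDO('mysql:host=localhost;dbname=wordpress', 'user', 'password');

$updates = array(
    "UPDATE wp_options SET option_value = REPLACE(option_value, ?, ?)
         WHERE option_name IN ('siteurl', 'home')",
    "UPDATE wp_posts SET post_content = REPLACE(post_content, ?, ?)",
    "UPDATE wp_posts SET guid = REPLACE(guid, ?, ?)",
);
foreach ($updates as $sql) {
    $db->prepare($sql)->execute(array($old, $new));  // substitute old URL with new
}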

Is this just a case of WordPress being poorly engineered?

In fact I feel it is more a problem endemic to the web, and driven largely by the URL.

Recently I was experimenting with Firefox extensions. Being a good 21st-century programmer I simply found an existing extension that was roughly similar to what I was after and started to alter it. First of course I changed its name, and then found I needed to make changes through pretty much every file in the extension, as knowledge of the extension name seemed to permeate to the lowest level of the code. To be fair, XUL has mechanisms to achieve a level of encapsulation, introducing local URIs through the ‘chrome:’ naming scheme, and having been through the process once I maybe understand a bit better how to design extensions to make them less reliant on the external name, and also which names need to be changed and which are more like the ‘x’ in the ‘f(x)’ example. However, despite this, the experience was so different from the levels of encapsulation I have learnt to take for granted in traditional programming.

Much of the trouble resides with the URL. Going back to the two issues of naming, the URL focuses strongly on (a), making the name unambiguous by having a single universal namespace;  URLs are a bit like saying “let’s not just refer to ‘Alan’, but ‘the person with UK National Insurance Number XXXX’, so we know precisely who we are talking about”. Of course this focus on uniqueness of naming has a consequential impact on generality and abstraction. There are many visitors on Tiree over the summer, and maybe one day I meet one at the shop and then a few days later pass the same person out walking; I don’t need to know the person’s NI number or URL in order to say it was the same person.

Back to Snip!t, over the summer I spent some time working on the XML-based extension mechanism. As soon as these became even slightly complex I found URLs sneaking in, just like the WordPress database 🙁 The use of namespaces in the XML file can reduce this by at least limiting full URLs to the XML header, but, still, embedded in every XML file are un-abstracted references … and my pride in keeping the test site and live site near identical was severely dented1.

In the years when the web was coming into being the Hypertext community had been reflecting on more than 30 years of practical experience, embodied particularly in the Dexter Model2. The Dexter model and some systems, such as Wendy Hall’s Microcosm3, incorporated external linkage; that is, the body of content had marked hot spots, but the association of these hot spots to other resources was in a separate external layer.

Sadly HTML opted for internal links in anchor and image tags in order to make html files self-contained, a pattern replicated across web technologies such as XML and RDF. At a practical level this is (i) why it is hard to have a single anchor link to multiple things, as was common in early Hypertext systems such as Intermedia, and (ii), as Fiona found, a real pain for maintenance!

  1. I actually resolved this by a nasty ‘hack’ of having internal functions alias the full site name when encountered and treating them as if they refer to the test site — very cludgy![back]
  2. Halasz, F. and Schwartz, M. 1994. The Dexter hypertext reference model. Commun. ACM 37, 2 (Feb. 1994), 30-39. DOI= http://doi.acm.org/10.1145/175235.175237[back]
  3. Hall, W., Davis, H., and Hutchings, G. 1996 Rethinking Hypermedia: the Microcosm Approach. Kluwer Academic Publishers.[back]

Apache: pretty URLs and rewrite loops

[another techie post – a problem I had and can see that other people have had too]

It is common in various web frameworks to pass pretty much everything through a central script using an Apache .htaccess file and mod_rewrite.  For example, enabling permalinks in a WordPress blog generates an .htaccess file like this:

RewriteEngine On
RewriteBase /blog/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]

I use similar patterns for various sites such as vfridge (see recent post “Phoenix rises”) and Snip!t.  For Snip!t, however, I was using not a local .htaccess file but an AliasMatch in httpd.conf, which meant I needed to ask Fiona every time I wanted to make a change (as I can never remember the root passwords!).  It seemed easier (even if slightly less efficient) to move this to a local .htaccess file:

RewriteEngine On
RewriteBase /
RewriteRule ^(.*)$ code/top.php/$1 [L]

The intention is to map “/an/example?args” into “/code/top.php/an/example?args”.

Unfortunately this resulted in a “500 internal server error” page, and in the Apache error log messages saying there were too many internal redirects.  This seems to be a common problem reported in forums (see here, here and here).  The reason for this is that .htaccess files are encountered very late in Apache’s processing, so anything rewritten by the rules gets thrown back into Apache’s processing almost as if it were a fresh request.  While the “[L]” (last) flag says “don’t execute any more rules”, this means “no more rules on this pass”; when Apache gets back to the .htaccess on the fresh round the rule gets encountered again and again, leading to an infinite loop “/code/top.php/code/top.php/…/code/top.php/an/example?args”.

Happily, mod_rewrite thought of this and there is an additional “[NS]” (nosubreq) flag that says “only use this rule on the first pass”.  The mod_rewrite documentation for RewriteRule in Apache 1.3, 2.0 and 2.3 says:

Use the following rule for your decision: whenever you prefix some URLs with CGI-scripts to force them to be processed by the CGI-script, the chance is high that you will run into problems (or even overhead) on sub-requests. In these cases, use this flag.

I duly added the flag:

RewriteRule ^(.*)$ code/top.php/$1 [L,NS]

This should work, but doesn’t.  I’m not sure why except that the Apache 2.2 documentation for NS|nosubreq reads:

NS|nosubreq

Use of the [NS] flag prevents the rule from being used on subrequests. For example, a page which is included using an SSI (Server Side Include) is a subrequest, and you may want to avoid rewrites happening on those subrequests.

Images, javascript files, or css files, loaded as part of an HTML page, are not subrequests – the browser requests them as separate HTTP requests.

This is identical to the documentation for 1.3, 2.0 and 2.3, except that the quote about “URLs with CGI-scripts” is singularly missing.  I can’t find anything that says so, but my guess is that there was some bug (feature?) introduced in 2.2 that is being fixed in 2.3.

WordPress is immune to the infinite loop because the directive “RewriteCond %{REQUEST_FILENAME} !-f” says “if the file exists, use that without rewriting”.  As “index.php” is a file, the rule does not rewrite a second time.  However, the layout of my files meant that I sometimes had an actual file at the pseudo-location (e.g. /an/example really exists).  I could have reorganised the complete directory structure … but then I would still be fixing all the broken links now!

Instead I simply added an explicit “please don’t rewrite my top.php script” condition:

RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_URI}  !^/code/top.php/.*
RewriteRule ^(.*)$ code/top.php/$1 [L,NS]

I suspect that this will be unnecessary when Apache upgrades to 2.3, but for now … it works 🙂

fix for toString error in PHPUnit

I was struggling to get PHPUnit to run under PHP 5.2.9. I’ve only used PHPUnit a little, so may have simply got something wrong, but I kept getting the error:

Catchable fatal error: Object of class AbcTest could not be converted to string in {dir}/PHPUnit/Framework/TestFailure.php on line 98

The error happens in the PHPUnit_Framework_TestFailure::toString method, which tries to implicitly convert a test case to a string.

The class AbcTest is my test case, which it is trying to display following a test failure.  PHPUnit test cases all extend PHPUnit_Framework_TestCase, and while this has a toString method it does not have the ‘magic method’ __toString required by PHP 5.2 onwards.

To fix the problem I simply added the following method to the class PHPUnit_Framework_TestCase in PHPUnit/Framework/TestCase.php.

public function __toString()
{
    return $this->toString();
}

I am using PHPUnit 3.4.9, but peeking at 3.5.0beta it looks the same.  I’m guessing the PHPUnit_Framework_TestFailure::toString method is not used much, so has been missed since the change to PHP 5.2.x.
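An alternative that avoids patching PHPUnit itself would be to put the magic method into a base class of your own tests; a minimal sketch (the class names are made up) might look like:

<?php
require_once 'PHPUnit/Framework/TestCase.php';

// hypothetical project-wide base class: supplies the __toString() that
// PHPUnit_Framework_TestFailure::toString() expects, without touching PHPUnit's files
abstract class MyProjectTestCase extends PHPUnit_Framework_TestCase
{
    public function __toString()
    {
        return $this->toString();   // delegate to PHPUnit's existing toString()
    }
}

class AbcTest extends MyProjectTestCase
{
    public function testSomething()
    {
        $this->assertTrue(true);
    }
}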

PHPUnit is now on GitHub so I really ought to work out how to submit corrections to that … but another day I think.

Phoenix rises – vfridge online again

vfridge is back!

I mentioned ‘Project Phoenix’ in my previous post, and this was it – getting vfridge up and running again.

Ten years ago I was part of a dot.com company aQtive1 with Russell Beale, Andy Wood and others.  Just before it folded in the aftermath of the dot.com crash, aQtive spawned a small spin-off vfridge.com.  The virtual fridge was a social networking web site before the term existed, and while vfridge the company went the way of most dot.coms, for some time after I kept the vfridge web site running on Fiona’s servers until it gradually ‘decayed’ partly due to Javascript/DOM changes and partly due to Java’s interactions with mysql becoming unstable (note very, very old Java code!).  But it is now back online 🙂

The core idea of vfridge is placing small notes, photos and ‘magnets’ in a shareable web area that can be moved around and arranged like you might with notes held by magnets to a fridge door.

Underlying vfridge was what we called the websharer vision, which looked towards a web of user-generated content.  Now this is passé, but at the time it was directly counter to accepted wisdom and, looking back, seems prescient – remember this was written in 1999:

Although everyone isn’t a web developer, it is likely that soon everyone will become an Internet communicator — email, PC-voice-comms, bulletin boards, etc. For some this will be via a PC, for others using a web-phone, set-top box or Internet-enabled games console.

The web/Internet is not just a medium for publishing, but a potential shared place.

Everyone may be a web sharer — not a publisher of formal public ‘content’, but personal or semi-private sharing of informal ‘bits and pieces’ with family, friends, local community and virtual communities such as fan clubs.

This is not just a future for the cognoscenti, but for anyone who chats in the pub or wants to show granny in Scunthorpe the baby’s first photos.

Just over a year ago I thought it would be good to write a retrospective about vfridge in the light of the social networking revolution.  We did a poster, “Designing a virtual fridge”, about vfridge years ago at a Computers and Fun workshop, but have never written at length about its design and development.  In particular it would be good to analyse the reasons, technical, social and commercial, why it did not ‘take off’ at the time.  However, it is hard to write about it without good screen shots, and could I find any? (Although now I have.)  So I thought it would be good to revive it, and now you can try it out again. I started with a few days’ effort last year at Christmas and Easter time (leisure activity), but over the last week I have at last used the fact that I have half my time unpaid and so free for my own activities … and it is done 🙂

The original vfridge was implemented using Java Servlets, but I have rebuilt it in PHP.  While the original development took over a year (starting down in Cornwall while on holiday watching the solar eclipse), this re-build took about 10 days’ effort, although of course with no design decisions needed.  The reason it took so much development back then is one of the things I want to consider when I write the retrospective.

As far as possible the actual behaviour and design is exactly as it was back in 2000 … and yes, it does feel clunky, with lots of refreshing (remember, no AJAX or Web 2.0 in those days) and of course loads of frames!  In fact there is a little cleverness that allowed some client-end processing pre-AJAX2.  Also the new implementation uses the same templates as the original one, although the expansion engine had to be rewritten in PHP.  In fact this template engine was one of our most re-used bits of Java code, although now of course there are many alternatives.  Maybe I will return to a discussion of that in another post.

I have even resurrected the old mobile interface.  Yes there were WAP phones even in 2000, albeit with tiny green and black screens.  I still recall the excitement I felt the first time I entered a note on the phone and saw it appear on a web page 🙂  However, this was one place I had to extensively edit the page templates as nothing seems to process WML anymore, so the WML had to be converted to plain-text-ish HTML, as close as possible to those old phones!  Looks rather odd on the iPhone :-/

So, if you were one of those who had an account back in 2000 (Panos Markopoulos used it to share his baby photos 🙂 ), then everything is still there just as you left it!

If not, then you can register now and play.

  1. The old aQtive website is still viewable at aqtive.org, but don’t try to install onCue, it was developed in the days of Windows NT.[back]
  2. One trick used the fact that you can get Javascript to pre-load images.  When the front-end Javascript code wanted to send information back to the server, it preloaded an image URL that was really just there to activate a back-end script.  The frames used a change-propagation system, so that only those frames that depended on a particular user action were refreshed.  All of this is preserved in the current system — peek at the Javascript on the pages.  Maybe I’ll write about the details of these another time.[back]

PHP syntax checker updated

Took a quick break today from Project Phoenix1.

I’ve had a PHP syntax checker on meandeviation for several years, but only checking PHP 4 as that is what is running on the server.  However, I had an email asking about PHP 5, so now there is a PHP 5 version too 🙂

The syntax checker is a pretty simple layer over the PHP command line option “php -l” and also uses the PHP highlight_file function.  The main complication is parsing the HTML outputs of both as they change between versions of PHP!
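The core of such a checker is only a few lines; a rough sketch of the same idea (the file name and layout are made up, and the real version still has to untangle the HTML that both outputs produce):

<?php
$file = '/tmp/uploaded_example.php';    // the file the user submitted

// run the command-line lint check and show its verdict
$lint = shell_exec('php -l ' . escapeshellarg($file) . ' 2>&1');
echo '<pre>' . htmlspecialchars($lint) . '</pre>';

// pretty-printed, colour-coded copy of the source
highlight_file($file);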

There is also a download archive so you can have it running locally on your own system.

  1. watch this space …[back]

Italian conferences: PPD10, AVI2010 and Search Computing

I got back from a trip to Rome and Milan last Tuesday. This included the PPD10 workshop that Aaron, Lucia, Sri and I had organised and the AVI 2010 conference, both at the University of Rome “La Sapienza”, and a day workshop on Search Computing at Milan Polytechnic.

PPD10

The PPD10 workshop on Coupled Display Visual Interfaces1 followed on from a previous event, PPD08 at AVI 2008, and also a workshop on “Designing And Evaluating Mobile Phone-Based Interaction With Public Displays” at CHI2008.  The linking of public and private displays is something I’ve been interested in for some years, and it was exciting to see some of the kinds of scenarios discussed at Lancaster as potential futures some years ago now being implemented over a range of technologies.  Many of the key issues and problems proposed then are still to be resolved, and new ones are arising, but certainly it seems the technology is ‘coming of age’.  As well as much work filling in the space of interactions, there were also papers that pushed some of the existing dimensions/classifications; in particular, Rasmus Gude’s paper on “Digital Hospitality” stretched the public/private dimension by considering the appropriation of technology in the home by house guests.  The full proceedings are available at the PPD10 website.

AVI 2010

AVI is always a joy, and AVI 2010 was no exception: a biennial, single-track conference with high-quality papers (20% acceptance rate this year), and always in lovely places in Italy with good food and good company!  I first went to AVI in 1996, when it was in Gubbio, to give a keynote, “Closing the Loop: modelling action, perception and information”, and have gone every time since — I always say that Stefano Levialdi is a bit like a drug pusher, the first experience for free and ever after you are hooked! The high spot this year was undoubtedly Hitomi Tsujita’s “Complete fashion coordinator”2, a system for using social networking to help choose clothes to wear — partly just fun with a wonderful video, but also a very thoughtful mix of physical and digital technology.


images from Complete Fashion Coordinator

The keynotes were all great.  Daniel Keim gave a really lucid state of the art in Visual Analytics (more later) and Patrick Lynch a fresh view of visual understanding based on many years’ experience, highlighting particularly some of the more immediate ‘gut’ reactions we have to interfaces.  Daniel Wigdor gave an almost blow-by-blow account of work at Microsoft on developing interaction methods for next-generation touch-based user interfaces.  His paper is a great methodological exemplar for researchers, combining very practical considerations, more principled design space analysis and targeted experimentation.

Looking more at the detail of Daniel’s work at Microsoft, it is interesting that he has a harder job than Apple’s interaction developers.  While Apple can design the hardware and interaction together, Microsoft as a system provider needs to deal with very diverse hardware, leading to a ‘least common denominator’ approach at the level of quite basic touch interactions.  For walk-up-and-use systems such as Microsoft Surface in bar tables, this means that users have a consistent experience across devices.  However, I did wonder whether this approach, which is basically the presentation/lexical level of Seeheim, was best, or whether it would be better to settle on some higher-level primitives more at the Seeheim dialogue level, thinking particularly of the way the iPhone turns pull-down menus from web pages into spinning selectors.  For devices that people own, it may be that these more device-specific variants of common logical interactions allow a richer user experience.

The complete AVI 2010 proceedings (in colour or B&W) can be found at the conference website.

The very last session of AVI was a panel I chaired on “Visual Analytics: people at the heart of data” with Daniel Keim, Margit Pohl, Bob Spence and Enrico Bertini (in the order they sat at the table!).  The panel was prompted largely because the EU VisMaster Coordinated Action is producing a roadmap document looking at future challenges for visual analytics research in Europe and elsewhere.  I had been worried that it could be a bit dead at 5pm on the last day of the conference, but it was a lively discussion … and Bob served well as the enthusiastic but also slightly sceptical outsider to VisMaster!

As I write this, there is still time (just, literally weeks!) for final input into the VisMaster roadmap and if you would like a draft I’ll be happy to send you a PDF and even happier if you give some feedback 🙂

Search Computing

I was invited to go to this one-day workshop and had the joy to travel up on the train from Rome with Stu Card and his daughter Gwyneth.

The search computing workshop was organised by the SeCo project. This is a large single-site project (around 25 people for 5 years) funded as one of the EU’s ‘IDEAS Advanced Grants’ supporting ‘investigation-driven frontier research’.  Really good to see the EU funding work at the bleeding edge as so many national and European projects end up being ‘safe’.

The term search computing was entirely new to me, although it instantly brought several concepts to mind.  In fact the principal focus of SeCo is the bringing together of information in deep web resources, including combining result rankings; in database terms, a form of distributed join over heterogeneous data sources.

The work had many personal connections, including work on concept classification using ODP data dating back to aQtive days, as well as onCue itself and Snip!t.  It also has similarities with linked data in the semantic web world, but with crucial differences.  SeCo’s service approach uses meta-descriptions of the services to add semantics, whereas linked data in principle includes a degree of semantics in the RDF data.  Also the ‘join’ on services is on values and so uses a degree of run-time identity matching (Stu Card’s example was how to know that LA = ‘Los Angeles’), whereas linked data relies on URIs, so (again in principle) matching has already been done during data preparation.  My feeling is that linking the two paradigms would be very powerful, and even for certain kinds of raw data, such as tables, external semantics seems sensible.
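Just to make the value-level join concrete (this is purely illustrative, nothing to do with SeCo’s actual implementation), a run-time identity match is essentially a join through an alias table:

<?php
// tiny alias table standing in for run-time identity matching
$aliases = array('LA' => 'Los Angeles', 'NYC' => 'New York');

function canonical($name, $aliases) {
    return isset($aliases[$name]) ? $aliases[$name] : $name;
}

// two made-up result sets from different services
$hotels  = array(array('city' => 'LA',          'hotel'  => 'Sunset Inn'));
$flights = array(array('city' => 'Los Angeles', 'flight' => 'XY123'));

$joined = array();
foreach ($hotels as $h) {
    foreach ($flights as $f) {
        if (canonical($h['city'], $aliases) === canonical($f['city'], $aliases)) {
            $joined[] = array_merge($h, $f);   // one combined result row
        }
    }
}
print_r($joined);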

One of the real opportunities for both is to harness user interaction with data as an extra source of semantics.  For example, for the identity matching issue, if a user is linking two data sources and notices that ‘LA’ and ‘Los Angeles’ are not identified, this can be added as part of the interaction to serve the user’s own purposes at that time, but by so doing adding a special case that can be used for the benefit of future users.

While SeCo is predominantly focused on the search federation, the broader issue of using search as part of algorithmics is also fascinating.  Traditional algorithmics assumes that knowledge is basically in code or rules and is applied to data.  In contrast we are seeing the rise of web algorithmics where knowledge is garnered from vast volumes of data.  For example, Gianluca Demartini at the workshop mentioned that his group had used the Google suggest API to extend keywords and I’ve seen the same trick used previously3.  To some extent this is like classic techniques of information retrieval, but whereas IR is principally focused on a closed document set, here the document set is being used to establish knowledge that can be used elsewhere.  In work I’ve been involved with, both the concept classification and folksonomy mining with Alessio apply this same broad principle.

The slides from the workshop are appearing (but not all there yet!) at the workshop web page on the SeCo site.

  1. yes, I know this doesn’t give ‘PPD’; it stands for “public and private displays”[back]
  2. Hitomi Tsujita, Koji Tsukada, Keisuke Kambara, Itiro Siio, Complete Fashion Coordinator: A support system for capturing and selecting daily clothes with social network, Proceedings of the Working Conference on Advanced Visual Interfaces (AVI2010), pp.127–132.[back]
  3. The Yahoo! Related Suggestions API offers a similar service.[back]