Another year – running and walking, changing roles and new books

Yesterday I completed the Tiree Ultramarathon, I think my sixth since they began in 2014. As always a wonderful day and a little easier than last year. This is always a high spot in the year for me, and also corresponds to the academic year change, so a good point to reflect on the year past and year ahead.  Lots of things including changing job role, books published and in preparation, conferences coming to Wales … and another short walk …

Tiree Ultra and Tech Wave

Next week there will be a Tiree Tech Wave, the first since Covid struck. Really exciting to be doing this again, with a big group coming from Edinburgh University, who are particularly interested in co-design with communities.

Aside: I nearly wrote “the first post-Covid Tiree Tech Wave”, but I am very aware that for many the impact of Covid is not past: those with long Covid, immunocompromised people who are at almost as much risk now as at the peak of the pandemic, and patients in hospital where Covid adds considerably to mortality.

Albrecht Schmidt from Ludwig-Maximilians-Universität München was here again for the Ultra. He’s been several times after first coming the year of 40 mile an hour winds and rain all day … he is built of stern stuff.  Happily, yesterday was a little more mixed, wind and driving rain in the morning and glorious sunshine from noon onwards … a typical Tiree day 😊

We have hatched a plan to have Tiree Tech Wave next year immediately after the Ultra. There are a number of people in the CHI research community interested in technology related to outdoors, exercise and well-being, so hoping to have that as a theme and perhaps attract a few of the CHI folk to the Ultra too.

Changing roles

My job roles have changed over the summer.

I’ve further reduced my hours as Director of the Computational Foundry to 50%. University reorganisation at Swansea over the last couple of years has created a School of Mathematics and Computer Science, which means that some of my activities helping to foster research collaboration between CS and Maths fall more within the School role. So, this seemed a good point to scale back and focus more on cross-University digital themes.

However, I will not be idle! I’ve also started a new part-time role as Professorial Fellow at Cardiff Metropolitan University. I have been a visiting professor at the Cardiff School of Art and Design for nearly 10 years, so this is partly building on many of the existing contacts I have there. However, my new role is cross-university, seeking to encourage and grow research across all subject areas. I’ve always struggled to fit within traditional disciplinary boundaries, so very much looking forward to this.

Books and writing

This summer has also seen the publication of “TouchIT: Understanding Design in a Physical-Digital World”. Steve, Devina, Jo and I first conceived this when we were working together on the DePTH project, which ran from 2007 to 2009 as part of the AHRC/EPSRC funded Designing for the 21st Century Initiative. The first parts were written in 2008 and 2009 during my sabbatical year when I first moved to Tiree and Steve was our first visitor. But then busyness of life took over until another spurt in 2017 and then much finishing off and updating. However, now it is at long last in print!

Hopefully not so long in the process, three more books are due to be published in this coming year, all around an AI theme. The first is a second edition of the “Introduction to Artificial Intelligence” textbook that Janet Finlay and I wrote way back in 1996. This has stayed in print and even been translated into Japanese. For many years the fundamentals of AI changed only slowly – the long ‘AI winter’. However, over recent years things have changed rapidly, not least driven by massive increases in computational capacity and availability of data; so it seemed like a suitable time to revisit this. Janet’s world is now all about dogs, so I’ve taken up the baton. Writing the new chapters has been easy; the editing to make it all flow as a single volume has been far more challenging, but after a focused writing week in August, it feels as though I’ve broken the back of it.

In addition, there are two smaller volumes in preparation as part of the Routledge and CRC AI for Everything series. One is with Clara Crivellaro on “AI for Social Justice“, the other a sole-authored “AI for Human–Computer Interaction”.

All of these were promised in 2020 early in the first Covid lockdown, when I was (rather guiltily) finding the time tremendously productive. However, when the patterns of meetings started to return to normal (albeit via Zoom), things slowed down somewhat … but now I think (hope!) all on track 😊

Welcoming you to Wales

In 2023 I’m chairing and co-chairing two conferences in Swansea. In June, ACM Engineering Interactive Computing Systems (EICS 2023) and in September the European Conference on Cognitive Ergonomics (web site to come, but here is ECCE 2022). We also plan to have a Techwave Cymru in March. So I’m looking forward to seeing lots of people in Wales.

As part of the preparation for EICS I’m planning to do a series of regular blog posts on more technical aspects of user interface development … watch this space …

Alan’s on the road again

Nearly ten years ago, in 2013, I walked around Wales, a personal journey and research expedition. I always assumed I would do ‘something else’, but time and life took over. Now, the tenth anniversary is upon me and it feels time to do something to mark it.

I’ve always meant to edit the day-by-day blogs into a book, but that certainly won’t happen next year. I will do some work on the dataset of biodata, GPS, text and images that has been used in a few projects and is still unique, including, I believe, the largest single ECG trace in the public domain.

However, I will do ‘something else’.

When walking around the land and ocean boundaries of Wales, I was always aware that while in some sense this ‘encompassed’ the country, it was also the edge, the outside. To be a walker is to be a voyeur, catching glimpses, but never part of what you see.  I started then to think of a different journey, to the heart of Wales, which for me, being born and brought up in Cardiff, is the coal valleys stretching northwards and outwards. The images of coal-blackened miners’ faces and the white crosses on the green hillside after Aberfan are etched into my own conception of Wales.

So, there will be an expedition, or rather a series of expeditions, walking up and down the valleys, meeting communities, businesses, schools and individuals.

Do you know places or people I should meet?

Do you want to join me to show me places you know or to explore new places?

Busy September – talks, tutorials and an ultra-marathon

September has been a full month!

During the last two weeks things have started to kick back into action, with the normal rounds of meetings and induction week for new students.  For the latter I’d pre-recorded a video welcome, so my involvement during the week was negligible.  However, in addition I delivered a “Statistics for HCI” day course organised by the BCS Interaction Group with PhD students from across the globe and also a talk “Designing User Interactions with AI: Servant, Master or Symbiosis” at the AI Summit London.  I was also very pleased to be part of the “60 faces of IFIP” campaign by the International Federation for Information Processing.

It was the first two weeks that stood out though, as I was back on Tiree for two whole weeks.  Not 100% holiday as during the stay I gave two virtual keynotes: “Qualitative–Quantitative Reasoning: thinking informally about formal things” at the International Colloquium on Theoretical Aspects of Computing (ICTAC) in Kazakhstan and “Acting out of the Box” at the University of Wales Trinity St David (UWTSD) Postgraduate Summer School.  I also gave a couple of lectures on “Modelling interactions: digital and physical” at the ICTAC School which ran just before the conference and presented a paper on “Interface Engineering for UX Professionals” in the Workshop on HCI Engineering Education (HCI-E2) at INTERACT 2021 in Bari.  Amazing how easy it is to tour the world from a little glamping pod on a remote Scottish Island.

Of course the high point was not the talks and meetings, but the annual Tiree Ultra-marathon.  I’d missed last year, so especially wonderful to be back: thirty five miles of coastline, fourteen beaches, not to mention so many friendly faces, old friends and new.  Odd of course with Covid zero-contact and social distancing – the usual excited press of bodies at the pre-race briefing in An Talla, the Tiree community hall, replaced with a video webinar and all a little more widely spaced for the start on the beach too.

The course was slightly different too, anti-clockwise and starting half way along Gott Bay, the longest beach.  Gott Bay is usually towards the end of the race, about 28 miles in, so the long run, often into the wind, is one of the challenges of the race.  I recall in 2017 running the beach with a 40 mile an hour head wind and stinging rain – I knew I’d be faster walking, but was determined to run every yard of beach.  Another runner came up behind me and walked in my shelter.  However, this year had its own sting in the tail with Ben Hynish, the highest point, at 26 miles in.

The first person was across the line in about four-and-a-quarter hours, the fastest time yet.  I was about five hours later!

This was my fifth time doing the ultra, but the hardest yet, maybe in part due to lockdown couch-potato-ness!  My normal training pattern is that about a month before the ultra I think, “yikes, I’ve not run for a year” and then rapidly build up the miles – not the recommended training regime!  This year I knew I wasn’t as fit as usual, so I did start in May … but then got a knee injury, then had to self-isolate … and then it was into the second-half of July; so about a month again.

Next year it will be different, I will keep running through the winter … hmm … well, time will tell!

The different September things all sound very disparate – and they are, but there are some threads and connections.

The first thread is largely motivational.

The UWTSD keynote was about the way we are not defined by the “kind of people” we think of ourselves as being, but by the things we do.  The talk used my walk around Wales in 2013 as the central example, but the ultra would have been just as pertinent.  Someone with my waistline is not who one would naturally think of as an ultramarathon runner – not that kind of person – but I did it.

However, I was not alone.  The ‘winners’ of the ultra are typically the rangy build one would expect of a long-distance runner, but beyond the front runners, there is something about the long distance that attracts a vast range of people of all ages, and all body shapes imaginable.  For many there are physical or mental health stories: relationship breakdowns, illnesses, that led them to running and through it they have found ways to believe in themselves again.  Post Covid this was even more marked: Will, who organises the ultra, said that many people burst into tears as they crossed the finish line, something he’d never seen before.

The other thread is about the mental tools we need to be a 21st century citizen.

The ICTAC keynote was about “Qualitative–Quantitative Reasoning”, which is my term for the largely informal understanding of numbers that is so important for both day-to-day and professional life, but is not part of formal education.  The big issues of our lives from Covid to Brexit to climate change need us to make sense of large-scale numerical or data-rich phenomena.  These often seem too complex to make sense of, yet are ones where we need to make appropriate choices in both our individual lives and political voices.  It is essential that we find ways to aid understanding in the public, press and politicians – including both educational resources and support tools.

The statistics course and my “Statistics for HCI” book are about precisely this issue – offering ways to make sense of often complex results of statistical analysis and obtain some of the ‘gut’ understanding that professional statisticians develop over many years.

My 60 faces of IFIP statement also follows this broad thread:

“Digital technology is now essential to being a citizen. The future of information processing is the future of everyone; so needs to be understood and shaped by all. Often ICT simply reinforces existing patterns, but technology is only useful if we can use it to radically reimagine a better world.”


More information on different events

Tiree Ultra

Tiree Ultramarathon web page and Facebook Group

Paper: Interface Engineering for UX Professionals

HCI-E2: Workshop on HCI Engineering Education – for developers, designers and more, INTERACT 2021, Bari, Italy – August 31st, 2021. See more – paper and links

Summer School Lectures: Modelling interactions: digital and physical

Lecture at ICTAC School 2021: 18th International Colloquium on Theoretical Aspects of Computing, Nazarbayev University, Nur-Sultan, Kazakhstan, 1st September 2021. See more – abstract and links

Talk: Designing User Interactions with AI: Servant, Master or Symbiosis

The AI Summit London, 22nd Sept. 2021. See more – abstract and links

Day Course: Statistics for HCI

BCS Interaction Group One Day Course for PhD Students, 21st Sept. 2021.
See my Statistics for HCI Micro-site.

Keynote: Acting out of the Box

Rhaglen Ysgol Haf 2021 PCYDDS / UWTSD Postgraduate Summer School 2021, 10th Sept. 2021. See more – abstract and links

Keynote: Qualitative–Quantitative Reasoning: thinking informally about formal things

18th International Colloquium on Theoretical Aspects of Computing, Nazarbayev University, Nur-Sultan, Kazakhstan, 10th Sept. 2021. See more – full paper and links

Induction week greeting


Darwinian markets and sub-optimal AI

Do free markets generate the best AI?  Not always, and this not only hits the bottom line, but comes with costs for personal privacy and the environment.  The combination of network effects and winner-takes-all advertising expenditure means that the resulting algorithms may be worst for everyone.

A few weeks ago I was talking with Maria Ferrario (Queens University Belfast) and Emily Winter (Lancaster University) regarding privacy and personal data.  Social media sites and other platforms are using ever more sophisticated algorithms to micro-target advertising.  However, Maria had recently read a paper suggesting that this had gone beyond the point of diminishing returns: far simpler  – and less personally intrusive – algorithms achieve nearly as good performance as the most complex AI.  As well as having an impact on privacy, this will also be contributing to the ever growing carbon impact of digital technology.

At first this seemed counter-intuitive.  While privacy and the environment may not be central concerns, surely companies will not invest more resources in algorithms than is necessary to maximise profit?

However, I then remembered the peacock tail.


[Image: peacock displaying its tail. Photo: Jatin Sindhu, CC BY-SA 4.0, via Wikimedia Commons]
The peacock tail is a stunning wonder of nature.  However, in strict survival terms, it appears to be both flagrantly wasteful and positively dangerous – like eye-catching supermarket packaging for the passing predator.

The simple story of Darwinian selection suggests that this should never happen.  The peacocks that have smaller and less noticeable tails should have a better chance of survival, passing their genes to the next generation, leading over time to more manageable and less bright plumage.  In computational terms, evolution acts as a slow, but effective optimisation algorithm, moving a population ever closer to a perfect fit with its environment.

However, this simple story has twists, notably runaway sexual selection.  The story goes like this.  Many male birds develop brighter plumage during mating season so that they are more noticeable to females.  This carries a risk of being caught by a predator, but there is a balance between the risks of being eaten and the benefits of copulation.  Stronger, faster males are better able to fight off or escape a predator, and hence can afford to have slightly more gaudy plumage.  Thus, for the canny female, brighter plumage is a proxy indicator of a more fit potential mate.  As this becomes more firmly embedded into the female selection process, there is an arms race between males – those with less bright plumage will lose out to those with brighter plumage and hence die out.  The baseline constantly increases.

Similar things can happen in free markets, which are often likened to Darwinian competition.

Large platforms such as Facebook or Google make the majority of their income through advertising.  Companies with large advertising accounts are constantly seeking the best return on their marketing budgets and will place ads on the platform that offers the greatest impact (often measured by click-through) for the least expenditure.  Rather like mating, this is a winner-takes-all choice.  If Facebook’s advertising is 1% more effective than Google’s, a canny advertiser will place all their adverts with Facebook, and vice versa.  Just like the peacock, there is an existential need to outdo each other and thus almost no limit on the resources that should be squandered to gain that elusive edge.

In practice there are modifying factors; the differing demographics of platforms mean that one or other may be better for particular companies and also, perhaps most critically, the platform can adjust its pricing to reflect the effectiveness so that click-through-per-dollar is similar.

The latter is the way the hidden hand of the free market is supposed to operate to deliver ‘optimal’ productivity.  If spending 10% more on a process can improve productivity by 11%, you will make the investment.  However, the theory of free markets (to the extent that it ever works) relies on an ‘ideal’ situation with perfect knowledge, free competition and low barriers to entry.  Many countries operate collusion and monopoly laws in pursuit of this ‘ideal’ market.

Digital technology does not work like this. 

For many application areas, network effects mean that emergent monopolies are almost inevitable.  This was first noticed for software such as Microsoft Office – if all my collaborators use Office then it is easier to share documents with them if I use Office also.  However, it becomes even more extreme with social networks – although there are different niches, it is hard to have multiple Facebooks, or at least to create a new one – the value of the platform is because all one’s friends use it.

For the large platforms this means that a substantial part of their expenditure goes on maintaining and growing the service (social network, search engine, etc.).  While the income is obtained from advertising, only a small proportion of the costs lies in developing and running the algorithms that micro-target adverts.

Let’s assume that the ratio of platform to advertising algorithm costs is 10:1 (I suspect it is a lot greater).  Now imagine platform P develops an algorithm that uses 50% more computational power, but improves advertising targeting effectiveness by 10%; at first this sounds a poor balance, but remember that 10:1 ratio.

The platform can charge 10% more whilst being competitive.   However, the 50% increase in advertising algorithm costs is just 5% of the overall company running costs, as 90% are effectively fixed costs of maintaining the platform.  A 5% increase in costs has led to a 10% increase in corporate income.  Indeed one could afford to double the computational costs for that 10% increase in performance and still maintain profitability.
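For the numerically inclined, here is a tiny Python sketch of this illustrative arithmetic; the 10:1 ratio and the percentages are the assumed figures from the argument above, not real data.

```python
# Illustrative economics of a platform adopting a costlier ad algorithm.
# All numbers are the hypothetical ones from the text, not real figures.

platform_cost = 10.0   # fixed cost of running the platform itself
algorithm_cost = 1.0   # cost of the ad-targeting algorithms (10:1 ratio)
income = 12.0          # advertising income (any figure above total cost)

# New algorithm: 50% more computation buys 10% better targeting,
# which lets the platform charge 10% more.
new_algorithm_cost = algorithm_cost * 1.5
new_income = income * 1.10

old_profit = income - (platform_cost + algorithm_cost)
new_profit = new_income - (platform_cost + new_algorithm_cost)

cost_rise = (new_algorithm_cost - algorithm_cost) / (platform_cost + algorithm_cost)
print(f"total costs rise by {cost_rise:.0%}")           # ~5%
print(f"profit: {old_profit:.2f} -> {new_profit:.2f}")  # 1.00 -> 1.70
```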

Of course, the competing platforms will also work hard to develop ever more sophisticated (and typically privacy reducing and carbon producing) algorithms, so that those gains will be rapidly eroded, leading to the next step.

In the end there are diminishing returns for effective advertising: there are only so many eye-hours and dollars in users’ pockets. The 10% increase in advertising effectiveness is not a real productivity gain, but is about gaining a temporary increase in users’ attention, given the current state of competing platforms’ advertising effectiveness.

Looking at the system as a whole, more and more energy and expenditure are spent on algorithms that are ever more invasive of personal autonomy, and in the end yield no return for anyone.

And it’s not even a beautiful extravagance.

Free AI book and a new one coming …

Yes, a new AI book is coming … but until then you can download the first edition for FREE 🙂

Many years ago Janet Finlay and I wrote a small introduction to artificial intelligence.  At the time there were several Bible-sized tomes … some of which are still the standard textbooks today.  However, Janet was teaching a masters conversion course and found that none of these books were suitable for taking the first steps on an AI journey, especially for those coming from non-computing disciplines.

Over the years it faded to the back of our memories, with the brief exception of the time when, after we’d nearly forgotten it, CRC Press issued a Japanese translation.  Once or twice the thought of doing an update arose, but quickly passed.  This was partly because our main foci were elsewhere, but also, at the risk of insulting all my core-AI friends, because not much changed in core AI for many years!

Coming soon … Second Edition

Of course over recent years things have changed dramatically, hence my decision, nearly 25 years on, to create a new edition maintaining the aim to give a rich but accessible introduction, but capturing some of the recent trends and giving these a practical and human edge.  Following the T-model of teaching, I’d like to help both newcomer and expert gain a broad perspective of the issues and landscape, whilst giving enough detail for those that want to delve into a more specific area.

A Free Book and New Resources

In the mean time the publisher, Taylor & Francis/CRC, has agreed to make the PDF of the first edition available free of charge.  I have updated some of the code examples from the first edition and will be incrementally adding new material to the second edition micro-site, including slides, case studies, video and interactive materials.  If you’d like to teach using this please let me know your views on the topics and also if there are areas where you’d like me to create preliminary material with greater urgency.  I won’t promise to be able to satisfy everyone, but can use this to adjust my priorities.

Why now?

The first phase of change in AI was driven by the rise of big data and the increasing use of forms of machine learning to drive adverts, search results and social media.  Within user interface design, many of the fine details of colour choices and screen layout are now performed using A–B testing … slight variants of interfaces delivered to millions of people – shallow, without understanding and arguably little more than bean counting, but in numerous areas vast data volume has been found to be ‘unreasonably effective’ at solving problems that were previously seen to be the remit of deep AI.

In the last few years deep learning has taken over as the driver of AI research and often also media hype.  Here it has been the sheer power of computation, partly due to Moore’s Law, with computation nearly a million times faster than it was when that first edition was written nearly 25 years ago.  However, it has also been enabled by cloud computing allowing large numbers of computers to efficiently attack a single problem.  Algorithms that might have been conceived of but dismissed as impractical in the past have become commonplace.

Alongside this has been a dark side of AI, from automated weapons and mass surveillance, to election rigging and the insidious knowledge that large corporations have gathered through our day-to-day web interactions.  In the early 1990s I warned of the potential danger of ethnic and gender bias in black-box machine learning and I’ve returned to this issue more recently as those early predictions have come to pass.

Across the world there are new courses running or being planned and people who want to know more.  In Swansea we have a PhD programme on people-first AI/big data, and there is currently a SIGCHI Italy workshop call out for Teaching HCI for AI: Co-design of a Syllabus.  There are several substantial textbooks that offer copious technical detail, but can be inaccessible for the newcomer or those coming from other disciplines.  There are also a number of excellent books that deal with the social and human impact of AI, but without talking about how it works.

I hope to be able to build upon the foundations that Janet and I established all those years ago to create something that fills a crucial gap: giving a human-edge to those learning artificial intelligence from a computing background and offering an accessible technical introduction for those approaching the topic from other disciplines.


Human-Like Computing

Last week I attended an EPSRC workshop on “Human-Like Computing“.

The delegate pack offered a tentative definition:

“offering the prospect of computation which is akin to that of humans, where learning and making sense of information about the world around us can match our human performance.” [E16]

However, the purpose of this workshop was to clarify and expand on this definition, exploring what it might mean for computers to become more like humans.

It was an interdisciplinary meeting with some participants coming from more technical disciplines such as cognitive science, artificial intelligence, machine learning and robotics; others from psychology or studying human and animal behaviour; and some, like myself, from HCI or human factors, bridging the two.

Why?

Perhaps the first question is why one might even want more human-like computing.

There are two obvious reasons:

(i) Because it is a good model to emulate — Humans are able to solve some problems, such as visual pattern finding, which computers find hard. If we can understand human perception and cognition, then we may be able to design more effective algorithms. For example, in my own work colleagues and I have used models based on spreading activation and layers of human memory when addressing ‘web scale reasoning’ [K10,D10] (see the sketch after these goals).

(ii) For interacting with people — There is considerable work in HCI in making computers easier to use, but there are limitations. Often we are happy for computers to be simply ‘tools’, but at other times, such as when your computer notifies you of an update in the middle of a talk, you wish it had a little more human understanding. One example of this is recent work at Georgia Tech teaching human values to artificial agents by reading them stories! [F16]

To some extent (i) is simply the long-standing area of nature-inspired or biologically-inspired computing. However, the combination of computational power and psychological understanding means that perhaps we are at the point where new strides can be made. Certainly, the success of ‘deep learning’ and the recent computer mastery of Go suggest this. In addition, by my own calculations, for several years the internet as a whole has had more computational power than a single human brain, and we are very near the point when we could simulate a human brain in real time [D05b].

Both goals, but particularly (ii), suggest a further goal:

(iii) new interaction paradigms — We will need to develop new ways to design for interacting with human-like agents and robots, not least how to avoid the ‘uncanny valley’ and how to avoid the appearance of over-competence that has bedevilled much work in this broad area. (see more later)

Both goals also offer the potential for a fourth secondary goal:

(iv) learning about human cognition — In creating practical computational algorithms based on human qualities, we may come to better understand human behaviour, psychology and maybe even society. For example, in my own work on modelling regret (see later), it was aspects of the computational model that highlighted the important role of ‘positive regret’ (“the grass is greener on the other side”) in helping us avoid ‘local minima’, where we stick to the things we know and do not explore new options.
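As a flavour of goal (i), here is a minimal Python sketch of spreading activation over a toy concept graph. The graph, weights, decay and number of steps are all invented for illustration; they are not the parameters of the cited work [K10,D10].

```python
# Minimal spreading-activation sketch over a small concept graph.
# Graph, weights and decay are illustrative, not from the cited papers.

graph = {
    "holiday": [("beach", 0.8), ("flight", 0.6)],
    "beach":   [("sand", 0.9), ("holiday", 0.8)],
    "flight":  [("airport", 0.7)],
    "sand":    [],
    "airport": [],
}

def spread(seeds, decay=0.5, steps=2):
    """Propagate activation outwards from the seed concepts."""
    activation = dict(seeds)
    frontier = dict(seeds)
    for _ in range(steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbour, weight in graph.get(node, []):
                passed = act * weight * decay
                if passed > activation.get(neighbour, 0.0):
                    next_frontier[neighbour] = passed
        activation.update(next_frontier)
        frontier = next_frontier
    return activation

# Activating 'holiday' pulls related concepts to mind, fading with distance.
print(spread({"holiday": 1.0}))
```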

Human or superhuman?

Of course humans are not perfect: do we want to emulate their limitations and failings?

For understanding humans (iv), the answer is probably “yes”, and maybe by understanding human fallibility we may be in a better position to predict and prevent failures.

Similarly, for interacting with people (ii), the agents should show at least some level of human limitations (even if ‘put on’); for example, a chess program that always wins would not be much fun!

However, for simply improving algorithms, goal (i), we may want to take the ‘best bits’ from human cognition and merge them with the best aspects of artificial computation. Of course it may be that the frailties are also the strengths; for example, the need to come to decisions and act in relatively short timescales (in terms of brain ‘ticks’) may be one way in which we avoid ‘over learning’, a common problem in machine learning.

In addition, the human mind has developed to work with the nature of neural material as a substrate, and the physical world, both of which have shaped the nature of human cognition.

Very simple animals learn purely by Skinner-like response training, effectively what AI would term sub-symbolic. However, this level of learning requires many exposures to similar stimuli. For rarer occurrences, which do not happen frequently within a lifetime, learning must be at the very slow pace of genetic development of instincts. In contrast, conscious reasoning (symbolic processing) allows us to learn through a single or very small number of exposures; ideal for infrequent events or novel environments.

Big Data means that computers effectively have access to vast amounts of ‘experience’, and researchers at Google have remarked on the ‘Unreasonable Effectiveness of Data’ [H09] that allows problems, such as translation, to be tackled in a statistical or sub-symbolic way which previously would have been regarded as essentially symbolic.

Google are now starting to recombine statistical techniques with more knowledge-rich techniques in order to achieve better results again. As humans we continually employ both types of thinking, so there are clear human-like lessons to be learnt, but the eventual system will not have the same ‘balance’ as a human.

If humans had developed with access to vast amounts of data and maybe other people’s experience directly (rather than through culture, books, etc.), would we have developed differently? Maybe we would do more things unconsciously that we do consciously. Maybe with enough experience we would never need to be conscious at all!

More practically, we need to decide how to make use of this additional data. For example, learning analytics is becoming an important part of educational practice. If we have an automated tutor working with a child, how should we make use of the vast body of data about other tutors’ interactions with other children?  Should we have a very human-like tutor that effectively ‘reads’ learning analytics just as a human tutor would look at a learning ‘dashboard’? Alternatively, we might have a more loosely human-inspired ‘hive-mind’ tutor that ‘instinctively’ makes pedagogic choices based on the overall experience of all tutors, but maybe in an unexplainable way?

What could go wrong …

There have been a number of high-profile statements in the last year about the potential coming ‘singularity’ (when computers are clever enough to design new computers leading to exponential development), and warnings that computers could become sentient, Terminator-style, and take over.

There was general agreement at the workshop that this kind of risk was overblown and that, despite breakthroughs such as the mastery of Go, these systems are still very domain limited. It is many years before we have to worry about even general intelligence in robots, let alone sentience.

A far more pressing problem is that of incapable computers, which make silly mistakes, and the way in which people, maybe because of the media attention to the success stories, assume that computers are more capable than they are!

Indeed, over-confidence in algorithms is not just a problem for the general public, but also among computing academics, as I found in my personal experience on the REF panel.

There are of course many ethical and legal issues raised as we design computer systems that are more autonomous. This is already being played out with driverless cars, with issues of insurance and liability. Some legislators are suggesting allowing driverless cars, but only if there is a driver there to take control … but if the car relinquishes control, how do you safely manage the abrupt change?

Furthermore, while the vision of autonomous robots taking over the world is still far-fetched, more surreptitious control is already with us. Whether it is Uber cabs called by algorithm, or simply Google’s ranking of search results prompting particular holiday choices, we are all, to varying extents, doing “what the computer tells us”. I recall in the Dalek Invasion of Earth, the very un-human-like Daleks could not move easily amongst the rubble of war-torn London. Instead they used ‘hypnotised men’ controlled by some form of neural headset. If the Daleks had landed today and simply taken over or digitally infected a few cloud computing services, would we know?

Legibility

Sometimes it is sufficient to have a ‘black box’ that makes decisions and acts. So long as it works we are happy. However, a key issue for many ethical and legal questions, but also for practical interaction, is the ability to interrogate a system, to seek explanations of why a decision has been made.

Back in 1992 I wrote about these issues [D92], in the early days when neural networks and other forms of machine learning were being proposed for a variety of tasks, from controlling nuclear fusion reactions to credit scoring. One particular scenario was if an algorithm were used to pre-sort large numbers of job applications. How could you know whether the algorithms were being discriminatory? How could a company using such algorithms defend themselves if such an accusation were brought?

One partial solution then, as now, was to accept that the underlying learning mechanisms may involve emergent behaviour from statistical, neural network or other forms of opaque reasoning. However, this opaque initial learning process should give rise to an intelligible representation. This is rather akin to a judge who might have a gut feeling that a defendant is guilty or innocent, but needs to explicate that in a reasoned legal judgement.

This approach was exemplified by Query-by-Browsing, a system that creates queries from examples (using a variant of ID3), but then converts these into SQL queries. This was subsequently implemented [D94], and is still running as a web demonstration.
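To give a feel for the idea, here is a toy Python sketch in the spirit of Query-by-Browsing: induce a rule from the records a user has marked, then express it as SQL. The data and the single-threshold induction are invented for illustration; the real system uses a variant of ID3 and generates much richer queries.

```python
# Toy Query-by-Browsing: learn a rule from user-marked example records
# and emit it as SQL.  Hypothetical data; not the actual QbB algorithm.

records = [  # ((salary, years_service), marked-as-wanted?)
    ((15000, 2), False), ((48000, 10), True),
    ((52000, 3), True),  ((21000, 7), False),
]

def best_threshold(records, attr, name):
    """Find a threshold on one attribute that perfectly separates the examples."""
    for v in sorted(set(row[attr] for row, _ in records)):
        above = [wanted for row, wanted in records if row[attr] >= v]
        below = [wanted for row, wanted in records if row[attr] < v]
        if all(above) and not any(below):
            return f"{name} >= {v}"
    return None

condition = best_threshold(records, 0, "salary")
if condition:
    print(f"SELECT * FROM employees WHERE {condition};")
    # -> SELECT * FROM employees WHERE salary >= 48000;
```

The key point is the last step: the learnt rule is presented to the user as an intelligible SQL query, not kept as an opaque internal structure.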

For many years I have argued that it is likely that our ‘logical’ reasoning arises precisely from this need to explain our own tacit judgement to others. While we simply act individually, or by observing the actions of others, this can be largely tacit, but as soon as we want others to act in planned collaborative ways, for example to kill a large animal, we need to convince them. Once we have the mental mechanisms to create these explanations, they become internalised so that we end up with internal means to question our own thoughts and judgement, and even use them constructively to tackle problems more abstract and complex than found in nature. That is, dialogue leads to logic!

Scenarios

We split into groups and discussed scenarios as a means to understand the potential challenges for human-like computing. Over multiple sessions the group I was in discussed one main scenario and then a variant.

Paramedic for remote medicine

The main scenario consisted of a patient far from a central medical centre, with an intelligent local agent communicating intermittently and remotely with a human doctor. Surprisingly the remote aspect of the scenario was not initially proposed by me thinking of Tiree, but by another member of the group thinking about some of the remote parts of the Scottish mainland.

The local agent would need to be able to communicate with the patient, be able to express a level of empathy, be able to physically examine (needing touch sensing, vision), and discuss symptoms. On some occasions, like a triage nurse, the agent might be sufficiently certain to be able to make a diagnosis and recommend treatment. However, at other times it may need to pass on to the remote doctor, being able to describe what had been done in terms of examination, symptoms observed and information gathered from the patient, in the same way that a paramedic does when handing over a patient to the hospital. However, even after the handover of responsibility, the local agent may still form part of the remote diagnosis, and may be able to take over again once the doctor has determined an overall course of action.

The scenario embodied many aspects of human-like computing:

  • The agent would require a level of emotional understanding to interact with the patient
  • It would require fine and situation-contingent robotic features to allow physical examination
  • Diagnosis and decisions would need to be guided by rich human-inspired algorithms based on large corpora of medical data, case histories and knowledge of the particular patient.
  • The agent would need to be able to explain its actions both to the patient and to the doctor. That is, it would not only need to transform its own internal representations into forms intelligible to a human, but do so in multiple ways depending on the inferred knowledge and nature of the person.
  • Ethical and legal responsibility are key issues in medical practice
  • The agent would need to be able to manage handovers of control.
  • The agent would need to understand its own competencies in order to know when to call in the remote doctor.

The scenario could be in physical or mental health. The latter is particularly important given recent statistics, which suggested only 10% of people in the UK suffering mental health problems receive suitable help.

Physiotherapist

As a more specific scenario still, one of the group related how he had been to an experienced physiotherapist after a failed diagnosis by a previous physician. Rather than jumping straight into a physical examination, or even apparently watching the patient’s movement, the physiotherapist proceeded to chat for 15 minutes about aspects of the patient’s life, work and exercise. At the end of this process, the physiotherapist said, “I think I know the problem”, and proceeded to administer a directed test, which correctly diagnosed the problem and led to successful treatment.

Clearly the conversation had given the physiotherapist a lot of information about potential causes of injury, aided by many years observing similar cases.

To do this using an artificial agent would suggest some level of:

  • theory/model of day-to-day life

Thinking about the more conversational aspects of this I was reminded of the PhD work of Ramanee Peiris [P97]. This concerned consultations on sensitive subjects such as sexual health. It was known that when people filled in (initially paper) forms prior to a consultation, they were more forthcoming and truthful than if they had to provide the information face-to-face. This was even if the patient knew that the person they were about to see would read the forms prior to the consultation.

Ramanee’s work extended this first to electronic forms and then to chat-bot style discussions which were semi-scripted, but used simple textual matching to determine which topics had been covered, including those spontaneously introduced by the patient. Interestingly, the more human-like the system became, the more truthful and forthcoming the patients were, even though they were less so with a real human.
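As a flavour of how crude such matching can be while still being effective, here is a minimal Python sketch of a semi-scripted consultation. The topics and keywords are invented; this is not Ramanee’s actual system.

```python
# Semi-scripted consultation with simple keyword-based topic matching.
# Topics and keywords are invented for illustration.

topics = {
    "smoking":  {"smoke", "smoking", "cigarettes"},
    "alcohol":  {"drink", "drinking", "alcohol", "wine", "beer"},
    "exercise": {"exercise", "gym", "run", "running", "walking"},
}
covered = set()

def process(reply):
    """Mark any topic whose keywords appear in the patient's reply."""
    words = set(reply.lower().split())
    for topic, keywords in topics.items():
        if words & keywords:
            covered.add(topic)

process("I gave up smoking but I still drink wine most evenings")

# Only ask about topics the patient has not already covered spontaneously.
for topic in topics:
    if topic not in covered:
        print(f"Could you tell me about your {topic}?")
# -> asks about exercise only
```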

As well as revealing lessons for human interactions with human-like computers, this also showed that human-like computing may be possible with quite crude technologies. Indeed, even Eliza was treated (to Weizenbaum’s alarm) as if it really were a counsellor, even though people knew it was ‘just a computer’ [W66].

Cognition or Embodiment?

I think it fair to say that the overall balance, certainly in the group I was in, was towards the cognitivist: that is, a more Cartesian approach starting with understanding and models of internal cognition, and then seeing how these play out with external action. Indeed, the term ‘representation’ was used repeatedly as an assumed central aspect of any human-like computing, and there was even talk of resurrecting Newell’s project for a ‘unified theory of cognition’ [N90].

There did not appear to be any hard-core embodiment theorists at the workshop, although several people had sympathies. This was perhaps as well, as we could easily have degenerated into well-rehearsed arguments for and against embodiment/cognition-centred explanations … not least about the critical word ‘representation’.

However, I did wonder whether a path that deliberately took embodiment centrally would be valuable. How many human-like behaviours could be modelled in this way, taking external perception-action as central and only taking on internal representations when they were absolutely necessary (Andy Clark’s 007 principle) [C98]?

Such an approach would meet limits, not least the physiotherapist’s 15-minute chat, but I would guess it would be more successful over a wider range of behaviours and scenarios than we would at first think.

Human–Computer Interaction and Human-Like Computing

Russell and I were partly there representing our own research interests, but also more generally as part of the HCI community, looking at the way human-like computing would intersect existing HCI agendas, or maybe create new challenges and opportunities. (see poster) It was certainly clear during the workshop that there is a substantial role for human factors, from fine motor interactions, to conversational interfaces and socio-technical systems design.

Russell and I presented a poster, which largely focused on these interactions.

[Poster: HCI and Human-Like Computing]

There are two sides to this:

  • understanding and modelling for human-like computing — HCI studies and models complex, real world, human activities and situations. Psychological experiments and models tend to be very deep and detailed, but narrowly focused and using controlled, artificial tasks. In contrast HCI’s broader, albeit more shallow, approach and focus on realistic or even ‘in the wild’ tasks and situations may mean that we are in an ideal position to inform human-like computing.

  • human interfaces for human-like computing — As noted in goal (iii) we will need paradigms for humans to interact with human-like computers.

As an illustration of the first of these, the poster used my work on making sense of the apparently ‘bad’ emotion of regret [D05].

An initial cognitive model of regret was formulated involving a rich mix of imagination (in order to pull past events and actions to mind), counter-factual modal reasoning (in order to work out what would have happened), emotion (which is modified to feel better or worse depending on the possible alternative outcomes), and Skinner-like low-level behavioural learning (the eventual purpose of regret).

[Figure: cognitive model of regret]

This initial descriptive and qualitative cognitive model was then realised in a simplified computational model, which had a separate ‘regret’ module that could be plugged into a basic behavioural learning system.  Both the basic system and the system with regret learnt, but the addition of regret meant learning took between 5 and 10 times fewer exposures.  That is, regret made a major improvement to the machine learning.

[Figure: architecture of the regret-enabled learning system]
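As a loose illustration of this architecture, the following Python sketch plugs a counterfactual ‘regret’ term into a basic reward-learning loop: the learning signal is amplified by the gap between what happened and the imagined alternative. The actions, payoffs and parameters are all invented; this is a simplification, not the actual model from [D05].

```python
import random

# Basic behavioural learning with a pluggable 'regret' module (a sketch).
# The regret signal compares the obtained reward with the counterfactual
# alternative, making bad choices sting and good ones feel better.

payoffs = {"A": 0.2, "B": 0.8}   # hidden expected rewards of two actions
value = {"A": 0.5, "B": 0.5}     # learnt value estimates
rate = 0.1                       # learning rate

def step(use_regret):
    # mostly exploit the best-looking action, sometimes explore
    action = max(value, key=value.get) if random.random() > 0.2 else random.choice("AB")
    reward = payoffs[action]
    signal = reward
    if use_regret:
        # counterfactual: what would the other action have given?
        alternative = max(p for a, p in payoffs.items() if a != action)
        signal += reward - alternative   # negative regret, positive 'relief'
    value[action] += rate * (signal - value[action])

for _ in range(100):
    step(use_regret=True)
print(value)   # the estimate for B pulls well clear of A
```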

Turning to the second: direct manipulation has been at the heart of interaction design since the PC revolution in the 1980s. Prior to that, command-line interfaces (or worse, job-control interfaces) suggested a mediated paradigm, where operators ‘asked’ the computer to do things for them. Direct manipulation changed that, turning the computer into a passive virtual world of computational objects on which you operated with the aid of tools.

To some extent we need to shift back to the 1970s mediated paradigm, but renewed, where the computer is no longer like a severe bureaucrat demanding precisely grammatical and procedural requests, but instead a helpful and understanding aide. For this we can draw upon existing areas of HCI such as human-human communications, intelligent user interfaces, conversational agents and human–robot interaction.

References

[C98] Clark, A. 1998. Being There: Putting Brain, Body and the World Together Again. MIT Press. https://mitpress.mit.edu/books/being-there

[D92] A. Dix (1992). Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451. http://www.hcibook.com/alan/papers/neuro92/

[D94] A. Dix and A. Patrick (1994). Query By Browsing. Proceedings of IDS’94: The 2nd International Workshop on User Interfaces to Databases, Ed. P. Sawyer. Lancaster, UK, Springer Verlag. 236-248.

[D05] Dix, A. (2005). The adaptive significance of regret. Unpublished essay. https://alandix.com/academic/essays/regret.pdf

[D05b] A. Dix (2005). the brain and the web – a quick backup in case of accidents. Interfaces, 65, pp. 6-7. Winter 2005. https://alandix.com/academic/papers/brain-and-web-2005/

[D10] A. Dix, A. Katifori, G. Lepouras, C. Vassilakis and N. Shabir (2010). Spreading Activation Over Ontology-Based Resources: From Personal Context To Web Scale Reasoning. International Journal of Semantic Computing, Special Issue on Web Scale Reasoning: scalable, tolerant and dynamic. 4(1) pp.59-102. http://www.hcibook.com/alan/papers/web-scale-reasoning-2010/

[E16] EPSRC (2016). Human-Like Computing Handbook. Engineering and Physical Sciences Research Council. 17–18 February 2016.

[F16] Alison Flood (2016). Robots could learn human values by reading stories, research suggests. The Guardian, Thursday 18 February 2016 http://www.theguardian.com/books/2016/feb/18/robots-could-learn-human-values-by-reading-stories-research-suggests

[H09] Alon Halevy, Peter Norvig, and Fernando Pereira. 2009. The Unreasonable Effectiveness of Data. IEEE Intelligent Systems 24, 2 (March 2009), 8-12. DOI=10.1109/MIS.2009.36

[K10] A. Katifori, C. Vassilakis and A. Dix (2010). Ontologies and the Brain: Using Spreading Activation through Ontologies to Support Personal Interaction. Cognitive Systems Research, 11 (2010) 25–41. https://alandix.com/academic/papers/Ontologies-and-the-Brain-2010/

[N90] Allen Newell. 1990. Unified Theories of Cognition. Harvard University Press, Cambridge, MA, USA. http://www.hup.harvard.edu/catalog.php?isbn=9780674921016

[P97] DR Peiris (1997). Computer interviews: enhancing their effectiveness by simulating interpersonal techniques. PhD Thesis, University of Dundee. http://virtual.inesc.pt/rct/show.php?id=56

[W66] Joseph Weizenbaum. 1966. ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (January 1966), 36-45. DOI=http://dx.doi.org/10.1145/365153.365168

If the light is on, they can hear (and now see) you

Following Samsung’s warning that its television sets can listen into your conversations1, and the even scarier Barbie doll that listens to children in their homes and broadcasts this to the internet2, the latest ‘advances’ make it possible to be seen even when the curtains are closed and you thought you were private.

For many years it has been possible for security services, or for that matter sophisticated industrial espionage, to pick up sounds based on incandescent light bulbs.

The technology itself is not that complicated: vibrations in the room are transmitted to the filament, which minutely changes its electrical characteristics. The only complication is extracting the high-frequency signal from the power line.
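To give a feel for the signal processing involved, here is a small Python sketch (using NumPy and SciPy) that simulates a faint voice-band ripple riding on a 50 Hz mains signal and recovers it with a band-pass filter. All frequencies and amplitudes are invented for illustration.

```python
import numpy as np
from scipy import signal

# Simulated mains current with a tiny voice-band ripple from the filament.
fs = 8000                                      # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
mains = np.sin(2 * np.pi * 50 * t)             # dominant 50 Hz power signal
speech = 0.001 * np.sin(2 * np.pi * 440 * t)   # faint audio-band component
line = mains + speech + 0.0005 * np.random.randn(len(t))

# Band-pass 300-3400 Hz: rejects the huge 50 Hz component, keeps voice band.
b, a = signal.butter(4, [300, 3400], btype="bandpass", fs=fs)
recovered = signal.filtfilt(b, a, line)

corr = np.corrcoef(recovered, speech)[0, 1]
print(f"correlation of recovered signal with original speech: {corr:.2f}")
```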

However, this is a fairly normal challenge for high-end listening devices. Years ago when I was working with submarine designers at Slingsby, we were using the magnetic signature of power running through undersea cables to detect where they were for repair. The magnetic signatures were up to 10,000 times weaker than the ‘noise’ from the Earth’s own magnetic field, but we were able to detect the cables with pin-point accuracy3. Military technology for this is far more advanced.

The main problem is the raw computational power needed to process the mass of data coming from even a single lightbulb, but that has never been a barrier for GCHQ or the NSA, and indeed, with cheap RaspberryPi-based super-computers, now not far from the hobbyist’s budget4.

The fact that each lightbulb reacts slightly differently to sound means that it is, in principle, possible not only to listen in to conversations, but to work out which house and room they come from, by simply adding listening equipment at a neighbourhood sub-station.

The benefits of this to security services are obvious. Whereas planting bugs involves access to a building, and all other techniques involve at least some level of targeting, lightbulb-based monitoring could simply be installed, for example, in a neighbourhood known for extremist views and programmed to listen for key words such as ‘explosive’.

For a while, it seemed that the increasing popularity of LED lightbulbs might end this. This is not because LEDs do not have an electrical response to vibrations, but because of the 12V step down transformers between the light and the mains.

Of course, there are plenty of other ways to listen into someone in their home, from obvious bugs to laser beams bounced off glass (you can even get plans to build one of your own at Instructables), or even, as MIT researchers recently demonstrated at SIGGRAPH, picking up the images of vibrations on video of a glass of water, a crisp packet, and even the leaves of a potted plant5. However, these are all much more active and involve having an explicit suspect.

Similarly blanket internet and telephone monitoring have applications, as was used for a period to track Osama bin Laden’s movements6, but net-savvy terrorists and criminals are able to use encryption or bypass the net entirely by exchanging USB sticks.

However, while the transformer attenuates the acoustic back-signal from LEDs, this only takes more sensitive listening equipment and more computation, a lot easier than a vibrating pot-plant on video!

So you might just think to turn up the radio, or talk in a whisper. Of course, as you’ve guessed by now, defeating this, as with all these surveillance techniques, simply takes yet more computation.

Once the barriers of LEDs are overcome, they hold another surprise. Every LED not only emits light, but acts as a tiny, albeit inefficient, light detector (there’s even an Arduino project to use this principle).   The output of this is a small change in DC current, which is hard to localise, but ambient sound vibrations act as a modulator, allowing, again in principle, both remote detection and localisation of light.

If you have several LEDs, they can be used to make a rudimentary camera7. Each LED lightbulb uses a small array of LEDs to create a bright enough light. So, this effectively becomes a very-low-resolution video camera, a bit like a fly’s compound eye.

While each image is of very low quality, any movement, either of the light itself (hanging pendant lights are especially good), or of objects in the room, can improve the image. This is rather like the principle we used in the FireFly display8, where text mapped onto a very low-resolution LED pixel display is unreadable when stationary, but absolutely clear when moving.

LEDs produce multiple very-low-resolution image views due to small vibrations and movement9.

Sufficient images and processing can recover an image.
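The following Python sketch illustrates the underlying principle with a simulated point light source: no single low-resolution frame can localise it to better than one sensor block, but averaging many slightly shifted views pins it down exactly. This is purely illustrative; it is not the FireFly or LED-camera processing itself.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.zeros((64, 64))
scene[37, 21] = 1.0      # a single bright point somewhere in the room

def view(dx, dy, factor=8):
    """Shifted scene seen through an 8x8-block sensor, mapped back to full res."""
    shifted = np.roll(np.roll(scene, dx, axis=0), dy, axis=1)
    blocks = shifted.reshape(8, factor, 8, factor).mean(axis=(1, 3))
    up = np.kron(blocks, np.ones((factor, factor)))         # upsample
    return np.roll(np.roll(up, -dx, axis=0), -dy, axis=1)   # undo known shift

single = view(0, 0)
average = np.mean([view(*rng.integers(0, 8, size=2)) for _ in range(500)], axis=0)

print("true point:   ", (37, 21))
print("single frame: ", np.unravel_index(single.argmax(), single.shape))
print("shift-and-add:", np.unravel_index(average.argmax(), average.shape))
```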

So far MI5 has not commented on whether it uses, or plans to use this technology itself, nor whether it has benefited from information gathered using it by other agencies. Of course its usual response is to ‘neither confirm nor deny’ such things, so without another Edward Snowden, we will probably never know.

So, next time you sit with a coffee in your living room, be careful what you do, the light is watching you.

  1. Not in front of the telly: Warning over ‘listening’ TV. BBC News, 9 Feb 2015. http://www.bbc.co.uk/news/technology-31296188[back]
  2. Privacy fears over ‘smart’ Barbie that can listen to your kids. Samuel Gibbs, The Guardian, 13 March 2015. http://www.theguardian.com/technology/2015/mar/13/smart-barbie-that-can-listen-to-your-kids-privacy-fears-mattel[back]
  3. “Three DSP tricks”, Alan Dix, 1998. https://alandix.com/academic/papers/DSP99/DSP99-full.html[back]
  4. “Raspberry Pi at Southampton: Steps to make a Raspberry Pi Supercomputer”, http://www.southampton.ac.uk/~sjc/raspberrypi/[back]
  5. A. Davis, M. Rubinstein, N. Wadhwa, G. Mysore, F. Durand and W. Freeman (2014). The Visual Microphone: Passive Recovery of Sound from Video. ACM Transactions on Graphics (Proc. SIGGRAPH), 33(4):79:1–79:10 http://people.csail.mit.edu/mrub/VisualMic/[back]
  6. Tracking Use of Bin Laden’s Satellite Phone, Evan Perez, Wall Street Journal, 28th May, 2008. http://blogs.wsj.com/washwire/2008/05/28/tracking-use-of-bin-ladens-satellite-phone/[back]
  7. Blinkenlight, LED Camera. http://blog.blinkenlight.net/experiments/measurements/led-camera/[back]
  8. Angie Chandler, Joe Finney, Carl Lewis, and Alan Dix. 2009. Toward emergent technology for blended public displays. In Proceedings of the 11th international conference on Ubiquitous computing (UbiComp ’09). ACM, New York, NY, USA, 101-104. DOI=10.1145/1620545.1620562[back]
  9. Note using simulated images; getting some real ones may be my next Tiree Tech Wave project.[back]

Italian conferences: PPD10, AVI2010 and Search Computing

I got back from a trip to Rome and Milan last Tuesday; this included the PPD10 workshop that Aaron, Lucia, Sri and I had organised, and the AVI 2010 conference, both at the University of Rome “La Sapienza”, and a day workshop on Search Computing at Milan Polytechnic.

PPD10

The PPD10 workshop on Coupled Display Visual Interfaces1 followed on from a previous event, PPD08 at AVI 2008 and also a workshop on “Designing And Evaluating Mobile Phone-Based Interaction With Public Displays” at CHI2008.  The linking of public and private displays is something I’ve been interested in for some years and it was exciting to see some of the kinds of scenarios discussed at Lancaster as potential futures some years ago now being implemented over a range of technologies.  Many of the key issues and problems proposed then are still to be resolved and new ones arising, but certainly it seems the technology is ‘coming of age’.  As well as much work filling in the space of interactions, there were also papers that pushed some of the existing dimensions/classifications, in particular, Rasmus Gude’s paper on “Digital Hospitality” stretched the public/private dimension by considering the appropriation of technology in the home by house guests.  The full proceedings are available at the PPD10 website.

AVI 2010

AVI is always a joy, and AVI 2010 was no exception; a biennial, single-track conference with high-quality papers (20% accept rate this year), and always in lovely places in Italy with good food and good company!  I first went to AVI in 1996 when it was in Gubbio to give a keynote “Closing the Loop: modelling action, perception and information“, and have gone every time since — I always say that Stefano Levialdi is a bit like a drug pusher, the first experience for free and ever after you are hooked! The high spot this year was undoubtedly Hitomi Tsujita‘s “Complete fashion coordinator”2, a system for using social networking to help choose clothes to wear — partly just fun with a wonderful video, but also a very thoughtful mix of physical and digital technology.


images from Complete Fashion Coordinator

The keynotes were all great: Daniel Keim gave a really lucid state of the art in Visual Analytics (more later), and Patrick Lynch a fresh view of visual understanding based on many years’ experience, highlighting particularly some of the more immediate ‘gut’ reactions we have to interfaces.  Daniel Wigdor gave an almost blow-by-blow account of work at Microsoft on developing interaction methods for next-generation touch-based user interfaces.  His paper is a great methodological exemplar for researchers, combining very practical considerations, more principled design-space analysis and targeted experimentation.

Looking more at the detail of Daniel’s work at Microsoft, it is interesting that he has a harder job than Apple’s interaction developers.  While Apple can design the hardware and interaction together, MS as system providers need to deal with very diverse hardware, leading to a ‘least common denominator’ approach at the level of quite basic touch interactions.  For walk-up-and-use systems, such as Microsoft Surface tables in bars, this means that users have a consistent experience across devices.  However, I did wonder whether this approach, which is basically at the presentation/lexical level of Seeheim, is best, or whether it would be better to settle on some higher-level primitives, more at the Seeheim dialogue level, thinking particularly of the way the iPhone turns pull-down menus from web pages into spinning selectors.  For devices that people own, it may be that these more device-specific variants of common logical interactions allow a richer user experience.
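
To make that concrete, here is a minimal sketch (in Python, with names entirely of my own invention, nothing from Microsoft’s or Apple’s actual code) of what settling at the dialogue level might look like: the application binds to a logical ‘choose one’ interaction and each class of device supplies its own presentation.

    # Hypothetical sketch: a 'choose one' primitive at the Seeheim dialogue
    # level, with device-specific presentations at the lexical level.
    from abc import ABC, abstractmethod

    class ChooseOne(ABC):
        """Logical interaction: pick exactly one option from a list."""
        def __init__(self, options):
            self.options = options

        @abstractmethod
        def present(self):
            """Device-specific rendering of the same logical dialogue."""

    class PullDownMenu(ChooseOne):      # desktop-style presentation
        def present(self):
            print("menu:", " | ".join(self.options))

    class SpinningSelector(ChooseOne):  # touch-device presentation
        def present(self):
            print("spinner:", self.options[0], "(flick to scroll)")

    # Application code binds only to the logical level ...
    def pick_size(widget):
        chooser = widget(["small", "medium", "large"])
        chooser.present()

    pick_size(PullDownMenu)      # same dialogue, richer per-device feel
    pick_size(SpinningSelector)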

The complete AVI 2010 proceedings (in colour or B&W) can be found at the conference website.

The very last session of AVI was a panel I chaired on “Visual Analytics: people at the heart of data” with Daniel Keim, Margit Pohl, Bob Spence and Enrico Bertini (in the order they sat at the table!).  The panel was prompted largely because the EU VisMaster Coordinated Action is producing a roadmap document looking at future challenges for visual analytics research in Europe and elsewhere.  I had been worried that it could be a bit dead at 5pm on the last day of the conference, but it was a lively discussion … and Bob served well as the enthusiastic but also slightly sceptical outsider to VisMaster!

As I write this, there is still time (just, literally weeks!) for final input into the VisMaster roadmap and if you would like a draft I’ll be happy to send you a PDF and even happier if you give some feedback 🙂

Search Computing

I was invited to this one-day workshop and had the joy of travelling up on the train from Rome with Stu Card and his daughter Gwyneth.

The Search Computing workshop was organised by the SeCo project. This is a large single-site project (around 25 people for 5 years) funded as one of the EU’s ‘IDEAS Advanced Grants’ supporting ‘investigation-driven frontier research’.  Really good to see the EU funding work at the bleeding edge, as so many national and European projects end up being ‘safe’.

The term search computing was entirely new to me, although it instantly brought several concepts to mind.  In fact the principal focus of SeCo is bringing together information from deep web resources, including combining result rankings; in database terms, a form of distributed join over heterogeneous data sources.
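
As a toy illustration (invented data and scoring, not SeCo’s actual machinery), such a join might combine ranked results from two independent ‘deep web’ services on a shared attribute, merging the two rankings into a single score:

    # Toy 'search computing' join: two ranked services joined on city,
    # with source ranks combined into a single score. Data is invented.
    hotels = [(0.9, "Milan", "Hotel A"),     # (rank score, city, name)
              (0.7, "Rome",  "Hotel B"),
              (0.6, "Milan", "Hotel C")]
    concerts = [(0.8, "Rome",  "Opera X"),
                (0.8, "Milan", "Concert Y")]

    def ranked_join(left, right, combine=lambda a, b: a * b):
        """Join on the shared city value; merge the two rankings."""
        joined = [(combine(ls, rs), city, l, r)
                  for ls, city, l in left
                  for rs, rcity, r in right if rcity == city]
        return sorted(joined, reverse=True)

    for score, city, hotel, concert in ranked_join(hotels, concerts):
        print(f"{score:.2f}  {city}: {hotel} + {concert}")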

The work had many personal connections, including work on concept classification using ODP data dating back to aQtive days, as well as onCue itself and Snip!t.  It also has similarities with linked data in the semantic web world, however with crucial differences.  SeCo’s service approach uses meta-descriptions of the services to add semantics, whereas linked data in principle includes a degree of semantics in the RDF data itself.  Also, the ‘join’ on services is on values and so uses a degree of run-time identity matching (Stu Card’s example was how to know that LA = ‘Los Angeles’), whereas linked data relies on URIs, so (again in principle) matching has already been done during data preparation.  My feeling is that linking the two paradigms would be very powerful, and even for certain kinds of raw data, such as tables, external semantics seem sensible.

One of the real opportunities for both is to harness user interaction with data as an extra source of semantics.  For example, for the identity-matching issue: if a user linking two data sources notices that ‘LA’ and ‘Los Angeles’ have not been identified as the same, this can be corrected as part of the interaction to serve the user’s own purposes at the time, and in so doing adds a special case that can be used for the benefit of future users.
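
A minimal sketch of how that might work (the alias table is my own invention, not anything from SeCo): record the confirmed match once and it resolves automatically for everyone afterwards.

    # Sketch: user-confirmed identity matches recorded for future reuse.
    aliases = {}   # known variant -> canonical form

    def canonical(value):
        return aliases.get(value, value)

    def user_confirms_match(a, b):
        """The user says a and b denote the same entity."""
        canon = max((a, b), key=len)   # arbitrary choice: keep the longer
        aliases[a] = aliases[b] = canon

    user_confirms_match("LA", "Los Angeles")            # Stu Card's example
    print(canonical("LA") == canonical("Los Angeles"))  # True from now on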

While SeCo is predominantly focused on search federation, the broader issue of using search as part of algorithmics is also fascinating.  Traditional algorithmics assumes that knowledge is basically in code or rules and is applied to data.  In contrast, we are seeing the rise of web algorithmics, where knowledge is garnered from vast volumes of data.  For example, Gianluca Demartini mentioned at the workshop that his group had used the Google Suggest API to extend keywords, and I’ve seen the same trick used previously3.  To some extent this is like classic techniques of information retrieval, but whereas IR is principally focused on a closed document set, here the document set is being used to establish knowledge that can be used elsewhere.  In work I’ve been involved with, both the concept classification and the folksonomy mining with Alessio apply this same broad principle.
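
The keyword-expansion trick is easy to sketch; note that the URL below is the unofficial suggest endpoint that was commonly used for this, not a supported API, so it may change or disappear.

    # Sketch of web algorithmics: expand a keyword via an autocomplete
    # service. The endpoint is unofficial and may not remain available.
    import json, urllib.parse, urllib.request

    def expand_keyword(term):
        url = ("https://suggestqueries.google.com/complete/search"
               "?client=firefox&q=" + urllib.parse.quote(term))
        with urllib.request.urlopen(url) as resp:
            _query, suggestions = json.load(resp)[:2]
        return suggestions

    print(expand_keyword("visual analytics"))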

The slides from the workshop are appearing (not all there yet!) on the workshop web page at the SeCo site.

  1. Yes, I know this doesn’t give ‘PPD’; it stands for “public and private displays”.[back]
  2. Hitomi Tsujita, Koji Tsukada, Keisuke Kambara, Itiro Siio, Complete Fashion Coordinator: A support system for capturing and selecting daily clothes with social network, Proceedings of the Working Conference on Advanced Visual Interfaces (AVI2010), pp.127–132.[back]
  3. The Yahoo! Related Suggestions API offers a similar service.[back]

databases as people think – dabble DB

I was just looking at Enrico Bertini‘s blog Visuale for the first time in ages. In particular, at his December entry on DabbleDB & Magic/Replace. Dabble DB offers web-based databases and in some ways sits in similar ground to Freebase, Swivel, or even the Google Docs spreadsheet, all ways to share data of different forms on/through the web.

The USP for Dabble DB amongst other online data-sharing apps is that it appears to really be a complete database solution online … and its USP amongst conventional databases is the way they seem to have really thought about real use.  This focus on real use by ordinary users includes dynamically altering the structure of the data as you gradually understand it more.  The model they have is that you start with plain table data from a spreadsheet or other document and gradually add structure, as opposed to the “first analyse, then enter” model of traditional DBs.
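
In code terms the progression is something like this (invented data, and of course nothing like Dabble DB’s real implementation): you begin with flat spreadsheet-style rows and only later promote a repeated text column into a linked entity.

    # Sketch: gradually adding structure to flat table data.
    rows = [{"name": "Ann", "department": "Sales"},
            {"name": "Ben", "department": "Sales"}]

    # Later, once the data is better understood, the repeated text field
    # is promoted to a table of its own and the rows link to it instead.
    departments = {d: {"title": d} for d in {r["department"] for r in rows}}
    for r in rows:
        r["department"] = departments[r["department"]]

    print(rows[0]["department"] is rows[1]["department"])  # True: shared entity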

As I read Enrico’s blog I remembered that he had mailed me about the ‘magic/replace‘ feature ages ago.  This lets you tidy up data during import (but apparently not data already imported … I wonder why?), using a ‘by example’ approach, and is a really nice example of all that ‘programming by example‘ and related work that was so hot 15 years ago eventually finding its way into real products.
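
To give a flavour of the ‘by example’ idea (a toy of my own, far simpler than whatever Dabble DB actually does): from a single before/after pair, infer a rule and apply it to the rest of the column.

    # Toy programming-by-example: infer a token reordering from one
    # example pair, then reuse the inferred rule on other values.
    def infer_rule(example_in, example_out):
        toks = [t.strip() for t in example_in.split(",")]
        order = [toks.index(t) for t in example_out.split()]
        def rule(value):
            parts = [t.strip() for t in value.split(",")]
            return " ".join(parts[i] for i in order)
        return rule

    tidy = infer_rule("Smith, John", "John Smith")
    print(tidy("Dix, Alan"))   # -> "Alan Dix"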

The downsides to Dabble DB are that editing is via forms only (it is often so much easier to enter data in a spreadsheet view), the API is quite limited, and while they have a ‘Dabble DB Commons‘ for public data (rather like Swivel), there is no directory or other way to see what people have put up 🙁

I was particularly hoping the API was better, as it would have been nice to link it into my web version of Query-by-Browsing, or even to integrate it with the Query-through-Drilldown approach for constructing complex table joins that Damon Oram implemented more recently.

In general, while the DB and (many of the) UI features are strong, it is not really looking outwards to creating shared linked data (in the broadest sense of the term, not just pure SemWeb-world linked data) … so there is still room for the absolute killer shared-data app!

strength in weakness – Judo design

Steve Gill is visiting so that we can work together on a new book on physicality.  Last night, over dinner, Steve was telling us about a litter-bin lock that he once designed.  The full story linked creative design, the structural qualities of materials, and the social setting in which it was placed … a story well worth hearing, but I’ll leave that to Steve.

One of the critical things about the design was that while earlier designs used steel, his design needed to be made out of plastic.  Steel is an obvious material for a lock: strong, unyielding. However, the plastic lock worked because the lock and the bin around it were designed to yield, to give a little, and in so doing to absorb the shock if kicked by a drunken passer-by.

This is a sort of Judo principle of design: rather than trying to be the strongest or toughest, yield in the right way and use the strength of your opponent.

This reminded me of trees that bend in the wind and withstand the toughest storms (the wind howling down the chimney maybe helps the image), whereas those that are stiffer may break.  Also of old wooden pit-props that would moan and screech when they grew weak and gave slightly under the strain of rock, whereas the stronger steel replacements would stand firm and unbending until the day they catastrophically broke.

Years ago I also read about a programme to strengthen bridges as lorries got heavier.  The old arch bridges had an infill of loose rubble, so the engineers simply replaced this with concrete.  In a short time the bridges began to fall down.  When analysed more deeply, the reason became clear.  When an area of the loose infill loses strength, it gives a little, so the strain on it is relieved and the areas around take the strain instead.  However, the concrete is unyielding, and instead the weakest point takes more and more strain until eventually cracks form and the bridge collapses.  Twisted ropes work on the same principle.  Although now an old book, “The New Science of Strong Materials” opened my eyes to the wonderful way many natural materials, such as bone, make use of the relative strengths, and weaknesses, of their constituents, and how this is emulated in many composite materials such as glass fibre or carbon fibre.

In contrast both software and bureaucratic procedures are more like chains – if any link breaks the whole thing fails.

Steve’s lock design shows that it is possible to use the principle of strength in weakness with modern materials, not only in organic elements like wood or traditional bridge design.  For software too, one of the things I often try to teach is to design for failure – to make sure things work when they go wrong.  In particular, for intelligent user interfaces there is the idea of appropriate intelligence – making sure that when intelligent algorithms get things wrong, the user experience does not suffer.  It is easy to want to design the cleverest algorithms, the most complex systems – to design for everything, to make it all perfect. While it is of course right to seek the best, often it is the knowledge that what we produce will not be ‘perfect’ that in fact enables us to make it better.
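
A minimal sketch of appropriate intelligence in this sense (threshold and names purely illustrative): the interface offers a suggestion only when the predictor is confident, and even then only offers, never imposes, so a wrong guess costs the user nothing.

    # Sketch: confidence-gated suggestions so wrong guesses stay invisible.
    def suggest(completions, threshold=0.8):
        """completions: (confidence, text) pairs from some predictor."""
        confident = [c for c in completions if c[0] >= threshold]
        if not confident:
            return None               # fail silently: no suggestion beats a bad one
        return max(confident)[1]      # offer the best; never auto-apply

    print(suggest([(0.9, "physical-digital"), (0.4, "physicality")]))
    print(suggest([(0.5, "low-confidence guess")]))   # None: stay quiet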

robot friends

Last night we watched Jurassic Park 3 and today found that you can have a little dinosaur all of your own!

Pleo Dinosaur

Sony have robot dogs, Philips robot cats (albeit stuck sitting in one place), but Ugobe have little robot dinosaurs called Pleo. In the videos they do move like little baby creatures, and the lady in the shopping mall coos over one as she strokes it.

Central to Pleo seem to be:

  1. embodiment – they feel through 40 sensors and move in their environment
  2. emotion – they have a relatively complex model of basic drives, rather like Cynthia Breazeal describes in her book “Designing Sociable Robots“.

This seems to pay off in people’s reactions, both in Pleo’s own videos (well, they would!), but also in owners’ plogs (sic) … one owner says:

“she acts just like a cat concerning keyboards.. just crawl on the darn thing while I’m typing! I know Penny,. you’re so cute it doesn’t matter what you do. But you should have a little sensor strip in your butt to spank when you’re bad1 or to pat gently to urge you to go explore. Go to sleep my little love” ArcticLotus

Pleos making friends :-/

For researchers there is an open architecture, so it should be possible to play oops experiment with them 🙂 The API doesn’t seem to be published yet, so wait before you get your cheque books out!


  1. This could get us into the territory of agent abuse![back]