Of academic communication: overload, homeostasis and nostalgia

Revisiting an old paper on early email use and reflecting on scholarly communication now.

About 30 years ago, I was at a meeting in London and heard a presentation about a study of early email use at Xerox and the Open University. At Xerox the use of email was already part of the normal culture, but it was still new at the OU. I thought they had done a before-and-after study of one of the departments, but I remembered their conclusions clearly: email acted in addition to other forms of communication (face-to-face, phone, paper), but did not substitute for them.

It was one of those pieces of work that I could recall, but didn’t have a reference for. Facebook to the rescue! I posted about it and in no time had a series of helpful suggestions, including from Gilbert Cockton, who nailed it, finding the meeting, the “IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems” (3 Feb 1989), and the precise paper:

P. Fung, T. O’Shea and S. Bly (1989). Electronic mail viewed as a communications catalyst. IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems, pp. 1/1–1/3. INSPEC: 3381096. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=197821

In some extraordinary investigative journalism, Gilbert also noted that the first author, Pat Fung, went on to fresh territory after retirement, qualifying as a scuba-diving instructor at the age of 75.

The details of the paper were not exactly as I remembered. Rather than a before-and-after study, it was a comparison of the computing departments at Xerox (mature use of email) and the OU (email less ingrained, but already well used). Maybe I had simply embroidered the memory over the years, or maybe they presented newer work at the colloquium than was in the 3-page extended abstract. In those days this was common, as researchers did not feel they needed to milk every last result into a formal ‘publication’. However, the conclusions were just as I remembered:

“An exciting finding is its indication that the use of sophisticated electronic communications media is not seen by users as replacing existing methods of communicating. On the contrary, the use of such media is seen as a way of establishing new interactions and collaboration whilst catalysing the role of more traditional methods of communication.”

As part of this process, following various leads from other Facebook friends, I spent some time looking at early CSCW conference proceedings, at Saul Greenberg’s early CSCW bibliography [1], and at Ducheneaut and Watts’ (15 years on) review of email research [2] in the 2005 HCI special issue on ‘reinventing email’ [3] (both notably missing the Fung et al. paper). I downloaded and skimmed several early papers, including Wendy Mackay’s lovely early (1988) study [4] that exposed the wide variety of ways in which people used email over and above simple ‘communication’. So much to learn from this work when the field was still fresh.

This all led me to reflect on the Fung et al. paper, the process of finding it, and the lessons for email and other ‘communication’ media today.

Communication for new purposes

A key finding was that “the use of such media is seen as a way of establishing new interactions and collaboration”. Of course, the authors and their subjects could not have envisaged current social media, but the finding of this paper was itself exactly an example of this. In 1989, if I had been trying to find a paper, I would have scoured my own filing cabinet and bookshelves, those of my colleagues, and perhaps asked people when I met them. Nowadays I pop the question into Facebook and within minutes the advice starts to appear, and not long after I have a scanned copy of the paper I was after.

Communication as a good thing

In the paper abstract, the authors say that an “exciting finding” of the paper is that “the use of sophisticated electronic communications media is not seen by users as replacing existing methods of communicating.” Within the paper, this is phrased even more strongly:

“The majority of subjects (nineteen) also saw no likelihood of a decrease in personal interactions due to an increase in sophisticated technological communications support and many felt that such a shift in communication patterns would be undesirable.”

Effectively, email was seen as potentially damaging if it replaced other, more human means of communication, and the good outcome of this report was that this did not appear to be happening (or, strictly, subjects believed it was not happening).

However, by the mid-1990s, papers discussing ’email overload’ started to appear [5].

I recall a morning radio discussion of email overload about ten years ago. The presenter asked someone else in the studio if they thought this was a problem. Quite un-ironically, they answered, “no, I only spend a couple of hours a day”. My own pattern of email use changed when I switched from the highly structured Eudora (with over 2000 email folders) to Gmail (where mail is like a Facebook feed: if it isn’t on the first page it doesn’t exist). I was recently talking to another academic who explained that two years ago he had deliberately adopted “email as stream” as a policy to control unmanageable volumes.

If only they had known …

Communication as substitute

While Fung et al.’s respondents reported that they did not foresee a reduction in other forms of non-electronic communication, in fact even in the paper the signs of this shift to digital are evident.

Here are the graphs of communication frequency for the Open University (30 people, more recent use of email) and Xerox (36 people, more established use) respectively.

[Graphs of communication frequency at the OU and at Xerox, from Fung et al., 1989]

It is hard to draw exact comparisons, as it appears there may have been a higher overall volume of communication at Xerox (because of email?).  Certainly, at that point, face-to-face communication remained strong at Xerox, but it appears that not only the proportion, but also the total volume, of non-digital, non-face-to-face communication was lower than at the OU.  That is, substitution had already happened.

Again, this is obvious nowadays. Although the volume of electronic communications would have been untenable on paper (I’ve sometimes imagined printing out a day’s email and trying to cram it into a pigeon-hole), the volume of paper communications has diminished markedly. A report prepared for Royal Mail in 2013 recorded a 3–6% p.a. reduction in letters over recent years and projected a further 4% p.a. decline for the foreseeable future [6]; compounded, a 4% annual decline loses roughly a third of letter volume within a decade.

Academic communication and national meetings

However, this also made me think about the IEE Colloquium itself. Back in the late 1980s and 1990s it was common to attend small national or local meetings to meet with others and present work, often at an early stage, for discussion. In other fields this still happens, but in HCI it has all but disappeared. Maybe this is a little nostalgia on my part, but it does seem a real loss, as such meetings were a great way for new PhD students to present their work and meet the leaders in their field. Of course, this can happen if you get your CHI paper accepted, but the barriers are higher, particularly for those in smaller and less well-resourced departments.

Some of this is because international travel is cheaper and faster, and so national meetings have reduced in importance: everyone goes to the big global (largely US) conferences. Many years ago, research on day-to-day time use suggested that we have a travel ‘time budget’ that is relatively constant across countries and across different kinds of areas within the same country [7]. The same is clearly true of academic travel time; we have a certain budget, and if we travel more internationally then we do correspondingly less nationally.

[Travel time-budget data, from Zahavi, 1979]

However, I wonder if digital communication also had a part to play. I knew about the Fung et al. paper, even though it was not in the large reviews of CSCW and email, because I had been there. Indeed, the reason the Fung et al. paper was not cited in relevant reviews would have been because it was in a small venue, available only as paper copy, and only if you knew it existed. It was presumably also below the digital radar until it was, I assume, scanned by IEE archivists and deposited in the IEEE digital library.

However, despite the advantages of this easy access to one another and scholarly communication, I wonder if we have also lost something.

In the 1980s, physical presence and co-presence at an event were crucial for academic communication. Proceedings were paper and precious; I would at least skim-read all of the proceedings of any event I had been to, even those of large conferences, both because such material was otherwise rare and because these proceedings were to hand. Reference lists at the end of my papers were shorter than now, but possibly more diverse and more in-depth, compared with the more directed ‘search for the relevant terms’ literature reviews of the digital age.

And looking back at some of those early papers, in days when publish-or-perish was not so extreme, and when cardiac failure was not an occupational hazard for academics (except maybe due to the Cambridge sherry allowance), I am struck by the way this crucial piece of early research was not dressed up with an extra 6000 words of window dressing to make a ‘high impact’ publication, but simply shared. Were things more fun?


 

[1] Saul Greenberg (1991) “An annotated bibliography of computer supported cooperative work.” ACM SIGCHI Bulletin, 23(3), pp. 29-62. July. Reprinted in Greenberg, S. ed. (1991) “Computer Supported Cooperative Work and Groupware”, pp. 359-413, Academic Press. DOI: http://dx.doi.org/10.1145/126505.126508
https://pdfs.semanticscholar.org/52b4/d0bb76fcd628c00c71e0dfbf511505ae8a30.pdf

[2] Nicolas Ducheneaut and Leon A. Watts (2005). In search of coherence: a review of e-mail research. Hum.-Comput. Interact. 20, 1 (June 2005), 11-48. DOI= 10.1080/07370024.2005.9667360
http://www2.parc.com/csl/members/nicolas/documents/HCIJ-Coherence.pdf

[3] Steve Whittaker, Victoria Bellotti, and Paul Moody (2005). Introduction to this special issue on revisiting and reinventing e-mail. Hum.-Comput. Interact. 20, 1 (June 2005), 1-9.
http://www.tandfonline.com/doi/abs/10.1080/07370024.2005.9667359

[4] Wendy E. Mackay. 1988. More than just a communication system: diversity in the use of electronic mail. In Proceedings of the 1988 ACM conference on Computer-supported cooperative work (CSCW ’88). ACM, New York, NY, USA, 344-353. DOI=http://dx.doi.org/10.1145/62266.62293
https://www.lri.fr/~mackay/pdffiles/TOIS88.Diversity.pdf

[5] Steve Whittaker and Candace Sidner (1996). Email overload: exploring personal information management of email. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’96), Michael J. Tauber (Ed.). ACM, New York, NY, USA, 276-283. DOI=http://dx.doi.org/10.1145/238386.238530
https://www.ischool.utexas.edu/~i385q/readings/Whittaker_Sidner-1996-Email.pdf

[6] PwC (2013). The outlook for UK mail volumes to 2023. Prepared for Royal Mail Group, 15 July 2013.
http://www.royalmailgroup.com/sites/default/files/The%20outlook%20for%20UK%20mail%20volumes%20to%202023.pdf

[7] Yacov Zahavi (1979). The ‘UMOT’ Project. Prepared for the U.S. Department of Transportation and the Ministry of Transport, Fed. Rep. of Germany.
http://www.surveyarchive.org/Zahavi/UMOT_79.pdf

Human-Like Computing

Last week I attended an EPSRC workshop on “Human-Like Computing”.

The delegate pack offered a tentative definition:

“offering the prospect of computation which is akin to that of humans, where learning and making sense of information about the world around us can match our human performance.” [E16]

However, the purpose of this workshop was to clarify and expand on this, exploring what it might mean for computers to become more like humans.

It was an interdisciplinary meeting, with some participants coming from more technical disciplines such as cognitive science, artificial intelligence, machine learning and robotics; others from psychology or the study of human and animal behaviour; and some, like myself, from HCI or human factors, bridging the two.

Why?

Perhaps the first question is why one might even want more human-like computing.

There are two obvious reasons:

(i) Because it is a good model to emulate — Humans are able to solve some problems, such as visual pattern finding, which computers find hard. If we can understand human perception and cognition, then we may be able to design more effective algorithms. For example, in my own work, colleagues and I have used models based on spreading activation and layers of human memory when addressing ‘web scale reasoning’ [K10,D10] (a minimal sketch of spreading activation appears below).

(ii) For interacting with people — There is considerable work in HCI on making computers easier to use, but there are limitations. Often we are happy for computers to be simply ‘tools’, but at other times, such as when your computer notifies you of an update in the middle of a talk, you wish it had a little more human understanding. One example of this is recent work at Georgia Tech teaching human values to artificial agents by reading them stories! [F16]

To some extent (i) is simply the long-standing area of nature-inspired or biologically-inspired computing. However, the combination of computational power and psychological understanding means that perhaps we are at the point where new strides can be made. Certainly, the success of ‘deep learning’ and the recent computer mastery of Go suggest this. In addition, by my own calculations, for several years the internet as a whole has had more computational power than a single human brain, and we are very near the point where we could simulate a human brain in real time [D05b].
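To make goal (i) a little more concrete, here is a minimal sketch of the spreading activation mentioned above. It is my own toy illustration, not the actual models in [K10,D10], and the concept graph is invented: activation is seeded at the concepts of interest and pumped along weighted links with decay, so that related concepts ‘light up’ in a loosely memory-like way.

```python
# Toy spreading activation (illustrative only, not the models in
# [K10,D10]).  'graph' maps each concept to weighted neighbours;
# 'seeds' carries the initial activation.

def spread_activation(graph, seeds, decay=0.5, iterations=3):
    activation = dict(seeds)
    for _ in range(iterations):
        incoming = {}
        for node, level in activation.items():
            for neighbour, weight in graph.get(node, []):
                incoming[neighbour] = (incoming.get(neighbour, 0.0)
                                       + level * weight * decay)
        for node, extra in incoming.items():
            activation[node] = activation.get(node, 0.0) + extra
    return activation

# A tiny concept graph, invented for the example.
graph = {
    "email":         [("communication", 0.9), ("overload", 0.6)],
    "overload":      [("stress", 0.8)],
    "communication": [("face-to-face", 0.7)],
}
print(spread_activation(graph, {"email": 1.0}))
# 'communication' and 'overload' light up strongly, 'stress' and
# 'face-to-face' more weakly -- associative recall, roughly.
```

The point is the shape of the mechanism rather than the numbers: relevance emerges from graph structure rather than from an explicit query.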

Both goals, but particularly (ii), suggest a further goal:

(iii) new interaction paradigms — We will need to develop new ways to design for interacting with human-like agents and robots, not least how to avoid the ‘uncanny valley’ and how to avoid the appearance of over-competence that has bedevilled much work in this broad area. (see more later)

Both goals also offer the potential for a fourth secondary goal:

(iv) learning about human cognition — In creating practical computational algorithms based on human qualities, we may come to better understand human behaviour, psychology and maybe even society. For example, in my own work on modelling regret (see later), it was aspects of the computational model that highlighted the important role of ‘positive regret’ (“the grass is greener on the other side”) in helping us avoid ‘local minima’, where we stick to the things we know and do not explore new options.

Human or superhuman?

Of course, humans are not perfect: do we want to emulate their limitations and failings?

For understanding humans (iv), the answer is probably “yes”: by understanding human fallibility we may be in a better position to predict and prevent failures.

Similarly, for interacting with people (ii), the agents should show at least some level of human limitations (even if ‘put on’); for example, a chess program that always wins would not be much fun!

However, for simply improving algorithms, goal (i), we may want to take the ‘best bits’ of human cognition and merge them with the best aspects of artificial computation. Of course, it may be that the frailties are also the strengths; for example, the need to come to decisions and act in relatively short timescales (in terms of brain ‘ticks’) may be one way in which we avoid ‘over-learning’, a common problem in machine learning.

In addition, the human mind has developed to work with neural material as a substrate and within the physical world, both of which have shaped the nature of human cognition.

Very simple animals learn purely by Skinner-like response training, effectively what AI would term sub-symbolic. However, this level of learning requires many exposures to similar stimuli. For rarer occurrences, which do not arise frequently within a lifetime, learning must happen at the very slow pace of the genetic development of instincts. In contrast, conscious reasoning (symbolic processing) allows us to learn from a single or very small number of exposures: ideal for infrequent events or novel environments. (A toy contrast between the two is sketched below.)
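The contrast in exposure counts is easy to see in code. The sketch below is my own illustration (nothing from the workshop): a sub-symbolic learner nudges an association strength a little on every trial, while a symbolic learner simply records a rule after a single episode; the learning rate and confidence threshold are invented for the example.

```python
# Toy contrast (my illustration): sub-symbolic learning needs many
# exposures, symbolic learning can work from one.

class SubSymbolic:
    """Skinner-like: nudge an association strength on every trial."""
    def __init__(self, rate=0.1):
        self.strength = 0.0
        self.rate = rate
    def trial(self, outcome):            # outcome: 1 = reinforced
        self.strength += self.rate * (outcome - self.strength)
    def believes(self):
        return self.strength > 0.9       # confident association

class Symbolic:
    """Conscious reasoning: store the rule after a single episode."""
    def __init__(self):
        self.rules = set()
    def trial(self, situation, outcome):
        if outcome:
            self.rules.add(situation)    # one-shot learning

sub = SubSymbolic()
n = 0
while not sub.believes():
    sub.trial(1)
    n += 1
print(n, "exposures before the sub-symbolic learner is confident")  # 22

sym = Symbolic()
sym.trial("red berries -> ill", outcome=True)
print(len(sym.rules), "rule learnt from a single exposure")
```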

Big Data means that computers effectively have access to vast amounts of ‘experience’, and researchers at Google have remarked on the ‘Unreasonable Effectiveness of Data’ [H09] that allows problems, such as translation, to be tackled in a statistical or sub-symbolic way which previously would have been regarded as essentially symbolic.

Google are now starting to recombine statistical techniques with more knowledge-rich techniques in order to achieve better results again. As humans we continually employ both types of thinking, so there are clear human-like lessons to be learnt, but the eventual system will not have the same ‘balance’ as a human.

If humans had developed with access to vast amounts of data, and maybe to other people’s experience directly (rather than through culture, books, etc.), would we have developed differently? Maybe we would do unconsciously more of the things we now do consciously. Maybe with enough experience we would never need to be conscious at all!

More practically, we need to decide how to make use of this additional data. For example, learning analytics is becoming an important part of educational practice. If we have an automated tutor working with a child, how should we make use of the vast body of data about other tutors’ interactions with other children? Should we have a very human-like tutor that effectively ‘reads’ the learning analytics just as a human tutor would look at a learning ‘dashboard’? Alternatively, we might have a more loosely human-inspired ‘hive-mind’ tutor that ‘instinctively’ makes pedagogic choices based on the overall experience of all tutors, but maybe in an unexplainable way.

What could go wrong …

There have been a number of high-profile statements in the last year about the potential coming ‘singularity’ (when computers are clever enough to design new computers leading to exponential development), and warnings that computers could become sentient, Terminator-style, and take over.

There was general agreement at the workshop that this kind of risk is overblown and that, despite breakthroughs such as the mastery of Go, current systems are still very domain-limited. It will be many years before we have to worry about even general intelligence in robots, let alone sentience.

A far more pressing problem is that of incapable computers, which make silly mistakes, and the way in which people, maybe because of the media attention given to the success stories, assume that computers are more capable than they are!

Indeed, over-confidence in algorithms is a problem not just for the general public, but also among computing academics, as I found in my personal experience on the REF panel.

There are, of course, many ethical and legal issues raised as we design computer systems that are more autonomous. This is already being played out with driverless cars, with issues of insurance and liability. Some legislators are suggesting allowing driverless cars, but only if there is a driver there to take control … but if the car relinquishes control, how do you safely manage the abrupt change?

Furthermore, while the vision of autonomous robots taking over the world is still far-fetched, more surreptitious control is already with us. Whether it is Uber cabs called by algorithm, or simply Google’s ranking of search results prompting particular holiday choices, we are all, to varying extents, doing “what the computer tells us”. I recall that in The Dalek Invasion of Earth, the very un-human-like Daleks could not move easily amongst the rubble of war-torn London. Instead they used ‘hypnotised men’ controlled by some form of neural headset. If the Daleks had landed today and simply taken over, or digitally infected, a few cloud computing services, would we know?

Legibility

Sometimes it is sufficient to have a ‘black box’ that makes decisions and acts. So long as it works, we are happy. However, a key issue for many ethical and legal questions, and also for practical interaction, is the ability to interrogate a system, to seek explanations of why a decision has been made.

Back in 1992 I wrote about these issues [D92], in the early days when neural networks and other forms of machine learning were being proposed for a variety of tasks, from controlling nuclear fusion reactions to credit scoring. One particular scenario was the use of an algorithm to pre-sort large numbers of job applications. How could you know whether the algorithm was being discriminatory? How could a company using such an algorithm defend itself if such an accusation were brought?

One partial solution then, as now, was to accept that the underlying learning mechanisms may involve emergent behaviour from statistical, neural network or other forms of opaque reasoning, but to require that this opaque initial learning process gives rise to an intelligible representation. This is rather akin to a judge who might have a gut feeling that a defendant is guilty or innocent, but needs to explicate that in a reasoned legal judgement.

This approach was exemplified by Query-by-Browsing, a system that creates queries from examples (using a variant of ID3), but then converts these into SQL queries. This was subsequently implemented [D94], and is still running as a web demonstration.
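To give the flavour of the idea, here is a miniature reconstruction of the Query-by-Browsing approach. Note the hedges: QbB itself uses a variant of ID3, whereas this sketch uses scikit-learn’s CART tree as a stand-in, and the table and columns are hypothetical. A tree is learnt from rows the user marks as wanted or unwanted, and the paths to ‘wanted’ leaves are then read back as an SQL WHERE clause, i.e. the intelligible representation.

```python
# Sketch of the Query-by-Browsing idea (my reconstruction; the real
# system uses an ID3 variant, here scikit-learn's tree stands in).
# Learn from rows the user marks, then turn paths to 'wanted' leaves
# into a readable SQL condition.

from sklearn.tree import DecisionTreeClassifier

COLUMNS = ["salary", "age"]            # hypothetical table columns
rows    = [[15000, 23], [42000, 51], [38000, 45], [12000, 19]]
wanted  = [0, 1, 1, 0]                 # the user's example selections

clf = DecisionTreeClassifier().fit(rows, wanted)
t = clf.tree_

def paths_to_sql(node=0, conds=()):
    """Collect the AND-ed conditions along each path to a 'wanted' leaf."""
    if t.children_left[node] == -1:                     # leaf node
        if t.value[node][0][1] > t.value[node][0][0]:   # majority 'wanted'
            return [" AND ".join(conds) or "TRUE"]
        return []
    col, thr = COLUMNS[t.feature[node]], t.threshold[node]
    return (paths_to_sql(t.children_left[node],
                         conds + (f"{col} <= {thr:g}",)) +
            paths_to_sql(t.children_right[node],
                         conds + (f"{col} > {thr:g}",)))

where = " OR ".join(f"({p})" for p in paths_to_sql())
print("SELECT * FROM people WHERE", where)
# e.g. SELECT * FROM people WHERE (salary > 26500)
```

The learnt tree may be opaque in its construction, but the emitted query is something a user, or a tribunal, can read and challenge.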

For many years I have argued that it is likely that our ‘logical’ reasoning arises precisely from this need to explain our own tacit judgement to others. While we simply act individually, or learn by observing the actions of others, this can remain largely tacit; but as soon as we want others to act in planned, collaborative ways, for example to kill a large animal, we need to convince them. Once we have the mental mechanisms to create these explanations, they become internalised, so that we end up with internal means to question our own thoughts and judgement, and can even use them constructively to tackle problems more abstract and complex than any found in nature. That is, dialogue leads to logic!

Scenarios

We split into groups and discussed scenarios as a means to understand the potential challenges for human-like computing. Over multiple sessions, the group I was in discussed one main scenario and then a variant.

Paramedic for remote medicine

The main scenario consisted of a patient far from a central medical centre, with an intelligent local agent communicating intermittently and remotely with a human doctor. Surprisingly, the remote aspect of the scenario was not initially proposed by me thinking of Tiree, but by another member of the group thinking about some of the remote parts of the Scottish mainland.

The local agent would need to be able to communicate with the patient, express a level of empathy, physically examine the patient (needing touch sensing and vision), and discuss symptoms. On some occasions, like a triage nurse, the agent might be sufficiently certain to make a diagnosis and recommend treatment. At other times it might need to pass the case on to the remote doctor, describing what had been done in terms of examination, symptoms observed and information gathered from the patient, in the same way that a paramedic does when handing over a patient to the hospital. However, even after the handover of responsibility, the local agent may still form part of the remote diagnosis, and may be able to take over again once the doctor has determined an overall course of action.

The scenario embodied many aspects of human-like computing:

  • The agent would require a level of emotional understanding to interact with the patient.
  • It would require fine and situation-contingent robotic features to allow physical examination.
  • Diagnosis and decisions would need to be guided by rich human-inspired algorithms based on large corpora of medical data, case histories and knowledge of the particular patient.
  • The agent would need to be able to explain its actions both to the patient and to the doctor. That is, it would not only need to transform its own internal representations into forms intelligible to a human, but do so in multiple ways depending on the inferred knowledge and nature of the person.
  • Ethical and legal responsibility are key issues in medical practice.
  • The agent would need to be able to manage handovers of control.
  • The agent would need to understand its own competencies in order to know when to call in the remote doctor.

The scenario could be set in physical or mental health. The latter is particularly important given recent statistics suggesting that only 10% of people in the UK suffering from mental health problems receive suitable help.

Physiotherapist

As a still more specific scenario, one of the group related how he had been to an experienced physiotherapist after a failed diagnosis by a previous physician. Rather than jumping straight into a physical examination, or even apparently watching the patient’s movement, the physiotherapist proceeded to chat for 15 minutes about aspects of the patient’s life, work and exercise. At the end of this process, the physiotherapist said, “I think I know the problem”, and proceeded to administer a directed test, which correctly diagnosed the problem and led to successful treatment.

Clearly the conversation had given the physiotherapist a lot of information about potential causes of injury, aided by many years observing similar cases.

To do this using an artificial agent would suggest some level of:

  • theory/model of day-to-day life

Thinking about the more conversational aspects of this, I was reminded of the PhD work of Ramanee Peiris [P97]. This concerned consultations on sensitive subjects such as sexual health. It was known that when people filled in (initially paper) forms prior to a consultation, they were more forthcoming and truthful than if they had to provide the information face-to-face. This was so even if the patient knew that the person they were about to see would read the forms before the consultation.

Ramanee’s work extended this, first to electronic forms and then to chat-bot-style discussions which were semi-scripted, but used simple textual matching to determine which topics had been covered, including those spontaneously introduced by the patient. Interestingly, the more human-like the system became, the more truthful and forthcoming the patients were, even though they were less so with a real human.
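The ‘simple textual matching’ need be no cleverer than keyword spotting. The sketch below is my own illustration (the system in [P97] was more sophisticated, and the topics and trigger words here are invented): each scripted topic has trigger words, any topic the patient raises spontaneously is marked as covered, and the script then skips it.

```python
# Crude sketch of semi-scripted topic tracking (my illustration;
# the actual system in [P97] was more sophisticated).

TOPICS = {   # hypothetical topic -> trigger words
    "smoking":  {"smoke", "smoking", "cigarettes"},
    "alcohol":  {"drink", "drinking", "alcohol"},
    "exercise": {"exercise", "gym", "running", "walk"},
}

covered = set()

def listen(utterance):
    """Mark any topic the patient raises spontaneously as covered."""
    words = set(utterance.lower().split())
    for topic, triggers in TOPICS.items():
        if words & triggers:
            covered.add(topic)

def next_question():
    for topic in TOPICS:                 # scripted order
        if topic not in covered:
            covered.add(topic)
            return f"Can we talk about {topic}?"
    return "Thank you, that's everything."

listen("I gave up smoking but I drink most weekends")
print(next_question())   # smoking and alcohol already covered ->
                         # asks about exercise
```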

As well as revealing lessons for human interactions with human-like computers, this also showed that human-like computing may be possible with quite crude technologies. Indeed, even Eliza was treated (to Weizenbaum’s alarm) as if it really were a counsellor, even though people knew it was ‘just a computer’ [W66].

Cognition or Embodiment?

I think it fair to say that the overall balance, certainly in the group I was in, was towards the cognitivist: that is, a more Cartesian approach, starting with understanding and models of internal cognition and then seeing how these play out in external action. Indeed, the term ‘representation’ was used repeatedly as an assumed central aspect of any human-like computing, and there was even talk of resurrecting Newell’s project for a ‘unified theory of cognition’ [N90].

There did not appear to be any hard-core embodiment theorists at the workshop, although several people had sympathies in that direction. This was perhaps just as well, as we could easily have degenerated into well-rehearsed arguments for and against embodiment- versus cognition-centred explanations … not least about the critical word ‘representation’.

However, I did wonder whether a path that deliberately took embodiment as central would be valuable. How many human-like behaviours could be modelled in this way, taking external perception–action as central and only taking on internal representations when they were absolutely necessary (Andy Clark’s 007 principle) [C98]?

Such an approach would meet limits, not least the physiotherapist’s 15-minute chat, but I would guess it would be more successful over a wider range of behaviours and scenarios than we might at first think.

Human–Computer Interaction and Human-Like Computing

Both Russell and I were there partly representing our own research interests, but also, more generally, as part of the HCI community, looking at the way human-like computing would intersect existing HCI agendas, or maybe create new challenges and opportunities (see the poster below). It was certainly clear during the workshop that there is a substantial role for human factors, from fine motor interactions, to conversational interfaces, to socio-technical systems design.

Russell and I presented a poster, which largely focused on these interactions.

[Poster: HCI and Human-Like Computing]

There are two sides to this:

  • understanding and modelling for human-like computing — HCI studies and models complex, real-world human activities and situations. Psychological experiments and models tend to be very deep and detailed, but narrowly focused and based on controlled, artificial tasks. In contrast, HCI’s broader, albeit shallower, approach and its focus on realistic or even ‘in the wild’ tasks and situations may mean that we are in an ideal position to inform human-like computing.

  • human interfaces for human-like computing — As noted in goal (iii), we will need paradigms for humans to interact with human-like computers.

As an illustration of the first of these, the poster used my work on making sense of the apparently ‘bad’ emotion of regret [D05].

An initial cognitive model of regret was formulated, involving a rich mix of imagination (in order to bring past events and actions to mind), counterfactual modal reasoning (in order to work out what would have happened), emotion (which is modified to feel better or worse depending on the possible alternative outcomes), and Skinner-like low-level behavioural learning (the eventual purpose of regret).

[Figure: cognitive model of regret]

This initial descriptive and qualitative cognitive model was then realised in a simplified computational model, which had a separate ‘regret’ module that could be plugged into a basic behavioural learning system. Both the basic system and the system with regret learnt, but the addition of regret did so with between 5 and 10 times fewer exposures. That is, regret made a major improvement to the machine learning.

[Figure: architecture, with the regret module plugged into the basic behavioural learner]
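The flavour of this can be sketched in code. The following is my own drastic simplification, not the model in [D05]: a two-action learner where the ‘regret’ variant also imagines the outcome of the action not taken (including the positive, ‘grass is greener’ case) and learns from the counterfactual too; the hidden payoffs, learning parameters and confidence threshold are all invented for illustration.

```python
# Flavour-of-regret sketch (my simplification; the model in [D05]
# differs).  A plain learner updates only the action it took; the
# regret learner also 'imagines' the road not taken and learns
# from that counterfactual as well.

import random

PAYOFF = {"stay": 0.3, "explore": 0.7}      # hidden reward probabilities

def pull(action):
    return 1.0 if random.random() < PAYOFF[action] else 0.0

def trials_until_sure(with_regret, alpha=0.2, eps=0.1, runs=100):
    """Average trials before the learner firmly prefers the better arm."""
    total = 0
    for seed in range(runs):
        random.seed(seed)
        q = {"stay": 0.5, "explore": 0.5}
        for t in range(1, 5000):
            act = (max(q, key=q.get) if random.random() > eps
                   else random.choice(list(q)))
            q[act] += alpha * (pull(act) - q[act])
            if with_regret:
                # imagine the alternative action's outcome too
                other = "stay" if act == "explore" else "explore"
                q[other] += alpha * (pull(other) - q[other])
            if q["explore"] - q["stay"] > 0.3:   # confident preference
                break
        total += t
    return total / runs

print("plain :", trials_until_sure(False))
print("regret:", trials_until_sure(True))   # markedly fewer trials
```

Even in this crude form, the regret variant typically settles on the better action in markedly fewer exposures, echoing (though certainly not reproducing) the improvement reported above.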

Turning to the second: direct manipulation has been at the heart of interaction design since the PC revolution in the 1980s. Prior to that, command-line interfaces (or worse, job-control interfaces) embodied a mediated paradigm, where operators ‘asked’ the computer to do things for them. Direct manipulation changed that, turning the computer into a passive virtual world of computational objects on which you operated with the aid of tools.

To some extent we need to shift back to the mediated paradigm of the 1970s, but renewed: the computer no longer a severe bureaucrat demanding precise grammatical and procedural requests, but instead a helpful and understanding aide. For this we can draw upon existing areas of HCI such as human–human communication, intelligent user interfaces, conversational agents and human–robot interaction.

References

[C98] Clark, A. 1998. Being There: Putting Brain, Body and the World Together Again. MIT Press. https://mitpress.mit.edu/books/being-there

[D92] A. Dix (1992). Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451. http://www.hcibook.com/alan/papers/neuro92/

[D94] A. Dix and A. Patrick (1994). Query By Browsing. Proceedings of IDS’94: The 2nd International Workshop on User Interfaces to Databases, Ed. P. Sawyer. Lancaster, UK, Springer Verlag. 236-248.

[D05] A. Dix (2005). The adaptive significance of regret. Unpublished essay. http://alandix.com/academic/essays/regret.pdf

[D05b] A. Dix (2005). the brain and the web – a quick backup in case of accidents. Interfaces, 65, pp. 6-7. Winter 2005. http://alandix.com/academic/papers/brain-and-web-2005/

[D10] A. Dix, A. Katifori, G. Lepouras, C. Vassilakis and N. Shabir (2010). Spreading Activation Over Ontology-Based Resources: From Personal Context To Web Scale Reasoning. International Journal of Semantic Computing, Special Issue on Web Scale Reasoning: scalable, tolerant and dynamic. 4(1) pp.59-102. http://www.hcibook.com/alan/papers/web-scale-reasoning-2010/

[E16] EPSRC (2016). Human-Like Computing Handbook. Engineering and Physical Sciences Research Council. 17–18 February 2016.

[F16] Alison Flood (2016). Robots could learn human values by reading stories, research suggests. The Guardian, Thursday 18 February 2016 http://www.theguardian.com/books/2016/feb/18/robots-could-learn-human-values-by-reading-stories-research-suggests

[H09] Alon Halevy, Peter Norvig, and Fernando Pereira. 2009. The Unreasonable Effectiveness of Data. IEEE Intelligent Systems 24, 2 (March 2009), 8-12. DOI=10.1109/MIS.2009.36

[K10] A. Katifori, C. Vassilakis and A. Dix (2010). Ontologies and the Brain: Using Spreading Activation through Ontologies to Support Personal Interaction. Cognitive Systems Research, 11 (2010) 25–41. http://alandix.com/academic/papers/Ontologies-and-the-Brain-2010/

[N90] Allen Newell. 1990. Unified Theories of Cognition. Harvard University Press, Cambridge, MA, USA. http://www.hup.harvard.edu/catalog.php?isbn=9780674921016

[P97] DR Peiris (1997). Computer interviews: enhancing their effectiveness by simulating interpersonal techniques. PhD Thesis, University of Dundee. http://virtual.inesc.pt/rct/show.php?id=56

[W66] Joseph Weizenbaum. 1966. ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (January 1966), 36-45. DOI=http://dx.doi.org/10.1145/365153.365168

Alan’s Guide to Winter Foot Care

My feet are quite wide and so I prefer to wear sandals.  I wore sandals for over 700 miles of my round Wales walk back in 2013, and wear them throughout the winter.

When the temperature drops below zero, or snow gathers on the ground, I am often asked, “don’t your feet get cold?”.

Having been asked so many times, I have decided to put down in writing my observations about healthy winter feet in the hope it will help others.

Basically, the thing to remember is that it is all about colours, and follows a roughly linear series of stages.  However, do note that I am a sallow-skinned Caucasian, so all references to skin colour should be read in that context.

Look at your toes.

What colour are they?

Stage 1.  White

Press the side of your toe with your finger.  Does it change colour?

1.1   Yes, it goes a bit pink and then fades rapidly back to white.

That is normal and healthy, you clearly aren’t taking this whole extreme winter walking thing seriously.

1.2  Yes, it goes deep red and only very slowly back to white.

You have an infection, maybe due to stage 2.2a on a previous walk.  Visit the doctor to avoid stage 3.

1.3 No, it stays white.

Bad news, you are a zombie.

Stage 2. Red

Are your toes painful?

2.1 Yes.

Well at least they are still alive.

2.2. No.

Well at least they don’t hurt.  However, numbness does bring certain dangers.

2.2a – You might prick your toe on a thorn or rusty wire and not notice, leading to infection.

2.2b – You might step on broken glass and bleed to death.

2.2c – You might step in a fire and burn yourself.

Stage 3.  Yellow

Blood poisoning: you missed warning 2.2a.

Stage 4.  Blue.

Your circulation has stopped entirely.  This will lead ultimately to limb death, but at least you won’t bleed to death (warning 2.2b).

Stage 5. Black.

Is that charcoal black?

5.1.  Yes

You forgot warning 2.2c, didn’t you?

5.2  No, more a dull grey/black.

Frostbite, get to the hospital quick and they may save some of your toes.

Stage 6. Green

Gangrene: no time for the hospital, find a saw or large breadknife.

Stage 7.  What toes?

You missed stages 5 and 6.


Download and print the Quick Reference Card so that you can conveniently check your foot health at any time.

Quick Reference Card


Last word … on a serious note

My feet are still (despite misuse!) healthy.  However, for many this is a serious issue, not least for those with diabetes.  When I was a child my dad, who was diabetic, dropped a table on his foot and had to be constantly monitored to make sure it didn’t develop into gangrene.  Diabetes UK have their own foot care page, and a list of diabetes charities you can support.

 

Walking Wales

As some of you already know, next year I will be walking all around Wales: from May to July, covering just over 1000 miles in total.

Earlier this year the Welsh Government announced the opening of the Wales Coast Path, a new long-distance footpath around the whole coast of Wales. There were several existing long-distance paths covering parts of the coastline, as well as numerous stretches of public footpaths at or near the coast. However, these have now been linked, mapped and waymarked, creating for the first time a continuous single route. In addition, the existing Offa’s Dyke long-distance path runs very close to the Welsh–English border, so that it is possible to make a complete circuit of Wales on the two paths combined.

As soon as I heard the announcement, I knew it was something I had to do, and gradually, as I discussed it with more and more people, the idea has become solid.

This will not be the first complete periplus along these paths; this summer there have been at least two sponsored walkers taking on the route. However, I will be doing the walk with a technology focus, which will, I believe, be unique.

The walk has four main aspects:

personal — I am Welsh, was born and brought up in Cardiff, but have not lived in Wales for over 30 years. The walk will be a form of homecoming, reconnecting with the land and its people that I have been away from for so long. The act of encircling can symbolically ‘encompass’ a thing, as if knowing the periphery one knows the whole. Of course life is not like this; the edge is just that, not the core, not the heart. As a long-term ex-pat, a foreigner in my own land, maybe all I can hope to do is scratch the surface, nibble at the edges. However, I also always feel most comfortable as an outsider, as one at the margins, so in some ways I am going to the places where I feel most at home. I will blog, audio-blog, tweet and generally share this experience to the extent the tenuous mobile signal allows, while also looking forward to periods of solitude between sea and mountain.

practical — As I walk I will be looking at the IT experience of the walker, and also discussing with local communities the IT needs and problems of those at the edges, at the margins. Not least will be issues due to the paucity of network access: both patchy mobile signal whilst walking and low-capacity ‘broadband’ at the limits of wind-beaten copper telephone wires — none of the mega-capacity fibre optic of the cities. This will not simply be fact-finding, but actively building prototypes and solutions, both myself (in evenings and ‘days off’) and with others who are part of the project remotely or joining me for legs of the journey1. Geolocation and mobile-based applications will be a core part of this, particularly for the walker’s experience, but local community needs are likely to be far more diverse.

philosophical — Mixed with personal reflections will be an exploration of the meanings of place, of path, of walking, of nomadicity and of locality. Aristotle’s school of philosophy was called the Peripatetic School because discussion took place while walking; over two thousand years later Wordsworth’s poetry was nearly all composed while walking; and from time immemorial routes of pilgrimage have been a focus of both spiritual service and personal enlightenment. This will build on some of my own previous writings, in particular past keynotes2 on human understanding of space, and also wider literature such as Rebecca Solnit’s wonderful “Wanderlust”.  This reflection will inform the personal blogging, and after I finish I will edit it into a book or other account of the journey.

research3 — the practical outcomes will intersect with various personal research interests including social empowerment, interaction design and algorithmics4.  For the walker’s experience, I will effectively be doing a form of action research!  This will certainly include how to incorporate local maps (such as tourist town plans) effectively into larger-scale experiences, how ‘crowdsourced’ route knowledge can augment more formal digital and paper resources, data synchronisation to deal with disconnection, and data integration between diverse sources.  In addition I am offering myself as a living lab, so that others can use my trip as a place to try out their own sensors and instrumentation5, information systems, content authoring, ethnographic practices, community workshops, etc.  This may involve simply asking me to use things, coming for a single meeting or day, or joining me for parts of the walk.

If any of this interests you, do get in touch.  As well as research collaborations (living lab or supporting direct IT goals), any help in managing logistics, PR, or finding sources of funding/sponsorship for basic costs would be most welcome.

I’ll get a dedicated website, Facebook page, twitter account, and charity sponsorship set up soon … watch this space!

  1. Coding whilst walking is something I have thought about (but not done!) for many years, but I was definitely inspired more recently by Nick, the amazing cycling programmer, who came to the Spring Tiree Tech Wave.[back]
  2. “Welsh Mathematician Walks in Cyberspace”, and “Paths and Patches: patterns of geognosy and gnosis”.[back]
  3. I tried to think of a word beginning with ‘p’ for research, but failed![back]
  4. As I tagged this post I found I was using nearly all my most common tags — I hadn’t realised quite how much this project cuts across so many areas of interest.[back]
  5. But with the “no blood rule”: if I get sensor sores, the sensors go in the bin 😉 [back]

Tiree Touchtable – the photos

At last, the photos from the week making and installing the Tiree Touchtable.  You can see all the photos on Flickr, but here is a small selection:

The main components – projector, Kinect (on top of projector) and mini-Mac:

The Kinect disassembled:

Platform to attach to roof beams and support projector and mini-Mac:

Add mirror:

Andrea adjusting the mirror:

It works!

Alan gently centre-punches location for screws on Kinect frame:

note the tool … after this the Kinect didn’t work … can’t think why?  But happily there was a second Kinect 🙂

Then Andrea screws Kinect to timber support:

Testing on the workbench:

Moment of truth — on the way to the Rural centre to install … second Kinect carefully cradled in Alan’s old jumper:

       

Parts laid out, ready to go:

Steve fits mounting brackets, Alan looks on:

Alan thinks, “platform looks secure”.  Fiona thinks, “Alan doesn’t”.

Gets boring standing at the bottom of the ladder

Andrea fitting Kinect drop arm:

Andrea fitting the mini-Mac:

“Yep, that seems to be OK”

Let there be light:

Looking down — behold, Tiree Touchfloor:

The secret of true engineering … if the table is too low for the sensors, lift it higher:

Holiday Reading

Early in the summer Fiona and I took 10 days’ holiday, first touring the west coast of Scotland, south from Ullapool, and then crossing the Skye Road Bridge to spend a few days on Skye.  As well as visiting various wool-related shops on the way, and a spectacular drive over the pass from Applecross, I managed a little writing and some work on regret modelling1.  And, as well as writing and regret modelling, quite a lot of reading.

This was my holiday reading:

The Talking Ape: How Language Evolved, Robbins Burling (see my booknotes and review)

In Praise of the Garrulous, Allan Cameron (see my booknotes)

A Mind So Rare, Merlin Donald (see my booknotes and review)

Wanderlust, Rebecca Solnit (see my booknotes)

  1. At last!  It has been something like 6 years since I first did initial, and very promising, computational regret modelling, and I have at last got back to it, writing driver code so that I now have data from a systematic spread of different parameters.  Happily this verified the early evidence that the cognitive model of regret I first wrote about in 2003 really does seem to aid learning.  However, the value of more comprehensive simulation was proved, as early indications that positive regret (the ‘grass is greener’ feeling) was more powerful than negative regret do not seem to have been borne out.[back]

Tiree Touchtable installed

It is there!

Suspended high in the ceiling of Tiree Rural centre, a slightly Heath Robinson structure (pictures to come) that powers Tiree’s first public touchtable.

It was a long day, starting at 9:30 in the morning and not finishing until after 9pm in the evening.

The first few hours were simply getting the physical supports in place — with special thanks to Steve Nagy, who fixed the major elements, in particular the awkward task of suspending a two-foot-square (60cm x 60cm) platform that had to be positioned to lock into four steel rods, all while standing on a ladder 20 foot in the air.  Then a long stint up and down ladders, huddled over computer screens, adjusting, extending, and puzzling over strange banding effects that we eventually concluded were artefacts of low-level processing in the Kinect’s depth-image algorithms.

A squadron of flies constantly circled in the projector beam, their shadows suggesting that maybe a virtual ‘swat the fly’ game could be developed!  There had been a sale in the cattle ring on Friday, so the flies were presumably a remnant of that … but curiously, in the absence of cows or sheep, it was the Kinect sensor itself that became their focus of attention, with flies occasionally landing on one of the lenses — maybe they were attracted by the infra-red transmitter: a whole new area for entomological research.

The team cleaning the cattle ring watched and chatted, and then returned with a spotlessly cleaned sheet of white wood (it had been covered by ‘you know what’!) to act as a table cover.

And now, well, there is still work to do: a permanent electricity supply up into the rafters (to replace the temporary tangle of strung-together extension cables that hung from the ceiling during testing), improvements to the algorithms to extend the range so the sensors can sit as high as possible (to avoid being hit by the next passing ladder), and of course applications to run in the space.

But we feel the back of the job has been broken: a good week’s work.

D-Day for Tiree Touchable

This is it, D-Day — installing the projected touch table in Tiree Rural Centre (see “microwave — Tiree Touchtable”).  Everything is made up and ready on the ground after a week of sawing, screwing, drilling and gluing.  So now just (!) climbing up ladders and bolting it all onto the beams, 5 metres up, then seeing if it all works!

A few minor problems along the way: we lost one Kinect due to (Alan’s) over-enthusiastic use of a centre punch when drilling holes.  One broken mirror — oops, don’t I remember something about those?  And a tetchy projector that doesn’t want to talk to its remote control (the manufacturer’s helpful online FAQ says, “if the remote doesn’t work, use the control panel”).

If we manage the day without dropping a computer 20 foot, we will probably be happy.

Full report and photos later today …

Alt-HCI open reviews – please join in

Papers are online for the Alt-HCI track of the British HCI conference in September.

These are papers that are trying, in various ways, to push the limits of HCI, and we would like as many people as possible to join in the discussion around them … this discussion will be part of the process for deciding which papers are presented at the conference, and possibly how long we give them!

Here are the papers — please visit the site, comment, discuss, and Tweet/Facebook about them.

paper #154 — How good is this conference? Evaluating conference reviewing and selectivity
        do conference reviews get it right? is it possible to measure this?

paper #165 — Hackinars: tinkering with academic practice
        doing vs talking – would you swop seminars for hack days?

paper #170 — Deriving Global Navigation from Taxonomic Lexical Relations
        website design – can you find perfect words and structure for everyone?

paper #181 — User Experience Study of Multiple Photo Streams Visualization
        lots of photos, devices, people – how to see them all?

paper #186 — You Only Live Twice or The Years We Wasted Caring about Shoulder-Surfing
        are people peeking at your passwords? what’s the real security problem?

paper #191 — Constructing the Cool Wall: A Tool to Explore Teen Meanings of Cool
        do you want to make things teens think cool?  find out how!

paper #201 — A computer for the mature: what might it look like, and can we get there from here?
        over 50s have 80% of wealth, do you design well for them?

paper #222 — Remediation of the wearable space at the intersection of wearable technologies and interactive architecture
        wearable technology meets interactive architecture

paper #223 — Designing Blended Spaces
        where real and digital worlds collide

Status Code 451 – and the burning of books

I was really pleased to see that Alessio Malizia has just started to blog.  An early entry is a link to a Guardian article about Tim Bray’s suggestion for a new HTTP status code, 451, for when a site is blocked for legal reasons.

Bray’s tongue-in-cheek suggestion both honours Ray Bradbury, the author of Fahrenheit 451, and satirises the censorship implicit in IP blocking, such as the UK High Court decision in April to force ISPs to block Pirate Bay.
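For the curious, this is more or less all a 451 response would need to be. The sketch below is my own illustration using only the Python standard library; the code was merely Bray’s suggestion at the time of writing, not a standard, and the blocked path is invented.

```python
# Minimal sketch of serving a 451 response (my illustration; 451 is
# only a suggested status code here, not an established standard).

from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED = {"/pirate-bay"}        # hypothetical legally blocked path

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in BLOCKED:
            # the proposed code, with a human-readable reason phrase
            self.send_response(451, "Unavailable For Legal Reasons")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Blocked by court order.\n")
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Hello\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8451), Handler).serve_forever()
```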

However, I have a feeling that perhaps the satire could be seen, so to speak, as on the other foot.

Fahrenheit 451 is about a future where books are burnt because they have increasingly been regarded as meaningless by a public focused on quick-fix entertainment and mindless media: censorship more the result than the cause of societal malaise.

Just as Huxley’s Brave New World seemed to sneak up on us until science fiction was everyday life, maybe Bradbury’s world is here, with the web itself not the least force in the dissolution of intellectual life.

Bradbury foresaw ‘firemen’ who burnt the forbidden books, following in a long history of biblioclasts from the destruction of the Royal Library of Ashurbanipal at Nineveh to the Nazi book burnings of the 1930s.  However, today it is the availability of information on the internet that is often used as an excuse for the closure of libraries, and publishers foresee the end of paper publication within the next five years.

Paradoxically, it is the rearguard actions of publishers (albeit largely to protect profit, not principle) that are one of the drivers behind IP blocking and the ‘censorship’ of copyright piracy sites.  If I were to assign roles from Fahrenheit 451 to the current-day protagonists, it would be hard to decide which side is more like the book-burning firemen.

Maybe Fahrenheit 451 has happened and we never noticed.