physigrams – modelling the device unplugged

Physigrams get their own micro-site!

See it now at physicality.org/physigrams

Appropriate physical design can make the difference between an intuitively obvious device and one that is inscrutable.  Physigrams are a way of modelling and analysing the interactive physical characteristics of devices from TV remotes to electric kettles, filling the gap between foam prototypes and code.

Sketches or CAD allow you to model the static physical form of the device, and this can be realised in moulded blue foam, 3D printing or cardboard mock-ups.  Prototypes of the internal digital behaviour can be produced using tools such as Adobe Animate, Proto.io or Atomic, or hand-coded using standard web-design tools.  The digital behaviour can also be modelled using industry-standard techniques such as UML.


Physigrams allow you to model the ‘device unplugged’ – the pure physical interaction potential of the device: the ways you can interact with buttons, dials and knobs, how you can open, slide or twist movable elements.  These physigrams can be attached to models of the digital behaviour to understand how well the physical and digital design complement one another.

Physigrams were developed some years ago as part of the DEPtH project, a collaboration between product designers at Cardiff School of Art and Design and computer scientists at Lancaster University. Physigrams have been described in various papers over the years.  However, with TouchIT, our book on physicality and design, (eventually!) reaching completion and due out next year, it felt as though physigrams deserved a home of their own on the web.

The physigram micro-site, part of physicality.org, includes descriptions of physical interaction properties, a complete key to the physigram notation, and many examples of physigrams in action, from light switches to complete control panels and novel devices.

why is the wind always against you? part 2 – side wind

In the first part of this two-part post, we saw that cycling into the wind takes far more additional effort than a tail wind saves.

However, Will Wright‘s original question, “why does it feel as if the wind is always against you?” was not just about head winds, but the feeling that when cycling around Tiree, while the angle of the wind is likely to be in all sorts of directions, it feels as though it is against you more than with you.

Is he right?

So in this post I’ll look at side winds, and in particular start with wind dead to the side, at 90 degrees to the road.

Clearly, a strong side wind will need some compensation, perhaps leaning slightly into the wind to balance, and on Tiree with gusty winds this may well cause the odd wobble.  However, I’ll take the best-case scenario and assume completely constant wind with no gusts.

There is a joke about the engineer, who, when asked a question about giraffes, begins, “let’s first assume a spherical giraffe”.  I’m not going to make Will + bike spherical, but will assume that the air drag is similar in all directions.

Now my guess is that given the way Will is bent low over his handle-bars, he may well actually have a larger side-area to the wind than from in front.  Also I have no idea about the complex ways the moving spokes behave as the wind blows through them, although I am aware that a well-designed turbine absorbs a fair proportion of the wind, so would not be surprised if the wheels added a lot of side-drag too.

If the drag for a side wind is indeed bigger than to the front, then the following calculations will be worse; so effectively working with a perfectly cylindrical Will is going to be a best case!

To make calculations easy I’ll have the cyclist going at 20 miles an hour, with a 20 mph side wind also.

When you have two speeds at right angles, you can essentially ‘add them up’ as if they were sides of a triangle.  The resultant wind feels as if it is at 45 degrees, and approximately 30 mph (to be exact it is 20 x √2, so just over 28mph).

Recalling the squaring rule, the force is proportional to 30 squared, that is 900 units of force acting at 45 degrees.

In the same way as we add up the wind and bike speeds to get the apparent wind at 45 degrees, we can break this 900 unit force at 45 degrees into a side force and a forward drag. Using the sides of the triangle rule, we get a side force and forward drag of around 600 units each.

For the side force I’ll just assume you lean into it (and hope that you don’t fall off if the wind gusts!); so let’s just focus on the forward force against you.

If there were no side wind the force from the air drag would be due to the 20 mph bike speed alone, so would be (squaring rule again) 400 units.  The side wind has increased the force against you by 50%.  Remembering that more than three quarters of the energy you put into cycling is overcoming air drag, that is around 30% additional effort overall.

Turned into head speed, this is equivalent to the additional drag of cycling into a direct head wind of about 4 mph (I made a few approximations, the exact figure is 3.78 mph).
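If you want to check the sums, here is a minimal Python sketch of the calculation (my own illustration, using an arbitrary drag constant, since only the ratios matter).  Note that the rounded figures above (900 units split into 600 each) give the 50% increase; the exact computation gives about 41% extra forward drag, which is where the “around 30%” overall effort and the exact 3.78 mph figure come from.

```python
import math

K = 1.0          # arbitrary drag constant -- it cancels in all the ratios
V_BIKE = 20.0    # cyclist speed, mph
V_WIND = 20.0    # pure side wind, mph

# Apparent wind in the cyclist's frame: own speed ahead, wind from the side.
apparent = math.hypot(V_BIKE, V_WIND)        # 20 * sqrt(2), just over 28 mph

# Quadratic drag acts along the apparent wind: F = K * |u|^2 in direction u,
# so the component opposing forward motion is K * |u| * u_forward.
forward_drag = K * apparent * V_BIKE         # ~566 units
still_air_drag = K * V_BIKE ** 2             # 400 units on a calm day

print(f"extra forward drag: {forward_drag / still_air_drag - 1:.0%}")  # ~41%

# Equivalent head wind h: solve K * (V_BIKE + h)^2 == forward_drag.
head_wind = math.sqrt(forward_drag / K) - V_BIKE
print(f"equivalent head wind: {head_wind:.2f} mph")                    # 3.78
```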

This feels utterly counterintuitive, that a pure side wind causes additional forward drag!  It perhaps feels even more counterintuitive if I tell you that in fact the wind needs to be about 10 degrees behind you, before it actually helps.

There are two ways to understand this.

The first is plain physics/maths.

For very small objects (around a 100th of a millimetre) the air drag is directly proportional to the speed (linear).  At this scale, when you redivide the force into its components ahead and to the side, they are exactly the same as if you look at the force for the side-wind and cycle speed independently.  So if you are a cyclist the size of an amoeba, side winds don’t feel like head winds … but then that is probably the least of your worries.

For ordinary sized objects, the squaring rule (quadratic drag) means that after you have combined the forces, squared them and then separated them out again, you get more than you started with!
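For those who like the algebra spelled out (a sketch, with drag constant \(k\), bike speed \(v\) and side wind \(w\), so the apparent wind is \(u = (v, w)\)):

\[ F^{\text{linear}}_{\text{fwd}} = k\,u_{\text{fwd}} = k\,v \qquad\text{(the side wind drops out entirely)} \]

\[ F^{\text{quadratic}}_{\text{fwd}} = k\,\lvert u\rvert\,u_{\text{fwd}} = k\,v\sqrt{v^2 + w^2} \;\ge\; k\,v^2 , \]

with equality only when \(w = 0\).  Under quadratic drag, any side wind at all inflates the forward component.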

The second way to look at it, which is not the full story, but not so far from what happens, is to consider the air just in front of you as you cycle.

You’ll know that cyclists often try to ride in each other’s slipstream to reduce drag, sometimes called ‘drafting’.

The lead cyclist is effectively dragging the air behind, and this helps the next cyclist, and that cyclist helps the one after.  In a race formation, this reduces the energy needed by the following riders by around a third.

In addition you also create a small area in front where the air is moving faster, almost like a little bubble of speed.  This is one of the reasons why even the lead cyclist gains from the followers, albeit much less (one site estimates 5%).  Now imagine adding the side wind; that lovely bubble of air is forever being blown away meaning you constantly have to speed up a new bubble of air in front.

I did the above calculations for an exact side wind at 90 degrees to make the sums easier. However, you can work out precisely how much additional force the wind causes for any wind direction, and hence how much additional power you need when cycling.

Here is a graph showing that additional power needed, ranging from a pure head wind on the right to a pure tail wind on the left (all for a 20 mph wind).  For the latter the additional force is negative – the wind is helping you. However, you can see that the breakeven point is about 10 degrees behind a pure side wind (the green dashed line).  Also evident (depressingly) is that the area to the left – where the wind is making things worse – is a lot bigger than the area to the right, where it is helping.
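For anyone who wants to recreate the curve, here is a short sketch under the same assumptions (quadratic drag, drag constant 1, 20 mph bike and wind) that computes the extra forward drag at each wind angle and scans for the breakeven; it lands a little over ten degrees behind a pure side wind:

```python
import math

V = W = 20.0   # cyclist speed and wind speed, mph

def extra_drag(theta_deg):
    """Forward drag beyond still air; wind from theta degrees off dead ahead."""
    theta = math.radians(theta_deg)
    u_fwd = V + W * math.cos(theta)      # apparent wind along direction of travel
    u_side = W * math.sin(theta)         # apparent wind across it
    return math.hypot(u_fwd, u_side) * u_fwd - V * V   # drag constant K = 1

# Scan from pure head wind (0 deg) to pure tail wind (180 deg) for the
# angle at which the wind stops hindering and starts helping.
breakeven = next(t / 10 for t in range(1801) if extra_drag(t / 10) <= 0)
print(f"breakeven at {breakeven:.1f} deg, "
      f"i.e. {breakeven - 90:.1f} deg behind a pure side wind")
```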

… and if you aren’t depressed enough already, most of my assumptions were ‘best case’.  The bike almost certainly has more side drag than head drag; you will need to cycle slightly into a wind to avoid being blown across the road; and, as noted in the previous post, you will cycle more slowly into a head wind so spend more time with it.

So in answer to the question …

why does it feel as if the wind is always against you?

… because most of the time it is!

why is the wind always against you? part 1 – head and tail winds

Sometimes it feels like the wind is always against you.

Is it really?

I’ve just been out for a run.  It is not terribly windy today by Tiree standards, the Met Office reports the speed at 18mph from the north west, but it was enough to feel as I ran, and certainly the gritted teeth of cyclists going past the window suggest it is plenty windy for them.

As I was running I remembered a question that Will Wright once asked me, “why does it feel as if the wind is always against you?”

Now Will competes in Iron Man events, and is behind Tiree Fitness, which organises island keep fit activities, and the annual Tiree 10k & half marathon and Ultra-Marathon (that I’m training for now).  In other words Will is in a different league to me … but still he feels the wind!

Will was asking about cycling rather than running, and I suspect that the main effect of a head wind for a runner is simply the way it knocks the breath out of you, rather than actual wind resistance.  That is, as with most things in exercise, the full story is a mixture of physiology, psychology and physics.

For this post I’ll stick to the ‘easy’ cases when the wind is dead in front or behind you.  I’ll leave side winds to a second post, as the physics for these is a little more complicated, and the answer somewhat more surprising.

In fact today I ran to and fro along the same road, although its angle to the wind varied.  For the purposes of this post I’ll imagine straightening it more, and having it face directly along the direction of the wind, so that running or cycling one way the wind is directly behind you and in the other the wind is directly in front.

running with the wind

To make the sums easier I’ll make the wind speed 15 mph and have me run at 5mph.

On the outward leg the wind is behind me.  I am running at 5mph, the wind is coming at 15mph, so if I had a little wind gauge as I ran it would register a 10mph tail wind.

When I turn into the wind I am now running at 5mph into a 15mph head wind, so the apparent wind speed for me is 20mph.

So half the time the wind is helping me to the tune of 10mph, half the time resisting me by 20mph, so surely that averages out as 5mph resistance for the whole journey, the same as if I was just running at 5mph on a still day?

average apparent wind (?) = ( (–10) × 4 miles + (+20) × 4 miles ) / 8 miles = 5

Unfortunately wind resistance does not average quite like that!

Wind resistance increases with the square of your speed.  So a 10mph tail wind creates 100 units of force to help you, whereas a 20mph head wind resists you with 400 units of force, four times as much.  That is, it is like one person pushing you from behind for half the course, but four people holding you back on the other half.

It is this force, the squared speed, that makes more sense to average:

average resistance (wind) = ( (–100) × 4 miles + (+400) × 4 miles ) / 8 miles = 150

Compare this to the effects of running on a still day at 5mph.

average resistance (no wind) = 25  (5 squared)

The average wind resistance over the course is six times as much even though half the distance is into the wind and half the distance is away from it.

It really is harder!
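The arithmetic is easy to check in a few lines of Python (a sketch using the same arbitrary force units, keeping the sign so a tail wind counts as help):

```python
import math

RUN, WIND, LEG = 5.0, 15.0, 4.0      # running speed, wind speed (mph); miles each way

tail, head = RUN - WIND, RUN + WIND  # apparent wind: -10 (helping) and +20 (against)

# Naive average of apparent wind over the 8 miles -- looks harmless:
print((tail * LEG + head * LEG) / (2 * LEG))                 # 5.0

# But resistance goes with the *square* of the apparent wind (keeping the sign):
force = lambda s: math.copysign(s * s, s)
print((force(tail) * LEG + force(head) * LEG) / (2 * LEG))   # 150.0
print(force(RUN))                    # 25.0 on a still day -- six times less
```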

In fact, for a runner, wind resistance (physics) is probably not the major effect on speed, and despite the wind my overall time was not significantly slower than on a still day.  The main effects of the wind are probably the ‘knocking the breath out of you’ feeling and the way the head wind affects your stride (physiology).  Of course both of these make you more aware of the times the wind is in your face and hence your perception of how long this is (psychology).

cycling hard

For a cyclist, wind resistance is a far more significant issue [1].  Think about Olympic sprinters who run upright compared with cyclists who bend low and even wear those Alien-like cycling helmets to reduce drag.

This is partly due to the different physical processes: for example, on a bike on a still day, the bike will keep on going forward even if you don’t pedal, whereas if you don’t keep moving your legs while running you get nowhere.

It is also partly due to the different speeds.  Even Usain Bolt only manages a bit over 20mph and that for just 100 metres, and for long-distance runners this drops to around 12 mph.  Equivalent cycling events are twice as fast and even a moderately fit cyclist could compete with Usain Bolt.

So let’s imagine our Tiree cyclist, grimacing as they head into the wind.

I’m going to assume they are cycling at 15mph.

If it were a still day the air resistance would be entirely due to their own forward speed of 15mph, hence (squared remember) 225 units of force against them.

However, with a 15mph wind, when they are cycling with the wind there is no net air flow, the only effort needed to cycle is the internal resistance of chain and gears, and the rubber on the road.  In contrast, when cycling against the wind, total air resistance is equivalent to 30mph. Recalling the squaring rule, that is 900 units of resistance, 4 times as high as on a still day.

Even averaging with the easy leg, you have twice as much effort needed to overcome the air resistance.  No wonder it feels tough!

However, that is all assuming you keep a constant speed irrespective of the wind.  In practice you are likely to slow down against a head wind and speed up with a tail wind.  Let’s assume you slow down to 10mph in the head wind and manage a respectable 20mph with the wind behind you.

I’ll not do the force calculations as the numbers get a little less tidy, but crucially this means you spend twice as long doing the head wind leg as the tail wind leg.  Although you cycle the same distance with head and tail winds, you spend twice as long battling that head wind.  And I’ll bet with it feeling so tough, it seems like even longer!
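For the curious, here is the untidy version I skipped, as a sketch under the same assumptions (squaring rule, arbitrary units):

```python
WIND = 15.0                            # mph
HEAD_SPEED, TAIL_SPEED = 10.0, 20.0    # slower into the wind, faster with it
LEG = 4.0                              # miles each way

# You spend twice as long on the head-wind leg ...
t_head, t_tail = LEG / HEAD_SPEED, LEG / TAIL_SPEED   # 0.4 h vs 0.2 h
print(f"fraction of ride into the wind: {t_head / (t_head + t_tail):.0%}")  # 67%

# ... and the squaring rule makes that leg brutal:
drag_head = (HEAD_SPEED + WIND) ** 2   # apparent wind 25 mph -> 625 units
drag_tail = (TAIL_SPEED - WIND) ** 2   # apparent wind  5 mph ->  25 units
still_day = 15.0 ** 2                  # 225 units at a steady 15 mph

# Work against drag is force x distance, and the legs are equal length:
print(f"average drag per mile: {(drag_head + drag_tail) / 2:.0f} "
      f"vs {still_day:.0f} on a still day")   # 325 vs 225, ~44% more
```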

 

 

[1] The Wikipedia page on Bicycle performance includes estimates suggesting over 75% of effort is overcoming drag … even with no wind.

Students love digital … don’t they?

In the ever accelerating rush to digital delivery, is this actually what students want or need?

Last week I was at the Talis Insight conference. As with previous years, this is a mix of sessions focused on those using or thinking of using Talis products, with lots of rich experience talks. However, about half of the time is also dedicated to plenaries about the current state and future prospects for technology in higher education; so it is well worth attending (it is free!) whether or not you are a Talis user.

Speakers this year included Bill Rammell, now Vice-Chancellor at the University of Bedfordshire, but who was also Minister of State for Higher Education during the second Blair government, and during that time responsible for introducing the National Student Survey.

Another high profile speaker was Rosie Jones, who is Director of Library Services at the Open University … which operates somewhat differently from the standard university library!

However, among the VCs, CEOs and directors of this and that, it was the two most junior speakers who stood out for me. Eva Brittin-Snell and Alex Davie are two SAGE student scholars from Sussex. As SAGE scholars they have engaged in research on student experience amongst their peers, speak at events like this and maintain a student blog, which includes, amongst other things, the story of how Eva came to buy her first textbook.

Eva and Alex’s talk was entitled “Digital through a student’s eyes” (video). Many of the talks had been about the rise of digital services and especially the eTextbook. Eva and Alex were the ‘digital natives’, so surely this was a joy to their ears. Surprisingly not.

Alex, in her first year at university, started by alluding to the previous speakers, the push for book-less libraries, and general digital spiritus mundi, but offered an alternative view. Students were annoyed at being asked to buy books for a course where only a chapter or two would be relevant; they appreciated the convenience of an eBook when core textbooks were permanently out on loan, and instantly recalled once one got hold of them. However, she said they still preferred physical books, as they are far more usable (even if heavy!) than eBooks.

Eva, a fourth year student, offered a different view. “I started like Aly”, she said, and then went on to describe her change of heart. However, it was not a revelation of the pedagogical potential of digital, more that she had learnt to live through the pain. There were clear practical and logistic advantages to eBooks, there when and where you wanted, but she described a life of constant headaches from reading on-screen.

Possibly some of this is due to the current poor state of eBooks that are still mostly simply electronic versions of texts designed for paper. Also, one of their student surveys showed that very few students had eBook readers such as Kindle (evidently now definitely not cool), and used phones primarily for messaging and WhatsApp. The centre of the student’s academic life was definitely the laptop, so eBooks meant hours staring at a laptop screen.

However, it also reflects a growing body of work showing the pedagogic advantages of physical note taking, potential developmental damage of early tablet and smartphone use, and industry figures showing that across all areas eBook sales are dropping and physical book sales increasing. In addition there is evidence that children and teenagers prefer physical books, and public library use by young people is growing.

It was also interesting that both Alex and Eva complained that eTextbooks were not ‘snappy’ enough. In the age of Tweet-stream presidents and 5-minute attention spans, ‘snappy’ was clearly the students’ term of choice to describe their expectation of digital media. Yet this did not represent a loss of their attention per se, as this was clearly not perceived as a problem with physical books.

… and I am still trying to imagine what a critical study of Aristotle’s Poetics would look like in ‘snappy’ form.

There are two lessons from this for me. First, what would a ‘digital first’ textbook look like? Does it have to be ‘snappy’, or are there ways to maintain attention and depth of reading in digital texts?

The second picks up on issues in the co-authored paper I presented at NordiCHI last year, “From intertextuality to transphysicality: The changing nature of the book, reader and writer”, which, amongst other things, asked how we might use digital means to augment the physical reading process, offering some of the strengths of eBooks such as the ability to share annotations, but retaining a physical reading experience.  Also maybe some of the physical limitations of availability could be relieved, for example, if university libraries work with bookshops to have student buy-and-return schemes alongside borrowing?

It would certainly be good if students did not have to learn to live with pain.

We have a challenge.

Of academic communication: overload, homeostasis and nostalgia

Revisiting an old paper on early email use and reflecting on scholarly communication now.

About 30 years ago, I was at a meeting in London and heard a presentation about a study of early email use in Xerox and the Open University. At Xerox the use of email was already part of their normal culture, but it was still new at the OU. I’d thought they had done a before-and-after study of one of the departments, but I clearly remembered their conclusions: email acted in addition to other forms of communication (face to face, phone, paper), but did not substitute.

It was one of those pieces of work that I could recall, but didn’t have a reference to. Facebook to the rescue! I posted about it and in no time had a series of helpful suggestions, including from Gilbert Cockton who nailed it, finding the meeting, the “IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems” (3 Feb 1989) and the precise paper:

P. Fung, T. O’Shea, S. Bly. Electronic mail viewed as a communications catalyst. IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems, pp. 1/1–1/3. INSPEC: 3381096. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=197821

In some extraordinary investigative journalism, Gilbert also noted that the first author, Pat Fung, went on to fresh territory after retirement, qualifying as a scuba-diving instructor at the age of 75.

The details of the paper were not exactly as I remembered. Rather than a before-and-after study, it was a comparison of the computing departments at Xerox (mature use of email) and the OU (email less ingrained, but already well used). Maybe I had simply embroidered the memory over the years, or maybe they presented newer work at the colloquium than was in the 3-page extended abstract.  In those days this was common, as researchers did not feel they needed to milk every last result in a formal ‘publication’. However, the conclusions were just as I remembered:

“An exciting finding is its indication that the use of sophisticated electronic communications media is not seen by users as replacing existing methods of communicating. On the contrary, the use of such media is seen as a way of establishing new interactions and collaboration whilst catalysing the role of more traditional methods of communication.”

As part of this process, following various leads from other Facebook friends, I spent some time looking at early CSCW conference proceedings, some at Saul Greenberg’s early CSCW bibliography [1] and Ducheneaut and Watts’ (15 years on) review of email research [2] in the 2005 HCI special issue on ‘reinventing email’ [3] (both notably missing the Fung et al. paper). I downloaded and skimmed several early papers including Wendy Mackay’s lovely early (1988) study [4] that exposed the wide variety of ways in which people used email over and above simple ‘communication’. So much to learn from this work when the field was still fresh.

This all led me to reflect both on the Fung et al. paper, the process of finding it, and the lessons for email and other ‘communication’ media today.

Communication for new purposes

A key finding was that “the use of such media is seen as a way of establishing new interactions and collaboration“. Of course, the authors and their subjects could not have envisaged current social media, but the finding of this paper was exactly an example of this. In 1989 if I had been trying to find a paper, I would have scoured my own filing cabinet and bookshelves, those of my colleagues, and perhaps asked people when I met them. Nowadays I pop the question into Facebook and within minutes the advice starts to appear, and not long after I have a scanned copy of the paper I was after.

Communication as a good thing

In the paper abstract, the authors say that an “exciting finding” of the paper is that “the use of sophisticated electronic communications media is not seen by users as replacing existing methods of communicating.” Within the paper, this is phrased even more strongly:

“The majority of subjects (nineteen) also saw no likelihood of a decrease in personal interactions due to an increase in sophisticated technological communications support and many felt that such a shift in communication patterns would be undesirable.”

Effectively, email was seen as potentially damaging if it replaced other more human means of communication, and the good outcome of this report was that this did not appear to be happening (or strictly subjects believed it was not happening).

However, by the mid-1990s, papers discussing ’email overload’ started to appear [5].

I recall a morning radio discussion of email overload about ten years ago. The presenter asked someone else in the studio if they thought this was a problem. Quite un-ironically, they answered, “no, I only spend a couple of hours a day”. I have found my own pattern of email use change when I switched from highly structured Eudora (with over 2000 email folders) to Gmail (mail is like a Facebook feed: if it isn’t on the first page it doesn’t exist). I was recently talking to another academic who explained that two years ago he had deliberately taken “email as stream” as a policy to control unmanageable volumes.

If only they had known …

Communication as substitute

While Fung et al.’s respondents reported that they did not foresee a reduction in other forms of non-electronic communication, in fact even in the paper the signs of this shift to digital are evident.

Here are the graphs of communication frequency for the Open University (30 people, more recent use of email) and Xerox (36 people, more established use) respectively.

[graphs of communication frequency at the OU and at Xerox (from Fung et al., 1989)]

It is hard to draw exact comparisons as it appears there may have been a higher overall volume of communication at Xerox (because of email?).  Certainly, at that point, face-to-face communication remains strong at Xerox, but it appears that not only the proportion but also the total volume of non-digital, non-face-to-face communications is lower than at the OU.  That is, substitution had already happened.

Again, this is obvious nowadays, although the volume of electronic communications would have been untenable in paper (I’ve sometimes imagined printing out a day’s email and trying to cram it in a pigeon-hole), the volume of paper communications has diminished markedly. A report in 2013 for Royal Mail recorded 3-6% pa reduction in letters over recent years and projected a further 4% pa for the foreseeable future [6].

academic communication and national meetings

However, this also made me think about the IEE Colloquium itself. Back in the late 1980s and 1990s it was common to attend small national or local meetings to meet with others and present work, often early stage, for discussion. In other fields this still happens, but in HCI it has all but disappeared. Maybe I have a little nostalgia, but this does seem a real loss, as it was a great way for new PhD students to present their work and meet the leaders in their field. Of course, this can happen if you get your CHI paper accepted, but the barriers are higher, particularly for those in smaller and less well-resourced departments.

Some of this is because international travel is cheaper and faster, and so national meetings have reduced in importance – everyone goes to the big global (largely US) conferences. Many years ago, research on day-to-day time use suggested that we have a travel ‘time budget’, relatively constant across countries and across different kinds of areas within the same country [7]. The same is clearly true of academic travel time; we have a certain budget, and if we travel more internationally then we do correspondingly less nationally.

[travel time budget graph (from Zahavi, 1979)]

However, I wonder if digital communication also had a part to play. I knew about the Fung et al. paper, even though it was not in the large reviews of CSCW and email, because I had been there. Indeed, the reason that the Fung et al. paper was not cited in relevant reviews would have been because it was in a small venue, only available as paper copy, and only if you knew it existed. Indeed, it was presumably also below the digital radar until it was, I assume, scanned by IEE archivists and deposited in the IEEE digital library.

However, despite the advantages of this easy access to one another and scholarly communication, I wonder if we have also lost something.

In the 1980s, physical presence and co-presence at an event was crucial for academic communication. Proceedings were paper and precious, I would at least skim read all of the proceedings of any event I had been to, even those of large conferences, because they were rare and because they were available. Reference lists at the end of my papers were shorter than now, but possibly more diverse and more in-depth, as compared to more directed ‘search for the relevant terms’ literature reviews of the digital age.

And looking back at some of those early papers, in days when publish-or-perish was not so extreme, when cardiac failure was not an occupational hazard for academics (except maybe due to the Cambridge sherry allowance), I marvel at the way this crucial piece of early research was not dressed up with an extra 6000 words of window dressing to make a ‘high impact’ publication, but simply shared. Were things more fun?


 

[1] Saul Greenberg (1991) “An annotated bibliography of computer supported cooperative work.” ACM SIGCHI Bulletin, 23(3), pp. 29-62. July. Reprinted in Greenberg, S. ed. (1991) “Computer Supported Cooperative Work and Groupware”, pp. 359-413, Academic Press. DOI: http://dx.doi.org/10.1145/126505.126508
https://pdfs.semanticscholar.org/52b4/d0bb76fcd628c00c71e0dfbf511505ae8a30.pdf

[2] Nicolas Ducheneaut and Leon A. Watts (2005). In search of coherence: a review of e-mail research. Hum.-Comput. Interact. 20, 1 (June 2005), 11-48. DOI= 10.1080/07370024.2005.9667360
http://www2.parc.com/csl/members/nicolas/documents/HCIJ-Coherence.pdf

[3] Steve Whittaker, Victoria Bellotti, and Paul Moody (2005). Introduction to this special issue on revisiting and reinventing e-mail. Hum.-Comput. Interact. 20, 1 (June 2005), 1-9.
http://www.tandfonline.com/doi/abs/10.1080/07370024.2005.9667359

[4] Wendy E. Mackay. 1988. More than just a communication system: diversity in the use of electronic mail. In Proceedings of the 1988 ACM conference on Computer-supported cooperative work (CSCW ’88). ACM, New York, NY, USA, 344-353. DOI=http://dx.doi.org/10.1145/62266.62293
https://www.lri.fr/~mackay/pdffiles/TOIS88.Diversity.pdf

[5] Steve Whittaker and Candace Sidner (1996). Email overload: exploring personal information management of email. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’96), Michael J. Tauber (Ed.). ACM, New York, NY, USA, 276-283. DOI=http://dx.doi.org/10.1145/238386.238530
https://www.ischool.utexas.edu/~i385q/readings/Whittaker_Sidner-1996-Email.pdf

[6] The outlook for UK mail volumes to 2023. PwC prepared for Royal Mail Group, 15 July 2013
http://www.royalmailgroup.com/sites/default/files/The%20outlook%20for%20UK%20mail%20volumes%20to%202023.pdf

[7] Yacov Zahavi (1979). The ‘UMOT’ Project. Prepared For U.S. Department Of Transportation Ministry Of Transport and Fed. Rep. Of Germany.
http://www.surveyarchive.org/Zahavi/UMOT_79.pdf

Human-Like Computing

Last week I attended an EPSRC workshop on “Human-Like Computing”.

The delegate pack offered a tentative definition:

“offering the prospect of computation which is akin to that of humans, where learning and making sense of information about the world around us can match our human performance.” [E16]

However, the purpose of this workshop was to clarify and expand on this, exploring what it might mean for computers to become more like humans.

It was an interdisciplinary meeting with some participants coming from more technical disciplines such as cognitive science, artificial intelligence, machine learning and robotics; others from psychology or studying human and animal behaviour; and some, like myself, from HCI or human factors, bridging the two.

Why?

Perhaps the first question is why one might even want more human-like computing.

There are two obvious reasons:

(i) Because it is a good model to emulate — Humans are able to solve some problems, such as visual pattern finding, which computers find hard. If we can understand human perception and cognition, then we may be able to design more effective algorithms. For example, in my own work, colleagues and I have used models based on spreading activation and layers of human memory when addressing ‘web scale reasoning’ [K10,D10].

(ii) For interacting with people — There is considerable work in HCI in making computers easier to use, but there are limitations. Often we are happy for computers to be simply ‘tools’, but at other times, such as when your computer notifies you of an update in the middle of a talk, you wish it had a little more human understanding. One example of this is recent work at Georgia Tech teaching human values to artificial agents by reading them stories! [F16]

To some extent (i) is simply the long-standing area of nature-inspired or biologically-inspired computing. However, the combination of computational power and psychological understanding means that perhaps we are at the point where new strides can be made. Certainly, the success of ‘deep learning’ and the recent computer mastery of Go suggest this. In addition, by my own calculations, for several years the internet as a whole has had more computational power than a single human brain, and we are very near the point when we could simulate a human brain in real time [D05b].

Both goals, but particularly (ii), suggest a further goal:

(iii) new interaction paradigms — We will need to develop new ways to design for interacting with human-like agents and robots, not least how to avoid the ‘uncanny valley’ and how to avoid the appearance of over-competence that has bedevilled much work in this broad area. (see more later)

Both goals also offer the potential for a fourth secondary goal:

(iv) learning about human cognition — In creating practical computational algorithms based in human qualities, we may come to better understand human behaviour, psychology and maybe even society. For example, in my own work on modelling regret (see later), it was aspects of the computational model that highlighted the important role of ‘positive regret’ (“the grass is greener on the other side”) to help us avoid ‘local minima’, where we stick to the things we know and do not explore new options.

Human or superhuman?

Of course humans are not perfect; do we want to emulate their limitations and failings?

For understanding humans (iv), the answer is probably “yes”, and maybe by understanding human fallibility we may be in a better position to predict and prevent failures.

Similarly, for interacting with people (ii), the agents should show at least some level of human limitations (even if ‘put on’); for example, a chess program that always wins would not be much fun!

However, for simply improving algorithms, goal (i), we may want to take the ‘best bits’ from human cognition and merge them with the best aspects of artificial computation. Of course it may be that the frailties are also the strengths; for example, the need to come to decisions and act in relatively short timescales (in terms of brain ‘ticks’) may be one way in which we avoid ‘over-learning’, a common problem in machine learning.

In addition, the human mind has developed to work with the nature of neural material as a substrate, and the physical world, both of which have shaped the nature of human cognition.

Very simple animals learn purely by Skinner-like response training, effectively what AI would term sub-symbolic. However, this level of learning requires many exposures to similar stimuli. For rarer occurrences, which do not happen frequently within a lifetime, learning must be at the very slow pace of genetic development of instincts. In contrast, conscious reasoning (symbolic processing) allows us to learn through a single or very small number of exposures; ideal for infrequent events or novel environments.

Big Data means that computers effectively have access to vast amounts of ‘experience’, and researchers at Google have remarked on the ‘Unreasonable Effectiveness of Data’ [H09] that allows problems, such as translation, to be tackled in a statistical or sub-symbolic way which previously would have been regarded as essentially symbolic.

Google are now starting to recombine statistical techniques with more knowledge-rich techniques in order to achieve better results again. As humans we continually employ both types of thinking, so there are clear human-like lessons to be learnt, but the eventual system will not have the same ‘balance’ as a human.

If humans had developed with access to vast amounts of data and maybe other people’s experience directly (rather than through culture, books, etc.), would we have developed differently? Maybe we would do more things unconsciously that we do consciously. Maybe with enough experience we would never need to be conscious at all!

More practically, we need to decide how to make use of this additional data. For example, learning analytics is becoming an important part of educational practice. If we have an automated tutor working with a child, how should we make use of the vast body of data about other tutors interactions with other children?   Should we have a very human-like tutor that effectively ‘reads’ learning analytics just as a human tutor would look at a learning ‘dashboard’? Alternatively, we might have a more loosely human-inspired ‘hive-mind’ tutor that ‘instinctively’ makes pedagogic choices based on the overall experience of all tutors, but maybe in an unexplainable way?

What could go wrong …

There have been a number of high-profile statements in the last year about the potential coming ‘singularity’ (when computers are clever enough to design new computers leading to exponential development), and warnings that computers could become sentient, Terminator-style, and take over.

There was general agreement at the workshop this kind of risk was overblown and that despite breakthroughs, such as the mastery of Go, these are still very domain limited. It is many years before we have to worry about even general intelligence in robots, let alone sentience.

A far more pressing problem is that of incapable computers, which make silly mistakes, and the way in which people, maybe because of the media attention to the success stories, assume that computers are more capable than they are!

Indeed, over-confidence in algorithms is not just a problem for the general public, but also among computing academics, as I found in my personal experience on the REF panel.

There are of course many ethical and legal issues raised as we design computer systems that are more autonomous. This is already being played out with driverless cars, with issues of insurance and liability. Some legislators are suggesting allowing driverless cars, but only if there is a driver there to take control … but if the car relinquishes control, how do you safely manage the abrupt change?

Furthermore, while the vision of autonomous robots taking over the world is still far-fetched, more surreptitious control is already with us. Whether it is Uber cabs called by algorithm, or simply Google’s ranking of search results prompting particular holiday choices, we are all, to varying extents, doing “what the computer tells us”. I recall in the Dalek Invasion of Earth, the very un-human-like Daleks could not move easily amongst the rubble of war-torn London. Instead they used ‘hypnotised men’ controlled by some form of neural headset. If the Daleks had landed today and simply taken over or digitally infected a few cloud computing services, would we know?

Legibility

Sometimes it is sufficient to have a ‘black box’ that makes decisions and acts. So long as it works we are happy. However, a key issue for many ethical and legal questions, but also for practical interaction, is the ability to interrogate a system, to seek explanations of why a decision has been made.

Back in 1992 I wrote about these issues [D92], in the early days when neural networks and other forms of machine learning were being proposed for a variety of tasks, from controlling nuclear fusion reactions to credit scoring. One particular scenario was if an algorithm were used to pre-sort large numbers of job applications. How could you know whether the algorithms were being discriminatory? How could a company using such algorithms defend itself if such an accusation were brought?

One partial solution then, as now, was to accept that underlying learning mechanisms may involve emergent behaviour from statistical, neural network or other forms of opaque reasoning. However, this opaque initial learning process should give rise to an intelligible representation. This is rather akin to a judge who might have a gut feeling that a defendant is guilty or innocent, but needs to explicate that in a reasoned legal judgement.

This approach was exemplified by Query-by-Browsing, a system that creates queries from examples (using a variant of ID3), but then converts these into SQL queries. This was subsequently implemented [D94], and is still running as a web demonstration.
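To give a flavour of the idea, here is a toy sketch (my illustration, not the actual Query-by-Browsing algorithm or its code; the table, fields and thresholds are all hypothetical, and the tree is hand-written rather than learnt) of how a decision tree over example records can be walked to emit a human-readable SQL query, turning an opaque classifier into an inspectable one:

```python
# A decision tree over record fields, written as nested tuples:
# (field, threshold, subtree_if_less, subtree_if_greater_or_equal),
# with a bool at each leaf saying whether the record is selected.
tree = ("salary", 30000,
        False,                          # below 30k: not selected
        ("age", 40, True, False))       # hypothetical learnt splits

def tree_to_sql(node, conds=()):
    """Collect the root-to-leaf paths that end in True as SQL disjuncts."""
    if isinstance(node, bool):
        return [" AND ".join(conds)] if node else []
    field, cut, lo, hi = node
    return (tree_to_sql(lo, conds + (f"{field} < {cut}",)) +
            tree_to_sql(hi, conds + (f"{field} >= {cut}",)))

where = " OR ".join(f"({c})" for c in tree_to_sql(tree))
print(f"SELECT * FROM employees WHERE {where}")
# SELECT * FROM employees WHERE (salary >= 30000 AND age < 40)
```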

For many years I have argued that it is likely that our ‘logical’ reasoning arises precisely from this need to explain our own tacit judgement to others. While we simply act individually, or by observing the actions of others, this can be largely tacit, but as soon as we want others to act in planned, collaborative ways, for example to kill a large animal, we need to convince them. Once we have the mental mechanisms to create these explanations, they become internalised so that we end up with internal means to question our own thoughts and judgement, and even use them constructively to tackle problems more abstract and complex than those found in nature. That is, dialogue leads to logic!

Scenarios

We split into groups and discussed scenarios as a means to understand the potential challenges for human-like computing. Over multiple sessions the group I was in discussed one main scenario and then a variant.

Paramedic for remote medicine

The main scenario consisted of a patient far from a central medical centre, with an intelligent local agent communicating intermittently and remotely with a human doctor. Surprisingly, the remote aspect of the scenario was not initially proposed by me thinking of Tiree, but by another member of the group thinking about some of the remote parts of the Scottish mainland.

The local agent would need to be able to communicate with the patient, be able to express a level of empathy, be able to physically examine (needing touch sensing, vision), and discuss symptoms. On some occasions, like a triage nurse, the agent might be sufficiently certain to be able to make a diagnosis and recommend treatment. However, at other times it may need to pass the patient on to the remote doctor, being able to describe what had been done in terms of examination, symptoms observed and information gathered from the patient, in the same way that a paramedic does when handing over a patient to the hospital. However, even after the handover of responsibility, the local agent may still form part of the remote diagnosis, and may be able to take over again once the doctor has determined an overall course of action.

The scenario embodied many aspects of human-like computing:

  • The agent would require a level of emotional understanding to interact with the patient
  • It would require fine and situation contingent robotic features to allow physical examination
  • Diagnosis and decisions would need to be guided by rich human-inspired algorithms based on large corpora of medical data, case histories and knowledge of the particular patient.
  • The agent would need to be able to explain its actions both to the patient and to the doctor. That is, it would not only need to transform its own internal representations into forms intelligible to a human, but do so in multiple ways depending on the inferred knowledge and nature of the person.
  • Ethical and legal responsibility are key issues in medical practice
  • The agent would need to be able to manage handovers of control.
  • The agent would need to understand its own competencies in order to know when to call in the remote doctor.

The scenario could be in physical or mental health. The latter is particularly important given recent statistics, which suggested only 10% of people in the UK suffering mental health problems receive suitable help.

Physiotherapist

As a more specific scenario still, one of the group related how he had been to an experienced physiotherapist after a failed diagnosis by a previous physician. Rather than jumping straight into a physical examination, or even apparently watching the patient’s movement, the physiotherapist proceeded to chat for 15 minutes about aspects of the patient’s life, work and exercise. At the end of this process, the physiotherapist said, “I think I know the problem”, and proceeded to administer a directed test, which correctly diagnosed the problem and led to successful treatment.

Clearly the conversation had given the physiotherapist a lot of information about potential causes of injury, aided by many years observing similar cases.

To do this using an artificial agent would suggest some level of:

  • theory/model of day-to-day life

Thinking about the more conversational aspects of this I was reminded of the PhD work of Ramanee Peiris [P97]. This concerned consultations on sensitive subjects such as sexual health. It was known that when people filled in (initially paper) forms prior to a consultation, they were more forthcoming and truthful than if they had to provide the information face-to-face. This was even if the patient knew that the person they were about to see would read the forms prior to the consultation.

Ramanee’s work extended this first to electronic forms and then to chat-bot style discussions which were semi-scripted, but used simple textual matching to determine which topics had been covered, including those spontaneously introduced by the patient. Interestingly, the more human-like the system became, the more truthful and forthcoming the patients were, even though they were less so with a real human.

As well as revealing lessons for human interactions with human-like computers, this also showed that human-like computing may be possible with quite crude technologies. Indeed, even Eliza was treated (to Weizenbaum’s alarm) as if it really were a counsellor, even though people knew it was ‘just a computer’ [W66].

Cognition or Embodiment?

I think it fair to say that the overall balance, certainly in the group I was in, was towards the cognitivist: that is, a more Cartesian approach starting with understanding and models of internal cognition, and then seeing how these play out in external action. Indeed, the term ‘representation’ was used repeatedly as an assumed central aspect of any human-like computing, and there was even talk of resurrecting Newell’s project for a ‘unified theory of cognition’ [N90].

There did not appear to be any hard-core embodiment theorists at the workshop, although several people had sympathies. This was perhaps as well, as we could easily have degenerated into well-rehearsed arguments for and against embodiment/cognition-centred explanations … not least about the critical word ‘representation’.

However, I did wonder whether a path that deliberately took embodiment centrally would be valuable. How many human-like behaviours could be modelled in this way, taking external perception–action as central and only taking on internal representations when they were absolutely necessary (Andy Clark’s 007 principle [C98])?

Such an approach would meet limits, not least the physiotherapist’s 15-minute chat, but I would guess it would be more successful over a wider range of behaviours and scenarios than we would at first think.

Human–Computer Interaction and Human-Like Computing

Russell and I were partly there representing our own research interests, but also more generally as part of the HCI community, looking at the way human-like computing would intersect existing HCI agendas, or maybe create new challenges and opportunities (see poster). It was certainly clear during the workshop that there is a substantial role for human factors, from fine motor interactions, to conversational interfaces and socio-technical systems design.

Russell and I presented a poster, which largely focused on these interactions.


There are two sides to this:

  • understanding and modelling for human-like computing — HCI studies and models complex, real world, human activities and situations. Psychological experiments and models tend to be very deep and detailed, but narrowly focused and using controlled, artificial tasks. In contrast HCI’s broader, albeit more shallow, approach and focus on realistic or even ‘in the wild’ tasks and situations may mean that we are in an ideal position to inform human-like computing.

  • human interfaces for human-like computing — As noted in goal (iii), we will need paradigms for humans to interact with human-like computers.

As an illustration of the first of these, the poster used my work on making sense of the apparently ‘bad’ emotion of regret [D05].

An initial cognitive model of regret was formulated involving a rich mix of imagination (in order to pull past events and actions to mind), counter-factual modal reasoning (in order to work out what would have happened), emotion (which is modified to feel better or worse depending on the possible alternative outcomes), and Skinner-like low-level behavioural learning (the eventual purpose of regret).

[figure: cognitive model of regret]

This initial descriptive and qualitative cognitive model was then realised in a simplified computational model, which had a separate ‘regret’ module that could be plugged into a basic behavioural learning system.  Both the basic system and the system with regret learnt, but the addition of regret did so with between 5 and 10 times fewer exposures.  That is, regret made a major improvement to the machine learning.
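Purely as a toy illustration (loosely inspired by the description above, not the actual model in [D05] or its architecture; all parameters here are invented), here is a sketch of a two-choice learner with a plug-in ‘regret’ signal that compares the reward received with an imagined counterfactual outcome:

```python
import random

random.seed(1)
PAYOFF = {"a": 0.3, "b": 0.7}            # hypothetical win probabilities
EPSILON, RATE = 0.1, 0.1                 # exploration and learning rates

def learn(use_regret, trials=100):
    value = {"a": 0.0, "b": 0.0}         # learnt value of each action
    for _ in range(trials):
        if random.random() < EPSILON:    # occasional exploration ...
            act = random.choice("ab")
        else:                            # ... otherwise act greedily
            act = max(value, key=value.get)
        signal = float(random.random() < PAYOFF[act])
        if use_regret:
            # the plug-in module: imagine the outcome of the road not taken
            other = "b" if act == "a" else "a"
            imagined = float(random.random() < PAYOFF[other])
            signal -= imagined           # losing out stings; 'positive regret' soothes
        value[act] += RATE * (signal - value[act])
    return value

print(learn(use_regret=False))
print(learn(use_regret=True))            # the action values separate further, faster
```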

architecture

Turning to the second: direct manipulation has been at the heart of interaction design since the PC revolution in the 1980s. Prior to that, command line interfaces (or worse, job control interfaces) suggested a mediated paradigm, where operators ‘asked’ the computer to do things for them. Direct manipulation changed that, turning the computer into a passive virtual world of computational objects on which you operated with the aid of tools.

To some extent we need to shift back to the 1970s mediated paradigm, but renewed, where the computer is no longer like a severe bureaucrat demanding the precise grammatical and procedural request, but instead a helpful and understanding aide. For this we can draw upon existing areas of HCI such as human–human communication, intelligent user interfaces, conversational agents and human–robot interaction.

References

[C98] Clark, A. 1998. Being There: Putting Brain, Body and the World Together Again. MIT Press. https://mitpress.mit.edu/books/being-there

[D92] A. Dix (1992). Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451. http://www.hcibook.com/alan/papers/neuro92/

[D94] A. Dix and A. Patrick (1994). Query By Browsing. Proceedings of IDS’94: The 2nd International Workshop on User Interfaces to Databases, Ed. P. Sawyer. Lancaster, UK, Springer Verlag. 236-248.

[D05] Dix, A. (2005). The adaptive significance of regret. Unpublished essay. https://alandix.com/academic/essays/regret.pdf

[D05b] A. Dix (2005). the brain and the web – a quick backup in case of accidents. Interfaces, 65, pp. 6-7. Winter 2005. https://alandix.com/academic/papers/brain-and-web-2005/

[D10] A. Dix, A. Katifori, G. Lepouras, C. Vassilakis and N. Shabir (2010). Spreading Activation Over Ontology-Based Resources: From Personal Context To Web Scale Reasoning. International Journal of Semantic Computing, Special Issue on Web Scale Reasoning: scalable, tolerant and dynamic. 4(1) pp. 59–102. http://www.hcibook.com/alan/papers/web-scale-reasoning-2010/

[E16] EPSRC (2016). Human-Like Computing Handbook. Engineering and Physical Sciences Research Council. 17–18 February 2016.

[F16] Alison Flood (2016). Robots could learn human values by reading stories, research suggests. The Guardian, Thursday 18 February 2016 http://www.theguardian.com/books/2016/feb/18/robots-could-learn-human-values-by-reading-stories-research-suggests

[H09] Alon Halevy, Peter Norvig, and Fernando Pereira. 2009. The Unreasonable Effectiveness of Data. IEEE Intelligent Systems 24, 2 (March 2009), 8-12. DOI=10.1109/MIS.2009.36

[K10] A. Katifori, C. Vassilakis and A. Dix (2010). Ontologies and the Brain: Using Spreading Activation through Ontologies to Support Personal Interaction. Cognitive Systems Research, 11 (2010) 25–41. https://alandix.com/academic/papers/Ontologies-and-the-Brain-2010/

[N90] Allen Newell. 1990. Unified Theories of Cognition. Harvard University Press, Cambridge, MA, USA. http://www.hup.harvard.edu/catalog.php?isbn=9780674921016

[P97] DR Peiris (1997). Computer interviews: enhancing their effectiveness by simulating interpersonal techniques. PhD Thesis, University of Dundee. http://virtual.inesc.pt/rct/show.php?id=56

[W66] Joseph Weizenbaum. 1966. ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (January 1966), 36-45. DOI=http://dx.doi.org/10.1145/365153.365168

Alan’s Guide to Winter Foot Care

My feet are quite wide and so I prefer to wear sandals.  I wore sandals for over 700 miles of my round Wales walk back in 2013, and wear them throughout the winter.

When the temperature drops below zero, or snow gathers on the ground, I am often asked, “don’t your feet get cold?”.

Having been asked so many times, I have decided to put down in writing my observations about healthy winter feet in the hope it will help others.

Basically, the thing to remember is that it is all about colours, and follows a roughly linear series of stages.  However, do note I am a sallow-skinned Caucasian, so all reference to skin colour should be read in that context.

Look at your toes.

What colour are they?

Stage 1.  White

Press the side of your toe with your finger.  Does it change colour?

1.1   Yes, it goes a bit pink and then fades rapidly back to white.

That is normal and healthy, you clearly aren’t taking this whole extreme winter walking thing seriously.

1.2  Yes, it goes deep red and only very slowly back to white.

You have an infection, maybe due to stage 2.2a on a previous walk.  Visit the doctor to avoid stage 3.

1.3 No, it stays white.

Bad news, you are a zombie.

Stage 2. Red

Are your toes painful?

2.1 Yes.

Well at least they are still alive.

2.2. No.

Well at least they don’t hurt.  However, numbness does bring certain dangers.

2.2a – You might prick your toe on a thorn or rusty wire and not notice, leading to infection.

2.2b – You might step on broken glass and bleed to death.

2.2c – You might step in a fire and burn yourself.

Stage 3.  Yellow

Blood poisoning – you missed warning 2.2a.

Stage 4.  Blue.

Your circulation has stopped entirely.  This will lead ultimately to limb death, but at least you won’t bleed to death (warning 2.2b).

Stage 5. Black.

Is that charcoal black?

5.1.  Yes

You forgot warning 2.2c didn’t you?

5.2  No, more a dull grey/black.

Frostbite, get to the hospital quick and they may save some of your toes.

Stage 6. Green

Gangrene, no time for the hospital, find a saw or large breadknife.

Stage 7.  What toes?

You missed stages 5 and 6.


Download and print the Quick Reference Card so that you can conveniently check your foot health at any time.



Last word … on a serious note

My feet are still (despite misuse!) healthy.  However, for many this is a serious issue, not least for those with diabetes.  When I was a child my dad, who was diabetic, dropped a table on his foot and had to be constantly monitored to make sure it didn’t develop into gangrene.  Diabetes UK have their own foot care page, and a list of diabetes charities you can support.

 

Walking Wales

As some of you already know, next year I will be walking all around Wales: from May to July covering just over 1000 miles in total.

Earlier this year the Welsh Government announced the opening of the Wales Coastal Path, a new long-distance footpath around the whole coast of Wales. There were several existing long-distance paths covering parts of the coastline, as well as numerous stretches of public footpaths at or near the coast. However, these have now been linked, mapped and waymarked, creating for the first time a continuous single route. In addition, the existing Offa’s Dyke long-distance path cuts very closely along the Welsh–English border, so that it is possible to make a complete circuit of Wales on the two paths combined.

As soon as I heard the announcement, I knew it was something I had to do, and gradually, as I discussed it with more and more people, the idea has become solid.

This will not be the first complete periplus along these paths; this summer there have been at least two sponsored walkers taking on the route. However, I will be doing the walk with a technology focus, which will, I believe, be unique.

The walk has four main aspects:

personal — I am Welsh, was born and brought up in Cardiff, but have not lived in Wales for over 30 years. The walk will be a form of homecoming, reconnecting with the land and its people after being away for so long. The act of encircling can symbolically ‘encompass’ a thing, as if by knowing the periphery one knows the whole. Of course life is not like this: the edge is just that, not the core, not the heart. As a long-term ex-pat, a foreigner in my own land, maybe all I can hope to do is scratch the surface, nibble at the edges. However, I always feel most comfortable as an outsider, one at the margins, so in some ways I am going to the places where I feel most at home. I will blog, audio blog, tweet and generally share this experience to the extent the tenuous mobile signal allows, but I am also looking forward to periods of solitude between sea and mountain.

practical — As I walk I will be looking at the IT experience of the walker, and also discussing with local communities the IT needs and problems of those at the edges, at the margins. Not least will be issues due to the paucity of network access: both patchy mobile signal whilst walking and low-capacity ‘broadband’ at the limits of wind-beaten copper telephone wires — none of the mega-capacity fibre optic of the cities. This will not simply be fact-finding, but actively building prototypes and solutions, both myself (in evenings and ‘days off’) and with others who are part of the project remotely or joining me for legs of the journey1. Geolocation and mobile-based applications will be a core part of this, particularly for the walker’s experience, but local community needs are likely to be far more diverse.

philosophical — Mixed with personal reflections will be an exploration of the meanings of place, of path, of walking, of nomadicity and of locality. Aristotle’s school of philosophy was called the Peripatetic School because discussion took place while walking; over two thousand years later Wordsworth’s poetry was nearly all composed while walking; and for time immemorial routes of pilgrimage have been a focus of both spiritual service and personal enlightenment. This will build on some of my own previous writings, in particular past keynotes2 on human understanding of space, and also wider literature such as Rebecca Solnit’s wonderful “Wanderlust”.  This reflection will inform the personal blogging, and after I finish I will edit it all into a book or account of the journey.

research3 — the practical outcomes will intersect with various personal research interests including social empowerment, interaction design and algorithmics4.  For the walker’s experience, I will effectively be doing a form of action research!  This will certainly include how to incorporate local maps (such as tourist town plans) effectively into larger-scale experiences, how ‘crowdsourced’ route knowledge can augment more formal digital and paper resources, data synchronisation to deal with disconnection (see the sketch after this list), and data integration between diverse sources.  In addition I am offering myself as a living lab so that others can use my trip as a place to try out their own sensors and instrumentation5, information systems, content authoring, ethnographic practices, community workshops, etc.  This may involve simply asking me to use things, coming for a single meeting or day, or joining me for parts of the walk.
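
To give a concrete flavour of that synchronisation issue, here is a minimal sketch of an offline-first store that queues edits locally while disconnected and merges them by timestamp when a connection returns.  It is purely illustrative: the names (WaypointStore and friends) are mine, not part of any planned system, and a real solution would need something smarter than last-write-wins.

    import time

    class WaypointStore:
        """Toy offline-first store (illustrative only).

        Edits are always applied locally and queued; sync() merges with
        a remote store by timestamp, so the last write wins."""

        def __init__(self):
            self.records = {}   # record id -> (timestamp, data)
            self.pending = []   # edits not yet pushed to the remote store

        def edit(self, rec_id, data):
            stamped = (time.time(), data)
            self.records[rec_id] = stamped
            self.pending.append((rec_id, stamped))

        def sync(self, remote):
            """Push queued edits, then pull anything newer from the remote."""
            for rec_id, stamped in self.pending:
                current = remote.records.get(rec_id)
                if current is None or stamped[0] > current[0]:
                    remote.records[rec_id] = stamped
            self.pending.clear()
            for rec_id, stamped in remote.records.items():
                local = self.records.get(rec_id)
                if local is None or stamped[0] > local[0]:
                    self.records[rec_id] = stamped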

If any of this interests you, do get in touch.  As well as research collaborations (living lab or supporting direct IT goals), any help in managing logistics, PR, or finding sources of funding/sponsorship for basic costs would be most welcome.

I’ll get a dedicated website, Facebook page, twitter account, and charity sponsorship set up soon … watch this space!

  1. Coding whilst walking is something I have thought about (but not done!) for many years, but it was definitely inspired more recently by Nick, the amazing cycling programmer who came to the Spring Tiree Tech Wave.[back]
  2. “Welsh Mathematician Walks in Cyberspace”, and “Paths and Patches: patterns of geognosy and gnosis”.[back]
  3. I tried to think of a word beginning with ‘p’ for research, but failed![back]
  4. As I tagged this post I found I was using nearly all of my most common tags — I hadn’t realised quite how much this project cuts across so many areas of interest.[back]
  5. But with the “no blood rule”: if I get sensor sores, the sensors go in the bin 😉 [back]

Tiree Touchtable – the photos

At last, the photos from the week making and installing the Tiree Touchtable.  You can see all the photos on Flickr, but here is a small selection (with a little sketch of the sensing idea at the end):

The main components – projector, Kinect (on top of projector) and mini-Mac:

The Kinect disassembled:

Platform to attach to roof beams and support projector and mini-Mac:

Add mirror:

Andrea adjusting the mirror:

It works!

Alan gently centre-punches location for screws on Kinect frame:

note the tool … after this the Kinect didn’t work … can’t think why?  But happily there was a second Kinect 🙂

Then Andrea screws Kinect to timber support:

Testing on the workbench:

Moment of truth — on the way to the Rural centre to install … second Kinect carefully cradled in Alan’s old jumper:


Parts laid out, ready to go:

Steve fits mounting brackets, Alan looks on:

Alan thinks, “platform looks secure”.  Fiona thinks, “Alan doesn’t”.

Gets boring standing at the bottom of the ladder:

Andrea fitting Kinect drop arm:

Andrea fitting the mini-Mac:

“Yep, that seems to be OK”

Let there be light:

Looking down — behold, Tiree Touchfloor:

The secret of true engineering … if the table is too low for the sensors, lift it higher:
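
For the curious, the sensing idea is simple: the Kinect looks down at the table, and anything a centimetre or few nearer the sensor than the surface is treated as a touch.  Below is a rough sketch of that depth-threshold test, assuming depth frames arrive as NumPy arrays of millimetre distances; this is my illustration of the general technique, not the actual Touchtable code.

    import numpy as np

    def detect_touch(depth_frame, background, band_mm=(10, 40), min_pixels=30):
        """Toy depth-threshold touch test (illustrative, not the real code).

        depth_frame, background: 2-D arrays of distance-to-sensor in mm;
        background is a frame captured of the empty table surface.
        A pixel counts as 'touching' if it is slightly nearer the Kinect
        than the table, e.g. a hand resting just above the surface."""
        height = background.astype(int) - depth_frame.astype(int)
        touching = (height > band_mm[0]) & (height < band_mm[1])
        return touching if touching.sum() >= min_pixels else None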

Holiday Reading

Early in the summer Fiona and I took 10 days holiday, first touring on the West Coast of Scotland, south from Ullapool, and then over the Skye Road Bridge to spend a few days on Skye.  As well as visiting various wool-related shops on the way and a spectacular drive over the pass from Applecross, I managed a little writing and some work on regret modelling1.  And, alongside the writing and regret modelling, quite a lot of reading.

This was my holiday reading:

The Talking Ape: How Language Evolved, Robbins Burling (see my booknotes and review)

In Praise of the Garrulous, Allan Cameron (see my booknotes)

A Mind So Rare, Merlin Donald (see my booknotes and review)

Wanderlust, Rebecca Solnit (see my booknotes)

  1. At last!  It has been something like six years since I first did some initial, and very promising, computational regret modelling, and I have at last got back to it, writing driver code so that I now have data from a systematic spread of different parameters.  Happily this confirmed the early evidence that the cognitive model of regret I first wrote about in 2003 really does seem to aid learning.  However, the more comprehensive simulation proved its value: early indications that positive regret (the ‘grass is greener’ feeling) was more powerful than negative regret do not seem to have been borne out.[back]
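
For anyone wondering what such driver code might look like, here is an entirely illustrative sketch: a simple choice learner whose value update is scaled by counterfactual regret (how much better the best alternative would have paid).  This is only my guess at the general shape of such an experiment, not the 2003 model itself, and every name and parameter in it is made up.

    import random

    def payoff(arm):
        """Hypothetical noisy payoffs for three alternative actions."""
        means = [0.2, 0.5, 0.8]
        return means[arm] + random.gauss(0, 0.1)

    def regret_learner(steps=2000, epsilon=0.1, lr=0.05, regret_weight=1.0):
        """Value learner whose update is amplified by counterfactual regret."""
        values = [0.0, 0.0, 0.0]
        for _ in range(steps):
            # epsilon-greedy choice between the three actions
            if random.random() < epsilon:
                arm = random.randrange(3)
            else:
                arm = max(range(3), key=values.__getitem__)
            # in simulation we can peek at what the alternatives would have paid
            outcomes = [payoff(a) for a in range(3)]
            regret = max(outcomes) - outcomes[arm]
            # regret scales the learning signal for the chosen action
            values[arm] += lr * (1 + regret_weight * regret) * (outcomes[arm] - values[arm])
        return values

    print(regret_learner())   # sweep regret_weight to compare learning with and without regret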