the educational divide – do numbers matter?

If a news article is all about numbers, why is the media shy about providing the actual data?

On the BBC News website this morning James McIvor's article "Clash over 'rich v poor' university student numbers" describes differences between the Scottish Government (SNP) and Scottish Labour in the wake of Professor Peter Scott's appointment as commissioner for fair access to higher education in Scotland.

Scottish Labour claim that while access to university by the most deprived has increased, the educational divide is growing, with the most deprived increasing by 0.8% since 2014, but those in the least deprived (most well off) growing at nearly three times that figure.  In contrast, the Scottish Government claims that in 2006 those from the least deprived areas were 5.8 times more likely to enter university than those in the most deprived areas, whereas now the difference is only 3.9 times, a substantial decrease in educational inequality.

The article is all about numbers, but the two parties seem to be saying contradictory things, one saying inequality is increasing, one saying it is decreasing!

Surely enough to make the average reader give up on experts, just like Michael Gove!

Of course, if you can read through the confusing array of leasts and mosts, the difference seems to be that the two parties are taking different base years: 2014 vs 2006, and that both can be true: a long-term improvement with decreasing inequality, but a short-term increase in inequality since 2014.  The former is good news, but the latter may be bad news, a change in direction that needs addressing, or simply 'noise' as we are talking about small changes on big numbers.

I looked in vain for a link to the data, web sites or reports on which this was based; after all, this is an article where the numbers are the story, but there are none.

After a bit of digging, I found that the data that both are using is from the UCAS Undergraduate 2016 End of Cycle Report (the numerical data for this figure and links to CSV files are below).

Figure from UCAS 2016 End of Cycle Report

Looking at these it is clear that the university participation rate for the least deprived quintile (Q5, blue line at top) has stayed around 40% with odd ups and downs over the last ten years, whereas the participation of the most deprived quintile has been gradually increasing, again with year-by-year wiggles.  That is, the ratio between least and most deprived used to be about 40:7 and is now about 40:10 – less inequality, as the SNP say.

For some reason 2014 was a dip year for Q5.  There is no real sign of a change in the long-term trend, but if you take 2014 to 2016, the increase in Q5 is larger than the increase in Q1, just as Scottish Labour say.  However, any other year would not give this picture.

In this case it looks like Scottish Labour either cherry-picked a year that made the story they wanted, or simply chose it accidentally.

The issue for me, though, is not so much who was right or wrong, but why the BBC didn't present this data and make it possible for readers to reach this judgement.

I can understand the argument that people do not like, or understand, numbers, but where, as in this case, the story is all about the numbers, why not at least present the raw data and ideally discuss why there is an apparent contradiction!

 

Numerical data from Figure 57 of the UCAS 2016 End of Cycle Report

2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016
Q1 7.21 7.58 7.09 7.95 8.47 8.14 8.91 9.52 10.10 9.72 10.90
Q2 13.20 12.80 13.20 14.30 15.70 14.40 14.80 15.90 16.10 17.40 18.00
Q3 21.10 20.60 20.70 21.30 23.60 21.10 22.10 22.50 22.30 24.00 24.10
Q4 29.40 29.10 30.20 30.70 31.50 29.10 29.70 29.20 28.70 30.30 31.10
Q5 42.00 39.80 41.40 42.80 41.70 40.80 41.20 40.90 39.70 41.10 42.30

UCAS provide the data in CSV form.  I converted this to the above tabular form and this is available in CSV or XLSX.
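As a sanity check, both parties' figures can be reproduced directly from the Q1 and Q5 rows of the table above. A small Python sketch (the `rate` helper is just for readability; the data are copied from the table):

```python
# Q1 (most deprived) and Q5 (least deprived) participation rates (%),
# from Figure 57 of the UCAS 2016 End of Cycle Report, 2006-2016.
years = list(range(2006, 2017))
q1 = [7.21, 7.58, 7.09, 7.95, 8.47, 8.14, 8.91, 9.52, 10.10, 9.72, 10.90]
q5 = [42.00, 39.80, 41.40, 42.80, 41.70, 40.80, 41.20, 40.90, 39.70, 41.10, 42.30]

def rate(year, series):
    """Look up the participation rate for a given year."""
    return series[years.index(year)]

# Scottish Government claim: the Q5:Q1 ratio has fallen since 2006.
print(round(rate(2006, q5) / rate(2006, q1), 1))  # 5.8 times in 2006
print(round(rate(2016, q5) / rate(2016, q1), 1))  # 3.9 times in 2016

# Scottish Labour claim: since 2014, Q5 has grown roughly three times as much as Q1.
q1_growth = rate(2016, q1) - rate(2014, q1)
q5_growth = rate(2016, q5) - rate(2014, q5)
print(round(q1_growth, 1), round(q5_growth, 1))  # 0.8 vs 2.6 percentage points
```

Both claims check out against the same data; they simply use different base years.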

the rise of the new liberal fascism

Across Europe the ultra-right wing raise again the ugly head of racism in scenes shockingly reminiscent of the late-1930s; while in America white supremacists throw stiff-armed salutes and shout “Heil Trump!”  It has become so common that reporters no longer even remark on the swastikas daubed as part of neo-Nazi graffiti.

Yet against this we are beginning to see a counter movement, spoken in the soft language of liberalism, often well intentioned, but creating its own brand of fascism.  The extremes of the right become the means to label whole classes of people as 'deplorable', too ignorant, stupid or evil to be taken seriously, in just the same way as the Paris terrorist attacks or Cologne sexual assaults were used by the ultra-right to label all Muslims and migrants.

Hillary Clinton quickly recanted her "basket of deplorables".  However, it is shocking that this was said at all, especially by a politician who made a point of preparedness, in contrast to Trump's off-the-cuff remarks.  For a speech, which will have been passed by Democrat PR experts as well as Clinton herself, to label half of Trump supporters, at that stage possibly 20% of the US electorate, as 'deplorable' says something about the common assumptions that are taken for granted, and is worrying because of that.

My concern has been growing for a long time, but I'm prompted to write now having read Pankaj Mishra's "Welcome to the age of anger" in the Guardian.  Mishra's article builds on previous work including Steven Levitt's Freakonomics and the growing discourse on post-truth politics.  He gives us a long and scholarly view from the Enlightenment, utopian visions of the 19th century, and models of economic self-interest through to the fall of the Berlin Wall, the rise of Islamic extremism and ultimately Brexit and Trump.

The toxicity of debate in both the British EU Referendum and US Presidential Election is beyond doubt.  In both debates both sides frequently showed a disregard for truth and taste, but there is little equivalence between the tenor of the Trump and Clinton campaigns, and, in the UK, the Leave campaign's flagrant disregard for fact made even Remain's claims of imminent third world war seem tame.

Indeed, to call either debate a 'debate' is perhaps misleading as rancour, distrust and vitriol dominated both, so much so that Jo Cox's vicious murder, even though the work of a single neo-Nazi individual, was almost unsurprising in the growing paranoia.

Mishra tries to interpret the frightening tide of anger sweeping the world, which seems to stand in such sharp contrast to rational enlightened self-interest and the inevitable rise of western democracy, which was the dominant narrative of the second half of the 20th century.  It is well argued, well sourced, the epitome of the very rationalism that it sees fading in the world.

It is not the argument itself that worries me, which is both illuminating and informing, but the tacit assumptions that lie behind it: the “age of anger” in the title itself and the belief throughout that those who disagree must be driven by crude emotions: angry, subject to malign ‘ressentiment‘, irrational … or to quote Lord Kerr (who to be fair was referring to ‘native Britains’ in general) just too “bloody stupid“.

Even the carefully chosen images portray the Leave campaigner and Trump supporter as almost bestial, rather than, say, the images of exultant joy at the announcement of the first Leave success in Sunderland, or even in the article’s own Trump campaign image, if you look from the central emotion filled face to those around.

[Images: guardian-leaver, telegraph-sunderland-win]
[Images: guardian-trump-1, guardian-trump-2]

The article does not condemn those that follow the “venomous campaign for Brexit” or the “rancorous Twitter troll”, instead they are treated, if not compassionately, impassionately: studied as you would a colony of ants or herd of wildebeest.

If those we disagree with are lesser beings, we can ignore them, not address their real concerns.

We would not treat them with the accidental cruelty that Dickens describes in pre-revolutionary Paris, but rather the paternalistic regard for the lower orders of pre-War Britain, or even the kindness of the more benign slave owner; folk not fully human, but worthy of care as you would a favourite dog.

Once we see our enemy as animal, or the populous as cattle, then, however well intentioned, there are few limits.

The 1930s should have taught us that.

 

the internet laws of the jungle

firefox-copyright-1

Where are the boundaries between freedom, license and exploitation, between fair use and theft?

I found myself getting increasingly angry today as the Mozilla Foundation stepped firmly beyond those limits, and moreover, with Trump-esque rhetoric, attempted to dupe others into following it.

It all started with a small text ad below the Firefox default screen search box:

firefox-copyright-2

Partly because of my ignorance of web-speak 'TFW' (I know, showing my age!), I clicked through to a petition page on the Mozilla Foundation site (PDF archive copy here).

It starts off fine, with stories of some of the silliness of current copyright law across Europe (you can't share photos of the Eiffel Tower at night) and problems for use in education (which does in fact have quite a lot of copyright exemptions in many countries).  It offers a petition to sign.

This all sounds good: partly due to rapid change, partly due to knee-jerk reactions, internet law does seem to be a bit of a mess.

If you blink you might miss one or two odd parts:

“This means that if you live in or visit a country like Italy or France, you’re not permitted to take pictures of certain buildings, cityscapes, graffiti, and art, and share them online through Instagram, Twitter, or Facebook.”

Read this carefully: a tourist forbidden from photographing cityscapes – silly!  But note those few extra words: "… and art".  So if I visit an exhibition of an artist or maybe even a photographer, and share a high-definition image (the Nokia Lumia 1020 has a 40-megapixel camera), is that OK?  Perhaps a thumbnail in the background of a selfie, but does Mozilla object to any rules that prevent copying of artworks?

mozilla-dont-break-the-internet

However, it is at the end, in a section labelled "don't break the internet", that the cyber fundamentalism really starts.

“A key part of what makes the internet awesome is the principle of innovation without permission — that anyone, anywhere, can create and reach an audience without anyone standing in the way.”

Again, at first this sounds like a cry for self-expression – except what if you happen to be an artist or writer and would like to make a living from that self-expression?

Again, it is clear that current laws have not kept up with change and in areas are unreasonably restrictive.  We need to be able to distinguish between a fair reference to something and seriously infringing its IP.  Likewise, we could distinguish the aspects of social media that are more like looking at holiday snaps over a coffee from pirate copies made for commercial profit.

However, in so many areas it is the other way round, our laws are struggling to restrict the excesses of the internet.

Just a few weeks ago a 14-year-old girl was given permission to sue Facebook.  Multiple times over a two-year period nude pictures of her were posted and reposted.  Facebook hides behind the argument that it is user content: it takes down the images when they are pointed out, and yet a massive technology company, which is able to recognise faces, is not able to identify the same photo being repeatedly posted.  Back to Mozilla: "anyone, anywhere, can create and reach an audience without anyone standing in the way" – really?

Of course this vision of the internet without boundaries is not just about self-expression, but freedom of speech:

“We need to defend the principle of innovation without permission in copyright law. Abandoning it by holding platforms liable for everything that happens online would have an immense chilling effect on speech, and would take away one of the best parts of the internet — the ability to innovate and breathe new meaning into old content.”

Of course, the petition is singling out EU law, which inconveniently includes various provisions to protect the privacy and rights of individuals – not the laws of dictatorships or centrally controlled countries.

So, who benefits from such an open and unlicensed world?  Clearly not the small artist or the victim of cyber-bullying.

Laissez-faire has always been an aim for big business, but without constraint it is the law of the jungle and always ends up benefiting the powerful.

In the 19th century it was child labour in the mills, only curtailed after long battles.

In the age of the internet, it is the vast US social media giants who hold sway, and of course the search engines, who just happen to account for $300 million of revenue for Mozilla Foundation annually, 90% of its income.

 

lies, damned lies and obesity

2016-07-15 11.02.43 - inews-obesity

Facts are facts, but the facts you choose to tell change the story, and, in the case of perceptions of the 'ideal body', can fuel physical and mental health problems, with consequent costs to society and damage to individual lives.

Today’s i newspaper includes an article entitled “Overweight and obese men ‘have higher risk of premature death’“.  An online version of the same article “Obese men three times more likely to die early” appeared online yesterday on the iNews website.  A similar article “Obesity is three times as deadly for men than women” reporting the same Lancet article appeared in yesterday’s Telegraph.

The text describes how moderately obese men die up to three years earlier than those of 'normal' weight [1]; clearly a serious issue in the UK given growing levels of child obesity and the fact that the UK has the highest levels of obesity in Europe.  The i quotes professors from Oxford and the British Heart Foundation, and the Telegraph report says that the Lancet article's authors suggest their results refute other recent research which found that being slightly heavier than 'normal' could be protective and extend lifespan.

The things in the reports are all true. However, to quote the Witness Oath of British courts, it is not sufficient to tell “the truth”, but also “the whole truth”.

The Telegraph article also helpfully includes a summary of the actual data on which the reports are based.

obesity-table

As the articles say, this does indeed show substantial risk for both men and women who are mildly obese (BMI>30) and extreme risk for those more severely obese (BMI>35). However, look to the left of the table and the column for those underweight (BMI<18.5).  The risks of being underweight exceed those of being mildly overweight, by a small amount for men and a substantial amount for women.
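For reference, the BMI bands used in the table follow the standard WHO cut-offs. A minimal Python sketch (the band labels mirror the articles' wording, and the example weight and height are made up):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kg divided by height in metres, squared."""
    return weight_kg / height_m ** 2

def bmi_band(b):
    """Classify a BMI value into the bands referred to in the reports (WHO cut-offs)."""
    if b < 18.5:
        return "underweight"
    elif b < 25:
        return "normal"        # 'normal' in the medical-chart sense
    elif b < 30:
        return "overweight"
    elif b < 35:
        return "mildly obese"
    else:
        return "severely obese"

print(bmi_band(bmi(80, 1.75)))  # 80 kg at 1.75 m -> BMI ~26.1, "overweight"
```

The point of the table is that both tails matter: values below 18.5 carry risks that the headlines ignore.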

While obesity is major issue, so is the obsession with dieting and the ‘ideal figure’, often driven by dangerously skinny fashion models.  The resulting problems of unrealistic and unhealthy body image, especially for the young, have knock-on impacts on self-confidence and mental health. This may then lead to weight problems, paradoxically including obesity.

The original Lancet academic article is low-key and balanced but, if reported accurately, the comments of at least one of the (large number of) article co-authors are less so.  However, the eventual news reports, from 'serious' papers at both ends of the political spectrum, while making good headlines, are not just misleading but potentially damaging to people's lives.

 

  1. I’ve put ‘normal’ in scare quotes, as this is the term used in many medical charts and language, but means something closer to ‘medically recommended’, and is far from ‘normal’ on society today.[back]

A tale of two conferences and the future of learning technology in the UK

Over the past few weeks I've been to two conferences focused on different aspects of technology and learning, Talis Insight Europe and ACM Learning at Scale (L@S). This led me to reflect on the potential for, and barriers to, ground-breaking research in these areas in the UK.

The first conference, Talis Insight Europe, grew out of the original Talis User Group, but as well as company updates on existing and new products, also has an extensive line-up of keynotes by major educational visionaries and decision makers (including pretty much the complete line-up of JISC senior staff) and end-user contributed presentations.

hole-in-the-wall-Begin02

The second, Learning @ Scale, grew out of the MOOC explosion, and deals with the new technology challenges and opportunities when we are dealing with vast numbers of students. It also had an impressive array of keynote speakers, including Sugata Mitra, famous for the 'Hole in the Wall', which brought technology to street children in India.

Although there were some common elements (big data and dashboards got a mention in both!), the audiences were quite different. For Insight, the large majority were from HE (Higher Education) libraries, followed by learning technologists, industry representatives, and HE decision-makers. In contrast, L@S consisted largely of academics, many from computing or technical backgrounds, with some industry researchers, including, as I was attending largely with my Talis hat on, me.

insight-2016-jisc-keynote

In a joint keynote at Insight, Paul Feldman and Phil Richards, the CEO and CIO of JISC, described the project to provide a learning analytics service [FR16,JI16] (including a student app and, of course, a dashboard) for UK institutions. As well as the practical benefits, they outlined a vision where the UK leads the way in educational big data for personalised learning.

Given a long track record of education and educational technology research in the UK, the world-leading distance-learning university provision of the Open University, and recent initiatives, both those outlined by JISC and FutureLearn (building on the OU's vast experience), this vision seems not unreasonable.

However, on the ground at Learning @ Scale, there was a very different picture; the vast majority of papers and attendees were from the US, and this despite the conference being held in Edinburgh.

To some extent this is as one might expect. While traditional distance learning, including the OU, has class sizes that feel massive to those in face-to-face institutions, these are dwarfed by those for MOOCs, which started in the US; and it is in the US where the main MOOC players (Coursera, Udacity, edX) are based. edX alone had initial funding more than ten times that available to FutureLearn, so in sheer investment terms, the balance at L@S is representative.

FutureLearn-logo

However, Mike Sharples, long-term educational technology researcher and Academic Lead at FutureLearn, was one of the L@S keynotes [Sh16]. In his presentation it was clear that FutureLearn and UK MOOCs punch well above their weight, with retention statistics several times higher than their US counterparts. While this may partly be due to topic areas, it is also a reflection of the development strategy. Mike outlined how empirically founded educational theory has driven the design of the FutureLearn platform, not least the importance of social learning. Perhaps then not surprisingly, one of the areas where FutureLearn substantially led over US counterparts was in social aspects of learning.

So there are positive signs for UK research in these areas. While JISC has had its own austerity-driven funding problems, its role as trusted intermediary and active platform creator offers a voice and forum that few, if any, other countries possess. Similarly, while FutureLearn needs to be sustainable, and so has to have a certain inward focus, it does seem to offer a wonderful potential resource for collaborative research. Furthermore, the open education resource (OER) community seems strong in the UK.

The Teaching Excellence Framework (TEF) [HC16,TH15] will bring its own problems, more about justifying student fee increases than education, potentially damaging education through yet more ill-informed political interference, and re-establishing class-based educational apartheid. However, it will certainly increase universities’ interest in education technology.

Set against this are challenges.

First was the topic of my own L@S work-in-progress paper – Challenge and Potential of Fine Grain, Cross-Institutional Learning Data [Dx16]. At Talis, we manage half a million reading lists, containing over 20 million resources, spread over more than 85 institutions including more than half of UK higher education. However, these institutions are all very different, and each of the half million courses may have only tens or low hundreds of students. That is very large scale in total volume, but highly heterogeneous. The JISC learning analytics repository will have exactly the same issues, which are far more difficult to deal with by machine learning or statistical analysis than the relatively homogeneous data from a single huge MOOC.

scale-up-and-down

These issues of heterogeneous scale are not unique to education; as a general information systems phenomenon, they are something I have been interested in for many years, and call the "long tail of small data" [Dx10,Dx15]. While this kind of data is more complex and difficult to deal with, it is of course a major research challenge, and potentially has greater long-term promise than the study of more homogeneous silos. I am finding this in my own work with musicologists [IC16,DC14], and it is emerging as an issue in the natural sciences [Bo13,PC07].

long-tail

Another problem is REF, the UK ‘Research Excellence Framework’. My post-hoc analysis of the REF data revealed the enormous bias in the computing sub-panel against any form of applied and human-oriented work [Dx15b,Dx15c]. Of course, this is not a new issue, just that the available data has made this more obvious and undeniable. This affects my own core research area of human–computer interaction, but also, and probably much more substantially, learning technology research. Indeed, I think most learning technologists had already sussed this out well before REF2014 as there were very few papers submitted in this area to the computing panel. I assume most research on learning technology was submitted to the education panel.

To some extent it does not matter where research is submitted and assessed; however, while in theory the mapping between university departments and submitted units is fluid for REF, in practice submitting to 'other' panels is problematic, making it difficult to write coherent narratives about the research environment. If learning technology research is not seen as REF-able in computing, computing departments will not recruit in these areas and will discourage this kind of research. While my hope is that REF2020 will not reiterate the mistakes of REF2014, there is no guarantee of this, and anyway the effects on institutional policy will already have been felt.

However, and happily, the kinds of research needed to make sense of this large-scale heterogeneous data may well prove more palatable to a computing REF panel than more traditional small-scale learning technology. It would be wonderful to see research collaborations between those with long-term experience and understanding of educational issues and those with hard-core machine learning and statistical analysis skills – this is BIG DATA, and challenging data. Indeed, one of the few UK papers at L@S involved Pearson's London-based data analysis department, and included automatic clustering, hidden Markov models, and regression analysis.

In short, while there are barriers in the UK, there is also great potential for exciting research that is both theoretically challenging and practically useful, bringing the insights available from large-scale educational data to help individual students and academics.

References

[Bo13] Christine L. Borgman. Big data and the long tail: Use and reuse of little data. Oxford eResearch Centre Seminar, 12th March 2013. http://works.bepress.com/borgman/269/

[Dx10] A. Dix (2010). In praise of inconsistency – the long tail of small data. Distinguished Alumnus Seminar, University of York, UK, 26th October 2011.
http://www.hcibook.com/alan/talks/York-Alumnus-2011-inconsistency/

[Dx15] A. Dix (2014/2015). The big story of small data. Talk at Open University, 11th November 2014; Oxford e-Research Centre, 10th July 2015; Mixed Reality Laboratory, Nottingham, 15th December 2015.
http://www.hcibook.com/alan/talks/OU-2014-big-story-small-data/

[DC14] Dix, A., Cowgill, R., Bashford, C., McVeigh, S. and Ridgewell, R. (2014). Authority and Judgement in the Digital Archive. In The 1st International Digital Libraries for Musicology workshop (DLfM 2014), ACM/IEEE Digital Libraries conference 2014, London 12th Sept. 2014. https://alandix.com/academic/papers/DLfM-2014/

[Dx15b] Alan Dix (2015/2016).  REF2014 Citation Analysis. accessed 8/5/2016.  https://alandix.com/ref2014/

[Dx15c] A. Dix (2015). Citations and Sub-Area Bias in the UK Research Assessment Process. In Workshop on Quantifying and Analysing Scholarly Communication on the Web (ASCW’15) at WebSci 2015 on June 30th in Oxford. http://ascw.know-center.tugraz.at/2015/05/26/dix-citations-and-sub-areas-bias-in-the-uk-research-assessment-process/

[Dx16]  Alan Dix (2016). Challenge and Potential of Fine Grain, Cross-Institutional Learning Data. Learning at Scale 2016. ACM. https://alandix.com/academic/papers/LS2016/

[FR16] Paul Feldman and Phil Richards (2016).  JISC – Helping the UK become the most advanced digital teaching and research nation in the world.  Talis Insight Europe 2016. https://talis.com/2016/04/29/jisc-keynote-paul-feldman-phil-richards-talis-insight-europe-2016/

[HC16] The Teaching Excellence Framework: Assessing Quality in Higher Education. House of Commons, Business, Innovation and Skills Committee, Third Report of Session 2015–16. HC 572.  29 February 2016.  http://www.publications.parliament.uk/pa/cm201516/cmselect/cmbis/572/572.pdf

[IC16] In Concert (2014-2016).  accessed 8/5/2016  http://inconcert.datatodata.com

[JI16]  Effective learning analytics. JISC, accessed   8/5/2016.  https://www.jisc.ac.uk/rd/projects/effective-learning-analytics

[PC07] C. L. Palmer, M. H. Cragin, P. B. Heidorn and L.C. Smith. 2007. Data curation for the long tail of science: The Case of environmental sciences. 3rd International Digital Curation Conference, Washington, DC. https://apps.lis.illinois.edu/wiki/ download/attachments/32666/Palmer_DCC2007.pdf

[Sh16]  Mike Sharples (2016).  Effective Pedagogy at Scale, Social Learning and Citizen Inquiry (keynote). Learning at Scale 2016. ACM. http://learningatscale.acm.org/las2016/keynotes/#k2

[TH15] Teaching excellence framework (TEF): everything you need to know.  Times Higher Education, August 4, 2015. https://www.timeshighereducation.com/news/teaching-excellence-framework-tef-everything-you-need-to-know

 

Of academic communication: overload, homeostasis and nostalgia

open-mailbox-silhouette

Revisiting an old paper on early email use and reflecting on scholarly communication now.

About 30 years ago, I was at a meeting in London and heard a presentation about a study of early email use at Xerox and the Open University. At Xerox the use of email was already part of their normal culture, but it was still new at the OU. I'd thought they had done a before-and-after study of one of the departments, but I remembered their conclusions clearly: email acted in addition to other forms of communication (face to face, phone, paper), but did not substitute for them.

Gilbert-Cockton-from-IDF

It was one of those pieces of work that I could recall, but didn't have a reference to. Facebook to the rescue! I posted about it and in no time had a series of helpful suggestions, including from Gilbert Cockton who nailed it, finding the meeting, the "IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems" (3 Feb 1989), and the precise paper:

P. Fung, T. O'Shea and S. Bly. Electronic mail viewed as a communications catalyst. IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems, pp. 1/1–1/3. INSPEC: 3381096. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=197821

In some extraordinary investigative journalism, Gilbert also noted that the first author, Pat Fung, went on to fresh territory after retirement, qualifying as a scuba-diving instructor at the age of 75.

The details of the paper were not exactly as I remembered. Rather than a before-and-after study, it was a comparison of the computing departments at Xerox (mature use of email) and the OU (email less ingrained, but already well used). Maybe I had simply embroidered the memory over the years, or maybe they presented newer work at the colloquium than was in the three-page extended abstract.  In those days this was common, as researchers did not feel they needed to milk every last result in a formal 'publication'. However, the conclusions were just as I remembered:

“An exciting finding is its indication that the use of sophisticated electronic communications media is not seen by users as replacing existing methods of communicating. On the contrary, the use of such media is seen as a way of establishing new interactions and collaboration whilst catalysing the role of more traditional methods of communication.”

As part of this process, following various leads from other Facebook friends, I spent some time looking at early CSCW conference proceedings, some at Saul Greenberg's early CSCW bibliography [1], and Ducheneaut and Watts' (15 years on) review of email research [2] in the 2005 HCI special issue on 'reinventing email' [3] (both notably missing the Fung et al. paper). I downloaded and skimmed several early papers including Wendy Mackay's lovely early (1988) study [4] that exposed the wide variety of ways in which people used email over and above simple 'communication'. So much to learn from this work when the field was still fresh.

This all led me to reflect on the Fung et al. paper, the process of finding it, and the lessons for email and other 'communication' media today.

Communication for new purposes

A key finding was that "the use of such media is seen as a way of establishing new interactions and collaboration".  Of course, the authors and their subjects could not have envisaged current social media, but the finding of this paper was exactly an example of this. In 1989 if I had been trying to find a paper, I would have scoured my own filing cabinet and bookshelves, those of my colleagues, and perhaps asked people when I met them. Nowadays I pop the question into Facebook and within minutes the advice starts to appear, and not long after I have a scanned copy of the paper I was after.

Communication as a good thing

In the paper abstract, the authors say that an "exciting finding" of the paper is that "the use of sophisticated electronic communications media is not seen by users as replacing existing methods of communicating."  Within the paper, this is phrased even more strongly:

“The majority of subjects (nineteen) also saw no likelihood of a decrease in personal interactions due to an increase in sophisticated technological communications support and many felt that such a shift in communication patterns would be undesirable.”

Effectively, email was seen as potentially damaging if it replaced other more human means of communication, and the good outcome of this report was that this did not appear to be happening (or strictly subjects believed it was not happening).

However, by the mid-1990s, papers discussing ’email overload’ started to appear [5].

I recall a morning radio discussion of email overload about ten years ago. The presenter asked someone else in the studio if they thought this was a problem. Quite un-ironically, they answered, "no, I only spend a couple of hours a day".  I have found my own pattern of email use change when I switched from the highly structured Eudora (with over 2000 email folders) to Gmail (mail is like a Facebook feed: if it isn't on the first page it doesn't exist). I was recently talking to another academic who explained that two years ago he had deliberately adopted "email as stream" as a policy to control unmanageable volumes.

If only they had known …

Communication as substitute

While Fung et al.’s respondents reported that they did not foresee a reduction in non-electronic forms of communication, in fact even within the paper the signs of this shift to digital are evident.

Here are the graphs of communication frequency for the Open University (30 people, more recent use of email) and Xerox (36 people, more established use) respectively.

( from Fung et al., 1989)

( from Fung et al., 1989)

It is hard to draw exact comparisons as it appears there may have been a higher overall volume of communication at Xerox (because of email?).  Certainly, at that point, face-to-face communication remained strong at Xerox, but it appears that not only the proportion, but the total volume of non-digital, non-face-to-face communication was lower than at the OU.  That is, substitution had already happened.

Again, this is obvious nowadays: although the volume of electronic communications would have been untenable on paper (I’ve sometimes imagined printing out a day’s email and trying to cram it into a pigeon-hole), the volume of paper communications has diminished markedly. A report prepared for Royal Mail in 2013 recorded a 3–6% per annum reduction in letters over recent years and projected a further 4% per annum decline for the foreseeable future [6].

academic communication and national meetings

However, this also made me think about the IEE Colloquium itself. Back in the late 1980s and 1990s it was common to attend small national or local meetings to meet with others and present work, often early stage, for discussion. In other fields this still happens, but in HCI it has all but disappeared. Maybe this is a little nostalgia on my part, but it does seem a real loss, as such meetings were a great way for new PhD students to present their work and meet the leaders in their field. Of course, this can happen if you get your CHI paper accepted, but the barriers are higher, particularly for those in smaller and less well-resourced departments.

Some of this is because international travel is cheaper and faster, and so national meetings have reduced in importance – everyone goes to the big global (largely US) conferences. Many years ago, research on day-to-day time use suggested that we have a travel ‘time budget’ that is relatively constant across countries and across different kinds of areas within the same country [7]. The same is clearly true of academic travel time: we have a certain budget, and if we travel more internationally then we do correspondingly less nationally.

(from Zahavi, 1979)

However, I wonder if digital communication also had a part to play. I knew about the Fung et al. paper, even though it was not in the large reviews of CSCW and email, because I had been there. Indeed, the reason that the Fung et al. paper was not cited in the relevant reviews would have been because it was in a small venue, available only as a paper copy, and only if you knew it existed. It was presumably also below the digital radar until it was, I assume, scanned by IEE archivists and deposited in the IEEE digital library.

However, despite the advantages of this easy access to one another and scholarly communication, I wonder if we have also lost something.

In the 1980s, physical presence and co-presence at an event were crucial for academic communication. Proceedings were paper and precious; I would at least skim-read all of the proceedings of any event I had been to, even those of large conferences, because they were rare and because they were to hand. Reference lists at the end of my papers were shorter than now, but possibly more diverse and more in-depth, compared to the more directed ‘search for the relevant terms’ literature reviews of the digital age.

And looking back at some of those early papers, from days when publish-or-perish was not so extreme, and cardiac failure was not yet an occupational hazard for academics (except maybe due to the Cambridge sherry allowance), I am struck that this crucial piece of early research was not dressed up with an extra 6000 words of window dressing to make a ‘high impact’ publication, but simply shared. Were things more fun?


 

[1] Saul Greenberg (1991) “An annotated bibliography of computer supported cooperative work.” ACM SIGCHI Bulletin, 23(3), pp. 29-62. July. Reprinted in Greenberg, S. ed. (1991) “Computer Supported Cooperative Work and Groupware”, pp. 359-413, Academic Press. DOI: http://dx.doi.org/10.1145/126505.126508
https://pdfs.semanticscholar.org/52b4/d0bb76fcd628c00c71e0dfbf511505ae8a30.pdf

[2] Nicolas Ducheneaut and Leon A. Watts (2005). In search of coherence: a review of e-mail research. Hum.-Comput. Interact. 20, 1 (June 2005), 11-48. DOI= 10.1080/07370024.2005.9667360
http://www2.parc.com/csl/members/nicolas/documents/HCIJ-Coherence.pdf

[3] Steve Whittaker, Victoria Bellotti, and Paul Moody (2005). Introduction to this special issue on revisiting and reinventing e-mail. Hum.-Comput. Interact. 20, 1 (June 2005), 1-9.
http://www.tandfonline.com/doi/abs/10.1080/07370024.2005.9667359

[4] Wendy E. Mackay. 1988. More than just a communication system: diversity in the use of electronic mail. In Proceedings of the 1988 ACM conference on Computer-supported cooperative work (CSCW ’88). ACM, New York, NY, USA, 344-353. DOI=http://dx.doi.org/10.1145/62266.62293
https://www.lri.fr/~mackay/pdffiles/TOIS88.Diversity.pdf

[5] Steve Whittaker and Candace Sidner (1996). Email overload: exploring personal information management of email. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’96), Michael J. Tauber (Ed.). ACM, New York, NY, USA, 276-283. DOI=http://dx.doi.org/10.1145/238386.238530
https://www.ischool.utexas.edu/~i385q/readings/Whittaker_Sidner-1996-Email.pdf

[6] The outlook for UK mail volumes to 2023. PwC prepared for Royal Mail Group, 15 July 2013
http://www.royalmailgroup.com/sites/default/files/The%20outlook%20for%20UK%20mail%20volumes%20to%202023.pdf

[7] Yacov Zahavi (1979). The ‘UMOT’ Project. Prepared for the U.S. Department of Transportation and the Ministry of Transport, Fed. Rep. of Germany.
http://www.surveyarchive.org/Zahavi/UMOT_79.pdf

principles vs guidelines

I was recently asked to clarify the difference between usability principles and guidelines.  Having written a page-full of answer, I thought it was worth popping on the blog.

As with many things, the boundary between the two is not absolute … and the term ‘guidelines’ also tends to be used differently at different times!

However, as a general rule of thumb:

  • Principles tend to be very general and would apply pretty much across different technologies and systems.
  • Guidelines tend to be more specific to a device or system.

As an example of the latter, look at the iOS Human Interface Guidelines on “Adaptivity and Layout”.  It starts with a general principle:

“People generally want to use their favorite apps on all their devices and in multiple contexts”,

but then rapidly turns that into more mobile-specific, and then iOS-specific, guidelines, talking first about different screen orientations, and then about specific iOS screen size classes.

I note that the definition on page 259 of Chapter 7 of the HCI textbook is slightly ambiguous.  When it says that guidelines are less authoritative and more general in application, it means in comparison to standards … although I’d now add a few caveats for the latter too!

Basically in terms of ‘authority’, from low to high:

  • principles (lowest) — agreed by the community, but not mandated
  • guidelines — proposed by the manufacturer, but rarely enforced
  • standards (highest) — mandated by a standards authority

In terms of general applicability, high to low:

  • principles (highest) — very broad, e.g. ‘observability’
  • guidelines — more specific, but still allowing interpretation
  • standards (lowest) — very tight

This ‘generality of application’ dimension is a little more complex, as guidelines are often manufacturer-specific and so arguably less ‘generally applicable’ than standards, but the range of situations that standards apply to is usually much tighter.

On the whole the more specific the rules, the easier they are to apply.  For example, the general principle of observability requires that the designer think about how it applies in each new application and situation. In contrast, a more specific rule that says, “always show the current editing state in the top right of the screen” is easy to apply, but tells you nothing about other aspects of system state.

Scopus vs Google Scholar in Computer Science

In response to a Facebook thread about my recent LSE Impact Blog, “Evaluating research assessment: Metrics-based analysis exposes implicit bias in REF2014 results“, Joe Marshall commented,

“Citation databases are a pain, because you can’t standardise across fields. For computer science, Google scholar is the most comprehensive, although you could argue that it overestimates because it uses theses etc as sources. Scopus, web of knowledge etc. all miss out some key publications which is annoying”

 

My answer was getting a little too complicated for a Facebook reply; hence a short blog post.

While for any individual paper there is a lot of variation between Scopus and Google Scholar, from my experience with the data I would say they are not badly correlated if you look at big enough units.  There are a few exceptions, notably biotech papers, which tend to be placed more highly under Scopus than GS.

Crucial for REF is how this works at the level of whole-institution data.  I took a quick peek at the REF institution data, comparing top-quartile counts for Scopus and Google Scholar; that is, the proportion of papers submitted from each institution that were in the top 25% of papers when ranked by citation counts.  The top quartile is chosen as it should be a reasonable predictor of 4* (about 22% of papers).
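As a rough sketch of that computation (a hypothetical data layout, not the actual REF or citation-database files), the top-quartile share per institution could be computed like this:

```python
from collections import defaultdict

def top_quartile_share(papers):
    """papers: list of (institution, citation_count) pairs.
    Returns, for each institution, the fraction of its submitted papers
    falling in the top 25% of ALL papers ranked by citation count."""
    counts = sorted(c for _, c in papers)
    # citation count at the 75th percentile of all submitted papers
    threshold = counts[(3 * len(counts)) // 4]
    totals, top = defaultdict(int), defaultdict(int)
    for inst, c in papers:
        totals[inst] += 1
        if c >= threshold:
            top[inst] += 1
    return {inst: top[inst] / totals[inst] for inst in totals}

# toy data: institution name and citation count per submitted paper
data = [('A', 1), ('A', 10), ('A', 3), ('A', 20),
        ('B', 2), ('B', 50), ('B', 7), ('B', 0)]
print(top_quartile_share(data))   # {'A': 0.25, 'B': 0.25}
```

Ties at the threshold mean the ‘top quartile’ can in practice hold slightly more than 25% of papers, and a real analysis would also need to normalise citation counts by year and field.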

The first of these graphs shows Scopus (x-axis) vs Google Scholar (y-axis) for whole institutions.  The red line is at 45 degrees, representing an exact match.  Note that many institutions are relatively small, so we would expect a level of spread.

inst-scopus-vs-google-top-quartile-with-line

While far from perfect, there is clustering around the line and crucially for all types of institution.  The major outlier (green triangle to the right) is Plymouth which does have a large number of biomed papers. In short, while one citation metric might be better than the other, they do give roughly similar outcomes.

This is very different from what happens if you compare either with actual REF 4* results:

inst-scopus-top-quartile-vs-REF-4star-with-line   inst-google-top-quartile-vs-REF-4star-with-line

In both cases, not only is there far less agreement, but there are also systematic effects.  In particular, the post-1992 institutions largely sit below the red line; that is, they are scored far less highly by the REF panel than by either Scopus or Google Scholar.  This is a slightly different metric, but precisely the result I previously found when looking at institutional bias in REF.

Note that all of these graphs look far tighter if you measure GPA rather than 4* results, but of course it is largely 4* that is funded.

hope and despair

I have spent a good part of the day drafting my personal response to Lord Stern’s review of the Research Excellence Framework; trying to add some positive suggestions to an otherwise gloomy view of the REF process.

My LSE impact blog “Evaluating research assessment: Metrics-based analysis exposes implicit bias in REF2014 results” also came out today, good to see and important to get the message out, but hardly positive; my final words were:

“despite the best efforts of all involved, the REF output assessment process is not fit for purpose”,

and this on a process that consumed a good part of a year of my life … depressing.

However, then on Facebook I saw the announcement:

Professor Tom Rodden announced as EPSRC's Deputy CEO

Yay, a sensible voice near the heart of UK research … a glimmer of light flickers on the horizon.

 

 

Human-Like Computing

Last week I attended an EPSRC workshop on “Human-Like Computing“.

The delegate pack offered a tentative definition:

“offering the prospect of computation which is akin to that of humans, where learning and making sense of information about the world around us can match our human performance.” [E16]

However, the purpose of this workshop was to clarify, and expand on this, exploring what it might mean for computers to become more like humans.

It was an interdisciplinary meeting, with some participants coming from more technical disciplines such as cognitive science, artificial intelligence, machine learning and robotics; others from psychology or the study of human and animal behaviour; and some, like myself, from HCI or human factors, bridging the two.

Why?

Perhaps the first question is why one might even want more human-like computing.

There are two obvious reasons:

(i) Because it is a good model to emulate — Humans are able to solve some problems, such as visual pattern finding, which computers find hard. If we can understand human perception and cognition, then we may be able to design more effective algorithms. For example, in my own work colleagues and I have used models based on spreading activation and layers of human memory when addressing ‘web scale reasoning’ [K10,D10].

(ii) For interacting with people — There is considerable work in HCI on making computers easier to use, but there are limitations. Often we are happy for computers to be simply ‘tools’, but at other times, such as when your computer notifies you of an update in the middle of a talk, you wish it had a little more human understanding. One example of this is recent work at Georgia Tech teaching human values to artificial agents by reading them stories! [F16]

To some extent (i) is simply the long-standing area of nature-inspired or biologically-inspired computing. However, the combination of computational power and psychological understanding means that perhaps we are at the point where new strides can be made. Certainly, the success of ‘deep learning’ and the recent computer mastery of Go suggest this. In addition, by my own calculations, for several years the internet as a whole has had more computational power than a single human brain, and we are very near the point where we could simulate a human brain in real time [D05b].

Both goals, but particularly (ii), suggest a further goal:

(iii) new interaction paradigms — We will need to develop new ways to design for interacting with human-like agents and robots, not least how to avoid the ‘uncanny valley’ and how to avoid the appearance of over-competence that has bedevilled much work in this broad area. (see more later)

Both goals also offer the potential for a fourth secondary goal:

(iv) learning about human cognition — In creating practical computational algorithms based on human qualities, we may come to better understand human behaviour, psychology and maybe even society. For example, in my own work on modelling regret (see later), it was aspects of the computational model that highlighted the important role of ‘positive regret’ (“the grass is greener on the other side”) in helping us avoid ‘local minima’, where we stick to the things we know and do not explore new options.

Human or superhuman?

Of course humans are not perfect; do we want to emulate their limitations and failings?

For understanding humans (iv), the answer is probably “yes”, and maybe by understanding human fallibility we may be in a better position to predict and prevent failures.

Similarly, for interacting with people (ii), the agents should show at least some level of human limitations (even if ‘put on’); for example, a chess program that always wins would not be much fun!

However, for simply improving algorithms, goal (i), we may want to take the ‘best bits’ of human cognition and merge them with the best aspects of artificial computation. Of course, it may be that the frailties are also the strengths; for example, the need to come to decisions and act in relatively short timescales (in terms of brain ‘ticks’) may be one way in which we avoid ‘over-learning’, a common problem in machine learning.

In addition, the human mind has developed to work with the nature of neural material as a substrate, and the physical world, both of which have shaped the nature of human cognition.

Very simple animals learn purely by Skinner-like response training, effectively what AI would term sub-symbolic. However, this level of learning requires many exposures to similar stimuli. For rarer occurrences, which do not happen frequently within a lifetime, learning must proceed at the very slow pace of the genetic development of instincts. In contrast, conscious reasoning (symbolic processing) allows us to learn from a single or very small number of exposures: ideal for infrequent events or novel environments.

Big Data means that computers effectively have access to vast amounts of ‘experience’, and researchers at Google have remarked on the ‘Unreasonable Effectiveness of Data’ [H09] that allows problems, such as translation, to be tackled in a statistical or sub-symbolic way which previously would have been regarded as essentially symbolic.

Google are now starting to recombine statistical techniques with more knowledge-rich techniques in order to achieve better results again. As humans we continually employ both types of thinking, so there are clear human-like lessons to be learnt, but the eventual system will not have the same ‘balance’ as a human.

If humans had developed with access to vast amounts of data and maybe other people’s experience directly (rather than through culture, books, etc.), would we have developed differently? Maybe we would do more things unconsciously that we do consciously. Maybe with enough experience we would never need to be conscious at all!

More practically, we need to decide how to make use of this additional data. For example, learning analytics is becoming an important part of educational practice. If we have an automated tutor working with a child, how should we make use of the vast body of data about other tutors’ interactions with other children?  Should we have a very human-like tutor that effectively ‘reads’ learning analytics just as a human tutor would look at a learning ‘dashboard’? Alternatively, we might have a more loosely human-inspired ‘hive-mind’ tutor that ‘instinctively’ makes pedagogic choices based on the overall experience of all tutors, but maybe in an unexplainable way?

What could go wrong …

There have been a number of high-profile statements in the last year about the potential coming ‘singularity’ (when computers are clever enough to design new computers leading to exponential development), and warnings that computers could become sentient, Terminator-style, and take over.

There was general agreement at the workshop that this kind of risk is overblown and that, despite breakthroughs such as the mastery of Go, these systems are still very domain-limited. It will be many years before we have to worry about even general intelligence in robots, let alone sentience.

A far more pressing problem is that of incapable computers, which make silly mistakes, and the way in which people, maybe because of the media attention to the success stories, assume that computers are more capable than they are!

Indeed, over-confidence in algorithms is not just a problem for the general public, but also among computing academics, as I found from my personal experience on the REF panel.

There are of course many ethical and legal issues raised as we design computer systems that are more autonomous. This is already being played out with driverless cars, with issues of insurance and liability. Some legislators are suggesting allowing driverless cars, but only if there is a driver there to take control … but if the car relinquishes control, how do you safely manage the abrupt change?

Furthermore, while the vision of autonomous robots taking over the world is still far-fetched, more surreptitious control is already with us. Whether it is Uber cabs called by algorithm, or simply Google’s ranking of search results prompting particular holiday choices, we are all, to varying extents, doing “what the computer tells us”. I recall that in the Dalek Invasion of Earth, the very un-human-like Daleks could not move easily amongst the rubble of war-torn London. Instead they used ‘hypnotised men’ controlled by some form of neural headset. If the Daleks had landed today and simply taken over or digitally infected a few cloud computing services, would we know?

Legibility

Sometimes it is sufficient to have a ‘black box’ that makes decisions and acts. So long as it works, we are happy. However, a key issue for many ethical and legal questions, but also for practical interaction, is the ability to interrogate a system, to seek explanations of why a decision has been made.

Back in 1992 I wrote about these issues [D92], in the early days when neural networks and other forms of machine learning were being proposed for a variety of tasks, from controlling nuclear fusion reactions to credit scoring. One particular scenario was the use of an algorithm to pre-sort large numbers of job applications. How could you know whether the algorithm was being discriminatory? How could a company using such an algorithm defend itself if such an accusation were brought?

One partial solution, then as now, was to accept that the underlying learning mechanisms may involve emergent behaviour from statistical, neural network or other forms of opaque reasoning. However, this opaque initial learning process should give rise to an intelligible representation. This is rather akin to a judge who might have a gut feeling that a defendant is guilty or innocent, but needs to explicate that in a reasoned legal judgement.

This approach was exemplified by Query-by-Browsing, a system that creates queries from examples (using a variant of ID3), but then converts these into SQL queries. This was subsequently implemented [D94], and is still running as a web demonstration.
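The second half of that pipeline, turning a learned decision tree into SQL, can be sketched roughly as follows. This is only an illustrative reconstruction, not the actual Query-by-Browsing code: the tree here is hand-written rather than induced by ID3, and the attribute names are invented.

```python
# A learned decision tree represented as nested tuples:
#   ('split', attribute, threshold, subtree_if_below, subtree_if_at_or_above)
#   ('leaf', True/False) for selected / not-selected records.
tree = ('split', 'salary', 30000,
        ('leaf', False),
        ('split', 'age', 40,
         ('leaf', True),
         ('leaf', False)))

def tree_to_sql(tree, table):
    """Collect the conjunction of conditions on every path to a True leaf,
    then join the paths into a single disjunctive WHERE clause."""
    paths = []
    def walk(node, conds):
        if node[0] == 'leaf':
            if node[1]:
                paths.append(' AND '.join(conds) or 'TRUE')
            return
        _, attr, thresh, below, above = node
        walk(below, conds + [f'{attr} < {thresh}'])
        walk(above, conds + [f'{attr} >= {thresh}'])
    walk(tree, [])
    where = ' OR '.join(f'({p})' for p in paths)
    return f'SELECT * FROM {table} WHERE {where}'

print(tree_to_sql(tree, 'employees'))
# SELECT * FROM employees WHERE (salary >= 30000 AND age < 40)
```

The point of the transformation is that the opaque induction step ends in an inspectable artefact: the user can read (and challenge) the generated WHERE clause even if they could never audit the learning process itself.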

For many years I have argued that it is likely that our ‘logical’ reasoning arises precisely from this need to explain our own tacit judgements to others. While we simply act individually, or learn by observing the actions of others, this can remain largely tacit; but as soon as we want others to act in planned, collaborative ways, for example to kill a large animal, we need to convince them. Once we have the mental mechanisms to create these explanations, they become internalised, so that we end up with internal means to question our own thoughts and judgements, and even use them constructively to tackle problems more abstract and complex than any found in nature. That is, dialogue leads to logic!

Scenarios

We split into groups and discussed scenarios as a means to understand the potential challenges for human-like computing. Over multiple sessions, the group I was in discussed one main scenario and then a variant.

Paramedic for remote medicine

The main scenario consisted of a patient far from a central medical centre, with an intelligent local agent communicating intermittently and remotely with a human doctor. Surprisingly, the remote aspect of the scenario was not initially proposed by me thinking of Tiree, but by another member of the group thinking about some of the remote parts of the Scottish mainland.

The local agent would need to be able to communicate with the patient, express a level of empathy, physically examine the patient (needing touch sensing and vision), and discuss symptoms. On some occasions, like a triage nurse, the agent might be sufficiently certain to make a diagnosis and recommend treatment. At other times it might need to pass the case on to the remote doctor, describing what had been done in terms of examination, symptoms observed and information gathered from the patient, in the same way that a paramedic does when handing over a patient at the hospital. Even after the handover of responsibility, the local agent might still form part of the remote diagnosis, and might be able to take over again once the doctor has determined an overall course of action.

The scenario embodied many aspects of human-like computing:

  • The agent would require a level of emotional understanding to interact with the patient.
  • It would require fine and situation-contingent robotic features to allow physical examination.
  • Diagnosis and decisions would need to be guided by rich human-inspired algorithms based on large corpora of medical data, case histories and knowledge of the particular patient.
  • The agent would need to be able to explain its actions both to the patient and to the doctor. That is, it would not only need to transform its own internal representations into forms intelligible to a human, but do so in multiple ways depending on the inferred knowledge and nature of the person.
  • Ethical and legal responsibility are key issues in medical practice.
  • The agent would need to be able to manage handovers of control.
  • The agent would need to understand its own competencies in order to know when to call in the remote doctor.

The scenario could concern physical or mental health. The latter is particularly important given recent statistics suggesting that only 10% of people in the UK suffering mental health problems receive suitable help.

Physiotherapist

As a still more specific scenario, one of the group related how he had been to see an experienced physiotherapist after a failed diagnosis by a previous physician. Rather than jumping straight into a physical examination, or even apparently watching the patient’s movement, the physiotherapist proceeded to chat for 15 minutes about aspects of the patient’s life, work and exercise. At the end of this process, the physiotherapist said, “I think I know the problem”, and proceeded to administer a directed test, which correctly diagnosed the problem and led to successful treatment.

Clearly the conversation had given the physiotherapist a lot of information about potential causes of injury, aided by many years observing similar cases.

To do this using an artificial agent would suggest some level of:

  • theory/model of day-to-day life

Thinking about the more conversational aspects of this I was reminded of the PhD work of Ramanee Peiris [P97]. This concerned consultations on sensitive subjects such as sexual health. It was known that when people filled in (initially paper) forms prior to a consultation, they were more forthcoming and truthful than if they had to provide the information face-to-face. This was even if the patient knew that the person they were about to see would read the forms prior to the consultation.

Ramanee’s work extended this, first to electronic forms and then to chat-bot-style discussions which were semi-scripted, but used simple textual matching to determine which topics had been covered, including those spontaneously introduced by the patient. Interestingly, the more human-like the system became, the more truthful and forthcoming the patients were, even though they were less so with a real human.

As well as revealing lessons for human interactions with human-like computers, this also showed that human-like computing may be possible with quite crude technologies. Indeed, even Eliza was treated (to Weizenbaum’s alarm) as if it really were a counsellor, even though people knew it was ‘just a computer’ [W66].

Cognition or Embodiment?

I think it fair to say that the overall balance, certainly in the group I was in, was towards the cognitivist: that is, a more Cartesian approach, starting with understanding and models of internal cognition, and then seeing how these play out in external action. Indeed, the term ‘representation’ was used repeatedly as an assumed central aspect of any human-like computing, and there was even talk of resurrecting Newell’s project for a ‘unified theory of cognition’ [N90].

There did not appear to be any hard-core embodiment theorists at the workshop, although several people had sympathies in that direction. This was perhaps just as well, as we could easily have degenerated into well-rehearsed arguments for and against embodiment- versus cognition-centred explanations … not least about the critical word ‘representation’.

However, I did wonder whether a path that deliberately took embodiment as central would be valuable. How many human-like behaviours could be modelled in this way, taking external perception–action as central and only taking on internal representations when they were absolutely necessary (Andy Clark’s 007 principle)? [C98]

Such an approach would meet limits, not least the physiotherapist’s 15-minute chat, but I would guess it would be more successful over a wider range of behaviours and scenarios than we might at first think.

Human–Computer Interaction and Human-Like Computing

Both Russell and I were there partly representing our own research interests, but also more generally as part of the HCI community, looking at the way human-like computing would intersect existing HCI agendas, or maybe create new challenges and opportunities (see poster). It was certainly clear during the workshop that there is a substantial role for human factors, from fine motor interactions, to conversational interfaces, to socio-technical systems design.

Russell and I presented a poster, which largely focused on these interactions.

HCI-HLC-poster

There are two sides to this:

  • understanding and modelling for human-like computing — HCI studies and models complex, real-world human activities and situations. Psychological experiments and models tend to be very deep and detailed, but narrowly focused and using controlled, artificial tasks. In contrast, HCI’s broader, albeit shallower, approach and focus on realistic or even ‘in the wild’ tasks and situations may mean that we are in an ideal position to inform human-like computing.

  • human interfaces for human-like computing — As noted in goal (iii), we will need paradigms for humans to interact with human-like computers.

As an illustration of the first of these, the poster used my work on making sense of the apparently ‘bad’ emotion of regret [D05].

An initial cognitive model of regret was formulated, involving a rich mix of imagination (in order to pull past events and actions to mind), counterfactual modal reasoning (in order to work out what would have happened), emotion (which is modified to feel better or worse depending on the possible alternative outcomes), and Skinner-like low-level behavioural learning (the eventual purpose of regret).

cog-model

This initial descriptive and qualitative cognitive model was then realised in a simplified computational model, with a separate ‘regret’ module that could be plugged into a basic behavioural learning system.  Both the basic system and the system with regret learnt, but the addition of regret did so with between 5 and 10 times fewer exposures.  That is, regret made a major improvement to the machine learning.

architecture
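To make the general idea concrete, here is a toy sketch of my own, not the actual model from [D05]: a counterfactual ‘regret’ signal is plugged into simple Skinner-like reinforcement learning, so that after each action the learner compares what it got with what it believed the best alternative would have given, and lets that difference push it away from comfortably mediocre choices.

```python
import random

ARMS = [0.2, 0.5, 0.9]      # fixed payoff of each possible action

def train(steps, use_regret, seed=1):
    rng = random.Random(seed)
    q = [0.0] * len(ARMS)   # learned value estimate per action
    alpha, eps = 0.1, 0.2
    for _ in range(steps):
        # epsilon-greedy: mostly exploit the best estimate, sometimes explore
        a = rng.randrange(len(q)) if rng.random() < eps \
            else max(range(len(q)), key=q.__getitem__)
        target = ARMS[a]
        if use_regret:
            # counterfactual comparison: how much better did the best
            # alternative *seem*?  High regret makes the chosen action
            # look worse, nudging the learner towards the alternatives.
            best_alt = max(v for i, v in enumerate(q) if i != a)
            target -= 0.5 * max(0.0, best_alt - q[a])
        q[a] += alpha * (target - q[a])
    return q

q_plain  = train(500, use_regret=False)
q_regret = train(500, use_regret=True)
# both learners end up preferring the best action (index 2)
```

In the actual work the effect reported was much stronger than anything this toy shows: the regret module cut the number of exposures needed by a factor of 5 to 10.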

Turning to the second: direct manipulation has been at the heart of interaction design since the PC revolution in the 1980s. Prior to that, command-line interfaces (or worse, job-control interfaces) offered a mediated paradigm, where operators ‘asked’ the computer to do things for them. Direct manipulation changed that, turning the computer into a passive virtual world of computational objects on which you operate with the aid of tools.

To some extent we need to shift back to the 1970s mediated paradigm, but renewed: where the computer is no longer like a severe bureaucrat demanding precise grammatical and procedural requests, but instead a helpful and understanding aide. For this we can draw upon existing areas of HCI such as human–human communication, intelligent user interfaces, conversational agents and human–robot interaction.

References

[C98] Clark, A. 1998. Being There: Putting Brain, Body and the World Together Again. MIT Press. https://mitpress.mit.edu/books/being-there

[D92] A. Dix (1992). Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451. http://www.hcibook.com/alan/papers/neuro92/

[D94] A. Dix and A. Patrick (1994). Query By Browsing. Proceedings of IDS’94: The 2nd International Workshop on User Interfaces to Databases, Ed. P. Sawyer. Lancaster, UK, Springer Verlag. 236-248.

[D05] A. Dix (2005). The adaptive significance of regret. Unpublished essay. https://alandix.com/academic/essays/regret.pdf

[D05b] A. Dix (2005). the brain and the web – a quick backup in case of accidents. Interfaces, 65, pp. 6-7. Winter 2005. https://alandix.com/academic/papers/brain-and-web-2005/

[D10] A. Dix, A. Katifori, G. Lepouras, C. Vassilakis and N. Shabir (2010). Spreading Activation Over Ontology-Based Resources: From Personal Context To Web Scale Reasoning. International Journal of Semantic Computing, Special Issue on Web Scale Reasoning: scalable, tolerant and dynamic. 4(1) pp.59-102. http://www.hcibook.com/alan/papers/web-scale-reasoning-2010/

[E16] EPSRC (2016). Human Like Computing Handbook. Engineering and Physical Sciences Research Council. 17–18 February 2016.

[F16] Alison Flood (2016). Robots could learn human values by reading stories, research suggests. The Guardian, Thursday 18 February 2016 http://www.theguardian.com/books/2016/feb/18/robots-could-learn-human-values-by-reading-stories-research-suggests

[H09] Alon Halevy, Peter Norvig, and Fernando Pereira. 2009. The Unreasonable Effectiveness of Data. IEEE Intelligent Systems 24, 2 (March 2009), 8-12. DOI=10.1109/MIS.2009.36

[K10] A. Katifori, C. Vassilakis and A. Dix (2010). Ontologies and the Brain: Using Spreading Activation through Ontologies to Support Personal Interaction. Cognitive Systems Research, 11 (2010) 25–41. https://alandix.com/academic/papers/Ontologies-and-the-Brain-2010/

[N90] Allen Newell. 1990. Unified Theories of Cognition. Harvard University Press, Cambridge, MA, USA. http://www.hup.harvard.edu/catalog.php?isbn=9780674921016

[P97] DR Peiris (1997). Computer interviews: enhancing their effectiveness by simulating interpersonal techniques. PhD Thesis, University of Dundee. http://virtual.inesc.pt/rct/show.php?id=56

[W66] Joseph Weizenbaum. 1966. ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (January 1966), 36-45. DOI=http://dx.doi.org/10.1145/365153.365168