Physigrams get their own micro-site!
See it now at physicality.org/physigrams
Appropriate physical design can make the difference between an intuitively obvious device and one that is inscrutable. Physigrams are a way of modelling and analysing the interactive physical characteristics of devices from TV remotes to electric kettles, filling the gap between foam prototypes and code.
Sketches or CAD allow you to model the static physical form of the device, and this can be realised in moulded blue foam, 3D printing or cardboard mock-ups. Prototypes of the internal digital behaviour can be produced using tools such as Adobe Animate, proto.io or atomic or as hand-coded using standard web-design tools. The digital behaviour can also be modelled using industry standard techniques such as UML.
Physigrams allow you to model the ‘device unplugged’ – the pure physical interaction potential of the device: the ways you can interact with buttons, dials and knobs, how you can open, slide or twist movable elements. These physigrams can be attached to models of the digital behaviour to understand how well the physical and digital design complement one another.
Physigrams were developed some years ago as part of the DEPtH project, a collaboration between product designers at Cardiff School of Art and Design and computer scientists at Lancaster University. Physigrams have been described in various papers over the years. However, with TouchIT, our book on physicality and design, (eventually!) reaching completion and due out next year, it felt like physigrams deserved a home of their own on the web.
The physigram micro-site, part of physicality.org, includes descriptions of physical interaction properties, a complete key to the physigram notation, and many examples of physigrams in action, from light switches to complete control panels and novel devices.
How long is an instant? The answer, of course, is ‘it depends’, but I’ve been finding it fascinating playing on the demo page for AngularJS tooltips and seeing what feels like ‘instant’ for a tooltip.
The demo allows you to adjust the md-delay property so you can change the delay between hovering over a button and the tooltip appearing, and then instantly see what that feels like.
In the ever accelerating rush to digital delivery, is this actually what students want or need?
Last week I was at Talis Insight conference. As with previous years, this is a mix of sessions focused on those using or thinking of using Talis products, with lots of rich experience talks. However, also about half of the time is dedicated to plenaries about the current state and future prospects for technology in higher education; so well worth attending (it is free!) whether or not you are a Talis user.
Speakers this year included Bill Rammell, now Vice-Chancellor at the University of Bedfordshire, but who was also Minister of State for Higher Education during the second Blair government, and during that time responsible for introducing the National Student Survey.
Another high profile speaker was Rosie Jones, who is Director of Library Services at the Open University … which operates somewhat differently from the standard university library!
However, among the VCs, CEOs and directors of this and that, it was the two most junior speakers who stood out for me. Eva Brittin-Snell and Alex Davie are two SAGE student scholars from Sussex. As SAGE scholars they have engaged in research on student experience amongst their peers, speak at events like this and maintain a student blog, which includes, amongst other things, the story of how Eva came to buy her first textbook.
Eva and Alex’s talk was entitled “Digital through a student’s eyes” (video). Many of the talks had been about the rise of digital services and especially the eTextbook. Eva and Alex were the ‘digital natives’, so surely this was joy to their ears. Surprisingly not.
Alex, in her first year at university, started by alluding to the previous speakers, the push for book-less libraries, and general digital spiritus mundi, but offered an alternative view. Students were annoyed at being asked to buy books for a course where only a chapter or two would be relevant; they appreciated the convenience of an eBook when core textbooks were permanently out on loan, or instantly recalled once one got hold of them. However, she said students still preferred physical books, as they are far more usable (even if heavy!) than eBooks.
Eva, a fourth year student, offered a different view. “I started like Aly”, she said, and then went on to describe her change of heart. However, it was not a revelation of the pedagogical potential of digital, more that she had learnt to live through the pain. There were clear practical and logistic advantages to eBooks, there when and where you wanted, but she described a life of constant headaches from reading on-screen.
Possibly some of this is due to the current poor state of eBooks, which are still mostly simply electronic versions of texts designed for paper. Also, one of their student surveys showed that very few students had eBook readers such as the Kindle (evidently now definitely not cool), and used phones primarily for messaging and WhatsApp. The centre of the student’s academic life was definitely the laptop, so eBooks meant hours staring at a laptop screen.
However, it also reflects a growing body of work showing the pedagogic advantages of physical note taking, the potential developmental damage of early tablet and smartphone use, and industry figures showing that across all areas eBook sales are dropping and physical book sales increasing. In addition there is evidence that children and teenagers prefer physical books, and public library use by young people is growing.
It was also interesting that both Alex and Eva complained that eTextbooks were not ‘snappy’ enough. In the age of Tweet-stream presidents and 5-minute attention spans, ‘snappy’ was clearly the students’ term of choice to describe their expectation of digital media. Yet this did not represent a loss of their attention per se, as this was clearly not perceived as a problem with physical books.
… and I am still trying to imagine what a critical study of Aristotle’s Poetics would look like in ‘snappy’ form.
There are two lessons from this for me. First, what would a ‘digital first’ textbook look like? Does it have to be ‘snappy’, or are there ways to maintain attention and depth of reading in digital texts?
The second picks up on issues in the co-authored paper I presented at NordiCHI last year, “From intertextuality to transphysicality: The changing nature of the book, reader and writer”, which, amongst other things, asked how we might use digital means to augment the physical reading process, offering some of the strengths of eBooks, such as the ability to share annotations, but retaining a physical reading experience. Also maybe some of the physical limitations of availability could be relieved, for example, if university libraries worked with bookshops to have student buy-and-return schemes alongside borrowing?
It would certainly be good if students did not have to learn to live with pain.
We have a challenge.
Revisiting an old piece of work I reflect on the processes that led to it: intuition and formalism, incubation and insight, publish or perish, and a malaise at the heart of current computer science.
A couple of weeks ago I received an email requesting an old technical report, “Finding fixed points in non-trivial domains: proofs of pending analysis and related algorithms” [Dx88]. This report was from nearly 30 years ago, when I was at York and before the time when everything was digital and online. This was one of my all time favourite pieces of work, and one of the few times I’ve done ‘real maths’ in computer science.
As well as tackling a real problem, it required new theoretical concepts and methods of proof that were generally applicable. In addition it arose through an interesting story that exposes many of the changes in academia.
[Aside, for those of a more formal bent.] This involved proving the correctness of an algorithm ‘Pending Analysis’ for efficiently finding fixed points over finite lattices, which had been developed for use when optimising functional programs. Doing this led me to perform proofs where some of the intermediate functions were not monotonic, and to develop forms of partial order that enabled reasoning over these. Of particular importance was the concept of a pseudo-monotonic functional, one that preserved an ordering between functions even if one of them is not itself monotonic. This then led to the ability to perform sandwich proofs, where a potentially non-monotonic function of interest is bracketed between two monotonic functions, which eventually converge to the same function, sandwiching the function of interest between them as they go.
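The underlying computational setting can be made concrete with a minimal sketch (my own illustration, not the pending analysis algorithm or the notation of the report): the naive way to find the least fixed point of a monotonic function over a finite lattice is to iterate from the bottom element until nothing changes; pending analysis is a more efficient, demand-driven refinement of this idea.

```python
# Naive least-fixed-point computation over a finite lattice.
# Here the lattice is the powerset of a finite universe, ordered by
# inclusion; all names are illustrative, not those of the original report.

def least_fixed_point(f, bottom):
    """Iterate f from bottom; terminates when f is monotonic and the lattice is finite."""
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:      # reached a fixed point: f(x) = x
            return x
        x = nxt           # monotonicity guarantees the chain only climbs

# Example: states reachable from state 1 in a tiny transition relation,
# expressed as the least fixed point of S ↦ {1} ∪ step(S).
edges = {1: {2}, 2: {3}, 3: set(), 4: {1}}
f = lambda s: frozenset({1}) | frozenset(t for v in s for t in edges[v])

print(least_fixed_point(f, frozenset()))  # frozenset({1, 2, 3})
```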
Oddly, while it was one of my favourite pieces of work, it was at the periphery of my main areas of work, so had never been published apart from as a York technical report. Also, this was in the days before research assessment, before publish-or-perish fever had ravaged academia, and when many of the most important pieces of work were ‘only’ in technical report series. Indeed, our department library had complete sets of many of the major technical report series, such as those of Xerox PARC, Bell Labs, and Digital Equipment Corporation Labs, where so much work in programming languages was happening at the time.
My main area was, as it is now, human–computer interaction, and at the time principally the formal modelling of interaction. This was the topic of my PhD Thesis and of my first book “Formal Methods for Interactive Systems” [Dx91] (an edited version of the thesis). Although I do less of this more formal work nowadays, I’ve just been editing a book with Benjamin Weyers, Judy Bowen and Philippe Palanque, “The Handbook of Formal Methods in Human-Computer Interaction” [WB17], which captures the current state of the art in the topic.
When I moved from mathematics into computer science, I found the majority of formal work far broader, but far less deep, than I had been used to. The main issues were definitional: finding ways to describe complex phenomena that both gave insight and enabled a level of formal tractability. This is not to say that there were no deep results: I recall the excitement of reading Sannella’s PhD Thesis [Sa82] on the application of category theory to formal specifications, or Luca Cardelli’s work on the complex type systems needed for more generic coding and for understanding object-oriented programming.
The reason for the difference in the kinds of mathematics was that computational formalism was addressing real problems, not simply puzzles interesting for themselves. Often these real world issues do not admit the kinds of neat solution that arise when you choose your own problem — the formal equivalent of Rittel’s wicked problems!
This was one of the things that I often found depressing during the REF2014 reading exercise in 2013. Over a thousand papers covering vast swathes of UK computer science, and so much that seemed to be in tiny sub-niches of sub-niches, obscure variants of inconsequential algebras, or reworking and tweaking of algorithms that appeared to be of no interest to anyone outside two or three other people in the field (I checked who was citing every output I read).
(Note the lists of outputs are all in the public domain, and links to where to find them can be found at my own REF micro-site.)
Had these been pure mathematics papers, this is what I would have expected; after all, mathematics is not funded in the way computer science is, so I would not expect to see the same kinds of connection to real-world issues. Also I would have been disappointed if I had not seen some obscure work of this kind; you sometimes need to chase down rabbit holes to find Aladdin’s cave. It was the sheer volume of this kind of work that shocked me.
Maybe in those early days, I self-selected work that was both practically and theoretically interesting, so I have a golden view of the past; maybe it was simply easier to do both before the low-hanging fruit had been gathered; or maybe there has simply been a change in the social nature of the discipline. After all, most early mathematicians happily mixed pure and applied mathematics, with the areas only diverging seriously in the 20th century. However, as noted, mathematics is not funded so heavily as computer science, so it does seem to suggest a malaise, or at least a loss of direction, for computing as a discipline.
Anyway, roll back to the mid 1980s. A colleague of mine, David Wakeling, had been on a visit to a workshop in the States and heard there about Pending Analysis and Young and Hudak’s proof of its correctness [YH96]. He wanted to use the algorithm in his own work, but there was something about the proof that he was unhappy about. It was not that he had spotted a flaw (indeed there was one, but an obscure one), but just that the presentation of it had left him uneasy. David was a practical computer scientist, not a mathematician, working on the compilation and optimisation of lazy functional programming languages. However, he had some sixth sense that told him something was wrong.
Looking back, this intuition about formalism fascinates me. Again there may be self-selection going on: if David had had worries and they were unfounded, I would not be writing this. However, I think that there was something more than this. Hardy and Wright, the bible of number theory [HW59], listed a number of open problems in number theory (many now solved), but crucially, for many, gave an estimate of how likely it was that they were true or might eventually have a counter-example. By definition, these were non-trivial hypotheses, either true or not true, but Hardy and Wright felt able to offer an opinion.
For David I think it was more about the human interaction, the way the presenters did not convey confidence. Maybe this was because they were aware there was a gap in the proof, but thought it did not matter, a minor irrelevant detail, or maybe the same slight lack of precision that let the flaw through was also evident in their demeanour.
In principle academia, certainly in mathematics and science, is about the work itself, but we can rarely check each statement, argument or line of proof, so often it is the nature of the people that gives us confidence.
Quite quickly I found two flaws.
One was internal to the mathematics (math alert!): essentially forgetting that a ‘monotonic’ higher-order function is usually only monotonic when the functions it is applied to are themselves monotonic.
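A toy example of the trap (my own illustration, not the case in Young and Hudak’s proof): the functional H(f) = f∘f over a three-point chain preserves the pointwise order between monotonic arguments, but can fail to do so when an argument is non-monotonic.

```python
# Three-point chain 0 < 1 < 2; functions represented as dicts (illustrative only).
DOM = [0, 1, 2]

def leq(f, g):
    """Pointwise order on functions over the chain."""
    return all(f[x] <= g[x] for x in DOM)

def H(f):
    """The higher-order functional H(f) = f o f."""
    return {x: f[f[x]] for x in DOM}

# f <= g pointwise, but both are non-monotonic...
f = {0: 1, 1: 1, 2: 0}
g = {0: 2, 1: 1, 2: 0}
assert leq(f, g)

# ...and H does not preserve the order: H(f)(0) = f(f(0)) = 1,
# while H(g)(0) = g(g(0)) = g(2) = 0.
print(leq(H(f), H(g)))  # False
```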
The other was external: the formulation of the theorem to be proved did not actually match the real-world computational problem. This is an issue that I used to refer to as the formality gap. Once you are in the formal world of mathematics you can analyse, prove, and even automatically check some things. However, there is first the more subtle task of adequately and faithfully reflecting the real-world phenomenon you are trying to model.
I’m doing a statistics course at the CHI conference in May, and one of the reasons statistics is hard is that it too needs one foot in the world of maths and one foot on the solid ground of the real world.
Finding the problem was relatively easy … solving it altogether harder! There followed a period when it was my pet side project: reams of paper with scribbles, thinking I’d solved it then finding more problems, proving special cases, or variants of the algorithm, generalising beyond the simple binary domains of the original algorithm. In the end I put it all into a technical report, but never had the full proof of the most general case.
Then, literally a week after the report was published, I had a notion, and found an elegant and reasonably short proof of the most general case, and in so doing also created a new technique, the sandwich proof.
Reflecting back, was this merely one of those things, or a form of incubation? I used to work with psychologists Tom Ormerod and Linden Ball at Lancaster including as part of the Desire EU network on creativity. One of the topics they studied was incubation, which is one of the four standard ‘stages’ in the theory of creativity. Some put this down to sub-conscious psychological processes, but it may be as much to do with getting out of patterns of thought and hence seeing a problem in a new light.
In this case, was it the fact that the problem had been ‘put to bed’ that enabled fresh insight?
Anyway, now, 30 years on, I’ve made the report available electronically … after reanimating Troff on my Mac … but that is another story.
[Dx88] A. J. Dix (1988). Finding fixed points in non-trivial domains: proofs of pending analysis and related algorithms. YCS 107, Dept. of Computer Science, University of York. https://alandix.com/academic/papers/fixpts-YCS107-88/
[HW59] G.H. Hardy, E.M. Wright (1959). An Introduction to the Theory of Numbers – 4th Ed. Oxford University Press. https://archive.org/details/AnIntroductionToTheTheoryOfNumbers-4thEd-G.h.HardyE.m.Wright
[WB17] Weyers, B., Bowen, J., Dix, A., Palanque, P. (Eds.) (2017) The Handbook of Formal Methods in Human-Computer Interaction. Springer. ISBN 978-3-319-51838-1 http://www.springer.com/gb/book/9783319518374
[YH96] J. Young and P. Hudak (1986). Finding fixpoints on function spaces. YALEU/DCS/RR-505, Yale University, Department of Computer Science http://www.cs.yale.edu/publications/techreports/tr505.pdf
If a news article is all about numbers, why is the media shy about providing the actual data?
On the BBC News website this morning, James McIvor’s article “Clash over ‘rich v poor’ university student numbers” describes differences between the Scottish Government (SNP) and Scottish Labour in the wake of Professor Peter Scott’s appointment as commissioner for fair access to higher education in Scotland.
Scottish Labour claim that while access to university by the most deprived has increased, the educational divide is growing, with entry by the most deprived increasing by 0.8% since 2014, but that of the least deprived (most well off) growing at nearly three times that figure. In contrast, the Scottish Government claims that in 2006 those from the least deprived areas were 5.8 times more likely to enter university than those in the most deprived areas, whereas now the difference is only 3.9 times, a substantial decrease in educational inequality.
The article is all about numbers, but the two parties seem to be saying contradictory things, one saying inequality is increasing, one saying it is decreasing!
Surely enough to make the average reader give up on experts, just like Michael Gove!
Of course, if you can read through the confusing array of leasts and mosts, the difference seems to be that the two parties are taking different base years, 2014 vs 2006, and that both can be true: a long-term improvement with decreasing inequality, but a short-term increase in inequality since 2014. The former is good news, but the latter may be bad news, a change in direction that needs addressing, or simply ‘noise’ as we are talking about small changes on big numbers.
I looked in vain for a link to the data, or to the web sites or reports on which this was based; after all, this is an article where the numbers are the story, but there are none.
After a bit of digging, I found that the data that both are using is from the UCAS Undergraduate 2016 End of Cycle Report (the numerical data for this figure and links to CSV files are below).
Looking at these it is clear that the university participation rate for the least deprived quintile (Q5, blue line at top) has stayed around 40%, with odd ups and downs, over the last ten years, whereas the participation of the most deprived quintile (Q1) has been gradually increasing, again with year-by-year wiggles. That is, the ratio between least and most deprived used to be about 40:7 and is now about 40:10 – less inequality, as the SNP say.
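The arithmetic can be checked in a couple of lines (a sketch using the rounded percentages quoted above, not the exact UCAS figures):

```python
# Approximate participation rates (percent) read off the UCAS figure:
# least deprived quintile (Q5) vs most deprived quintile (Q1).
q5_then, q1_then = 40, 7    # roughly ten years ago
q5_now,  q1_now  = 40, 10   # now

print(round(q5_then / q1_then, 1))  # 5.7 – close to the quoted 5.8 times
print(round(q5_now / q1_now, 1))    # 4.0 – close to the quoted 3.9 times
```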
For some reason 2014 was a dip year for Q5. There is no real sign of a change in the long-term trend, but if you take 2014 to 2016, the increase in Q5 is larger than the increase in Q1, just as Scottish Labour say. However, any other base year would not give this picture.
In this case it looks like Scottish Labour either cherry picked a year that made the story they wanted, or simply accidentally chose it.
The issue for me, though, is not so much who was right or wrong, but why the BBC didn’t present this data to make it possible to make that judgement.
I can understand the argument that people do not like, or understand, numbers at all, but where, as in this case, the story is all about the numbers, why not at least present the raw data and ideally discuss why there is an apparent contradiction?
Numerical data from figure 57 of the UCAS 2016 End of Cycle Report
I found myself getting increasingly angry today as the Mozilla Foundation stepped firmly beyond those limits and, moreover, with Trump-esque rhetoric attempted to dupe others into following them.
It all started with a small text ad below the Firefox default screen search box:
It starts off fine, with stories of some of the silliness of current copyright law across Europe (can’t share photos of the Eiffel tower at night) and problems for use in education (which does in fact have quite a lot of copyright exemptions in many countries). It offers a petition to sign.
This all sounds good: partly due to rapid change and partly due to knee-jerk reactions, internet law does seem to be a bit of a mess.
If you blink you might miss one or two odd parts:
“This means that if you live in or visit a country like Italy or France, you’re not permitted to take pictures of certain buildings, cityscapes, graffiti, and art, and share them online through Instagram, Twitter, or Facebook.”
Read this carefully: a tourist forbidden from photographing cityscapes – silly! But note the last few words: “… and art”. So if I visit an exhibition of an artist, or maybe even a photographer, and share a high-definition photo (the Nokia Lumia 1020 has a 40-megapixel camera), is that OK? Perhaps a thumbnail in the background of a selfie is fine, but does Mozilla object to any rules that prevent copying of artworks?
However, it is at the end, in a section labelled “don’t break the internet”, that the cyber-fundamentalism really starts.
“A key part of what makes the internet awesome is the principle of innovation without permission — that anyone, anywhere, can create and reach an audience without anyone standing in the way.”
Again, at first this sounds like a cry for self-expression – except, what if you happen to be an artist or writer and would like to make a living from that self-expression?
Again, it is clear that current laws have not kept up with change and in areas are unreasonably restrictive. We need to be able to distinguish between a fair reference to something and seriously infringing its IP. Likewise, we should distinguish the aspects of social media that are more like looking at holiday snaps over a coffee from pirate copies made for commercial profit.
However, in so many areas it is the other way round, our laws are struggling to restrict the excesses of the internet.
Just a few weeks ago a 14-year-old girl was given permission to sue Facebook. Multiple times over a two-year period, nude pictures of her were posted and reposted. Facebook hides behind the argument that it is user content and that it takes down the images when they are pointed out; and yet a massive technology company, which is able to recognise faces, is not able to identify the same photo being repeatedly posted. Back to Mozilla: “anyone, anywhere, can create and reach an audience without anyone standing in the way” – really?
Of course this vision of the internet without boundaries is not just about self expression, but freedom of speech:
“We need to defend the principle of innovation without permission in copyright law. Abandoning it by holding platforms liable for everything that happens online would have an immense chilling effect on speech, and would take away one of the best parts of the internet — the ability to innovate and breathe new meaning into old content.”
Of course, the petition singles out EU law, which inconveniently includes various provisions to protect the privacy and rights of individuals – not the law of dictatorships or centrally controlled countries.
So, who benefits from such an open and unlicensed world? Clearly not the small artist or the victim of cyber-bullying.
Laissez-faire has always been an aim for big business, but without constraint it is the law of the jungle and always ends up benefiting the powerful.
In the 19th century it was child labour in the mills, only curtailed after long battles.
In the age of the internet, it is the vast US social media giants who hold sway, and of course the search engines, which just happen to account for $300 million of revenue for the Mozilla Foundation annually – 90% of its income.
Over the past few weeks I’ve been to two conferences focused on different aspects of technology and learning, Talis Insight Europe and ACM Learning at Scale (L@S). This led me to reflect on the potential for and barriers to ground breaking research in these areas in the UK.
The first conference, Talis Insight Europe, grew out of the original Talis User Group, but as well as company updates on existing and new products, also has an extensive line-up of keynotes by major educational visionaries and decision makers (including pretty much the complete line-up of JISC senior staff) and end-user contributed presentations.
The second, Learning @ Scale, grew out of the MOOC explosion, and deals with the new technology challenges and opportunities when we are dealing with vast numbers of students. It also had an impressive array of keynote speakers, including Sugata Mitra, famous for the ‘Hole in the Wall‘, which brought technology to street children in India.
Although there were some common elements (big data and dashboards got a mention in both!), the audiences were quite different. For Insight, the large majority were from HE (Higher Education) libraries, followed by learning technologists, industry representatives, and HE decision-makers. In contrast, L@S consisted largely of academics, many from computing or technical backgrounds, with some industry researchers, including, as I was attending largely with my Talis hat on, me.
In a joint keynote at Insight, Paul Feldman and Phil Richards, the CEO and CIO of JISC, described the project to provide a learning analytics service [FR16,JI16] (including a student app and, of course, a dashboard) for UK institutions. As well as the practical benefits, they outlined a vision where the UK leads the way in educational big data for personalised learning.
Given a long track record of education and educational technology research in the UK, the world-leading distance-learning provision of the Open University, and recent initiatives, both those outlined by JISC and FutureLearn (building on the OU’s vast experience), this vision seems not unreasonable.
However, on the ground at Learning @ Scale, there was a very different picture; the vast majority of papers and attendees were from the US, and this despite the conference being held in Edinburgh.
To some extent this is as one might expect. While traditional distance learning, including the OU, has class sizes that feel massive to those in face-to-face institutions, these are dwarfed by those for MOOCs, which started in the US; and it is in the US where the main MOOC players (Coursera, Udacity, edX) are based. edX alone had initial funding more than ten times that available to FutureLearn, so in sheer investment terms, the balance at L@S is representative.
However, Mike Sharples, long-term educational technology researcher and Academic Lead at FutureLearn, was one of the L@S keynotes [Sh16]. In his presentation it was clear that FutureLearn and UK MOOCs punch well above their weight, with retention statistics several times higher than US counterparts. While this may partly be due to topic areas, it is also a reflection of the development strategy. Mike outlined how empirically founded educational theory has driven the design of the FutureLearn platform, not least the importance of social learning. Perhaps then not surprisingly, one of the areas where FutureLearn substantially led over US counterparts was in social aspects of learning.
So there are positive signs for UK research in these areas. While JISC has had its own austerity-driven funding problems, its role as trusted intermediary and active platform creator offers a voice and forum that few, if any, other countries possess. Similarly, while FutureLearn needs to be sustainable, and so has to have a certain inward focus, it does seem to offer a wonderful potential resource for collaborative research. Furthermore, the open educational resource (OER) community seems strong in the UK.
The Teaching Excellence Framework (TEF) [HC16,TH15] will bring its own problems, more about justifying student fee increases than education, potentially damaging education through yet more ill-informed political interference, and re-establishing class-based educational apartheid. However, it will certainly increase universities’ interest in education technology.
Set against this are challenges.
First was the topic of my own L@S work-in-progress paper – Challenge and Potential of Fine Grain, Cross-Institutional Learning Data [Dx16]. At Talis, we manage half a million reading lists, containing over 20 million resources, spread over more than 85 institutions, including more than half of UK higher education. However, these institutions are all very different, and each of the half million courses may have only tens or low hundreds of students. That is very large scale in total volume, but highly heterogeneous. The JISC learning analytics repository will have exactly the same issues, and such data are far more difficult to deal with by machine learning or statistical analysis than the relatively homogeneous data from a single huge MOOC.
These issues of heterogeneous scale are not unique to education; as a general information systems phenomenon they are something I have been interested in for many years and call the “long tail of small data” [Dx10,Dx15]. While this kind of data is more complex and difficult to deal with, it is of course a major research challenge, and potentially has greater long-term promise than the study of more homogeneous silos. I am finding this in my own work with musicologists [IC16,DC14], and it is emerging as an issue in the natural sciences [Bo13,PC07].
Another problem is REF, the UK ‘Research Excellence Framework’. My post-hoc analysis of the REF data revealed the enormous bias in the computing sub-panel against any form of applied and human-oriented work [Dx15b,Dx15c]. Of course, this is not a new issue, just that the available data has made this more obvious and undeniable. This affects my own core research area of human–computer interaction, but also, and probably much more substantially, learning technology research. Indeed, I think most learning technologists had already sussed this out well before REF2014 as there were very few papers submitted in this area to the computing panel. I assume most research on learning technology was submitted to the education panel.
To some extent it does not matter where research is submitted and assessed; however, while in theory the mapping between university departments and submitted units is fluid for REF, in practice submitting to ‘other’ panels is problematic, making it difficult to write coherent narratives about the research environment. If learning technology research is not seen as REF-able in computing, computing departments will not recruit in these areas and will discourage this kind of research. While my hope is that REF2020 will not reiterate the mistakes of REF2014, there is no guarantee of this, and anyway the effects on institutional policy will already have been felt.
However, and happily, the kinds of research needed to make sense of this large-scale heterogeneous data may well prove more palatable to a computing REF panel than more traditional small-scale learning technology research. It would be wonderful to see research collaborations between those with long-term experience and understanding of educational issues and those with hard-core machine learning and statistical analysis skills – this is BIG DATA, and challenging data. Indeed, one of the few UK papers at L@S involved Pearson’s London-based data analysis department, and included automatic clustering, hidden Markov models, and regression analysis.
In short, while there are barriers in the UK, there is also great potential for exciting research that is both theoretically challenging and practically useful, bringing the insights available from large-scale educational data to help individual students and academics.
[Dx10] A. Dix (2010). In praise of inconsistency – the long tail of small data. Distinguished Alumnus Seminar, University of York, UK, 26th October 2011.
[Dx15] A. Dix (2014/2015). The big story of small data. Talk at Open University, 11th November 2014; Oxford e-Research Centre, 10th July 2015; Mixed Reality Laboratory, Nottingham, 15th December 2015.
[DC14] Dix, A., Cowgill, R., Bashford, C., McVeigh, S. and Ridgewell, R. (2014). Authority and Judgement in the Digital Archive. In The 1st International Digital Libraries for Musicology workshop (DLfM 2014), ACM/IEEE Digital Libraries conference 2014, London 12th Sept. 2014. https://alandix.com/academic/papers/DLfM-2014/
[Dx15c] A. Dix (2015). Citations and Sub-Area Bias in the UK Research Assessment Process. In Workshop on Quantifying and Analysing Scholarly Communication on the Web (ASCW’15) at WebSci 2015 on June 30th in Oxford. http://ascw.know-center.tugraz.at/2015/05/26/dix-citations-and-sub-areas-bias-in-the-uk-research-assessment-process/
[FR16] Paul Feldman and Phil Richards (2016). JISC – Helping the UK become the most advanced digital teaching and research nation in the world. Talis Insight Europe 2016. https://talis.com/2016/04/29/jisc-keynote-paul-feldman-phil-richards-talis-insight-europe-2016/
[HC16] The Teaching Excellence Framework: Assessing Quality in Higher Education. House of Commons, Business, Innovation and Skills Committee, Third Report of Session 2015–16. HC 572. 29 February 2016. http://www.publications.parliament.uk/pa/cm201516/cmselect/cmbis/572/572.pdf
[JI16] Effective learning analytics. JISC, accessed 8/5/2016. https://www.jisc.ac.uk/rd/projects/effective-learning-analytics
[PC07] C. L. Palmer, M. H. Cragin, P. B. Heidorn and L. C. Smith (2007). Data curation for the long tail of science: the case of environmental sciences. 3rd International Digital Curation Conference, Washington, DC. https://apps.lis.illinois.edu/wiki/download/attachments/32666/Palmer_DCC2007.pdf
[Sh16] Mike Sharples (2016). Effective Pedagogy at Scale, Social Learning and Citizen Inquiry (keynote). Learning at Scale 2016. ACM. http://learningatscale.acm.org/las2016/keynotes/#k2
[TH15] Teaching excellence framework (TEF): everything you need to know. Times Higher Education, August 4, 2015. https://www.timeshighereducation.com/news/teaching-excellence-framework-tef-everything-you-need-to-know
About 30 years ago, I was at a meeting in London and heard a presentation about a study of early email use in Xerox and the Open University. At Xerox the use of email was already part of their normal culture, but it was still new at OU. I’d thought they had done a before and after study of one of the departments, but remembered clearly their conclusions: email acted in addition to other forms of communication (face to face, phone, paper), but did not substitute.
It was one of those pieces of work that I could recall, but didn’t have a reference to. Facebook to the rescue! I posted about it and in no time had a series of helpful suggestions, including from Gilbert Cockton, who nailed it, finding the meeting – the “IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems” (3 Feb 1989) – and the precise paper:
P. Fung, T. O’Shea and S. Bly (1989). Electronic mail viewed as a communications catalyst. IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems, pp. 1/1–1/3. INSPEC: 3381096. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=197821
In some extraordinary investigative journalism, Gilbert also noted that the first author, Pat Fung, went on to fresh territory after retirement, qualifying as a scuba-diving instructor at the age of 75.
The details of the paper were not exactly as I remembered. Rather than a before and after study, it was a comparison of the computing departments at Xerox (mature use of email) and the OU (email less ingrained, but already well used). Maybe I had simply embroidered the memory over the years, or maybe they presented newer work at the colloquium than was in the 3-page extended abstract. In those days this was common, as researchers did not feel they needed to milk every last result in a formal ‘publication’. However, the conclusions were just as I remembered:
“An exciting finding is its indication that the use of sophisticated electronic communications media is not seen by users as replacing existing methods of communicating. On the contrary, the use of such media is seen as a way of establishing new interactions and collaboration whilst catalysing the role of more traditional methods of communication.”
As part of this process, following various leads from other Facebook friends, I spent some time looking at early CSCW conference proceedings, some at Saul Greenberg’s early CSCW bibliography, and at Ducheneaut and Watts’ (15 years on) review of email research in the 2005 HCI special issue on ‘reinventing email’ (both notably missing the Fung et al. paper). I downloaded and skimmed several early papers, including Wendy Mackay’s lovely early (1988) study that exposed the wide variety of ways in which people used email over and above simple ‘communication’. So much to learn from this work when the field was still fresh.
This all led me to reflect both on the Fung et al. paper, the process of finding it, and the lessons for email and other ‘communication’ media today.
Communication for new purposes
A key finding was that “the use of such media is seen as a way of establishing new interactions and collaboration”. Of course, the authors and their subjects could not have envisaged current social media, but my finding of this paper was itself exactly an example of this. In 1989, if I had been trying to find a paper, I would have scoured my own filing cabinet and bookshelves, those of my colleagues, and perhaps asked people when I met them. Nowadays I pop the question into Facebook, and within minutes the advice starts to appear; not long after, I have a scanned copy of the paper I was after.
Communication as a good thing
In the paper abstract, the authors say that an “exciting finding” of the paper is that “the use of sophisticated electronic communications media is not seen by users as replacing existing methods of communicating.” Within the paper, this is phrased even more strongly:
“The majority of subjects (nineteen) also saw no likelihood of a decrease in personal interactions due to an increase in sophisticated technological communications support and many felt that such a shift in communication patterns would be undesirable.”
Effectively, email was seen as potentially damaging if it replaced other more human means of communication, and the good outcome reported was that this did not appear to be happening (or, strictly, that subjects believed it was not happening).
However, by the mid-1990s, papers discussing ‘email overload’ started to appear.
I recall a morning radio discussion of email overload about ten years ago. The presenter asked someone else in the studio if they thought this was a problem. Quite un-ironically, they answered, “no, I only spend a couple of hours a day”. I found my own pattern of email use change when I switched from highly structured Eudora (with over 2000 email folders) to Gmail (where mail is like a Facebook feed: if it isn’t on the first page it doesn’t exist). I was recently talking to another academic who explained that two years ago he had deliberately adopted “email as stream” as a policy to control unmanageable volumes.
If only they had known …
Communication as substitute
While Fung et al.’s respondents reported that they did not foresee a reduction in non-electronic forms of communication, in fact the signs of this shift to digital are evident even in the paper itself.
Here are the graphs of communication frequency for the Open University (30 people, more recent use of email) and Xerox (36 people, more established use) respectively.
It is hard to draw exact comparisons, as it appears there may have been a higher overall volume of communication at Xerox (because of email?). Certainly, at that point, face-to-face communication remained strong at Xerox, but it appears that not only the proportion but also the total volume of non-digital, non-face-to-face communication was lower than at the OU. That is, substitution had already happened.
Again, this is obvious nowadays: although the volume of electronic communications would have been untenable on paper (I’ve sometimes imagined printing out a day’s email and trying to cram it in a pigeon-hole), the volume of paper communications has diminished markedly. A report in 2013 for Royal Mail recorded a 3–6% p.a. reduction in letters over recent years and projected a further 4% p.a. decline for the foreseeable future.
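As a rough back-of-envelope illustration of what that compounds to (the 4% p.a. figure is the report’s projection; the ten-year horizon and normalised starting volume are my own choices):

```python
# Compound effect of a steady 4% per annum decline in letter volume,
# starting from a volume normalised to 100.
rate = 0.04
volume = 100.0
for year in range(1, 11):
    volume *= 1 - rate
    print(f"year {year:2d}: {volume:5.1f}% of today's volume")
```

A steady 4% p.a. decline removes roughly a third of the volume within a decade (100 × 0.96¹⁰ ≈ 66.5).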
Academic communication and national meetings
However, this also made me think about the IEE Colloquium itself. Back in the late 1980s and 1990s it was common to attend small national or local meetings to meet with others and present work, often early stage, for discussion. In other fields this still happens, but in HCI it has all but disappeared. Maybe this is a little nostalgia on my part, but it does seem a real loss, as such meetings were a great way for new PhD students to present their work and meet the leaders in their field. Of course, this can happen if you get your CHI paper accepted, but the barriers are higher, particularly for those in smaller and less well-resourced departments.
Some of this is because international travel is cheaper and faster, and so national meetings have reduced in importance – everyone goes to the big global (largely US) conferences. Many years ago, research on day-to-day time use suggested that we have a travel ‘time budget’, relatively constant across countries and across different kinds of areas within the same country. The same is clearly true of academic travel time; we have a certain budget, and if we travel more internationally then we do correspondingly less nationally.
However, I wonder if digital communication also had a part to play. I knew about the Fung et al. paper, even though it was not in the large reviews of CSCW and email, because I had been there. Indeed, the reason the Fung et al. paper was not cited in relevant reviews would have been that it appeared in a small venue, was only available as paper copy, and then only if you knew it existed. It was presumably also below the digital radar until it was, I assume, scanned by IEE archivists and deposited in the IEEE digital library.
However, despite the advantages of this easy access to one another and scholarly communication, I wonder if we have also lost something.
In the 1980s, physical presence and co-presence at an event were crucial for academic communication. Proceedings were paper and precious: I would at least skim-read all of the proceedings of any event I had been to, even those of large conferences, because they were rare and because they were available. Reference lists at the end of my papers were shorter than now, but possibly more diverse and more in-depth, compared with the more directed ‘search for the relevant terms’ literature reviews of the digital age.
Looking back at some of those early papers, from days when publish-or-perish was not so extreme and cardiac failure was not an occupational hazard for academics (except maybe due to the Cambridge sherry allowance), I am struck by the way this crucial piece of early research was not dressed up with an extra 6000 words of window dressing to make a ‘high impact’ publication, but simply shared. Were things more fun?
 Saul Greenberg (1991) “An annotated bibliography of computer supported cooperative work.” ACM SIGCHI Bulletin, 23(3), pp. 29-62. July. Reprinted in Greenberg, S. ed. (1991) “Computer Supported Cooperative Work and Groupware”, pp. 359-413, Academic Press. DOI: http://dx.doi.org/10.1145/126505.126508
 Nicolas Ducheneaut and Leon A. Watts (2005). In search of coherence: a review of e-mail research. Hum.-Comput. Interact. 20, 1 (June 2005), 11-48. DOI= 10.1080/07370024.2005.9667360
 Steve Whittaker, Victoria Bellotti, and Paul Moody (2005). Introduction to this special issue on revisiting and reinventing e-mail. Hum.-Comput. Interact. 20, 1 (June 2005), 1-9.
 Wendy E. Mackay. 1988. More than just a communication system: diversity in the use of electronic mail. In Proceedings of the 1988 ACM conference on Computer-supported cooperative work (CSCW ’88). ACM, New York, NY, USA, 344-353. DOI=http://dx.doi.org/10.1145/62266.62293
 Steve Whittaker and Candace Sidner (1996). Email overload: exploring personal information management of email. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’96), Michael J. Tauber (Ed.). ACM, New York, NY, USA, 276-283. DOI=http://dx.doi.org/10.1145/238386.238530
The outlook for UK mail volumes to 2023. PwC, prepared for Royal Mail Group, 15 July 2013.
I was recently asked to clarify the difference between usability principles and guidelines. Having written a page-full of answer, I thought it was worth popping on the blog.
As with many things the boundary between the two is not absolute … and also the term ‘guidelines’ tends to get used differently at different times!
However, as a general rule of thumb:
- Principles tend to be very general and would apply pretty much across different technologies and systems.
- Guidelines tend to be more specific to a device or system.
As an example of the latter, look at the iOS Human Interface Guidelines on “Adaptivity and Layout”. It starts with a general principle:
“People generally want to use their favorite apps on all their devices and in multiple contexts”,
but then rapidly turns that into more mobile specific, and then iOS specific guidelines, talking first about different screen orientations, and then about specific iOS screen size classes.
I note that the definition on page 259 of Chapter 7 of the HCI textbook is slightly ambiguous. When it says that guidelines are less authoritative and more general in application, it means in comparison to standards … although I’d now add a few caveats for the latter too!
Basically in terms of ‘authority’, from low to high:
- lowest: principles – agreed by the community, but not mandated
- guidelines – proposed by the manufacturer, but rarely enforced
- highest: standards – mandated by a standards authority
In terms of general applicability, high to low:
- highest: principles – very broad, e.g. ‘observability’
- guidelines – more specific, but still allowing interpretation
- lowest: standards – narrow, applying only in tightly defined situations
This ‘generality of application’ dimension is a little more complex, as guidelines are often manufacturer specific and so arguably less ‘generally applicable’ than standards, but the range of situations that standards apply to is usually much tighter.
On the whole the more specific the rules, the easier they are to apply. For example, the general principle of observability requires that the designer think about how it applies in each new application and situation. In contrast, a more specific rule that says, “always show the current editing state in the top right of the screen” is easy to apply, but tells you nothing about other aspects of system state.