Free AI book and a new one coming …

Yes, a new AI book is coming … but until then you can download the first edition for FREE 🙂

Many years ago Janet Finlay and I wrote a small introduction to artificial intelligence.  At the time there were several Bible-sized tomes … some of which are still the standard textbooks today.  However, Janet was teaching a masters conversion course and found that none of these books were suitable for taking the first steps on an AI journey, especially for those coming from non-computing disciplines.

Over the years it faded to the back of our memories, with the brief exception of the time when, after we’d nearly forgotten it, CRC Press issued a Japanese translation.  Once or twice the thought of doing an update arose, but quickly passed.  This was partly because our main foci were elsewhere, but also, at the risk of insulting all my core-AI friends, not much changed in core AI for many years!

Coming soon … Second Edition

Of course, over recent years things have changed dramatically, hence my decision, nearly 25 years on, to create a new edition maintaining the aim of giving a rich but accessible introduction, while capturing some of the recent trends and giving these a practical and human edge.  Following the T-model of teaching, I’d like to help both newcomer and expert gain a broad perspective of the issues and landscape, whilst giving enough detail for those who want to delve into a more specific area.

A Free Book and New Resources

In the meantime the publisher, Taylor & Francis/CRC, has agreed to make the PDF of the first edition available free of charge.  I have updated some of the code examples from the first edition and will be incrementally adding new material to the second edition micro-site, including slides, case studies, video and interactive materials.  If you’d like to teach using this, please let me know your views on the topics and also whether there are areas where you’d like me to create preliminary material with greater urgency.  I won’t promise to be able to satisfy everyone, but I can use this to adjust my priorities.

Why now?

The first phase of change in AI was driven by the rise of big data and the increasing use of forms of machine learning to drive adverts, search results and social media.  Within user interface design, many of the fine details of colour choices and screen layout are now settled using A–B testing … slight variants of interfaces delivered to millions of people – shallow, without understanding and arguably little more than bean counting, but in numerous areas vast data volume has been found to be ‘unreasonably effective’ at solving problems that were previously seen as the remit of deep AI.
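The statistical machinery behind this kind of A–B testing is itself quite shallow; a minimal sketch of the usual two-proportion test (the function name and click counts here are my own invention, purely for illustration):

```python
import math

def ab_test_p_value(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: is variant B's click-through rate
    significantly different from variant A's? Returns a two-sided p-value."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF (erf form)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# with millions of users even a 0.1 percentage-point difference is detectable
p = ab_test_p_value(clicks_a=101_000, n_a=1_000_000,
                    clicks_b=102_000, n_b=1_000_000)
```

At a thousand users the same 0.1-point difference would be statistically invisible; it is sheer volume, not understanding, that makes the method work.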

In the last few years deep learning has taken over as the driver of AI research and often also of media hype.  Here it has been the sheer power of computation, partly due to Moore’s Law, with computation nearly a million times faster than it was when that first edition was written nearly 25 years ago.  However, it has also been enabled by cloud computing, allowing large numbers of computers to efficiently attack a single problem.  Algorithms that might have been conceived of but dismissed as impractical in the past have become commonplace.

Alongside this has been a dark side of AI, from automated weapons and mass surveillance, to election rigging and the insidious knowledge that large corporations have gathered through our day-to-day web interactions.  In the early 1990s I warned of the potential danger of ethnic and gender bias in black-box machine learning, and I’ve returned to this issue more recently as those early predictions have come to pass.

Across the world there are new courses running or being planned and people who want to know more.  In Swansea we have a PhD programme on people-first AI/big data, and there is currently a SIGCHIItaly workshop call out for Teaching HCI for AI: Co-design of a Syllabus. There are several substantial textbooks that offer copious technical detail, but can be inaccessible for the newcomer or those coming from other disciplines.  There are also a number of excellent books that deal with the social and human impact of AI, but without talking about how it works.

I hope to be able to build upon the foundations that Janet and I established all those years ago to create something that fills a crucial gap: giving a human-edge to those learning artificial intelligence from a computing background and offering an accessible technical introduction for those approaching the topic from other disciplines.

 

 

Software for 2050

New Year’s resolutions are for a year ahead, but with the start of a new decade it is worth looking a bit further.
How many of the software systems we use today will be around in 2050 — or even 2030?
Story 1.  This morning the BBC reported that NHS staff need up to 15 different logins to manage ‘outdated’ IT systems, and I have seen exactly this in a video produced by a local hospital consultant. Another major health organisation I talked to mentioned that their key systems are written in FoxBase Pro, which has not been supported by Microsoft for 10 years.
Story 2.  Nearly all worldwide ATM transactions are routed through systems that include COBOL code (‘natural language’ programming of the 1960s) … happily IBM still do support CICS, but there is concern that COBOL expertise is literally dying out.
Story 3.  Good millennial tech typically involves an assemblage of cloud-based services: why try to deal with images when you have Flickr … except Flickr is struggling to survive financially; why have your own version control system when you can use Google Code … except Google Code shut down in 2016 after 10 years.
Story 3a.  Google have a particularly bad history of starting or buying services and then dropping them: Freebase (sigh), Revolv Hub home automation, too many to list. They are doing their best with AngularJS, which has a massive uptake in hi-tech and is being put into long-term maintenance mode – however, ‘long-term’ here will not mean COBOL long-term, just a few years of critical security updates.
Story 4.  Success at last. Berners-Lee did NOT build the web on cutting edge technology (an edge of sadness here as hypertext research, including external linkage, pretty much died in 1994), and because of this it has survived and probably will still be functioning in 2050.
Story 5.  I’m working with David Frohlich and others who have been developing slow, meaningful social media for the elderly and their families. This could potentially contribute to very long term domestic memories, which may help as people suffer dementia and families grieve after death. However, alongside the design issues for such long-term interaction, what technical infrastructure will survive a current person’s lifetime?
You can see the challenge here.  Start-ups are about creating something that will grow rapidly in 2–5 years, but then be sold, thrown away or re-engineered from scratch.  Government and health systems need to run for 30 years or more … as do our personal lives.
What practical advice do we give to people designing now for systems that are likely to still be in use in 2050?

On the edge of chaos

Running in the early morning, the dawn sun drives a burnt orange road across the bay. The water’s margin is often the best place to tread, the sand damp and solid, sound underfoot, but unpredictable. The tide was high and at first I thought it had just turned, the damp line a full five yards beyond the edge of the current waves. Some waves pushed higher and I had to swerve and dance to avoid the frothing edge, others lower, wave following wave, but in longer cycles, some higher, some lower.

It was only later I realised the tide was still moving in; the damp line I had seen as the zenith of high tide had merely been the high point of a cycle, and I had run out during a temporary low. Cycles within cycles, the larger cycles predictable and periodic, driven by moon and sun, but the smaller ones, the waves and patterns of waves, driven by wind and distant storms thousands of miles away.

I’m reading Kate Raworth’s Doughnut Economics. She describes the way 20th century economists (and many still) were wedded to simple linear models of closed processes, hence missed the crucial complexities of an interconnected world, and so made the (predictable) crashes far worse.

I was fortunate in that even in school I recall watching the BBC documentary on chaos theory and then attending an outreach lecture at Cardiff University, targeted at children, where the speaker was an expert in Chaos and Catastrophe Theory giving a more mathematical treatment. Ideas of quasi-periodicity, non-linearity, feedback, phase change, tipping points and chaotic behaviour have been part of my understanding of the world since early in my education.

Nowadays ideas of complexity are more common; Hollywood has embraced the idea that the flutter of a butterfly’s wing could be the final straw that causes a hurricane. This has been helped in no small part by the high profile of the Santa Fe Institute and numerous popular science books. However, only recently I was with a number of academics in computing and mathematics who had not come across ‘criticality’ as a term.

Criticality is about the way many natural phenomena self-organise to be on the edge so that small events have a large impact. The classic example is a pile of sand: initially a whole bucketful tipped on the top will just stay there, but after a point the pile gets to a particular (critical) angle, where even a single grain may cause a minor avalanche.
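The sandpile example is easy to simulate; below is a rough Python sketch along the lines of the classic Bak–Tang–Wiesenfeld model (the grid size, grain count and toppling threshold are illustrative choices, not canonical values):

```python
import random

def topple(grid, size, threshold=4):
    """Relax the pile: any cell at or above the threshold sheds one grain
    to each of its four neighbours (grains falling off the edge are lost).
    Returns the number of topplings, i.e. the avalanche size."""
    avalanche = 0
    stack = [(r, c) for r in range(size) for c in range(size)
             if grid[r][c] >= threshold]
    while stack:
        r, c = stack.pop()
        while grid[r][c] >= threshold:
            grid[r][c] -= threshold
            avalanche += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < size and 0 <= nc < size:
                    grid[nr][nc] += 1
                    if grid[nr][nc] >= threshold:
                        stack.append((nr, nc))
    return avalanche

random.seed(42)
size = 20
grid = [[0] * size for _ in range(size)]
avalanches = []
for _ in range(5000):                    # drop grains one at a time
    r, c = random.randrange(size), random.randrange(size)
    grid[r][c] += 1
    avalanches.append(topple(grid, size))
```

Early drops mostly cause no toppling at all; once the pile has self-organised to its critical state, a single extra grain can set off avalanches of wildly varying size.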

If we understand the world in terms of stable phenomena, where small changes cause small effects, and things that go out of kilter are brought back by counter effects, it is impossible to make sense of the wild fluctuations of global economics, political swings to extremism, and cataclysmic climate change.

One of the things ignored by some of the most zealous proponents of complexity is that many of the phenomena that we directly observe day-to-day do in fact follow the easier laws of stability and small change. Civilisation develops in parts of the world that are relatively stable and then when we modify the world and design artefacts within it, we engineer things that are understandable and controllable, where simple rules work. There are times when we have to stare chaos in the face, but where possible it is usually best to avoid it.

lovefibre – waves

However, even this is changing. The complexity of economics is due to the large-scale networks within global markets with many feedback loops, some rapid, some delayed. In modern media and more recently the internet and social media, we have amplified this further, and many of the tools of big-data analysis, not least deep neural networks, gain their power precisely because they have stepped out of the world of simple cause and effect and embrace complex and often incomprehensible interconnectivity.

The mathematical and computational analyses of these phenomena are not for the faint hearted. However, the qualitative understanding of the implications of this complexity should be part of the common vocabulary of society, essential to make sense of climate, economics and technology.

In education we often teach the things we can simply describe, that are neat and tidy, explainable, where we don’t have to say “I don’t know”. Let’s make space for piles of sand alongside pendulums in physics, screaming speaker-microphone feedback in maths, and contingency alongside teleological inevitability in historic narrative.

Paying On Time – universities are failing

Universities are not living up to Government prompt payment targets.  As many suppliers will be local SMEs, this threatens the cashflow of businesses that may be teetering on the edge, and the well-being of local economies.

I’ve twice in the last couple of months been hit by university finance systems that have a monthly payment run, so that if a claim or invoice is not submitted by a certain date, often the first day or two of the month, it is not paid until the end of the following month, leading to a seven-week delay in payment.  This is despite Government guidelines that set a normal 30-day payment period and an aim of 80% payment within 5 working days.

I’d like to say these are rare cases, but they are sadly typical of university payment and expense systems.  In some cases this is because one is being treated as a casual employee, so falling into payroll systems.  However, often the same systems are clearly being used for commercial payments.  This means that if a supplier misses a monthly deadline they may wait nearly two months for payment … and of course if they are VAT registered they may have already had to pay the VAT portion to HMRC before they actually receive the payment.

The idea of monthly cheque runs is a relic of the 1970s when large reels of magnetic tapes had to be mounted on refrigerator-sized machines and special paper had to be loaded into line-printers for cheque runs.  In the 21st century when the vast proportion of payments are electronic, it is an embarrassing and unethical anachronism.

As well as these cliff-edge deadline issues, I’ve seen university finance systems that bounce payments to external suppliers if data is on an out-of-date form, even if the form was provided in error by a member of university staff.

Even worse are university finance systems organised so that when there is a problem in payment – for example, a temporary glitch in electronic bank payments – instead of retrying the payment, or informing the payee or relevant university contact, the system simply ignores it, leaving it in limbo.  I’ve encountered missing payments of this kind up to a year after the original payment date.  If one were cynical, one might imagine that they simply hope the supplier will never notice.

Late payment became a major issue a few years ago.  Following the recession, many SMEs were constantly teetering on the edge of bankruptcy, yet larger firms were lax in paying promptly, knowing that they were in a position of power (e.g. see “Getting paid on time”, issued by the Department for Business, Innovation & Skills, February 2012).

Five years on this is still a problem.  In April last year The Independent estimated that British SMEs were owed £44.6 billion in late or overdue payments (see “The scourge of late payment“).  There is now mandatory reporting of payment practices for larger companies, and recent returns showed that some companies missed prompt payment up to 96% of the time, with bad performers including major names such as Deloitte (see “Ten of the UK’s big businesses that fail to pay suppliers on time get named and shamed by the Government“).

There is also a voluntary “Prompt Payment Code“, but, amongst the signatories, there are only two universities (Huddersfield and Westminster) and three colleges.

Universities are often proud of the way they support local economies and communities: being major employers and often offering advice to local businesses.  However, in respect to prompt payment they are failing those same communities.

So, well done Huddersfield and Westminster; and for the rest of the university system – up your game.

physigrams – modelling the device unplugged

Physigrams get their own micro-site!

See it now at physicality.org/physigrams

Appropriate physical design can make the difference between an intuitively obvious device and one that is inscrutable.  Physigrams are a way of modelling and analysing the interactive physical characteristics of devices from TV remotes to electric kettles, filling the gap between foam prototypes and code.

Sketches or CAD allow you to model the static physical form of the device, and this can be realised in moulded blue foam, 3D printing or cardboard mock-ups.  Prototypes of the internal digital behaviour can be produced using tools such as Adobe Animate, proto.io or Atomic, or hand-coded using standard web-design tools.  The digital behaviour can also be modelled using industry standard techniques such as UML.

  

Physigrams allow you to model the ‘device unplugged’ – the pure physical interaction potential of the device: the ways you can interact with buttons, dials and knobs, how you can open, slide or twist movable elements.  These physigrams can be attached to models of the digital behaviour to understand how well the physical and digital design complement one another.

Physigrams were developed some years ago as part of the DEPtH project, a collaboration between product designers at Cardiff School of Art and Design and computer scientists at Lancaster University. Physigrams have been described in various papers over the years.  However, with TouchIT, our book on physicality and design, (eventually!) reaching completion and due out next year, it felt that physigrams deserved a home of their own on the web.

The physigram micro-site, part of physicality.org, includes descriptions of physical interaction properties, a complete key to the physigram notation, and many examples of physigrams in action, from light switches to complete control panels and novel devices.

Timing matters!

How long is an instant? The answer, of course, is ‘it depends’, but I’ve been finding it fascinating playing on the demo page for AngularJS tooltips and seeing what feels like ‘instant’ for a tooltip.

The demo allows you to adjust the md-delay property so you can change the delay between hovering over a button and the tooltip appearing, and then instantly see what that feels like.

Try it yourself: set a time and then either move over the button as if you were about to click it or were wondering what it does, or simply pass over it as if you were moving your pointer to another part of the page.
 
If the delay is too short (e.g. 0), the tooltip flickers as you simply pass over the icon.
 
If you want it as a backup for when someone forgets the action, then something longer, about a second, is fine – the aim is to be there only if the user has a moment of doubt.
 
However, I was fascinated by how long the delay needed to be to feel ‘instant’ and yet not appear by accident.
 
For me, a delay of about 150 ms is not noticeable, whereas at 200 ms I start to notice – not an annoying delay, but a very slight sense of unresponsiveness.

Solr Rocks!

After struggling with large FULLTEXT indexes in MySQL, Solr comes to the rescue: 16 million records ingested in 20 minutes – wow!

One small gotcha was the security classes, which have obviously moved since the documentation was written (see fix at the end of the post).

For web apps I live off MySQL, albeit nowadays often wrapped with my own NoSQLite libraries to do Mongo-style databases over the LAMP stack. I’d also recently had a successful experience using MySQL FULLTEXT indices with a smaller database (tens of thousands of records) for the HCI Book search.  So when I wanted to index 16 million book titles with their author names from OpenLibrary, I thought I might as well have a go.

For some MySQL table types, the normal recommendation used to be to insert records without an index and add the index later.  However, in the past I have had a very bad experience with this approach, as there doesn’t appear to be a way to tell MySQL to go easy with the process – I recall the disk being absolutely thrashed and Fiona having to restart the web server 🙁

Happily, Ernie Souhrada reports that for MyISAM tables incremental inserts with an index are no worse than bulk insert followed by adding the index.  So I went ahead and set off a script adding batches of 10,000 records at a time, with small gaps ‘just in case’.  The ‘just in case’ was definitely the case: 16 hours later I’d barely managed a million records and MySQL was getting slower and slower.

I cut my losses, tried an upload without the FULLTEXT index and 20 minutes later that was fine … but no way did I dare run that ‘CREATE FULLTEXT’!

In my heart I knew that Lucene/Solr was the right way to go.  These are designed for search-engine performance, but I dreaded the pain of trying to install and come up to speed with yet another system that might not end up any better.

However, I bit the bullet, and my dread was utterly unfounded.  Fiona got the right version of Java running, and within half an hour of downloading Solr I had it up and running with one of the examples.  I then tried experimental ingests with small chunks of the data: 1000 records, 10,000 records, 100,000 records, a million records … Solr lapped it up, utterly painless.  The only fix I needed was because my tab-separated records had quote characters that needed mangling.

So, a quick split into million-record chunks (I couldn’t bring myself to do a single multi-gigabyte POST … but maybe that would have been OK!), set the ingest going and 20 minutes later – hey presto, 16 million full-text-indexed records 🙂  I then realised I’d forgotten to give fieldnames, so the ingest had taken the first record’s values as a header line.  No problem: just clear the database and re-ingest … at 20 minutes for the whole thing, who cares!
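For what it’s worth, the pre-processing can be sketched in a few lines of Python; this is an illustrative reconstruction rather than the script I actually used (the field names are invented, and the quote-doubling assumes Solr’s default CSV encapsulator):

```python
def tsv_to_solr_csv(lines, fieldnames):
    """Convert tab-separated records to CSV for Solr's CSV update handler.
    Embedded double quotes are doubled and every field is quoted (the
    'mangling' my records needed), and an explicit header line supplies
    the field names so the first record is not swallowed as a header."""
    rows = [",".join(fieldnames)]
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        rows.append(",".join('"' + f.replace('"', '""') + '"' for f in fields))
    return "\n".join(rows)

def chunk(records, size=1_000_000):
    """Yield successive chunks so each POST stays a manageable size."""
    for i in range(0, len(records), size):
        yield records[i:i + size]
```

Each chunk can then be POSTed to the core’s update handler in turn, rather than risking one enormous request.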

As noted, there was one slight gotcha.  The Securing Solr section of the Solr Reference Guide explains how to set up the security.json file.  This kept failing until I realised it was failing to find the classes solr.BasicAuthPlugin and solr.RuleBasedAuthorizationPlugin (solr.log is your friend!).  After a bit of listing of the contents of jars, I found that these are now in org.apache.solr.security.  I also found that the JSON parser struggled a little with indents … I think maybe tab characters, but after explicitly selecting and re-typing spaces – yay! – I have a fully secured Solr instance with 16 million book titles – wow 🙂

This is my final security.json file (actual credentials obscured, of course!):

{
  "authentication":{
    "blockUnknown": true,
    "class":"org.apache.solr.security.BasicAuthPlugin",
    "credentials":{
      "tom":"blabbityblabbityblabbityblabbityblabbityblo= blabbityblabbityblabbityblabbityblabbityblo=",
      "dick":"blabbityblabbityblabbityblabbityblabbityblo= blabbityblabbityblabbityblabbityblabbityblo=",
      "harry":"blabbityblabbityblabbityblabbityblabbityblo= blabbityblabbityblabbityblabbityblabbityblo="}
  },

  "authorization":{"class":"org.apache.solr.security.RuleBasedAuthorizationPlugin"}
}

End of an era

A few weeks ago, I gave a panel presentation at the ARMA conference in Liverpool – however, this was my last official duty with a Talis hat on.

Talis is a small employee-owned company, and maintaining a research strand has been far-sighted, but unusual. After a period focusing more on the development of new products, Talis is shifting to a phase in which every resource should be focused on delivery … and hence long-term research, and my own role in the company, has had to cease.

Talis has been a wonderful place to work over the past seven years, both for the individuals there and, crucially, for the company atmosphere, which combines the excitement of a start-up with real care and a sense of community.  So if you spot posts advertised there, it is a great place to be.

Talis was my principal regular income, as my academic role at Birmingham has only been 20%, so long-term I need to think about whether I should increase my academic time again, or do other things. I have been very fortunate in never previously having had a time without regular income, so this is a new experience for me, although, of course, a common one.

Over the past few years, I have kept some time ‘unwaged’ for other projects (such as walking round Wales!) and occasional consultancy, and my to-do list is far from empty, so this summer and autumn I am intending to spend more time writing (yes, TouchIT will be finished, and the Alan Walks Wales blog edited into a book), picking up some of the many half-finished coding projects, and doing videoing for the Interaction Design Foundation.

value for money in research – excellence or diversity

Government research funding policy in many countries, including the UK, has focused on centres of excellence, putting more funding into a few institutions and research groups who are creating the most valuable outputs.

Is this the best policy, and does evidence support it?

From “Big Science vs. Little Science: How Scientific Impact Scales with Funding”

I’m prompted to write as Leonel Morgado (Facebook, web) shared a link to a 2013 PLOS ONE paper “Big Science vs. Little Science: How Scientific Impact Scales with Funding” by Jean-Michel Fortin and David Currie.  The paper analyses work funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), looking at size of grant vs. research outcomes.  The paper demonstrates diminishing returns: large grants produce more research outcomes than smaller grants, but less per dollar spent.  That is, concentrating research funding appears to reduce the overall research output.
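A toy calculation makes the diminishing-returns point concrete. The sub-linear exponent below is purely illustrative (Fortin and Currie fit the actual scaling empirically from the NSERC data):

```python
def total_impact(grants, exponent=0.8):
    """Toy model: each grant's impact scales as grant**exponent;
    any exponent below 1 means diminishing returns per dollar."""
    return sum(g ** exponent for g in grants)

budget = 10_000_000
concentrated = [budget]             # one flagship grant
spread = [budget // 100] * 100      # the same budget as 100 smaller grants
```

Under any exponent below 1 the spread portfolio wins; with the illustrative 0.8 it yields about two and a half times the total impact of the single large grant.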

Of course, those obtaining research grants have all been through a highly competitive process, so the NSERC results may simply reflect the fact that we are already looking at the very top tier of research projects.

However, a report many years ago reinforces this story, and suggests it holds more broadly.

Sometime in the mid-to-late 1990s HEFCE, the UK higher education funding agency, did a study in which they ranked all universities against three simple research output metrics1. One of the outputs was the number of PhD completions and another was industrial research income (arguably an input rather than an output!), but I forget the third.

Not surprisingly Oxford and Cambridge came top of the list when ranked by aggregate research output.

However, the spreadsheet also included the amount of research money HEFCE paid into each university and a value-for-money column.

When ranked by value for money, the table was nearly reversed, with Oxford and Cambridge at the very bottom, while Northampton University (not typically known as the peak of the university excellence ratings) was at the top. That is, HEFCE got more research output per pound spent at Northampton than anywhere else in the UK.

The UK REF2014 used an extensive and time-consuming peer-review mechanism to rank the research quality of each discipline in each UK university-level institution, on a 1* to 4* scale (4* being best). Funding is heavily ramped towards 4* (in England the weighting is 10:3:0:0 for 4*:3*:2*:1*). As part of the process, comprehensive funding information was produced for each unit of assessment (typically a department), including UK government income, European projects, charity and industrial funding.
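To see how steep that 10:3:0:0 ramp is, here is a small worked sketch; the two quality profiles are hypothetical, invented purely to illustrate the weighting:

```python
# England's post-REF2014 funding weighting is 10:3:0:0 for 4*:3*:2*:1*,
# so only 4* and 3* outputs attract any funding at all.
WEIGHTS = {4: 10, 3: 3, 2: 0, 1: 0}

def funding_weight(profile):
    """profile maps star level -> fraction of outputs at that level."""
    return sum(WEIGHTS[star] * frac for star, frac in profile.items())

# hypothetical profiles, not real institutions
elite = {4: 0.5, 3: 0.4, 2: 0.1, 1: 0.0}   # half the work rated 4*
solid = {4: 0.2, 3: 0.5, 2: 0.3, 1: 0.0}   # strong, but less stellar
ratio = funding_weight(elite) / funding_weight(solid)
```

Here the ‘elite’ profile attracts roughly 1.8 times the funding of the ‘solid’ one, even though the gap in the underlying quality profiles is far smaller; a profile with no 4* or 3* work attracts nothing at all.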

So, we have an officially accepted assessment of research outcomes (that is, the government allocates funds against it!), and also of the income that generated them.

At a public meeting following the 2014 exercise, I asked a senior person at HEFCE whether they planned to take the two and create a value for money metric, for example, the cost per 4* output.

There was a distinct lack of enthusiasm for the idea!

Furthermore, my analysis of REF measures vs citation metrics suggested that this very focused official funding model was further concentrated by an almost unbelievably extreme bias towards elite institutions in the grading: apparently equal work in terms of external metrics was ranked nearly an order of magnitude higher for ‘better’ institutions, leading to funding around 2.5 times higher for some elite universities than objective measures would suggest.


From “REF Redux 4 – institutional effects“: ‘winners’ are those with 25% or more above what metrics would estimate, ‘losers’ those with 25% or more below.

In summary, the implications both from Fortin and Currie’s PLOS ONE paper and from the 1990s HEFCE report suggest that spreading funding more widely would increase overall research outcomes, yet both official policy and implicit review bias do the opposite.

  1. I recall reading this, but it was before the days when I rolled everything over on my computer, so I can’t find the exact reference. If anyone recalls the name of the report, or has a copy, I would be very grateful.[back]

Students love digital … don’t they?

In the ever-accelerating rush to digital delivery, is this actually what students want or need?

Last week I was at the Talis Insight conference. As in previous years, this was a mix of sessions focused on those using or thinking of using Talis products, with lots of rich experience talks. However, about half of the time was also dedicated to plenaries about the current state and future prospects of technology in higher education; so it is well worth attending (it is free!) whether or not you are a Talis user.

Speakers this year included Bill Rammell, now Vice-Chancellor at the University of Bedfordshire, but who was also Minister of State for Higher Education during the second Blair government, and during that time responsible for introducing the National Student Survey.

Another high profile speaker was Rosie Jones, who is Director of Library Services at the Open University … which operates somewhat differently from the standard university library!

However, among the VCs, CEOs and directors of this and that, it was the two most junior speakers who stood out for me. Eva Brittin-Snell and Alex Davie are two SAGE student scholars from Sussex. As SAGE scholars they have engaged in research on student experience amongst their peers, speak at events like this and maintain a student blog, which includes, amongst other things, the story of how Eva came to buy her first textbook.

Eva and Alex’s talk was entitled “Digital through a student’s eyes” (video). Many of the talks had been about the rise of digital services and especially the eTextbook. Eva and Alex were the ‘digital natives’, so surely this was joy to their ears. Surprisingly not.

Alex, in her first year at university, started by alluding to the previous speakers, the push for book-less libraries and the general digital spiritus mundi, but offered an alternative view. Students were annoyed at being asked to buy books for a course where only a chapter or two would be relevant; they appreciated the convenience of an eBook when core textbooks were permanently out on loan and instantly recalled once one got hold of them. However, she said students still preferred physical books, as they are far more usable (even if heavy!) than eBooks.

Eva, a fourth year student, offered a different view. “I started like Aly”, she said, and then went on to describe her change of heart. However, it was not a revelation of the pedagogical potential of digital, more that she had learnt to live through the pain. There were clear practical and logistic advantages to eBooks, there when and where you wanted, but she described a life of constant headaches from reading on-screen.

Possibly some of this is due to the current poor state of eBooks that are still mostly simply electronic versions of texts designed for paper. Also, one of their student surveys showed that very few students had eBook readers such as Kindle (evidently now definitely not cool), and used phones primarily for messaging and WhatsApp. The centre of the student’s academic life was definitely the laptop, so eBooks meant hours staring at a laptop screen.

However, it also reflects a growing body of work showing the pedagogic advantages of physical note-taking, the potential developmental damage of early tablet and smartphone use, and industry figures showing that across all areas eBook sales are dropping and physical book sales increasing. In addition there is evidence that children and teenagers prefer physical books, and public library use by young people is growing.

It was also interesting that both Alex and Eva complained that eTextbooks were not ‘snappy’ enough. In the age of Tweet-stream presidents and 5-minute attention spans, ‘snappy’ was clearly the students’ term of choice to describe their expectation of digital media. Yet this did not represent a loss of their attention per se, as this was clearly not perceived as a problem with physical books.

… and I am still trying to imagine what a critical study of Aristotle’s Poetics would look like in ‘snappy’ form.

There are two lessons from this for me. First, what would a ‘digital first’ textbook look like? Does it have to be ‘snappy’, or are there ways to maintain attention and depth of reading in digital texts?

The second picks up on issues in the co-authored paper I presented at NordiCHI last year, “From intertextuality to transphysicality: The changing nature of the book, reader and writer“, which, amongst other things, asked how we might use digital means to augment the physical reading process, offering some of the strengths of eBooks, such as the ability to share annotations, while retaining a physical reading experience.  Also maybe some of the physical limitations of availability could be relieved, for example, if university libraries worked with bookshops to offer student buy-and-return schemes alongside borrowing?

It would certainly be good if students did not have to learn to live with pain.

We have a challenge.