Everything feels easy

Today looked like a good Tiree Ultra day, with 40 mile an hour winds (the odd gust at 50) and occasional shafts of sunshine between driving rain!

So buoyed by knowledge from three weeks ago that I could do it, I took my first run since the ultra.

My left leg is still feeling a little gammy, but with a 40 mph wind at my back I fair sailed along – until I turned round.  Progress on the return leg was … well suffice say I could have walked faster.

I have always avoided running in the rain, but after the ultra I knew I could do it and it wasn’t so bad.  I also had a new waterproof that I’d got for the ultra – good equipment really does help.

There is something liberating about that “it can’t be worse than …” feeling.

When I did the first Tiree Ultramarathon in 2014, it was a year after I’d walked around Wales.  If I got a pain whilst walking there was always the fear that it would be worse the next day, or that it would be the thing that stopped me entirely.

Just over 2/3 of the way round the 2014 ultra I began to get some pain in my right leg.  I’d pulled the Achilles tendon on that ankle a few years before, and so I was a little worried that it would go again.  But I thought, “only 10 miles to go, and it’s just one day. I don’t have to run again tomorrow and the next day; so what if I’m hobbling for a few weeks.”

After walking 1000 miles day on day, a single day and a mere 35 miles was suddenly less daunting.

Now, knowing I could endure a whole day running with horizontal rain stinging my cheeks, well what of a couple of miles in heavy drizzle and 50 mile an hour winds …

After Tiree Ultra 2017, everything feels easy.

Tiree Ultra 2017 – what a difference a week makes

Last Sunday I completed my third Tiree Ultramarathon … and definitely the wettest, windiest and boggiest!

However, this Sunday what a difference …

The ultra circuit follows the coast of Tiree, taking in almost all of the beaches, but also includes some road sections as well as off-road grass sward and boggy moor. There is relatively little height gain, but Will Wright tries to organise the route to ‘make the most’ of the hills there are.

Previous years have been wonderful weather, light breeze and some sun, enough to be pleasant, but not enough to cause heat problems. However, the fates had been saving their fury, and this year the heavens opened and Odysseus let the western winds loose, gathering water from the warm Atlantic and flinging it at us in horizontal sheets.

It was the first time I had ever run in the rain, so it was, well, maybe not a baptism of fire, but certainly a dramatic introduction. I had recently bought a waterproof for running in, but it still had its label on, as each wet August day I thought, “well, maybe run on a brighter day”. Although everyone says that the right equipment helps, I sort of only half believed it – however, I was amazed at how even driving rain was not a problem.

I only run the Tiree Ultra in September and sometimes the Tiree half marathon in May. I always mean to keep on running in between, but then I forget, or I am too busy – so many excuses. So, in previous years I haven’t got round to any running until a month before the ultra, doubling my distance each week – far from the recommended 10–15% a week increase! To be honest I’ve been very lucky not to have injured myself.

This year I decided to break my habit and be well prepared, so I started a whole two months in advance. I wondered whether this had been wise, as I felt I’d peaked at the end of July and seemed to have been going downhill ever since.  However, this year I am definitely hobbling less afterwards, and I managed to run every inch of road and beach, with just a few walking sections over bog. This said, when the wind gusted into the mid-to-high thirties (mph) in my face, I would almost certainly have walked faster than I ran.  Indeed on Gott Bay, as I ran (very slowly) into the wind, another runner was power-walking just behind me, sheltering in my lee.

One thing I noticed while running was a subtle change in psychology.  After about 10 miles, as pain and exhaustion kicked in, I was aware of myself occasionally wondering if there was any way I could bow out without losing too much face, and then, not that many miles later, I caught myself thinking “next year I’ll ….” – at that point I knew I was OK!  However, the exhaustion must still have been showing in my face at mile 17, as the marshal at Balephetrish said, as we came off the beach, “you look as if you could do with a hug”.

This year two off-island friends, Albrecht and Alun, also came to Tiree for the Ultra, which was wonderful.  Being a good host I of course let them finish ahead of me by an hour or so 😉

Albrecht has already booked for next year, but I’m not sure Alun is convinced!

However, the weather can only be better.


running on the verge

Tiree Fitness Facebook – Photo by Alan Millar

In a week and a half’s time I’ll be joining about 250 others on the Tiree Ultramarathon, running around the edge of Tiree, which is itself on the Atlantic edge of Scotland.   Some of this will be on beach and moor, but some along single-track roads, where you often have to step onto the grassy verge as cars go by.

Running on the verge has its own challenges which I’m sure are shared by many rural areas as well as Tiree.  For those coming to the Tiree Ultra or running (or cycling) in rural areas, here’s my short guide to the hazards of the verge.

on narrow roads do stop – Some roads do have space for a car to pass a runner or cyclist, but it can be close, especially if you are a little tired and ‘wandering’ a bit as you run.  So it is usually best to stop … and you get a moment’s breather 😉

beware the ragged tarmac edge – It is tempting to just squeeze to the left and keep going, but the tarmac often peters out.  This is worst if you are cycling, as the wheel can slip off the road and get trapped in the furrow between tarmac and grass (cyclists have ended up in hospital!), but you can also trip when running … and you don’t want to fall into the path of the car that is passing.

Tiree Fitness Facebook photos – verges are not the only road hazard

running on the verge – I know many will ignore this, but just don’t.  Verges seem wide, tamer than full off-road terrain, and well within the capabilities of an off-road bike.  However, there are often drainage channels hidden by the long grass – these can be a foot or more deep and all but invisible.  Even when there isn’t a deep drainage channel running parallel to the road, there are often smaller drainage channels running outwards from the road; these are typically only a few inches deep, but seemingly designed to trip you up.  The one possible exception is where someone has mown the verge outside their house, but even then be careful of the cross-channels, as they often aren’t obvious even on mown grass.

stepping onto the verge – At the risk of sounding like your granny: still take care!  I have stepped off the road and, even looking down at the ground as I did so, my foot has disappeared into a channel and I’ve almost sprained my ankle … and that was standing still, not running.  On the bike be even more careful: you stop, put your outer foot onto what you believe to be grass and … on a bike there is little you can do apart from topple full head over heels … and, yes, I know because I have done it.

standing on the verge – Will it never stop!  Yep, even standing has its dangers.  On Tiree it is normal to wave to those passing, friend and stranger alike.  However, if you are a little tired, twisting round can put you off balance.  Don’t feel embarrassed to put a hand on a fence post to keep yourself sure-footed; better that than stumbling back into the path of that nicely waving driver.

stepping off the verge – Do take a peek back down the road before stepping back onto the tarmac.  Tiree is windy, and when the wind is coming from in front it is hard to hear cars from behind.  As a car passes you it is easy to just step straight back out, but often there is a second car driving in convoy, especially when the road has had a lot of obstacles (such as runners), so that cars catch up with one another.

Tiree Fitness Facebook photos

… and then if you survive the verges

… there is just Dun Mor to climb …

Solr Rocks!

After struggling with large FULLTEXT indexes in MySQL, Solr comes to the rescue, 16 million records ingested in 20 minutes – wow!

One small gotcha was the security classes, which have obviously moved since the documentation was written (see the fix at the end of the post).

For web apps I live off MySQL, albeit nowadays often wrapped with my own NoSQLite libraries to do Mongo-style databases over the LAMP stack. I’d also recently had a successful experience using MySQL FULLTEXT indices with a smaller database (tens of thousands of records) for the HCI Book search.  So when I wanted to index 16 million book titles with their author names from OpenLibrary I thought I might as well have a go.
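For anyone who hasn’t tried them, MySQL full-text indices only take a couple of statements – a minimal sketch (table and column names are hypothetical, and it assumes the mysql-connector-python package):

# Minimal sketch: MySQL full-text index plus natural-language search.
# Table/column names are hypothetical; needs mysql-connector-python.
import mysql.connector

db = mysql.connector.connect(user="user", password="pass", database="books")
cur = db.cursor()

# One-off: add a full-text index over title and author
# (MyISAM, or InnoDB from MySQL 5.6 onwards).
cur.execute("CREATE FULLTEXT INDEX ft_title_author ON book_titles (title, author)")

# Then search it.
cur.execute(
    "SELECT title, author FROM book_titles "
    "WHERE MATCH(title, author) AGAINST (%s IN NATURAL LANGUAGE MODE)",
    ("formal methods interactive systems",),
)
for title, author in cur.fetchmany(10):
    print(title, "-", author)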

For some MySQL table types, the normal recommendation used to be to insert records without an index and add the index later.  However, in the past I have had a very bad experience with this approach as there doesn’t appear to be a way to tell MySQL to go easy with this process – I recall the disk being absolutely thrashed and Fiona having to restart the web server 🙁

Happily, Ernie Souhrada reports that for MyISAM tables incremental inserts with an index are no worse than a bulk insert followed by adding the index.  So I went ahead and set off a script adding batches of 10,000 records at a time, with small gaps ‘just in case’.  The ‘just in case’ was definitely the case: 16 hours later I’d barely managed a million records and MySQL was getting slower and slower.

I cut my losses, tried an upload without the FULLTEXT index and, 20 minutes later, that was fine … but no way did I dare run that ‘CREATE FULLTEXT’!

In my heart I knew that Lucene/Solr was the right way to go.  These are designed for search-engine performance, but I dreaded the pain of trying to install and come up to speed with yet another system that might not end up any better in the end.

However, I bit the bullet, and my dread was utterly unfounded.  Fiona got the right version of Java running and then within half an hour of downloading Solr I had it up and running with one of the examples.  I then tried experimental ingests with small chunks of the data: 1000 records, 10,000 records, 100,000 records, a million records … Solr lapped it up, utterly painless.  The only fix I needed was because my tab-separated records had quote characters that needed mangling.

So, a quick split into million-record chunks (I couldn’t bring myself to do a single multi-gigabyte POST … but maybe that would have been OK!), set the ingest going and 20 minutes later – hey presto – 16 million full-text-indexed records 🙂  I then realised I’d forgotten to give fieldnames, so the ingest had taken the first record’s values as a header line.  No problem: just clear the database and re-ingest … at 20 minutes for the whole thing, who cares!
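For the record, the ingest is just HTTP POSTs to Solr’s CSV update handler – something like this sketch (core name and field names are illustrative; separator, fieldnames and header are standard parameters of the handler):

# Sketch: ingest tab-separated chunks via Solr's CSV update handler.
# Core name and field names are illustrative.
import glob
import requests

SOLR_UPDATE = "http://localhost:8983/solr/openlib/update"

for chunk in sorted(glob.glob("titles-chunk-*.tsv")):
    with open(chunk, "rb") as f:
        r = requests.post(
            SOLR_UPDATE,
            params={
                "commit": "true",
                "separator": "\t",             # tab-separated records
                "fieldnames": "title,author",  # supply these, or the first record
                "header": "false",             # ... gets swallowed as a header line!
            },
            headers={"Content-Type": "text/csv"},
            data=f,
        )
    r.raise_for_status()
    print(chunk, "->", r.status_code)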

As noted, there was one slight gotcha.  The Securing Solr section of the Solr Reference Guide explains how to set up the security.json file.  This kept failing until I realised it was failing to find the classes solr.BasicAuthPlugin and solr.RuleBasedAuthorizationPlugin (solr.log is your friend!).  After a bit of listing of the contents of jars, I found that these are now in org.apache.solr.security.  I also found that the JSON parser struggled a little with indents … I think maybe tab characters, but after explicitly selecting and re-typing spaces – yay! – I have a fully secured Solr instance with 16 million book titles – wow 🙂

This is my final security.json file (actual credentials obscured, of course!):

{
  "authentication":{
    "blockUnknown": true,
    "class":"org.apache.solr.security.BasicAuthPlugin",
    "credentials":{
      "tom":"blabbityblabbityblabbityblabbityblabbityblo= blabbityblabbityblabbityblabbityblabbityblo=",
      "dick":"blabbityblabbityblabbityblabbityblabbityblo= blabbityblabbityblabbityblabbityblabbityblo=",
      "harry":"blabbityblabbityblabbityblabbityblabbityblo= blabbityblabbityblabbityblabbityblabbityblo="},
     },

  "authorization":{"class":"org.apache.solr.security.RuleBasedAuthorizationPlugin"}
}
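For a standalone instance the security.json file sits in $SOLR_HOME alongside solr.xml (for SolrCloud it goes into ZooKeeper), and from then on every request needs basic auth – a quick sanity check from Python (core name and credentials illustrative):

# Check the secured instance answers when given credentials.
import requests

r = requests.get(
    "http://localhost:8983/solr/openlib/select",
    params={"q": "title:yeast"},
    auth=("tom", "not-really-blabbity"),   # HTTP basic auth, per security.json
)
r.raise_for_status()
print(r.json()["response"]["numFound"], "matches")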

why is the wind always against you? part 2 – side wind

In the first part of this two-part post, we saw that cycling into the wind takes far more additional effort than a tail wind saves.

However, Will Wright‘s original question, “why does it feel as if the wind is always against you?” was not just about head winds, but the feeling that when cycling around Tiree, while the angle of the wind is likely to be in all sorts of directions, it feels as though it is against you more than with you.

Is he right?

So in this post I’ll look at side winds, and in particular start with wind dead to the side, at 90 degrees to the road.

Clearly, a strong side wind will need some compensation, perhaps leaning slightly into the wind to balance, and on Tiree with gusty winds this may well cause the odd wobble.  However, I’ll take the best-case scenario and assume a completely constant wind with no gusts.

There is a joke about the engineer who, when asked a question about giraffes, begins, “let’s first assume a spherical giraffe”.  I’m not going to make Will + bike spherical, but I will assume that the air drag is similar in all directions.

Now my guess is that, given the way Will is bent low over his handlebars, he may well actually present a larger area to a side wind than to one from in front.  Also I have no idea about the complex ways the moving spokes behave as the wind blows through them, although I am aware that a well-designed turbine absorbs a fair proportion of the wind, so I would not be surprised if the wheels added a lot of side drag too.

If the drag for a side wind is indeed bigger than to the front, then the following calculations will be worse; so effectively working with a perfectly cylindrical Will is going to be a best case!

To make calculations easy I’ll have the cyclist going at 20 miles an hour, with a 20 mph side wind also.

When you have two speeds at right angles, you can essentially ‘add them up’ as if they were sides of a triangle.  The resultant wind feels as if it is at 45 degrees, and approximately 30 mph (to be exact it is 20 × √2, so just over 28 mph).
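In symbols, it is just Pythagoras on the speeds:

apparent wind = √( 20² + 20² ) = 20 × √2 ≈ 28.3 mph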

Recalling the squaring rule, the force is proportional to 30 squared, that is 900 units of force acting at 45 degrees.

In the same way as we added up the wind and bike speeds to get the apparent wind at 45 degrees, we can break this 900-unit force at 45 degrees into a side force and a forward drag. Using the sides-of-the-triangle rule, we get a side force and forward drag of around 600 units each.

For the side force I’ll just assume you lean into it (and hope that you don’t fall off if the wind gusts!); so let’s just focus on the forward force against you.

If there were no side wind the force from the air drag would be due to the 20 mph bike speed alone, so would be (squaring rule again) 400 units.  The side wind has increased the force against you by 50%.  Remembering that more than three quarters of the energy you put into cycling goes into overcoming air drag, that is around 30% additional effort overall.

Turned into head speed, this is equivalent to the additional drag of cycling into a direct head wind of about 4 mph (I made a few approximations, the exact figure is 3.78 mph).

This feels utterly counterintuitive, that a pure side wind causes additional forward drag!  It perhaps feels even more counterintuitive if I tell you that in fact the wind needs to be about 10 degrees behind a pure side wind before it actually helps.

There are two ways to understand this.

The first is plain physics/maths.

For very small objects (around a hundredth of a millimetre) the air drag is directly proportional to the speed (linear).  At this scale, when you divide the force back into its components ahead and to the side, they are exactly the same as if you had looked at the side wind and the cycle speed independently.  So if you are a cyclist the size of an amoeba, side winds don’t feel like head winds … but then that is probably the least of your worries.

For ordinary sized objects, the squaring rule (quadratic drag) means that after you have combined the forces, squared them and then separated them out again, you get more than you started with!

The second way to look at it, which is not the full story, but not so far from what happens, is to consider the air just in front of you as you cycle.

You’ll know that cyclists often try to ride in each other’s slipstream to reduce drag, sometimes called ‘drafting’.

The lead cyclist is effectively dragging the air behind, and this helps the next cyclist, and that cyclist helps the one after.  In a race formation, this reduces the energy needed by the following riders by around a third.

In addition you also create a small area in front where the air is moving faster, almost like a little bubble of speed.  This is one of the reasons why even the lead cyclist gains from the followers, albeit much less (one site estimates 5%).  Now imagine adding the side wind; that lovely bubble of air is forever being blown away meaning you constantly have to speed up a new bubble of air in front.

I did the above calculations for an exact side wind at 90 degrees to make the sums easier. However, you can work out precisely how much additional force the wind causes for any wind direction, and hence how much additional power you need when cycling.
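If you want to play with the figures yourself, here is a minimal sketch of that calculation (same ‘cylindrical Will’ assumptions: drag proportional to the square of the apparent wind speed, the same in all directions, with only the forward component counted as effort):

# Extra forward drag for a 20 mph cyclist in a 20 mph wind at any angle.
# Drag ~ (apparent wind speed)^2, equal in all directions ("cylindrical Will").
from math import cos, hypot, radians, sin, sqrt

BIKE, WIND = 20.0, 20.0   # mph

def forward_drag(angle_deg):
    """Forward drag; 0 = pure head wind, 90 = pure side, 180 = pure tail."""
    wx = WIND * cos(radians(angle_deg))   # component of wind blowing against you
    wy = WIND * sin(radians(angle_deg))   # side component
    rel_x = BIKE + wx                     # apparent wind in the rider's frame
    return hypot(rel_x, wy) * rel_x       # forward share of |v|^2

STILL_DAY = BIKE * BIKE                   # 400 'units' of drag with no wind

for angle in (0, 45, 90, 100, 135, 180):
    extra = forward_drag(angle) - STILL_DAY
    print(f"wind {angle:3d} degrees off head-on: extra drag {extra:+7.1f} units")

# Pure side wind: forward drag ~ 566 units vs 400, and the equivalent
# direct head wind comes out at sqrt(566) - 20 ~ 3.8 mph, as in the post.
print("equivalent head wind:", round(sqrt(forward_drag(90)) - BIKE, 2), "mph")
# The breakeven (extra drag = 0) falls around 102 degrees with these
# assumptions, i.e. roughly 10-12 degrees behind a pure side wind.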

Here is a graph showing that additional power needed, ranging from a pure head wind on the right, to a pure tail wind on the left (all for a 20 mph wind).  For the latter the additional force is negative – the wind is helping you. However, you can see that the breakeven point is about 10 degrees behind a pure side wind (the green dashed line).  Also evident (depressingly) is that the area to the left – where the wind is making things worse – is a lot larger than the area to the right, where it is helping.

… and if you aren’t depressed enough already, most of my assumptions were ‘best case’.  The bike almost certainly has more side drag than head drag; you will need to steer slightly into the wind to avoid being blown across the road; and, as noted in the previous post, you will cycle more slowly into a head wind, so spend more time with it against you.

So in answer to the question …

why does it feel as if the wind is always against you?

… because most of the time it is!

why is the wind always against you? part 1 – head and tail winds

Sometimes it feels like the wind is always against you.

Is it really?

I’ve just been out for a run.  It is not terribly windy today by Tiree standards – the Met Office reports the speed at 18mph from the north west – but it was enough to feel as I ran, and certainly the gritted teeth of the cyclists going past the window suggest it is plenty windy for them.

As I was running I remembered a question that Will Wright once asked me, “why does it feel as if the wind is always against you?”

Now Will competes in Iron Man events, and is behind Tiree Fitness, which organises island keep-fit activities, the annual Tiree 10k & half marathon, and the Ultramarathon (that I’m training for now).  In other words Will is in a different league to me … but still he feels the wind!

Will was asking about cycling rather than running, and I suspect that the main effect of a head wind for a runner is simply the way it knocks the breath out of you, rather than actual wind resistance.  That is, as with most things in exercise, the full story is a mixture of physiology, psychology and physics.

For this post I’ll stick to the ‘easy’ cases when the wind is dead in front or behind you.  I’ll leave sidewinds to a second post as the physics for this is a little more complicated, and the answer somewhat more surprising.

In fact today I ran to and fro along the same road, although its angle to the wind varied.  For the purposes of this post I’ll imagine straightening it out, and having it face directly along the direction of the wind, so that running or cycling one way the wind is directly behind you, and the other way it is directly in front.

running with the wind

To make the sums easier I’ll make the wind speed 15 mph and have me run at 5mph.

On the outward leg the wind is behind me.  I am running at 5mph, the wind is coming at 15mph, so if I had a little wind gauge as I ran it would register a 10mph tail wind.

When I turn into the wind I am now running at 5mph into a 15mph head wind, so the apparent wind speed for me is 20mph.

So half the time the wind is helping me to the tune of 10mph, half the time resisting me by 20mph, so surely that averages out as 5mph resistance for the whole journey, the same as if I were just running at 5mph on a still day?

average apparent wind (?) = (  –10 * 4 miles  +  +20 * 4 miles ) / 8 miles = 5

Unfortunately wind resistance does not average quite like that!

Wind resistance increases with the square of your speed.  So a 10mph tail wind creates 100 units of force to help you, whereas a 20mph head wind resists you with 400 units of force, four times as much.  That is, it is like one person pushing you from behind for half the course, but four people holding you back on the other half.

It is this force, based on the squared speed, that it makes more sense to average:

average resistance (wind) = (  –100 * 4 miles  +  +400 * 4 miles ) / 8 miles = 150

Compare this to the effects of running on a still day at 5mph.

average resistance (no wind) = 25  (5 squared)

The average wind resistance over the course is six times as much even though half the distance is into the wind and half the distance is away from it.

It really is harder!

In fact, for a runner, wind resistance (physics) is probably not the major effect on speed, and despite the wind my overall time was not significantly slower than on a still day.  The main effects of the wind are probably the ‘knocking the breath out of you’ feeling and the way the head wind affects your stride (physiology).  Of course both of these make you more aware of the times the wind is in your face and hence your perception of how long this is (psychology).

cycling hard

For a cyclist wind resistance is a far more significant issue1.  Think about Olympic sprinters who run upright compared with cyclists who bend low and even wear those Alien-like cycling helmets to reduce drag.

This is partly due to the different physical processes: for example, on a bike on a still day, the bike will keep on going forward even if you don’t pedal, whereas if you don’t keep moving your legs while running you get nowhere.

It is also partly due to the different speeds.  Even Usain Bolt only manages a bit over 20mph and that for just 100 metres, and for long-distance runners this drops to around 12 mph.  Equivalent cycling events are twice as fast and even a moderately fit cyclist could compete with Usain Bolt.

So let’s imagine our Tiree cyclist, grimacing as they head into the wind.

I’m going to assume they are cycling at 15mph.

If it were a still day the air resistance would be entirely due to their own forward speed of 15mph, hence (squared remember) 225 units of force against them.

However, with a 15mph wind, when they are cycling with the wind there is no net air flow, and the only effort needed is to overcome the internal resistance of the chain and gears, and the rubber on the road.  In contrast, when cycling against the wind, the total air resistance is equivalent to 30mph. Recalling the squaring rule, that is 900 units of resistance, four times as high as on a still day.

Even averaging in the easy leg, you need twice as much effort to overcome the air resistance.  No wonder it feels tough!

However, that is all assuming you keep a constant speed irrespective of the wind.  In practice you are likely to slow down against a head wind and speed up with a tail wind.  Let’s assume you slow down to 10mph in the head wind and manage a respectable 20mph with the wind behind you.

I’ll not do the force calculations as the numbers get a little less tidy, but crucially this means you spend twice as long doing the head wind leg as the tail wind leg.  Although you cycle the same distance with head and tail winds, you spend twice as long battling that head wind.  And I’ll bet with it feeling so tough, it seems like even longer!
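For anyone who does want the untidy numbers, a quick sketch (same squared-speed drag as before):

# 15 mph wind, 4 miles out into it at 10 mph, 4 miles back with it at 20 mph.
LEG, WIND = 4.0, 15.0                      # miles, mph

head_drag = (10.0 + WIND) ** 2             # 25 mph apparent wind -> 625 units
tail_drag = (20.0 - WIND) ** 2             #  5 mph apparent wind ->  25 units

avg_drag = (head_drag * LEG + tail_drag * LEG) / (2 * LEG)
print("average drag:", avg_drag, "vs still day at 15 mph:", 15.0 ** 2)  # 325 vs 225

print("time into wind:", LEG / 10.0, "h; with the wind:", LEG / 20.0, "h")  # 0.4 vs 0.2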


  1. The Wikipedia page on Bicycle performance includes estimates suggesting over 75% of effort is overcoming drag … even with no wind[back]

End of an era

A few weeks ago, I gave a panel presentation at the ARMA conference in Liverpool — however, this was my last official duty with a Talis hat on.

Talis is a small employee-owned company, and maintaining a research strand has been far-sighted, but unusual. After a period focusing more on the development of new products, Talis is shifting to a phase where every resource must be focused on delivery … and hence long-term research, and my own role in the company, has had to cease.

Talis has been a wonderful place to work over the past seven years: the individuals there, but also, and crucially important, the company atmosphere, which combines the excitement of a start-up with real care and a sense of community.   So if you spot posts advertised there, it is a great place to be.

Talis was my principal regular income, as my academic role at Birmingham has only been 20%, so long-term I need to think about whether I should increase my academic time again, or do other things. I have been very fortunate never to have previously had a time without regular income, so this is a new experience for me, although, of course, a common one.

Over the past few years, I have kept some time ‘unwaged’ for other projects (such as walking round Wales!) and occasional consultancy, and my to-do list is far from empty; so this summer and autumn I am intending to spend more time writing (yes, TouchIT will be finished, and the Alan Walks Wales blog edited into a book), picking up some of the many half-finished coding projects, and doing videoing for the Interaction Design Foundation.

value for money in research – excellence or diversity

Government research funding policy in many countries, including the UK, has focused on centres of excellence, putting more funding into a few institutions and research groups who are creating the most valuable outputs.

Is this the best policy, and does evidence support it?

From “Big Science vs. Little Science: How Scientific Impact Scales with Funding”

I’m prompted to write as Leonel Morgado (Facebook, web) shared a link to a 2013 PLOS ONE paper “Big Science vs. Little Science: How Scientific Impact Scales with Funding” by Jean-Michel Fortin and David Currie.  The paper analyses work funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), looking at size of grant vs. research outcomes.  It demonstrates diminishing returns: large grants produce more research outcomes than smaller grants, but less per dollar spent.  That is, concentrating research funding appears to reduce the overall research output.

Of course, those obtaining research grants have all been through a highly competitive process, so the NSERC results may simply reflect the fact that we are already looking at the very top tier of research projects.

However, a report many years ago reinforces this story, and suggests it holds more broadly.

Sometime in the mid-to-late 1990s HEFCE, the UK higher education funding agency, did a study in which they ranked all universities against three simple research output metrics1. One of the outputs was the number of PhD completions and another was industrial research income (arguably an input rather than an output!), but I forget the third.

Not surprisingly Oxford and Cambridge came top of the list when ranked by aggregate research output.

However, the spreadsheet also included the amount of research money HEFCE paid into each university and a value-for-money column.

When ranked by value for money, the table was nearly reversed, with Oxford and Cambridge at the very bottom and Northampton University (not typically known as the peak of the university excellence ratings) at the top. That is, HEFCE got more research output per pound spent at Northampton than anywhere else in the UK.

The UK REF2014 used an extensive and time-consuming peer-review mechanism to rank the research quality of each discipline in each UK university-level institution, on a 1* to 4* scale (4* being best). Funding is heavily ramped towards 4* (in England the weighting is 10:3:0:0 for 4*:3*:2*:1*). As part of the process, comprehensive funding information was produced for each unit of assessment (typically a department), including UK government income, European projects, charity and industrial funding.

So, we have an officially accepted assessment of research outcomes (the government allocates funds against it!), and also of the income that generated them.

At a public meeting following the 2014 exercise, I asked a senior person at HEFCE whether they planned to take the two and create a value for money metric, for example, the cost per 4* output.

There was a distinct lack of enthusiasm for the idea!

Furthermore, my analysis of REF measures vs citation metrics suggested that this very focused official funding model was further concentrated by an almost unbelievably extreme bias towards elite institutions in the grading: apparently equal work in terms of external metrics was ranked nearly an order of magnitude higher for ‘better’ institutions, leading to funding being around 2.5 times higher for some elite universities than objective measures would suggest.


From “REF Redux 4 – institutional effects“: ‘winners’ are those with 25% or more above what the metrics would estimate, ‘losers’ those with 25% or more below.

In summary, the implications both of Fortin and Currie’s PLOS ONE paper and of the 1990s HEFCE report suggest that spreading funding more widely would increase overall research outcomes, yet both official policy and implicit review bias do the opposite.

  1. I recall reading this, but it was before the days when I rolled everything over on my computer, so can’t find the exact reference. If anyone recalls the name of the report, or has a copy, I would be very grateful.[back]

Students love digital … don’t they?

In the ever-accelerating rush to digital delivery, is this actually what students want or need?

Last week I was at the Talis Insight conference. As in previous years, this was a mix of sessions focused on those using or thinking of using Talis products, with lots of rich experience talks, while about half of the time was dedicated to plenaries about the current state and future prospects for technology in higher education; so it is well worth attending (it is free!) whether or not you are a Talis user.

Speakers this year included Bill Rammell, now Vice-Chancellor at the University of Bedfordshire, but who was also Minister of State for Higher Education during the second Blair government, and during that time responsible for introducing the National Student Survey.

Another high profile speaker was Rosie Jones, who is Director of Library Services at the Open University … which operates somewhat differently from the standard university library!

However, among the VCs, CEOs and directors of this and that, it was the two most junior speakers who stood out for me. Eva Brittin-Snell and Alex Davie are two SAGE student scholars from Sussex. As SAGE scholars they have engaged in research on student experience amongst their peers, speak at events like this and maintain a student blog, which includes, amongst other things, the story of how Eva came to buy her first textbook.

Eva and Alex’s talk was entitled “Digital through a student’s eyes” (video). Many of the talks had been about the rise of digital services and especially the eTextbook. Eva and Alex were the ‘digital natives’, so surely this was joy to their ears. Surprisingly not.

Alex, in her first year at university, started by alluding to the previous speakers, the push for book-less libraries, and the general digital spiritus mundi, but offered an alternative view. Students were annoyed at being asked to buy books for a course when only a chapter or two would be relevant; they appreciated the convenience of an eBook when core textbooks were permanently out on loan, or instantly recalled as soon as one finally got hold of them. However, she said students still preferred physical books, as they are far more usable (even if heavy!) than eBooks.

Eva, a fourth-year student, offered a different view. “I started like Aly,” she said, and then went on to describe her change of heart. However, it was not a revelation of the pedagogical potential of digital, more that she had learnt to live through the pain. There were clear practical and logistic advantages to eBooks, there when and where you wanted them, but she described a life of constant headaches from reading on screen.

Possibly some of this is due to the current poor state of eBooks, which are still mostly simply electronic versions of texts designed for paper. Also, one of their student surveys showed that very few students had eBook readers such as the Kindle (evidently now definitely not cool), and used phones primarily for messaging and WhatsApp. The centre of the student’s academic life was definitely the laptop, so eBooks meant hours staring at a laptop screen.

However, it also reflects a growing body of work showing the pedagogic advantages of physical note-taking, the potential developmental damage of early tablet and smartphone use, and industry figures showing that across all areas eBook sales are dropping and physical book sales increasing. In addition there is evidence that children and teenagers prefer physical books, and that public library use by young people is growing.

It was also interesting that both Alex and Eva complained that eTextbooks were not ‘snappy’ enough. In the age of Tweet-stream presidents and 5-minute attention spans, ‘snappy’ was clearly the students’ term of choice to describe their expectation of digital media. Yet this did not represent a loss of their attention per se, as this was clearly not perceived as a problem with physical books.

… and I am still trying to imagine what a critical study of Aristotle’s Poetics would look like in ‘snappy’ form.

There are two lessons from this for me. First, what would a ‘digital first’ textbook look like? Does it have to be ‘snappy’, or are there ways to maintain attention and depth of reading in digital texts?

The second picks up on issues in the co-authored paper I presented at NordiCHI last year, “From intertextuality to transphysicality: The changing nature of the book, reader and writer“, which, amongst other things, asked how we might use digital means to augment the physical reading process, offering some of the strengths of eBooks, such as the ability to share annotations, while retaining a physical reading experience.  Also maybe some of the physical limitations of availability could be relieved, for example, if university libraries worked with bookshops to run student buy-and-return schemes alongside borrowing.

It would certainly be good if students did not have to learn to live with pain.

We have a challenge.

Sandwich proofs and odd orders

Revisiting an old piece of work I reflect on the processes that led to it: intuition and formalism, incubation and insight, publish or perish, and a malaise at the heart of current computer science.

A couple of weeks ago I received an email requesting an old technical report, “Finding fixed points in non-trivial domains: proofs of pending analysis and related algorithms” [Dx88].  This report is from nearly 30 years ago, when I was at York, before the days when everything was digital and online. It was one of my all-time favourite pieces of work, and one of the few times I’ve done ‘real maths’ in computer science.

As well as tackling a real problem, it required new theoretical concepts and methods of proof that were generally applicable. In addition it arose through an interesting story that exposes many of the changes in academia.

[Aside, for those of more formal bent.] This involved proving the correctness of an algorithm ‘Pending Analysis’ for efficiently finding fixed points over finite lattices, which had been developed for use when optimising functional programs. Doing this led me to perform proofs where some of the intermediate functions were not monotonic, and to develop forms of partial order that enabled reasoning over these. Of particular importance was the concept of a pseudo-monotonic functional, one that preserved an ordering between functions even if one of them is not itself monotonic. This then led to the ability to perform sandwich proofs, where a potentially non-monotonic function of interest is bracketed between two monotonic functions, which eventually converge to the same function sandwiching the function of interest between them as they go.
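For the formally minded, the rough shape of such a proof – my paraphrase from memory, not the report’s exact definitions:

l(n+1) = F( l(n) ),   f(n+1) = F( f(n) ),   u(n+1) = F( u(n) )
pseudo-monotonicity of F gives:  l(n) ⊑ f(n) ⊑ u(n)  for every n, even where f(n) is not monotonic
so if l(n) and u(n) both converge to the fixed point p, then f(n) is squeezed to p as well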

Oddly, while it was one of my favourite pieces of work, it was at the periphery of my main areas of work, so it had never been published apart from as a York technical report. Also, this was in the days before research assessment, before publish-or-perish fever had ravaged academia, and when many of the most important pieces of work were ‘only’ in technical report series. Indeed, our Department library had complete sets of many of the major technical report series, such as Xerox PARC, Bell Labs, and Digital Equipment Corporation Labs, where so much work in programming languages was happening at the time.

My main area was, as it is now, human–computer interaction, and at the time principally the formal modelling of interaction. This was the topic of my PhD thesis and of my first book, “Formal Methods for Interactive Systems” [Dx91] (an edited version of the thesis).   Although I do less of this more formal work nowadays, I’ve just been editing a book with Benjamin Weyers, Judy Bowen and Philippe Palanque, “The Handbook of Formal Methods in Human-Computer Interaction” [WB17], which captures the current state of the art in the topic.

Moving from mathematics into computer science, the majority of formal work was far broader, but far less deep, than I had been used to. The main issues were definitional: finding ways to describe complex phenomena that both gave insight and enabled a level of formal tractability. This is not to say that there were no deep results: I recall the excitement of reading Sannella’s PhD thesis [Sa82] on the application of category theory to formal specifications, or Luca Cardelli‘s work on the complex type systems needed for more generic coding and for understanding object-oriented programming.

The reason for the difference in the kinds of mathematics was that computational formalism was addressing real problems, not simply puzzles interesting for themselves. Often these real world issues do not admit the kinds of neat solution that arise when you choose your own problem — the formal equivalent of Rittel’s wicked problems!

Crucially, where there were deep results and complex proofs these were also typically addressed at real issues. By this I do not mean the immediate industry needs of the day (although much of the most important theoretical work was at industrial labs); indeed functional programming, which has now found critical applications in big-data cloud computation and even JavaScript web programming, was at the time a fairly obscure field. However, there was a sense in which these things connected to a wider sphere of understanding in computing and that they could eventually have some connection to real coding and computer systems.

This was one of the things that I often found depressing during the REF2014 reading exercise in 2013. Over a thousand papers covering vast swathes of UK computer science, and so much that seemed to be in tiny sub-niches of sub-niches, obscure variants of inconsequential algebras, or reworking and tweaking of algorithms that appeared to be of no interest to anyone outside two or three other people in the field (I checked who was citing every output I read).

(Note the lists of outputs are all in the public domain, and links to where to find them can be found at my own REF micro-site.)

If these had been pure mathematics papers it is what I would have expected; after all mathematics is not funded in the way computer science is, so I would not expect to see the same kinds of connection to real-world issues. Also I would have been disappointed if I had not seen some obscure work of this kind; you sometimes need to chase down rabbit holes to find Aladdin’s cave. It was the sheer volume of this kind of work that shocked me.

Maybe in those early days, I self-selected work that was both practically and theoretically interesting, so I have a golden view of the past; maybe it was simply easier to do both before the low-hanging fruit had been gathered; or maybe just there has been a change in the social nature of the discipline. After all, most early mathematicians happily mixed pure and applied mathematics, with the areas only diverging seriously in the 20th century. However, as noted, mathematics is not funded so heavily as computer science, so it does seem to suggest a malaise, or at least loss of direction for computing as a discipline.

Anyway, roll back to the mid 1980s. A colleague of mine, David Wakeling, had been on a visit to a workshop in the States and heard there about Pending Analysis and Young and Hudak’s proof of its correctness [YH86]. He wanted to use the algorithm in his own work, but there was something about the proof that he was unhappy about. It was not that he had spotted a flaw (indeed there was one, but obscure), just that the presentation of it had left him uneasy. David was a practical computer scientist, not a mathematician, working on the compilation and optimisation of lazy functional programming languages. However, he had some sixth sense that told him something was wrong.

Looking back, this intuition about formalism fascinates me. Again there may be self-selection going on: if David had had worries and they were unfounded, I would not be writing this. However, I think that there was something more to it. Hardy and Wright, the bible of number theory [HW59], listed a number of open problems in number theory (many now solved), but crucially for many gave an estimate of how likely it was that they were true or might eventually have a counter-example. By definition, these were non-trivial hypotheses, and either true or not true, but Hardy and Wright felt able to offer an opinion.

For David I think it was more about the human interaction, the way the presenters did not convey confidence.  Maybe this was because they were aware there was a gap in the proof, but thought it did not matter, a minor irrelevant detail, or maybe the same slight lack of precision that let the flaw through was also evident in their demeanour.

In principle academia, certainly in mathematics and science, is about the work itself, but we can rarely check every statement, argument or line of proof, so often it is the nature of the people that gives us confidence.

So I dug into the proof myself. Quite quickly I found two flaws.

One was internal to the mathematics (math alert!): essentially forgetting that a ‘monotonic’ higher-order function is usually only monotonic when the functions it is applied to are themselves monotonic.
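A standard illustration of the trap (my example, not the one from the original proof): take the higher-order function H(f) = f ∘ f. Given f ⊑ g pointwise:

f(f(x)) ⊑ g(f(x)) ⊑ g(g(x))

The second step relies on g being monotonic – if it is not, H(f) ⊑ H(g) can fail, so H is only monotonic over monotonic arguments.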

The other was external — the formulation of the theorem to be proved did not actually match the real-world computational problem. This is an issue that I used to refer to as the formality gap. Once you are in formal world of mathematics you can analyse, prove, and even automatically check some things. However, there is first something more complex needed to adequately and faithfully reflect the real world phenomenon you are trying to model.

I’m doing a statistics course at the CHI conference in May, and one of the reasons statistics is hard is that it also needs one foot on the world of maths, but one foot on the solid ground of the real world.

Finding the problem was relatively easy … solving it altogether harder! There followed a period when it was my pet side project: reams of paper with scribbles, thinking I’d solved it then finding more problems, proving special cases, or variants of the algorithm, generalising beyond the simple binary domains of the original algorithm. In the end I put it all into a technical report, but never had the full proof of the most general case.

Then, literally a week after the report was published, I had a notion, and found an elegant and reasonably short proof of the most general case, and in so doing also created a new technique, the sandwich proof.

Reflecting back, was this merely one of those things, or a form of incubation? I used to work with psychologists Tom Ormerod and Linden Ball at Lancaster including as part of the Desire EU network on creativity. One of the topics they studied was incubation, which is one of the four standard ‘stages’ in the theory of creativity. Some put this down to sub-conscious psychological processes, but it may be as much to do with getting out of patterns of thought and hence seeing a problem in a new light.

In this case, was it the fact that the problem had been ‘put to bed’ that enabled fresh insight?

Anyway, now, 30 years on, I’ve made the report available electronically … after reanimating Troff on my Mac … but that is another story.

References

[Dx91] A. J. Dix (1991). Formal Methods for Interactive Systems. Academic Press. ISBN 0-12-218315-0. http://www.hiraeth.com/books/formal/

[Dx88] A. J. Dix (1988). Finding fixed points in non-trivial domains: proofs of pending analysis and related algorithms. YCS 107, Dept. of Computer Science, University of York. https://alandix.com/academic/papers/fixpts-YCS107-88/

[HW59] G.H. Hardy, E.M. Wright (1959). An Introduction to the Theory of Numbers – 4th Ed. Oxford University Press.   https://archive.org/details/AnIntroductionToTheTheoryOfNumbers-4thEd-G.h.HardyE.m.Wright

[Sa82] Don Sannella (1982). Semantics, Implementation and Pragmatics of Clear, a Program Specification Language. PhD, University of Edinburgh. https://www.era.lib.ed.ac.uk/handle/1842/6633

[WB17] Weyers, B., Bowen, J., Dix, A., Palanque, P. (Eds.) (2017) The Handbook of Formal Methods in Human-Computer Interaction. Springer. ISBN 978-3-319-51838-1 http://www.springer.com/gb/book/9783319518374

[YH86] J. Young and P. Hudak (1986). Finding fixpoints on function spaces. YALEU/DCS/RR-505, Yale University, Department of Computer Science. http://www.cs.yale.edu/publications/techreports/tr505.pdf