Paying On Time – universities are failing

Universities are not living up to Government prompt-payment targets.  As many suppliers are local SMEs, this threatens the cashflow of businesses that may be teetering on the edge, and the well-being of local economies.

I’ve twice in the last couple of months been hit by university finance systems that have a monthly payment run: if a claim or invoice is not submitted by a certain date, often the first day or two of the month, it is not paid until the end of the following month, leading to a seven-week delay in payment.  This is despite Government guidelines that specify a normal 30-day payment period and an aim of making 80% of payments within 5 working days.

I’d like to say these are rare cases, but they are sadly typical of university payment and expense systems.  In some cases this is because one is being treated as a casual employee, so falling into payroll systems.  However, often the same systems are clearly being used for commercial payments.  This means that if a supplier misses a monthly deadline they may wait nearly two months for payment … and of course if they are VAT registered they may have already had to pay the VAT portion to HMRC before they actually receive the payment.

The idea of monthly cheque runs is a relic of the 1970s, when large reels of magnetic tape had to be mounted on refrigerator-sized machines and special paper had to be loaded into line-printers for cheque runs.  In the 21st century, when the vast majority of payments are electronic, it is an embarrassing and unethical anachronism.

As well as these cliff-edge deadline issues, I’ve seen university finance systems that bounce payments to external suppliers if the data is on an out-of-date form, even if the form was provided in error by a member of university staff.

Even worse are university finance systems which are organised so that when there is a problem with a payment, for example a temporary glitch in electronic bank payments, instead of retrying the payment, or informing the payee or relevant university contact, the system simply ignores it, leaving it in limbo.  I’ve encountered missing payments of this kind up to a year after the original payment date.  If one were cynical one might imagine that they simply hope the supplier will never notice.

Late payment became a major issue a few years ago.  Following the recession, many SMEs were constantly teetering on the edge of bankruptcy, yet larger firms were lax in paying promptly, knowing that they were in a position of power (e.g. see “Getting paid on time” issued by the Department for Business, Innovation & Skills, February 2012).

Five years on this is still a problem.  In April last year The Independent estimated that British SMEs were owed £44.6 billion in late or overdue payments (see “The scourge of late payment“).  There is now mandatory reporting of payment practices for larger companies, and recent returns showed that some companies missed prompt payment up to 96% of the time, with bad performers including major names such as Deloitte (see “Ten of the UK’s big businesses that fail to pay suppliers on time get named and shamed by the Government“).

There is also a voluntary “Prompt Payment Code“, but, amongst the signatories, there are only two universities (Huddersfield and Westminster) and three colleges.

Universities are often proud of the way they support local economies and communities: being major employers and often offering advice to local businesses.  However, in respect of prompt payment they are failing those same communities.

So, well done Huddersfield and Westminster, and for the rest of the university system – up your game.

physigrams – modelling the device unplugged

Physigrams get their own micro-site!

See it now at physicality.org/physigrams

Appropriate physical design can make the difference between an intuitively obvious device and one that is inscrutable.  Physigrams are a way of modelling and analysing the interactive physical characteristics of devices from TV remotes to electric kettles, filling the gap between foam prototypes and code.

Sketches or CAD allow you to model the static physical form of the device, and this can be realised in moulded blue foam, 3D printing or cardboard mock-ups.  Prototypes of the internal digital behaviour can be produced using tools such as Adobe Animate, proto.io or Atomic, or hand-coded using standard web-design tools.  The digital behaviour can also be modelled using industry-standard techniques such as UML.


Physigrams allow you to model the ‘device unplugged’ – the pure physical interaction potential of the device: the ways you can interact with buttons, dials and knobs, how you can open, slide or twist movable elements.  These physigrams can be attached to models of the digital behaviour to understand how well the physical and digital design complement one another.
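To give a flavour of what a physigram captures – this is just an illustrative sketch in Python, not the physigram notation itself, and the device, states and actions are invented for the example – here is a minimal ‘device unplugged’ model of a two-position toggle switch: its physical states and the actions that move between them, with no digital behaviour attached.

# Illustrative sketch only: a toy 'device unplugged' model of a two-position
# toggle switch -- physical states and user actions, no digital semantics.

PHYSICAL_STATES = {"up", "down"}

# Each (state, action) pair maps to the resulting physical state.
TRANSITIONS = {
    ("up", "press_down"): "down",
    ("down", "press_up"): "up",
}

def act(state, action):
    """Apply a physical action; actions with no physical effect leave the state unchanged."""
    assert state in PHYSICAL_STATES
    return TRANSITIONS.get((state, action), state)

# The physical interaction potential, independent of what the switch controls:
s = "up"
s = act(s, "press_down")   # -> "down"
s = act(s, "press_down")   # still "down": pressing down again does nothing physically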

Physigrams were developed some years ago as part of the DEPtH project, a collaboration between product designers at Cardiff School of Art and Design and computer scientists at Lancaster University.  Physigrams have been described in various papers over the years.  However, with TouchIT, our book on physicality and design, (eventually!) reaching completion and due out next year, it seemed that physigrams deserved a home of their own on the web.

The physigram micro-site, part of physicality.org, includes descriptions of physical interaction properties, a complete key to the physigram notation, and many examples of physigrams in action, from light switches to complete control panels and novel devices.

Timing matters!

How long is an instant? The answer, of course, is ‘it depends’, but I’ve been finding it fascinating playing with the demo page for AngularJS tooltips and seeing what feels like ‘instant’ for a tooltip.

The demo allows you to adjust the md-delay property so you can change the delay between hovering over a button and the tooltip appearing, and then instantly see what that feels like.

Try it yourself: set a time and then either move over the button as if you were about to click it, or were wondering what it does, or simply pass over it as if you were moving your pointer to another part of the page.
 
If the delay is too short (e.g. 0), the tooltip flickers as you simply pass over the icon.
 
If you want it as a backup for when someone forgets the action, then something longer, about a second, is fine – the aim is to be there only if the user has that moment of doubt.
 
However, I was fascinated by how long the delay needed to be to feel ‘instant’ and yet not appear by accident.
 
For me, about 150 ms is not noticeable as a delay, whereas at 200 ms I start to notice – not an annoying delay, but a very slight sense of lack of responsiveness.

Solr Rocks!

After struggling with large FULLTEXT indexes in MySQL, Solr comes to the rescue, 16 million records ingested in 20 minutes – wow!

One small gotcha was the security classes, which have obviously moved since the documentation was written (see the fix at the end of the post).

For web apps I live off MySQL, albeit nowadays often wrapped with my own NoSQLite libraries to do Mongo-style databases over the LAMP stack.  I’d also recently had a successful experience using MySQL FULLTEXT indices with a smaller database (tens of thousands of records) for the HCI Book search.  So when I wanted to index 16 million book titles with their author names from OpenLibrary I thought I might as well have a go.

For some MySQL table types, the normal recommendation used to be to insert records without an index and add the index later.  However, in the past I have had a very bad experience with this approach as there doesn’t appear to be a way to tell MySQL to go easy with this process – I recall the disk being absolutely thrashed and Fiona having to restart the web server 🙁

Happily, Ernie Souhrada reports that for MyISAM tables incremental inserts with an index are no worse than a bulk insert followed by adding the index.  So I went ahead and set off a script adding batches of 10,000 records at a time, with small gaps ‘just in case’.  The ‘just in case’ turned out to be very much the case: 16 hours later I’d barely managed a million records and MySQL was getting slower and slower.
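For concreteness, this is roughly the shape of that batched-insert script – a sketch only, with a hypothetical table and columns, the pymysql driver, and a two-field tab-separated input file assumed.

# Sketch of the batched-insert approach (table, columns and file name are
# placeholders; assumes two tab-separated fields per record).
import csv
import time
import pymysql

conn = pymysql.connect(host="localhost", user="books",
                       password="***", database="openlibrary")
cur = conn.cursor()
BATCH = 10_000

rows = []
with open("titles.tsv", newline="") as f:
    for title, author in csv.reader(f, delimiter="\t"):
        rows.append((title, author))
        if len(rows) == BATCH:
            cur.executemany(
                "INSERT INTO titles (title, author) VALUES (%s, %s)", rows)
            conn.commit()
            rows = []
            time.sleep(2)   # small gap 'just in case'

if rows:  # remaining partial batch
    cur.executemany("INSERT INTO titles (title, author) VALUES (%s, %s)", rows)
    conn.commit()
conn.close()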

I cut my losses, tried an upload without the FULLTEXT index and 20 minutes later that was fine … but there was no way I dared run that ‘CREATE FULLTEXT’ afterwards!

In my heart I knew that Lucene/Solr was the right way to go.  These are designed for search-engine performance, but I dreaded the pain of trying to install and come up to speed with yet another system that might turn out no better in the end.

However, I bit the bullet, and my dread was utterly unfounded.  Fiona got the right version of Java running and then within half an hour of downloading Solr I had it up and running with one of the examples.  I then tried experimental ingests with small chunks of the data: 1000 records, 10,000 records, 100,000 records, a million records … Solr lapped it up, utterly painless.  The only fix I needed was because my tab-separated records had quote characters that needed mangling.

So, a quick split into million-record chunks (I couldn’t bring myself to do a single multi-gigabyte POST … but maybe that would have been OK!), set the ingest going and 20 minutes later – hey presto, 16 million full-text-indexed records 🙂  I then realised I’d forgotten to give fieldnames, so the ingest had taken the first record’s values as a header line.  No problem, just clear the database and re-ingest … at 20 minutes for the whole thing, who cares!
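For anyone wanting to do something similar, the chunked ingest looks roughly like this – a sketch using Solr’s CSV update handler, where the core name, field names and file names are placeholders; ‘separator’ is set to tab for TSV data, and ‘fieldnames’ plus ‘header=false’ stop Solr treating the first record of each chunk as a header line.

# Sketch of the chunked ingest via Solr's CSV update handler
# (core name, field names and file names are placeholders).
import requests

SOLR_UPDATE = "http://localhost:8983/solr/openlibrary/update"
params = {
    "commit": "true",
    "separator": "\t",          # tab-separated input
    "fieldnames": "id,title,author",
    "header": "false",          # first record is data, not a header line
}

for n in range(16):             # e.g. chunk-00.tsv ... chunk-15.tsv
    with open(f"chunk-{n:02d}.tsv", "rb") as f:
        r = requests.post(SOLR_UPDATE, params=params, data=f,
                          headers={"Content-Type": "application/csv"})
        r.raise_for_status()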

As noted, there was one slight gotcha.  The Securing Solr section of the Solr Reference Guide explains how to set up the security.json file.  This kept failing until I realised Solr could not find the classes solr.BasicAuthPlugin and solr.RuleBasedAuthorizationPlugin (solr.log is your friend!).  After a bit of listing of the contents of jars, I found that these are now in org.apache.solr.security.  I also found that the JSON parser struggled a little with indents … I think maybe tab characters, but after explicitly selecting the whitespace and re-typing it as spaces – yay! – I have a fully secured Solr instance with 16 million book titles – wow 🙂

This is my final security.json file (actual credentials obscured, of course!):

{
  "authentication":{
    "blockUnknown": true,
    "class":"org.apache.solr.security.BasicAuthPlugin",
    "credentials":{
      "tom":"blabbityblabbityblabbityblabbityblabbityblo= blabbityblabbityblabbityblabbityblabbityblo=",
      "dick":"blabbityblabbityblabbityblabbityblabbityblo= blabbityblabbityblabbityblabbityblabbityblo=",
      "harry":"blabbityblabbityblabbityblabbityblabbityblo= blabbityblabbityblabbityblabbityblabbityblo="}
  },

  "authorization":{"class":"org.apache.solr.security.RuleBasedAuthorizationPlugin"}
}
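As a quick sanity check that the secured instance is working, something like the following authenticated query does the job (user, password, core and field names are placeholders; the ‘numFound’ count comes back in Solr’s standard JSON response).

# Quick check of the secured instance with an authenticated query
# (user, password, core and field names are placeholders).
import requests

r = requests.get("http://localhost:8983/solr/openlibrary/select",
                 params={"q": "title:dickens", "rows": 5},
                 auth=("tom", "not-really-the-password"))
r.raise_for_status()
print(r.json()["response"]["numFound"])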

End of an era

A few weeks ago, I gave a panel presentation at the ARMA conference in Liverpool — however, this was my last official duty with a Talis hat on.

Talis is a small employee-owned company, and maintaining a research strand has been far-sighted, but unusual.  After a period focusing more on the development of new products, Talis is shifting to a phase where every resource should be focused on delivery … and hence long-term research, and my own role in the company, has had to cease.

Talis has been a wonderful place to work over the past seven years: partly because of the individuals there, but also, and crucially, the company atmosphere, which combines the excitement of a start-up with real care and a sense of community.  So if you spot posts advertised there, it is a great place to be.

Talis was my principal regular income, as my academic role at Birmingham has only been 20%, so long-term I need to think about whether I should increase my academic time again, or do other things.  I have been very fortunate never having previously had a time without regular income, so this is a new experience for me, although, of course, a common one.

Over the past few years, I have kept some time ‘unwaged’ for other projects (such as walking round Wales!) and occasional consultancy, and my to-do list is far from empty, so this summer and autumn I am intending to spend more time writing (yes, TouchIT will be finished, and the Alan Walks Wales blog edited into a book), picking up some of the many half-finished coding projects, and doing videoing for the Interaction Design Foundation.

value for money in research – excellence or diversity

Government research funding policy in many countries, including the UK, has focused on centres of excellence, putting more funding into a few institutions and research groups who are creating the most valuable outputs.

Is this the best policy, and does evidence support it?

From “Big Science vs. Little Science: How Scientific Impact Scales with Funding”

I’m prompted to write as Leonel Morgado (Facebook, web) shared a link to a 2013 PLOS ONE paper “Big Science vs. Little Science: How Scientific Impact Scales with Funding” by Jean-Michel Fortin and David Currie.  The paper analyses work funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), looking at the size of grants vs. research outcomes.  It demonstrates diminishing returns: large grants produce more research outcomes than smaller grants, but less per dollar spent.  That is, concentrating research funding appears to reduce the overall research output.

Of course, those obtaining research grants have all been through a highly competitive process, so the NSERC results may simply reflect the fact that we are already looking at the very top tier of research projects.

However, a report many years ago reinforces this story, and suggests it holds more broadly.

Sometime in the mid-to-late 1990s HEFCE, the UK higher education funding agency, did a study where they ranked all universities against several simple research output metrics1. One of the outputs was the number of PhD completions and another was industrial research income (arguably questionable as an output!), but I forget the third.

Not surprisingly Oxford and Cambridge came top of the list when ranked by aggregate research output.

However, the spreadsheet also included the amount of research money HEFCE paid into each university and a value-for-money column.

When ranked by value-for-money, the table was nearly reversed, with Oxford and Cambridge at the very bottom, and Northampton University (not typically known as the peak of the university excellence ratings) at the top.  That is, HEFCE got more research output per pound spent at Northampton than anywhere else in the UK.

The UK REF2014 used an extensive and time-consuming peer-review mechanism to rank the research quality of each discipline in each UK university-level institution, on a 1* to 4* scale (4* being best). Funding is heavily ramped towards 4* (in England the weighting is 10:3:0:0 for 4*:3*:2*:1*). As part of the process, comprehensive funding information was produced for each unit of assessment (typically a department), including UK government income, European projects, charity and industrial funding.

So, we have an officially accepted assessment of research outcomes (that is, the government allocates funds against it!), and also of the income that generated them.

At a public meeting following the 2014 exercise, I asked a senior person at HEFCE whether they planned to take the two and create a value for money metric, for example, the cost per 4* output.
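To make the idea concrete (the notation here is mine, not HEFCE’s), the English funding weights quoted above and the proposed metric would look something like:

\[
  \text{funding} \;\propto\; 10\,n_{4^*} + 3\,n_{3^*} + 0\,n_{2^*} + 0\,n_{1^*},
  \qquad
  \text{cost per 4* output} \;=\; \frac{\text{research income}}{n_{4^*}}
\]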

There was a distinct lack of enthusiasm for the idea!

Furthermore, my analysis of REF measures vs citation metrics suggested that this very focused official funding model was further concentrated by an almost unbelievably extreme bias towards elite institutions in the grading: apparently equal work in terms of external metrics was ranked nearly an order of magnitude higher for ‘better’ institutions, leading to funding being around 2.5 times higher for some elite universities than objective measures would suggest.


From “REF Redux 4 – institutional effects“: ‘winners’ are those with 25% or more above what metrics would estimate, ‘losers’ those with 25% or more below.

In summary, the implications of both Fortin and Currie’s PLOS ONE paper and the 1990s HEFCE report suggest that spreading funding more widely would increase overall research outcomes, yet both official policy and implicit review bias do the opposite.

  1. I recall reading this, but it was before the days when I rolled everything over on my computer, so can’t find the exact reference. If anyone recalls the name of the report, or has a copy, I would be very grateful.[back]

Students love digital … don’t they?

In the ever-accelerating rush to digital delivery, is this actually what students want or need?

Last week I was at the Talis Insight conference.  As with previous years, this is a mix of sessions focused on those using or thinking of using Talis products, with lots of rich experience talks.  However, about half of the time is also dedicated to plenaries about the current state and future prospects for technology in higher education, so it is well worth attending (it is free!) whether or not you are a Talis user.

Speakers this year included Bill Rammell, now Vice-Chancellor at the University of Bedfordshire, but who was also Minister of State for Higher Education during the second Blair government, and during that time responsible for introducing the National Student Survey.

Another high profile speaker was Rosie Jones, who is Director of Library Services at the Open University … which operates somewhat differently from the standard university library!

However, among the VCs, CEOs and directors of this and that, it was the two most junior speakers who stood out for me.  Eva Brittin-Snell and Alex Davie are two SAGE student scholars from Sussex.  As SAGE scholars they have engaged in research on student experience amongst their peers, speak at events like this and maintain a student blog, which includes, amongst other things, the story of how Eva came to buy her first textbook.

Eva and Alex’s talk was entitled “Digital through a student’s eyes” (video). Many of the talks had been about the rise of digital services and especially the eTextbook. Eva and Alex were the ‘digital natives’, so surely this was joy to their ears. Surprisingly not.

Alex, in her first year at university, started by alluding to the previous speakers, the push for book-less libraries, and the general digital spiritus mundi, but offered an alternative view.  Students were annoyed at being asked to buy books for a course where only a chapter or two would be relevant; they appreciated the convenience of an eBook when core textbooks were permanently out on loan, and instantly recalled once one did get hold of them.  However, she said students still preferred physical books, as they are far more usable (even if heavy!) than eBooks.

Eva, a fourth-year student, offered a different view.  “I started like Aly”, she said, and then went on to describe her change of heart.  However, it was not a revelation of the pedagogical potential of digital, more that she had learnt to live through the pain.  There were clear practical and logistical advantages to eBooks, there when and where you wanted them, but she described a life of constant headaches from reading on-screen.

Possibly some of this is due to the current poor state of eBooks that are still mostly simply electronic versions of texts designed for paper. Also, one of their student surveys showed that very few students had eBook readers such as Kindle (evidently now definitely not cool), and used phones primarily for messaging and WhatsApp. The centre of the student’s academic life was definitely the laptop, so eBooks meant hours staring at a laptop screen.

However, it also reflects a growing body of work showing the pedagogic advantages of physical note-taking, the potential developmental damage of early tablet and smartphone use, and industry figures showing that across all areas eBook sales are dropping and physical book sales increasing.  In addition there is evidence that children and teenagers prefer physical books, and public library use by young people is growing.

It was also interesting that both Alex and Eva complained that eTextbooks were not ‘snappy’ enough. In the age of Tweet-stream presidents and 5-minute attention spans, ‘snappy’ was clearly the students’ term of choice to describe their expectation of digital media. Yet this did not represent a loss of their attention per se, as this was clearly not perceived as a problem with physical books.

… and I am still trying to imagine what a critical study of Aristotle’s Poetics would look like in ‘snappy’ form.

There are two lessons from this for me.  First, what would a ‘digital first’ textbook look like?  Does it have to be ‘snappy’, or are there ways to maintain attention and depth of reading in digital texts?

The second picks up on issues in the co-authored paper I presented at NordiCHI last year, “From intertextuality to transphysicality: The changing nature of the book, reader and writer“, which, amongst other things, asked how we might use digital means to augment the physical reading process, offering some of the strengths of eBooks, such as the ability to share annotations, while retaining a physical reading experience.  Also, maybe some of the physical limitations of availability could be relieved, for example if university libraries worked with bookshops to offer student buy-and-return schemes alongside borrowing?

It would certainly be good if students did not have to learn to live with pain.

We have a challenge.

Sandwich proofs and odd orders

Revisiting an old piece of work I reflect on the processes that led to it: intuition and formalism, incubation and insight, publish or perish, and a malaise at the heart of current computer science.

A couple of weeks ago I received an email requesting an old technical report, “Finding fixed points in non-trivial domains: proofs of pending analysis and related algorithms” [Dx88].  This report was from nearly 30 years ago, when I was at York and before the time when everything was digital and online.  It was one of my all-time favourite pieces of work, and one of the few times I’ve done ‘real maths’ in computer science.

As well as tackling a real problem, it required new theoretical concepts and methods of proof that were generally applicable. In addition it arose through an interesting story that exposes many of the changes in academia.

[Aside, for those of a more formal bent.] This involved proving the correctness of an algorithm, ‘Pending Analysis’, for efficiently finding fixed points over finite lattices, which had been developed for use when optimising functional programs. Doing this led me to perform proofs where some of the intermediate functions were not monotonic, and to develop forms of partial order that enabled reasoning over these. Of particular importance was the concept of a pseudo-monotonic functional: one that preserves an ordering between functions even if one of them is not itself monotonic. This then led to the ability to perform sandwich proofs, where a potentially non-monotonic function of interest is bracketed between two monotonic functions, which eventually converge to the same function, sandwiching the function of interest between them as they go.
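Schematically (the notation here is mine, invented for this post rather than taken from the report), a sandwich proof establishes something of the form:

% Schematic only: l_n and u_n are the monotonic bounding functions,
% f is the possibly non-monotonic function of interest.
\[
  \forall n:\; l_n \;\sqsubseteq\; f \;\sqsubseteq\; u_n,
  \qquad
  l_n \to g \ \text{ and } \ u_n \to g
  \quad\Longrightarrow\quad f = g .
\]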

Oddly, while it was one of my favourite pieces of work, it was at the periphery of my main areas of work, so it had never been published apart from as a York technical report.  Also, this was in the days before research assessment, before publish-or-perish fever had ravaged academia, and when many of the most important pieces of work were ‘only’ in technical report series.  Indeed, our department library had complete sets of many of the major technical report series, such as those from Xerox PARC, Bell Labs, and Digital Equipment Corporation Labs, where so much work in programming languages was happening at the time.

My main area was, as it is now, human–computer interaction, and at the time principally the formal modelling of interaction.  This was the topic of my PhD thesis and of my first book, “Formal Methods for Interactive Systems” [Dx91] (an edited version of the thesis).  Although I do less of this more formal work nowadays, I’ve just been editing a book with Benjamin Weyers, Judy Bowen and Philippe Palanque, “The Handbook of Formal Methods in Human-Computer Interaction” [WB17], which captures the current state of the art in the topic.

Moving from mathematics into computer science, the majority of formal work was far broader, but far less deep, than I had been used to.  The main issues were definitional: finding ways to describe complex phenomena that both gave insight and enabled a level of formal tractability.  This is not to say that there were no deep results: I recall the excitement of reading Sannella’s PhD thesis [Sa82] on the application of category theory to formal specifications, or Luca Cardelli‘s work on the complex type systems needed for more generic coding and for understanding object-oriented programming.

The reason for the difference in the kinds of mathematics was that computational formalism was addressing real problems, not simply puzzles interesting in themselves.  Often these real-world issues do not admit the kinds of neat solution that arise when you choose your own problem — the formal equivalent of Rittel’s wicked problems!

Crucially, where there were deep results and complex proofs these were also typically addressed at real issues. By this I do not mean the immediate industry needs of the day (although much of the most important theoretical work was at industrial labs); indeed functional programming, which has now found critical applications in big-data cloud computation and even JavaScript web programming, was at the time a fairly obscure field. However, there was a sense in which these things connected to a wider sphere of understanding in computing and that they could eventually have some connection to real coding and computer systems.

This was one of the things that I often found depressing during the REF2014 reading exercise in 2013. Over a thousand papers covering vast swathes of UK computer science, and so much that seemed to be in tiny sub-niches of sub-niches, obscure variants of inconsequential algebras, or reworking and tweaking of algorithms that appeared to be of no interest to anyone outside two or three other people in the field (I checked who was citing every output I read).

(Note the lists of outputs are all in the public domain, and links to where to find them can be found at my own REF micro-site.)

If these had been pure mathematics papers it is what I would have expected; after all, mathematics is not funded in the way computer science is, so I would not expect to see the same kinds of connection to real-world issues.  Also I would have been disappointed if I had not seen some obscure work of this kind; you sometimes need to chase down rabbit holes to find Aladdin’s cave.  It was the sheer volume of this kind of work that shocked me.

Maybe in those early days I self-selected work that was both practically and theoretically interesting, so I have a golden view of the past; maybe it was simply easier to do both before the low-hanging fruit had been gathered; or maybe there has just been a change in the social nature of the discipline.  After all, most early mathematicians happily mixed pure and applied mathematics, with the areas only diverging seriously in the 20th century.  However, as noted, mathematics is not funded as heavily as computer science, so it does seem to suggest a malaise, or at least a loss of direction, for computing as a discipline.

Anyway, roll back to the mid-1980s.  A colleague of mine, David Wakeling, had been on a visit to a workshop in the States and heard there about Pending Analysis and Young and Hudak’s proof of its correctness [YH86].  He wanted to use the algorithm in his own work, but there was something about the proof that he was unhappy about.  It was not that he had spotted a flaw (indeed there was one, but an obscure one), but just that the presentation of it had left him uneasy.  David was a practical computer scientist, not a mathematician, working on compilation and optimisation of lazy functional programming languages.  However, he had some sixth sense that told him something was wrong.

Looking back, this intuition about formalism fascinates me.  Again there may be self-selection going on – if David had had worries and they were unfounded, I would not be writing this.  However, I think that there was something more to it.  Hardy and Wright, the bible of number theory [HW59], listed a number of open problems in number theory (many now solved), but crucially for many gave an estimate of how likely it was that they were true or might eventually have a counter-example.  By definition these were non-trivial hypotheses, and either true or not true, but Hardy and Wright felt able to offer an opinion.

For David I think it was more about the human interaction, the way the presenters did not convey confidence.  Maybe this was because they were aware there was a gap in the proof, but thought it did not matter, a minor irrelevant detail, or maybe the same slight lack of precision that let the flaw through was also evident in their demeanour.

In principle academia, certainly in mathematics and science, is about the work itself, but we can rarely check each statement, argument or line of proof, so it is often the nature of the people that gives us confidence.

Prompted by David’s unease, I looked at the proof myself, and quite quickly I found two flaws.

One was internal to the mathematics (math alert!): essentially, forgetting that a ‘monotonic’ higher-order function is usually only monotonic when the functions it is applied to are themselves monotonic.

The other was external — the formulation of the theorem to be proved did not actually match the real-world computational problem.  This is an issue that I used to refer to as the formality gap.  Once you are in the formal world of mathematics you can analyse, prove, and even automatically check some things.  However, there is first the more complex task of ensuring the formalisation adequately and faithfully reflects the real-world phenomenon you are trying to model.

I’m doing a statistics course at the CHI conference in May, and one of the reasons statistics is hard is that it also needs one foot in the world of maths, but one foot on the solid ground of the real world.

Finding the problem was relatively easy … solving it altogether harder! There followed a period when it was my pet side project: reams of paper with scribbles, thinking I’d solved it then finding more problems, proving special cases, or variants of the algorithm, generalising beyond the simple binary domains of the original algorithm. In the end I put it all into a technical report, but never had the full proof of the most general case.

Then, literally a week after the report was published, I had a notion, and found an elegant and reasonably short proof of the most general case, and in so doing also created a new technique, the sandwich proof.

Reflecting back, was this merely one of those things, or a form of incubation?  I used to work with the psychologists Tom Ormerod and Linden Ball at Lancaster, including as part of the Desire EU network on creativity.  One of the topics they studied was incubation, which is one of the four standard ‘stages’ in the theory of creativity.  Some put this down to sub-conscious psychological processes, but it may be as much to do with getting out of patterns of thought and hence seeing a problem in a new light.

In this case, was it the fact that the problem had been ‘put to bed’ that enabled fresh insight?

Anyway, now, 30 years on, I’ve made the report available electronically … after reanimating Troff on my Mac … but that is another story.

References

[Dx91] A. J. Dix (1991). Formal Methods for Interactive Systems. Academic Press. ISBN 0-12-218315-0. http://www.hiraeth.com/books/formal/

[Dx88] A. J. Dix (1988). Finding fixed points in non-trivial domains: proofs of pending analysis and related algorithms. YCS 107, Dept. of Computer Science, University of York. https://alandix.com/academic/papers/fixpts-YCS107-88/

[HW59] G.H. Hardy, E.M. Wright (1959). An Introduction to the Theory of Numbers – 4th Ed. Oxford University Press.   https://archive.org/details/AnIntroductionToTheTheoryOfNumbers-4thEd-G.h.HardyE.m.Wright

[Sa82] Don Sannella (1982). Semantics, Implementation and Pragmatics of Clear, a Program Specification Language. PhD thesis, University of Edinburgh. https://www.era.lib.ed.ac.uk/handle/1842/6633

[WB17] Weyers, B., Bowen, J., Dix, A., Palanque, P. (Eds.) (2017) The Handbook of Formal Methods in Human-Computer Interaction. Springer. ISBN 978-3-319-51838-1 http://www.springer.com/gb/book/9783319518374

[YH86] J. Young and P. Hudak (1986). Finding fixpoints on function spaces. YALEU/DCS/RR-505, Yale University, Department of Computer Science. http://www.cs.yale.edu/publications/techreports/tr505.pdf

the educational divide – do numbers matter?

If a news article is all about numbers, why is the media shy about providing the actual data?

On the BBC News website this morning, James McIvor‘s article “Clash over ‘rich v poor’ university student numbers” describes differences between the Scottish Government (SNP) and Scottish Labour in the wake of Professor Peter Scott’s appointment as commissioner for fair access to higher education in Scotland.

Scottish Labour claim that while access to university by the most deprived has increased, the educational divide is growing, with participation by the most deprived increasing by 0.8% since 2014, but that of the least deprived (most well off) growing at nearly three times that figure.  In contrast, the Scottish Government claims that in 2006 those from the least deprived areas were 5.8 times more likely to enter university than those from the most deprived areas, whereas now the difference is only 3.9 times, a substantial decrease in educational inequality.

The article is all about numbers, but the two parties seem to be saying contradictory things, one saying inequality is increasing, one saying it is decreasing!

Surely enough to make the average reader give up on experts, just like Michael Gove!

Of course, if you can read through the confusing array of leasts and mosts, the difference seems to be that the two parties are taking different base years: 2014 vs 2006, and that both can be true: a long-term improvement with decreasing inequality, but a short-term increase in inequality since 2014.  The former is good news, but the latter may be bad news – a change in direction that needs addressing – or simply ‘noise’, as we are talking about small changes on big numbers.

I looked in vain for a link to the data, or to the websites or reports on which this was based – after all, this is an article where the numbers are the story – but there are none.

After a bit of digging, I found that the data that both are using is from the UCAS Undergraduate 2016 End of Cycle Report (the numerical data for this figure and links to CSV files are below).

Figure from UCAS 2016 End of Cycle Report

Looking at these it is clear that the university participation rate for the least deprived quintile (Q5, blue line at top) has stayed around 40%, with odd ups and downs, over the last ten years, whereas the participation of the most deprived quintile (Q1) has been gradually increasing, again with year-by-year wiggles.  That is, the ratio between least and most deprived used to be about 40:7 and is now about 40:10 – less inequality, as the SNP say.

For some reason 2014 was a dip year for Q5.  There is no real sign of a change in the long-term trend, but if you take 2014 to 2016, the increase in Q5 is larger than the increase in Q1, just as Scottish Labour say.  However, any other base year would not give this picture.

In this case it looks like Scottish Labour either cherry picked a year that made the story they wanted, or simply accidentally chose it.

The issue for me though, is not so much who was right or wrong, but why the BBC didn’t present this data to make it possible to make this judgement?

I can understand the argument that people do not like, or understand, numbers at all, but where, as in this case, the story is all about the numbers, why not at least present the raw data and ideally discuss why there is an apparent contradiction!

 

Numerical data (participation rate, %) from figure 57 of the UCAS 2016 End of Cycle Report

       2006   2007   2008   2009   2010   2011   2012   2013   2014   2015   2016
Q1     7.21   7.58   7.09   7.95   8.47   8.14   8.91   9.52  10.10   9.72  10.90
Q2    13.20  12.80  13.20  14.30  15.70  14.40  14.80  15.90  16.10  17.40  18.00
Q3    21.10  20.60  20.70  21.30  23.60  21.10  22.10  22.50  22.30  24.00  24.10
Q4    29.40  29.10  30.20  30.70  31.50  29.10  29.70  29.20  28.70  30.30  31.10
Q5    42.00  39.80  41.40  42.80  41.70  40.80  41.20  40.90  39.70  41.10  42.30

UCAS provide the data in CSV form.  I converted this to the above tabular form and this is available in CSV or XLSX.
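To make the two readings concrete, here is a small sketch using the figures in the table above: it computes the Q5-to-Q1 ratio for each year (the long-term, SNP framing) and the 2014–2016 percentage-point changes (the short-term, Scottish Labour framing).

# Participation rates (%) by quintile, transcribed from the table above.
years = list(range(2006, 2017))
q1 = [7.21, 7.58, 7.09, 7.95, 8.47, 8.14, 8.91, 9.52, 10.10, 9.72, 10.90]
q5 = [42.00, 39.80, 41.40, 42.80, 41.70, 40.80, 41.20, 40.90, 39.70, 41.10, 42.30]

# Long-term view: ratio of least- to most-deprived participation.
for y, a, b in zip(years, q1, q5):
    print(f"{y}: Q5/Q1 = {b / a:.1f}")      # about 5.8 in 2006, about 3.9 in 2016

# Short-term view: percentage-point change from 2014 to 2016.
print("Q1 change 2014-16:", round(q1[-1] - q1[-3], 2))   # +0.8 points
print("Q5 change 2014-16:", round(q5[-1] - q5[-3], 2))   # +2.6 points, roughly three times as much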

the internet laws of the jungle

Where are the boundaries between freedom, license and exploitation, between fair use and theft?

I found myself getting increasingly angry today as the Mozilla Foundation stepped firmly beyond those limits and, moreover, with Trump-esque rhetoric attempted to dupe others into following.

It all started with a small text ad below the Firefox default-screen search box:

firefox-copyright-2

Partly because of my ignorance of the web-speak ‘TFW‘ (I know, showing my age!), I clicked through to a petition page on the Mozilla Foundation site (PDF archive copy here).

It starts off fine, with stories of some of the silliness of current copyright law across Europe (you can’t share photos of the Eiffel Tower at night) and problems for use in education (which does in fact have quite a lot of copyright exemptions in many countries).  It offers a petition to sign.

This all sounds good.  Partly due to rapid change, partly due to knee-jerk reactions, internet law does seem to be a bit of a mess.

If you blink you might miss one or two odd parts:

“This means that if you live in or visit a country like Italy or France, you’re not permitted to take pictures of certain buildings, cityscapes, graffiti, and art, and share them online through Instagram, Twitter, or Facebook.”

Read this carefully: a tourist forbidden from photographing cityscapes – silly!  But note the few words “… and art” …  So if I visit an exhibition by an artist or maybe even a photographer, and share a high-definition image (the Nokia Lumia 1020 has a 40-megapixel camera), is that OK?  Perhaps a thumbnail in the background of a selfie, but does Mozilla object to any rules to prevent copying of artworks?

mozilla-dont-break-the-internet

However, it is at the end, in a section labelled “don’t break the internet”, that the cyber-fundamentalism really starts.

“A key part of what makes the internet awesome is the principle of innovation without permission — that anyone, anywhere, can create and reach an audience without anyone standing in the way.”

Again, at first this sounds like a cry for self-expression – except what if you happen to be an artist or writer who would like to make a living from that self-expression?

Again, it is clear that current laws have not kept up with change and in some areas are unreasonably restrictive.  We need to be able to distinguish between a fair reference to something and seriously infringing its IP.  Likewise, we should distinguish the aspects of social media that are more like looking at holiday snaps over a coffee from pirate copies made for commercial profit.

However, in so many areas it is the other way round, our laws are struggling to restrict the excesses of the internet.

Just a few weeks ago a 14-year-old girl was given permission to sue Facebook.  Multiple times over a two-year period nude pictures of her were posted and reposted.  Facebook hides behind the argument that this is user content and that it takes down the images when they are pointed out, and yet a massive technology company, one that is able to recognise faces, is not able to identify the same photo being repeatedly posted.  Back to Mozilla: “anyone, anywhere, can create and reach an audience without anyone standing in the way” – really?

Of course this vision of the internet without boundaries is not just about self-expression, but also about freedom of speech:

“We need to defend the principle of innovation without permission in copyright law. Abandoning it by holding platforms liable for everything that happens online would have an immense chilling effect on speech, and would take away one of the best parts of the internet — the ability to innovate and breathe new meaning into old content.”

Of course, the petition is singling out EU law, which inconveniently includes various provisions to protect the privacy and rights of individuals – not the laws of dictatorships or centrally controlled countries.

So, who benefits from such an open and unlicensed world?  Clearly not the small artist or the victim of cyber-bullying.

Laissez-faire has always been an aim for big business, but without constraint it is the law of the jungle and always ends up benefiting the powerful.

In the 19th century it was child labour in the mills only curtailed after long battles.

In the age of the internet, it is the vast US social media giants who hold sway, and of course the search engines, which just happen to account for $300 million of revenue for the Mozilla Foundation annually – 90% of its income.