Sandwich proofs and odd orders

Revisiting an old piece of work I reflect on the processes that led to it: intuition and formalism, incubation and insight, publish or perish, and a malaise at the heart of current computer science.

A couple of weeks ago I received an email requesting an old technical report, “Finding fixed points in non-trivial domains: proofs of pending analysis and related algorithms” [Dx88].  The report dates from nearly 30 years ago, when I was at York, before the time when everything was digital and online. It was one of my all-time favourite pieces of work, and one of the few times I’ve done ‘real maths’ in computer science.

As well as tackling a real problem, it required new theoretical concepts and methods of proof that were generally applicable. In addition it arose through an interesting story that exposes many of the changes in academia.

[Aside, for those of a more formal bent.] This involved proving the correctness of an algorithm, ‘Pending Analysis’, for efficiently finding fixed points over finite lattices, which had been developed for use when optimising functional programs. Doing this led me to perform proofs where some of the intermediate functions were not monotonic, and to develop forms of partial order that enabled reasoning over these. Of particular importance was the concept of a pseudo-monotonic functional: one that preserves an ordering between functions even if one of them is not itself monotonic. This then led to the ability to perform sandwich proofs, in which a potentially non-monotonic function of interest is bracketed between two monotonic functions that eventually converge to the same function, sandwiching the function of interest between them as they go.
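To make the setting concrete, here is a tiny, purely illustrative sketch of the naive approach that Pending Analysis improves upon: Kleene iteration for a least fixed point over a finite lattice of boolean-valued functions. The domain, the example functional and all the names are mine, not from the report.

```python
# Naive Kleene iteration for the least fixed point of a monotonic functional
# over the finite lattice of boolean-valued functions on a small domain.
# (Illustrative only: Pending Analysis computes the same fixed point more
# efficiently, and proving *that* correct is what the report is about.)
DOMAIN = [0, 1, 2, 3]

def bottom():
    """Least element of the lattice: False everywhere."""
    return {x: False for x in DOMAIN}

def F(f):
    """Example monotonic functional: x is 'reachable' if x == 0
    or its predecessor is reachable."""
    return {x: x == 0 or (x > 0 and f[x - 1]) for x in DOMAIN}

def lfp(F):
    """Iterate bottom, F(bottom), F(F(bottom)), ... until stable;
    the finite lattice and monotonicity of F guarantee termination."""
    f = bottom()
    while (g := F(f)) != f:
        f = g
    return f

print(lfp(F))   # {0: True, 1: True, 2: True, 3: True}
```

Each iteration can only move values up the lattice, so on a finite lattice the process must stabilise; the interest (and the difficulty in the proofs) comes from doing the same thing without recomputing the whole function at every step.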

Oddly, while it was one of my favourite pieces of work, it was at the periphery of my main areas of work, so it was never published except as a York technical report. Also, this was in the days before research assessment, before publish-or-perish fever had ravaged academia, and when many of the most important pieces of work appeared ‘only’ in technical report series. Indeed, our Department library had complete sets of many of the major technical report series, such as those from Xerox PARC, Bell Labs, and Digital Equipment Corporation Labs, where so much work in programming languages was happening at the time.

My main area was, as it is now, human–computer interaction, and at the time principally the formal modelling of interaction. This was the topic of my PhD thesis and of my first book, “Formal Methods for Interactive Systems” [Dx91] (an edited version of the thesis).  Although I do less of this more formal work nowadays, I’ve just been editing a book with Benjamin Weyers, Judy Bowen and Philippe Palanque, “The Handbook of Formal Methods in Human-Computer Interaction” [WB17], which captures the current state of the art in the topic.

Moving from mathematics into computer science, I found the majority of formal work far broader, but far less deep, than I had been used to. The main issues were definitional: finding ways to describe complex phenomena that both gave insight and enabled a level of formal tractability. This is not to say that there were no deep results: I recall the excitement of reading Sannella’s PhD thesis [Sa82] on the application of category theory to formal specifications, or Luca Cardelli‘s work on the complex type systems needed for more generic coding and for understanding object-oriented programming.

The reason for the difference in the kinds of mathematics was that computational formalism was addressing real problems, not simply puzzles interesting in themselves. Often these real-world issues do not admit the kinds of neat solution that arise when you choose your own problem — the formal equivalent of Rittel’s wicked problems!

Crucially, where there were deep results and complex proofs these were also typically addressed at real issues. By this I do not mean the immediate industry needs of the day (although much of the most important theoretical work was at industrial labs); indeed functional programming, which has now found critical applications in big-data cloud computation and even JavaScript web programming, was at the time a fairly obscure field. However, there was a sense in which these things connected to a wider sphere of understanding in computing and that they could eventually have some connection to real coding and computer systems.

This was one of the things that I often found depressing during the REF2014 reading exercise in 2013. Over a thousand papers covering vast swathes of UK computer science, and so much that seemed to be in tiny sub-niches of sub-niches, obscure variants of inconsequential algebras, or reworking and tweaking of algorithms that appeared to be of no interest to anyone outside two or three other people in the field (I checked who was citing every output I read).

(Note the lists of outputs are all in the public domain, and links to where to find them can be found at my own REF micro-site.)

If these had been pure mathematics papers, this is what I would have expected; after all, mathematics is not funded in the way computer science is, so I would not expect to see the same kinds of connection to real-world issues. Also I would have been disappointed if I had not seen some obscure work of this kind; you sometimes need to chase down rabbit holes to find Aladdin’s cave. It was the sheer volume of this kind of work that shocked me.

Maybe in those early days, I self-selected work that was both practically and theoretically interesting, so I have a golden view of the past; maybe it was simply easier to do both before the low-hanging fruit had been gathered; or maybe there has simply been a change in the social nature of the discipline. After all, most early mathematicians happily mixed pure and applied mathematics, with the areas only diverging seriously in the 20th century. However, as noted, mathematics is not funded as heavily as computer science, so this does seem to suggest a malaise, or at least a loss of direction, for computing as a discipline.

Anyway, roll back to the mid 1980s. A colleague of mine, David Wakeling, had been on a visit to a workshop in the States and heard there about Pending Analysis and Young and Hudak’s proof of its correctness [YH96]. He wanted to use the algorithm in his own work, but there was something about the proof that he was unhappy about. It was not that he had spotted a flaw (indeed there was one, but an obscure one), just that the presentation of it had left him uneasy. David was a practical computer scientist, not a mathematician, working on the compilation and optimisation of lazy functional programming languages. However, some sixth sense told him something was wrong.

Looking back, this intuition about formalism fascinates me. Again there may be self-selection going on: if David had had worries and they had been unfounded, I would not be writing this. However, I think there was something more to it. Hardy and Wright, the bible of number theory [HW59], listed a number of open problems in number theory (many now solved), but crucially, for many of them, gave an estimate of how likely it was that they were true or might eventually have a counter-example. By definition these were non-trivial hypotheses, each either true or not true, yet Hardy and Wright felt able to offer an opinion.

For David I think it was more about the human interaction, the way the presenters did not convey confidence.  Maybe this was because they were aware there was a gap in the proof, but thought it did not matter, a minor irrelevant detail, or maybe the same slight lack of precision that let the flaw through was also evident in their demeanour.

In principle academia, certainly in mathematics and science, is about the work itself, but we can rarely check each statement, argument or line of proof, so it is often the nature of the people that gives us confidence.

So I took a look at the proof myself, and quite quickly found two flaws.

One was internal to the mathematics (math alert!): essentially forgetting that a ‘monotonic’ higher-order function is usually only monotonic when the functions it is applied to are themselves monotonic.
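This trap is easiest to see with a toy example (entirely my own, not the functional from Young and Hudak’s proof): take the higher-order function H(f) = f∘f over a small ordered domain. If f is monotonic then so is H(f), but feed H a non-monotonic f and the monotonicity you were relying on mid-proof quietly disappears.

```python
# Toy illustration: H(f) = f . f yields a monotonic function whenever its
# argument f is monotonic, but not in general.
DOM = [0, 1, 2]                       # a small totally ordered domain

def is_monotonic(f):
    """f respects the order: x <= y implies f(x) <= f(y)."""
    return all(f[x] <= f[y] for x in DOM for y in DOM if x <= y)

def H(f):
    """A 'monotonic' higher-order function: self-composition."""
    return {x: f[f[x]] for x in DOM}

mono = {0: 0, 1: 2, 2: 2}             # monotonic argument
non_mono = {0: 2, 1: 0, 2: 1}         # non-monotonic argument

print(is_monotonic(H(mono)))          # True  -- monotonicity preserved
print(is_monotonic(H(non_mono)))      # False -- and here it is lost
```

The danger in a proof is exactly this: intermediate approximations need not be monotonic, so a step that silently assumes H(f) is monotonic can be unsound.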

The other was external — the formulation of the theorem to be proved did not actually match the real-world computational problem. This is an issue that I used to refer to as the formality gap. Once you are in the formal world of mathematics you can analyse, prove, and even automatically check some things. However, there is first a more subtle step needed: adequately and faithfully reflecting the real-world phenomenon you are trying to model.

I’m doing a statistics course at the CHI conference in May, and one of the reasons statistics is hard is that it too needs one foot in the world of maths and one foot on the solid ground of the real world.

Finding the problem was relatively easy … solving it altogether harder! There followed a period when it was my pet side project: reams of paper with scribbles, thinking I’d solved it then finding more problems, proving special cases or variants of the algorithm, generalising beyond the simple binary domains of the original algorithm. In the end I put it all into a technical report, but never had the full proof of the most general case.

Then, literally a week after the report was published, I had a notion, and found an elegant and reasonably short proof of the most general case, and in so doing also created a new technique, the sandwich proof.

Reflecting back, was this merely one of those things, or a form of incubation? I used to work with psychologists Tom Ormerod and Linden Ball at Lancaster including as part of the Desire EU network on creativity. One of the topics they studied was incubation, which is one of the four standard ‘stages’ in the theory of creativity. Some put this down to sub-conscious psychological processes, but it may be as much to do with getting out of patterns of thought and hence seeing a problem in a new light.

In this case, was it the fact that the problem had been ‘put to bed’ that enabled fresh insight?

Anyway, now, 30 years on, I’ve made the report available electronically … after reanimating Troff on my Mac … but that is another story.


[Dx91] A. J. Dix (1991). Formal Methods for Interactive Systems. Academic Press. ISBN 0-12-218315-0.

[Dx88] A. J. Dix (1988). Finding fixed points in non-trivial domains: proofs of pending analysis and related algorithms. YCS 107, Dept. of Computer Science, University of York.

[HW59] G.H. Hardy, E.M. Wright (1959). An Introduction to the Theory of Numbers – 4th Ed. Oxford University Press.

[Sa82] Don Sannella (1982). Semantics, Implementation and Pragmatics of Clear, a Program Specification Language. PhD thesis, University of Edinburgh.

[WB17] Weyers, B., Bowen, J., Dix, A., Palanque, P. (Eds.) (2017) The Handbook of Formal Methods in Human-Computer Interaction. Springer. ISBN 978-3-319-51838-1

[YH96] J. Young and P. Hudak (1986). Finding fixpoints on function spaces. YALEU/DCS/RR-505, Yale University, Department of Computer Science.

Mathematics, Jewishness, and Direction

When I was nearly 18 I was part of the British team to the International Mathematical Olympiad (IMO) in Bucharest (see my account of the experience).  The US team were Jewish1, all eight of them.  While this was noteworthy, it was not surprising. There does seem to be a remarkable number of high achieving Jewish mathematicians, including nearly a quarter of Fields Medal recipients (the Maths equivalent of the Nobel Prize) and half of the mathematics members of the US National Academy of Sciences2.

Is this culture or genes, nature or nurture?

As with most things, I’d guess the answer is a mix.  But if it is culture, what? There is a tradition of Biblical numerology, but hardly widespread enough to have such a substantial effect. Is it to do with the discipline of learning Hebrew, maybe just discipline in general, or perhaps is it that mathematics is one of the fields where there has been less prejudice in academic appointments3?

I have just read a paper, “Disembodying Cognition” by Anjan Chatterjee, that may shed a little light on this.  The paper is an excellent overview of current neuroscience research on embodiment and also its limits (hence ‘disembodying’).  One positive embodiment result related to representations of actions, such as someone kicking a ball, which are often depicted with the agent on the left and the acted-upon object on the right.  However, when these experiments are repeated with Arab participants, the direction effects are reversed (p.102).  Chatterjee surmises that this is due to the right-to-left reading direction of Arabic.

In mathematics an equation is strictly symmetrical, simply stating that two things are equal.  However, we typically see equations such as:

y = 3x + 7

where the declarative reading may well be:

y is the same as “3x + 7”

but the more procedural ‘arithmetic’ reading is:

take x, multiply by three, add seven, and this gives y

In programming languages this is of course the normal semantics … and can give rise to confusion in statements such as:

x = x + 1

This is confusing if read as an equation (which is why some programming languages use :=, read as “becomes equal to”), but it also conflicts with the left-to-right reading of English and other European languages.
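The procedural reading is easy to demonstrate; in Python (as in C and most mainstream languages) the statement is an instruction, not an equation:

```python
x = 3
x = x + 1   # procedural reading: take x, add one, store the result back in x
print(x)    # 4 -- whereas read as an equation, "x = x + 1" has no solution
```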

COBOL, which was designed for business use, used an English-like syntax, which did read left to right:

ADD Tiree-Total TO Coll-Total GIVING Overall-Total.

Returning to Jewish mathematicians, does the right-to-left reading of Hebrew help in the early understanding of algebra?  But if so, surely there should also be many more contemporary Arab mathematicians.  This is clearly not the full story, but maybe it is one contributory factor.

And, at the risk of confusing all of us brought up with the ‘conventional’ way of writing equations, would it be easier for English-speaking children if they were introduced to the mathematically equivalent, but linguistically more comprehensible:

3x + 7 = y

  1. Although they did have to ‘forget’ while they were there, otherwise they would have starved on the all-pork cuisine. [back]
  2. Source “Jews in Mathematics”. [back]
  3. The Russians did not send a team to the IMO in 1978.  There were three explanations of this: (i) because it was in Romania, (ii) because the Romanians had invited a Chinese team, and (iii) because the Russian national mathematical olympiad had also produced an all-Jewish team and the major Moscow university that always admitted the team did not want that many Jewish students.  Whether the last explanation is true or not, it is certainly consonant with the levels of explicit discrimination in the USSR at the time. [back]

reading: Computing is a Natural Science

I’ve just been re-reading Peter Denning’s article “Computing is a Natural Science”1.  The basic thesis is that computation as a broad concept goes way beyond digital computers, and that many aspects of science and life have a computational flavour; “Computing is the study of natural and artificial information processes”.  As an article this is in some ways not so controversial, as computational analogies have been used in cognitive science for almost as long as there have been computers (and before that, analogies with steam engines), and readers of popular science magazines can’t have missed the physicists using parallels with information and computation in cosmology.  However, Denning’s paper is to some extent a manifesto, or call to arms: “computing is not just a load of old transistors, but something to be proud of, the roots of understanding the universe”.

Particularly useful is the “principles framework for computing” that Denning has been constructing through a community discussion process.  I mentioned this before when blogging about “what is computing?”. The article lists the top-level categories and gives examples of how each of these can be found in areas that one would not generally think of as ‘computing’.

  • Computation (meaning and limits of computation)
  • Communication (reliable data transmission)
  • Coordination (cooperation among networked entities)
  • Recollection (storage and retrieval of information)
  • Automation (meaning and limits of automation)
  • Evaluation (performance prediction and capacity planning)
  • Design (building reliable software systems)

categories from “Great Principles of Computing”

Occasionally the mappings are stretched … I’m not convinced that natural fractals are about hierarchical aggregation … but they do paint a picture of ubiquitous parallels across the key areas of computation.

Denning is presenting a brief manifesto not a treatise, so examples of very different kinds tend to be presented together.  There seem to be three main kinds:

  • ubiquity of computational devices – from iPods to the internet, computation is part of day-to-day life
  • ubiquity of computation as a tool –  from physical simulations to mathematical proofs and eScience
  • ubiquity of computation as an explanatory framework – modelling physical and biological systems as if they were performing a computational function

It is the last, computation as analogy or theoretical lens, that seems most critical to the argument, as the others are really about computers in the traditional sense, and few would argue that computers (rather than computation) are everywhere.

Rather like weak AI, one can treat these analogies as simply that: analogies, rather like the fact that electrical flow in circuits can be compared with water flow in pipes.  So we may feel that computation is a good way to understand a phenomenon, but with no intention of saying the phenomenon is fundamentally computational.

However, some things are closer to a ‘strong’ view of computation as natural science.  One example of this is the “socio-organisational Church-Turing hypothesis”, a term I often use in talks with its origins in a 1998 paper “Redefining Organisational Memory“.  The idea is that organisations are, amongst other things, information processing systems, and therefore it is reasonable to expect to see structural similarities between phenomena in organizations and those in other information processing systems such as digital computers or our brains.  Here computation is not simply an analogy or useful model, the similarity is because there is a deep rooted relationship – an organisation is not just like a computer, it is actually computational.

Apparently absent are examples where the methods of algorithmics or software engineering are being applied in other domains; what has become known as ‘computational thinking‘. This may be because there are two sides to computing:

  • computing (as a natural science) understanding how computers (and similar things) behave – related to issues such as emergence, interaction and communication, limits of computability
  • computing (as a design discipline) making computers (and similar things) behave as we want them to – related to issues such as hierarchical decomposition and separation of concerns

The first can be seen as about understanding complexity and the latter controlling it.  Denning is principally addressing the former, whereas computational thinking is about the latter.

Denning’s thesis could be summarised as “computation is about more than computers”.  However, that is perhaps obvious in that the early study of computation by Church and Turing was before the advent of digital computers; indeed Church was primarily addressing what could be computed by mathematicians not machines!  Certainly I was left wondering what exactly was ubiquitous: computation, mathematics, cybernetics?

Denning notes how the pursuit of cybernetics as an all-embracing science ran aground as it seemed to claim too much territory (although the practical application of control theory is still alive and well) … and one wonders if the same could happen to Denning’s vision.  Certainly embodied-mind theorists would find more in common with cybernetics than with much of computing, and mathematicians are not going to give up the language of God too easily.

Given my interest in all three subjects, computation, mathematics, cybernetics, it prompts me to look for the distinctions, and overlaps, and maybe the way that they are all linked (perhaps all just part of mathematics ;-)).  Furthermore as I am also interested in understanding the nature of computation I wonder whether looking at how natural things are ‘like’ computation may be not only an application of computational understanding, but more a lens or mirror that helps us see computation itself more clearly.

  1. well when I say re-reading I think the first ‘read’ was more of a skim, great thing about being on sabbatical is I actually DO get to read things 🙂 [back]

The Cult of Ignorance

Throughout society, media, and academia, it seems that ignorance is no longer a void to be filled, but a virtue to be lauded.  Ignorance itself is not a ‘problem’, not something to be ashamed of: it is either an opportunity to learn or a signal that you need to seek external expertise.  However, when ignorance is seen as something good in itself, almost a sign of superiority over those who do have knowledge or expertise, then surely this is a sign of a world in decadence.

Although it is something of which I’ve been aware for a long time, two things have prompted me to think about it again: a mailing-list discussion about science in schools and a recent paper review.

The CPHC mailing list discussion was prompted by a report by the BBC on a recent EU survey of attitudes to science amongst 15-25 year olds.  The survey found that around half of Irish and British respondents felt they “lacked the skills to pursue a career in science”, compared with only 10% in several eastern European countries.  The discussion was prompted not so much by the result itself as by the official government response: that the UK science community needed to do more “to understand what excites and enthuses young people and will switch them on to a science future.”  While no-one disagrees with the sentiment, regarding it as ‘the problem’ disregards the fact that the countries where scientific and mathematical education is not a problem are precisely those whose educational systems are more traditional and less focused on motivation and fun!

I have blogged before about my concerns regarding basic numeracy, but that was about ‘honest ignorance’, people who should know not knowing.  However, there is a common attitude to technical subjects that makes it a matter of pride for otherwise educated people to say “I could never do maths” or “I was never good at science”, in a way that would be incongruous if it were said about reading or writing (although as we shall see below technologists do do precisely that), and often with the implication that to have been otherwise would have been somehow ‘nerdy’ and made them less well-balanced people.

Sadly this cult of ignorance extends also to academia.

A colleague of mine recently had reviews back on a paper.  One reviewer criticised the use of the term ‘capitalisation’ (which was, in context, referring to ‘social capital’), as to the reviewer the word meant making letters upper case.  The reviewer suggested that this might be a word in the author’s native language.

At a time when the recapitalisation of banks is a major global issue, this surely feels like culpable ignorance.  Obviously the word was being used in a technical sense, but the reviewer was suggesting it was not standard English.  Of course, ‘capital’ in the financial sense certainly dates back 300 years, the verb ‘capitalise’ is part of everyday speech (“let’s capitalise on our success”), and my 30-year-old Oxford English Dictionary includes the following:

Capitalize 1850. …. 2. The project of capitalizing incomes 1856. Hence Capitalization.

Now I should emphasise it is not the ignorance of the reviewer I object to; I know I am ignorant of many things and ready to admit it.  The problem is that the reviewer feels confident enough in that ignorance to criticise the author for the use of the word … apparently without either (a) consulting a dictionary, or (b) while filling out the online review form bothering to Google it!

This reminded me of a review of a paper I once received that criticised my statistical language, suggesting I should use the proper statistical term ‘significance’ rather than the informal language ‘confidence’.  Now many people do not really understand the difference between significance testing (evidence of whether things are different) and confidence intervals (evidence of how different or how similar they are) – and so rarely use the latter, even though confidence intervals are a more powerful statistical tool.  However the problem here is not so much the ignorance of the reviewer (albeit that a basic awareness of statistical vocabulary would seem reasonable in a discipline with a substantial experimental side), but the fact that the reviewer felt confident enough in his/her ignorance to criticise without either consulting an elementary statistical text book or Googling “statistics confidence”.
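To make the distinction concrete, here is a minimal sketch (the sample data are invented, and the t critical value for 18 degrees of freedom is hard-coded): the significance question is only whether the interval excludes zero, while the confidence interval additionally says how big the difference plausibly is.

```python
# Significance vs. confidence: a two-sample comparison using only the
# standard library.  Data and the hard-coded critical value (two-sided 95%
# point of t with 18 d.f.) are illustrative, not a general-purpose test.
import statistics

a = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7, 12.4, 12.1]
b = [11.2, 11.5, 11.0, 11.4, 11.3, 11.6, 11.1, 11.5, 11.2, 11.4]

diff = statistics.mean(a) - statistics.mean(b)
se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
t_crit = 2.101                     # two-sided 95% point of t, 18 d.f.

low, high = diff - t_crit * se, diff + t_crit * se
print(f"difference = {diff:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
print("significant at 5%:", not (low <= 0 <= high))
```

The interval here excludes zero, so the difference is significant at the 5% level; but the interval itself tells you far more, namely the plausible range of the effect size.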

So, let’s be proud of our skills and our knowledge, humble in accepting the limits of what we know, and confident enough in ourselves that we do not need to denigrate others for doing what we cannot.  Then ignorance becomes a springboard to learn more and a launching point for collaboration.

PPIG2008 and the twenty first century coder

Last week I was giving a keynote at the annual workshop PPIG2008 of the Psychology of Programming Interest Group.   Before I went I was politely pronouncing this pee-pee-eye-gee … however, when I got there I found the accepted pronunciation was pee-pig … hence the logo!

My own keynote at PPIG2008 was “as we may code: the art (and craft) of computer programming in the 21st century”, an exploration of the changes in coding since 1968, when Knuth published the first of his books on “the art of computer programming“.  On the web site for the talk I’ve made a relatively unstructured list of some of the distinctions I’ve noticed between 20th- and 21st-century coding (C20 vs. C21); and in my slides I have started to add some more structure.  In general we have a move from a more mathematical, analytic, problem-solving approach to something more akin to a search task: finding the right bits to fit together, with a greater need for information management and social skills. Both this characterisation and the list are, of course, gross simplifications, but they seem to capture some of the change of spirit.  These changes suggest different cognitive issues to be explored, and maybe different personality types involved – as one of the attendees, David Greathead, pointed out, rather like the judging vs. perceiving personality distinction in Myers-Briggs1.

One interesting comment on this was from Marian Petre, who has studied many professional programmers.  Her impression, and echoed by others, was that the heavy-hitters were the more experienced programmers who had adapted to newer styles of programming, whereas  the younger programmers found it harder to adapt the other way when they hit difficult problems.  Another attendee suggested that perhaps I was focused more on application coding and that system coding and system programmers were still operating in the C20 mode.

The social nature of modern coding came out in several papers about agile methods and pair programming.  As well as being an important phenomenon in its own right, pair programming gives a level of think-aloud ‘for free’, so maybe this will also cast light on individual coding.

Margaret-Anne Storey gave a fascinating keynote about the use of comments and annotations in code and again this picks up the social nature of code as she was studying open-source coding where comments are often for other people in the community, maybe explaining actions, or suggesting improvements.  She reviewed a lot of material in the area and I was especially interested in one result that showed that novice programmers with small pieces of code found method comments more useful than class comments.  Given my own frequent complaint that code is inadequately documented at the class or higher level, this appeared to disagree with my own impressions.  However, in discussion it seemed that this was probably accounted for by differences in context: novice vs. expert programmers, small vs large code, internal comments vs. external documentation.  One of the big problems I find is that the way different classes work together to produce effects is particularly poorly documented.  Margaret-Anne described one system her group had worked on2 that allowed you to write a tour of your code opening windows, highlighting sections, etc.

I sadly missed some of the presentations as I had to go to other meetings (the danger of a conference at your home site!), but I did get to some and  was particularly fascinated by the more theoretical/philosophical session including one paper addressing the psychological origins of the notions of objects and another focused on (the dangers of) abstraction.

The latter, presented by Luke Church, critiqued Jeanette Wing‘s 2006 CACM paper on Computational Thinking.  This is evidently a ‘big thing’ with loads of funding and hype … but one that I had entirely missed :-/ Basically the idea is to translate the ways that one thinks about computation to problems other than computers – nerds rule OK. The tenets of computational thinking seem to overlap a lot with management thinking, and also reminded me of the way my own HCI community, and parts of the Design (with a capital D) community, in different ways are trying to say that we/they are the universal discipline … well, if we don’t say it about our own discipline who will … the physicists have been getting away with it for years 😉

Luke’s (and his co-authors’) argument is that abstraction can be dangerous (although of course it is also powerful).  It would be interesting, perhaps, to look at this argument alongside Jeff Kramer’s 2007 CACM article “Is abstraction the key to computing?“, which I recall liking because it says computer scientists ought to know more mathematics 🙂 🙂

I also sadly missed some of Adrian Mackenzie‘s closing keynote … although this time not due to competing meetings, but because I had been up since 4:30am reading a PhD thesis and, after lunch on a Friday, had begun to flag!  However, this was no reflection on Adrian’s talk, and the bits I heard were fascinating, looking at the way bio-tech is using the language of software engineering.  This sparked a debate relating back to the overuse of abstraction, especially in the case of the genome, where interactions between parts are strong and so the software-component analogy weak.  It also reminded me of yet another relatively recent paper3 on the way computation can be seen in many phenomena and should not be construed solely as a science of computers.

As well as the academic content, it was great to be with the PPIG crowd; they are a small but very welcoming and accepting community – I don’t recall anything but constructive and friendly debate … and next year they have PPIG09 in Limerick – PPIG and Guinness, what could be better!

  1. David has done some really interesting work on the relationship between personality types and different kinds of programming tasks.  I’ve seen him present before about debugging, and unfortunately had to miss his talk at PPIG on comprehension.  Given his work has shown clearly that there are strong correlations between certain personality attributes and coding, it would be good to see more qualitative work investigating the nature of the differences.  I’d like to know whether strategies change between personality types: for example, between systematic debugging and more insight-based scan-and-see-it bug finding. [back]
  2. but I can’t find on their website :-([back]
  3. Perhaps 2006/2007 in either CACM or Computer Journal, if anyone knows the one I mean please remind me![back]

Basic Numeracy

When the delayed SATs results eventually arrive, I’m sure there will be the regular navel-gazing at the state of basic numeracy and literacy in UK schools. But what about those who were in primary schools 30 years ago?

This morning on the BBC News Channel an interviewer was talking to an economist from the City. They were discussing the reduction in bank lending (a fall of 3% during June, a 32% year-on-year drop) and its implications for the housing market and the economy in general. The interviewer asked if the fall was accelerating and the economist agreed, mentioning how the year-on-year drop had gone from 10% in one quarter to 20% in the next, and now over 30%.

Of course, these figures are all based on a year-on-year comparison whose baseline includes the period before the credit crunch began last autumn, and in fact they are consistent with a steady linear fall of around 3% per month for the 9 months since the Northern Rock collapse. That is an alarming rate of fall, but not evidence of an accelerating fall.
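A back-of-envelope check shows why (the numbers here are invented for illustration, not the actual lending figures): suppose lending was flat until the crunch, then fell by a steady 3 points per month. The year-on-year drops at successive quarters then look like acceleration even though the monthly fall is perfectly constant.

```python
# Steady linear fall, "accelerating" year-on-year figures.
level = {m: 100.0 for m in range(-24, -9)}     # flat before the crunch
for m in range(-9, 1):                         # then 3 points off per month
    level[m] = level[m - 1] - 3

drops = []
for months_ago in (6, 3, 0):                   # successive quarters
    now, year_before = level[-months_ago], level[-months_ago - 12]
    drops.append(100 * (year_before - now) / year_before)

print([f"{d:.0f}%" for d in drops])   # ['12%', '21%', '30%']
```

The quarterly year-on-year figures climb by roughly ten points a quarter, much like the 10%, 20%, 30% sequence quoted on air, yet nothing in the underlying series is accelerating.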

This apparent lack of basic numeracy reminds me of a discussion some years ago with senior financial executives who dismissed any attempt to quantify projected company income as ‘just numbers’. Having lost money in the Northern Rock collapse I wonder whether the executives in Northern Rock and other banks had a similar attitude!

I know it is easy for me as a trained mathematician to hold up my hands in horror, but still these are people who are playing not only with their own livelihoods, but also the lives of their investors, ordinary people and even the state of the entire economy.

We do have a peculiar attitude in the UK where it is acceptable for highly educated people (including many computer scientists) to just ‘not do math’, and furthermore to say so with a level of pride, whereas to say the same about reading would be unconscionable. Other European countries seem far more numerate, so this seems to be a cultural phenomenon, not an intellectual problem.

I have heard that one of the best predictors of educational success is whether a child is willing to put off a treat until another day. Mathematics requires doing work at one stage to see the benefit maybe many years later, and this to some extent runs counter to the increasingly common expectation among students of wanting to know fully and completely how something is useful to them now.

Maybe the answer is for schools to have lessons in leaving sweeties until tomorrow … and perhaps remedial lessons for City economists who matured during the Thatcher years.

mathematics goes reality TV!

In 1978 I was on the British team for the 20th International Mathematical Olympiad (recollections of the trip). It was in Romania and the event was prime-time news … and I was one of a group interviewed for the news coverage of the event. The following year the 21st IMO was held in London, and there was no press coverage that I could find whatsoever. OK, mathematics is hardly a spectator sport, but the complete British lack of interest in anything remotely intellectual was disturbing.

But now … nearly 30 years later … perhaps things are changing. On Sunday BBC2 are showing a 90-minute documentary about the olympiad team. Maybe maths will get sexy!

Beautiful Young Minds1
BBC2 Sun 14 Oct, 9:00 pm – 10:30 pm 90mins
Beautiful Young Minds tells the story of some of the brightest mathematical brains of a generation. Each year, exceptionally gifted teenagers from over 90 countries compete for medals at the International Mathematical Olympiad. The film follows a group of brilliant teenagers as they battle it out to become the chosen six selected to represent the UK.

  1. unfortunately the BBC’s own page on this disappeared at the end of the week – why do they do this! – but there are many descriptions and reviews of it on the web including one at [back]