I’ve just been re-reading Peter Denning’s article “Computing is a Natural Science” [1]. The basic thesis is that computation as a broad concept goes way beyond digital computers, and that many aspects of science and life have a computational flavour; “Computing is the study of natural and artificial information processes”. As an article this is in some ways not so controversial, as computational analogies have been used in cognitive science for almost as long as there have been computers (and before that, analogies with steam engines), and readers of popular science magazines can’t have missed the physicists drawing parallels with information and computation in cosmology. However, Denning’s paper is to some extent a manifesto, or call to arms: “computing is not just a load of old transistors, but something to be proud of, the roots of understanding the universe”.
Particularly useful is the “principles framework for computing” that Denning has been constructing through a community discussion process. I mentioned this before when blogging about “what is computing?”. The article lists the top-level categories and gives examples of how each of these can be found in areas that one would not generally think of as ‘computing’.
- Computation (meaning and limits of computation)
- Communication (reliable data transmission)
- Coordination (cooperation among networked entities)
- Recollection (storage and retrieval of information)
- Automation (meaning and limits of automation)
- Evaluation (performance prediction and capacity planning)
- Design (building reliable software systems)

categories from “Great Principles of Computing”
Occasionally the mappings are stretched … I’m not convinced that natural fractals are about hierarchical aggregation … but they do paint a picture of ubiquitous parallels across the key areas of computation.
Denning is presenting a brief manifesto, not a treatise, so examples of very different kinds tend to be presented together. There seem to be three main kinds:
- ubiquity of computational devices – from iPods to the internet, computation is part of day-to-day life
- ubiquity of computation as a tool – from physical simulations to mathematical proofs and eScience
- ubiquity of computation as an explanatory framework – modelling physical and biological systems as if they were performing a computational function
It is the last, computation as analogy or theoretical lens, that seems most critical to the argument, as the others are really about computers in the traditional sense, and few would dispute that computers (rather than computation) are everywhere.
Rather like weak AI, one can treat these analogies as simply that, much as electrical flow in circuits can be compared with water flow in pipes. So we may feel that computation is a good way to understand a phenomenon, but with no intention of saying the phenomenon is fundamentally computational.
However, some things are closer to a ‘strong’ view of computation as natural science. One example of this is the “socio-organisational Church-Turing hypothesis”, a term I often use in talks, with its origins in a 1998 paper “Redefining Organisational Memory”. The idea is that organisations are, amongst other things, information processing systems, and therefore it is reasonable to expect to see structural similarities between phenomena in organisations and those in other information processing systems such as digital computers or our brains. Here computation is not simply an analogy or useful model; the similarity exists because there is a deep-rooted relationship – an organisation is not just like a computer, it is actually computational.
Apparently absent are examples where the methods of algorithmics or software engineering are applied in other domains: what has become known as ‘computational thinking’. This may be because there are two sides to computing:
- computing (as a natural science): understanding how computers (and similar things) behave – related to issues such as emergence, interaction and communication, and the limits of computability
- computing (as a design discipline): making computers (and similar things) behave as we want them to – related to issues such as hierarchical decomposition and separation of concerns
The first can be seen as being about understanding complexity and the second about controlling it. Denning is principally addressing the former, whereas computational thinking is about the latter.
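To make the contrast concrete, here is a toy sketch of my own (nothing of the sort appears in Denning’s paper): a few lines of Python implementing Wolfram’s Rule 110 cellular automaton. The code is written in the ‘design discipline’ spirit, decomposed into a rule lookup, a stepping function and a display function; but the triangular, almost organic patterns it prints from a single seed cell are exactly the kind of emergent behaviour that the ‘natural science’ side studies by running and observing rather than by reading the source.

```python
# Toy sketch (my own illustration, not from Denning's paper): an elementary
# cellular automaton, Wolfram's Rule 110. The code shows the design side
# (small pieces, each with a single concern); the patterns it prints show
# the natural-science side, behaviour studied by running and observing.

RULE = 110  # any rule number from 0 to 255 would do


def next_cell(left, centre, right):
    """Look up the new state of a cell from its three-cell neighbourhood."""
    index = (left << 2) | (centre << 1) | right
    return (RULE >> index) & 1


def step(row):
    """Compute the next generation; cells beyond the edges are treated as 0."""
    padded = [0] + row + [0]
    return [next_cell(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]


def show(row):
    """Render a row of cells as text."""
    return "".join("#" if cell else "." for cell in row)


if __name__ == "__main__":
    width, generations = 64, 24
    row = [0] * width
    row[-1] = 1  # a single live cell seeds surprisingly intricate behaviour
    for _ in range(generations):
        print(show(row))
        row = step(row)
```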
Denning’s thesis could be summarised as “computation is about more than computers”. However, that is perhaps obvious, in that the early study of computation by Church and Turing came before the advent of digital computers; indeed, Church was primarily addressing what could be computed by mathematicians, not machines! Certainly I was left wondering what exactly is ubiquitous: computation, mathematics, or cybernetics?
Denning notes how the pursuit of cybernetics as an all-embracing science ran aground as it seemed to claim too much territory (although the practical application of control theory is still alive and well) … and one wonders if the same could happen to Denning’s vision. Certainly embodied-mind theorists would find more in common with cybernetics than with much of computing, and mathematicians are not going to give up the language of God too easily.
Given my interest in all three subjects (computation, mathematics and cybernetics), I am prompted to look for the distinctions and overlaps, and maybe the ways in which they are all linked (perhaps all just part of mathematics ;-)). Furthermore, as I am also interested in understanding the nature of computation, I wonder whether looking at how natural things are ‘like’ computation may be not only an application of computational understanding, but also a lens or mirror that helps us see computation itself more clearly.
[1] Well, when I say re-reading, I think the first ‘read’ was more of a skim; the great thing about being on sabbatical is that I actually DO get to read things 🙂