Recently I was in a meeting where the issue of ‘core’ computer science came up. One person listed a few areas, but then this was challenged by another member of the group who said (to be fair, partly in jest), that core computer science should certainly include computer architecture, but not the ‘human stuff’.
I felt a little like a teenager, complete with T-shirt and iPod, dropped into Jurassic Park – arguments that I thought had been put to bed in the 1980s suddenly resurfacing. How do you explain this white thing that makes sounds from its earphones to a caveman wearing skins?
However, I also felt a certain sympathy, as I often wonder about computer science as a whole; indeed it had its own arguments in the 1960s and ’70s as to whether it was a ‘discipline’, as opposed to just an application domain for maths or electronics, or just a tool for business. Maybe one of the clinchers was the theoretical foundations of computing in the work of Church and Turing … but strangely enough, at Lancaster the closest thing to this, the course on algorithmic complexity, is taught by an HCI person!
One of my worries about computing is that these theoretical foundations are still weak; there is a black hole at the theoretical centre of computer science[1]. However, these theoretical issues were certainly not what was bothering my colleague. To answer his challenge, and my own worries about the discipline, we really need to know – what is computing?
Fundamentals: algorithmics and systemics
One way to ‘carve up’ computing is in terms of algorithmics and systemics[2].
Algorithmics – the way computational devices behave individually and largely internally to achieve particular input/output behaviours.
Systemics – the way computational devices fit together in systems with their environments, in order to achieve external purposes.
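To make the contrast concrete, here is a minimal Java sketch of my own – purely illustrative, not a definition. The first method is algorithmic: given its input, its output is determined entirely by the code inside it. The second fragment only makes sense systemically, as a reaction to events pushed in from its environment; the TemperatureSensor interface is hypothetical, invented just for the example.

```java
import java.util.Arrays;

public class AlgorithmicsVsSystemics {

    // Algorithmics: a self-contained input/output transformation.
    static int[] sortedCopy(int[] input) {
        int[] result = Arrays.copyOf(input, input.length);
        Arrays.sort(result);
        return result;
    }

    // Systemics: behaviour that only makes sense as part of a larger whole.
    // This handler does nothing on its own; it waits for the environment
    // (a user, a network, a sensor) to deliver events to it.
    interface TemperatureSensor {          // hypothetical event source
        void onReading(double celsius);
    }

    static TemperatureSensor alarm(double threshold) {
        return celsius -> {
            if (celsius > threshold) {
                System.out.println("Warning: " + celsius + " C");
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sortedCopy(new int[]{3, 1, 2})));
        alarm(30.0).onReading(31.5);       // the environment 'pushes' an event in
    }
}
```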
When a computer scientist thinks of the ‘core’ of computing, it is the algorithmic side that springs most readily to mind, and this is certainly the principal focus of more theoretical computer science. However, when Peter Denning suggests seven categories of fundamental principles of computing (computation, communication, coordination, recollection, automation, evaluation and design), it is notable that only two, computation and recollection, fall relatively soundly under algorithmics[3].
Indeed, the topics and sub-disciplines of computing address different aspects of both algorithmics and systemics. Some topics are almost entirely about algorithmics: programming languages (theory and practice), compilation, data structures and algorithms, and algorithmic complexity. Other topics, notably software engineering and human-computer interaction, are focused most strongly on systemics. Most topics incorporate aspects of both: for example, most courses on databases consider not only the context in which databases work and the way in which they connect to code (systemics), but also the data structures and algorithms used to achieve this behaviour (algorithmics); and in my own HCI courses I cover both the more external human design issues and the algorithmic and architectural issues of interactive systems[4].
So, while issues of systemics underlie the majority of modern computer science, it is still algorithmics alone that comprises our caricature of the subject. Systemics is like the Cinderella of the discipline who does all the work but gets none of the limelight.
Taking systemics seriously
Apart from providing grist for bar-table discussions of computing, does this matter?
It certainly seems to influence what we regard as the basics of computing for first-year computing students.
When I was in previous departments that had both ‘computer science’ and business computing students, the former were taught Java and the latter Visual Basic. VB was regarded as a somewhat Noddy language, in particular because it was poor in data structures and so less suitable for the fundamentals of ‘real’ programming such as arrays and iteration (interestingly, both are fine in VB!). What interested me was that students doing VB started out with event-based programming, as the ‘easy’ way into programming with VB is to construct a GUI using visual tools and then add behaviours. In contrast, for those doing Java this would have been an ‘advanced’ topic they did when they eventually got on to GUI programming, or maybe never.
Event-based programming is core when you think systemics – that the computer program you are writing is part of a larger whole – but seems peripheral to a narrower algorithmic focus. Of course, it is not just graphical user interfaces where event-based programming is core, but also any sort of distributed, networked or embedded programming, and not least web computing, both at the server end with transactional code and now at the client end too with AJAX. Absolutely central to so many aspects of computing, but often an ‘optional extra’ in our teaching.
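To show what ‘events first’ looks like in Java itself, here is a minimal Swing sketch – purely illustrative, not taken from either curriculum. The point is that the toolkit owns the control flow; our code is just a handler that the surrounding system calls when the user acts.

```java
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class EventFirst {
    public static void main(String[] args) {
        // The GUI toolkit's event loop decides when our code runs.
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Hello");
            JButton button = new JButton("Click me");
            // Event-based programming: attach behaviour to an event source.
            button.addActionListener(e ->
                    System.out.println("Clicked at " + e.getWhen()));
            frame.add(button);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);
        });
    }
}
```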
At a theoretical level, Wegner has been vocal in advocating that an adequate account of theoretical computing has to take into account interaction with the environment[5]. Personally, I don’t buy all his arguments, but the crucial message is clearly true: computational devices do not simply take an input and (eventually) produce an output, but are in fact processes with their own state, interacting with their environment; input and output are in fact sequences, not singular occurrences[6].
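A tiny sketch of my own (not Wegner’s formulation) of the same point: the program below never produces ‘the’ output. It keeps state and emits a growing output sequence, one element for each element of the input sequence, for as long as the environment keeps supplying numbers.

```java
import java.util.Scanner;

public class RunningTotal {
    public static void main(String[] args) {
        // Not input -> compute -> output, but an ongoing exchange:
        // each input arrives over time, and each output depends on the
        // state accumulated from everything seen so far.
        long total = 0;
        Scanner in = new Scanner(System.in);
        while (in.hasNextLong()) {
            total += in.nextLong();     // next element of the input sequence
            System.out.println(total);  // next element of the output sequence
        }
    }
}
```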
Systemics also forces you to take embodied computation seriously, both in the theoretical sense that I have considered myself[7] and in the practical sense that computational devices are physically in the world as well as interacting with it, and so we need to take this into account. For example, issues of power consumption were central to the One Laptop per Child project, as they are in ubiquitous and mobile computing.
With the exceptions of space and time complexity, algorithmics focuses us on functional properties, whereas systemics connects more closely to non-functional properties. Interestingly, all the problems I cited in my recent blog post “I just wanted to print a file” are about systemics.
1. This demands a discussion of its own, but the basic problem is that while Church and Turing gave us an understanding of disembodied computation, we still do not have a clear understanding of generic computation when it is embodied in devices in general, only in particular architectures. [back]
2. Perhaps I am still being too narrow in this distinction. Arguably there ought to be a heading for pragmatics: how we understand and manage the processes external to the computational system, those that give rise to it (e.g. requirements) and those that sustain it (e.g. maintenance, testing, documentation). However, for the time being I’ll lump these under the systemic label. [back]
3. The principles are listed in Denning, P. J. 2007. Computing is a natural science. Comm. ACM 50, 7 (Jul. 2007), 13-18. DOI= http://doi.acm.org/10.1145/1272516.1272529. They are part of Denning’s programme to develop “Great Principles of Computing”. [back]
4. In fact, Lancaster Computing prides itself on its systems orientation, and this is reflected in areas of expertise that are strongly focused on systemics: networking, distributed systems, software engineering, HCI, embedded systems. [back]
5. Wegner, P. 1997. Why interaction is more powerful than algorithms. Comm. ACM 40, 5 (May 1997), 80-91. DOI= http://doi.acm.org/10.1145/253769.253801. See also Peter Wegner’s home page and Dina Goldin’s papers on Interactive Computation. [back]
6. In Wegner and Goldin’s work, I/O are strictly sequences, not time series – they assume synchronised stepwise processing. As an aside, it is interesting to think more widely about time in computing. Time is taken into account in various areas, notably algorithmic complexity and (naturally) real-time computing. However, the central notion of computability in Church and Turing ignores time so long as it is finite. [back]
7. See “matterealities and the physical embodiment of code” and my notes on embodied computation. [back]
I agree with so much of this. Here at York, students on our ‘flagship’ MEng programme MUST do a double option on Real-Time Systems in their third year, but can only take HCI as an option. How many real-time systems are there in the world and how many systems are there that have an interface to a person?
At the same time, a former member of the HCI Group here used to ask the same question of most HCI PhD students: ‘Where is the Computer Science in your work?’ I used to worry about that, until I realized that if it is HCI, then it IS computer science.