Research Report - RR9405
At time of publication: hci@hud, School of Computing and Mathematics, University of Huddersfield
Currently: Lancaster University
Seven years ago I wrote a paper entitled "The Myth of the Infinitely Fast Machine". It investigated certain temporal problems for user interfaces due principally to computational delays. In this paper I revisit the same ground to see how much has changed in the intervening years - computers are faster, so have the delays and consequent problems gone away? Sadly, many of the same problems remain. In some cases network delays have replaced computation; in others the greater computational demands of graphical interfaces have led to a Murphy's law situation. The general issues identified seven years ago are still evident. One issue, which recurs in many related areas, is the richness of users' coping strategies, which compensate for, but also mask, usability problems. Another is the lack of appropriate software infrastructure. However, to counter these persistent problems, some design guidelines are given.
The most critical point of the 1987 paper was that designers and documentors of interactive systems effectively believe that computers are infinitely fast (fast enough). Seven years on, this myth continues ...
This paper keeps a tight focus on the current problems which are directly related to those identified in the original Myth paper. However, other temporal issues are important to interface design. Relevant aspects of my own work include status/event analysis (Abowd and Dix 1994), the central concept of pace (Dix 1992) and problems of long timescale interactions (Dix 1994). The TAU project in Glasgow has also been studying these issues for several years (Gray et al. 1994) and in June 1995 a workshop dedicated to this issue will be held in Glasgow organised by the TAU project. The web pages below will contain pointers to work in the area:
I am writing in the late summer of 1994; seven years ago, at the HCI'87 conference, I presented a paper entitled "The Myth of the Infinitely Fast Machine" (Dix 1987). The thesis of that paper was that designers and documentors of interactive systems assumed that the computer was fast enough to give the illusion of instant response; that is, they effectively believed in an infinitely fast machine.
Of course, if asked, they would have denied this belief and perhaps pointed out the parts of their systems which they considered too slow, or where they had included progress indicators during heavy computation. However, despite the words, their actions betrayed their acceptance of the myth. Looking at existing systems, the style of interaction clearly expected that users would wait for the system's response at each step. The major exception to this was type-ahead and mouse-ahead, a weak attempt to paper over the cracks. Furthermore, the documentation was written entirely in terms of the statics of the system: "the text entry screen looks like this", and the atomic effects of user actions: "when you hit the F3 key the system does this". Even where some elements of more dynamic behaviour were included in the system, they were not described in the manuals. There was some excuse for this: it is very difficult to describe dynamic behaviour on the written page. But even where a simple explanation was possible, it was still omitted.
In short, systems were designed as if the machines they were to run on were infinitely fast. Concessions were made only where this assumption led to severe problems, such as the loss of user input, and even then the concession was usually minimal. Predictably, this led to usability problems.
One key problem was that it was often impossible to tell whether there was any remaining buffered input, or whether the system had finished processing. For typing this was not too bad, as one only intermittently looked at the screen. However, for cursor movement, where feedback is part of the process, it was more of a problem. I called this 'cursor tracking' (Box 1). These problems were exacerbated by the unpredictable delays in time-share systems. For long periods full interactive response might be maintained, and then, suddenly, it would get you!
|The user is trying to move the cursor to a particular point on the screen. She keeps pressing the cursor-right key until the cursor is over the target and then stops. Unfortunately, there is some buffered input remaining, the buffered cursor keys are then processed and the cursor overshoots. So the user presses the cursor-left key until the cursor is over target ... and overshoots again. This could go on for some time. Even when the cursor stopped over the target the user could not be sure whether it had really stopped, or whether there was still buffered input. She could begin to type, but there was a chance that the typing might subsequently appear in the wrong place.|
Box 1. Cursor Tracking
People are infinitely resourceful. Faced with this problem users devise various coping strategies, ways of avoiding or alleviating the problem. A strategy which I used myself and observed in others was as follows. Press the cursor as before. When you stop pressing it, type an 'X'. If the 'X' appears where you want to type, then delete it and type. If it appears somewhere else, delete it and correct the position. A variant on this was to move the cursor along and then up one line, when the cursor went up, you knew it had stopped. Coping strategies appear in various contexts to enable the user to correct interface problems. Sometimes they can be so successful that the user no longer recognises that the problem exists. I once asked a colleague whether he experienced the cursor tracking problem. "No" he said, "whenever I move the cursor I press an 'X' when I stop ..." Of course, even though the problem is no longer noticed, it is still there and the coping strategy puts an extra load on the user.
Since then the power and moreover the speed of desktop computers has increased dramatically. Furthermore, we now live on the lee side of the 'personal computer revolution' and vt100s attached to time share computers have become the stuff of coffee time reminiscences. So, seven years on, how have things changed?
Sadly, despite the massive increase in computational power, some of the old problems remain, and some new ones have arisen.
Most of us will now be using a personal computer or workstation, and word-processors rarely have any perceivable lag for typing or cursor movement, even in WYSIWYG modes. But, even where the VT100 has been consigned to the rubbish skip, its spirit lives on. Across the land, on PCs and workstations, amidst the menus and mice, there are windows 80 by 25 characters large, connected by modems to bulletin boards, or using Telnet across the Internet. Emulators replace dumb terminals, network delays replace processing time, but cursor tracking is on the move, long forgotten habits resurface and I am typing 'X's again.
Users of diskless workstations will also be aware of the strange behaviours resulting from delays in accessing remote disks. Sometimes these are predictable (e.g., when you load or save a file), but at other times they may be due to some background processing, or to virtual memory. Indeed, the design of some UNIX systems means that you experience swapping delays even when there is sufficient real memory for all your applications.
Portable computers exhibit similar problems when their disk has been powered down and has to spin up again. This is particularly noticeable on the Macintosh because of its lazy loading of resources. Parts of an application - icons, dialogue box descriptions and portions of code - are only read off the disk when they are needed. This means that the application takes less time to start up and does not use up memory on features that you never use. When you eventually use the feature it is loaded off the disk. This is hardly noticeable on a desktop computer, but on a portable computer, pressing, say, a cursor key for the first time in a session may lead to several seconds' delay. As desktop computers become more environmentally friendly, they will incorporate the same power-saving features and such delays will become common.
Networks and disks are commonly regarded as the slow parts of a system, but the problem does not stop there. Developers will typically design applications with more and more features, to exploit the computational and graphic power of current computers. However, there is a three year rule for personal computers: this year's state-of-the-art system will be obsolete in three years' time. Software which runs tolerably well on the top end machines of today may struggle in the mid-range and be in severe trouble on machines a few years old. I'll describe a few problems with Word on the Macintosh (not that Word is particularly bad - it is simply commonly available). These problems are all noticeable on the machine I am currently using, which is six months old and mid-range. They are crippling on an older machine.
First of all consider scrolling, one of the most common actions and surely one where some usability effort would be expended. Clicking the scroll bar below the handle scrolls the display a screen at a time. Click it several times in quick succession and the screen painfully keeps redrawing until the buffered input is all processed. On a slower machine this is also noticeable when using line-at-a-time scrolling (and even more painful to watch). As it is likely that the majority of the delay is due to actually drawing the display, a standard solution, intermittent update, is obviously required (although this does have some problems, discussed later). When the application detects several end-to-end scroll events it should compress them into one big scroll to the final page.
In general, applications can chunk buffered input in order to reduce the time taken in graphics processing and hence improve response time. Unfortunately, the event-driven model of most GUIs discourages this mode of thinking (you don't explicitly inspect the event queue). So, one can understand the above lapse; but it is not unavoidable, as other Mac applications do manage intermittent update. For example, Now Up-To-Date, a calendar/diary package, will interpret a quick triple strike of the 'next month' button as a single three-month move, but if the three clicks are slower it may show all of the intermediate months.
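As a sketch (not Word's or Now Up-To-Date's actual code), the compression of buffered scroll events might look like this: runs of adjacent scroll events drained from the queue are merged into a single jump, so the display is redrawn once, at the final position.

```python
def compress_scrolls(events):
    """Collapse runs of adjacent scroll events into one big scroll.

    `events` is a list of (kind, value) pairs drained from the input
    queue; consecutive 'scroll' events are merged so that the display
    is redrawn once, at the final position, not once per click.
    """
    out = []
    for kind, value in events:
        if kind == 'scroll' and out and out[-1][0] == 'scroll':
            out[-1] = ('scroll', out[-1][1] + value)  # merge with previous run
        else:
            out.append((kind, value))
    return out
```

Three buffered page-down clicks thus become a single three-page scroll, while an intervening keystroke still breaks the run.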
The scroll-bar delay is annoying, but does not usually lead to errors as, unlike cursor tracking, there is no problem detecting when scrolling is complete. A more serious problem arises with the spell checker. A simplified version of its dialogue box is shown in Figure 1. It works in a three-stage manner:
Figure 1. Simplified spelling checker dialogue box
On a very fast machine (or with a slow user!) this works fine. But it is an interaction based on an infinitely fast machine - what happens when the system is slower? Stage 1 is not too much of a problem, although the only indication that it is processing is a percentage indicator at the bottom of the document window - not on the dialogue box itself. However, it is quite quick and a reasonable thing to wait for. Stage 2 is more of a problem. The unrecognised word has been displayed and the user may know that it is acceptable before the second stage has completed - and this is often a substantial wait. If during this process the user clicks the 'ignore' button, the computer simply continues processing stage 2, although it is now unneeded:
In fact, the user's mouse click is buffered and acted upon when stage 2 is complete. However, there is no feedback when the button is pressed.
Not only is there no feedback, but the behaviour of the buttons during stage 2 is substantially, and very oddly, different from normal. Normally on the Mac and under other windowing systems, a button press can be cancelled by pulling the mouse off the button before it is released. This works for the spelling dialogue box during stage 3: if one depresses the mouse over the 'ignore' button it highlights, moving off the button whilst the mouse is down removes the highlight, and the subsequent release has no effect (Figure 2). Unfortunately, during stage 2, things are somewhat different. First of all, as already noted, there is no feedback. Still, we can live with that: the user depresses the mouse over the 'ignore' button and simply imagines the highlighting. At that stage the user realises that she has pressed the wrong button - no problem, she simply drags the mouse out of the 'ignore' button and releases it. When stage 2 processing is completed, the user's mouse actions are interpreted. But she then finds that the spell checker acts as if she had clicked the 'ignore' button: the drag off did not cancel the button press.
After this litany of disaster it is not surprising to find that the dialogue box also dynamically changes the labelling of buttons and has no undo.
Figure 2. Normal method to cancel a button press
Although users may develop their own coping strategies, it is the designer who ought to be thinking about the issues. The fundamental problem is simple:
How is it that they overlook such an apparently obvious class of errors? The answer is again simple:
Every software house should have a rule that each employee should, for one day a week, use a two year old mid-range machine. Not only would 80% of timing problems never find their way into delivered software, but other problems, like excessive use of disk and memory, might be solved.
Assuming you have noticed a response time problem, how do you deal with it? Unfortunately, the frequent response is simply to attempt to code it faster, which may work. If this fails, well, no worry, the keyboard and mouse events are buffered! This simplistic approach is both unsatisfactory and unnecessary. There are ways of coping with slow response times; it needs only a little care in the design.
One solution has already been mentioned - intermittent update. That is, not updating the display on every user event. Of course, this needs to be used carefully, as the resulting jerky display update might produce fresh problems. However, if the actual rendering of the display is the major bottleneck, it may be sufficient to restore interactive response.
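A minimal sketch of intermittent update, under the assumption that events are cheap to apply and repainting is the bottleneck: every buffered event is applied to the underlying model, but the screen is repainted only once the queue has drained.

```python
def drain_with_intermittent_update(pending, apply_event, redraw):
    """Apply every buffered event to the model, but repaint only when
    the queue is empty, so a burst of input costs one redraw rather
    than one per event. Returns the number of redraws performed."""
    redraws = 0
    while pending:
        apply_event(pending.pop(0))   # cheap model update
        if not pending:               # queue drained: repaint once
            redraw()
            redraws += 1
    return redraws
```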
I describe a variant of this, non-deterministic intermittent display, in (Dix 1990). Often the work in an interactive system falls into three parts:
Intermittent update saves part 3. The additional step is to apply the same reasoning to part 2 of the process and only repair the application's display structures intermittently. Doing this can further reduce computation and hence eventual interface delays, but may lead to non-deterministic updates. However, the resulting system may appear to the user to be more predictable and natural than one with long delays.
Another display saving strategy is partial update. The display is only partially updated on each user action, with occasional full repair. This was seen on some early word-processors (for example, WordStar under CP/M (MicroPro 1981)), where only the current line of text was updated in real time and the rest of the paragraph layout caught up when the user paused in typing. The use of outlines whilst dragging windows and in graphics programs is a similar trick. The general rule is:
The trade-off between fidelity and timeliness is particularly clear in the Xerox Buttons interface (MacLean et al. 1990). Small buttons are dragged as images, but for large buttons, where the cost of redrawing is too high, outlines are used.
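A toy sketch of partial update in the WordStar style: each keystroke repaints only the current line (cheap), while the costly full reflow of the surrounding layout is deferred until the user pauses. The class and its counters are illustrative, not any real editor's code.

```python
class PartialUpdateDisplay:
    """Repaint only the current line per keystroke; do the expensive
    full layout repair only when the user pauses (an idle moment)."""

    def __init__(self):
        self.line_repaints = 0   # cheap per-keystroke repaints
        self.full_repairs = 0    # costly whole-layout reflows
        self.stale = False       # is the rest of the layout out of date?

    def keystroke(self, ch):
        self.line_repaints += 1  # redraw just the current line
        self.stale = True        # remember the full layout needs repair

    def idle(self):
        if self.stale:           # user paused: catch up with a full reflow
            self.full_repairs += 1
            self.stale = False
```

Ten keystrokes followed by a pause cost ten line repaints but only one full reflow; a second pause with nothing typed costs nothing.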
The thrust of direct manipulation is that response is instant, especially where hand-eye coordination is required. Whilst this at first appears reasonable in office systems, it is rarely appropriate in industrial control applications. In these, there is a definite separation between the computer interacting with the user and the plant being controlled and monitored. In particular, the response of the plant to the operator's actions may take a significant time. The operator therefore requires feedback at both levels: immediate feedback from the interface and feedback via instruments of the state of the plant - mediated interaction (Figure 3).
Figure 3. Industrial Control: Mediated Interaction
Of course, the thrust of this paper is that response from office systems is not instant, and where, as with the spell checker, the response cannot meet the demands of direct manipulation, a different paradigm of interaction is required. Indeed, if the spell checker were treated more like an industrial system, then several of the problems would disappear. The slow computation of alternative spellings can be regarded as the plant, and the user ought to be allowed to interact with the interface whilst the 'plant' is operating. This would mean that feedback would be immediate and would probably then be consistent. Furthermore, where the user's action does not depend on the results of the computation, it would be reasonable to 'control the plant' and abort the remainder of the computation. So,
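The abortable-'plant' style might be sketched as follows, with the slow search for alternative spellings running in a background thread and a cancel flag standing in for the 'ignore' button (the search itself is a toy stand-in, not Word's algorithm):

```python
import threading

def find_alternatives(word, cancelled):
    """Slow 'stage 2' search for alternative spellings (toy version).
    The cancel flag is checked between chunks of work, so an 'ignore'
    click can abort the now-unneeded computation promptly."""
    results = []
    for suffix in ('e', 'es', 'ed', 'ing'):   # stand-in search space
        if cancelled.is_set():
            return None                        # user pressed 'ignore': abandon
        results.append(word + suffix)
    return results

# The interface thread stays live while the 'plant' computes; an
# 'ignore' click simply sets the flag and the worker stops.
cancelled = threading.Event()
worker = threading.Thread(target=find_alternatives, args=("spel", cancelled))
worker.start()
cancelled.set()   # the user clicked 'ignore'
worker.join()
```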
Mediated interaction is also relevant for the Telnet example. There is clearly a local and a remote system, but standard Telnet connections leave all the interface to the remote machine. The long response times are partly due to network delays and partly due to overloading of network servers and gateways. Compare this with the World Wide Web, where the interface is on the user's local machine and the remote machine is only accessed for file transfers. This trade-off between local and remote processing has been tackled extensively over the years and is partly dependent on the assumptions which can be made about each side. For example, the Bell Labs' BLIT workstation could achieve acceptable performance over a 9600 baud telephone line because applications were split into a front-end interface to run on the BLIT and a back-end to run on the host machine (Pike 1984). In comparison, X assumes that the workstation/terminal is powerful enough only to generate the graphics itself. The case of Telnet is even harder due to the heterogeneity of machines on the Internet. However, as network interaction becomes the norm rather than the exception, it is an issue which must be faced.
The general lesson we are learning is flexibility: direct manipulation does break down, and, when it does, one should not ignore the fact but accept it and adopt an alternative. This might mean mediated interaction as above, or even an 'old fashioned' batch-oriented interface. On the UNIX systems I have used, there has been an interactive spell checker, but it is far too slow to be comfortable: one spends the majority of one's time waiting for the next word to be shown. Alternatively, one can use a simple 'spell' program which produces a list of badly spelt words. One can then go through this list and, using a standard text editor's search/replace facility, change the words one by one. Again, on a large document, one spends one's time waiting for the search to find the next word. As both were unsatisfactory, I adopted a third alternative which blended batch and interactive activity where each seemed appropriate. The initial checking was done (using a modified version of 'spell') in batch mode; this produced a list of all the potentially mis-spelt words, checked against system and local dictionaries. You were then able to interactively annotate this list with corrections, and with directives to add a word to a dictionary or look at the word in context. Finally, in a second batch activity it updated the file(s). The slow activities, where interactive response could not be guaranteed, were now allocated to batch phases during which the user could do something else.
The use of some batch processing is particularly relevant where the user is engaged in several simultaneous activities and gaps in one can be used effectively. However, variants of this can be used during more 'interactive' dialogues. Look again at the Word spell checker. Once you have commenced a spell check, it is likely that you will continue to check all, or a substantial part, of the document. Therefore, while the system is waiting for your response it might as well start to look for the next mis-spelt word and its alternatives. Then, when you have responded to the current word, the response time will be reduced. Note that this is also a form of mediated interaction, as the 'plant' continues to work whilst the user is interacting. The lesson is:
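This look-ahead idea might be sketched as follows, assuming a hypothetical `ask_user` callback for the user's decision: while the user considers one mis-spelt word, the scan for the next one runs in the background, so the visible gap between words shrinks.

```python
from concurrent.futures import ThreadPoolExecutor

def check_document(words, dictionary, ask_user):
    """Spell-check with look-ahead: while the user decides about one
    mis-spelt word, the search for the next runs in the background."""

    def next_error(start):       # the slow part: scan for the next error
        for i in range(start, len(words)):
            if words[i] not in dictionary:
                return i
        return None

    corrections = {}
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(next_error, 0)
        while True:
            i = future.result()                      # often already computed
            if i is None:
                break
            future = pool.submit(next_error, i + 1)  # prefetch during think time
            corrections[words[i]] = ask_user(words[i])
    return corrections
```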
One of the conclusions I reached back in 1987 was that the infrastructure made it very difficult to write programs with good timing behaviour. It was often difficult to tell whether there was buffered input, as the buffering could occur at so many levels. Also, it was generally impossible to tell when the screen had been updated and so to relate mouse and keyboard events to the display context in which they occurred. Whereas most window managers timestamped input, there was no similar event when, say, a window had been updated.
Happily, some things have improved: Xlib events now include the serial number of the last output request processed (Scheifler and Gettys 1992). Unfortunately, things haven't improved that much, and there is little evidence of people using this facility! Indeed, some things are worse. The standard text editor of OpenWindows has a find/replace facility. If you select 'find' from the menu and then start to type the string to be searched for, you find that only the last part of the string appears in the 'find' dialogue box. The rest is inserted into your text! This is because of the particular way in which keyboard focus is handled by the window manager. Similar effects are noticed when new applications are launched from the background menu. These take some time to appear, but you can continue to work while they launch. Unfortunately, they grab the keyboard when (eventually) they appear. This behaviour would be acceptable if the launch were instantaneous, but not where there is delay.
Some of the recommendations rely on multiple threads of activity for the interface and computation. To some extent this is encouraged by event-driven window managers. Indeed, where there is no pre-emptive scheduling, the application programmer is encouraged to periodically give control back to the window manager during long computations. This is to allow the window manager to service other applications. The application could similarly be prepared to service its own interface at such times.
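This cooperative style might be sketched as follows - a long computation that, every so many items, hands control back so that pending events (the window manager's or its own) can be serviced:

```python
def long_computation(items, process, service_events, chunk=100):
    """Process a long list of work items, but after every `chunk` items
    give control back so pending interface events can be serviced."""
    for i, item in enumerate(items, 1):
        process(item)             # one unit of the real work
        if i % chunk == 0:
            service_events()      # yield: let the interface catch up
```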
As we have seen, though, the event-driven paradigm may make programmers less likely to adopt intermittent or partial update strategies. When one explicitly read an event queue (or fetched a buffer-full of characters from a terminal), it was clear when there was a lot of input pending and hence when some modified behaviour was required. For an event-driven program, this information may be impossible to obtain; and even where it is accessible, the whole paradigm argues against its use.
In general, it may be very hard for a program to tell just how long certain operations may take: for example, whether a disk access will be serviced from a cache or a local disk, or whether it requires powering up the disk (on a portable) or accessing a remote fileserver. Operating systems often strive for transparency, not letting the user or the programmer know about such details. However, that transparency nearly always breaks down when one considers response times. Some years ago, Randy Trigg was demonstrating a new feature of NoteCards (Halasz et al. 1987). Suddenly it froze. The features were under development and so the system had been occasionally crashing into the debugger, but this time there was no error, just silence. Eventually, he launched a debugger to see what the matter was. A few minutes later - 'Aha!' - it was no bug at all. The particular feature had last been used in the US, and he was visiting EuroPARC in the UK. The feature wanted to print a message in a particular font. Unfortunately, the font was not available on that machine. However, nothing daunted, the machine looked to the rest of the machines on the LAN to see whether they had the font; it continued to search the network and would no doubt have eventually found the font back at Palo Alto. Transparency?
Notice how the myth of the infinitely fast machine, portrays a perfect, almost magic, world, where everything happens in the twinkling of an eye. The standard response to the imperfection (slowness) of the real world is to try and make it closer to perfection (make the code run faster), or make it perfect eventually (buffering). The alternative responses described above all have this in common: when you can't get it perfect - do something else! They accept and work with the imperfection of the world.
G. Abowd and A. Dix (1994). Integrating status and event phenomena in formal specifications of interactive systems. SIGSOFT'94, Ed. D. Wile. New Orleans, ACM Press. pp. 44-52.
A. Dix (1994). Que sera sera - The problem of the future perfect in open and cooperative systems. Proceedings of HCI'94: People and Computers IX, Eds. G. Cockton, S. W. Draper and G. R. S. Weir. Glasgow, Cambridge University Press. pp. 397-408.
A. J. Dix (1987). The myth of the infinitely fast machine. People and Computers III - Proceedings of HCI'87, Eds. D. Diaper and R. Winder. Cambridge University Press. pp. 215-228.
A. J. Dix (1990). Non-determinism as a paradigm for understanding the user interface. In Formal Methods in Human-Computer Interaction, Eds. H. W. Thimbleby and M. D. Harrison. Cambridge University Press. pp. 97-127.
A. J. Dix (1992). Pace and interaction. Proceedings of HCI'92: People and Computers VII, Eds. A. Monk, D. Diaper and M. Harrison. Cambridge University Press. pp. 193-207.
P. Gray, D. England and S. McGowan (1994). XUAN: Enhancing UAN to Capture Temporal Relationships among Actions. Proceedings of HCI'94: People and Computers IX, Eds. G. Cockton, S. W. Draper and G. R. S. Weir. Glasgow, Cambridge University Press. pp. 313-326.
F. Halasz, T. Moran and R. Trigg (1987). NoteCards in a nutshell. Proceedings of CHI+GI 1987. ACM, New York. pp. 45-52.
A. MacLean, K. Carter, L. Lovstrand and T. Moran (1990). User Tailorable Systems: Pressing the Issues with Buttons. Proceedings of CHI'90: Human Factors in Computing Systems, Eds. J. C. Chew and J. Whiteside. ACM Press. pp. 175-182.
MicroPro (1981). Wordstar. MicroPro International.
R. Pike (1984). The BLIT: a multiplexed graphics terminal.
R. Scheifler and J. Gettys (1992). X Window System, 3/E. Prentice Hall International.