The Active Web

part I

Alan Dix



Full reference:

Alan Dix (1998).
The Active Web - part I.
Interfaces, 38 pp. 18-21. Summer 1998.


The web is changing. Its pages are no longer static, but moving, changing, interacting. Even as we watch, it is evolving from an information repository into a distributed interface to a global networked computational engine. But this change has its price. Can users understand what they are seeing? How do you choose the right technology? Or do you just try a bit of everything? No, the right answer, as with any design problem, is to match the technology to the needs. Sometimes, for marketing or image reasons, this means the newest technology just because it is the newest and sexiest. But when functionality and usability matter it is likely to mean a blend of technologies and often a socio-technical solution, involving, dare we say it, people!

This is the first part of a two-part article. In this part we'll look at the issues affecting the choice of an appropriate web technology and at the use of basic animation and media. In the next issue of Interfaces we'll examine several scenarios for adding interactive elements to web pages and for generating and updating web pages from databases.

keywords: world-wide web, interaction, design, HCI, CSCW, Java, JavaScript, JDBC, CGI, servlets

Why I'm writing this

I'm writing this article because I've had to tell this story more times than I care to count. Again and again undergraduate and postgraduate students have come to me wanting to do a project using the web. What technology should they be looking at?

Of course, there is so much to choose from:

The essence of design is to choose techniques appropriate for the material and purpose. Although this may mean the latest or most exciting technology, often it will not. Most importantly, most students (and, looking at systems around us, many interface designers) look only for a technical solution. The real art is in designing the whole socio-technical system. This has the added bonus that the technical part is easier and more robust!

I've sent my students scurrying through every bookshop and library, searching for a good comparison of techniques. But to no avail. Books on CGI scripting tell you why it is a good idea, books on Java tell you about applets, books on Dynamic HTML tell you everything it can achieve, and of course the vendor-specific products will promise you the earth.

So, I'm writing my own short account to save me telling my students the same story again and again next year and also so that you can perhaps give it to yours.

The static web - should it stay still?

In the early days the web was simply a collection of (largely text) pages linked together. The material was static or slowly changing and much of it authored and updated by hand. Some pages were generated on the fly, in particular the gateways into ftp servers and to gophers, which were so important in adding 'free' content to the web (see my discussion of this in [10]). However, even here the user's model was still of a static repository of information. Web surfers may not have always known where they were, but they had a pretty good idea of what they were seeing and that if they came back it would be the same.

It was a pleasant, if somewhat boring, world, but from a usability viewpoint it was wonderful - a consistent interface to terabytes of information. Who could ask for more? Indeed, this is one of the key arguments Nielsen brings against frames-rich sites in his famous Alertbox column, Why frames suck (most of the time) [6] - frames break this simple user model and hence cause trouble. Nielsen calls for a new, richer model for the web, which preserves the simplicity of the old model, but which can accommodate and guide the development of new features.

Well, if frames cause trouble, what about applets, timed refreshing pages, roll-overs, dynamic content creation? What are we interacting with - is it information, is it computer systems? In fact, this was a problem with hypertext interfaces well before the web existed. Back in 1989, Janet Finlay (our editor) and I wrote about the potential problems of these shifts between passive and active paradigms within an interface [8]. Our solution was to accept these differences, but to make them evident to the user through the design of an effective medium of interaction. Of course, that's easy to say...

As HCI researchers and designers, we can neither ignore nor uncritically accept new technology on the web. The active web is here; our job is to understand it and to learn how to use it appropriately.

Let's look at the issues which affect the choice of active web techniques.

The user view

One set of issues is based on what the end-user sees, the end-user here being the web viewer.

We'll use these 'what changes?' categories as we examine alternatives and trade-offs in more detail below. But first we also need to look at the technological constraints.

Technology and security

The fundamental question here is where 'computation' is happening. If pages are changing, there must be some form of 'computation' of those changes. Where does it happen?

It is easy to roll out maxims such as 'users first', but, in reality, the choice between these options is not solely a matter of matching the end-user requirements. The best choice also depends on the expertise of the web developer and on external limitations. If the server runs on a UNIX machine, you can't expect to use Microsoft Active Server Pages. On the other hand, if you are designing for an intranet you may even get to influence the choice of client software, and so make it easier to use a more complex client end.

The choice of technological solution is also heavily influenced by issues of security. When we do any computation on data, the computation and the data must be in the same place [11]. This apparently simple fact means that if we want to keep data secure (read 'on well-protected servers') then we must also perform the critical computation on the servers. For example, imagine a password check. It would be foolish to send the correct password to a Java applet to check!
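To make the point concrete, here is a minimal sketch of a server-side check (in Python for brevity; the article's own technologies would be a CGI script or a Java servlet, and the function names here are purely illustrative). The server stores only a salted hash and performs the comparison itself, so neither the correct password nor its hash ever travels to the client.

```python
import hashlib
import hmac
import os

def make_record(password):
    """Server-side enrolment: store a random salt and a salted hash,
    never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(attempt, salt, digest):
    """Server-side check: the comparison happens where the secret lives.
    The client only ever submits an attempt; it receives a yes/no answer."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Shipping `digest` to an applet for a client-side comparison would expose it to anyone who cares to decompile the applet or watch the network.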

Animation and media

Simple animations, such as animated gifs and QuickTime movies, are now passé. Appropriate uses of them are rather rare. The majority are used as the on-screen equivalent of Las Vegas neon lights, not only threatening neurological damage, but sucking up network bandwidth and CPU cycles in the process. The worst offenders are perhaps cycling animated gifs, which cause reloads every cycle. Of course, long cycles with subtle changes are OK (but then I would say that, because I've used them myself in my Magisoft pages [14]). On the other hand, there are excellent uses of short video clips to add life and give explanations of dynamic phenomena. An example is the Glasgow University web site for the Hunterian Museum, which is aimed at children [13].

Hunterian Museum - learning about Romans

The need to download movies and gifs puts sharp limits on the length of clip that can be shown. Streaming media over the Internet, such as RealVideo, RealAudio and CuSeeMe, allow sources of potentially unlimited length. As well as longer prepared clips, these techniques allow live transmission (e.g. live radio broadcasts over RealAudio) and long recorded sequences for asynchronous communication. An excellent use of the latter is the Classroom 2000 project [1, 15], which links recordings of audio and video during a lecture with pen strokes on an electronic whiteboard, so that students can play the part of a lecture associated with any slide or annotation.

Classroom 2000 - recording a lecture

Classroom 2000 - indexed playback on student's web browser

Acceptable streaming video and audio is achieved by a combination of high compression and large client-end buffers. The former leads to loss of quality, including blurring and ghosting after rapid changes in screen content. The latter leads to delays, often of several seconds, which make it impossible to support video-conferencing (CuSeeMe uses little buffering and hence is more likely to suffer break-up of video and audio). The challenges of achieving high-quality transmission (e.g. for video on demand) and low latency (e.g. for video-conferencing) are active research topics in multimedia technology.

Stepping back a bit to look again at this, note that it is often not raw bandwidth that is the problem on the Internet, but packet loss and jitter (varying latency). These problems can be mitigated by trading quality off against delay, which is OK for fixed content or low-pace change (as in Classroom 2000), but is problematic when we require a high pace of interaction (as in video-conferencing).
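The buffering trade-off can be made concrete with a back-of-envelope calculation (a sketch, not any particular player's algorithm; the function name is illustrative). If packet i is due for playback at time i x period but arrives late because of jitter, the player must delay the start of playback by at least the worst-case lateness seen in the stream.

```python
def required_buffer_delay(arrivals, period):
    """Smallest startup delay (in seconds) that absorbs all the jitter in
    this arrival trace: packet i is due at i * period, so playback must
    start late enough to cover the worst-case lateness of any packet."""
    return max(t - i * period for i, t in enumerate(arrivals))

# A trace with roughly 50 ms of jitter on a 100 ms packet stream.
trace = [0.00, 0.12, 0.19, 0.35, 0.41]
```

With this trace, `required_buffer_delay(trace, 0.1)` comes to 0.05 s: packet 3 arrives 50 ms late, so the player must buffer at least that long before starting. For recorded content this is invisible; for a video-conference, every millisecond of buffer is added round-trip latency, which is why low-buffer tools like CuSeeMe break up instead.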

Magisoft Wand - watch it carefully

Are you interested?

Are you working on active web technology, either as a researcher or a practitioner? If so, you may be interested in a workshop on 'The Active Web' to be held in January 1999 (not February as originally announced in Interfaces). The call for papers will be on the web by the time this issue reaches you, so look at:

for up-to-date information ...


Web research and comment

  1. Abowd, G. D., C. G. Atkeson, J. Brotherton, T. Enqvist, P. Gulley and J. LeMon (1998). Investigating the capture, integration and access problem of ubiquitous computing in an educational setting. CHI'98 Conference Proceedings, Los Angeles, ACM Press. pp. 440-447.
  2. Bentley, R., W. Appelt, U. Busbach, E. Hinrichs, D. Kerr, K. Sikkel, J. Trevor and G. Woetzel (1997). Basic Support for Cooperative Work on the World Wide Web. International Journal of Human Computer Studies: Special issue on Novel Applications of the WWW, Spring 1997.
  3. Buckingham Shum, S. and C. McKnight (1997). Special Issue on World Wide Web Usability. International Journal of Human-Computer Studies, 47(1): 1-222.
  4. Busbach, U., D. Kerr and K. Sikkel, eds (1996). CSCW and the Web - Proceedings of the 5th ERCIM/W4G Workshop. Arbeitspapiere der GMD 984. Sankt Augustin, GMD.
  5. Clarke, D., A. Dix, D. Ramduny and D. Trepess, eds (1997). Collected Abstracts from a Workshop on Time and the Web. SOCTR/97/05, School of Computing, Staffordshire University. Also reported in SIGCHI Bulletin 30(1): 30-33 and Interfaces, Autumn 1997.
  6. Nielsen, J. (1996). Why Frames Suck (most of the time).
  7. Theng, Y. L. and H. Thimbleby (1998). Practical authoring tools for web authoring, Interfaces, 38, Summer 1998.

Some of my own related publications

  8. Dix, A. J. and J. E. Finlay (1989). AMO - the interface as medium. Poster sessions, HCI International '89, Boston. p. 22.
  9. Dix, A. J. (1992). Pace and interaction. Proceedings of HCI'92: People and Computers VII, Cambridge University Press. pp. 193-207.
  10. Dix, A. (1997). Challenges for Cooperative Work on the Web: An analytical approach. Computer-Supported Cooperative Work: The Journal of Collaborative Computing, 6. pp. 135-156. (also reprinted in Groupware and the World Wide Web, R. Bentley, U. Busbach, D. Kerr and K. Sikkel (eds), Kluwer 1997)
  11. Ramduny, D. and A. Dix (1997). Why, What, Where, When: Architectures for Co-operative work on the WWW. Proceedings of HCI'97, Bristol, UK, Springer. pp. 283-301.
  12. Dix, A. (1998). PopuNET - the Net, everywhere, everywhen.

Other URLs to visit

  13. The Hunterian Museum, Glasgow. (movies)
  14. Magisoft Wand (animated gif)
  15. Classroom 2000 project, GVU Georgia Tech. (RealAudio and Video)
  16. Basic Support for Cooperative Work, GMD, Germany. (web-based shared document space)

See also my Active Web glossary and links
Alan Dix 31/7/98