I'll try to split the papers into separate PDF files if I get time.

The 2nd Workshop was held in Edinburgh as part of Interact'99. Copies of the papers can be accessed at the bottom of that page, so don't be put off by the call for papers.

A third workshop is planned for early 2001, probably as part of HCI-IHM 2001 in Lille. Mail johnson@dcs.glasgow.ac.uk if you would like details when they're available.

Proceedings of the

First Workshop on Human Computer Interaction with Mobile Devices

Editor: Chris Johnson

GIST Technical Report G98-1.
21-23rd May 1998.

Department of Computing Science,
University of Glasgow.

Rebuilding the Babel Tower.

Chris Johnson

Usability and Mobility: Interactions on the move.

Peter Johnson

Exploiting Context in HCI Design for Mobile Systems

Tom Rodden, Keith Cheverst, Nigel Davies and Alan Dix

Ubiquitous Input for Wearable Computing: Qwerty Keyboard without A Board

Mikael Goldstein, Robert Book, Gunilla Alsio and Silvia Tessa

Using Non-Speech Sounds in Mobile Computing Devices

Stephen Brewster, Grégory Leplâtre and Murray Crease

Design Lifecycles and Wearable Computers for Users with Disabilities

Helen Petrie, Stephen Furner and Thomas Strothotte

Developing Scenarios for Mobile CSCW

Steinar Kristoffersen, Jo Herstad, Fredrik Ljungberg, Frode Løbers, Jan R. Sandbakken and Kari Thoresen

Human-Computer-Giraffe Interaction: HCI in the Field

Jason Pascoe, Nick Ryan and David Morse

Some Lessons for Location-Aware Applications

Peter J. Brown

Developing a Context Sensitive Tourist Guide

Nigel Davies, Keith Mitchell, Keith Cheverst and Gordon Blair

On the Importance of Translucence for Mobile Computing

Maria R. Ebling and M. Satyanarayanan

Developing Interfaces For Collaborative Mobile Systems

Keith Cheverst, Nigel Davies and Adrian Friday

Wireless Markup Language as a Framework for Interaction with Mobile Computing and Communication Devices

Jo Herstad, Do Van Thanh and Steinar Kristoffersen

Giving Users the Choice between a Picture and a Thousand Words

Malcolm McIlhagga, Ann Light and Ian Wakeman

User Needs for Mobile Communication Devices

Kaisa Väänänen-Vainio-Mattila and Satu Ruuska


Rebuilding the Babel Tower.


Chris Johnson.


Department of Computing Science, University of Glasgow, Glasgow, Scotland, G12 8QQ.



This meeting stands at the intersection of two streams of technological development. The first stems from improvements in portable computing devices, ranging from lap tops to 'ubiquitous devices' with embedded processing power. The second strand of technological development originates in the growth of mobile telecommunications through cellular and satellite infrastructures. An increasing number of devices are being developed to exploit techniques from both areas and this workshop is really about the design challenges that are created by such an integration.

These challenges are partly technological. It is unclear what mechanisms will be needed to support user tasks with future generations of 'integrated' mobile devices. A number of papers in this collection address these basic infrastructure questions. Herstad, Van Thanh and Kristoffersen present a number of innovative programming and application development techniques for the development of mobile systems. Cheverst, Davies and Friday analyse the fundamental characteristics of quality of service in heterogeneous networks. McIlhagga, Light and Wakeman argue that designers must consider the usability costs, as well as the benefits, of allowing applications to adapt to different levels of connectivity. Ebling and Satyanarayanan explore the wide trade-offs that exist between communications connectivity and power consumption, between cost and performance, and between translucent and opaque user interfaces.

These architectures will only be successfully exploited if designers have a clear idea of the requirements that mobile systems must satisfy. Existing requirements analysis techniques provide limited support for the diverse user groups that form and interact in a dynamic and ad hoc manner over mobile networks. Petrie, Furner and Strothotte's paper describes the changes that must be made to the conventional development cycle for such applications. There are further challenges. Research in Human Computer Interaction has recently begun to acknowledge the importance of the users' context and environment when designing interactive systems. The challenge of mobile systems is that this environment may be continually changing. A number of further papers address the problems that this poses for requirements analysis. Väänänen-Vainio-Mattila and Ruuska demonstrate that social enquiry methods from ethnography can be used to identify significant user concepts and associations during the operation of mobile devices. Several other papers, such as that by Pascoe, Ryan and Morse, argue that the same contextual approaches that are necessary in requirements analysis must also be used during the evaluation of mobile systems. User interfaces that work well in a laboratory setting may not work so well on the plains of Africa or thirty feet up an electricity pylon. This is a continuing theme behind Peter Johnson's position paper and the design techniques advocated by Rodden, Cheverst, Davies and Dix. Of particular concern is the manner in which location-aware computing devices integrate with existing tasks; the papers by Brown and by Davies, Mitchell, Cheverst and Blair argue that this is critical if innovative mobile applications are to move from the laboratory to the 'real world'.

Even if it is possible to identify user requirements for mobile computing devices, it is far from clear whether we have appropriate devices to satisfy those needs. Goldstein, Book, Alsio and Tessa focus on the use of virtual keyboards to avoid the problems of data entry on mobile devices. Brewster, Leplâtre and Crease argue that the limited display resources of existing systems must be augmented with more diverse and meaningful auditory cues.

This brief review has only touched on the many diverse problems facing the designers of human computer interfaces to mobile devices. We have not, however, mentioned the most significant challenge: there is little or no dialogue being conducted between the diverse groups that are working on user interfaces to mobile devices. There are dozens of commercial and academic research groups in this area. My experience in organising this event is that most of them are completely unaware of each other's existence.


Many people have contributed to the organisation of this workshop. In particular, thanks are due to my friends and colleagues in the Glasgow Interactive Systems Group (GIST). Steve Brewster, Mark Dunlop and Phil Gray inspired the event and contributed to the detailed planning. Colin Burns, Daniela Busse and Meurig Sage helped to shoulder some of the legwork involved in preparing badges etc.


Chris Johnson, Glasgow, 20th May 1998.



Usability and Mobility: Interactions on the move.


Peter Johnson.


Department of Computer Science, Queen Mary and Westfield College, University of London. London E1 4NS.



Developments in wireless communication and distributed systems, together with increases in the power and interactive capabilities of hand-held and portable devices, give us the possibility of wide-ranging and continual access to computing resources in a variety of contexts. These technological changes make increasing demands on the quality of the user interface and offer the potential to further extend the functionality of computing devices. This makes human-computer interaction all the more central to the design and development of mobile systems. The case remains that functionality does not exist for the user if that functionality is not usable.

This paper considers aspects of mobile systems from an HCI perspective and, in doing so, reflects upon how well-equipped HCI is to support the design and development of mobile systems. Four areas of concern for HCI are raised and briefly discussed, with example scenarios of mobile system development used to illustrate how these design situations present HCI researchers and practitioners with new challenges.

The Scenarios

Three scenarios of usage are presented here to illustrate the novel and challenging aspects of the design problems. The scenarios are taken from actual situations in which mobile computing is being used, or being considered for use, in experimental forms; they are not, however, derived from real-life situations that have been analysed as part of this research. The three scenarios are: the memory aid; the roadside accident; the home patient.

The memory aid scenario.

We all forget things, and at times we forget quite important things. Some people who have had brain damage through old age, illness or accident experience memory problems. Work at Addenbrooke's Hospital and the MRC APU in Cambridge by Wilson and her colleagues (Wilson, 1997) has focused on providing such people with a portable computer that acts as a memory aid. The people concerned have severe memory problems, such that they would forget to feed the cat, to buy food, or to take medication.

The scenario is one in which an ageing person with mild memory loss is given a device, no larger than a telephone pager, to wear, for example, on their belt. The device has been programmed by a relative (say their daughter) to give them a "bleep" and a message at relevant points of time, location, or task situation throughout the day. This is intended to remind the patient to carry out tasks such as taking their medicine (and which medicine in what quantity), going to the shops, buying the food for their dinner, making a shopping list, and taking the shopping list and their money with them.

Designing such a device raises many HCI design questions. How would the different users interact with it, how would the reminders appear, what would happen if the reminder was forgotten, or the task already carried out? Does the system also allow the shopping list to be entered in and used in the shop? How conspicuous should the device itself be, would people be willing to carry or wear it, or would it be a source of stigma? How would the daughter interact with the device, or know that the parent had actually taken the medicine etc.? These are just some of the questions that arise in this situation.
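Purely as an illustration of the kind of trigger logic such a reminder device implies, a minimal sketch follows. The data structure, trigger fields and messages are invented for this example; they are not part of Wilson's work.

```python
from dataclasses import dataclass
from datetime import time
from typing import Optional

@dataclass
class Reminder:
    message: str                    # e.g. "Take morning medication"
    at_time: Optional[time] = None  # fire at a time of day...
    at_place: Optional[str] = None  # ...or on arriving somewhere
    done: bool = False              # set once the task is confirmed

def due_reminders(reminders, now, place):
    """Return the reminders triggered by the current time or location."""
    due = []
    for r in reminders:
        if r.done:
            continue  # task already carried out; suppress the bleep
        if (r.at_time is not None and now >= r.at_time) or r.at_place == place:
            due.append(r)
    return due

reminders = [
    Reminder("Take morning medication", at_time=time(8, 0)),
    Reminder("Buy food for dinner", at_place="shop"),
]
# At 9am, at home: only the medication reminder fires.
print([r.message for r in due_reminders(reminders, time(9, 0), "home")])
```

Even this toy version surfaces the design questions above: the `done` flag only works if someone (patient or daughter) confirms the task, and a forgotten confirmation leaves the reminder firing indefinitely.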

The roadside accident scenario.

Portable devices of various sorts are put to important use in emergency situations. In particular, in medical settings, ambulance crews (both land- and air-based) carry life-saving devices to the scene of an emergency. One useful application of mobile computing systems in such settings is transmitting patient data to a medical centre and providing increased communication between the ambulance crew and the clinical staff in the centre. For example, in the case of head injuries, a mobile device might transmit images of the results of various tests and body-state parameters, together with information about the patient's level of consciousness (e.g. the quality of their speech and comprehension, the state of their pupils) and their respiratory and heart rates, to the consultant neurosurgeon in the medical centre. In addition, the consultant might have audio and video contact with the ambulance crew to advise and assist in the care of the patient and in sending the patient to an appropriate specialist hospital.

The context of use of this equipment means that the design would have to take account of the system being used out in the open, in all weathers, at all times of day or night. The quality and speed of the image transmission would need to be reliable, secure and of a high enough standard to meet the requirements of the NHS, as would the quality of the interaction between the consultant and the ambulance crew. As well as considering the environmental conditions of use, attention would also have to be given to the conditions of the users. The consultant and the ambulance crew would be in very different contexts and conditions from each other, and the device must not add to the stress and cognitive load placed on the various members of this distributed team. These are just some of the challenging HCI problems to be addressed in designing such a mobile system for roadside accident patient care.

The patient in the home scenario.

The third scenario is again a clinical one. The situation is one in which patient data is monitored and transmitted to the medical centre on a regular basis during the patient's normal everyday life. Johnson (1997) has studied data collected in the home and in the clinical centre from patients such as pregnant women, babies and chronic asthmatics. His findings show a lack of correspondence between the two sets of data. From this he suggests that data collected in a clinical setting may be less reliable than data collected on a regular basis in the patient's everyday life. He further suggests that mobile data collection devices worn by the patient could be used to collect and transmit data to a clinical centre.

The scenario involves the patient putting the device on, checking that it is working properly, and then being able to ignore it until they remove it. In the case of some patients (such as chronic asthmatics) the device might be worn during the night as well as during the daytime. The device might also allow the patient to see their own data, so that they can check their blood pressure or respiratory levels over various time periods, with annotations added by the patient describing the context and situation of activity alongside the data recording. Furthermore, the transmitted data would be monitored by the medical centre, and in cases where the received data suggested a cause for concern, the clinicians would immediately be put in touch with the patient and, if necessary, emergency treatment instigated. The device worn by the patient would need to be easy to fit, easy to wear, and not appear to carry any stigma. In addition, it would need to be easy for the patient and the clinician (separately) to interact with, both in testing that it is working correctly and in annotating and monitoring the data collected.
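The monitoring loop implied here, in which transmitted readings are checked for cause for concern, can be sketched as a simple range check. The field names and thresholds below are invented for illustration and are in no way clinical guidance.

```python
# Hypothetical "normal" ranges for transmitted readings; the names
# and limits are illustrative only, not clinical values.
NORMAL_RANGES = {
    "heart_rate": (50, 110),   # beats per minute
    "peak_flow": (350, 700),   # litres/minute, relevant to asthmatics
}

def flag_reading(reading):
    """Return the names of measurements outside their normal range."""
    concerns = []
    for name, value in reading.items():
        low, high = NORMAL_RANGES.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            concerns.append(name)
    return concerns

reading = {"heart_rate": 128, "peak_flow": 420}
print(flag_reading(reading))  # only heart_rate is out of range
```

In a real system the interesting HCI questions start where this sketch stops: who sees the flag, how the patient's own annotations qualify it, and how the clinician is put in touch with the patient.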

The problems of Usability and Mobility.

There are at least four problems to be faced in addressing the HCI of mobile systems. These concerns are:

(i) the demands of designing for mobile users, their tasks and contexts;

(ii) accommodating the diversity and integration of devices, network services and applications;

(iii) the current inadequacy of HCI models to address the varied demands of mobile systems;

(iv) the demands of evaluating mobile systems.

Generally speaking, HCI has developed a good understanding of how to design and evaluate forms of human computer interaction in "fixed" contexts of use, in a single domain, with users always using the same computer to undertake tasks alone or in collaboration with others. This is not the situation of use for mobile computing. Now that the computer is moving from the workplace and occasional home use onto the streets and into our everyday lives, it will become as commonplace as wearing a watch is for many of us. Will it be the same "watch" we always wear for all occasions, and will we know how to "tell the time" on it in all its different guises and contexts of use? Clearly, this is a limited analogy, since a watch will normally only tell the time and date and ring an alarm.

This short paper touches briefly upon each of the four concerns above in turn, before drawing some general conclusions for HCI in the area of mobile computing.

Mobile systems design issues with users, tasks and contexts

With the promise of a technology that will give access to computing resources to a wider range of users, carrying out more complex and multiple tasks, in a wider range of situations, our existing "craft" knowledge of design will be found as wanting as our more formalised HCI knowledge. The classes of users, tasks and contexts of use will be novel to the general HCI community and outside the scope of experience of many current designers. Design problems escalate the more the context of usage has to be considered, and the more variable and unpredictable that context becomes.

Within HCI, design methods and support for designers have largely considered the design of the artefact itself; while some attempts have been made to consider the tasks, the users, and the contexts and situations of use, these have been less extensive.

To contribute to the design of mobile systems we need to understand what the design problems of mobile systems are. This may sound circular or tautological, but it is not. There are extensive psychological, sociological, organisational and environmental phenomena to be studied when we start to investigate the "worlds" in which mobile computing might take place. However, whether or not these phenomena have any relevance to system design has to be considered, and if they do, how that relevance can be used to inform and contribute to the quality of the design. For example, much current interest has been given to "distributed cognition" and to "activity theory" as possible approaches to understanding people, groups and activity in a social and organisational context. However, the problem for design is not to understand or explain that behaviour, structure or society, but to design systems that work within it and improve upon it. The designer's problem is a different one from that of the psychologist, sociologist or organisational theorist.

Diversity and Integration.

In a recent address to the ACM Intelligent User Interfaces 98 conference in January, Dan Olsen projected a glimpse of the challenges facing the computing and HCI communities (Olsen, 1998). The title of Olsen's address was "Interacting in Chaos" and his thesis is as follows. Today it is not uncommon to find a single person interacting with many different computers. For example, you might have a PC or an Apple Macintosh in your office, a further PC or Apple Macintosh at home, and a portable laptop computer that you take with you on business trips or to conferences. You might also carry a mobile phone or pager (or both), and a Pilot or electronic notebook/diary, along with all the different devices you might find in your car, on the train or in public places. In addition to these devices there are the various printers, computing servers, scanners and other useful devices to which you could be connected, together with the various electronic mail, internet, news, groupware and other network services and applications that you might connect to and use.

The "chaos" comes about because each of these computers, devices and network services has only a limited ability to communicate with the others, and they often have widely different formats of data, processing and interaction. For instance, when you read your email on your Pilot you do not see it in the same format as when you read it on your Apple Macintosh, and when you enter a phone number into your mobile phone you are not also able to add it to your electronic address book; hence the "chaos". Olsen's analysis goes even deeper than this, for he points out that the problem is not that there is a proliferation of devices, services and software, but that we do not have the ability to model the properties and variability of these in such a way that we can begin to solve the problem. He claims that our existing forms of modelling software, systems and interaction do not adequately address problems of diversity, inconsistency, accessibility (or the lack thereof), replication and integration. From an HCI perspective, we must be prepared to develop new ways of modelling, designing and evaluating the usability of mobile interactive systems.


Can we develop useful HCI models that will contribute to the design and development of mobile systems? In his excellent address to the EPSRC MNA workshop, Nigel Davies (Davies, 1997) described the work at Lancaster University on the mobile GUIDE system. This system makes use of an available land-based network around the city of Lancaster, together with wireless communication from nodes on the land-based network, to provide the user with a location-sensitive guide to the city. In a prototype version, the user has a flat-panel, pen-based device that receives and sends signals to the local network node and downloads "packets" of information relevant to the specific geographical location they are in. Quite apart from the fact that an early version of the prototype device would not work in the wet, there are some interesting HCI issues here.
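At its simplest, the behaviour described for GUIDE, delivering the information "packet" for whichever network cell the visitor is currently in, amounts to a lookup keyed on location. The cell names and content below are invented for illustration; they are not taken from the GUIDE system itself.

```python
# Invented cells and content, sketching GUIDE-style delivery of
# location-relevant information packets.
CITY_GUIDE = {
    "castle": "Lancaster Castle: open 10am-5pm; guided tours hourly.",
    "quay": "St George's Quay: maritime museum and riverside walk.",
}

def packet_for(cell):
    """Return the information packet for the current network cell, if any."""
    return CITY_GUIDE.get(cell, "No information for this area yet.")

print(packet_for("castle"))
```

The lookup itself is trivial; the HCI problem is everything around it, such as what the user sees while a packet is downloading, and what happens when they step out of coverage mid-tour.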

From an HCI modelling perspective we are well-equipped to model cognitive aspects of users (e.g. Barnard & May, 1997), their tasks (e.g. Johnson, Johnson & Hamilton, 1997), the domain (e.g. Lim & Long, 1995) and to model aspects of collaboration and group working. In addition, we can model aspects of an interface and in some cases generate standard forms of interfaces from abstract models (e.g. Wilson & Johnson, 1996).

However, can we model users adequately such that we could, for example, say how using a mobile guide would interfere with their ability to notice a speeding car as they stepped out onto a street, or how much attention they would pay to the guide while trying to deal with a child pleading for another ice-cream? The point here is that the complexity of the activities becomes large because there are so many things going on at the same time, and this makes it increasingly difficult to model the interactions between those activities. Similarly, from a task perspective, could we model the interaction between the various tasks of using the guide, touring the city, minding the child and crossing a busy road, in such a way that we could understand how they influenced each other? From the perspective of domain modelling, what is the domain here? It is an interaction between several domains, including road safety, tourism and child minding. From a collaboration and co-operative working perspective there is also a problem, because the nature of the activities is often such that they, and the people involved in them, are conflicting rather than co-operating or collaborating. From an interface modelling perspective it would be difficult to model the various interactions the user would be having with the guide. Where model-based design has been used in user interface design, it is clear that the ability to generate interfaces is limited to standard forms of interaction with well-defined interaction styles. The types of interaction used in a mobile tourist guide could require (at present) less common forms of input (e.g. pen and speech) and output (e.g. text, graphics, sounds, speech, pictures, movies), as well as unconventional forms of interaction, with some variability in the form and quality of interaction available at any given time. Current model-based user interface techniques would not provide for this variability.

Evaluation of interaction and usability in mobile contexts

Regarding evaluation, we have a number of techniques to choose from, such as empirical testing, discount usability methods, and cognitive and task analytic methods. Evaluation would clearly be possible, but the criteria and the methods used would need to be researched. Discount usability methods would not adequately assess the usability of a mobile tourist guide (or any other mobile system), since they ignore the context of use. Similarly, the conventional usability laboratory would not be able to adequately simulate such important aspects as the weather, and could not easily provide the wide range of competing activities and demands on users that might arise in a natural setting. Data collection methods such as video recording or observation in natural settings would be extremely difficult to carry out in anything but an unnatural manner when a mobile computer system is the subject of the evaluation. Consequently, forms of data and data collection methods would be needed that lie outside the common range of usability studies.


Mobile systems in the form of portable telephones, pagers, notebooks and laptop computers are commonplace, but at present they are poorly integrated and represent only a small proportion of the range of different types of mobile systems that we are likely to see. The development of mobile systems to be used in everyday life will place demands upon the HCI community. It is easy to see that in the design of video recorders HCI has had no impact at all: they are just as unusable as they ever were, and they are sold on the basis of increased functionality which most users never get to use because the interface is so bad. In some of the contexts in which mobile systems could be of use to us, the quality of the interface will matter. In some of the examples described above the interfaces may save or cost lives; in others, such as the tourist guide, the device will simply not be used if the interface is poorly designed. HCI methods, models and techniques will need to be reconsidered if they are to address the concerns of interaction on the move.


Barnard, P.J., & May, J. (1997) Cognitive Task Modelling. NATO workshop on Cognitive Task Analysis, Washington, November 1997.

Davies, N. (1997) Invited presentation to the EPSRC workshop on Multimedia Network Applications, Warwick, November 1997.

Lim, K., & Long, J.B. (1995) The MUSE methodology for usability software engineering. CUP, Cambridge, UK.

Johnson, Paul (1997) Invited contribution to the EPSRC workshop on Healthcare Informatics, Abingdon.

Johnson, P., Johnson, H., & Hamilton, F. (1997) Task Knowledge Structures. Paper presented to the NATO workshop on Cognitive Task Analysis, Washington, November 1997.

Olsen, D. (1998) Interacting in Chaos. In Proceedings of the ACM 2nd International Conference on Intelligent User Interfaces, San Francisco, January 1998. ACM Press.

Wilson, B. (1997) Memory aids. Invited presentation to the EPSRC workshop on Healthcare Informatics, Abingdon, December 1997.

Wilson, S. M., & Johnson, P. (1996) In Vanderdonckt, J. (ed.) Computer aided user interface design. University of Namur Press.

Exploiting Context in HCI Design for Mobile Systems


Tom Rodden, Keith Cheverst, Nigel Davies,

Department of Computing, Lancaster University, Lancaster, LA1 4YR.



Alan Dix,

School of Computing, Staffordshire University, Stafford, ST18 0DG.


The last five years have seen a shift in the nature of mobile computers. The development of increasingly powerful laptop computer systems has been mirrored by the production of a range of small computational devices. The increased prominence of these devices has outlined a number of distinct research challenges. These challenges have tended to focus on extending the utility of these devices using new forms of interaction, techniques to overcome display limitations, or improvements in the general ergonomics of the devices. The merging of these devices with existing telecommunication services, and the production of devices that offer connections to other systems, presents yet another set of research challenges in terms of the development of cooperative multi-user applications.

The authors are engaged in a number of projects investigating various aspects of mobile systems development. In particular, an MNA funded project "Interfaces And Infrastructure For Mobile Multimedia Applications" is looking at the way in which the special user interface requirements of cooperative mobile systems can be used directly to drive the development of an effective system architecture, user interface toolkit and underlying communications infrastructure.

In various ways mobile systems break assumptions that are implicit in the design of fixed-location computer applications leading to new design challenges and feeding back to a better understanding of the richness of human–computer interaction.

One central aspect of our work is the temporal issues that arise due to network delays and intermittent network availability. We have already addressed this in some detail based on previous theoretical work on pace of interaction and practical experience in building collaborative mobile applications [Dix, 1992; Davies 1994; Dix, 1995]. In addition, there has been considerable wider interest in temporal issues, both in the context of mobile systems and also more generally [Johnson, 1996; Johnson, 1997; BCSHCI, 1997; Howard and Fabre, 1998].

However, this paper considers a second critical issue in the design and development of cooperative mobile systems: the context-sensitive nature of mobile devices. The importance of this is clear in recent research in ubiquitous computing, wearable computers and augmented reality [Weiser, 1991, 1994; Aliaga, 1997]. Furthermore, more prosaic developments such as mobile phones, GPS and embedded in-car automation all point to a more mobile and embedded future for computation. The development of applications which exploit the potential offered by this technology brings together issues from distributed systems, HCI and CSCW. However, designers of these systems currently have few principles to guide their work. In this paper we explore the development of a framework that articulates the design space for this class of system and, in doing so, points to future principles for the development of these systems.

Fixed-location computers are clearly used for a variety of tasks and are set within a rich social and organisational context. However, this is at best realised within individual applications, and the nature of the device as a whole is fixed and acontextual. In contrast, the very nature of mobile devices sets them within a multi-faceted contextual matrix, bound into the physical nature of the application domain and closely meshed with existing work settings. In this paper we seek to articulate the nature of this matrix and how it may be used as a resource for designers and developers.

Making use of the context of a device is important for two reasons. Firstly, it may allow us to produce new applications based on the special nature of the context, for example interactive guide maps. Secondly, and equally importantly, it can help us tailor standard applications for mobile devices: for example, when a sales rep visits a company, the spreadsheet can have a default files menu which includes the recent ordering history for that company. Such tailoring is not just an added extra; limited screen displays mean that highly adaptive, contextual interfaces become necessary for acceptable interaction.
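The sales-rep example above, a files menu whose defaults depend on where the device currently is, can be sketched as context-driven tailoring. The file names and companies below are invented for illustration.

```python
# Invented data: recently used files, tagged by the company they concern.
RECENT_FILES = [
    {"name": "acme_orders_q1.xls", "company": "Acme"},
    {"name": "acme_quote.xls", "company": "Acme"},
    {"name": "globex_orders.xls", "company": "Globex"},
]

def files_menu(current_company):
    """Default files menu adapted to the device's current context.

    Show the files relevant to the company being visited first;
    fall back to the full list when nothing matches.
    """
    local = [f["name"] for f in RECENT_FILES if f["company"] == current_company]
    return local or [f["name"] for f in RECENT_FILES]

print(files_menu("Acme"))
```

On a limited screen this kind of filtering is the difference between a two-item menu and one the user has to scroll through, which is the sense in which adaptive, contextual interfaces become necessary rather than optional.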

Moving from the device to the context of use

A considerable amount of research surrounding the development of mobile devices has obviously focused on the portable nature of these devices and the technical problems of realising them. Mobile computing devices represent real technical challenges and have always stretched the state of the art in terms of displays and interaction devices. This focus on the development of appropriate forms of device is perhaps best exemplified by the development of so-called "wearable computers". These have seen the construction of new forms of interaction device that support a limited number of dedicated tasks, including support for mechanics, portable teaching aids and note-taking machines [Fickas 1997].

The development of dedicated-function devices is complemented by the emergence of a range of general-purpose devices normally characterised as Personal Digital Assistants. The majority of these devices focus on supporting some form of personal organisation by combining diary and note-taking facilities. They are characterised by their personal and individual nature, and any communication provided has focused on supporting access to on-line information such as email and the World Wide Web.

The emergence of mobile telecommunication standards such as GSM and the increased availability of these services has also led more recently to the development of a range of devices that provide mobile access to on-line services (e.g., the Nokia Communicator). This merging of computer and communication facilities allows the development of systems that provide immediate on-line access to information. These portable networked devices have also been combined with GPS technologies to develop a range of portable devices that are aware of their position [Long 1996].

The ability of the current generation of portable devices to have an awareness of their setting, and their increased ability to access network resources, means that we need to broaden our consideration of these devices to see their use in tandem with other portable devices. This view of portable devices means that we need to balance the current consideration of the interaction properties of individual devices with a broader consideration of the context of use. This move toward a consideration of the context of use builds upon previous trends in the development of portable devices, including the use of Tabs in developing mediaspaces at PARC and the associated emergence of the notion of ubiquitous computing [Weiser, 1991, 1993]. More recent work at MIT has also focused on the development of small-scale devices that exploit context to provide an ambient awareness of interaction [Ishii 1997].

Considering the context of Mobile Systems

Our particular focus is a consideration of applications that we term advanced mobile applications. Although research prototypes exist that demonstrate the technical possibilities, many of these have yet to emerge as fully-fledged applications. These applications are distributed in nature and characterised by peer-to-peer and group communications, use of multimedia data and support for collaborating users. Examples of such applications include mobile multimedia conferencing and collaborative applications to support the emergency services.

In considering the design and development of interfaces for mobile devices we wish to focus particularly on the situation where mobile devices behave differently and offer different interaction possibilities depending on the particular context in which the system is being used. For example, in the development of mobile multimedia guides such as the systems at Georgia Tech [Long 1996] and the Lancaster Guide [Davies 1998], the information presented to the user and the interaction possibilities offered are strongly linked to the location where the device is being used. Interaction is no longer solely a property of the device but rather is strongly dependent on the context in which the device is being used.

In this paper we wish to examine the nature of the context in which mobile devices are used and the implications for future HCI design. The aim of this focus on context is to allow the highly situated nature of the devices to be reflected in the design of interactive systems that exploit them. This focus on the situated nature of these devices reflects their growing acceptance and the need to allow them to mesh closely with existing practices. This need to focus on the context of use mirrors previous work on the development of interactive systems within CSCW [Hughes 1994].

In considering context as a starting point for the design of interaction, we need to unpack what we actually mean by the term context and how we may exploit it to determine different interaction possibilities within mobile systems. The following sections consider some of the ways in which context has played a key design role in the development of distributed mobile applications and the consequences suggested for the development of future applications.

Infrastructure Context

The interaction offered by advanced mobile applications is not solely dependent on the particular features of the mobile devices used. Rather, it is a product of the device and the supporting infrastructure used to realise the application. The impact of the properties of the supporting distribution infrastructure on different styles of interaction has been discussed in CSCW and HCI [Greenberg 1994]. In mobile systems the nature of the infrastructure is even more likely to change as the application is used, and the sort of service available may alter dramatically. This variability in the infrastructure may dramatically affect interaction, and it is essential that interaction styles and interfaces provide access to information reflecting the state of the infrastructure.

This issue is particularly acute in the case of safety critical applications, which must be rigorously engineered to ensure a high level of dependability. The dependability of these systems comes not only from the reliability of the communication infrastructure and devices but also from the users' awareness of the nature of the application. Provision of this awareness requires us to reconsider the traditional views of distribution transparency and abstraction, allowing the user access to the properties of the infrastructure so that different interaction possibilities can be inferred from this contextual information.

In essence, the user interfaces to mobile applications must be designed to cope with the level of uncertainty that is inevitably introduced into any system that uses wireless communications. For example, consider our experiences in the development of an advanced mobile application used to support collaborative access to safety critical information by a group of field engineers [Davies 1994]. If one of these engineers becomes disconnected from the group as a result of communications failure then it is vital that the remaining users' interfaces reflect this fact. This requires interaction between the application's user interface and the underlying communications infrastructure via which failures will be reported. In addition, if the information being manipulated is replicated by the underlying distributed systems platform the validity of each replica will clearly be important to the engineers. In this case the user interface will need to reflect information being obtained from the platform.
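The coupling between interface and infrastructure described here can be sketched as a simple listener arrangement: the communications layer reports disconnections, and each interface registers a callback so that a lost peer is surfaced rather than hidden by distribution transparency. The classes below are hypothetical illustrations, not the system from [Davies 1994].

```python
# Hypothetical sketch: the comms layer notifies registered interfaces
# of peer failures so the UI can reflect infrastructure state.

class CommsLayer:
    def __init__(self):
        self._listeners = []
        self.connected_peers = set()

    def add_listener(self, callback):
        self._listeners.append(callback)

    def peer_lost(self, peer):
        # Called when the wireless link to a peer fails.
        self.connected_peers.discard(peer)
        for notify in self._listeners:
            notify(peer)

class GroupInterface:
    """Stands in for one engineer's user interface."""
    def __init__(self, comms):
        self.status = {}
        comms.add_listener(self.on_peer_lost)

    def on_peer_lost(self, peer):
        # e.g. grey out the disconnected engineer's icon.
        self.status[peer] = "DISCONNECTED"
```

The same callback channel could carry replica-validity information from the distributed systems platform, so that stale data is also made visible at the interface.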

The design of these applications needs not only to reflect the semantics of the application and the features supported, but also to consider, as a key design element, the variability of the supporting infrastructure and how this variability is reflected to the user. Similarly, particular features of the infrastructure may need to be put in place and designed in line with the interaction needs of the mobile application.

Application Context

In addition to the infrastructure issues discussed above, distributed mobile applications need to consider the detailed semantics of the application. In the case of mobile applications the normal design considerations are amplified by the need to consider the limited interaction facilities of mobile devices. Beyond this, a number of further contextual issues need to be considered in the design of these applications.

Mobile devices are intended to be readily available and of use to the community of users being supported. As a consequence we need to consider the highly situated nature of this interaction. Developing a clear understanding of what people do in practice and their relationship with technology is essential to informing the development of these applications. The relationship between users and mobile technology is still unclear and few studies have taken place that consider the development of mobile cooperative applications [Davies 1994].

For example, we may choose to exploit the personal nature of these devices by associating mobile devices with users. This allows applications to be tailored so that they are sensitive to the identity of the user of the device. This information may be exploited along with additional contextual information (e.g. location) to present appropriate information. One example would be a particular doctor visiting patients within a hospital. At a particular bed, who the doctor is and their relationship to the patient in that bed may determine the information presented. Contrast this situation with the development of a museum guide, where the devices need to be considered as general purpose and no information is available about the relationship between users and the artefact being described.
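The hospital example can be sketched as a lookup that combines two pieces of context, the user's identity (here reduced to a role) and the device's location. All data, role names and the function itself are hypothetical illustrations.

```python
# Hypothetical sketch: select what a mobile device presents from the
# combination of user identity (role) and current location (bed).

PATIENT_RECORDS = {
    "bed-12": {"summary": "Stable, review Friday",
               "prescriptions": "Drug X, 10mg daily"},
}

def information_for(role, location):
    record = PATIENT_RECORDS.get(location)
    if record is None:
        return "No patient at this location"
    if role == "consultant":
        # Full record for the treating doctor.
        return record["summary"] + "; " + record["prescriptions"]
    if role == "nurse":
        return record["prescriptions"]
    # A device with no established relationship shows nothing.
    return "Access restricted"
```

A museum guide is the degenerate case of the same scheme: the role is always anonymous, so only location drives the selection.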

The design of advanced multimedia applications needs to explicitly identify the nature of the work being supported and the practicalities of this work. In doing so developers need to consider the relationship between the mobile devices and their users and how this can be used to determine the nature of the interfaces presented. This is particularly important if devices are to be used to identify users and potentially make information about their location and what they are doing available to others. In this case a consideration of the issues of privacy and the need for some symmetry of control is essential.

System Context

In addition to exploiting information about who will be using devices, interaction with mobile applications also needs to consider the system as a whole. The nature of these devices is that more advanced applications need to be distributed in nature. Thus, rather than having functionality reside solely within a single machine (or device), it is spread across the system as a whole. This means we need to consider the interaction properties of the system in terms of the distributed nature of the application. This is particularly true when we consider issues of pace and interaction [Dix, 1992]. Consider, for example, the development of appropriate caching strategies for field engineers who will only ever be examining or servicing units within a sub-region of a particular area.
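The caching strategy alluded to above can be sketched simply: rather than replicating the whole database onto the mobile device, pre-fetch only those unit records falling inside the engineer's assigned sub-region. The unit data and region representation are hypothetical.

```python
# Hypothetical sketch: region-scoped caching for a field engineer's
# device, holding only units inside the assigned sub-region.

UNIT_DATABASE = {
    "unit-a": (2, 3),   # unit id -> (x, y) grid position
    "unit-b": (8, 9),
    "unit-c": (1, 1),
}

def cache_for_region(x_range, y_range):
    """Return the subset of unit ids worth holding on the device."""
    (x0, x1), (y0, y1) = x_range, y_range
    return {uid for uid, (x, y) in UNIT_DATABASE.items()
            if x0 <= x <= x1 and y0 <= y <= y1}
```

The pay-off is one of pace: requests for units in the engineer's region are answered locally and immediately, while the rare out-of-region request pays the cost of the variable wireless link.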

The need for rapid feedback is an accepted premise of HCI design and many applications provide direct manipulation interfaces based on the ability to provide rapid feedback. The development of distributed applications has seen a reconsideration of the nature of feedback and of the importance of the technical infrastructure as a factor affecting it [Dix, 1995]. The variable nature of the Internet and its effects on World Wide Web interaction is perhaps the most readily identifiable manifestation of this effect [BCSHCI, 1997]. A natural design tension exists between replicated application architectures that maximise feedback and centralised applications that prioritise feedthrough across the application users [Ramduny and Dix, 1997]. The need to consider the overall functionality of the application and to design structures that provide appropriate access to different levels of functionality is amplified in the case of mobile applications, where the infrastructure may vary considerably while the application is in use.

Location Context

One of the unique aspects of mobile devices is that they can have an awareness of the location within which they are being used. This location information may be exploited in determining the form of interaction supported. This may be direct, in that the application explicitly exploits the nature of the setting, as in guides that tell you about your current location. It may also be less direct, as in systems that inform you of incidents depending on your particular location.

The degree to which the mobile application is coupled with the location of devices, and how this location is made available to users, is a key design decision in supporting different interaction styles. The device used and the form of interaction it supports is not the sole determinant of the form of interaction. Rather, it is a product of the location of the device and the location of other devices. This means that we need to consider the issues involved in the correspondence between these devices and their location. For example, if a guide describes a particular location and is dependent on references to that location to support the interaction, we must ensure that this contextual reference is maintained. It is essential that our approaches to design explicitly involve the issues of location and the link with these contextual cues.

Physical Context

Finally, mobile computer systems are likely to be aware of, or embedded into their physical surroundings. Often this is because they are embedded in an application specific device, for example in a mobile phone or car. In these situations the computer system is mobile by virtue of being part of a larger mobile artefact. This context can and does affect the application interface, for example, the telephone directory within a mobile phone can be very different from one in an independent PDA. Another example is a car radio (now often computer controlled) which has different design considerations to a static radio including the need to automatically retune as the car travels between local radio areas and transmitter zones. Because the computer systems are embedded into application specific devices they may also be aware of their environmental context, for example, the speed of the car. Some of this sensory information may be used simply to deliver information directly to the user, but others may be used to modify interface behaviour. For example, in a tourist guide, increasing text size in poor lighting conditions or, in a car system, limiting unimportant feedback during periods of rapid manoeuvring.
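The two adaptations mentioned at the end of this paragraph can be sketched as small policy functions driven by sensed physical context. The thresholds and units below are illustrative assumptions, not measured values from any real system.

```python
# Hypothetical sketch: sensed physical context modifying interface
# behaviour, as in the tourist-guide and car examples above.

def text_size(light_level, normal=12, enlarged=18, threshold=0.3):
    """light_level in [0, 1]; enlarge text in poor lighting."""
    return enlarged if light_level < threshold else normal

def should_display(message_priority, steering_rate, busy_threshold=0.5):
    """Suppress unimportant feedback during rapid manoeuvring."""
    if steering_rate > busy_threshold:
        return message_priority == "urgent"
    return True
```

The point of both functions is that the sensor reading never reaches the user directly; it silently reshapes the interface's behaviour.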

Each of these different contexts represents a different portion of the design space within which mobile systems must be placed, and the features of infrastructure, application, system and location all provide potential trade-offs that developers must address in realising mobile interactive systems. Currently, designers undertake this trade-off with little support or guidance, as little is known of the extent of the design space into which mobile applications are placed. In the following section we wish to consider a taxonomy of mobile computation that charts this design space, allowing developers to consider the properties of the mobile system under construction and how it may be related to other applications and systems.

Towards a taxonomy of mobile computation

Having considered some of the different ways in which context may affect or be used in mobile devices, we now want to build a classification of mobile and context-aware devices to better understand the design space. Clearly, as we are considering mobile systems, ideas of space and location are of paramount importance in our consideration of the context of these systems. We will therefore first examine different kinds of real and virtual location and different levels of mobility, including issues of control. However, any notion of location puts the device within an environment which both has attributes itself and may contain other devices and users with which the device may interact.

Figure 1. A device in its environment

Of real and virtual worlds

A lawnmower is a physical device; it inhabits the real world and can be used to affect the real world. Computers open up a different kind of existence in an electronic or virtual world. This is not just the realm of virtual reality: as we surf the web, use ftp to access remote files or even simply explore our own file system, we are in a sense inhabiting virtual space. Even the vocabulary we use reflects this: we 'visit', 'explore', 'go to', 'navigate' ... our web browsers even have a button to go 'back'. There has been a growing acceptance of the consideration of a virtual space and the development of electronic worlds and landscapes.

The emergence of virtual space

The turn to virtual worlds and spatial approaches generally has emerged from work in HCI and CSCW on the use of spatial metaphors and techniques to represent information and action in electronic systems. This work has its roots in the use of a rooms metaphor to allow the presentation of information [Henderson, 1985]. From these early spatial approaches we have seen concepts of spatial arrangement exploited in the development of desktop conferencing systems such as Cruiser [Root, 1988] and more generally in the work of Mediaspaces [Gaver, 1992].

The recent development of co-operative systems in CSCW has also seen a growing application of concepts drawn from spatial arrangements. These include the development of GroupKit to form TeamRooms [Roseman, 1996], the emergence of the Worlds system [Fitzpatrick, 1996] and the use of a notion of places to support infrastructure [Patterson, 1996]. This exploitation of virtual spaces is most notable in the development of shared social worlds existing solely within the machine [Benford, 1995]. However, the use of space and virtual spaces has not been confined to an existence solely within the computer, and a number of researchers have considered how space and location can be treated both virtually and physically within the development of applications. This is most evident in the augmenting of existing physical spaces to form digital spaces populated by electronically sensitive physical artefacts (or tangible bits) [Ishii, 1997] that are sensitive to their position within both physical and virtual space.

Combining the real and the virtual

The work on tangible bits undertaken by Ishii (1997) represents the start of a trend to interweave real and virtual spaces, exploiting a capability distinctively offered by mobile computer applications. We would suggest that this interplay between the real and the virtual is at the core of the design of co-operative mobile applications, as devices and users have a location and presence that is both virtual and physical, each of which is available to the computer application.

This interplay between the real and the virtual provides a starting point for the development of our taxonomy. A direct result of the need to recognise this coupling is that many of the categories we will consider for taxonomising and understanding mobile and context-aware computation have counterparts in both the real physical world and the virtual electronic world. There are important differences, however: the virtual world does not always behave in ways we have come to expect from the physical world, and these differences are often exploited by designers and developers.

In particular, even the object of interest for mobile computation may have a physical or virtual existence depending on the nature of the application. At one extreme we have simple hand held GPS systems that simply tell you where you are in physical space – perhaps these do not even rank as mobile computation. At the other extreme there are agents which simply have an existence within the virtual world, for example web crawlers or the components within CyberDesk[Wood, 1997]. Between these we have more complex physical devices such as the PDA which have both a real world existence and also serve as windows into virtual space (especially when combined with mobile communications).

In the development of the taxonomy presented here we will focus on physical mobile computational devices. However, we will also draw on examples of virtual agents where they are instructive in highlighting the co-existence of these two forms of space and the issues of mobility that may exist in both.


Mobility makes us think automatically about location, the way in which this sense of location can be understood in the system, and how changes in location can affect the system. Any simple mobile device will have a physical location both in space and time. Understanding the nature of this location and how the developers of interactive mobile applications may exploit it is important, and in this section we wish to consider what we might actually mean by the term location. This exploration is more than a mere issue of terminology: developing an understanding of what we actually mean by location represents a consideration of one of the core design concepts in the production of mobile systems.

Looking at the spatial dimension, there are some devices (for example GPS-based map systems) where the exact Cartesian position in 2D or 3D space is important in defining a sense of absolute physical location. For others a more topological idea of space is sufficient in understanding position, and in these cases location is considered not in an absolute sense but in relation to other objects or sensors. For example, the Lancaster GUIDE system is based on radio cells roughly corresponding to rooms and sections of Lancaster Castle, and the CyberGuide [Long, 1996] system at Georgia Tech shows visitors around the GVU laboratory by altering its behaviour depending on what item of equipment is closest.
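The contrast between the two senses of spatial location can be sketched as two data representations: a Cartesian position is a coordinate, while a topological position is a relation to named places or beacons, as in the cell-based guide systems just mentioned. The cell and room names below are hypothetical.

```python
# Hypothetical sketch: Cartesian versus topological location.

from dataclasses import dataclass

@dataclass
class CartesianLocation:
    """Absolute position, as a GPS-based map system would use."""
    latitude: float
    longitude: float

# Topological space: which named room does a sensed radio cell map to?
CELLS = {"keep": {"cell-1"}, "dungeon": {"cell-2", "cell-3"}}

def room_for(cell_id):
    """Resolve a sensed cell into a topological location (a room)."""
    for room, cells in CELLS.items():
        if cell_id in cells:
            return room
    return None
```

Note that the topological representation never needs coordinates at all: the guide's behaviour is driven purely by which cell, and hence which room, the device is currently in.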

This distinction between a sense of the absolute and relative in location can also be applied to time. We can consider a simple, linear, Cartesian time typified by a scheduler or alarm clock. However, we can also have applications where a more relative measure of time is used, for example, during a soccer match we may consider the action in the first half, second half and extra time but not care exactly whether the match was played at 3pm or 6pm. Similarly, in the record of a chess game, all that matters is the order of the moves, not how long they took. In fact, many calendar systems employ a hierarchical and relative model of time: hours within days, days within weeks. At first this might seem like a simple division of linear time, but such systems often disallow appointments spanning midnight, or multi-day meetings that cross into two weeks.

We can thus think of both space and time as falling into 'Cartesian' and topological categories and can consider location in both space and time in these terms. We may also consider location in both a physical and a virtual sense. If we consider ideas of virtual location, for example position within a hypertext, we see that we may similarly have ideas of time and space within the electronic domain. As an example of virtual time, consider looking up next week's appointments in a scheduler: the real time and the virtual time need not correspond. For those with busy schedules these seldom correspond, and the art of mapping from the real to the virtual is often a delicate balancing act worked out in practice.

This consideration of location provides us with the following categorisation of location:










[Table: location categorised along Cartesian and topological lines, in both physical and virtual space and time; surviving entries include "stop watch" and "history time line".]

Figure 2. Location in different kinds of space

Note that these are not mutually exclusive categories: an item in a room also has a precise longitude and latitude, and a device will exist at a precise moment in linear time but may be being used to view past events at that moment. Indeed, possibly one of the most interesting things is where these different ideas of location are linked to allow visualisation and control. For example, moving a display up and down in physical space could be used to change the virtual time in an archaeological visualisation system, and in an aircraft cockpit, setting the destination city (a topological destination) instructs the autopilot to take an appropriate course in Cartesian space/time. This interplay between the real and the virtual is central to the development of augmented reality spaces, where the movement of devices within a space may manifest in effects that are both real and virtual. These spaces only work because the location of the device can be controlled in virtual and physical space and its effects produce alterations to either the physical or the virtual space.


Our core concern in the development of our design taxonomy is the issue of mobility and its implications for how we understand human computer interaction. In the previous section we considered how the issue of location can be unpacked to provide an understanding in both a physical and a virtual sense, and how the nature of the space affects our consideration of location. In this section we wish to focus on how we might understand mobility and what potential design issues may emerge from a more detailed consideration of it.

Devices may be mobile for a number of reasons. They may be mobile because they are carried around by users (as with a PDA or a wearable computer), because they move themselves (robots!) or because they are embedded within some other moving object (a car computer). Furthermore, a number of different devices may be spread within our environment so that they become pervasive, as in the case of an active room such as the ambient room suggested by Ishii (1997). The issue of pervasiveness is itself a rather thorny one, in that it is not clear what constitutes a pervasive device and how this relates to previous discussions surrounding ubiquitous devices. Work on ubiquitous computing has focused on the backgrounding of the device and the computer essentially "disappearing" into the environment. For us the issue of pervasive devices has less to do with devices fading into the environment and more to do with an expectation that particular devices are normally available. For us pervasive computing is intimately bound up with the inter-relationship between different devices and the expectation that these devices can work in unison to provide some form of shared functionality. An active room is active because it contains a number of devices which, when they work in unison, provide some function. Essentially, we are seeing a number of computing devices working in co-operation to provide some functionality, and some of these devices may be mobile. However, often these devices are not. Consider, for example, the layout of base stations that provide the information displayed on mobile devices to allow a space to offer some form of pervasive computing facility.

We can disentangle the different levels of mobility into three dimensions which are used in Figure 3 to classify example mobile systems.

First we can consider the level of mobility within the environment:

• fixed – that is, the device is not mobile at all! (e.g. a base station fixed in a particular place)

• mobile – may be moved by others (e.g. a PDA or wearable computer that is carried around)

• autonomous – may move under its own control (e.g. a robot)

Second, we can consider the extent to which the device is related to other devices or its environment:

• free – the computational device is independent of other devices and its functionality is essentially self contained.

• embedded – the device is part of a larger device

• pervasive – the functionality provided by the device is essentially spread throughout the environment and results from the device's relation to other elements in the environment.

These separations do not consider the nature of the device and the sort of functions it may afford. The physical design of the device itself is an issue that needs to be considered carefully, in terms of existing traditions of aesthetic and practical design. The consideration of these features is beyond the scope of the framework and taxonomy we wish to present here.

As a final part of our taxonomy we can reflect the co-operative nature of advanced mobile applications by considering the extent to which the device is bound to a particular individual or group. We have three classes for this too:

• personal – the device is primarily focused on supporting one person

• group – the device supports members of a group such as a family

• public – the device is available to a wide group

We would not suggest that these categories are absolute, but rather offer them as broad categories of practical utility to designers. All the categories have grey cases, but perhaps this last dimension most of all. In particular we should really consider both the static and dynamic nature of how these categories are applied. For example, we could classify a computer laboratory as 'public', but of course, after logging in, each computer becomes personal. We will return to these dynamic aspects when we look at how devices can become aware of their users.

In fact, the 'group' category really covers two types of device. Some, like a liveboard, actually support a group working together. Others, like an active refrigerator (which allows messages to be left, email browsing etc.), may primarily support one person at a time but are available to all members of a family. In-car computer systems exhibit both sorts of 'groupness': they may perform functions for the benefit of the passengers of the car as well as the driver, and the exact mix of people from within the family (or others) within the car may vary from trip to trip.

Some of the examples in Figure 3 are clear, but some may need a little explanation. The 'Star Trek' reference is to the computer in Star Trek that responds to voice commands anywhere in the ship, but does not actually control the ship's movements. This is perhaps a wise move given the example of HAL in 2001! (Note that HAL is put in the group category as it has a small crew, but this is exactly one of the grey distinctions.) Our reference to 'shopping cart' refers to the development of supermarket trolleys that allow you to scan items as they are added and keep track of your purchases to enable a fast checkout. Often these require the insertion of a shopper identification, in which case they become dynamically personalised.







[Table: the taxonomy populated with example devices; surviving entries include office PC, computer lab., tour guides, factory robot, active fridge, wearable devices, car computer, shopping cart, auto pilot, mono rail, active room, Star Trek, web agent and web crawler.]

Figure 3. A Taxonomy of different levels of mobility

Notice that there are various blank cells in this taxonomy, reflecting our use of the taxonomy as a means of charting the design space for interactive mobile devices. Some of these blanks represent difficult cases where there may not be any sensible device. For example, a fixed–pervasive–personal device would have to be something like an active hermit's cell. In fact, the whole pervasive–personal category is problematic, and the items 'web agent' and 'web crawler' in the final row may be better regarded as virtual devices of the free–autonomous class.

Other gaps represent potential research opportunities. For example, what would constitute a free–mobile–group device? This would be a portable computational device that supports either different individuals from a group, or a group working together – possibly an electronic map that can be passed around and marked.

Most of the examples are of physical devices. Virtual devices may also be classified in a similar way: for example, Word macros are embedded–mobile (or even autonomous in the case of macro viruses!), as are Java applets. The only virtual devices in Figure 3 are the items 'web agent' and 'web crawler' in the final row. However, these are perhaps better regarded as virtual devices of the free–autonomous class. This ambiguity arises because any virtual device or agent must be stored and executed upon a physical computational device, and the attributes of the physical device and the virtual device may easily differ. For example, a PDA may contain a diary application. This is mobile by virtue of being stored within the PDA (a virtual device embedded within a physical device). However, if the PDA is used as a web browser it may execute a Java applet that is a form of virtual agent embedded within a web page (a virtual embedding in a mobile artefact). That is, we have an embedded–mobile–public virtual agent temporarily executing on a free–mobile–personal device! This dual presence in multiple contexts is both the difficulty and the power of virtual environments and one that requires significant research to resolve.

Populating an environment

Devices may need to be aware of aspects of their environment in addition to their location within it. These may vary because the device is moving from location to location (the headlamps on a car turning on automatically as the car goes into a tunnel) or because the environment is changing (a temperature monitor). In a sense, devices need to be aware that they populate an environment and need to reflect the coupling with the environment depicted in Figure 1.

This awareness may include both the physical environment (light, temperature, weather) and the electronic environment (network state, available memory, current operating system). A simple example of the latter is JavaScript web pages, which run different code depending on the browser they are running on.
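The same kind of branching can be sketched for any virtual device that samples its electronic environment. The following fragment is a minimal illustration (the mode names and checks are purely illustrative, not taken from any particular system):

```python
import platform
import sys

def choose_mode():
    """Adapt behaviour to the electronic environment, much as a
    JavaScript page branches on the browser it is running in."""
    if sys.maxsize <= 2**31 - 1:
        return "low-memory"        # constrained 32-bit host
    if platform.system() in ("Windows", "Linux", "Darwin"):
        return "full"              # a known desktop operating system
    return "generic"               # fall back to conservative behaviour

mode = choose_mode()
```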

Environments are normally populated with a range of different devices. Within the physical and virtual environment of a device there may be other computational devices, people (including the user(s) of the device) and passive objects such as furniture. These may be used to modify the behaviour of the device. For example, in CyberDesk 'ActOn' buttons are generated depending on what other applications are available and the types of input they can accept.

Figure 4 gives examples of items in the environment that may be relevant for a mobile or context-aware device taking a car computer and an active web page as running examples.





                 car computer               active web page

people:          current driver of car      visitor at web page
devices:         other cars                 running applets
objects:         roadside fence             other pages on the site

Figure 4. Examples of entities within the environment.

This sense of awareness of the surrounding environment, and conveying this awareness to others, is an issue of some sensitivity in design. For example, in the case of active badges the issue of awareness of users and how this may be applied became embroiled within a discussion of privacy [Harper, 1992]. This may become even more problematic in the case of multiple devices that display an awareness of others. For example, consider the suggested "fun" interest badge device offered by Philips in its Vision of the Future design study [Philips, 1996]. These badges are programmed with a set of interest profiles for people and are intended to light up when you meet someone else with a compatible profile. The social acceptability of such devices may well become a significant issue in determining their general acceptance.

Measurement and awareness

In order to modify their behaviour devices must be able to detect or measure the various attributes we have mentioned: their location, environment, other devices, people and things.

These are mostly status phenomena and elsewhere [Dix and Abowd 1996, Ramduny, Dix and Rodden 1998] we have discussed the various ways in which an active agent can become aware of a status change. In short, these reduce to finding out directly or via another agent (human or electronic). For example, a car with a built in GPS sensor can detect its position directly and thus give directions to the driver, but a simple PDA may need to be told of the current location by its user in order to adjust timezones. Other computational agents may also be important sources of information about themselves (as in the case of CyberDesk) and about other parts of the environment (for example recommender systems).

Items in the environment (people, devices, objects) are particularly difficult: not only may they change their attributes (position etc.), but also the configuration of items may change over time (e.g. people may enter or leave an active room). This leads to three levels of awareness. We'll look at these with the example of a car computer:

• presence – someone has sat down in the driver's seat, but all the car can tell is that the door has been opened then closed

• identity – the driver enters her personal PIN and the car can then adjust the seat position for the driver

• attributes – the car detects from the steering behaviour that the driver is getting drowsy and sounds a short warning buzzer

Notice how in this example, presence was not detected at all, identity was informed by the driver, but the sleepiness of the driver was detected directly. In other cases different combinations of detection or informing may be found. Security systems often have ultrasonic sensors to tell that someone is near (presence). Similarly, the car could be equipped with a pressure sensor in the driver's seat. Active badges, video-based face recognition or microphones matching footstep patterns can be used to tell a room who is there and hence play the occupant's favourite music and adjust the room temperature.
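These three levels can be sketched as a simple data structure. The sketch below records presence, identity and attributes for the car example, whichever combination of detection and informing supplied them (the field names and the identifier are hypothetical, introduced only for illustration):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Occupant:
    """One entity in the environment, known at up to three levels."""
    present: bool = False                            # level 1: presence
    identity: Optional[str] = None                   # level 2: identity
    attributes: dict = field(default_factory=dict)   # level 3: attributes

driver = Occupant()

# Presence inferred indirectly: the door has been opened then closed.
driver.present = True

# Identity informed by the user (PIN entry) rather than detected.
driver.identity = "driver-4711"        # hypothetical identifier

# An attribute detected directly, e.g. from steering behaviour.
driver.attributes["drowsy"] = True
sound_buzzer = driver.attributes.get("drowsy", False)
```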

These examples are all about detecting people, but the same things occur in other settings. In the virtual world an agent may need to detect the same things: presence – whether any other applications are running; identity – if so, what they are (e.g. Netscape); and attributes – what web page is currently being viewed. Physical devices may also detect one another, for example allowing several people with PDAs to move into 'meeting' mode. In fact, awareness models that perform just this form of detection within the virtual world abound [Rodden, 1996].

Detection and measurement may vary in accuracy: perhaps a box was put onto the car seat pressure sensor, the driver lied about her identity, or the ultrasonic sensor cannot tell whether there is one person or more. It will also typically involve some delay, especially when indirect means are used; this is particularly problematic if the attribute being measured changes rapidly. Thus actual detection is a trade-off between accuracy, timeliness and cost. Depending on the outcome, certain adaptations may be ill advised (a car wrongly identifies its driver and, believing the driver to be short, adjusts the seat; the real driver is quite tall and ends up squashed behind the steering wheel). The fidelity of awareness is very closely tied to the demands of the application and represents a genuine trade-off between the cost of measurement, the nature of the measurement and the importance of accuracy in the awareness information.

From requirements to architecture

As we have seen, the taxonomy we suggest offers up many exciting design possibilities for specific applications arising from the contextual nature of mobile devices. Although we are investigating some of these in a number of projects at Lancaster University, the primary aim of our current 'infrastructure' project is to examine the generic requirements to emerge from taxonomies of this form. These requirements can then be exploited to develop the underlying toolkits, architecture and infrastructure needed for temporally well designed, context-aware, collaborative mobile systems. One of the issues suggested strongly by our framework is that the issues of human computer interaction involved in mobile systems extend well beyond the interface provided by the device and have significant impacts on the infrastructure.

Research has demonstrated the shortcomings of existing infrastructure components for supporting adaptive mobile applications [Davies, 1994], [Joseph, 1995]. In more detail, existing components have two critical shortcomings. Firstly, they are often highly network specific and fail to provide adequate performance over a range of network infrastructures (e.g. TCP has been shown to perform poorly over wireless networks [Caceres, 1994]). Secondly, existing components often lack suitable APIs for passing status information to higher levels. As a consequence of these shortcomings, new systems are increasingly being developed using bespoke communications protocols and user interfaces, for example the GUIDE system described in [Davies, 1998].

As these devices become more widespread the need increases for generic application architectures for at least subclasses. There is clear commercial pressure for this, in particular, Windows-CE is being promoted for use in embedded systems. However, if these are simply developed by modifying architectures and toolkits originally designed for fixed environments there is a danger that some of the rich interaction possibilities afforded by mobile devices may be lost.

There are some examples of generic frameworks on which we can build. At Georgia Tech, location aware guides are being constructed using the CyberDesk/Cameo architecture [Wood et al., 1997]. Cameo is a software architecture based on the theoretical framework of status–event analysis. Status–event analysis gives equal weight to events, which occur at specific times, and status, phenomena which always have a value that can be sampled [Dix and Abowd, 1996]. The discrete nature of computation forces an emphasis in many specification and implementation notations towards the former; however, most contextual information is of the latter type – status phenomena. Computation using status phenomena requires callback-type programming, as is familiar in many user interface toolkits, to be used far more widely.
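The flavour of this callback-type programming can be sketched in a few lines. The class below (names are our own, not Cameo's API) wraps a status phenomenon so that it can always be sampled, while changes to it are delivered as events to registered callbacks:

```python
class Status:
    """A status phenomenon: it always has a value that can be sampled,
    and a change of value is delivered to observers as an event."""

    def __init__(self, value):
        self._value = value
        self._callbacks = []

    def sample(self):
        return self._value                 # status: sample at any time

    def on_change(self, callback):
        self._callbacks.append(callback)   # register interest

    def update(self, value):
        if value != self._value:           # a change is an event
            self._value = value
            for callback in self._callbacks:
                callback(value)

location = Status("Lancaster")
moves = []
location.on_change(moves.append)
location.update("Glasgow")      # fires the callback
location.update("Glasgow")      # no change, so no event
```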

Another major architectural issue for context-aware applications is the way in which contextual issues cut across the whole system design. This is reminiscent of other aspects of user-interface design, where the structures apparent at the user interface often do not match those necessary for efficient implementation and sound software engineering [Dix and Harrison, 1989]. In UI design this has led to a conflict between architectures which decompose in terms of user interface layers, such as the Seeheim and ARCH/Slinky models [Gram and Cockton, 1996], and more functionally decomposed object-oriented models. In fact, the object and agent-based architectures themselves usually include a layered decomposition at the object level, as in the MVC (Model–View–Controller) model [Lewis, 1995] and in the PAC (Presentation–Abstraction–Control) model [Coutaz, 1987]. Although the display and input hardware may be encapsulated in a single object or group of objects, its effects are felt in the architectural design of virtually every user-interface component. In a similar fashion, the hardware that supplies contextual information may well be encapsulated within context-objects, but their effect will permeate the system. This requires a similar orthogonal matrix structure to that found in models such as PAC or MVC.


Conclusions

In this paper we have considered human computer interaction with mobile devices in terms of the development of advanced mobile applications. The maturing of technology to allow the emergence of multi-user distributed applications that exploit mobile devices means that we can no longer focus the issues of interaction on the nature of the device. Rather, we must explicitly consider the impact of the context in informing the design of different interaction techniques. The context needs to be considered in terms of the device's relationship with the technical infrastructure, the application domain, the socio-technical system in which it is situated, the location of its use and the physical nature of the device. The interaction style supported by this class of mobile application is as dependent on this context as on the properties of the device itself. As a result, it is essential that work on the nature of these devices and the development of techniques that are aware of the limits of these devices is complemented by a broader consideration of the nature of interaction. However, these modified and novel forms of interaction cannot be realised without corresponding software architectures. So far we have identified two major structural principles which underlie this architectural design: the importance of representing status phenomena and the need for contextual information to cut across the software design space.


References

Aliaga, D. G. (1997). Virtual objects in the real world. Communications of the ACM, 40(3): 49-54.

BCS HCI (1997). British HCI Group Workshop on Time and the Web. Staffordshire University, June 1997.

Benford, S., Bowers, J., Fahlen, L., Mariani, J, Rodden. T, Supporting Cooperative Work in Virtual Environments. The Computer Journal, 1995. 38(1).

Cáceres, R., and L. Iftode. "The Effects Of Mobility on Reliable Transport Protocols." Proc. 14th International Conference on Distributed Computer Systems (ICDCS), Poznan, Poland, Pages 12-20. 22-24 June 1994.

Coutaz, J. (1987). PAC, an object oriented model for dialogue design. Human–Computer Interaction – INTERACT'87, Eds. H.-J. Bullinger and B. Shackel. Elsevier (North-Holland). pp. 431-436.

Davies, N., G. Blair, K. Cheverst, and A. Friday. "Supporting Adaptive Services in a Heterogeneous Mobile Environment." Proc. Workshop on Mobile Computing Systems and Applications (MCSA), Santa Cruz, CA, U.S., Editor: Luis-Felipe Cabrera and Mahadev Satyanarayanan, IEEE Computer Society Press, Pages 153-157. December 1994.

Davies, N., K. Mitchell, K. Cheverst, and G.S. Blair. "Developing a Context Sensitive Tourist Guide", Technical Report Computing Department, Lancaster University. March 1998.

Dix, A. and G. Abowd (1996). Modelling status and event behaviour of interactive systems. Software Engineering Journal, 11(6): 334–346.

Dix, A. J. (1992). Pace and interaction. Proceedings of HCI'92: People and Computers VII, Cambridge University Press. pp. 193-207.

Dix, A. J. (1995). Cooperation without (reliable) Communication: Interfaces for Mobile Applications. Distributed Systems Engineering, 2(3): 171–181.

Dix, A. J. and M. D. Harrison (1989). Interactive systems design and formal development are incompatible? The Theory and Practice of Refinement, Ed. J. McDermid. Butterworth Scientific. pp. 12-26.

Fickas, S., G. Kortuem, and Z. Segall. "Software Issues in Wearable Computing." Proc. CHI Workshop on Research Issues in Wearable Computers, Atlanta, GA, U.S.

Fitzpatrick, G., et al, Physical Spaces, Virtual Places and Social Worlds: A study of work in the virtual, Proc. CSCW’96, ACM Press

Gaver W., The Affordances of Media Spaces for Collaboration, Proc. CSCW’92, 1992, ACM Press.

Gram, C. and G. Cockton, Eds. (1996). Design Principles for Interactive Software. UK, Chapman and Hall.

Greenberg, S. and Marwood, D., 'Real Time Groupware as a Distributed System: Concurrency Control and its Effect on the Interface', Proceedings of CSCW'94, North Carolina, Oct 22-26, 1994, ACM Press.

Henderson, D.A. and Card, S.K., Rooms: The Use of Multiple Virtual Workspaces to Reduce Space Contention, ACM Transactions on Graphics, Vol. 5, No. 3, July 1986.

Howard, S. and J. Fabre, Eds. (1998). Temporal Aspects of Usability: The relevance of time to the development and use of human-computer systems – Special issue of Interacting with Computers (to appear).

Hughes J., Rodden T., King V., Anderson K. 'The role of ethnography in interactive systems design', ACM Interactions, ACM Press, Vol II, no. 2, 56-65, 1995.

Ishii, H., and Ullmer, B. (1997). Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms. Proceedings of CHI '97, ACM Press.

Johnson, C. and P. Gray (1996). Workshop Report: Temporal Aspects of Usability (Glasgow, June 1995). SIGCHI Bulletin, 28(2).

Johnson, C. W. (1997). The impact of time and place on the operation of mobile computing devices. Proceedings of HCI'97: People and Computers XII, Bristol, UK, pp. 175–190.

Joseph, A., A. deLespinasse, J. Tauber, D. Gifford, and M.F. Kaashoek. "Rover: A Toolkit for Mobile Information Access." Proc. 15th ACM Symposium on Operating System Principles (SOSP), Copper Mountain Resort, Colorado, U.S., ACM Press, Vol. 29, Pages 156-171. 3-6 December 1995.

Lewis (1995). The Art and Science of Smalltalk. Prentice Hall.

Long, S., R. Kooper, G.D. Abowd, and C.G. Atkeson. "Rapid Prototyping of Mobile Context-Aware Applications: The Cyberguide Case Study." Proc. 2nd ACM International Conference on Mobile Computing (MOBICOM'96), Rye, New York, U.S., ACM Press.

Patterson, J.F et al., Notification Servers for Synchronous Groupware, Proc. CSCW’96, ACM Press.

Ramduny, D. and A. Dix (1997). Why, What, Where, When: Architectures for Co-operative work on the WWW. Proceedings of HCI'97, Bristol, UK, Springer. pp. 283–301.

Ramduny, D., A. Dix and T. Rodden (1998). Getting to Know: the design space for notification servers. submitted to CSCW'98,

Root, R.W., Design of a Multi-Media Vehicle for Social Browsing, Proc. CSCW'88, Portland, Oregon, September 26-28, 1988, pp. 25-38.

Roseman, M, Greenberg, S, TeamRooms: Network Places for Collaboration, Proc. CSCW’96, ACM Press

Weiser, M. (1991). The computer of the 21st century. Scientific American, 265(3): 66-75.

Weiser, M. (1993). Some computer science issues in ubiquitous computing. Communications of the ACM, 36(7): 75-84.

Wood, A., A. K. Dey and G. D. Abowd (1997). CyberDesk: Automated Integration of Desktop and Network Services. Proceedings of the 1997 conference on Human Factors in Computing Systems, CHI '97, pp. 552–553.

Ubiquitous Input for Wearable Computing: Qwerty Keyboard without A Board


Mikael Goldstein, Robert Book, Gunilla Alsio and Silvia Tessa


Ericsson Radio Systems AB, User Applications Lab, ERA/T/A, Torshamnsgatan 23, Kista,

SE-164 80 Stockholm, Sweden

{mikael.goldstein, robert.book, gunilla.alsio, silvia.tessa}@era-t.ericsson.se

A different, yet familiar, kind of input interface is proposed for the mobile cellular phone user who has acquired the skill of touch-typing. By picking up each finger’s muscular contractions when touch-typing, along with a language model, it is possible to reduce the Qwerty keyboard to a truly ubiquitous interface: a "Qwerty keyboard without a board". Each finger covers a certain number of letters (between three and six) when touch-typing. When <Space> is depressed, using one of the thumbs, the character input string is matched against the language model’s knowledge. The most probable match is selected, along with alternatives. The longer the word, the more accurate the prediction. The character interpretation is based on position and order instead of position alone. Thus, the Qwerty keyboard can in a first step be reduced to a 12-key keyboard. In a second step, the keyboard is completely omitted. The fingers’ muscular contractions when touch-typing are sensed, coded, processed and transmitted to a conversation model using wireless signal transmission. The mobile cellular phone user only has to find a flat surface to perform the touch-typing on. Thus, the size of the cellular phone may be drastically diminished without sacrificing usability or efficiency. Preliminary findings are discussed.


With the integration of telephony, data transmission and broadcasting, the traditional cellular terminal interface desktop metaphor is becoming inadequate, from an output as well as from an input point of view. From a usability point of view, the size of the display window is too small for easy information retrieval, and the 12-key keypad (0-9, *, #) is not adequate for touch-typing. This paper proposes a different kind of interface for the highly mobile user group that has acquired the skill of touch-typing. The idea came to us after having seen the documentary film Theremin: An Electronic Odyssey (Ebert 1995). Leon Theremin, a Russian scientist, constructed a musical instrument in the mid-twenties. It looked like a wooden box with an antenna and a coil coming out of it. By moving hands and body near the machine, the player could control the pitch and volume of its output by hand gestures, without touching the interface of the instrument. Is it possible to design a different type of cellular phone (keyboard) input interface that, with Theremin’s ideas in mind, makes text input possible without compromising size, weight or speed?


One way of solving these issues is to supply the user with a laptop computer, which however suffers from other disadvantages, namely size and weight. Thus, there is a trade-off between usability and weight/size. With the introduction of the Internet, retrieving and inputting information from the WWW poses no difficulty for the mobile laptop user.

If the mobile cellular phone user wishes to input text using the traditional telephone keypad (keys 0-9, *, #), where the characters are mapped onto each key in a many-to-one fashion, usability suffers considerably. The user prefers the traditional Qwerty keyboard layout, where each character is mapped onto a key in a 1:1 fashion. The input speed may be one factor that accounts for the sparse use of the SMS (Short Message Service) text service using the telephone keypad as input: it is slow and fatiguing. In order to increase input speed, the T9 by Tegic Communications Corporation (Tegic 1997) employs an intelligent software protocol that allows the user to input text using only one key press per letter. The T9 algorithm, combined with an internal linguistic database, scans various combinations to resolve the intended word. A skilled touch typist using the Qwerty keyboard may input as many as 150 words/min (two hands), whereas the typical office user manages to input 50 words/min (Shneiderman 1997). Using a floating keyboard (with a pen in one hand) as input reduces input speed to a maximum of 20-30 words/min. Entry speed for a short phrase, the quick brown fox jumped over the lazy dogs (45 characters including spaces), for different kinds of floating keyboards (Qwerty, ABC, Dvorak, Fitality, Telephone and JustType), using a stylus as input, varies between 8 and 21 words per minute (wpm) for novice users (MacKenzie, Soukoreff and Zhang 1997). The above examples refer to a layout where each character is represented by one key (except for CAPITAL letters, where the <Shift> key is depressed simultaneously; see Shneiderman 1997).

By letting each letter (or word) be represented by simultaneously depressing two or more keys, the input speed using two-handed input is increased drastically (up to 300 words/min). Using a Chording Glove (Rosenberg 1998) (one-handed input) made from mounting the keys of a chord keyboard onto the fingers of a glove, the average input speed was considerably slower, 16.8 wpm with an error rate of 17.4% (chords are made by pressing the fingers against any flat firm surface). The measurements were taken after 11 hours of training. Due to the long training time, many people will not trade this learning time for higher efficiency or increased mobility.


The general Qwerty keyboard is too big to be integrated into a normal small cellular phone, and if it is miniaturised, usability suffers both regarding effectiveness and efficiency. The NOKIA 9000 mobile phone, for example, in fact has two keyboards, one miniaturised Qwerty and a keypad; they take up a lot of space and make the telephone huge in spite of the miniaturisation. Most users find it difficult to touch-type (using both hands and all fingers when typing) on miniaturised keyboards. For novice users familiar with traditional Qwerty keyboards, input speed (using fingers) for three short strings (14 words) was halved (from 20 to 10 wpm) when using a small compared to a large Qwerty touchscreen (6.8 cm vs. 24 cm wide) (Sears et al 1993). Using speech recognition may not be an appropriate approach in public places due to privacy issues. The ordinary cellular phone keypad (0-9, *, #), where each digit key represents several letters and each letter is generated by pressing the appropriate key a number of times corresponding to the letter's rank order on that key, is slow.

The star-ten-pound keyboard

The usual typewriter interface uses the Qwerty keyboard. When touch-typing with two hands (Ben’Ary 1989), each hand covers half of the keyboard, and each finger on the left hand (1 = little finger, 2 = ring finger, 3 = middle finger, 4 = index finger, 5 = thumb) and on the right hand (6 = thumb, 7 = index finger, 8 = middle finger, 9 = ring finger, 10 = little finger) covers a certain number of characters on the keyboard (see Table 1). Only the most frequently used characters are represented. However, by introducing computational power and a language model, it is possible to reduce the Qwerty keyboard to a Position and Order Star-Ten-Pound (*10#) keyboard. Each keystroke is interpreted from a combination of position and order rather than from spatial position alone. The thumb(s) are used to parse the text input string of characters into lexical and syntactic (and semantic) units (Suereth, 1997). The language model analyses the character-input strings by combining all the possible combinations. For example, using the position and order approach, the four-letter English word went generates the following possible outcome:

Position and order approach

Finger 2 (1st) + Finger 3 (2nd) + Finger 7 (3rd) + Finger 4 (4th) + Finger 6 (5th):

<wsx> + <edc> + <yhnujm> + <rfvtgb> + <Space>

The character string, upon obtaining the parsing command <Space>, using various lexical, semantic and syntactic rules, results in a reasonable guess of what the typed input word is. The system of course requires training. The character input string is compared to a stored dictionary for the language in question. Nonsense combinations like wdyr etc. are discarded. The position of the word in the sentence is compared to syntactic and semantic rules in the language grammar. Other possible combinations could be presented as alternatives, excluding acronyms and names. The longer the input string of characters (the word) is, the more accurate the prediction. This follows the same line of reasoning as the T9.
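A minimal sketch of this decoding step is given below. The finger-to-letter map follows the went example above, and the three-word dictionary is a toy stand-in for the language model's lexicon:

```python
# Letters covered by each finger, following the example above
# (fingers 1-5 on the left hand, 6-10 on the right; thumbs do <Space>).
FINGER_KEYS = {
    1: "qaz", 2: "wsx", 3: "edc", 4: "rfvtgb",
    7: "yhnujm", 8: "ik", 9: "ol", 10: "p",
}

# Toy stand-in for the stored dictionary.
DICTIONARY = {"went", "were", "week"}

def decode(fingers):
    """Expand a finger sequence into all letter combinations and
    keep only those found in the dictionary."""
    candidates = [""]
    for finger in fingers:
        candidates = [word + letter
                      for word in candidates
                      for letter in FINGER_KEYS[finger]]
    return sorted(word for word in candidates if word in DICTIONARY)

# 'went' is typed with fingers 2, 3, 7, 4 before <Space>:
print(decode([2, 3, 7, 4]))    # prints ['went']
```

Note how 'were' and 'week' are rejected automatically: their third letters are not covered by finger 7.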

Table 1. Different characters covered by each finger (1 to 10) (Swedish language) when using the Qwerty keyboard (desktop keyboard)(Laptop keyboards may differ).

Left hand:
  Finger 1 (little finger):  q a z  (also Caps Lock)
  Finger 2 (ring finger):    w s x
  Finger 3 (middle finger):  e d c
  Finger 4 (index finger):   r f v t g b
  Finger 5 (thumb):          <Space>

Right hand:
  Finger 6 (thumb):          <Space>
  Finger 7 (index finger):   y h n u j m
  Finger 8 (middle finger):  i k ,  (; when shifted)
  Finger 9 (ring finger):    o l .  (: when shifted)
  Finger 10 (little finger): p å ö ä ' (* when shifted) ¨ (^ when shifted)
By using only 10 keys, with a spatial layout resembling that of two hands positioned according to traditional Qwerty touch typing (see Figure 1), it would be possible to obtain the same outcome. Each of the different characters is mapped onto each of the 10 keys according to the procedure described in Table 1. This reduces the original Qwerty keyboard, where each character is mapped onto one separate key, to a Position and Order keyboard where several characters are mapped onto one key. However, the same touch typing layout as when using the Qwerty keyboard, matching the 10 fingers, prevails; this is not the case when the keyboard is reduced to a telephone keypad or to a chord keyboard. The layout makes positive transfer possible between the ordinary Qwerty keyboard and the Position and Order keyboard, since the same fingers are used to create each character as when using the Qwerty keyboard. Because the same spatial configuration metaphor as the ordinary Qwerty keyboard prevails, the learning effort is minimal. The only prerequisite is that the user is familiar with touch typing using all 10 fingers (see Table 2).

The language model

Since the number of input keys is not enough to keymap all 26 characters in a 1:1 fashion (not using chords), fundamental help is provided by the language model. Based on statistics and grammar, the model "guesses" which letter has been typed. Assuming the writing rule that every punctuation mark <.>, <,> has to be followed by a <Space>, every word can be parsed. By a word we mean any combination of characters; for instance, "Home, " is a word of five characters. The middle finger of the right hand (Finger 8) covers the letters <o>, <l> and <,>. The <,> mark is processed as any other character, first according to spelling, then according to grammar and finally according to context. Every string is matched first to a 3-gram dictionary. A 3-gram is a group of three letters that is present in an English dictionary. For example, the sequence unn is an allowable 3-gram while nnn is not. For every three-letter group, permissible 3-grams are read from the 3-gram dictionary and matched with the following and preceding text of the character string. In this way, a lot of ungrammatical possibilities are discarded. A cache of the most frequently used 3-grams reduces the number of memory accesses.

Finally, a match to the dictionary gives the current possibilities: if only one possibility is left, it is the right one; otherwise the grammar and context structure are also analysed. If more than one possibility exists, the most likely one is shown, according to a strong ranking principle. Particular attention is dedicated to beginning and ending 3-grams. For example, consider the sequence "<Space> esempl <Space>". The algorithm will look up "pl<Space>" in the 3-gram dictionary without getting any match, because no word terminates in "pl". Consequently, the character string "esempl" will be discarded.
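The 3-gram check can be sketched as follows (the handful of allowed 3-grams is of course a toy substitute for a dictionary-derived table):

```python
# Toy 3-gram table; a real one would be derived from a dictionary.
ALLOWED_3GRAMS = {"wen", "ent", "hom", "ome", "unn"}

def plausible(word):
    """Keep a candidate only if every 3-gram it contains is allowed."""
    return all(word[i:i + 3] in ALLOWED_3GRAMS
               for i in range(len(word) - 2))

print(plausible("went"))   # 'wen' and 'ent' are allowed: True
print(plausible("wnnt"))   # 'wnn' is not an allowed 3-gram: False
```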

The 3-grams are tested for dictionary membership using a Bloom filter (Bloom 1970) for storage and searching. A Bloom filter is a special kind of hash table where each entry is either a ‘1’ or a ‘0’ and where repeated hashing into the same table is performed. The advantage is that a minimum amount of memory is required. For a word to be in the dictionary, all the entries determined by the hash functions have to be ‘1’. If only one of the entries is ‘0’, there is no need to go on further: the word is not in the dictionary. A word is added to a dictionary by applying the hash functions and entering ‘1’ in the corresponding positions. The problem with Bloom filters is that probability of occurrence cannot be stored; an efficient solution is to divide the dictionary into groups of the same frequency ranges. Moreover, personal dictionaries can be added, for instance from the address book, bookmarks or by learning.
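A minimal Bloom filter along these lines might look as follows (the table size and number of hash functions are illustrative, not tuned for a real dictionary):

```python
import hashlib

class BloomFilter:
    """Bit table plus k hash functions; membership tests can give
    false positives but never false negatives."""

    def __init__(self, size=1024, k=3):
        self.size, self.k = size, k
        self.bits = [0] * size

    def _positions(self, word):
        # Derive k independent positions by salting one hash function.
        for i in range(self.k):
            digest = hashlib.md5(f"{i}:{word}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, word):
        for pos in self._positions(word):
            self.bits[pos] = 1

    def __contains__(self, word):
        # A single '0' proves absence; all '1's mean "probably present".
        return all(self.bits[pos] for pos in self._positions(word))

lexicon = BloomFilter()
for word in ("went", "home", "week"):
    lexicon.add(word)
```

Dividing the dictionary into frequency bands, as suggested above, would then simply mean keeping one such filter per band.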

Creating capital letters and switching between characters and numbers

Two additional keys have to be added in order to perform a frequently occurring operation: generating capital letters (see Table 2 and Figure 1). By adding a Star <*> and a Pound <#> key where the right and left hand’s <Shift> keys are located, it is now possible to generate capital letters according to the same pattern as when touch-typing; this defines the Star-Ten-Pound concept. Switching from characters to numbers (and further to cursor movements) may be accomplished by an additional toggle switch or by defining a keymap that fulfils the following five criteria: easy to learn, easy to remember, minimal work, low error rate and fast typing (Rosenberg 1998, p. 60). The chord keymap can be empirically tested regarding comfort using Seibel’s (1962) Discrimination Reaction Time (DRT) measure.


Table 2. The layout of the Position and Order StarTenPound keyboard.

Letter mode (capital letters, generated with <*>/<#>, in parentheses):

  Finger 1:          q a z         (Q A Z)
  Finger 2:          w s x         (W S X)
  Finger 3:          e d c         (E D C)
  Finger 4:          r f v t g b   (R F V T G B)
  Fingers 5 and 6:   <Space>
  Finger 7:          y h n u j m   (Y H N U J M)
  Finger 8:          i k ,         (I K ;)
  Finger 9:          o l .         (O L :)
  Finger 10:         p ö – å ä     (P Ö _ Å Ä _)

Digit mode:

  Fingers 1-10 are mapped to the digit keys 1 2 3 4 5 6 7 8 9 0 respectively;
  the Star <*> and Pound <#> keys sit where the left and right <Shift> keys
  are located.



The processing order of a typed sentence is thus the following:

1) If in Letter mode: the user types the first string of letters, which is parsed into a lexical, syntactic and semantic unit using the <Space> key. The Position and Order conversation processor software predicts and selects the most probable word. The next string of characters is entered and is again parsed when the <Space> key is hit. The complete sentence is entered and finished by the punctuation <.> followed by a <Space>, with <*/#> (Shift) + <letter> used to create a capital letter. The whole sentence is then processed. If special signs are used (! @ # $ % etc., which are situated on the top row of the Qwerty keyboard), these are obtained by switching to Digit mode and then using <Shift> (*/#) + <digit>.

2) If in Digit mode: the user enters the telephone number by using the appropriate keys (1-9, 0) that are mapped to each of the fingers (Finger 1-10). If the user calls a voice response service, the Star (*) and the Pound (#) keys may be used.

A qwerty keyboard without a board

The reduction of keys without loss of usability may be carried one step further. No keys, and in fact no board at all, are necessary for a completely ubiquitous "Qwerty keyboard without a board" interface. Using Electromyography (EMG), sensors placed on the forearm can read the action potentials generated by muscle use. These can be coded and transmitted over a wireless radio link for further processing. The only thing the user needs is a reasonably flat space, anywhere, with room for two hands. The board is thus no longer needed as an input device, and the size of the cellular phone may be drastically diminished without sacrificing usability or efficiency.

The present proposal does not include any details about cursor movements, nor does it address how (visual or auditory) feedback is displayed.

Preliminary findings

Only the thumb of the dominant hand is used for <Space>

Which thumb does a touch-typing user use when depressing the <Space> key? Five subjects, four right-handed and one left-handed, participated in the pre-test. Four of the subjects could touch-type, whereas the fifth could only touch-type partially. The subjects were instructed to input text from a Swedish manuscript on a traditional Qwerty keyboard, using touch-typing. Four out of five (80%) used the thumb of their dominant hand to depress the <Space> key; one used the index finger of the dominant hand.

Since the dominant (right) hand’s thumb is used for moving forward one unit (<Space>), a mnemonic (Rosenberg 1998, p. 63) is to use the non-dominant (left) hand’s thumb to move backward one unit (<Backspace>). To keymap the <Return> (Enter) key in a similar way, both thumbs are depressed simultaneously (<Space><Space>), since the operation incorporates both a forward and a backward movement.

Using the Non-keyboard shows a moderate performance decrease

Using one of the authors (Robert Book) as test subject (N=1) showed a surprisingly moderate performance decrease (only 16%) in text input speed. The task was to input 124 words of continuous Swedish text (551 characters; 675 characters including <Space>) from an article by the Swedish writer Täppas Fogelberg.

Using the Qwerty keyboard and a screen as visual feedback resulted in an input speed of 50 wpm, using 147 sec of time. Two uncorrected and 10 corrected errors occurred during the text input (error rate 10%). Using the Non-keyboard (without any visual feedback) resulted in an input speed of 42 wpm, using 175 sec of time. The number of corrected errors was nine, whereas the number of uncorrected errors could not be assessed; it is, however, reasonable to assume that they are of the same magnitude as in the Qwerty condition, and thus that the error rate is of the same order in the Non-keyboard condition. Even though the sessions were filmed and played back in slow motion, it was difficult to visually keep track of the finger movements. The ability to detect error corrections in the Non-board condition was attributable to the subject’s typing style: when assuming that an error had been made, the subject made a clearly visible movement with the right hand, reaching for the <Backspace> key with the ring and middle finger. The subjective impression was that the task took considerably longer to perform using the Non-keyboard. The author had no prior training whatsoever in Non-keyboard input.
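The reported figures are mutually consistent, assuming the usual definitions (wpm = words per minute of elapsed time; error rate = total errors over words entered):

```python
# Consistency check of the reported figures, assuming the usual definitions:
# wpm = words / minutes, error rate = (corrected + uncorrected) / words.

words, qwerty_secs, nonkbd_secs = 124, 147, 175

qwerty_wpm = words / (qwerty_secs / 60)   # about 50.6 -> reported 50 wpm
nonkbd_wpm = words / (nonkbd_secs / 60)   # about 42.5 -> reported 42 wpm
slowdown = 1 - nonkbd_wpm / qwerty_wpm    # 0.16 -> the 16% decrease

error_rate = (10 + 2) / words             # about 0.097 -> reported 10%
```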

Lower muscular strain using the Non-keyboard?

EMG measurements (2-channel ME300, Mega Electronics Ltd.) were recorded bilaterally on musculus extensor carpi radialis brevis using disposable surface electrodes (M-00-S, Medicotest A/S) while the author performed the text entry task. The difference in muscular activity (µV), favouring the Non-keyboard condition (see Table 3), could be attributable to the lower input speed (42 vs. 50 wpm) as well as to the lower resistance experienced when depressing non-keys. The Non-keyboard condition was, however, subjectively perceived as more strenuous.


Table 3. EMG surface measurements on musculus extensor carpi radialis brevis bilateral forearm (Left and Right) in microvolt (µV) using Qwerty and Non-keyboard input. Results from one subject when entering the same amount of text (124 words).

Using a stylus for character input is very time-consuming

Another author (Gunilla Alsiö) entered 426 words (around 2700 characters) from the present manuscript using both a full Qwerty keyboard and a PalmPilot Professional (1997) with its floating miniaturised Qwerty keyboard and stylus. Gunilla is a skilled touch typist and entered the text in around 16 minutes using the traditional Qwerty keyboard (27 wpm). When switching to the PalmPilot floating Qwerty keyboard and stylus, it took her 70 minutes (7 wpm) to accomplish the same task. The within-person variation is thus around 1:4 in both input speed and completion time. Moreover, her arm was aching considerably. Obviously, the stylus approach in combination with a Qwerty layout is not an efficient way of entering longer texts. The input speed for a long text is thus considerably lower than the figure for a very short text (21 wpm) obtained by MacKenzie et al. (1997). The result also runs contrary to the conclusions of Greenstein and Arnaut (1988), who state that "using a stylus is less work than moving a finger or hand, making interaction faster and less tiring" (quoted in Rosenberg 1998, p. 42).
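The 1:4 within-person figure can be checked from the numbers above, assuming it is the ratio of completion times (and, inversely, of input speeds) between the two conditions:

```python
# Check of the 1:4 within-person figure, assuming it is the ratio of
# completion times (and, inversely, of input speeds) between conditions.

qwerty_minutes, stylus_minutes = 16, 70
qwerty_wpm, stylus_wpm = 27, 7

time_ratio = stylus_minutes / qwerty_minutes   # about 4.4
speed_ratio = qwerty_wpm / stylus_wpm          # about 3.9
# Both are close to the quoted 1:4.
```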


Ben’Ary, R. (1989), Touch typing in ten lessons, Perigee Books, New York: Berkely Publishing Group.

Bloom, B.H. (1970), Space/time trade-offs in hash coding with allowable errors, Communications of the ACM 13, 422-426.

Ebert, R. (1995), Theremin: An Electronic Odyssey, http://www.suntimes.com/ebert/ebert.reviews/1995/12/1011551.html

MacKenzie, I. S., Soukoreff, R. W. and Zhang, S. (1997), Text entry using soft keyboards, URL: http://www.cis.uoguelph.ca/~mac/BIT3.html

Greenstein, J.S. and Arnaut, L.Y. (1988), Input Devices, in M. Helander (ed.) Handbook of Human-Computer Interaction, Elsevier Science Publishers B.V. (North-Holland), New York, NY, chapter 22.

PalmPilot (1997), 3Com Corporation, URL: http://www.palmpilot.com

Rosenberg, R (1998), Computing without Mice and Keyboards: Text and Graphic Input Devices for Mobile Computing, Doctoral dissertation, Department of Computer Science, University College, London.

Sears, A., Revis, D., Swatski, J., Crittenden, R. & Shneiderman, B. (1993), Investigating Touchscreen Typing: The Effect of Keyboard Size on Typing Speed, Behaviour and Information Technology 12, 1, 17-22

Shneiderman, B. (1997), Designing the User Interface, Strategies for Effective Human-Computer Interaction, Ch. 9, Berkeley, California: Addison-Wesley.

Suereth, R. (1997), Developing Natural Language Interfaces-Processing human conversations. New York: McGraw-Hill.

Tegic Communications Corporation (1997), URL: http://www.tegic.com.

Using Non-Speech Sounds in Mobile Computing Devices


Stephen Brewster, Grégory Leplâtre and Murray Crease


GIST, Department of Computing Science, University of Glasgow, Glasgow, G12 8QQ, UK




One of the main problems with output from small, hand-held mobile computing devices is the lack of screen space. As the device must be small to fit into the user’s hand or pocket there is no space for a large screen. Much of the work on presentation in standard, desktop interfaces relies on a large, high-resolution screen. The whole desktop metaphor, in fact, relies on such a method of presentation. This means that much of the research on effective screen design and information output cannot be generalised to mobile devices. This has resulted in devices that are hard to use, with small text that is hard to read, cramped graphics and little contextual information.

Lack of screen space is not a problem that can easily be improved with technological advances; the screen must fit on the device and the device must be small; screen space will always be in short supply. Another problem is that whatever screen there is will be unusable in a mobile telephone once the device is put to the user’s ear to make or receive a call.

There is one output channel that has, as yet, been little used to improve interaction in mobile devices (in fact, very few systems of any type have used it effectively): sound. Speech sounds are of course used in mobile phones when calls are being made but are not used by the telephone to aid the interaction with the device. Non-speech sounds are used for ringing tones or alarms (often in a quite sophisticated way) but again do not help the user interact with the system. There is now considerable evidence to suggest that sound can improve interaction and may be very powerful in limited display devices.

We suggest that sound, particularly non-speech sound, can be used to overcome some of the limitations caused by the lack of screen space. Non-speech sounds have advantages over speech in that they are faster and language independent. Research we are undertaking has shown that using non-speech sound can significantly increase usability without the need for more screen space.

The rest of this position paper will outline some of the work we are doing with non-speech sounds to improve the usability of human-computer interfaces. More details can be found on the web site above.

Non-speech sounds

The sounds used in our research are all based around structured audio messages called earcons. Earcons are abstract, musical sounds that can be used in structured combinations to create sound messages to represent parts of a human-computer interface. Detailed investigations of earcons by Brewster and colleagues showed that they are an effective means of communicating information in sound. Earcons are also the most thoroughly tested form of non-speech sounds.

Earcons are constructed from motives: short, rhythmic sequences that can be combined in different ways. The simplest method of combination is concatenation, which produces compound earcons. By using more complex manipulations of the parameters of sound (timbre, register, intensity, pitch and rhythm), hierarchical earcons can be created. Using these techniques, structured combinations of sounds can be created and varied in consistent ways. The earcons we have used are simple and within the range of sounds playable on a hand-held device. A set of guidelines to help interface designers use earcons is available.

Navigation in non-visual interfaces

One important reason for using non-speech sound is to represent menu structures in interfaces where visual feedback is not possible, for example telephone-based interfaces (phone banking or voicemail) or interfaces to mobile telephones where the screen is too small to present the hierarchy of options available. In a telephone-based interface a user might call their bank and navigate through a hierarchy of voice menus to find the service required. One problem is that the user can get lost in the hierarchy. As Yankelovich et al. say: "These interfaces, however, are often characterized by a labyrinth of invisible and tedious hierarchies which result when menu options outnumber telephone keys or when choices overload users’ short-term memory". The communication channel is very limited so little feedback can be given to the user about his/her current location. The more navigation information that is given, the more it obstructs the information the user is trying to access.

To solve this problem we have used non-speech sounds to provide navigation information. By manipulating the parameters of sound (timbre, pitch, rhythm, etc.) a hierarchy can be constructed. We constructed a hierarchy of four levels and 25 nodes, with a sound for each node. The sounds were constructed around rules, which meant that the users did not have to learn 25 different sounds but five simple rules. The idea was that users would be able to hear a sound and from it work out their location in the hierarchy of options.
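A rule-based scheme of this kind can be sketched as follows. The specific rules, timbres and motives below are illustrative assumptions, not those used in the study; the point is that a handful of per-level rules generates a distinct sound for every node.

```python
# Illustrative rule-based earcon scheme (the rules, timbres and motives are
# our own, not those from the study): a few per-level rules generate a
# distinct sound for every node, so users learn rules rather than 25 sounds.

TIMBRES = ["organ", "brass", "strings", "flute", "piano"]
RHYTHMS = ["x", "x-x", "xx-x", "x--x", "xxx"]

def earcon_for(path):
    """Map a node's path in the hierarchy, e.g. (2, 0, 1), to sound parameters.

    Rule 1: the top-level branch chooses the timbre.
    Rule 2: the second-level branch chooses the rhythmic motive.
    Rule 3: each further level raises the register by one octave.
    """
    params = {"timbre": TIMBRES[path[0] % len(TIMBRES)],
              "octave": 3 + len(path)}          # deeper nodes sound higher
    if len(path) > 1:
        params["rhythm"] = RHYTHMS[path[1] % len(RHYTHMS)]
    return params
```

Hearing a strings timbre in a high register with the first motive, say, then identifies the branch and depth without the user having memorised that particular sound.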

We have studied several different types of sounds and many different types of training and found very good recall rates. Users have been able to recall from 80% up to 97% of the sounds with minimal training and they could remember them over long periods of time. We believe that using sounds in this way can overcome many of the problems of presenting menus in small, hand-held devices. Users would also be able to move through the menus in a voicemail or telephone banking system, or the menu options within the telephone itself without becoming lost.

Sonically-enhanced widgets

Another important use of non-speech sounds has been to overcome problems of overloaded screens on desktop human-computer interfaces. Graphical displays can become cluttered and hard to use when too much information is put onto them. This is especially true in mobile devices where screens are small. Using sonically-enhanced widgets (such as buttons, scrollbars, etc.) means that information can be moved off the graphical display and put into sound, thus freeing the display for the most important information.

Some desktop interfaces allow the use of sounds (e.g. Windows95) but these are often gimmicks rather than additions to improve usability. This means that they are often perceived as annoying. Our sounds were added to interface components to address specific usability problems. The sounds helped our users work more effectively and therefore they did not find them annoying (as demonstrated by our experimental results).

The results of our experiments on buttons, scrollbars, menus, drag and drop, and tool palettes have shown a significant improvement in usability: time taken to complete tasks and time to recover from errors were reduced, subjective workload was reduced and user preference increased. In each case we also checked to see if annoyance had been increased by the addition of sound. In none of the widgets did users rate the sonically-enhanced interface more annoying than the standard, graphical one.

As a simple example, in one of our experiments we removed the visual highlight from graphical buttons and replaced it with sound. The sounds told users when they were on the button and when they had pressed it correctly (which can be hard to see, and causes users to ‘slip off’ a button and think it has been pressed when it has not). The sounds overcame these basic interaction problems with buttons and increased usability. Users had no problem with the lack of graphical feedback; in fact, they performed much better with the sonically-enhanced buttons.
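The feedback logic of such a sonically-enhanced button can be sketched as a small state machine. The event and sound names below are hypothetical, not taken from the experiment.

```python
# Sketch of the feedback logic described above; event and sound names are
# hypothetical. Sound stands in for the visual highlight, and a distinct
# cue flags the "slip-off" error that is easy to miss visually.

class SonicButton:
    def __init__(self, play):
        self.play = play          # callback into some audio back-end
        self.pressed = False

    def pointer_enter(self):
        self.play("on-button")    # quiet cue: pointer is over the target

    def press(self):
        self.pressed = True

    def release(self, still_on_button):
        if self.pressed and still_on_button:
            self.play("pressed-ok")   # confirms a successful press
        elif self.pressed:
            self.play("slip-off")     # press began here but ended elsewhere
        self.pressed = False
```

A press that slips off the button yields the error cue rather than silence, which is precisely the failure mode the small graphical highlight made easy to miss.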

We suggest that this could also allow widgets to be reduced in size, thus saving space on a limited graphical display but without compromising usability (if the widgets were simply reduced in size then they would become harder to use and more error-prone). In the case of the buttons described above, the sounds could tell users when they were on the button even if it was small and the tip of a pointing device was obscuring it. The sounds would confirm if the button had been pressed correctly, whereas the limited amount of visual feedback from the small graphical button would be easily missed.

Presenting dynamic information on hand-held devices

One potentially large group of mobile device users is people who need to access dynamic information on the move: for example, stock market traders who want to monitor share prices, or systems administrators who want to monitor network performance when offsite. Mobile devices do not currently support such activities very well.

Presenting dynamic information on devices with small displays is difficult. Techniques for large displays again do not generalise well to small ones. For example, on a large graphical display a strip-chart or line graph might be used so that users can see a change in one or more variables over time. This is very effective as the user can see general trends, compare one graph against another and also find current values. Utilising this type of presentation on a small display device is impossible as the screen is too small to provide any history so that trends are difficult to follow. If the user has the device to his/her ear then no visual display is possible.

One way to present this type of information is using sound. Research has shown that presenting graphs in sound for blind people allows them to extract much of the information needed from the graph. We are working on solutions based on non-speech sounds to present the trend information with speech to present current values when required.
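One common way to sonify trend data (an assumption here, not necessarily the mapping used in this work) is to map each data value linearly onto a pitch range, so that a rising trend is heard as rising pitch:

```python
# Common sonification scheme (an assumption, not necessarily this work's
# design): map each sample linearly onto a MIDI pitch range, so a rising
# trend is heard as rising pitch.

def sonify(values, lo_note=48, hi_note=84):
    """Return one MIDI note number per data point; low values -> low pitch."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1          # avoid division by zero on flat data
    return [round(lo_note + (v - vmin) / span * (hi_note - lo_note))
            for v in values]
```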

We are also investigating more complex graphical visualisation techniques, for example presenting complex hierarchical information (such as file systems) based on the navigation work described above. All of the research in this area is aimed at making mobile devices more useful by allowing users access to more complex information than they can currently get at.


A solution is needed to the problem of lack of screen space on mobile devices. This is unlikely to come from larger screens because the devices are physically limited in size to fit into the user’s hand. We suggest that non-speech sounds can solve many of these problems without making the devices larger.

Structured non-speech sounds (in combination with speech) can provide information to help users navigate through menus of options more easily. Sounds can improve the usability of on-screen widgets so that screen clutter can be avoided. They can also provide access to more complex, dynamic information that is currently very difficult to use on hand-held devices.


  1. Blattner, M., Sumikawa, D. and Greenberg, R. Earcons and icons: Their structure and common design principles. Human Computer Interaction 4, 1 (1989), 11-44.
  2. Gaver, W., Smith, R. and O'Shea, T. Effective sounds in complex systems: The ARKola simulation. In Proceedings of ACM CHI'91 (New Orleans) ACM Press, Addison-Wesley, 1991, pp. 85-90.
  3. Brewster, S.A., Wright, P.C. and Edwards, A.D.N. A detailed investigation into the effectiveness of earcons. In Proceedings of ICAD'92 (Santa Fe Institute, Santa Fe) Addison-Wesley, 1992, pp. 471-498.
  4. Blattner, M., Papp, A. and Glinert, E. Sonic enhancements of two-dimensional graphic displays. In Proceedings of ICAD'92 (Santa Fe Institute, Santa Fe) Addison-Wesley, 1992, pp. 447-470.
  5. Brewster, S.A., Wright, P.C. and Edwards, A.D.N. An evaluation of earcons for use in auditory human-computer interfaces. In Proceedings of ACM/IFIP INTERCHI'93 (Amsterdam, Holland) ACM Press, Addison-Wesley, 1993, pp. 222-227.
  6. Brewster, S.A., Wright, P.C. and Edwards, A.D.N. The design and evaluation of an auditory- enhanced scrollbar. In Proceedings of ACM CHI'94 (Boston, MA) ACM Press, Addison-Wesley, 1994, pp. 173-179.
  7. Edwards, A.D.N., Pitt, I.J., Brewster, S.A. and Stevens, R.D. Multiple modalities in adapted interfaces. In Extra-Ordinary Human-Computer Interaction, Edwards, A.D.N. (Ed.), Cambridge University Press, Cambridge, UK, 1995, 221-243.
  8. Yankelovich, N., Levow, G. and Marx, M. Designing SpeechActs: Issues in speech user interfaces. In Proceedings of ACM CHI'95 (Denver, Colorado) ACM Press, Addison-Wesley, 1995, pp. 369-376.
  9. Brewster, S.A., Wright, P.C. and Edwards, A.D.N. Parallel earcons: Reducing the length of audio messages. International Journal of Human-Computer Studies 43, 2 (1995), 153-175.
  10. Brewster, S.A., Wright, P.C. and Edwards, A.D.N. Experimentally derived guidelines for the creation of earcons. In Adjunct Proceedings of BCS HCI'95 (Huddersfield, UK), 1995, pp. 155-159.
  11. Brewster, S.A., Raty, V.-P. and Kortekangas, A. Earcons as a method of providing navigational cues in a menu hierarchy. In Proceedings of BCS HCI'96 (London, UK) Springer, 1996, pp. 169-183.
  12. Brewster, S.A. and Crease, M.G. Making Menus Musical. In Proceedings of IFIP Interact'97 (Sydney, Australia) Chapman & Hall, 1997, pp. 389-396.
  13. Brewster, S.A. Using Non-Speech Sound to Overcome Information Overload. Displays 17 (1997), 179-189.
  14. Brewster, S.A. Navigating telephone-based interfaces with earcons. In Proceedings of BCS HCI'97 (Bristol, UK) Springer, 1997, pp. 39-56.
  15. Brewster, S.A. The design of sonically-enhanced widgets. Accepted for publication in Interacting with Computers (1998).
  16. Brewster, S.A. Using earcons to improve the usability of tool palettes. In Summary proceedings of ACM CHI'98 (Los Angeles, Ca) ACM Press, Addison-Wesley, 1998, pp. 297-298.

Design Lifecycles and Wearable Computers for Users with Disabilities


Helen Petrie, Sensory Disabilities Unit, University of Hertfordshire, Hatfield, AL10 9AB.

Stephen Furner, Human Factors Division, British Telecom Laboratories, Martlesham Heath, Ipswich, IP5 7RE.

Thomas Strothotte, Dept of Simulation and Graphics, Otto von Guericke University of Magdeburg, Germany.



As with all technological artifacts, wearable and mobile computers need to be well-designed if they are to serve their functions appropriately. We are working towards an appropriate iterative user-centred design lifecycle for the development of wearable and mobile computers for people with visual disabilities, taking many ideas from mainstream HCI but adapting them both for the particular characteristics of wearable and mobile computer systems and the particular characteristics of our user group. This process has led us to conclude that methodologies developed for the evaluation of static interfaces will need to be adapted and extended if they are to capture the critical features and peculiarities of wearable and mobile computers, whether they are for able-bodied or disabled users.

Wearable computers have enormous potential to assist people with disabilities. For example, for people with sensory disabilities such as blindness or deafness, they could provide substitute sensory information. Very cumbersome laboratory systems have been developed which provide substitute visual information for blind people by projecting a simple image in tactile form on the back or stomach, and these have been shown to have some utility [3]. Such systems would be far more useful as a wearable technology, although the appropriate miniaturization is still in the future. However, it is already possible to provide disabled people with useful information via wearable systems, even if this is not complete sensory substitution. For example, the Low Vision Enhancement System [13] is an augmented reality headset which helps the wearer make more effective use of any remaining vision by magnifying images and increasing light/dark contrast.

MoBIC: a wearable navigational aid for blind travellers

One example of our use of the iterative user-centred design lifecycle has been in the MoBIC Project, which has developed a wearable navigational aid for blind travellers [9, 12]. Blind people have two types of problems in moving through their environment, particularly if it is unfamiliar. Firstly, they need to avoid obstacles and to find a clear path to walk in their immediate environment (we have termed this micro-navigation [8]). This problem can be addressed remarkably well by a long cane or a guide dog, although travel may never be as easy as for sighted people. Secondly, blind people need to orient and navigate through the larger environment, which may require knowing which street they are on, which way they are facing, where to cross a street safely and so on. We have termed this macro-navigation. Without visual information, this macro-navigational problem can be enormously difficult even in familiar environments and is largely impossible in unfamiliar environments.

The MoBIC Outdoor System (MoODS), a wearable system, has been designed to assist in macro-navigation. It combines GPS and dGPS receivers with an on-board GIS which locates the traveller with reasonable accuracy on the digital map. The map contains not only standard information such as the street layout, house numbers and landmarks but also additional information of particular interest to blind travellers. As MoODS wearers move about, they need to interact with the system, being given appropriate information at appropriate times by a Trip Management System (TMS) to assist them in orientation and navigation. At times information needs to be user-initiated, for example when the user is uncertain of their location. At other times information needs to be system-initiated, for example to give warnings.

The output from MoODS consists of synthetic speech messages which the user receives via headphones which are similar, but not identical, to Walkman-style headphones. The difference from standard Walkman-style headphones is that the MoODS headphones sit in front of the ears and do not cover them; thus they do not obscure auditory information coming from the environment, which is vitally important to blind travellers. The input to the system is via a wrist-worn keypad with eight keys. The system can be worn in a backpack, in the pockets of a jacket or vest, or in a shoulder bag; the final prototype developed in the project weighed approximately five kilograms, with most of the weight being required for batteries. The input/output interfaces and their associated software, in the shoulder bag version of MoODS, were the end-product of an extensive sequence of user requirements elicitation studies and evaluations, which are outlined in the next sections.

User Requirements: from paper to cardboard and plastic prototypes

Classic methods of user requirements elicitation were initially employed, with interviews and focus groups of potential users of the system and related professionals. However, it quickly became clear that everyone (including the design team) had enormous difficulty in imagining what using a MoODS might be like, both in terms of interaction devices and dialogue. This will be the case in any instance where a wearable is developed to perform a new function rather than simply undertake a known function in new, mobile contexts of use. However, potential users may also have difficulties imagining how they might undertake familiar tasks in new contexts.

In the MoBIC user requirements studies it was clear that participants and the design team were falling back on existing artifacts as metaphors for the use of the MoODS: it would be like a mobile telephone or a Walkman etc. While this can be helpful, it also limits the design space which is explored. In the case of MoODS, neither of these artifacts provided an adequate metaphor for the appropriate interaction. A mobile telephone style MoODS would be carried in a pocket and only interrogated when the user thought they required assistance. However, an important aspect of the functionality which the TMS can offer users is the provision of warnings. Contacting users via a phone call may be too slow to provide this information. A Walkman-style MoODS was a closer approximation to an appropriate metaphor of use, but blind travellers rely on auditory information from the environment, and wearing Walkman-style headphones would mask some sounds. In addition, users need to interact with the system more frequently than with a Walkman, so this metaphor provided no basis for the interaction with the system.

Exploring different metaphors of use and trying to invent new ones proved to be a useful method for potential users and the design team to clarify the MoODS design. A second successful method was the use of cardboard and plastic prototypes, the wearable answer to paper prototypes for 2D interfaces. In the case of input to MoODS, an initial prototype of a wrist worn keypad, similar to a watch, was presented to users along with several cardboard mock-ups representing a number of variations. These variations included different sizes for the keypad and different configurations and sizes of keys. Whilst users were not able to fully interact with these low fidelity prototypes they were able to judge what it would be like to wear them and how easy it would be to identify and operate the keys.

A third successful method for establishing the design was the use of simple "mobile Wizard of Oz" studies. For example, to establish the style for the basic navigational messages, a study was conducted with a short, typical inner city route (which involved turning corners, finding an appropriate point to cross a street and finding certain shops). The route was carefully studied and a suggested set of messages prepared. These messages were tape-recorded and potential users then walked the route with a sighted guide, the pre-recorded instructions guiding them from point to point along the route. At each point, the user paused and listened to the next instruction before acting on the message. Users were then asked to comment on the message structure, content and level of detail. In addition to providing information about how to formulate the navigational messages, this exercise also yielded useful information concerning physical interaction with MoODS and what users felt to be important in the design of input and output devices.

Our experience of user requirements elicitation for a wearable computer suggests that at least while wearable and mobile computers remain relatively rare, spending a lot of time and effort on classic user requirements elicitation techniques such as questionnaires, interviews and focus groups is not going to be particularly useful. We found that much more useful and interesting information could be obtained from potential users when they were given some kind of prototype or Wizard of Oz simulation of the system, even if this was only a very low fidelity version of the system or a component of the system. This was clearly because potential users found it too difficult to imagine something completely new and speculate about how they would like it to be - either physically or in its behaviour. However, when they were given simple concrete prototypes, these proved to be an extremely useful starting point for speculative discussions. In retrospect, we think the design team ought to have spent more time discussing the implicit metaphors of interaction (e.g. the mobile telephone and the Walkman) and having "metaphor busting" sessions, to explicitly break away from these existing metaphors and create new metaphors of interaction which might be more appropriate to our particular wearable system. Instead, we may have to wait until users master this particular wearable system and then find new tasks and situations in which to use it. Then we may well see new interaction styles begin to emerge, which as designers we ought to be analysing via a "psychology of tasks", as Carroll, Kellogg and Rosson [1] have proposed, to develop the next generation of designs for this wearable system.

Evaluation methodologies

In order to properly evaluate wearable and mobile computers, appropriate methodologies are needed. Exactly what tasks should be used and what measures are appropriate in such methodologies depends on the nature of the use of the wearable. Such use can be basically "serial" or "concurrent" in nature. In serial use, the use of the wearable alternates with other tasks, so that only one task is undertaken at any one moment. This is typical of applications such as the mobile desktop and online manuals: the user stops other activities to perform a task with the wearable. The more challenging and interesting situation is concurrent use, in which the user wishes to perform two tasks simultaneously, one of which involves the wearable. An increasingly common example of concurrent use of a wearable-like system is people driving cars while talking on a mobile phone (unfortunately, the latter are not usually designed as wearable devices which creates potentially dangerous situations).

In the case of serial use, evaluation needs to cover only the use of the wearable itself in appropriate contexts, and perhaps comparisons with the use of non-wearable equivalents. For example, for a wearable online manual, evaluation would cover the usability and acceptability of the system in situations in which use is proposed, and perhaps comparisons with more traditional situations of use such as consulting a manual in an office or library.

In the case of concurrent use, it is necessary to evaluate not only the use of the wearable itself, but also the other task which is being undertaken, to ensure that use of the wearable does not decrease performance on that task (this is the classic dual task paradigm of experimental psychology). In the MoBIC project the main focus of our evaluations has been on the usability and acceptability of MoODS. For this purpose a range of objective measures (e.g. system interrogations, errors) and subjective measures (e.g. rating scales of usability, learnability, and satisfaction) have been developed. However, in addition, we have also investigated whether MoODS has any adverse effects on the user's concurrent task, that of micro-navigation. For this purpose users are asked to walk specially constructed routes and comparisons are made of their performance with and without MoODS.

Finally, it is also important to ensure that concurrent use of a wearable while undertaking another task does not put unacceptable stress or excessive workload on the users. In evaluations of MoODS we have taken objective and subjective measures of stress and used the NASA Task Load Index (TLX) [5] as a measure of workload.

Evaluation of MoODS

An evaluation of the final prototype of MoODS was conducted as part of the field trials of the MoBIC Project held in Birmingham in the summer of 1996. These took the form of intensive evaluation sessions in which controlled testing of the system and its acceptability took place. Seven blind participants took part, six men and one woman, aged between 18 and 65 years. Six of the participants were congenitally blind and one was adventitiously blind. Participants were required to study two unfamiliar routes using the MoBIC Indoor System (a talking map and route planning system [9, 12]) and then, when they felt sufficiently confident, to go out and walk these routes. In one condition this walk took place using both MoODS and the participant's primary aid (long cane or guide dog); in the other the walk was completed without MoODS and using only the primary aid.

A fully counterbalanced design was employed in which all participants walked one route with MoODS and one route without it. Each route was about one kilometre long and the two routes were matched as closely as possible for navigational difficulty. As MoODS is a wearable computer whose use is concurrent with another task, that of macro-navigation, and as that task is in itself a complex one, a wide range of measures was employed. Performance measures included time taken to walk the route, errors, number of system interrogations of MoODS and percentage preferred walking speed (PPWS). PPWS has been used as an evaluative measure for several electronic travel aids [2]. The rationale underlying the use of this measure is that all pedestrians have a walking speed which they prefer, and this appears to be the speed which, for them, is the most physically efficient. The ability of any mobility aid to allow the pedestrian to walk at this preferred walking speed (PWS) is therefore argued to be a measure of its effectiveness. It has also been suggested that since walking speed may be seen as an index of the degree of stress blind pedestrians experience, PPWS may also be used as a measure of stress [4, 10].
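As a concrete illustration, PPWS can be computed from route timing data as the walked speed expressed as a percentage of the pedestrian's preferred walking speed. The following sketch is ours, not taken from the study; the function name and the sample figures are hypothetical:

```python
def ppws(actual_speed_m_s: float, preferred_speed_m_s: float) -> float:
    """Percentage preferred walking speed: how close the speed achieved on
    a walk comes to the pedestrian's preferred (most efficient) speed."""
    return 100.0 * actual_speed_m_s / preferred_speed_m_s

# Hypothetical figures: a 1 km route walked in 1,000 s gives 1.0 m/s,
# for a pedestrian whose preferred walking speed is 1.25 m/s.
print(ppws(1.0, 1.25))  # -> 80.0
```

A score closer to 100 indicates that the aid allows the pedestrian to walk nearer to their preferred speed, which on the argument above suggests lower stress.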

Measuring psychological stress is not an easy task, and finding a truly objective measure for it is especially difficult. Some studies have attempted this by measuring heart rate. For example, Peake and Leonard [7] examined the heart rate of matched pairs of blind and sighted people when they walked guided and unguided, and concluded that some form of psychological stress was responsible for the high heart rate of blind participants when walking unguided. However, heart rate is subject to change for a variety of reasons, so a subjective measure of stress was also used. The Spielberger State Trait Anxiety Inventory (STAI) [11] asks respondents to describe their stress/anxiety levels using 20 rating scales. It can be used to measure both trait (persistent, long term) stress/anxiety and state (transitory, short term) stress/anxiety, and both measures were used in this evaluation. The trait anxiety measure was used to investigate whether participants were generally anxious people, and the state anxiety measure to assess anxiety just before setting out on a walk and at a point midway during the walk.

When use of a wearable computer is concurrent with another task it is likely that cognitive workload will be increased and the TLX was included to measure this variable. The TLX is a multi-dimensional rating procedure that provides an overall workload score based on a weighted average of ratings on six subscales: Mental Demands, Physical Demands, Temporal Demands, Own Performance, Effort and Frustration. Three dimensions relate to the demands imposed on the subject (Mental, Physical and Temporal Demands) and the other three to the interaction of a subject with the task.
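The TLX's weighted average can be illustrated concretely. In the standard paper-and-pencil procedure, each subscale's weight is the number of times it is chosen as more important in the 15 pairwise comparisons, so the weights sum to 15. The ratings and weights below are invented for illustration, not data from the MoODS evaluation:

```python
# Hypothetical subscale ratings (0-100) and pairwise-comparison weights.
ratings = {"mental": 60, "physical": 40, "temporal": 30,
           "performance": 50, "effort": 55, "frustration": 20}
weights = {"mental": 5, "physical": 1, "temporal": 2,
           "performance": 3, "effort": 3, "frustration": 1}

def tlx_workload(ratings, weights):
    """Overall TLX workload: weighted average of the six subscale ratings,
    where the weights come from 15 pairwise comparisons (so they sum to 15)."""
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in ratings) / 15.0

print(tlx_workload(ratings, weights))  # -> 49.0
```

The weighting step is what makes the index sensitive to which demands matter for a particular task: a subscale a participant never picks as more important contributes little to the overall score, whatever its rating.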

The perceived ease of use, efficiency, effectiveness, learnability and satisfaction levels with MoODS were investigated via a number of 7-point rating scales and open-ended questions. Participants were also asked to rate how conspicuous they felt wearing MoODS, as this had been an issue raised during the user requirements studies. It is clearly important that a system is usable and that it is not difficult to learn how to use it, but it is equally important that users enjoy using it and are satisfied with it. On the surface, it would seem that if a system is easy to use, learnable, effective and efficient then any user will find it satisfying to use, but this is not always the case. For instance, Nielsen and Levy [6] note that there is often no correlation between subjective preference ratings and performance.

A summary of the measures used in the evaluation of MoODS and the findings for these measures is given in Table 1, below. The only significant differences between navigation with and without MoODS were that the majority of participants achieved closer to their preferred walking speed with MoODS and that the perceived mental load of the task was lower when walking with MoODS. MoODS also received neutral or significantly positive ratings on the preference scales.

Table 1: Summary of measures and findings from the evaluation of MoODS

Performance measures:

Time taken to walk route: no significant difference with and without MoODS

Errors made during walk: no significant difference with and without MoODS

Percentage preferred walking speed achieved: 4 out of 6 participants achieved closer to their preferred walking speed with MoODS than without MoODS (data from the final participant could not be analysed)

Stress-related measures:

Heart rate: significantly higher during the first third of the walk, whether with or without MoODS

Trait anxiety: measured before setting out on one of the walks; relatively low mean scores

State anxiety: no significant difference with or without MoODS, either before or during the walks

Perceived cognitive load (TLX): mental demand significantly lower when walking with MoODS

Preference measures:

Ease of use: significantly higher than neutral rating

Remaining preference scales (efficiency, effectiveness, learnability, satisfaction and conspicuousness): two showed trends towards higher than neutral ratings; the other three received neutral ratings


The practical constraints of conducting an evaluation which required training blind people in the use of a completely new technology meant that it was only possible to involve a relatively small number of participants. Thus it is difficult to draw any definitive conclusions about the evaluation methodology itself. However, all of the measures contributed to our understanding of the use of MoODS. We believe that a number of new measures have been introduced in the evaluation which have potential in the evaluation of many wearable systems.

Appropriate iterative user-centred design lifecycles for wearable technologies need to be developed. These can build on mainstream HCI, but need to consider the particular characteristics of wearables. In the elicitation of user requirements, particular attention needs to be paid to developing appropriate metaphors for the devices, and not relying on existing and possibly inadequate metaphors. In the evaluation of wearables, the different possible types of use of the wearable (serial or concurrent) need to be considered when developing evaluation methodologies and measures.


The MoBIC Project (TP 1148) was supported by the Technological Initiative for Disabled and Elderly (TIDE) Programme of the European Union (DG XIII). The project partners are: BT, U.K.; F.H. Papenmeier GmbH, Schwerte, Germany; Free University of Berlin, Germany; Universität Magdeburg, Germany; Royal National Institute for the Blind U.K; University of Birmingham U.K.; University of Hertfordshire, U.K.; Uppsala University, Sweden. We would like to thank the MoBIC team and the many people who participated in the MoBIC user studies.


[1] Carroll, J. M., Kellogg, W. A. and Rosson, M. B. (1991). The task-artifact cycle. In J. M. Carroll (Ed.), Designing interaction. Cambridge: Cambridge University Press.

[2] Clark-Carter, D. D., Heyes, A. D., & Howarth, C. I. (1986). The efficiency and walking speed of visually impaired people. Ergonomics, 29(6), 779 - 789.

[3] Epstein, W., Hughes, B., Schneider, S.L. and Bach-y-Rita, P. (1989). Perceptual learning in an unfamiliar modality. Journal of Experimental Psychology: Human Perception and Performance, 15, 28 - 44.

[4] Heyes, A. D., Armstrong, J. D., & Willans, P. R. (1976). A comparison of heart rates during blind mobility and car driving. Ergonomics, 19, 489 - 497.

[5] National Aeronautics and Space Administration (NASA), Ames Research Center, Human Performance Research Group. NASA Task Load Index. Version 1.0: Paper and pencil package. Moffett Field, CA: Author.

[6] Nielsen, J., & Levy, J. (1994). Measuring usability: preference vs. performance. Communications of the ACM, 37(4), 66 - 75.

[7] Peake, P. and Leonard, J.A. (1971). The use of heart rate as an index of stress in blind pedestrians. Ergonomics, 14(2), 189 - 204.

[8] Petrie, H. L. (1995). User requirements for a GPS-based travel aid for blind people. In J.M. Gill and H. Petrie (Eds.), Proceedings of the Conference on Orientation and Navigation Systems for Blind Persons. Hatfield: University of Hertfordshire.

[9] Petrie, H., Johnson, V., Strothotte, T. Raab, A., Fritz, S., and Michel, R. (1996). MoBIC: designing a travel aid for blind and elderly people. Journal of Navigation, 49(1), 45 - 52.

[10] Shingledecker, C. A. (1978). The effects of anticipation on performance and processing load in blind mobility. Ergonomics, 21, 355 - 371.

[11] Spielberger, C.D. (1983). State-Trait Anxiety Inventory for adults: Sampler set, manual, test and scoring key. Palo Alto, CA: Mind Garden.

[12] Strothotte, T., Fritz, S., Michel, R., Raab, A., Petrie, H., Johnson, V., Reichert, L. and Shalt, A. (1996). Development of dialogue systems for a mobility aid for blind people: initial design and usability testing. Proceedings of ASSETS ‘96: The Second Annual Conference on Assistive Technology. New York: ACM Press.

[13] Available from Visionics Corporation of Minneapolis, MN, U.S.A. Homepage available at: http://www.visionics.com/


Developing Scenarios for Mobile CSCW


Steinar Kristoffersen1) , Jo Herstad3), Fredrik Ljungberg2), Frode Løbersli1), Jan R. Sandbakken1), Kari Thoresen1)


1) Norwegian Computing Centre, Postboks 114 Blindern, N-0314 Oslo

2) Viktoria Research Institute, Box 620, 405 30 Gothenburg, Sweden

3) University of Oslo, Postboks 1080, Blindern, 0316 Oslo, Norway



This paper presents a scenario-based approach to designing mobile applications. Based on empirical studies of consultants in a maritime classification company, a set of scenarios was developed. The scenarios are used for assessing current mobile platforms, as well as pointing to new design possibilities for the organisation concerned. Appraising some solutions for different scenarios, we found that the current trend of simply making the desktop smaller is not sufficient. Mobile computing and wireless networks cannot match the performance of stationary technology. At the same time, work is usually organised according to the capability of the desktop. Thus, new metaphors and human-computer interaction techniques are needed to improve the design of mobile computing.


The objective of this paper is to develop scenarios for mobile CSCW, based on empirical studies of work in a maritime classification company, Det Norske Veritas (DNV). In order to achieve this, mobile informatics is set apart as a research area distinct from informatics and IS. On the basis of the scenarios, we suggest a suite of new applications for mobile computing.

Mobile computing is currently reaching a first level of maturity. Industry interest is formidable and the adoption rate is likely to continue to increase (IEEE Internet Computing 1998, http://www.selectsurf.com/computers/hardware/mobile/).

Some popular platforms are emerging, for instance Windows CE; many of which simply denote a miniaturisation of the "desktop". This does not realise the full potential of mobile computing, since mobile work and IT use differ significantly from other settings. Traditional office-based metaphors, for instance "files and folders", references to stationary computer equipment, for instance "my computer" and spatial metaphors, such as "rooms", may not be suited for mobile work.

Mobile platforms generally have less bandwidth and processing power, whilst work itself is shaped by the performance of stationary computing. This maintains a capability gap between application requirements and the environment. It seems unfortunate, therefore, that the "desktop" has become the dominating design metaphor for mobile devices. This argument about the distinctive nature of mobile computing and IT use is elaborated below.

We believe that mobile computing design needs new metaphors and concepts. One possible approach to achieving this is the use of scenarios. Scenario-based techniques have received some attention recently, and they are usually used to explore future aspects of a phenomenon.

This paper describes and reflects on the use of scenarios as vehicles to transcend current constraints in mobile application design. This paper offers a framework for discerning relevant scenarios. It constitutes a novel contribution to designing HCI for mobile devices, and several essentially new applications are proposed for the example domain.

The next section argues that mobile informatics is indeed significantly different from stationary IT use, and this argumentation feeds directly into developing the scenario framework in the following section. A subsequent section discusses the findings from our empirical studies, and another describes the rationale and framework for developing scenarios in the DNV case, focusing on the working situations of mobile consultants at DNV. The scenarios are then presented, and aspects that should be brought to bear on mobile designs are elicited and analysed. Pointers to future research based on these contributions conclude the paper.

The spirit of mobile IT use

The problems investigated by this research are intractable and interesting aspects of mobile work. This does not mean that they are limited to specific organisational forms or technologies, however. This section elaborates on some problems and possibilities of mobile IT use to clarify this perspective.


This paper presents a research effort to explore the possibilities of seamless support for interactive multimedia. Within IMIS, we collaborate with, among others, a Norwegian maritime classification company, Det Norske Veritas (DNV). DNV, established in 1864, is organised as an independent, autonomous foundation. It has 4,400 employees and 300 offices in 100 countries. Employees come from 67 different nationalities.

DNV is one of the world’s leading maritime classification societies. The objective of its activities is to safeguard life, property and environment. It provides three types of services:

Classification, which is to develop and maintain rules and standards for safe ships, offshore drilling and production units. DNV also verifies compliance with these rules during design, construction and operation.

Certification. DNV is accredited to certify companies with respect to different standards, for example ISO 9000. The main difference from classification is that the certification is grounded in standards developed by organisations outside DNV, typically government agencies.

Advisory services. DNV gives advisory services regarding technical solutions, training and safety, environment and quality management. It is within this section that we have started doing exploratory design for mobile computing.

DNV guidelines, products and services are offered to customers by geographical divisions. The consultants thus meet customers regularly at their sites, sometimes quite far away from the Home Base Unit (HBU). Mobile work is simply necessary for a global company whose business is mainly location-dependent. After all, ships and oilrigs cannot be inspected "long distance".

Use situations

It is important to be aware of the mission-critical nature of mobile IT-use. When people leave their home base to work remotely, for instance at a customer’s site, it is because they have to. It may be that the objects of their work are not accessible from home, for instance as when certifying maritime installations. It could also be the case that the work product needs to be tailored to specific requirements and demands within the user’s physical organisation. This is the case for DNV advisory services. The crucial observation is that in many cases, if the mediating technology fails to work properly, it is often not possible to do any work, and, additionally, such breakdowns often alienate working partners and customers.

Mobile applications are often described as providing a transparent "place" of work; mobile work can be performed regardless of location. Interestingly, mobile work often means not disregarding place, but rather the complex encounter of multifarious new places and faces. Offering support for communication and work with reference to such particulars is therefore likely to be more beneficial than the inverse.

Several other problems may be experienced in mobile work (Kristoffersen and Ljungberg 1998; Kristoffersen and Ljungberg 1998). We found, e.g., that planning is often impossible, since work is mobile for the very reason that not all constraints and possibilities can be discerned in advance. If they were, the need for mobility would be severely reduced.

This brief analysis also suggests that mobile work renders virtually useless some of the mechanisms usually used for navigating in one's local habitat (Wynn 1984). Without visual and practical contact with the office environment, it becomes more "unknown". Most people have probably tried to direct someone over the telephone to find a paper in their own office; usually it is not an easy task to achieve. Most people will, however, be able to find the same paper almost as soon as they cross the doorstep of their office. The physical environment of work facilitates access to objects as well as to logical arrangements, the flow of work, important resources and people.


Mobile work is of course linked to the technology by which it is mediated. In a global, fast-paced organisation, distributed work cannot take place without communication technologies. Many traditional organisations, on the other hand, exhibit the same idiosyncrasies, albeit perhaps in less pure forms.

For the problem of this paper specifically, we depart from state-of-the-art mobile computing, such as a Windows-based palmtop or laptop computer, a combination of pen and keyboard input and remote access to home base units via modem.

For mobile technologies, it is a weighty argument that in almost any mobile environment, connections are going to be unpredictably unreliable. Buildings and weather may cause interference, and parts may get lost or turn out to be incompatible with the remote network.

A related problem is that mobile information and communication technologies often have sub-optimal performance. This is not a judgement of inferiority as such, simply an assertion that most organisations optimise work processes with reference to state-of-the-art desktop applications, just as software vendors maximise the functionality of software to the best processors available. One example from the DNV case is that even powerful laptops cannot practically replicate the complete customer databases, although these machines today are more powerful than the ones used to install the databases at HBU originally. Even if mobile devices become more powerful and wireless networks increase their capacity, there is little evidence that they will surpass the trajectory of increasing power within the fixed computing environment.

Mobile technologies usually offer exactly the same addressing schemes and directory services as desktop technologies do, but in this new context that may just be too weak. The reason is exactly the point made previously, that mobile activities are location-dependent by nature. Using the mobile phone as an example, would it not be convenient to be able to call the car in front?

Mobile computing research

The current body of research in mobile computing is vast and diverse, and the majority is concerned with resolving technical issues in communication and data processing (MobiCom 1997). However relevant, this research will not be drawn upon in the remainder of this paper, which focuses on application categories and functionality pertaining to a particular case.

Some examples can be found of literature that deals with the requirements of mobile workers at an application level, e.g. (Dix and Beale 1996). In this volume, Mitchell discusses aspects of CSCW (Computer Supported Co-operative Work) for the mobile teleworker. He points to three ways in which mobile technologies have affected mobile workers:

They can go straight to where the customer is, without articulating work through a central office (HBU);

they can return home from work directly, without reporting back to HBU; and

they can, and may have to, deal with contingencies and new requirements in the 'field'.

Focusing on the problems of retrieval and synchronisation, Dix and Beale discuss how pre-planning, caching and updating can be realised for mobile applications. Interestingly, they also assert that the primary barriers to teleworking seem to be managerial and social, rather than purely technical (Dix and Beale 1996b). This implies that mobile computing should be considered from a use perspective as well as in terms of technical achievements.

Dearle (1998) denotes mobile IT as a whole new computational paradigm, in which processes migrate with users. Separating this paradigm from stationary computing, Dearle identifies the needs of this paradigm to provide mobility of views, processes, channels, code and state. He does not link these technical requirements with the particular needs of the users in the wider context of mobile work, however. In order to start dealing with these aspects of mobile computing, we introduce a research agenda called Mobile Informatics.

What is "Mobile Informatics"

This paper builds on the argument that mobile work and IT use are significantly different from their stationary complement. The existing literature goes some of the way towards proving this point.

Some of the technical literature in this area concentrates on various types of mobility, such as user-, terminal-, and application-mobility (Thanh 1997). Another useful concept is session-mobility, which denotes the capability of migrating potentially active conference-objects around the network (Kristoffersen 1997).

Alternatively, one could focus on the people engaging in mobile activities, and differentiate between highly mobile, slightly mobile and stationary work. This typology captures the intensity of mobility within the work. Capturing the range of mobile workers, on the other hand, one could distinguish between local, regional and global mobility. Both categorisations are meaningful in the context of the DNV case.

There is, however, more to mobility than simply moving. It is often useful to distinguish between work that is mobile by nature, and the technology which may (or may not) support it: mobile media. There are different reasons for mobility as well, so travelling for business or pleasure belongs to a different category from transportation (of goods, or simply applying a "state-transition" perspective on mobility as getting from one place to another). In this larger picture, we also wish to include nomadic use of stationary technology, which sometimes, but not necessarily, entails using mobile technology.

Based on the preceding discussion we propose the term Mobile Informatics to denote the information and communication aspects of IT use in mobile work.

Scenarios for Mobile Informatics

Generally, we see scenarios as a useful tool for outlining problem spaces, which are present, but sometimes undetected, and for exploring new solution spaces in a radical fashion. Scenarios are becoming an established way of anticipating and describing future use of computer systems.

Bardram describes how collaborative scenarios were used in the re-design of a Hospital Information System in Danish healthcare (Bardram 1998). They were used to support the creative, non-reducible aspects of design. Based on a definition of a scenario as a concrete description of activities that the user engages in when performing a specific task, Bardram claims two benefits of scenarios: First, they are vehicles for supporting the creative meeting between users and designers. Second, they indicate the usefulness of a system against the background of work practices within the organisation.

The greatest difference between scenario-based design and traditional specification is that scenarios tend to be concrete, whilst functional specifications, for example, tend to be abstract. Certainly there is no mutually exclusive relationship between the two; one main issue in using scenarios is how to translate the verbose, concrete description of a scenario into a precise, logical design (Carroll et al. 1994). No explicit attempt to resolve this will be made in this paper.

One potential weakness of scenario-based techniques for the design of interactive systems is that the user is not usually given the opportunity to interact with the scenarios (van Harmelen 1989). In our case, we seek to resolve this by exposing user representatives to the scenarios and engaging them in the validation and design process.

The selection of scenarios from a wide space of possibilities is another important issue that needs to be taken into consideration. Young and Barnard (1991) propose to distinguish between signature tasks, which have been deliberately chosen to match the capabilities of the target domain, and paradigm tasks, which have been thoroughly analysed and understood on those terms. Rather than relying on the maturity of the analysis for selecting paradigm scenarios, this paper proposes a selection based on the findings from preliminary empirical studies.

The main rationale for using scenarios in Mobile Informatics is to resolve the dilemma of the "desktop metaphor" in mobile computing: small machines and wireless networks are always slower, but the organisation of work is usually adapted to superior performance. This is not a negative performance gap as such, since most mobile devices have ample processing power. We choose to think of this as a positive design gap: current design metaphors prevent applications that fulfil the potential of Mobile Informatics, because they have not taken the particular requirements of mobile work and IT use seriously. Mobile applications are all too often designed as miniature desktop systems. Thus, neither organisation nor technology is optimal for mobile work.

Empirical findings

Developing and discussing scenarios is a particularly fruitful approach for exploring the space of problems and solutions for a project (Schwartz 1992; Wired 1997). This project’s current objective is to investigate the possibilities of supporting DNV mobile consultants in their work. One important part of this endeavour is to implement and evaluate prototypes for mobile work. Technical contributions should build on a clear understanding of the existing application domain and its users. Preliminary studies have been carried out within the IMIS-DNV project, and a more focused, deep effort is planned to design and evaluate a production system.

There is, currently, a need to narrow the scope of the project. Design is about envisaging future IT-use. The scenarios are part of this process, as instruments for finding out what the project’s prototypes should try to achieve, how and for whom.

This section of the paper explains the scenario framework, and relates its components to findings from the preliminary empirical investigations. The scenarios are intended to cover realistic, yet potentially futuristic technological possibilities. On the process level, they explore several different organisations and cultures of work, since technological and institutional changes tend to be related. DNV is a large enterprise that influences as well as adapts to trends in society and working life, thus, we have also attempted to include a wider set of issues, such as an increasingly mobile lifestyle, flexible institutions, globalisation and a networked organisation of work.

On the basis of fieldwork, we have tried to determine which dimensions of the potential space of problems and solutions are most important.

The empirical study comprises interviews with five respondents and one day of observing a consultant at work with a customer. We have focused on job situations, in the sense of activities that are found in all the transcripts and are recurring.

Some job situations are described in more detail below. The purpose is to provide examples to illustrate the different scenarios, and to ground our suggestions in real-life activities.

Four typical job situations emerge from the present empirical material: 1) Contacting co-workers, 2) Scheduling, 3) Document handling and 4) Information management.

Contacting co-workers

This situation occurs whenever consultants need an answer from other people, or they have to inform another person about relevant issues. It can be carried out in a number of different modes. Contacts may be synchronous or asynchronous. Technology often plays a mediating role. Contacts may be single or multiple. Single contacts occur when e.g. a mobile worker contacts a secretary to inquire about messages, while multiple contacts may be of the broadcasting type: "does anyone know how to deal with this problem?"

The type of information exchanged may vary from brief exchanges of plain text/voice to complicated drawings, etc. Different types of information have different degrees of urgency. Contacts may be directed externally towards customers, suppliers, etc, or internally to DNV co-workers. Accessibility is another key issue. People travel, work part-time and are away in meetings—all of which influence the ways in which they may be contacted.

The need for information for mobile workers can only partially be planned. Necessary files, such as plans, agendas, presentations, etc., may be downloaded on a laptop before travelling. However, people forget and they may discover too late that an important file is missing. Moreover, the remote situation’s requirements can only be fully revealed upon arrival. Thus, the need for information may be highly unpredictable: "you never know when you will need that particular piece of information".

Scheduling meetings

Configuring new situations is an integral part of mobile work. It is more complicated to schedule a meeting when the group is not co-located and the process of developing scenarios should take these issues into consideration.

To schedule a meeting requires negotiations between the people who are involved. It often involves rescheduling of people’s appointment as well as negotiated sharing of resources such as equipment and meeting rooms.

The scheduling task increases in complexity with the number of persons involved. Negotiations require simple and rapid feedback and may sometimes require lengthy rescheduling.
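The core of such a negotiation, finding times at which every participant is free, can be sketched as a set intersection over candidate time slots. The participants, slot labels and availability data below are invented for illustration; a real scheduler would also have to negotiate rooms and equipment, as noted above:

```python
# Hypothetical availability: each person's set of free hour slots.
free = {
    "consultant": {9, 10, 11, 14, 15},
    "customer":   {10, 11, 13, 14},
    "surveyor":   {11, 14, 16},
}

def common_slots(availability):
    """Slots at which all participants are free: the intersection of
    everyone's availability sets."""
    people = iter(availability.values())
    result = set(next(people))
    for slots in people:
        result &= slots  # each extra participant can only shrink the options
    return sorted(result)

print(common_slots(free))  # -> [11, 14]
```

Each additional participant can only narrow the intersection, which illustrates why scheduling grows harder with group size and why rescheduling negotiations become lengthy when the intersection is empty.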

Document handling

Producing documents and transmitting them is a characteristic activity of knowledge-based work. Depending on the type of document and the production process, the need for equipment and connections varies.

The task may vary in degree of urgency. Documents may be aimed at both internal and external receivers, and the types of documents may range from simple text-based memos to possibly large and/or complex graphical or multimedia presentations.

Similarly, the reliability of the information may vary, depending upon e.g. whether handbooks are updated.

Information management

This job situation differs from the first in that one distributes information to a group of people, not only to a specific co-worker. To gather information in this context means to search for information in archives, databases, regulations, etc. without the involvement of another person.

On the distribution side, the size of the group is of importance. To reach a large group may require access to directories. Furthermore, the stability of the group points to the need for meticulously updated directories in case of changes, and for easy re-establishing of group membership. The need for feedback may also be important, for instance when the distributed information calls for hearings, or concerns changes of dates and time for specific events. As in previous situations, the distributed information may be directed internally and externally as well as mixed, which calls for security considerations.

Developing scenarios

The scenarios can be categorised according to the organisation of information resources (databases as well as people), the infrastructure and the fundamental mode of work. This typology constitutes a framework for developing scenarios for mobile computer supported co-operative work. It is important to be aware that, although this framework is empirically informed, its combined elements will only partially represent actual work situations. Creating scenarios is a matter of design rather than reporting. The following figure shows the relationship between key aspects of working situations at DNV (presented above) and these three dimensions.

Figure 1: The empirical argument

The following sections describe the selected dimensions in more detail:

Centralised or decentralised information resources

At one end of this dimension, the information resources pertaining to a mobile situation, databases as well as co-workers, can be found in one "authoritative" location, the HBU. At the other extreme, these resources are completely decentralised and under the control of their local owners.

Connected or disconnected

This dimension represents the available infrastructure: whether the mobile consultant can connect synchronously to the information resources, or updates have to take place asynchronously. In most cases of synchronously connected infrastructures, the supporting systems will have to handle graceful degradation towards asynchronous connectivity, since a mobile environment cannot guarantee a permanent connection.
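The graceful-degradation requirement can be sketched in code. The following Python fragment is purely illustrative (the class, its methods and the send callback are our assumptions, not part of any DNV system): updates are delivered synchronously while the connection holds, and are queued for later asynchronous delivery otherwise.

```python
from collections import deque

class DegradingClient:
    """Illustrative sketch of graceful degradation from synchronous to
    asynchronous connectivity; not taken from any actual DNV system."""

    def __init__(self, send):
        self.send = send        # delivers one update; may raise ConnectionError
        self.outbox = deque()   # updates awaiting an asynchronous flush
        self.connected = True

    def update(self, item):
        if self.connected:
            try:
                self.send(item)             # synchronous path
                return
            except ConnectionError:
                self.connected = False      # degrade, but keep the update
        self.outbox.append(item)            # asynchronous path

    def reconnect(self):
        """Re-establish the connection and flush queued updates in order."""
        self.connected = True
        while self.outbox:
            self.send(self.outbox.popleft())
```

The essential property is that an update never fails outright: a lost connection merely shifts delivery from the synchronous to the asynchronous path.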

Co-operative work or individual work

At one end of this dimension, we can conceive of an organisation of the enterprise that encourages mobile associates to work more independently, i.e. as autonomous agents toward customers. At the other extreme, co-operative working arrangements involving several other people within DNV (or even from the ‘outside’) could be the preferred mode of operation.

The following figure graphically represents this potential space of DNV mobile applications.

Figure 2: The scenario framework

This framework provides eight reference points in a three-dimensional space of mobile situations (in between which there are obviously continuous transitions of situation types). In the remainder of this paper we elaborate these eight scenarios, and elicit design alternatives for each.
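Read as a simple combinatorial structure, the framework is just the cross product of three two-valued dimensions. A small Python sketch (our gloss on the framework, not tooling from the project) makes the eight reference points explicit:

```python
from itertools import product

# Axis labels are taken from the framework above; the enumeration itself
# is only a convenient restatement of the three-dimensional figure.
AXES = {
    "information resources": ("centralised", "decentralised"),
    "infrastructure": ("connected", "disconnected"),
    "mode of operations": ("co-operative", "individual"),
}

def scenario_space():
    """Return the eight (resources, infrastructure, mode) reference points."""
    return list(product(*AXES.values()))

for point in scenario_space():
    print(point)
```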

Scenarios for mobile CSCW

The following table summarises the scenario proposals coming out of the framework suggested above:




Information resources | Infrastructure | Mode of operations | Scenario
----------------------|----------------|--------------------|------------------
Centralised           | Connected      | Co-operative       | 1. Satellite
Centralised           | Connected      | Individual         | 2. Operator
Centralised           | Disconnected   | Co-operative       | 3. Fighter pilot
Centralised           | Disconnected   | Individual         | 4.
Decentralised         | Connected      | Co-operative       | 5. CyberGroup
Decentralised         | Connected      | Individual         | 6.
Decentralised         | Disconnected   | Co-operative       | 7.
Decentralised         | Disconnected   | Individual         | 8.

Table 1: List of scenarios

Satellite

In the satellite scenario, the group of people working together are mainly preparing and working towards the customer from HBU. When mobile consultants go out to work at customers’ sites, the remaining members of the group at HBU provide real-time support.

This scenario combines a centralised organisation of information resources with a perspective of the work as mainly co-operative. We also assume that the mobile DNV associate can access the common information resources synchronously, albeit with a minimal risk of losing the connection temporarily.

In this scenario, the physical headquarters at Høvik (HBU) exists exactly as it does today and the consultants are employed by DNV; thus they have their permanent workspaces at HBU. Mobile associates, as a rule, spend some time every day at HBU. All technical documentation and databases are stored at HBU, and can be accessed either by librarians or, electronically, from a trusted access point. Very few people have their own private offices; instead, most associates are organised in teams working toward defined segments of the market, and they usually share an open work space. Some prefer cubicles; others do their work around larger tables.

Within the group, people specialise in different areas of customers’ business, such as management, software development, production, etc., or they have different competencies relating to methodologies and products offered by DNV. Finally, some are mainly advisors, some are assessors and auditors, and some do project management, documentation and implementation on behalf of the customer.

The following is a description of a work situation as it could have been observed in the Satellite scenario. It involves a consultant, Albert, working with a customer called AkSoft to improve their quality management system.

Albert arrives at AkSoft’s offices and, together with the CEO, the software quality manager and several senior engineers, gathers in a meeting room to continue planning for the Assessment phase of an ongoing certification. Albert’s presentation of CMM and BOOTSTRAP is well conducted, and the pros and cons of each methodology are well presented. When the CEO tells Albert that TickIT needs to be considered as well, the meeting grinds to a halt.

Albert does not know TickIT. The preparations could all have been a waste of time, unless something can be done to bring TickIT into the discussion reasonably well, and fast.

DNV has developed templates for this sort of evaluation and planning presentation, but they have to be adapted to the circumstances by a person who knows the topic well. Therefore, Albert has to get in touch with members of the group who know TickIT, because he needs their help in adapting the presentation.

In similar situations previously, Albert has used the customer’s computer network to access the DNV Extranet Server, on which dedicated software makes it possible to run any application as an applet within a web page. With a little help over the mobile phone from the relevant DNV associates, he has usually been able to tailor the presentation and learn enough about the case or method to do the job. Assistance-on-demand is one of the core support systems for DNVQA associates, since their work is too complex and too mobile for all of them to always know everything.

Unfortunately, because of security concerns, AkSoft are not on the Internet, so Albert cannot access the full set of DNV applications. This does not mean that he is unconnected, however.

Albert connects through the GSM net, inserting a PCMCIA card into his Personal Digital Assistant (PDA) and phoning up HBU with his mobile phone. He invokes an "awareness client" which uses a combination of voluntarily (but automatically) compiled activity logs and "active badges" to find out what the people in his group are doing at the time. Fortunately, he locates two associates with the relevant competencies: Beatrice, who is the maritime software expert in the group, and Charlotte, who is a TickIT assessor. The awareness client summons Beatrice and Charlotte to a teleconference using the simple conference application (SCA). SCA seamlessly adapts to various performance levels, so Beatrice and Charlotte can participate from their workstations, even though Albert is only on a PDA.

The PDA cannot run the presentation manager used by DNV, and it is not likely that it could have stored and processed the complete template specifications anyway, because of their size. It also cannot run the tools necessary to adapt the template to a specific customer. On the other hand, it is well suited to mediate the tailored version, since that is likely to be smaller. Thus, Albert asks Beatrice and Charlotte to prepare the presentation, on their workstations. It is quite a complicated procedure, since Albert has the knowledge of the project, whilst Beatrice is the expert for the market segment and Charlotte is the TickIT assessor, and the output is, necessarily, a co-operative product.

It could have taken too much time to do this during the meeting with the customer representatives. However, since SPI is (at least) just as much about organisational learning as about process specifications, Albert connects the audio output to the loudspeakers in the room and projects the screen of his PDA onto the whiteboard using the overhead projector.

The work proceeds smoothly and after a while the presentation of TickIT assessments, specific to the needs of AkSoft, is finished.

Now the next, albeit slightly less critical, problem is that the new presentation should be printed to slides and paper. The PDA cannot print directly to any printer, since it would be impossible to anticipate, and carry, the drivers and cables necessary for whatever platforms the customers might have.

Fortunately, there is a PC on the AkSoft LAN and Albert connects to it using his PDA connection toolkit. He uses Mobile File Transfer from the PDA and downloads the finished slides onto the PC in PDF format, which can be printed virtually anywhere. The AkSoft PC does not have an Acrobat Reader installed, but Albert always carries one in a suite of useful CP tools stored on his PDA.

Printing and distributing the slides takes only a few minutes, and soon the meeting proceeds to discuss the TickIT approach compared with CMM and BOOTSTRAP. Since the AkSoft participants at the meeting peripherally participated in the co-operative effort to compile an AkSoft-specific presentation of TickIT, it does not take long to reach a conclusion.

This scenario shows the richness of the scenario approach, and many implications for mobile CSCW design can immediately be drawn from it.

The co-operative, centralised and permanently connected organisation of work would typically benefit from flexible, synchronous CSCW. This scenario, moreover, indicates the need for software solutions that take into account the unstable connections and limited bandwidth of mobile computing.

Rather than describing each scenario in full detail, the remainder of this section briefly presents scenarios 2-8 and points to preliminary design implications.

Operator

This scenario involves individual work as the prevalent mode of operation. Resolving a need for assistance with common, shared information resources, the mobile consultants have to remotely access and update items at HBU databases.

The most important difference between this scenario and the satellite is that the consultant would have to rely on common (and authorised) information resources, rather than interacting with colleagues.

Many organisational memory approaches today are aligned with this scenario, in which consultants remotely access and update information (without co-operation) through a shared information space.

Fighter pilot

Similarly to a fighter pilot, the mobile consultant of this scenario prepares for work in a co-operative mode at HBU before going out to the customer. Whilst in the field, since a mobile connection cannot be assumed, people are on their own until they return to HBU and can be debriefed for the benefit of the group.

The orientation of this scenario towards co-operative work needs to be realised in sessions before and after the mobile work takes place.

For the "debriefing" meetings, synchronous and co-located meeting support systems would certainly be useful.


This scenario is very close to the common organisation of work at DNV today. Information resources are found at HBU, each consultant prepares individually and cannot connect again during the working session at the customer’s site. The endeavour is mainly individual and other people are likely to be educated about the case only if they ask.

The most important difference between this scenario and the second is that relevant resources are not available when mobile.

Properly replicated databases with an intelligent pre-fetching and synchronisation scheme are one viable design alternative for this scenario.
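One way to picture such a replication scheme is sketched below in Python. All names are hypothetical; the point is only the shape of the design: copy the items likely to be needed before travelling, buffer edits while disconnected, and reconcile on return to HBU.

```python
class ReplicatedStore:
    """Hypothetical sketch of the replicate / pre-fetch / synchronise
    alternative for the disconnected scenario."""

    def __init__(self, master):
        self.master = master    # dict standing in for the HBU-side database
        self.replica = {}       # local copy carried into the field
        self.dirty = {}         # local edits made while disconnected

    def prefetch(self, keys):
        """Before travelling, copy the items expected to be needed."""
        for k in keys:
            if k in self.master:
                self.replica[k] = self.master[k]

    def read(self, key):
        # Local edits shadow the replicated value.
        return self.dirty.get(key, self.replica.get(key))

    def write(self, key, value):
        self.dirty[key] = value  # buffered until the next synchronisation

    def synchronise(self):
        """Back at HBU: push local edits, then refresh the replica."""
        self.master.update(self.dirty)
        self.dirty.clear()
        self.replica.update(self.master)
```

An "intelligent" pre-fetcher would choose the key set automatically (from agendas, plans, past usage and so on); here it is simply passed in.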


In the CyberGroup scenario, information is distributed amongst the mobile consultants, and most people are working elsewhere. Thus, HBU’s size is reduced and its role is mainly to facilitate the virtual groups. People within these tightly coupled groups depend on each other’s support, and therefore they are continuously connected. Information is under local control, and is placed into a context through negotiation and mutual support.

The distributed nature of information and human resources is a significant characteristic of this scenario. It is oriented towards co-operative work, however, and technology is needed to ensure the potential of information sharing and collaboration in the field.

Since members of the cybergroup need continuous connection with co-workers, whilst also interacting with customers, the mobile application should be based on principles of ubiquitous computing.


This scenario is similar to the previous one, except that work is mainly individual. Thus, information will have to be accessed in a traditional database fashion, possibly with only a limited set of automatic constraints imposed by the owner. It is similar to the Operator scenario, but with information resources distributed rather than centralised.

Albert’s presentation at AkSoft could, in this scenario, only be "saved" if it was possible for him to locate and negotiate access to necessary programs and templates in the distributed space of resources.

Mobile agents that search and retrieve information on behalf of users could be considered for this scenario. Since work, in this case, is mainly individual, agents should also be able to negotiate access.


Networking is decentralised and disconnected during mobile sessions. In this scenario, mobile DNV consultants rely on formal and informal personal networks of contacts to provide assistance before and after sessions. One can imagine that such networks will consist of external consultants as well as DNV associates, loosely coupled together by their ability to provide mutual support. In the network, people know each other’s abilities.

Since a mobile connection cannot be assumed in the field, this scenario relies on technological support for informal networking between and after sessions. Perhaps a media space type of application could provide a useful medium for maintaining informal connections in the network.


In this final scenario, human and technical resources are distributed unevenly among very loosely coupled consultants, without participation in a co-operative network. Technical connectivity between sessions would have to be improvised, and it is not unlikely in this scenario that information is bought and sold between people who do stand-alone projects for DNV.

One application category that should be considered for this scenario is electronic commerce, which would enable agents in the market to trade business and information regarding DNV advisory services.

Conclusions

This paper suggests a novel approach to designing applications and user interfaces for mobile computing. Based on a brief empirical investigation, scenarios spanning a relevant space of problems and solutions were proposed and discussed. The analysis indicates that new metaphors and technical capabilities are needed to move beyond the "mobile desktop". In this first examination, a range of applications from ubiquitous computing to electronic commerce is proposed, all of which represent new perspectives on mobile work for the organisation involved.

The primary use of the results reported in this paper is to inform and inspire design considerations within the IMIS-DNV project. On a more general level, however, the scenario framework described above is offered as an instrument for developing and reflecting on similar scenarios for other projects involving mobile computer-supported co-operative work.

Future research

This paper introduced scenarios as a design tool, to inspire a discussion about current and future concerns of mobile informatics. Within the IMIS projects we are currently adapting and evaluating traditional office support systems in a mobile environment. In the next step, "mobile-aware" applications will be designed according to selected scenarios.

References

Bardram, J. (1998). Scenario-Based Design of Co-operative Systems. Third International Conference on the Design of Cooperative Systems (COOP'98), Cannes, France.

Carroll, J. M., R. L. Mack, et al. (1994). Binding Objects to Scenarios of Use. International Journal of Human-Computer Studies 41: 243-276.

Dearle, A. (1998). Toward Ubiquitous Environments for Mobile Users. IEEE Internet Computing 2(1): 22-32.

Dix, A. and R. Beale, Eds. (1996). Remote Cooperation. CSCW Issues for Mobile and Teleworkers. Computer Supported Cooperative Work. London, Springer-Verlag.

Dix, A. and R. Beale (1996b). Information Requirements of Distributed Workers. Remote Cooperation. CSCW Issues for Mobile and Teleworkers. A. Dix and R. Beale. London, Springer-Verlag: 113-144.

Harmelen, M. v. (1989). Exploratory User Interface Design Using Scenarios and Prototypes. Proceedings of the HCI'89 Conference on People and Computers V, User Interface Management Systems, pp. 191-201.

IEEE Internet Computing (1998). Mobile Computing. Volume 2, Number 1, January/February 1998.

Kristoffersen, S. (1997). MEDIATE: Video as a first-class datatype. GROUP'97. Proceedings of the International ACM SIGGROUP Conference on Supporting Group Work, Phoenix, Arizona, USA, ACM Press.

Kristoffersen, S. and F. Ljungberg (1998). The architecture and protocol of MOSCOW: MObile Sharing and CO-ordination of Work. Interacting with Computers, 10, pp. 45-65.

Kristoffersen, S. and F. Ljungberg (1998). DARWIN: Message pad support for networked, dispersed groups. Scandinavian Journal of Information Systems 9(1): 3-24.

MobiCom (1997). The Third Annual ACM/IEEE International Conference on Mobile Computing and Networking, September 26-30, Budapest, Hungary.

Schwartz, P. (1992). The art of the long view, Century Business.

Thanh, D. v. (1997). Mobility as an open distributed processing transparency. Department of Informatics, Faculty of Mathematics and Natural Sciences. Kjeller, University of Oslo, UniK: 253.

Wired (1997). Scenarios. Special Wired Edition.

Suchman, L. and Wynn, E. (1984). Procedures and problems in the office. Office: Technology and People 2: 133-154.

Young, R. M. and P. J. Barnard (1991). Signature Tasks and Paradigm Tasks: New Wrinkles on the Scenarios Methodology. Proceedings of the HCI'91 Conference on People and Computers VI, Scenarios and Rationales in Design, pp. 91-101.



Human-Computer-Giraffe Interaction: HCI in the Field


Jason Pascoe, Nick Ryan, and David Morse


University of Kent at Canterbury, Canterbury, Kent CT2 7NF, United Kingdom.

JP@ukc.ac.uk, NSR@ukc.ac.uk, DRM@ukc.ac.uk



This paper presents some findings and proposals for new research that have arisen from our work on the "Mobile Computing in Fieldwork Environments" project at the University of Kent at Canterbury [1]: a project that is sponsored by JTAP (JISC Technology Applications Programme) [2]. Our main research interest is in the development of novel software tools for the mobile fieldworker that exploit existing handheld computing and sensor technology. The work described in this paper concentrates on examining the special needs and environment of the fieldworker, reflecting on the HCI features required for a successful PDA (Personal Digital Assistant) for use in the field.

The Very Mobile Nature of Fieldwork

Handheld computing appliances are typically envisioned as tools within the businessperson’s domain, where the executive is accompanied by a subset of their business data stored on a PDA. During a meeting at the office or whilst commuting to work on the train, the PDA allows them to work with their data at a location of their choice. However, the world of the businessperson is far removed from the environment of the fieldworker. Perhaps one of the most striking differences can be seen in terms of usage patterns. The businessperson will normally be seated at a desk to use their PDA, or perhaps with the PDA rested on their lap. We could therefore describe this as portable computing rather than truly mobile computing because although the user can roam anywhere with their PDA, it is generally with the intention of bringing computing resources to use within a static workplace rather than to use them whilst on the move. The fieldworker’s environment, however, is a much more dynamic one, where the PDA will be utilised throughout the course of the user’s work, often spread over a wide geographic area. That is, the usage of the PDA is truly mobile.

Static usage of PDAs poses HCI challenges arising from the ever diminishing size of the hardware, e.g. examining how software displays can be adapted to the dramatically smaller PDA screens. However, at least the environment of use is still in common with traditional desktop or laptop PCs. Mobile usage of PDAs offers even more challenges, as not only do the issues of miniaturisation have to be addressed but also the completely different user environments. We believe that the requirements of computing hardware and software intended for mobile usage are significantly different from those of their statically used counterparts, and it is these different requirements, and how to satisfy them, that we are interested in.

We have concentrated our efforts in the areas of ecological and archaeological fieldwork in particular, as two members of the project have backgrounds in these areas and we have a number of contacts that are keen to trial our prototypes. However, the ideas and prototypes we have been developing are intended to be widely applicable and are not solely aimed at these areas. Indeed, much of our work is valid for applications that require mobile usage but are outside of the fieldwork arena altogether, e.g. PDA tourist guides [3].


Four Characteristics of the Fieldworker User

The nature of fieldwork has been described in general terms as highly mobile, where the fieldworker will use the PDA throughout a variety of environments during the course of some work. More specifically, the most common form of fieldwork carried out is data collection. The aim of this activity is to record data about the environment that the user is exploring. The unique nature of mobile usage requirements within this context can be identified by four characteristics:

Dynamic User Configuration. The fieldworker will want to collect data whenever and wherever they like, but it is extremely unlikely that there will be any chairs or desks nearby on which to set up their computing apparatus. Nevertheless, the fieldworker will still want to record data during observations whether they are standing, crawling, or walking (all of which would be quite normal in fieldwork conditions).

Limited Attention Capacity. Data collection tasks are oriented around observing a subject. Depending upon the nature of the subject the user will have to pay varying amounts of attention to it. ‘Snap-shot’ observations require little more than recording the current state of the subject at a particular point in time. However, many observations are carried out over a more prolonged period of time during which the fieldworker must keep constant vigil on the subject to note any changes in state, e.g. observing giraffe behaviour. In these situations the user needs to spend as much time as possible in observing and to minimise the time devoted to interacting with the recording mechanism.

High-Speed Interaction. The subjects of some time-dependent observations are highly animated or, more commonly, have intense periods or ‘spurts’ of activity. The fieldworker is normally a passive observer whose work is subject-driven; therefore, during these spurts of activity they need to be able to enter high volumes of data very quickly and accurately, or it will be lost forever.

Context Dependency. The fieldworker’s activities are intimately associated with their context. For example, in recording an observation of a giraffe, its location or the location of the observation point will almost certainly be recorded too. In this way the data recorded is self-describing of the context from which it was derived. Further applications of the data often involve analysing these context dependencies in some form, e.g. plotting giraffe observations on to a map.

The relative importance of these four factors can vary with different fieldwork. For example, in testing our prototype software we have been involved with two projects: a giraffe observational study in Kenya [4], and an archaeological survey near Sevilla, Spain [5] (we refer mainly to the Kenyan work in this paper). The giraffe behavioural study strongly exhibited all four of these characteristics, whereas in the archaeological study the characteristics of limited attention capacity and high-speed interaction were not so pronounced. The differences lie in the nature of the data collection subject; giraffe are very animated whereas Roman pottery is quite static. However, these attention and speed factors are still of importance in archaeological fieldwork because, although the pottery may well be fixed in absolute terms, the archaeologist will walk around an area and note any interesting subjects he passes by. Therefore, relative to the observer, the focus of observation changes quite rapidly, and the amount of attention that can be paid to observations, and the speed of recording them, are limiting factors on how quickly the fieldwork can be completed.

The Features of a Prototype Fieldwork Tool

We have constructed some prototypes to experiment with providing fieldworkers with mobile computing technology that aims to satisfy these requirements. We have concentrated on developing novel software applications that use existing hardware, but we have carefully examined the various hardware devices available and evaluated their suitability for fieldwork environments through the following criteria:

Pen User Interface. We found that the flip-open ‘clam-shell’ pocket computers equipped with miniature keyboards were not suitable for fieldwork environments, where the user is typically standing whilst operating the device. Although ideal for static situations where it can be rested on a work-surface, in-hand use of these devices requires both the user’s hands and often involves a clumsy method of typing with the thumbs. Pen-based interfaces on a pad-like device provide a more ergonomic solution that can be held in one hand if simply viewing data, and generally use some form of handwriting recognition for entering data. They provide a natural substitute for the fieldworker’s paper notebook, similar in size and operation, and suitable for use by the user in many different dynamic situations (e.g. whilst walking).

Small form-factor. The fieldworker may already be burdened with a variety of equipment in the field. Therefore, both in terms of space to stow the device and the amount of equipment to carry, a small form-factor is essential. Ideally, the device should fit in a trouser pocket.

Battery-life. A typical fieldworker will spend a day in the field before returning to a base camp. Therefore, a device that can be used for at least a whole day without requiring replacement batteries is desirable.

Robustness. The very nature of the environment makes it necessary to have devices that are able to cope with knocks, drops, and the general conditions of outdoor life, including heat, dust, rain, etc. In short, a very durable device is required.

Connectivity. The process of data collection is not an end in itself. The collected data will need to be downloaded to a desktop computer for analysis and detailed study once the fieldwork has been completed. Therefore, a device that can be easily connected to a PC is necessary.

Based on these criteria we chose the 3Com PalmPilot as the most suitable device. There are a number of specialised manufacturers of ruggedised mobile computers, but we wished to select a device that was reasonably priced, widely available, and suitable for a variety of mobile environments, not just fieldwork.

In developing the first software prototype we wanted to provide some easy-to-use tools that allowed the fieldworker to collect data in electronic form. These would provide us with a platform for experimentation of our ideas to make data collection easier and quicker by the provision of various forms of assistance on the PDA.

The tools took the form of a suite of three prototype programs based on the stick-e note metaphor [6,7], in which notes are seen as being attached to a context. For example, a description of a shard of Roman pottery could be tagged to the location of the find. However, rather than just recording simple textual notes, fieldworkers can record quite elaborate sets of data such as behavioural descriptions. To accommodate this requirement, we extended the stick-e note metaphor by eliminating the distinction between context and content. The resulting stick-e notes consist of a variable number of elements that can be viewed as both data and context (due to the self-describing contextual nature of field observations). The following describes the basic purpose of each of the three stick-e note programs:

StickePlates. Most data collection work involves recording observations as standard sets of data (e.g. recording the date, time, location, pottery type and description, for each archaeological find). The StickePlates program allows the user to define a number of note templates that describe such sets of data by defining the elements they contain.


Figure 1 - Using the StickePlates program to create a template for giraffe observations.

StickePad. This program provides the recording facilities with which the user can create new notes, based on a predefined template, or modify existing ones. The StickePad will be the most frequently used tool, so it is especially important for this program to be designed in harmony with the fieldworker’s mobile usage characteristics.

Figure 2 - Recording a new giraffe observation note in the StickePad.

StickeMap. A map screen is provided that offers an alternative to the StickePad’s simple sequential list for visualising and selecting notes. Icons denoting notes are overlaid onto the map and can be selected in order to view or edit their contents.

Figure 3 - Viewing the user's location (the cross-hair) in relation to the recorded notes (the note icons) in the StickeMap.
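The extended stick-e note structure underlying these three programs (a template names the elements; a note fills them in, treating context and content alike) can be sketched as follows. Class and field names here are hypothetical illustrations, not taken from the actual StickePlates or StickePad code:

```python
class StickeTemplate:
    """A template (in the spirit of StickePlates) simply names the
    elements that each note of this kind contains."""

    def __init__(self, name, elements):
        self.name = name
        self.elements = list(elements)

    def new_note(self, **values):
        """Create a note (as edited in the StickePad): every element,
        whether contextual (time, location) or observational
        (behaviour), is stored uniformly."""
        unknown = set(values) - set(self.elements)
        if unknown:
            raise ValueError(f"elements not in template: {unknown}")
        return {"template": self.name,
                **{e: values.get(e) for e in self.elements}}

# Hypothetical giraffe-observation template in the spirit of Figure 1.
giraffe = StickeTemplate("giraffe observation",
                         ["time", "location", "individual", "behaviour"])
note = giraffe.new_note(individual="G07", behaviour="feeding")
```

Because context and content share one representation, the same note can be listed in the StickePad or plotted on the StickeMap by reading whichever element happens to be relevant.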

We have tested the system in a number of environments, the most rigorous of which was a two-month behavioural study of giraffe in Kenya. In this trial a willing ecologist, Kathy Pinkney, replaced her paper notebook with our prototype for the entire period of her fieldwork, using it for all of her data collection tasks. The focus of her research was to investigate the feeding behaviour of giraffe in order to assess their impact on the vegetation within the Sweetwaters game reserve. In order to do this effectively she needed to collect a large amount of raw observational data of giraffe feeding.

The simple form-based interface of our prototype software embodied the PalmPilot design philosophy: "if it needs a manual then it's too difficult to use". Rather than presenting the ecologist with a radically new interface design, we sought to provide innovative features within a familiar interface metaphor. This approach allowed Kathy to learn how to use the system on the plane flight from England to Kenya. Once in the field she created a number of templates to define data sets for observations, including vegetation surveys, giraffe behavioural observations, and giraffe faeces records. The prototype software proved almost indispensable in recording giraffe behavioural observations in particular: through a combination of automation and optimised modes of interaction, more data was recorded, at a much faster rate, than would otherwise have been possible with a single observer using a manual recording medium.

Each day of the two-month study the PalmPilot software was used to record giraffe observation data, which was then downloaded to a laptop computer at the research centre each night. This data would be electronically shipped back to England every two weeks when collecting supplies from the nearest town (which also happened to have a doctor's surgery offering an email service). At the end of the study approximately 6000 observations had been recorded. Apart from a few minor bugs in the code, the prototype performed at a level that allowed the ecologist to complete more work, in a way that was both faster and easier, than would have been possible with a manual system. The HCI factors in the prototype that led to this success can be formulated as two general principles:

Indirect Operation. Providing interface mechanisms that minimise the amount of user-attention, though not necessarily the amount of user-interaction, that is required to perform a particular task.

Context-Awareness [8]. Imbuing the device with the capability to sense its environment.

The remainder of this paper describes in detail how both of these principles were applied in the prototype system and discusses our work on further enhancements and research arising from our experiences in the field.

Indirect Operation

The principle of indirect operation seeks to satisfy the needs of the fieldworker with respect to their characteristics of dynamic user configuration and low attention capacity. An example of a task in the Kenyan fieldwork that illustrates both of these characteristics particularly well is the detailed giraffe observation. During one of these observations the ecologist was often hiding behind vegetation, walking through the bush, or crouching over a telescope. Data needed to be recorded in any of these circumstances. Additionally, observing a giraffe’s detailed feeding behaviour (such as the number of bites taken from a particular acacia tree) required a great deal of attention. This is especially true when observing from a distance through a telescope, where, unless the user pays constant attention, the giraffe can quickly move out of the field of view.

Conventionally, handheld computers require the direct attention of the user for the duration of the task. During this period all of the user’s attention is focused onto the device. For example, to select a document the user will hold their PDA in one hand, select the document with the pen held in the other, and all the time be looking at the device in order to correctly operate the interface. In a fieldwork environment this distracting process can negatively affect the quality of the work. Note that it is not the number of interactions occurring that is the important factor, but the amount of attention that they require from the user.

Indirect operation attempts to remedy this situation by transferring interaction tasks to modalities that require less of the user’s focus of attention. As a small experiment with this idea, our prototype software overloaded two of the hardware buttons of the PalmPilot device with a configurable increment and decrement function. These buttons could then be used to manipulate sequential data with less attention from the user, because the buttons provided enough tactile feedback that the user did not need to look at the device. The user could configure the amount decremented or incremented by these buttons for particular types of data (e.g. tree height may increment in units of five metres and giraffe bites in steps of one). This feature was most usefully employed in counting giraffe bites off a tree: here the ecologist could keep a running total of the number of bites taken whilst simultaneously observing the giraffe through the telescope. In effect, this is an eyes-free form of human-computer interaction.
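The overloaded-button idea can be sketched as a small counter abstraction. The class and method names here are hypothetical; the point is that each button press adjusts the current value by a configurable step, so the user never needs to look at the screen.

```python
# Sketch of overloading two hardware buttons as an eyes-free counter.

class ButtonCounter:
    def __init__(self, step=1, value=0):
        self.step = step      # e.g. 5 for tree height, 1 for giraffe bites
        self.value = value

    def press_up(self):       # bound to one hardware button
        self.value += self.step

    def press_down(self):     # bound to the other hardware button
        self.value = max(0, self.value - self.step)

bites = ButtonCounter(step=1)
for _ in range(12):
    bites.press_up()          # one press per observed bite
```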


Figure 4 - The ecologist observes the giraffe whilst simultaneously recording data on the PalmPilot using indirect interaction techniques.

We are exploring other methods to maximise the indirect operation of our prototype. The touch-sensitive screen provides one opportunity. If divided into selectable areas, let us say four quadrants, a particular function or data value can be assigned to each of the quadrants, e.g. a tree species selector where top-left = acacia, top-right = uclea, bottom-left = scutia, bottom-right = other. The user can easily identify the four corners of the screen with their thumb and hence operate the interface in eyes-free mode (especially if some form of audio feedback is given). Although we may not be able to divide the screen into enough areas to support all functions or data types we can certainly implement the most frequently required options to optimise for an eyes-free mode of operation.
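The quadrant selector described above might be sketched as follows. The screen dimensions reflect the PalmPilot's 160x160 display; the species-to-quadrant assignments are those given in the text.

```python
# Sketch of the four-quadrant, eyes-free species selector: the touch
# screen is split into areas the thumb can locate without looking.

SPECIES = {
    ("top", "left"): "acacia",
    ("top", "right"): "uclea",
    ("bottom", "left"): "scutia",
    ("bottom", "right"): "other",
}

def quadrant_tap(x, y, width=160, height=160):
    """Map a tap at pixel (x, y) to the value assigned to that quadrant."""
    vert = "top" if y < height / 2 else "bottom"
    horiz = "left" if x < width / 2 else "right"
    return SPECIES[(vert, horiz)]
```

In practice some audio feedback on selection would confirm the choice without the user needing to glance at the screen.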

Our interest in indirect operation is not only limited to eyes-free forms of interaction but also covers other methods that attempt to reduce the amount of focus required by the user. One-handed operation is such a method. Although the user may need to look at the screen, one-handed operation allows them to operate the device with one hand whilst continuing to perform tasks with the other. Such a facility is useful in many diverse situations, not just in fieldwork. For example, consider the businessperson who wishes to consult their diary and to-do lists for the day whilst walking to work with their briefcase in one hand and their PDA, retrieved from their pocket, in the other. In such circumstances the hand that holds the device also has to perform the interaction tasks necessary. Small devices such as the PalmPilot are ideally suited to this type of activity as they can be easily held in one hand whilst leaving the thumb free for manipulating the screen or buttons. We are planning to implement specially designed over-sized controls, such as a full-screen slider bar, that can be used to edit a variety of different data types and that are easily operable by thumb.

Thus far we have only discussed operating on individual portions of a complete data set or task. We also inherently need an indirect operation method of navigating between these individual features. For data collection activities the ubiquitous form-filling interface (as was used in our prototype) could be structured as a set of ‘layered’ screens, one for each data element. The user would then be presented with a sequence of these screens that are optimised for indirect operation, much like filling in a questionnaire question by question. Note that, as with many of the features presented, this would be an optional enhancement to the existing system rather than a replacement, because if the user’s attention is not pressed then they may prefer to work with the data in the ‘bigger picture’, e.g. viewing the whole form whilst editing a small field within it.
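The layered-screens idea can be sketched as a simple sequential form, with hypothetical names: one field is presented per screen, and entering a value advances to the next, eliminating the field-selection step.

```python
# Sketch of questionnaire-style 'layered screens': one field per screen,
# navigated sequentially so no field-selection decision is needed.

class LayeredForm:
    def __init__(self, fields):
        self.fields = fields
        self.index = 0
        self.values = {}

    def current(self):
        """The field shown on the current screen."""
        return self.fields[self.index]

    def enter(self, value):
        """Record a value and advance to the next screen."""
        self.values[self.fields[self.index]] = value
        if self.index < len(self.fields) - 1:
            self.index += 1

form = LayeredForm(["species", "bites"])
form.enter("acacia")   # screen 1
form.enter(12)         # screen 2
```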

The areas of the prototype interface that proved least successful in the Kenyan trial were the operations that let the user navigate through the data. For example, to edit a note the user had to decide which field of the note to select, perform the editing operation using the type-specific controls, return to the list of fields, and then decide the next field to edit. Although easily executed, when the user’s concentration is directed elsewhere, such as at a giraffe, the number of decisions and manipulative processes involved in selecting and modifying the data becomes a distraction from the main task at hand. We hope to minimise this cognitive load by combining our ideas of layered sequential screens (eliminating the field selection process) with those of eyes-free or single-handed controls (which aim to reduce the complexity of data manipulation).

We are designing these interface techniques based on a model of fieldwork process and environment, not on a model of fieldwork data. Although some elements of interfaces are created to manipulate particular types of data, it is the user’s task and environment that shape the design of our interface. Identical data may have completely different interfaces depending on the user’s task or environment. For example, in the field the ecologist may want a simple and direct interface that is oriented to recording data quickly, whereas back in the laboratory the user will likely view the same data in a much richer and more complex form, such as in a GIS (Geographical Information System). It may be tempting to encode the visual interface into the logical template descriptions (as is often the case with HTML documents) but this severely limits the flexibility and portability of the data model. Therefore, we have been careful to keep interface and data models distinct in our prototypes.

One final opportunity to exploit indirect operation lies in the new hardware capabilities that are becoming available, either built into PDAs or as attachments. For example, microphones are becoming more prevalent on handheld computers and provide opportunities for voice recognition, whilst small vibrating units (typically used by pager software) offer the developer another means of providing feedback to the unfocused user.


Context-Awareness

The fieldworker is generally equipped with a plethora of equipment to assist in the observation process. In the Kenyan study, for example, a map and compass would have been required to pinpoint location and a stopwatch required for recording time series data such as giraffe behavioural observations. However, rather than taking more equipment out into the field, we believe that a fieldworker endowed with a PDA will actually take out less. The reason for this apparent paradox is that we wish to assimilate as many of the other equipment interfaces into the PDA as possible and to automate their operation. In addition, instead of providing an electronic ‘copy’ of the device, we aim to embed the appropriate function within the task it is related to. For example, automatically entering the current location into a new observation note. This is achieved by making the PDA aware of its context through various attached or embedded sensors so that it is able to supply contextual information when needed.

In our prototype we made our programs aware of two elements of their context that are useful in a wide array of activities: time and location. Knowledge of time is easily obtained through the unit’s own internal clock, and this can be used to provide various timing functions that eliminate the need for a stopwatch. Location was provided through an attached GPS (Global Positioning System) [9] receiver that could pinpoint the user to within 100 metres anywhere in the world (given an unobstructed view of the sky). The GPS receiver was a separate unit attached via a serial cable, but we expect GPS receivers, or an equivalent technology, to be integrated with PDAs in the near future. However, decisions on the nature of the physical hardware integrated with the device need not limit context-aware technology, as from a few basic sensors a number of software-derived contexts can be generated. For example, a tide-level context can be computed from the location and time contexts. Similarly, a ‘dominant vegetation context’ can be derived from a location and GIS vegetation map. In fact, the complexity of contextual information possible has spurred us into developing a Contextual Information Service in a separate research project.
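A software-derived context of the kind described above can be sketched as follows. The vegetation map here is a toy grid invented for illustration; a real system would query a GIS layer.

```python
# Sketch of a software-derived context: a 'dominant vegetation' context
# computed from the raw location context plus a (hypothetical, highly
# simplified) vegetation map keyed by one-degree grid cell.

VEGETATION_MAP = {          # (lat_cell, lon_cell) -> dominant vegetation
    (0, 36): "acacia woodland",
    (0, 37): "open grassland",
}

def vegetation_context(lat, lon):
    """Derive the vegetation context from the location context."""
    cell = (int(lat), int(lon))
    return VEGETATION_MAP.get(cell, "unknown")
```

A tide-level context could be derived analogously, as a function of the location and time contexts.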

One of the characteristics of the fieldworker described earlier was the need for high-speed interaction. Context-awareness can help in this area by automating some of the fieldworker’s activities. In the prototype software the StickePad automatically defaulted any time or location fields of a newly created note to the current clock or GPS reading respectively (see figure 2). Even such a seemingly minor enhancement made the ecologist’s job much easier. For instance, recording giraffe feeding behaviour through a telescope would normally have required two people, one to dictate the observations being made through the telescope and the other to use the stopwatch to record the times and details of the rapidly occurring events. However, the prototype system allowed a single person to perform both tasks by automatically completing the timing information as soon as the user indicated a new event had taken place, leaving them to simply enter a code for the behaviour. Combining this with an indirect mode of operation to enter the event details would provide an even speedier form of data collection.
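The auto-defaulting step can be sketched as below. The `current_gps` function is a stand-in for the attached receiver; the field names are illustrative.

```python
# Sketch of the StickePad's automatic defaulting: when a note is
# created from a template, time and location elements are pre-filled
# from the clock and GPS, leaving only the behaviour code to enter.

from datetime import datetime

def current_gps():            # stand-in for the attached GPS receiver
    return (0.017, 36.922)

def new_note(template):
    note = {}
    for field in template:
        if field == "time":
            note[field] = datetime.now().isoformat()
        elif field == "location":
            note[field] = current_gps()
        else:
            note[field] = None   # left for the user to fill in
    return note

obs = new_note(["time", "location", "behaviour"])
```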


Figure 5 - Editing a location field in the StickePad illustrates how context-awareness can be used to expedite data collection; in this case by automatically entering the user's location (derived from an attached GPS receiver) and allowing it to be easily updated via the 'Here' button.

Another characteristic attributed to the fieldworker is their context dependency. As mentioned earlier, the data being collected is effectively a description of various elements of their context and at a later date the complete collection of data will be compared and analysed from the perspective of one or more of these contexts. For example, the collected notes could be plotted in a GIS (Geographic Information System) in order to visualise and analyse the data from a location context perspective. Equipped with a PDA we can effectively bring cut-down versions of these context visualisation tools into the field, where not only can we view the various notes that have been recorded but also our presence relative to them [5]. In the prototype we implemented a StickeMap program to demonstrate a form of context visualisation by plotting the recorded notes and user’s current position on a configurable map (see figure 3). Visualising data using contextual information provides a powerful mechanism that allows the user to gain an overview of the data from a particular contextual perspective, to filter information that they are interested in, and also to look for patterns in the data.

Triggering information and tasks by context is also an interesting area that we are exploring in our next generation of prototypes. When taking pre-recorded data back out into the field we can ask the PDA to automatically display this information, or trigger it, when entering the same context. For example, when updating a vegetation survey, the ecologist will be automatically notified upon reaching the location of a sample point and will have past data for that site automatically displayed. There is a wealth of potential applications for triggered information; consider the following examples:

Descriptive prose describing the general ecology, geology, archaeology, etc. of a particular site.

The observation history of a giraffe being attached to the actual animal itself (utilising its radio-collar as the sensor to indicate a ‘near-giraffe’ context).

A warning message attached to the site of a river and a heavy rain context, so that the fieldworker is alerted of flash floods prevalent in the area during bad weather.
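The triggering mechanism behind these examples can be sketched as a simple proximity match. The function names and tolerance are hypothetical; a real system would match on whichever contextual elements a note is attached to.

```python
# Sketch of context triggering: a location-attached note fires when the
# user's current location comes within a tolerance of the note's.

def near(a, b, tolerance):
    return abs(a[0] - b[0]) <= tolerance and abs(a[1] - b[1]) <= tolerance

def triggered(note, context, tolerance=0.001):
    return near(note["location"], context["location"], tolerance)

flood_warning = {"location": (0.020, 36.925),
                 "body": "Flash floods prevalent here in heavy rain"}
here = {"location": (0.0205, 36.9252)}
```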

In short, a whole field-guide could be compiled from various information sources that are automatically made available to the user in the appropriate circumstances. The limited storage resources available on most PDAs make carrying a whole encyclopaedia of information impossible, but contextual information can help filter out the relevant subset of data to download for a day’s work. We envisage a three-tier system of storage and context filtering:

Home-base repository. This is the central stationary store of information contained at the fieldworker’s permanent home location, e.g. the research laboratory.

Project-dependent repository. A portable subset of the complete data store that is taken with the ecologist to assist in a particular project, e.g. in the form of a laptop computer stored at the base-camp.

Task-dependent repository. This contains a highly specific set of information that is tailored for use during a particular task, e.g. the data carried around on the user’s PDA during a day’s work.

Filtering out the relevant data for each of the lower layers is achieved by specifying the context of the project or task. For example, if the user specifies that they will be working in the Log-Chogia area today and will be looking for giraffe, then notes relevant to this task and area can be automatically downloaded to their PDA and presented to the user at appropriate points throughout the day’s work. At the end of the day, newly collected information can also be uploaded to the project repository and may be used by other fieldworkers in subsequent days.
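The task-dependent filtering step can be sketched as below. Field names and areas are illustrative; the essential idea is selecting the subset of the project repository that matches the day's task context.

```python
# Sketch of filtering the project repository down to a task-dependent
# set for download to the PDA (data and field names hypothetical).

notes = [
    {"area": "Log-Chogia", "subject": "giraffe", "body": "..."},
    {"area": "Log-Chogia", "subject": "rhino", "body": "..."},
    {"area": "Sweetwaters-North", "subject": "giraffe", "body": "..."},
]

def task_subset(notes, area, subject):
    """Select notes matching the specified task context."""
    return [n for n in notes if n["area"] == area and n["subject"] == subject]

todays_notes = task_subset(notes, "Log-Chogia", "giraffe")
```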


Conclusions

We have introduced the environment of the fieldworker as one that is significantly different from that of the typical businessperson, and have presented our belief that this different environment should fundamentally affect the approach taken in the provision of PDA resources. To characterise these differences, four characteristics peculiar to the working environment of the fieldworker have been identified, from which a selection of hardware criteria and a prototype software system have been developed.

Based on successful trials of the prototype in different fieldwork projects, we have attempted to identify the essential HCI features of PDAs for the field and have defined two important principles. The first, indirect-operation, seeks to provide alternative interface mechanisms that can be operated without the direct focus of attention of the user. The second, context-awareness, provides a method of automatically recording, presenting, and filtering information through a knowledge of the user’s current environment. We have found that a combination of these two principles can deliver very effective solutions for the fieldworker and, in many cases, the mobile user in general.


Future Work

In the summer of 1998 we are planning another extended trial of our prototypes in the Kenyan game reserve. This next generation of prototypes will include our ideas for user-interfaces that are optimised for indirect operation in an even more observationally intense application. We will be working with an ecologist who intends to track the movements of rhino throughout the day to observe their behaviour. This activity also lends itself to an exploration of how the idea of triggering could be usefully employed; one interesting prospect is to trigger information about individual rhinos based on the user’s current location, time, rhino footprint observations, and various other contextual factors.

Also under investigation at the University of Kent at Canterbury is how context-aware palmtop computers can be used as aids to using the public transportation system (where real-time timetable information could be generated from the current position of the transport). In an effort to provide general support for context-aware computers we are also developing a Contextual Information Service that aims to simplify the capture, representation, and manipulation of contextual information.


Acknowledgements

The ‘Mobile Computing in a Fieldwork Environment’ project is funded by the UK higher education ‘Joint Information Systems Committee’ (JISC) under its ‘JISC Technology Applications Program’ (JTAP), grant number JTAP-3/217. Many thanks are due to Kathy Pinkney and Alan Birkett of Manchester Metropolitan University, and Simon Keay and David Wheatley of Southampton University. Their assistance, enthusiasm, and stimulating fieldwork projects have provided us with ideal environments in which to develop and test our ideas.


References

1. ‘Mobile Computing in Fieldwork Environments’ project homepage, http://www.cs.ukc.ac.uk/research/infosys/mobicomp/Fieldwork/index.html

2. JISC Technology Applications Programme homepage, http://www.jtap.ac.uk/

3. ‘GUIDE’ project homepage, http://www.comp.lancs.ac.uk/computing/research/mpg/most/guide.html

4. Pascoe, J., Morse, D.R. and Ryan, N.S. Developing Personal Technology for the Field, Personal Technologies, submitted for publication.

5. Ryan, N.S., Pascoe, J. and Morse, D.R. Enhanced Reality Fieldwork: the Context-aware Archaeological Assistant, in V. Gaffney, M. van Leusen and S. Exxon (eds.) Computer Applications in Archaeology 1997, forthcoming.

6. Brown, P.J., Bovey, J.D. and Chen, X. Context-aware Applications: from the Laboratory to the Marketplace, IEEE Personal Communications, 4(5), 1997.

7. Pascoe, J. The Stick-e Note Architecture: Extending the Interface Beyond the User, International Conference on Intelligent User Interfaces, 261-264, 1997.

8. Schilit, B., Adams, N. and Want, R. Context-Aware Computing Applications, IEEE Workshop on Mobile Computing Systems and Applications, 85-90, 1994.

9. Global Positioning System Overview, http://www.utexas.edu/depts/grg/gcraft/notes/gps/gps.html

Some Lessons for Location-Aware Applications


Peter J. Brown

Computing Lab., The University, Canterbury, Kent CT2 7NF, UK.




There are a number of different technologies that can detect the user's current location: GPS, DGPS, mobile phones [1], PARCTabs [2], active badges [3], tags, and, for cases where the user actively records their location, barcodes placed at defined locations. The field of location-sensing is blossoming so much that it has been suggested that in the future it will be standard for every computer operating system to know the location of the computer it is running on, just as at present every operating system knows the time (albeit perhaps modulo 100 years).

For portable devices, such as a PDA coupled to a GPS system, this opens the way to location-aware applications, and in particular to applications that automatically trigger information that is relevant to the user's current location. Simple applications are in tourism, where information is given about sights that the user is passing, and in maintenance, where the user is automatically given information about nearby equipment. In some applications, e.g. those concerned with the logging of events, the act of triggering causes a program to be run.

We have been working in this field for the past five years -- the original inspiration coming from Xerox Research Centre Europe in Cambridge -- and the purpose of this paper is to present some of the lessons learned. In fact our work has been in context-aware applications in general: thus we are not just interested in location, but other elements of the user's context that may be detected by sensors, e.g. time, orientation, current companions, nearby equipment, temperature, the relative position of the nearest public transport vehicle, etc. A good deal of power, in terms of relating triggered information closely to the user's needs, comes from bringing together several contextual elements. As an example, information presented to a tourist might depend not only on the location, but on the time of day, the season of the year and the current temperature. Nevertheless, although our interest goes beyond just location, location is often a sine qua non of the applications that we have worked on.

Our prime aim is to make context-aware applications easy to create and to use: to move them from the research laboratory, where most of them still reside, to the marketplace [4]. Specifically we aim to make authorship of applications simply a creative process, akin to creating web pages, rather than a programming challenge. To accomplish our aim, we have confined ourselves to discrete context-aware applications that involve triggering discrete pieces of information attached to discrete contexts, e.g. information attached to the context of the cathedral's location combined with a time in winter. We do not cover continuous applications where the user interface is continually changing as the user changes context. Continuous applications require programming effort, and present a bigger authorship challenge.

Technology-driven applications

An oft-quoted saying in hypertext is: "Customers do not want hypertext: they want solutions. Hypertext is only likely to be part of any solution". The statement is equally true if `context-aware computing' is substituted for `hypertext'.

Not surprisingly, therefore, all the applications we have worked with involve combining context-aware technology with other software and hardware technology. Two software examples of this are:

  • data needs to be stored in some form of repository (e.g. database, GIS system, web presentation).
  • users do not want to be confined solely to retrieval of information by context; they are also likely to want to use normal retrieval techniques in tandem with context-aware retrieval.

In order to cater for such needs, we have developed a model [5] that allows:

  • data to be represented in a simple portable form, that can readily be exchanged and published, and can easily be converted to other forms.
  • information to be retrieved either by context or by normal retrieval methods.
  • data to be kept separate from the application(s) that use it.

In this model, each piece of context-aware information is called a note. Each note is a set of fields: there might, for example be a <location> field, which specifies the location that the information is attached to, and a <body> field that gives the information to be triggered. A note does not say which fields are "context-aware" fields to be used for retrieval: this is done by the application, and is thus controlled by the user. Notes are collected into context-aware documents. A document might, for example, be a set of notes giving tourist information about various points in a city.
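The note/field model described above can be sketched as follows. Notes here are plain dictionaries; the key point is that the set of "context" fields used for retrieval is chosen by the application, not fixed in the note.

```python
# Sketch of the note model: a note is a set of fields, and the
# *application* decides which fields act as context for retrieval.

def retrieve(document, context, context_fields):
    """Return notes whose chosen context fields match the current context."""
    return [n for n in document
            if all(n.get(f) == context.get(f) for f in context_fields)]

# A context-aware document: a collection of notes (content illustrative).
document = [
    {"location": "cathedral", "season": "winter",
     "body": "The crypt is the oldest part of the building..."},
    {"location": "castle", "season": "summer",
     "body": "The keep dates from the Norman period..."},
]

matches = retrieve(document,
                   {"location": "cathedral", "season": "winter"},
                   context_fields=["location", "season"])
```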

Real and pretended worlds

The more we work in context-aware applications, the more we are convinced that the key to success is techniques that work both in a real world and a pretended (simulated) world, and, indeed, in a combination of both. Some examples of this are:

  • a tourist application that triggers information as the user moves around is basically a real-world application, yet it can be enriched by adding pretended companions to the user's present context: for example the user could specify an architect as companion, and this would cause the triggered information to be enhanced or filtered to concentrate on architectural detail [6]. Moreover the user should be allowed to pretend that they were in a location other than their real one: carrying this to an extreme, the user may be a surrogate tourist sitting in an armchair, simulating a walk round a city (e.g. by pointing at a map) and triggering information as they go.
  • oil rigs, because they are so expensive to build and to change, are simulated by computer models. These models can be highly sophisticated and realistic. There is scope for attaching context-aware information (e.g. capacities, safety information) to positions and/or states within the model, and triggering this as the user explores the model. The same information could be used on the real oil rig, where the user's location was set, say, by DGPS, and information could then be triggered in the same way, but this time on the screen of a PDA carried by the user. Thus when the user was at the location of a particular tank within the rig, information about its capacity and safety requirements might be triggered. (This dual application illustrates the importance of holding data in a common, portable, form that is separate from the application that uses it.)
  • an application suggested by Tony West, a Vice-President for planning at SUN Microsystems, involves associating context-aware notes with software that displays graphs (often multi-dimensional) of business information. A graph might, for example, show production figures over time at several locations. As the user points to a position on a graph, such as a blip in production at a certain time/location, any information that matches is triggered (e.g. "There was a week-long power cut in Auckland in March 1998").

Visual representation of contexts

Most user interfaces for context-aware applications involve a visual representation of the context. For location this visual representation will be a map. In order to act as an overview, it is useful if the map shows the locations where information can be triggered, e.g. a red dot (circle, rectangle) at each such location. Generally the visual representation and the context-aware document should be de-coupled from one another: thus a context-aware document should be usable with any map of the area the document covers, and, on the other side of the coin, a map of a city should be usable with any context-aware documents for that city. (This means, of course, that our red dots would need to be superimposed dynamically on the map, depending on the document(s) in use.) Although this ideal of de-coupling may seem obvious, we have had a lively debate in our group on whether it should be relaxed in some practical applications.
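The dynamic superimposition of dots might be sketched as below. The coordinate handling is deliberately simplified; the point is that the overlay is computed from whichever document is in use against whichever map is in use, so neither embeds the other.

```python
# Sketch of de-coupling map and document: the red-dot overlay is
# computed on the fly from the current document and the current map.

def overlay_dots(map_bounds, document):
    """Return note locations that fall within the map's bounds."""
    (min_lat, min_lon), (max_lat, max_lon) = map_bounds
    return [n["location"] for n in document
            if min_lat <= n["location"][0] <= max_lat
            and min_lon <= n["location"][1] <= max_lon]

city_map = ((52.18, 0.10), (52.22, 0.14))      # bounds of one city map
doc = [{"location": (52.20, 0.12)},             # inside the map
       {"location": (51.28, 1.08)}]             # outside the map
dots = overlay_dots(city_map, doc)
```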

Levels of triggering and human interfaces

Our work is mainly based on PDAs. Clearly the human interfaces on these PDAs for authorship/capture of context-aware information and for its subsequent use present many challenges. These have mainly been addressed by Pascoe, Morse and Ryan in [7].

One issue at a somewhat higher plane is levels of triggering. This is illustrated by the following example, which relates to context-aware information for a maintenance engineer, and gives three different levels at which triggering may occur:

1. Information should be triggered that is relevant to the engineer's immediate location (e.g. about the buildings and equipment that is there).
2. Assuming the interface on the engineer's portable computer contains a map, information can be triggered for the whole area covered by the map: this will give rise to the red dots we mentioned earlier.
3. Assuming the engineer has a PDA with limited storage capacity, the main database of context-aware information may be on the base computer. In this case, before setting out, the engineer might trigger all the notes on the base computer that apply to contexts he is likely to visit during the day, and download these into his PDA. These notes would subsequently be triggered individually (at levels 1 and 2 above) as the user moved around.

The need for these different levels of triggering has led us towards our flexible, user-controlled, model for the context to be used for triggering [5].

Structures within context-aware documents

Sometimes a context-aware document is an amorphous set of notes, each note to be triggered when the user enters its context. At other times there is a need for structuring that defines connections between notes. A simple example is a tour of a city. There may be a context-aware document describing tourist sites in a city, and on top of this it is useful to have tours, i.e. recommended sequences of locations for the user to follow. In the user interface, a tourist taking a tour will have a Next button, which:

(1) sets the tourist's (pretended) location to the next point in the tour, thus triggering all the information for this.
(2) provides guidance on how to get from the tourist's current real position to the next tour point, e.g. by showing these two points on the map, and perhaps a suggested path between them.
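A minimal sketch of the Next button's behaviour, under our own simplifying assumption (not the paper's) that tour points are named coordinates:

```python
class Tour:
    """A recommended sequence of tour points laid over the note base."""
    def __init__(self, points):
        self.points = points          # list of (name, x, y) tuples
        self.index = -1

    def next(self, real_position):
        """Advance to the next tour point.  Returns the pretended
        location (which the application then uses to trigger that
        point's notes) plus guidance from the real position to it."""
        self.index += 1
        name, x, y = self.points[self.index]
        guidance = {"from": real_position, "to": (x, y), "label": name}
        return (x, y), guidance
```

Setting the pretended location is what distinguishes this from ordinary navigation: all the triggering machinery runs exactly as if the tourist were already standing at the next point.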

Our initial implementation embedded tours and other structures within the document that they applied to, but it is much better to keep the two separate. Separation has two advantages:

  • it allows several different tours on top of the same data.
  • it allows (within reason) the tours and the underlying data to be changed separately -- indeed they may be written by different people.

In our latest implementation, a tour is just a sequence of contexts; it need not be a tour of locations: it could, for example, be a sequence of different temperatures, used to test what is triggered in an application concerned with safety information.

Thus overall we have the context-aware information separate from the applications that use it, and, on top of this, we have the structuring of the context-aware information separate from the information itself.

Accurate triggering

The ultimate success of any context-aware application depends on its accuracy, as perceived by the user. If the user is regularly fed information she thinks is irrelevant, the application will soon be discarded. We had experience of some of the problems in this area when we first tried our tourist application in Cambridge. At one point the user was given information for a certain museum, on the system's assumption that they were about to pass it; in fact they were at least ten minutes' walk away, and were likely to pass a number of other sights in the meantime. The reasons for this were:

  • the application used a GPS sensor for location, which on this occasion was 100 metres out.
  • the user was in a walled area, some college grounds, and, although the museum was reasonably close, it was not visible, and reaching it involved a roundabout route back through the college's main entrance. The application had no knowledge of the terrain, and an underlying assumption that the user could travel as the crow flies.

There are answers to these problems (use DGPS, wait for Selective Availability to disappear, adjust authorship techniques to avoid dependence on very accurate positioning, introduce a knowledge of the terrain), but the salutary lesson is that a technology-driven application cannot ignore the trying application-specific problems that so often arise.

On the positive side, my colleague's application in a game reserve in Kenya had none of these problems, because it was open country, and, given the lack of maps, a GPS reading, albeit a slightly inaccurate one, was a huge advantage [7].

Representing the location of physical objects

Another practical detail, which is present in most applications, but which manifested itself most strongly in our maintenance application, is the representation of location of physical objects. Assume, for example, there is context-aware information about a water pipe (its capacity, original date of installation, recent maintenance history, etc.), to be triggered by an engineer when he is near the pipe. The pipe might have a complex shape, involving many turns. If the author needs to specify this shape as a set of rectangles, circles, etc., this can be very tedious. Instead it is much better for both author and user to work in terms of some name for this object, e.g. "water pipe from Physics Laboratory to main source", and let the system deduce the detail. When there are CAD drawings, for example, that associate named objects to locations, the context-aware application should extract location information from these, and allow the author/user to work in terms of the names used in the drawings.
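The name-based approach can be sketched as follows. The gazetteer contents and the point-to-segment test are our own illustrative assumptions about what a context service might extract from CAD drawings:

```python
from math import hypot

# Hypothetical gazetteer mapping object names to geometry extracted
# from, say, CAD drawings.  The author writes only the name; the
# system deduces the shape used for triggering.
SHAPES = {
    "water pipe from Physics Laboratory to main source":
        [((0, 0), (0, 80)), ((0, 80), (120, 80))],   # polyline segments
}

def near_object(name, x, y, tolerance=5.0):
    """True if (x, y) lies within `tolerance` of any segment of the
    named object (standard point-to-segment distance test)."""
    for (x1, y1), (x2, y2) in SHAPES[name]:
        dx, dy = x2 - x1, y2 - y1
        t = ((x - x1) * dx + (y - y1) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))                    # clamp to segment
        if hypot(x - (x1 + t * dx), y - (y1 + t * dy)) <= tolerance:
            return True
    return False
```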

More generally, this illustrates the need for context services, which allow authors and users to represent contexts in terms of a hierarchy of named objects ("Canterbury", "Canterbury Cathedral", "Canterbury Festival Week", "Summer", etc.). A really good context service would also provide context elements such as location in a ubiquitous way, independent of the type of sensor used to detect it.

Distributed information

Finally we will briefly discuss a topic where our experience is limited, but which is nevertheless key to many applications: the distribution of information.

Context-aware applications may be distributed in many ways:

  • some or all of the context-aware notes may be stored remotely and sent to the user over a communications link. This might apply, for example, to a central repository of current road traffic problems.
  • triggering might be done remotely, so that only relevant information is transmitted to the user. In this case the user needs to report his location regularly to the remote source (and, of course, a number of important privacy issues arise).
  • the user's present context will consist of several fields, and values for these fields may be gleaned from many sources. For example a weather field may be extracted from a web page that gives a local weather forecast, thus obviating the need for the user to carry weather sensors.
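The second arrangement above, remote triggering, can be sketched as follows; the note store, coordinates and triggering radius are invented for illustration:

```python
from math import hypot

# Notes held at a central repository (e.g. current road traffic problems).
SERVER_NOTES = [
    ("roadworks on A6", (54.05, -2.80)),
    ("bridge closed",   (54.10, -2.90)),
]

def remote_trigger(reported_location, radius=0.02):
    """Run on the server: the client reports its location regularly
    (with the attendant privacy issues), and only the notes near the
    reported fix are transmitted back."""
    x, y = reported_location
    return [text for text, (nx, ny) in SERVER_NOTES
            if hypot(nx - x, ny - y) <= radius]
```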

Our limited experience of distributed applications involves using SMS and data calls between a PDA and a central server. Issues of distribution, which are largely complementary to the issues we have been tackling, are covered much more fully by the GUIDE project at the University of Lancaster [8].


Conclusions

We have described some diverse experiences of context-aware applications, but there are three threads that bring them all together.

Firstly our experiences have reinforced the saying quoted earlier, that location-aware technology, or context-aware technology in general, is only part of the solution to any real-world problem. For some applications, it is a significant part, for others just a useful starting point.

Following on from this, data used for context-aware triggering may have other roles within the overall application; it is thus vital that data has a flexible and portable form. We have carried this flexibility to two levels by keeping the data separate from the applications (and from visual representations, such as maps, used by applications), and the structuring information separate from the data.

Finally, if context-aware applications are to become mainstream applications, authorship must be made easy; we have made what we think is a good start on this. The availability of good context services, which allow authors and users to work in terms of physical objects, would allow this to be carried further.


Acknowledgements

Several of my colleagues have contributed greatly to this work: I would especially like to mention John Bovey, Xian Chen, David Morse, Jason Pascoe, and Nick Ryan.


References

1. Duffett-Smith, P., `High precision CURSOR and digital CURSOR: the real alternatives to GPS', Proceedings of EURONAV 96 Conference on Vehicle Navigation and Control, Royal Institute of Navigation, 1996.

2. Adams, N., Gold, R., Schilit, W.N., Tso, M.M., and Want, R., `An infrared network for mobile computers', Proceedings of USENIX Symposium on Mobile Location-independent Computing, Cambridge, Mass., pp. 41-52, 1993.

3. Want, R., Hopper, A., Falcao, V., and Gibbons, J.J., `The active badge location system', ACM Transactions on Information Systems, 10, 1, pp. 91-102, 1992.

4. Brown, P.J., Bovey, J.D. and Chen, X. `Context-aware Applications: from the Laboratory to the Marketplace', IEEE Personal Communications, 4, 5, pp. 58-64, 1997.

5. Brown, P.J. `Triggering information by context', to be published in Personal Technologies 2, 1, 1998.

6. Oren, T., Solomon, G., Kreitman, K. and Don, A., `Guides: characterizing the interface', in Laurel, B. (Ed.), The art of human computer interface design, Addison-Wesley, Reading, Mass., pp. 367-381, 1990.

7. Pascoe, J., Morse, D.R. and Ryan, N.S. Developing personal technology in the field, Computing Lab., University of Kent at Canterbury, 1997.

8. Davies, N., "Guide in hand", http://www.epsrc.ac.uk/in-depth/corporate/publications/newsline/nl-yes/davies.htm

Developing a Context Sensitive Tourist Guide


Nigel Davies, Keith Mitchell, Keith Cheverst, Gordon Blair,


Distributed Multimedia Research Group, Department of Computing, Lancaster University, Lancaster, LA1 4YR.


GUIDE is deploying a city-wide, multimedia, context-sensitive guide for city visitors. The GUIDE system utilises web-based technologies, portable end-systems, a differential global positioning system (DGPS) service and a cell-based wireless communications infrastructure. In this paper we describe the GUIDE system and highlight the issues it raises for interface design. More specifically, the paper focuses on the challenges of designing appropriate user interfaces for reflecting both virtual and actual worlds.



1. Introduction

The GUIDE project is investigating the provision of context-sensitive mobile multimedia computing support for city visitors. In essence, the project is developing systems and application-level support for hand-portable multimedia end-systems which provide information to visitors as they navigate an appropriately networked city. The end-systems being developed are context-sensitive, i.e. they have knowledge of their users and of their environment including, most importantly, their physical location. This information is used to tailor the system's behaviour in order to provide users with an intelligent visitor guide.

The project builds on the ideas of Ubiquitous Computing as proposed by Weiser in [Weiser,93] and borrows heavily from the work of Schilit [Schilit,94] and Brown [Brown,97] on context-sensitive computing.

The key strategic decision we have taken is to design our system based on a distributed cellular architecture. In particular, all of the information required is broadcast by cell base-stations to portables either as part of a regular schedule or in response to user requests (see figure 1 below).

Figure 1 : The Guide Architecture

The obvious alternative to this approach is to develop essentially stand-alone portable end-systems which have all of the information they require pre-installed, as typified by, for example, the Cyberguide project [Long,96]. This approach is likely to have performance benefits since it does not rely on wireless networking. However, we believe this approach is limited and unlikely to be adopted in the long term, for two reasons. Firstly, our requirements study has highlighted the need for interactive services which require a communications link to the portable end-systems (see section 2). Secondly, investment by companies such as Sun in developing low-cost specialised web-clients (which are expected to retail for less than $500) makes it possible to envisage that in the medium term portable versions of these machines will be widely available. These will make ideal GUIDE units, being both cheaper and less power-hungry than stand-alone PCs.

Based on their location and user preferences the end-systems receive information tailored to their current context. Information for a given geographic area is held at specific base-stations hence enabling the system to scale adequately and obviating the need for high-performance communications links between the base stations. Support for interactive services and time critical information is provided by both fixed and wireless links between the base-stations and the tourist information centre.

In this paper we present the results of our requirements analysis and early development work on the GUIDE system. In particular, we focus on the requirements for the end system’s user interface and highlight a number of key issues which remain to be addressed.


2. Requirements

We have derived a set of requirements for the GUIDE system through a process of semi-structured interviews with members of Lancaster's Tourist Information Centre (TIC) and observation of the workings of the tourist office over a period of several days. In order to scope the development of the GUIDE system we have decided to focus on requirements in the following three categories.

(i) Flexible Tour Guide

The GUIDE system will act as a tour guide for visitors to the city. After studying the different demands that visitors make of a tour guide, it has become apparent that flexibility in this area is critical. For example, visitors to Lancaster often request city tours or trails reflecting interests as diverse as history, architecture, maritime activities, cotton production and antique dealerships. Furthermore, this information is required at many different levels, from academic scholar to primary-school child, and in a range of languages. Additional factors which affect a visitor's choice of tour/trail include the geographic area to be covered, the duration of the tour, the budget of the visitor (to cover entrance fees etc.), refreshment requirements, availability of different forms of transport and any other constraints, such as wheelchair access.

(ii) Dynamic Information

During our study we found a significant requirement for dynamic information to be made available to visitors during the course of their stay in Lancaster. For example, Lancaster Castle is available for tours only when the court is not in session. Since this can change on an hourly basis, depending on the duration of the trials and defendants' pleas, such information cannot be supplied by the TIC in advance. Further examples of dynamic information include the local weather forecast and the availability of special events such as theatre in the park. The provision of certain types of dynamic information, such as news of traffic congestion and waiting times at local attractions, can also help to perform an implicit load-balancing function for the city's resources. For example, given the information that the city's cinema has a large queue but few remaining seats, GUIDE could suggest to visitors that they attend one of the city's theatres.

(iii) Support for Interactive Services

Studying tourist activities in Lancaster reveals that a surprising number of visitors make repeat visits to the TIC, often during the course of a single day. In most cases this is because they require additional information on activities or landmarks and have specific questions which require interaction with a member of the TIC. Alternatively, they might wish to make use of a service offered by the TIC, most commonly the booking of accommodation or transport.

By supplying city visitors with a GUIDE end-system we hope to address each of these requirements. More specifically, the electronic nature of GUIDE should enable the system to offer greater flexibility than conventional pre-printed information sheets. In addition, the network-based architecture we have adopted should enable us to keep visitors up-to-date with dynamic information and offer interactive services such as accommodation booking.


3. User Interface Issues

We are in the process of developing the GUIDE system to address the above requirements. In designing our system a number of key user interface issues have arisen. In particular, the integration of physical and virtual contextual information places new demands on user interface design techniques and toolkit components. In the following sections we discuss the requirements for the GUIDE user interface as motivated by three main areas of concern: physical and environmental factors, user and application needs, and contextual awareness.

3.1. Physical and Environmental

By its very nature the GUIDE system must be sufficiently portable to enable it to be carried and used by city visitors for extended periods of time. As a consequence, the display area available for the user interface is limited: anecdotal evidence suggests that devices of similar proportions to today's notebooks will be unacceptable to end-users in this application domain. In addition, since the system is designed primarily for use outdoors, the interface must be visible under a range of lighting conditions and, because many visitors travel in groups, from a variety of viewing angles. Given the limitations of current display technologies the possibility of using an entirely audio based interface must be considered, though such an approach is not, of course, without problems. The requirement for outdoor use also implies that the end-system must be relatively robust and weather-proof enough to survive everyday use. This in turn places restrictions on the type of end-system, and consequently user input and presentation technologies, which can be used.

3.2. User and Application

As discussed in section 2, the GUIDE system is designed to support a wide range of users and information requirements. This clearly demands a system which can be quickly tailored to meet the needs of the end-user. While tailorable user interfaces have become commonplace in commercial applications, these tend to support a relatively narrow cross-section of users, and such approaches are unsuitable for use in GUIDE. One possibility is to design a number of different user interfaces, each designed to suit a particular class of user. For example, there could be one user interface to suit those visitors who are not familiar with browsing the web and do not require web browsing functionality on their tour of the city.

Furthermore, the GUIDE system must support both information retrieval style applications and interactive services within an integrated environment while subject to the physical and environmental considerations described in section 3.1.

3.3. Context-Awareness

The GUIDE end-systems are required to react to changes in their environment. In particular, as users move around the city of Lancaster they require information which is relevant to their physical location. Furthermore, this information may replace existing information with which they have been presented. The implication of this for the design of the user-interface is significant since it raises the problem of integrating changes in physical location with changes in information within the system. For example, if the GUIDE system provides a button labeled local attractions one might intuitively expect this to provide information relative to the area in which the user is currently located. However, users accustomed to web based systems are likely to be confused by a system in which returning to previously visited pages does not provide the same information. Furthermore, since a user might wish to check out the attractions local to their destination rather than their current location a means of simulating changes in their physical location must be included and supported by appropriate navigation tools.


4. Current Status

Although development work on the GUIDE project is still ongoing, the project has made a number of advances and the system is now starting to take shape. In particular, the project has established a suitable portable end-system, designed a basic GUIDE infrastructure, developed a prototype client application and deployed a small number of cells.

4.1 End-System Selection

We have considered a wide range of end-systems for use in GUIDE, including pen-based tablet PCs and PDAs. The selection process has been complicated by the need to balance a wide range of factors relating to the end-systems and their development environments. For example, the Apple MessagePad 2000 [Apple,97] was a strong contender because it includes support for sound, is of compact dimensions and has an easily visible display. However, its relatively low-resolution display and the poor quality of MessagePad development tools (as compared with Windows-based products) counted against it. In addition to the usability issues discussed in section 2, our selection was also motivated by the need to opt for a system which could accommodate a PCMCIA WaveLAN [WaveLan,97] card for communication and a GPS compass for additional positional information.

As a result of our analysis we selected the Fujitsu TeamPad 7600 [Fujitsu, 98] as the GUIDE end-system. This is a compact unit measuring 8"x9"x1.5", is based on a 100 MHz 486 processor and has been ruggedised to withstand drops from approximately 4 feet onto concrete. The relatively poor performance of the processor is of little concern to us since we are not anticipating running CPU-intensive tasks. Furthermore, the modest power requirements of the processor enable the unit to function for over ten hours on a single 3"x2"x1.5" battery. Initially it was hoped to use a full-colour, TFT-based version of the TeamPad. However, on evaluation, the colour unit proved to be practically unreadable when used out of doors in direct sunlight. Fortunately, a greyscale version of the TeamPad is soon to be released, utilising a transflective screen. This screen provides a very high contrast display that is readable even in direct sunlight. The other benefit of this display technology is that it provides a very wide viewing angle, thus enabling a number of people to read the GUIDE's display.

4.2 Guide Infrastructure

Position Sensing

The GUIDE system will utilise two methods for obtaining the current location of the city visitor. The first, and simplest, method is based on the fact that when a visitor enters a given cell of coverage the GUIDE system can deduce the approximate geographic location of the visitor. The second, and more accurate, method utilises a DGPS service to ascertain a relatively accurate, coordinate-based location for the city visitor. In order to use the DGPS service each GUIDE unit requires its own differential-capable GPS receiver. These receivers process differential corrections in real time in order to improve on the accuracy provided by a standard GPS service. For example, by using DGPS a location accuracy of approximately five metres can be achieved; this compares with an accuracy of around one hundred metres when using an uncorrected GPS service.

The current prototype GUIDE system utilises the cell-based method for determining the current location of the city visitor. However, due to the limited accuracy provided by this method (and the fact that the user may wander into areas without cell coverage), later versions of GUIDE will also use the DGPS-based solution.
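One plausible way to combine the two methods is a simple fallback, sketched here with invented cell names and coordinates (the actual GUIDE implementation may differ):

```python
# Position sensing sketch: prefer a DGPS fix (roughly 5 m accuracy
# with corrections), fall back to the centre of the current cell
# (accuracy limited to the cell's coverage), and report disconnection
# when neither source is available.
def locate(dgps_fix, current_cell, cell_centres):
    """dgps_fix: (x, y) or None; current_cell: cell id or None."""
    if dgps_fix is not None:
        return dgps_fix, "dgps"                    # fine-grained fix
    if current_cell is not None:
        return cell_centres[current_cell], "cell"  # coarse fix
    return None, "disconnected"
```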


Communications

The design of the GUIDE communications infrastructure was influenced by the following factors:

(i) There will be a potentially large user community requiring access to data simultaneously, so the system must scale adequately.

(ii) The user community will tend to require similar or the same information, as described in section 2. This means that there is a relatively small subset of information that is required at regular intervals by a large number of users.

(iii) Users require support for dynamic information, such as updated weather information.

(iv) Users require support for interactive services, such as reserving accommodation.

(v) The amount of data required by most users is fairly small, e.g. a brief introduction to a particular area, or a plan of the local area.

(vi) There will be areas of disconnection across the city of Lancaster which should not disrupt the services provided by GUIDE as tourists roam around the city.

Following the analysis of these factors it became clear that the familiar request-response, unicast method of data delivery is inappropriate. Instead, the GUIDE system implements a broadcast protocol to provide server-push based data delivery and interactive services.

The broadcast protocol is being designed to replace TCP as the means of communication between clients and servers. In more detail, the protocol builds on previous work on broadcast disks as a means of disseminating data [Acharya,94], [Acharya,97], allowing the system to support a large number of clients within each network cell whilst making efficient use of the available bandwidth. In addition to enabling the system to scale, this approach has two further advantages. Firstly, it obviates the requirement to support Mobile-IP [Johnson,96] since clients can receive information anonymously. Secondly, we avoid the well-documented problems associated with using TCP in a wireless environment [Caceres,94].

The broadcast cycle can be considered as a number of slots, each of which provides data to the end-user. Information is broadcast according to a schedule which is itself broadcast at frequent intervals. The broadcast schedule is created dynamically based on the type of information to be broadcast and the priority of each data item. By examining the schedule for forthcoming transmissions of interest, clients are able to conserve power by electing to await future broadcasts rather than transmitting explicit requests. The broadcast protocol includes spare time-slots in which clients can choose either to transmit a request for information (if the information they require has not already been scheduled for transmission) or to communicate with the base stations as part of an interactive service session. The use of the broadcast protocol is transparent to both clients and servers.
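A toy sketch of such a broadcast cycle, with invented item names and a simple priority ordering standing in for the real scheduling criteria:

```python
# Broadcast cycle sketch: items are ordered by priority into slots,
# the schedule itself is broadcast first, and spare slots are left
# free for client requests or interactive sessions.
def build_cycle(items, spare_slots=2):
    """items: list of (name, priority); lower number = higher
    priority (e.g. DGPS corrections before HTML pages)."""
    ordered = sorted(items, key=lambda item: item[1])
    schedule = [name for name, _ in ordered]
    cycle = [("SCHEDULE", schedule)]                # schedule slot first
    cycle += [("DATA", name) for name in schedule]  # then the data slots
    cycle += [("SPARE", None)] * spare_slots        # uplink opportunities
    return cycle

def client_should_wait(schedule, wanted):
    """A client conserves power by waiting if its item is scheduled,
    rather than transmitting an explicit request."""
    return wanted in schedule
```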

The diagram below (figure 2) shows the GUIDE communications infrastructure.

Figure 2 : The Guide Software Architecture

On the server side, the server agent receives DGPS corrections and HTML pages from an Apache HTTP server (running under Linux) and provides a consistent interface to the underlying broadcast protocol and scheduler. The scheduler controls the transmission of data across the wireless link to mobile users, dynamically selecting the next item to be broadcast based on criteria such as the type of information and its priority. For example, owing to their time-critical nature, DGPS corrections take priority in the broadcast cycle over HTML pages.

On the client side, the client agent performs a similar role to that of its peer. The client agent is responsible for caching data received from the broadcast protocol; its primary aim is to ensure that data relevant to the user, based on their user profile, always remains in the local cache. This gives users access to some services even if they become disconnected.
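The client agent's caching policy might be sketched as follows; the profile tags, capacity and eviction rule are our own simplifications, not details of the GUIDE implementation:

```python
# Client cache sketch: pages arriving off the broadcast are kept only
# while they match the user's profile, so that relevant data survives
# disconnection.
class ClientCache:
    def __init__(self, profile, capacity=4):
        self.profile = profile           # e.g. {"history", "castle"}
        self.capacity = capacity
        self.pages = {}                  # name -> content, insertion order

    def on_broadcast(self, name, content, tags):
        """Called for every item heard on the broadcast channel."""
        if self.profile & set(tags):     # relevant to this user's profile?
            if len(self.pages) >= self.capacity:
                self.pages.pop(next(iter(self.pages)))  # evict oldest entry
            self.pages[name] = content

    def lookup(self, name):
        """Serve from the local cache -- works even when disconnected."""
        return self.pages.get(name)
```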

4.3. Prototype Client Application

An initial client application has been developed which provides visitors with access to GUIDE services and the ability to perform traditional web browsing. The application was constructed using Microsoft ActiveX components and has a user interface resembling a traditional web browser. The interface required a number of modifications in order to enable users to access specific areas of GUIDE functionality, such as the route guidance service. One of the key issues when designing the user interface was deciding how closely to mimic the traditional web browser. The advantage of basing the user interface closely on that of a web browser is that people familiar with web browsers should find the interface easy to use. However, the disadvantage of this approach is that users familiar with typical browsers might assume a certain level of functionality and overlook the enhanced features provided by the GUIDE system.

A screen display showing the design of the current prototype application’s user interface is shown below in figure 3.


Figure 3 : The Guide Prototype

The application’s display window is divided into a number of areas each focusing on specific areas of functionality.

The top left area of the window contains controls for enabling the user to obtain general tourist information. More specifically, the user can ask to see a summary of their current location, information on forthcoming events, information on places of interest that are close to their current location or a map of the area. The area at the top centre of the window provides the user with controls for accessing GUIDE’s interactive services, namely requesting information on the city’s cinema and reserving accommodation. The controls at the top right of the window provide standard web page navigation functions, namely, go to previous page, go to next page or reload and redisplay the current page.

The main, central, area of the window is used for displaying information which clients request or which is acquired on their behalf. For example, this area is used to display any web pages which have been explicitly requested by the city visitor and is also used to display tour guidance information when needed.

The bottom left area of the window is used to present the user with dynamic information, such as the user's current location and local news and traffic information. When a visitor enters a new cell this is reflected by an update to this area of the display. The visitor may then choose to update their main display area to provide more information on this location. Controls for activating GUIDE’s route guidance services are located at the bottom right of the application’s window.

The currently adopted user interface is clearly unsatisfactory since there is significant potential for users to become confused when reading information pertaining to a cell in which they no longer reside. However, the alternative of automatically updating the central display area as the user moves could give rise to the situation in which the information a user is currently reading is overwritten and the user has to physically retrace their footsteps to trigger the previous page to be reloaded (the equivalent of pressing the back button on a conventional browser).

4.4. Cell Deployment

We are currently in the process of deploying cells within the city of Lancaster in order to enable us to field-trial the GUIDE system. Four cells have been deployed so far, although the final system will comprise over ten separate cells, each with a link back to the Tourist Information Centre via either a wired or wireless connection.

The proposed cell deployment map can be seen below in Figure 4.


Figure 4 : Proposed Guide Cell Deployment

The placement of cells has not been an arbitrary process; rather, the cells have been placed so that they cover areas of interest, e.g. the castle. By placing each cell's WaveLAN transceiver at an appropriate geographic location it has been possible to limit each cell's coverage to that required. The actual area covered by a cell depends on the cell environment and the height of the WaveLAN transceiver. For example, in a flat environment with the transceiver placed at a height of approximately three feet, the cell has a coverage radius of approximately five hundred metres. However, in a built-up city environment the area of coverage would be greatly reduced, because, given its operating frequency, WaveLAN signals are unable to pass through solid concrete walls.


5. Conclusions

In this paper we have described our ongoing development of a context-sensitive tourist guide for visitors to the city of Lancaster. The requirements for such a guide have been presented and we have outlined the case for adopting a distributed cell-based approach to meet these requirements. The implications for user interface design of meeting such requirements have also been discussed under three main headings: environmental considerations, user and application requirements, and support for contextual awareness. Finally, we have presented our initial prototype of the GUIDE system and highlighted a number of its shortcomings.

The GUIDE project will deploy a significant number of cells throughout the city of Lancaster and is required to conduct a field trial involving real end-users at the end of the project. The issues we have discussed remain to be addressed and input from members of the research community with respect to the design of the GUIDE user interface would be most welcome.


References

[Acharya,94] Acharya, S., R. Alonso, M. Franklin, and S. Zdonik. "Broadcast Disks: Data Management for Asymmetric Communication Environments." Technical Report, Brown University, December 1994.

[Acharya,97] Acharya, S., M. Franklin, and S. Zdonik. "Balancing Push and Pull for Data Broadcast." Proc. ACM SIGMOD, Tucson, Arizona, 1997.

[Apple,97] http://www.newton.apple.com/newton_overview/newton_overview.html.

[Brown, 97] Brown, P.J., J.D. Bovey, and X. Chen. "Context-aware applications: from the laboratory to the marketplace." IEEE Personal Communications Vol. 4 No. 5, Pages 58-64, 1997.

[Cáceres, 94] Cáceres, R., and L. Iftode. "The Effects of Mobility on Reliable Transport Protocols." Proc. 14th International Conference on Distributed Computing Systems (ICDCS), Poznan, Poland, Pages 12-20, 22-24 June 1994.

[Fujitsu,98] Fujitsu TeamPad Page. http://www.fjicl.com/TeamPad/teampad76.htm.

[Johnson,96] Johnson, D.B., and D.A. Maltz. "Protocols for Adaptive Wireless and Mobile Networking." IEEE Personal Communications Vol. 3 No. 1, Pages 34-42, 1996.

[Long,96] Long, S., R. Kooper, G.D. Abowd, and C.G. Atkeson. "Rapid Prototyping of Mobile Context-Aware Applications: The Cyberguide Case Study." Proc. 2nd ACM International Conference on Mobile Computing (MOBICOM), Rye, New York, U.S., ACM Press, 1996.

[Schilit,94] Schilit, B., N. Adams, and R. Want. "Context-Aware Computing Applications." Proc. Workshop on Mobile Computing Systems and Applications, Santa Cruz, CA, U.S., 1994.

[WaveLan,97] http://www.lucent.com/netsys/systimax/catalog/WaveLAN_PCMCIA_Interface_Kit.html

[Weiser,93] Weiser, M. "Some Computer Science Issues in Ubiquitous Computing." Communications of the ACM Vol. 36 No. 7, Pages 74-84, 1993.


On the Importance of Translucence for Mobile Computing


Maria R. Ebling and M. Satyanarayanan


Carnegie Mellon University, Computer Science Department, Pittsburgh, PA 15213
email: {mre,satya}@cs.cmu.edu



Mobile clients experience a wide range of network characteristics, and this situation is likely to continue for the foreseeable future. This range of characteristics includes fast, reliable, and cheap networks at one extreme and slow, intermittent, and expensive ones at the other. The demand for mobile connectivity has created an active area of research. As new technologies become available, mobile clients will have to choose between competing network providers offering different levels of service. In fact, mobile clients will eventually be capable of seamlessly switching from one network to another depending on current needs. Thus, mobile clients will need to choose a network provider dynamically.

The decision regarding which network provider to use involves trade-offs that include both energy and financial components. Our position is that mobile clients cannot balance these tradeoffs well without assistance from the user. The challenge is how to gain enough assistance to make wise choices without imposing undue burden on users. We believe that a key to meeting this challenge will be translucence. A translucent system exposes critical details of the system to the user in order to improve the system’s ability to service the user’s needs, while hiding non-critical details from the user to minimize the imposed burden.

Energy Trade-Off

An important tradeoff, inherent in wireless connectivity, is between battery power and communication. As one researcher characterized it, sending a packet over a wireless network is like "spraying a piece of your battery into the air." The fundamental question here is how a system that supports wireless communication should balance this tradeoff.

In order to communicate, mobile clients must expend battery power. The scarcity of this resource depends on the environment in which the mobile client finds itself, including factors such as the availability of replacement batteries, the ability to recharge batteries, and the expected length of isolation from energy sources. Thus, in deciding whether and how much to communicate, mobile clients must consider the energy consumption required.

When using a wireless network, the cost of communication has an added complication. Communicating over longer distances requires more energy than communicating over shorter distances. Under ideal conditions, the laws of physics dictate that communicating twice as far requires four times as much energy. Real-world conditions, however, are less than ideal: cellular phone companies use a rule of thumb that battery consumption increases as the third or fourth power of the distance. This complication means that different routing algorithms may require vastly different energy expenditures. An algorithm that uses a few long hops may require substantially more battery power than one that uses many more hops each covering a shorter distance. By increasing the number of hops, however, the algorithm is also likely to increase the service time. Thus, in order to choose which network to use, mobile clients will need to balance energy and performance.
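The distance-energy relationship above can be sketched numerically. The following is an illustrative calculation only: the constant `k` and the choice of path-loss exponent are stand-ins, not measured values.

```python
def route_energy(total_distance_m, hops, path_loss_exponent=2.0, k=1e-9):
    """Energy (in joules, with an illustrative constant k) to send one
    packet over a route of `hops` equal-length hops covering
    `total_distance_m`. Per-hop transmit energy grows as
    distance ** path_loss_exponent."""
    hop_distance = total_distance_m / hops
    return hops * k * hop_distance ** path_loss_exponent

# Under the ideal inverse-square law, communicating twice as far costs
# four times as much energy:
print(route_energy(2000, 1) / route_energy(1000, 1))  # 4.0

# And one long 1000 m hop costs twice as much, in total, as two 500 m
# hops -- eight times as much under a fourth-power rule of thumb:
print(route_energy(1000, 1) / route_energy(1000, 2))  # 2.0
print(route_energy(1000, 1, path_loss_exponent=4.0)
      / route_energy(1000, 2, path_loss_exponent=4.0))  # 8.0
```

The gap between the one-hop and multi-hop routes widens sharply as the path-loss exponent grows, which is why the routing choice matters so much under real-world conditions.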

How does a system decide which network to use from the perspective of battery power? The choice is easy if the user has said that minimizing energy consumption or maximizing performance is the primary consideration, but this simple choice is not likely to satisfy users all the time. This energy-performance trade-off complicates decisions regarding whether and how much to communicate, as well as what network to use.

Monetary Trade-Off

Two important characteristics that will differentiate competing network providers are performance and cost. The performance characteristics offered will differ in terms of bandwidth, latency, and reliability, but they will also differ in terms of cost. Today, service providers charge a flat monthly fee or by units of connect time. Future service providers may well charge in units of kilobits or packets.

The cost of transferring a piece of data can thus be estimated based on its size. Because mobile clients will be able to seamlessly switch from one network to another, they will be able to choose the network provider best able to service each data transfer. These network providers will compete for traffic based upon cost and performance. Mobile clients, then, will need to balance cost and performance to choose a service provider.
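A minimal sketch of such a cost/performance choice follows. The provider names, bandwidths, and per-kilobyte tariffs are invented for illustration; a real client would also weigh latency, reliability, and energy.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    bandwidth_kbps: float
    cost_per_kb: float  # currency units per kilobyte (hypothetical tariff)

def choose_provider(providers, size_kb, max_cost=None):
    """Pick the fastest provider whose estimated cost for this transfer
    stays within the user's budget; fall back to the cheapest provider
    when no one meets the budget."""
    affordable = [p for p in providers
                  if max_cost is None or p.cost_per_kb * size_kb <= max_cost]
    if affordable:
        return max(affordable, key=lambda p: p.bandwidth_kbps)
    return min(providers, key=lambda p: p.cost_per_kb)

nets = [Provider("gsm", 9.6, 0.001), Provider("satellite", 64.0, 0.02)]
print(choose_provider(nets, size_kb=100, max_cost=1.0).name)  # gsm
print(choose_provider(nets, size_kb=100, max_cost=5.0).name)  # satellite
```

With a tight budget only the cheap, slow network qualifies; with a generous budget the faster network wins, mirroring the cost-performance trade-off described above.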

How can a mobile client decide which network to use from the perspective of financial cost? Once again, the choice is easy if the user has said that minimizing cost or maximizing performance is the primary consideration. Unfortunately, as before, this simple decision is not likely to serve all users all the time. Just as people don’t typically send all of their physical documents via a single class of delivery service (e.g., first class mail or overnight delivery), mobile users will want to have different classes of network service available to meet different needs at different times. Although a user may want to minimize cost most of the time, certain data may be so critical to the user’s work that she may be willing to pay substantially more money for it to be delivered quickly. Thus, an important trade-off that mobile systems will face is the cost-performance trade-off.

Need for Assistance

Balancing the financial and energy costs of mobile communication is a very difficult problem, one that early systems are unlikely to solve satisfactorily without assistance from their users. The key challenge is that users’ requirements change based upon current needs and the importance of the data being transferred. Systems, in general, have little knowledge regarding the user’s current needs or the importance of a piece of information to the user’s work. In order to make wise decisions regarding network usage, systems will need more information—information that is available only from the user.


A translucent system must balance its need for information with the burden that it imposes on the user. Obviously, such a system should follow generally accepted principles of HCI design [2], but it must also follow more stringent guidelines.

User assistance should add value to the system. User assistance comes at a high price: user attention. The benefit of that assistance must be tangible. The resulting system must offer better performance, or better availability, or better usability.

User assistance should be optional. Because user assistance requires user attention, the system should not require the user to provide assistance. When no assistance is offered, the system should make decisions based upon the best information available.

Translucent systems must be unobtrusive. The user’s goal is not to babysit the mobile client, but to complete his work. These systems should alert users to critical events and allow users to influence those events (when possible). The system must not demand immediate attention, must not be annoying in its interactions, and must not "cry wolf."

Interacting with the User

A key question then is how to get meaningful assistance from the user while following these guidelines. There are three components to a translucent system supporting a mobile user:

alerting users to important events

balancing demands for the network

choosing between network providers

These components must work together, but their solutions may differ.

First, users must be alerted to important events. One metaphor for how to accomplish this is the dashboard interface. Indicator lights present a small amount of information in a minimal amount of space. They do less well at presenting detailed information, but this difficulty is easily remedied by making the interface interactive and allowing the user to request further information when it is needed. We have used this metaphor to build a translucent interface to a distributed file system [1]. Usability testing has shown that users understand the events presented by the interface.

Alerting users to important events is, perhaps, the most important characteristic of a translucent system. This notification serves to set user expectations for system behaviors according to the current operating conditions. Systems that adapt to network bandwidth changes of four or more orders of magnitude must behave differently depending upon their current network connectivity. To avoid confusion and frustration, user expectations must match the changing network environment. Thus, even a system that offers only a notification feature presents a useful amount of translucence.

Second, users must be given the opportunity to control how limited network resources are used by the system. The user’s top priority may be propagating updates back to his colleagues at home or it might be transferring data from home to his current location. The user may only need to propagate a small fraction of the updates that he has made or to transfer a small subset of the data that needs to be fetched. When network resources are extremely scarce, the system can’t make these choices automatically. Translucent systems must allow the user opportunities to balance these competing demands.

Third, users must be given the opportunity to control communication expenditures (both the financial cost and energy consumed). After all, the user is responsible for paying the bill and the user must face the consequences of a squandered battery. One possible metaphor for how to present networking decisions to users is the postal delivery model. Systems could present users with different classes of network delivery service, each with a different cost. These costs would include both financial and energy components. Users might choose a default service and selectively change that service as their needs dictate.

The postal delivery metaphor, however, does not solve our problem entirely. People are generally aware of the mail they send. It is easy for them to make decisions about individual pieces of mail. In the case of mobile clients, however, users are not always aware of the network traffic necessary to service their requests. Further, they cannot become aware of that traffic in its entirety or they would never get anything done! If we apply this metaphor to our problem, we must allow users to ascribe delivery decisions to an entire class of activities.

Another interaction technique would exploit a banking model, where users have network connectivity accounts (perhaps one for money and one for energy). As the system debits these accounts, it might provide audio feedback to users. In this way, users could track their network consumption indirectly. If the accounts were being depleted too quickly, they could change their delivery specification. Users could have a simple graphical interface that would allow them to increase performance (at the expense of money, energy, or both) or minimize cost (at the expense of performance).
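The banking model could be sketched as follows. The budget figures and the 20% low-water mark are arbitrary illustrative choices, and the audio feedback is reduced to a boolean flag returned to the caller.

```python
class ConnectivityAccount:
    """Banking-model sketch: the system debits money and energy budgets
    per transfer and flags when either budget runs low (the paper
    suggests audio feedback; here a boolean stands in for it)."""
    def __init__(self, money, energy_j, low_fraction=0.2):
        self.money = self.money0 = money
        self.energy = self.energy0 = energy_j
        self.low_fraction = low_fraction

    def debit(self, money_cost, energy_cost):
        self.money -= money_cost
        self.energy -= energy_cost
        return self.low()  # True -> give the user feedback

    def low(self):
        return (self.money < self.low_fraction * self.money0 or
                self.energy < self.low_fraction * self.energy0)

acct = ConnectivityAccount(money=10.0, energy_j=100.0)
print(acct.debit(1.0, 10.0))   # False: both budgets still healthy
print(acct.debit(8.5, 10.0))   # True: money below 20% of its budget
```

On a low-budget signal the user could then switch their default delivery class, exactly the indirect tracking the banking model aims for.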


The question of how to balance the need to communicate over mobile networks with the costs involved is a difficult one. Users will require some amount of control, yet too much control will make the systems unusable. These interactions need to be reviewed in detail to balance the needs of the users with the burdens imposed. In this position paper, we have identified an important problem faced by system designers. We have also suggested some preliminary ideas for addressing that problem.


[1] Ebling, Maria R. Translucent Cache Management for Mobile Computing. Doctoral Dissertation, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, March 1998.

[2] Nielsen, Jakob. Usability Engineering. Boston: AP Professional, 1993.

Developing Interfaces For Collaborative Mobile Systems


Keith Cheverst, Nigel Davies, Adrian Friday,


Distributed Multimedia Research Group, Department of Computing, Lancaster University, Lancaster, LA1 4YR.


This paper describes the issues encountered when developing user interfaces for collaborative multimedia applications designed to operate in unreliable mobile networking environments. To provide end-users with some degree of dependability, applications need to provide increased levels of user awareness so that users can adapt their style of interaction to match the current quality of communications. The application described in this paper achieves this by presenting users with graphical feedback when the constraints imposed by the network violate the collaborating group's various communications requirements. Because traditional distributed development platforms tend to mask detailed network information from the application, the development platform was enhanced to enable the flow of information between the network and application level services and vice versa.


Despite the well established research field of CSCW and the growing popularity of mobile computing technologies, relatively little research has examined the issues of developing collaborative mobile systems. This paper describes research carried out under the MOST project [MOST,95] which focused on developing mobile computing support for field engineers in the safety critical domain of the U.K. power distribution industry. MOST was a collaborative project involving a partnership between the Computing Department, Lancaster University and EA Technology Ltd, Capenhurst. The project ran from April 1993 to September 1995 and was funded jointly by EPSRC and the DTI.

In order to support field engineers, MOST produced a prototype application which was arguably the first collaborative mobile application capable of adaptation in a heterogeneous environment [Cheverst,94]. To date, the project has provided the most complete study into the issues surrounding the development of collaborative mobile systems.

The MOST Application Domain

In order to provide electricity to its consumers the power distribution industry has to manage a mass of complex cabling, and each REC (Regional Electricity Company) has to manage the supply of electricity to approximately 1.8 million consumers. Problems with power distribution arise quite frequently, and when consumers are left without supply there is a strong financial incentive for the REC to restore supply as soon as possible. Usually when a fault occurs on the network it is necessary to re-route supply via an alternative path; this is known as 'switching'. Field engineers are required to physically perform switching operations at switching stations.

A control centre is responsible for coordinating the work of field engineers and for maintaining an up-to-date view of the network state. For reasons of safety, it is crucial that such a view be maintained and that no inconsistencies regarding the current network state exist. For example, the control centre needs to be certain that a section of network has been ‘earthed’ before instructing a field engineer to perform a repair on that section of network. In order to maintain the consistency of network views the control centre imposes a sequential ordering on all operations affecting the network.

Typical Fault Scenario

The type of cable fault that might occur in part of the LV (or Low Voltage) network is illustrated in figure 1.

Figure 1: Typical cable fault scenario.

Prior to the cable fault, the housing estate received supply from 'Feed A' and the factory received supply from 'Feed B'. When a cable fault occurs as shown, however, the housing estate suffers a complete loss of supply. The control centre is thus required to coordinate a switching operation to return supply to the housing estate. Once certain of the fault's location, the control centre analyses the current view of the network in order to find a way of re-routing supply to the housing estate. In this scenario, the control centre decides to restore supply by utilising the factory's reserve power feed and diverting power from the factory to the housing estate. Achieving this re-routing requires field engineers to perform switching operations at switching stations '3' and '2' respectively. The ordering of these two operations needs to be guaranteed by the control centre in order to prevent the factory being left without supply; the control centre ensures the correct ordering by imposing a sequential ordering on all operations affecting the network. A switching operation should also be performed at switching station '1' in order to make the damaged cable safe for later repair work.
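The sequential ordering imposed by the control centre can be sketched as a simple sequence-number protocol. This is our illustration of the idea, not the industry's actual procedure: each issued operation carries a centrally assigned sequence number, and an operation may only be confirmed once all earlier ones have been.

```python
import itertools

class ControlCentre:
    """Toy model of the control centre's sequential ordering: every
    switching operation receives a sequence number at issue time, and
    confirmations must arrive strictly in that order."""
    def __init__(self):
        self._seq = itertools.count(1)
        self.confirmed = 0

    def issue(self, station, action):
        return {"seq": next(self._seq), "station": station, "action": action}

    def confirm(self, op):
        if op["seq"] != self.confirmed + 1:
            raise RuntimeError("out-of-order switching operation")
        self.confirmed = op["seq"]

centre = ControlCentre()
close_reserve = centre.issue("3", "close reserve feed")  # must happen first
divert = centre.issue("2", "divert supply to estate")
centre.confirm(close_reserve)
centre.confirm(divert)
print(centre.confirmed)  # 2
```

Confirming the station '2' operation before station '3' would raise an error, which is precisely the guarantee that keeps the factory from losing supply in the scenario above.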

Requirements of the Power Distribution Industry

MOST conducted an extensive requirements capture exercise, which involved members of the project team conducting individual interviews with various members of the power distribution industry. This exercise identified the following set of requirements:

The Need for Integration between Utilities Companies

The power distribution industry, in common with all utilities industries, deals with geographically based information such as cable or road networks. Currently, there is little sharing of such information between the various utilities industries and as a result unnecessary interference between them occurs on a regular basis. A common example of this interference is the accidental fracturing of one company's cable or pipe by another company laying some new cable of their own. In order to allow a greater level of integration and interoperability between the various utilities industries some form of common or open standards needs to be devised. This could result in a situation whereby a unified, constantly updated, network diagram becomes available to all utilities companies, enabling extremely useful inter-utility collaboration and cooperation and resulting in significant gains in safety and efficiency across the utility industries. For example, if one utility company planned to perform extensive cable maintenance on a particular street then this information could be shared by illustrating the plans on a unified network diagram. If another utility company had scheduled work on that street then one would hope that the two utilities would cooperate to the extent that the street would only be dug up once, thus reducing disruption and the cost to the utilities companies involved. An attempt at producing such a unified network diagram is currently in development in the form of the Common Street Works Register (CSWR).

The Need for Decentralisation of the Work Load

The power distribution industry currently operates a centralised control structure in which all operations are coordinated and serialised by the control centre. This structure has the inherent problem that during periods of intensive activity, e.g. during an electrical storm, the control centre can become a system bottleneck. This situation is clearly not acceptable, and a less centralised control structure needs to be put in place. Ideally, this would be achieved by distributing (i.e. replicating) the current network view amongst all concerned, rather than permitting the control centre to hoard the current view. There are obviously strong implications for maintaining consistency where multiple copies of the current network view exist. These implications are made all the more dramatic when one considers the unreliable communications infrastructure available for sending updates between the control centre and mobile field engineers.

The Need to Improve the Distribution of Information to Field Engineers

One function of the control centre is to act as a central repository of information such as composite mains records and network schematics. The problem with this approach is that whenever a field engineer requires such information he or she needs to visit the control centre in order to collect the relevant paperwork. Maintenance field engineers are also required to visit the control centre to receive their latest job instructions and to query any of the company's database systems. Substantial improvements in efficiency could be made by simply enabling field engineers to access this information whilst out in the field. This could be achieved by providing the field engineer with an appropriately equipped portable computer, i.e. one containing a CD-ROM to store largely static information and a modem for receiving dynamic updates and for enabling remote database access.

The Need for Improved Collaboration

Currently field engineers are provided with little support for collaborating with one another and with members of the control centre. Gains in operational efficiency could be achieved by enabling field engineers to reliably share information and ideas between one another. Many of the problems encountered with verbal communication, such as ambiguities, could be solved by providing field engineers with tools enabling them to exchange and share multimedia information such as text, graphics and sound in a structured way. For example, given the ability to share graphical information, an engineer could demonstrate to other engineers exactly where on a network diagram he or she suspected there to be a fault.

Characteristics of Mobile Computing Environments

Mobile computing environments inherit the associated problems of both heterogeneous processing and heterogeneous networking.

The heterogeneous processing element of mobile computing environments involves the need to manage relatively low-power mobile hosts including PDAs, handheld PCs and notebook PCs. Such mobile hosts also tend to have limited resolution displays and a variety of different operating systems e.g. NewtonOS and WindowsCE.

The heterogeneous networking element arises from the need to utilise different networking technologies in order to maintain network connectivity whilst mobile. For example, mobile computers can be either disconnected, weakly connected by low speed wireless networks such as GSM, or fully connected by high speed networks ranging from Ethernet to ATM. Figure 2, below, demonstrates the range of connectivity available in a mobile environment and illustrates the basic trade-off between the freedom of movement and available bandwidth offered by a range of networking infrastructures.

Figure 2: Communications Characteristics in a Heterogeneous Networking Environment

The problem faced by system developers when building mobile applications is that mobile users can roam between areas with different network infrastructures, which can result in rapid and massive fluctuations in the quality of service (QoS) provided by the underlying communications infrastructure. For example, a user might begin the day with their portable computer docked to a docking station with a high bandwidth (i.e. 100 Mbps) ATM network link. Later on, the user may choose to undock their portable and move around their department whilst maintaining network connectivity through a lower bandwidth, local area RF (radio frequency) based network such as WaveLAN (providing a maximum bandwidth of 2 Mbps). When required to leave the department building, the user could still achieve network connectivity by utilising the wide area but low bandwidth (i.e. 9.6 kbps) GSM service. However, whilst using this service the user might occasionally enter areas referred to as 'coverage blackspots' and temporarily lose network connectivity.

The MOST Approach

After considering the general set of requirements for improving the operational efficiency of power distribution companies described in section 2.1 and the implications of operating in a mobile environment described above, the MOST team produced the following set of application and platform requirements:

Ability to Operate in a Heterogeneous Processing Environment

The application and its supporting platform must be capable of operating in a heterogeneous processing environment. If the goal of inter-utility collaboration is ever to be realised then the application must be able to exchange information between a variety of different processing architectures.

Ability to Operate in a Heterogeneous Networking Environment

Within the utilities companies there is a wide range of different networking infrastructures in use. For this reason the prototype groupware application and supporting platform should be capable of operating over different networking infrastructures and the varying levels of service that these infrastructures may provide. Indeed, where possible the platform should make use of extra bandwidth and/or reliability where it is available and make intelligent compromises where it is not. Two examples of the different networking infrastructures that the platform needs to operate over are: analogue PMR (with throughput of approximately 2.4 kbits/sec, high latency and long periods of complete disconnection) and wired ethernet (providing relatively high bandwidth and a reliable connection).

Ability to Support the Mobility of Field Engineers

In addition to the requirement for supporting operation in a heterogeneous networking environment the application also needs to support the mobility of field engineers. This means that the prototype groupware application should be tailored to run on some form of portable end-system such as a portable PC or handheld PC.

Ability to Handle Multimedia Information

The application and its supporting platform must be capable of effectively handling multimedia information. Field engineers need to be able to view and manipulate a variety of different media formats. Examples of these include scanned bitmap images, line-based (vector) drawings, text and audio.

Ability to Support Spatial Information

All utilities companies have extensive requirements for referencing the geographic location of their network equipment. Indeed, field engineers make extensive use of composite mains records to obtain a geographical ‘fix’ on the location of any part of the LV network. In order to enable field engineers to work with geographically referenced information, the prototype groupware application needs to support and correctly interpret spatial information.

Ability to Support Flexible Collaboration

The application needs to provide support for flexible collaboration. Specifically, this means supporting a range of interaction styles from the highly synchronous to asynchronous. Tools to support collaboration should include a shared whiteboard facility to enable field engineers to share geographical information such as network diagrams. In addition to this the application should provide tools that allow field engineers to graphically annotate network diagrams and synchronously share these annotations with other field engineers. Application support for asynchronous collaboration could take the form of a multimedia enhanced E-mail system. For example, this would enable the control centre to issue field engineers with job instructions without requiring immediate acknowledgment from the receiving engineer.


The development of an application to support mobile multimedia collaboration is a novel concept and it is difficult to know exactly what support the prototype groupware application needs to provide. In addition, different utilities companies, although they share broadly common requirements, are likely to differ, and their requirements are likely to change over time as working practices change. For this reason, the application and supporting platform need to be designed in such a way that they are readily expandable.

The Open Distributed Processing Platform

Owing to the end-users' requirement for interoperability within heterogeneous networking and processing environments, MOST used the ANSAware [APM,92] Open Distributed Processing (ODP) [ISO,92] platform to build its collaborative mobile system. ANSAware is APM Ltd's partial implementation of the Advanced Networked Systems Architecture (ANSA) [APM,89] and has been influential in the specification of RM-ODP.

The ODP model can be viewed and managed from a number of different viewpoints. Of most relevance to this paper is the computational viewpoint in which the ODP model is based on a location independent object-based model of distributed systems. In this model, interacting entities are treated uniformly as objects, i.e. encapsulations of state and behavior. Objects are accessed through interfaces and objects offering services (known as servers) are made available by exporting their interface reference to a database of interface references known as a trader. An object wishing to interact with a service (known as a client) must first import that server’s interface reference. Once the client has the interface reference it can proceed to make an invocation on the server.
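The export/import/invoke cycle described above can be sketched in a few lines. This toy trader stands in for ANSAware's real trader and is not its API; interfaces are reduced to dictionaries of callables.

```python
class Trader:
    """Minimal sketch of the ODP trading model: servers export interface
    references under a service name; clients import a reference and then
    invoke the service through it."""
    def __init__(self):
        self._offers = {}

    def export(self, service_name, interface):
        self._offers[service_name] = interface

    def import_(self, service_name):
        return self._offers[service_name]

# A server object exports its interface (here just a dict of callables)...
trader = Trader()
trader.export("diagram-store", {"fetch": lambda key: f"<diagram {key}>"})

# ...and a client imports the interface reference and invokes the server.
store = trader.import_("diagram-store")
print(store["fetch"]("LV-42"))  # <diagram LV-42>
```

The client never needs to know where the server object actually resides, which is the location transparency that the following paragraph discusses.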

The platform provides a number of abstractions or transparencies which conveniently mask network and processing heterogeneity thus enabling operation over a variety of machine/network configurations. One example of a transparency is group transparency which allows a group of multiple services to be invoked via a single group interface. Other transparencies identified in ODP include location, access, concurrency, replication, migration and failure transparencies.

Platform Modifications to Support Mobility

Current research on user interface issues for cooperative systems without reliable communications advocates an approach based on the concept of increased user awareness [Dix,95]. The basic assumption behind this concept is that members of a collaborating group should not be forced to make assumptions regarding the current state of their connectivity with the rest of the group. Instead, the system should provide feedback to make group members fully aware of the level of group communications currently available to them. Only by increasing user awareness can collaborating users be expected to regard the collaborative mobile system as dependable.

Figure 3: Information flow required to support mobile applications.

In order to be able to provide feedback to users regarding the current state of the underlying communications environment, the system must support the flow of information from the network to application level services. In practice, application services require support for requesting specific minimum level-of-service guarantees from the network; when a requested guarantee cannot be met, the application receives an appropriate notification. Supporting this flow of information also enables applications to react or adapt [Katz,94] their own behaviour, e.g. by reducing the amount of data produced as the bandwidth falls. Figure 3 shows the information flow required between the application and the platform in order to support mobile applications.
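The information flow of figure 3 might be sketched as follows, with QoS reduced to a single bandwidth figure. The class and method names here are invented for illustration, not QEX's actual interface: the application requests a minimum guarantee and registers a callback, and the platform notifies it when the guarantee is violated.

```python
class QoSBinding:
    """Sketch of an explicit QoS-driven binding: the application states
    a minimum bandwidth requirement and supplies a violation callback;
    the platform reports measurements as conditions change."""
    def __init__(self, min_bandwidth_kbps, on_violation):
        self.min_bandwidth_kbps = min_bandwidth_kbps
        self.on_violation = on_violation

    def report_bandwidth(self, measured_kbps):
        # Called by the platform as network conditions change.
        if measured_kbps < self.min_bandwidth_kbps:
            self.on_violation(measured_kbps)

alerts = []
binding = QoSBinding(64.0, on_violation=alerts.append)
binding.report_bandwidth(2000.0)  # ethernet-class link: no alert
binding.report_bandwidth(9.6)     # GSM-class link: guarantee violated
print(alerts)  # [9.6]
```

On receiving the notification, an application could both adapt its own output and raise the graphical feedback to the user that the paper describes.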


Unfortunately for the development of mobile applications, the standard ANSAware platform does not support this flow of information: the ANSA model specifies the maintenance of complete network transparency and, as such, keeps low-level network information inaccessible to application level services. For this reason, the MOST project found it necessary to modify and extend ANSAware to make it suitable for operation in a heterogeneous environment [Davies,96].

A new execution protocol called QEX (Quality-of-service driven remote EXecution) [Friday,96] was developed to replace ANSAware’s default REX protocol and provide support for explicit Quality of Service (QoS) driven bindings. These explicit bindings enabled detailed network QoS information to be received by interested clients. The QEX protocol was also designed to adapt to the characteristics of the underlying network by using statistics based on packet size and associated round-trip times. This contrasts with the REX protocol, which was simply configured for operation over an Ethernet and, consequently, performed poorly over low-bandwidth mobile channels.

A mobile link manager called S-UDP was also written, capable of multiplexing data across low bandwidth links such as serial or dial-up connections. S-UDP is analogous to a SLIP or PPP driver in that it supports UDP based point-to-point packet transport semantics. In addition, S-UDP provides a number of useful facilities, including QoS based information such as the call charging strategy currently in use and the times taken for call connection and hang-up. Extensions to the standard UDP message passing service were made to enable RPC traffic to use the appropriate transport service, i.e. the standard network service or the mobile link manager. The QEX protocol uses the modified UDP message passing service to direct data traffic via either the network interface or the mobile link manager (i.e. S-UDP), depending on whether a fixed network connection is currently available.

During the development of the platform a public domain network emulator [Davies,95] was used in order to test the platform’s performance over various simulated network topologies.

The Initial MOST Prototype Application

The MOST application was designed as an expandable toolkit comprising the following modules :-

A group coordination module to enable the establishment and maintenance of conferences of engineers. The group coordination module is also used for launching applications, both in single user mode and across members of the current group.

A shared GIS module enabling spatially referenced information such as maps and circuit diagrams to be viewed, manipulated and annotated using a customised GIS system. A number of annotation tools are available to the engineer, including free-line drawing tools and tools for drawing set shapes such as rectangles, lines and ellipses. It is worth noting that the GIS module uses real world (i.e. Northing/Easting) coordinates as opposed to screen coordinates. This means that members of a collaborating group can view the same map at different scales of magnification, and that map annotations automatically appear at the correct geographic points on the map regardless of the current scale. The notion of using some form of generic X Windows based tool such as XTV [Abdel-Wahab,91] to achieve the updating of shared displays was rejected at an early design stage. Apart from the lack of WYSIWIS flexibility afforded by sharing X Windows events, the large quantity of window/system-specific data transmitted by such an approach is clearly unacceptable when using low bandwidth communication links.

A remote database module enabling engineers and control centre personnel to search and download records from a remote database. The module is capable of adapting to the throughput of the available communications channel by varying the quantity of information downloaded when a set of records matching a search are returned.

A collaborative image viewing module. This module enables engineers to display images, such as digitised manual pages, in a variety of popular formats both locally and across the entire group.

A job dispatch module to enable control centre personnel to create ‘Job Dispatch’ instructions. Such instructions contain both textual information and geographic information (e.g. a network schematic). On finishing a job instruction the engineer can complete and return a job reply form.
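The GIS module’s use of real-world rather than screen coordinates can be illustrated with a small sketch. The function and the coordinate values below are hypothetical; the point is only that a shared annotation expressed in world coordinates renders at the correct geographic position for each member regardless of that member’s scale of magnification.

```python
# Illustrative sketch (names and values hypothetical): annotations are
# exchanged in real-world (easting, northing) coordinates, so each group
# member can render them at their own magnification.

def world_to_screen(easting, northing, origin, pixels_per_metre):
    """Map a world coordinate onto one member's screen at their own scale."""
    ox, oy = origin  # world coordinate of the member's screen origin
    x = (easting - ox) * pixels_per_metre
    y = (northing - oy) * pixels_per_metre
    return (x, y)

# The same shared annotation...
annotation = {"shape": "cross", "easting": 351_200.0, "northing": 412_850.0}

# ...rendered by two members viewing the same map at different scales:
member_a = world_to_screen(annotation["easting"], annotation["northing"],
                           origin=(351_000.0, 412_800.0), pixels_per_metre=2.0)
member_b = world_to_screen(annotation["easting"], annotation["northing"],
                           origin=(351_000.0, 412_800.0), pixels_per_metre=0.5)

print(member_a)  # (400.0, 100.0)
print(member_b)  # (100.0, 25.0)
```

Had screen coordinates been shared instead, the annotation would land at the same pixel on every display and so drift away from the intended geographic point whenever scales differed.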

Figure 4: Object Structure of the Application Prototype.

The application modules were implemented as a set of RM-ODP compatible objects. This led to the MOST application having the object structure shown below in figure 4.

User Interface Issues

In order to support highly mobile field engineers the application’s graphical user interface (GUI) was designed to make the best possible use of the small display area available on compact portable machines. For this reason, extensive use was made of pop-up windows and scrollable display areas.

Figure 5: GUI to the Group Coordinator module.

The group manager’s GUI was designed to support an increased level of user awareness by providing feedback to group members regarding the state of connectivity within the group. This was achieved by using coloured icons. For example, a disconnected group member’s icon would be displayed with a red background, while a connected group member’s icon would be displayed with a green background. An example of the GUI to this module is shown in figure 5.

On the top left hand side of the group manager’s main window are icons representing the modules which are available to the user (the globe represents the GIS module). Beneath these icons are four icons for starting and stopping application modules, for canceling certain operations before they complete and for quitting the group manager application.

On the top right hand side of the main window is a scrollable window containing icons representing users that can participate in conferences. Below this scrollable window is a further set of four icons enabling members to be tentatively added to and removed from the group membership, and for forcing these tentative membership changes to take effect.

In the centre of the main window is a row of icons representing current conference participants. Under each participant’s icon is a column of further icons which represent the modules which that user is currently running in conference (or shared) mode.

The user interface to the group manager module was designed to enable the full range of the module’s functionality to be tested. In a real world setting, different user interfaces to the group manager module would be available for different personnel. For example, the user interface presented to control centre personnel might offer similar power and functionality to the user interface shown above, whereas the user interface offered to certain field engineers might provide a much more restricted level of functionality; for example, the ability to stop modules being run by other group members might not be available. By reducing the functionality of the interface available to field engineers, the interface could in turn be made less complex and thus easier to use.

The GIS module provides field engineers with both public and private views. Actions performed by an engineer in their private view (window) are kept local. Alternatively, actions performed in an engineer’s public view are shared with the rest of the group. The GIS module also provides field engineers with support for storing and managing history files or ‘Clipboards’ containing lists of operations that have been performed on a particular view. Clipboards can be viewed in a scrollable text window and sent to new conference members (or existing members that have experienced periods of disconnection from the rest of the group) in order to bring them up to state and thus synchronise members’ views. Clipboards can also be used to ‘package’ a series of operations together, thus enabling a complex sequence of drawing operations to be received by other members as one logical sequence.
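A minimal sketch of the clipboard idea follows, with hypothetical class names: a clipboard records an ordered list of operations that can later be replayed into another view, e.g. to bring a newly joined or reconnecting member up to state, or to deliver a composed sequence as one logical unit.

```python
# Hypothetical sketch: a 'clipboard' is an ordered log of drawing
# operations that can be replayed to synchronise another member's view.

class View:
    def __init__(self):
        self.operations = []

    def apply(self, operation):
        self.operations.append(operation)

class Clipboard:
    """Packages a sequence of operations as one logical unit."""

    def __init__(self):
        self.log = []

    def record(self, operation):
        self.log.append(operation)

    def replay_into(self, view):
        # Deliver the whole package, in order, e.g. to a rejoining member.
        for operation in self.log:
            view.apply(operation)

# An engineer composes red-lining operations privately while disconnected...
clipboard = Clipboard()
for op in ["draw rectangle", "draw line", "draw cross highlight"]:
    clipboard.record(op)

# ...then, on reconnection, shares them via another member's public view.
public_view = View()
clipboard.replay_into(public_view)
```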

Figure 6: GUI to the GIS module

By using clipboard files in conjunction with private and public views, field engineers are given the freedom to alter their style of collaboration between synchronous and asynchronous. An engineer may choose to switch their style of collaboration either because of their current task or because a problem with communications forces them to do so. For example, when network connectivity is good an engineer might choose to work synchronously with the rest of the group and perform all drawing operations in the public view. However, if an engineer becomes temporarily disconnected from the rest of the group then he or she could adopt a more asynchronous style of working by composing a series of drawing or ‘red-lining’ operations in their private view. When network connectivity resumed, the engineer could share the set of composed ‘red-lining’ operations by using a clipboard file to transfer the contents of his or her private view to the public view. The GUI to the GIS module is shown below in figure 6.

Evaluation of the Initial Prototype Application

Towards the close of the project the end-users were asked to test the functionality of the MOST application by using the application in a specially tailored trial scenario. The trial scenario involved a car accident causing damage to a wiring pit. In the trial, end-users were presented with portable computers running the MOST application and connected via GSM. End-users were then required to use the application to perform the appropriate steps to re-route supply and isolate the wiring pit in order to make it safe for repair.

The evaluation produced a number of interesting results. On the positive side, end-users found that the application provided them with sufficient information to enable them to switch between synchronous and asynchronous styles of collaboration depending on the current state of group connectivity. However, the application did not provide end-users with sufficient information concerning many of the constraints that an unreliable mobile communications environment can impose on group communication. The application’s shortcomings due to the communications environment are described below.

Insufficient Temporal Feedback

Users were given no feedback or appreciation of the fact that establishing a connection to the rest of the group using the GSM service could take over ten seconds. This left end-users frustrated as they wondered what was delaying their collaboration. It was clear that a future version of the application would need to concentrate on providing users with additional feedback, including information on temporal constraints imposed by the underlying network infrastructure. There are a number of scenarios (for example the switching of power supplies) where it is critical that group updates should be very close to synchronous. In such situations a field engineer might need to specify that all group members should receive their next highlight operation within two seconds. If any group members do not receive the highlight operation within two seconds then the engineer should be informed and thus given the opportunity to take appropriate action.

Insufficient Feedback on Group Consistency

In the initial prototype an engineer was not given feedback on whether or not members of the collaborating group actually received a group operation. Engineers should be able to stipulate whether it is necessary for any particular group members to receive their next group operation. For example, if some group members were merely informally monitoring events, then it might not be vital for them to receive all group highlighting operations. In such a situation a field engineer might want to specify that out of five group members it is only critical that "Joe" or "Dave" should receive his or her next highlighting operation. If either "Joe" or "Dave" did not receive the highlight operation then the engineer should be informed and thus given the opportunity to take appropriate action. However, if other group members did not receive the group update then the shared operation would still be deemed successful and no action would be taken despite the inconsistency of members’ views.

Need for a Flexible Quorum Size

The imposition of a fixed quorum size is too inflexible when group members are frequently disconnected from the rest of the group. Engineers should have the capability to specify the size of quorum that they wish to be used for their next group update. If, for example, an engineer specified a quorum of ‘all members’ and one group member was currently out of contact then, on performing his or her next group operation, the engineer should receive immediate feedback that the group operation cannot succeed. If, however, the engineer had less strict quorum requirements, e.g. that only three out of the five members are required to receive the group operation, then the group operation would be attempted.

Need for Flexible Ordering of Group Updates

Depending on the task currently being performed, the strict ordering of group updates might not be necessary. In an unreliable mobile environment, system performance can be greatly enhanced by relaxing strict ordering guarantees. Field engineers should be able to stipulate when it is critical that group updates should be received by group members in the same order that they were sent. For example, suppose an engineer wishes to perform the following operations in the public view :-

Operation 1. Clear Public View.

Operation 2. Display raster map.

Operation 3. Display square highlight.

Operation 4. Display cross highlight.

The engineer would probably want to stipulate that group members should receive operation 1 before operation 2 and operation 2 before operation 3. It would not, however, be necessary to receive operation 3 before operation 4. If the required ordering cannot be maintained then the engineer should be informed and therefore given the opportunity to take appropriate action.
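The partial ordering in this example can be expressed by recording, for each operation, only the operations it must follow; operations 3 and 4 then remain mutually unordered. The representation below is an illustrative sketch only, not the actual mechanism used by the group service.

```python
# Hedged sketch: each operation lists only the operations it must follow,
# so the square and cross highlights remain mutually unordered.

must_follow = {
    "clear public view": [],
    "display raster map": ["clear public view"],
    "display square highlight": ["display raster map"],
    "display cross highlight": ["display raster map"],  # not after the square
}

def delivery_order_valid(sequence):
    """Check that a member received operations consistently with the stated
    partial order; a violation would trigger feedback to the engineer."""
    seen = set()
    for op in sequence:
        if any(dep not in seen for dep in must_follow[op]):
            return False
        seen.add(op)
    return True

# Both of these delivery orders are acceptable...
print(delivery_order_valid(["clear public view", "display raster map",
                            "display square highlight",
                            "display cross highlight"]))  # True
print(delivery_order_valid(["clear public view", "display raster map",
                            "display cross highlight",
                            "display square highlight"]))  # True
# ...but receiving the map before the clear is not.
print(delivery_order_valid(["display raster map", "clear public view",
                            "display square highlight",
                            "display cross highlight"]))  # False
```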

Lack of Support for Managing the Cost of Group Operations

The prototype groupware application currently provides no facilities for managing communications costs. Maintaining minimum costs is a high priority for the electricity companies operating in the private sector. Where communications are provided by a company’s own PMR system, managing the cost of calls between collaborating engineers is not usually an issue. However, the REC collaborating with the MOST project, despite using PMR, issued a number of its engineers with cellular phones in an attempt to overcome the problem of PMR coverage blackspots. Calls made with these cellular phones were charged at several pence per minute.

In situations where communications cost is an issue, the application should provide facilities for enabling field engineers to control the cost of propagating a group update. For example the application could provide a facility for allowing an engineer to stipulate that his or her next group operation should only be propagated to the group if this can be achieved at a cost of under five units.

The Development of an Enhanced Prototype

An enhanced version of the prototype which overcame the shortcomings described above would be difficult, if not impossible, to produce without first creating some form of group service capable of handling the various constraints imposed on group communications by heterogeneous networks. Therefore, the QoS based ODP compliant group service described in [Cheverst,96] was built and is currently being used to reengineer the prototype MOST application. The three key properties of the group service that make it suitable for supporting the building of synchronous distributed groupware applications over unreliable mobile communications are :-


Flexibility

The most important property of the group service is flexibility. For example, the group service enables the relaxation of message ordering and reliability guarantees. The ability to relax such guarantees is necessary in a weakly connected environment because the strict enforcement of such guarantees results in poor performance across the entire group when one or more of the group members become disconnected from the rest of the group. If the requirements of the application do not demand strict message ordering then the blocking of messages introduced by such ordering protocols is unnecessary.

The group service enables clients to specify the following set of constraints for determining the success of a group invocation :-

Quorum constraints which stipulate the quorum of group members required to service a group invocation.

Temporal constraints which stipulate the time out period within which either the entire group or specified individual group members must acknowledge receipt of a group invocation.

Ordering constraints which stipulate whether or not the entire group or specified individual group members must receive group invocations in correct sequence.

Reliability constraints which stipulate whether or not the entire group or specified individual group members must receive the group invocation.

Cost constraints which stipulate the cost which the client is prepared to pay in order for either the entire group or specified group members to evaluate the group invocation.
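The five constraint types listed above might be grouped into a single specification attached to each group invocation. The sketch below is illustrative only; the field names and the quorum check are assumptions made for the sketch, not the actual group service API.

```python
# Hypothetical sketch: the constraints an engineer may attach to a single
# group invocation, and an immediate-feedback check on the quorum constraint.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GroupConstraints:
    quorum: Optional[int] = None              # members required to service it
    deadline_seconds: Optional[float] = None  # acknowledgement time-out
    ordered_members: set = field(default_factory=set)   # must see correct order
    reliable_members: set = field(default_factory=set)  # must receive it
    max_cost_units: Optional[float] = None    # cost the client will pay

def check_quorum(constraints, reachable_members, group_size):
    """Immediate feedback: can the requested quorum possibly be met?"""
    required = constraints.quorum if constraints.quorum is not None else group_size
    return len(reachable_members) >= required

# An engineer requires that "Joe" reliably receives the next operation,
# within two seconds, with a quorum of three out of five members:
c = GroupConstraints(quorum=3, deadline_seconds=2.0, reliable_members={"Joe"})
print(check_quorum(c, reachable_members={"Joe", "Dave", "Sam"}, group_size=5))  # True
print(check_quorum(c, reachable_members={"Joe"}, group_size=5))                 # False
```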

Ability to Provide Feedback

In order to provide feedback to application level services, the group service enables application programmers to selectively break group transparency by enabling them to associate specific QoS based guarantees with group updates. If, when propagating a group update, one or more of these guarantees cannot be met then the group service can provide feedback to the application stating the guarantees which were violated.

Ability to Adapt

The group service is capable of performing intelligent adaptation. For example, the group service can save resources by not attempting to propagate a group invocation to any group members which are known to be currently disconnected and unreachable. To give a slightly more sophisticated example, consider a situation where a user has requested that his or her next group operation be completed within five seconds, but one of the group members has network connectivity provided by a GSM handset with a call set-up time of fifteen seconds. In this situation, the response of the group service depends upon whether the group member equipped with GSM is connected at the time the user issues the group operation. If the member is not connected at that time then the group service should not attempt to propagate the operation to that member, because the propagation could not occur within the specified time limit.
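This adaptation decision can be sketched as a simple filter over the group membership. The member attributes and function name below are hypothetical, invented only to illustrate the rule just described.

```python
# Illustrative sketch: skip members who are unreachable, and members whose
# channel set-up time alone would exceed the requested deadline.

def members_to_attempt(members, deadline_seconds):
    """Return the members to whom propagation should actually be attempted."""
    attempt = []
    for m in members:
        if not m["reachable"]:
            continue  # known disconnected and unreachable: save the resources
        # If the channel is not yet open, call set-up time eats the deadline.
        setup = 0.0 if m["channel_open"] else m["call_setup_seconds"]
        if setup < deadline_seconds:
            attempt.append(m["name"])
    return attempt

group = [
    {"name": "Joe",  "reachable": True,  "channel_open": True,  "call_setup_seconds": 15.0},
    {"name": "Dave", "reachable": True,  "channel_open": False, "call_setup_seconds": 15.0},
    {"name": "Sam",  "reachable": False, "channel_open": False, "call_setup_seconds": 15.0},
]

# With a five-second deadline, only Joe (already connected) can be reached:
print(members_to_attempt(group, deadline_seconds=5.0))  # ['Joe']
```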

Providing Increased User-Awareness through the User Interface

The reengineered MOST application required an enhanced GUI in order to provide field engineers with the following :-

a convenient and easy to use mechanism for associating constraints or guarantees with group operations,

a clear and graphic method for receiving feedback should the communications environment cause any of the specified guarantees to be violated or impossible to achieve.

To provide complete flexibility, the user interface enables guarantees to be made against individual group members in addition to the entire collaborating group. This is important because, in a mobile environment, communication difficulties might affect only certain individuals within a collaborating group. Also, in a given collaboration certain members of the group, e.g. a monitoring process, might not require the same communication guarantees as other group members.

Figure 7: Hierarchy used for structuring the specification of guarantees.

The key issue in the design of the enhanced user interface was how to design an intuitive and powerful mechanism for enabling users to stipulate combinations of guarantees. The adopted solution was based on the concept of Equal Opportunity [Thimbleby,90] using the hierarchy shown below in figure 7.

At the top of the hierarchy is the group, which on the interface has an associated group icon. The next level down the hierarchy contains the module, which has an associated module icon, and the member, with an associated member icon. At the bottom of the hierarchy are specific member/module pairings, with each pairing having an associated icon.

The mechanism for stipulating guarantees operates as follows: when the user selects one or more guarantees on layer n of the hierarchy those guarantees are automatically selected on everything which falls beneath layer n in the hierarchy. For example, if the user selects the temporal guarantee for member x, then, the temporal guarantee is automatically selected on all module/member pairings that are associated with member x. Similarly, if the user selects the reliability guarantee for module y, then the reliability guarantee is automatically selected on all module/member pairings that are associated with module y. By activating one or more of the guarantees on an individual member/module pairing the effect is constrained to the specified module associated with the specified member.
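This downward propagation rule can be sketched by tracking guarantees at the member/module pairings that form the base of the hierarchy. The names below are illustrative only.

```python
# Hypothetical sketch: selecting a guarantee at layer n of the hierarchy
# selects it on every member/module pairing beneath that layer.

members = ["Joe", "Dave"]
modules = ["GIS", "ImageViewer"]

# Guarantees are tracked per (member, module) pairing at the hierarchy's base.
selected = {(member, module): set() for member in members for module in modules}

def select_for_member(member, guarantee):
    for module in modules:
        selected[(member, module)].add(guarantee)

def select_for_module(module, guarantee):
    for member in members:
        selected[(member, module)].add(guarantee)

def select_for_group(guarantee):
    for pairing in selected:
        selected[pairing].add(guarantee)

select_for_member("Joe", "temporal")           # all of Joe's pairings
select_for_module("GIS", "reliability")        # all pairings involving GIS
selected[("Dave", "ImageViewer")].add("cost")  # one specific pairing only

print(sorted(selected[("Joe", "GIS")]))  # ['reliability', 'temporal']
```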

A screen shot showing the enhanced user interface is shown below in figure 8.

Figure 8: The enhanced user interface.

As can be seen from figure 8, underneath the group icon are five additional smaller ‘guarantee’ icons. From left to right, these represent: required ordering, maximum cost, required reliability, maximum delay and required quorum. For the member, module, and member/module pairing icons it does not make semantic sense to support a quorum guarantee, so beneath these icons there are only four ‘guarantee’ icons. The user can toggle each type of guarantee between active and inactive by simply clicking on the appropriate icon. When activated, the icon shows a pink border; otherwise the icon’s border colour is white.

By selecting the appropriate guarantee icons the user can stipulate the exact level of feedback required regarding the state of all subsequent group operations. For example, if a user selected the reliability guarantee for ‘Sam’ but not for ‘Joe’, and the next group operation failed to reach either ‘Sam’ or ‘Joe’, then only feedback that member ‘Sam’ did not receive the group operation would be given.

Example Usage of the Enhanced User Interface

This scenario illustrates the way in which the enhanced user interface might be used by a field engineer requiring synchronous GIS based collaboration but whose only contact with the rest of the collaborating group is via a GSM channel. Using the enhanced user interface, the engineer would select the ‘time guarantee’ icon beneath the GIS module icon. If the GSM channel was not open when the engineer specified the time guarantee then the group service would signal that the guarantee could not be met because of the channel’s ten second call set-up time. In order to inform the engineer that the required constraint could not be achieved, the ‘time guarantee’ icon would be shown with a red background. However, once the GSM channel became open, the red background would change back to pink, reflecting the fact that the temporal constraint (i.e. the call set-up time) no longer applied.

Consider now, the situation in which having selected the time guarantee the engineer proceeds with his or her GIS based collaboration. If any of the engineer’s group operations are not received by the rest of the group within the specified time period, then, the background of the time guarantee icon beneath the GIS module icon would change to red. In addition to this, the background of the time guarantee icon(s) beneath the icons representing those members who failed to receive the update within the time period would also change to red. Thus the engineer would be aware which group members were currently unable to partake in synchronous collaboration.

Concluding Remarks

Following an evaluation of the MOST application by end-users in the power distribution industry, the application was found to provide insufficient information concerning many of the constraints that operation in a mobile communications environment imposes on group communication. For example, users were given no indication that establishing a connection to the rest of the group using the GSM service could take over ten seconds, which left end-users feeling frustrated as they wondered what was delaying their collaboration.

An enhanced version of the application is currently being developed which concentrates on providing users with additional feedback, including information on temporal constraints imposed by the underlying network infrastructure. This version utilises a QoS based group service for handling the various constraints (e.g. available throughput, cost and time to establish a connection) imposed by the variety of possible network infrastructures in a generic way. This generic approach is preferable to using specific QoS based interfaces for each individual type of network infrastructure.

To summarise, the four key developments to have arisen from MOST are :-

A validated set of requirements obtained through an extensive requirements capture procedure for improving the efficiency of highly mobile field engineers using distributed mobile computing technologies.

A set of extensions to the ANSAware ODP platform in order to make it more suitable for supporting operation in a heterogeneous communications environment. This involved modifying the platform to enable information to flow between low level network protocols and applications.

An understanding of the implications for developing user interfaces for collaborative mobile systems. The crucial implication being the need to increase user awareness by providing users with sufficient feedback regarding the current state of communications.

A set of requirements for building a flexible QoS based group service to support group collaboration in a mobile environment. This service has enabled the MOST application to be re-engineered in order to provide an increased level of dependability and user-awareness.


[Abdel-Wahab,91] Abdel-Wahab, H. M. and M. A. Feit (91). "XTV: A Framework for Sharing X Window Clients in Remote Synchronous Collaboration", IEEE Tricomm '91: Communications for Distributed Applications and Systems, Chapel Hill.

[APM,89] APM Ltd. (89). "The ANSA Reference Manual Release 01.00", Architecture Projects Management Ltd., Cambridge, U.K.

[APM,92] APM Ltd. (92). "An Introduction to ANSAware 4.0", Architecture Projects Management Ltd., Cambridge, U.K.

[Cheverst,94] Cheverst, K., G. S. Blair, et al. (94). "Moving the 'Desktop' Into the Field", IEE Colloquium on Integrating Telecommunications and IT on the Desktop, Savoy Place, London, U.K.

[Cheverst,96] Cheverst, K., N. Davies, et al. (96). "Services to Support Consistency in Mobile Collaborative Applications", 3rd International Workshop on Services in Distributed Networked Environments (SDNE), Macau, China, IEEE Computer Society Press.

[Davies,95] Davies, N., G. S. Blair, et al. (95). "A Network Emulator To Support the Development of Adaptive Applications", 2nd USENIX Symposium on Mobile and Location-Independent Computing (MLIC), Ann Arbor, Michigan, U.S.

[Davies,96] Davies, N., A. Friday, et al. (96). "Distributed Systems Support for Adaptive Mobile Applications.", ACM Mobile Networks and Applications special issue "Mobile Computing - System Services".

[Dix,95] Dix, A. (95). "Cooperation without (reliable) Communication", IEE Symposium on mobile computing and its applications, Savoy Place, London, IEE.

[Friday,96] Friday, A., G.S. Blair, K.W.J. Cheverst, and N. Davies. (96). "Extensions to ANSAware for advanced mobile applications.", Proc. International Conference on Distributed Platforms, Dresden, 29-43.

[ISO,92] ISO (92). Draft Recommendation X.903: Basic Reference Model of Open Distributed Processing, ISO WG7 Commitee.

[ISO,92] ISO. (92). "Basic Reference Model of Open Distributed Processing - Part1: Overview and Guide to Use", Draft Recommendation X.901, International Standards Organisation.

[Katz,94] Katz, R. (94). "Adaptation and Mobility in Wireless Information Systems.", IEEE Personal Communications 1, 6-17.

[MOST,95] Davies, N., G. S. Blair, et al. (95). "Mobile Open Systems Technologies For The Utilities Industries", Remote Cooperation - CSCW for Mobile and Tele-workers. A. Dix, Springer Verlag.

[Thimbleby,90] Thimbleby, H. (90) "User Interface Design", Addison-Wesley Publishing Company, page 345, ISBN 0-201-41618-2.

Wireless Markup Language as a Framework for Interaction with Mobile Computing and Communication Devices



Jo Herstad
University of Oslo
P.O. Box 1080, Blindern
N-1080 Oslo
Tel/Fax: + 47 91 56 05 63

Do Van Thanh
Ericsson Innovation Lab
P.O. Box 34
N-1361 Billingstad
Tel: + 47 66 84 12 00

Steinar Kristoffersen
Norwegian Computing Centre
P.O. Box 114, Blindern
N-0314 Oslo
Tel: + 47 37 05 10 00


The Wireless Application Protocol (WAP) is the result of ongoing work to define an industry-wide standard for developing applications and services over wireless communication networks. The scope of the WAP working group is to define a set of standards to be used by service applications. This document gives a general overview of the Wireless Markup Language (WML), one element of the WAP architecture. This ongoing work is driven by the WAP Forum, a non-profit organisation established by Motorola, Nokia, Unwired Planet and Ericsson.

Background and motivation

With the rapid growth of wireless telecommunication on one side and the exponential expansion of the Internet on the other, the convergence of these two mainstreams is predictable. Wireless access to Internet content and services is indeed becoming an urgent demand from companies with employees on the move. However, the integration of these two technologies is currently done in an ad-hoc, inefficient, functionally limited and less user-friendly manner.

There are two reasons for this. First, wireless networks have much less capacity, i.e. less bandwidth, higher latency, less connection stability and less predictable availability, than the wired networks for which the Internet was designed. The second reason, although obvious, mostly goes unnoticed: the wireless networks and the Internet are intended for two completely different use situations, in which different metaphors apply.

In order to understand how to use mobile communication and computing devices, we devise our own internal pictures and models, according to our perception of how the devices and services work. Most mental maps and models are based on simple metaphors or pictures that are easy to understand and remember. With respect to the visual, auditory and haptic interaction modalities of mobile computing and communication devices, the everyday metaphors of the "desktop computer" and the "plain old telephone" are vital, in the sense that they are used in the design of mobile communication and computing devices such as the cellular phones and palmtop computers of today.

With the "desktop computer" metaphor, people usually associate an information device comprising a keyboard, augmented with a pointing device (e.g. a mouse), used to input commands and data to the information system, and a graphical display to output information to the user. The "plain old telephone" metaphor is commonly associated with a conversation device comprising a microphone to talk into, a loudspeaker to make the sounds audible and a keypad for dialing.

When the telecommunication and computing worlds converge, the "desktop computer" tends also to become a conversation device, e.g. an IP phone, and the "plain old telephone" evolves also to become an information device. It will, however, take time to reach a new metaphor common to both fields, if such an uncertain eventuality happens at all. The need to make the Internet available on current mobile handheld devices is obvious and urgent, and it implies the need for new presentation and interaction mechanisms for mobile computing and communication devices.

In this paper, a component of the WAP architecture, the Wireless Markup Language (WML), is presented as a framework that enables the use of new interaction mechanisms for mobile computing and communication devices. WML is a markup language based on [XML], and is intended for specifying content and user interfaces for narrowband devices, including cellular phones, pagers and PDAs.

The WAP model

The WAP programming model (see Figure 6) is similar to the WWW programming model. This provides several benefits to the application developer community, including a familiar programming model, a proven architecture, and the ability to leverage existing tools (e.g., Web servers, XML tools, etc.). Optimizations and extensions have been made in order to match the characteristics of the wireless environment. Wherever possible, existing standards have been adopted or have been used as the starting point for the WAP technology.


Figure 6. WAP Programming Model

WAP content and applications are specified in a set of well-known content formats based on the familiar WWW content formats. Content is transported using a set of standard communication protocols based on the WWW communication protocols. A micro browser in the wireless terminal co-ordinates the user interface and is analogous to a standard web browser.

WAP defines a set of standard components that enable communication between mobile terminals and network servers, including:

Standard naming model – WWW-standard URLs are used to identify WAP content on origin servers. WWW-standard URIs are used to identify local resources in a device, e.g. call control functions.

Content typing – All WAP content is given a specific type consistent with WWW typing. This allows WAP user agents to correctly process the content based on its type.

Standard content formats – WAP content formats are based on WWW technology and include display markup, calendar information, electronic business card objects, images and scripting language.

Standard communication protocols – WAP communication protocols enable the communication of browser requests from the mobile terminal to the network web server.

The WAP content types and protocols have been optimised for mass-market, hand-held wireless devices. WAP utilises proxy technology to connect the wireless domain with the WWW. The WAP proxy typically comprises the following functionality:

Protocol Gateway – The protocol gateway translates requests from the WAP protocol stack (WSP, WTP, WTLS, and WDP) to the WWW protocol stack (HTTP and TCP/IP).

Content Encoders and Decoders – The content encoders translate WAP content into compact encoded formats to reduce the size of data over the network.

This infrastructure ensures that mobile terminal users can browse a wide variety of WAP contents and applications, and that the application author is able to build content services and applications that run on a large base of mobile terminals. The WAP proxy allows content and applications to be hosted on standard WWW servers and to be developed using proven WWW technologies such as CGI scripting.

While the nominal use of WAP will include a web server, WAP proxy and WAP client, the WAP architecture can quite easily support other configurations. It is possible to create an origin server that includes the WAP proxy functionality. Such a server might be used to facilitate end-to-end security solutions, or applications that require better access control or a guarantee of responsiveness, e.g., WTA.

Example WAP Network

The following is for illustrative purposes only. An example WAP network is shown in Figure 7.



Figure 7. Example WAP Network

In the example, the WAP client communicates with two servers in the wireless network. The WAP proxy translates WAP requests to WWW requests thereby allowing the WAP client to submit requests to the web server. The proxy also encodes the responses from the web server into the compact binary format understood by the client.

If the web server provides WAP content (e.g., WML), the WAP proxy retrieves it directly from the web server. However, if the web server provides WWW content (such as HTML), a filter is used to translate the WWW content into WAP content. For example, the HTML filter would translate HTML into WML.

The Wireless Telephony Application (WTA) server is an example origin or gateway server that responds to requests from the WAP client directly. The WTA server is used to provide WAP access to features of the wireless network provider’s telecommunications infrastructure.

WML overview and scope

WML is designed with the constraints of small narrowband devices in mind. These constraints include:

Small display and limited user input facilities

Narrowband network connection

Limited memory and computational resources

WML includes four major functional areas:

Text presentation and layout - WML includes text and image support, including a variety of formatting and layout commands. For example, boldfaced text may be specified.

Deck/card organizational metaphor - all information in WML is organized into a collection of cards and decks. Cards specify one or more units of user interaction (e.g. a choice menu, a screen of text or a text entry field). Logically, a user navigates through a series of WML cards, reviews the contents of each, enters requested information, makes choices, and moves on to another card.

Cards are grouped together into decks. A WML deck is similar to an HTML page, in that it is identified by a URL [RFC1738], and is the unit of content transmission.

Inter-card navigation and linking - WML includes support for explicitly managing the navigation between cards and decks. WML also includes provisions for event handling in the device, which may be used for navigational purposes, or to execute scripts. WML also supports anchored links, similar to those found in [HTML4].

String parameterization and state management - all WML decks can be parameterized, using a state model. Variables can be used in the place of strings, and are substituted at run-time. This parameterization allows for more efficient use of network resources.
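The four functional areas above can be illustrated with a minimal two-card deck. This is a sketch only: the card names, variable and text are invented, and the element names follow the later, widely deployed WML syntax (early drafts differ in element names and casing); a complete deck would also carry an XML prolog and DOCTYPE.

```xml
<wml>
  <!-- First card: an event binding (DO/accept) that sets a variable
       and navigates to the second card in the same deck. -->
  <card id="menu" title="Menu">
    <do type="accept" label="Greet">
      <go href="#greet">
        <setvar name="name" value="World"/>
      </go>
    </do>
    <p>Press Greet to continue.</p>
  </card>

  <!-- Second card: text presentation (boldface) and run-time
       substitution of the variable set on the previous card. -->
  <card id="greet" title="Greeting">
    <p><b>Hello, $(name)!</b></p>
  </card>
</wml>
```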

Device Types

WML is designed to meet the constraints of a wide range of small, narrowband devices. These devices are primarily characterized by four constraints:

Display size - limited screen size and resolution. A small mobile device such as a phone may only have a few lines of textual display, each line containing 8-12 characters.

Limited input characteristics - a limited, or special-purpose input device. A phone typically has a numeric keypad and a few additional function-specific keys. A more sophisticated device may have software-programmable buttons, but may not have a mouse or other pointing device.

Limited computational resources - limited CPU and memory, often limited by power constraints.

Narrowband network connectivity - limited bandwidth and high latency. Devices with 300-baud network connections and 5-10 second round-trip latency are not uncommon.

This document uses the following terms to define broad classes of device functionality:

Phone - a phone-class device is limited in all major areas. The typical display size ranges from two to ten lines. Input is usually accomplished with a combination of a numeric keypad and a few additional function keys. Computational resources and network throughput are typically limited, especially when compared with more general-purpose computer equipment.

PDA - a Personal Digital Assistant is a device with a broader range of capabilities. When used in this document, it specifically refers to devices with additional display and input characteristics. A PDA display often supports resolution in the range of 160x100 pixels. A PDA may support a pointing device, handwriting recognition, and a variety of other advanced features.

These terms are meant to define very broad descriptive guidelines and to clarify certain examples in the document.

WML and URLs

The World Wide Web is a network of information and devices. Three areas of specification ensure widespread interoperability:

A unified naming model. Naming is implemented with Uniform Resource Locators (URLs), which provide a standard way to name any network resource. See [RFC1738].

Standard protocols to transport information (e.g. HTTP).

Standard content types (e.g. HTML, WML).

WML assumes the same reference architecture as HTML and the World Wide Web. Content is named using URLs, and is fetched over standard protocols that have HTTP semantics, such as [WSP]. URLs are defined in [RFC1738]. The character set used to specify URLs is also defined in [RFC1738].

In WML, URLs are used in the following situations:

When specifying navigation, e.g., hyperlinking.

When specifying external resources, e.g., an image or a script.

URL Schemes

WML browsers must implement the URL schemes specified in [WAE].

Fragment Anchors

WML has also adopted the HTML de facto standard of naming locations within a resource. A WML fragment anchor is specified by the document URL, followed by a hash mark (#), followed by a fragment identifier. WML uses fragment anchors to identify individual WML cards within a WML deck and to identify function names defined in a SCRIPT element. If no fragment is specified, a URL names an entire deck. In some contexts, the deck URL also implicitly identifies the first card in a deck.
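The anchor syntax can be illustrated with Python's standard urllib.parse module (used here purely to show the URL syntax; the example URL is invented and WML itself says nothing about Python):

```python
from urllib.parse import urldefrag

# A fragment anchor: deck URL, a hash mark, then the fragment
# identifier (here the name of a card within the deck).
url, fragment = urldefrag("http://www.example.com/decks/main.wml#card2")
print(url)       # http://www.example.com/decks/main.wml
print(fragment)  # card2

# With no fragment, the URL names the entire deck.
_, fragment = urldefrag("http://www.example.com/decks/main.wml")
print(repr(fragment))  # ''
```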

Relative URLs

WML has also adopted the use of relative URLs, as specified in [RFC1808]. [RFC1808] specifies the method used to resolve relative URLs in the context of a WML deck. The base URL of a WML deck is the URL that identifies the deck.
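The resolution rules can be sketched with Python's urllib.parse.urljoin (the base URL is invented; modern libraries implement RFC 3986, the successor to [RFC1808], which gives the same results for simple cases like these):

```python
from urllib.parse import urljoin

# The base URL of a WML deck is the URL that identifies the deck.
base = "http://www.example.com/decks/main.wml"

# A sibling deck, a card in the current deck, and a root-relative path.
print(urljoin(base, "other.wml"))  # http://www.example.com/decks/other.wml
print(urljoin(base, "#card2"))     # http://www.example.com/decks/main.wml#card2
print(urljoin(base, "/top.wml"))   # http://www.example.com/top.wml
```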

WML Character Set

WML is an XML language and inherits the XML document character set. In SGML nomenclature, a document character set is the set of all logical characters that a document type may contain (e.g. the letter ’T’), each identified by a fixed integer. An SGML or XML document is simply a sequence of these integer tokens, which taken together form a document.

The document character set for XML and WML is the Universal Character set of ISO/IEC-10646 ([ISO10646]). Currently, this character set is identical to Unicode 2.0 ([UNICODE]). WML will adopt future changes and enhancements to the [XML] and [ISO10646] specifications. Within this document, the terms ISO10646 and Unicode are used interchangeably, and indicate the same document character set.

There is no requirement that WML decks be encoded using the full Unicode encoding (e.g. UCS-4). Any character encoding ("charset") that contains an inclusive subset of the characters in Unicode may be used (e.g. US-ASCII, ISO-8859-1, UTF-8, etc.). Documents not encoded using UTF-8 or UTF-16 must declare their encoding as specified in the XML specification.
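For instance, a deck encoded in ISO-8859-1 would begin with an XML declaration along the following lines (the DOCTYPE shown is illustrative; the exact public identifier depends on the WML version in use):

```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.0//EN"
  "http://www.wapforum.org/DTD/wml.xml">
<wml>
  <!-- deck content ... -->
</wml>
```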

Reference Processing Model

The WML reference-processing model is as follows. User agents must implement this model, or a model that is indistinguishable from it.

The user agent must correctly map a document’s external character encoding to Unicode before processing the document in any way.

Any processing of entities is done in the document character set.

A given implementation may choose any internal representation (or representations) that is convenient.

Events and Navigation

Navigation and Event Handling

WML includes a navigation and event-handling model, allowing the author to specify the processing of specific user agent events. Events may be bound to tasks by the author; when an event occurs, the bound task is executed.

An event binding is scoped to the element in which it is declared, e.g., an event binding declared in a card is local to that card. Any event binding declared in an element is active only within that element. Event bindings specified in sub-elements take precedence over any conflicting event bindings declared in a parent element. Conflicting event bindings within an element are an error.


WML includes a simple navigation history model, allowing the author to manage backward navigation in a convenient and efficient manner. The user agent history is modeled as a stack of URLs, representing the navigational path the user traversed to arrive at the current card. There are three operations that may be performed on the history stack:

Reset - the history stack may be reset to a state where it only contains the current card.

Push - a new URL is implicitly pushed onto the history stack as a side effect of navigation to a new card.

Pop - the current card’s URL (top of the stack) is popped as an implicit side effect of backward navigation.

The user agent must implement a navigation history. As each card is accessed via an explicitly specified URL, e.g., a GO task, the card URL is added to the history stack. The user agent must provide a means for the user to navigate back to the previous card in the history. Authors can depend on the existence of a user interface construct allowing the user to navigate backwards in the history. As a consequence, the author may rely on the user agent to provide default backward navigation support. The user agent must return the user to the previous card in the history if a PREV task is executed. The execution of the PREV task pops the current card URL from the history stack. No additional variable state side effects or semantics are associated with the PREV task.
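The three stack operations and the GO/PREV semantics described above can be sketched as follows. This is a sketch under our own naming; the normative user agent behaviour is defined by the WML specification, not by this code.

```python
class HistoryStack:
    """Sketch of the WML navigation history: a stack of card URLs."""

    def __init__(self, start_url):
        self._stack = [start_url]

    def go(self, url):
        # GO task: the new card's URL is pushed onto the stack as an
        # implicit side effect of navigation.
        self._stack.append(url)

    def prev(self):
        # PREV task: pop the current card's URL and return to the
        # previous card. No variable-state side effects are involved.
        if len(self._stack) > 1:
            self._stack.pop()
        return self._stack[-1]

    def reset(self):
        # Reset: the stack retains only the current card.
        del self._stack[:-1]

history = HistoryStack("main.wml#menu")
history.go("main.wml#greet")
history.go("other.wml")
print(history.prev())  # main.wml#greet
history.reset()        # stack now holds only main.wml#greet
```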

The State Model

WML includes support for managing user agent state, including:

Variables - parameters used to change the characteristics and content of a WML card or deck

History - navigational history, which may be used to facilitate efficient backwards navigation

Implementation-dependent state - other state relating to the particulars of the user agent implementation and behavior.

The Browser Context

WML state is stored in a single scope, known as a browser context. The browser context is used to manage all parameters and user agent state, including variables, the navigation history, and other implementation-dependent information related to the current state of the user agent.

The NEWCONTEXT Attribute

The browser context may be initialized to a well-defined state by the NEWCONTEXT attribute of the card element. This attribute indicates that the browser context should be re-initialized; re-initialization must perform the following operations:

Unset (remove) all variables defined in the current browser context

Clear the navigational history state

Reset implementation-specific state to a well-known value


All WML content can be parameterized, allowing the author a great deal of flexibility in creating cards and decks with improved caching behavior and better perceived interactivity. WML variables can be used in the place of strings and are substituted at run-time with their current value. A variable is said to be set if it has a value not equal to the empty string; it is not set if its value equals the empty string, or is otherwise unknown or undefined in the current browser context.
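The substitution semantics can be sketched in a few lines of Python. The helper names are our own, and this handles only the parenthesised $(name) form; real WML also defines an unbracketed form and escaping conversions.

```python
import re

def substitute(text, context):
    """Replace $(name) references with values from the browser context.
    An unset variable (absent, or equal to the empty string)
    substitutes as the empty string."""
    return re.sub(r"\$\((\w+)\)",
                  lambda m: context.get(m.group(1), ""),
                  text)

def is_set(context, name):
    # A variable is set only if its value is not the empty string.
    return context.get(name, "") != ""

context = {"name": "World", "empty": ""}
print(substitute("Hello, $(name)!$(missing)", context))  # Hello, World!
print(is_set(context, "empty"))                          # False
```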

The Structure of WML Decks

WML data are structured as a collection of cards. A single collection of cards is referred to as a WML deck. Each card contains structured content and navigation specifications. Logically, a user navigates through a series of cards, reviews the contents of each, enters requested information, makes choices, and navigates to another card or returns to a previously visited card.

User Agent Semantics

Deck Access Control

The introduction of variables into WML exposes potential security issues that do not exist in other markup languages such as HTML. In particular, certain variable state may be considered private by the user. While the user may be willing to send a credit card number to a secure service, an insecure or malicious service should not be able to retrieve that number from the user agent by other means.

Low-Memory Behavior

WML is targeted at devices with limited hardware resources, including significant restrictions on memory size. It is important that the author have a clear expectation of device behavior in error situations, including those caused by lack of memory.

Limited History

The user agent may limit the size of the history stack (i.e. the depth of the historical navigation information). In the case of history size exhaustion, the user agent should delete the least-recently-used history information.

Limited Cache

Many user agents implement some form of caching. If a user agent implements deck or card caching, it must implement the following semantics.

In selecting decks to free from the cache, the user agent should refrain from freeing decks that are referenced by the history stack. If cache space remains exhausted after freeing unreferenced cache entries, the user agent should prune the history stack as described in section 10.2.1, and free any unreferenced entries in the cache until there is sufficient space to continue processing. The user agent must never delete the current deck.

Limited Browser Context Size

In some situations, it is possible that the author has defined an excessive number of variables in the browser context, leading to memory exhaustion. In this situation, the user agent should attempt to acquire additional memory by reclaiming cache and history memory as described in sections 10.2.1 and 10.2.2. If this fails, and the user agent has exhausted all memory, the user should be notified of the error.


We have presented an overview of WML, based on the current work in progress by the WAP Forum. WML is optimised for specifying presentation and user interaction on limited-capability devices such as cellular telephones and other wireless mobile terminals. It is specified in a way that is sufficient to allow presentation on a wide variety of devices, yet flexible enough to let vendors incorporate their own MMIs. WML does not prescribe any user agent or browser, but defines the fundamental services and formats necessary to ensure interoperability among browser implementations. WML also enables the presentation of most languages and dialects, since its character set is Unicode.


[ISO10646] "Information Technology - Universal Multiple-Octet Coded Character Set (UCS) - Part 1: Architecture and Basic Multilingual Plane", ISO/IEC 10646-1:1993.

[RFC1738] "Uniform Resource Locators (URL)", T. Berners-Lee, et al., December 1994. URL: ftp://ds.internic.net/rfc/rfc1738.txt

[RFC1808] "Relative Uniform Resource Locators", R. Fielding, June 1995. URL: ftp://ds.internic.net/rfc/rfc1808.txt

[UNICODE] "The Unicode Standard: Version 2.0", The Unicode Consortium, Addison-Wesley Developers Press, 1996. URL: http://www.unicode.org/

[WAE] "Wireless Application Environment Specification", WAP Forum, 30-April-1998. URL: http://www.wapforum.org/

[WSP] "Wireless Session Protocol", WAP Forum, 30-April-1998. URL: http://www.wapforum.org/

[XML] "Extensible Markup Language (XML), W3C Proposed Recommendation 10-February-1998, REC-xml-19980210", T. Bray, et al, February 10, 1998. URL: http://www.w3.org/TR/REC-xml

[HTML4] "HTML 4.0 Specification, W3C Recommendation 18-December-1997, REC-HTML40-971218", D. Raggett, et al., September 17, 1997. URL: http://www.w3.org/TR/REC-html40

Giving Users the Choice between a Picture and a Thousand Words


Malcolm McIlhagga, Ann Light and Ian Wakeman,


School of Cognitive and Computing Sciences, University of Sussex, BN1 9QH.

To be inserted when an electronic copy is available.

User Needs for Mobile Communication Devices: Requirements Gathering and Analysis through Contextual Inquiry


Kaisa Väänänen-Vainio-Mattila and Satu Ruuska


Nokia Research Center and Nokia Mobile Phones, Sinitaival 5 & 6, 33720 Tampere, Finland

kaisa.vaananen@research.nokia.com, satu.ruuska@nmp.nokia.com

A major problem in exploring user requirements for mobile communication and personal organisation devices is the versatility of the patterns and contexts in which usage takes place. The traditional means of user interviews or usability testing in a laboratory environment are not capable of revealing insights into users’ activities and needs in their "real life". This paper describes an example user needs study at Nokia and concludes that ethnographic methods such as the Contextual Inquiry method can – despite numerous practical challenges – be successfully applied in the development of mobile communication devices.


Mobile communication is a relatively new field of HCI activity. Although the possibility for mobile voice calls has existed for over a decade, more versatile wireless communication activities have started to emerge only recently. There is a definite requirement to make mobile communication accepted by everyone [1]; thus users’ true needs must be understood by the designers of such systems, devices and services.

Various participatory user requirements gathering methods have been exploited in the early design phases of mobile communication devices [2]. One of the approaches that has been used in concept development is the Contextual Inquiry (CI) method [3]. Contextual Inquiry is based on ethnography and the sociological research tradition in which the researcher/observer goes into the research subject’s own environment, rather than vice versa [4]. The observers observe the potential users of the product under development for a period of time, typically some hours.

The observer stays in the background for most of the time, but also inquires about events that are not obvious yet may be significant to the focus of the research. Work products (data sheets, notes, any visible outputs) can also be collected for later reference about the user's specific tasks and work practices. All the evidence is recorded together with the designer's detailed written notes of events and verified conclusions about why the user behaved as he or she did.

The raw data is analysed by a group of designers, usability specialists and the CI moderator by structuring the individual observations and design ideas into an affinity diagram. This diagram of observed details of action and emerging themes of users’ activities in their natural context then functions as a basis for understanding potential users' requirements and their needs for the concept in focus. The method can be exploited in early design phases to provide input for, e.g., requirements formation and task analysis, and especially to gather insight and create a basis for design ideas.

The rest of this paper is structured as follows: Section 2 briefly describes the functionality of Nokia’s mobile communication and personal organisation device, the Nokia 9000 Communicator, and outlines the main motivations for the human-centered design of such a device. Section 3 then describes a Contextual Inquiry study conducted for mobile communication research, and presents samples of structured data and major findings. Finally, Section 4 describes the challenges of CI in mobile communication and assesses the suitability of the method for this field.

Motivation for user needs research of Nokia mobile communication devices

Nokia published its first multi-purpose Communicator, the Nokia 9000 Communicator [5], in spring 1996 (see Figure 1). The Nokia Communicator aims to fulfil the major communication needs of mobile professionals. It includes the following functionalities:

Telephone - GSM phone

Fax - Sending and receiving faxes

Short Messaging - GSM messaging, business card exchange

Internet - Email, WWW, Telnet, Terminal

Contacts - Database, Connection logs

Notes - Text editor, printing and message sending

Calendar - Appointments, to-do-lists

System - Settings, security, PC connection

Extras - Alarm clock, calculator, ringing tone composer


Figure 1: Nokia 9000 Communicator

It has been crucial for Nokia to understand users’ needs for extended communication features, since features themselves create new needs, shape the communication process and even affect the division between work and leisure time. Initial user research was conducted before and during the development of the first Communicator, and further research is being conducted for future communication products. This user needs research has investigated both the required functionality and the HCI issues related to using such devices.

When designing an "all-purpose" mobile communication device such as the Nokia Communicator, communication is not restricted to talking to one person but includes sending faxes, email, letters, etc., as well as holding multi-party conference calls. Furthermore, activities concerning organising files and, in general, obtaining and managing information – also from external telecommunication services – must become part of the research focus. Playing games, Web browsing, profile settings (i.e. adaptation of the device to different environments and usage situations) and even composing music belong to the scope of the research.

An example Contextual Inquiry study

The first Contextual Inquiry study for Nokia communication devices was conducted in the autumn of 1996. A group of designers and usability professionals observed altogether six (Finnish) mobile professionals for a period of a few hours each. The focus of the observations was on personal communication and information management and its implications for mobility. The data was collated and analysed according to the CI methodology developed by Karen Holtzblatt [3].

The affinity diagram is structured from the individual observations by grouping them thematically and thus creating a hierarchy of emerging themes of user behaviour (see Figure 2).


Figure 2: The structure of an affinity diagram is formed bottom-up from individual observations grouped thematically.

The following sections present two samples of the affinity diagram which resulted from the study; the full diagram contains a total of 500 structured observations under approximately 120 3rd-level themes, 30 2nd-level themes and 10 top-level themes.

Sample 1:

Top-level theme: Let me work my way

2nd-level theme: I want to be efficient and not waste time accessing things and people

3rd-level theme 1: I use a fast and effective way to communicate within my company

User5: He shouts to a colleague ("partner") in the same office space over the room divider.

User1: He had a design interaction with a graphics designer and they made a design decision while looking at a common piece of paper.

3rd-level theme 2: I want to have fast access to my contact data

User5: He was going to get the Helsinki phone book to call a person. He went to another office to get the phone book but did not get it after all.

User2: Has information sources such as phone books on the table (e.g. checking area codes) as instant access.

User5: Took the written note of the other person with the phone number on it.

3rd-level theme 3: I want to have fast access to my immediate work data

User2: Makes notes on a phone call request form because someone else could use the note later.

User2: Has immediate work papers on the table (in one plastic cover).

User1: Organises papers according to what is ‘to be handled now’ and to be ‘done later’.

User2: Has a folder for several projects with separate sections containing A4 paper sheets in plastic covers. He takes them from the folder and puts on table according to what is urgent at the time.

User3: Would like to have access to previously created electronic documents on the road.

3rd-level theme 4: I want to get fast access to today’s events

User5: Has a paper table calendar but there are no markings; only the corners are ripped out.

User2: Marks the current week for fast access.

3rd-level theme 5: Special phone numbers are easier to access than the ‘phone book’

User1: He makes a distinction between the in-my-face phone numbers and the phone book phone numbers.

User1: He has written notes of names/numbers by the landline phone (3 notes). These numbers were only on these notes, not anywhere else.

User5: It is a bigger burden to look up something in a large phone book/database than to get it from a known place.


Sample 2:

Top-level theme: I work where I am

2nd-level theme 1: I do different things at work

3rd-level theme 1: I’d like to handle all my bank affairs electronically

User1 Starts paying bills using Internet.

User1 Goes to the bank only when there are special bills, e.g. taxes.

3rd-level theme 2: I have quiet days

User5 Did not have much of anything to do the whole day; received just a few phone calls.

2nd-level theme 2: I am dedicated to my work

3rd-level theme 1: When I’m away I want others to know where I am and how to reach me

User3 Informs others about his absence with a written note beside his door outside his room.

User1 Writes a note: ‘I’ll be out for an hour’. Colleagues want to know where each of them is.

3rd-level theme 2: I want to be connected also on holiday

User3 Leaves a fax number where he can be reached during his holiday.

User3 Informs others that he can be reached during the holiday.

User3 Sometimes wants to read email on the road, though not with a laptop.

User3 Sometimes misses important email messages. Says, "Damn, I just read that email right now!"

3rd-level theme 3: My things need to be taken care of while I’m away

User3 Asks someone to check his post during his holiday.

User3 Gets post from his inbox and makes room for it at the corner of his table as a sign of where others should bring all his post during his holiday.

2nd-level theme 3: This is my travelling office

3rd-level theme 1: I need my calendar on the road

User4 Sometimes has to come to work just to check the calendar to see whether it is necessary to be at work at a specific time.

User4 A paper calendar that is too big gets left in the office.

3rd-level theme 2: I need my contact information on the road

User4 Business card book is not carried along, though it would be useful.

User4 Viewing contact database information is needed on the road but cannot be reached there.

3rd-level theme 3: I need to make calls in the car

User1 He calls on the cellular when in the car.

3rd-level theme 4: I need several tools on the move

User2 While on the move, carries a cellular, calendar and calculator.

User2 Would like to have a calendar and calculator as an attached pair, but separable.

User3 No need to create electronic documents while travelling.

3rd-level theme 5: My tools depend on where I am

User4 Information is divided: Uses a paper calendar when being mobile or for private activities. Secondly, uses an electronic to-do list when in the office. And thirdly, uses a small address book containing only personal addresses.

3rd-level theme 6: I need a good way to carry my phone with me

User2 While just going to lunch, his cellular phone is attached to his braces.

User1 He took the cellular phone with him when he went outside to smoke a cigarette.

3rd-level theme 7: I forget the cell phone and have to pay for that

User2 Forgets to take his cellular when leaving office for a short time.

User2 Since the phone is not with him he ends up making the phone call from the phone booth and has to pay himself.



Example findings affecting the product concept and design

From the whole data collected and analysed in the study, some of the main findings were:

Users need the ability to organise their information according to their own logic. It is not enough to provide only one place for the user’s data; several ways to arrange and structure data must be facilitated, e.g. in Communicator file management, the calendar, the contacts database and the PC (both at home and at work).

Users have a need to distinguish between very frequent contacts and other contacts e.g. frequently called phone numbers should be immediately accessible (i.e. they should be "in your face").

Users have difficulties when the same or similar information is kept in multiple places (e.g. several calendars, PCs, databases). Therefore, the same information should be updated in all those places automatically.

Users need to share their information on-line. Therefore, flexible ways of exchanging information are needed.

Users expect to be able to access all kinds of data and services regardless of time and place with their mobile device, e.g. company contact databases or train timetables.

Users need various ways to remind themselves; ways of reminding oneself need to be similar to what they are used to, e.g. small paper notes.

Users have personal ways of highlighting important things: small written notes, ripped-out corners of a calendar, coloured tags, etc.

Users’ work and leisure time are not necessarily separate any more – a serious business device should have something "fun" in it, too.

These and further findings, together with design ideas created from the analysed data, have been used as input to the development of the communication product.

Lessons learned about Contextual Inquiry of mobile communication

While observing mobile users, the observer is constantly on the move with the observed person. The observer is a "shadower" who has to remain "out of sight, out of mind", while still being able to ask the user the important questions on the spot when something significant, yet unobvious, is happening. This is more demanding than observing in a stationary environment since there may be more noise and distractions. In addition, the people with whom the observed person communicates may be distracted by the CI activity itself. For example, if the observer follows the observed person to a business meeting, they will have to explain thoroughly why this is happening. Even so, this may lead to cautious, and thus changed, communication behaviour. Much of people’s communication is highly confidential, and therefore a certain trust and relaxed atmosphere must be established before the actual inquiry can start.

Another main challenge when conducting CI for communication activity is setting the observation focus. "Personal communication" as the focus may be too wide, because too many daily activities fall within this category. For example, much of the discussion in a face-to-face meeting may be superfluous for the research focus. The focus of observations may have to be restricted to something narrower, such as "professional communication in a moving vehicle" or "communication with the family", depending on the nature of the designed product. The observer must be alert at all times to see whether something falls within the focus and is thus relevant for the research. Incidents whose significance is unclear at the time should be recorded in case they prove to become important in a broader context – other observers may have come across similar trends of behaviour.

A third issue of concern is finding the right set of users for the study. In addition to belonging to the target user profile, the following issues should be considered: The persons should not work in highly confidential professions, or they will not let observers follow them everywhere during their working day. The mobility should not be too extended geographically, or the sessions will become very expensive for the observing research organisation. The observation period should be "suitably" filled with events of mobile communication, or with other events interesting or relevant to the objectives of the research.

Sometimes the observed person may use "agents" (e.g. secretaries) to perform parts of the communication tasks on their behalf. For example, the observed person may ask his secretary to type and send a fax to three persons the following morning. In such a case the observer may instead follow the agent for a while, or it might be more important to concentrate on how the observed person gives the instructions for the task, what he or she does while the task is being carried out for them, and how they will be informed that the task is accomplished. The observer must constantly make decisions about what to observe. A similar problem arises from the basic nature of remote communication: the observer only sees one end of the "communication link" and can thus only guess at the activities at the other end. Some of the events therefore have to be inferred without firm evidence – always a less than ideal situation in CI research. The obvious solution would be to have observers at all ends of all expected communication links, but in practice this is often impossible.

Using various interactive telecommunication services will also be part of any future mobile terminal. Such services can cover a wide variety of areas of functionality (service inquiries, bookings, downloading of maps, etc.), not all of which may be known at the time the product is designed. A research method such as Contextual Inquiry may not be able to cover all aspects of such product usage in advance, and thus generalisations must be made based on a sample of activities related to the usage of telecommunication services.

The Contextual Inquiry method is mainly qualitative in its nature of data gathering and analysis. In general, 8 to 16 potential users from one user segment may be observed for a single study. More and different users can be added later on to create a more complete picture of the users’ needs and requirements for the product under development.

Designers should participate in all stages of CI research, since the strength of the method is that it gives designers the opportunity to enter and learn to understand real users’ tasks and contexts: both the physical (e.g. office facilities, tools) and the social (e.g. culture, conventions, rules) environment. Mobile professionals act in many different contexts, and this method supports designers in making design decisions based on target users’ real needs. However, CI requires substantial support from experienced CI practitioners, especially in training the mobile observers, moderating the analysis of the data, and building the affinity diagram.


[1] Väänänen-Vainio-Mattila,K., Haataja,S. Mobile Communication User Interfaces for Everyone, Advances in Human Factors/Ergonomics 21B, Design of Computing Systems: Social and Ergonomic Considerations, (Eds.) Smith, M.J., Salvendy, G., Koubek,R.J. Elsevier, August 1997, pp. 815-819.

[2] Schuler,D., Namioka,A. (Eds.) Participatory Design: Principles and Practices, Lawrence Erlbaum Associates, 1993.

[3] Beyer,H., Holtzblatt,K. Contextual Design: Defining Customer-Centered Systems, Morgan Kaufmann, San Francisco, 1998.

[4] Lewis,S., Mateas,M., Palmiter,S., Lynch,G. Ethnographic Data for Product Development: A Collaborative Process, Interactions, Vol. III, No. 6, November + December 1996.

[5] Nokia 9000 Communicator, PC Week, March 25 (1996), pp. 39-40.

Chris Johnson (ed) Proceedings of the First Workshop on HCI for Mobile devices