
Grant Funded Research Projects in the Multimodal Interaction Group

This page contains a short description and the case for support document for each of our funded research projects. To view the case for support documents you need Adobe's free PDF viewer or the PDF plugin for Netscape Communicator/Internet Explorer. Clicking on a project's title will open its case for support. For job vacancies, see the jobs page for details.

Projects:

  1. Principles for improving interaction in telephone-based interfaces
  2. Guidelines for the Use of Sound in Multimedia Human-Computer Interfaces
  3. MultiVis I: A Multimodal Visualisation System for Blind Students Using Virtual Reality
  4. 3D Audio Windows: Enhancing PC Accessibility for Visually Disabled Users
  5. A multimodal visualisation system for blind people using virtual reality
  6. AudioClouds: Three-Dimensional Auditory and Gestural Interfaces for Mobile and Wearable Computers
  7. UTOPIA: Usable Technologies for Older People: Inclusive and Appropriate
  8. An investigation of multimodal interaction with tactile displays
  9. MultiVis II: Multimodal Tools to Allow Blind People to Create and Manipulate Visualisations
  10. MICOLE: Multimodal collaboration environment for inclusion of visually impaired children
  11. Multimodal, Negotiated Interaction in Mobile Scenarios
  12. GAIME: Gestural and Audio Interactions for Mobile Environments

 

1. Principles for Improving Interaction in Telephone-Based Interfaces (EPSRC)

Telephone-based interfaces (TBIs) are an increasingly important method of interacting with computer systems (such as electronic banking or voice mail). Telephones themselves are also incorporating greater functionality (such as address books and call forwarding). In both cases this extra functionality may be rendered useless if usability is not considered. One common usability problem is users getting lost when navigating through hierarchies of options or functions. This may mean some functions go unused or that users cannot achieve their goals.

The innovative aspect of this proposal is to use structured non-speech sounds (such as short pieces of music) to enhance the output of information in TBIs. Sound can present information rapidly without getting in the way of any speech output. I will investigate the use of sound to provide navigation cues to stop users getting lost and also to provide richer output methods to create more flexible interaction techniques. To ensure effectiveness I will perform a full usability evaluation. TBI designers will benefit from this research because guidelines produced will enable them to create more powerful interfaces. End-users will benefit because the resulting telephones and telephone services will be more usable.
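
As an illustration of the idea, the sketch below maps a caller's position in a menu hierarchy to a short structured note sequence (an earcon) whose length signals depth. The menu names, note values and mapping are illustrative assumptions, not the project's actual design:

    # Sketch: mapping a position in a phone-menu hierarchy to an "earcon"
    # (a short structured sequence of notes). Hypothetical mapping only.

    # One note motif per top-level menu; deeper levels repeat the motif.
    FAMILY_NOTES = {
        "accounts": ["C4", "E4"],
        "payments": ["G4", "B4"],
        "settings": ["D4", "F4"],
    }

    def earcon_for_path(path):
        """Return the note sequence played when the caller reaches `path`.

        path -- menu labels from the root, e.g. ["accounts", "balance"].
        One repetition of the motif per level makes depth audible, giving
        the caller a navigation cue they can learn.
        """
        motif = FAMILY_NOTES[path[0]]
        return motif * len(path)

    print(earcon_for_path(["accounts"]))             # shallow: 2 notes
    print(earcon_for_path(["accounts", "balance"]))  # deeper: 4 notes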

Project web pages

 

2. Guidelines for the Use of Sound in Multimedia Human-Computer Interfaces (EPSRC)

Sonically-enhanced graphical human-computer interfaces allow more natural communication between computer and user. Such multimedia interfaces allow users to employ two senses to solve a problem, rather than using vision to solve all problems. This leads to reductions in the time taken to complete tasks. However, this area is in its infancy and there is little systematic research to demonstrate the best ways of combining graphics and sound. This means sounds are often added in ad hoc and ineffective ways by designers.

The innovative aspect of this proposal is to produce a set of guidelines, and a toolkit based on them, to simplify the use of sounds so that designers can improve the usability of their multimedia interfaces. We will investigate the most effective places to use sound by experimental evaluation. From these experiments guidelines will be produced. A toolkit will be built, based on the guidelines, that designers can use to create effective sonically-enhanced interfaces. We will also investigate new interaction techniques that become possible through the combination of graphics and sound. Interface developers will benefit from this work because they will be able to produce more usable interfaces. End-users will benefit because the resulting interfaces will be easier to use.

Project website

 

3. MultiVis I: A Multimodal Visualisation System for Blind Students Using Virtual Reality (EPSRC)

One of the main deprivations caused by blindness is the problem of access to information. Visualisation is an increasingly important method for people to understand complex information (using tables, graphs and 3D plots, etc.) and also to navigate around structured information. Computer-based visualisation techniques, however, depend almost entirely on high-resolution graphics and for visually-impaired users the problems of using complex visual displays are great. There are currently only limited methods for presenting information non-visually and these do not provide an equivalent speed and ease of use to their graphical counterparts. This means it is impossible for blind people to use visualisation techniques, so depriving them further. We will investigate and solve this problem by using techniques from Virtual Reality (VR) that will allow users to feel and hear their data.

The innovative aspect of this proposal is to investigate the different sensory modalities to see how they can best be used for visualisation and so create a powerful, multimodal visualisation system that makes the most of the senses our users have. We will be using force-feedback, 3D sound, braille, and speech input and output to try and overcome the problems caused by the lack of vision. The research done during the project will have a major impact because it will open up the possibilities for using these new techniques and greatly improve the quality of life of our users. The main aims of this research are to:

  - Investigate the cognitive and perceptual properties of the different sensory modalities and the problems blind people face when trying to visualise information;
  - Develop new visualisation techniques using VR and multimodality to allow blind people to use complex information;
  - Investigate how these new techniques can be incorporated into future visualisation systems.

MultiVis web pages

 

4. 3D Audio Windows: Enhancing PC Accessibility for Visually Disabled Users (Microsoft)

An important challenge facing modern software vendors catering for diverse populations is: will the integration of telecommunications, the Web and PC technology present access problems for disabled populations? Furthermore, how can this technology be structured to present new opportunities for individuals with disabilities to be integrated into the mainstream?

For the visually disabled user, network-based PCs offer the possibility of much greater access to electronic and human resources; however, existing interface architectures do not support efficient interactions with this material. Audio rendering tools such as text-to-speech translators (currently the fastest and most natural facility for making text and graphical information perceivable) typically collapse information from a variety of concurrently operating windows into a single serial stream of sound. This information bottleneck must be overcome if visually disabled users are to enjoy the efficiency of the full multi-tasking interface available to sighted users.

The work we propose to undertake for Microsoft exploits 3D spatial audio to increase interface bandwidth. This solution employs rapidly developing 3D audio technology to expand and repartition a single audio stream into multiple spatially segregated streams of information (acoustic windows) which, like visual windows, each present information from a unique spatial position. In the same way that sighted users employ the position of a visual window to disambiguate its contents from that of other windows, so the position of a sound source can be exploited to disambiguate its contents from other temporally overlapping audio streams in a 3D audio display. In addition to display, this solution provides natural window manipulation facilities, including monitoring users' listening behaviour (via head tracking) so as to allow them to select and organise information directly without translating their preferences into less flexible (e.g., screen or mouse-pad) coordinates.
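
As a simplified illustration (a real system would use HRTF-based 3D audio rather than plain stereo panning), the sketch below places concurrent acoustic windows at fixed azimuths and derives left/right gains that compensate for head rotation, so each window stays put in world space as the listener turns. The window names and the constant-power panning law are assumptions:

    import math

    def pan_gains(source_azimuth_deg, head_azimuth_deg):
        """Left/right gains for one source, compensating for head rotation.

        Subtracting the head azimuth keeps each acoustic window fixed in
        world space, as a visual window stays put when the user looks away.
        """
        rel = source_azimuth_deg - head_azimuth_deg
        rel = ((rel + 180.0) % 360.0) - 180.0        # wrap to [-180, 180)
        rel = max(-90.0, min(90.0, rel))             # clamp to frontal arc
        theta = (rel + 90.0) / 180.0 * (math.pi / 2.0)
        return math.cos(theta), math.sin(theta)      # powers sum to 1

    windows = {"mail": -60.0, "editor": 0.0, "news": +60.0}  # azimuths
    head = 20.0  # current orientation from a head tracker, in degrees
    for name, azimuth in windows.items():
        left, right = pan_gains(azimuth, head)
        print(f"{name:6s} L={left:.2f} R={right:.2f}")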

 

5. A Multimodal Visualisation System for Blind People Using Virtual Reality (ONCE)

This project will run in conjunction with the MultiVis EPSRC project (number 3 above). It is funded by ONCE (Organización Nacional de Ciegos Españoles), the national organisation for blind people in Spain.

See the project Web pages.

 

6. AudioClouds: Three-Dimensional Auditory and Gestural Interfaces for Mobile and Wearable Computers (EPSRC)

Mobile computing devices are extremely popular. Mobile telephones, Personal Digital Assistants and handheld computers currently form one of the fastest-growing areas of computing, and this growth will extend into more sophisticated, fully wearable computers in the near future. One problem with these devices is their limited input and output capabilities. Screen space is limited, so displays can easily become cluttered with information. Input is also limited, with small keyboards or handwriting recognition the norm; these are slow and hard to use when mobile. Current interaction techniques therefore limit mobile devices: walking, running, driving and navigating all require a large amount of visual attention, and adding a complex graphical display on top of this can cause problems.

The innovative aspect of this proposal is to explore a new paradigm for interacting with mobile computers, based on novel techniques using 3D sound and gestures, to create interfaces that are powerful, usable and natural. The gesture modelling itself will be an innovative combination of dynamic systems models and nonparametric statistical models. We will develop a wearable computer that uses 3D sound for output and head, hand and device gestures for input. This will allow us to investigate new presentation methods and interaction techniques to allow richer and more complex, tightly coupled interactions with mobile devices and mobile services, opening up the possibilities for using mobiles in a range of new and more powerful ways.
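
As one concrete illustration of the nonparametric side of such gesture modelling (the dynamic-systems side is not shown), the sketch below classifies a one-axis sensor trace by nearest-neighbour matching under dynamic time warping (DTW). The templates and the choice of DTW are illustrative assumptions, not the project's actual models:

    def dtw_distance(a, b):
        """Dynamic-time-warping distance between two 1-D sample sequences."""
        inf = float("inf")
        cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
        cost[0][0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                     cost[i][j - 1],      # deletion
                                     cost[i - 1][j - 1])  # match
        return cost[len(a)][len(b)]

    def classify(trace, templates):
        """Label `trace` with the class of its nearest template under DTW."""
        return min(templates, key=lambda label: dtw_distance(trace, templates[label]))

    templates = {  # toy one-axis templates for two head gestures
        "nod":   [0, 1, 2, 1, 0, -1, -2, -1, 0],
        "shake": [0, 2, 0, -2, 0, 2, 0, -2, 0],
    }
    print(classify([0, 1, 2, 2, 1, 0, -1, -2, 0], templates))  # -> nod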

AudioClouds web pages

 

7. UTOPIA: Usable Technologies for Older People: Inclusive and Appropriate (SHEFC)

UTOPIA is a Scottish research project investigating the design and development of computer-based technology for older people. It is formed from a partnership of research groups at four universities (Dundee, Glasgow, Abertay and Napier).

The proportion of older people in the population is increasing, and with it the demand for long-term care and for help with their particular needs. Although many older people are independent and give much to the community, as we grow older we will, in general, experience a reduction in our abilities and will usually come to require support in some activities, eventually even in the basic activities of life.

Adopting a different viewpoint, we find that older people, especially those just past retirement age, are often economically active and, despite commonly-held stereotypes, not particularly averse to new technology.

This project aims to bring together these needs and possibilities by investigating the development of computer-based technology for older people. By bringing together researchers in different areas we hope to develop design methodologies that include the needs and wants of older people, as well as raising awareness of these issues among the research and IT communities. In addition, we hope to design and develop some technological products specifically for older people.

Project webpages

 

8. An investigation of multimodal interaction with tactile displays (EPSRC)

The area of haptic (touch-based) HCI has grown rapidly over the last few years. A range of new applications has become possible now that touch can be used as an interaction technique. However, most current haptic devices have scant provision for tactile stimulation, being primarily kinaesthetic devices. The cutaneous (skin-based) component is ignored even though it is a key part of our experience of touch. Devices are now becoming available that allow tactile display, but little research has gone into how they might actually be used at the user interface.

The innovative aspect of this research is to open up a new area of study into the cutaneous aspects of HCI and to investigate a range of tactile displays to improve the whole experience of computer haptics. The research has two strands. The first is an investigation of tactile cue design, the combination of tactile and kinaesthetic displays, and combined tactile-auditory multimodal displays. The second strand is the application of this knowledge of tactile interface design to the key application domains of accessibility to visualisations for blind users and mobile/wearable computer interfaces. In both of these areas interaction limitations mean that tactile displays can make a major contribution to usability.
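
As a rough illustration of what a designed tactile cue might look like (the parameter choices below are assumptions for illustration, not results of this research), the sketch describes a cue as a list of vibration pulses, using rhythm to carry the message and drive frequency to carry urgency:

    from dataclasses import dataclass

    @dataclass
    class Pulse:
        freq_hz: float    # drive frequency of the tactile actuator
        amplitude: float  # 0.0 (off) .. 1.0 (full)
        duration_ms: int

    def make_cue(rhythm, urgent=False):
        """Build a tactile cue from `rhythm`, a list of (on_ms, off_ms) pairs.

        Urgency is mapped to a rougher, lower-frequency vibration -- one
        plausible use of the tactile parameter space (hypothetical values).
        """
        freq = 100.0 if urgent else 250.0
        pulses = []
        for on_ms, off_ms in rhythm:
            pulses.append(Pulse(freq, 1.0, on_ms))
            pulses.append(Pulse(freq, 0.0, off_ms))  # silent gap
        return pulses

    # Two short pulses then a long one, marked urgent.
    for p in make_cue([(100, 50), (100, 50), (300, 0)], urgent=True):
        print(p)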

Project webpages

 

9. MultiVis II: Multimodal Tools to Allow Blind People to Create and Manipulate Visualisations (EPSRC)

Understanding and manipulating information using visualisations such as graphs, tables and 3D plots is very common for sighted people. The skills needed are learned early in school and then used throughout life, for example in analysing information, creating presentations to show it to others, or just managing home finances. These basic skills are needed for all parts of education and employment. Blind people have very restricted access to information presented in these visual ways.

The innovative aspect of MultiVis II is to use multimodal techniques to allow blind users themselves to create and manipulate visualisations interactively using haptic and audio tools, adding and removing points and interacting with the visualisation as they go. We will develop new ways to overcome the confusion and navigation problems often experienced, by allowing two-handed interaction and by augmenting existing paper-based technologies with haptics and audio to maximise their usefulness. We will also investigate 3D sound to provide external memory (the lack of which is a fundamental problem for blind people), letting users mark interesting points or easily return to items to facilitate comparisons with other data points. Finally, we will look at the collaborative use of visualisations by blind people, allowing users to work together on their data.
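
As a minimal sketch of the external-memory idea, the code below stores bookmarks dropped on points of a visualisation and computes the bearing from the user's current position back to a marked point, from which a spatialised audio cue could then be played. The function names and angle convention are hypothetical:

    import math

    bookmarks = {}  # label -> (x, y) position on the visualisation

    def mark(label, x, y):
        """Drop an audio bookmark on an interesting data point."""
        bookmarks[label] = (x, y)

    def cue_azimuth(label, here):
        """Bearing in degrees (0 = straight ahead, +y) from `here` to a
        bookmark, so a 3D-sound cue can be played from that direction
        to guide the user back for comparison."""
        bx, by = bookmarks[label]
        hx, hy = here
        return math.degrees(math.atan2(bx - hx, by - hy))

    mark("peak", 12.0, 98.0)
    print(f"play 'peak' cue at {cue_azimuth('peak', (50.0, 50.0)):.0f} degrees")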

This project carries on the work started in the MultiVis I project above.

MultiVis web pages

 

10. MICOLE: Multimodal collaboration environment for inclusion of visually impaired children (EU Framework 6)

The MICOLE project aims to develop a system that supports collaboration, data exploration, communication and creativity for visually impaired and sighted children. In addition to its immediate value as a tool, the system will have societal implications through improved inclusion of visually impaired people in education, work, and society in general. While the main activity is the construction of the system, several supporting activities are needed, especially empirical research into collaborative and cross-modal haptic interfaces for visually impaired children.

 

11. Multimodal, Negotiated Interaction in Mobile Scenarios (EPSRC)

We propose the investigation and evaluation of an alternative approach to the integration of physical and digital resources, which we call negotiated interaction (NI). This framework draws on dynamic systems theory, probabilistic reasoning and multimodal feedback. We believe this ambitious project has wide-ranging implications for HCI in general, creating a new paradigm for the analysis and design of interaction, and is especially important for the growing area of mobile computing.

Negotiated Interaction Web pages

 

12. GAIME: Gestural and Audio Interactions for Mobile Environments (EPSRC)

Most PDAs and smart phones have sophisticated graphical interfaces and commonly use small keyboards or styli for input. The range of applications and services for such devices is growing all the time. However, there are problems which make interaction difficult when a user is on the move. Operating many of the applications demands a great deal of visual attention, which may not be available in mobile contexts. Oulasvirta et al. showed that attention can become very fragmented for users on the move, as it must shift between navigating the environment and operating the device, making interaction hard. Our own research has shown that performance may drop by more than 20% when users are mobile. Another important issue is that most devices need the hands to operate many of the applications, and the hands may not be free if the user is carrying bags, holding on to children or operating machinery, for example. Therefore, the novel aspect of this proposal is to reduce the reliance on graphical displays and hands by investigating gesture input from other locations on the body, combined with three-dimensional sound for output.

GAIME Web pages