Visual Impairment, Virtual Reality and Visualisation

Stephen Brewster
Department of Computing Science, University of Glasgow, Glasgow, G12 8QQ, UK
stephen@dcs.gla.ac.uk
Helen Pengelly
Department of Computing Science, University of Glasgow, Glasgow, G12 8QQ, UK
pengelhl@dcs.gla.ac.uk

Web: http://www.dcs.gla.ac.uk/~stephen/ or www.multivis.org


Abstract

The aim of our research is to design, build and test a usable Virtual Environment (VE) for visualisation for blind people. This will involve the use of force-feedback devices and three-dimensional (3D) sonified data. To make sure this combination of sensory modalities is effective and genuinely helps blind people visualise information, careful evaluation will be a central part of the research.

Keywords:

Touch, sound, blind users, multimodal interaction, visualisation, virtual reality, haptics.

The Problem

One of the main deprivations caused by blindness is lack of access to information. Visualisation is an increasingly important way for people to understand complex information (using tables, graphs, 3D plots, etc.) and to navigate around structured information. Computer-based visualisation techniques, however, depend almost entirely on high-resolution graphics, and for visually impaired users the problems of using complex visual displays are great. The methods currently available for presenting information non-visually are limited and do not offer the speed and ease of use of their graphical counterparts. This makes it impossible for blind people to use visualisation techniques, depriving them further. We will investigate and solve this problem using techniques from Virtual Reality (VR) that allow users to feel and hear their data.

Current techniques for displaying information non-visually rely mainly on synthetic speech and Braille. Users hear a line of digits read out or, if they read Braille, feel a row of digits. Consider a sighted person reading a matrix of numbers: he or she would immediately be able to make certain inferences about the data, for example that there are larger numbers at the bottom right or top left. A blind person is unable to capitalise on such patterns, hearing only rows of numbers spoken one after another. The properties of human short-term memory mean that listeners cannot hold enough information in mind to make any non-trivial observations; they become overloaded. Things become even worse with graphs (Figure 1) or complex 3D plots (Figure 2), because there are almost no techniques for presenting these non-visually.
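
To make the problem concrete, here is a minimal sketch of how a matrix ends up serialised for speech output (illustrative only; it does not reproduce any particular screen reader). Every cell is rendered in reading order, so the spatial pattern a sighted reader spots at a glance, such as values growing towards the bottom right, is flattened into one long stream the listener must hold in memory:

    # Illustrative serialisation of a matrix for speech output.
    # The 2D structure is lost: the listener hears one flat stream of digits.
    matrix = [
        [1, 2, 4],
        [3, 5, 8],
        [6, 9, 12],
    ]

    for r, row in enumerate(matrix, start=1):
        # Each row is spoken in turn; any spatial pattern must be
        # reconstructed (and retained) entirely in short-term memory.
        print(f"Row {r}: " + ", ".join(str(v) for v in row))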

Figure 1: A typical line graph. It contains a great deal of information, but a sighted person can cope with it by focusing on whichever aspect is pertinent to current needs (from [1]). Presenting this in synthetic speech or Braille is extremely difficult. It can be done in non-speech sound [1], but usability is limited.

Figure 2: A cone tree representing a directory structure (described in [5]). This kind of complex, three-dimensional information is currently impossible for blind people to use.

The Solution

The approach we propose is to make use of the senses that our users do have, namely hearing and touch, to create a virtual environment for visualisation. Techniques emerging from the field of VR, such as 3D sound and force-feedback (haptic) devices, now make this possible. Using multiple sensory modalities to present information can overcome the problems caused by the loss of vision. However, it is not clear how the modalities should be combined to maximise their power: arbitrary combinations have been shown to be ineffective and, worse, to actually reduce performance [3, 4]. One of the key areas to be investigated throughout the research will be the best ways to present information to the different senses, both individually and in combination.

Here is a simple example of what we are doing. When visualising graphs, users will be able to feel the shape of the graph via a haptic device (see Figure 3). They could trace out the shape of a graph with a finger, hearing a change in pitch [1] that indicates the slope, and have specific values spoken in synthetic speech when necessary. The haptic device can guide the user over the graph, constraining movement so that the finger always stays on the line. Several graphs could be presented simultaneously, with the sound from each coming from a different 3D spatial location and given a different timbre and surface texture, so that the graphs are not confused with one another.
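
As a minimal sketch of the kind of mapping involved (the frequency range and the linear mapping are assumptions for illustration, not the tuned values used in our system), the pitch heard while tracing could be derived from the data like this:

    # Illustrative pitch mapping for graph sonification: data values are
    # mapped linearly onto a frequency range. The two-octave range and the
    # linearity are assumptions for this sketch, not the system's parameters.
    def value_to_frequency(y, y_min, y_max, f_low=220.0, f_high=880.0):
        """Map a data value onto a frequency in Hz."""
        if y_max == y_min:
            return f_low
        t = (y - y_min) / (y_max - y_min)
        return f_low + t * (f_high - f_low)

    # As the finger traces the haptic graph, each sampled value would be
    # sonified at the corresponding pitch, so rising data is heard as
    # rising pitch (i.e. the pitch change conveys the slope).
    data = [3.0, 5.5, 9.2, 7.1, 4.8]
    freqs = [value_to_frequency(y, min(data), max(data)) for y in data]
    print([round(f, 1) for f in freqs])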

Figure 3: The PHANToM force-feedback device.

Evaluation

A common problem with this type of research is that it can be technology-oriented rather than user-oriented [1]. This has resulted in many systems and devices that do not solve users' problems. We are therefore taking a user-centred, participative approach to ensure that our work focuses on user needs. We will use a variety of qualitative and quantitative measures to assess the usability of our visualisation systems. We want to avoid being seduced by the technology, so that we produce systems that are actually useful to blind people.

Stage 1

We have just finished the initial evaluation of haptic-only graphs and bar charts (Figure 4 shows an example) [2]. This was a two-part process: a detailed pilot study followed by a full study with 10 sighted users (who could not see the graphs). We tested with sighted users because our supply of blind participants is limited. The aim was to get the system working successfully with sighted users and then, once the main problems had been resolved, to move on to testing with blind participants, making the best possible use of their time. This is perhaps a risky approach, as we are not designing for our end-users from the outset. It reflects a common problem with VEs: the technology is cumbersome and expensive, so it is hard (or impossible) to take it into the real contexts of use that more traditional HCI evaluation would require. We aim to overcome this through expert blind user participation to help shape the designs from now on, with a full evaluation with blind users as the next stage.

We decided to use questionnaires for our initial study. These were administered after each graph was presented, and summatively at the end of the evaluation. With little previous research to guide us, we needed to explore the space of possible designs, from high-level questions such as "what is the maximum value on the graph?" to much lower-level ones such as "should there be a gap between each of the bars in the bar chart?". The questionnaires let us ask a wide range of qualitative questions. However, they limited the quality of the data we could obtain and the analysis we could perform on it. For the next stage of the work we need more formal measures to assess the usability of our designs properly.

Figure 4: An example of a haptic graph used in the evaluation.

Next Stage

We intend to evaluate the graphs again, this time with blind students. We will use standard evaluation techniques such as task time and error rates, along with subjective measures of workload (NASA TLX), preference and fatigue. We will also use questionnaires again. Our hope is that these tests will provide the 'hard' data we need to assess our designs fully.
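
For reference, here is a minimal sketch of the standard NASA TLX weighted-workload calculation (the ratings and weights shown are invented for illustration; they are not data from our studies):

    # Standard NASA TLX weighted workload: six subscales rated 0-100,
    # weighted by 15 pairwise comparisons (so the weights sum to 15).
    # The numbers below are invented for illustration only.
    SCALES = ("Mental", "Physical", "Temporal",
              "Performance", "Effort", "Frustration")

    def tlx_workload(ratings, weights):
        """Return the weighted workload score (0-100)."""
        assert sum(weights.values()) == 15, "weights come from 15 comparisons"
        return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

    ratings = {"Mental": 70, "Physical": 30, "Temporal": 55,
               "Performance": 40, "Effort": 65, "Frustration": 50}
    weights = {"Mental": 4, "Physical": 1, "Temporal": 3,
               "Performance": 2, "Effort": 3, "Frustration": 2}
    print(f"Weighted workload: {tlx_workload(ratings, weights):.1f}")  # 56.7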

This work has now continued into a three-year funded project called MultiVis. See the MultiVis website for more details and our other publications.

References

  1. Edwards, A.D.N., Ed. Extra-Ordinary Human-Computer Interaction. Cambridge University Press, Cambridge, UK, 1995.
  2. Pengelly, H. Investigating the use of force-feedback devices in human-computer interaction. MSc. Thesis, University of Glasgow, 1998.
  3. Ramstein, C. and Hayward, V. The Pantograph: A Large Workspace Haptic Device for Multi-Modal Human-Computer Interaction. In Proceedings of ACM CHI'94 (Boston, MA), ACM Press, Addison-Wesley, 1994, pp. 57-58.
  4. Ramstein, C., Martial, O., Dufresne, A., Carignan, M., Chassé, P. and Mabilleau, P. Touching and hearing GUI's: Design issues for the PC-Access system. In Proceedings of ACM ASSETS'96 (Vancouver, Canada), ACM Press, 1996, pp. 2-10.
  5. Young, P. Three dimensional information visualisation. Department of Computer Science, University of Durham, 1996, Technical Report, 12/96.


For full details of this and our other work see:
http://www.multivis.org