Sound in HCI bibliography
This is the bibliography from Stephen Brewster's PhD thesis. The references are
mainly about sound and interface design.
It was created in the autumn of 1994. Any reference
with 'Brewster' in it can be downloaded from my publication list. For
the others you'll have to go to a library!
1. Aldrich, F.K. & Parkin, A.J. (1989). Listening at speed. British
Journal of Visual Impairment and Blindness, 7(1), pp. 16-18.
2. Allan, J.J. & Chiu, A.M. (1977). An effectiveness study of a CAD system
augmented by audio feedback. Computers & Graphics, 2, pp. 321-323.
3. Alty, J. (1991). Multimedia-What is it and how do we exploit it? In D.
Diaper & N. Hammond (Eds.), Proceedings of HCI'91, Edinburgh:
Cambridge University Press, pp. 31-44.
4. Alty, J.L. & McCartney, C.D.C. (1991). Design of a multi-media
presentation system for a process control environment. In Eurographics
multimedia workshop, Session 8: Systems, Stockholm.
5. American National Standards Institute (1973). American National
Psychoacoustic Terminology (No. S3.20). American National Standards
Institute, New York.
6. American Standards Association (1960). Acoustical Terminology (No.
S1.1). American Standards Association, New York.
7. Apple Computer Inc. (1985). Chapter Two: The Macintosh User-Interface
Guidelines. In Inside Macintosh: Volume I, pp. 23-70. Reading,
Massachusetts: Addison-Wesley.
8. Apple Computer Inc. (1991). Chapter Two: User Interface Guidelines. In
Inside Macintosh: Volume IV, pp. 2-3 - 2-37. Reading, Massachusetts:
Addison-Wesley.
9. Arons, B. (1992). A review of the cocktail party effect. Journal of the
American Voice I/O Society, 12(July).
10. Avons, S.E., Leiser, R.G. & Carr, D.J. (1989). Paralanguage and
human-computer interaction. Part 1: Identification of recorded vocal
segregates. Behaviour and Information Technology, 8(1), pp. 21-31.
11. Baddeley, A. (1990). Human Memory: Theory and Practice. London:
Lawrence Erlbaum Associates.
12. Baecker, R., Small, I. & Mander, R. (1991). Bringing icons to life. In
Proceedings of CHI'91, New Orleans: ACM Press, Addison-Wesley, pp. 1-6.
13. Ballas, J.A. & Howard, J.H. (1987). Interpreting the language of
environmental sounds. Environment and Behaviour, 19(1), pp. 91-114.
14. Barfield, W., Rosenberg, C. & Levasseur, G. (1991). The use of icons,
earcons and commands in the design of an online hierarchical menu. IEEE
Transactions on Professional Communication, 34(2), pp. 101-108.
15. Barker, P.G. & Manji, K.A. (1989). Pictorial dialogue methods.
International Journal of Man-Machine Studies, 31, pp. 323-347.
16. Begault, D.R. & Wenzel, E.M. (1990). Techniques and applications for
binaural sound manipulation in human-computer interfaces (NASA Technical
Memorandum No. 102279). NASA Ames Research Centre: California.
17. Berger, K.W. (1963). Some factors in the recognition of timbre. Journal
of the Acoustical Society of America, 36(10), pp. 1888-1891.
18. Berglund, B., Preis, A. & Rankin, K. (1990). Relationship between
loudness and annoyance for ten community sounds. Environment
International, 16, pp. 523-531.
19. Bevan, N. & Macleod, M. (1994). Usability measurement in context.
International Journal of Man-Machine Studies, 13(1 & 2), pp.
123-145.
20. Björk, E.A. (1985). The perceived quality of natural sounds.
Acustica, 57(3), pp. 185-188.
21. Blandford, A., Harrison, M. & Barnard, P. (1994). Understanding the
properties of interaction. Amodeus 2 ESPRIT basic research action 7044,
Project working paper.
22. Blattner, M. & Dannenberg, R.B. (1992). Introduction: The trend toward
multimedia interfaces. In M. Blattner & R. B. Dannenberg (Eds.),
Multimedia Interface Design, pp. xvii-xxv. New York: ACM Press,
Addison-Wesley.
23. Blattner, M. & Dannenberg, R.B. (Eds.). (1992). Multimedia
Interface Design. New York: ACM Press, Addison-Wesley.
24. Blattner, M., Greenberg, R.M. & Kamegai, M. (1992). Listening to
turbulence: An example of scientific audiolization. In M. Blattner & R. B.
Dannenberg (Eds.), Multimedia Interface Design, pp. 87-104. New York:
ACM Press, Addison-Wesley.
25. Blattner, M., Sumikawa, D. & Greenberg, R. (1989). Earcons and icons:
Their structure and common design principles. Human Computer
Interaction, 4(1), pp. 11-44.
26. Blattner, M., Papp, A. & Glinert, E. (1992). Sonic enhancements of
two-dimensional graphic displays. In G. Kramer (Ed.), Auditory Display,
sonification, audification and auditory interfaces. The Proceedings of the
First International Conference on Auditory Display, Santa Fé
Institute, Santa Fé: Addison-Wesley, pp. 447-470.
27. Bly, S. (1982). Sound and computer information presentation
(Unpublished PhD Thesis No. UCRL53282). Lawrence Livermore National
Laboratory.
28. Bramwell, C.J. & Harrison, M.D. (1994). Design questions about
interactive computer programs. Unpublished paper.
29. Bregman, A. (1992). Foreword. In G. Kramer (Ed.), Auditory display,
sonification, audification and auditory interfaces. The Proceedings of the
First International Conference on Auditory Display, Santa Fé
Institute, Santa Fé: Addison-Wesley, pp. xv - xxi.
30. Bregman, A.S. (1990). Auditory Scene Analysis. Cambridge,
Massachusetts: MIT Press.
31. Brewster, S.A. (1992). Providing a model for the use of sound in user
interfaces (Technical Report No. YCS 169). University of York, Department
of Computer Science.
32. Brewster, S.A., Wright, P.C. & Edwards, A.D.N. (1992). A detailed
investigation into the effectiveness of earcons. In G. Kramer (Ed.),
Auditory display, sonification, audification and auditory interfaces. The
Proceedings of the First International Conference on Auditory Display,
Santa Fé Institute, Santa Fé: Addison-Wesley, pp. 471-498.
33. Brewster, S.A., Wright, P.C. & Edwards, A.D.N. (1993). An evaluation of
earcons for use in auditory human-computer interfaces. In S. Ashlund, K.
Mullet, A. Henderson, E. Hollnagel, & T. White (Eds.), INTERCHI'93,
Amsterdam: ACM Press, Addison-Wesley, pp. 222-227.
34. Brewster, S.A., Wright, P.C. & Edwards, A.D.N. (1993). Parallel
earcons: Reducing the length of audio messages. Submitted to the
International Journal of Man-Machine Studies.
35. Brewster, S.A., Wright, P.C. & Edwards, A.D.N. (1994). The design and
evaluation of an auditory-enhanced scrollbar. In B. Adelson, S. Dumais, &
J. Olson (Eds.), Proceedings of CHI'94, Boston, Massachusetts: ACM
Press, Addison-Wesley, pp. 173-179.
36. Brown, M.L., Newsome, S.L. & Glinert, E.P. (1989). An experiment into
the use of auditory cues to reduce visual workload. In Proceedings of
CHI'89, Austin, Texas: ACM Press, Addison-Wesley, pp. 339-346.
37. Burgess, D. (1992). Techniques for low cost spatial audio (Technical
Report No. GIT-GVU-92-09). Graphics, Visualization & Usability Centre,
Georgia Institute of Technology.
38. Buxton, W. (1989). Introduction to this special issue on nonspeech audio.
Human Computer Interaction, 4(1), pp. 1-9.
39. Buxton, W., Gaver, W. & Bly, S. (1991). Tutorial number 8: The use of
non-speech audio at the interface. In Proceedings of CHI'91, New
Orleans: ACM Press: Addison-Wesley.
40. Byers, J.C., Bittner, A.C. & Hill, S.G. (1989). Traditional and raw
task load index (TLX) correlations: Are paired comparisons necessary? In A.
Mital (Ed.), Advances in industrial ergonomics, pp. 481-485. Taylor
& Francis.
41. Chowning, J. (1975). Synthesis of complex audio spectra by means of
frequency modulation. Journal of the Audio Engineering Society, 21(7),
pp. 526-534.
42. Cohen, J. (1992). Monitoring background activities. In G. Kramer (Ed.),
Auditory Display, sonification, audification and auditory interfaces. The
Proceedings of the First International Conference on Auditory Display,
Santa Fé Institute, Santa Fé: Addison-Wesley, pp. 499-532.
43. Cohen, J. (1993). "Kirk Here": Using genre sounds to monitor background
activity. In S. Ashlund, K. Mullet, A. Henderson, E. Hollnagel, & T. White
(Eds.), INTERCHI'93, Adjunct Proceedings, Amsterdam: ACM Press,
Addison-Wesley, pp. 63-64.
44. Cohen, M. (1993). Throwing, pitching and catching sound: Audio windowing
models and modes. International Journal of Man-Machine Studies, 39, pp.
269-304.
45. Cohen, M. & Ludwig, L.F. (1991). Multidimensional audio window
management. International Journal of Man-Machine Studies, 34, pp.
319-336.
46. Colquhoun, W.P. (1975). Evaluation of auditory, visual and dual-mode
displays for prolonged sonar monitoring tasks. Human Factors, 17, pp.
425-437.
47. Corcoran, D., Carpenter, A., Webster, J. & Woodhead, M. (1968).
Comparison of training techniques for complex sound identification. Journal
of the Acoustical Society of America, 44, pp. 157-167.
48. Deatherage, B.H. (1972). Auditory and other forms of information
presentation. In H. P. Van Cott & R. G. Kinkade (Eds.), Human
engineering guide to equipment design, pp. 123-160. Washington D.C.: U.S.
Government printing office.
49. Deutsch, D. (1980). The processing of structured and unstructured tonal
sequences. Perception and Psychophysics, 28(5), pp. 381-389.
50. Deutsch, D. (1982). Psychology of music. London: Academic Press.
51. Deutsch, D. (1983). Auditory illusions, handedness and the spatial
environment. Journal of the Audio Engineering Society, 31(9), pp.
607-620.
52. Deutsch, D. (1986). Auditory pattern recognition. In K. R. Boff, L.
Kaufman, & P. Thomas (Eds.), Handbook of perception and human
performance, pp. 32.1-32.49. New York: Wiley.
53. Dewar, K.M., Cuddy, L.L. & Mewhort, D.J. (1977). Recognition of single
tones with and without context. Journal of Experimental Psychology: Human
Learning and Memory, 3(1), pp. 60-67.
54. DiGiano, C.J. (1992) Visualizing Program Behavior Using Non-speech
Audio. MSc. Thesis, Department of Computer Science, University of
Toronto.
55. DiGiano, C.J. & Baecker, R.M. (1992). Program Auralization: Sound
Enhancements to the Programming Environment. In Proceedings of Graphics
Interface'92, pp. 44-52.
56. DiGiano, C.J., Baecker, R.M. & Owen, R.N. (1993). LogoMedia: A
sound-enhanced programming environment for monitoring program behaviour. In S.
Ashlund, K. Mullet, A. Henderson, E. Hollnagel, & T. White (Eds.),
INTERCHI'93, Amsterdam: ACM Press, Addison-Wesley, pp. 301-302.
57. Dix, A., Finlay, J., Abowd, G. & Beale, R. (1993). Chapter 9.4
Status/Event Analysis. In Human-Computer Interaction, pp. 325-334.
London: Prentice-Hall.
58. Dix, A., Finlay, J., Abowd, G. & Beale, R. (1993). Human-Computer
Interaction. London: Prentice-Hall.
59. Dix, A.J. (1991). Chapter 10: Events and Status. In Formal Methods for
Interactive Systems, pp. 239-270. London: Academic Press.
60. Dix, A.J. (1992). Beyond the Interface. In Proceedings of IFIP TC2/WG2.7
Working Conference on Engineering for Human-Computer Interaction, 10-14
August 1992, Ellivuori, Finland.
61. Dix, A.J. & Brewster, S.A. (1994). Causing trouble with buttons. In
Ancillary Proceedings of HCI'94, Sterling, UK: Cambridge University
Press.
62. Edwards, A.D.N. (1987) Adapting user interfaces for visually disabled
users. PhD Thesis, Open University, Milton Keynes.
63. Edwards, A.D.N. (1989). Soundtrack: An auditory interface for blind users.
Human Computer Interaction, 4(1), pp. 45-66.
64. Edworthy, J., Loxley, S. & Dennis, I. (1991). Improving auditory
warning design: Relationships between warning sound parameters and perceived
urgency. Human Factors, 33(2), pp. 205-231.
65. Edworthy, J., Loxley, S., Geelhoed, E. & Dennis, I. (1989). The
perceived urgency of auditory warnings. Proceedings of the Institute of
Acoustics, 11(5), pp. 73-80.
66. European Telecommunications Standards Institute (1992). Guidelines for
the specification of tones (Technical Report No. HF Temp. Doc. 10, Annex
6.). European Telecommunications Standards Institute.
67. European Telecommunications Standards Institute (1992). Human Factors
(HF); Specification of characteristics of telephone services tones when locally
generated in terminals (Draft ETS 300 295 No. DE/HF-1003-B). European
Telecommunications Standards Institute.
68. Fitch, W.T. & Kramer, G. (1992). Sonifying the body electric:
Superiority of an auditory over a visual display in a complex, multivariate
system. In G. Kramer (Ed.), Auditory Display, sonification, audification and
auditory interfaces. The Proceedings of the First International Conference on
Auditory Display, Santa Fé Institute, Santa Fé:
Addison-Wesley, pp. 307-326.
69. Foster, S.H., Wenzel, E.M. & Taylor, R.M. (1991). Real-time synthesis
of complex acoustic environments. In IEEE workshop on applications of signal
processing to audio & acoustics, Oct. 20-23, New Paltz, N.Y.
70. Fraisse, P. (1981). Multisensory aspects of rhythm. In R. D. Walk & H.
L. Pick (Eds.), Intersensory Perception and Sensory-Integration, pp.
217-245. New York: Plenum Press.
71. Fraisse, P. (1982). Rhythm and tempo. In D. Deutsch (Ed.), The
psychology of music, pp. 149-180. San Diego, CA.: Academic Press.
72. Frysinger, S.P. (1990). Applied research in auditory data representation.
In D. Farrell (Ed.), Extracting meaning from complex data: processing,
display, interaction. Proceedings of the SPIE/SPSE symposium on electronic
imaging. 1259 Springfield, VA.: SPIE, pp. 130-139.
73. Gaver, W. (1986). Auditory Icons: Using sound in computer interfaces.
Human Computer Interaction, 2(2), pp. 167-177.
74. Gaver, W. (1989). The SonicFinder: An interface that uses auditory icons.
Human Computer Interaction, 4(1), pp. 67-94.
75. Gaver, W. (1992). Using and creating auditory icons. In G. Kramer (Ed.),
Auditory Display, sonification, audification and auditory interfaces. The
Proceedings of the First International Conference on Auditory Display,
Santa Fé Institute, Santa Fé: Addison-Wesley, pp. 417-446.
76. Gaver, W. (1993). Synthesizing auditory icons. In S. Ashlund, K. Mullet,
A. Henderson, E. Hollnagel, & T. White (Eds.), INTERCHI'93, Amsterdam:
ACM Press, Addison-Wesley, pp. 228-235.
77. Gaver, W., Moran, T., MacLean, A., Lövstrand, L., Dourish, P., Carter,
K. & Buxton, W. (1992). Realizing a video environment: EuroParc's RAVE
system. In P. Bauersfeld, J. Bennett, & G. Lynch (Eds.), Proceedings of
CHI'92, Monterey, California: ACM Press, Addison-Wesley, pp. 27-35.
78. Gaver, W. & Smith, R. (1990). Auditory icons in large-scale
collaborative environments. In D. Diaper, D. Gilmore, G. Cockton, & B.
Shackel (Eds.), Human Computer Interaction: Interact'90, Cambridge, UK:
Elsevier Science Publishers B.V. (North Holland), pp. 735-740.
79. Gaver, W., Smith, R. & O'Shea, T. (1991). Effective sounds in complex
systems: The ARKola simulation. In S. Robertson, G. Olson, & J. Olson
(Eds.), Proceedings of CHI'91, New Orleans: ACM Press, Addison-Wesley,
pp. 85-90.
80. Gelfand, S.A. (1981). Hearing: An introduction to psychological and
physiological acoustics. New York: Marcel Dekker Inc.
81. Gehring, B. & Morgan, D. (1990). Applications of Binaural Sound in the
Cockpit. Speech Technology, 5(2), pp. 46-50.
82. Gerth, J.M. (1992) Performance based refinement of a synthetic auditory
ambience: identifying and discriminating auditory sources. PhD. Thesis,
Georgia Institute of Technology.
83. Gleitman, H. (1981). Psychology. New York: W. W. Norton & Co.
84. Glinert, E. & Blattner, M. (1992). Programming the multimodal
interface. In ACM MultiMedia'93: ACM Press, Addison-Wesley, pp. 189-197.
85. Gravetter, F.J. & Wallnau, L.B. (1985). Statistics for the
behavioural sciences (2nd ed.). St Paul, MN.: West Publishing
Company.
86. Grey, J.M. (1977). Multidimensional perceptual scaling of musical timbres.
Journal of the Acoustical Society of America, 61(5), pp. 1270-1277.
87. Grey, J.M. & Gordon, J.W. (1978). Perceptual effects of spectral
modifications on musical timbres. Journal of the Acoustical Society of
America, 63(5), pp. 1493-1500.
88. Handel, S. (1989). Listening: An introduction to the perception of
auditory events. Cambridge, Massachusetts: MIT Press.
89. Hapeshi, K. & Jones, D. (1992). Interactive multimedia for instruction:
A cognitive analysis of the role of audition and vision. International
Journal of Human-Computer Interaction, 4(1), pp. 79-99.
90. Harrison, M. & Barnard, P. (1993). On defining requirements for
interaction. In Proceedings of the IEEE International Workshop on
requirements engineering, pp. 50-54. New York: IEEE.
91. Hart, S. & Staveland, L. (1988). Development of NASA-TLX (Task Load
Index): Results of empirical and theoretical research. In P. Hancock & N.
Meshkati (Eds.), Human mental workload, pp. 139-183. Amsterdam: North
Holland B.V.
92. Hart, S.G. & Wickens, C. (1990). Workload assessment and prediction. In
H. R. Booher (Ed.), MANPRINT, an approach to systems integration, pp.
257-296. New York: Van Nostrand Reinhold.
93. Hartson, H. & Gray, P. (1992). Temporal aspects of tasks in the User
Action Notation. Human-Computer Interaction, 7, pp. 1-45.
94. Hartson, H.R., Siochi, A.C. & Hix, D. (1990). The UAN: A user-oriented
representation for direct manipulation interface designs. ACM Transactions
on Information Systems, 8(3), pp. 181-203.
95. Hoare, C.A.R. (1985). Communicating sequential processes. Exeter,
UK: Prentice-Hall International.
96. Iverson, W. (1992). The sound of science. Computer Graphics World,
15(1), pp. 54-62.
97. Johnson, J. (1990). Modes in Non-Computer Devices. International Journal
of Man-Machine Studies, 32(4), pp. 423-438.
98. Johnson, J. & Engelbeck, G. (1989). Modes Survey Results. ACM SIGCHI
Bulletin, 20(4), pp. 38-50.
99. Jones, D. (1989). The Sonic Interface. In M. Smith & G. Salvendy
(Eds.), Work with computers: Organizational, Management, Stress and health
aspects. Amsterdam: Elsevier Science publishers.
100. Jones, S.D. & Furner, S.M. (1989). The construction of audio icons and
information cues for human-computer dialogues. In T. Megaw (Ed.),
Contemporary Ergonomics: Proceedings of the Ergonomics Society's 1989 Annual
Conference, Reading, UK: Taylor & Francis, pp. 436-441.
101. Kishi, N. (1992). SimUI: Graphical user interface evaluation using
playback. In Proceedings of the Sixteenth Annual International Computer
Software & Applications Conference, Chicago, Illinois: IEEE Computer
Society, pp. 121-127.
102. Kramer, G. (Ed.). (1992). Auditory display, sonification, audification
and auditory interfaces. The Proceedings of the First International
Conference on Auditory Display. Reading, Massachusetts: Santa Fé Institute,
Addison-Wesley.
103. Kramer, G. (1992). An introduction to auditory display. In G. Kramer
(Ed.), Auditory Display, sonification, audification and auditory interfaces.
The Proceedings of the First International Conference on Auditory Display,
Santa Fé Institute, Santa Fé: Addison-Wesley, pp. 1-77.
104. Kramer, G. (1992). Some organizing principles for representing data with
sound. In G. Kramer (Ed.), Auditory Display, sonification, audification and
auditory interfaces. The Proceedings of the First International Conference on
Auditory Display, Santa Fé Institute, Santa Fé:
Addison-Wesley, pp. 185-222.
105. Lee, W.O. (1992). The effects of skill development and feedback on action
slips. In A. Monk, D. Diaper, & M. D. Harrison (Eds.), Proceedings of
HCI'92, VII York, UK: Cambridge University Press, pp. 73-86.
106. Leiser, R., Avons, S. & Carr, D. (1989). Paralanguage and
human-computer interaction. Part 2: Comprehension of synthesised vocal
segregates. Behaviour and Information Technology, 8(1), pp. 23-32.
107. Lerdahl, F. (1987). Timbral hierarchies. Contemporary Music Review,
2(1), pp. 135-160.
108. Levitt, H. & Voroba, B. (1974). Localization. In S. E. Gerber (Ed.),
Introductory Hearing Science: physical and physiological concepts, pp.
188-196. Philadelphia: W.B. Saunders Company.
109. Loveless, N.E., Brebner, J. & Hamilton, P. (1970). Bisensory
presentation of information. Psychological Bulletin, 73(3), pp.
161-199.
110. Loy, G. (1985). Musicians make a standard: The MIDI phenomenon.
Computer Music Journal, 9(4), pp. 8-26.
111. Lucas, P. (1994). An evaluation of the communicative ability of auditory
icons and earcons. In G. Kramer (Ed.), Accepted for publication in the
proceedings of ICAD'94, Santa Fé Institute, Santa Fé, NM.:
Addison-Wesley.
112. Ludwig, L.F., Pincever, N. & Cohen, M. (1990). Extending the notion of
a window system to audio. IEEE Computer, August, pp. 66-72.
113. Mansur, D.L., Blattner, M. & Joy, K. (1985). Sound-Graphs: A numerical
data analysis method for the blind. Journal of Medical Systems, 9, pp.
163-174.
114. Matoba, H., Hirabayashi, F. & Kasahara, Y. (1989). Issues in auditory
interfaces management: An extra channel for computer applications. In M. Smith
& G. Salvendy (Eds.), Work with Computers: Organizational, Management,
Stress and health aspects. Amsterdam: Elsevier Science publishers.
115. Mayes, T. (1992). The `M' word: Multimedia interfaces and their role in
interactive learning systems. In A. D. N. Edwards & S. Holland (Eds.),
Multimedia Interface Design in Education, pp. 1-22. Berlin:
Springer-Verlag.
116. McCormick, E.J. & Sanders, M.S. (1982). Human factors in
engineering and design (5th ed.). McGraw-Hill.
117. Monk, A. (1986). Mode Errors: A user-centered analysis and some
preventative measures using keying-contingent sound. International Journal
of Man-Machine Studies, 24, pp. 313-327.
118. Moore, B.C. (1989). An Introduction to the Psychology of Hearing
(2nd ed.). London: Academic Press.
119. Mountford, S.J. & Gaver, W. (1990). Talking and listening to
computers. In B. Laurel (Ed.), The art of human-computer interface
design, pp. 319-334. Reading, Massachusetts: Addison-Wesley.
120. Myers, B. (1990). All the widgets. ACM SIGGRAPH Video Review, CHI'90
Special Issue(57).
121. Mynatt, E.D. (1992). Auditory presentation of graphical user interfaces.
In G. Kramer (Ed.), Auditory Display, sonification, audification and
auditory interfaces. The Proceedings of the First International Conference on
Auditory Display, Santa Fé Institute, Santa Fé:
Addison-Wesley, pp. 533-555.
122. NASA Human Performance Research Group (1987). Task Load Index
(NASA-TLX) v1.0 computerised version. NASA Ames Research Centre.
123. Norman, D.A. (1986). Chapter 3: Cognitive Engineering. In D. A. Norman
& S. W. Draper (Eds.), User-centered system design, pp. 31-61.
Hillsdale, New Jersey: Lawrence Erlbaum Associates.
124. Norman, D.A. (1988). The psychology of everyday things. USA: Basic
Books.
125. O'Leary, A. & Rhodes, G. (1984). Cross-modal effects on visual and
auditory object perception. Perception and Psychophysics, 35(6), pp.
565-569.
126. Open University (1990). A guide to usability. Part of the Usability
Now! Research Technology initiative. Milton Keynes: DTI and the Open
University.
127. Oppenheim, D.V., Anderson, T. & Kirk, R. (1993). Perceptual
parameters: Their specification, scoring and control within two software
composition systems. In Proceedings of the International Computer Music
Conference, 1993, Tokyo.
128. Patterson, R.D. (1982). Guidelines for auditory warning systems on
civil aircraft (CAA Paper No. 82017). Civil Aviation Authority, London.
129. Patterson, R.D. (1989). Guidelines for the design of auditory warning
sounds. Proceeding of the Institute of Acoustics, Spring Conference,
11(5), pp. 17-24.
130. Patterson, R.D. (1990). Auditory warning sounds in the work environment.
In D. E. Broadbent, A. Baddeley, & J. T. Reason (Eds.), Human Factors in
Hazardous Situations. Phil. Trans. B 327, pp. 485-492. London: The Royal
Society.
131. Patterson, R.D., Edworthy, J., Shailer, M., Lower, M. & Wheeler, P.
(1986). Alarm sounds for medical equipment in intensive care areas and
operating theatres (Report No. AC598). Institute of Sound and Vibration
Research, University of Southampton.
132. Perrott, D., Sadralobadi, T., Saberi, K. & Strybel, T. (1991). Aurally
aided visual search in the central visual field: Effects of visual load and
visual enhancement of the target. Human Factors, 33(4), pp. 389-400.
133. Pezdeck, K. (1987). Television comprehension as an example of applied
research in cognitive psychology. In D. Berger, K. Pezdeck, & W. Banks
(Eds.), Applications in Cognitive Psychology, pp. 3-15. Hillsdale, New
Jersey: Lawrence Erlbaum Associates.
134. Pitt, I. & Edwards, A. (1991). Navigating the interface by sound for
blind users. In D. Diaper & N. Hammond (Eds.), Proceedings of
HCI'91, VI Edinburgh: Cambridge University Press, pp. 373-383.
135. Plomp, R. (1976). Chapter 6: Timbre of complex tones. In Aspects of
tone sensation, pp. 85-110. London: Academic Press.
136. Portigal, S. (1994) Auralization of document structure. MSc.
Thesis, The University of Guelph, Canada.
137. Prior, M. & Troup, G.A. (1988). Processing of timbre and rhythm in
musicians and non-musicians. Cortex, 24(3), pp. 451-456.
138. Rasch, R.A. & Plomp, R. (1982). The perception of musical tones. In
D.
Deutsch (Ed.), The Psychology of Music, pp. 1-21. New York: Academic
Press.
139. Rayner, K. & Pollatsek, A. (1989). The Psychology of Reading.
Englewood Cliffs, New Jersey: Prentice-Hall International, Inc.
140. Reason, J. (1990). Human Error. Cambridge, UK: Cambridge University
Press.
141. Reber, A.S. (1985). The Penguin Dictionary of Psychology. London:
Penguin Books.
142. Reich, S.S. (1980). Significance of pauses for speech perception.
Journal of Psycholinguistic Research, 9(4), pp. 379-389.
143. Reichman, R. (1986). Chapter 14: Communications paradigms for a window
system. In D. A. Norman & S. W. Draper (Eds.), User-Centered System
Design, pp. 285-314. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
144. Robson, C. (1994). Experiment, design and statistics in psychology
(3rd ed.). London: Penguin Books Ltd.
145. Rosenberg, K. (1990). Statistics for behavioural sciences. Dubuque,
IA.: Wm. C. Brown Publishers.
146. Sakamoto, N., Gotoh, T. & Kimura, Y. (1976). On `out of head
localization' in headphone listening. Journal of the Audio Engineering
Society, 24(9), pp. 710-716.
147. Scharf, B. & Houtsma, A.J. (1986). Audition II: Loudness, pitch,
localization, aural distortion and pathology. In K. R. Boff, L. Kaufman, &
P. Thomas (Eds.), Handbook of perception and human performance, pp.
15.1-15.60. New York: Wiley.
148. Scholes, P.A. (1975). The Oxford companion to music (10th ed.).
Oxford: Oxford University Press.
149. Scott, D. (1993). Status conspicuity, peripheral vision and text editing.
Behaviour and Information Technology, 12(1), pp. 23-31.
150. Scott, D. & Findlay, J.M. (1991). Optimum display arrangements for
presenting status information. International Journal of Man-Machine
Studies, 35, pp. 399-407.
151. Sellen, A., Kurtenbach, G. & Buxton, W. (1992). The prevention of mode
errors through sensory feedback. Human Computer Interaction, 7, pp.
141-164.
152. Sellen, A.J., Kurtenbach, G.P. & Buxton, W. (1990). The role of visual
and kinesthetic feedback in the prevention of mode errors. In D. Diaper, D.
Gilmore, G. Cockton, & B. Shackel (Eds.), Human Computer Interaction:
Interact'90, Cambridge, UK: Elsevier Science Publishers B.V. (North
Holland), pp. 667-673.
153. Slowiaczek, L.M. & Nusbaum, H.C. (1985). Effects of speech rate and
pitch contour on the perception of synthetic speech. Human Factors,
27(6), pp. 701-712.
154. Smith, B. (1991). UNIX goes Indigo. Byte, 16(9), pp. 40-41.
155. Smith, S., Bergeron, R.D. & Grinstein, G.G. (1990). Stereophonic and
surface sound generation for exploratory data analysis. In CHI `90,
Seattle, Washington: ACM Press: Addison-Wesley, pp. 125-132.
156. Smither, J. (1993). Short term memory demands in processing synthetic
speech by old and young adults. Behaviour and Information Technology,
12(6), pp. 330-335.
157. Sonnenwald, D.H., Gopinath, B., Haberman, G.O., Keese, W.M. & Myers,
J.S. (1990). InfoSound: An audio aid to program comprehension. Proceedings
of the 23rd Hawaii International Conference on System Sciences, pp.
541-546.
158. Speeth, S.D. (1961). Seismometer Sounds. Journal of the Acoustical
Society of America, 33(7), pp. 909-916.
159. Spivey, J.M. (1992). The Z notation: A reference manual
(2nd ed.). Hemel Hempstead, UK: Prentice Hall International.
160. Stevens, R.D., Brewster, S.A., Wright, P.C. & Edwards, A.D.N. (1994).
Providing an audio glance at algebra for blind readers. In G. Kramer (Ed.),
Accepted for publication in the proceedings of ICAD'94, Santa Fé
Institute, Santa Fé: Addison-Wesley.
161. Strybel, T., Manligas, C. & Perrott, D. (1992). Minimum audible
movement angle as a function of the azimuth and elevation of the source.
Human Factors, 34(3), pp. 267-275.
162. Sumikawa, D., Blattner, M. & Greenberg, R. (1986). Earcons:
Structured
Audio Messages. Unpublished paper.
163. Sumikawa, D., Blattner, M., Joy, K. & Greenberg, R. (1986).
Guidelines for the syntactic design of audio cues in computer interfaces
(Technical Report No. UCRL 92925). Lawrence Livermore National Laboratory.
164. Sumikawa, D.A. (1985). Guidelines for the integration of audio cues
into computer user interfaces (Technical Report No. UCRL 53656). Lawrence
Livermore National Laboratory.
165. Svean, J. (1994). AEP: Active Ear Plug. Sintef Research Laboratory,
Norway. Personal Communication.
166. Swift, C.G., Flindell, I.H. & Rice, C.G. (1989). Annoyance and
impulsivity judgements of environmental noises. Proceedings of the Institute
of Acoustics, 11(5), pp. 551-559.
167. Tesler, L. (1981). The Smalltalk environment. Byte(August), pp.
90-147.
168. Thimbleby, H. (1990). User Interface Design. New York: ACM Press,
Addison-Wesley.
169. Vanderveer, N.J. (1979). Ecological acoustics: Human perception of
environmental sounds (Thesis No. 40/09B, 4543). Dissertation Abstracts
International.
170. Von Bismarck, G. (1974). Timbre of steady sounds: A factorial
investigation of its verbal attributes. Acustica, 30(3), pp. 146-159.
171. Wagenaar, W.A., Varey, C.A. & Hudson, P.T. (1984). Do audiovisuals
aid? A study of bisensory presentation on the recall of information. In H.
Bouma & D. G. Bouwhuis (Eds.), Attention and Performance: X, pp.
379-391. Lawrence Erlbaum Associates.
172. Walker, J.T. & Scott, K.J. (1981). Auditory-visual conflicts in the
perceived duration of lights, tones and gaps. Journal of Experimental
Psychology: Human Perception and Performance, 7(6), pp. 1327-1339.
173. Warren, W.H. & Verbrugge, R.R. (1984). Auditory perception of
breaking
and bouncing events: A case study in ecological acoustics. Journal of
Experimental Psychology: Human Perception and Performance, 10, pp.
704-712.
174. Watkins, W.H. & Feehrer, C.E. (1965). Acoustic facilitation of visual
detection. Journal of Experimental Psychology, 70(3), pp. 322-333.
175. Webster, B. (1989). The NeXT Book. Reading, MA: Addison-Wesley.
176. Wedin, L. & Goude, G. (1972). Dimension analysis of the perception of
instrumental timbre. Scandinavian Journal of Psychology, 13(3), pp.
228-240.
177. Wenzel, E., Wightman, F. & Foster, S. (1988). Development of a 3D
auditory display system. SIGCHI Bulletin, 20(2), pp. 52-57.
178. Wenzel, E., Wightman, F.L. & Kistler, D. (1991). Localization with
non-individualized virtual display cues. In S. Robertson, G. Olson, & J.
Olson (Eds.), CHI'91, New Orleans: ACM Press, Addison-Wesley, pp.
351-359.
179. Wenzel, E.M. (1992). Three-Dimensional virtual acoustic displays.
Presence: teleoperators and virtual environments, 1, pp. 80-107.
180. Wenzel, E.M., Foster, S.H., Wightman, F.L. & Kistler, D.J. (1989).
Realtime digital synthesis of localized auditory cues over headphones. In
IEEE workshop on applications of signal processing to audio & acoustics.
Oct. 15-18, New Paltz, N.Y.
181. Wessel, D.L. (1979). Timbre space as a musical control structure.
Computer Music Journal, 3(2), pp. 42-52.
182. Wickens, C.D., Mountford, S.J. & Schreiner, W. (1981). Multiple
resources, task-hemispheric integrity and individual differences in
time-sharing. Human Factors, 23(2), pp. 211-229.
183. Williams, S. (1992). Perceptual principles in sound grouping. In G.
Kramer (Ed.), Auditory Display, sonification, audification and auditory
interfaces. The Proceedings of the First International Conference on
Auditory Display, Santa Fé Institute, Santa Fé: Addison-Wesley, pp. 95-126.
184. Wright, P.C. & Monk, A.F. (1989). Evaluation for design. In A.
Sutcliffe & L. Macaulay (Eds.), People and computers 5, pp. 345-358.
Cambridge: Cambridge University Press.
185. Yager, T. (1991). The Littlest SPARC. Byte, 16(2), pp. 169-174.