<XML><RECORDS><RECORD><REFERENCE_TYPE>3</REFERENCE_TYPE><REFNUM>7182</REFNUM><AUTHORS><AUTHOR>Eslambolchilar,P.</AUTHOR><AUTHOR>Crossan,A.</AUTHOR><AUTHOR>Murray-Smith,R.</AUTHOR></AUTHORS><YEAR>2004</YEAR><TITLE>Model-based target sonification on mobile devices</TITLE><PLACE_PUBLISHED>International Workshop on Interactive Sonification (Human Interaction with Auditory Displays), eds. A. Hunt, Th. Hermann, Bielefeld, Germany</PLACE_PUBLISHED><PUBLISHER>N/A</PUBLISHER><LABEL>Eslambolchilar:2004:7182</LABEL><KEYWORDS><KEYWORD>Auditory interfaces</KEYWORD></KEYWORDS><ABSTRACT>We investigate the use of audio and haptic feedback to augment the display of a mobile device controlled by tilt input. We provide an example of this based on Doppler effects, which highlight the user's approach to a target, or a target's movement from the current state, in the same way we hear the pitch of a siren change as it passes us. Twelve participants practiced navigating and browsing a state-space that was displayed via audio and vibrotactile modalities. We implemented the experiment on a Pocket PC, with an accelerometer attached to the serial port and a headset attached to the audio port. Users navigated through the environment by tilting the device. Feedback was provided by audio displayed via a headset, and by vibrotactile information displayed via a vibrotactile unit in the Pocket PC. Users selected targets placed randomly in the state-space, supported by combinations of audio, visual and vibrotactile cues. The speed of target acquisition and error rate were measured, and summary statistics on the acquisition trajectories were calculated. These data were used to compare different display combinations and configurations. The results in the paper quantified the changes brought by predictive or "quickened" sonified displays in mobile, gestural interaction.</ABSTRACT></RECORD></RECORDS></XML>