Audio



Consonance-based spectral mappings
Abstract: Presents a method of mapping the spectrum of a sound so as to make it consonant with a given specified reference spectrum. One application is to transform nonharmonic sounds into harmonic equivalents. Alternatively, it can be used to create nonharmonic instruments that retain the tonal qualities of familiar (harmonic) instruments. Musical uses of such timbres are discussed, and new forms of (nonharmonic) modulation are introduced. A series of sound examples demonstrates both the breadth and limitations of the method.
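
A minimal sketch of the core idea, under the assumption (not from the paper) that the sound is given as a list of partials: each source partial is moved to the nearest partial of the reference spectrum, then the result is resynthesized additively. All function names and the example spectra are hypothetical.

    import numpy as np

    def map_to_reference(partial_freqs, partial_amps, reference_freqs):
        """Move each source partial to the nearest reference partial (illustrative)."""
        ref = np.asarray(reference_freqs, dtype=float)
        mapped = [ref[np.argmin(np.abs(ref - f))] for f in partial_freqs]
        return np.array(mapped), np.asarray(partial_amps, dtype=float)

    def additive_resynthesis(freqs, amps, duration=1.0, sr=44100):
        """Resynthesize the mapped spectrum as a sum of sinusoids."""
        t = np.arange(int(duration * sr)) / sr
        out = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
        return out / np.max(np.abs(out))

    # Example: a bell-like nonharmonic spectrum mapped onto a harmonic series on 220 Hz.
    source_freqs = [224.0, 561.0, 842.0, 1318.0]
    source_amps = [1.0, 0.6, 0.4, 0.2]
    harmonic_ref = [220.0 * k for k in range(1, 9)]
    freqs, amps = map_to_reference(source_freqs, source_amps, harmonic_ref)
    signal = additive_resynthesis(freqs, amps)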

Audio Feedback for Gesture Recognition
Abstract: A general framework for producing formative audio feedback for gesture recognition is presented, covering both the dynamic and semantic aspects of gestures. The belief states are probability density functions conditioned on the trajectories of the observed variables. We describe example implementations of gesture recognition based on Hidden Markov Models and a dynamic programming recognition algorithm. Granular synthesis is used to present the audio display of the changing probabilities and observed states.
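
An illustrative toy version of such a granular display, assuming a precomputed belief trajectory (class posterior probabilities per frame). Mapping probability to grain density is an assumption, not the paper's exact scheme, and all names are hypothetical.

    import numpy as np

    SR = 22050

    def grain(freq, dur=0.03, sr=SR):
        """A single Hann-windowed sine grain."""
        t = np.arange(int(dur * sr)) / sr
        return np.hanning(t.size) * np.sin(2 * np.pi * freq * t)

    def sonify_beliefs(belief_trajectory, class_pitches, frame_dur=0.05, sr=SR):
        """Granular display: each gesture class emits grains at a rate
        proportional to its current posterior probability."""
        out = np.zeros(int(len(belief_trajectory) * frame_dur * sr) + sr)
        rng = np.random.default_rng(0)
        for i, beliefs in enumerate(belief_trajectory):
            t0 = int(i * frame_dur * sr)
            for p, pitch in zip(beliefs, class_pitches):
                if rng.random() < p:  # probability controls grain density
                    g = grain(pitch)
                    out[t0:t0 + g.size] += p * g
        return out / (np.max(np.abs(out)) or 1.0)

    # Toy trajectory: belief shifts from class A (440 Hz) to class B (660 Hz).
    traj = [np.array([1 - a, a]) for a in np.linspace(0, 1, 40)]
    audio = sonify_beliefs(traj, class_pitches=[440.0, 660.0])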

Different effects of auditory feedback in man-machine interfaces
Abstract: Two experiments were carried out to estimate the effect of sound feedback (e.g., auditory alarms) on operator performance in a man-machine system. The first experiment tested the hypothesis that additional but redundant auditory feedback helps. Twelve subjects were instructed to define queries on a simple database. In this experiment, we could not find a general superiority of auditory feedback. In a second experiment we designed a process simulator so that each of eight machines made tones to indicate its hidden status over time. Eight computer science students operated this process simulation program of an assembly line with robots. Relevant information about disturbances and machine breakdowns was also given in auditory form. The results indicate that the additional feedback of auditory alarms significantly improves operator performance and positively affects some mood aspects.

Audio-enhanced collaboration at an interactive electronic whiteboard
Abstract: This paper describes an experimental setup to investigate new possibilities for supporting the cooperative work of a team with audio feedback on a large interactive electronic whiteboard, called DynaWall®. To enrich the interaction and the feedback qualities within a teamwork situation, the DynaWall is equipped with a set of loudspeakers that are invisibly integrated into the environment. Different forms of audio feedback are realized and discussed to meet the requirements of collaborative teamwork situations. Audio feedback for a gesture interface with sound cues is implemented to improve the use of gestures to execute commands.

SCANNED SYNTHESIS
Abstract: This paper describes a new technique for the synthesis of musical sounds which we have named Scanned Synthesis. Scanned Synthesis is based on the psychoacoustics of how we hear and appreciate timbres and on our motor control...
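
A rough sketch of the technique as usually described: a slow mass-spring "string" is updated at a haptic rate while its displacement shape is scanned as a wavetable at the audio rate. Parameter values and function names here are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def step_string(pos, vel, stiffness=0.1, damping=0.995, dt=1.0):
        """One slow update of a mass-spring chain (finite-difference wave equation)."""
        lap = np.roll(pos, 1) - 2 * pos + np.roll(pos, -1)  # discrete Laplacian
        vel = damping * (vel + stiffness * lap * dt)
        return pos + vel * dt, vel

    def scanned_synthesis(n_masses=64, freq=110.0, dur=2.0, sr=44100, haptic_rate=50):
        """Scan the evolving string shape as a wavetable at audio rate."""
        pos = np.sin(np.linspace(0, 2 * np.pi, n_masses, endpoint=False))  # pluck shape
        vel = np.zeros(n_masses)
        out = np.empty(int(dur * sr))
        phase = 0.0
        steps_per_update = sr // haptic_rate
        for n in range(out.size):
            if n % steps_per_update == 0:  # slow dynamical update at the haptic rate
                pos, vel = step_string(pos, vel)
            out[n] = pos[int(phase) % n_masses]  # scan the current shape
            phase += n_masses * freq / sr
        return out / np.max(np.abs(out))

    audio = scanned_synthesis()

In a live setting the performer would perturb pos and vel directly; the timbre then evolves at gestural speed while the pitch is fixed by the scanning frequency.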

Modeling And Sonifying Pen Strokes On Surfaces
Abstract: This paper will describe the approach of modeling and sonifying the interaction with a pen on surfaces. The main acoustic parts and the dynamic behavior of the interaction are identified, and a synthesis model is proposed to imitate the sound emanation during typical interactions on surfaces. Although a surface is two-dimensional, modeling the acoustical qualities of surfaces has to employ volumes to form resonances. Specific qualities of surfaces like the roughness and the texture are imitated by a...
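
A hedged sketch of one plausible reading of such a model: pen velocity modulates a noise excitation (roughness), which is fed through fixed resonators standing in for the resonating volumes. The resonance frequencies and all names are invented for illustration; this is not the paper's actual synthesis model.

    import numpy as np
    from scipy.signal import lfilter

    SR = 44100

    def resonator(x, freq, r=0.999, sr=SR):
        """Two-pole resonant filter standing in for a body/cavity resonance."""
        w = 2 * np.pi * freq / sr
        return lfilter([1 - r], [1, -2 * r * np.cos(w), r * r], x)

    def sonify_stroke(velocity_ctl, ctl_rate=100, roughness=0.5, sr=SR):
        """Pen-velocity-driven friction noise through fixed resonances."""
        n = int(len(velocity_ctl) * sr / ctl_rate)
        vel = np.interp(np.arange(n) / sr,
                        np.arange(len(velocity_ctl)) / ctl_rate,
                        velocity_ctl)  # upsample the control signal
        noise = np.random.default_rng(1).uniform(-1, 1, n)
        excitation = roughness * vel * noise  # faster stroke -> louder friction noise
        out = sum(resonator(excitation, f) for f in (900.0, 2300.0, 4100.0))
        return out / np.max(np.abs(out))

    # A stroke that accelerates, then stops.
    velocity = np.concatenate([np.linspace(0, 1, 80), np.linspace(1, 0, 20)])
    audio = sonify_stroke(velocity)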

ACHORRIPSIS: A SONIFICATION OF PROBABILITY DISTRIBUTIONS
Abstract: The 1957 musical composition Achorripsis by Iannis Xenakis was composed using four different probability distributions, applied over three different organizational domains, during the course of the 7-minute piece. While Xenakis did not have sonification in mind, his artistic choices in rendering mathematical formulations into musical events (time, space, timbre, glissando speed) provide useful contributions to the “mapping problem” in three significant ways:
1. He pushes the limit of loading the ear with multiple formulations simultaneously.
2. His mapping of “velocity” to string glissando speed provides a useful method of working with a vector quantity with magnitude and direction.
3. His artistic renderings, i.e. “musifications” of these distributions, invite the general question of whether musical/artistic sonifications are more intelligible to the human ear than sonifications prepared without any musical “filtering” or constraints (e.g., that they could be notated and performed by musicians).
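
As a toy illustration of the first point: event counts per cell of a timbre-by-time matrix can be drawn from a Poisson distribution, echoing Xenakis's density calculations, and each event rendered as a short sound. The matrix size, mean density, and pitch mapping below are assumptions, not Xenakis's actual values.

    import numpy as np

    def achorripsis_grid(n_cells=28, n_timbres=7, mean_events=0.6, seed=1957):
        """Draw an event count for each (timbre, time-cell) from a Poisson distribution."""
        rng = np.random.default_rng(seed)
        return rng.poisson(mean_events, size=(n_timbres, n_cells))

    def render(grid, cell_dur=0.5, sr=22050):
        """Render each event as a short sine blip; the row index selects the pitch."""
        out = np.zeros(int(grid.shape[1] * cell_dur * sr) + sr)
        rng = np.random.default_rng(0)
        t = np.arange(int(0.1 * sr)) / sr
        for row, counts in enumerate(grid):
            f = 220.0 * 2 ** (row / 2)  # one pitch per "timbre" row
            blip = np.hanning(t.size) * np.sin(2 * np.pi * f * t)
            for cell, k in enumerate(counts):
                for _ in range(k):  # k events scattered inside the cell
                    t0 = int((cell + rng.random()) * cell_dur * sr)
                    out[t0:t0 + blip.size] += blip
        return out / (np.max(np.abs(out)) or 1.0)

    audio = render(achorripsis_grid())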

Timbre Space as a Musical Control Structure
Abstract: In this paper, we will describe a system for taking subjective measures of perceptual contrast between sound objects and using this data as input to some computer programs. The computer programs use multidimensional scaling algorithms to generate geometric representations from the input data. In the timbral spaces that result from the scaling programs, the various tones can be represented as points and a good statistical relationship can be sought between the distances in the space and the contrast judgments between the corresponding tones. The spatial representation is given a psychoacoustical interpretation by relating its dimensions to the acoustical properties of the tones. Controls are then applied directly to these properties in synthesis. The control schemes to be described are for additive synthesis and allow for the manipulation of the evolving spectral energy distribution and various temporal features of the tones. Tests of the control schemes have been carried out in musical contexts. Particular emphasis will be given here to the construction of melodic lines in which the timbre is manipulated on a note-to-note basis. Implications for the design of human control interfaces and of software for real-time digital sound synthesizers will be discussed.
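
A compact sketch of the scaling step only: classical multidimensional scaling embeds the tones as points whose pairwise distances approximate the judged contrasts. The paper's actual MDS algorithm and data are not reproduced; the judgment matrix below is invented.

    import numpy as np

    def classical_mds(dissimilarity, n_dims=2):
        """Classical MDS: embed items so distances approximate judged dissimilarities."""
        d = np.asarray(dissimilarity, dtype=float)
        n = d.shape[0]
        j = np.eye(n) - np.ones((n, n)) / n  # centering matrix
        b = -0.5 * j @ (d ** 2) @ j          # double-centered Gram matrix
        vals, vecs = np.linalg.eigh(b)
        order = np.argsort(vals)[::-1][:n_dims]
        return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

    # Hypothetical pairwise contrast judgments among four tones (0 = identical).
    judged = np.array([[0.0, 2.0, 6.0, 7.0],
                       [2.0, 0.0, 5.0, 6.5],
                       [6.0, 5.0, 0.0, 2.5],
                       [7.0, 6.5, 2.5, 0.0]])
    coords = classical_mds(judged)  # one point per tone in a 2-D timbre space

Each axis of the resulting space would then be given an acoustical interpretation (e.g., spectral energy distribution) and driven directly as a synthesis control.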

An Auditory Display System for Aiding Interjoint Coordination
Abstract: Patients with a lack of proprioception are unable to build and maintain ‘internal models’ of their limbs and monitor their limb movements, because these patients do not receive the appropriate information from muscles and joints. This project was undertaken to determine if auditory signals can provide proprioceptive information normally obtained through muscle and joint receptors. Sonification of spatial location and sonification of joint motion, for monitoring arm/hand motions, were attempted in two pilot experiments with a patient. Sonification of joint motion, through strong time/synchronization cues, was the most successful approach. These results are encouraging and suggest that auditory feedback of joint motions may be a substitute for proprioceptive input. However, additional data will have to be collected and control experiments will have to be done.
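
One possible form of such a display, sketched under assumptions: joint angle maps to pitch and angular speed to loudness, so the motion and its timing become audible. The ranges and mapping are illustrative, not the system described in the paper.

    import numpy as np

    def sonify_joint(angles_deg, ctl_rate=100, sr=22050, f_lo=200.0, f_hi=800.0):
        """Map a joint-angle trajectory to pitch; angular speed drives loudness."""
        n = int(len(angles_deg) * sr / ctl_rate)
        t_ctl = np.arange(len(angles_deg)) / ctl_rate
        t = np.arange(n) / sr
        angle = np.interp(t, t_ctl, angles_deg)          # upsample to audio rate
        freq = f_lo + (f_hi - f_lo) * (angle - angle.min()) / np.ptp(angle)
        speed = np.abs(np.gradient(angle) * sr)          # degrees per second
        amp = speed / (speed.max() or 1.0)
        phase = 2 * np.pi * np.cumsum(freq) / sr         # integrate instantaneous frequency
        return amp * np.sin(phase)

    # A reach: the elbow extends from 90 to 170 degrees, then holds (silence).
    trajectory = np.concatenate([np.linspace(90, 170, 150), np.full(50, 170.0)])
    audio = sonify_joint(trajectory)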

Glove-TalkII: A neural network interface which maps gestures to parallel formant speech synthesizer controls
Abstract: Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to 10 control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a...
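
A toy stand-in for the adaptive mapping, assuming a feature vector of glove readings and 10 normalized synthesizer controls. The network sizes are guesses and the weights are untrained; Glove-TalkII's actual networks and training procedure are described in the paper.

    import numpy as np

    class GestureToFormantNet:
        """A small MLP mapping hand-shape features to formant-synthesizer controls."""

        def __init__(self, n_in=16, n_hidden=32, n_out=10, seed=0):
            rng = np.random.default_rng(seed)
            self.w1 = rng.normal(0, 0.1, (n_in, n_hidden))
            self.b1 = np.zeros(n_hidden)
            self.w2 = rng.normal(0, 0.1, (n_hidden, n_out))
            self.b2 = np.zeros(n_out)

        def __call__(self, x):
            h = np.tanh(x @ self.w1 + self.b1)
            return 1 / (1 + np.exp(-(h @ self.w2 + self.b2)))  # controls in [0, 1]

    net = GestureToFormantNet()
    glove_frame = np.random.default_rng(1).uniform(0, 1, 16)  # e.g., joint-flex sensors
    controls = net(glove_frame)  # 10 parameters for a parallel formant synthesizer

In the real system this mapping is learned from a user's own gesture-sound data, so the "vocal tract" adapts to the speaker.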

Glove-Talk: A neural network interface between a data-glove and a speech synthesizer
Abstract: To illustrate the potential of multilayer neural networks for adaptive interfaces, we used a VPL DataGlove connected to a DECtalk speech synthesizer via five neural networks to implement a hand-gesture to speech system. Using minor variations of the standard back-propagation learning procedure, the complex mapping of hand movements to speech is learned using data obtained from a single "speaker" in a simple training phase. With a 203 gesture-to-word vocabulary, the wrong word is produced less...

Virtual Musical Instruments: Accessing the Sound Synthesis Universe as a Performer
Abstract: With current state-of-the-art human movement tracking technology it is possible to represent in real time most of the degrees of freedom of a (part of the) human body. This allows for the design of a virtual musical instrument (VMI), analogous to a physical musical instrument, as a gestural interface that will however provide much greater freedom in the mapping of movement to sound. A musical performer may therefore access the currently unexplored real-time capabilities of sound synthesis...

Listen to your Data: Model-Based Sonification for Data Analysis

Principal Curve Sonification

Sonification of McMC Simulations


