Finite State Machine gesture recognition



Constructing Finite State Machines for Fast Gesture Recognition
Abstract: This paper proposes an approach to 2D gesture recognition that models each gesture as a Finite State Machine (FSM) in spatial-temporal space. The model construction works in a semi-automatic way. The structure of the model is first manually decided based on the observation of the spatial topology of the data. The model is refined iteratively between two stages: data segmentation and model training. Given the continuous training data of a single gesture, we roughly segment the gesture trajectory ...
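The iterative refinement between data segmentation and model training that this abstract describes can be sketched as a k-means-style alternation: segment the trajectory samples by nearest state, then re-estimate each state's center. This is a minimal illustration, not the paper's actual algorithm; the function name, initialization, and data are assumptions.

```python
import numpy as np

def refine_states(trajectory, n_states, n_iters=10):
    """Alternate between segmentation (assign each sample to its
    nearest state) and training (re-estimate each state's center),
    starting from a rough equal-length split of the trajectory."""
    pts = np.asarray(trajectory, dtype=float)
    # Rough initial segmentation: split the trajectory into equal runs.
    centers = np.array([seg.mean(axis=0) for seg in np.array_split(pts, n_states)])
    for _ in range(n_iters):
        # Segmentation stage: nearest-center assignment of every sample.
        dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Training stage: re-fit each state's center from its samples.
        for k in range(n_states):
            if np.any(labels == k):
                centers[k] = pts[labels == k].mean(axis=0)
    return centers

# Illustrative 2D trajectory with two spatial regions.
centers = refine_states([(0, 0), (0.1, 0.1), (10, 10), (10.1, 9.9)], n_states=2)
```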

Gesture Modeling and Recognition Using Finite State Machines
Abstract: This paper proposes a state based approach to gesture learning and recognition. Using spatial clustering and temporal alignment, each gesture is defined to be an ordered sequence of states in spatial-temporal space. The 2D image positions of the centers of the head and both hands of the user are used as features; these are located by a color based tracking method. From training data of a given gesture, we first learn the spatial information without doing data segmentation and alignment, and
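The idea of a gesture as an ordered sequence of states in spatial-temporal space can be illustrated with a minimal FSM that advances whenever a tracked position falls inside the current state's spatial region. This sketch uses circular state regions and invented data; it is not the paper's formulation.

```python
import math

class GestureFSM:
    """A gesture as an ordered sequence of spatial states.
    Each state is a ((x, y), radius) pair; the machine advances
    when an input point falls inside the current state's region."""

    def __init__(self, states):
        self.states = states
        self.current = 0

    def step(self, point):
        """Feed one tracked 2D position; return True once the
        final state has been reached (gesture recognized)."""
        (cx, cy), r = self.states[self.current]
        if math.hypot(point[0] - cx, point[1] - cy) <= r:
            self.current += 1
        return self.current == len(self.states)

# A left-to-right swipe modeled as three states along the x-axis.
swipe = GestureFSM([((0, 0), 1.0), ((5, 0), 1.0), ((10, 0), 1.0)])
trajectory = [(0.2, 0.1), (2.5, 0.3), (5.1, -0.2), (7.8, 0.1), (9.9, 0.0)]
recognized = any(swipe.step(p) for p in trajectory)
```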

Recognizing Hand Gestures
Abstract: This paper presents a method for recognizing human-hand gestures using a model-based approach. A Finite State Machine is used to model four qualitatively distinct phases of a generic gesture. Fingertips are tracked in multiple frames to compute motion trajectories, which are then used for finding the start and stop positions of the gesture. Gestures are represented as a list of vectors and are then matched to stored gesture vector models using table lookup based on vector displacements....
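Representing a gesture as a list of motion vectors matched by table lookup can be sketched by quantizing successive displacements into direction codes and looking the resulting sequence up in a dictionary of stored models. The bin count, model table, and data here are illustrative assumptions, not the paper's values.

```python
import math

def direction_codes(trajectory, n_bins=8):
    """Quantize successive displacements along a fingertip trajectory
    into direction codes 0..n_bins-1, giving a gesture as a vector list."""
    bin_w = 2 * math.pi / n_bins
    codes = []
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        # Center the bins on the axis directions before quantizing.
        codes.append(int(((angle + bin_w / 2) % (2 * math.pi)) / bin_w))
    return codes

# Stored gesture models keyed by code sequence enable table lookup.
models = {(0, 0, 0): "swipe-right", (2, 2, 2): "swipe-up"}
observed = [(0, 0), (1, 0.05), (2, -0.05), (3, 0)]
label = models.get(tuple(direction_codes(observed)), "unknown")
```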

Using Configuration States for the Representation and Recognition of Gesture
Abstract: A state-based technique for the summarization and recognition of gesture is presented. We define a gesture to be a sequence of states in a measurement or configuration space. For a given gesture, these states are used to capture both the repeatability and variability evidenced in a training set of example trajectories. The states are positioned along a prototype of the gesture, and shaped such that they are narrow in the directions in which the ensemble of examples is tightly constrained, and wide in directions in which a great deal of variability is observed. We develop techniques for computing a prototype trajectory of an ensemble of trajectories, for defining configuration states along the prototype, and for recognizing gestures from an unsegmented, continuous stream of sensor data. The approach is illustrated by application to a range of gesture-related sensory data: the two-dimensional movements of a mouse input device, the movement of the hand measured by a magnetic spatial position and orientation sensor, and, lastly, the changing eigenvector projection coefficients computed from an image sequence.
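A state that is narrow along tightly constrained directions and wide along variable ones can be sketched as a normalized-distance membership test with per-axis standard deviations estimated from training examples. This is a simplified diagonal-covariance illustration with invented numbers, not the paper's actual state model.

```python
import numpy as np

def in_state(point, mean, std, threshold=3.0):
    """Membership test for one configuration state: the region is
    narrow along axes with small std (tightly constrained in training)
    and wide along axes with large std (high observed variability)."""
    z = (np.asarray(point, dtype=float) - mean) / std  # per-axis normalized deviation
    return float(np.sum(z * z)) <= threshold ** 2

# Hypothetical state learned from an ensemble of trajectories:
# loosely constrained in x, tightly constrained in y.
mean = np.array([5.0, 0.0])
std = np.array([2.0, 0.3])

inside = in_state([6.0, 0.1], mean, std)   # within the wide-x / narrow-y region
outside = in_state([5.0, 1.5], mean, std)  # too far along the constrained y axis
```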


