<XML><RECORDS><RECORD><REFERENCE_TYPE>3</REFERENCE_TYPE><REFNUM>5795</REFNUM><AUTHORS><AUTHOR>Ju,X.</AUTHOR><AUTHOR>Siebert,J.P.</AUTHOR></AUTHORS><YEAR>2001</YEAR><TITLE>Conformation from generic animatable models to 3D scanned data</TITLE><PLACE_PUBLISHED>Proc. 6th Numérisation 3D/Scanning 2001 Congress, Paris, France, 2001.</PLACE_PUBLISHED><PUBLISHER>N/A</PUBLISHER><PAGES>239-244</PAGES><ISBN>0-88986-310-5</ISBN><LABEL>Ju:2001:5795</LABEL><ABSTRACT>The advent of photogrammetry-based 3D data collection techniques means that the highly accurate 3D surface of a specific person can now be captured in a matter of milliseconds. Existing human animation models provide excellent tools for controlling articulated motion and body surface deformations according to body posture, but modelling a specific individual requires considerable skill and manual intervention. The 3D-MATIC Laboratory has proposed combining 3D images of real people with existing human animation models to achieve highly realistic, individualised animation. What is required is a method of conforming a generic human animation model to fit, or "clone", the 3D geometry scanned from a specific individual, so that any person can be seen walking in the virtual world. Here we present current work carried out in the 3D-MATIC Laboratory. Segmentation is applied to the 3D human image to extract semantic information, which is necessary to identify the segments of the scanned human body corresponding to those of the generic model; establishing these correspondences is essential for surface conformation. After segmentation, a conformation procedure is applied to bring the generic model surfaces into close correspondence with those of the real-world 3D image.</ABSTRACT></RECORD></RECORDS></XML>