In audiovisual speech communication, the lower part of the face (mainly the lips and jaw) actively participates in speech production. Modeling lip motion and deformation well in audiovisual speech synthesis is therefore important for achieving realism and effective communication. In this scope, we propose a technique that animates a human face with realistic lips using a limited number of control points. This is essential for challenged populations such as hard-of-hearing people or new language learners. We have used an articulograph that provides high temporal and spatial precision, allowing the positions of small electromagnetic sensors to be tracked even when occluded, which is often the case when tracking lip movement. In our work, the control point data are first acquired and then fitted to a 3D face model of a human speaker, i.e., each control point is associated with a region of the face by minimizing the distance between the control points and the surface of the face model.
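The fitting step described above can be sketched as a nearest-neighbor assignment: each control point is matched to the closest part of the face surface. The snippet below is a minimal illustration, assuming the surface is approximated by its vertices; the function name, data shapes, and the vertex-based approximation are our own assumptions, not the authors' actual implementation.

```python
import numpy as np

def fit_control_points(control_points, model_vertices):
    """Hypothetical sketch: associate each control point with the nearest
    face-model vertex (a proxy for the nearest surface region) by
    minimizing Euclidean distance. Returns the index of the closest
    vertex and the residual distance for each control point."""
    # Pairwise differences: shape (n_points, n_vertices, 3)
    diffs = control_points[:, None, :] - model_vertices[None, :, :]
    # Euclidean distances: shape (n_points, n_vertices)
    dists = np.linalg.norm(diffs, axis=2)
    nearest = dists.argmin(axis=1)  # closest vertex per control point
    residuals = dists[np.arange(len(control_points)), nearest]
    return nearest, residuals

# Toy example: 2 control points, 3 model vertices
cps = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
verts = np.array([[0.1, 0.0, 0.0], [5.0, 5.0, 5.0], [1.0, 1.0, 0.9]])
idx, res = fit_control_points(cps, verts)
print(idx)  # → [0 2]
```

In practice the distance would be measured to the model's triangulated surface rather than to vertices alone, but the nearest-vertex version conveys the idea of anchoring each sensor to a facial region.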