August 25, 2016 Johannes Wolters

Chris Landreth's JALI: An Animator-Centric Viseme Model for Expressive Lip Synchronization

It would be great to bring Chris Landreth's masterclass "Making Faces" to Germany some day! The masterclass has a new Facebook page, which can be found here! – Chris's own website can be found here!

Chris wrote about the following video:

This video shows recent work from my research at the University of Toronto, where I’m a Distinguished Artist in Residence with the Computer Science Department. I’m lucky enough to be able to research facial behaviour and animation with the best team in this field, anywhere. JALI is (we believe) a breakthrough approach to simulating human speech on a CGI face.

We present a system that, given an input audio soundtrack and speech transcript, automatically generates expressive lip-synchronized facial animation that is amenable to further artistic refinement, and that is comparable with both performance capture and professional animator output. Because of the diversity of ways we produce sound, the mapping from phonemes to visual depictions as visemes is many-valued. We draw from psycholinguistics to capture this variation using two visually distinct anatomical actions: Jaw and Lip, where sound is primarily controlled by jaw articulation and lower-face muscles, respectively. We describe the construction of a transferable template JALI 3D facial rig, built upon the popular facial muscle action unit representation FACS. We show that acoustic properties in a speech signal map naturally to the dynamic degree of jaw and lip in visual speech. We provide an array of compelling animation clips, compare against performance capture and existing procedural animation, and report on a brief user study.
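The core idea above — that a single phoneme maps to many possible visemes, parameterized by two anatomical actions, Jaw and Lip — can be illustrated with a toy sketch. Everything here is an assumption for illustration: the phoneme symbols, the base shape tables, and the action-unit names are invented stand-ins, not the actual JALI rig or its FACS values.

```python
# Toy illustration of the JALI concept: the same phoneme yields
# different visemes depending on two scalars, Jaw and Lip activation.
# Tables and action-unit names below are hypothetical, not from the paper.

# Hypothetical per-phoneme base shapes, split into jaw-driven and
# lip-driven FACS-like action-unit weights.
BASE_VISEMES = {
    "AA": {"jaw": {"JawOpen": 1.0}, "lip": {"LipStretch": 0.2}},
    "M":  {"jaw": {"JawOpen": 0.1}, "lip": {"LipPress": 1.0}},
    "OO": {"jaw": {"JawOpen": 0.5}, "lip": {"LipPucker": 0.9}},
}

def jali_viseme(phoneme: str, jaw: float, lip: float) -> dict:
    """Blend a phoneme's base shape by the (jaw, lip) activation pair.

    jaw and lip in [0, 1]: mumbled speech might sit near (0.2, 0.1),
    while clearly enunciated speech sits near (1.0, 0.8).
    """
    base = BASE_VISEMES[phoneme]
    out = {}
    for au, w in base["jaw"].items():
        out[au] = out.get(au, 0.0) + jaw * w
    for au, w in base["lip"].items():
        out[au] = out.get(au, 0.0) + lip * w
    return out

# One phoneme, two speaking styles -> two visually distinct visemes:
mumbled = jali_viseme("AA", jaw=0.2, lip=0.1)     # {"JawOpen": 0.2, "LipStretch": 0.02}
enunciated = jali_viseme("AA", jaw=1.0, lip=0.8)  # {"JawOpen": 1.0, "LipStretch": 0.16}
```

This mirrors the abstract's claim that acoustic properties of the speech signal (here, hypothetically, loudness or articulation strength) would drive the jaw and lip values, while the phoneme sequence from the transcript selects which base shape is blended.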
