Learning-Based Expression and Head Pose Transfer from Images and Videos


Details
Face and head movements are a powerful mode of nonverbal communication among humans.
Endowing anthropomorphic characters with the ability to produce such actions autonomously is crucial to creating convincing digital twins of real people.
Join us on April 7 as Xiao Zeng of the UCLA Computer Graphics & Vision Lab and Surya Dwarakanath of Cruise Automation present a learning-based approach for transferring facial expressions and head poses from images and videos to a biomechanical, muscle-driven face-head-neck complex.
Xiao and Surya will also introduce a deep-neural-network-based method for learning a representation of expressions through Ekman's Facial Action Coding System (FACS), expressed in terms of the muscle actuators that drive the musculoskeletal face model. The face model is augmented with a neck-head system to animate head movement during facial expression synthesis.
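To give a rough flavor of this kind of pipeline, the minimal sketch below maps FACS action-unit (AU) intensities to muscle-actuator activations with a small neural network. All names and dimensions here (AUToMuscleNet, NUM_MUSCLE_ACTUATORS, the layer sizes) are hypothetical illustrations and are not the presenters' actual architecture.

import torch
import torch.nn as nn

# FACS defines roughly 46 action units; the actuator count below is an
# assumption for illustration, not a detail from the talk.
NUM_ACTION_UNITS = 46
NUM_MUSCLE_ACTUATORS = 72

class AUToMuscleNet(nn.Module):
    """Toy MLP mapping FACS AU intensities to muscle activations.

    An illustrative stand-in for the kind of deep network the talk
    describes, not the presenters' model.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_ACTION_UNITS, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_MUSCLE_ACTUATORS),
            nn.Sigmoid(),  # constrain muscle activations to [0, 1]
        )

    def forward(self, au_intensities: torch.Tensor) -> torch.Tensor:
        return self.net(au_intensities)

if __name__ == "__main__":
    model = AUToMuscleNet()
    # One frame's AU intensities, e.g. estimated from a video frame by an
    # off-the-shelf AU detector (random placeholder input here).
    aus = torch.rand(1, NUM_ACTION_UNITS)
    activations = model(aus)
    print(activations.shape)  # torch.Size([1, NUM_MUSCLE_ACTUATORS])

In a full system, the predicted activations would drive the muscle actuators of the biomechanical face model frame by frame, so that expressions detected in a video are reproduced on the simulated face-head-neck complex.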
Xiao earned his MS in Computer Science from the University of California, Los Angeles, where he continues his Ph.D. research, focusing on facial expression and head pose transfer from videos, the simulation of musculoskeletal human models, and artificial life.
Surya, who also holds a Master's degree in Computer Science from UCLA, has research interests spanning artificial life, computer graphics, and deep learning.
As a software engineer on the Simulation team at Cruise, he uses machine learning and simulation to improve self-driving car technology.
======================================================================
Date: Thursday, April 7, 2022
Time: 7:00 pm - 8:00 pm PT
Venue:
https://us02web.zoom.us/j/88500180936?pwd=TkZRVzIxcGpsUUpPcHhYb3B2alowUT09
See you all at the event!
Producer: Elisa Agor, Chair, Silicon Valley ACM SIGGRAPH
