(Hardware Animation) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models
General Robotics Lab
314 subscribers
12,387 views

Published on May 26, 2021

Hardware animation of the Eva robot. To appear at ICRA 2021.

Project website is at: http://www.cs.columbia.edu/~bchen/aif...

Full overview video: (Overview) Smile Like You Mean It: Dr...
Demo video: (Demos) Smile Like You Mean It: Drivi...
Data collection video: (Data Collection) Smile Like You Mean...

Abstract:
The ability to generate intelligent and generalizable facial expressions is essential for building human-like social robots. At present, progress in this field is hindered by the fact that each facial expression needs to be programmed by humans. In order to adapt robot behavior in real time to the different situations that arise when interacting with human subjects, robots need to be able to train themselves without requiring human labels, as well as make fast action decisions and generalize the acquired knowledge to diverse and new contexts. We address this challenge by designing a physical animatronic robotic face with soft skin and by developing a vision-based self-supervised learning framework for facial mimicry. Our algorithm does not require any knowledge of the robot's kinematic model, camera calibration, or a predefined expression set. By decomposing the learning process into a generative model and an inverse model, our framework can be trained using a single motor babbling dataset. Comprehensive evaluations show that our method enables accurate and diverse face mimicry across diverse human subjects.
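
To give a rough sense of what the generative/inverse decomposition trained on motor babbling data could look like, here is a minimal sketch. The model architectures, landmark representation, actuator count, and names (`generative`, `inverse`, `N_MOTORS`, `N_LANDMARKS`) are all assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: shapes, model sizes, and training details below are
# assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn

N_MOTORS = 12         # assumed number of facial actuators
N_LANDMARKS = 68 * 2  # assumed flattened 2D facial landmarks

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

# Generative (forward) model: motor commands -> predicted robot-face landmarks.
generative = mlp(N_MOTORS, N_LANDMARKS)
# Inverse model: target landmarks -> motor commands expected to reproduce them.
inverse = mlp(N_LANDMARKS, N_MOTORS)

# Self-supervised motor-babbling data: random commands plus the landmarks a camera
# observes on the robot's own face (random tensors stand in for real recordings).
commands = torch.rand(1024, N_MOTORS)
observed_landmarks = torch.rand(1024, N_LANDMARKS)

opt = torch.optim.Adam(
    list(generative.parameters()) + list(inverse.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

for epoch in range(10):
    # Forward mapping: commands -> landmarks actually observed.
    gen_loss = loss_fn(generative(commands), observed_landmarks)
    # Inverse mapping: landmarks -> commands that produced them.
    inv_loss = loss_fn(inverse(observed_landmarks), commands)
    opt.zero_grad()
    (gen_loss + inv_loss).backward()
    opt.step()

# At run time, landmarks extracted from a human face would be fed to the inverse
# model to obtain motor commands for mimicry.
human_landmarks = torch.rand(1, N_LANDMARKS)
mimic_commands = inverse(human_landmarks)
```

The key point this sketch tries to convey is that both models can be fit from the same babbling dataset without any human labels: the robot's own camera observations provide the supervision signal.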

