Our research lies at the intersection of computer vision, computer graphics, and machine learning: we develop computational algorithms to efficiently digitize people and train machines to perceive people from visual data.
Current computer vision algorithms can detect people in images or estimate 2D keypoints with remarkable accuracy. However, people are far more complex: we effortlessly sense other people's emotional state from their facial expressions and body movements, and we infer their preferences from the clothing they wear. Our goal is to build virtual humans that look, move, and eventually think like real ones.
3 papers accepted at 3DV 2018!
PDFs and videos coming next week!
-Neural Body Fitting: Unifying Deep Learning and Model Based Human Pose and Shape Estimation
-Detailed Human Avatars from Monocular Video
-Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB
1 paper at ECCV'18
Recovering Accurate 3D Human Pose in The Wild Using IMUs and a Moving Camera
We will release the first (and most challenging) dataset of natural scenes featuring multiple people with accurate 3D pose and shape annotations! The RGB videos include scenes such as taking the bus, walking in the city, shopping, and sports. Stay tuned!
2 papers at CVPR: one oral and one spotlight!
March 1st 2018
Video Based Reconstruction of 3D People Models
DoubleFusion: Real-time Capture of Human Performance with Inner Body Shape from a Depth Sensor