Our research is at the intersection of computer vision, computer graphics and machine learning: we develop computational algorithms to efficiently digitize people and train machines to perceive people from visual data.
Current computer vision algorithms can detect people in images or estimate 2D keypoints with remarkable accuracy. However, people are far more complex: we effortlessly sense other people's emotional states from facial expressions and body movements, or make guesses about people's preferences based on the clothing they wear. Our goal is to build virtual humans that look, move and eventually think like real ones.
3 papers accepted to CVPR 2019!
February 2019. Paper PDFs, videos and code coming soon!
1) Learning to Reconstruct People in Clothing from a Single RGB Camera
2) SimulCap: Single-View Human Performance Capture with Cloth Simulation
3) In the Wild Human Pose Estimation using Explicit 2D Features and Intermediate 3D Representations
Congratulations to all co-authors!
Emmy Noether starting grant!
Gerard Pons-Moll has been awarded an Emmy Noether grant. The grant, named after the group "Real Virtual Humans" (RVHu), consists of 1.6 million euros to conduct research at the intersection of vision, graphics and learning, with a special focus on analyzing and digitizing humans.
3 papers accepted at 3DV 2018!
1 Paper won the best student paper award!
PDFs and videos available!
-Neural Body Fitting: Unifying Deep Learning and Model Based Human Pose and Shape Estimation
3DV Best Student Paper Award
-Detailed Human Avatars from Monocular Video
-Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB
1 paper at ECCV'18
Recovering Accurate 3D Human Pose in The Wild Using IMUs and a Moving Camera
New dataset with ground truth 3D poses in the wild! Download
We have released the first (and most challenging) dataset of natural scenes with multiple people annotated with accurate 3D pose and shape! The RGB videos include scenes such as taking the bus, walking in the city, shopping, sports, etc.