Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing

Garvita Tiwari¹,², Nikolaos Sarafianos³, Tony Tung³, Gerard Pons-Moll¹,²

¹ University of Tübingen, Germany  ² Max Planck Institute for Informatics, Saarland Informatics Campus, Germany  ³ Facebook Reality Labs, Sausalito, USA

ICCV 2021
We present Neural Generalized Implicit Functions (Neural-GIF) to animate people in clothing as a function of body pose. Neural-GIF learns directly from scans, models complex clothing, and produces pose-dependent details for realistic animation. For four different characters, we show the query input pose on the left (illustrated with a skeleton) and our output animation on the right.

Abstract

We present Neural Generalized Implicit Functions (Neural-GIF) to animate people in clothing as a function of body pose. Given a sequence of scans of a subject in various poses, we learn to animate the character for new poses. Existing methods have relied on template-based representations of the human body (or clothing). However, such models usually have a fixed and limited resolution, require difficult data pre-processing steps, and cannot be used with complex clothing. We draw inspiration from template-based methods, which factorize motion into articulation and non-rigid deformation, but generalize this concept to implicit shape learning to obtain a more flexible model. We learn to map every point in space to a canonical space, where a learned deformation field is applied to model non-rigid effects, before evaluating the signed distance field. Our formulation allows the learning of complex and non-rigid deformations of clothing and soft tissue, without computing a template registration as is common in current approaches. Neural-GIF can be trained on raw 3D scans and reconstructs detailed, complex surface geometry and deformations. Moreover, the model generalizes to new poses. We evaluate our method on a variety of characters from different public datasets, in diverse clothing styles, and show significant improvements over baseline methods, both quantitatively and qualitatively. We also extend our model to a multi-shape setting. To stimulate further research, we will make the model, code and data publicly available.



Figure: Re-animating people and clothing. We show results of Neural-GIF on a clothed sequence, soft-tissue dynamics, and separate clothing items.
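Method sketch

The abstract describes the pipeline at a high level: each query point is mapped to a canonical space, a learned pose-conditioned deformation field adds a non-rigid displacement, and the signed distance field is evaluated at the resulting point. The PyTorch snippet below is a minimal, hypothetical sketch of that flow, not the released implementation; the module names (canonical_map, deform_field, sdf), the 72-dimensional SMPL-style pose vector, and all network sizes are assumptions made purely for illustration.

# Minimal sketch (not the authors' code) of the pipeline described in the abstract:
# a query point is mapped to canonical space, a pose-conditioned displacement models
# non-rigid deformation, and a signed distance field is evaluated at the deformed point.
# Network widths, depths, and the pose encoding are illustrative assumptions.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256, layers=4):
    """Small fully connected network with softplus activations."""
    mods, d = [], in_dim
    for _ in range(layers):
        mods += [nn.Linear(d, hidden), nn.Softplus(beta=100)]
        d = hidden
    mods.append(nn.Linear(d, out_dim))
    return nn.Sequential(*mods)


class NeuralGIFSketch(nn.Module):
    def __init__(self, pose_dim=72):  # e.g. SMPL-style axis-angle pose (assumption)
        super().__init__()
        self.canonical_map = mlp(3 + pose_dim, 3)  # query point -> canonical point
        self.deform_field = mlp(3 + pose_dim, 3)   # pose-dependent displacement
        self.sdf = mlp(3, 1)                       # signed distance in canonical space

    def forward(self, x, pose):
        """x: (B, N, 3) query points, pose: (B, pose_dim) body pose."""
        p = pose[:, None, :].expand(-1, x.shape[1], -1)
        x_can = self.canonical_map(torch.cat([x, p], dim=-1))   # unpose the point
        delta = self.deform_field(torch.cat([x_can, p], dim=-1))  # non-rigid offset
        return self.sdf(x_can + delta)                           # signed distance


if __name__ == "__main__":
    model = NeuralGIFSketch()
    pts = torch.rand(2, 1024, 3)   # random query points
    pose = torch.zeros(2, 72)      # rest pose
    print(model(pts, pose).shape)  # torch.Size([2, 1024, 1])

In the paper, such a model is trained directly on raw 3D scans; the snippet only illustrates how the canonical mapping and the deformation field compose before the signed distance is queried.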

Citation

@inproceedings{tiwari21neuralgif,
    title = {Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing},
    author = {Tiwari, Garvita and Sarafianos, Nikolaos and Tung, Tony and Pons-Moll, Gerard},
    booktitle = {International Conference on Computer Vision ({ICCV})},
    month = {October},
    year = {2021},
}

Acknowledgments



This work is supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 409792180 (Emmy Noether Programme, project: Real Virtual Humans) and a Facebook research award. Gerard Pons-Moll is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645. The project was made possible by funding from the Carl Zeiss Foundation.