Multi-Garment Net: Learning to Dress 3D People from Images

MGN pre-trained models and registered garments

Bharat Lal Bhatnagar, Garvita Tiwari, Christian Theobalt and Gerard Pons-Moll

Max Planck Institute for Informatics, Saarland Informatics Campus, Germany

ICCV 2019, Seoul, Korea
{arXiv} {PDF} {Supplementary}

Abstract

We present Multi-Garment Network (MGN), a method to predict body shape and clothing, layered on top of the SMPL model, from a few frames (1-8) of a video. Several experiments demonstrate that this representation allows a higher level of control than single mesh or voxel representations of shape. Our model can predict garment geometry, relate it to the body shape, and transfer it to new body shapes and poses. To train MGN, we leverage a digital wardrobe containing 712 digital garments in correspondence, obtained with a novel method that registers a set of clothing templates to a dataset of real 3D scans of people in different clothing and poses. Garments from the digital wardrobe, or predicted by MGN, can be used to dress any body shape in arbitrary poses.
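To give an intuition for the layered representation, the sketch below (not the authors' code; a toy NumPy stand-in with made-up array sizes and a hypothetical `dress` helper) illustrates the core idea: a garment is stored as per-vertex offsets from the body surface it was registered to, so dressing a new body amounts to adding those offsets to the corresponding vertices of the new body.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: SMPL has 6890 body vertices; we use 100 here for brevity.
n_body = 100
body_a = rng.normal(size=(n_body, 3))  # body the garment was registered to
body_b = rng.normal(size=(n_body, 3))  # new body shape to dress

# Hypothetical garment: the subset of body vertices it covers, plus its
# per-vertex displacements from the body surface (the "layer").
garment_idx = np.arange(20, 60)
displacements = 0.01 * rng.normal(size=(len(garment_idx), 3))

def dress(body, idx, disp):
    """Place the garment on `body` by offsetting the covered vertices."""
    return body[idx] + disp

garment_on_a = dress(body_a, garment_idx, displacements)
# Transfer to a new shape: same offsets, different underlying body.
garment_on_b = dress(body_b, garment_idx, displacements)
```

Because the garment is relative to the body rather than a single fused mesh, the same displacement layer re-dresses any compatible body, which is the control the abstract refers to.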



Citation

@inproceedings{bhatnagar2019mgn,
    title = {Multi-Garment Net: Learning to Dress 3D People from Images},
    author = {Bhatnagar, Bharat Lal and Tiwari, Garvita and Theobalt, Christian and Pons-Moll, Gerard},
    booktitle = {{IEEE} International Conference on Computer Vision ({ICCV})},
    month = {oct},
    organization = {{IEEE}},
    year = {2019},
}