BEHAVE: Dataset and Method for Tracking Human Object Interactions

BEHAVE dataset and pre-trained models

Bharat Lal Bhatnagar1,2, Xianghui Xie2, Ilya Petrov1, Cristian Sminchisescu3, Christian Theobalt2 and Gerard Pons-Moll1,2

1University of Tübingen, Germany
2Max Planck Institute for Informatics, Saarland Informatics Campus, Germany
3Google Research

CVPR 2022

We present the BEHAVE dataset, the first full-body human-object interaction dataset with multi-view RGBD frames, corresponding 3D SMPL and object fits, and annotated contacts between them. We use this data to learn a model that jointly tracks humans and objects in natural environments with an easy-to-use, portable multi-camera setup.

Video





Description

The BEHAVE* dataset is, to date, the largest dataset of human-object interactions in natural environments with 3D human, object and contact annotations.

The dataset includes:

  • multi-view RGBD frames captured with a portable multi-Kinect setup,
  • 3D SMPL fits for the human in every frame,
  • 3D fits of the interacting object, and
  • annotated contacts between the human and the object.

A minimal example of loading a single frame is sketched after this list.

* formerly known as the HOI3D dataset.
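As a rough illustration of how the per-frame annotations could be read, here is a short Python sketch. The directory layout, file names and dictionary keys below are assumptions made for illustration only; please consult the dataset documentation for the actual on-disk format.

```python
# Minimal sketch of reading one BEHAVE frame.
# NOTE: the sequence/frame paths, file names and key names are hypothetical.
import pickle
from pathlib import Path

import numpy as np
from PIL import Image

frame_dir = Path("Date01_Sub01_backpack/t0003.000")  # hypothetical frame folder

# Multi-view RGB-D: one colour + depth image per Kinect (4 cameras assumed).
rgb = [np.asarray(Image.open(frame_dir / f"k{k}.color.jpg")) for k in range(4)]
depth = [np.asarray(Image.open(frame_dir / f"k{k}.depth.png")) for k in range(4)]

# SMPL fit: pose, shape and translation parameters (key names assumed).
with open(frame_dir / "person" / "person_fit.pkl", "rb") as f:
    smpl_fit = pickle.load(f)
pose, betas, trans = smpl_fit["pose"], smpl_fit["betas"], smpl_fit["trans"]

# Object fit: rotation and translation applied to the object template mesh.
with open(frame_dir / "backpack" / "backpack_fit.pkl", "rb") as f:
    obj_fit = pickle.load(f)
obj_rot, obj_trans = obj_fit["angle"], obj_fit["trans"]

print("SMPL pose:", np.asarray(pose).shape, "object translation:", np.asarray(obj_trans))
```

The SMPL parameters can then be passed to an SMPL body model implementation (e.g. the smplx package) to recover the posed human mesh, and the object rotation/translation can be applied to the corresponding object template.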

Download

For further information about the BEHAVE dataset and for download links, please click here

Updates

  • February 05, 2024: Packed data for the second RHOBIN challenge are released. See the competition website for download details.
  • January 10, 2023: Packed training data and test inputs for the BEHAVE challenges are released. Download here.
  • October 08, 2022: After comprehensive processing, the first version of our registrations at 30 fps is released! Download here.
  • August 06, 2022: Raw videos are released. Download here.




Citation

    If you use this dataset, you agree to cite the corresponding CVPR'22 paper:
    
        @inproceedings{bhatnagar22behave,
          title = {BEHAVE: Dataset and Method for Tracking Human Object Interactions},
          author = {Bhatnagar, Bharat Lal and Xie, Xianghui and Petrov, Ilya and Sminchisescu, Cristian and Theobalt, Christian and Pons-Moll, Gerard},
          booktitle = {{IEEE} Conference on Computer Vision and Pattern Recognition (CVPR)},
          month = {jun},
          organization = {{IEEE}},
          year = {2022},
        }

    Acknowledgments

    [Institution logos: Carl-Zeiss-Stiftung, Tübingen AI Center, University of Tübingen, MPII Saarbrücken]


    Special thanks to the RVH team members and reviewers; their feedback helped improve the manuscript. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 409792180 (Emmy Noether Programme, project: Real Virtual Humans), the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A, and ERC Consolidator Grant 4DRepLy (770784). Gerard Pons-Moll is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645. The project was made possible by funding from the Carl Zeiss Foundation.