3D Poses in the Wild Dataset
Recovering Accurate 3D Human Pose in The Wild Using IMUs and a Moving Camera
Timo von Marcard¹, Roberto Henschel¹, Michael Black², Bodo Rosenhahn¹ and Gerard Pons-Moll³
¹TNT, Leibniz University of Hannover
²Max Planck Institute for Intelligent Systems
³Max Planck Institute for Informatics, Saarland Informatics Campus
ECCV 2018, Munich, Germany
Evaluation
This dataset may be used for different tasks. If you use the dataset to evaluate human pose and shape estimation, please look at the protocols and metrics below.
Protocols
The file sequenceFiles.zip contains the sequences separated into three folders: train/, validation/, and test/. To make different methods comparable, we define the following evaluation protocols (a sketch for loading the splits follows the list).
- All-Test-mode: The entire dataset is used for testing (including the test/, train/, and validation/ folders).
- Train-Test-mode: Methods may train on train/, validate on validation/, and report results on test/.
- Validation-mode: The sequences in validation/ may be used for validation (NOT for training). The sequences in train/ and test/ are used for testing.
- All-Train-mode: The entire dataset may also be used exclusively for training if methods are tested on other data.
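For reference, here is a minimal sketch of iterating over one split, assuming sequenceFiles.zip has been extracted to a local folder and that each sequence file is a Python 2 pickle (hence the latin1 decoding below); the exact keys inside each file are documented by the example scripts and are not assumed here.

import glob
import os
import pickle

def load_split(root, split):
    # Yield (sequence_name, data) for every sequence file in one split folder.
    # `root` is the extracted sequenceFiles/ directory; `split` is one of the
    # three protocol folders.
    assert split in ('train', 'validation', 'test')
    for path in sorted(glob.glob(os.path.join(root, split, '*.pkl'))):
        with open(path, 'rb') as f:
            # The sequence files are Python 2 pickles, so under Python 3 a
            # latin1 decoding is needed (an assumption; adjust if your copy
            # of the dataset loads without it).
            data = pickle.load(f, encoding='latin1')
        yield os.path.splitext(os.path.basename(path))[0], data

# All-Test-mode, for example, iterates over all three folders:
# for split in ('train', 'validation', 'test'):
#     for name, seq in load_split('sequenceFiles', split):
#         ...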
Metrics
We strongly encourage you to report some or all of the following metrics (a sketch computing them follows the list):
- Joint error metric: mean Euclidean distance between the predicted joints and the joints of SMPL.
- Mesh error metric: mean Euclidean distance between the predicted 3D mesh and the SMPL ground-truth mesh (with and/or without clothing). This metric should be used by methods that estimate shape as well as pose.
- Mesh error metric unposed: mean Euclidean distance between the predicted 3D mesh and the SMPL ground-truth mesh in the zero-pose space, that is, with the pose of SMPL set to zero (with and/or without clothing). This metric evaluates shape accuracy independently of pose accuracy.
- Orientation error metric: mean geodesic distance between predicted part rotations and ground-truth part rotations. See von Marcard et al. 2017, 2018.
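Below is a minimal sketch of these metrics in Python, assuming predictions and ground truth are available as NumPy arrays; the array shapes and function names are illustrative and not part of any official evaluation code.

import numpy as np

def joint_error(pred_joints, gt_joints):
    # Mean Euclidean distance between predicted and ground-truth 3D joints.
    # pred_joints, gt_joints: (N, 3) arrays of joint positions.
    return np.linalg.norm(pred_joints - gt_joints, axis=-1).mean()

def mesh_error(pred_verts, gt_verts):
    # Mean Euclidean distance between predicted and ground-truth vertices;
    # both meshes must share the SMPL topology. For the unposed variant,
    # pass the vertices of both meshes posed with the zero pose.
    # pred_verts, gt_verts: (V, 3) arrays of vertex positions.
    return np.linalg.norm(pred_verts - gt_verts, axis=-1).mean()

def orientation_error(pred_rots, gt_rots):
    # Mean geodesic distance (in radians) between predicted and ground-truth
    # part rotations. pred_rots, gt_rots: (K, 3, 3) rotation matrices.
    rel = np.matmul(np.transpose(pred_rots, (0, 2, 1)), gt_rots)
    trace = np.trace(rel, axis1=1, axis2=2)
    # Geodesic distance on SO(3): arccos((trace(R) - 1) / 2), clipped for
    # numerical safety.
    cos = np.clip((trace - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos).mean()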
Updates:
- 14-11-2018: The data splits (train/validation/test) and protocols have been defined. The file sequenceFiles.zip contains the sequences of each split in a separate folder.
- 13-11-2018: We will define which sequences can be used for validation today.
- 1-11-2018: If your method needs to be fine-tuned on 3DPW to work, we recommend splitting every sequence into two halves with the same number of frames, using the first half for training and the second half for testing (see the sketch below).
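A minimal sketch of that per-sequence split, assuming frames are addressed by integer index; the helper name is ours, not part of the dataset tools.

def half_split(num_frames):
    # First half of the frames for training, second half for testing.
    half = num_frames // 2
    return list(range(half)), list(range(half, num_frames))

# Example: for a 1000-frame sequence, frames 0-499 train, 500-999 test.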
Note: we use the term "ground truth" to refer to our reference poses whose accuracy has been validated in the paper.
Download
To download, you must first read and agree to the license terms.
Requirements
To run the example scripts, you will need the following:
Citation
If you use this dataset, you agree to cite the corresponding ECCV'18 paper:

@inproceedings{vonMarcard2018,
  title = {Recovering Accurate 3D Human Pose in The Wild Using IMUs and a Moving Camera},
  author = {von Marcard, Timo and Henschel, Roberto and Black, Michael and Rosenhahn, Bodo and Pons-Moll, Gerard},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2018},
  month = {sep}
}