FORTH Occluded Articulated Human Body Dataset

This site aims to make publicly available the dataset that was collected, annotated and used for the quantitative evaluation of our methodology for articulated human body pose extraction and tracking under occlusions.

Dealing with occlusions, whether self-occlusions, occlusions between different users, or occlusions between users and the environment, is one of the most challenging tasks in the human body tracking field. However, despite the existence of very accurate and widely used single-person ground-truth datasets, to the best of our knowledge there is currently no ground-truth dataset with multiple persons that features inter-person occlusions. The F-BODY dataset is therefore launched to provide a concrete benchmark for articulated body tracking approaches in the presence of occlusions.

The current version of the F-BODY dataset consists of six sequences: one sequence features a single user, four sequences contain two users, and one sequence involves three users. In all cases, a single user is equipped with prominent visual markers and is tracked to provide the ground truth of the underlying joints, while the other user(s), if any, play a "dummy" role, that of the occluder, and are not tracked. As explained in the Setup section, the camera setup consists of two RGB-D cameras, so that every marker is visible at all time instances.

In the Dataset section, all sequences can be downloaded either as split frames (RGB, Depth) or in .ONI format. Additionally, for each sequence, the corresponding ground truth information and the pose estimates of our methodology, namely the Top View Reprojection (TVR), are also provided.
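When working with the split-frames download, the RGB and depth images of each sequence need to be matched frame by frame. The helper below is a minimal sketch of such pairing; the filename pattern (`rgb_0001.png` / `depth_0001.png`) is an assumption for illustration and should be adjusted to the actual naming scheme of the downloaded sequences.

```python
import os
import re

def pair_frames(frame_dir):
    """Pair RGB and depth frames of a sequence by frame index.

    Assumes hypothetical filenames of the form 'rgb_<index>.png' and
    'depth_<index>.png'; adapt the regular expression to the naming
    convention used in the downloaded sequence.
    """
    rgb, depth = {}, {}
    for name in sorted(os.listdir(frame_dir)):
        m = re.match(r"(rgb|depth)_(\d+)\.png$", name)
        if not m:
            continue
        kind, idx = m.group(1), int(m.group(2))
        (rgb if kind == "rgb" else depth)[idx] = os.path.join(frame_dir, name)
    # Keep only frame indices present in both modalities.
    return [(rgb[i], depth[i]) for i in sorted(rgb.keys() & depth.keys())]
```

Each returned tuple holds the paths of one synchronized (RGB, Depth) pair, which can then be loaded with any image library (e.g. OpenCV) for evaluation against the provided ground truth.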


Citation

If you make use of the provided data, please cite the following article:

M. Sigalas, M. Pateraki, and P. Trahanias, “Full-body pose tracking - the Top View Reprojection approach”, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. PP, no. 99, 2015. [doi] [bib]

Contributions

The main contributors to this project are:

who are with the Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH) and the Department of Computer Science, University of Crete, Heraklion, Crete, Greece.


Acknowledgments

This work has been partially supported by the EU Information Society Technologies research project James (FP7-045388).