----------------------------------
Konstantinos Papoutsakis
----------------------------------
Computer Vision & Robotics Lab
Institute of Computer Science (ICS)
Foundation for Research and Technology (FORTH)
Crete, Greece
www.ics.forth.gr/~papoutsa
www.ics.forth.gr/cvrl/evaco
----------------------------------
Last edited: 10 March 2017
----------------------------------

----------------------------------
General info
----------------------------------
The provided dataset is generated from the action clips of the Berkeley Multimodal Human Action Database (MHAD).
http://tele-immersion.citris-uc.org/berkeley_mhad/

- For samples pair_00 - pair_50: each sample contains 1 common action.
  Each sequence of a pair consists of 3 MHAD actions (clips), performed by different subjects, concatenated into a long sequence.
  Each action of the original MHAD dataset appears 5 times as the common action (performed by different subjects).

- For samples pair_51 - pair_101: each sequence of a pair consists of 3 to 6 MHAD actions (clips) concatenated into a long sequence.
  Per sequence, the chosen actions are performed by the same subject, which is different from the subject selected to generate its paired sequence.
  - For samples pair_51 - pair_67: each sample contains 4 common actions.
  - For samples pair_68 - pair_84: each sample contains 3 common actions.
  - For samples pair_85 - pair_101: each sample contains 2 common actions.

Frames of the original MHAD dataset can be downloaded from http://tele-immersion.citris-uc.org/berkeley_mhad/

--------------------------------------------------------------------------------------------------------------------------------------------------------
--> MHAD101s_data folder

Each pair_xx.mat file contains:

actions_a, actions_b:                  [1 x Na], [1 x Nb] action ids for sequences a, b of pair No. xx
labels_a, labels_b:                    [1 x Na], [1 x Nb] cell arrays of action labels
mhadclip_fnames_a, mhadclip_fnames_b:  [1 x Na], [1 x Nb] cell arrays containing the names of the respective .bvh and video files of the original MHAD dataset used to generate the pair of sequences
segments_a, segments_b:                [Na x 2], [Nb x 2] arrays containing the ground-truth limits (start/end frames) of the actions
skel_data_a, skel_data_b:              [64 x La], [64 x Lb] arrays containing the skeleton-based data (a 64-d feature vector per frame)
subject_a, subject_b:                  ids of the subjects performing the actions in each sequence
common_segments:                       [C x 4] array for C common actions. Each row contains the boundaries of the common segments in the pair of sequences as: [start-frame-a, end-frame-a, start-frame-b, end-frame-b]
--------------------------------------------------------------------------------------------------------------------------------------------------------
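
--> Example: reading a pair_xx.mat file

A minimal sketch (Python + SciPy) of how one pair could be loaded and the common-action
segments extracted, using the field names listed above. The file path, the assumption of
1-based (MATLAB-style) inclusive frame indices, and the variable names are illustrative,
not part of the dataset.

import numpy as np
from scipy.io import loadmat

# Load one pair; folder/file names are assumed to follow the layout described above.
data = loadmat("MHAD101s_data/pair_00.mat", squeeze_me=True)

skel_a = data["skel_data_a"]   # [64 x La] one 64-d skeletal feature vector per frame
skel_b = data["skel_data_b"]   # [64 x Lb]
labels_a = data["labels_a"]    # action labels of sequence a (cell array -> object array)
labels_b = data["labels_b"]

# Each row of common_segments is [start-frame-a, end-frame-a, start-frame-b, end-frame-b].
common = np.atleast_2d(data["common_segments"]).astype(int)

# Assuming 1-based, inclusive frame indices, cut the common action out of both sequences.
for start_a, end_a, start_b, end_b in common:
    clip_a = skel_a[:, start_a - 1:end_a]   # 64 x (common-action length in sequence a)
    clip_b = skel_b[:, start_b - 1:end_b]   # 64 x (common-action length in sequence b)
    print(labels_a, labels_b, clip_a.shape, clip_b.shape)
--------------------------------------------------------------------------------------------------------------------------------------------------------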