Human action recognition is a challenging problem, especially in the presence of multiple actors in the scene and/or viewpoint variations. In this paper, three modalities, namely, 3-D skeletons, body-part images, and the motion history image (MHI), are integrated into a hybrid deep learning architecture for human action recognition. The three modalities capture the main aspects of an action: body pose, part shape, and body motion. Although the 3-D skeleton modality captures the actor's pose, it lacks information about the shape of the body parts as well as the shape of manipulated objects. This is the reason for including both the body-part images and the MHI as additional modalities. The deployed architecture combines convolutional neural networks (CNNs), long short-term memory (LSTM), and a fine-tuned pre-trained architecture into a hybrid one. It is called MCLP: multi-modal CNN + LSTM + VGG16 pre-trained on ImageNet. The MCLP consists of three sub-models: CL1D (for CNN1D + LSTM), CL2D (for CNN2D + LSTM), and CMHI (CNN2D for MHI), which simultaneously extract the spatial and temporal patterns in the three modalities. The decisions of these three sub-models are fused by a late multiply fusion module, which proved to yield better accuracy than averaging or maximum fusion methods. The proposed combined model and its sub-models have been evaluated both individually and collectively on four public data sets: UTKinect-Action3D, SBU Interaction, Florence 3D Action, and NTU RGB+D. Our recognition rates outperform the state-of-the-art rates on all the evaluated data sets.
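The late multiply fusion described above can be illustrated with a minimal NumPy sketch: each sub-model's class-probability vector is combined by element-wise multiplication and renormalized. The function name and the example probability vectors are hypothetical, for illustration only, and do not come from the paper.

```python
import numpy as np

def multiply_fusion(prob_list):
    """Late multiply fusion: element-wise product of the per-model
    class-probability vectors, renormalized to sum to 1."""
    fused = np.prod(np.stack(prob_list), axis=0)
    return fused / fused.sum()

# Hypothetical softmax outputs of the three sub-models over 4 classes.
p_cl1d = np.array([0.6, 0.2, 0.1, 0.1])   # CL1D: CNN1D + LSTM on 3-D skeletons
p_cl2d = np.array([0.5, 0.3, 0.1, 0.1])   # CL2D: CNN2D + LSTM on body-part images
p_cmhi = np.array([0.4, 0.4, 0.1, 0.1])   # CMHI: CNN2D on the MHI

fused = multiply_fusion([p_cl1d, p_cl2d, p_cmhi])
pred = int(np.argmax(fused))
print(pred)  # class 0 wins: multiplication rewards classes all models agree on
```

Multiplication acts as a soft conjunction: a class scored low by any one sub-model is strongly suppressed in the product, which is one intuition for why it can outperform averaging or maximum fusion.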
ASJC Scopus subject areas
- Computer Science (General)