In recent years, human action recognition has been studied extensively by computer vision researchers. Recent studies have attempted to use two-stream networks that combine appearance and motion features, but most of these approaches focus on clip-level video action recognition. In contrast to traditional methods, which generally use entire images, we propose a new human instance-level video action recognition framework. In this framework, we represent instance-level features using human boxes and keypoints, and these action region features serve as the inputs to the temporal action head network, which makes our framework more discriminative. We also propose novel temporal action head networks consisting of various modules that capture diverse temporal dynamics. In experiments, the proposed models achieve performance comparable to state-of-the-art approaches on two challenging datasets. Furthermore, we evaluate the proposed features and networks to verify their effectiveness. Finally, we analyze the confusion matrix and visualize the recognized actions at the human instance level in scenes containing several people.
Publication status: Published - 2021 Dec 1
Bibliographical note
Funding Information:
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) in 2021 [2020R1A2C3011697, Research on optimizing spatial (2D to 3D)-temporal domain extension based on human visual perception].
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.
All Science Journal Classification (ASJC) codes
- Analytical Chemistry
- Information Systems
- Atomic and Molecular Physics, and Optics
- Electrical and Electronic Engineering