Related Researcher

Baek, Seungryul (백승렬)
UNIST Vision and Learning Lab.

Detailed Information

Learning 3D Skeletal Representation From Transformer for Action Recognition

Author(s)
Cha, Junuk; Saqlain, Muhammad; Kim, Donguk; Lee, Seungeun; Lee, Seongyeong; Baek, Seungryul
Issued Date
2022-06
DOI
10.1109/ACCESS.2022.3185058
URI
https://scholarworks.unist.ac.kr/handle/201301/58881
Citation
IEEE ACCESS, v.10, pp.67541 - 67550
Abstract
Skeleton-based human action recognition has attracted significant interest due to its simplicity and good accuracy. Diverse end-to-end trainable frameworks based on skeletal representations have been proposed to better map those representations to human action classes. However, most skeleton-based approaches rely on skeletons that are heuristically pre-defined by commercial sensors, and it has not been confirmed that sensor-captured skeletons are the best representation of the human body for action recognition; in general, a dedicated representation is required to achieve strong performance on downstream tasks such as action recognition. In this paper, we address this issue by explicitly learning the skeletal representation in the context of the human action recognition task. We first reconstruct 3D meshes of human bodies from RGB videos. We then use a transformer architecture to sample the most informative skeletal representation from the reconstructed 3D meshes, considering the intra- and inter-structural relationships of 3D meshes and sensor-captured skeletons. Experimental results on challenging human action recognition benchmarks (the SYSU and UTD-MHAD datasets) show the superiority of our learned skeletal representation over sensor-captured skeletons for the action recognition task.
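The core idea in the abstract, sampling a skeleton from mesh vertices via attention, can be illustrated with a minimal sketch. This is not the paper's actual architecture: the vertex count, number of joints, the query vectors, and single-head attention over raw coordinates are all illustrative assumptions; the paper's full transformer additionally models intra- and inter-structural relationships between meshes and sensor skeletons.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sample_skeleton(mesh_vertices, joint_queries):
    """Single-head cross-attention sketch (illustrative, not the paper's model):
    each of K learned joint queries attends over the N mesh vertices, and the
    attention-weighted average of vertex positions is the 'sampled' 3D joint."""
    d_k = joint_queries.shape[-1]
    # Keys and values are the raw vertex coordinates in this toy example.
    scores = joint_queries @ mesh_vertices.T / np.sqrt(d_k)  # (K, N)
    attn = softmax(scores, axis=-1)                          # each row sums to 1
    return attn @ mesh_vertices                              # (K, 3) joint positions

rng = np.random.default_rng(0)
vertices = rng.normal(size=(6890, 3))  # 6890 vertices, as in a SMPL body mesh
queries = rng.normal(size=(24, 3))     # 24 hypothetical learned joint queries
joints = sample_skeleton(vertices, queries)
print(joints.shape)  # (24, 3)
```

Because each joint is a convex combination of vertex positions, the sampled skeleton always lies within the mesh's convex hull; in the paper, the queries would be learned end-to-end so that the sampled joints are the most informative for the downstream action classifier.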
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
ISSN
2169-3536
Keyword (Author)
Three-dimensional displays; Skeleton; Transformers; Task analysis; Image reconstruction; Videos; Training; 3D representation; action recognition; human mesh; transformer

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.