Related Researcher

Baek, Seungryul (백승렬)
UNIST VISION AND LEARNING LAB.

Full metadata record

DC Field Value
dc.citation.number 4
dc.citation.startPage 1464
dc.citation.title SENSORS
dc.citation.volume 22
dc.contributor.author Saqlain, Muhammad
dc.contributor.author Kim, Donguk
dc.contributor.author Cha, Junuk
dc.contributor.author Lee, Changhwa
dc.contributor.author Lee, Seongyeong
dc.contributor.author Baek, Seungryul
dc.date.accessioned 2023-12-21T14:37:42Z
dc.date.available 2023-12-21T14:37:42Z
dc.date.created 2022-04-11
dc.date.issued 2022-02
dc.description.abstract Group activity recognition is a prime research topic in video understanding and has many practical applications, such as crowd behavior monitoring and video surveillance. To understand multi-person/group action, a model should not only identify each individual's action in context but also describe the collective activity. Many previous works adopt skeleton-based approaches with graph convolutional networks for group activity recognition; however, these approaches are limited in scalability, robustness, and interoperability. In this paper, we propose 3DMesh-GAR, a novel approach to 3D human body Mesh-based Group Activity Recognition, which relies on a body center heatmap, a camera map, and a mesh parameter map instead of the complex and noisy 3D skeleton of each person in the input frames. We adopt a 3D mesh creation method that is conceptually simple, single-stage, and bounding-box free, and that handles highly occluded and multi-person scenes without any additional computational cost. We evaluate 3DMesh-GAR on a standard group activity dataset, the Collective Activity Dataset, and achieve state-of-the-art performance for group activity recognition.
dc.identifier.bibliographicCitation SENSORS, v.22, no.4, pp.1464
dc.identifier.doi 10.3390/s22041464
dc.identifier.issn 1424-8220
dc.identifier.scopusid 2-s2.0-85124486058
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/58151
dc.identifier.url https://www.mdpi.com/1424-8220/22/4/1464
dc.identifier.wosid 000771878700001
dc.language English
dc.publisher MDPI
dc.title 3DMesh-GAR: 3D Human Body Mesh-Based Method for Group Activity Recognition
dc.type Article
dc.description.isOpenAccess TRUE
dc.relation.journalWebOfScienceCategory Chemistry, Analytical; Engineering, Electrical & Electronic; Instruments & Instrumentation
dc.relation.journalResearchArea Chemistry; Engineering; Instruments & Instrumentation
dc.type.docType Article
dc.description.journalRegisteredClass scie
dc.description.journalRegisteredClass scopus
dc.subject.keywordAuthor 3D human activity recognition
dc.subject.keywordAuthor human body mesh estimation
dc.subject.keywordAuthor feature extraction
dc.subject.keywordAuthor deep learning
dc.subject.keywordAuthor video understanding
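
The abstract describes a pipeline in which three per-frame maps (a body-center heatmap, a camera map, and a mesh-parameter map) replace per-person 3D skeletons, and the resulting mesh features are pooled to classify the group activity. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: the module names, the 85-dimensional mesh-parameter size (an illustrative SMPL-style pose+shape+camera layout), the top-k person readout (k=12), and the five activity classes (matching the Collective Activity Dataset) are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class MeshMaps(nn.Module):
    """Single-stage, bounding-box-free head: from a shared backbone feature
    map, predict the three maps named in the abstract."""

    def __init__(self, in_ch: int = 256, mesh_dim: int = 85):
        super().__init__()
        self.center = nn.Conv2d(in_ch, 1, kernel_size=1)          # body-center heatmap
        self.camera = nn.Conv2d(in_ch, 3, kernel_size=1)          # camera map
        self.params = nn.Conv2d(in_ch, mesh_dim, kernel_size=1)   # mesh-parameter map

    def forward(self, feat: torch.Tensor):
        return torch.sigmoid(self.center(feat)), self.camera(feat), self.params(feat)


def gather_person_params(center: torch.Tensor, params: torch.Tensor, k: int = 12):
    """Read mesh parameters at the k strongest body-center peaks.

    center: (B, 1, H, W); params: (B, D, H, W) -> returns (B, k, D)."""
    b, d, _, _ = params.shape
    scores = center.flatten(2).squeeze(1)            # (B, H*W)
    idx = scores.topk(k, dim=1).indices              # (B, k) peak locations
    flat = params.flatten(2)                         # (B, D, H*W)
    idx = idx.unsqueeze(1).expand(b, d, k)           # broadcast over channels
    return flat.gather(2, idx).transpose(1, 2)       # (B, k, D)


class GroupActivityClassifier(nn.Module):
    """Pool per-person mesh features order-invariantly, then classify
    the collective activity of the scene."""

    def __init__(self, mesh_dim: int = 85, hidden: int = 128, n_classes: int = 5):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(mesh_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, person_params: torch.Tensor):  # (B, K, D)
        pooled = person_params.max(dim=1).values     # max over people
        return self.mlp(pooled)                      # (B, n_classes) logits


# Usage on a dummy backbone feature map (batch of 2, 256 channels, 64x64):
feat = torch.randn(2, 256, 64, 64)
heatmap, camera, mesh_params = MeshMaps()(feat)
people = gather_person_params(heatmap, mesh_params)  # (2, 12, 85)
logits = GroupActivityClassifier()(people)           # (2, 5) group-activity scores
```

The max pooling over people is a deliberately simple, order-invariant stand-in for whatever relational or temporal reasoning the paper actually uses to aggregate individual actions into a collective label.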

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.