Related Researcher

Baek, Seungryul (백승렬)
UNIST VISION AND LEARNING LAB.

Full metadata record

DC Field Value Language
dc.citation.conferencePlace Salt Lake City, US -
dc.citation.title IEEE Conference on Computer Vision and Pattern Recognition -
dc.contributor.author Garcia-Hernando, Guillermo -
dc.contributor.author Yuan, Shanxin -
dc.contributor.author Baek, Seungryul -
dc.contributor.author Kim, Tae-Kyun -
dc.date.accessioned 2023-12-19T15:47:44Z -
dc.date.available 2023-12-19T15:47:44Z -
dc.date.created 2020-04-21 -
dc.date.issued 2018-06-19 -
dc.description.abstract In this work, we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprising more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baseline/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations is measured, and different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views, and how this affects action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to the 3D hand pose estimation, 6D object pose, and robotics communities, as well as the action recognition community. -
dc.identifier.bibliographicCitation IEEE Conference on Computer Vision and Pattern Recognition -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/32663 -
dc.identifier.url https://arxiv.org/abs/1704.02463 -
dc.language English -
dc.publisher Institute of Electrical and Electronics Engineers Inc. -
dc.title First-person hand action benchmark with RGB-D videos and 3D hand pose annotations -
dc.type Conference Paper -
dc.date.conferenceDate 2018-06-19 -
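
The abstract above describes each frame's hand annotation as the 3D locations of the 21 joints of a hand model. As an illustration only, the sketch below shows one plausible in-memory representation and parser for such annotations; the assumed file layout (one frame per line: a frame index followed by 21 x/y/z triples) and the names HandPoseFrame and parse_pose_file are hypothetical, not the benchmark's documented format.

# Illustrative sketch only: a plausible container/parser for 21-joint 3D hand
# pose annotations as described in the abstract. The one-frame-per-line layout
# assumed here is hypothetical, not the dataset's documented format.
from dataclasses import dataclass
from typing import List, Tuple

NUM_JOINTS = 21  # joints of the hand model, per the abstract

@dataclass
class HandPoseFrame:
    frame_id: int
    joints: List[Tuple[float, float, float]]  # 21 (x, y, z) joint locations

def parse_pose_file(path: str) -> List[HandPoseFrame]:
    """Parse a whitespace-separated annotation file into per-frame poses."""
    frames: List[HandPoseFrame] = []
    with open(path) as f:
        for line in f:
            values = line.split()
            if len(values) != 1 + 3 * NUM_JOINTS:
                continue  # skip malformed lines
            frame_id = int(values[0])
            coords = [float(v) for v in values[1:]]
            joints = [(coords[i], coords[i + 1], coords[i + 2])
                      for i in range(0, 3 * NUM_JOINTS, 3)]
            frames.append(HandPoseFrame(frame_id=frame_id, joints=joints))
    return frames

A flat list of 21 (x, y, z) triples per frame maps directly onto the skeleton the abstract describes, and keeps the pose sequence easy to feed into the kind of pose-based action recognition baselines the paper evaluates.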
