File Download

There are no files associated with this item.

Related Researcher

백승렬

Baek, Seungryul
UNIST VISION AND LEARNING LAB.


Detailed Information


Full metadata record

DC Field Value Language
dc.citation.conferencePlace IT -
dc.citation.title European Conference on Computer Vision -
dc.contributor.author Fan, Zicong -
dc.contributor.author Ohkawa, Takehiko -
dc.contributor.author Yang, Linlin -
dc.contributor.author Lin, Nie -
dc.contributor.author Zhou, Zhishan -
dc.contributor.author Zhou, Shihao -
dc.contributor.author Liang, Jiajun -
dc.contributor.author Gao, Zhong -
dc.contributor.author Zhang, Xuanyang -
dc.contributor.author Zhang, Xue -
dc.contributor.author Li, Fei -
dc.contributor.author Zheng, Liu -
dc.contributor.author Lu, Feng -
dc.contributor.author Zeid, Karim -
dc.contributor.author Leibe, Bastian -
dc.contributor.author On, Jeongwan -
dc.contributor.author Baek, Seungryul -
dc.contributor.author Prakash, Aditya -
dc.contributor.author Gupta, Saurabh -
dc.contributor.author He, Kun -
dc.contributor.author Sato, Yoichi -
dc.contributor.author Hilliges, Otmar -
dc.contributor.author Chang, Hyung Jin -
dc.contributor.author Yao, Angela -
dc.date.accessioned 2024-12-27T15:35:07Z -
dc.date.available 2024-12-27T15:35:07Z -
dc.date.created 2024-12-26 -
dc.date.issued 2024-10-04 -
dc.description.abstract We interact with the world with our hands and see it through our own (egocentric) perspective. A holistic 3D understanding of such interactions from egocentric views is important for tasks in robotics, AR/VR, action recognition and motion generation. Accurately reconstructing such interactions in 3D is challenging due to heavy occlusion, viewpoint bias, camera distortion, and motion blur from the head movement. To this end, we designed the HANDS23 challenge based on the AssemblyHands and ARCTIC datasets with carefully designed training and testing splits. Based on the results of the top submitted methods and more recent baselines on the leaderboards, we perform a thorough analysis on 3D hand(-object) reconstruction tasks. Our analysis demonstrates the effectiveness of addressing distortion specific to egocentric cameras, adopting high-capacity transformers to learn complex hand-object interactions, and fusing predictions from different views. Our study further reveals challenging scenarios intractable with state-of-the-art methods, such as fast hand motion, object reconstruction from narrow egocentric views, and close contact between two hands and objects. Our efforts will enrich the community’s knowledge foundation and facilitate future hand studies on egocentric hand-object interactions. -
dc.identifier.bibliographicCitation European Conference on Computer Vision -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/85298 -
dc.language English -
dc.publisher ECVA -
dc.title Benchmarks and Challenges in Pose Estimation for Egocentric Hand Interactions with Objects -
dc.type Conference Paper -
dc.date.conferenceDate 2024-09-29 -


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.