File Download

There are no files associated with this item.

Related Researcher

Baek, Seungryul (백승렬)
UNIST VISION AND LEARNING LAB.


Full metadata record

dc.citation.conferencePlace: UK
dc.citation.endPage: 101
dc.citation.startPage: 85
dc.citation.title: European Conference on Computer Vision
dc.contributor.author: Armagan A.
dc.contributor.author: Garcia-Hernando G.
dc.contributor.author: Baek, Seungryul
dc.contributor.author: Hampali S.
dc.contributor.author: Rad M.
dc.contributor.author: Zhang Z.
dc.contributor.author: Xie S.
dc.contributor.author: Chen M.X.
dc.contributor.author: Zhang B.
dc.contributor.author: Xiong F.
dc.contributor.author: Xiao Y.
dc.contributor.author: Cao Z.
dc.contributor.author: Yuan J.
dc.contributor.author: Ren P.
dc.contributor.author: Huang W.
dc.contributor.author: Sun H.
dc.contributor.author: Hrúz M.
dc.contributor.author: Kanis J.
dc.contributor.author: Krňoul Z.
dc.contributor.author: Wan Q.
dc.contributor.author: Li S.
dc.contributor.author: Yang L.
dc.contributor.author: Lee D.
dc.contributor.author: Yao A.
dc.contributor.author: Zhou W.
dc.contributor.author: Mei S.
dc.contributor.author: Liu Y.
dc.contributor.author: Spurr A.
dc.contributor.author: Iqbal U.
dc.contributor.author: Molchanov P.
dc.contributor.author: Weinzaepfel P.
dc.contributor.author: Brégier R.
dc.contributor.author: Rogez G.
dc.contributor.author: Lepetit V.
dc.contributor.author: Kim T.-K.
dc.date.accessioned: 2024-01-31T22:39:59Z
dc.date.available: 2024-01-31T22:39:59Z
dc.date.created: 2021-01-11
dc.date.issued: 2020-08-23
dc.description.abstract: We study how well different types of approaches generalise in the task of 3D hand pose estimation under single-hand scenarios and hand-object interaction. We show that the accuracy of state-of-the-art methods can drop, and that they fail mostly on poses absent from the training set. Unfortunately, since the space of hand poses is high-dimensional, it is inherently infeasible to cover the whole space densely, despite recent efforts in collecting large-scale training datasets. This sampling problem is even more severe when hands are interacting with objects and/or inputs are RGB rather than depth images, as RGB images also vary with lighting conditions and colours. To address these issues, we designed a public challenge (HANDS’19) to evaluate the abilities of current 3D hand pose estimators (HPEs) to interpolate and extrapolate the poses of a training set. More precisely, HANDS’19 is designed (a) to evaluate the influence of both depth and colour modalities on 3D hand pose estimation, under the presence or absence of objects; (b) to assess the generalisation abilities w.r.t. four main axes: shapes, articulations, viewpoints, and objects; (c) to explore the use of a synthetic hand model to fill the gaps of current datasets. Through the challenge, the overall accuracy has dramatically improved over the baseline, especially on extrapolation tasks, from 27 mm to 13 mm mean joint error. Our analyses highlight the impacts of data pre-processing, ensemble approaches, the use of a parametric 3D hand model (MANO), and different HPE methods/backbones.
dc.identifier.bibliographicCitation: European Conference on Computer Vision, pp. 85-101
dc.identifier.doi: 10.1007/978-3-030-58592-1_6
dc.identifier.scopusid: 2-s2.0-85097407772
dc.identifier.uri: https://scholarworks.unist.ac.kr/handle/201301/78268
dc.publisher: ECCV 2020
dc.title: Measuring Generalisation to Unseen Viewpoints, Articulations, Shapes and Objects for 3D Hand Pose Estimation Under Hand-Object Interaction
dc.type: Conference Paper
dc.date.conferenceDate: 2020-08-23
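
For context on the headline numbers in the abstract (27 mm to 13 mm), the metric is the mean Euclidean distance between predicted and ground-truth 3D joint positions, averaged over joints and frames. Below is a minimal sketch of that computation, assuming 21-joint hand skeletons and coordinates in millimetres; the function name mean_joint_error and the toy data are illustrative, not taken from the challenge's evaluation code.

import numpy as np

def mean_joint_error(pred, gt):
    """Mean per-joint position error in millimetres.

    pred, gt: arrays of shape (N, J, 3) holding predicted and
    ground-truth 3D joint positions (N frames, J joints, xyz in mm).
    """
    assert pred.shape == gt.shape
    # Euclidean distance per joint, then average over joints and frames.
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy usage: 2 frames, 21 hand joints (the common hand-skeleton size).
rng = np.random.default_rng(0)
gt = rng.normal(size=(2, 21, 3)) * 10.0          # fake ground truth, mm
pred = gt + rng.normal(scale=1.0, size=gt.shape)  # fake noisy predictions
print(f"mean joint error: {mean_joint_error(pred, gt):.2f} mm")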

