File Download

There are no files associated with this item.

Related Researcher

백승렬 (Baek, Seungryul)
UNIST VISION AND LEARNING LAB.

Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.endPage 2943 -
dc.citation.startPage 2933 -
dc.citation.title Workshop on Applications of Computer Vision -
dc.contributor.author Lee, Seongyeong -
dc.contributor.author Park, Hansoo -
dc.contributor.author Kim, Dong Uk -
dc.contributor.author Kim, Jihyeon -
dc.contributor.author Boboev, Muhammadjon -
dc.contributor.author Baek, Seungryul -
dc.date.accessioned 2024-01-28T10:05:08Z -
dc.date.available 2024-01-28T10:05:08Z -
dc.date.created 2023-09-26 -
dc.date.issued 2023-01-06 -
dc.description.abstract RGB-based 3D hand pose estimation has been successful for decades thanks to large-scale databases and deep learning. However, hand pose estimation networks do not perform well on hand images whose characteristics differ greatly from the training data, owing to factors such as illumination, camera angles, and diverse backgrounds in the input images. Many existing methods try to address this by supplying additional large-scale unconstrained/target-domain images to augment the data space; however, collecting such large-scale images requires a great deal of labor. In this paper, we present a simple image-free domain generalization approach for the hand pose estimation framework that uses only source-domain data. We manipulate the image features of the hand pose estimation network by adding features derived from text descriptions using the CLIP (Contrastive Language-Image Pre-training) model. The manipulated image features are then exploited to train the hand pose estimation network via a contrastive learning framework. In experiments on the STB and RHD datasets, our algorithm shows improved performance over state-of-the-art domain generalization approaches. -
dc.identifier.bibliographicCitation Workshop on Applications of Computer Vision, pp.2933 - 2943 -
dc.identifier.doi 10.1109/WACV56688.2023.00295 -
dc.identifier.scopusid 2-s2.0-85149008828 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/72428 -
dc.language English -
dc.publisher Institute of Electrical and Electronics Engineers Inc. -
dc.title Image-free Domain Generalization via CLIP for 3D Hand Pose Estimation -
dc.type Conference Paper -
dc.date.conferenceDate 2023-01-03 -
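
The abstract above outlines the core mechanism: image features from the hand pose estimation backbone are shifted using CLIP text-description features, and the shifted features are then used in a contrastive learning objective. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' released code; the mixing weight alpha, the InfoNCE-style loss, and the random tensors standing in for the backbone and CLIP outputs are illustrative assumptions.

# Hypothetical sketch (not the authors' code): mix hand-image features with
# CLIP text-description features, then apply a contrastive (InfoNCE-style)
# loss so the pose network learns features robust to the described domain shifts.
import torch
import torch.nn.functional as F

def manipulate_features(img_feat, text_feat, alpha=0.5):
    """Shift normalized image features toward CLIP text features (alpha is an assumed mixing weight)."""
    img_feat = F.normalize(img_feat, dim=-1)
    text_feat = F.normalize(text_feat, dim=-1)
    return F.normalize(img_feat + alpha * text_feat, dim=-1)

def contrastive_loss(orig_feat, manip_feat, temperature=0.07):
    """InfoNCE: each original feature should match its own manipulated counterpart."""
    orig = F.normalize(orig_feat, dim=-1)
    manip = F.normalize(manip_feat, dim=-1)
    logits = orig @ manip.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(orig.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for backbone and CLIP text outputs.
B, D = 8, 512
img_feat = torch.randn(B, D)    # features from the hand-pose backbone (placeholder)
text_feat = torch.randn(B, D)   # CLIP embeddings of domain text descriptions (placeholder)
loss = contrastive_loss(img_feat, manipulate_features(img_feat, text_feat))
print(loss.item())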

