Towards Single 2D Image-Level Self-Supervision for 3D Human Pose and Shape Estimation

Author(s)
Cha, Junuk; Saqlain, Muhammad; Lee, Changhwa; Lee, Seongyeong; Lee, Seungeun; Kim, Donguk; Park, Won-Hee; Baek, Seungryul
Issued Date
2021-10
DOI
10.3390/app11209724
URI
https://scholarworks.unist.ac.kr/handle/201301/55176
Fulltext
https://www.mdpi.com/2076-3417/11/20/9724
Citation
APPLIED SCIENCES-BASEL, v.11, no.20, pp.9724
Abstract
Three-dimensional human pose and shape estimation is an important problem in the computer vision community, with numerous applications such as augmented reality, virtual reality, and human-computer interaction. However, training accurate 3D human pose and shape estimators based on deep learning approaches requires a large number of images paired with corresponding 3D ground-truth poses, which are costly to collect. To relieve this constraint, various weakly or self-supervised pose estimation approaches have been proposed. Nevertheless, these methods still rely on supervision signals that require effort to collect, such as unpaired large-scale 3D ground-truth data, a small subset of 3D-labeled data, or video priors. Often, they also require equipment such as a calibrated multi-camera system to acquire strong multi-view priors. In this paper, we propose a self-supervised learning framework for 3D human pose and shape estimation that uses only single 2D images and requires no other forms of supervision signals. Our framework takes single 2D images as input, estimates human 3D meshes in its intermediate layers, and is trained to solve four types of self-supervision tasks (i.e., three image-manipulation tasks and one neural-rendering task) whose ground truths are all derived from the single 2D images themselves. Through experiments, we demonstrate the effectiveness of our approach on 3D human pose benchmark datasets (i.e., Human3.6M, 3DPW, and LSP), where we achieve new state-of-the-art results among weakly/self-supervised methods.
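
The abstract does not specify implementation details; the following is a minimal, hypothetical PyTorch-style sketch of the kind of training step it describes, where one network regresses mesh parameters from a single 2D image and every supervision signal is derived from that same image. The module names, the particular image manipulation (rotation consistency), the dummy renderer, and the equal loss weighting are all assumptions for illustration only, not the paper's actual design.

# Hypothetical sketch of image-level self-supervised training for mesh regression.
# All names and design choices here are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeshRegressor(nn.Module):
    """Stand-in for a network mapping a single 2D image to 3D mesh parameters."""
    def __init__(self, num_mesh_params=85):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_mesh_params)

    def forward(self, img):
        return self.head(self.backbone(img))

class DummyRenderer(nn.Module):
    """Placeholder for a differentiable neural renderer (mesh params -> image)."""
    def __init__(self, num_mesh_params=85, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.fc = nn.Linear(num_mesh_params, 3 * img_size * img_size)

    def forward(self, mesh_params):
        return self.fc(mesh_params).view(-1, 3, self.img_size, self.img_size)

def training_step(model, renderer, images, optimizer):
    """One optimization step: every loss target comes from the input image itself."""
    mesh_params = model(images)

    # Image-manipulation self-supervision (one of the paper's three such tasks,
    # approximated here by keeping predictions consistent under a 90-degree rotation).
    rotated = torch.rot90(images, k=1, dims=(2, 3))
    loss_manip = F.mse_loss(model(rotated), mesh_params)

    # Neural-rendering self-supervision: render the predicted mesh back to
    # image space and compare it with the original 2D input image.
    loss_render = F.l1_loss(renderer(mesh_params), images)

    loss = loss_manip + loss_render  # equal weighting is an assumption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model, renderer = MeshRegressor(), DummyRenderer()
    opt = torch.optim.Adam(list(model.parameters()) + list(renderer.parameters()), lr=1e-4)
    batch = torch.rand(2, 3, 64, 64)  # random stand-in for single 2D input images
    print(training_step(model, renderer, batch, opt))

In this sketch the only inputs are the 2D images themselves, mirroring the abstract's claim that no external supervision signals (3D labels, video priors, or multi-view rigs) are needed.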
Publisher
MDPI
ISSN
2076-3417
Keyword (Author)
deep learning; human body pose estimation; human body mesh estimation; neural rendering; self-supervised learning

