Fast 3D head avatar using triplane representation from monocular RGB video

Author(s)
Kim, Solang
Advisor
Yoon, Sung Whan
Issued Date
2024-02
URI
https://scholarworks.unist.ac.kr/handle/201301/82150
http://unist.dcollection.net/common/orgView/200000744474
Abstract
Despite significant advancements in the field of 3D head avatar creation, traditional methods have consistently faced challenges in efficiently reflecting expression and pose changes in dynamic 3D fields. A major impediment has been the extensive training time required, which poses a critical hindrance to their practical application in real-world scenarios. This thesis introduces a novel approach to 3D head avatar creation that effectively addresses these limitations by leveraging a triplane representation from monocular RGB video. Our method substantially reduces the number of parameters needed to represent dynamic 3D fields through the utilization of a triplane structure. Furthermore, we enhance our model's deformation capabilities by incorporating 3DMM parameters, thus increasing efficiency and versatility in complex facial reconstruction scenarios. To equip the model with rapid adaptation capabilities and a comprehensive understanding of diverse identities, we employ the StyleGAN2 generator, pre-trained on extensive facial datasets. The effectiveness of our approach is demonstrated through comparative speed analyses and cross-identity and self-reenactment experiments across various identities, views, and poses. Notably, our results show successful generation within two minutes in diverse scenarios, signifying a breakthrough for 3D head avatar creation in real-world applications.
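The parameter savings the abstract attributes to the triplane structure come from storing features on three axis-aligned 2D planes instead of a dense 3D grid: a 3D query point is projected onto each plane, bilinearly sampled, and the three results are aggregated. The sketch below (not the thesis code; plane resolution, channel count, and summation as the aggregation rule are illustrative assumptions) shows the lookup:

```python
# Minimal sketch of a triplane feature lookup (assumed resolution/channels,
# sum aggregation); not the implementation from the thesis.
import numpy as np

R, C = 64, 16  # plane resolution and feature channels (assumed values)
rng = np.random.default_rng(0)
planes = {
    "xy": rng.standard_normal((R, R, C)),
    "xz": rng.standard_normal((R, R, C)),
    "yz": rng.standard_normal((R, R, C)),
}

def bilinear(plane, u, v):
    """Bilinearly sample an (R, R, C) plane at continuous coords (u, v) in [0, 1]."""
    x, y = u * (R - 1), v * (R - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * plane[y0, x0]
            + fx * (1 - fy) * plane[y0, x1]
            + (1 - fx) * fy * plane[y1, x0]
            + fx * fy * plane[y1, x1])

def triplane_features(p):
    """Project a 3D point p = (x, y, z), coords in [0, 1], onto the three
    planes, sample each, and sum the per-plane feature vectors."""
    x, y, z = p
    return (bilinear(planes["xy"], x, y)
            + bilinear(planes["xz"], x, z)
            + bilinear(planes["yz"], y, z))

feat = triplane_features((0.3, 0.5, 0.7))
print(feat.shape)  # (16,)
```

Under these assumed sizes, a dense 64³ voxel grid with 16 channels would need 64³ · 16 ≈ 4.2M values, while the three 64² planes need only 3 · 64² · 16 ≈ 197K, which illustrates the parameter reduction the abstract claims.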
Publisher
Ulsan National Institute of Science and Technology

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.