Related Researcher

Kim, Hyounghun (김형훈)

Detailed Information

Towards Fully Mobile 3D Face, Body, and Environment Capture Using Only Head-worn Cameras

Author(s)
Cha, Young-Woon; Price, True; Wei, Zhen; Lu, Xinran; Rewkowski, Nicholas; Chabra, Rohan; Qin, Zihe; Kim, Hyounghun; Su, Zhaoqi; Liu, Yebin; Ilie, Adrian; State, Andrei; Xu, Zhenlin; Frahm, Jan-Michael; Fuchs, Henry
Issued Date
2018-11
DOI
10.1109/TVCG.2018.2868527
URI
https://scholarworks.unist.ac.kr/handle/201301/59794
Citation
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, v.24, no.11, pp. 2993-3004
Abstract
We propose a new approach for 3D reconstruction of dynamic indoor and outdoor scenes in everyday environments, leveraging only cameras worn by a user. This approach allows 3D reconstruction of experiences at any location and virtual tours from anywhere. The key innovation of the proposed egocentric reconstruction system is to capture the wearer's body pose and facial expression from near-body views, e.g., cameras on the user's glasses, and to capture the surrounding environment using outward-facing views. The main challenge of egocentric reconstruction, however, is the poor coverage of the near-body views: the user's body and face are observed from vantage points that are convenient for wear but inconvenient for capture. To overcome this challenge, we propose a parametric-model-based approach to user motion estimation. This approach utilizes convolutional neural networks (CNNs) for near-view body pose estimation, and we introduce a CNN-based approach for facial expression estimation that combines audio and video. For each time point during capture, the intermediate model-based reconstructions from these systems are used to re-target a high-fidelity pre-scanned model of the user. We demonstrate that the proposed self-sufficient, head-worn capture system is capable of reconstructing the wearer's movements and their surrounding environment in both indoor and outdoor situations without any additional views. As a proof of concept, we show how the resulting 3D-plus-time reconstruction can be immersively experienced within a virtual reality system (e.g., the HTC Vive). We expect that the proposed egocentric capture-and-reconstruction system will eventually be reduced in size to fit within future AR glasses, and will be widely useful for immersive 3D telepresence, virtual tours, and general use-anywhere 3D content creation.
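
As a rough illustration of the audio-visual fusion the abstract mentions for facial expression estimation, the PyTorch sketch below shows one plausible late-fusion layout: a small CNN over a near-view face crop, another over an audio spectrogram window, and a head that regresses concatenated features to expression parameters. This is not the authors' implementation; all layer sizes, input resolutions, and the 52-parameter output are assumptions made for illustration.

    # A minimal sketch, NOT the paper's implementation: one plausible way to
    # fuse near-view video and audio in a CNN for facial expression estimation.
    # Layer sizes, input resolutions, and the 52-parameter output are assumed.
    import torch
    import torch.nn as nn

    class AudioVisualExpressionNet(nn.Module):
        """Regresses facial expression parameters from a near-body face crop
        plus a short audio window (here, a mel-spectrogram image)."""

        def __init__(self, num_expression_params: int = 52):
            super().__init__()
            # Video branch: small CNN over the oblique near-body face view.
            self.video_branch = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 128)
            )
            # Audio branch: small CNN over a spectrogram window.
            self.audio_branch = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 32)
            )
            # Late fusion: concatenate branch features, regress parameters.
            self.head = nn.Sequential(
                nn.Linear(128 + 32, 128), nn.ReLU(),
                nn.Linear(128, num_expression_params),
            )

        def forward(self, face_crop: torch.Tensor, mel_window: torch.Tensor):
            v = self.video_branch(face_crop)   # (B, 128)
            a = self.audio_branch(mel_window)  # (B, 32)
            return self.head(torch.cat([v, a], dim=1))

    # Dummy usage: two 96x96 face crops and two 64x64 spectrogram windows.
    net = AudioVisualExpressionNet()
    params = net(torch.randn(2, 3, 96, 96), torch.randn(2, 1, 64, 64))
    print(params.shape)  # torch.Size([2, 52])

Late fusion is only one option; the paper's actual network design is not specified in this record, so the sketch simply makes the abstract's "combines audio and video" concrete.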
Publisher
IEEE COMPUTER SOC
ISSN
1077-2626
Keyword (Author)
Telepresence; Ego-centric Vision; Convolutional Neural Networks; Motion Capture
Keyword
TRACKING

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.