
Full metadata record

DC Field Value Language
dc.contributor.advisor Yang, Seungjoon -
dc.contributor.author Song, Chaewon -
dc.date.accessioned 2026-03-26T22:13:59Z -
dc.date.available 2026-03-26T22:13:59Z -
dc.date.issued 2026-02 -
dc.description.abstract Understanding both the external appearance and internal structure of three-dimensional objects is essential for applications such as industrial inspection, simulation, digital fabrication, and cultural heritage preservation. However, conventional Neural Radiance Fields (NeRFs) are primarily designed for surface-level reconstruction and struggle to represent hidden internal geometries, limiting their applicability in scenarios where internal reasoning is required.
In this work, we propose In-Out NeRF, a dual-modality neural radiance field framework that simultaneously reconstructs and renders both external and internal structures of 3D objects within a unified representation. Our approach integrates complementary data modalities, combining RGB images for photorealistic exterior reconstruction with X-ray–style volumetric projections for internal structure modeling. To accommodate varying data availability, we introduce two internal reconstruction strategies. First, when CAD data are available, we employ a data-efficient few-shot approach that reconstructs internal geometry using sparse cross-sectional slices rather than full volumetric supervision. Second, in the absence of internal ground truth, we leverage a generative model to infer plausible internal structures solely from external geometry.
These components are unified through a dual-modality NeRF architecture that jointly models external radiance and internal density fields. To further enhance reconstruction fidelity, we introduce an adaptive importance sampling strategy guided by structural cues, enabling the model to focus computational resources on geometrically informative regions.
We evaluate the proposed method on synthetic and real-world industrial datasets, including T-LESS and MISUMI, demonstrating that In-Out NeRF outperforms existing approaches in both external novel-view synthesis and internal structure reconstruction. Additionally, we present a real-world use case involving cultural heritage building components, highlighting the potential of our framework for non-destructive internal visualization.
Overall, this work extends NeRF beyond surface-only rendering toward a structure-aware, multimodal 3D reconstruction framework capable of inferring, rendering, and synthesizing both the exterior and interior of complex objects.
-
dc.description.degree Master -
dc.description Department of Electrical Engineering -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/90962 -
dc.identifier.uri http://unist.dcollection.net/common/orgView/200000964625 -
dc.language ENG -
dc.publisher Ulsan National Institute of Science and Technology -
dc.rights.embargoReleaseDate 9999-12-31 -
dc.rights.embargoReleaseTerms 9999-12-31 -
dc.subject Resource extraction, Electrochemistry -
dc.title In-Out NeRF: Dual-Modality Neural Radiance Fields for Internal and External 3D Reconstruction -
dc.type Thesis -


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.