| dc.description.abstract |
Understanding both the external appearance and internal structure of three-dimensional objects is essential for applications such as industrial inspection, simulation, digital fabrication, and cultural heritage preservation. However, conventional Neural Radiance Fields (NeRFs) are primarily designed for surface-level reconstruction and struggle to represent hidden internal geometries, limiting their applicability in scenarios where internal reasoning is required. In this work, we propose In-Out NeRF, a dual-modality neural radiance field framework that simultaneously reconstructs and renders both the external and internal structures of 3D objects within a unified representation. Our approach integrates complementary data modalities, combining RGB images for photorealistic exterior reconstruction with X-ray–style volumetric projections for internal structure modeling. To accommodate varying data availability, we introduce two internal reconstruction strategies. First, when CAD data are available, we employ a data-efficient few-shot approach that reconstructs internal geometry from sparse cross-sectional slices rather than full volumetric supervision. Second, in the absence of internal ground truth, we leverage a generative model to infer plausible internal structures solely from external geometry. These components are unified through a dual-modality NeRF architecture that jointly models external radiance and internal density fields. To further enhance reconstruction fidelity, we introduce an adaptive importance sampling strategy guided by structural cues, enabling the model to focus computational resources on geometrically informative regions. We evaluate the proposed method on synthetic and real-world industrial datasets, including T-LESS and MISUMI, demonstrating that In-Out NeRF outperforms existing approaches in both external novel-view synthesis and internal structure reconstruction.
Additionally, we present a real-world use case involving cultural heritage building components, highlighting the potential of our framework for non-destructive internal visualization. Overall, this work extends NeRF beyond surface-only rendering toward a structure-aware, multimodal 3D reconstruction framework capable of inferring, rendering, and synthesizing both the exterior and interior of complex objects. |
- |