| dc.description.abstract |
Dataset distillation compresses large training sets into compact synthetic ones, reducing training cost and memory usage while retaining task-relevant learning behavior. For 3D point clouds, however, feature distribution matching is unstable because unordered point sets lack a consistent semantic structure across samples. To stabilize matching under semantic misalignment, we introduce SADM, a sorting-based semantic alignment mechanism that orders point features into a consistent semantic arrangement before distribution matching, enabling reliable alignment between real and synthetic point clouds. Building on SADM, we further extend distillation toward greater memory efficiency and representational compactness. While SADM enables accurate distribution matching, raw point clouds contain substantial redundancy that limits the scalability of synthetic dataset generation. We therefore propose a shape-morphing formulation in which a small set of learnable anchors is combined with morphing coefficients. This approach reduces point redundancy, compresses geometric structure into a low-dimensional morphable representation, and generates diverse synthetic shapes with far fewer points. As a result, the distilled dataset achieves markedly improved memory efficiency while preserving the expressive information of the original dataset. Experiments on ModelNet10, ModelNet40, ShapeNet, and ScanObjectNN demonstrate that the proposed sorting-based semantic alignment and morphing-based synthesis consistently improve distilled dataset quality across various PPC settings, and generalize well to multiple point cloud backbones, including PointNet, PointNet++, DGCNN, PointConv, and PointTransformer. Overall, this thesis introduces sorting-driven semantic alignment for stable distribution matching and a morphing-based mechanism for compact yet diverse 3D distillation, providing practical building blocks for memory-efficient learning with point clouds. |
- |
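The two mechanisms the abstract names can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: the shared 1-D projection used for sorting, the index-wise matching loss, and the Dirichlet-weighted blend of anchor shapes are all assumptions made here for the sketch; the abstract only states that features are sorted into a consistent semantic order before distribution matching, and that synthetic shapes are generated by combining learnable anchors with morphing coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Sorting-based semantic alignment (assumed form of SADM) ---
# Hypothetical: rank each cloud's per-point features by a shared 1-D
# projection score, so index i refers to roughly the same semantic
# "slot" in every sample; then match distributions index-wise.
def align_by_sort(feats, proj):
    scores = feats @ proj              # (N,) scalar score per point
    return feats[np.argsort(scores)]   # consistently ordered features

def matching_loss(real_feats, syn_feats, proj):
    a = align_by_sort(real_feats, proj)
    b = align_by_sort(syn_feats, proj)
    return float(np.mean((a - b) ** 2))  # index-wise distribution match

# --- Morphing-based synthesis (assumed form of the formulation) ---
# A few learnable anchor shapes are blended by per-sample morphing
# coefficients, so each synthetic cloud is stored as K coefficients
# rather than N*3 coordinates.
K, N = 4, 256                                  # anchors, points per shape
anchors = rng.normal(size=(K, N, 3))           # learnable anchor shapes
coeffs = rng.dirichlet(np.ones(K), size=8)     # 8 synthetic shapes
synthetic = np.einsum('sk,knd->snd', coeffs, anchors)  # (8, N, 3)

# Toy feature tensors standing in for backbone outputs.
d = 16
proj = rng.normal(size=d)
loss = matching_loss(rng.normal(size=(N, d)), rng.normal(size=(N, d)), proj)
```

In this sketch the memory saving is explicit: eight synthetic shapes cost `K * N * 3` anchor parameters plus `8 * K` coefficients, versus `8 * N * 3` raw coordinates, which is the compression argument the abstract makes.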