Deep Learning-Based Monitoring and Seam Tracking for Automated Laser Welding

Author(s)
Nam, Kimoon
Advisor
Ki, Hyungson
Issued Date
2026-02
URI
https://scholarworks.unist.ac.kr/handle/201301/91010
http://unist.dcollection.net/common/orgView/200000964531
Abstract
Laser beam welding is a process that enables high-speed joining through its high energy density and small beam diameter. However, the process is inherently unstable, and even slight misalignment of the weld joint can significantly increase the likelihood of defects. In automated welding systems, it is impractical to destructively test every weld to evaluate the strength degradation caused by such defects, so robust real-time process monitoring based on nondestructive inspection is required. Vision images contain rich spatial and temporal information, but in conventional manufacturing, meaningful features have traditionally been engineered by hand, which fundamentally limits robustness and scalability. Recent advances in deep learning have alleviated this limitation by allowing models to automatically learn high-dimensional patterns and infer information that is not explicitly visible in images. As a result, deep-learning-based vision monitoring has been widely adopted and has proven effective in industrial environments. Building on this progress, multimodal learning that integrates vision and language has further expanded the capabilities of AI systems in intelligent manufacturing. Motivated by these developments, this dissertation proposes nondestructive, vision-based monitoring frameworks that leverage deep learning to predict weld bead geometry, laser beam absorptance, and three-dimensional weld seam geometry in laser welding processes. These advances are expected to support the progression toward autonomous manufacturing systems. Chapter 1 introduces the fundamental principles of laser welding and provides an overview of the major challenges encountered in automated welding processes.
This chapter further discusses the necessity of monitoring weld bead geometry, laser beam absorptance, and the three-dimensional weld seam, while reviewing the limitations of laser welding and outlining deep learning algorithms that can address these issues. Chapter 2 proposes a deep learning model that predicts the top and bottom bead widths in full-penetration laser welding of aluminum alloys from melt pool images. Since the primary objective of laser welding is to create a strong metallurgical joint, the mechanical strength of the weld bead is of critical importance. In particular, tensile strength is strongly influenced by grain size, which is governed by the cooling rate and ultimately determined by the heat input and the resulting bead dimensions. Predicting bead width from melt pool images therefore provides a means to indirectly monitor the mechanical properties of the weld in real time. The proposed CNN-based model successfully predicted both top and bottom bead widths, and using two consecutive melt pool images as model inputs was observed to mitigate blurring caused by spatter or light scattering. Chapter 3 extends the Chapter 2 study by predicting penetration depth in partial-penetration laser welding. While the penetration depth prediction model demonstrated high accuracy, distinguishing defective welds solely from depth differences remained challenging. To address this limitation, a deep learning model for predicting laser beam absorptance was developed using melt pool images. The model revealed distinct absorptance changes at the moments when the penetration depth varied, highlighting regions where insufficient penetration leads to significantly reduced weld strength. These findings confirm absorptance as a highly sensitive and reliable indicator for detecting defects in partial-penetration welding.
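The two-consecutive-frame input used in Chapter 2 can be pictured as simple channel stacking before the CNN. This is an illustrative preprocessing sketch, not the dissertation's actual pipeline, and the function name is our own:

```python
import numpy as np

def stack_consecutive_frames(frames):
    """Pair each melt pool frame with its successor as a 2-channel array,
    so the network sees two consecutive exposures at once (helping it
    discount transient spatter or light scattering in a single frame)."""
    return [np.stack([frames[i], frames[i + 1]], axis=0)
            for i in range(len(frames) - 1)]

# three dummy 4x4 grayscale frames -> two 2-channel CNN inputs
frames = [np.full((4, 4), float(k)) for k in range(3)]
pairs = stack_consecutive_frames(frames)
print(len(pairs), pairs[0].shape)  # 2 (2, 4, 4)
```

Each stacked pair would then be fed to the regression network in place of a single-channel image.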
In Chapter 4, we propose a vision-to-code model that directly generates the two-dimensional (2D) relative weld-seam coordinates and the corresponding control code from a single weld seam image. This study addresses the problem that, during high-speed laser welding, even microscopic misalignment between the laser beam and the workpiece can lead to weld defects. In particular, the ability to predict control commands directly, without an intermediate mapping step, offers significant potential for automated welding processes, where real-time prediction and control performance are essential. Chapter 5 extends the work of Chapter 4 by developing a vision-to-code model for narrow-gap three-dimensional (3D) butt laser welding. In automated laser welding, z-axis misalignment or geometric variation in the workpiece can reduce the effective laser intensity and consequently degrade weld quality, so accurate 3D seam prediction is essential. In this study, we exploit the depth-of-focus characteristics of the vision camera to estimate the z-coordinate of the weld seam from the degree of image blurring that occurs as the seam moves away from the focal plane. By leveraging the automatic feature-learning capability of deep learning, the proposed model can infer the z-axis position from a single two-dimensional image. Our approach obtains 3D information solely from 2D vision images, a significant advantage with strong potential for extension to various manufacturing processes.
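As a rough analogy for the control-code output of the Chapter 4 vision-to-code model, the emitted commands might resemble relative-move instructions derived from predicted seam offsets. The G-code-style command set below is purely an assumption for illustration; the dissertation's model generates control code end-to-end rather than through such a post-processing helper:

```python
def seam_offsets_to_code(offsets, feed_rate=1200):
    """Toy translation of predicted relative seam offsets (dx, dy, in mm)
    into G-code-style relative moves. Command set is hypothetical."""
    lines = ["G91",                # relative positioning
             f"G1 F{feed_rate}"]  # set feed rate (mm/min)
    for dx, dy in offsets:
        lines.append(f"G1 X{dx:.3f} Y{dy:.3f}")
    return "\n".join(lines)

print(seam_offsets_to_code([(0.10, -0.05), (0.02, 0.00)]))
```

Emitting such commands directly from the image skips the intermediate coordinate-to-command mapping stage described in the text.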
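The depth-from-defocus premise of Chapter 5 (blur grows as the seam leaves the focal plane) can be sanity-checked with a classical sharpness metric such as the variance of the Laplacian. The deep model learns this cue implicitly from data; the sketch below, with helper names of our own, only illustrates the underlying optical effect:

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'valid' 2-D correlation; adequate for a small sketch."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

LAPLACIAN = np.array([[0., 1., 0.],
                      [1., -4., 1.],
                      [0., 1., 0.]])

def sharpness(img):
    """Variance of the Laplacian response: high in focus, low when defocused."""
    return conv2d(img, LAPLACIAN).var()

# a high-contrast pattern vs. a 'defocused' (box-blurred) version of it
sharp = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)
blurred = conv2d(sharp, np.ones((3, 3)) / 9.0)
print(sharpness(sharp) > sharpness(blurred))  # True
```

Calibrating such a sharpness-to-distance relation by hand is what the learned model avoids: it maps a single 2D image to the z-position directly.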
Publisher
Ulsan National Institute of Science and Technology
Degree
Doctor
Major
Department of Mechanical Engineering


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.