File Download

There are no files associated with this item.

Related Researcher

Ki, Hyungson (기형선)
Laser Processing and Artificial Intelligence Lab.


Full metadata record

DC Field Value
dc.citation.endPage 78
dc.citation.number A
dc.citation.startPage 63
dc.citation.title JOURNAL OF MANUFACTURING PROCESSES
dc.citation.volume 156
dc.contributor.author Nam, Kimoon
dc.contributor.author Ki, Hyungson
dc.date.accessioned 2025-11-25T14:56:39Z
dc.date.available 2025-11-25T14:56:39Z
dc.date.created 2025-11-24
dc.date.issued 2025-12
dc.description.abstract Weld seam tracking is a critical capability for automated laser welding systems, requiring high precision and adaptive control in environments where manual programming is infeasible. Existing approaches often rely on rule-based logic or task-specific models, limiting their ability to support end-to-end automation. This study proposes a novel vision-to-code framework that directly generates executable control code from a single weld seam image, enabling fully automated seam tracking without the need for handcrafted image processing or predefined alignment logic. A domain-specific dataset was constructed by annotating grayscale weld seam images with executable C# code, enabling the model to learn a direct mapping from visual input to machine-level instructions. The proposed deep learning architecture, featuring a CNN-based visual encoder, a non-autoregressive Transformer decoder, and a custom tokenizer for code generation, was trained entirely from scratch to capture the structural and semantic characteristics of the welding task. The system was validated on a butt-joint welding task using a multimode fiber laser applied to aluminum alloy specimens with varying weld geometries and surface textures. The model achieved a BLEU-4 score of 0.94851 and a pass@1 rate of 99.62%, and demonstrated robust generalization to unseen seam geometries and material textures. These results underscore the novelty and practical utility of the proposed approach, which bridges image understanding and control code generation in an end-to-end framework for vision-driven welding automation.
dc.identifier.bibliographicCitation JOURNAL OF MANUFACTURING PROCESSES, v.156, no.A, pp.63 - 78
dc.identifier.doi 10.1016/j.jmapro.2025.10.111
dc.identifier.issn 1526-6125
dc.identifier.scopusid 2-s2.0-105020377130
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/88331
dc.identifier.wosid 001613827900002
dc.language English
dc.publisher ELSEVIER SCI LTD
dc.title Vision-to-code framework for automated weld seam tracking in laser welding
dc.type Article
dc.description.isOpenAccess FALSE
dc.relation.journalWebOfScienceCategory Engineering, Manufacturing
dc.relation.journalResearchArea Engineering
dc.type.docType Article
dc.description.journalRegisteredClass scie
dc.description.journalRegisteredClass scopus
dc.subject.keywordAuthor Automated laser welding
dc.subject.keywordAuthor Weld seam tracking
dc.subject.keywordAuthor Vision-to-code model
dc.subject.keywordAuthor Deep learning
dc.subject.keywordAuthor Code generation
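The abstract mentions a custom tokenizer that lets the model map weld seam images to executable C# control code. As a rough illustration only, here is a minimal sketch of what a regex-based code tokenizer for short C# motion snippets might look like; the vocabulary, token pattern, class name, and example snippets (e.g. `MoveTo`, `SetSpeed`) are assumptions for illustration, not the authors' implementation.

```python
import re

class CodeTokenizer:
    """Hypothetical tokenizer sketch: maps short C#-style control snippets
    to integer token IDs and back. Illustrative only, not the paper's code."""

    SPECIALS = ["<pad>", "<bos>", "<eos>", "<unk>"]

    def __init__(self, corpus):
        # Tokens are identifiers, numbers (with optional decimal part),
        # or single non-whitespace punctuation characters.
        self.pattern = re.compile(r"[A-Za-z_]\w*|\d+(?:\.\d+)?|\S")
        vocab = sorted({t for code in corpus for t in self.pattern.findall(code)})
        self.id_of = {t: i for i, t in enumerate(self.SPECIALS + vocab)}
        self.tok_of = {i: t for t, i in self.id_of.items()}

    def encode(self, code):
        # Unknown tokens fall back to <unk>; sequence is wrapped in <bos>/<eos>.
        unk = self.id_of["<unk>"]
        ids = [self.id_of.get(t, unk) for t in self.pattern.findall(code)]
        return [self.id_of["<bos>"]] + ids + [self.id_of["<eos>"]]

    def decode(self, ids):
        toks = [self.tok_of[i] for i in ids if self.tok_of[i] not in self.SPECIALS]
        return " ".join(toks)

# Illustrative corpus of machine-control snippets (assumed, not from the paper).
corpus = ["MoveTo(12.5, 3.0);", "SetSpeed(40);"]
tok = CodeTokenizer(corpus)
ids = tok.encode("MoveTo(12.5, 3.0);")
print(tok.decode(ids))  # MoveTo ( 12.5 , 3.0 ) ;
```

In the full system described by the abstract, token IDs like these would form the output vocabulary of the non-autoregressive Transformer decoder, with the decoded token sequence rendered back into executable code.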


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.