
Full metadata record

dc.citation.startPage: 110074
dc.citation.title: COMPUTERS & INDUSTRIAL ENGINEERING
dc.citation.volume: 190
dc.contributor.author: Choi, Jae Gyeong
dc.contributor.author: Kim, Dong Chan
dc.contributor.author: Chung, Miyoung
dc.contributor.author: Lim, Sunghoon
dc.contributor.author: Park, Hyung Wook
dc.date.accessioned: 2024-05-31T16:05:12Z
dc.date.available: 2024-05-31T16:05:12Z
dc.date.created: 2024-05-30
dc.date.issued: 2024-04
dc.description.abstract: There is a growing demand for carbon fiber-reinforced plastics (CFRPs) in the aerospace and automotive industries. Consequently, the assembly and repair of CFRP components have garnered considerable attention. However, the CFRP drilling process poses difficulties at industrial sites. These challenges arise from the anisotropic properties of CFRP and the limitations of using an optical microscope to assess the quality of millions of drilled holes. Therefore, this study introduced an advanced indirect prediction method for CFRP hole quality based on multi-sensor data. During the drilling process, data including force, torque, acceleration, voltage, current, sound, and images of the hole exits were acquired via a robotic machining system. The delamination factors F_d and F_a, which quantify hole quality, can be calculated from the hole images. Preprocessing was employed to segment the sensor data into discrete drilling trials and to extract spectral features from the data. This paper proposed a multimodal one-dimensional convolutional neural network (1D CNN) that predicts delamination factors from time-series multi-sensor data. A case study using a test set of 100 trials validated the proposed model. The performance of the proposed model was evaluated by the mean squared error (MSE) and inference time, yielding (8.42 ± 0.167) × 10^-2 and 3.48 ± 0.192 s for F_d, and (6.1 ± 0.729) × 10^-2 and 3.13 ± 0.098 s for F_a, respectively, across the entire test set. These results exceed those of previous studies in terms of both MSE and inference time for multimodality, underscoring the predictive accuracy and real-time operational capability of the proposed model in industrial settings. Furthermore, this approach provides practicality and flexibility, facilitating the easy integration or removal of sensors through data-driven preprocessing and a multimodal learning scheme.
dc.identifier.bibliographicCitation: COMPUTERS & INDUSTRIAL ENGINEERING, v.190, pp.110074
dc.identifier.doi: 10.1016/j.cie.2024.110074
dc.identifier.issn: 0360-8352
dc.identifier.scopusid: 2-s2.0-85188681248
dc.identifier.uri: https://scholarworks.unist.ac.kr/handle/201301/82879
dc.identifier.wosid: 001219218500001
dc.language: English
dc.publisher: PERGAMON-ELSEVIER SCIENCE LTD
dc.title: Multimodal 1D CNN for delamination prediction in CFRP drilling process with industrial robots
dc.type: Article
dc.description.isOpenAccess: FALSE
dc.relation.journalWebOfScienceCategory: Computer Science, Interdisciplinary Applications; Engineering, Industrial
dc.relation.journalResearchArea: Computer Science; Engineering
dc.type.docType: Article
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.subject.keywordAuthor: Carbon fiber-reinforced plastic
dc.subject.keywordAuthor: Delamination prediction
dc.subject.keywordAuthor: Robotic drilling process
dc.subject.keywordAuthor: Multimodal learning
dc.subject.keywordAuthor: Multi-sensor analysis
dc.subject.keywordAuthor: 1D convolutional neural network
dc.subject.keywordPlus: HOLE-QUALITY
dc.subject.keywordPlus: MODEL
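
The abstract above states that the delamination factors F_d and F_a are calculated from images of the hole exits, but their definitions are not spelled out in this record. As a point of reference, a widely used convention in the CFRP drilling literature (an assumption here, not necessarily the exact definitions used in the article) is

    F_d = D_max / D_nom        (diameter-based delamination factor)
    F_a = A_del / A_nom        (area-based delamination factor)

where D_max is the maximum diameter of the delamination zone at the hole exit, D_nom is the nominal hole diameter, A_del is the delaminated area surrounding the hole, and A_nom is the nominal hole area. Under this convention, F_d = 1 and F_a = 0 correspond to a delamination-free hole.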
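To make the modeling approach named in the title and abstract concrete, below is a minimal sketch of a multimodal 1D CNN for per-hole regression of F_d and F_a: one convolutional branch per sensor modality, with late fusion into a shared regression head. All layer sizes, channel counts, modality groupings, and sequence lengths are illustrative assumptions; this is not the authors' exact architecture.

    # Generic multimodal 1D CNN sketch (PyTorch); all hyperparameters are assumptions.
    import torch
    import torch.nn as nn

    class SensorBranch(nn.Module):
        """1D convolutional encoder for a single sensor modality."""
        def __init__(self, in_channels: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # collapse time axis: one feature vector per trial
            )

        def forward(self, x):                 # x: (batch, channels, time)
            return self.net(x).squeeze(-1)    # (batch, 32)

    class MultimodalCNN(nn.Module):
        """One branch per modality; fused features regress [F_d, F_a]."""
        def __init__(self, modality_channels):
            super().__init__()
            self.branches = nn.ModuleList(SensorBranch(c) for c in modality_channels)
            self.head = nn.Sequential(
                nn.Linear(32 * len(modality_channels), 64),
                nn.ReLU(),
                nn.Linear(64, 2),             # two outputs: F_d and F_a
            )

        def forward(self, inputs):            # inputs: list of per-modality tensors
            feats = [branch(x) for branch, x in zip(self.branches, inputs)]
            return self.head(torch.cat(feats, dim=1))

    # Hypothetical modality grouping: force/torque (4 ch), acceleration (3 ch),
    # voltage/current (2 ch), sound (1 ch); 2048 samples per drilling trial.
    model = MultimodalCNN([4, 3, 2, 1])
    batch = [torch.randn(8, c, 2048) for c in (4, 3, 2, 1)]
    pred = model(batch)                          # (8, 2): predicted [F_d, F_a]
    loss = nn.MSELoss()(pred, torch.rand(8, 2))  # MSE objective, as in the abstract

Because each branch ends in adaptive average pooling, branches tolerate modality-specific sampling rates and sequence lengths, which is consistent with the abstract's claim that sensors can be integrated or removed without restructuring the whole model.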

