Related Researcher

Chung, Hayoung (정하영)
Computational Structural Mechanics and Design Lab.

Detailed Information


Full metadata record

DC Field Value Language
dc.citation.number 3 -
dc.citation.startPage 51 -
dc.citation.title STRUCTURAL AND MULTIDISCIPLINARY OPTIMIZATION -
dc.citation.volume 69 -
dc.contributor.author Oh, Kee Seung -
dc.contributor.author Kim, Yoon Young -
dc.contributor.author Chung, Hayoung -
dc.contributor.author Oh, Joo Hwan -
dc.date.accessioned 2026-05-08T16:00:44Z -
dc.date.available 2026-05-08T16:00:44Z -
dc.date.created 2026-03-09 -
dc.date.issued 2026-02 -
dc.description.abstract Despite the growing interest in applying reinforcement learning (RL) to design optimization, its high computational cost limits its applicability to problems involving expensive function evaluations. In this study, we propose an efficient RL action strategy specifically designed for acoustic topology optimization. The key idea is to assign action values (Q-values) to each element individually and select material-filled elements in descending order of their Q-values until the target volume fraction is met, instead of evaluating Q-values for complete combinations of elements that satisfy the volume constraint. This formulation decouples the learning complexity from the combinatorial explosion of candidate layouts, making the training of the Q-value-estimating neural network more efficient and thus making the RL-based approach more suitable for topology optimization problems requiring fine meshes. As a representative application, we consider the design of a muffler's internal layout to maximize sound transmission loss, a problem where conventional gradient-based methods often fail to achieve near-global optimal solutions. By integrating the proposed method with finite element simulations and a reward function shaped by transmission loss at one or more target frequencies, the RL agent learns policies that directly determine the material distribution for single- or multi-frequency objectives. The resulting muffler designs, based on a two-dimensional finite element model, exhibit near-global optimal performance and outperform those generated by conventional gradient-based methods. The advantages of the proposed approach over standard RL-based topology optimization methods are also clearly demonstrated. -
dc.identifier.bibliographicCitation STRUCTURAL AND MULTIDISCIPLINARY OPTIMIZATION, v.69, no.3, pp.51 -
dc.identifier.doi 10.1007/s00158-025-04244-z -
dc.identifier.issn 1615-147X -
dc.identifier.scopusid 2-s2.0-105034264926 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/91649 -
dc.identifier.url https://link.springer.com/article/10.1007/s00158-025-04244-z?utm_source=getftr&utm_medium=getftr&utm_campaign=getftr_pilot&getft_integrator=clarivate -
dc.identifier.wosid 001696273500005 -
dc.language English -
dc.publisher SPRINGER -
dc.title An efficient reinforcement learning action strategy for topology optimization: application to muffler design -
dc.type Article -
dc.description.isOpenAccess TRUE -
dc.relation.journalWebOfScienceCategory Computer Science, Interdisciplinary Applications; Engineering, Multidisciplinary; Mechanics -
dc.relation.journalResearchArea Computer Science; Engineering; Mechanics -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Element-wise Q-value evaluation -
dc.subject.keywordAuthor Topology optimization -
dc.subject.keywordAuthor Muffler design -
dc.subject.keywordAuthor Noise reduction -
dc.subject.keywordAuthor Reinforcement learning -
dc.subject.keywordPlus ACOUSTIC ATTENUATION PERFORMANCE -
dc.subject.keywordPlus SHAPE OPTIMIZATION -
dc.subject.keywordPlus NEURAL-NETWORKS -
dc.subject.keywordPlus DEEP -
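The abstract's element-wise action strategy — scoring each mesh element with its own Q-value and filling elements in descending Q-value order until the target volume fraction is reached — can be sketched as follows. This is a minimal illustration of the selection step only, not the authors' implementation; the function name, the flat element array, and the rounding rule for the element budget are all assumptions.

```python
import numpy as np

def select_material_elements(q_values, target_volume_fraction):
    """Illustrative sketch of element-wise Q-value selection:
    rank elements by their individual Q-values and mark the
    top-ranked ones as material-filled until the target volume
    fraction is met (all names here are hypothetical)."""
    n_elements = q_values.size
    # Number of elements to fill to satisfy the volume constraint
    n_fill = int(round(target_volume_fraction * n_elements))
    # Indices of the n_fill elements with the highest Q-values
    top = np.argsort(q_values)[::-1][:n_fill]
    design = np.zeros(n_elements, dtype=int)
    design[top] = 1  # 1 = material-filled, 0 = void
    return design

# Toy example: 5 elements, fill the top 40% (2 elements)
q = np.array([0.2, 0.9, 0.1, 0.7, 0.5])
layout = select_material_elements(q, 0.4)
print(layout)  # -> [0 1 0 1 0]
```

Because each element carries its own Q-value, the network only has to score `n_elements` scalars rather than every element combination satisfying the volume constraint, which is the decoupling from combinatorial explosion the abstract describes.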


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.