Related Researcher

Chung, Hayoung
Computational Structural Mechanics and Design Lab.

Detailed Information

An efficient reinforcement learning action strategy for topology optimization: application to muffler design

Author(s)
Oh, Kee Seung; Kim, Yoon Young; Chung, Hayoung; Oh, Joo Hwan
Issued Date
2026-02
DOI
10.1007/s00158-025-04244-z
URI
https://scholarworks.unist.ac.kr/handle/201301/91649
Fulltext
https://link.springer.com/article/10.1007/s00158-025-04244-z?utm_source=getftr&utm_medium=getftr&utm_campaign=getftr_pilot&getft_integrator=clarivate
Citation
STRUCTURAL AND MULTIDISCIPLINARY OPTIMIZATION, v.69, no.3, pp.51
Abstract
Despite the growing interest in applying reinforcement learning (RL) to design optimization, its high computational cost limits its applicability to problems involving expensive function evaluations. In this study, we propose an efficient RL action strategy specifically designed for acoustic topology optimization. The key idea is to assign action values (Q-values) to each element individually and to select material-filled elements in descending order of their Q-values until the target volume fraction is met, instead of evaluating Q-values for complete combinations of elements that satisfy the volume constraint. This formulation decouples the learning complexity from the combinatorial explosion of candidate layouts, making the training of the Q-value-estimating neural network more efficient and thus making the RL-based approach more suitable for topology optimization problems requiring fine meshes. As a representative application, we consider the design of a muffler's internal layout to maximize sound transmission loss, a problem where conventional gradient-based methods often fail to reach near-global optimal solutions. By integrating the proposed method with finite element simulations and a reward function shaped by the transmission loss at one or more target frequencies, the RL agent learns policies that directly determine the material distribution for single- or multi-frequency objectives. The resulting muffler designs, based on a two-dimensional finite element model, exhibit near-global optimal performance and outperform those generated by conventional gradient-based methods. The advantages of the proposed approach over standard RL-based topology optimization methods are also clearly demonstrated.
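The element-wise action strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the binary material/void layout encoding, and the rounding of the element count are all assumptions made for the sketch.

```python
import numpy as np

def select_layout(q_values, target_volume_fraction):
    """Sketch of the element-wise action strategy: fill elements in
    descending order of their Q-values until the target volume
    fraction is met, instead of scoring whole element combinations."""
    n = q_values.size
    n_fill = int(round(target_volume_fraction * n))  # elements to fill (assumed rounding)
    order = np.argsort(q_values)[::-1]               # element indices, descending Q-value
    layout = np.zeros(n, dtype=int)                  # 0 = void
    layout[order[:n_fill]] = 1                       # 1 = material-filled
    return layout

# Six elements at 50% volume fraction: the three highest-Q elements are filled.
q = np.array([0.2, 0.9, 0.1, 0.7, 0.5, 0.3])
print(select_layout(q, 0.5))  # → [0 1 0 1 1 0]
```

Because each element gets its own Q-value, the network output scales with the number of elements rather than with the number of feasible layouts, which is what makes the approach tractable on fine meshes.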
Publisher
SPRINGER
ISSN
1615-147X
Keyword (Author)
Element-wise Q-value evaluation; Topology optimization; Muffler design; Noise reduction; Reinforcement learning
Keyword
ACOUSTIC ATTENUATION PERFORMANCE; SHAPE OPTIMIZATION; NEURAL-NETWORKS; DEEP


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.