Related Researcher

권철현

Kwon, Cheolhyeon
High Assurance Mobility Control Lab.

Detailed Information


Full metadata record

DC Field Value Language
dc.citation.startPage 130095 -
dc.citation.title APPLIED MATHEMATICS AND COMPUTATION -
dc.citation.volume 527 -
dc.contributor.author Truong, Hoai Vu Anh -
dc.contributor.author Lee, Chanyong -
dc.contributor.author Kwon, Cheolhyeon -
dc.date.accessioned 2026-04-27T10:30:57Z -
dc.date.available 2026-04-27T10:30:57Z -
dc.date.created 2026-04-24 -
dc.date.issued 2026-10 -
dc.description.abstract The performance of nonlinear multi-agent systems (MAS) is severely impacted by unstructured dynamical behaviors and unstructured control coefficients. To address these challenges, this paper proposes a radial basis function neural network (RBFNN)-based adaptive reinforcement learning control scheme, synthesized with disturbance observers (DO). The RBFNN mechanism is first employed to approximate the unstructured control coefficients, whilst the DO actively mitigates the adverse influences of the unstructured dynamical behaviors and uncertainties. With the aid of the RBFNN and DO, an actor-critic neural network (ACNN)-based simplified reinforcement learning (SRL) distributed backstepping consensus control scheme is systematically developed to optimize the control policy and substantially enhance overall system performance. In the Lyapunov sense, the proposed control scheme is theoretically proven to be stable, satisfying the semi-globally uniformly ultimately bounded condition. Its effectiveness compared to an existing baseline is validated through numerical simulations, demonstrating notable improvements not only in control performance but also in the approximation of control coefficients and the estimation of unstructured dynamical behaviors, thereby improving output consensus. -
dc.identifier.bibliographicCitation APPLIED MATHEMATICS AND COMPUTATION, v.527, pp.130095 -
dc.identifier.doi 10.1016/j.amc.2026.130095 -
dc.identifier.issn 0096-3003 -
dc.identifier.scopusid 2-s2.0-105035249898 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/91563 -
dc.identifier.wosid 001743351100001 -
dc.language English -
dc.publisher ELSEVIER SCIENCE INC -
dc.title Simplified reinforcement learning-based distributed consensus neural network control for second-order uncertain nonlinear multi-agent systems -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Mathematics, Applied -
dc.relation.journalResearchArea Mathematics -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Neural network -
dc.subject.keywordAuthor Disturbance observer -
dc.subject.keywordAuthor Backstepping control -
dc.subject.keywordAuthor Reinforcement learning -
dc.subject.keywordAuthor Multi-agent systems -
dc.subject.keywordPlus TRACKING -
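The abstract above builds on a radial basis function neural network (RBFNN) as an online approximator for unknown control coefficients. The following is a minimal, illustrative Python sketch of that general RBFNN approximation idea only; the Gaussian centers, shared width, adaptation gain, and the target function `sin(x)` are assumptions chosen for demonstration and are not taken from the paper, whose actual adaptation laws come from the Lyapunov-based design.

```python
import numpy as np

class RBFNN:
    """Scalar RBFNN approximator f_hat(x) = W_hat^T phi(x) with a
    gradient-style weight adaptation law (illustrative, not the
    paper's Lyapunov-derived update)."""

    def __init__(self, centers, width, gamma=0.5):
        self.centers = np.asarray(centers, dtype=float)  # basis centers
        self.width = float(width)                        # shared Gaussian width
        self.weights = np.zeros(len(self.centers))       # adaptive weights W_hat
        self.gamma = gamma                               # adaptation gain

    def phi(self, x):
        # Gaussian basis vector phi(x)
        return np.exp(-((x - self.centers) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        # f_hat(x) = W_hat^T phi(x)
        return self.weights @ self.phi(x)

    def adapt(self, x, error, dt):
        # Discretized update of W_hat_dot = gamma * phi(x) * error
        self.weights += self.gamma * self.phi(x) * error * dt

# Usage: approximate an unknown scalar function f(x) = sin(x) online
# from randomly sampled states, as a stand-in for an unknown
# control coefficient.
f = np.sin
net = RBFNN(centers=np.linspace(-3.0, 3.0, 15), width=0.6)
rng = np.random.default_rng(0)
for _ in range(20000):
    x = rng.uniform(-3.0, 3.0)
    net.adapt(x, f(x) - net.predict(x), dt=0.05)

# Residual approximation error over the interior of the training range
err = max(abs(f(x) - net.predict(x)) for x in np.linspace(-2.5, 2.5, 50))
```

With enough well-placed centers and a small enough effective step (`gamma * dt`), the weights settle near the least-squares fit and the residual error stays small, mirroring the bounded-approximation-error property the paper's stability analysis relies on.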


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.