Applied Mathematics and Computation, vol. 527, art. no. 130095
Abstract
The performance of nonlinear multi-agent systems (MAS) is severely degraded by unstructured dynamical behaviors and unstructured control coefficients. To address these challenges, this paper proposes a radial basis function neural network (RBFNN)-based adaptive reinforcement learning control scheme combined with disturbance observers (DO). The RBFNN is first employed to approximate the unstructured control coefficients, while the DO actively compensates for the adverse effects of the unstructured dynamical behaviors and uncertainties. With the aid of the RBFNN and DO, an actor-critic neural network (ACNN)-based simplified reinforcement learning (SRL) distributed backstepping consensus control scheme is systematically developed to optimize the control policy and substantially enhance overall system performance. The proposed scheme is theoretically proven to be stable in the Lyapunov sense, satisfying the semi-globally uniformly ultimately bounded (SGUUB) condition. Numerical simulations against an existing baseline validate its effectiveness, demonstrating notable improvements in control performance, in the approximation of the control coefficients, and in the estimation of the unstructured dynamical behaviors, which together yield better output consensus.
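For context, the RBFNN approximation invoked above typically takes the standard form used in adaptive control over a compact set; the following is an illustrative sketch of that form, assuming the usual Gaussian basis functions (the symbols $W^*$, $S(Z)$, $\mu_i$, $\eta_i$ are generic notation, not taken from the paper):
\[
  f(Z) = W^{*\top} S(Z) + \varepsilon(Z), \qquad |\varepsilon(Z)| \le \bar{\varepsilon} \ \ \text{on a compact set } \Omega_Z,
\]
\[
  S_i(Z) = \exp\!\left( -\frac{\|Z - \mu_i\|^2}{\eta_i^2} \right), \qquad i = 1, \dots, N,
\]
where $f(Z)$ stands for an unknown continuous function (here, an unstructured control coefficient), $W^*$ is the ideal weight vector estimated online by the adaptive law, $\varepsilon(Z)$ is the bounded approximation error, and $\mu_i$, $\eta_i$ are the center and width of the $i$-th basis function.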