Abstract:
In this article, a synchronous reinforcement-learning-based algorithm is developed for input-constrained, partially unknown systems. The proposed control also removes the need for an initial stabilizing control policy. A first-order robust exact differentiator is employed to approximate the unknown drift dynamics. Critic, actor, and disturbance neural networks (NNs) are established to approximate the value function, the control policy, and the disturbance policy, respectively. The Hamilton-Jacobi-Isaacs (HJI) equation is solved by applying the value function approximation technique. Stability of the closed-loop system is guaranteed, and the state and the weight errors of the three NNs are all uniformly ultimately bounded. Finally, simulation results are provided to verify the effectiveness of the proposed method.
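For reference, a standard constrained-input H-infinity formulation of the HJI equation from the literature is sketched below; the symbols f, g, k, Q, R, the input bound \lambda, and the attenuation level \gamma are assumptions taken from that common formulation, and the paper's exact equation and penalty function may differ.

% Minimal sketch of a standard constrained-input HJI equation (assumed form, not necessarily the paper's).
% Dynamics: \dot{x} = f(x) + g(x)u + k(x)d, with each input component bounded by \lambda.
\begin{align}
0 &= Q(x) + U(u^{*}) - \gamma^{2}\lVert d^{*}\rVert^{2}
    + \nabla V^{\top}\bigl(f(x) + g(x)u^{*} + k(x)d^{*}\bigr), \\
U(u) &= 2\int_{0}^{u} \lambda \tanh^{-1}(v/\lambda)^{\top} R \,\mathrm{d}v
    \quad \text{(nonquadratic penalty enforcing the input constraint)}, \\
u^{*} &= -\lambda \tanh\!\Bigl(\tfrac{1}{2\lambda} R^{-1} g(x)^{\top} \nabla V\Bigr), \qquad
d^{*} = \tfrac{1}{2\gamma^{2}} k(x)^{\top} \nabla V .
\end{align}

In actor-critic-disturbance schemes of this kind, the critic NN approximates V, while the actor and disturbance NNs approximate u^{*} and d^{*} through these closed-form expressions.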