This paper presents neural adaptive control methods for a class of chaotic nonlinear systems in the presence of input constraints and unknown dynamics. To attenuate the influence of the constrained input caused by actuator saturation, an effective auxiliary system is constructed to prevent the stability of the closed-loop system from being destroyed. Radial basis function neural networks (RBF-NNs) are used to learn the unknown dynamics online, without requiring an off-line training phase. Both state-feedback and output-feedback control laws are developed. In the output-feedback case, a high-order sliding mode (HOSM) observer is utilized to estimate the unmeasurable system states. Simulation results are presented to verify the effectiveness of the proposed schemes.
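The online-learning idea behind the RBF-NN approximation can be illustrated with a minimal sketch. This is not the paper's actual control law: the plant, basis centers, and gains below are illustrative assumptions, and the gradient-type weight update (driven by the tracking error, with no off-line training) stands in for the adaptive laws the abstract refers to.

```python
import numpy as np

# Hypothetical 1-D plant x_dot = f(x) + u, with f unknown to the controller.
def f_true(x):
    return -x + 0.5 * np.sin(2.0 * x)   # stands in for the unknown dynamics

# RBF network f_hat(x) = W^T phi(x) with fixed Gaussian centers.
centers = np.linspace(-2.0, 2.0, 11)
width = 0.5
def phi(x):
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

W = np.zeros_like(centers)    # weights adapted online, no off-line training
gamma = 5.0                   # adaptation gain (illustrative value)
k = 2.0                       # feedback gain (illustrative value)
dt = 1e-3
x, x_ref = 1.0, 0.0           # initial state and constant reference

for _ in range(20000):
    e = x - x_ref
    f_hat = W @ phi(x)
    u = -k * e - f_hat            # feedback plus cancellation of estimate
    W += dt * gamma * e * phi(x)  # gradient-type online adaptation law
    x += dt * (f_true(x) + u)     # Euler step of the plant

print(abs(x - x_ref))   # tracking error after adaptation
```

The update `W += gamma * e * phi(x) * dt` is the standard Lyapunov-motivated adaptive law for a linearly parameterized approximator; the actuator-saturation auxiliary system and the HOSM observer of the paper are omitted here for brevity.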
Deep reinforcement learning is a focal research area in artificial intelligence. The principle of optimality in dynamic programming is key to the success of reinforcement learning methods. The principle of adaptive dynamic programming (ADP) is first presented as an alternative to direct dynamic programming (DP), and the inherent relationship between ADP and deep reinforcement learning is developed. Next, analytics intelligence is discussed as a necessary requirement for real reinforcement learning. Finally, the principle of parallel dynamic programming, which integrates dynamic programming and analytics intelligence, is presented as a direction for future computational intelligence.
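The principle of optimality the abstract invokes can be shown concretely with value iteration on a toy problem. This is only a sketch of the Bellman backup underlying DP, ADP, and reinforcement learning, not the paper's parallel dynamic programming scheme; the MDP below (a line of states with a reward at the right end) is an invented example.

```python
import numpy as np

# Tiny deterministic MDP: states 0..4 on a line, actions move left or right,
# reward 1 for reaching (or staying at) state 4.
n_states, actions = 5, (-1, +1)
gamma = 0.9

def step(s, a):
    s2 = min(max(s + a, 0), n_states - 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

# Value iteration: repeatedly back up Bellman's optimality equation
# V(s) = max_a [ r(s,a) + gamma * V(s') ] until the values stop changing.
V = np.zeros(n_states)
for _ in range(500):
    V_new = np.array([max(step(s, a)[1] + gamma * V[step(s, a)[0]]
                          for a in actions) for s in range(n_states)])
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# The greedy policy with respect to V* is optimal: always move right.
greedy = [max(actions, key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])
          for s in range(n_states)]
print(greedy)
```

ADP and deep reinforcement learning replace the exact tabular backup above with a learned approximation of `V` (or of the policy), which is what makes the approach tractable when the state space is too large to enumerate.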