Journal of Systems Engineering and Electronics ›› 2024, Vol. 35 ›› Issue (4): 1042-1052.doi: 10.23919/JSEE.2024.000067

• CONTROL THEORY AND APPLICATION •

Computational intelligence interception guidance law using online off-policy integral reinforcement learning

Qi WANG, Zhizhong LIAO

  • Received: 2022-10-31 Online: 2024-08-18 Published: 2024-08-06
  • Contact: Qi WANG E-mail: wangqibuaa@126.com; lzzcama@139.com
  • About author:
    WANG Qi was born in 1982. He received his bachelor's degree in spacecraft design from Beihang University (BUAA), Beijing, in 2005 and his Ph.D. degree in aircraft design from the Chinese Aeronautical Establishment, Beijing, in 2023. He is currently a senior engineer at the China Airborne Missile Academy. His research interests are navigation, guidance and control of tactical missiles, machine learning, adaptive dynamic programming, and reinforcement learning. E-mail: wangqibuaa@126.com

    LIAO Zhizhong was born in 1962. He received his bachelor's and master's degrees in aircraft design from Northwestern Polytechnical University, Xi'an, in 1982 and 1984, respectively, and his Ph.D. degree in engineering mechanics from Tsinghua University, Beijing, in 2001. He is currently the Deputy Chief Designer of the China Airborne Missile Academy and a professor at Northwestern Polytechnical University. His research interests are aircraft design and project management based on systems engineering. E-mail: lzzcama@139.com

Abstract:

The missile interception problem can be regarded as a two-person zero-sum differential game, whose solution depends on the Hamilton-Jacobi-Isaacs (HJI) equation. Due to the nonlinearity of the HJI equation, a closed-form solution has been proved unobtainable, and many iterative algorithms have been proposed to solve it. The simultaneous policy updating algorithm (SPUA) is an effective algorithm for solving the HJI equation, but it is an on-policy integral reinforcement learning (IRL) method: for an online implementation of SPUA, the disturbance signals need to be adjustable, which is unrealistic. In this paper, an off-policy IRL algorithm based on SPUA is proposed that requires no knowledge of the system dynamics. A neural-network-based online adaptive critic implementation scheme of the off-policy IRL algorithm is then presented. Based on the online off-policy IRL method, a computational intelligence interception guidance (CIIG) law is developed for intercepting high-maneuvering targets. As a model-free method, it achieves interception by measuring system data online. The effectiveness of the CIIG law is verified in two missile-target engagement scenarios.
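To illustrate the simultaneous-policy-update structure that SPUA builds on, the sketch below solves a scalar linear-quadratic zero-sum game in a model-based setting: both the controller gain and the disturbance gain are updated at every iteration until the value satisfies the game algebraic Riccati equation. This is an illustrative toy problem only; all numerical values (`a`, `b`, `k`, `q`, `r`, `gamma`) are assumptions, and the paper's actual contribution is the model-free, off-policy IRL version of this iteration, which this sketch does not reproduce.

```python
import math

# Scalar zero-sum LQ game:  dx/dt = a*x + b*u + k*w,
# cost J = integral of (q*x^2 + r*u^2 - gamma^2*w^2) dt,
# with u minimizing and the disturbance w maximizing.
# Model-based sketch of a SPUA-style iteration: both players'
# policies are improved simultaneously after each evaluation step.
a, b, k = 1.0, 1.0, 1.0
q, r, gamma = 1.0, 1.0, 2.0

K, L = 2.0, 0.0   # initial stabilizing controller gain, zero disturbance gain
P = 0.0
for _ in range(100):
    A_cl = a - b * K + k * L          # closed-loop dynamics under both policies
    assert A_cl < 0, "current policies must keep the closed loop stable"
    # Policy evaluation: scalar Lyapunov equation
    #   2*A_cl*P + q + r*K^2 - gamma^2*L^2 = 0
    P_new = -(q + r * K**2 - gamma**2 * L**2) / (2.0 * A_cl)
    # Simultaneous policy improvement for both players
    K = b * P_new / r
    L = k * P_new / gamma**2
    if abs(P_new - P) < 1e-12:
        P = P_new
        break
    P = P_new

# The fixed point satisfies the game algebraic Riccati equation
#   2*a*P + q - P^2*(b^2/r - k^2/gamma^2) = 0
residual = 2 * a * P + q - P**2 * (b**2 / r - k**2 / gamma**2)
print(P, residual)
```

In the model-free setting of the paper, the policy-evaluation step above (which uses `a`, `b`, `k` explicitly) is replaced by a least-squares fit to integral temporal-difference data measured along trajectories generated by arbitrary behavior policies, which is what makes the off-policy formulation avoid adjustable disturbance signals.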

Key words: two-person zero-sum differential games, Hamilton–Jacobi–Isaacs (HJI) equation, off-policy integral reinforcement learning (IRL), online learning, computational intelligence interception guidance (CIIG) law