Stability and monotone convergence of generalised policy iteration for discrete-time linear quadratic regulations

Tae Yoon Chun, Jae Young Lee, Jin Bae Park, Yoon Ho Choi

Research output: Contribution to journal › Article

5 Citations (Scopus)

Abstract

In this paper, we analyse the convergence and stability properties of generalised policy iteration (GPI) applied to discrete-time linear quadratic regulation problems. GPI is a class of generalised adaptive dynamic programming methods for solving optimal control problems, and is composed of policy evaluation and policy improvement steps. To analyse the convergence and stability of GPI, a dynamic programming (DP) operator is defined, and GPI and its equivalent formulations are presented in terms of this operator. The convergence of the approximate value function to the exact one in policy evaluation is proven based on these equivalent formulations. Furthermore, the positive semi-definiteness, stability, and monotone convergence (PI-mode and VI-mode convergence) of GPI are established under certain conditions on the initial value function. An online least-squares method is also presented for implementing GPI. Finally, numerical simulations are carried out to verify the effectiveness of GPI and to further investigate its convergence and stability properties.
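The abstract outlines the structure of GPI (a finite number of policy-evaluation sweeps between policy-improvement steps) but gives no pseudocode. The following is a minimal NumPy sketch of such a scheme for the discrete-time LQR setting, written for illustration only: the function name, the specific recursions, and the example system are assumptions, not taken from the paper.

```python
import numpy as np

def gpi_lqr(A, B, Q, R, K0, n_eval=3, n_iter=50):
    """Illustrative GPI sketch for the discrete-time LQR problem
    x_{k+1} = A x_k + B u_k with stage cost x'Qx + u'Ru and linear
    policy u_k = -K x_k.  n_eval = 1 behaves like value iteration
    (VI mode); a large n_eval approaches full policy evaluation (PI mode)."""
    K = K0.copy()
    P = np.zeros_like(Q)              # initial value function V(x) = x'Px
    for _ in range(n_iter):
        Ac = A - B @ K                # closed-loop matrix under current policy
        # Partial policy evaluation: a finite number of Bellman backups
        for _ in range(n_eval):
            P = Q + K.T @ R @ K + Ac.T @ P @ Ac
        # Policy improvement: greedy gain with respect to the current P
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return P, K

# Example usage on a simple second-order system (values chosen arbitrarily;
# in general the initial value function/policy should satisfy the paper's
# conditions for the stated convergence guarantees to apply)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.zeros((1, 2))
P, K = gpi_lqr(A, B, Q, R, K0)
```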

Original language: English
Pages (from-to): 437-450
Number of pages: 14
Journal: International Journal of Control
Volume: 89
Issue number: 3
DOIs
Publication status: Published - 2016 Mar 3

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science Applications
