Integral temporal difference learning for continuous-time linear quadratic regulations

Tae Yoon Chun, Jae Young Lee, Jin Bae Park, Yoon Ho Choi

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

In this paper, we propose a temporal difference (TD) learning method, called integral TD learning, that efficiently finds solutions to continuous-time (CT) linear quadratic regulation (LQR) problems in an online fashion when the system matrix A is unknown. The idea originates from a computational reinforcement learning method known as TD(0), the simplest TD method for finite Markov decision processes. For the proposed integral TD method, we mathematically analyze the positive definiteness of the updated value functions, the monotone convergence conditions, and the stability properties concerning the locations of the closed-loop poles, in terms of the learning rate and the discount factor. The proposed method includes the existing value iteration method for CT LQR problems as a special case. Finally, numerical simulations are carried out to verify the effectiveness of the proposed method and to further investigate the aforementioned mathematical properties.
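The abstract does not give the update equations, but the ingredients it names (a TD(0)-style update, a quadratic value function for CT LQR, a learning rate, and a discount factor) suggest the following minimal sketch. It evaluates a quadratic value V(x) = xᵀPx along trajectories of a linear system whose A matrix is used only to simulate data, never by the learner. All concrete choices here (the dynamics, the gain K, the reinforcement interval T, the learning rate alpha, and the discount rate gamma) are illustrative assumptions, not the paper's settings, and the sketch evaluates a fixed stabilizing policy rather than reproducing the paper's full value-iteration scheme.

```python
import numpy as np

# Sketch of an integral TD(0)-style value update for CT LQR.
# Assumption: V(x) = x' P x, with P parametrized by theta via
# quadratic features, updated from the integral TD error over
# intervals of length T. The matrix A below is simulation-only.

rng = np.random.default_rng(0)

A = np.array([[0.0, 1.0], [-1.0, -0.5]])  # unknown to the learner
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                # state cost weight
R = np.eye(1)                # input cost weight
K = np.array([[1.0, 1.0]])   # assumed fixed stabilizing gain, u = -K x

dt = 1e-3      # simulation step
T = 0.05       # reinforcement interval for the integral TD error
alpha = 0.1    # learning rate (illustrative)
gamma = 0.1    # discount rate (illustrative)
n_updates = 2000

def quad_features(x):
    """Independent entries of x x', so that theta . phi(x) = x' P x."""
    return np.array([x[0]**2, 2 * x[0] * x[1], x[1]**2])

theta = np.zeros(3)  # parametrizes P = [[t0, t1], [t1, t2]]

x = rng.standard_normal(2)
for _ in range(n_updates):
    # Integrate the closed loop over [t, t+T] (Euler scheme for brevity),
    # accumulating the discounted running cost observed on the trajectory.
    x0 = x.copy()
    cost, s = 0.0, 0.0
    for _ in range(int(T / dt)):
        u = -K @ x
        cost += np.exp(-gamma * s) * (x @ Q @ x + u @ R @ u) * dt
        x = x + (A @ x + B @ u) * dt
        s += dt

    # Integral TD error: observed interval cost plus the discounted
    # value at the next state, minus the current value estimate.
    delta = (cost
             + np.exp(-gamma * T) * (theta @ quad_features(x))
             - theta @ quad_features(x0))
    theta = theta + alpha * delta * quad_features(x0)

    # Restart from a random state once the trajectory has decayed,
    # so the quadratic features remain excited.
    if np.linalg.norm(x) < 1e-2:
        x = rng.standard_normal(2)

P = np.array([[theta[0], theta[1]], [theta[1], theta[2]]])
print("Learned value matrix P:\n", P)
```

As in TD(0) for finite MDPs, the update moves the value estimate only along the sampled trajectory, one interval at a time; a value-iteration variant in the spirit of the paper would additionally recompute the gain K from the current P estimate between updates.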

Original language: English
Pages (from-to): 226-238
Number of pages: 13
Journal: International Journal of Control, Automation and Systems
Volume: 15
Issue number: 1
DOIs
Publication status: Published - 2017 Feb 1

Bibliographical note

Publisher Copyright:
© 2017, Institute of Control, Robotics and Systems and The Korean Institute of Electrical Engineers and Springer-Verlag Berlin Heidelberg.

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science Applications

