Integral reinforcement learning with explorations for continuous-time nonlinear systems

Jae Young Lee, Jin Bae Park, Yoon Ho Choi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

11 Citations (Scopus)

Abstract

This paper focuses on integral reinforcement learning (I-RL) for input-affine continuous-time (CT) nonlinear systems in which a known time-varying signal, called an exploration, is injected through the control input. First, we propose a modified I-RL method that effectively eliminates the effects of the exploration on the algorithm. Next, based on this result, an actor-critic I-RL technique is presented for the same class of nonlinear systems with completely unknown dynamics. Finally, a least-squares implementation with exact parameterizations is presented for each proposed method, which can be solved under the given persistently exciting (PE) conditions. A simulation example is given to verify the effectiveness of the proposed methods.
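The classical I-RL scheme this paper builds on evaluates a policy from trajectory data alone: the value satisfies the integral Bellman equation V(x(t)) = ∫ₜ^{t+T} r(x, u) dτ + V(x(t+T)), which is fitted by least squares over many intervals (the PE condition) and alternated with a policy-improvement step. A minimal sketch of that baseline scheme, using an illustrative scalar linear system dx/dt = a·x + b·u with quadratic cost (the system, basis, and all numerical choices are assumptions for illustration, not the paper's example, and no exploration signal is injected here):

```python
import numpy as np

# Illustrative scalar input-affine CT system dx/dt = a*x + b*u
# with running cost q*x^2 + r*u^2; value basis V(x) = p*x^2.
a, b, q, r = 1.0, 1.0, 1.0, 1.0   # unstable open loop (a > 0)
dt, T = 1e-3, 0.05                # Euler step and RL interval length

def rollout(k, x0, steps):
    """Simulate the closed loop u = -k*x and accumulate the integral cost."""
    x, cost = x0, 0.0
    for _ in range(steps):
        u = -k * x
        cost += (q * x**2 + r * u**2) * dt
        x += (a * x + b * u) * dt
    return x, cost

def policy_evaluation(k, x0s):
    """Least-squares fit of p from the integral Bellman equation:
       p*(x(t)^2 - x(t+T)^2) = integral of the running cost over [t, t+T]."""
    phi, y = [], []
    steps = int(T / dt)
    for x0 in x0s:
        x1, cost = rollout(k, x0, steps)
        phi.append(x0**2 - x1**2)
        y.append(cost)
    p, *_ = np.linalg.lstsq(np.array(phi)[:, None], np.array(y), rcond=None)
    return float(p[0])

k = 2.0                            # initial stabilizing gain (a - b*k < 0)
x0s = np.linspace(0.5, 2.0, 8)     # varied data, a PE-like condition
for _ in range(10):                # policy iteration
    p = policy_evaluation(k, x0s)
    k = b * p / r                  # improvement: u = -(b*p/r)*x

print(p)  # approaches the Riccati solution 1 + sqrt(2) ≈ 2.414
```

Note that no model knowledge is used in the evaluation step, only sampled trajectories; the paper's contribution is a modification of this scheme that remains unbiased when a known exploration signal is added to u, plus an actor-critic variant for fully unknown dynamics.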

Original language: English
Title of host publication: 2012 International Joint Conference on Neural Networks, IJCNN 2012
DOIs: 10.1109/IJCNN.2012.6252508
Publication status: Published - 2012 Aug 22
Event: 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012 - Brisbane, QLD, Australia
Duration: 2012 Jun 10 - 2012 Jun 15

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Other

Other: 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012
Country: Australia
City: Brisbane, QLD
Period: 12/6/10 - 12/6/15

Fingerprint

  • Reinforcement learning
  • Nonlinear systems
  • Parameterization

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence

Cite this

Lee, J. Y., Park, J. B., & Choi, Y. H. (2012). Integral reinforcement learning with explorations for continuous-time nonlinear systems. In 2012 International Joint Conference on Neural Networks, IJCNN 2012 [6252508] (Proceedings of the International Joint Conference on Neural Networks). https://doi.org/10.1109/IJCNN.2012.6252508
Lee, Jae Young ; Park, Jin Bae ; Choi, Yoon Ho. / Integral reinforcement learning with explorations for continuous-time nonlinear systems. 2012 International Joint Conference on Neural Networks, IJCNN 2012. 2012. (Proceedings of the International Joint Conference on Neural Networks).
@inproceedings{045aa140e2b04e10bb25847d5aab4f0a,
title = "Integral reinforcement learning with explorations for continuous-time nonlinear systems",
abstract = "This paper focuses on the integral reinforcement learning (I-RL) for input-affine continuous-time (CT) nonlinear systems where a known time-varying signal called an exploration is injected through the control input. First, we propose a modified I-RL method which effectively eliminates the effects of the explorations on the algorithm. Next, based on the result, an actor-critic I-RL technique is presented for the same nonlinear systems with completely unknown dynamics. Finally, the least-squares implementation method with the exact parameterizations is presented for each proposed one which can be solved under the given persistently exciting (PE) conditions. A simulation example is given to verify the effectiveness of the proposed methods.",
author = "Lee, {Jae Young} and Park, {Jin Bae} and Choi, {Yoon Ho}",
year = "2012",
month = "8",
day = "22",
doi = "10.1109/IJCNN.2012.6252508",
language = "English",
isbn = "9781467314909",
series = "Proceedings of the International Joint Conference on Neural Networks",
booktitle = "2012 International Joint Conference on Neural Networks, IJCNN 2012",

}

Lee, JY, Park, JB & Choi, YH 2012, Integral reinforcement learning with explorations for continuous-time nonlinear systems. in 2012 International Joint Conference on Neural Networks, IJCNN 2012., 6252508, Proceedings of the International Joint Conference on Neural Networks, 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012, Brisbane, QLD, Australia, 12/6/10. https://doi.org/10.1109/IJCNN.2012.6252508


TY - GEN

T1 - Integral reinforcement learning with explorations for continuous-time nonlinear systems

AU - Lee, Jae Young

AU - Park, Jin Bae

AU - Choi, Yoon Ho

PY - 2012/8/22

Y1 - 2012/8/22

N2 - This paper focuses on the integral reinforcement learning (I-RL) for input-affine continuous-time (CT) nonlinear systems where a known time-varying signal called an exploration is injected through the control input. First, we propose a modified I-RL method which effectively eliminates the effects of the explorations on the algorithm. Next, based on the result, an actor-critic I-RL technique is presented for the same nonlinear systems with completely unknown dynamics. Finally, the least-squares implementation method with the exact parameterizations is presented for each proposed one which can be solved under the given persistently exciting (PE) conditions. A simulation example is given to verify the effectiveness of the proposed methods.

AB - This paper focuses on the integral reinforcement learning (I-RL) for input-affine continuous-time (CT) nonlinear systems where a known time-varying signal called an exploration is injected through the control input. First, we propose a modified I-RL method which effectively eliminates the effects of the explorations on the algorithm. Next, based on the result, an actor-critic I-RL technique is presented for the same nonlinear systems with completely unknown dynamics. Finally, the least-squares implementation method with the exact parameterizations is presented for each proposed one which can be solved under the given persistently exciting (PE) conditions. A simulation example is given to verify the effectiveness of the proposed methods.

UR - http://www.scopus.com/inward/record.url?scp=84865092901&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84865092901&partnerID=8YFLogxK

U2 - 10.1109/IJCNN.2012.6252508

DO - 10.1109/IJCNN.2012.6252508

M3 - Conference contribution

AN - SCOPUS:84865092901

SN - 9781467314909

T3 - Proceedings of the International Joint Conference on Neural Networks

BT - 2012 International Joint Conference on Neural Networks, IJCNN 2012

ER -

Lee JY, Park JB, Choi YH. Integral reinforcement learning with explorations for continuous-time nonlinear systems. In 2012 International Joint Conference on Neural Networks, IJCNN 2012. 2012. 6252508. (Proceedings of the International Joint Conference on Neural Networks). https://doi.org/10.1109/IJCNN.2012.6252508