Controlling Action Space of Reinforcement Learning-based Energy Management in Batteryless Applications

Jun Ick Ahn, Daeyong Kim, Rhan Ha, Hojung Cha

Research output: Contribution to journal › Article › peer-review


Duty cycle management is critical for the energy-neutral operation of batteryless devices. Many efforts have been made to develop effective duty cycling methods, including machine learning-based approaches, but existing methods can barely handle the dynamic harvesting environments of batteryless devices. Specifically, most machine learning-based methods require the harvesting patterns to be collected in advance, as well as manual configuration of the duty-cycle boundaries. In this paper, we propose a configuration-free duty cycling scheme for batteryless devices, called CTRL, with which energy harvesting nodes tune the duty cycle themselves, adapting to the surrounding environment without user intervention. This approach combines reinforcement learning (RL) with a control system to allow the learning algorithm to explore the entire search space automatically. The learning algorithm sets the target state of charge (SoC) of the energy storage, instead of explicitly setting the target task frequency at a given time. The control system then satisfies the target SoC by controlling the duty cycle. An evaluation based on a real implementation of the system using publicly available trace data shows that CTRL outperforms state-of-the-art approaches, resulting in 40% less frequent power failures in energy-scarce environments, while achieving more than ten times the task frequency in energy-rich environments.
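The two-level structure the abstract describes — an RL agent that picks a target SoC, and a control loop that steers the duty cycle toward that target — can be sketched as follows. This is an illustrative toy only: the discrete SoC targets, the bandit-style value update, the reward, and the proportional controller gain are all assumptions made for this sketch and are not taken from the paper.

```python
import random

class SoCTargetAgent:
    """Toy bandit-style learner that selects a target state of charge (SoC).

    Stands in for the paper's RL component: instead of choosing a task
    frequency directly, it chooses which SoC level the storage should hold.
    """

    def __init__(self, targets=(0.3, 0.5, 0.7, 0.9), alpha=0.1, epsilon=0.2):
        self.targets = targets          # candidate target SoC levels (assumed)
        self.q = [0.0] * len(targets)   # running value estimate per target
        self.alpha = alpha              # learning rate
        self.epsilon = epsilon          # exploration probability

    def act(self):
        """Epsilon-greedy choice of a target SoC index."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.targets))
        return max(range(len(self.targets)), key=self.q.__getitem__)

    def update(self, action, reward):
        """Move the value estimate toward the observed reward."""
        self.q[action] += self.alpha * (reward - self.q[action])


def control_duty_cycle(duty, soc, target_soc, gain=0.5):
    """Proportional controller: raise the duty cycle when stored energy
    exceeds the target SoC (spend the surplus), lower it when SoC falls
    short (conserve energy). Output is clamped to [0, 1]."""
    duty += gain * (soc - target_soc)
    return min(1.0, max(0.0, duty))
```

For example, with the storage at 90% SoC and a 50% target, the controller increases a 0.5 duty cycle to 0.7, spending energy down toward the target; at 30% SoC it backs off to 0.4. The agent's reward would, in a real system, reflect task throughput and power-failure avoidance.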

Original language: English
Pages (from-to): 1
Number of pages: 1
Journal: IEEE Internet of Things Journal
Publication status: Accepted/In press - 2023

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Information Systems
  • Hardware and Architecture
  • Computer Science Applications
  • Computer Networks and Communications


