Dynamic Power Management (DPM) is a design methodology that reduces the power consumption of electronic systems by selectively shutting down idle system resources. The effectiveness of a power management scheme depends critically on accurate modeling of service requests and on the computation of the control policy. In this work, we present an online adaptive DPM scheme for systems that can be modeled as finite-state Markov chains. Online adaptation is required to handle initially unknown or nonstationary workloads, which are common in real-life systems. Our approach starts from exact policy optimization techniques for a known, stationary stochastic environment and extends optimal stationary control policies to the unknown and nonstationary stochastic environments found in practical applications. We introduce two workload learning techniques based on sliding windows and study their properties. Furthermore, a two-dimensional interpolation technique is introduced to obtain adaptive policies from a precomputed look-up table of optimal stationary policies. The effectiveness of our approach is demonstrated by a complete DPM implementation on a laptop computer with a power-manageable hard disk, which compares very favorably with existing DPM schemes.
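The two ideas named in the abstract, sliding-window workload learning and two-dimensional interpolation over a precomputed policy table, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the class and function names, the unit-spaced policy grid, and the default estimate of 0.5 for an empty window are all hypothetical choices for the sketch.

```python
import math
from collections import deque


class SlidingWindowEstimator:
    """Estimate the workload's request probability from only the most
    recent `window` observations, so the estimate tracks nonstationary
    workloads instead of averaging over the whole history."""

    def __init__(self, window):
        self.samples = deque(maxlen=window)  # old samples fall off automatically

    def observe(self, request_arrived):
        self.samples.append(1 if request_arrived else 0)

    def estimate(self):
        # Empirical request frequency over the window; 0.5 is an
        # arbitrary uninformative default used before any observation.
        if not self.samples:
            return 0.5
        return sum(self.samples) / len(self.samples)


def interpolate_policy(table, x, y):
    """Bilinear interpolation in a precomputed policy look-up table.

    `table` maps integer grid points (i, j) on a unit-spaced grid of
    workload parameters to policy values; (x, y) is the current
    workload estimate, which generally falls between grid points."""
    i, j = math.floor(x), math.floor(y)
    fx, fy = x - i, y - j
    # Weight the four surrounding grid entries by their distance to (x, y).
    return (table[(i, j)] * (1 - fx) * (1 - fy)
            + table[(i + 1, j)] * fx * (1 - fy)
            + table[(i, j + 1)] * (1 - fx) * fy
            + table[(i + 1, j + 1)] * fx * fy)
```

For example, an estimator with a window of three observations that has seen requests in two of the last three periods reports 2/3, and a point midway between four policy grid entries receives their average.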
Number of pages: 17
Journal: IEEE Transactions on Computers
Publication status: Published - November 2002
Bibliographical note (Funding Information):
The authors thank the anonymous referees for their careful reviews and many helpful comments. This research was supported in part by the US National Science Foundation under grant CCR-9901190.
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- Hardware and Architecture
- Computational Theory and Mathematics