An on-chip cache compression technique to reduce decompression overhead and design complexity

Jang Soo Lee, Won Kee Hong, Shin-Dug Kim

Research output: Contribution to journal › Article

22 Citations (Scopus)

Abstract

This research explores a compressed memory hierarchy model that can increase both the effective memory space and bandwidth at each level of the memory hierarchy. It is well known that decompression time has a critical effect on memory access time, and that variable-sized compressed blocks tend to increase the design complexity of compressed memory systems. This paper proposes a selective compressed memory system (SCMS) incorporating a compressed cache architecture and its management method. To reduce or hide decompression overhead, the SCMS employs several effective techniques, including selective compression, parallel decompression, and the use of a decompression buffer. In addition, a fixed memory-space allocation method is used to manage compressed blocks efficiently. Trace-driven simulation shows that the SCMS approach can not only reduce the on-chip cache miss ratio and data traffic by about 35% and 53%, respectively, but also achieve a 20% reduction in average memory access time (AMAT) over conventional memory systems (CMS). Moreover, this approach can provide lower memory traffic at a lower cost than CMS with some architectural enhancement. Most importantly, the SCMS is an attractive approach for future computer systems because it offers high performance in cases of long DRAM latency and limited bus bandwidth.
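The selective-compression idea summarized above can be sketched in a few lines: a block is stored compressed only when its compressed form fits a fixed-size slot (here, half a cache block), which keeps allocation simple; otherwise it is stored uncompressed. This is an illustrative sketch only — `zlib` stands in for whatever hardware compression algorithm the paper actually uses, and `BLOCK_SIZE`, `THRESHOLD`, and `store_block` are hypothetical names, not from the paper.

```python
import os
import zlib

BLOCK_SIZE = 64               # illustrative cache-block size in bytes
THRESHOLD = BLOCK_SIZE // 2   # fixed half-block slot for compressed blocks

def store_block(data: bytes):
    """Selective compression: keep a block compressed only when its
    compressed form fits the fixed half-block slot; otherwise fall back
    to storing it uncompressed.  Returns (is_compressed, payload)."""
    assert len(data) == BLOCK_SIZE
    compressed = zlib.compress(data)
    if len(compressed) <= THRESHOLD:
        return True, compressed   # occupies one fixed-size slot
    return False, data            # incompressible: stored as-is

# Highly redundant data compresses well below the threshold...
ok, payload = store_block(b"\x00" * BLOCK_SIZE)
# ...while random (incompressible) data is kept uncompressed.
bad, raw = store_block(os.urandom(BLOCK_SIZE))
```

Because every compressed block occupies the same fixed-size slot, the allocator never deals with variable-sized regions — which is exactly the design-complexity problem the fixed allocation method targets.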

Original language: English
Pages (from-to): 1365-1382
Number of pages: 18
Journal: Journal of Systems Architecture
Volume: 46
Issue number: 15
Publication status: Published - 2000 Dec 31


All Science Journal Classification (ASJC) codes

  • Software
  • Hardware and Architecture
