An on-chip cache compression technique to reduce decompression overhead and design complexity

Jang Soo Lee, Won Kee Hong, Shin-Dug Kim

Research output: Contribution to journal › Article

22 Citations (Scopus)

Abstract

This research explores a compressed memory hierarchy model that can increase both the effective memory space and the bandwidth of each level of the memory hierarchy. It is well known that decompression time has a critical effect on memory access time, and that variable-sized compressed blocks tend to increase the design complexity of compressed memory systems. This paper proposes a selective compressed memory system (SCMS) incorporating a compressed cache architecture and its management method. To reduce or hide decompression overhead, the SCMS employs several effective techniques, including selective compression, parallel decompression, and the use of a decompression buffer. In addition, a fixed memory space allocation method is used to manage the compressed blocks efficiently. Trace-driven simulation shows that the SCMS approach can not only reduce the on-chip cache miss ratio and data traffic by about 35% and 53%, respectively, but also achieve a 20% reduction in average memory access time (AMAT) over conventional memory systems (CMS). Moreover, this approach can provide lower memory traffic at a lower cost than CMS with some architectural enhancement. Most importantly, the SCMS is an attractive approach for future computer systems because it offers high performance in cases of long DRAM latency and limited bus bandwidth.
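The abstract's central idea, selective compression combined with fixed memory space allocation, can be sketched as a simple storage policy: compress a cache block only when the result fits a fixed half-size slot, and otherwise store it uncompressed so it incurs no decompression cost on access. The sketch below is an illustration only, not the paper's implementation: `zlib` stands in for whatever compression algorithm the SCMS actually uses, and the 64-byte block size and half-size slot threshold are assumed values for the example.

```python
import zlib

BLOCK_SIZE = 64               # assumed cache-block size in bytes (illustrative)
SLOT_SIZE = BLOCK_SIZE // 2   # assumed fixed slot size for compressed blocks

def store_block(block: bytes):
    """Selective compression policy: keep the compressed form only if it
    fits the fixed half-size slot; otherwise store the block uncompressed,
    avoiding any decompression overhead when it is later accessed."""
    assert len(block) == BLOCK_SIZE
    compressed = zlib.compress(block)  # stand-in for the paper's compressor
    if len(compressed) <= SLOT_SIZE:
        return ("compressed", compressed)    # occupies one fixed-size slot
    return ("uncompressed", block)           # poorly compressible: stored as-is

# A highly compressible block (all zeros) is kept compressed, while a block
# with no redundancy (64 distinct byte values) is kept uncompressed.
print(store_block(bytes(BLOCK_SIZE))[0])
print(store_block(bytes(range(BLOCK_SIZE)))[0])
```

Storing compressed blocks in fixed-size slots is what lets the design avoid the bookkeeping complexity of variable-sized compressed blocks that the abstract highlights.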

Original language: English
Pages (from-to): 1365-1382
Number of pages: 18
Journal: Journal of Systems Architecture
Volume: 46
Issue number: 15
DOI: 10.1016/S1383-7621(00)00030-8
Publication status: Published - 2000 Dec 31

All Science Journal Classification (ASJC) codes

  • Software
  • Hardware and Architecture

Cite this

@article{3d67c322b8544a87930fa5793f599662,
title = "An on-chip cache compression technique to reduce decompression overhead and design complexity",
abstract = "This research explores a compressed memory hierarchy model that can increase both the effective memory space and the bandwidth of each level of the memory hierarchy. It is well known that decompression time has a critical effect on memory access time, and that variable-sized compressed blocks tend to increase the design complexity of compressed memory systems. This paper proposes a selective compressed memory system (SCMS) incorporating a compressed cache architecture and its management method. To reduce or hide decompression overhead, the SCMS employs several effective techniques, including selective compression, parallel decompression, and the use of a decompression buffer. In addition, a fixed memory space allocation method is used to manage the compressed blocks efficiently. Trace-driven simulation shows that the SCMS approach can not only reduce the on-chip cache miss ratio and data traffic by about 35{\%} and 53{\%}, respectively, but also achieve a 20{\%} reduction in average memory access time (AMAT) over conventional memory systems (CMS). Moreover, this approach can provide lower memory traffic at a lower cost than CMS with some architectural enhancement. Most importantly, the SCMS is an attractive approach for future computer systems because it offers high performance in cases of long DRAM latency and limited bus bandwidth.",
author = "Lee, {Jang Soo} and Hong, {Won Kee} and Kim, {Shin-Dug}",
year = "2000",
month = "12",
day = "31",
doi = "10.1016/S1383-7621(00)00030-8",
language = "English",
volume = "46",
pages = "1365--1382",
journal = "Journal of Systems Architecture",
issn = "1383-7621",
publisher = "Elsevier",
number = "15",
}

An on-chip cache compression technique to reduce decompression overhead and design complexity. / Lee, Jang Soo; Hong, Won Kee; Kim, Shin-Dug.

In: Journal of Systems Architecture, Vol. 46, No. 15, 31.12.2000, p. 1365-1382.

Research output: Contribution to journal › Article

TY - JOUR

T1 - An on-chip cache compression technique to reduce decompression overhead and design complexity

AU - Lee, Jang Soo

AU - Hong, Won Kee

AU - Kim, Shin-Dug

PY - 2000/12/31

Y1 - 2000/12/31

N2 - This research explores a compressed memory hierarchy model that can increase both the effective memory space and the bandwidth of each level of the memory hierarchy. It is well known that decompression time has a critical effect on memory access time, and that variable-sized compressed blocks tend to increase the design complexity of compressed memory systems. This paper proposes a selective compressed memory system (SCMS) incorporating a compressed cache architecture and its management method. To reduce or hide decompression overhead, the SCMS employs several effective techniques, including selective compression, parallel decompression, and the use of a decompression buffer. In addition, a fixed memory space allocation method is used to manage the compressed blocks efficiently. Trace-driven simulation shows that the SCMS approach can not only reduce the on-chip cache miss ratio and data traffic by about 35% and 53%, respectively, but also achieve a 20% reduction in average memory access time (AMAT) over conventional memory systems (CMS). Moreover, this approach can provide lower memory traffic at a lower cost than CMS with some architectural enhancement. Most importantly, the SCMS is an attractive approach for future computer systems because it offers high performance in cases of long DRAM latency and limited bus bandwidth.

AB - This research explores a compressed memory hierarchy model that can increase both the effective memory space and the bandwidth of each level of the memory hierarchy. It is well known that decompression time has a critical effect on memory access time, and that variable-sized compressed blocks tend to increase the design complexity of compressed memory systems. This paper proposes a selective compressed memory system (SCMS) incorporating a compressed cache architecture and its management method. To reduce or hide decompression overhead, the SCMS employs several effective techniques, including selective compression, parallel decompression, and the use of a decompression buffer. In addition, a fixed memory space allocation method is used to manage the compressed blocks efficiently. Trace-driven simulation shows that the SCMS approach can not only reduce the on-chip cache miss ratio and data traffic by about 35% and 53%, respectively, but also achieve a 20% reduction in average memory access time (AMAT) over conventional memory systems (CMS). Moreover, this approach can provide lower memory traffic at a lower cost than CMS with some architectural enhancement. Most importantly, the SCMS is an attractive approach for future computer systems because it offers high performance in cases of long DRAM latency and limited bus bandwidth.

UR - http://www.scopus.com/inward/record.url?scp=0034499403&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0034499403&partnerID=8YFLogxK

U2 - 10.1016/S1383-7621(00)00030-8

DO - 10.1016/S1383-7621(00)00030-8

M3 - Article

AN - SCOPUS:0034499403

VL - 46

SP - 1365

EP - 1382

JO - Journal of Systems Architecture

JF - Journal of Systems Architecture

SN - 1383-7621

IS - 15

ER -