This paper analyzes the performance of conventional instruction prefetching mechanisms in terms of two factors: the cache miss ratio and the average access time for successfully prefetched blocks. Although significant performance improvement can be obtained by improving both factors, most conventional prefetching mechanisms improve only one of the two. To improve both factors, we propose fetching multiple blocks per prefetch request and, in lookahead prefetching, prefetching the sequentially next block together with the block that caused the cache miss. A new method is also presented that initiates a prefetch request earlier, without degrading prefetch accuracy, for a memory system constructed as an interleaved memory. Performance is evaluated through trace-driven simulation; the proposed prefetch scheme reduces the memory access delay time by 45-63% compared with a cache system that performs no prefetching.
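The core idea of lookahead prefetching described above can be illustrated with a minimal sketch. The following toy simulator is not the paper's implementation; it assumes a direct-mapped cache, block-granular addresses, and a prefetch of exactly one sequentially next block on each miss, simply to show why the miss ratio drops on sequential instruction streams.

```python
class LookaheadCache:
    """Toy direct-mapped cache that prefetches the next sequential
    block on every miss (illustrative sketch, not the paper's design)."""

    def __init__(self, num_sets):
        self.num_sets = num_sets
        self.tags = [None] * num_sets  # one block per set (direct-mapped)
        self.accesses = 0
        self.misses = 0

    def _install(self, block):
        # Place a block in its set, evicting whatever was there.
        self.tags[block % self.num_sets] = block

    def access(self, block):
        self.accesses += 1
        if self.tags[block % self.num_sets] != block:
            self.misses += 1
            self._install(block)      # demand fetch of the missing block
            self._install(block + 1)  # lookahead prefetch of the next block

    def miss_ratio(self):
        return self.misses / self.accesses


# On a purely sequential instruction stream, every miss also brings in
# the next block, so only every other block misses: the miss ratio is
# roughly halved relative to a cache with no prefetching.
cache = LookaheadCache(num_sets=128)
for block in range(100):
    cache.access(block)
print(cache.miss_ratio())  # → 0.5
```

Fetching multiple blocks per prefetch request generalizes the `block + 1` line to a loop over `block + 1 .. block + k`, trading bus traffic for further miss-ratio reduction.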
Funding Information:
This paper was supported by a special fund for the university research institute, Korea Research Foundation, 1995.