This paper proposes a new architecture, called Adaptive PREfetching and Scheduling (APRES), which improves the cache efficiency of GPUs. APRES relies on the observation that GPU loads tend to have either high locality or strided access patterns across warps. APRES schedules warps so that as many cache hits as possible are generated before any cache miss occurs. Without directly predicting future cache hits/misses for each warp, APRES creates a group of warps that will execute the same static load shortly and prioritizes the grouped warps. If the first executed warp in the group hits the cache, the remaining grouped warps are likely to access the same cache lines. Otherwise, APRES considers the load to be of a strided type and generates prefetch requests for the grouped warps. In addition, APRES includes a new dynamic L1 prefetch and data cache partitioning to reduce contention between demand-fetched and prefetched lines. In our evaluation, APRES achieves a 27.8 percent performance improvement.
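The grouping and leader-warp decision described above can be sketched in software. This is a minimal illustrative model, not the hardware mechanism: all names (`schedule_group`, the warp dictionaries, the fixed `stride` parameter) are invented for illustration, and the real APRES logic operates on scheduler and cache state inside the GPU pipeline.

```python
# Hypothetical sketch of the APRES warp-grouping decision.
# A "warp" here is a dict with an id, the PC of its next static load,
# and the address that load would access. The cache is modeled as a
# set of resident addresses; "stride" is an assumed per-warp stride.

def schedule_group(warps, load_pc, cache, stride):
    """Group warps that will soon execute the same static load, then
    either prioritize them (locality case) or prefetch for them
    (strided case), based on whether the leader warp hits the cache."""
    group = [w for w in warps if w["next_load_pc"] == load_pc]
    if not group:
        return []
    leader = group[0]
    if leader["addr"] in cache:
        # Leader hit: the load shows high locality across warps, so
        # execute the grouped warps back-to-back to reuse the lines.
        return [("execute", w["id"]) for w in group]
    # Leader miss: treat the load as strided and issue prefetches
    # for the other grouped warps at stride-spaced addresses.
    actions = [("execute", leader["id"])]
    for w in group[1:]:
        actions.append(("prefetch", leader["addr"] + stride * w["id"]))
    return actions
```

In the hit case every grouped warp is simply prioritized; in the miss case only the leader executes immediately while the rest of the group gets stride-predicted prefetches, mirroring the two load behaviors the paper identifies.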
Bibliographical note

Funding Information:
This paper is an extension of our previous study, “APRES: Improving Cache Efficiency by Exploiting Load Characteristics on GPUs,” which appeared in the 43rd International Symposium on Computer Architecture (ISCA 2016). This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP)(No. NRF-2018R1A2A2A05018941), the Technology Innovation Program (No. 10080590, Technology Development of Unified Memory System for Heterogeneous System Architecture) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea) and Korea Semiconductor Research Consortium (KSRC) support program for the development of the future semiconductor device, and the Graduate School of YONSEI University Research Scholarship Grants in 2017.
© 1968-2012 IEEE.
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- Hardware and Architecture
- Computational Theory and Mathematics