This paper presents a pre-execution approach for improving GPU performance, called P-mode (pre-execution mode). GPUs rely on a large number of concurrent threads to hide the latency of operations. However, certain long-latency operations such as off-chip memory accesses often take hundreds of cycles and hence lead to stalls even in the presence of thread concurrency and fast thread switching. It is unclear whether adding more threads can improve latency tolerance, because more threads increase memory contention; they also increase on-chip storage demands. Instead, we propose that when a warp is stalled on a long-latency operation it enters P-mode. In P-mode, the warp continues to fetch and decode successive instructions to identify any independent instruction that is not on the long-latency dependence chain. These independent instructions are then pre-executed. To avoid write-after-write and write-after-read hazards, output values produced in P-mode are written to renamed physical registers. We exploit register file underutilization to repurpose a few unused registers to store the P-mode results. When a warp switches from P-mode back to normal execution mode, it reuses the pre-executed results by reading the renamed registers. Any global load issued in P-mode is transformed into a pre-load that fetches its data into the L1 cache, reducing future memory access penalties. Our evaluation shows a 23% performance improvement for memory-intensive applications, without negatively impacting other application categories.
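The core P-mode mechanism described above can be illustrated with a minimal sketch. This is not the paper's implementation, only a simplified single-warp model with an invented instruction format: while a warp waits on a long-latency load, downstream instructions that are independent of the load's dependence chain are pre-executed into renamed (unused) physical registers, so the results can be reused when the warp resumes.

```python
# Hypothetical sketch of P-mode pre-execution, assuming a toy ISA where
# every instruction is (dst, src1, src2) and computes dst = src1 + src2.
def p_mode_preexecute(instrs, stalled_dst, regs, free_regs):
    """Pre-execute instructions independent of `stalled_dst`.

    instrs:      list of (dst, src1, src2) register-name tuples following
                 the stalled long-latency load.
    stalled_dst: destination register of the stalled load; anything that
                 reads it (transitively) is on the dependence chain.
    regs:        register file, name -> value.
    free_regs:   unused registers repurposed to hold P-mode results.
    Returns a rename map: architectural dst -> physical reg with result.
    """
    poisoned = {stalled_dst}      # long-latency dependence chain so far
    rename = {}                   # architectural -> renamed physical reg
    for dst, s1, s2 in instrs:
        if s1 in poisoned or s2 in poisoned:
            poisoned.add(dst)     # on the chain: cannot pre-execute
            continue
        phys = free_regs.pop()    # repurpose an unused register
        # Read renamed values if a source was itself pre-executed,
        # avoiding WAW/WAR hazards with normal-mode state.
        v1 = regs[rename.get(s1, s1)]
        v2 = regs[rename.get(s2, s2)]
        regs[phys] = v1 + v2
        rename[dst] = phys
    return rename
```

On wakeup, normal-mode execution would consult the rename map and read the pre-executed results instead of re-executing those instructions; instructions marked as poisoned are executed normally once the load returns.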
Title of host publication: Proceedings of the 2016 IEEE International Symposium on High-Performance Computer Architecture, HPCA 2016
Publisher: IEEE Computer Society
Number of pages: 13
Publication status: Published - 2016 Apr 1
Event: 22nd IEEE International Symposium on High Performance Computer Architecture, HPCA 2016 - Barcelona, Spain
Duration: 2016 Mar 12 → 2016 Mar 16
Series: Proceedings - International Symposium on High-Performance Computer Architecture
Bibliographical note (Funding Information):
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2015R1A2A2A01008281), and by the following grants: DARPA-PERFECT-HR0011-12-2-0020 and NSF-CAREER-0954211, NSF-0834798.
© 2016 IEEE.
All Science Journal Classification (ASJC) codes
- Hardware and Architecture