Abstract
A vast number of activation values in DNNs are zero due to ReLU (Rectified Linear Unit), one of the most common activation functions in modern neural networks. Since ReLU outputs zero for all negative inputs, the inputs to ReLU need not be computed exactly as long as they are known to be negative. However, most accelerators do not exploit this property, forgoing substantial opportunities for speedups and energy savings. To exploit such opportunities, we propose early negative detection (END), a computation pruning technique that detects negative results at an early stage. The key to early negative detection is the adoption of an inverted two's complement representation for filter parameters. This ensures that as soon as an intermediate result becomes negative, the final result is guaranteed to be negative. Upon detection, the remaining computation can be skipped and the subsequent ReLU output can simply be set to zero. We also propose a DNN accelerator architecture (ComPreEND) that takes advantage of such skipping. Our evaluation shows that ComPreEND with END significantly improves both energy efficiency and performance. Compared to the baseline, we obtain speedups of 20.5 and 29.3 percent in accurate mode and predictive mode, respectively, and energy savings of 28.4 and 41.4 percent.
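The intuition behind END can be illustrated with a small word-level sketch. This is only a simplified stand-in, not the paper's mechanism: the paper obtains the required monotonicity at the bit level through the inverted two's complement encoding of the filter parameters, whereas the sketch below (with hypothetical function and variable names) orders products so that the running partial sum upper-bounds the final result once negative contributions begin. At that point, a negative partial sum guarantees a negative final sum, so the remaining work can be skipped and the ReLU output set to zero.

```python
def relu_dot_product_with_end(activations, weights):
    """max(0, dot(activations, weights)) with a word-level stand-in for END.

    Positive products are accumulated first, so once negative products start,
    the partial sum only decreases and upper-bounds the final result. A
    negative partial sum therefore guarantees a negative final sum, and the
    ReLU output is zero; the rest of the accumulation is skipped.
    """
    products = [a * w for a, w in zip(activations, weights)]
    products.sort(reverse=True)  # nonnegative contributions first
    partial = 0
    for p in products:
        partial += p
        # Partial can only be negative after negative products have started;
        # all remaining products are then negative too, so the final sum is
        # guaranteed to be negative and the ReLU output is zero.
        if partial < 0:
            return 0
    return max(0, partial)


# Illustrative usage:
print(relu_dot_product_with_end([1, 2, 3], [-4, 1, -1]))  # dot = -5 -> ReLU -> 0
print(relu_dot_product_with_end([1, 2, 3], [2, -1, 1]))   # dot =  3 -> ReLU -> 3
```

In the actual accelerator, this check happens during bit-serial accumulation rather than after sorting products, but the early-exit logic is the same: detect a guaranteed-negative result, skip the remaining computation, and emit zero.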
Original language | English |
---|---|
Pages (from-to) | 1537-1550 |
Number of pages | 14 |
Journal | IEEE Transactions on Computers |
Volume | 71 |
Issue number | 7 |
DOIs | |
Publication status | Published - 2022 Jul 1 |
Bibliographical note
Publisher Copyright: © 1968-2012 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Theoretical Computer Science
- Hardware and Architecture
- Computational Theory and Mathematics