Spiking Neural Networks (SNNs) play an important role in neuroscience as they help neuroscientists understand how the nervous system works. To model the nervous system, SNNs incorporate the concept of time into neurons and into the inter-neuron interactions called spikes; a neuron's internal state changes with respect to time and input spikes, and a neuron fires an output spike when its internal state satisfies certain conditions. As the neurons forming the nervous system behave differently, SNN simulation frameworks must be able to simulate the diverse behaviors of the neurons. To support arbitrary neuron models, some frameworks rely on general-purpose processors at the cost of inefficiency in simulation speed and energy consumption. Other frameworks employ specialized accelerators to overcome this inefficiency; however, the accelerators support only a limited set of neuron models due to their model-driven designs, leaving accelerator-based frameworks unable to simulate many target SNNs. In this paper, we present Flexon, a flexible digital neuron which exploits the biologically common features shared by diverse neuron models to enable efficient SNN simulations. To design Flexon, we first collect SNNs from prior work in neuroscience research and analyze the neuron models the SNNs employ. From the analysis, we observe that the neuron models share a set of biologically common features, and that these features can be combined to simulate a significantly larger set of neuron behaviors than the existing model-driven designs. Furthermore, we find that the features share a small set of computational primitives which can be exploited to further reduce the chip area. The resulting digital neurons, Flexon and spatially folded Flexon, are flexible, highly efficient, and can be easily integrated with existing hardware.
Our prototyping results using a TSMC 45 nm standard cell library show that a 12-neuron Flexon array improves energy efficiency by 6,186x and 422x over CPU and GPU, respectively, in a small footprint of 9.26 mm². The results also show that a 72-neuron spatially folded Flexon array incurs a smaller footprint of 7.62 mm² and achieves geomean speedups of 122.45x and 9.83x over CPU and GPU, respectively.
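To make the abstract's description concrete — internal state evolving with time and input spikes, with a spike fired when the state satisfies a condition — here is a minimal sketch of one common neuron model, the leaky integrate-and-fire (LIF) neuron. This is illustrative only: Flexon targets a broader family of models sharing such features (state decay, spike integration, threshold-triggered firing), and all parameter values below are hypothetical.

```python
def lif_step(v, input_spikes, dt=1.0, tau=20.0, v_rest=0.0,
             v_thresh=1.0, v_reset=0.0, weight=0.5):
    """Advance membrane potential v by one timestep; return (new_v, fired)."""
    # Leak: the state decays toward the resting potential with time constant tau.
    v += (v_rest - v) * (dt / tau)
    # Integrate: each incoming spike deflects the state by a synaptic weight.
    v += weight * sum(input_spikes)
    # Fire: emit an output spike and reset when the state crosses the threshold.
    if v >= v_thresh:
        return v_reset, True
    return v, False

# Drive the neuron with one input spike per timestep until it fires.
v, fired, steps = 0.0, False, 0
while not fired:
    v, fired = lif_step(v, [1])
    steps += 1
# With these parameters the neuron fires on the third timestep and resets to 0.
```

Other models in the family (e.g. Izhikevich or adaptive-exponential neurons) change the decay and firing equations but reuse the same update-integrate-threshold skeleton, which is the commonality the paper's flexible digital neuron exploits.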
Title of host publication: Proceedings - 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture, ISCA 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 14
Publication status: Published - 2018 Jul 19
Event: 45th ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2018 - Los Angeles, United States
Duration: 2018 Jun 2 → 2018 Jun 6
Series name: Proceedings - International Symposium on Computer Architecture
Conference: 45th ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2018
Period: 2018 Jun 2 → 2018 Jun 6
Bibliographical note
Funding Information:
This work was partly supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2015M3C4A7065647, NRF-2017R1A2B3011038). We also appreciate the support from the Automation and Systems Research Institute (ASRI), Inter-university Semiconductor Research Center (ISRC), and Neural Processing Research Center (NPRC) at Seoul National University.
© 2018 IEEE.
All Science Journal Classification (ASJC) codes
- Hardware and Architecture