Accelerating forwarding computation of artificial neural network using CUDA

Jong Hyun Park, Won Woo Ro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recently, graphics processing units (GPUs) have been widely used to accelerate general-purpose workloads through programming models such as the Open Computing Language (OpenCL) and the Compute Unified Device Architecture (CUDA). In this paper, we accelerate the artificial neural network (ANN) algorithm, one of the most popular algorithms in machine learning and cognitive science, since the ANN algorithm needs to run faster to solve more complex problems or to operate in real time. The ANN algorithm has great potential for GPU acceleration because it consists of large data-parallel computations. We implemented the forwarding computation of an ANN in CUDA and optimized it by using the scratchpad memory of GPUs and by tuning the thread block size. As a result, our method achieves 2.32 times faster performance than a conventional CPU.
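The optimization the abstract describes — staging data in GPU scratchpad (shared) memory and tuning the thread block size — can be sketched as a forward kernel for one fully connected layer. This is an illustrative sketch, not the authors' actual code; the kernel name, tile size, and sigmoid activation are assumptions for the example:

```cuda
// Sketch: forward pass of one fully connected layer, y = f(W*x + b),
// with the input vector staged through shared ("scratchpad") memory.
// Launch with blockDim.x == TILE, e.g.
//   forward_layer<<<(out_dim + TILE - 1) / TILE, TILE>>>(...);
#include <cuda_runtime.h>
#include <math.h>

#define TILE 128  // threads per block and tile width; varying this is
                  // analogous to the block-size tuning the abstract mentions

__global__ void forward_layer(const float* W,  // [out_dim x in_dim], row-major
                              const float* x,  // [in_dim] input activations
                              const float* b,  // [out_dim] biases
                              float* y,        // [out_dim] output activations
                              int in_dim, int out_dim) {
    __shared__ float x_tile[TILE];                    // scratchpad copy of one input tile
    int row = blockIdx.x * blockDim.x + threadIdx.x;  // output neuron index
    float acc = 0.0f;

    // March over the input vector tile by tile.
    for (int t = 0; t < in_dim; t += TILE) {
        // Cooperatively load the tile once per block instead of letting
        // every thread re-read the same x values from global memory.
        if (t + threadIdx.x < in_dim)
            x_tile[threadIdx.x] = x[t + threadIdx.x];
        __syncthreads();

        if (row < out_dim) {
            int limit = min(TILE, in_dim - t);
            for (int k = 0; k < limit; ++k)
                acc += W[row * in_dim + t + k] * x_tile[k];
        }
        __syncthreads();  // finish reading the tile before it is overwritten
    }

    if (row < out_dim)
        y[row] = 1.0f / (1.0f + expf(-(acc + b[row])));  // sigmoid activation
}
```

Each thread computes one output neuron; because all threads in a block read the same input slice, loading it into shared memory once per block cuts redundant global-memory traffic, which is the kind of data-parallel structure that makes the forward computation GPU-friendly.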

Original language: English
Title of host publication: International Conference on Electronics, Information, and Communications, ICEIC 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781467380164
DOI: 10.1109/ELINFOCOM.2016.7562974
Publication status: Published - 2016 Sep 7
Event: 15th International Conference on Electronics, Information, and Communications, ICEIC 2016 - Danang, Viet Nam
Duration: 2016 Jan 27 - 2016 Jan 30

Other

Other: 15th International Conference on Electronics, Information, and Communications, ICEIC 2016
Country: Viet Nam
City: Danang
Period: 16/1/27 - 16/1/30

Fingerprint

  • Neural networks
  • Program processors
  • Learning systems
  • Data storage equipment
  • Graphics processing unit

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
  • Control and Systems Engineering

Cite this

Park, J. H., & Ro, W. W. (2016). Accelerating forwarding computation of artificial neural network using CUDA. In International Conference on Electronics, Information, and Communications, ICEIC 2016 [7562974]. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ELINFOCOM.2016.7562974
@inproceedings{b703c3733c444b839f4df0213d382a15,
title = "Accelerating forwarding computation of artificial neural network using CUDA",
abstract = "Recently, graphics processing units (GPUs) have been widely used to accelerate general-purpose workloads through programming models such as the Open Computing Language (OpenCL) and the Compute Unified Device Architecture (CUDA). In this paper, we accelerate the artificial neural network (ANN) algorithm, one of the most popular algorithms in machine learning and cognitive science, since the ANN algorithm needs to run faster to solve more complex problems or to operate in real time. The ANN algorithm has great potential for GPU acceleration because it consists of large data-parallel computations. We implemented the forwarding computation of an ANN in CUDA and optimized it by using the scratchpad memory of GPUs and by tuning the thread block size. As a result, our method achieves 2.32 times faster performance than a conventional CPU.",
author = "Park, {Jong Hyun} and Ro, {Won Woo}",
year = "2016",
month = "9",
day = "7",
doi = "10.1109/ELINFOCOM.2016.7562974",
language = "English",
booktitle = "International Conference on Electronics, Information, and Communications, ICEIC 2016",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
address = "United States",

}



TY - GEN

T1 - Accelerating forwarding computation of artificial neural network using CUDA

AU - Park, Jong Hyun

AU - Ro, Won Woo

PY - 2016/9/7

Y1 - 2016/9/7

AB - Recently, graphics processing units (GPUs) have been widely used to accelerate general-purpose workloads through programming models such as the Open Computing Language (OpenCL) and the Compute Unified Device Architecture (CUDA). In this paper, we accelerate the artificial neural network (ANN) algorithm, one of the most popular algorithms in machine learning and cognitive science, since the ANN algorithm needs to run faster to solve more complex problems or to operate in real time. The ANN algorithm has great potential for GPU acceleration because it consists of large data-parallel computations. We implemented the forwarding computation of an ANN in CUDA and optimized it by using the scratchpad memory of GPUs and by tuning the thread block size. As a result, our method achieves 2.32 times faster performance than a conventional CPU.

UR - http://www.scopus.com/inward/record.url?scp=84988799967&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84988799967&partnerID=8YFLogxK

U2 - 10.1109/ELINFOCOM.2016.7562974

DO - 10.1109/ELINFOCOM.2016.7562974

M3 - Conference contribution

AN - SCOPUS:84988799967

BT - International Conference on Electronics, Information, and Communications, ICEIC 2016

PB - Institute of Electrical and Electronics Engineers Inc.

ER -
