Mapping of neural networks onto the memory-processor integrated architecture

Youngsik Kim, Mi Jung Noh, Tack Don Han, Shin Dug Kim

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

In this paper, an effective memory-processor integrated architecture, called memory-based processor array for artificial neural networks (MPAA), is proposed. The MPAA can be easily integrated into any host system via a memory interface. Specifically, the MPAA system provides an efficient mechanism for local memory accesses on both row and column bases, using hybrid row and column decoding, which suits the computation models of ANNs, such as the access and alignment patterns of matrix-by-vector operations. Mapping algorithms to implement the multilayer perceptron with backpropagation learning on the MPAA system are also provided. The proposed algorithms support both neuron-level and layer-level parallelism, which allows the MPAA system to operate the learning phase as well as the recall phase in a pipelined fashion. Performance is evaluated by detailed comparison in terms of two metrics: the cost and the number of computation steps. The results show that the proposed architecture and algorithms outperform previous approaches such as one-dimensional single-instruction multiple-data (SIMD) arrays, two-dimensional SIMD arrays, systolic ring structures, and hypercube machines.
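The hybrid row/column decoding the abstract refers to can be motivated with a minimal sketch (not the paper's MPAA implementation; the shapes and names below are illustrative assumptions): in a multilayer perceptron, the forward (recall) phase reads the weight matrix row by row, while backpropagation of the error reads it column by column, so a memory that serves both access patterns avoids transposing or realigning the weights between phases.

```python
import numpy as np

# Illustrative sketch only: one MLP layer as a matrix-by-vector product.
# Forward (recall) phase: each output neuron consumes a ROW of W.
# Backward (learning) phase: each input neuron's error term consumes a
# COLUMN of W. These are the two access patterns that hybrid row and
# column decoding serves without rearranging the stored weights.

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))   # 3 output neurons, 4 input neurons
x = rng.standard_normal(4)        # input activations

# Forward pass, row-wise access: y[i] = sum_j W[i, j] * x[j]
y = W @ x

# Backward pass, column-wise access: delta[j] = sum_i W[i, j] * e[i]
e = rng.standard_normal(3)        # error at the output neurons
delta = W.T @ e

assert y.shape == (3,) and delta.shape == (4,)
```

Because both phases touch the same stored matrix along different axes, pipelining the recall and learning phases (as the proposed mapping algorithms do) depends on the memory supporting both orientations efficiently.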

Original language: English
Pages (from-to): 1083-1098
Number of pages: 16
Journal: Neural Networks
Volume: 11
Issue number: 6
DOI: 10.1016/S0893-6080(98)00092-6
Publication status: Published - Aug 1998


All Science Journal Classification (ASJC) codes

  • Cognitive Neuroscience
  • Artificial Intelligence

Cite this

@article{6691f038358048a494c231f598cc8156,
title = "Mapping of neural networks onto the memory-processor integrated architecture",
abstract = "In this paper, an effective memory-processor integrated architecture, called memory-based processor array for artificial neural networks (MPAA), is proposed. The MPAA can be easily integrated into any host system via a memory interface. Specifically, the MPAA system provides an efficient mechanism for local memory accesses on both row and column bases, using hybrid row and column decoding, which suits the computation models of ANNs, such as the access and alignment patterns of matrix-by-vector operations. Mapping algorithms to implement the multilayer perceptron with backpropagation learning on the MPAA system are also provided. The proposed algorithms support both neuron-level and layer-level parallelism, which allows the MPAA system to operate the learning phase as well as the recall phase in a pipelined fashion. Performance is evaluated by detailed comparison in terms of two metrics: the cost and the number of computation steps. The results show that the proposed architecture and algorithms outperform previous approaches such as one-dimensional single-instruction multiple-data (SIMD) arrays, two-dimensional SIMD arrays, systolic ring structures, and hypercube machines.",
author = "Youngsik Kim and Noh, {Mi Jung} and Han, {Tack Don} and Kim, {Shin Dug}",
year = "1998",
month = aug,
doi = "10.1016/S0893-6080(98)00092-6",
language = "English",
volume = "11",
pages = "1083--1098",
journal = "Neural Networks",
issn = "0893-6080",
publisher = "Elsevier Limited",
number = "6",
}

Mapping of neural networks onto the memory-processor integrated architecture. / Kim, Youngsik; Noh, Mi Jung; Han, Tack Don; Kim, Shin Dug.

In: Neural Networks, Vol. 11, No. 6, 08.1998, p. 1083-1098.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Mapping of neural networks onto the memory-processor integrated architecture

AU - Kim, Youngsik

AU - Noh, Mi Jung

AU - Han, Tack Don

AU - Kim, Shin Dug

PY - 1998/8

Y1 - 1998/8

N2 - In this paper, an effective memory-processor integrated architecture, called memory-based processor array for artificial neural networks (MPAA), is proposed. The MPAA can be easily integrated into any host system via a memory interface. Specifically, the MPAA system provides an efficient mechanism for local memory accesses on both row and column bases, using hybrid row and column decoding, which suits the computation models of ANNs, such as the access and alignment patterns of matrix-by-vector operations. Mapping algorithms to implement the multilayer perceptron with backpropagation learning on the MPAA system are also provided. The proposed algorithms support both neuron-level and layer-level parallelism, which allows the MPAA system to operate the learning phase as well as the recall phase in a pipelined fashion. Performance is evaluated by detailed comparison in terms of two metrics: the cost and the number of computation steps. The results show that the proposed architecture and algorithms outperform previous approaches such as one-dimensional single-instruction multiple-data (SIMD) arrays, two-dimensional SIMD arrays, systolic ring structures, and hypercube machines.

AB - In this paper, an effective memory-processor integrated architecture, called memory-based processor array for artificial neural networks (MPAA), is proposed. The MPAA can be easily integrated into any host system via a memory interface. Specifically, the MPAA system provides an efficient mechanism for local memory accesses on both row and column bases, using hybrid row and column decoding, which suits the computation models of ANNs, such as the access and alignment patterns of matrix-by-vector operations. Mapping algorithms to implement the multilayer perceptron with backpropagation learning on the MPAA system are also provided. The proposed algorithms support both neuron-level and layer-level parallelism, which allows the MPAA system to operate the learning phase as well as the recall phase in a pipelined fashion. Performance is evaluated by detailed comparison in terms of two metrics: the cost and the number of computation steps. The results show that the proposed architecture and algorithms outperform previous approaches such as one-dimensional single-instruction multiple-data (SIMD) arrays, two-dimensional SIMD arrays, systolic ring structures, and hypercube machines.

UR - http://www.scopus.com/inward/record.url?scp=0032144410&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0032144410&partnerID=8YFLogxK

U2 - 10.1016/S0893-6080(98)00092-6

DO - 10.1016/S0893-6080(98)00092-6

M3 - Article

AN - SCOPUS:0032144410

VL - 11

SP - 1083

EP - 1098

JO - Neural Networks

JF - Neural Networks

SN - 0893-6080

IS - 6

ER -