Parallel, Self-Organizing, Hierarchical Neural Networks

Okan K. Ersoy, Daesik Hong

Research output: Contribution to journal › Article

40 Citations (Scopus)

Abstract

This paper presents a new neural network architecture called the parallel, self-organizing, hierarchical neural network (PSHNN). The architecture involves a number of stages, each of which can be a particular neural network (SNN). At the end of each stage, error detection is carried out and a number of input vectors are rejected. Between two stages there is a nonlinear transformation of the input vectors rejected by the previous stage. The architecture has many desirable properties, such as optimized system complexity in the sense of a self-organizing, minimized number of stages; high classification accuracy; minimized learning and recall times; and a truly parallel architecture in which all stages operate simultaneously during testing, without waiting for data from one another. Experiments performed in comparison with multilayered networks trained by backpropagation indicated the superiority of the new architecture.
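
The abstract describes a concrete train/reject/transform pipeline, summarized in the sketch below. This is a minimal Python sketch under stated assumptions: single-layer delta-rule stages (the paper allows each stage to be any neural network), a margin-based error-detection rule, and an arbitrary elementwise nonlinearity between stages. Every name, threshold, and design choice in the code is illustrative rather than taken from the paper.

import numpy as np

class Stage:
    """One stage of the PSHNN: here, a linear classifier trained by the
    delta rule. The paper allows any neural network per stage."""
    def __init__(self, n_inputs, n_classes, lr=0.1):
        self.W = np.zeros((n_classes, n_inputs))
        self.lr = lr

    def train(self, X, Y, epochs=20):
        for _ in range(epochs):
            for x, y in zip(X, Y):
                self.W += self.lr * np.outer(y - self.W @ x, x)

    def predict(self, x):
        return self.W @ x

def margin_of(out):
    """Confidence margin: gap between the two largest outputs."""
    top2 = np.sort(out)[-2:]
    return top2[1] - top2[0]

def transform(X):
    """Illustrative nonlinear transformation applied to rejected vectors."""
    return np.sign(X) + 0.5 * X

def train_pshnn(X, Y, labels, max_stages=3, margin=0.2):
    """Grow stages until nothing is rejected or max_stages is reached."""
    labels = np.asarray(labels)
    stages = []
    for _ in range(max_stages):
        stage = Stage(X.shape[1], Y.shape[1])
        stage.train(X, Y)
        stages.append(stage)
        # Error detection: reject vectors that are misclassified or
        # classified with too small a margin.
        rej = [i for i, x in enumerate(X)
               if np.argmax(stage.predict(x)) != labels[i]
               or margin_of(stage.predict(x)) < margin]
        if not rej:
            break  # self-organizing: no further stage is needed
        # Only the rejected vectors, nonlinearly transformed, reach the
        # next stage.
        X, Y, labels = transform(X[rej]), Y[rej], labels[rej]
    return stages

def classify(stages, x, margin=0.2):
    """Each stage sees its own transformed copy of the input, so all
    stages could run simultaneously in hardware; the decision comes from
    the first stage that accepts (written sequentially here for clarity)."""
    for stage in stages:
        out = stage.predict(x)
        if margin_of(out) >= margin:
            break
        x = transform(x)
    return int(np.argmax(out))

# Toy usage: two Gaussian classes with one-hot targets.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 4)), rng.normal(1.0, 1.0, (50, 4))])
labels = np.array([0] * 50 + [1] * 50)
Y = np.eye(2)[labels]
stages = train_pshnn(X, Y, labels)
print(len(stages), classify(stages, X[0]))

Note that the same margin rule serves as the error detector during both training and testing here; the paper's actual error-detection scheme and interstage nonlinearity may differ.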

Original language: English
Pages (from-to): 167-178
Number of pages: 12
Journal: IEEE Transactions on Neural Networks
Volume: 1
Issue number: 2
DOIs: 10.1109/72.80229
Publication status: Published - June 1990

Fingerprint

  • Learning
  • Neural networks
  • Parallel architectures
  • Error detection
  • Network architecture
  • Backpropagation
  • Testing
  • Experiments

All Science Journal Classification (ASJC) codes

  • Software
  • Medicine (all)
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Cite this

@article{6864521c6fc841bda9dcccdb5af61fd7,
title = "Parallel, Self-Organizing, Hierarchical Neural Networks",
abstract = "This paper presents a new neural network architecture called the parallel, self-organizing, hierarchical neural network (PSHNN). The new architecture involves a number of stages in which each stage can be a particular neural network (SNN). At the end of each stage, error detection is carried out, and a number of input vectors are rejected. Between two stages there is a nonlinear transformation of those input vectors rejected by the previous stage. The new architecture has many desirable properties such as optimized system complexity in the sense of minimized self-organizing number of stages, high classification accuracy, minimized learning and recall times, and truly parallel architectures in which all stages are operating simultaneously without waiting for data from each other during testing. The experiments performed in comparison to multilayered networks with backpropagation training indicated the superiority of the new architecture.",
author = "Ersoy, {Okan K.} and Daesik Hong",
year = "1990",
month = "6",
doi = "10.1109/72.80229",
language = "English",
volume = "1",
pages = "167--178",
journal = "IEEE Transactions on Neural Networks and Learning Systems",
issn = "2162-237X",
publisher = "IEEE Computational Intelligence Society",
number = "2",

}

Parallel, Self-Organizing, Hierarchical Neural Networks. / Ersoy, Okan K.; Hong, Daesik.

In: IEEE Transactions on Neural Networks, Vol. 1, No. 2, 06.1990, p. 167-178.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Parallel, Self-Organizing, Hierarchical Neural Networks

AU - Ersoy, Okan K.

AU - Hong, Daesik

PY - 1990/6

Y1 - 1990/6

AB - This paper presents a new neural network architecture called the parallel, self-organizing, hierarchical neural network (PSHNN). The architecture involves a number of stages, each of which can be a particular neural network (SNN). At the end of each stage, error detection is carried out and a number of input vectors are rejected. Between two stages there is a nonlinear transformation of the input vectors rejected by the previous stage. The architecture has many desirable properties, such as optimized system complexity in the sense of a self-organizing, minimized number of stages; high classification accuracy; minimized learning and recall times; and a truly parallel architecture in which all stages operate simultaneously during testing, without waiting for data from one another. Experiments performed in comparison with multilayered networks trained by backpropagation indicated the superiority of the new architecture.

UR - http://www.scopus.com/inward/record.url?scp=0025445409&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0025445409&partnerID=8YFLogxK

U2 - 10.1109/72.80229

DO - 10.1109/72.80229

M3 - Article

AN - SCOPUS:0025445409

VL - 1

SP - 167

EP - 178

JO - IEEE Transactions on Neural Networks

JF - IEEE Transactions on Neural Networks

SN - 1045-9227

IS - 2

ER -