Topology preserving neural networks that achieve a prescribed feature map probability density distribution

Jongeun Choi, Roberto Horowitz

Research output: Contribution to journal › Conference article

1 Citation (Scopus)

Abstract

In this paper, a new learning law for one-dimensional topology-preserving neural networks is presented in which the output weights of the neural network converge to a set that produces a predefined winning-neuron coordinate probability distribution, even when the probability density function of the input signal is unknown and not necessarily uniform. The learning algorithm also produces an orientation-preserving homeomorphism from the known neural coordinate domain to the unknown input signal space, which maps a predefined neural coordinate probability density function into the unknown probability density function of the input signal. The convergence properties of the proposed learning algorithm are analyzed using the ODE approach and verified by a simulation study.
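The abstract describes a density-matching variant of a one-dimensional topology-preserving (Kohonen-type) feature map. As a rough, generic illustration of that setting only — this is not the paper's learning law, whose update rule and ODE convergence analysis are given in the article — a minimal 1-D self-organizing map in Python, with a Beta(2, 5) distribution standing in for the unknown, nonuniform input density:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N neurons on a one-dimensional lattice, each with a
# scalar output weight w[i]; inputs are drawn from a nonuniform density
# (here Beta(2, 5), standing in for the unknown input distribution).
N = 20
T = 20000
w = np.sort(rng.uniform(0.0, 1.0, N))      # initial output weights, ordered
coords = np.arange(N)                      # neuron coordinates on the lattice

def winner(x, w):
    """Coordinate of the winning neuron: the weight closest to input x."""
    return int(np.argmin(np.abs(w - x)))

for t in range(T):
    x = rng.beta(2.0, 5.0)                 # sample from the unknown density
    eta = 0.5 * (1.0 - t / T)              # decaying learning rate
    sigma = max(0.5, 3.0 * (1.0 - t / T))  # shrinking neighborhood width
    i = winner(x, w)
    # Gaussian neighborhood on neuron coordinates, centered at the winner.
    h = np.exp(-0.5 * ((coords - i) / sigma) ** 2)
    w += eta * h * (x - w)

# Weights concentrate where the input density is high, so most of them
# should end up below 0.5 for this Beta(2, 5) input.
print(np.round(w, 3))
```

Note the difference in goal: a plain Kohonen update only approximates density matching (winning frequencies are not exactly equalized), whereas the paper's learning law drives the winning-neuron coordinate toward a prescribed probability distribution.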

Original language: English
Article number: WeC06.4
Pages (from-to): 1343-1350
Number of pages: 8
Journal: Proceedings of the American Control Conference
Volume: 2
Publication status: Published - 2005 Sep 1
Event: 2005 American Control Conference, ACC - Portland, OR, United States
Duration: 2005 Jun 8 - 2005 Jun 10

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Cite this

@article{b8655e8ca30143d59c79ba092e79ba4a,
title = "Topology preserving neural networks that achieve a prescribed feature map probability density distribution",
abstract = "In this paper, a new learning law for one-dimensional topology preserving neural networks is presented in which the output weights of the neural network converge to a set that produces a predefined winning neuron coordinate probability distribution, when the probability density function of the input signal is unknown and not necessarily uniform. The learning algorithm also produces an orientation preserving homeomorphic function from the known neural coordinate domain to the unknown input signal space, which maps a predefined neural coordinate probability density function into the unknown probability density function of the input signal. The convergence properties of the proposed learning algorithm are analyzed using the ODE approach and verified by a simulation study.",
author = "Jongeun Choi and Roberto Horowitz",
year = "2005",
month = "9",
day = "1",
language = "English",
volume = "2",
pages = "1343--1350",
journal = "Proceedings of the American Control Conference",
issn = "0743-1619",
publisher = "Institute of Electrical and Electronics Engineers Inc."
}


Scopus record: http://www.scopus.com/inward/record.url?scp=23944514838&partnerID=8YFLogxK