Abstract
This paper presents a new learning law for one-dimensional topology preserving neural networks in which the output weights of the neural network converge to a set that produces a predefined winning-neuron coordinate probability distribution, even when the probability density function of the input signal is unknown and not necessarily uniform. The learning algorithm also produces an orientation preserving homeomorphic function from the known neural coordinate domain to the unknown input signal space, which maps a predefined neural coordinate probability density function into the unknown probability density function of the input signal. The convergence properties of the proposed learning algorithm are analyzed using the ODE approach and verified by a simulation study.
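The paper's own learning law is not reproduced in this record. As background, the sketch below shows the classical one-dimensional Kohonen self-organizing map that this line of work builds on: a winning neuron is selected for each input sample and a neighborhood of weights is pulled toward the sample, so an ordered weight vector stays ordered (topology preserving) while adapting to a nonuniform input density. All names, parameter values, and the Beta-distributed input are illustrative assumptions, not details from the paper.

```python
import numpy as np

def train_som_1d(samples, n_neurons=20, radius=2, lr=0.3):
    """Classical 1-D Kohonen SOM with a rectangular (equal-strength) neighborhood.

    With an ordered initialization and a rectangular neighborhood, the
    weights remain sorted throughout training, so the map from neuron
    index to weight stays orientation/topology preserving.
    """
    w = np.linspace(0.0, 1.0, n_neurons)           # ordered initialization
    for x in samples:
        i = int(np.argmin(np.abs(w - x)))          # winning neuron index
        lo, hi = max(0, i - radius), min(n_neurons, i + radius + 1)
        w[lo:hi] += lr * (x - w[lo:hi])            # pull neighborhood toward x
    return w

rng = np.random.default_rng(0)
samples = rng.beta(2.0, 5.0, size=5000)            # nonuniform input density on [0, 1]
w_final = train_som_1d(samples)
```

After training on the Beta(2, 5) input, the weights cluster where the input density is high; the paper's contribution, by contrast, is a modified law whose winning-neuron coordinate distribution matches a *prescribed* target distribution rather than the one implied by the classical update.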
| Original language | English |
| --- | --- |
| Article number | WeC06.4 |
| Pages (from-to) | 1343-1350 |
| Number of pages | 8 |
| Journal | Proceedings of the American Control Conference |
| Volume | 2 |
| Publication status | Published - 2005 Sep 1 |
| Event | 2005 American Control Conference, ACC - Portland, OR, United States. Duration: 2005 Jun 8 → 2005 Jun 10 |
All Science Journal Classification (ASJC) codes
- Electrical and Electronic Engineering
Cite this
Choi, Jongeun; Horowitz, Roberto. Topology preserving neural networks that achieve a prescribed feature map probability density distribution. In: Proceedings of the American Control Conference, Vol. 2, WeC06.4, 01.09.2005, p. 1343-1350.
Research output: Contribution to journal › Conference article
TY - JOUR
T1 - Topology preserving neural networks that achieve a prescribed feature map probability density distribution
AU - Choi, Jongeun
AU - Horowitz, Roberto
PY - 2005/9/1
Y1 - 2005/9/1
UR - http://www.scopus.com/inward/record.url?scp=23944514838&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=23944514838&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:23944514838
VL - 2
SP - 1343
EP - 1350
JO - Proceedings of the American Control Conference
JF - Proceedings of the American Control Conference
SN - 0743-1619
M1 - WeC06.4
ER -