Global feedforward neural network learning for classification and regression

Kar Ann Toh, Juwei Lu, Wei Yun Yau

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

This paper addresses the issues of global optimality and training of a Feedforward Neural Network (FNN) error function incorporating the weight decay regularizer. A network with a single hidden layer and a single output unit is considered. Explicit vector and matrix canonical forms for the Jacobian and Hessian of the network are presented. Convexity analysis is then performed utilizing the known canonical structure of the Hessian. Next, global optimality characterization of the FNN error function is attempted utilizing the results of convex characterization and a convex monotonic transformation. Based on this global optimality characterization, an iterative algorithm is proposed for global FNN learning. Numerical experiments with benchmark examples show better convergence of our network learning as compared to many existing methods in the literature. The network is also shown to generalize well for a face recognition problem.
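The paper's global learning algorithm itself is not reproduced here, but the objective it studies, a single-hidden-layer, single-output FNN error function with a weight-decay regularizer, can be sketched directly. The following is a minimal illustrative implementation trained by plain gradient descent (not the authors' global method); all function names, the sigmoid hidden activation, and the hyperparameters are assumptions for illustration only.

```python
import numpy as np

def init_params(n_in, n_hidden, rng):
    # Single sigmoid hidden layer and a single linear output unit,
    # each with an appended bias weight.
    W1 = rng.normal(0.0, 0.5, size=(n_hidden, n_in + 1))
    w2 = rng.normal(0.0, 0.5, size=n_hidden + 1)
    return W1, w2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, w2):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    H = sigmoid(Xb @ W1.T)                         # hidden activations
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    return Hb @ w2, Xb, H

def loss(X, y, W1, w2, lam):
    # Sum-of-squares error plus a weight-decay regularizer,
    # the kind of FNN error function analyzed in the paper.
    yhat, _, _ = forward(X, W1, w2)
    return 0.5 * np.sum((yhat - y) ** 2) + \
           0.5 * lam * (np.sum(W1 ** 2) + np.sum(w2 ** 2))

def train(X, y, n_hidden=6, lam=1e-4, lr=0.01, steps=2000, seed=0):
    # Plain gradient descent on the regularized error (illustrative only;
    # the paper proposes a global learning algorithm instead).
    rng = np.random.default_rng(seed)
    W1, w2 = init_params(X.shape[1], n_hidden, rng)
    for _ in range(steps):
        yhat, Xb, H = forward(X, W1, w2)
        r = yhat - y                                # residuals
        Hb = np.hstack([H, np.ones((H.shape[0], 1))])
        g_w2 = Hb.T @ r + lam * w2                  # output-layer gradient
        dH = np.outer(r, w2[:-1]) * H * (1 - H)     # backprop through sigmoid
        g_W1 = dH.T @ Xb + lam * W1                 # hidden-layer gradient
        w2 -= lr * g_w2
        W1 -= lr * g_W1
    return W1, w2
```

On a small regression problem, the regularized error decreases from its value at the random initialization; the paper's contribution is characterizing when such an error function admits a global optimality certificate rather than only a local descent step.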

Original language: English
Title of host publication: Energy Minimization Methods in Computer Vision and Pattern Recognition - 3rd International Workshop, EMMCVPR 2001, Proceedings
Editors: Anil K. Jain, Mario Figueiredo, Josiane Zerubia
Publisher: Springer Verlag
Pages: 407-422
Number of pages: 16
ISBN (Print): 3540425233, 9783540425236
DOI: https://doi.org/10.1007/3-540-44745-8_27
Publication status: Published - 1 Jan 2001
Event: 3rd International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 2001 - Sophia Antipolis, France
Duration: 3 Sep 2001 → 5 Sep 2001

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 2134
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 3rd International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 2001
Country: France
City: Sophia Antipolis
Period: 3 Sep 2001 → 5 Sep 2001

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Toh, K. A., Lu, J., & Yau, W. Y. (2001). Global feedforward neural network learning for classification and regression. In A. K. Jain, M. Figueiredo, & J. Zerubia (Eds.), Energy Minimization Methods in Computer Vision and Pattern Recognition - 3rd International Workshop, EMMCVPR 2001, Proceedings (pp. 407-422). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 2134). Springer Verlag. https://doi.org/10.1007/3-540-44745-8_27