Deterministic Global Optimization for FNN Training

Research output: Contribution to journal › Article

18 Citations (Scopus)

Abstract

This paper addresses the training of feedforward neural networks by global optimization. The main contributions include a characterization of the global optimality of a network error function and the formulation of a global descent algorithm for the network training problem. A network with a single hidden layer and a single output unit is considered. By means of a monotonic transformation, a sufficient condition for the global optimality of a network error function is presented. Based on this condition, a penalty-based algorithm is derived that directs the search towards regions possibly containing the global minima. Numerical comparison on benchmark problems from the neural network literature shows the superiority of the proposed algorithm over several local methods in terms of the percentage of trials attaining the desired solutions. The algorithm is also shown to be effective on several pattern recognition problems.
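To make the two-phase idea concrete, the following minimal Python sketch (not taken from the paper) trains a single-hidden-layer, single-output network by plain gradient descent and then descends a penalized error function so that a repulsion term pushes the search out of the basin of the minimizer just found. The Gaussian penalty form, all hyperparameters, the XOR data, and every function name below are illustrative assumptions; the paper's actual penalty is built from its monotonic-transformation optimality condition.

# A minimal sketch of penalty-based global descent for a single-hidden-layer,
# single-output feedforward network. The Gaussian repulsion penalty, the
# hyperparameters, and the XOR data are illustrative assumptions in the
# spirit of the paper, not its exact formulation.
import numpy as np

def forward(w, X, n_hidden):
    """Single hidden layer of sigmoid units feeding one linear output unit."""
    d = X.shape[1]
    W1 = w[: n_hidden * d].reshape(n_hidden, d)      # input-to-hidden weights
    b1 = w[n_hidden * d : n_hidden * (d + 1)]        # hidden biases
    W2 = w[n_hidden * (d + 1) : n_hidden * (d + 2)]  # hidden-to-output weights
    b2 = w[-1]                                       # output bias
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))       # hidden activations
    return H @ W2 + b2

def error(w, X, y, n_hidden):
    """Mean-squared network error E(w)."""
    r = forward(w, X, n_hidden) - y
    return 0.5 * np.mean(r ** 2)

def penalized_error(w, w_local, X, y, n_hidden, mu=1.0, sigma=1.0):
    """E(w) plus a Gaussian bump centred on a known local minimizer w_local,
    repelling the search from its basin (an assumed penalty form)."""
    bump = mu * np.exp(-np.sum((w - w_local) ** 2) / (2.0 * sigma ** 2))
    return error(w, X, y, n_hidden) + bump

def numeric_grad(f, w, eps=1e-6):
    """Central-difference gradient; a practical implementation would use
    backpropagation instead."""
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2.0 * eps)
    return g

# XOR: a small benchmark whose error surface has well-known non-global minima.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])
n_hidden = 2
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=n_hidden * (X.shape[1] + 2) + 1)

# Phase 1: plain gradient descent to some (possibly only local) minimizer.
for _ in range(2000):
    w -= 0.5 * numeric_grad(lambda v: error(v, X, y, n_hidden), w)
w_local = w.copy()

# Phase 2: perturb slightly, then descend the penalized error so the
# repulsion term steers the search out of the current basin.
w = w_local + rng.normal(scale=0.1, size=w.size)
for _ in range(2000):
    w -= 0.5 * numeric_grad(
        lambda v: penalized_error(v, w_local, X, y, n_hidden), w)

print("E at first minimizer  :", error(w_local, X, y, n_hidden))
print("E after global descent:", error(w, X, y, n_hidden))

In the paper itself, the sufficient global-optimality condition obtained from the monotonic transformation is what shapes the penalty; the Gaussian repulsion above only mimics the resulting behaviour of directing descent away from a local basin.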

Original language: English
Pages (from-to): 977-983
Number of pages: 7
Journal: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Volume: 33
Issue number: 6
DOI: 10.1109/TSMCB.2002.804366
Publication status: Published - 2003 Dec 1

Fingerprint

  • Global optimization
  • Feedforward neural networks
  • Pattern recognition
  • Neural networks

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Artificial Intelligence
  • Human-Computer Interaction

Cite this

@article{3fa6c35c73a84d1cab997684ba4ab6ad,
title = "Deterministic Global Optimization for FNN Training",
author = "Toh, {Kar Ann}",
year = "2003",
month = "12",
day = "1",
doi = "10.1109/TSMCB.2002.804366",
language = "English",
volume = "33",
pages = "977--983",
journal = "IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics",
issn = "1083-4419",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "6",
}

Deterministic Global Optimization for FNN Training. / Toh, Kar Ann.

In: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 33, No. 6, 01.12.2003, p. 977-983.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Deterministic Global Optimization for FNN Training

AU - Toh, Kar Ann

PY - 2003/12/1

Y1 - 2003/12/1

UR - http://www.scopus.com/inward/record.url?scp=0345690128&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0345690128&partnerID=8YFLogxK

U2 - 10.1109/TSMCB.2002.804366

DO - 10.1109/TSMCB.2002.804366

M3 - Article

VL - 33

SP - 977

EP - 983

JO - IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

JF - IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

SN - 1083-4419

IS - 6

ER -