Deterministic neural classification

Research output: Contribution to journal › Article

78 Citations (Scopus)

Abstract

This letter presents a minimum classification error learning formulation for a single-layer feedforward network (SLFN). By approximating the nonlinear counting step function with a quadratic function, the classification error rate is shown to be deterministically solvable. Essentially, the derived solution is related to an existing weighted least-squares method with class-specific weights set according to the size of the data set. By treating the class-specific weights as adjustable parameters, the learning formulation extends the classification robustness of the SLFN without sacrificing its intrinsic advantage of being a closed-form algorithm. While the method is applicable to other linear formulations, our empirical results indicate the SLFN's effectiveness in classification generalization.
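The closed-form, weighted least-squares solution the abstract alludes to can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact formulation: the feature mapping, the precise class-specific weighting scheme, and the ridge regularization below are assumptions (inverse class frequency stands in for the paper's data-set-size weighting), and the helper names weighted_ls_fit and predict are hypothetical.

    import numpy as np

    def weighted_ls_fit(X, y, w_pos=None, w_neg=None, reg=1e-6):
        # Closed-form weighted least-squares for binary labels y in {-1, +1}.
        # Sketch only: inverse class frequency is an assumed stand-in for
        # the paper's class-specific, data-set-size-based weights.
        n = len(y)
        if w_pos is None:
            w_pos = n / max(int((y == 1).sum()), 1)
        if w_neg is None:
            w_neg = n / max(int((y == -1).sum()), 1)
        P = np.hstack([X, np.ones((n, 1))])      # inputs plus a bias column
        w = np.where(y == 1, w_pos, w_neg)       # per-sample class weight
        PtW = P.T * w                            # P^T W, with W diagonal
        A = PtW @ P + reg * np.eye(P.shape[1])   # ridge term keeps A invertible
        # alpha = (P^T W P + reg * I)^(-1) P^T W y
        return np.linalg.solve(A, PtW @ y)

    def predict(alpha, X):
        P = np.hstack([X, np.ones((len(X), 1))])
        return np.sign(P @ alpha)

Because the class weights enter the solution only through a diagonal matrix, adjusting them amounts to re-solving one linear system rather than re-running an iterative optimizer, which is the closed-form advantage the abstract emphasizes.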

Original language: English
Pages (from-to): 1565-1595
Number of pages: 31
Journal: Neural Computation
Volume: 20
Issue number: 6
DOIs: 10.1162/neco.2007.04-07-508
Publication status: Published - 2008 Jun 1

Fingerprint

  • Learning
  • Weights and Measures
  • Least-Squares Analysis
  • Layer
  • Datasets
  • Robustness
  • Intrinsic
  • Letters

All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience

Cite this

Toh, Kar Ann. / Deterministic neural classification. In: Neural Computation. 2008; Vol. 20, No. 6. pp. 1565-1595.
@article{41201d5d62174da0ab6c46f601762876,
title = "Deterministic neural classification",
abstract = "This letter presents a minimum classification error learning formulation for a single-layer feedforward network (SLFN). By approximating the nonlinear counting step function with a quadratic function, the classification error rate is shown to be deterministically solvable. Essentially, the derived solution is related to an existing weighted least-squares method with class-specific weights set according to the size of the data set. By treating the class-specific weights as adjustable parameters, the learning formulation extends the classification robustness of the SLFN without sacrificing its intrinsic advantage of being a closed-form algorithm. While the method is applicable to other linear formulations, our empirical results indicate the SLFN's effectiveness in classification generalization.",
author = "Toh, {Kar Ann}",
year = "2008",
month = "6",
day = "1",
doi = "10.1162/neco.2007.04-07-508",
language = "English",
volume = "20",
pages = "1565--1595",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "6",
}

Deterministic neural classification. / Toh, Kar Ann.

In: Neural Computation, Vol. 20, No. 6, 01.06.2008, p. 1565-1595.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Deterministic neural classification

AU - Toh, Kar Ann

PY - 2008/6/1

Y1 - 2008/6/1

N2 - This letter presents a minimum classification error learning formulation for a single-layer feedforward network (SLFN). By approximating the nonlinear counting step function with a quadratic function, the classification error rate is shown to be deterministically solvable. Essentially, the derived solution is related to an existing weighted least-squares method with class-specific weights set according to the size of the data set. By treating the class-specific weights as adjustable parameters, the learning formulation extends the classification robustness of the SLFN without sacrificing its intrinsic advantage of being a closed-form algorithm. While the method is applicable to other linear formulations, our empirical results indicate the SLFN's effectiveness in classification generalization.

AB - This letter presents a minimum classification error learning formulation for a single-layer feedforward network (SLFN). By approximating the nonlinear counting step function with a quadratic function, the classification error rate is shown to be deterministically solvable. Essentially, the derived solution is related to an existing weighted least-squares method with class-specific weights set according to the size of the data set. By treating the class-specific weights as adjustable parameters, the learning formulation extends the classification robustness of the SLFN without sacrificing its intrinsic advantage of being a closed-form algorithm. While the method is applicable to other linear formulations, our empirical results indicate the SLFN's effectiveness in classification generalization.

UR - http://www.scopus.com/inward/record.url?scp=45749126424&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=45749126424&partnerID=8YFLogxK

U2 - 10.1162/neco.2007.04-07-508

DO - 10.1162/neco.2007.04-07-508

M3 - Article

VL - 20

SP - 1565

EP - 1595

JO - Neural Computation

JF - Neural Computation

SN - 0899-7667

IS - 6

ER -