In this paper, we present a practical framework of methodologies for increasing the efficiency of the training process and improving the generalization capability of neural networks. The methodologies address problems of neural networks in three main aspects: learning, architecture, and data representation. For learning, we present a rapid learning method based on Aitken's Δ² process and a training schedule called selective reinforcement learning; for architecture, a two-stage classification scheme and a multiple-network scheme; and for data representation, a data generation scheme with systematic noise and a preprocessing method based on hidden Markov models. To investigate the behavior of neural network classifiers under the proposed methodologies, we designed and implemented neural networks for recognizing on-line handwritten characters obtained from an LCD tablet. Experimental results on a large set of on-line handwritten characters show the usefulness of the proposed methodologies.
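The paper's rapid learning method builds on Aitken's Δ² process, a classical technique for accelerating a linearly converging sequence. The sketch below is only a generic illustration of the Δ² extrapolation formula itself, applied to a simple fixed-point iteration; it is not the paper's training algorithm, and the function name and example are hypothetical.

```python
import math

def aitken_delta_squared(x0, x1, x2):
    """Aitken's Δ² extrapolation: estimate the limit of a linearly
    converging sequence from three consecutive terms x0, x1, x2."""
    denom = x2 - 2.0 * x1 + x0           # second difference Δ²x0
    if abs(denom) < 1e-12:               # sequence already (nearly) converged
        return x2
    return x0 - (x1 - x0) ** 2 / denom   # x0 - (Δx0)² / Δ²x0

# Illustration: accelerate the fixed-point iteration x ← cos(x),
# whose limit is the Dottie number ≈ 0.7390851332.
xs = [0.5]
for _ in range(2):
    xs.append(math.cos(xs[-1]))
accelerated = aitken_delta_squared(xs[0], xs[1], xs[2])
```

After only three iterates, the extrapolated value is substantially closer to the fixed point than the last raw iterate, which is the kind of convergence speed-up the paper exploits during training.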
Bibliographical note
Funding Information:
* Corresponding author. Email: email@example.com. This work was supported in part by the Korea Science and Engineering Foundation (KOSEF) and the Center for Artificial Intelligence Research (CAIR), under the Engineering Research Center (ERC) of Excellence Program.
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence