Neural networks have been shown to outperform induction trees in modeling complex relations among input attributes in sample data. Such relations can be obtained as a set of linear classifiers from a neural network trained with back-propagation. Each linear classifier is derived from a linear combination of the input attributes with the weights of a neuron in the first hidden layer. Training data are projected onto the set of linear-classifier hyperplanes, and an information-gain measure is then applied to the projected data. We propose that this reduces the computational complexity of extracting rules from neural networks. As a result, concise rules can be extracted from neural networks that capture relations among continuous-valued input attributes.
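The projection-and-split idea in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the weight vector, bias, and toy data are all assumed for the example. Each first-hidden-layer neuron defines a hyperplane w·x + b = 0; projecting a sample onto it yields the signed value w·x + b, which is treated as a derived attribute and scored with information gain at a split threshold.

```python
import numpy as np

def project_onto_hyperplanes(X, W, b):
    """Signed projection of each sample onto the hyperplanes defined by
    first-hidden-layer weights W (one row per neuron) and biases b."""
    return X @ W.T + b

def entropy(y):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(values, y, threshold=0.0):
    """Information gain of splitting labels y at `threshold` on the
    projected values (threshold 0 splits at the hyperplane itself)."""
    left, right = y[values <= threshold], y[values > threshold]
    n = len(y)
    cond = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(y) - cond

# Toy data: two classes separated by the line x0 + x1 = 1 (hypothetical).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(int)

W = np.array([[1.0, 1.0]])   # assumed weight vector of one hidden neuron
b = np.array([-1.0])         # its assumed bias

Z = project_onto_hyperplanes(X, W, b)
gain = information_gain(Z[:, 0], y)
```

Because the assumed hyperplane separates the two toy classes exactly, the split at threshold 0 has zero conditional entropy, so the gain equals the entropy of the class labels; a rule extractor would prefer such high-gain hyperplane splits when forming rule conditions.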
Title of host publication: Artificial Neural Networks - ICANN 2001 - International Conference, Proceedings
Editors: Kurt Hornik, Georg Dorffner, Horst Bischof
Number of pages: 6
ISBN (Print): 3540424865, 9783540446682
Publication status: Published - 2001
Event: International Conference on Artificial Neural Networks, ICANN 2001 - Vienna, Austria
Duration: 2001 Aug 21 → 2001 Aug 25
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Bibliographical note: Publisher Copyright © Springer-Verlag Berlin Heidelberg 2001.
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- Computer Science (all)