Instance-based method to extract rules from neural networks

DaeEun Kim, Jaeho Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

It has been shown that neural networks model complex relations among input attributes in sample data better than induction trees do. Those relations can be obtained from back-propagation-trained neural networks as a set of linear classifiers, each derived from a linear combination of the input attributes and the neuron weights in the first hidden layer. The training data are projected onto the set of linear-classifier hyperplanes, and the information gain measure is then applied to the projected data. We propose that this reduces the computational complexity of extracting rules from neural networks. As a result, concise rules can be extracted from neural networks to support data with input-variable relations over continuous-valued attributes.
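The procedure the abstract describes can be sketched in code. The snippet below is a hedged illustration, not the authors' implementation: the weight vector `w` and bias `b` standing in for a trained first-hidden-layer neuron are hypothetical values (in the paper they come from back-propagation), and the projection and information-gain steps follow standard decision-tree practice.

```python
import math

def entropy(labels):
    # Shannon entropy of a class-label list, in bits.
    if not labels:
        return 0.0
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(values, labels, threshold):
    # Information gain of splitting the projected values at `threshold`.
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    return (entropy(labels)
            - len(left) / n * entropy(left)
            - len(right) / n * entropy(right))

def project(x, w, b):
    # Project an instance onto one hidden neuron's hyperplane: w . x + b.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Toy training data; w, b stand in for one first-hidden-layer neuron
# (hypothetical values, as if produced by back-propagation).
X = [[0.1, 0.2], [0.4, 0.9], [0.8, 0.7], [0.9, 0.1]]
y = [1, 1, 0, 0]
w, b = [-1.0, 1.0], 0.0          # hyperplane x2 - x1 = 0

# Project every instance onto the hyperplane, then pick the threshold
# on the projection that maximizes information gain -> a concise rule.
proj = [project(x, w, b) for x in X]
gain, t = max(((info_gain(proj, y, c), c) for c in proj),
              key=lambda p: p[0])
print(f"rule: IF x2 - x1 > {t:.1f} THEN class 1  (gain {gain:.2f})")
```

Because the split is searched over one projected value per hyperplane rather than over every attribute separately, the rule search stays one-dimensional per hidden neuron, which is the source of the complexity reduction the abstract claims.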

Original language: English
Title of host publication: Artificial Neural Networks - ICANN 2001 - International Conference, Proceedings
Editors: Kurt Hornik, Georg Dorffner, Horst Bischof
Publisher: Springer Verlag
Pages: 1193-1198
Number of pages: 6
ISBN (Print): 3540424865, 9783540446682
DOIs: 10.1007/3-540-44668-0_166
Publication status: Published - 2001 Jan 1
Event: International Conference on Artificial Neural Networks, ICANN 2001 - Vienna, Austria
Duration: 2001 Aug 21 - 2001 Aug 25

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 2130
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: International Conference on Artificial Neural Networks, ICANN 2001
Country: Austria
City: Vienna
Period: 01/8/21 - 01/8/25

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Kim, D., & Lee, J. (2001). Instance-based method to extract rules from neural networks. In K. Hornik, G. Dorffner, & H. Bischof (Eds.), Artificial Neural Networks - ICANN 2001 - International Conference, Proceedings (pp. 1193-1198). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 2130). Springer Verlag. https://doi.org/10.1007/3-540-44668-0_166