Induction trees are useful for obtaining a proper set of rules from a large number of examples. However, they have difficulty capturing relations between continuous-valued data points. Many data sets show significant correlations between input variables, and a large amount of useful information is hidden in the data as nonlinearities. It has been shown that a neural network is better than direct application of an induction tree at modeling the nonlinear characteristics of sample data. In this paper we propose deriving a compact set of rules that supports data with relations between input variables. These relations, as a set of linear classifiers, can be obtained from neural network modeling based on back-propagation. This also addresses the overgeneralization and overspecialization problems often seen in induction trees. We have tested this scheme over several data sets and compared the results with those of decision trees.
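The core idea of the abstract — reading a trained network unit off as an oblique linear rule that an axis-parallel decision tree cannot express with a single split — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the data, learning rate, and iteration count are all hypothetical, and a single logistic unit stands in for the full back-propagation network.

```python
import numpy as np

# Hypothetical 2D data: the class depends on a linear combination of
# both inputs (x1 + x2 > 1), a relation a decision tree can only
# approximate with a staircase of axis-parallel splits.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

# Single logistic unit trained by gradient descent; for one layer,
# back-propagation reduces to the delta rule.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
    grad = p - y                              # dLoss/dz for logistic loss
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

# The trained unit is itself a linear classifier, so its weights can be
# read off directly as one compact rule.
print(f"IF {w[0]:.2f}*x1 + {w[1]:.2f}*x2 + {b:.2f} > 0 THEN class 1")
acc = ((X @ w + b > 0).astype(float) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The learned boundary approximates x1 + x2 = 1, so the extracted rule covers the whole class region with a single condition where a tree would need many splits.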
Title of host publication: Machine Learning
Subtitle of host publication: ECML 2000 - 11th European Conference on Machine Learning, Proceedings
Editors: Ramon Lopez de Mantaras, Enric Plaza
Number of pages: 9
Publication status: Published - 2000
Event: 11th European Conference on Machine Learning, ECML 2000 - Barcelona, Catalonia, Spain
Duration: 2000 May 31 → 2000 Jun 2
Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Bibliographical note: Publisher Copyright © Springer-Verlag Berlin Heidelberg 2000.
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- Computer Science (all)