Handling continuous-valued attributes in decision tree with neural network modeling

DaeEun Kim, Jaeho Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Citations (Scopus)

Abstract

An induction tree is useful for obtaining a proper set of rules from a large number of examples. However, it has difficulty capturing relations between continuous-valued data points. Many data sets show significant correlations between input variables, and a large amount of useful information is hidden in the data as nonlinearities. It has been shown that a neural network is better than direct application of an induction tree at modeling the nonlinear characteristics of sample data. In this paper we propose deriving a compact set of rules that captures the relations between input variables. These relations, as a set of linear classifiers, can be obtained from neural network modeling based on back-propagation. This also addresses the overgeneralization and overspecialization problems often seen in induction trees. We have tested this scheme on several data sets and compared the results with those of decision trees.
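The abstract's central idea can be illustrated with a minimal sketch (not the authors' implementation; the toy data and all names below are assumptions for illustration): fit a single sigmoid unit by gradient descent — the one-layer case of back-propagation — on data whose class boundary is oblique, then read the learned weights off as one linear rule. An axis-parallel induction tree could only approximate such a boundary with a staircase of single-attribute threshold tests.

```python
import numpy as np

# Toy data: the true class boundary x1 + x2 > 1 relates two continuous
# attributes, which a standard induction tree cannot express in one test.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

# Train one sigmoid unit with gradient descent on cross-entropy loss.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(2000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid activation
    grad = p - y                   # dLoss/dz for cross-entropy + sigmoid
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

# The trained unit defines one linear classifier, readable as a rule:
rule = f"IF {w[0]:.2f}*x1 + {w[1]:.2f}*x2 + {b:.2f} > 0 THEN class 1"
pred = (X @ w + b > 0).astype(float)
accuracy = (pred == y).mean()
print(rule)
```

A set of such hyperplanes, one per hidden unit of a larger network, is the kind of "set of linear classifiers" the abstract proposes to extract as compact rules.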

Original language: English
Title of host publication: Machine Learning
Subtitle of host publication: ECML 2000 - 11th European Conference on Machine Learning, Proceedings
Editors: Ramon Lopez de Mantaras, Enric Plaza
Publisher: Springer Verlag
Pages: 211-219
Number of pages: 9
ISBN (Print): 9783540451648
Publication status: Published - 2000 Jan 1
Event: 11th European Conference on Machine Learning, ECML 2000 - Barcelona, Catalonia, Spain
Duration: 2000 May 31 - 2000 Jun 2

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 1810
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Fingerprint

Network modeling
Decision trees
Attributes
Neural networks
Back-propagation
Induction
Classifiers
Nonlinear modeling
Compact sets
Nonlinearity

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Kim, D., & Lee, J. (2000). Handling continuous-valued attributes in decision tree with neural network modeling. In R. L. de Mantaras, & E. Plaza (Eds.), Machine Learning: ECML 2000 - 11th European Conference on Machine Learning, Proceedings (pp. 211-219). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 1810). Springer Verlag.