### Abstract

Induction trees are useful for obtaining a proper set of rules from a large number of examples. However, they have difficulty capturing relations between continuous-valued data points. Many data sets show significant correlations between input variables, and much useful information is hidden in the data as nonlinearities. It has been shown that a neural network is better than direct application of an induction tree at modeling the nonlinear characteristics of sample data. In this paper we propose deriving a compact set of rules that supports data with input-variable relations. Those relations, as a set of linear classifiers, can be obtained from neural network modeling based on back-propagation. This also alleviates the overgeneralization and overspecialization problems often seen in induction trees. We have tested this scheme on several data sets and compared the results with decision tree results.
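The core idea in the abstract — training a network with back-propagation and reading each hidden unit off as a linear classifier over the input variables — can be sketched as follows. This is a minimal illustration, not the authors' exact procedure; the toy data, network size, and learning rate are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the class depends on the *relation* x0 > x1, which an
# axis-parallel decision-tree split cannot express directly.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] > X[:, 1]).astype(float).reshape(-1, 1)

H = 3                                     # hidden units (assumed)
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                     # plain batch back-propagation
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # squared-error output gradient
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient through hidden layer
    W2 -= 0.5 * h.T @ d_out / len(X); b2 -= 0.5 * d_out.mean(0)
    W1 -= 0.5 * X.T @ d_h / len(X);   b1 -= 0.5 * d_h.mean(0)

# Each column of W1 (with its bias) is one linear classifier
# w . x + b >= 0; thresholding it yields a binary derived feature
# that an induction tree could split on alongside the raw inputs.
derived = (X @ W1 + b1 >= 0).astype(int)
acc = float(((out >= 0.5) == y).mean())
print(f"network accuracy on toy data: {acc:.2f}")
for j in range(H):
    print(f"classifier {j}: {W1[0, j]:+.2f}*x0 {W1[1, j]:+.2f}*x1 {b1[j]:+.2f} >= 0")
```

Feeding the `derived` columns to a standard decision-tree learner would then produce rules over oblique (relational) boundaries rather than only axis-parallel thresholds.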

Original language | English
---|---
Title of host publication | Machine Learning
Subtitle of host publication | ECML 2000 - 11th European Conference on Machine Learning, Proceedings
Editors | Ramon Lopez de Mantaras, Enric Plaza
Publisher | Springer Verlag
Pages | 211-219
Number of pages | 9
ISBN (Print) | 9783540451648
Publication status | Published - 2000 Jan 1
Event | 11th European Conference on Machine Learning, ECML 2000 - Barcelona, Catalonia, Spain; Duration: 2000 May 31 → 2000 Jun 2

### Publication series

Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
---|---
Volume | 1810
ISSN (Print) | 0302-9743
ISSN (Electronic) | 1611-3349

### Other

Other | 11th European Conference on Machine Learning, ECML 2000
---|---
Country | Spain
City | Barcelona, Catalonia
Period | 2000 May 31 → 2000 Jun 2


### All Science Journal Classification (ASJC) codes

- Theoretical Computer Science
- Computer Science(all)

### Cite this

Kim, D., & Lee, J. (2000). Handling continuous-valued attributes in decision tree with neural network modeling. In R. Lopez de Mantaras & E. Plaza (Eds.), *Machine Learning: ECML 2000 - 11th European Conference on Machine Learning, Proceedings* (pp. 211-219). (Lecture Notes in Computer Science; Vol. 1810). Springer Verlag.

**Handling continuous-valued attributes in decision tree with neural network modeling.** / Kim, DaeEun; Lee, Jaeho.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Scopus record: http://www.scopus.com/inward/record.url?scp=84974737294&partnerID=8YFLogxK

Cited by (Scopus): http://www.scopus.com/inward/citedby.url?scp=84974737294&partnerID=8YFLogxK

Scopus ID: SCOPUS:84974737294