An extreme learning machine (ELM) is a popular analytic single-hidden-layer feedforward neural network because of its rapid learning capacity. However, vanilla dense ELMs suffer from overfitting when the number of hidden neurons is high. A further direct consequence of the density is a decrease in both training and prediction speed. In this study, we propose an incremental method for sparsifying the ELM using a newly devised indicator driven by the condition number of the ELM design matrix, which we call sparse pseudoinverse incremental ELM (SPI-ELM). SPI-ELM exhibits better generalization performance and lower run-time complexity than the ELM. However, the sparsification process may negatively affect the learning speed of SPI-ELM; thus, we introduce an iterative matrix decomposition algorithm to address this issue. We also demonstrate a useful relationship between the condition number of the ELM design matrix and the number of hidden neurons. This relationship helps in understanding the random weights and nonlinear activation functions in ELMs. We evaluated SPI-ELM on 20 benchmark data sets from the University of California, Irvine repository and three real-world databases from the computer vision domain.
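To make the quantities mentioned in the abstract concrete, the following sketch shows a plain (dense) ELM fit in closed form via the Moore-Penrose pseudoinverse, together with the condition number of the design matrix H that drives the proposed sparsification indicator. This is only an illustration of the baseline ELM under assumed toy data and sizes, not the SPI-ELM method itself; all variable names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (sizes chosen only for illustration).
X = rng.standard_normal((200, 5))          # 200 samples, 5 features
y = np.sin(X.sum(axis=1, keepdims=True))   # smooth nonlinear target

n_hidden = 50

# ELM: hidden-layer weights and biases are drawn at random and never trained.
W = rng.standard_normal((5, n_hidden))
b = rng.standard_normal(n_hidden)

# Design matrix H: nonlinear activation applied to the random projection.
H = np.tanh(X @ W + b)

# Output weights solved analytically with the pseudoinverse (the "analytic"
# learning step that makes ELM training fast).
beta = np.linalg.pinv(H) @ y

# Condition number of the design matrix -- the quantity the paper's
# sparsification indicator is driven by (the indicator itself is not shown).
cond = np.linalg.cond(H)

pred = H @ beta
train_mse = float(np.mean((pred - y) ** 2))
print(f"cond(H) = {cond:.2f}, train MSE = {train_mse:.4f}")
```

As the abstract notes, growing `n_hidden` in a dense ELM tends to worsen both the conditioning of H and the train/predict cost, which motivates sparsifying the network incrementally.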
Bibliographical note
Funding Information:
This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3C4A7069370).

Peyman Hosseinzadeh Kassani received his master's degree in computer science from the University of Economic Sciences, Tehran, Iran, in 2012 and became a Member of IEEE in 2013. He is currently pursuing his Ph.D. at Yonsei University, Seoul, South Korea, where he is with the Computational Intelligence Laboratory in the Department of Electrical and Electronic Engineering. He has worked on object detection and pattern classification. His general interests lie in machine learning and pattern recognition and their application to computer vision.

Andrew Beng Jin Teoh obtained his B.Eng. (Electronic) in 1999 and his Ph.D. degree in 2003 from the National University of Malaysia. He is currently an associate professor in the Department of Electrical and Electronic Engineering, Yonsei University, South Korea. His research, for which he has received funding, focuses on biometric applications and biometric security. His current research interests are machine learning and information security. He has published more than 250 refereed international journal and conference papers and has edited several book chapters and book volumes. He has served as a guest editor of IEEE Signal Processing Magazine, associate editor of IEEE Biometrics Compendium, and editor-in-chief of the IEEE Biometrics Council Newsletter. He was a program co-chair of ICONIP 2014, an area chair of ICPR 2016 and ICIP 2017, and a track chair and TPC member for several conferences related to computer vision, pattern recognition, and biometrics.

Euntai Kim was born in Seoul, Korea, in 1970. He received B.S., M.S., and Ph.D. degrees in Electronic Engineering, all from Yonsei University, Seoul, Korea, in 1992, 1994, and 1999, respectively.
From 1999 to 2002, he was a full-time lecturer in the Department of Control and Instrumentation Engineering, Hankyong National University, Kyonggi-do, Korea. Since 2002, he has been with the faculty of the School of Electrical and Electronic Engineering, Yonsei University, where he is currently a professor. He was a visiting scholar at the University of Alberta, Edmonton, AB, Canada, in 2003, and a visiting researcher at the Berkeley Initiative in Soft Computing, University of California, Berkeley, CA, USA, in 2008. His current research interests include computational intelligence and statistical machine learning and their application to intelligent robotics, unmanned vehicles, and robot vision.
© 2018 Elsevier B.V.
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence