This paper presents a deterministic solution to an approximated classification-error based objective function. In the formulation, we propose a quadratic approximation as the function for achieving smooth error counting. The solution is subsequently found to be related to weighted least-squares, whereby a robust tuning process can be incorporated. The tuning traverses between the least-squares estimate and the approximated total-error-rate estimate to cater for various situations of unbalanced attribute distributions. By adopting a linear parametric classifier model, the proposed classification-error based learning formulation is empirically shown to be superior to that using the original least-squares-error cost function. Finally, it will be seen that the performance of the proposed formulation is comparable to that of other classification-error based and state-of-the-art classifiers without sacrificing computational simplicity.
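The closed-form character of the approach can be illustrated with a short sketch. The snippet below fits a linear classifier by weighted least-squares, with a tuning parameter `alpha` that traverses between the ordinary least-squares estimate (`alpha=0`) and a class-balanced weighting (`alpha=1`) that serves as a simple stand-in for the paper's total-error-rate-oriented estimate. The function names, the inverse-class-frequency weighting, and the {-1, +1} target coding are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def wls_classifier(X, y, alpha=1.0):
    """Weighted least-squares fit of a linear two-class classifier.

    alpha=0 recovers the ordinary least-squares estimate; alpha=1
    reweights each sample inversely to its class frequency, a simple
    illustrative proxy for a total-error-rate-oriented weighting
    (assumption for this sketch, not the paper's exact formulation).
    """
    # Augment the inputs with a bias column.
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    n = len(y)
    # Blend uniform weights with per-class inverse-frequency weights.
    w = np.ones(n)
    for c in (0, 1):
        mask = (y == c)
        w[mask] = (1.0 - alpha) + alpha * n / (2.0 * mask.sum())
    # Code the two classes as targets in {-1, +1}.
    t = np.where(y == 1, 1.0, -1.0)
    W = np.diag(w)
    # Closed-form (deterministic) weighted least-squares solution.
    return np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ t)

def predict(beta, X):
    """Classify by the sign of the linear discriminant."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    return (Xa @ beta >= 0).astype(int)
```

Because the solution is a single linear solve rather than an iterative search, sweeping `alpha` to tune for unbalanced class distributions remains computationally cheap.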
|Number of pages||12|
|Journal||IEEE transactions on pattern analysis and machine intelligence|
|Publication status||Published - 2008 Apr|
Bibliographical note
Funding Information:
The authors would like to thank Professor Jaihie Kim and Professor Sangyoun Lee for their invaluable support. Special thanks go to Dr. Louis Shue from the Institute for Infocomm Research, Singapore, for English proofreading. This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Biometrics Engineering Research Center (BERC) at Yonsei University.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition
- Computational Theory and Mathematics
- Artificial Intelligence
- Applied Mathematics