Single Directional SMO Algorithm for Least Squares Support Vector Machines

DOI: 10.1155/2013/968438


Abstract:

Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for working set selection in sequential minimal optimization (SMO) type decomposition methods is proposed. With the new method, a single direction is selected to achieve convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is not much different from that of existing methods, while its training speed is faster.

1. Introduction

In a classification problem, we consider a set of training samples (x_i, y_i), that is, input vectors x_i along with their corresponding class labels y_i. Our task is to find a deterministic function that best represents the relation between the input vectors and the class labels. For classification and forecasting problems in machine learning, the support vector machine (SVM) has been adopted in many applications because of its high precision [1–4]. SVMs require the solution of a quadratic programming problem. Another successful method for machine learning is the least squares support vector machine (LS-SVM) [5]. Instead of solving a quadratic programming problem as in SVMs, training an LS-SVM amounts to solving a set of linear equations.

Many algorithms have been proposed for training LS-SVMs: Suykens et al. proposed an iterative algorithm based on conjugate gradient (CG) methods [6]; Ferreira et al. presented a gradient system that can effectively train the LS-SVM model [7]; Chua introduced efficient computations for large least square support vector machine classifiers [8]; Chu et al. improved the efficiency of the CG algorithm by using one reduced system of linear equations [9]; Keerthi and Shevade extended SMO algorithms to solve the linear equations in LS-SVMs, with the maximum violating pair (MVP) selected as the working set [10]; building on the SMO algorithm, Bo et al. presented an improved method for working set selection using functional gain (FG) [11]; and Jian et al. designed a multiple kernel learning algorithm for LS-SVMs by convex programming [12]. These numerical algorithms are computationally attractive. Empirical comparisons show that the SMO algorithm is more efficient than the CG one for large-scale datasets. Fast SVM training with the SMO algorithm is an important goal for practitioners, and many other proposals toward it have been given in the literature.
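Since the text contrasts the SVM quadratic program with the LS-SVM linear system, a small sketch may help make the latter concrete. The following is a minimal illustration of training an LS-SVM by directly solving its KKT linear system, in the spirit of [5]; the RBF kernel, the function names, and the default values of gamma and sigma are illustrative assumptions, not anything prescribed by the paper.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def train_lssvm(X, y, gamma=1.0, sigma=1.0):
    # Solve the LS-SVM KKT system
    #   [ 0   y^T             ] [ b     ]   [ 0 ]
    #   [ y   Omega + I/gamma ] [ alpha ] = [ 1 ]
    # where Omega_ij = y_i * y_j * K(x_i, x_j) and y_i in {-1, +1}.
    n = y.size
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, b

def predict(X_train, y_train, alpha, b, X_test, sigma=1.0):
    # Decision values f(x) = sum_i alpha_i * y_i * K(x_i, x) + b.
    return rbf_kernel(X_test, X_train, sigma) @ (alpha * y_train) + b
```

The dense solve costs O(n^3) time and O(n^2) memory, which is exactly what makes decomposition methods such as SMO attractive once n grows large.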
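On the SMO side, the baseline established in [10], against which single-direction selection is positioned, is maximum-violating-pair (MVP) working set selection. Below is a hedged sketch of an MVP-based SMO loop for the LS-SVM dual; it reconstructs the classical two-index scheme under standard LS-SVM assumptions and is not the single-direction method proposed in this paper. The function name, tolerance, and iteration cap are arbitrary choices.

```python
import numpy as np

def smo_lssvm_mvp(K, y, C=1.0, tol=1e-3, max_iter=100000):
    # SMO with maximum-violating-pair selection for the LS-SVM dual
    #   min_a (1/2) a^T H a - 1^T a   s.t.  y^T a = 0,
    # where H = diag(y) K diag(y) + I/C and every a_i is unbounded.
    # K: precomputed n-by-n kernel matrix; y: labels in {-1, +1}.
    n = y.size
    alpha = np.zeros(n)
    grad = -np.ones(n)                 # gradient of the dual at alpha = 0
    for _ in range(max_iter):
        F = -y * grad                  # at the optimum all F_i equal the bias b
        i, j = int(np.argmax(F)), int(np.argmin(F))
        if F[i] - F[j] < tol:          # MVP stopping criterion
            break
        # Exact minimization along the feasible direction
        # (d_i, d_j) = (y_i, -y_j), which preserves y^T a = 0.
        eta = K[i, i] + K[j, j] - 2.0 * K[i, j] + 2.0 / C   # always > 0
        t = (F[i] - F[j]) / eta
        alpha[i] += y[i] * t
        alpha[j] -= y[j] * t
        # Rank-two update of the gradient.
        grad += t * y * (K[:, i] - K[:, j])
        grad[i] += t * y[i] / C
        grad[j] -= t * y[j] / C
    b = 0.5 * (F[i] + F[j])
    return alpha, b
```

Each iteration touches only two kernel columns, so the per-step cost is O(n); the total number of iterations is governed by how the working set is chosen, which is what cheaper selection rules such as the paper's single-direction one aim to reduce.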

References

[1]  C. H. Song, S. J. Yoo, C. S. Won, and H. G. Kim, “SVM based indoor/mixed/outdoor classification for digital photo annotation in a ubiquitous computing environment,” Computing and Informatics, vol. 27, no. 5, pp. 757–767, 2008.
[2]  T. Van Gestel, J. A. K. Suykens, B. Baesens et al., “Benchmarking least squares support vector machine classifiers,” Machine Learning, vol. 54, no. 1, pp. 5–32, 2004.
[3]  X. Zeng and X. W. Chen, “SMO-based pruning methods for sparse least squares support vector machines,” IEEE Transactions on Neural Networks, vol. 16, no. 6, pp. 1541–1546, 2005.
[4]  H. Esen, F. Ozgen, M. Esen, and A. Sengur, “Modelling of a new solar air heater through least-squares support vector machines,” Expert Systems with Applications, vol. 36, no. 7, pp. 10673–10682, 2009.
[5]  J. A. K. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural Processing Letters, vol. 9, no. 3, pp. 293–300, 1999.
[6]  J. A. K. Suykens, L. Lukas, P. Van Dooren, B. De Moor, and J. Vandewalle, “Least squares support vector machine classifiers: a large scale algorithm,” in Proceedings of the European Conference on Circuit Theory and Design (ECCTD '99), pp. 839–842, Stresa, Italy, 1999.
[7]  L. V. Ferreira, E. Kaszkurewicz, and A. Bhaya, “Solving systems of linear equations via gradient systems with discontinuous righthand sides: application to LS-SVM,” IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 501–505, 2005.
[8]  K. S. Chua, “Efficient computations for large least square support vector machine classifiers,” Pattern Recognition Letters, vol. 24, no. 1–3, pp. 75–80, 2003.
[9]  W. Chu, C. J. Ong, and S. S. Keerthi, “An improved conjugate gradient scheme to the solution of least squares SVM,” IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 498–501, 2005.
[10]  S. S. Keerthi and S. K. Shevade, “SMO algorithm for least-squares SVM formulations,” Neural Computation, vol. 15, no. 2, pp. 487–507, 2003.
[11]  L. Bo, L. Jiao, and L. Wang, “Working set selection using functional gain for LS-SVM,” IEEE Transactions on Neural Networks, vol. 18, no. 5, pp. 1541–1544, 2007.
[12]  L. Jian, Z. Xia, X. Liang, and C. Gao, “Design of a multiple kernel learning algorithm for LS-SVM by convex programming,” Neural Networks, vol. 24, no. 5, pp. 476–483, 2011.
[13]  J. C. Platt, “Fast training of support vector machines using sequential minimal optimization,” in Advances in Kernel Methods: Support Vector Learning, pp. 185–208, MIT Press, Cambridge, Mass, USA, 1999.
[14]  S. S. Keerthi, S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy, “Improvements to Platt's SMO algorithm for SVM classifier design,” Neural Computation, vol. 13, no. 3, pp. 637–649, 2001.
[15]  R. E. Fan, P. H. Chen, and C. J. Lin, “Working set selection using second order information for training support vector machines,” Journal of Machine Learning Research, vol. 6, pp. 1889–1918, 2005.
[16]  Á. Barbero, J. López, and J. R. Dorronsoro, “Cycle-breaking acceleration of SVM training,” Neurocomputing, vol. 72, no. 7–9, pp. 1398–1406, 2009.
[17]  L. Jiao, L. Bo, and L. Wang, “Fast sparse approximation for least squares support vector machine,” IEEE Transactions on Neural Networks, vol. 18, no. 3, pp. 685–697, 2007.
[18]  P. H. Chen, R. E. Fan, and C. J. Lin, “A study on SMO-type decomposition methods for support vector machines,” IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 893–908, 2006.
[19]  Y.-L. Lin, J.-G. Hsieh, H.-K. Wu, and J.-H. Jeng, “Three-parameter sequential minimal optimization for support vector machines,” Neurocomputing, vol. 74, no. 17, pp. 3467–3475, 2011.
[20]  J. López and J. A. K. Suykens, “First and second order SMO algorithms for LS-SVM classifiers,” Neural Processing Letters, vol. 33, no. 1, pp. 31–44, 2011.
[21]  G. Rätsch, “Benchmark repository,” Tech. Rep., Intelligent Data Analysis Group, Fraunhofer FIRST, 2005.
[22]  C. C. Chang and C. J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, article 27, 2011.
