Let S be a closed convex set of matrices and C a given matrix. The matrix nearness problem considered in this paper is to find a matrix X in the set S at which the maximum of the absolute values of the entries of X − C attains its minimum. To solve this matrix nearness problem, the problem is first reformulated as a min-max problem; the relationship between the min-max problem and a monotone linear variational inequality (LVI) is then established. Since the matrix in the LVI problem has a special structure, a projection and contraction method is suggested for solving it. Moreover, some implementation details of the method are presented in this paper. Finally, preliminary numerical results are reported, which show that this simple algorithm is promising for this matrix nearness problem.

1. Introduction

Let C be a given n×n symmetric matrix and

S = { X : X = X^T, aI ⪯ X ⪯ bI },

where a ≤ b are given scalars, I is the identity matrix, and A ⪯ B denotes that B − A is a positive semidefinite matrix. It is clear that S is a nonempty closed convex set. The problem considered in this paper is

min { ‖X − C‖_max : X ∈ S },    (1.2)

where

‖X − C‖_max = max_{1≤i,j≤n} |X_ij − C_ij|.

Throughout the paper we assume that the solution set of problem (1.2) is nonempty. Note that when a = 0 and b = +∞, the set S reduces to the semidefinite cone

S_+ = { X : X = X^T, X ⪰ 0 }.

Using the terminology of interior point methods, S_+ is called the semidefinite cone, and the related problem thus belongs to the class of semidefinite programming [1]. Problem (1.2) can be viewed as a type of matrix nearness problem, that is, the problem of finding a matrix that satisfies some property and is nearest to a given one. A survey of matrix nearness problems can be found in [2]. Matrix nearness problems have many applications, especially in finance, statistics, and compressive sensing. For example, one application in statistics is to adjust a symmetric matrix so that it is consistent with prior knowledge or assumptions and is a valid covariance matrix [3–8].
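A basic building block of any projection-type method on the set S described above is the (Frobenius-norm) projection onto S, which can be computed by symmetrizing and then clipping eigenvalues. The following sketch illustrates this under the assumption that S = { X : X = X^T, aI ⪯ X ⪯ bI }; the helper name `project_to_spectral_box` is ours, not the paper's.

```python
import numpy as np

def project_to_spectral_box(C, a, b):
    """Frobenius-norm projection onto S = {X : X = X^T, a*I <= X <= b*I}:
    symmetrize, then clip the eigenvalues into [a, b]."""
    H = (C + C.T) / 2.0                   # nearest symmetric matrix to C
    w, Q = np.linalg.eigh(H)              # eigendecomposition of the symmetric part
    w_clipped = np.clip(w, a, b)          # force a <= lambda_i <= b
    return Q @ np.diag(w_clipped) @ Q.T

# With a = 0 and b = +inf the set reduces to the semidefinite cone S_+:
C = np.array([[1.0, 2.0], [2.0, -3.0]])
X = project_to_spectral_box(C, 0.0, np.inf)
print(np.linalg.eigvalsh(X))              # all eigenvalues lie in [a, b]
```

Note that this is the projection in the Frobenius norm, used inside iterative schemes; it does not by itself solve problem (1.2), whose objective is the nonsmooth max norm.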
Paper [9] discusses a new class of matrix nearness problems that measure approximation error with a directed distance called a Bregman divergence; it proposes a framework for studying these problems, discusses some specific instances, and provides algorithms for solving them numerically. Note that the norm used in this paper differs from the one used in the published papers [3–7]; this makes the objective function of problem (1.2) nonsmooth, so problem (1.2) cannot be solved as easily. In the next section, we will find that problem (1.2) can be converted into a linear variational inequality and thus can be solved effectively with the projection and contraction (PC) method, which is extremely simple both in
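The paper's specific PC iteration is not given in this excerpt. As a stand-in sketch, the following uses Korpelevich's extragradient method, a closely related projection-type scheme that also converges for monotone LVIs and likewise needs only projections and matrix-vector products per step; He's PC methods [10–14] differ in their search-direction and step-size choices. The toy problem and all names here are ours.

```python
import numpy as np

def extragradient_lvi(M, q, proj, u0, beta=0.1, iters=500):
    """Korpelevich's extragradient method for the monotone LVI:
    find u* in Omega with (u - u*)^T (M u* + q) >= 0 for all u in Omega.
    Each step needs two projections and two matrix-vector products."""
    u = u0.astype(float)
    for _ in range(iters):
        u_bar = proj(u - beta * (M @ u + q))      # prediction step
        u = proj(u - beta * (M @ u_bar + q))      # correction step
    return u

# Toy monotone LVI over the box Omega = [0, 1]^2.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])           # M + M^T is positive definite
q = np.array([-1.0, 0.5])
proj = lambda v: np.clip(v, 0.0, 1.0)             # projection onto the box
u = extragradient_lvi(M, q, proj, np.zeros(2))
residual = u - proj(u - (M @ u + q))              # fixed-point residual, 0 at a solution
print(u, np.linalg.norm(residual))
```

The fixed-point residual u − P_Ω[u − (Mu + q)] vanishes exactly at a solution of the LVI, so its norm is the natural stopping criterion for this family of methods.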
References
[1] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[2] N. J. Higham, “Matrix nearness problems and applications,” in Applications of Matrix Theory, M. Gover and S. Barnett, Eds., pp. 1–27, Oxford University Press, Oxford, UK, 1989.
[3] S. Boyd and L. Xiao, “Least-squares covariance matrix adjustment,” SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 2, pp. 532–546, 2005.
[4] N. J. Higham, “Computing a nearest symmetric positive semidefinite matrix,” Linear Algebra and its Applications, vol. 103, pp. 103–118, 1988.
[5] N. J. Higham, “Computing the nearest correlation matrix—a problem from finance,” IMA Journal of Numerical Analysis, vol. 22, no. 3, pp. 329–343, 2002.
[6] G. L. Xue and Y. Y. Ye, “An efficient algorithm for minimizing a sum of Euclidean norms with applications,” SIAM Journal on Optimization, vol. 7, no. 4, pp. 1017–1036, 1997.
[7] G. L. Xue and Y. Y. Ye, “An efficient algorithm for minimizing a sum of p-norms,” SIAM Journal on Optimization, vol. 10, no. 2, pp. 551–579, 2000.
[8] J. Yang and Y. Zhang, “Alternating direction algorithms for ℓ1-problems in compressive sensing,” SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
[9] I. S. Dhillon and J. A. Tropp, “Matrix nearness problems with Bregman divergences,” SIAM Journal on Matrix Analysis and Applications, vol. 29, no. 4, pp. 1120–1146, 2007.
[10] B. S. He, “A projection and contraction method for a class of linear complementarity problems and its application in convex quadratic programming,” Applied Mathematics and Optimization, vol. 25, no. 3, pp. 247–262, 1992.
[11] B. S. He, “A new method for a class of linear variational inequalities,” Mathematical Programming, vol. 66, no. 2, pp. 137–144, 1994.
[12] B. S. He, “Solving a class of linear projection equations,” Numerische Mathematik, vol. 68, no. 1, pp. 71–80, 1994.
[13] B. S. He, “A modified projection and contraction method for a class of linear complementarity problems,” Journal of Computational Mathematics, vol. 14, no. 1, pp. 54–63, 1996.
[14] B. S. He, “A class of projection and contraction methods for monotone variational inequalities,” Applied Mathematics and Optimization, vol. 35, no. 1, pp. 69–76, 1997.
[15] M. H. Xu and T. Wu, “A class of linearized proximal alternating direction methods,” Journal of Optimization Theory and Applications, vol. 151, no. 2, pp. 321–337, 2011.
[16] B. C. Eaves, “On the basic theorem of complementarity,” Mathematical Programming, vol. 1, no. 1, pp. 68–75, 1971.
[17] M. H. Xu, J. L. Jiang, B. Li, and B. Xu, “An improved prediction-correction method for monotone variational inequalities with separable operators,” Computers & Mathematics with Applications, vol. 59, no. 6, pp. 2074–2086, 2010.