title: A Relaxed Interior Point Method for Low-Rank Semidefinite Programming Problems with Applications to Matrix Completion
authors: Bellavia, Stefania; Gondzio, Jacek; Porcelli, Margherita
date: 2021-10-11  journal: J Sci Comput  DOI: 10.1007/s10915-021-01654-1

A new relaxed variant of interior point method for low-rank semidefinite programming problems is proposed in this paper. The method is a step outside of the usual interior point framework. In anticipation of converging to a low-rank primal solution, a special nearly low-rank form of all primal iterates is imposed. To accommodate such a (restrictive) structure, the first order optimality conditions have to be relaxed and are therefore approximated by solving an auxiliary least-squares problem. The relaxed interior point framework opens numerous possibilities for how primal and dual approximate Newton directions can be computed. In particular, it admits the application of both first- and second-order methods in this context. The convergence of the method is established. A prototype implementation is discussed and encouraging preliminary computational results are reported for solving the SDP-reformulation of matrix-completion problems.

Mathematics Subject Classification: 90C22 · 90C51 · 65F10 · 65F50

We are concerned with an application of an interior point method (IPM) for solving large, sparse and specially structured positive semidefinite programming problems (SDPs). Let SR^{n×n} denote the set of real symmetric matrices of order n and let U • V denote the inner product between two matrices, defined by trace(U^T V). Consider the standard semidefinite programming (SDP) problem in its primal form

  min_X  C • X   s.t.  A_i • X = b_i,  i = 1, 2, ..., m,   X ⪰ 0,      (1.1)

where A_i, C ∈ SR^{n×n} and b ∈ R^m are given and X ∈ SR^{n×n} is unknown, and assume that the matrices A_i, i = 1, 2, ..., m are linearly independent, that is, Σ_{i=1}^m d_i A_i = 0 implies d_i = 0, i = 1, ..., m. The dual form of the SDP problem associated with (1.1) is:

  max_{y,S}  b^T y   s.t.  Σ_{i=1}^m y_i A_i + S = C,   S ⪰ 0,      (1.2)

where y ∈ R^m and S ∈ SR^{n×n}. The number of applications which involve semidefinite programming problems as a modelling tool is already impressive [40, 44] and is still growing. Applications include problems arising in engineering, finance, optimal control, power flow, various SDP relaxations of combinatorial optimization problems, matrix completion and other applications originating from modern computational statistics and machine learning. Although the progress in solution algorithms for SDP over the last two decades has certainly been impressive (see the books on the subject [2, 15]), the efficient solution of general semidefinite programming problems still remains a computational challenge. Among the various algorithms for solving (linear) SDPs, interior point methods stand out as reliable algorithms which enjoy enviable convergence properties and usually provide accurate solutions within reasonable time. However, when the sizes of SDP instances grow, traditional IPMs, which require computing exact Newton search directions, hit their limits. Indeed, the effort required by the linear algebra in (standard) IPMs may grow as fast as O(n^6). Although there exist a number of alternative approaches to interior point methods, such as for example [8, 9, 30], which can solve certain SDPs very efficiently, they usually come with noticeably weaker convergence guarantees.
Therefore there is a need to develop faster IPM-based techniques which could preserve some of the excellent theoretical properties of these methods, but compromise on the other features in quest for practical computational efficiency. Customized IPM methods have been proposed for special classes of problems. They take advantage of sparsity and structure of the problems, see e.g. [4, 5, 21, 31, 36, 41] and the references in [1] . In this paper we focus on problems in which the primal variable X is expected to be lowrank at optimality. Such situations are common in relaxations of combinatorial optimization problems [5] , for example in maximum cut problems [22] , as well as in matrix completion problems [11] , general trust region problems and quadratically constrained quadratic prob-lems in complex variables [34] . We exploit the structure of the sought solution and relax the rigid structure of IPMs for SDP. In particular we propose to weaken the usual connection between the primal and dual problem formulation and exploit any special features of the primal variable X . However, the extra flexibility added to the interior point method comes at a price: the worst-case polynomial complexity has to be sacrificed in this case. Rank plays an important role in semidefinite programming. For example, every polynomial optimization problem has a natural SDP relaxation, and this relaxation is exact when it possesses a rank-1 solution [34] . On the other hand, for any general problem of the form (1.1), there exists an equivalent formulation where an additional bound r on the rank of X may be imposed as long as r is not too small [9] . More specifically, under suitable assumptions, there exists an optimal solution X * of (1.1) with rank r satisfying r (r +1)/2 ≤ m. There have been successful attempts to identify low rank submatrices in the SDP matrices and eliminate them with the aim to reduce the rank and hence the difficulty of solving an SDP. A technique called facial reduction [26] has been analysed and demonstrated to work well in practice. Interestingly, when positive semidefinite programs are solved using interior-point algorithms, then because of the nature of logarithmic barrier function promoting the presence of nonzero eigenvalues, the primal variable X typically converges to a maximum-rank solution [24, 34] . However, in this paper we aim at achieving the opposite. We want to design an interior point method which drives the generated sequence of iterates to converge to a low-rank solution. We assume that constraint matrices are sparse and we search for a solution X of rank r of the form X = UU T with U ∈ R n×r . Special low-rank structure of X may be imposed directly in problem (1.1), but this excludes the use of an interior point algorithm (which requires all iterates X to be strictly positive definite). Burer and Monteiro [8, 9] and their followers [6, 7] have used such an approach with great success. Namely, they have substituted UU T for X in (1.1) and therefore have replaced it with the following nonlinear programming problem with U ∈ R n×r . Although such transformation removes the difficult positive definiteness constraint (it is implicit as X = UU T ), the difficulty is shifted elsewhere as both the objective and constraints in (1.3) are no longer linear, but instead quadratic and in general non-convex. 
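As a concrete, purely illustrative sketch of the Burer-Monteiro substitution (all names below are hypothetical and this is not the implementation discussed later in the paper), the factored objective and constraints of (1.3), with X replaced by UU^T, can be evaluated as follows:

```python
import numpy as np

def factored_objective(C, U):
    # C • (U U^T) = trace(C U U^T) = trace(U^T C U) for symmetric C
    return np.trace(U.T @ C @ U)

def factored_constraints(A_list, U):
    # values A_i • (U U^T) of the (now quadratic) equality constraints in (1.3)
    X = U @ U.T
    return np.array([np.sum(Ai * X) for Ai in A_list])

# Tiny illustration: n = 4, m = 2, a rank-1 factor U
rng = np.random.default_rng(0)
n, r = 4, 1
U = rng.standard_normal((n, r))
C = np.eye(n)
A_list = [np.eye(n), np.ones((n, n))]
b = factored_constraints(A_list, U)            # right-hand side consistent with this U
print(factored_objective(C, U), factored_constraints(A_list, U) - b)
```

Positive semidefiniteness of X = UU^T holds by construction, but, as noted above, both the objective and the constraints are now quadratic in U, which is the source of the non-convexity.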
In comparison with a standard IPM the method proposed in [8, 9] and applied to solve largescale problems enjoys substantially reduced memory requirements and very good efficiency and accuracy. However, due to nonconvexity of (1.3), local methods may not always recover the global optimum. In [6, 7] authors showed that, despite the non-convexity, first-and secondorder necessary optimality conditions are also sufficient, provided that rank r is large enough and constraints satisfy some regularity conditions. That is, when applied to several classes of SDPs, the low-rank Burer-Monteiro formulation is very unlikely to converge to any spurious local optima. In this paper we propose a different approach. We would like to preserve as many of the advantageous properties of interior point methods as possible and expect to achieve it by (i) working with the original problem (1.1) and (ii) exploiting the low-rank structure of X . Knowing that at optimality X is low-rank we impose a special form of the primal variable throughout the interior point algorithm with U ∈ R n×r , for a given r > 0 and μ denoting the barrier term. Hence X is full rank (as required by IPM), but approaches the low-rank matrix as μ goes to zero. Imposing such Notation The norm of the matrix associated with the inner product between two matrices U • V = trace(U T V ) is the Frobenius norm, written U F := (U • U ) 1/2 , while U 2 denotes the L 2 -operator norm of a matrix. Norms of vectors will always be Euclidean. The symbol I p denotes the identity matrix of dimension p × p. Let A be the linear operator A : SR n → R m defined by Moreover, let A T denote the matrix representation of A T with respect to the standard bases of R n , that is and where mat is the "inverse" operator to vec (i.e., mat(vec(A i )) = A i ∈ SR n×n ) and the vec operator is such that vec(A) is the vector of columns of A stacked one under the other. Interior point methods for semidefinite programming problems work with the perturbed firstorder optimality conditions for problems (1.1)-(1.2) given by: A general IPM involves a triple (X , y, S), performs steps in Newton direction for (2.1), and keeps its subsequent iterates in a neighbourhood of the central path [2, 15] . The convergence is forced by gradually reducing the barrier term μ. However, having in mind the idea of converging to a low-rank solution, we find such a structure rather restrictive and wish to relax it. This is achieved by removing explicit S from the optimality conditions and imposing a special structure of X . Substituting S = C − A T y from the first equation into the third one, we get Next, following the expectation that at optimality X has rank r , we impose on X the following special structure with U ∈ R n×r , for a given r > 0. We do not have any guarantee that there exists a solution of (2.2) with such a structure, but we can consider the least-square problem: where F r μ (U , y) : R n×r × R m → R n 2 +m is given by The nonlinear function F r μ (U , y) has been obtained substituting X = μI n + UU T in (2.2) after vectorization of the second block. The associated system F r μ (U , y) = 0 is overdetermined with (m + n 2 ) equations and (nr + m) unknowns (U , y). In the following, for the sake of simplicity, we identify R n×r × R m with R nr+m . It is worth mentioning at this point that the use of least-squares type solutions to an overdetermined systems arising in interior point methods for SDP was considered in [16, 32] . 
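For concreteness, a minimal sketch (illustrative Python, not the authors' implementation) of how the residual F^r_μ(U, y) and the merit function φ_μ can be evaluated is given below; the ordering of the two blocks and the absence of a factor 1/2 in φ_μ are assumptions, since the displayed formulas are not reproduced in this text:

```python
import numpy as np

def A_op(A_list, X):
    """A(X) = (A_1 • X, ..., A_m • X)."""
    return np.array([np.sum(Ai * X) for Ai in A_list])

def AT_op(A_list, y):
    """A^T y = sum_i y_i A_i (an n x n symmetric matrix)."""
    return sum(yi * Ai for yi, Ai in zip(y, A_list))

def relaxed_residual(A_list, b, C, U, y, mu):
    """F^r_mu(U, y): primal-feasibility block and vectorized complementarity block,
    with the imposed structure X = mu*I + U U^T and the substitution S = C - A^T y."""
    n = C.shape[0]
    X = mu * np.eye(n) + U @ U.T
    S = C - AT_op(A_list, y)
    r_feas = A_op(A_list, X) - b                          # m equations
    r_compl = (X @ S - mu * np.eye(n)).ravel(order="F")   # n^2 equations
    return np.concatenate([r_feas, r_compl])

def phi(A_list, b, C, U, y, mu):
    F = relaxed_residual(A_list, b, C, U, y, mu)
    return F @ F      # squared residual norm; a factor 1/2 may be used instead
```

Minimizing φ_μ over (U, y) for a fixed μ is exactly the least-squares problem (2.4).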
Its primary objective was to avoid symmetrization when computing search directions and the least-squares approach was applied to a standard, complete set of perturbed optimality conditions (2.1). We propose to apply to problem (2.4) a similar framework to that of interior point methods, namely: Fix μ, iterate on a tuple (U , y), and make steps towards a solution to (2.4) . This opens numerous possibilities. One could for example compute the search directions for both variables at the same time, or alternate between the steps in U and in y. Bearing in mind that (2.1) are the optimality conditions for (1.1) and assuming that a rank r optimal solution of (1.1) exists, we will derive an upper bound on the optimal residual of the least-squares problem (2.4) . Assume that a solution (X * , y * , S * ) of the KKT conditions exists such that X * = U * (U * ) T , U * ∈ R n×r , that is Then evaluating (2.5) at (U * , y * ) and using (2.6) we get Consequently, we obtain the following upper bound for the residual of the least-squares problem (2.4): where Assuming to have an estimate of ω * we are now ready to sketch in Algorithm 1 the general framework of a new relaxed interior point method. To start the procedure we need an initial guess (U 0 , y 0 ) such that U 0 is full column rank and S 0 = C − A T y 0 is positive definite, and an initial barrier parameter μ 0 > 0. At a generic iteration k, given the current barrier parameter μ k > 0, we compute an approximate solution (U k ,ȳ k ) of (2.4) such that φ μ k (U k ,ȳ k ) is below μ 2 k ω * . Then, the dual variable y k and the dual slack variable S k are updated as follows: with α k ∈ (0, 1] such that S k remains positive definite. We draw the reader's attention to the fact that although the dual variable S does not explicitly appear in optimality conditions (2.2) or (2.5), we do maintain it as the algorithm progresses and make sure that (S k , y k ) remains dual feasible. Finally, to complete the major step of the algorithm, the barrier parameter is reduced and a new iteration is performed. Note that so far we have assumed that there exists a solution to (1.1) of rank r . In case such a solution does not exist the optimal residual of the least-squares problem is not guaranteed to decrease as fast as μ 2 k . This apparently adverse case can be exploited to design an adaptive procedure that increases/decreases r without requiring the knowledge of the solution's rank. This approach will be described in Sect. 3. In the remaining part of this section we state some of the properties of the Algorithm which are essential to make it work in practice. First we note that dual constraint is always satisfied by construction and the backtracking process at Line 3 is well-defined. This is proved in Lemma 4 of [4] which is repeated below for sake of reader's convenience. Step 3 of Algorithm 1 at iteration k and S k−1 be computed at the previous iteration k − 1. Then, there exists α k ∈ (0, 1] such that S k = S k−1 + α k S is positive definite. Algorithm 1 General framework of the Relaxed Interior Point algorithm for solving low-rank SDP input: Initial (U 0 , y 0 ) with U 0 ∈ R n×r and S 0 = C −A T y 0 positive definite, μ 0 > 0, γ ≥ √ ω * , σ ∈ (0, 1). 1: for k = 1, 2, . . . do 2: Find (U k ,ȳ k ) such that (2.9) 3: If C − A Tȳ k is positive definite set α k = 1 otherwise, set y =ȳ k − y k−1 , S = −A T y and backtrack along S to ensure S k = S k−1 + α k S positive definite. Set y k = y k−1 + α k y (and X k = (U k U T k + μ k I )). 
Set μ k = σ μ k−1 6: end for Proof Assume that C −A Tȳ k is not positive definite, otherwise α k = 1. Noting that S k−1 0 by construction, it follows that S is indefinite and S k−1 + α k S 0 whenever α k is sufficiently small. In particular, since . Note that if backtracking is needed (i.e. α k < 1) to maintain the positive definiteness of the dual variable, then after updating S k in Step 5 the centrality measure X k S k − μ k I n may increase and it is not guaranteed to remain below γ μ k . Indeed, by setting S k =S k − (1 − α k ) S withS k = C − A Tȳ k , we have: that is the centrality measure may actually increase along the iterations whenever α k does not approach one as μ k goes to zero. In the following we analyse the convergence properties of Algorithm 1 when this adverse situation does not occur, namely under the following assumption: Assumption 1 Assume that there existsk > 0 such that α k = 1 for k ≥k. To the best of authors knowledge, it does not seem possible to demonstrate that eventually α k is equal to one. This is because we impose a special form of X in (2.3) and make only a weak requirement regarding the proximity of the iterate to the central path: with γ possibly greater than one. Let Assumption 1 hold. Assume that a solution of rank r of problem (1.1) exists and that the sequence {U k , y k } admits a limit point (U † , y † ). Then, Proof Assume for the sake of simplicity that the whole sequence is converging to (U † , y † ). Taking into account that lim k→∞ μ k = 0, it follows lim k→∞ U k (U k ) T +μ k I = (U † )(U † ) T . Then X † = (U † )(U † ) T has at most rank r and it is feasible as Moreover, from (2.10) and Assumption 1 it follows which implies X † S † = 0 and by construction ensures that S † is positive semidefinite being a limit point of a sequence of positive definite matrices. From the previous proposition it follows that (X † , y † , S † ) solves (2.1). Moreover, X † has rank r , unless U † is not full column rank. This situation can happen only in the case (1.1) admits a solution of rank smaller than r . In what follows for sake of simplicity we assume that the limit point U † is full column rank. Remark It is worth observing that due to the imposed structure of matrices (2.3) all iterates X k are full rank, but asymptotically they approach rank r matrix. Moreover, the minimum distance of X k to a rank r matrix is given by μ k , i.e., min rank(Y )=r X k − Y 2 = μ k , (2.12) and the primal infeasibility is bounded by γ μ k . This allows us to use the proposed methodology also when the sought solution is close to a rank r matrix ("nearly low-rank") and/or some entries in vector b are corrupted with a small amount of noise. The analysis carried out in the previous section requires the knowledge of γ ≥ √ ω * and of the rank r of the sought solution. As the scalar γ is generally not known, at a generic iteration k the optimization method used to compute an approximate minimizer of (2.4) is stopped when a chosen first-order criticality measure ψ μ goes below the threshold η 2 μ k where η 2 is a strictly positive constant. This way, the accuracy in the solution of (2.4) increases as μ k decreases. For ψ μ , we have chosen ψ μ (U , y) = ∇φ μ (U , y) 2 . Regarding the choice of the rank r , there are situations where the rank of the sought solution is not known. Below we describe a modification of Algorithm 1 where, starting from a small rank r , the procedure adaptively increases/decreases it. 
This modification is based on the observation that if a solution of rank r exists the iterative procedure used in Step 2, should provide a sequence {U k } such that the primal infeasibility also decreases with μ k . Then, at each iteration the ratio is checked. If this ratio is larger than η 1 , where η 1 is a given constant in (σ, 1) and σ is the constant used to reduce μ k , then the rank r is increased by some fixed δ r > 0 as the procedure has not been able to provide the expected decrease in the primal infeasibility. After an update of rank, the parameter μ k is not changed and δ r extra columns are appended to the current U k . As a safeguard, also a downdating strategy can be implemented. In fact, if after an increase of rank, we still have ρ k > η 1 then we come back to the previous rank and inhibit rank updates in all subsequent iterations. This is detailed in Algorithm 2 where we borrowed the Matlab notation. Variable update_r is an indicator specifying if at the previous iteration the rank was increased (update_r = up), decreased (update_r = down) or left unchanged (update_r = unch). input: The initial rank r , the rank increment/decrement δ r , initial (y 0 , U 0 ) with U 0 ∈ R n×r and y 0 ∈ R m such that Find an approximate minimizer Set y k = y k−1 + α k y. if μ k < then 7: return X k = U k U T k + μ k I . 8: else 9: Compute ρ k given in (3.1). 10: if ρ k > η 1 then 11: if update_r = unch then 12: Set r = r + δ r and update_r = up 13: Set The initial rank r should be chosen as the rank of the solution (if known) or as a small value (say 2 or 3) if it is unknown. The dimension of the initial variable U 0 is then defined accordingly. Since, for given and σ , the number of iterations to satisfy μ k < at Line 6 is predefined, the number of rank updates is predefined as well. Therefore, if an estimate of the solution rank is known, one should use it in order to define a suitable initial r . In this section we investigate the numerical solution of the nonlinear least-squares problem (2.4) . Following the derivation rules recalled in "Appendix A", we compute the Jacobian matrix J μ k ∈ R (n 2 +m)×(nr+m) of F r μ k which takes the following form: and nr ∈ R nr×nr is the unique permutation matrix such that vec(B T ) = nr vec(B) for any B ∈ R n×r , see "Appendix A". In order to apply an iterative method for approximately solving (2.4) we need to perform the action of J T μ k on a vector to compute the gradient of φ μ k . The action of J μ k on a vector is also required in case one wants to apply a Gauss-Newton approach (see Sect. 4.3). In the next section we will discuss how these computations are carried out. First, let us denote the Jacobian matrix blocks as follows: Below we will show that despite J μ k blocks contain matrices of dimension n 2 × n 2 , matrixvector products can be carried out without involving such matrices and the sparsity of the constraint matrices can be exploited. We will make use of the properties of the Kronecker product (A.1)-(A.3) and assume that if v ∈ R nr andz ∈ R n 2 then mat(v) ∈ R n×r and mat(z) ∈ R n×n . • Let v ∈ R nr and w ∈ R m and let us consider the action of J 11 and J T 11 on v and w, respectively: and (4.7) • Let v ∈ R nr andz ∈ R n 2 and let us consider the action of J T 21 J 21 and J T 21 on v andz, respectively: and • Let w ∈ R m andz ∈ R n 2 and let us consider the action of J 22 and J T 22 on w andz, respectively: and The previous analysis shows that we can perform all products involving Jacobian's blocks handling only n × n matrices. 
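A possible matrix-free realization of these products is sketched below (illustrative Python; the sign and ordering conventions are inferred from the block structure described above, since the displayed formulas are not reproduced in this text). Only n × n and n × r matrices are formed, never the Kronecker-product blocks:

```python
import numpy as np

def vec(M):
    return M.ravel(order="F")            # column-stacking vec operator

def mat(v, shape):
    return v.reshape(shape, order="F")   # inverse of vec

# A_list holds the symmetric constraint matrices A_i; U is n x r; S, X, Z are n x n.

def J11_mv(A_list, U, v):                # J11 v = A(V U^T + U V^T), v in R^{nr}
    V = mat(v, U.shape)
    M = V @ U.T + U @ V.T
    return np.array([np.sum(Ai * M) for Ai in A_list])

def J11_tmv(A_list, U, w):               # J11^T w = vec((W + W^T) U), W = sum_i w_i A_i
    W = sum(wi * Ai for wi, Ai in zip(w, A_list))
    return vec((W + W.T) @ U)

def J21_mv(U, S, v):                     # J21 v = vec((V U^T + U V^T) S)
    V = mat(v, U.shape)
    return vec((V @ U.T + U @ V.T) @ S)

def J21_tmv(U, S, z):                    # J21^T z = vec((Z S^T + S Z^T) U)
    Z = mat(z, S.shape)
    return vec((Z @ S.T + S @ Z.T) @ U)

def J22_mv(A_list, X, w):                # J22 w = -vec(X W), W = sum_i w_i A_i
    W = sum(wi * Ai for wi, Ai in zip(w, A_list))
    return vec(-X @ W)

def J22_tmv(A_list, X, z):               # J22^T z = -A(X Z)
    Z = mat(z, X.shape)
    XZ = X @ Z
    return -np.array([np.sum(Ai * XZ) for Ai in A_list])
```

With these products the gradient of φ_μ (proportional to J^T F) costs a few sparse inner products A_i • (·) plus dense products with U, consistent with the O(max{nnz(A), n^2} r + m) flop estimate given below.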
Moreover, if matrices A i are indeed very sparse, their structure can be exploited in the matrix-products in (4.7) and (4.10). (Sparsity has been exploited of course in various implementations of IPM for SDP, see e.g. [20] .) Additionally, only few elements of matrices V in (4.6) andZ in (4.12) need to be involved when products (4.5) and (4.11) are computed, respectively. More precisely, denoting with nnz(A) the number of nonzero entries of A, we need to compute nnz(A) entries of V andZ defined in (4.6) and (4.12), respectively. Noting that mat(v) ∈ R n×r and U T ∈ R r ×n , the computation of the needed entries of V amounts to In Table 1 we provide the estimate flop counts for computing various matrix-vector products with the blocks of Jacobian matrix. We consider the products that are relevant in the computation of the gradient of φ μ k and in handling the linear-algebra phase of the second order method which we will introduce in the next section. From the table, it is evident that the computation of the gradient of φ μ k requires O(max{nnz(A), n 2 }r + m) flops. Below we provide an estimate of a computational effort required by the proposed algorithm under mild assumptions: Step 3 of Algorithm 2 a line-search first-order method is used to compute an approximate Taking into account that a line-search first-order method requires in the worst-case O(μ −2 k ) iterations to achieve ∇φ(U k , y k ) ≤ μ k [23] , the computational effort of iteration k of Algorithm 2 is O(μ −2 k (n 2 r + m)) in the worst-case. Therefore, when n is large, in the early/intermediate stage of the iterative process, this effort is significantly smaller than O(n 6 ) required by a general purpose interior-point solver [2, 15] or O(n 4 ) needed by the specialized approach for nuclear norm minimization [36] . We stress that this is a worst-case analysis and in practice we expect to perform less than O(μ −2 k ) iterations of the first-order method. In case the number of iterations is of the order of O(n) the computational effort per iteration of Algorithm 2 drops to O(n 3 r + nm). Apart from all operations listed above the backtracking along S needs to ensure that S k is positive definite (Algorithm 1, Step 4) and this is verified by computing the Cholesky factorization of the matrix S k−1 + α k S, for each trial steplength α k . If the dual matrix is sparse, i.e. when matrices A i , i = 1, . . . , m and C share the sparsity patterns [43] , a sparse Cholesky factor is expected. Note that the structure of dual matrix does not change during the iterations, hence reordering of S 0 can be carried out once at the very start of Algorithm 2 and then may be reused to compute the Cholesky factorization of S k−1 + α k S at each iteration. The crucial step of our interior point framework is the computation of an approximate solution of the nonlinear least-squares problem (2.4). To accomplish the goal, a first-order approach as well as a Gauss-Newton method can be used. However, in this latter case the linear algebra phase becomes an issue, due to the large dimension of the Jacobian. Here, we propose a Nonlinear Gauss-Seidel method. We also focus on the linear algebra phase and present a matrix-free implementation well suited for structured constraint matrices as those arising in the SDP reformulation of matrix completion problems [13] . The adopted Nonlinear Gauss-Seidel method to compute (U k ,ȳ k ) at Step 3 of Algorithm 2 is detailed in Algorithm 3. 
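A possible sketch of one sweep of this nonlinear Gauss-Seidel scheme is given below (illustrative Python, reusing the matrix-free products from the previous snippet). It solves the two systems with matrices J_11^T J_11 + J_21^T J_21 and J_22^T J_22 by unpreconditioned CG through SciPy's LinearOperator interface; the Gauss-Newton right-hand sides are an assumption, since the exact systems (4.13)-(4.14) are not reproduced in this text:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_seidel_sweep(A_list, b, C, U, y, mu, cg_iters=50):
    """One sweep of the block Gauss-Seidel scheme: a Gauss-Newton correction of U
    with y frozen, followed by a Gauss-Newton correction of y with the new U.
    Uses J11_mv, J11_tmv, J21_mv, J21_tmv, J22_mv, J22_tmv from the previous sketch."""
    n, r = U.shape
    m = len(b)
    I = np.eye(n)

    # --- U-block: solve (J11^T J11 + J21^T J21) du = -(J11^T r1 + J21^T r2) by CG
    X = mu * I + U @ U.T
    S = C - sum(yi * Ai for yi, Ai in zip(y, A_list))
    r1 = np.array([np.sum(Ai * X) for Ai in A_list]) - b
    r2 = (X @ S - mu * I).ravel(order="F")
    H_u = LinearOperator((n * r, n * r),
                         matvec=lambda v: J11_tmv(A_list, U, J11_mv(A_list, U, v))
                                          + J21_tmv(U, S, J21_mv(U, S, v)))
    rhs_u = -(J11_tmv(A_list, U, r1) + J21_tmv(U, S, r2))
    du, _ = cg(H_u, rhs_u, maxiter=cg_iters)
    U = U + du.reshape(n, r, order="F")

    # --- y-block: with the updated U, solve (J22^T J22) dy = -J22^T r2 by CG
    X = mu * I + U @ U.T
    S = C - sum(yi * Ai for yi, Ai in zip(y, A_list))
    r2 = (X @ S - mu * I).ravel(order="F")
    H_y = LinearOperator((m, m),
                         matvec=lambda w: J22_tmv(A_list, X, J22_mv(A_list, X, w)))
    dy, _ = cg(H_y, -J22_tmv(A_list, X, r2), maxiter=cg_iters)
    return U, y + dy
```

In the actual algorithm the y-system is preconditioned (as discussed below) and the dual update is safeguarded by the backtracking on S described in Algorithm 1.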
The computational bottleneck of the procedure given in Algorithm 3 is the solution of the linear systems (4.13) and (4.14). Due to their large dimensions we use a CG-like approach. The coefficient matrix in (4.13) takes the form: input: y k−1 , U k−1 , μ k , η 2 from Algorithm 2 and max . 1: Set y 0 = y k−1 and U 0 = U k−1 2: for = 1, 2, . . . , max do 3: Set that is, solve the linear system Compute a Gauss-Newton step y for that is, solve the linear system and update y +1 = y + y. return U k = U +1 andȳ k = y +1 to Algorithm 2. 8: end if 9: end for and it is positive semidefinite as Q k may be rank deficient. We can apply CG to (4.13) which is known to converge to the minimum norm solution if starting from the null approximation [25] . Lettingv ∈ R nr be the unitary eigenvector associated to the maximum eigenvalue of Moreover, using 4.1 we derive the following bound as σ max (U ⊗ I n ) = σ max (U ) and σ max ((U ⊗ I n )) nr ) ≤ σ max (U ). Since both the maximum eigenvalue of S k and the maximum singular value of U k are expected to stay bounded from above, we conclude that the maximum eigenvalue of J T 11 J 11 + J T 21 J 21 remains bounded. The smallest nonzero eigenvalue may go to zero at the same speed as μ 2 k . However, in case of SDP reformulation of matrix completion problems, the term A T A acts as a regularization term and the smallest nonzero eigenvalue of J T 11 J 11 + J T 21 J 21 remains bounded away from μ k also in the later iterations of the interior point algorithm. We will report on this later on, in the numerical results section (see Fig. 1 ). Let us now consider system (4.14). The coefficient matrix takes the form and it is positive definite. We repeat the reasoning applied earlier to J T 11 J 11 + J T 21 J 21 and conclude that Taking into account that r eigenvalues of X k do not depend on μ k while the remaining are equal to μ k , we conclude that the condition number of J T In the next subsection we will show how this matrix can be preconditioned. In this subsection we assume that matrix A A T is sparse and easy to invert. At this regard we underline that in SDP reformulation of matrix-completion problems matrices A i have a very special structure that yields Let us consider a preconditioner P k of the form with This choice is motivated by the fact that we discard the term In fact, we use the approximation [46] . An alternative choice involves matrix Z k of a smaller dimension This corresponds to introducing a further approximation We will analyze spectral properties of the matrix J T 22 J 22 preconditioned with P k defined in (4.19) with Z k given in (4.20) . (4.19) with Z k given in (4.20) and σ min (A) and σ max (A) denote the minimum and maximum singular values of A, respectively. The eigenvalues of the preconditioned matrix P where ξ 1 and ξ 2 have the following forms: Proof Note that Then, Let us denote with λ M and λ m the largest and the smallest eigenvalues of matrix P Then, using the Weyl inequality we obtain Moreover, Then, noting that . Consequently, the eigenvalues of the preconditioned matrix P , and the theorem follows. Note that from the result above, as μ k approaches zero, the minimum eigenvalue of the preconditioned matrix goes to one and the maximum remains bounded. The application of P k to a vector d, needed at each CG iteration, can be performed through the solution of the (m + nq) × (m + nq) sparse augmented system: where if Z k is given by (4.20) q = n, while q = r in case (4.21) . 
In order to recover the vector u = P −1 k d, we can solve the linear system (4.23) and compute u as follows This process involves the inversion of A A T which can be done once at the beginning of the iterative process, and the solution of a linear system with matrix Note that E k has dimension n 2 × n 2 in case of choice (4.20) and dimension nr × nr in case of choice (4.21). Then, its inversion is impractical in case (4.20) . On the other hand, using (4.21) we can approximately solve (4.23) using a CG-like solver. At this regard, observe that the entries of E k decrease when far away from the main diagonal and E k can be preconditioned by its block-diagonal part, that is by where B is the operator that extracts from a matrix nr × nr its block diagonal part with n diagonal blocks of size r × r . We consider the problem of recovering a low-rank data matrix B ∈ Rn ×n from a sampling of its entries [13] , that is the so called matrix completion problem. The problem can be stated as min rank(X ) where is the set of locations corresponding to the observed entries of B and the equality is meant element-wise, that is X s,t = B s,t , for all (s, t) ∈ . Let m be the cardinality of and r be the rank of B. A popular convex relaxation of the problem [13] consists in finding the minimum nuclear norm ofX that satisfies the linear constraints in (5.1), that is, solving the following heuristic optimization min X * s.t.X = B , (5.2) where the nuclear norm · * ofX is defined as the sum of its singular values. Candès and Recht proved in [13] that if is sampled uniformly at random among all subset of cardinality m then with large probability, the unique solution to (5.2) is exactly B, provided that the number of samples obeys m ≥ Cn 5/4 r logn, for some positive numerical constant C. In other words, problem (5.2) is "formally equivalent" to problem (5.1). Let whereX ∈ Rn ×n is the matrix to be recovered and W 1 , W 2 ∈ SRn ×n . Then problem (5.2) can be stated as an SDP of the form (1.1) as follows where for each (s, t) ∈ the matrix st ∈ Rn ×n is defined element-wise for k, l = 1, . . . ,n as see [39] . We observe that primal variable X takes the form (5.3) with n = 2n, the symmetric matrix C in the objective of (1.1) is a scaled identity matrix of dimension n × n. The vector b ∈ R m is defined by the known elements of B and, for i = 1, . . . , m, each constraint matrix A i , corresponds to the known elements of B stored in b i . Matrices A i have a very special structure that yields nice properties in the packed matrix A. Since every constraint matrix has merely two nonzero entries the resulting matrix A has 2m nonzero elements and its density is equal to 2n −2 . Moreover, A A T = 1 2 I m and A(I n ) 2 = 0. We now discuss the relationship between a rank r solutionX of problem (5.2) and a rank r solution X of problem (5.4). Proof Let X = Q Q T with Q ∈ R 2n×r and = R r ×r be the singular value decomposition that isX = Q 1 Q T 2 has rank r . To prove the second part of the proposition, letX = Q V T with Q, V ∈ Rn ×r and = R r ×r be the SVD factorization ofX . We get the proposition by defining W 1 = Q Q T and W 2 = V V T and obtaining X = Q V Q T V T . Assume that X has the form with U ∈ R n×r full column rank and μ ∈ R, thenX has rank r. If X is a rank r solution of (5.4), thenX is a rank r solution of (5.2). Viceversa, ifX is a rank r solution of (5.2), then (5.4) admits a rank r solution. Proof The first statement follows from the equivalence between problems (5.4) and (5.2) [19, Lemma 1] . 
LetX be a rank r optimal solution of (5.2), t * = X * and Q V T , with Q, V ∈ Rn ×r and ∈ R r ×r , be the SVD decomposition ofX . Let us define X = W 1X X T W 2 with W 1 = Q Q T and W 2 = V V T . Then X solves (5.4). In fact, X is positive semidefinite and 1 2 I • X = 1 2 (T race(W 1 ) + T race(W 2 )) = X * = t * . This implies that t * is the optimal value of (5.4). In fact, if we had Y such that then by [19, Lemma 1] there would existȲ such that Ȳ * < t * , that is Ȳ * < X * = t * . This is a contradiction as we assumed that t * is the optimal value of (5.2). Assuming that a rank r solution to (5.2) exists, the above analysis justifies the application of our algorithm to search for a rank r solution of the SDP reformulation (5.4) of (5.2). We also observe that at each iteration our algorithm computes an approximation X k of the form X k = U k U T k + μ k I n with U k ∈ R n×r and μ k > 0. Then, if at each iteration U k is full column rank, by Corollary 5.2, it follows that we generate a sequence {X k } such that X k has exactly rank r at each iteration k and it approaches a solution of (5.2). Finally, let us observe that m 0 is the level of noise. Then, we solved problem (5.4) using the corrupted datâ B (s,t) to form the vector b. Note that, in this case A(B) − b 2 ≈ η √ m. In order to take into account the presence of noise we set = max(10 −4 , 10 −1 η) in Algorithm 2. Results of these runs are collected in Table 5 where we considered η = 10 −1 and started with the target rank r . In Table 5 we also report Table 5 IPLR-GS_P on noisy matrices (noise level η = 10 −1 ) that is the root-mean squared error per entry. Note that the root-mean error per entry in data B is of the order of the noise level 10 −1 , as well as A(B) − b 2 / √ m. Then, we claim to recover the matrix with acceptable accuracy, corresponding to an average error smaller than the level of noise. In this subsection we compare the performance of IPLR_GS_P, OptSpace and Incremental OptSpace on mildly ill-conditioned problems with exact and noisy observations. We first consider exact observation and vary the condition number of the matrix that has to be recovered κ. We fixedn = 600 and r = 6 and, following [29] , generated random matrices with a prescribed condition number κ and rank r as follows. Given a random matrix B generated as in the previous subsection, let Q V T be its SVD decomposition andQ andṼ be the matrices formed by the first r columns of Q and V , respectively. Then, we formed the matrixB that has to be recovered asB =Q˜ Ṽ T , where˜ is a r × r diagonal matrix with diagonal entries equally spaced betweenn andn/κ. In Fig. 5 we plot the RMSE value against the condition number for all the three solvers considered, using the 13% of the observations. We can observe, as noticed in [29] , that OptSpace does not manage to recover mildly ill-conditioned matrices while Incremental OptSpace improves significantly over OptSpace. According to [29] , the convergence difficulties of OptSpace on these tests have to be ascribed to the singular value decomposition of the trimmed matrix needed in Step 3 of OptSpace. In fact, the singular vector corresponding to the smallest singular value cannot be approximated with enough accuracy. On the other hand, our approach is more accurate than Incremental OptSpace and its behaviour only slightly deteriorates as κ increases. Now, let us focus on the case of noisy observations. We first fixed κ = 200 and varied the noise level. In Fig. 
6 we plot the RMSE value against the noise level for all the three solvers considered, using the 20% of observations. Also in this case IPLR-GS_P is able to recover the matrixB with acceptable accuracy, corresponding to an average error smaller than the level of noise, and outperforms both OptSpace variants when the noise level is below 0.8. In fact, OptSpace managed to recoverB only with a corresponding R M S E of the order of 10 −1 for any tested noise level, consistent only with the larger noise level tested. In order to get a better insight into the behaviour of the method on mildly ill-conditioned and noisy problems, we fixed κ = 100, noise level η = 0.3 and varied the percentage of known entries from 8.3% to 50%, namely we set m = 30, 000, 45, 000, 60, 000, 120, 000, 180, 000. In Fig. 7 the value of R M S E is plotted against the percentage of known entries. The oracle error value R M S E or = η (n1r − r 2 )/m, given in [12] is plotted, too. We observe that in our experiments IPLR-GS_P recovers the sought matrix with RMSE values always smaller than 1.3R M S E or , despite the condition number of the matrix. This is not the case for OptSpace and Incremental OptSpace; OptSpace can reach a comparable accuracy only if the percentage of known entries exceeds 30%. As expected, for all methods the error decreases as the number of subsampled entries increases. In summary, for mildly ill-conditioned random matrices our approach is more reliable than OptSpace and Incremental OptSpace as the latter algorithms might struggle with computing the singular vectors of the sparsified data matrix accurately, and they cannot deliver precision comparable to that of IPLR. For the sake of completeness, we remark that we have tested OptSpace also on the well-conditioned random matrices reported in Tables 2, 4 We now test the effectiveness of the rank updating/downdating strategy described in Algorithm 2. To this purpose, we run IPLR-GS_P starting from r = 1, with rank increment/decrement δ r = 1 and report the results in Table 6 forn = 600, 800, 1000. In all runs, the target rank has been correctly identified by the updating strategy and the matrix B is well-recovered. Runs in italic have been obtained allowing 10 inner Gauss-Seidel iterations. In fact, 5 inner Gauss-Seidel iterations were not enough to sufficiently reduce the residual in (2.4) and the procedure did not terminate with the correct rank. Comparing the values of the cpu time in Tables 2 and 6 we observe that the use of rank updating strategy increases the overall time; on the other hand, it allows to adaptively modify the rank in case a solution of (5.4) with the currently attempted rank does not exist. The typical updating behaviour is illustrated in Fig. 8 where we started with rank 1 and reached the target rank 5. In the first eight iterations a solution of the current rank does not exist and therefore the procedure does not manage to reduce the primal infeasibility as expected. Then, the rank is increased. At iteration 9 the correct rank has been detected and the primal infeasibility drops down. Interestingly, the method attempted rank 6 at iteration 13, but quickly corrected itself and returned to rank 5 which was the right one. The proposed approach handles well the situation where the matrix which has to be rebuilt is nearly low-rank. We recall that by Corollary 5.2 we generate a low-rank approximation X k , while the primal variable X k is nearly low-rank and gradually approaches a low-rank solution. 
Then, at termination, we approximate the nearly low-rank matrix that has to be recovered with the low-rank solution approximation. Letting σ 1 ≥ σ 2 ≥ · · · ≥ σn be the singular values of B, we perturbed each singular value of B by a random scalar ξ = 10 −3 η, where η is drawn from the standard normal distribution, and using the SVD decomposition of B we obtain a nearly low-rank matrixB. We applied IPLR-GS_P to (5.4) with the aim to recover the nearly low-rank matrixB with tolerance in the stopping criterion set to = 10 −4 . Results reported in Table 7 are obtained starting from r = 1 in the rank updating strategy. In the table we also report the rankr of the rebuilt matrixX . The run corresponding to rank 8, in italic in the table, has been performed allowing a maximum of 10 inner Gauss-Seidel iterations. We observe that the method always rebuilt the matrix with accuracy consistent with the stopping tolerance. The primal infeasibility is larger than the stopping tolerance, as data b are obtained sampling a matrix which is not low-rank and therefore the method does not manage to push primal infeasibility below 10 −3 . Finally we note that in some runs (rank equal to 4,5,6) the returned matrixX has a rankr larger than that of the original matrix B. However, in this situation we can observe thatX is nearly-low rank as σ i = O(10 −3 ), i = r + 1, . . . ,r while σ i 10 −3 , i = 1, . . . , r . Therefore the matrices are well rebuilt for each considered rank r and the presence of small singular values does not affect the updating/downdating procedure. In this section we discuss matrix completion problems arising in diverse applications as the matrix to be recovered represents city-to-city distances, a grayscale image, game parameters in a basketball tournament and total number of COVID-19 infections. We now consider an application of matrix completion where one wants to find a low-rank approximation of a matrix that is only partially known. As the first test example, we consider a 312 × 312 matrix taken from the "City Distance Dataset" [10] and used in [11] , that represents the city-to-city distances between 312 cities in the US and Canada computed from latitude/longitude data. We sampled the 30% of the matrix G of geodesic distances and computed a low-rank approximationX by IPLR-GS_P inhibiting rank updating/downdating and using = 10 −4 . We compared the obtained solution with the approximationX os computed by OptSpace and the best rank-r approximationX r , computed by truncated SVD (TSVD), that requires the knowledge of the full matrix G. We considered some small values of the rank (r = 3, 4, 5) and in Table 8 reported the errors E i p = G −X F / G F , E os = G −X os F / G F and E r = G −X r F / G F . We remark that the matrix G is not nearly-low-rank, and our method correctly detects that there does not exist a feasible rank r matrix as it is not able to decrease the primal infeasibility below 1e0. On the other hand the error E i p in the provided approximation, obtained using only the 23% of the entries, is the same as that of the best rank-r approximationX r . Note that computing the 5-rank approximation is more demanding. In fact the method requires on average: 3.4 Gauss-Seidel iterations, 37 unpreconditioned CG iterations for computing U and 18 preconditioned CG iterations for computing y. In contrast, the 3-rank approximation requires on average: 3.8 Gauss-Seidel iterations, 18 unpreconditioned CG iterations for computing U and 10 preconditioned CG iterations for computing y. 
As a final comment, we observe that IPLR-GS fails when r = 5 since unpreconditioned CG struggles with the solution of (4.14). The computed direction y is not accurate enough and the method fails to maintain S positive definite within the maximum number of allowed backtracks. Applying the preconditioner cures the problem because more accurate directions become available. Values of the error E op obtained with OptSpace are larger than E r . However it is possible to attain comparable values for r = 3 and r = 5 under (a) True image (b) 50% random missing pixels (c) 7% nonrandom missing pixels Fig. 9 The Lake test true image and the inpainted versions the condition that the default maximum number of iterations of OptSpace is increased 10 times. In these cases, OptSpace is twice and seven time faster, respectively. As the second test example, we consider the problem of computing a low rank approximation of an image that is only partially known because some pixels are missing and we analyzed the cases when the missing pixels are distributed both randomly and not randomly (inpainting). To this purpose, we examined the Lake 512 × 512 original grayscale image 2 shown in Fig. 9c and generated the inpainted versions with the 50% of random missing pixels (Fig. 9b) and with the predetermined missing pixels (Fig. 9c) . We performed tests fixing the rank to values ranging from 10 to 150 and therefore used IPLR-BB which is computationally less sensitive than IPLR-GS to the magnitude of the rank. In Fig. 10 we plot the quality of the reconstruction in terms of relative error E and PSNR (Peak-Signal-to-Noise-Ratio) against the rank, for IPLR-BB, OptSpace and truncated SVD. We observe that when the rank is lower than 40, IPLR-BB and TSVD give comparable results, but when the rank increases the quality obtained with IPLR-BB does not improve. As expected, by adding error information available only from the knowledge of the full matrix, the truncated SVD continues to improve the accuracy as the rank increases. The reconstructions produced with OptSpace display noticeably worse values of the two relative errors (that is, larger E and smaller PSNR, respectively) despite the rank increase. Figure 11 shows that IPLR-BB is able to recover the inpainted image in Fig. 9c and that visually the quality of the reconstruction benefits from a larger rank. Images restored by OptSpace are not reported since the relative PSNR values are approximately 10 points lower than those obtained with IPLR-BB. The quality of the reconstruction of images Fig. 9b and c obtained with OptSpace cannot be improved even if the maximum number of iterations is increased tenfold. Matrix completion is used in sport predictive models to forecast match statistics [27] . We consider the dataset concerning the NCAA Men's Division I Basketball Championship, in which each year 364 teams participate. 3 The championship is organized in 32 groups, called Conferences, whose winning teams face each other in a final single elimination tournament, called March Madness. Knowing match statistics of games played in the regular Championship, the aim is to forecast the potential statistics of the missing matches played in the March Madness phase. In our tests, we have selected one match statistic of the 2015 Championship, namely the fields goals attempted (FGA) and have built a matrix where teams are placed on rows and columns and nonzero i j-values correspond to the FGA made by team i and against team j. 
In this season, only 3771 matches were held and therefore we obtained a rather sparse 364 × 364 matrix of FGA statistics; in fact, only the 5.7% of entries of the matrix that has to be predicted is known. To validate the quality of our predictions we used the statistics of the 134 matches actually played by the teams in March Madness. We verified that in order to obtain reasonable predictions of the missing statistics the rank of the recov- ered matrix has to be sufficiently large. Therefore we use IPLR-BB setting the starting rank r = 20, rank increment δ r = 10 and = 10 −3 . The algorithm terminated recovering matrix X of rank 30. In Fig. 12 we report the bar plot of the exact and predicted values for each March Madness match. The matches have been numbered from 1 to 134. We note that except for 12 mispredicted statistics, the number of fields goals attempted is predicted reasonably well. In fact, we notice that the relative error between the true and the predicted statistic is smaller than 20% in the 90% of predictions. On this data set, OptSpace gave similar results to those in Fig. 12 returning a matrix of rank 2. We now describe a matrix completion problem where data are the number of COVID-19 infections in provincial capitals of regions in the North of Italy. Each row and column of the matrix corresponds to a city and to a day, respectively, so that the i j-value corresponds to the total number of infected people in the city i on the day j. We used data made available by the Italian Protezione Civile 4 regarding the period between March 11th and April 4th 2020, that is, after restrictive measures have been imposed by the Italian Government until the date of paper submission. We assume that a small percentage (5%) of data is not available to simulate the real case because occasionally certain laboratories do not communicate data to the central board. In such a case our aim is to recover this missing data and provide an estimate of the complete set of data to be used to make analysis and forecasts of the COVID-19 spread. Overall, we build a 47 × 24 dense matrix and attempt to recover 56 missing entries in it. We use IPLR-GS_P with starting rank r = 2, rank increment δ r = 1 and = 10 −4 and we have obtained a matrixX of rank 2. The same rank is obtained using OptSpace but only if the maximum number of its iterations is increased threefold. In Fig. 13 both the predicted and actual data (top) and the percentage error (bottom) are plotted using the two solvers. We observe that IPLR-GS_P yields an error below 10% except for 8 cases and in the worst case it reaches 22%. The error obtained with OptSpace exceeds 10% in 15 cases and in one case reaches 37%. The good results obtained with IPLR-GS_P for this small example are encouraging for applying the matrix completion approach to larger scale data sets. We have presented a new framework for an interior point method for low-rank semidefinite programming. The method relaxes the rigid IPM structure and replaces the general matrix X with the special form (2.3) which by construction enforces a convergence to a low rank solution as μ goes to zero. Therefore effectively instead of requiring a general n × n object, the proposed method works with an n × r matrix U , which delivers significant storage and cpu time savings. It also handles well problems with noisy data and allows to adaptively correct the (unknown) rank. 
We performed extensive numerical tests on the SDP reformulation of matrix completion problems, using both first- and second-order methods to compute search directions. The convergence of the method has been analysed under the assumption that eventually the steplength α_k is equal to one (Assumption 1). Although this is a seemingly strong assumption, it held in all our numerical tests except for the sports game results predictions, where the number of known entries of the matrix is extremely low. Our numerical experience shows the efficiency of the proposed method and its ability to handle large scale matrix completion problems and medium scale problems arising in real-life applications. A comparison with OptSpace reveals that the proposed method is versatile and delivers more accurate solutions when applied to ill-conditioned problems or to some classes of real-life applications. It is generally slower than methods specially designed for matrix completion, such as OptSpace, but it has potentially a wider applicability.
[1] Interior-Point Methods for Large-scale Cone Programming
[2] Handbook of Semidefinite Programming
[3] Two point step size gradient methods
[4] An inexact dual logarithmic barrier method for solving sparse semidefinite programs
[5] Solving large-scale sparse semidefinite programs for combinatorial optimization
[6] The non-convex Burer-Monteiro approach works on smooth semidefinite programs
[7] Deterministic guarantees for Burer-Monteiro factorizations of smooth semidefinite programs
[8] A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization
[9] Local minima and convergence in low-rank semidefinite programming
[10] Cities - City Distance Datasets
[11] A singular value thresholding algorithm for matrix completion
[12] Matrix completion with noise
[13] Exact matrix completion via convex optimization
[14] Matrix completion via an alternating direction method
[15] Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications
[16] A scaled Gauss-Newton primal-dual search direction for semidefinite optimization
[17] On the steplength selection in gradient methods for unconstrained optimization
[18] Notes on Matrix Calculus
[19] A rank minimization heuristic with application to minimum order system approximation
[20] Exploiting sparsity in primal-dual interior-point methods for semidefinite programming
[21] Polynomial complexity for a Nesterov-Todd potential reduction method with inexact search directions
[22] Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming
[23] On the worst-case evaluation complexity of non-monotone line search algorithms
[24] Convergence behavior of interior-point algorithms
[25] Pseudoinverses and conjugate gradients
[26] Low-rank matrix completion using nuclear norm minimization and facial reduction
[27] March madness prediction: a matrix completion approach
[28] Matrix completion from a few entries
[29] OptSpace: a gradient descent algorithm on the Grassmann manifold for matrix completion
[30] On the solution of large-scale SDP problems by the modified barrier method using iterative solvers
[31] An interior-point method for large-scale ℓ1-regularized logistic regression
[32] The Gauss-Newton direction in semidefinite programming
[33] ADMiRA: atomic decomposition for minimum rank approximation
[34] Low-rank semidefinite programming: theory and applications, Foundations and Trends in Optimization
[35] The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices
[36] Interior-point method for nuclear norm approximation with application to system identification
[37] Fixed point and Bregman iterative methods for matrix rank minimization
[38] The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem
[39] Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
[40] Semidefinite optimization
[41] Solving some large scale semidefinite programs via the conjugate residual method
[42] An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems
[43] Chordal graphs and semidefinite optimization
[44] Semidefinite programming
[45] An alternating direction algorithm for matrix completion with nonnegative factors
[46] Modified interior-point method for large-and-sparse low-rank semidefinite programs

Let us also recall several useful formulae which involve Kronecker products.
For each of them, we assume that matrix dimensions are consistent with the multiplications involved. Let A, B, C, D be matrices of suitable dimensions and let X be a matrix of compatible size. Then

  (A.1)  vec(AXB) = (B^T ⊗ A) vec(X),
  (A.2)  (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD),
  (A.3)  (A ⊗ B)^T = A^T ⊗ B^T,

and vec(X^T) = Π vec(X), where Π is a permutation matrix which transforms vec(X) to vec(X^T). Moreover, assume that A and B are square matrices of size n and m, respectively. Let λ_1, ..., λ_n be the eigenvalues of A and μ_1, ..., μ_m be those of B (listed according to multiplicity). Then the eigenvalues of A ⊗ B are λ_i μ_j, i = 1, ..., n, j = 1, ..., m. Finally, following [18], we recall some rules for derivatives of matrices that can easily be derived by applying the standard differentiation rules for vector functions (chain rule, composite functions) and identifying dG(X)/dX with the vectorization d vec G(X)/d vec(X). In particular, given matrices A ∈ R^{n×m} and B ∈ R^{p×q} and X defined accordingly, we have d vec(AXB)/d vec(X) = B^T ⊗ A.
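These identities are easy to check numerically; the short illustrative snippet below uses NumPy's column-major ravel as the vec operator:

```python
import numpy as np

rng = np.random.default_rng(0)
vec = lambda M: M.ravel(order="F")     # column-stacking vec operator

# (A.1): vec(A X B) = (B^T kron A) vec(X)
A = rng.standard_normal((3, 4)); X = rng.standard_normal((4, 5)); B = rng.standard_normal((5, 2))
print(np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X)))        # True

# (A.2): (A kron B)(C kron D) = (A C) kron (B D)
Bm = rng.standard_normal((2, 5)); C = rng.standard_normal((4, 2)); D = rng.standard_normal((5, 3))
print(np.allclose(np.kron(A, Bm) @ np.kron(C, D), np.kron(A @ C, Bm @ D)))  # True
```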