## Invited Session Fri.2.H 1058

#### Friday, 13:15 - 14:45 h, Room: H 1058

**Cluster 10: Implementations & software** [...]

### Conic linear programming

**Chair: Christoph Helmberg**

**Friday, 13:15 - 13:40 h, Room: H 1058, Talk 1**

**Christoph Helmberg**

Speeding up the spectral bundle method by solving the quadratic semidefinite subproblems with a PSQMR approach

**Coauthor: Kim-Chuan Toh**

**Abstract:**

The spectral bundle method is tuned to solving semidefinite programs (SDP) with large semidefinite matrix variables having constant or bounded trace. It exploits the fact that these can be formulated equivalently as problems of minimizing the maximum eigenvalue of affine matrix functions, and uses repeated eigenvalue computations to form a semidefinite model of the objective function. The next candidate point is determined as the minimizer of the model plus a quadratic augmenting term. Solving this quadratic semidefinite subproblem by interior point methods formed the bottleneck whenever bundle sizes increased. We report on our experience with a preconditioned symmetric quasi-minimal residual (PSQMR) approach for computing the Newton step within this interior point method, as in the package QSDP. On our test instances this results in significant savings. Indeed, the cost of the interior point method is then negligible in comparison to the cost of computing the eigenvalues and the cost matrices of the quadratic semidefinite subproblem. In combination with cutting plane approaches, however, results are less favorable and better preconditioning techniques still need to be developed.
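As a rough illustration of the idea (not the authors' implementation), the sketch below replaces a direct factorization of a symmetric positive definite "Newton-like" system with a preconditioned Krylov solve, using SciPy's MINRES (a method closely related to SQMR) and a simple Jacobi preconditioner. The matrix `H`, the preconditioner, and all sizes are illustrative assumptions; the actual Newton system in the talk is a structured operator on symmetric matrices.

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(0)
n = 200
# Symmetric positive definite stand-in for the Newton system of the
# quadratic semidefinite subproblem (purely illustrative).
B = rng.standard_normal((n, n))
H = B @ B.T + n * np.eye(n)
b = rng.standard_normal(n)

# Jacobi (diagonal) preconditioner -- a simple stand-in for the
# problem-specific preconditioners discussed in the talk.
d = np.diag(H)
M = LinearOperator((n, n), matvec=lambda v: v / d)

# Preconditioned symmetric Krylov solve instead of a direct factorization.
x, info = minres(H, b, M=M)
residual = np.linalg.norm(H @ x - b) / np.linalg.norm(b)
print(info, residual)
```

Each iteration needs only matrix-vector products with `H` and one application of the preconditioner, which is why such a solver can make the interior point cost per step negligible when the system is large but well preconditioned.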

**Friday, 13:45 - 14:10 h, Room: H 1058, Talk 2**

**Florian Jarre**

Solving large scale problems over the doubly nonnegative cone

**Coauthor: Thomas Davi**

**Abstract:**

We consider large scale problems over the doubly nonnegative cone. Under suitable assumptions, regularity of the standard reformulation splitting the doubly nonnegative variable into a semidefinite and a nonnegative part can be established. The application of first order methods to this reformulation is motivated by the fact that the cost per iteration does not increase when nonnegativity constraints are added. Preliminary numerical experiments illustrate that highly accurate numerical solutions can be obtained even for large scale problems.
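A toy sketch (not the authors' code) illustrates why nonnegativity is cheap: the doubly nonnegative cone is the intersection of the positive semidefinite cone and the nonnegative orthant, and projecting onto the orthant is an elementwise clip whose cost is negligible next to the eigendecomposition needed for the semidefinite part. Alternating projections below are a simplified stand-in for the splitting reformulation; all names and sizes are illustrative.

```python
import numpy as np

def proj_psd(X):
    """Project a symmetric matrix onto the PSD cone by clipping negative
    eigenvalues (the dominant cost: one eigendecomposition)."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return (V * np.maximum(w, 0)) @ V.T

def proj_nonneg(X):
    """Project onto the nonnegative orthant -- an O(n^2) elementwise clip,
    negligible next to the eigendecomposition above."""
    return np.maximum(X, 0)

# Alternating projections onto the two cones whose intersection is the
# doubly nonnegative cone.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 50))
X = (X + X.T) / 2
for _ in range(300):
    X = proj_nonneg(proj_psd(X))

w = np.linalg.eigvalsh(X)
print(X.min(), w.min())
```

The per-iteration cost with or without the nonnegativity step is dominated by `proj_psd`, matching the motivation stated in the abstract.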

**Friday, 14:15 - 14:40 h, Room: H 1058, Talk 3**

**Kim-Chuan Toh**

An inexact accelerated proximal gradient method for large scale convex quadratic SDP

**Coauthors: Kaifeng Jiang, Defeng Sun**

**Abstract:**

The accelerated proximal gradient (APG) method has proven to be highly efficient in solving some large scale structured convex optimization (possibly nonsmooth) problems, including nuclear norm minimization problems in matrix completion. The method has superior worst-case iteration complexity over the classical projected gradient method, and usually has good practical performance on problems with appropriate structures. Here, we extend the APG method to the inexact setting where the subproblems are only solved approximately, and show that it enjoys the same worst-case iteration complexity as the exact counterpart if the subproblems are progressively solved to sufficient accuracy. We apply our inexact APG method to solve convex quadratic SDP (QSDP) problems of the form: min_{*x*} { ½ ⟨*x*, *Q*(*x*)⟩ + ⟨*c*, *x*⟩ | *A*(*x*) = *b*, *x* ⪰ 0 }. The subproblem in each iteration is solved by a semismooth Newton-CG (SSNCG) method with warm-start using the iterate from the previous iteration. Our APG-SSNCG method is demonstrated to be efficient for QSDP problems whose positive semidefinite linear maps *Q* are highly ill-conditioned.
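The exact APG iteration can be sketched in FISTA form on a simplified stand-in problem: a well-conditioned convex quadratic over the nonnegative orthant instead of the PSD cone, with no linear constraints (all names below are illustrative assumptions, not the authors' APG-SSNCG implementation, whose subproblems require the SSNCG solves described above).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
B = rng.standard_normal((n, n))
Q = B.T @ B / n + np.eye(n)      # well-conditioned positive definite term
c = rng.standard_normal(n)
L = np.linalg.eigvalsh(Q)[-1]    # Lipschitz constant of the gradient

def grad(x):
    return Q @ x + c

# FISTA: accelerated proximal gradient with projection onto the
# nonnegative orthant as the proximal step.
x = np.zeros(n); y = x.copy(); t = 1.0
for _ in range(500):
    x_new = np.maximum(y - grad(y) / L, 0)          # prox / projection step
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2        # momentum parameter
    y = x_new + ((t - 1) / t_new) * (x_new - x)     # extrapolation
    x, t = x_new, t_new

# KKT check: feasibility, dual feasibility, complementarity.
g = grad(x)
print(x.min(), g.min(), abs(x * g).max())
```

For the QSDP in the abstract the projection is replaced by a constrained subproblem that is only solved approximately, which is exactly the inexact setting the complexity result addresses; the demo uses a well-conditioned `Q`, whereas the ill-conditioned case is where the SSNCG subproblem solver matters.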