Invited Session Mon.3.H 1028

Monday, 15:15 - 16:45 h, Room: H 1028

Cluster 21: Sparse optimization & compressed sensing [...]

Global rate guarantees in sparse optimization

 

Chair: Michel Baes

 

 

Monday, 15:15 - 15:40 h, Room: H 1028, Talk 1

Wotao Yin
Augmented L1 and nuclear-norm minimization with a globally linearly convergent algorithm

Coauthor: Ming-Jun Lai

 

Abstract:
L1 minimization tends to give sparse solutions, while least squares (LS) gives dense solutions. We show that minimizing the weighted sum of L1 and LS, with an appropriately small weight on the LS term, can efficiently recover sparse vectors with provable recovery guarantees. For compressive sensing, exact and stable recovery guarantees can be given in terms of the null-space property, restricted isometry property, spherical section property, and “RIPless” property of the sensing matrix. Moreover, the Lagrange dual problem of L1+LS minimization is convex, unconstrained, and differentiable; hence, a rich set of classical techniques such as gradient descent, line search, Barzilai-Borwein steps, quasi-Newton methods, and Nesterov’s acceleration can be directly applied. We show that the gradient descent iteration is globally linearly convergent, and we give an explicit rate. This is the first global linear convergence result among gradient-based algorithms for sparse optimization. We also present an algorithm based on limited-memory BFGS and demonstrate its superior performance over several existing L1 solvers.
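As a rough illustration of the approach described above, the sketch below runs plain gradient ascent on the smooth, unconstrained dual of the augmented L1 model min ||x||_1 + 1/(2*alpha)*||x||_2^2 subject to Ax = b. The particular step size, iteration count, and recovery formula x(y) = alpha*shrink(A^T y, 1) are standard choices consistent with this dual, not the authors' tuned algorithm.

```python
import numpy as np

def shrink(z, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def augmented_l1_dual_ascent(A, b, alpha=10.0, step=None, iters=500):
    """Sketch: gradient ascent on the dual of
        min ||x||_1 + 1/(2*alpha)*||x||_2^2  s.t.  Ax = b.
    The dual gradient at y is b - A @ x(y), with x(y) = alpha*shrink(A^T y, 1).
    Step size and iteration count are illustrative assumptions.
    """
    m, n = A.shape
    if step is None:
        # safe constant step for the alpha*||A||_2^2 - Lipschitz dual gradient
        step = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
    y = np.zeros(m)
    for _ in range(iters):
        x = alpha * shrink(A.T @ y, 1.0)   # primal point recovered from y
        y = y + step * (b - A @ x)         # plain gradient (ascent) step on the dual
    return alpha * shrink(A.T @ y, 1.0)
```

Because the dual is smooth and unconstrained, the plain gradient step above could in principle be replaced by Barzilai-Borwein steps, a quasi-Newton (e.g., L-BFGS) update, or Nesterov acceleration, as the abstract indicates.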

 

 

Monday, 15:45 - 16:10 h, Room: H 1028, Talk 2

Lin Xiao
A proximal-gradient homotopy method for the sparse least-squares problem

Coauthor: Tong Zhang

 

Abstract:
We consider the l1-regularized least-squares problem in the context of sparse recovery or compressed sensing. The standard proximal gradient method (iterative soft-thresholding) has low computational cost per iteration but a rather slow convergence rate. Nevertheless, when the solution is sparse, it often exhibits fast local linear convergence in its final stage. We exploit this local linear convergence using a homotopy continuation strategy, i.e., we minimize the objective for a sequence of decreasing values of the regularization parameter, and use the approximate solution at the end of each stage to warm-start the next stage. Similar strategies have been studied in the literature, but there has been no theoretical analysis of their global iteration complexity. We show that under suitable assumptions for sparse recovery, the proposed homotopy strategy ensures that all iterates along the homotopy solution path are sparse. Therefore the objective function is effectively strongly convex along the path, and geometric convergence at each stage can be established. As a result, the overall iteration complexity of our method is O(log(1/ε)) for finding an ε-optimal solution.
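The following minimal sketch shows the continuation idea in the plainest form: iterative soft-thresholding (ISTA) inner loops, a regularization parameter decreased geometrically from ||A^T b||_inf down to the target value, and warm starts between stages. The decrease factor, fixed inner iteration count, and absence of a stage-wise stopping test are illustrative simplifications, not the method analyzed in the talk.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_grad_homotopy(A, b, lam_target, eta=0.7, inner_iters=100):
    """Sketch: proximal-gradient homotopy for
        min_x 0.5*||Ax - b||_2^2 + lam*||x||_1,
    decreasing lam geometrically (factor eta) and warm-starting each stage.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    lam = np.max(np.abs(A.T @ b))        # largest lambda with a nonzero solution
    while lam > lam_target:
        lam = max(eta * lam, lam_target)
        for _ in range(inner_iters):     # ISTA steps at the current lambda
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam / L)
    return x
```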

 

 

Monday, 16:15 - 16:40 h, Room: H 1028, Talk 3

Michel Baes
First-order methods for eigenvalue optimization

Coauthors: Michael Buergisser, Arkadi Nemirovski

 

Abstract:
Many semidefinite programming problems encountered in practice can be recast as minimizing the maximal eigenvalue of a convex combination of symmetric matrices. In this talk, we describe and analyze a series of first-order methods for solving this problem when the input matrices are large (of dimension 1000 to 10000 and more) and mildly sparse. We propose several acceleration strategies, notably in the step-size selection and via randomization, and illustrate the theoretical and practical efficiency of the new approach.
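For orientation only, here is a generic first-order sketch of the underlying problem: minimizing lambda_max of a convex combination A(x) = sum_i x_i A_i over the simplex, using the standard smoothing f_mu(x) = mu*log tr exp(A(x)/mu) and entropic mirror-descent (multiplicative-weights) updates. The smoothing parameter, step size, and update rule are generic textbook choices assumed for illustration; they are not the accelerated or randomized variants developed in the talk.

```python
import numpy as np

def smoothed_eigmax_mirror_descent(mats, mu=0.01, step=1.0, iters=200):
    """Sketch: minimize lambda_max(sum_i x_i * A_i) over the simplex via
    smoothing (softmax over eigenvalues) and entropic mirror descent."""
    k = len(mats)
    x = np.full(k, 1.0 / k)                        # start at the simplex center
    for _ in range(iters):
        M = sum(xi * Ai for xi, Ai in zip(x, mats))
        w, V = np.linalg.eigh(M)                   # spectral decomposition of A(x)
        p = np.exp((w - w.max()) / mu)             # softmax weights on eigenvalues (shifted for stability)
        p /= p.sum()
        G = V @ np.diag(p) @ V.T                   # gradient certificate matrix of f_mu
        grad = np.array([np.sum(G * Ai) for Ai in mats])   # components <A_i, G>
        x *= np.exp(-step * grad)                  # multiplicative-weights step
        x /= x.sum()                               # renormalize onto the simplex
    return x
```

For the large, mildly sparse matrices mentioned in the abstract, the full eigendecomposition above would be the bottleneck; the talk's methods are aimed precisely at avoiding such expensive exact linear algebra.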

 
