Invited Session Wed.2.H 1028

Wednesday, 13:15 - 14:45 h, Room: H 1028

Cluster 21: Sparse optimization & compressed sensing

Efficient first-order methods for sparse optimization and its applications

 

Chair: Shiqian Ma

 

 

Wednesday, 13:15 - 13:40 h, Room: H 1028, Talk 1

Shiqian Ma
An alternating direction method for latent variable Gaussian graphical model selection

 

Abstract:
Latent variable Gaussian graphical model selection (LVGGMS) is an important topic in statistics and machine learning. We propose an alternating direction method (ADM) that solves a convex formulation of LVGGMS proposed by Chandrasekaran, Parrilo and Willsky (2010). This convex formulation involves three sets of variables, and in each iteration our ADM solves three subproblems that all admit closed-form solutions, which makes the algorithm very efficient and capable of handling very large problems. A global convergence result for the proposed algorithm is established. Numerical results on both synthetic data and gene expression data demonstrate the efficiency of the proposed method.
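
To make the three-subproblem structure concrete, here is a minimal Python/NumPy sketch of a three-block alternating direction scheme for the Chandrasekaran-Parrilo-Willsky formulation min <Sigma,R> - logdet(R) + alpha*||S||_1 + beta*tr(L) subject to R = S - L, L positive semidefinite. The penalty parameter mu, the fixed iteration count, and the update order are illustrative assumptions for this sketch, not the speaker's exact algorithm; what it does show is that all three block updates are available in closed form, as the abstract states.

import numpy as np

def soft_threshold(X, tau):
    # Entrywise soft-thresholding: prox of tau * ||.||_1.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def adm_lvggms(Sigma, alpha, beta, mu=1.0, iters=500):
    # Three-block ADM sketch (hypothetical parameters) for
    #   min <Sigma,R> - logdet(R) + alpha*||S||_1 + beta*tr(L)
    #   s.t. R - S + L = 0,  L PSD.
    p = Sigma.shape[0]
    R, S, L = np.eye(p), np.eye(p), np.zeros((p, p))
    Lam = np.zeros((p, p))  # multiplier for the constraint R - S + L = 0
    for _ in range(iters):
        # R-step: prox of <Sigma,.> - logdet, via eigendecomposition.
        d, Q = np.linalg.eigh(mu * (S - L - Lam / mu) - Sigma)
        R = (Q * ((d + np.sqrt(d**2 + 4.0 * mu)) / (2.0 * mu))) @ Q.T
        # S-step: entrywise soft-thresholding.
        S = soft_threshold(R + L + Lam / mu, alpha / mu)
        # L-step: eigenvalue shift by beta/mu, then projection onto the PSD cone.
        d, Q = np.linalg.eigh(S - R - Lam / mu - (beta / mu) * np.eye(p))
        L = (Q * np.maximum(d, 0.0)) @ Q.T
        # Multiplier update.
        Lam = Lam + mu * (R - S + L)
    return S, L

Here S estimates the sparse precision component and L the low-rank latent component; the per-iteration cost is dominated by the two eigendecompositions.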

 

 

Wednesday, 13:45 - 14:10 h, Room: H 1028, Talk 2

Zhaosong Lu
Sparse approximation via penalty decomposition methods

Coauthor: Yong Zhang

 

Abstract:
In this talk we consider sparse approximation problems, that is, general l0 minimization problems in which the l0-"norm" of a vector appears in the constraints or in the objective function. In particular, we first study first-order optimality conditions for these problems. We then propose penalty decomposition (PD) methods for solving them, in which a sequence of penalty subproblems is solved by a block coordinate descent (BCD) method. Under suitable assumptions, we establish that any accumulation point of the sequence generated by the PD methods satisfies the first-order optimality conditions of the problems. Furthermore, for problems in which the l0 part is the only nonconvex part, we show that such an accumulation point is a local minimizer of the problems. In addition, we show that any accumulation point of the sequence generated by the BCD method is a saddle point of the penalty subproblem; moreover, for problems in which the l0 part is the only nonconvex part, we establish that such an accumulation point is a local minimizer of the penalty subproblem. Finally, we test the performance of our PD methods by applying them to sparse logistic regression, sparse inverse covariance selection, and compressed sensing problems.
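
As a toy illustration of the PD/BCD mechanics, consider the constrained instance min 0.5*||Ax - b||^2 subject to ||x||_0 <= r. The Python/NumPy sketch below uses an assumed penalty schedule (rho, sigma) and fixed iteration counts in place of the authors' stopping criteria; the splitting x = y moves the l0 constraint onto y, so the x-block is a ridge-type linear solve and the y-block is hard thresholding.

import numpy as np

def pd_l0_least_squares(A, b, r, rho=1.0, sigma=10.0, outer=20, inner=50):
    # Penalty decomposition sketch for
    #   min 0.5*||A x - b||^2  s.t.  ||x||_0 <= r.
    # Each penalty subproblem
    #   min_{x,y} 0.5*||A x - b||^2 + (rho/2)*||x - y||^2,  ||y||_0 <= r,
    # is solved by block coordinate descent (BCD).
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    x, y = np.zeros(n), np.zeros(n)
    for _ in range(outer):
        for _ in range(inner):  # BCD on the penalty subproblem
            # x-block: solve (A^T A + rho I) x = A^T b + rho y.
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
            # y-block: keep the r largest-magnitude entries of x.
            y = np.zeros(n)
            keep = np.argsort(np.abs(x))[-r:]
            y[keep] = x[keep]
        rho *= sigma  # drive the penalty parameter upward
    return y

Driving rho upward forces x = y in the limit, which is how accumulation points of the PD iterates come to satisfy the first-order optimality conditions discussed above.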

 

 

Wednesday, 14:15 - 14:40 h, Room: H 1028, Talk 3

Donald Goldfarb
An accelerated linearized Bregman method

Coauthors: Bo Huang, Shiqian Ma

 

Abstract:
We propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. Our algorithm is based on the fact that the linearized Bregman (LB) algorithm, first proposed by Stanley Osher and his collaborators, is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB method requires O(1/ε) iterations to obtain an ε-optimal solution and that the ALB algorithm reduces this iteration complexity to O(1/√ε) while requiring almost the same computational effort on each iteration. Numerical results on compressed sensing and matrix completion problems are presented that demonstrate that the ALB method can be significantly faster than the LB method.
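
The dual-gradient viewpoint can be sketched concretely for basis pursuit: LB is gradient ascent on the dual of the smoothed problem min mu*||x||_1 + 0.5*||x||^2 subject to Ax = b, whose dual gradient is b - A*shrink(A^T y, mu), and the accelerated variant extrapolates the dual iterate in Nesterov's fashion. The step size tau = 1/||A||_2^2, the extrapolation weight (k-1)/(k+2), and the fixed iteration count below are standard choices assumed for this Python/NumPy sketch, not necessarily the exact scheme analyzed in the talk.

import numpy as np

def shrink(v, mu):
    # Soft-thresholding: prox of mu * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def alb_basis_pursuit(A, b, mu, iters=2000):
    # Accelerated dual gradient ascent for
    #   min mu*||x||_1 + 0.5*||x||^2  s.t.  A x = b.
    tau = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the dual gradient
    y = y_prev = np.zeros(A.shape[0])
    for k in range(1, iters + 1):
        # Nesterov extrapolation on the dual variable y.
        y_hat = y + (k - 1.0) / (k + 2.0) * (y - y_prev)
        x = shrink(A.T @ y_hat, mu)  # primal iterate recovered from the dual
        y_prev, y = y, y_hat + tau * (b - A @ x)  # dual gradient step
    return shrink(A.T @ y, mu)

Dropping the extrapolation line (taking y_hat = y) recovers plain LB, which makes the O(1/ε) versus O(1/√ε) comparison in the abstract easy to reproduce numerically.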

 
