Invited Session Thu.2.H 1028

Thursday, 13:15 - 14:45 h, Room: H 1028

Cluster 21: Sparse optimization & compressed sensing [...]

Nonconvex sparse optimization

 

Chair: Wotao Yin

 

 

Thursday, 13:15 - 13:40 h, Room: H 1028, Talk 1

Zaiwen Wen
Alternating direction augmented Lagrangian methods for a few nonconvex problems

 

Abstract:
Recently, alternating direction augmented Lagrangian methods (ADM) have been widely used in convex optimization. In this talk, we show that ADM can also be quite efficient for solving nonconvex problems, such as the phase retrieval problem in X-ray diffractive imaging and an integer programming problem in portfolio optimization.
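As a rough illustration of the ADM splitting applied to a nonconvex problem (not the phase-retrieval or portfolio models of the talk), the following Python sketch runs scaled-form ADMM on an l0-regularized least-squares problem; the function name admm_l0 and the parameters lam and rho are illustrative choices, and the hard-thresholding step is the closed-form proximal map of the l0 penalty.

import numpy as np

def admm_l0(A, b, lam=0.1, rho=1.0, iters=200):
    """Sketch of ADMM for the nonconvex problem
        min_x  0.5*||A x - b||^2 + lam*||x||_0
    via the splitting x = z, in scaled dual form."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)        # matrix of the x-subproblem
    Atb = A.T @ b
    thresh = np.sqrt(2.0 * lam / rho)      # hard-threshold level for the l0 prox
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # quadratic x-update
        v = x + u
        z = np.where(np.abs(v) > thresh, v, 0.0)        # hard thresholding (z-update)
        u = u + x - z                                   # dual (multiplier) update
    return z

# Tiny usage example with a synthetic sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = admm_l0(A, b, lam=0.05, rho=1.0)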

 

 

Thursday, 13:45 - 14:10 h, Room: H 1028, Talk 2

Francesco Solombrino
Linearly constrained nonsmooth and nonconvex minimization

Coauthor: Massimo Fornasier

 

Abstract:
Motivated by variational models in continuum mechanics, we introduce a novel algorithm for performing nonsmooth and nonconvex minimizations with linear constraints. We show how this algorithm is a natural generalization of well-known non-stationary augmented Lagrangian methods for convex optimization. The relevant features of this approach are its applicability to a large variety of nonsmooth and nonconvex objective functions, its guaranteed global convergence to critical points of the objective energy, and its simplicity of implementation. In fact, the algorithm takes the form of a nested double-loop iteration: in the inner loop, an augmented Lagrangian algorithm performs an adaptively chosen, finite number of iterations on a fixed quadratic and strictly convex perturbation of the objective energy, while the outer loop adapts the quadratic perturbation. To show the versatility of this new algorithm, we exemplify how it can easily be used to compute critical points in inverse free-discontinuity variational models, such as the Mumford-Shah functional, and, by doing so, we also derive and analyze new iterative thresholding algorithms.
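The following Python sketch only mimics the nested double-loop structure described above on a model problem (minimizing ||x||_p^p subject to Bx = c) and is not the authors' algorithm: the quadratic majorizer, the parameters beta and eps, and the fixed inner iteration count (the talk's method chooses it adaptively) are assumptions made for illustration.

import numpy as np

def nested_al_lp(B, c, p=0.5, eps=1e-6, outer=30, inner=5, beta=10.0):
    """Nested double-loop sketch for the nonsmooth, nonconvex model problem
        min ||x||_p^p  subject to  B x = c,   0 < p < 1.
    Outer loop: replace the objective by a strictly convex quadratic
    perturbation built at the current iterate.  Inner loop: a finite number
    of augmented Lagrangian (method of multipliers) steps on that fixed
    perturbation."""
    m, n = B.shape
    x = np.linalg.lstsq(B, c, rcond=None)[0]   # least-squares starting point
    lam = np.zeros(m)
    for _ in range(outer):
        # Quadratic majorizer of sum_i |x_i|^p at the current iterate:
        # |t|^p <= (p/2) * (x_i^2 + eps)^(p/2 - 1) * t^2 + const.
        w = p * (x**2 + eps) ** (p / 2.0 - 1.0)
        H = np.diag(w) + beta * B.T @ B        # strictly convex quadratic model
        for _ in range(inner):
            rhs = B.T @ (beta * c - lam)
            x = np.linalg.solve(H, rhs)        # exact x-minimizer of the AL
            lam = lam + beta * (B @ x - c)     # multiplier update
    return x

# Example: a sparse point on the affine set {x : Bx = c}.
rng = np.random.default_rng(1)
B = rng.standard_normal((10, 50))
c = B @ np.concatenate([np.ones(3), np.zeros(47)])
x_hat = nested_al_lp(B, c, p=0.5)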

 

 

Thursday, 14:15 - 14:40 h, Room: H 1028, Talk 3

Ming-Jun Lai
On the Schatten p-quasi-norm minimization for low rank matrix recovery

Coauthor: Louis Yang

 

Abstract:
We provide a sufficient condition under which Schatten p-quasi-norm minimization can be used for matrix completion to recover the minimal-rank matrix. The condition is given in terms of the matrix version of the restricted isometry property. More precisely, when the restricted isometry constant satisfies δ_{2r} < 1, there exists a real number p_0 < 1 such that any solution of the Schatten p-quasi-norm minimization is the minimal-rank solution for all p ≤ p_0.
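For reference, and using the standard conventions rather than anything taken from the talk itself, the Schatten p-quasi-norm minimization and the matrix restricted isometry constant δ_r can be written as

\min_{X \in \mathbb{R}^{m \times n}} \ \|X\|_{S_p}^p := \sum_i \sigma_i(X)^p
\quad \text{subject to} \quad \mathcal{A}(X) = b, \qquad 0 < p \le 1,

(1 - \delta_r)\,\|X\|_F^2 \ \le\ \|\mathcal{A}(X)\|_2^2 \ \le\ (1 + \delta_r)\,\|X\|_F^2
\quad \text{for all } X \text{ with } \operatorname{rank}(X) \le r,

where σ_i(X) are the singular values of X and 𝒜 is the linear measurement map. For p = 1 the first problem reduces to the familiar nuclear-norm minimization.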

 
