Invited Session Wed.1.H 1028

Wednesday, 10:30 - 12:00 h, Room: H 1028

Cluster 21: Sparse optimization & compressed sensing [...]

Algorithms for sparse optimization II

 

Chair: Kimon Fountoulakis

 

 

Wednesday, 10:30 - 10:55 h, Room: H 1028, Talk 1

Kimon Fountoulakis
Matrix-free interior point method for compressed sensing problems

Coauthors: Jacek Gondzio, Pavel Zhlobich

 

Abstract:
We consider the class of l1-regularization methods
for sparse signal reconstruction from the field of compressed
sensing. Such problems are very well conditioned and, indeed,
can be solved easily by first-order methods such as
GPSR, FPC_AS, SPGL1, and NESTA.
Interior point methods rely on second-order information.
They have many advantageous features and one clear drawback:
in general, solving the Newton system has
computational complexity O(n³).
We remove this disadvantage by employing a matrix-free
interior point method with suitable preconditioners which
cleverly exploit special features of compressed sensing
problems. A spectral analysis of the preconditioners is presented.
Computational experience with large-scale one-dimensional
signals (n = 2²⁰) confirms that the new approach is efficient
and compares favourably with other state-of-the-art solvers.
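
To make the matrix-free idea concrete, here is a minimal sketch, not the authors' method: it solves a Newton-type system (AᵀA + Θ⁻¹)x = r by conjugate gradients, touching A only through matrix-vector products, so AᵀA is never formed and no O(n³) factorization is needed. The scaling Θ⁻¹, the dimensions, and all names are illustrative assumptions; the preconditioners of the talk would further accelerate the CG iterations.

```python
import numpy as np

def cg(apply_H, rhs, tol=1e-10, max_iter=200):
    """Plain conjugate gradients for H x = rhs, where the SPD matrix H
    is available only through matrix-vector products (matrix-free)."""
    x = np.zeros_like(rhs)
    res = rhs - apply_H(x)
    p = res.copy()
    rs = res @ res
    for _ in range(max_iter):
        Hp = apply_H(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        res -= alpha * Hp
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return x

# A Newton-type system (A^T A + Theta^{-1}) x = rhs, as would arise inside
# an interior point method; theta_inv is a made-up positive IPM scaling.
m, n = 200, 1000
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n)) / np.sqrt(m)
theta_inv = rng.uniform(0.5, 2.0, n)
apply_H = lambda v: A.T @ (A @ v) + theta_inv * v   # never forms A^T A
rhs = rng.standard_normal(n)
x = cg(apply_H, rhs)
```

Each CG iteration costs two products with A, i.e. O(mn), instead of the O(n³) of a direct Newton solve.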

 

 

Wednesday, 11:00 - 11:25 h, Room: H 1028, Talk 2

Xiangfeng Wang
Linearized alternating direction methods for Dantzig selector

Coauthor: Xiaoming Yuan

 

Abstract:
The Dantzig selector was recently proposed to perform variable selection and model fitting in the linear regression model, and it can be solved numerically by the alternating direction method (ADM). In this paper, we show that the ADM for the Dantzig selector can be sped up significantly if one of its resulting subproblems at each iteration is linearized. The resulting linearized ADMs for the Dantzig selector are shown to be globally convergent, and their efficiency is verified numerically on both simulated and real-world data sets.
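
The linearization idea can be sketched as follows. This is a generic linearized ADM in the spirit of the abstract, not Wang and Yuan's exact scheme; the splitting, the parameter names, and the step-size rule τ ≥ ρ‖AᵀA‖₂² are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dantzig_linearized_adm(A, b, delta, rho=1.0, n_iter=2000):
    """Illustrative sketch of a linearized ADM for the Dantzig selector
        min ||x||_1  s.t.  ||A^T (A x - b)||_inf <= delta,
    split as M x - y = c with M = A^T A, c = A^T b, and y kept in the
    box [-delta, delta]^n.  The x-subproblem, which an exact ADM would
    have to solve with the dense matrix M, is linearized so that it
    reduces to a single soft-thresholding step."""
    M = A.T @ A
    c = A.T @ b
    n = A.shape[1]
    tau = 1.01 * rho * np.linalg.norm(M, 2) ** 2   # step bound for convergence
    x = np.zeros(n); y = np.zeros(n); u = np.zeros(n)
    Mx = np.zeros(n)
    for _ in range(n_iter):
        r = Mx - y - c + u                          # constraint residual
        x = soft(x - (rho / tau) * (M @ r), 1.0 / tau)
        Mx = M @ x
        y = np.clip(Mx - c + u, -delta, delta)      # projection onto the box
        u = u + Mx - y - c                          # scaled dual update
    return x
```

The point of the linearization is visible in the x-update: instead of solving a linear system involving M at every iteration, it applies one matrix-vector product and one componentwise shrinkage.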

 

 

Wednesday, 11:30 - 11:55 h, Room: H 1028, Talk 3

Sergey Voronin
Iteratively reweighted least squares methods for structured sparse regularization

 

Abstract:
We describe two new algorithms useful for obtaining sparse regularized solutions to large inverse problems, based on the idea of reweighted least squares.
We start from the standpoint of l1 minimization and show
that by replacing the non-smooth one norm ||x||_1 = ∑_{k=1}^N |x_k| with a reweighted two norm ∑_{k=1}^N w_k x_k^2, with the weights being refined at each successive iteration, we can formulate two new algorithms with good numerical performance.
We then discuss a generalization of both variants, useful
in cases of structured sparsity, where different sets of coefficients demand different treatment. We discuss, in particular, an example of a large inverse problem from geotomography, where wavelets
are used to promote sparsity. We show that, to build up a solution from a dictionary of different wavelet bases and to have control over the different components of each wavelet basis, the minimization of a more general functional
||Ax − b||_2^2 + ∑_{k=1}^N λ_k |x_k|^{q_k} for 1 ≤ q_k < 2 is desirable. We show that our proposed schemes extend to this more general case.
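
A minimal sketch of the reweighting idea, not the two algorithms of the talk: the choice of weights w_k = 1/(|x_k| + ε), the annealing of ε, and all parameter names are assumptions made for illustration.

```python
import numpy as np

def irls_l1(A, b, lam=1e-2, n_iter=60, eps_min=1e-8):
    """Sketch of iteratively reweighted least squares for an
    l1-penalized least-squares objective: the non-smooth one norm
    sum_k |x_k| is replaced by the weighted two norm sum_k w_k x_k^2
    with w_k = 1 / (|x_k| + eps), and the weights are refined at each
    successive iteration.  A fixed point satisfies (approximately) the
    optimality condition of ||Ax - b||_2^2 + 2 * lam * ||x||_1."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    x = np.zeros(n)
    eps = 1.0
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)        # reciprocal weights, smoothed by eps
        x = np.linalg.solve(AtA + lam * np.diag(w), Atb)
        eps = max(0.5 * eps, eps_min)      # anneal the smoothing toward zero
    return x
```

Each iteration is a single linear solve with a diagonally perturbed normal matrix; entries that stay small acquire large weights and are driven to zero, which is how the reweighted two norm mimics the sparsity-promoting one norm.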

 
