## Invited Session Mon.3.H 1012

#### Monday, 15:15 - 16:45 h, Room: H 1012

**Cluster 17: Nonsmooth optimization** [...]

### Nonsmooth optimization in imaging sciences I

**Chair: Gabriel Peyré**

**Monday, 15:15 - 15:40 h, Room: H 1012, Talk 1**

**Gabriel Peyré**

A review of proximal splitting methods with a new one

**Coauthors: Jalal Fadili, Hugo Raguet**

**Abstract:**

In the first part of this talk, I will review proximal splitting methods for the solution of large-scale non-smooth convex problems (see for instance [1, 2]). I will show how each algorithm is able to take advantage of the structure of typical imaging problems. In the second part of this talk I will present the Generalized Forward-Backward (GFB) splitting method [3], which is tailored to the minimization of the sum of a smooth function and an arbitrary number of "simple" functions (for which the proximal operator can be computed in closed form). I will show on several imaging applications the advantage of our approach over state-of-the-art proximal splitting schemes. Demos and codes for these proximal splitting schemes can be obtained by visiting www.numerical-tours.com.


- [1] P. L. Combettes and J.-C. Pesquet, "Proximal splitting methods in signal processing", 2011.

- [2] A. Beck and M. Teboulle, "Gradient-Based Algorithms with Applications in Signal Recovery Problems", 2010.

- [3] H. Raguet, J. Fadili and G. Peyré, "Generalized Forward-Backward Splitting", preprint HAL-00613637.
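The GFB iteration of [3] can be sketched in a few lines of NumPy. This is a minimal illustration with equal weights, not the authors' reference code: the iteration maintains one auxiliary variable per "simple" function and averages them, and the demo problem (least squares plus an ℓ1 penalty plus a nonnegativity constraint) and all variable names are illustrative choices, not taken from the talk.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (entrywise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_nonneg(v, t=None):
    # Proximal operator of the indicator of {x >= 0}: a projection,
    # independent of the prox parameter t.
    return np.maximum(v, 0.0)

def gfb(grad_f, proxes, x0, gamma, lam=1.0, n_iter=500):
    """Generalized Forward-Backward splitting (sketch, equal weights).

    Minimizes f(x) + sum_i g_i(x), where grad_f is the gradient of the
    smooth part f and proxes[i](v, t) computes prox_{t*g_i}(v).
    """
    n = len(proxes)
    z = [x0.copy() for _ in range(n)]   # one auxiliary variable per g_i
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_f(x)
        for i in range(n):
            # Each g_i gets a prox step with parameter n*gamma
            # (equal weights w_i = 1/n), relaxed by lam.
            z[i] = z[i] + lam * (proxes[i](2*x - z[i] - gamma*g, n*gamma) - x)
        x = sum(z) / n                  # averaging step
    return x

# Demo: nonnegative sparse recovery with f = 0.5*||Ax-b||^2,
# g_1 = mu*||x||_1, g_2 = indicator of the nonnegative orthant.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, 2.0, 1.5]
b = A @ x_true
mu = 0.1

grad_f = lambda x: A.T @ (A @ x - b)
prox_l1 = lambda v, t: soft_threshold(v, mu * t)
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad_f
x_hat = gfb(grad_f, [prox_l1, project_nonneg], np.zeros(100), gamma=1.0/L)
```

The step size must satisfy gamma < 2/L for the forward (gradient) step; the prox parameter is scaled by the number of simple terms because each g_i enters with weight 1/n.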

**Monday, 15:45 - 16:10 h, Room: H 1012, Talk 2**

**Thomas Pock**

On parameter learning in variational models

**Coauthor: Karl Kunisch**

**Abstract:**

In this work we consider the problem of parameter learning for variational image denoising models. We formulate the learning problem as a bilevel optimization problem, where the lower-level problem is given by the variational model and the upper-level problem is given by a loss function that penalizes errors between the solution of the lower-level problem and the ground-truth data. We consider a class of image denoising models incorporating a sum of analysis-based priors over a fixed set of linear operators. We devise semi-smooth Newton methods to solve the resulting non-smooth bilevel optimization problems and show that the optimized image denoising models can achieve state-of-the-art performance.
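The bilevel structure above can be illustrated with a deliberately simplified toy: the talk's models use non-smooth analysis priors solved by semi-smooth Newton methods, whereas the sketch below uses a smooth quadratic prior (so the lower-level problem has a closed-form solution) and a grid search for the upper level. It shows only the nesting of the two problems; every name and parameter here is an illustrative assumption.

```python
import numpy as np

def finite_difference_matrix(n):
    # 1-D forward-difference analysis operator D (an assumed choice).
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    return D

def denoise(y, lam, D):
    # Lower-level problem: argmin_x 0.5*||x - y||^2 + 0.5*lam*||Dx||^2,
    # solved in closed form via the normal equations.
    n = len(y)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(1)
n = 50
x_gt = np.cumsum(rng.standard_normal(n)) * 0.1   # smooth ground truth
y = x_gt + 0.2 * rng.standard_normal(n)          # noisy observation
D = finite_difference_matrix(n)

# Upper-level problem: choose lam to minimize the squared loss between
# the lower-level solution and the ground truth (grid search stands in
# for the semi-smooth Newton method used in the actual work).
lams = np.logspace(-2, 2, 50)
losses = [np.sum((denoise(y, l, D) - x_gt) ** 2) for l in lams]
lam_star = lams[int(np.argmin(losses))]
```

With a non-smooth prior (e.g. replacing the quadratic by an ℓ1 analysis term), the lower-level solution map is no longer differentiable, which is what motivates the semi-smooth Newton machinery in the talk.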

**Monday, 16:15 - 16:40 h, Room: H 1012, Talk 3**

**Volkan Cevher**

Nonconvex models with exact and approximate projections for constrained linear inverse problems

**Coauthor: Anastasios Kyrillidis**

**Abstract:**

Many natural and man-made signals exhibit a few degrees of freedom relative to their dimension due to natural parameterizations or constraints. The inherent low-dimensional structure of such signals is mathematically modeled via combinatorial and geometric concepts, such as sparsity, unions-of-subspaces, or spectral sets, and is now revolutionizing the way we address linear inverse problems from incomplete data.

In this talk, we describe a set of low-dimensional, nonconvex models for constrained linear inverse problems that feature exact and epsilon-approximate projections in polynomial time. We pay particular attention to structured sparsity models based on matroids, multi-knapsack, and clustering as well as spectrally constrained models. We describe a hybrid optimization framework which explicitly leverages these non-convex models along with additional convex constraints to improve recovery performance. We then analyze the convergence and approximation guarantees of our framework based on restrictions on the linear operator in conjunction with several well-known acceleration techniques, such as step-size selection, memory, splitting, and block coordinate descent.
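The simplest instance of the exact-projection idea above is the plain sparsity model, whose Euclidean projection (keep the s largest-magnitude entries) is computable exactly in polynomial time. The sketch below combines it with a gradient step, i.e. iterative hard thresholding; this is a standard building block, not the hybrid framework of the talk, and the problem sizes, step size, and names are illustrative assumptions.

```python
import numpy as np

def project_sparse(v, s):
    # Exact Euclidean projection onto {x : ||x||_0 <= s}: keep the s
    # largest-magnitude entries, zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def iht(A, b, s, step, n_iter=500):
    # Projected gradient descent with the nonconvex sparsity projection
    # (iterative hard thresholding) for min ||Ax - b||^2 s.t. ||x||_0 <= s.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_sparse(x + step * A.T @ (b - A @ x), s)
    return x

# Demo: recover an s-sparse vector from m < n linear measurements.
rng = np.random.default_rng(2)
m, n, s = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # near-unit-norm columns
x_true = np.zeros(n)
support = rng.choice(n, s, replace=False)
x_true[support] = rng.standard_normal(s) + 3.0
b = A @ x_true
x_hat = iht(A, b, s, step=0.5)
```

Convergence of this iteration hinges on restricted-isometry-type conditions on the linear operator, which is exactly the kind of restriction the analysis in the talk places on A; richer models (matroids, multi-knapsack, clustering, spectral sets) swap in their own exact or approximate projection routines.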