Invited Session Wed.3.H 2038

Wednesday, 15:15 - 16:45 h, Room: H 2038

Cluster 4: Conic programming

Conic and convex programming in statistics and signal processing IV

 

Chair: Parikshit Shah

 

 

Wednesday, 15:15 - 15:40 h, Room: H 2038, Talk 1

Defeng Sun
Finding the nearest correlation matrix of exact low rank via convex optimization

Coauthor: Weimin Miao

 

Abstract:
In this talk, we aim to find a nearest correlation matrix of exact low rank from n independent noisy observations of entries under a general sampling scheme. Since the nuclear norm (trace) of a correlation matrix is a constant, the widely used nuclear norm regularization technique can no longer be applied to achieve this goal in the noisy setting. Here, we propose a new convex optimization approach that uses a linear regularization term based on the observation matrix to represent the rank information. This convex optimization problem can be easily written as an H-weighted least squares semidefinite programming problem, which can be solved efficiently even for large-scale cases. Under certain conditions, we show that our approach possesses rank consistency. We also provide non-asymptotic bounds on the estimation error.
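The abstract does not spell out the estimator, but a minimal sketch of the idea, assuming a plain least-squares loss with a linear term -rho·<Obs, X> standing in for the authors' observation-based regularizer (the weight rho, the rank-3 example, and the use of cvxpy are all illustrative assumptions, not the talk's exact formulation), might look like:

import cvxpy as cp
import numpy as np

# Illustrative data: a rank-3 correlation matrix observed with noise.
n = 20
rng = np.random.default_rng(0)
B = rng.standard_normal((n, 3))
C = B @ B.T
d = np.sqrt(np.diag(C))
C_true = C / np.outer(d, d)                        # rank-3 correlation matrix
Obs = C_true + 0.05 * rng.standard_normal((n, n))
Obs = (Obs + Obs.T) / 2                            # symmetrized noisy observations

X = cp.Variable((n, n), symmetric=True)
rho = 1.0                                          # assumed regularization weight
# Least-squares fit plus a linear term built from the observations; the linear
# term carries the rank information that a nuclear-norm penalty cannot.
obj = cp.Minimize(cp.sum_squares(X - Obs) - rho * cp.trace(Obs @ X))
prob = cp.Problem(obj, [X >> 0, cp.diag(X) == 1])  # PSD with unit diagonal
prob.solve()
print("rank of estimate:", np.linalg.matrix_rank(X.value, tol=1e-6))

Note why the linear term is needed at all: the constraint diag(X) = 1 forces trace(X) = n, so for a correlation matrix the nuclear norm is the constant n and penalizing it would not change the solution.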

 

 

Wednesday, 15:45 - 16:10 h, Room: H 2038, Talk 2

Sahand Negahban
Fast global convergence of composite gradient methods for high-dimensional statistical recovery

Coauthors: Alekh Agarwal, Martin J. Wainwright

 

Abstract:
Many statistical M-estimators are based on convex optimization problems formed by combining a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of composite gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension d to grow with (and possibly exceed) the sample size n. This high-dimensional structure precludes the usual global assumptions, namely strong convexity and smoothness conditions, that underlie much of classical optimization analysis. We define appropriately restricted versions of these conditions and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that composite gradient descent converges at a globally geometric rate up to the statistical precision of the model, meaning the typical distance between the true unknown parameter θ* and an optimal solution \hat{θ}. This result is substantially sharper than previous convergence guarantees, and it extends existing results based on constrained M-estimators.
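As a concrete instance of such an M-estimator, the Lasso combines a least-squares loss with an l1 regularizer, and composite gradient descent alternates a gradient step on the smooth loss with the regularizer's proximal map (soft thresholding). The following sketch illustrates the method on a d > n instance; the regularization level and step size are standard textbook choices assumed here, and this is not the authors' code:

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: the "composite" step of the method.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def composite_gradient_lasso(X, y, lam, step, iters=500):
    # min over theta of (1/(2n)) ||y - X theta||^2 + lam ||theta||_1
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / n           # gradient of the smooth loss
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

rng = np.random.default_rng(1)
n, d, s = 100, 200, 5                              # d > n: high-dimensional regime
X = rng.standard_normal((n, d))
theta_star = np.zeros(d)
theta_star[:s] = 1.0                               # s-sparse true parameter
y = X @ theta_star + 0.1 * rng.standard_normal(n)
lam = 0.5 * np.sqrt(np.log(d) / n)                 # assumed theoretical scaling
step = n / np.linalg.norm(X, 2) ** 2               # 1/L for the smooth part
theta_hat = composite_gradient_lasso(X, y, lam, step)
print("||theta_hat - theta_star|| =", np.linalg.norm(theta_hat - theta_star))

The talk's point, in this notation, is that even though the loss is not strongly convex when d > n, restricted versions of strong convexity hold with high probability, so the iterates contract geometrically until they reach the statistical precision ||theta_hat - theta_star||.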

 

 

Wednesday, 16:15 - 16:40 h, Room: H 2038, Talk 3

Maryam Fazel
Algorithms for Hankel matrix rank minimization for system identification and realization

Coauthors: Ting Kei Pong, Defeng Sun, Paul Tseng

 

Abstract:
We introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz, and moment structures, and catalog applications from diverse fields under this framework. We discuss first-order methods for solving the resulting optimization problem, including alternating direction methods, the proximal point algorithm, and gradient projection methods. We perform computational experiments comparing these methods on system identification and system realization problems. For the system identification problem, the gradient projection method (accelerated by Nesterov’s extrapolation techniques) outperforms the other first-order methods in terms of CPU time on both real and simulated data, while for the system realization problem, the alternating direction method, as applied to a certain primal reformulation, outperforms the other first-order methods.
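To give the flavor of one such first-order method, the sketch below denoises a signal by a proximal-gradient-style iteration: lift the signal to a Hankel matrix, soft-threshold its singular values (the nuclear-norm prox on the matrix), and average anti-diagonals back to a signal. The averaging step is a common heuristic approximation rather than the exact structured prox analyzed in the talk, and all parameter values are illustrative assumptions:

import numpy as np
from scipy.linalg import hankel

def svt(M, tau):
    # Singular value thresholding: prox of tau * (nuclear norm).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def hankel_from_signal(x, m):
    # m x (len(x) - m + 1) Hankel matrix whose anti-diagonals are x.
    return hankel(x[:m], x[m - 1:])

def signal_from_hankel(H):
    # Map back to a signal by averaging each anti-diagonal (heuristic step).
    m, k = H.shape
    x = np.zeros(m + k - 1)
    counts = np.zeros(m + k - 1)
    for i in range(m):
        for j in range(k):
            x[i + j] += H[i, j]
            counts[i + j] += 1
    return x / counts

rng = np.random.default_rng(2)
t = np.arange(60)
x_true = np.cos(0.3 * t) + 0.5 * np.cos(0.8 * t)   # output of a low-order system
b = x_true + 0.1 * rng.standard_normal(t.size)     # noisy measurements
x, m, mu, step = b.copy(), 20, 2.0, 0.5            # assumed parameters
for _ in range(200):
    x = x - step * (x - b)                         # gradient of (1/2)||x - b||^2
    x = signal_from_hankel(svt(hankel_from_signal(x, m), step * mu))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

The low-rank Hankel structure encodes the low order of the underlying linear system, which is why nuclear norm minimization over such structured matrices is the natural convex surrogate in identification and realization problems.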

 
