Invited Session Mon.2.H 0111

Monday, 13:15 - 14:45 h, Room: H 0111

Cluster 13: Logistics, traffic, and transportation [...]

Advances in machine learning

 

Chair: Vivek Farias

 

 

Monday, 13:15 - 13:40 h, Room: H 0111, Talk 1

Paul Grigas
Proximal subgradient and dual averaging for sequential decision-making and non-smooth optimization

Coauthor: Robert M. Freund

 

Abstract:
We analyze and show interconnections between proximal subgradient and dual averaging methods for both sequential decision-making and non-smooth convex optimization. Our sequential decision-making problem context generalizes the allocation problem addressed by the Hedge algorithm as studied by Baes and Bürgisser. Furthermore, our framework provides a new interpretation of, and extensions to, the AdaBoost algorithm, where the distribution on examples provides the primal variables and the final classifier arises naturally as a weighted average of dual variables. Lastly, we examine connections between various first-order methods, and propose new first-order methods as well.
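As a point of reference, the Hedge update itself can be written as dual averaging with an entropy prox function, which is the special case the talk's framework generalizes. The following minimal Python sketch shows only that special case; the function name, the step-size parameter eta, and the loss-stream interface are illustrative assumptions, not taken from the talk.

import numpy as np

def hedge(loss_stream, n_actions, eta=0.1):
    # Hedge / multiplicative weights viewed as dual averaging with an
    # entropy prox function. Illustrative sketch only; eta and the
    # interface are assumptions, not the talk's generalized framework.
    cum_loss = np.zeros(n_actions)      # dual variable: accumulated losses
    for losses in loss_stream:          # losses: array of per-action losses
        p = np.exp(-eta * cum_loss)     # primal iterate: softmax of the
        p /= p.sum()                    # negated, scaled dual average
        yield p                         # play distribution p this round
        cum_loss += losses              # dual averaging update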

 

 

Monday, 13:45 - 14:10 h, Room: H 0111, Talk 2

Vivek Farias
Non-parametric approximate dynamic programming via the kernel method

Coauthors: Nikhil Bhat, Ciamac C. Moallemi

 

Abstract:
We present a "kernelized" variant of a recent family of approximate dynamic programming algorithms we have dubbed "smoothed approximate linear programs". Our new algorithm is non-parametric in that it does not require a basis function architecture, and it develops a value function approximation whose accuracy improves with the size of the training data set. We describe an efficient implementation of this method, and present sample complexity bounds and approximation guarantees that effectively extend state-of-the-art guarantees for ADP to the non-parametric setting. In summary, we believe this is the first practical non-parametric ADP algorithm with performance guarantees.
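To illustrate, very loosely, what a non-parametric value-function approximation looks like, the Python sketch below fits one by kernel ridge regression on sampled Bellman targets. This is an assumption-laden stand-in: the talk's method solves a smoothed approximate linear program over a kernel space rather than a regression, and every name and parameter below is hypothetical.

import numpy as np

def kernel_value_fit(states, bellman_targets, bandwidth=1.0, reg=1e-3):
    # Non-parametric value fit via kernel ridge regression (illustrative
    # only; not the smoothed-ALP method of the talk). states: (n, d)
    # array of sampled states; bellman_targets: (n,) sampled targets.
    d2 = ((states[:, None, :] - states[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * bandwidth ** 2))         # Gaussian kernel matrix
    alpha = np.linalg.solve(K + reg * np.eye(len(states)), bellman_targets)

    def value(x):
        # Representer-theorem expansion: accuracy grows with sample size.
        k = np.exp(-((states - x) ** 2).sum(-1) / (2 * bandwidth ** 2))
        return k @ alpha

    return value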

 

 

Monday, 14:15 - 14:40 h, Room: H 0111, Talk 3

Sahand Negahban
Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions

Coauthors: Alekh Agarwal, Martin J. Wainwright

 

Abstract:
We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. The observations are noisy realizations of a linear transformation X of the sum of an (approximately) low-rank matrix Θ* with a second matrix Γ* endowed with a complementary form of low-dimensional structure; this set-up includes many statistical models of interest, among them factor analysis, multi-task regression, and robust covariance estimation. We derive a general theorem that bounds the Frobenius-norm error for an estimate of the pair (Θ*, Γ*) obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer. We specialize this general result to two cases that have been studied in past work: low rank plus an entrywise-sparse matrix, and low rank plus a columnwise-sparse matrix. For both models, our theory yields non-asymptotic Frobenius error bounds for both deterministic and stochastic noise matrices. Moreover, for the case of stochastic noise matrices and the identity observation operator, we establish matching lower bounds on the minimax error.
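For the identity observation operator with an entrywise-sparse Γ*, the convex program in question is the familiar low-rank-plus-sparse relaxation, which can be solved, for instance, by proximal gradient steps: singular value thresholding for the nuclear norm and soft thresholding for the l1 norm. The Python sketch below is a generic solver for that special case, not the paper's estimator or analysis; the step size, iteration count, and function names are illustrative choices.

import numpy as np

def svt(M, tau):
    # Singular value thresholding: prox of tau * (nuclear norm).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0)[:, None] * Vt)

def soft(M, tau):
    # Entrywise soft thresholding: prox of tau * (l1 norm).
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def decompose(Y, lam_theta, lam_gamma, n_iter=200, step=0.5):
    # Proximal gradient for
    #   min 0.5*||Y - Theta - Gamma||_F^2
    #       + lam_theta*||Theta||_* + lam_gamma*||Gamma||_1,
    # the identity-observation, entrywise-sparse case of the abstract.
    # Solver and step size are illustrative, not the paper's.
    Theta = np.zeros_like(Y)
    Gamma = np.zeros_like(Y)
    for _ in range(n_iter):
        R = Theta + Gamma - Y              # gradient of the quadratic term
        Theta = svt(Theta - step * R, step * lam_theta)
        Gamma = soft(Gamma - step * R, step * lam_gamma)
    return Theta, Gamma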

 
