## Invited Session Thu.1.MA 004

#### Thursday, 10:30 - 12:00 h, Room: MA 004

**Cluster 20: Robust optimization** [...]

### Robust optimization, estimation and machine learning

**Chair: Aharon Ben-Tal**

**Thursday, 10:30 - 10:55 h, Room: MA 004, Talk 1**

**Shimrit Shtern**

A robust optimization approach for tracking under bounded uncertainty

**Coauthor: Aharon Ben-Tal**

**Abstract:**

Classical dynamic control theory assumes that the system is afflicted with white noise and minimizes the mean-square estimation error, usually by applying the Kalman filter (KF). In some applications, such as tracking, the assumption of white, unbounded noise is unrealistic. In these cases a worst-case analysis, specifically the maximal error norm, might be a better measure of performance. In tracking applications, ignoring worst-case occurrences might have grave implications, since large errors decrease the probability of successfully tracking an object, especially in the presence of clutter or when tracking multiple objects. In order to analyze the worst-case scenario for a general dynamic control problem, given the filter, we need to solve a non-convex quadratically constrained quadratic program (QCQP). Since this problem is generally NP-hard, we utilize the problem's block structure in order to find upper and lower bounds, which we obtain via semidefinite programming and block-wise optimal ascent. We compared the KF results to a greedy worst-case filter (UBF) and found that, in most cases, the UBF indeed performs better with regard to worst-case performance.
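For contrast with the worst-case analysis above, the classical mean-square-optimal KF update the abstract refers to can be sketched in its simplest scalar form; the random-walk model and noise variances below are illustrative assumptions, not values from the talk.

```python
# Minimal scalar Kalman filter sketch (hypothetical model and values),
# illustrating the classical mean-square-optimal update that the abstract
# contrasts with worst-case (bounded-noise) analysis.

def kalman_step(x_est, p_est, z, q=0.01, r=0.1):
    """One predict/update cycle for a scalar random-walk state model.

    x_est, p_est : prior state estimate and its variance
    z            : new measurement
    q, r         : process and measurement noise variances (assumed known)
    """
    # Predict: for a random walk the mean is unchanged and
    # the variance grows by the process noise q.
    p_pred = p_est + q
    # Update: the Kalman gain trades off prediction vs. measurement confidence.
    k = p_pred / (p_pred + r)
    x_new = x_est + k * (z - x_est)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Track a constant true state (1.0) from a few noisy measurements.
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, z)
```

Under bounded (rather than white) noise, this gain choice is no longer justified, which is what motivates the worst-case filter compared against the KF above.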

**Thursday, 11:00 - 11:25 h, Room: MA 004, Talk 2**

**Elad Hazan**

Sublinear optimization for machine learning

**Abstract:**

Linear classification is a fundamental problem of machine learning, in which positive and negative examples of a concept are represented in Euclidean space by their feature vectors, and we seek to find a hyperplane separating the two classes of vectors. We'll present the first sublinear-time algorithms for linear classification, support vector machine training, and other related optimization problems, including general semi-definite programming and robust linear programming. These new algorithms are based on a primal-dual approach, and use a combination of novel sampling techniques and the randomized implementation of online learning algorithms. We give lower bounds which show our running times to be nearly best possible.
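One ingredient of such sublinear-time methods, randomized estimation of inner products by sampling a few coordinates instead of reading a whole vector, can be sketched as follows; this is an illustrative l2-sampling estimator, not the exact algorithm from the talk, and all vectors below are made up.

```python
import random

def sampled_dot(w, x, n_samples=2000, rng=random):
    """Unbiased estimate of <w, x> via l2-sampling of the coordinates of x.

    Coordinate i is drawn with probability x[i]**2 / ||x||**2, and the
    estimator w[i] * ||x||**2 / x[i] is unbiased for the inner product.
    Reading only sampled coordinates (rather than all of x) is what makes
    the per-iteration cost sublinear in the dimension.
    """
    sq_norm = sum(v * v for v in x)
    weights = [v * v / sq_norm for v in x]          # l2-sampling distribution
    idx = rng.choices(range(len(x)), weights=weights, k=n_samples)
    return sum(w[i] * sq_norm / x[i] for i in idx) / n_samples

# Toy vectors: the true inner product is 0.5 - 2.0 - 2.0 + 0.0 = -3.5.
w = [0.5, -1.0, 2.0, 0.0]
x = [1.0, 2.0, -1.0, 3.0]
est = sampled_dot(w, x, rng=random.Random(0))       # close to -3.5
```

Coordinates with `x[i] == 0` have sampling weight zero and are never drawn, so the division is safe; in a full algorithm such estimates would feed a primal-dual online learning loop rather than be used in isolation.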

**Thursday, 11:30 - 11:55 h, Room: MA 004, Talk 3**

**Chiranjib Bhattacharyya**

Making SVM classifiers robust to uncertainty in kernel matrices

**Coauthors: Aharon Ben-Tal, Sahely Bhadra, Arkadi Nemirovski**

**Abstract:**

We consider the problem of uncertainty in the entries of the kernel matrix arising in the Support Vector Machine (SVM) formulation. Using chance-constrained programming and a novel large-deviation inequality, we derive a robust formulation which requires only second-order statistics of the uncertainty. The formulation is in general non-convex, but in several cases of interest it reduces to a convex program. To address the non-convexity issue we propose a robust-optimization-based procedure. Specifically, the uncertainty is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. Using the robust optimization methodology, the SVM problem can then be posed as a convex-concave saddle point problem. We show that this problem admits an efficient first-order algorithm, which achieves an *O*(1/*T*^{2}) reduction of the initial error after *T* iterations.

A comprehensive empirical study on both synthetic data and real-world protein structure datasets shows that the proposed formulations achieve the desired robustness.
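The uncertainty model described above can be sketched concretely: the kernel is a positive affine combination K(c) = Σ_j c_j K_j of given positive semidefinite base kernels, with c restricted to a norm-bounded set. The base kernels, the l2-ball radius, and the coefficient vector below are made-up illustrative values, not from the paper.

```python
# Hedged sketch of the kernel uncertainty set: K(c) = sum_j c[j] * K_j,
# with nonnegative coefficients (preserving positive semidefiniteness)
# constrained to an l2-ball. All numeric values are hypothetical.

def combine_kernels(base_kernels, c):
    """Return K(c) = sum_j c[j] * K_j as a list-of-lists matrix."""
    n = len(base_kernels[0])
    K = [[0.0] * n for _ in range(n)]
    for Kj, cj in zip(base_kernels, c):
        for a in range(n):
            for b in range(n):
                K[a][b] += cj * Kj[a][b]
    return K

def in_uncertainty_set(c, bound=1.0):
    """Nonnegative coefficients in an l2-ball of the given radius.

    Nonnegativity keeps K(c) PSD, since a nonnegative combination of
    PSD matrices is PSD.
    """
    return all(cj >= 0 for cj in c) and sum(cj * cj for cj in c) <= bound ** 2

# Two toy 2x2 PSD base kernels and a coefficient vector on the unit sphere.
K1 = [[2.0, 0.0], [0.0, 1.0]]
K2 = [[1.0, 0.5], [0.5, 1.0]]
c = [0.6, 0.8]
K = combine_kernels([K1, K2], c)    # 0.6*K1 + 0.8*K2
```

In the robust formulation, the SVM objective would then be minimized over the classifier while being maximized over all admissible c in this set, yielding the convex-concave saddle point problem mentioned above.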