Invited Session Tue.2.MA 042

Tuesday, 13:15 - 14:45 h, Room: MA 042

Cluster 20: Robust optimization [...]

Advances in robust optimization


Chair: Daniel Kuhn



Tuesday, 13:15 - 13:40 h, Room: MA 042, Talk 1

Huan Xu
A distributional interpretation of robust optimization, with applications in machine learning

Coauthors: Constantine Caramanis, Shie Mannor


Motivated by data-driven decision making and sampling problems, we investigate probabilistic interpretations
of Robust Optimization (RO). We establish a connection between RO and Distributionally Robust Stochastic
Programming (DRSP), showing that the solution to any RO problem is also a solution to a DRSP problem.
Specifically, we consider the case where multiple uncertain parameters belong to the same fixed-dimensional
space, and find the set of distributions of the equivalent DRSP. The equivalence we derive enables us to construct
RO formulations for sampled problems (as in stochastic programming and machine learning) that are statistically
consistent, even when the original sampled problem is not. In the process, this provides a systematic approach for
tuning the uncertainty set. Applying this interpretation in machine learning, we show that two widely used algorithms, SVM and Lasso, are special cases of RO, and establish their consistency via the distributional interpretation.
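The RO-as-regularization connection alluded to above can be checked numerically. The sketch below (an illustration under simplified assumptions, not the authors' construction) verifies that for least squares with column-wise bounded feature perturbations, the worst-case residual norm equals the nominal residual norm plus an l1 penalty on the weights, which is the mechanism behind the Lasso-as-RO result:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))   # feature matrix
y = rng.normal(size=20)        # responses
w = rng.normal(size=5)         # an arbitrary fixed weight vector
c = 0.3                        # per-feature perturbation budget

r = y - X @ w                  # nominal residual
# Adversarial perturbation: each column Delta[:, j] has 2-norm <= c,
# aligned against the residual and the sign of w_j.
Delta = np.outer(-c * r / np.linalg.norm(r), np.sign(w))
worst = np.linalg.norm(y - (X + Delta) @ w)             # worst-case residual
bound = np.linalg.norm(r) + c * np.abs(w).sum()         # nominal + c * ||w||_1
```

Here `worst` and `bound` coincide, i.e. robustification against this uncertainty set reproduces l1 regularization.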



Tuesday, 13:45 - 14:10 h, Room: MA 042, Talk 2

Boris Houska
Lifting methods for generalized semi-infinite programs

Coauthors: Moritz Diehl, Oliver Stein, Paul Steuermann


In this talk we present numerical solution strategies for generalized semi-infinite optimization problems (GSIPs), a class of mathematical optimization problems which occur naturally in the context of design centering problems, robust optimization problems, and many fields of engineering science. GSIPs can be regarded as bilevel optimization problems, where a parametric lower-level maximization problem has to be solved in order to check feasibility of the upper-level minimization problem. In this talk we discuss three strategies to reformulate a class of GSIPs with convex lower-level problems into equivalent standard minimization problems by exploiting lower-level Wolfe duality. Here, the main contribution is the discussion of the non-degeneracy of the corresponding formulations under various assumptions. Finally, these non-degenerate reformulations of the original GSIP allow us to apply standard nonlinear optimization algorithms.
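The duality-based lifting can be illustrated on a toy problem (a linear lower level, far simpler than the talk's general setting): the semi-infinite constraint that x*y - 1 <= 0 for all y in Y(x) = [0, x] is equivalent to requiring that the lower-level optimal value, obtained here in closed form from the Wolfe dual, is at most 1:

```python
import numpy as np

# Toy GSIP constraint: x*y - 1 <= 0 for all y in Y(x) = [0, u(x)], u(x) = x, x >= 0.
# Lower level: max_y x*y  s.t.  0 <= y <= u(x).  Its Wolfe dual (which for a
# linear program coincides with the LP dual) is
#   min_{lam >= 0} lam * u(x)  s.t.  lam = x,
# so the optimal value is x * u(x) = x**2. The lifted reformulation replaces
# the inner sup by this dual expression, yielding a standard constraint.

def lower_level_sup(x, n=2001):
    """Brute-force the inner maximization over a grid on Y(x)."""
    ys = np.linspace(0.0, x, n)
    return float((x * ys).max())

def lifted_value(x):
    """Closed-form Wolfe dual value; lam = x is dual feasible and optimal."""
    return x * x
```

In the general convex case the dual value is not available in closed form; instead, the dual multipliers are appended to the upper-level variables, which is the lifting discussed in the talk.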



Tuesday, 14:15 - 14:40 h, Room: MA 042, Talk 3

Wolfram Wiesemann
Robust Markov decision processes

Coauthors: Daniel Kuhn, Berc Rustem


Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments. However, the solutions of MDPs are of limited practical use due to their sensitivity to distributional model parameters, which are typically unknown and have to be estimated by the decision maker. To counter the detrimental effects of estimation errors, we consider robust MDPs that offer probabilistic guarantees with respect to the unknown parameters. To this end, we assume that an observation history of the MDP is available. Based on this history, we derive a confidence region that contains the unknown parameters with a pre-specified probability 1-β. We then determine a policy that attains the highest worst-case performance over this confidence region. By construction, this policy achieves or exceeds its worst-case performance with a confidence of at least 1-β. Our method involves the solution of tractable conic programs of moderate size.
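The worst-case policy computation can be sketched with robust value iteration. In this simplified stand-in for the method above, the confidence region is replaced by a small finite set of plausible transition kernels, and all numbers are illustrative:

```python
import numpy as np

# Robust value iteration over a finite ambiguity set of transition kernels
# (a simplification of the confidence-region approach in the abstract).
# P[s, a, s'] are transition probabilities; R[s, a] are rewards.
P_nom = np.array([[[0.8, 0.2], [0.3, 0.7]],
                  [[0.5, 0.5], [0.9, 0.1]]])
P_alt = np.array([[[0.7, 0.3], [0.4, 0.6]],
                  [[0.6, 0.4], [0.8, 0.2]]])
P_set = [P_nom, P_alt]                      # plausible kernels
R = np.array([[1.0, 0.5], [0.2, 0.8]])
gamma = 0.9                                 # discount factor

def bellman(V, kernels):
    # Robust Bellman update: worst case over the ambiguity set, best action.
    return np.array([
        max(min(R[s, a] + gamma * P[s, a] @ V for P in kernels)
            for a in range(2))
        for s in range(2)
    ])

V_rob = np.zeros(2)
V_nom = np.zeros(2)
for _ in range(1000):
    V_rob = bellman(V_rob, P_set)           # robust value function
    V_nom = bellman(V_nom, [P_nom])         # nominal MDP as a baseline
```

By construction the robust values lower-bound the nominal ones; the abstract's method replaces the finite set with a statistically derived confidence region and solves conic programs instead of enumerating kernels.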

