Scientific Program

Plenary and Semi-Plenary Lectures

ISMP 2012 offers 5 plenary lectures and 10 semi-plenary lectures by well-known specialists in various fields of optimization. The lecture titles and abstracts are below.



Monday, 9:00 - 9:50 h, H 0105


Rakesh Vohra
Polymatroids and Auction Theory


Chair: John Birge


Abstract:
A polymatroid is a polytope associated with a submodular function. It's not often one can write a sentence that contains at least three words designed to scare small animals and little children, but there it is.

Polymatroids will be familiar to students of optimization because of their attractive properties. Less well known is that these useful creatures are to be found lurking in the roots of auction theory. In this talk, I will describe how they arise and give examples of why they are useful in auction theory.
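For readers meeting these creatures for the first time, the two definitions fit in two lines (standard textbook material, not specific to the talk): a set function f : 2^E \to \mathbb{R} is submodular if

f(S) + f(T) \ge f(S \cup T) + f(S \cap T) \quad \text{for all } S, T \subseteq E,

and the polymatroid associated with f is the polytope

P(f) = \{ x \in \mathbb{R}^E_{\ge 0} : \textstyle\sum_{e \in S} x_e \le f(S) \ \text{for all } S \subseteq E \}.

Among the attractive properties alluded to above: linear optimization over P(f) is solved exactly by Edmonds' greedy algorithm.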

 



Monday, 17:00 - 17:50 h, H 0105


Dimitris Bertsimas
A computationally tractable theory of performance analysis in stochastic systems


Chair: Friedrich Eisenbrand


Abstract:
Modern probability theory, whose foundation is based on the axioms set forth by Kolmogorov, is currently the major tool for performance analysis in stochastic systems. While it offers insights into understanding such systems, probability theory is really not a computationally tractable theory. Correspondingly, some of its major areas of application remain unsolved when the underlying systems become multidimensional: queueing networks, network information theory, pricing of multi-dimensional financial contracts, and auction design in multi-item, multi-bidder auctions, among others.

We propose a new approach to analyzing stochastic systems based on robust optimization. The key idea is to replace the Kolmogorov axioms as the primitives of probability theory with some of their asymptotic implications, namely the central limit theorem and the law of large numbers, and to define appropriate robust optimization problems for performance analysis. In this way, performance analysis questions become highly structured optimization problems (linear, conic, mixed integer) for which there exist efficient, practical algorithms capable of solving truly large-scale systems.
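As one concrete illustration of the idea (the particular set below is a standard CLT-style construction chosen for exposition, and is only an assumption about the flavor of the sets used in the talk): i.i.d. noise terms z_1, ..., z_n with mean \mu and standard deviation \sigma are modeled not via a probability distribution but via membership in the uncertainty set

U_\Gamma = \left\{ (z_1, \dots, z_n) : \left| \sum_{i=1}^{n} z_i - n\mu \right| \le \Gamma \sigma \sqrt{n} \right\},

and an expected performance measure E[g(x, z)] is replaced by its worst case \max_{z \in U_\Gamma} g(x, z). For linear or conic g, the result is a structured, tractable robust optimization problem.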

We demonstrate that the proposed approach achieves computationally tractable methods for (a) analyzing multiclass queueing networks, (b) characterizing the capacity region of network information theory and associated coding and decoding methods generalizing the work of Shannon, (c) pricing multi-dimensional financial contracts generalizing the work of Black, Scholes and Merton, (d) designing multi-item, multi-bidder auctions generalizing the work of Myerson.

This is joint work with my doctoral student at MIT, Chaithanya Bandi.

 



Monday, 17:00 - 17:50 h, H 0104


Katya Scheinberg
Using randomized models in black-box and derivative free optimization


Chair: Luís Nunes Vicente


Abstract:
All derivative-free methods rely on sampling the objective function at one or more points at each iteration. Direct search methods (developed by Dennis, Torczon, Audet, Vicente and others) rely on sample sets with a fixed geometric configuration but varying scales. Model-based DFO methods (developed by Powell, Conn, Scheinberg, Toint, Vicente, Wild and others) rely on building interpolation models from sample points in the proximity of the current best iterate. Constructing and maintaining these sample sets has been one of the most essential issues in DFO. Many of the existing results are summarized in the book by Conn, Scheinberg, and Vicente, where all the sampling techniques considered for deterministic functions are themselves deterministic.

We will discuss new developments in using randomized sample sets within the DFO framework. Randomized sample sets have many advantages over deterministic sets. In particular, it is often easier to enforce "good" properties of the models with high probability rather than in the worst case. In addition, randomized sample sets can help automatically discover a good local low-dimensional approximation to a high-dimensional objective function. We will demonstrate how compressed sensing results can be used to show that random sample sets of reduced size can provide full second-order information under the assumption that the Hessian is sparse.

We will discuss new convergence theory developed for randomized models, where we can, for instance, show that as long as the models are "good" with probability more than 1/2, our trust-region framework is globally convergent with probability 1 under standard assumptions.
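To fix ideas, the following is a minimal sketch of a model-based trust-region iteration built on randomized sample sets (the function name, sampling rule, and parameter values are illustrative assumptions, not the algorithm analyzed in the talk):

import numpy as np

def dfo_trust_region(f, x0, delta0=1.0, max_iter=200, eta=0.1, seed=0):
    # Illustrative sketch (assumed interface), not the talk's algorithm:
    # random interpolation set, least-squares quadratic model,
    # Cauchy step, standard accept/reject logic.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n, delta, fx = x.size, delta0, f(x)
    p = (n + 1) * (n + 2) // 2              # points for a full quadratic model
    for _ in range(max_iter):
        Y = x + delta * rng.uniform(-1.0, 1.0, size=(p, n))  # random sample set
        fY = np.array([f(y) for y in Y])
        S = Y - x
        # Build least-squares system for m(s) = c + g's + 0.5 s'Hs.
        cols = [np.ones(p)] + [S[:, i] for i in range(n)]
        for i in range(n):
            for j in range(i, n):
                cols.append((0.5 if i == j else 1.0) * S[:, i] * S[:, j])
        coef = np.linalg.lstsq(np.column_stack(cols), fY, rcond=None)[0]
        g = coef[1:n + 1]
        H = np.zeros((n, n))
        k = n + 1
        for i in range(n):
            for j in range(i, n):
                H[i, j] = H[j, i] = coef[k]
                k += 1
        gn = np.linalg.norm(g)
        if gn < 1e-10:
            break
        d = -g / gn                          # Cauchy (steepest-descent) direction
        curv = d @ H @ d
        t = delta if curv <= 0 else min(delta, gn / curv)
        s = t * d
        pred = -(g @ s + 0.5 * s @ H @ s)    # predicted model decrease
        fnew = f(x + s)
        rho = (fx - fnew) / max(pred, 1e-16) # actual vs. predicted decrease
        if rho >= eta:
            x, fx, delta = x + s, fnew, 1.5 * delta  # success: expand region
        else:
            delta *= 0.5                             # failure: shrink, resample
    return x, fx

On a smooth test function such as f(v) = (v**2).sum(), the iterates contract toward the minimizer; the convergence theory sketched above asks only that the randomly sampled model be "good" with probability above 1/2 at each iteration.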

 



Tuesday, 9:00 - 9:50 h, H 0105


Yinyu Ye
Tseng Memorial Lecture - Recent Progresses in Linear Programming and the Simplex Method


Chair: Sven Leyffer


Abstract:
We prove that the classic policy-iteration method (Howard 1960), including the Simplex method (Dantzig 1947) with the most-negative-reduced-cost pivoting rule, is a strongly polynomial-time algorithm for solving the Markov decision process (MDP) with a constant discount factor. Furthermore, the computational complexity of the policy-iteration method (including the Simplex method) is superior to that of the only known strongly polynomial-time interior-point algorithm for solving this problem, which seems consistent with practical observation. The result is surprising, since the Simplex method with the same pivoting rule was shown to be exponential for solving a general linear programming (LP) problem, the Simplex method with the smallest-index pivoting rule was shown to be exponential for solving an MDP regardless of discount rates, and the policy-iteration method was recently shown to be exponential for solving an undiscounted MDP. We also describe the most recent progress in this research direction.

This talk is given as the Tseng Memorial Lecture. The lectureship was established in 2011 and will be awarded for the first time during the opening ceremony of ISMP 2012.
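For orientation, here is a minimal sketch of the classic policy-iteration method for a discounted min-cost MDP (the interface, with P[a] the transition matrix and c[a] the cost vector of action a, is an illustrative assumption):

import numpy as np

def policy_iteration(P, c, gamma):
    # Howard's (1960) policy iteration; assumed interface, for illustration.
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = c_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = np.array([c[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)
        # Policy improvement: greedy update in every state.
        Q = np.array([c[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = Q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v                 # no improving switch: optimal
        policy = new_policy

The result above says that for a fixed discount factor gamma < 1, the number of iterations of this loop (and of its simplex, one-switch-at-a-time variant) is bounded by a polynomial in the numbers of states and actions alone.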

 



Tuesday, 17:00 - 17:50 h, H 0105


Rekha Thomas
Lifts and Factorizations of Convex Sets


Chair: Martin Skutella


Abstract:
A basic strategy for linear optimization over a complicated convex set is to try to express the set as the projection of a simpler convex set that admits efficient algorithms. This philosophy underlies all lift-and-project methods in the literature, which attempt to find polyhedral or spectrahedral lifts of complicated sets. Given a closed convex cone K and a convex set C, there is an affine slice of K that projects to C if and only if a certain "slack operator" associated with C can be factored through K. This theorem extends a result of Yannakakis, who showed that polyhedral lifts of polytopes are controlled by nonnegative factorizations of the slack matrix of the polytope. The connection between cone lifts and cone factorizations of convex sets yields a uniform framework within which to view all lift-and-project methods, and it offers new tools for understanding convex sets. I will survey this evolving area and the main results that have emerged thus far.
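For polytopes, the specialization is Yannakakis' theorem, which can be stated in one line. If P = conv{v_1, ..., v_N} = {x : a_i^T x \le b_i, i = 1, ..., m}, its slack matrix is the nonnegative matrix

S_{ij} = b_i - a_i^{\top} v_j,

and P has a polyhedral lift with k facets (an affine slice of the cone \mathbb{R}^k_{\ge 0} projecting to P) if and only if S admits a nonnegative factorization

S = UV, \qquad U \in \mathbb{R}^{m \times k}_{\ge 0}, \; V \in \mathbb{R}^{k \times N}_{\ge 0}.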

 



Tuesday, 17:00 - 17:50 h, H 0104


Teemu Pennanen
Introduction to convex optimization in financial markets


Chair: John Birge


Abstract:
Convexity arises quite naturally in financial risk management. In risk preferences concerning random cash-flows, convexity corresponds to the fundamental diversification principle. Convexity is also a basic property of budget constraints, both in classical linear models and in more realistic models with transaction costs and constraints. Moreover, modern securities markets are based on trading protocols that result in convex trading costs. This talk gives an introduction to certain basic concepts and principles of financial risk management in simple optimization terms. We then review some convex optimization techniques used in the mathematical and numerical analysis of financial optimization problems.
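The diversification principle mentioned above has a one-line mathematical form: a risk measure \rho on random cash-flows is convex when

\rho(\lambda x + (1 - \lambda) y) \le \lambda \rho(x) + (1 - \lambda) \rho(y) \quad \text{for all } \lambda \in [0, 1],

so that a diversified position is never riskier than the corresponding weighted combination of the individual risks.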

 



Wednesday, 9:00 - 9:50 h, H 0105


Christof Schütte
Optimal control of molecular dynamics using Markov state models


Chair: Martin Skutella


Abstract:
Molecular systems exhibit complicated dynamical behavior that is responsible for their (biological or nanotechnological) functionality. The effective dynamics can be characterized by the switching behavior between several metastable states, the so-called conformations of the system. Therefore, steering a molecular system from one conformation into another means controlling its functionality. This talk considers optimal control problems that appear relevant in molecular dynamics (MD) applications and belong to the class of ergodic stochastic control problems in high dimensions. It will be demonstrated how the associated Hamilton-Jacobi-Bellman (HJB) equation can be solved. The main idea is to first approximate the dominant modes of the MD transfer operator by a low-dimensional, so-called Markov state model (MSM), and then solve the HJB equation for the MSM rather than for the full MD. The approach rests on the interpretation of the corresponding HJB equation as a nonlinear eigenvalue problem that, using a logarithmic transformation, can be recast as a linear eigenvalue problem, for which the principal eigenvalue and the associated eigenfunction are sought. The resulting method will be illustrated in an application to the maximization of the population of alpha-helices in an ensemble of small biomolecules (alanine dipeptide).
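Schematically (this display only indicates the structure; the precise setting and technical conditions are those of the talk), the ergodic control problem leads to a nonlinear eigenvalue problem of the form

\min_u \{ (L^u V)(x) + f(x, u) \} = \lambda,

where L^u is the controlled generator, and the logarithmic transformation \varphi = e^{-V} recasts it, for suitably matched control costs, as the linear principal-eigenvalue problem

(L - f)\, \varphi = -\lambda\, \varphi.

On the Markov state model, L and f reduce to a small matrix and vector, so the principal eigenpair can be computed by standard linear algebra.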

 



Wednesday, 17:00 - 17:50 h, H 0105


Robert Weismantel
Mixed Integer Convex Optimization


Chair: Gérard Cornuéjols


Abstract:
This talk deals with algorithms and complexity results for the minimization of convex functions over integer points in convex regions. We begin with a survey of the current state of the art. Then we discuss results about the speed of convergence of a black-box algorithm that iteratively solves special quadratic integer subproblems with a constant approximation factor. Despite the generality of the underlying problem, we prove that we can efficiently detect, with respect to our assumptions regarding the encoding of the problem, a feasible solution whose objective function value is close to the optimal value. We also show that this proximity result is best possible up to a polynomial factor. Next we discuss a new "cone-shrinking algorithm" that allows us to prove that integer convex optimization with a constant number of variables is solvable in polynomial time.
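The problem template underlying the talk, restated from the opening sentence in symbols (with f convex and C a convex region), is

\min \ f(x) \quad \text{s.t.} \quad x \in C \cap \mathbb{Z}^n,

and the black-box results above concern oracles that solve quadratic instances of this template approximately, with a constant approximation factor.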
Parts of our results are based on joint work with M. Baes, A. del Pia, Y. Nesterov, and S. Onn; the other parts are based on joint work with M. Baes, T. Oertel, and C. Wagner.

 



Wednesday, 17:00 - 17:50 h, H 0104


Claudia Sagastizábal
Divide to conquer: decomposition methods for energy optimization


Chair: Alejandro Jofré


Abstract:
Modern electricity systems provide a plethora of challenging problems in optimization. The increasing penetration of low-carbon renewable sources of energy introduces uncertainty into problems traditionally modeled in a deterministic setting. The liberalization of the electricity sector brought the need to design sound markets that ensure capacity investments while properly reflecting strategic interactions. In all these problems, hedging risk, possibly in a dynamic manner, is also a concern. Representing uncertainty and/or competition between different companies in a multi-settlement power market considerably increases the number of variables and constraints. For this reason, a trade-off usually needs to be found between modeling accuracy and numerical tractability: the more detail is brought into the model, the harder the optimization problem becomes.

Both for optimization and for generalized equilibrium problems, we explore some variants of solution methods based on Lagrangian relaxation and on Benders decomposition. Throughout, the leading thread is the actual practical value of such techniques, measured by their efficiency in solving energy-related problems.
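As an indication of why Lagrangian relaxation suits these problems (a generic formulation, not a specific model from the talk), consider units i = 1, ..., N coupled only through a demand constraint:

\min_x \ \sum_{i} f_i(x_i) \quad \text{s.t.} \quad \sum_{i} g_i(x_i) = d.

Relaxing the coupling constraint with a multiplier \lambda yields the dual function

\theta(\lambda) = -\lambda^{\top} d + \sum_{i} \min_{x_i} \big[ f_i(x_i) + \lambda^{\top} g_i(x_i) \big],

which splits into one small subproblem per unit; the concave, nonsmooth function \theta is then maximized, typically by a bundle method.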

 



Thursday, 9:00 - 9:50 h, H 0105


Richard Baraniuk
Compressive Signal Processing


Chair: Luís Nunes Vicente


Abstract:
Sensing and imaging systems are under increasing pressure to accommodate ever larger and higher-dimensional data sets; ever faster capture, sampling, and processing rates; ever lower power consumption; communication over ever more difficult channels; and radically new sensing modalities. This talk will overview the foundations and recent progress on compressive signal processing, a new approach to data acquisition and processing in which analog signals are digitized and processed not via uniform sampling but via measurements using more general, even random, test functions. In stark contrast with conventional wisdom, the new theory asserts that one can combine "sub-Nyquist-rate sampling" with large-scale optimization for efficient and accurate signal acquisition when the signal has a sparse structure. The implications of compressive sensing are promising for many applications and enable the design of new kinds of communication systems, cameras, imagers, microscopes, and pattern recognition systems. Special emphasis will be placed on the pros and cons of the compressive sensing technique.
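The mathematical core can be stated compactly (the standard sparse-recovery formulation): measurements y = \Phi x are taken with a matrix \Phi \in \mathbb{R}^{m \times n}, m \ll n. If x has at most k nonzero entries and \Phi is a suitable random matrix with

m = O(k \log(n/k))

rows, then with high probability x is recovered exactly by the convex program

\min_x \ \|x\|_1 \quad \text{s.t.} \quad \Phi x = y,

which is the large-scale optimization half of the sub-Nyquist sampling recipe above.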

 



Thursday, 17:00 - 17:50 h, H 0105


Amin Saberi
Rounding by Sampling and Traveling Salesman Problems


Chair: Friedrich Eisenbrand


Abstract:
I will talk about a new technique for rounding the solution of linear programming relaxations of combinatorial optimization problems. In particular, I will present new algorithms for symmetric and asymmetric traveling salesman problems, improving the best known approximation ratios for these problems.
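In generic terms (a schematic of the rounding-by-sampling idea, not the specific construction of the talk): given an optimal solution x^* \in [0, 1]^E of the LP relaxation, one samples a simple combinatorial object, such as a spanning tree T, from a distribution \mathcal{D} whose marginals approximately match the fractional solution,

\Pr_{T \sim \mathcal{D}}[e \in T] \approx x^*_e \quad \text{for every edge } e,

and then completes the sample to a feasible tour; choosing \mathcal{D} with additional structure, for example a maximum-entropy distribution, is what drives the improved approximation ratios.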

 



Thursday, 17:00 - 17:50 h, H 0104


Michael Friedlander
Data fitting and optimization with randomized sampling


Chair: Luís Nunes Vicente


Abstract:
For many structured data-fitting applications, incremental gradient methods (both deterministic and randomized) offer inexpensive iterations by sampling only subsets of the data. They make great progress initially, but eventually stall. Full gradient methods, in contrast, often achieve steady convergence, but may be prohibitively expensive for large problems. Applications in machine learning and robust seismic inversion motivate us to develop an inexact gradient method and sampling scheme that exhibit the benefits of both incremental and full gradient methods.
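A minimal sketch of the kind of scheme at issue, with a random sample that grows across iterations so the method behaves incrementally at first and like a full gradient method later (the names, the fixed step, and the geometric growth rule are illustrative assumptions, not the method from the talk):

import numpy as np

def sampled_gradient_descent(grads, x0, step=0.1, n_iter=100,
                             batch0=1, growth=1.1, seed=0):
    # Illustrative sketch, assumed interface: grads is a list of
    # per-term gradient functions for f(x) = (1/N) * sum_i f_i(x).
    rng = np.random.default_rng(seed)
    N, x, batch = len(grads), np.asarray(x0, dtype=float), float(batch0)
    for _ in range(n_iter):
        m = min(N, max(1, int(batch)))
        idx = rng.choice(N, size=m, replace=False)   # random subset of terms
        g = sum(grads[i](x) for i in idx) / m        # sampled gradient estimate
        x = x - step * g
        batch *= growth                              # gradually enlarge sample
    return x

The growth rate trades cheap early iterations against the steady late-stage convergence of the full gradient method.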

 



Friday, 9:00 - 9:50 h, H 0105


Nikhil Bansal
Semidefinite Optimization in Discrepancy Theory


Chair: Friedrich Eisenbrand


Abstract:
The concept of discrepancy is intimately related to several fundamental topics in mathematics and theoretical computer science, and deals with the following type of question. Given a collection of sets on some elements, color each element red or blue such that each set in the collection is colored as evenly as possible.
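In symbols (the standard definition): for a set system \mathcal{S} over ground set [n], identifying the two colors with \pm 1, the discrepancy of a coloring \chi : [n] \to \{-1, +1\} and of the system are

\mathrm{disc}(\mathcal{S}, \chi) = \max_{S \in \mathcal{S}} \Big| \sum_{i \in S} \chi(i) \Big|, \qquad \mathrm{disc}(\mathcal{S}) = \min_{\chi} \ \mathrm{disc}(\mathcal{S}, \chi).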

Recently, there have been several new developments in discrepancy theory based on connections to semidefinite programming. This connection has been useful in several ways. It gives efficient polynomial-time algorithms for several problems for which only non-constructive results were previously known. It also leads to several new structural results in discrepancy itself, such as tightness of the so-called determinant lower bound, improved bounds on the discrepancy of the union of set systems, and so on. We will give a brief survey of these results, focusing on the main ideas and the techniques involved.

 



Friday, 9:00 - 9:50 h, H 0104


Xiaojun Chen
Nonsmooth, Nonconvex Regularized Optimization for Sparse Approximations


Chair: Ya-xiang Yuan


Abstract:
Minimization problems with nonsmooth, nonconvex, perhaps even non-Lipschitz regularization terms have wide applications in image restoration, signal reconstruction and variable selection, but they have so far lacked a supporting optimization theory. For Lp non-Lipschitz regularized minimization, we show that finding a global optimal solution is strongly NP-hard. On the other hand, we present lower bounds on the nonzero entries of every local optimal solution, without assumptions on the data matrix. Such lower bounds can be used to classify zero and nonzero entries in local optimal solutions and to select regularization parameters for desirable sparsity of solutions. Moreover, we show that smoothing methods are efficient for solving such regularized minimization problems. In particular, we introduce a smoothing SQP method that can find an affine-scaled ε-stationary point from any starting point with complexity O(ε^{-2}), and a smoothing trust-region Newton method that can find a point satisfying the affine-scaled second-order necessary condition from any starting point. Examples with six widely used nonsmooth, nonconvex regularization terms are presented to illustrate the theory and algorithms.
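A representative instance of the class above (one of several; the talk covers six regularization terms) is \ell_p-regularized least squares,

\min_{x \in \mathbb{R}^n} \ \|Ax - b\|_2^2 + \lambda \sum_{i=1}^{n} |x_i|^p, \qquad 0 < p < 1,

whose regularizer is non-Lipschitz at zero; that non-Lipschitz behavior is what the lower bounds on nonzero entries exploit.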
Joint work with W. Bian, D. Ge, L. Niu, Z. Wang, Y. Ye, Y. Yuan.

 



Friday, 17:00 - 17:50 h, H 0105


Jorge Nocedal
Second-Order Methods for Stochastic, Semi-Smooth and Nonlinear Programming


Chair:


Abstract:
First-order methods have been advocated for solving optimization problems of large scale. Although they are sometimes the most appropriate techniques, we argue that in many applications it is advantageous to employ second-order information as an integral part of the iteration. This is particularly so when parallel computing environments are available. In this talk, we take a broad view of second-order methods, and center our discussion around three applications: convex L1 regularized optimization, inverse covariance estimation, and nonlinear programming. We note that many efficient methods for these problems can be derived using a semi-smooth Newton framework, which allows us to compare their manifold identification and subspace minimization properties. We propose an algorithm employing a novel active-set mechanism that is of interest in machine learning, PDE-constrained optimization, and other applications. We also discuss dynamic sampling techniques, illustrate their practical performance, and provide work complexity bounds. The talk concludes with some observations about the influence that parallel computing has on large scale optimization calculations.
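To indicate where semi-smoothness enters for the first of these applications (a standard reformulation, included for orientation): x^* solves the convex problem \min_x f(x) + \lambda \|x\|_1 if and only if

F(x^*) := x^* - \mathrm{prox}_{\lambda \|\cdot\|_1}\big( x^* - \nabla f(x^*) \big) = 0,

where the proximal map is componentwise soft-thresholding. The residual F is semi-smooth, so Newton-type methods apply to the equation F(x) = 0, and the zero components fixed along the way are what give the manifold-identification and subspace-minimization behavior compared in the talk.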

 

 

