Scientific Program

Plenary and Semi-Plenary Lectures

ISMP 2012 offers 5 plenary lectures and 10 semi-plenary lectures by well-known specialists in various fields of optimization. The lecture titles, abstracts, and bio sketches of the speakers are below.



Monday, 9:00 - 9:50 h, H 0105


Rakesh Vohra
Polymatroids and Auction Theory


Chair: John Birge


Abstract:
A polymatroid is a polytope associated with a submodular function. It's not often one can write a sentence that contains at least three words designed to scare small animals and little children, but there it is.

Polymatroids will be familiar to students of optimization because of their attractive properties. Less well known is that these useful creatures are to be found lurking in the roots of auction theory. In this talk, I will describe how they arise and give examples of why they are useful in auction theory.
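For readers meeting the term for the first time, the standard definition (included for reference, not quoted from the talk) can be written as

\[
  P(f) \;=\; \Bigl\{\, x \in \mathbb{R}^{E}_{\ge 0} \;:\; \sum_{e \in S} x_e \,\le\, f(S) \ \ \text{for all } S \subseteq E \,\Bigr\},
\]

where f : 2^E -> R is submodular, i.e. f(S) + f(T) >= f(S ∪ T) + f(S ∩ T) for all S, T ⊆ E.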

 

Biographical sketch:
Rakesh Vohra is the John L. & Helen Kellogg Professor of Managerial Economics and lapsed math programmer. He occupies himself with the usual obligations of a faculty member... sitting and thinking and, when required, standing and professing. He thinks mostly about pricing, auctions and the design of markets. He professes on the same but with less success.

 



Monday, 17:00 - 17:50 h, H 0105


Dimitris Bertsimas
A computationally tractable theory of performance analysis in stochastic systems


Chair: Friedrich Eisenbrand


Abstract:
Modern probability theory, whose foundation rests on the axioms set forth by Kolmogorov, is currently the major tool for performance analysis in stochastic systems. While it offers insights into such systems, probability theory is not a computationally tractable theory. Correspondingly, some of its major areas of application remain unsolved when the underlying systems become multidimensional: queueing networks, network information theory, pricing of multi-dimensional financial contracts, and auction design in multi-item, multi-bidder auctions, among others.

We propose a new approach to analyze stochastic systems based on robust optimization. The key idea is to replace the Kolmogorov axioms as primitives of probability theory with some of the asymptotic implications of probability theory, namely the central limit theorem and the law of large numbers, and to define appropriate robust optimization problems over which the performance analysis is carried out. In this way, the performance analysis questions become highly structured optimization problems (linear, conic, mixed integer) for which there exist efficient, practical algorithms capable of solving truly large-scale systems.
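As a hedged illustration of this idea (the particular set below is a textbook-style choice, not necessarily the one used in the talk): i.i.d. primitives X_1, ..., X_n with mean μ and standard deviation σ can be replaced by the central-limit-theorem-inspired uncertainty set

\[
  \mathcal{U} \;=\; \Bigl\{\, (x_1,\dots,x_n) \;:\; \Bigl|\,\sum_{i=1}^{n} x_i - n\mu \Bigr| \;\le\; \Gamma\,\sigma\sqrt{n} \,\Bigr\},
\]

and a performance measure is then bounded by optimizing it over \mathcal{U}, a deterministic problem that is typically linear or conic.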

We demonstrate that the proposed approach achieves computationally tractable methods for (a) analyzing multiclass queueing networks, (b) characterizing the capacity region of network information theory and associated coding and decoding methods generalizing the work of Shannon, (c) pricing multi-dimensional financial contracts generalizing the work of Black, Scholes and Merton, (d) designing multi-item, multi-bidder auctions generalizing the work of Myerson.

This is joint work with my doctoral student at MIT, Chaithanya Bandi.

 

Biographical sketch:
Dimitris Bertsimas is currently the Boeing Leaders for Global Operations Professor of Management and the co-director of the Operations Research Center at the Massachusetts Institute of Technology. He received a BS in Electrical Engineering and Computer Science from the National Technical University of Athens, Greece, in 1985, an MS in Operations Research from MIT in 1987, and a Ph.D. in Applied Mathematics and Operations Research from MIT in 1988. He has been on the MIT faculty since 1988. His research interests include optimization, stochastic systems, data mining, and their applications. In recent years he has worked in robust optimization, health care, and finance. He has published over 150 papers and 3 graduate-level textbooks. He is a member of the National Academy of Engineering and has received several research awards, including the Farkas Prize, the SIAM Optimization Prize, and the Erlang Prize.

 



Monday, 17:00 - 17:50 h, H 0104


Katya Scheinberg
Using randomized models in black-box and derivative free optimization


Chair: Luís Nunes Vicente


Abstract:
All derivative-free methods rely on sampling the objective function at one or more points at each iteration. Direct search methods (developed by Dennis, Torczon, Audet, Vicente and others) rely on sample sets of a defined configuration, but at different scales. Model-based DFO methods (developed by Powell, Conn, Scheinberg, Toint, Vicente, Wild and others) rely on building interpolation models using sample points in the proximity of the current best iterate. Constructing and maintaining these sample sets has been one of the most essential issues in DFO. Many of the existing results are summarized in a book by Conn, Scheinberg, and Vicente, where all of the sampling techniques considered for deterministic functions are themselves deterministic.

We will discuss new developments in using randomized sample sets within the DFO framework. Randomized sample sets have many advantages over deterministic sets. In particular, it is often easier to enforce "good" properties of the models with high probability, rather than in the worst case. In addition, randomized sample sets can help automatically discover a good local low-dimensional approximation to the high-dimensional objective function. We will demonstrate how compressed sensing results can be used to show that reduced-size random sample sets can provide full second-order information under the assumption of sparsity of the Hessian.

We will discuss new convergence theory developed for randomized models, where we can, for instance, show that as long as the models are "good" with probability more than 1/2, our trust-region framework is globally convergent with probability 1 under standard assumptions.
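A minimal Python sketch of one such model-based trust-region iteration built on randomly sampled points is given below; the function name, the linear regression model, and all parameter choices are illustrative assumptions, not the algorithm analyzed in the talk or in the Conn-Scheinberg-Vicente book.

# A minimal sketch of a model-based, derivative-free trust-region method
# that builds a local linear regression model from *randomly sampled* points.
import numpy as np

def dfo_trust_region(f, x0, delta=1.0, max_iter=200, m=None, eta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = x.size
    m = m or 2 * n + 1                      # random sample points per iteration
    fx = f(x)
    for _ in range(max_iter):
        # Randomly sample points around x and fit f(x + s) ~ fx + g.T s
        # by least squares: this is the "random model".
        S = delta * rng.uniform(-1.0, 1.0, size=(m, n))
        F = np.array([f(x + s) for s in S])
        g, *_ = np.linalg.lstsq(S, F - fx, rcond=None)
        if np.linalg.norm(g) < 1e-10:
            delta *= 0.5
            continue
        # Cauchy-type step: minimize the linear model over the trust region.
        step = -delta * g / np.linalg.norm(g)
        predicted = -g @ step               # model decrease (> 0)
        f_new = f(x + step)
        rho = (fx - f_new) / predicted      # actual vs. predicted decrease
        if rho >= eta:                      # successful step: accept and expand
            x, fx, delta = x + step, f_new, min(2.0 * delta, 10.0)
        else:                               # unsuccessful step: shrink the region
            delta *= 0.5
        if delta < 1e-8:
            break
    return x, fx

# Example usage on a smooth test function.
if __name__ == "__main__":
    rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    print(dfo_trust_region(rosen, [-1.2, 1.0]))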

 

Biographical sketch:
Katya Scheinberg is an associate professor in the Industrial and Systems Engineering Department at Lehigh University. A native of Moscow, she earned her undergraduate degree in operations research from Lomonosov Moscow State University in 1992 and received her Ph.D. in operations research from Columbia University in 1997. Scheinberg was a Research Staff Member at the IBM T.J. Watson Research Center for over a decade, where she worked on various applied and theoretical problems in optimization, until moving back to Columbia as a visiting professor in 2009 and later to Lehigh. Her main research areas are related to developing practical algorithms (and their theoretical analysis) for various problems in continuous optimization, such as convex optimization, derivative-free optimization, machine learning, and quadratic programming. Scheinberg co-authored the 2008 book Introduction to Derivative-Free Optimization with Andrew R. Conn and Luís N. Vicente. She is currently the editor of Optima, the MOS newsletter, and an associate editor of SIOPT.

 



Tuesday, 9:00 - 9:50 h, H 0105


Yinyu Ye
Tseng Memorial Lecture - Recent Progresses in Linear Programming and the Simplex Method


Chair: Sven Leyffer


Abstract:
We prove that the classic policy-iteration method (Howard 1960), including the Simplex method (Dantzig 1947) with the most-negative-reduced-cost pivoting rule, is a strongly polynomial-time algorithm for solving the Markov decision process (MDP) with a constant discount factor. Furthermore, the computational complexity of the policy-iteration method (including the Simplex method) is superior to that of the only known strongly polynomial-time interior-point algorithm for this problem, which seems consistent with practical observation. The result is surprising, since the Simplex method with the same pivoting rule was shown to be exponential for solving a general linear programming (LP) problem, the Simplex method with the smallest-index pivoting rule was shown to be exponential for solving an MDP regardless of discount rates, and the policy-iteration method was recently shown to be exponential for solving an undiscounted MDP. We also describe the most recent progress in this research direction.

This talk is given as the Tseng Memorial Lecture. The lectureship was established in 2011 and will be awarded for the first time during the opening ceremony of ISMP 2012.
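Returning to the abstract, a minimal Python sketch of the classic policy-iteration method for a discounted MDP may help fix ideas; the data layout (a list of transition matrices P and reward vectors r) and all names are illustrative assumptions, not code from the talk.

# A minimal sketch of policy iteration (Howard 1960) for a discounted MDP.
import numpy as np

def policy_iteration(P, r, gamma=0.9, max_iter=1000):
    """P[a] is the |S|x|S| transition matrix of action a, r[a] its reward vector.
    Returns a policy (one action index per state) and its value vector."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    for _ in range(max_iter):
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([r[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy update over all states (Howard's rule;
        # the Simplex variant discussed above would switch one state at a time).
        Q = np.array([r[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            break                          # no improving action: policy is optimal
        policy = new_policy
    return policy, v

# Tiny example: 2 states, 2 actions with random data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    P = [rng.dirichlet(np.ones(2), size=2) for _ in range(2)]   # row-stochastic
    r = [rng.uniform(0, 1, size=2) for _ in range(2)]
    print(policy_iteration(P, r, gamma=0.8))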

 

Biographical sketch:
Yinyu Ye is currently a Professor of Management Science and Engineering and of the Institute of Computational and Mathematical Engineering, and the Director of the MS&E Industrial Affiliates Program, at Stanford University. He received the B.S. degree in System Engineering from the Huazhong University of Science and Technology, China, and the M.S. and Ph.D. degrees in Engineering-Economic Systems and Operations Research from Stanford University. His current research interests include continuous and discrete optimization, algorithm design and analysis, computational game/market equilibrium, metric distance geometry, dynamic resource allocation, and stochastic and robust decision making. He is an INFORMS (The Institute for Operations Research and the Management Sciences) Fellow, and has received several research awards, including the 2009 John von Neumann Theory Prize for fundamental sustained contributions to theory in operations research and the management sciences, the inaugural 2006 Farkas Prize in optimization, and the 2009 IBM Faculty Award. He has supervised numerous doctoral students, who received the 2008 Nicholson Prize and the 2006 and 2010 INFORMS Optimization Prizes for Young Researchers.

 



Tuesday, 17:00 - 17:50 h, H 0105


Rekha Thomas
Lifts and Factorizations of Convex Sets


Chair: Martin Skutella


Abstract:
A basic strategy for linear optimization over a complicated convex set is to try to express the set as the projection of a simpler convex set that admits efficient algorithms. This philosophy underlies all lift-and-project methods in the literature, which attempt to find polyhedral or spectrahedral lifts of complicated sets. Given a closed convex cone K and a convex set C, there is an affine slice of K that projects to C if and only if a certain "slack operator" associated to C can be factored through K. This theorem extends a result of Yannakakis, who showed that polyhedral lifts of polytopes are controlled by the nonnegative factorizations of the slack matrix of the polytope. The connection between cone lifts and cone factorizations of convex sets yields a uniform framework within which to view all lift-and-project methods, and it also offers new tools for understanding convex sets. I will survey this evolving area and the main results that have emerged thus far.
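In the polyhedral case, the result of Yannakakis mentioned above can be stated concretely (standard formulation, paraphrased rather than quoted): for a polytope P = {x : Ax <= b} with vertices v_1, ..., v_n, the slack matrix has entries

\[
  S_{ij} \;=\; b_i - a_i^{\top} v_j \;\ge\; 0,
\]

and P is the projection of an affine slice of the nonnegative cone \mathbb{R}^{r}_{\ge 0} if and only if S admits a nonnegative factorization

\[
  S \;=\; F\,G, \qquad F \in \mathbb{R}^{m \times r}_{\ge 0}, \quad G \in \mathbb{R}^{r \times n}_{\ge 0}.
\]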

 

Biographical sketch:
Rekha Thomas received a Ph.D. in Operations Research from Cornell University in 1994 under the supervision of Bernd Sturmfels. After holding postdoctoral positions at the Cowles Foundation for Economics at Yale University and at ZIB, Berlin, she worked as an assistant professor of Mathematics at Texas A&M University from 1995 to 2000. Since 2000, she has been at the University of Washington in Seattle, where she is now the Robert R. and Elaine F. Phelps Endowed Professor of Mathematics. Her research interests are in optimization and computational algebra.

 



Tuesday, 17:00 - 17:50 h, H 0104


Teemu Pennanen
Introduction to convex optimization in financial markets


Chair: John Birge


Abstract:
Convexity arises quite naturally in financial risk management. In risk preferences concerning random cash-flows, convexity corresponds to the fundamental diversification principle. Convexity is also a basic property of budget constraints, both in classical linear models and in more realistic models with transaction costs and constraints. Moreover, modern securities markets are based on trading protocols that result in convex trading costs. This talk gives an introduction to certain basic concepts and principles of financial risk management in simple optimization terms. We then review some convex optimization techniques used in the mathematical and numerical analysis of financial optimization problems.
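A one-line illustration of the diversification principle (standard in the convex risk measure literature, not specific to this talk): if ρ is a convex risk measure, X and Y are random cash-flows, and λ ∈ [0,1], then

\[
  \rho\bigl(\lambda X + (1-\lambda)Y\bigr) \;\le\; \lambda\,\rho(X) + (1-\lambda)\,\rho(Y),
\]

so mixing two positions never makes the measured risk worse than the corresponding mix of the individual risks.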

 

Biographical sketch:
Teemu Pennanen is the Professor of Mathematical Finance, Probability and Statistics at King's College London. Before joining KCL, Professor Pennanen worked as Managing Director at QSA Quantitative Solvency Analysts Ltd, with a joint appointment as Professor of Stochastics at the University of Jyväskylä, Finland. His earlier appointments include a research fellowship of the Finnish Academy and several visiting positions at universities abroad. Professor Pennanen's research interests include financial risk management, financial econometrics, mathematical finance, and the development of computational techniques for risk management. He has authored more than 30 journal publications and has been a consultant to a number of financial institutions, including the Bank of Finland, the Ministry of Social Affairs and Health, and the State Pension Fund.

 



Wednesday, 9:00 - 9:50 h, H 0105


Christof Schütte
Optimal control of molecular dynamics using Markov state models


Chair: Martin Skutella


Abstract:
Molecular systems exhibit complicated dynamical behavior that is responsible for their (biological or nanotechnological) functionality. The effective dynamics can be characterized by the switching behavior between several metastable states, the so-called conformations of the system. Therefore, steering a molecular system from one conformation into another means controlling its functionality. This talk considers optimal control problems that appear relevant in molecular dynamics (MD) applications and belong to the class of ergodic stochastic control problems in high dimensions. It will be demonstrated how the associated Hamilton-Jacobi-Bellman (HJB) equation can be solved. The main idea is to first approximate the dominant modes of the MD transfer operator by a low-dimensional, so-called Markov state model (MSM), and then to solve the HJB equation for the MSM rather than for the full MD. The approach rests on the interpretation of the corresponding HJB equation as a nonlinear eigenvalue problem that, using a logarithmic transformation, can be recast as a linear eigenvalue problem for which the principal eigenvalue and the associated eigenfunction are sought. The resulting method will be illustrated in application to the maximization of the population of alpha-helices in an ensemble of small biomolecules (alanine dipeptide).
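One common form of the logarithmic transformation (sketched here under standard assumptions, e.g. a cost that is quadratic in the control, rather than the exact setting of the talk): if the ergodic HJB equation reads

\[
  \lambda \;=\; \min_{u}\,\bigl[(L^{u}V)(x) + c(x,u)\bigr],
\]

then the substitution \varphi = e^{-V} turns it into the linear principal-eigenvalue problem

\[
  (L - c_{0})\,\varphi \;=\; -\lambda\,\varphi,
\]

where L is the generator of the uncontrolled dynamics and c_0 the state-dependent part of the cost; replacing L by the generator of the MSM reduces this to a small matrix eigenvalue problem.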

 

Biographical sketch:
Christof Schütte is a professor in the Mathematics and Computer Science Department at Freie Universität Berlin (FU). He holds a diploma in physics from Paderborn University and a Ph.D. in mathematics from FU. Since 2008, he has been the co-chair of the DFG Research Center MATHEON in Berlin and the head of the Biocomputing Group at FU. Christof gave a plenary lecture at ICIAM 2007 in Zurich and was an invited speaker at the 2010 International Congress of Mathematicians in Hyderabad. His research is on modeling, simulation and optimization in the life sciences, with a special focus on stochastic multiscale problems in molecular and systems biology and on information-based medicine. He is currently the head of the Innovation Laboratory "Math for Diagnostics" in Berlin.

 



Wednesday, 17:00 - 17:50 h, H 0105


Robert Weismantel
Mixed Integer Convex Optimization


Chair: Gérard Cornuéjols


Abstract:
This talk deals with algorithms and complexity results about the minimization of convex functions over integer points in convex regions.
We begin with a survey of the current state of the art.
Then we discuss results about the speed of convergence of a black-box algorithm that iteratively solves special quadratic integer subproblems with a constant approximation factor. Despite the generality of the underlying problem, we prove that we can detect, efficiently with respect to our assumptions regarding the encoding of the problem, a feasible solution whose objective function value is close to the optimal value.
We also show that this proximity result is best possible up to a polynomial factor.
Next we discuss a new "cone-shrinking algorithm" that allows us to prove that integer convex optimization with a constant number of variables is polynomial time solvable.
Parts of our results are based on joint work with M. Baes, A. del Pia, Y. Nesterov, S. Onn. The other part is based on joint work with M. Baes, T. Oertel, C. Wagner.
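For concreteness, the generic problem class named in the title can be stated (in a standard form, as a notational assumption rather than a quotation from the talk) as

\[
  \min\ \bigl\{\, f(x) \;:\; x \in C,\ \ x \in \mathbb{Z}^{n} \times \mathbb{R}^{d} \,\bigr\},
\]

with f a convex function and C a convex region; the polynomial-time solvability result above concerns the case in which the number of variables is constant.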

 

Biographical sketch:
Robert Weismantel was born in 1965 in München, Germany. After studying mathematics at the University of Augsburg, he moved with Martin Grötschel to the Konrad-Zuse-Zentrum für Informationstechnik in Berlin (ZIB) in 1991. From the TU Berlin he received his PhD degree in 1992 and his second PhD degree (Habilitation) in 1995.

From 1991 to 1997 he was a researcher at ZIB. From 1998 to 2010 he was a Professor (C4) for Mathematical Optimization at the University of Magdeburg. In 2010, he was elected Full Professor at the Department of Mathematics at ETH Zurich.
His main research interest is integer and mixed integer optimization: specifically, he has worked on primal integer programming, the theory of Hilbert bases, and cutting plane theory. More recently he has been working on nonlinear integer optimization.

His work has been distinguished with several prizes and honors: his PhD thesis was awarded a Carl Ramsauer Prize, he received the Gerhard Hess Research Prize of the German Science Foundation, and he received IBM Faculty Awards in 2007 and 2010. He is currently a Co-Editor of Mathematical Programming, Series A.

 



Wednesday, 17:00 - 17:50 h, H 0104


Claudia Sagastizábal
Divide to conquer: decomposition methods for energy optimization


Chair: Alejandro Jofré


Abstract:
Modern electricity systems provide a plethora of challenging issues in optimization. The increasing penetration of low-carbon renewable sources of energy introduces uncertainty into problems traditionally modeled in a deterministic setting. The liberalization of the electricity sector has brought the need to design sound markets that ensure capacity investments while properly reflecting strategic interactions. In all these problems, hedging risk, possibly in a dynamic manner, is also a concern. Representing uncertainty and/or the competition of different companies in a multi-settlement power market considerably increases the number of variables and constraints. For this reason, a trade-off usually needs to be found between modeling and numerical tractability: the more detail is brought into the model, the harder the optimization problem becomes.

Both for optimization and for generalized equilibrium problems, we explore some variants of solution methods based on Lagrangian relaxation and on Benders decomposition. Throughout, we keep as a leading thread the actual practical value of such techniques in terms of their efficiency in solving energy-related problems.
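As a hedged illustration of the first family of methods (a generic formulation with a single coupling demand constraint, not necessarily the model used in the talk): if units i = 1, ..., N have individual costs f_i and feasible sets X_i, relaxing the coupling constraint \sum_i x_i = d with a multiplier λ gives the dual function

\[
  L(\lambda) \;=\; \lambda^{\top} d \;+\; \sum_{i=1}^{N}\ \min_{x_i \in X_i} \bigl( f_i(x_i) - \lambda^{\top} x_i \bigr),
\]

which splits into one small subproblem per unit and can be maximized by a nonsmooth (e.g. bundle-type) method.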

 

Biographical sketch:
Claudia Sagastizábal is on leave from a researcher position at INRIA, France, and is currently a long-term visitor at IMPA, in Rio de Janeiro. After finishing her undergraduate mathematics studies in Argentina, Claudia moved to Paris, where she received her PhD in 1993 and her habilitation degree in 1998, both from the University Paris I Panthéon-Sorbonne. She has taught optimization in Argentina, France, and Brazil and is a co-author of the book "Numerical Optimization: Theoretical and Practical Aspects", published by Springer. Claudia has also served on various program committees of international conferences and was elected Council Member-at-large of the Mathematical Optimization Society for the period 2009-2012.
In parallel with her academic activities, Claudia has held consulting appointments for companies such as Electricité de France, Gaz de France-Suez, Renault-France, Robert Bosch, Petrobras, and Eletrobras. Her research interests lie in the areas of nonsmooth optimization as well as convex and variational analysis, always driven by real-life applications, with emphasis on the energy sector.

 



Thursday, 9:00 - 9:50 h, H 0105


Richard Baraniuk
Compressive Signal Processing


Chair: Luís Nunes Vicente


Abstract:
Sensing and imaging systems are under increasing pressure to accommodate ever larger and higher-dimensional data sets; ever faster capture, sampling, and processing rates; ever lower power consumption; communication over ever more difficult channels; and radically new sensing modalities. This talk will overview the foundations and recent progress on compressive signal processing, a new approach to data acquisition and processing in which analog signals are digitized and processed not via uniform sampling but via measurements using more general, even random, test functions. In stark contrast with conventional wisdom, the new theory asserts that one can combine "sub-Nyquist-rate sampling" with large-scale optimization for efficient and accurate signal acquisition when the signal has a sparse structure. The implications of compressive sensing are promising for many applications and enable the design of new kinds of communication systems, cameras, imagers, microscopes, and pattern recognition systems. Special emphasis will be placed on the pros and cons of the compressive sensing technique.
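In its most common mathematical form (a standard formulation included for reference, not a quotation from the talk), one acquires m << n linear measurements y = \Phi x of a signal x ∈ R^n with at most k nonzero entries and recovers it by the convex program

\[
  \min_{z \in \mathbb{R}^{n}} \ \|z\|_{1} \quad \text{subject to} \quad \Phi z = y,
\]

which, for suitable random \Phi and on the order of k log(n/k) measurements, recovers x exactly with high probability.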

 

Biographical sketch:
Richard G. Baraniuk is the Victor E. Cameron Professor of Electrical and Computer Engineering at Rice University. His research interests lie in new theory, algorithms, and hardware for sensing, signal processing, and machine learning. He is a Fellow of the IEEE and AAAS and has received national young investigator awards from the US NSF and ONR, the Rosenbaum Fellowship from the Isaac Newton Institute of Cambridge University, the ECE Young Alumni Achievement Award from the University of Illinois, and the Wavelet Pioneer and Compressive Sampling Pioneer Awards from SPIE. His work on the Rice single-pixel compressive camera has been widely reported in the popular press and was selected by MIT Technology Review as a TR10 Top 10 Emerging Technology for 2007. For his teaching and education projects, including Connexions (cnx.org), he has received the C. Holmes MacDonald National Outstanding Teaching Award from Eta Kappa Nu, the Tech Museum of Innovation Laureate Award, the Internet Pioneer Award from the Berkman Center for Internet and Society at Harvard Law School, the World Technology Award for Education, the IEEE-SPS Education Award, and the WISE Education Award.

 



Thursday, 17:00 - 17:50 h, H 0105


Amin Saberi
Rounding by Sampling and Traveling Salesman Problems


Chair: Friedrich Eisenbrand


Abstract:
I will talk about a new technique for rounding the solution of linear programming relaxations of combinatorial optimization problems. In particular, I will present new algorithms for symmetric and asymmetric traveling salesman problems, improving the best known approximation ratios for these problems.

 

Biographical sketch:
Amin Saberi is an Associate Professor and 3COM Faculty Scholar at Stanford University. He received his B.Sc. from Sharif University of Technology and his Ph.D. in Computer Science from Georgia Institute of Technology. His research interests include algorithms, approximation algorithms, and algorithmic aspects of games, markets, and networks. He is a Frederick Terman Fellow (2005-2010), an Alfred Sloan Fellow (2010-2012), and the recipient of a National Science Foundation CAREER award as well as best paper awards at FOCS 2011 and SODA 2010.

 



Thursday, 17:00 - 17:50 h, H 0104


Michael Friedlander
Data fitting and optimization with randomized sampling


Chair: Luís Nunes Vicente


Abstract:
For many structured data-fitting applications, incremental gradient methods (both deterministic and randomized) offer inexpensive iterations by sampling only subsets of the data. They make great progress initially, but eventually stall. Full gradient methods, in contrast, often achieve steady convergence, but may be prohibitively expensive for large problems. Applications in machine learning and robust seismic inversion motivate us to develop an inexact gradient method and sampling scheme that exhibit the benefits of both incremental and full gradient methods.
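The sketch below (illustrative Python, not the method of the talk) shows the basic idea of a sampled gradient scheme whose batch grows over the iterations, so that it behaves like an incremental method early on and like a full gradient method near the solution; all names and parameter values are assumptions.

# A minimal sketch of a sampled gradient method with a growing batch size.
import numpy as np

def growing_sample_gradient(A, b, x0, step=0.2, growth=1.2, max_iter=300, seed=0):
    """Least-squares data fitting: minimize the average of 0.5*(a_i^T x - b_i)^2,
    using gradients evaluated on a randomly sampled, growing subset of the rows."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    x = np.asarray(x0, dtype=float)
    batch = max(1, m // 50)                     # start with a small sample
    for _ in range(max_iter):
        idx = rng.choice(m, size=min(batch, m), replace=False)
        # Sampled (averaged) gradient: an unbiased estimate of the full gradient.
        g = A[idx].T @ (A[idx] @ x - b[idx]) / idx.size
        x -= step * g
        batch = int(np.ceil(growth * batch))    # enlarge the sample each iteration
    return x

# Usage on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((1000, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(1000)
    x_hat = growing_sample_gradient(A, b, np.zeros(20))
    print(np.linalg.norm(x_hat - x_true))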

 

Biographical sketch:
Michael P. Friedlander is Associate Professor of Computer Science at the University of British Columbia. He received his PhD in Operations Research from Stanford University in 2002, and his BA in Physics from Cornell University in 1993. From 2002 to 2004 he was the Wilkinson Fellow in Scientific Computing at Argonne National Laboratory. He was a senior fellow at UCLA's Institute for Pure and Applied Mathematics in 2010. He serves on the editorial boards of SIAM J. on Optimization, SIAM J. on Matrix Analysis and Applications, Mathematical Programming Computation, Optimization Methods and Software, and the Electronic Transactions on Numerical Analysis. His research is primarily in developing numerical methods for constrained optimization, their software implementation, and applications in signal processing and image reconstruction.

 



Friday, 9:00 - 9:50 h, H 0105


Nikhil Bansal
Semidefinite Optimization in Discrepancy Theory


Chair: Friedrich Eisenbrand


Abstract:
The concept of discrepancy is intimately related to several
fundamental topics in mathematics and
theoretical computer science, and deals with the following type of question.
Given a collection of sets on some elements, color each element red
or blue such that each set in the
collection is colored as evenly as possible.

Recently, there have been several new developments in discrepancy
theory based on connections to semidefinite programming. This connection
has been useful in several ways. It gives efficient polynomial time algorithms
for several problems for which only non-constructive results were previously
known. It also leads to several new structural results in discrepancy
itself, such
as tightness of the so-called determinant lower bound, improved bounds on the
discrepancy of the union of set systems and so on. We will give a brief survey
of these results, focussing on the main ideas and the techniques involved.
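In symbols (the standard definition, included for reference): given sets S_1, ..., S_m ⊆ {1, ..., n}, the discrepancy of the set system is

\[
  \operatorname{disc}(S_1,\dots,S_m) \;=\; \min_{x \in \{-1,+1\}^{n}} \ \max_{1 \le j \le m}\ \Bigl|\, \sum_{i \in S_j} x_i \Bigr| ,
\]

where x_i = +1 and x_i = -1 encode the two colors.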

 

Biographical sketch:
Nikhil Bansal is an Associate Professor in the Department of Mathematics and Computer Science at Eindhoven University of Technology. He obtained his PhD from Carnegie Mellon University in 2003, and worked at the IBM T.J. Watson Research Center until 2011, where he also managed the Algorithms group. His main research interests are the design and analysis of approximation and online algorithms. For his work, he has co-received best paper awards at FOCS 2011, ESA 2011 and ESA 2010, and also IBM Research best paper awards for 2007 and 2010.

 



Friday, 9:00 - 9:50 h, H 0104


Xiaojun Chen
Nonsmooth, Nonconvex Regularized Optimization for Sparse Approximations


Chair: Ya-xiang Yuan


Abstract:
Minimization problems with nonsmooth, nonconvex, perhaps even non-Lipschitz regularization terms have wide applications in image restoration, signal reconstruction and variable selection, but their optimization theory is still limited. For Lp non-Lipschitz regularized minimization, we show that finding a global optimal solution is strongly NP-hard. On the other hand, we present lower bounds for the nonzero entries in every local optimal solution, without assumptions on the data matrix. Such lower bounds can be used to classify zero and nonzero entries in local optimal solutions and to select regularization parameters for a desirable sparsity of solutions. Moreover, we show that smoothing methods are efficient for solving such regularized minimization problems. In particular, we introduce a smoothing SQP method which can find an affine-scaled ε-stationary point from any starting point with complexity O(ε^-2), and a smoothing trust region Newton method which can find a point satisfying the affine-scaled second order necessary condition from any starting point. Examples with six widely used nonsmooth, nonconvex regularization terms are presented to illustrate the theory and algorithms.
Joint work with W. Bian, D. Ge, L. Niu, Z. Wang, Y. Ye, Y. Yuan.
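A commonly studied instance of the problems above (written here with a least-squares fitting term and the assumption 0 < p < 1; the talk treats six different regularizers) is

\[
  \min_{x \in \mathbb{R}^{n}} \ \|Ax - b\|_{2}^{2} \;+\; \lambda \sum_{i=1}^{n} |x_i|^{p}, \qquad 0 < p < 1,\ \ \lambda > 0,
\]

whose regularization term is nonsmooth, nonconvex and non-Lipschitz at x_i = 0, which is what makes the hardness and lower-bound results described above relevant.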

 

Biographical sketch:
Professor Xiaojun Chen received her PhD degree in Computational Mathematics from Xi'an Jiaotong University, China, in 1987 and her PhD degree in Applied Mathematics from Okayama University of Science, Japan, in 1991. She was a postdoctoral fellow at the University of Delaware, an Australian Research Fellow at the University of New South Wales, and a Professor at Hirosaki University, Japan. She joined the Hong Kong Polytechnic University as a Professor in 2007. Her current research interests include nonsmooth nonconvex optimization, stochastic equilibrium problems and numerical approximation methods on the sphere, with important applications in engineering and economics. She has published over 80 papers in major international journals in operations research and computational mathematics. Prof. Chen has won many grants as a principal investigator from several government funding agencies and has organized several important international conferences. She serves on the editorial boards of five mathematical journals, including SIAM Journal on Numerical Analysis.

 



Friday, 17:00 - 17:50 h, H 0105


Jorge Nocedal
Second-Order Methods for Stochastic, Semi-Smooth and Nonlinear Programming


Chair:


Abstract:
First-order methods have been advocated for solving large-scale optimization problems. Although they are sometimes the most appropriate techniques, we argue that in many applications it is advantageous to employ second-order information as an integral part of the iteration. This is particularly so when parallel computing environments are available. In this talk, we take a broad view of second-order methods and center our discussion around three applications: convex L1-regularized optimization, inverse covariance estimation, and nonlinear programming. We note that many efficient methods for these problems can be derived using a semi-smooth Newton framework, which allows us to compare their manifold identification and subspace minimization properties. We propose an algorithm employing a novel active-set mechanism that is of interest in machine learning, PDE-constrained optimization, and other applications. We also discuss dynamic sampling techniques, illustrate their practical performance, and provide work complexity bounds. The talk concludes with some observations about the influence that parallel computing has on large-scale optimization calculations.
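As a hedged illustration of the first application (a standard formulation, not necessarily the variant analyzed in the talk), convex L1-regularized optimization is

\[
  \min_{x \in \mathbb{R}^{n}} \ f(x) + \lambda \|x\|_{1},
\]

whose first-order optimality condition can be written, for any fixed α > 0, as the semi-smooth fixed-point equation

\[
  x \;=\; \operatorname{prox}_{\alpha\lambda\|\cdot\|_{1}}\!\bigl(x - \alpha \nabla f(x)\bigr),
\]

where the proximal operator is componentwise soft thresholding; applying a Newton-type method to this nonsmooth equation is one way to realize the semi-smooth Newton framework mentioned above, and the components set to zero by the soft threshold identify an active set.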

 

Biographical sketch:
Jorge Nocedal is a professor in the Industrial Engineering Department at Northwestern University. He holds a B.S. degree in physics from the National University of Mexico and a Ph.D. in applied mathematics from Rice University. Prior to moving to Northwestern, he taught at the Courant Institute of Mathematical Sciences. He is a SIAM Fellow and an ISI Highly Cited Researcher (mathematics category). In 1998 he was appointed Bette and Neison Harris Professor at Northwestern. Jorge was an invited speaker at the 1998 International Congress of Mathematicians in Berlin. His research focuses on the theory, algorithms and applications of nonlinear programming, and he has developed widely used software, including L-BFGS and Knitro. He is currently Editor-in-Chief of the SIAM Journal on Optimization.

 

 
