Invited Session Tue.3.H 0110

Tuesday, 15:15 - 16:45 h, Room: H 0110

Cluster 16: Nonlinear programming [...]

Recent advances in nonlinear optimization


Chair: Andrew R. Conn



Tuesday, 15:15 - 15:40 h, Room: H 0110, Talk 1

Nicholas Ian Mark Gould
SQP Filter methods without a restoration phase

Coauthors: Sven Leyffer, Yueling Loh, Daniel Robinson


We consider filter SQP methods in which regularization is
applied explicitly rather than via a trust region, as
suggested by Gould, Leyffer et al. in 2006. Our goal is
to provide an alternative to the unattractive "restoration"
phase that is needed to unblock iterates that become trapped
by the filter. We consider two alternatives. In the first,
the model problem itself gives precedence to improving feasibility,
and this naturally leads to unblocking. In the second, the filter
envelope is "tilted" to allow more room for improvement; if
this fails to unblock, the filter itself is disregarded
and progress towards optimality is guided by an overall merit
function. All of this is somewhat speculative at this stage.
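The blocking mechanism above rests on the standard filter acceptance test of Fletcher and Leyffer: the filter stores pairs of constraint violation h and objective value f, and a trial point is rejected when some stored pair dominates it up to a small envelope. The sketch below illustrates that test only; the envelope parameter `gamma`, its value, and the dominance rule are conventional choices, not the authors' exact scheme.

```python
# Illustrative sketch of a filter acceptance test (standard envelope form,
# not the authors' method).  A filter holds (h, f) pairs: h is the
# constraint violation, f the objective value at accepted iterates.

GAMMA = 1e-4  # envelope (margin) parameter; value assumed for illustration


def acceptable(h_trial, f_trial, filter_entries):
    """Return True if the trial pair is acceptable to the filter.

    The trial point is rejected if some filter entry (h, f) dominates it,
    i.e. the trial improves neither feasibility nor the objective by a
    sufficient margin relative to that entry.
    """
    for h, f in filter_entries:
        if h_trial >= (1.0 - GAMMA) * h and f_trial >= f - GAMMA * h:
            return False  # dominated: trial is blocked by this entry
    return True


filter_entries = [(1.0, 5.0), (0.5, 6.0)]
print(acceptable(0.4, 7.0, filter_entries))   # improves feasibility vs. all entries -> True
print(acceptable(0.95, 6.0, filter_entries))  # dominated by the entry (0.5, 6.0) -> False
```

"Tilting" the envelope, as in the second alternative of the abstract, would amount to rotating the region each entry blocks so that more trial points pass this test.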



Tuesday, 15:45 - 16:10 h, Room: H 0110, Talk 2

Philip E. Gill
Regularization and convexification for SQP methods

Coauthor: Daniel P. Robinson


We describe a sequential quadratic programming (SQP) method for nonlinear programming that uses a primal-dual generalized augmented Lagrangian merit function to ensure global convergence. Each major iteration involves the solution of a bound-constrained subproblem defined in terms of both the primal and dual variables. A convexification method is used to give a subproblem that is equivalent to a regularized convex quadratic program (QP).
The benefits of this approach include the following:
(1) The QP subproblem always has a known feasible point.
(2) A projected gradient method may be used to identify the QP active set when far from the solution.
(3) The application of a conventional active-set method to the bound-constrained subproblem involves the solution of a sequence of regularized KKT systems.
(4) Additional regularization may be applied by imposing explicit bounds on the dual variables.
(5) The method is equivalent to the stabilized SQP method in the neighborhood of a solution.
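For reference, the primal-dual generalized augmented Lagrangian studied by Gill and Robinson can be written in the following form (a hedged sketch; the penalty parameter \(\mu\), scaling \(\nu\), and multiplier estimate \(y_e\) follow the usual conventions and should be checked against their paper):

```latex
M_{\nu}(x, y) \;=\; f(x) \;-\; c(x)^{T} y_e
  \;+\; \frac{1}{2\mu}\,\lVert c(x) \rVert_2^{2}
  \;+\; \frac{\nu}{2\mu}\,\lVert c(x) + \mu\,(y - y_e) \rVert_2^{2},
```

where \(f\) is the objective and \(c\) the vector of constraint residuals. Minimizing this function jointly in the primal variables \(x\) and dual variables \(y\), subject to bounds, gives the bound-constrained subproblem referred to in the abstract.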



Tuesday, 16:15 - 16:40 h, Room: H 0110, Talk 3

Andreas Waechter
A hot-started NLP solver

Coauthor: Travis C. Johnson


We discuss an active-set SQP method for nonlinear continuous
optimization that avoids the re-factorization of derivative matrices
during the solution of the step computation QP in each iteration.
Instead, the approach uses hot-starts of the QP solver for a QP with
matrices corresponding to an earlier iteration, or available from the
solution of a similar NLP. The goal of this work is the acceleration
of the solution of closely related NLPs, as they appear, for instance,
during strong-branching or diving heuristics in MINLP.
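The payoff of hot-starting can be illustrated on a toy bound-constrained QP solved by projected gradient descent: starting the solve for a slightly perturbed QP from the previous solution typically converges in far fewer iterations than a cold start. This is only a minimal sketch of the warm-start idea under assumed data; the authors' method reuses QP-solver factorizations and active sets, which this toy does not model.

```python
import numpy as np

def solve_bound_qp(H, g, lo, hi, x0, tol=1e-8, max_iter=10000):
    """Projected gradient descent for min 0.5 x'Hx + g'x  s.t. lo <= x <= hi.

    Returns the approximate minimizer and the iteration count.
    """
    x = np.clip(x0, lo, hi)
    step = 1.0 / np.linalg.norm(H, 2)  # 1 / Lipschitz constant of the gradient
    for k in range(max_iter):
        x_new = np.clip(x - step * (H @ x + g), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A.T @ A + np.eye(5)            # positive-definite Hessian (assumed data)
g = rng.standard_normal(5)
lo, hi = -np.ones(5), np.ones(5)

x_cold, it_cold = solve_bound_qp(H, g, lo, hi, np.zeros(5))

# Perturb the QP slightly, mimicking consecutive closely related NLPs,
# then compare a cold start against a hot start from the previous solution.
g2 = g + 0.01 * rng.standard_normal(5)
x_cold2, it_cold2 = solve_bound_qp(H, g2, lo, hi, np.zeros(5))
x_warm, it_warm = solve_bound_qp(H, g2, lo, hi, x_cold)
print(it_cold2, it_warm)  # hot start typically needs fewer iterations
```

Both solves of the perturbed QP reach the same (unique) minimizer; the hot start simply gets there from a much better initial point.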

