Invited Session Mon.3.H 0110

Monday, 15:15 - 16:45 h, Room: H 0110

Cluster 16: Nonlinear programming [...]

Nonlinear optimization III


Chair: Frank E. Curtis and Daniel Robinson



Monday, 15:15 - 15:40 h, Room: H 0110, Talk 1

Mikhail Solodov
Convergence properties of augmented Lagrangian methods under the second-order sufficient optimality condition

Coauthor: Damian Fernandez


We establish local convergence and the rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition (SOSC). No constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either strict complementarity or a stronger version of SOSC. Using only SOSC, we prove a primal-dual Q-linear convergence rate for sufficiently large penalty parameters, which becomes superlinear if the parameters are allowed to go to infinity. Both exact and inexact solutions of the subproblems are considered. In the exact case, we further show that the primal convergence rate is of the same Q-order as the primal-dual rate. Previous assertions for the primal sequence all concerned the weaker R-rate of convergence and required the stronger assumptions cited above. Finally, we show that under our assumptions one of the popular rules for controlling the penalty parameters ensures that they stay bounded.
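The classical augmented Lagrangian iteration the abstract refers to can be illustrated on a one-dimensional toy problem. The sketch below is not the authors' analysis, only a minimal illustration of the standard scheme: minimize the augmented Lagrangian in x, then apply the multiplier update λ ← λ + ρ c(x). The problem, starting point, and penalty value ρ = 10 are chosen purely for illustration.

```python
# Illustrative sketch of the classical augmented Lagrangian method on
#   minimize f(x) = x^2   subject to   c(x) = x - 1 = 0.
# The exact solution is x* = 1 with multiplier lambda* = -2
# (from stationarity: 2*x* + lambda* = 0).

def augmented_lagrangian(lam=0.0, rho=10.0, iters=50):
    x = 0.0
    for _ in range(iters):
        # Exact minimizer of L_A(x) = x^2 + lam*(x - 1) + (rho/2)*(x - 1)^2:
        # stationarity gives 2x + lam + rho*(x - 1) = 0.
        x = (rho - lam) / (2.0 + rho)
        # Classical first-order multiplier update.
        lam = lam + rho * (x - 1.0)
    return x, lam

x, lam = augmented_lagrangian()
```

In this toy case the multiplier error contracts by the factor 2/(2 + ρ) per iteration, so the rate is Q-linear for fixed ρ and improves as ρ grows, mirroring the behavior the abstract establishes in general under SOSC.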



Monday, 15:45 - 16:10 h, Room: H 0110, Talk 2

Frank E. Curtis
Infeasibility detection in nonlinear optimization

Coauthors: James V. Burke, Hao Wang


Contemporary numerical methods for nonlinear optimization possess strong global and fast local convergence guarantees for feasible problems under common assumptions. They also often provide guarantees for (eventually) detecting whether a problem is infeasible, though in such cases there are typically no guarantees of fast local convergence. This is a critical deficiency: in the optimization of complex systems, one often finds that nonlinear optimization methods can fail or stall due to minor constraint incompatibilities. This may suggest that the problem is infeasible, but without an infeasibility certificate, no useful result is provided to the user. We present a sequential quadratic optimization (SQO) method that possesses strong global and fast local convergence guarantees for both feasible and infeasible problem instances. Theoretical results are presented along with numerical results indicating the practical advantages of our approach.
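The notion of an infeasibility certificate can be made concrete with a generic stand-in for the abstract's approach (this is not the authors' SQO method): minimize a squared constraint-violation measure and, if its minimum is bounded away from zero, report the problem as locally infeasible. The constraints and step size below are invented for illustration.

```python
# Hedged sketch: infeasibility detection via minimizing a constraint
# violation measure (a generic stand-in, not the SQO method of the talk).
# Constraints: x >= 1 and x <= 0 are mutually exclusive, so the problem
# is infeasible and the minimal violation is strictly positive.

def violation(x):
    # Squared violation of the two incompatible inequality constraints.
    return max(1.0 - x, 0.0) ** 2 + max(x, 0.0) ** 2

def minimize_violation(x=0.3, step=0.25, iters=200):
    for _ in range(iters):
        grad = -2.0 * max(1.0 - x, 0.0) + 2.0 * max(x, 0.0)
        x -= step * grad
    return x, violation(x)

x, v = minimize_violation()
# v converges to 0.5 > 0: a certificate that no feasible point exists.
```

In this toy problem the violation minimizer is x = 1/2 with value 1/2 > 0, which serves as the infeasibility certificate; the contribution of the talk is obtaining such certificates with fast local convergence inside an SQO framework.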



Monday, 16:15 - 16:40 h, Room: H 0110, Talk 3

Figen Oztoprak
Two-phase active set methods with applications to inverse covariance estimation


We present a semi-smooth Newton framework that gives rise to a family of second order methods for structured convex optimization. The generality of our approach allows us to analyze their convergence properties in a unified setting, and to contrast their algorithmic components. These methods are well suited for a variety of machine learning applications, and in this talk we give particular attention to an inverse covariance matrix estimation problem arising in speech recognition. We compare our method to state-of-the-art techniques, both in terms of computational efficiency and theoretical properties.
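The two-phase active-set idea for structured convex problems can be sketched on a generic l1-regularized quadratic (a stand-in for problems such as inverse covariance estimation; this is not the authors' algorithm). Phase one takes a first-order proximal-gradient step to estimate the active set; phase two takes a second-order (Newton) step restricted to the free variables. The problem data and iteration counts are illustrative assumptions.

```python
import numpy as np

# Illustrative two-phase active-set sketch for
#   minimize 0.5 x'Ax - b'x + mu*||x||_1,
# a generic structured convex problem (not the authors' method).

def two_phase(A, b, mu, iters=30):
    x = np.zeros(len(b))
    L = np.linalg.norm(A, 2)  # Lipschitz constant of the smooth gradient
    for _ in range(iters):
        # Phase 1: one proximal-gradient (soft-thresholding) step
        # to estimate the active set.
        g = A @ x - b
        y = x - g / L
        x = np.sign(y) * np.maximum(np.abs(y) - mu / L, 0.0)
        free = x != 0
        if not free.any():
            continue
        # Phase 2: exact Newton step on the free variables, with the
        # subgradient of mu*||x||_1 fixed at mu*sign(x) on that set.
        rhs = b[free] - mu * np.sign(x[free])
        x_free = np.linalg.solve(A[np.ix_(free, free)], rhs)
        # Accept the second-order step only if it preserves the orthant.
        if np.all(np.sign(x_free) == np.sign(x[free])):
            x[free] = x_free
    return x

A = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.array([3.0, 0.5])
x = two_phase(A, b, mu=1.0)
```

For this separable example the exact solution is x = (1, 0): the first coordinate is recovered by the Newton phase, while the second is correctly fixed at zero by the thresholding phase.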

