Contributed Session Tue.1.H 0107

Tuesday, 10:30 - 12:00 h, Room: H 0107

Cluster 16: Nonlinear programming [...]

Methods for nonlinear optimization IV

 

Chair: Hans-Bernd Dürr

 

 

Tuesday, 10:30 - 10:55 h, Room: H 0107, Talk 1

Charlotte Tannier
Block diagonal preconditioners using spectral information for saddle-point systems

Coauthors: Daniel Ruiz, Annick Sartenaer

 

Abstract:
For nonsingular indefinite matrices of saddle-point (or KKT) form,
Murphy, Golub and Wathen (2000) have shown how preconditioners
incorporating an exact Schur complement lead to preconditioned
matrices with exactly two or exactly three distinct eigenvalues.
Focusing on symmetric matrices with a positive definite (1,1) block
and a zero (2,2) block, we consider the case where the saddle-point
system is very badly conditioned due to the combined effect of very
small eigenvalues of the (1,1) block and of very small singular values
of the off-diagonal block. Under the assumption that spectral
information related to these very small eigenvalues/singular values
can be extracted separately, we propose and study different
approximations of the "ideal" block diagonal preconditioner of Murphy,
Golub and Wathen (2000) with exact Schur complement, based on an
approximation of the Schur complement that combines the available
spectral information. We also derive a practical algorithm to
implement the proposed preconditioners within a standard minimum
residual method and illustrate the performance through numerical
experiments on a set of saddle-point systems.
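
As a point of reference, the following is a minimal numerical sketch (not the authors' implementation) of the Murphy, Golub and Wathen (2000) result on a small, randomly generated saddle-point system: with the exact Schur complement in the block diagonal preconditioner, the preconditioned matrix has only the eigenvalues 1 and (1 ± √5)/2. The problem sizes and test data are illustrative assumptions.

```python
# Minimal sketch of the Murphy-Golub-Wathen "ideal" block diagonal
# preconditioner on a small random saddle-point (KKT) system.
# Sizes and data are illustrative assumptions, not from the talk.
import numpy as np

rng = np.random.default_rng(0)
n, m = 30, 10

# A: symmetric positive definite (1,1) block; B: full-rank off-diagonal block
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))

# Saddle-point matrix with zero (2,2) block
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

# Exact Schur complement and the ideal block diagonal preconditioner
S = B @ np.linalg.solve(A, B.T)
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

# Eigenvalues of the preconditioned matrix cluster at 1 and (1 ± sqrt(5))/2
eigs = np.linalg.eigvals(np.linalg.solve(P, K))
print(np.unique(np.round(np.real(eigs), 6)))
print((1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2)
```

In exact arithmetic, a minimum residual method with this preconditioner terminates in at most three iterations; the talk is concerned with practical approximations of the Schur complement when the exact one is too expensive to form.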

 

 

Tuesday, 11:00 - 11:25 h, Room: H 0107, Talk 2

Hans-Bernd Dürr
Continuous-time saddle point algorithms with applications in control

Coauthor: Christian Ebenbauer

 

Abstract:
We present some recent results on a novel class of smooth optimization algorithms that compute saddle points arising in convex optimization problems. In contrast to many related results, we deal with optimization algorithms formulated as ordinary differential equations, i.e. as smooth continuous-time vector fields, which we analyze from a dynamical systems theory perspective. The idea of using differential equations to find a saddle point of a Lagrangian function goes back to K. J. Arrow, L. Hurwicz and H. Uzawa, who proposed a gradient-like vector field (AHU-flow) with a non-smooth operator. An alternative vector field for the saddle point problem is presented in this work. Like the AHU-flow, its trajectories converge to the saddle point of the Lagrangian. However, this vector field has two distinct features. First, we prove that the flow also converges for linear programs, which is not the case for the AHU-flow, and second, the vector field is smooth, which can be exploited in control theory to design distributed feedback laws for multi-agent systems. Furthermore, the convergence of a continuous-time Nesterov-like fast gradient variant is proved.
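
For orientation, the sketch below integrates the classical Arrow-Hurwicz-Uzawa saddle-point flow for an equality-constrained convex quadratic program as an ordinary differential equation and compares the limit with the KKT solution. The smooth alternative vector field proposed in the talk is not reproduced here, and the problem data are illustrative assumptions.

```python
# Sketch of the classical Arrow-Hurwicz-Uzawa saddle-point dynamics
#   xdot = -grad_x L(x, lambda),  lambdadot = +grad_lambda L(x, lambda)
# for an equality-constrained convex QP. Illustrative data only; this is
# not the alternative smooth vector field of the talk.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
n, m = 5, 2
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)            # strictly convex objective 0.5 x'Qx - c'x
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))    # equality constraints A x = b
b = rng.standard_normal(m)

def flow(t, z):
    x, lam = z[:n], z[n:]
    dx = -(Q @ x - c + A.T @ lam)  # -grad_x L(x, lambda)
    dlam = A @ x - b               # +grad_lambda L(x, lambda)
    return np.concatenate([dx, dlam])

sol = solve_ivp(flow, (0.0, 50.0), np.zeros(n + m), rtol=1e-8, atol=1e-10)
z_inf = sol.y[:, -1]

# Compare the limit of the flow with the KKT solution of the same QP
kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
ref = np.linalg.solve(kkt, np.concatenate([c, b]))
print(np.max(np.abs(z_inf - ref)))
```

For a linear program the Hessian Q vanishes and, as the abstract notes, the plain AHU-flow need not converge, which motivates the smooth alternative discussed in the talk.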

 
