Invited Session Fri.2.MA 004

Friday, 13:15 - 14:45 h, Room: MA 004

Cluster 16: Nonlinear programming

Fast gradient methods for nonlinear optimization and applications I

 

Chair: William Hager

 

 

Friday, 13:15 - 13:40 h, Room: MA 004, Talk 1

Zhang Hongchao
An adaptive preconditioned nonlinear conjugate gradient method with limited memory

Coauthor: William Hager

 

Abstract:
An adaptive preconditioner is developed for the conjugate gradient method
based on a limited-memory BFGS matrix. The preconditioner is used only
when the iterates lie in an ill-conditioned subspace; otherwise, the
usual conjugate gradient algorithm is applied. The resulting algorithm
uses less memory and has lower computational complexity than the
standard L-BFGS algorithm, yet performs significantly better than
either the conjugate gradient method or the L-BFGS quasi-Newton method
on the CUTEr test problems.

 

 

Friday, 13:45 - 14:10 h, Room: MA 004, Talk 2

Rui Diao
A sequential quadratic programming method without a penalty function or a filter for general nonlinear constrained optimization

Coauthors: Yu-Hong Dai, Xin-Wei Liu

 

Abstract:
We present a primal-dual interior-point method that uses neither a penalty function nor a filter for solving constrained optimization problems with general equality and inequality constraints. The method combines the interior-point approach with a sequential quadratic programming approach, again without a penalty function or a filter. The algorithm terminates at an approximate KKT point, or stops at a singular stationary point or an infeasible stationary point. We adopt several numerical techniques for solving the subproblems, together with an updating strategy, to make the algorithm suitable for large-scale problems. Numerical experiments on the CUTEr collection show that the algorithm is efficient.
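For orientation only, here is a minimal Python sketch of one textbook equality-constrained SQP step obtained by solving the Newton-KKT system of the local quadratic model; it omits the interior-point treatment of inequalities, the penalty-free and filter-free globalization, and the three-way termination test described in the talk, and all names are illustrative.

import numpy as np

def sqp_step(x, lam, grad_f, hess_lag, c, jac_c):
    # One equality-constrained SQP step for min f(x) s.t. c(x) = 0.
    # Solves the KKT system of the local quadratic model:
    #   [ H  A^T ] [ dx   ]   [ -(grad_f + A^T lam) ]
    #   [ A   0  ] [ dlam ] = [ -c                  ]
    # with H the Hessian of the Lagrangian and A the Jacobian of c.
    g, H, A, r = grad_f(x), hess_lag(x, lam), jac_c(x), c(x)
    m = A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = -np.concatenate([g + A.T @ lam, r])
    step = np.linalg.solve(K, rhs)
    return x + step[:x.size], lam + step[x.size:]

# Illustrative use: minimize x1^2 + x2^2 subject to x1 + x2 = 1.
x, lam = np.array([0.0, 0.0]), np.array([0.0])
for _ in range(5):
    x, lam = sqp_step(
        x, lam,
        grad_f=lambda x: 2.0 * x,
        hess_lag=lambda x, lam: 2.0 * np.eye(2),
        c=lambda x: np.array([x[0] + x[1] - 1.0]),
        jac_c=lambda x: np.array([[1.0, 1.0]]),
    )
# x converges to (0.5, 0.5) with multiplier -1.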

 

 

Friday, 14:15 - 14:40 h, Room: MA 004, Talk 3

Gerardo Toraldo
On the use of spectral properties of the steepest descent method

Coauthors: Roberta De Asmundis, Daniela di Serafino

 

Abstract:
In the last two decades the innovative approach of Barzilai and Borwein (BB) has stimulated the design of faster gradient methods for function minimization, which have proved effective in applications such as image restoration.
The surprising behaviour of these methods has been only partially justified, mostly in terms of the spectrum of the Hessian matrix. On the other hand, the well-known ability of the Cauchy steepest descent (SD) method to reveal second-order information about the problem has been little exploited to design more effective gradient methods. In this work we show that, for convex quadratic problems, the second-order information provided by SD can be exploited to improve the usually poor practical behaviour of this method, achieving computational results comparable with those of BB, with the further advantage of monotonic behaviour. Our analysis also provides insight into the relaxed gradient method of Raydan and Svaiter.
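For background, a minimal Python sketch of the standard Barzilai-Borwein (BB1) gradient iteration on a convex quadratic f(x) = (1/2) x^T A x - b^T x, with the first step taken as the exact Cauchy (SD) step; this is textbook material, not the monotone SD-based method proposed in the talk.

import numpy as np

def bb_gradient(A, b, x0, tol=1e-8, max_iter=10000):
    # Barzilai-Borwein gradient method for the convex quadratic
    # f(x) = 0.5 x^T A x - b^T x, whose gradient is g(x) = A x - b.
    x = x0.copy()
    g = A @ x - b
    if np.linalg.norm(g) <= tol:
        return x
    alpha = g.dot(g) / g.dot(A @ g)     # first step: exact Cauchy (SD) step length
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        if np.linalg.norm(g_new) <= tol:
            return x_new
        s, y = x_new - x, g_new - g
        alpha = s.dot(s) / s.dot(y)     # BB1 step length: inverse Rayleigh quotient of A
        x, g = x_new, g_new
    return x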

 
