Contributed Session Mon.3.H 0107

Monday, 15:15 - 16:45 h, Room: H 0107

Cluster 16: Nonlinear programming [...]

Methods for nonlinear optimization III


Chair: Masoud Ahookhosh



Monday, 15:15 - 15:40 h, Room: H 0107, Talk 1

Yuan Shen
New augmented Lagrangian-based proximal point algorithms for convex optimization with equality constraints

Coauthor: Bingsheng He


The augmented Lagrangian method (ALM) is a classic and efficient method for solving constrained optimization
problems. It decomposes the original problem into a series of easier subproblems whose solutions approach the solution of the original problem. However, its efficiency still depends, to a large extent, on how efficiently each subproblem can be solved. In general, an accurate solution of the subproblem can be expensive to compute; hence, it is more practical to relax the subproblem to make it easier to solve. When the objective has some favorable structure, the relaxed subproblem can be simple enough to admit a closed-form solution, and the resulting algorithm is efficient and practical owing to its low per-iteration cost. However, compared with the classic ALM, such an algorithm can suffer from a slower convergence rate. Based on the same relaxed subproblem, we propose several new methods with faster convergence rates. We also report numerical results in comparison with some state-of-the-art algorithms to demonstrate their efficiency.
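The abstract does not specify which relaxation is used; a minimal sketch of one common choice, a linearized (proximal) ALM in which the penalty term is linearized at the current iterate so the subproblem has a closed-form solution, assuming a simple quadratic objective, is:

```python
import numpy as np

def linearized_alm(A, b, c, beta=1.0, tau=0.4, iters=200):
    """Sketch of a linearized (proximal) ALM for
        min 0.5*||x - c||^2  subject to  A x = b.
    The quadratic penalty is linearized at the current iterate x^k and a
    proximal term (1/(2*tau))*||x - x^k||^2 is added, so each subproblem
    is solved in closed form.  (Illustrative only; not the authors'
    proposed methods.)"""
    m, n = A.shape
    x, lam = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        r = A @ x - b                      # current constraint residual
        g = A.T @ (lam + beta * r)         # gradient of the linearized AL terms
        x = (c + x / tau - g) / (1.0 + 1.0 / tau)  # closed-form proximal step
        lam = lam + beta * (A @ x - b)     # multiplier update
    return x

# toy problem: project c = (1, 1) onto the line x1 + x2 = 1
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 1.0])
x = linearized_alm(A, b, c)
# x approaches the projection [0.5, 0.5]
```

The proximal stepsize tau must be small enough relative to beta and ||A^T A|| for the iteration to converge; here tau * beta * ||A^T A|| = 0.8 < 1.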



Monday, 15:45 - 16:10 h, Room: H 0107, Talk 2

Mehiddin Al-Baali
Hybrid damped-BFGS/Gauss-Newton methods for nonlinear least-squares

Coauthor: Mohamed Al-Lawatia


The damped technique in the modified BFGS method of Powell (1978) for constrained optimization will be extended to the hybrid BFGS/Gauss-Newton methods for unconstrained nonlinear least squares. It will be shown that this extension maintains the useful convergence properties of the hybrid methods and substantially improves their performance in certain cases. The analysis is based on a recent proposal for applying the damped technique to the Broyden family of methods for unconstrained optimization, which safely enforces the positive definiteness of the Hessian approximations.
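For reference, Powell's damped update replaces the curvature vector y by a convex combination of y and Bs so that positive definiteness of the Hessian approximation is preserved even when s^T y <= 0. A minimal sketch of that classical update (not the hybrid methods of the talk) is:

```python
import numpy as np

def damped_bfgs_update(B, s, y, sigma=0.2):
    """Powell's damped BFGS update (classical form, illustrative sketch).
    Replaces y by y_hat = theta*y + (1-theta)*B s, chosen so that
    s^T y_hat >= sigma * s^T B s > 0.  The BFGS update with y_hat then
    keeps the new matrix positive definite even if s^T y <= 0."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= sigma * sBs:
        theta = 1.0                                   # no damping needed
    else:
        theta = (1.0 - sigma) * sBs / (sBs - sy)      # damping factor
    y_hat = theta * y + (1.0 - theta) * Bs
    return (B - np.outer(Bs, Bs) / sBs
              + np.outer(y_hat, y_hat) / (s @ y_hat))

# step with negative curvature: s^T y = -1, yet the update stays SPD
B = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([-1.0, 0.0])
B_new = damped_bfgs_update(B, s, y)
```

In this example the plain BFGS update would break down (s^T y < 0), while the damped update yields B_new = diag(0.2, 1), which is positive definite.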



Monday, 16:15 - 16:40 h, Room: H 0107, Talk 3

Masoud Ahookhosh
An improved nonmonotone technique for both line search and trust-region frameworks

Coauthors: Hadi Nosratipour, Keyvan Amini


Nonmonotone iterative approaches are efficient techniques for solving optimization problems without requiring a monotone decrease in the sequence of function values. It is believed that nonmonotone strategies not only enhance the likelihood of finding the global optimum but also improve the numerical performance of the underlying approaches. However, the traditional nonmonotone strategy exhibits some disadvantages on certain practical problems. To overcome these drawbacks, several different nonmonotone strategies have been proposed, with more encouraging results. This study explores the reasons for the disadvantages of the traditional nonmonotone technique and introduces a variant that largely avoids the drawbacks of the original. We then incorporate it into both line search and trust-region frameworks to construct more reliable approaches. Global convergence to first-order and second-order stationary points is investigated under some classical assumptions. Preliminary numerical experiments indicate the efficiency and robustness of the proposed approaches for solving unconstrained nonlinear optimization problems.
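The traditional strategy the abstract refers to is usually the max-type nonmonotone Armijo rule of Grippo, Lampariello and Lucidi, in which a step is accepted relative to the maximum of recent function values rather than the current one. A minimal sketch of that baseline (the talk's improved variant is not specified here) is:

```python
import numpy as np

def nonmonotone_armijo(f, grad_f, x, d, f_hist,
                       gamma=1e-4, tau=0.5, max_back=50):
    """Nonmonotone Armijo backtracking (Grippo-Lampariello-Lucidi style,
    illustrative sketch).  The sufficient-decrease test compares against
    the MAXIMUM of the stored recent function values f_hist, so temporary
    increases in f along the iteration are tolerated."""
    f_ref = max(f_hist)          # reference value over the memory window
    g_d = grad_f(x) @ d          # directional derivative, assumed < 0
    alpha = 1.0
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + gamma * alpha * g_d:
            return alpha         # nonmonotone sufficient decrease holds
        alpha *= tau             # backtrack
    return alpha

# toy usage on f(x) = ||x||^2 with the steepest-descent direction
f = lambda x: float(x @ x)
g = lambda x: 2.0 * x
x0 = np.array([1.0, -2.0])
d = -g(x0)
alpha = nonmonotone_armijo(f, g, x0, d, f_hist=[f(x0)])
```

With a one-element history this reduces to the ordinary monotone Armijo rule; keeping the last M values (or, in later variants, a weighted average of them) is what makes the search nonmonotone.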

