Wednesday, 15:45 - 16:10 h, Room: H 3503

 

Sebastian Stich
Convergence of local search

Coauthors: Bernd Gärtner, Christian Lorenz Müller

 

Abstract:
We study unconstrained optimization of convex functions. Many algorithms generate a sequence of improving approximate solutions to such an optimization problem. Usually, these algorithms are analyzed by estimating the (expected) one-step progress. However, in the case of randomized algorithms it is often difficult to obtain bounds on the variance of the whole process.
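To make this concrete, a typical result of such a one-step analysis has the following generic form (the constant rho and the notation are illustrative and not taken from the talk):

    \mathbb{E}\bigl[f(x_{k+1}) - f^{*} \mid x_k\bigr] \;\le\; (1-\rho)\,\bigl(f(x_k) - f^{*}\bigr), \qquad 0 < \rho < 1.

Iterating this bound yields \mathbb{E}[f(x_k) - f^{*}] \le (1-\rho)^{k}\,(f(x_0) - f^{*}), i.e. linear convergence in expectation. It controls only the mean of the process, however, which is why a separate argument is needed to turn it into a guarantee that holds with high probability for a single run.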
We present a general framework to analyze local search algorithms. Suppose that an algorithm proposes in each iteration exactly one new feasible solution that sufficiently improves upon the last iterate, i.e., a local decrease condition is satisfied. Karmanov (1974) presented an original method to analyze such local search algorithms for differentiable convex functions. We extend his approach to strongly convex functions, for which linear convergence rates can be established with high probability. The approach can be used to analyze deterministic as well as randomized optimization algorithms. We show that, for instance, the Random Gradient method (Nesterov 2011) as well as Random Pursuit (Stich et al. 2011) can be analyzed within this framework. We conclude with another interesting example, namely derivative-free local metric learning.
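The abstract does not reproduce the framework itself, but the kind of algorithm it covers is easy to sketch. Below is a minimal, illustrative Python implementation of a Random Pursuit-style local search with an explicit improvement check; all names, parameters, and the line-search routine are chosen here for illustration and are not part of the talk.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def random_pursuit(f, x0, num_iters=500, seed=0):
        """Sketch of a Random Pursuit-style local search.

        In each iteration one random direction is drawn and the iterate is
        moved to the approximate minimizer of f along that line. Only
        improving steps are accepted, so a local decrease condition of the
        kind described above holds along the whole trajectory.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for _ in range(num_iters):
            u = rng.standard_normal(x.shape)
            u /= np.linalg.norm(u)                        # random direction on the sphere
            step = minimize_scalar(lambda t: f(x + t * u))  # 1-D line search along u
            candidate = x + step.x * u
            if f(candidate) < f(x):                       # accept only improving steps
                x = candidate
        return x

    if __name__ == "__main__":
        # Example: a simple strongly convex quadratic
        A = np.diag([1.0, 10.0])
        f = lambda z: 0.5 * z @ A @ z
        x_star = random_pursuit(f, x0=np.array([5.0, -3.0]))
        print(x_star, f(x_star))

For a strongly convex objective such as the quadratic in this example, the iterates of such a scheme decrease the optimality gap geometrically, which is the type of linear convergence the talk's framework certifies with high probability.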

 

Talk 2 of the contributed session Wed.3.H 3503
"Stochastic zero-order methods" [...]
Cluster 6
"Derivative-free & simulation-based optimization" [...]

 
