## Contributed Session Wed.3.H 3503

#### Wednesday, 15:15 - 16:45 h, Room: H 3503

**Cluster 6: Derivative-free & simulation-based optimization** [...]

### Stochastic zero-order methods

**Chairs: Stefan Wild and Luís Nunes Vicente**

**Wednesday, 15:15 - 15:40 h, Room: H 3503, Talk 1**

**Joao Lauro Dorneles Faco**

A continuous GRASP for global optimization with general linear constraints

**Coauthors: Mauricio G. Resende, Ricardo A. Silva**

**Abstract:**

A new variant of the global optimization method Continuous GRASP (C-GRASP) is presented that incorporates general linear constraints in addition to box constraints. C-GRASP solves continuous global optimization problems subject to box constraints by adapting the greedy randomized adaptive search procedure (GRASP) of Feo and Resende (1989) for discrete optimization, and it has been applied to a wide range of continuous optimization problems. We treat the box constraints as implicit and handle the general linear equality/inequality constraints explicitly. Given an *m × n* constraint matrix *A* with *m < n*, *m* basic variables can be eliminated from the global optimization problem, leaving a Reduced Problem in the *(n-m)* independent variables that is subject only to the box constraints. C-GRASP, a derivative-free global optimization method, yields an optimal or near-optimal solution to the Reduced Problem; the basic variables are then recovered by solving a system of linear equations. If all basic variables lie inside the box, the algorithm stops; otherwise a change-of-basis procedure is applied and a new Reduced Problem is solved.
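The elimination step described above can be sketched as follows. This is a minimal illustration under assumed data, not the authors' implementation: the matrix `A`, right-hand side `b`, and the choice of basic column indices are all hypothetical, and the selected basis is assumed nonsingular.

```python
import numpy as np

def recover_basic_variables(A, b, basic, x_N):
    """Given A (m x n, m < n) with A x = b, a choice of m basic column
    indices, and values x_N for the n - m independent (nonbasic)
    variables, recover the basic variables by solving the m x m
    linear system B x_B = b - N x_N."""
    n = A.shape[1]
    nonbasic = [j for j in range(n) if j not in basic]
    B = A[:, basic]        # m x m basis matrix (assumed nonsingular)
    N = A[:, nonbasic]     # m x (n - m) nonbasic columns
    x_B = np.linalg.solve(B, b - N @ x_N)
    x = np.empty(n)
    x[basic] = x_B
    x[nonbasic] = x_N
    return x

# Illustrative example: one equality constraint x0 + x1 + x2 = 1 in R^3.
# Fixing the two independent variables determines the basic one.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x = recover_basic_variables(A, b, basic=[0], x_N=np.array([0.2, 0.3]))
print(x)          # basic variable x0 = 1 - 0.2 - 0.3 = 0.5
print(A @ x - b)  # constraint residual is (numerically) zero
```

In the algorithm, the Reduced Problem is solved over `x_N` alone; if the recovered `x` violates a bound on a basic variable, the basis (the `basic` index set here) is changed and the process repeats.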

**Wednesday, 15:45 - 16:10 h, Room: H 3503, Talk 2**

**Sebastian Urban Stich**

Convergence of local search

**Coauthors: Bernd Gärtner, Christian Lorenz Müller**

**Abstract:**

We study unconstrained optimization of convex functions. Many algorithms generate a sequence of improving approximate solutions to this optimization problem. Usually, these algorithms are analyzed by estimating the (expected) one-step progress. However, in the case of randomized algorithms, it is often difficult to obtain bounds on the variance of the whole process.

We present a general framework for analyzing local search algorithms. Suppose that, in each iteration, an algorithm proposes exactly one new feasible solution that sufficiently improves the last iterate, i.e., satisfies a local decrease condition. Karmanov (1974) presented a method for analyzing such local search algorithms on differentiable convex functions. We extend his approach to strongly convex functions, where linear convergence rates can be established with high probability. The approach can be used to analyze deterministic as well as randomized optimization algorithms: we show that, for instance, the Random Gradient method (Nesterov 2011) and Random Pursuit (Stich et al. 2011) can be analyzed within this framework. We conclude with another interesting example, namely derivative-free local metric learning.
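A local decrease condition of this kind can be illustrated concretely. The sketch below (illustrative only, not the talk's analysis) takes gradient steps on a simple strongly convex quadratic and checks that every iteration shrinks the optimality gap by a constant factor; geometric decay of the gap is precisely what a linear convergence rate asserts. The function, step size, and contraction factor are assumptions for the example.

```python
import numpy as np

def f(x):
    return 0.5 * float(x @ x)  # strongly convex; minimizer 0, f* = 0

x = np.array([3.0, -4.0])
gaps = [f(x)]                  # optimality gaps f(x_k) - f*
for _ in range(30):
    x = x - 0.5 * x            # gradient step, step size 0.5 (grad f = x)
    # local decrease condition: the new iterate improves the last one
    # by at least a constant fraction of the current gap
    # (here f(x_{k+1}) = 0.25 * f(x_k), so the condition holds with 1/2)
    assert f(x) <= 0.5 * gaps[-1]
    gaps.append(f(x))

# linear convergence: the gap decays geometrically with the iteration count
print(gaps[0], gaps[5], gaps[10])
```

For a randomized method the decrease per step is random, which is where the high-probability analysis mentioned above comes in; the deterministic sketch only shows the shape of the argument.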

**Wednesday, 16:15 - 16:40 h, Room: H 3503, Talk 3**

**Anne Auger**

Convergence of adaptive evolution strategies on monotonic *C*^{2}-composite and scale-invariant functions

**Coauthors: Youhei Akimoto, Nikolaus Hansen**

**Abstract:**

Evolution Strategies (ES) are stochastic search algorithms for numerical black-box optimization in which a family of probability distributions is iteratively adapted so as to ultimately converge to a distribution concentrated on optima of the function. They are derivative-free methods that use the objective function only through the ranking of candidate solutions. Hence they are invariant under optimizing *f* or *g ∘ f*, where *g*: **R** → **R** is monotonically increasing. Recently, some adaptive ESs were shown to be stochastic approximations of a natural gradient algorithm on the manifold defined by the family of probability distributions. An ODE is naturally associated with this natural gradient algorithm as the step-size goes to zero; solutions of this ODE are continuous-time models of the underlying algorithms.
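The ranking invariance can be demonstrated directly. The sketch below is a minimal comparison-based (1+1)-ES with fixed step size (an illustrative toy under assumed parameters, not the adaptive ESs discussed in the talk): run with the same random seed on *f* and on *g ∘ f*, it produces identical iterates, because a strictly increasing *g* preserves every comparison between candidate solutions.

```python
import numpy as np

def one_plus_one_es(obj, x0, sigma=0.5, iters=100, seed=42):
    """Minimal (1+1)-ES: Gaussian mutation, keep the better point.
    The objective enters only through comparisons of candidate values."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.shape)  # mutate
        if obj(y) < obj(x):                           # comparison only
            x = y
    return x

f = lambda x: float(np.sum(x ** 2))        # sphere function
g_of_f = lambda x: np.exp(f(x)) - 1.0      # g(t) = e^t - 1, increasing

xa = one_plus_one_es(f, [2.0, -1.5])
xb = one_plus_one_es(g_of_f, [2.0, -1.5])
print(np.allclose(xa, xb))  # True: identical trajectories on f and g∘f
```

The same random draws and the same accept/reject decisions occur in both runs, so the two trajectories coincide exactly; this is the invariance property stated above.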

In this talk we will present convergence results for the solutions of this ODE and prove their local convergence towards local optima on monotonic *C*^{2}-composite functions. We will also present global convergence of the corresponding algorithm on some scale-invariant functions, defined as functions such that *f(x) < f(y)* iff *f(sx) < f(sy)* for all *s > 0*.