Invited Session Tue.3.H 3503

Tuesday, 15:15 - 16:45 h, Room: H 3503

Cluster 6: Derivative-free & simulation-based optimization [...]

Novel approaches in derivative-free optimization

 

Chairs: Luís Nunes Vicente and Stefan Wild

 

 

Tuesday, 15:15 - 15:40 h, Room: H 3503, Talk 1

Yurii Nesterov
Random gradient-free minimization of convex functions

 

Abstract:
In this talk, we prove complexity bounds
for methods of Convex Optimization based only on
computation of function values. The search directions
of our schemes are normally distributed random Gaussian
vectors. It appears that such methods usually need at most
n times more iterations than the standard gradient
methods, where n is the dimension of the space of
variables. This conclusion is true both for nonsmooth and
smooth problems. For the latter class, we also present an
accelerated scheme with the expected rate of convergence
O(n^2/k^2), where k is the iteration counter.
For Stochastic Optimization, we propose a zero-order
scheme and justify its expected rate of convergence
O(n/k^{1/2}). We also give some bounds for the rate of
convergence of the random gradient-free methods to
stationary points of nonconvex functions, both for smooth
and nonsmooth cases. Our theoretical results are supported
by preliminary computational experiments.
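The basic step described above can be illustrated with a short sketch: the gradient is replaced by a finite-difference estimate of the directional derivative along a random Gaussian direction. The test function, smoothing parameter mu, and step size h below are illustrative choices, not the ones analyzed in the talk.

    import numpy as np

    def gradient_free_step(f, x, mu=1e-4, h=1e-2, rng=np.random.default_rng()):
        """One zero-order step: estimate a directional derivative along a random
        Gaussian direction from two function values, then move against it."""
        u = rng.standard_normal(x.shape)        # random Gaussian search direction
        g = (f(x + mu * u) - f(x)) / mu * u     # finite-difference gradient estimate
        return x - h * g

    # Illustrative run on a simple convex quadratic (not from the talk).
    f = lambda x: 0.5 * np.dot(x, x)
    x = np.ones(10)
    for _ in range(2000):
        x = gradient_free_step(f, x)
    print(np.linalg.norm(x))                    # should shrink toward 0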

 

 

Tuesday, 15:45 - 16:10 h, Room: H 3503, Talk 2

Afonso S. Bandeira
On sparse Hessian recovery and trust-region methods based on probabilistic models

Coauthors: Katya Scheinberg, Luís Nunes Vicente

 

Abstract:
In many application problems in optimization, one has little or no
correlation between problem variables, and such (sparsity) structure
is unknown in advance when optimizing without derivatives.
We will show that quadratic interpolation models computed by
ℓ1-minimization recover the Hessian sparsity of the function
being modeled, when using random sample sets. Given a considerable
level of sparsity in the unknown Hessian of the function,
such models can achieve the accuracy of second-order Taylor models
with a number of sample points (or observations) significantly
lower than O(n^2).
The use of such modeling techniques in derivative-free optimization
led us to the consideration of trust-region methods where the
accuracy of the models is given with some positive
probability. We will show that as long as such probability of
model accuracy is over 1/2, one can ensure, almost surely,
some form of convergence to first- and second-order stationary points.
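As a rough illustration of the first part of the abstract, the sketch below fits an underdetermined quadratic interpolation model to random sample points and recovers the Hessian block by an ℓ1-penalized least-squares solve (plain ISTA). The test function, penalty weight, and solver are illustrative stand-ins, not the formulation analyzed in the talk.

    import numpy as np

    def quadratic_design(Y):
        """Interpolation rows for m(y) = c + g @ y + 0.5 * y @ H @ y with H symmetric;
        unknowns are [c, g (n entries), upper-triangular entries of H]."""
        p, n = Y.shape
        iu = np.triu_indices(n)
        scale = np.where(iu[0] == iu[1], 0.5, 1.0)   # 0.5*y_i^2 on the diagonal, y_i*y_j off it
        rows = [np.concatenate(([1.0], y, np.outer(y, y)[iu] * scale)) for y in Y]
        return np.asarray(rows), iu

    def ista_l1(A, b, idx, lam=1e-3, iters=20000):
        """Proximal gradient (ISTA) for min 0.5*||A z - b||^2 + lam*||z[idx]||_1,
        so the l1 penalty acts only on the Hessian unknowns."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        z = np.zeros(A.shape[1])
        for _ in range(iters):
            z = z - step * (A.T @ (A @ z - b))
            z[idx] = np.sign(z[idx]) * np.maximum(np.abs(z[idx]) - step * lam, 0.0)
        return z

    # Illustrative setup (not from the talk): n = 8 variables, a very sparse Hessian,
    # and far fewer samples than the (n+1)(n+2)/2 needed for full quadratic interpolation.
    rng = np.random.default_rng(0)
    n, p = 8, 30
    f = lambda y: 1.0 + y[0] + 0.5 * (3.0 * y[0] ** 2 + y[3] ** 2)
    Y = rng.standard_normal((p, n))
    A, iu = quadratic_design(Y)
    b = np.array([f(y) for y in Y])
    hess_idx = np.arange(1 + n, A.shape[1])          # positions of the Hessian unknowns
    z = ista_l1(A, b, hess_idx)
    H = np.zeros((n, n))
    H[iu] = z[hess_idx]
    H = H + H.T - np.diag(np.diag(H))
    print(np.round(H, 2))   # nonzeros should concentrate at (0, 0) and (3, 3)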

 

 

Tuesday, 16:15 - 16:40 h, Room: H 3503, Talk 3

Alexander Rakhlin
Regret minimization with zeroth order information

Coauthors: Alekh Agarwal, Dean Foster, Daniel Hsu, Sham Kakade

 

Abstract:
The problem of stochastic convex optimization with a noisy zeroth-order oracle can be seen as a generalization of the classical multi-armed bandit problem, if the goal is to minimize regret rather than the final value of the objective. Regret is defined as the average of the function values along the optimization path, and not all convex optimization methods have low regret. In this talk we present a regret-minimization algorithm that is based on the zeroth-order algorithm of Nemirovskii and Yudin from the 1970s. We also briefly discuss regret minimization in the more difficult scenario of online linear optimization, where the linear cost functions change adversarially at every step. While we only have one shot at getting any information about the cost function per step, a randomized method which explores according to the Dikin ellipsoid is shown to enjoy a near-optimal regret bound.
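To make the regret notion concrete, the sketch below accumulates function values along the path of a one-point zeroth-order scheme (a Flaxman-style spherical-perturbation estimator, chosen only for brevity; it is not the Nemirovskii-Yudin-based algorithm of the talk). All constants are illustrative.

    import numpy as np

    def average_regret(f, x_star, n=5, T=20000, delta=0.2, eta=0.02, R=2.0,
                       rng=np.random.default_rng(1)):
        """Run a one-point zeroth-order method and report regret: the average of
        the function values along the optimization path minus the optimal value."""
        x = np.zeros(n)
        path_sum = 0.0
        for t in range(1, T + 1):
            u = rng.standard_normal(n)
            u /= np.linalg.norm(u)                   # uniform direction on the sphere
            query = x + delta * u                    # the single point evaluated this round
            noisy_val = f(query) + 0.01 * rng.standard_normal()
            path_sum += f(query)                     # regret charges the value actually played
            g = (n / delta) * noisy_val * u          # one-point gradient estimate
            x = x - (eta / np.sqrt(t)) * g
            nrm = np.linalg.norm(x)
            if nrm > R:                              # keep iterates in a ball of radius R
                x *= R / nrm
        return path_sum / T - f(x_star)

    f = lambda x: 0.5 * np.sum((x - 0.5) ** 2)       # simple convex objective
    print(average_regret(f, x_star=0.5 * np.ones(5)))   # average regret along the path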

 
