Tuesday, 13:15 - 13:40 h, Room: H 1028

 

Peter Richtarik
Parallel block coordinate descent methods for huge-scale partially separable problems

Coauthor: Martin Takac

 

Abstract:
In this work we show that randomized block coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially block separable smooth convex function and a simple block separable convex function. We give a generic algorithm and several variants thereof, based on the way parallelization is performed. In all cases we prove iteration complexity results, i.e., we give bounds on the number of iterations sufficient to approximately solve the problem with high probability. Our results generalize the intuitive observation that in the separable case the theoretical speedup from parallelization equals the number of processors. We show that the speedup increases with the number of processors and with the degree of partial separability of the smooth component of the objective function. Our analysis also covers the regime in which the number of blocks updated at each iteration is random, which allows us to model situations with a variable number of (busy or unreliable) processors. We conclude with encouraging computational results on huge-scale LASSO and sparse SVM instances.
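The scheme described above can be sketched for the LASSO case. The code below is a minimal illustration, not the authors' implementation: it samples a random subset of tau coordinates each iteration (mimicking tau synchronous processors), computes all updates from the same residual, and dampens the stepsizes by a factor beta that grows with the degree of partial separability omega, in the spirit of the analysis. The function name, the dense-matrix setting, and the specific beta formula are assumptions for illustration.

```python
import numpy as np

def parallel_cd_lasso(A, b, lam, tau=4, iters=3000, seed=0):
    """Illustrative parallel randomized coordinate descent for
        min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each iteration updates tau coordinates computed from the SAME
    residual, mimicking tau synchronous processors."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                          # residual Ax - b
    L = (A ** 2).sum(axis=0)               # coordinate Lipschitz constants
    # Degree of partial separability: max coordinates coupled by one row.
    omega = max(np.count_nonzero(row) for row in A)
    # Stepsize dampening for tau parallel updates (assumed PCDM-style form).
    beta = 1 + (tau - 1) * (omega - 1) / max(n - 1, 1)
    for _ in range(iters):
        S = rng.choice(n, size=tau, replace=False)   # random block of coords
        grads = A[:, S].T @ r                        # partial gradients
        deltas = np.empty(tau)
        for k, (i, g) in enumerate(zip(S, grads)):
            # Prox (soft-threshold) step with dampened stepsize 1/(beta*L_i).
            t = x[i] - g / (beta * L[i])
            thr = lam / (beta * L[i])
            deltas[k] = np.sign(t) * max(abs(t) - thr, 0.0) - x[i]
        x[S] += deltas
        r += A[:, S] @ deltas              # keep residual consistent
    return x
```

For a fully separable smooth term (omega = 1) the dampening factor is beta = 1, so tau parallel updates cost no more per update than serial ones, which is the source of the linear speedup mentioned in the abstract; denser coupling (larger omega) forces smaller steps and a smaller speedup.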

 

Talk 1 of the invited session Tue.2.H 1028
"Coordinate descent methods for huge-scale optimization" [...]
Cluster 21
"Sparse optimization & compressed sensing" [...]

 
