Invited Session Tue.2.H 1028

Tuesday, 13:15 - 14:45 h, Room: H 1028

Cluster 21: Sparse optimization & compressed sensing [...]

Coordinate descent methods for huge-scale optimization

 

Chair: Peter Richtarik

 

 

Tuesday, 13:15 - 13:40 h, Room: H 1028, Talk 1

Peter Richtarik
Parallel block coordinate descent methods for huge-scale partially separable problems

Coauthor: Martin Takac

 

Abstract:
In this work we show that randomized block coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially block separable smooth convex function and a simple block separable convex function. We give a generic algorithm and several variants thereof, based on the way parallelization is performed. In all cases we prove iteration complexity results, i.e., we give bounds on the number of iterations sufficient to approximately solve the problem with high probability. Our results generalize the intuitive observation that in the fully separable case the theoretical speedup due to parallelization equals the number of processors. We show that the speedup increases with the number of processors and with the degree of partial separability of the smooth component of the objective function. Our analysis also covers the regime in which the number of blocks updated at each iteration is random, which allows us to model situations with a variable number of (busy or unreliable) processors. We conclude with encouraging computational results on huge-scale LASSO and sparse SVM instances.
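As an illustration of the setting, consider the LASSO problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, whose smooth part is partially block separable with degree omega equal to the maximum number of nonzeros in a row of A. The following minimal Python sketch (not the authors' code) mimics one parallel variant: tau coordinates are sampled uniformly at random each iteration and updated with a damped proximal step. The damping factor beta = 1 + (omega - 1)(tau - 1)/max(1, n - 1) is an assumption modelled on the kind of step size such an analysis suggests, and the parallel block update is simulated here by a single vectorized step.

import numpy as np

def parallel_cd_lasso(A, b, lam, tau, iters=1000, seed=0):
    """Minimal sketch of parallel randomized coordinate descent for
    LASSO: min_x 0.5*||A x - b||^2 + lam*||x||_1.

    Each iteration samples tau coordinates uniformly at random and
    updates them together (the parallel update is simulated by one
    vectorized step). beta = 1 + (omega - 1)(tau - 1)/max(1, n - 1) is
    an assumed damping factor, with omega the maximum number of
    nonzeros in a row of A (the degree of partial separability of the
    smooth part)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    omega = max(int((A != 0).sum(axis=1).max()), 1)
    beta = 1 + (omega - 1) * (tau - 1) / max(1, n - 1)
    L = (A ** 2).sum(axis=0)            # coordinate-wise Lipschitz constants
    L[L == 0] = 1.0
    x = np.zeros(n)
    r = A @ x - b                       # residual, maintained incrementally
    for _ in range(iters):
        S = rng.choice(n, size=tau, replace=False)
        g = A[:, S].T @ r               # partial gradient on the sampled block
        step = 1.0 / (beta * L[S])      # damped step sizes
        z = x[S] - step * g
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # prox step
        r += A[:, S] @ (x_new - x[S])   # cheap residual update
        x[S] = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 500))
    x_true = np.zeros(500)
    x_true[:10] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x = parallel_cd_lasso(A, b, lam=0.1, tau=8, iters=5000)
    print("nonzeros recovered:", int((np.abs(x) > 1e-3).sum()))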

 

 

Tuesday, 13:45 - 14:10 h, Room: H 1028, Talk 2

Martin Takac
Distributed block coordinate descent method: Iteration complexity and efficient hybrid implementation

Coauthors: Jakub Marecek, Peter Richtarik

 

Abstract:
In this work we propose solving huge-scale instances of regularized convex minimization problems using a distributed block coordinate descent method. We analyze the iteration complexity of the (synchronous) algorithm and show how it depends on the way the problem data is partitioned across the nodes. Several variations of the basic method are obtained based on the way updates are handled (peer-to-peer, broadcasting, asynchronicity). Finally, we report encouraging numerical results for an efficient hybrid MPI + OpenMP implementation applied to LASSO and sparse support vector machine instances.
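To make the distributed setting concrete, below is an illustrative Python sketch using mpi4py (not the authors' implementation) of a synchronous variant for LASSO, with the columns of the data matrix partitioned across MPI processes and the shared residual kept consistent by an all-reduce after each round of local updates. The step-size damping by the number of nodes is a crude stand-in for the complexity-driven choice analyzed in the talk, and the intra-node multithreading (the OpenMP level of the hybrid scheme) is omitted.

"""Illustrative synchronous distributed coordinate descent for LASSO,
with the columns of A partitioned across MPI processes. Run with, e.g.:
    mpirun -np 4 python dist_cd_lasso.py
(the file name is arbitrary). Requires NumPy and mpi4py; the intra-node
OpenMP level of the hybrid scheme is omitted."""
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

m, n_local, lam, tau = 200, 100, 0.1, 4
rng = np.random.default_rng(rank)           # each process owns n_local columns
A = rng.standard_normal((m, n_local))
b = rng.standard_normal(m) if rank == 0 else np.empty(m)
comm.Bcast(b, root=0)                        # all processes share the same b

L = (A ** 2).sum(axis=0)                     # local coordinate Lipschitz constants
x = np.zeros(n_local)
r = -b.copy()                                # replicated global residual A x - b (x = 0)

for _ in range(500):
    S = rng.choice(n_local, size=tau, replace=False)
    g = A[:, S].T @ r                        # partial gradient from the shared residual
    step = 1.0 / (size * L[S])               # crude damping by the number of nodes
    z = x[S] - step * g
    x_new = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
    delta_local = A[:, S] @ (x_new - x[S])   # this node's change to the residual
    x[S] = x_new
    delta = np.empty(m)
    comm.Allreduce(delta_local, delta, op=MPI.SUM)  # synchronous exchange
    r += delta

obj = comm.allreduce(lam * np.abs(x).sum(), op=MPI.SUM) + 0.5 * (r @ r)
if rank == 0:
    print("objective:", obj)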

 

 

Tuesday, 14:15 - 14:40 h, Room: H 1028, Talk 3

Rachael Tappenden
Block coordinate descent method for block-structured problems

Coauthors: Jacek Gondzio, Peter Richtarik

 

Abstract:
We are concerned with very large-scale convex optimization problems and the application of the Block Coordinate Descent (BCD) algorithm to their solution. We assume that the problems display block structure, and we show how this structure may be exploited to accelerate the BCD algorithm. At every iteration of the algorithm, the direction in each block coordinate must be determined; we discuss the linear algebra techniques employed to accelerate this step. We also present a convergence analysis and a complexity result, which provide a linear algebra perspective on standard convex optimization techniques.
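As a concrete illustration of the per-block direction computation, the following Python sketch applies BCD to a quadratic model 0.5*x'Qx - c'x (an assumption made here for simplicity; it is not taken from the talk). The exact block direction d solves the linear system Q[B,B] d = c[B] - Q[B,:] x, and the specialized linear algebra techniques discussed in the talk are represented only by a generic conjugate gradient solve of this subproblem.

import numpy as np

def cg(A, b, tol=1e-8, max_iter=200):
    """Plain conjugate gradients for symmetric positive definite A,
    used to solve each block subproblem (possibly only approximately)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def bcd_quadratic(Q, c, block_size=50, sweeps=20):
    """Sketch of BCD on 0.5*x'Qx - c'x. Each sweep visits the blocks in
    order; the exact block direction d solves Q[B,B] d = c[B] - Q[B,:] x,
    computed here by the generic CG routine above."""
    n = Q.shape[0]
    x = np.zeros(n)
    blocks = [np.arange(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    for _ in range(sweeps):
        for B in blocks:
            resid = c[B] - Q[B, :] @ x        # block residual (negative block gradient)
            x[B] += cg(Q[np.ix_(B, B)], resid)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((300, 300))
    Q = M.T @ M + 0.5 * np.eye(300)           # SPD test matrix with no special structure
    c = rng.standard_normal(300)
    x = bcd_quadratic(Q, c)
    print("residual norm:", np.linalg.norm(Q @ x - c))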

 
