Invited Session Tue.1.H 1028

Tuesday, 10:30 - 12:00 h, Room: H 1028

Cluster 21: Sparse optimization & compressed sensing [...]

Machine learning algorithms and implementations

 

Chair: Mark Schmidt

 

 

Tuesday, 10:30 - 10:55 h, Room: H 1028, Talk 1

Mark Schmidt
Linearly-convergent stochastic gradient methods

Coauthors: Francis Bach, Michael P. Friedlander, Nicolas Le Roux

 

Abstract:
This talk considers optimizing the sum of a large number of smooth functions, where the sum is strongly convex. Stochastic gradient algorithms only compute a single term in the sum on each iteration, leading to inexpensive iterations but unfortunately yielding sublinear convergence rates. In contrast, full-gradient methods achieve linear convergence rates at the expense of evaluating all terms on each iteration. We explore two strategies that exhibit the benefits of both approaches. First, we show that a linear convergence rate can be achieved at the expense of evaluating an increasing number of functions on each iteration. Second and more surprisingly, we propose a new method that only needs to evaluate a single term in the sum on each iteration but that still achieves a linear convergence rate. Numerical experiments indicate that the new algorithms can dramatically outperform standard methods.

 

 

Tuesday, 11:00 - 11:25 h, Room: H 1028, Talk 2

Sewoong Oh
Statistical analysis of ranking from pairwise comparisons

Coauthors: Devavrat Shah, Sahand Negahban

 

Abstract:
The problem of ranking a collection of objects on the basis of a number of pairwise comparisons arises naturally in many applications, including ranking the players of a two-person game based on their records against each other. In this talk, we study two approaches to assigning scores that provide a natural ordering of the objects. When the comparison data are generated from the logit model, we provide performance guarantees for both approaches. First, we give an upper bound on the error rate of logistic regression applied to pairwise comparison data. Next, we introduce an alternative approach that computes the scores of the objects from the stationary distribution of a random walk on a comparison graph. We define the comparison graph as a directed, weighted graph on the objects in which two objects are connected if the pair has been compared at least once, and the directed edges are weighted according to the outcomes of the comparisons. Under the logit model, we provide an upper bound on the resulting error.
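
For concreteness, the Python sketch below illustrates the random-walk scoring idea on a comparison graph; the input format (a dictionary of pairwise win counts), the row normalization by the maximum out-degree, and the power iteration are assumptions made for this illustration rather than the speakers' exact construction.

import numpy as np

def random_walk_scores(wins, num_items, num_iters=1000, tol=1e-10):
    # wins[(i, j)] is assumed to count how often object j won against object i,
    # so the walk tends to move toward objects with stronger records.
    A = np.zeros((num_items, num_items))
    for (i, j), w in wins.items():
        A[i, j] = w
    np.fill_diagonal(A, 0.0)

    # Scale rows by the largest out-degree, then add self-loops so that each
    # row of P sums to one (a common convention for such walks).
    d_max = max(A.sum(axis=1).max(), 1.0)
    P = A / d_max
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))

    # Power iteration for the stationary distribution pi, i.e. pi = pi P;
    # the entries of pi serve as the scores that induce the ranking.
    pi = np.full(num_items, 1.0 / num_items)
    for _ in range(num_iters):
        pi_next = pi @ P
        if np.abs(pi_next - pi).sum() < tol:
            return pi_next
        pi = pi_next
    return pi

# Illustrative usage: object 2 beats 1, and 1 beats 0, more often than not.
scores = random_walk_scores({(0, 1): 3, (1, 0): 1, (1, 2): 4, (2, 1): 2}, 3)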

 
