Corrections and Discussions for my ICML 2017 tutorial

Thanks to everyone for attending. In this post I’d like to provide a platform for future discussions of my talk on “Recent Advances in Stochastic Convex and Non-Convex Optimization”. The talk website is here: http://people.csail.mit.edu/zeyuan/topics/icml-2017, and the video (with my personal recording) is already online. Continue reading

Posted in Uncategorized | 2 Comments

SVD and Eigenvectors for Big Data

People generally believe that a PC can only handle SVD for matrices up to a few thousand by a few thousand. Textbooks also suggest that it is not wise to compute singular vectors one by one. In this post, I’ll refute both statements. In particular, on a PC, within 10 seconds and without any parallelism, we can find the top 20 eigenvectors of a 260k x 260k matrix with 1.2 million nonzero entries. Moreover, the algorithm simply invokes rank-1 SVD 20 times. Continue reading
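To make the “invoke rank-1 SVD k times” idea concrete, here is a minimal Python sketch (my own toy illustration, not the exact algorithm behind the timing claim): compute one top eigenvector at a time, using plain power iteration as a stand-in for any rank-1 SVD routine, and deflate the matrix implicitly before moving on to the next vector. The matrix size, density, and iteration counts below are placeholders.

import numpy as np
import scipy.sparse as sp

def top_eigenvector(matvec, n, iters=300, seed=None):
    # Plain power iteration: a stand-in for any rank-1 SVD / eigenvector solver.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = matvec(v)
        v /= np.linalg.norm(v)
    return v

def top_k_eigenvectors(A, k, iters=300):
    # Find the top-k eigenvectors (largest |eigenvalue|) of a symmetric A
    # one at a time, deflating the eigenvectors that were already found.
    n = A.shape[0]
    V = np.zeros((n, 0))
    for _ in range(k):
        def deflated(x, V=V):
            # Act with (I - V V^T) A (I - V V^T) instead of A.
            x = x - V @ (V.T @ x)
            y = A @ x
            return y - V @ (V.T @ y)
        v = top_eigenvector(deflated, n, iters)
        V = np.column_stack([V, v])
    return V

# Toy usage on a small random sparse symmetric matrix
# (a placeholder for the 260k x 260k matrix from the post).
B = sp.random(2000, 2000, density=1e-3, format="csr", random_state=0)
A = (B + B.T) * 0.5
V = top_k_eigenvectors(A, k=20)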

Posted in Algorithm, Optimization | Leave a comment

Faster Than SGD 2: the Katyusha Acceleration

SGD is well known for large-scale optimization. In my mind, there have so far been two fundamentally different improvements since the original introduction of SGD: (1) variance reduction, and (2) acceleration. The following picture predicts, in theory, the performance gap between naive SGD, variance-reduced SGD, and accelerated SGD:
[Figure: theoretical comparison of naive SGD, variance-reduced SGD, and accelerated SGD] Continue reading
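To make the comparison concrete, here is a heavily simplified Python sketch of a Katyusha-style update (accelerated, variance-reduced SGD) on a toy ridge-regression objective. The proximal step and the weighted epoch averaging of the full method are omitted, and the objective and parameter choices below are only indicative, not the exact ones from the paper.

import numpy as np

def katyusha_sketch(A, b, lam=1e-3, epochs=20, seed=0):
    # Simplified Katyusha-style accelerated SVRG on ridge regression:
    # f(x) = (1/n) * sum_i [ 0.5*(a_i^T x - b_i)^2 + 0.5*lam*||x||^2 ].
    rng = np.random.default_rng(seed)
    n, d = A.shape
    L = np.max(np.sum(A * A, axis=1)) + lam        # smoothness of each component f_i
    sigma = lam                                    # strong convexity from the regularizer
    m = 2 * n                                      # inner-loop length per epoch
    tau2 = 0.5
    tau1 = min(np.sqrt(m * sigma / (3 * L)), 0.5)  # illustrative momentum parameter
    alpha = 1.0 / (3 * tau1 * L)

    def grad_i(x, i):
        return A[i] * (A[i] @ x - b[i]) + lam * x

    x_tilde = np.zeros(d)                          # snapshot point
    y = np.zeros(d)
    z = np.zeros(d)
    for _ in range(epochs):
        mu = A.T @ (A @ x_tilde - b) / n + lam * x_tilde   # full gradient at the snapshot
        y_sum = np.zeros(d)
        for _ in range(m):
            x = tau1 * z + tau2 * x_tilde + (1 - tau1 - tau2) * y
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(x_tilde, i) + mu     # variance-reduced gradient estimate
            z = z - alpha * g                              # momentum ("mirror") step
            y = x - g / (3 * L)                            # short gradient step
            y_sum += y
        x_tilde = y_sum / m                                # new snapshot (plain average)
    return x_tilde

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 50))
b = rng.standard_normal(500)
x_hat = katyusha_sketch(A, b)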

Posted in Optimization | 1 Comment

Faster Than SGD 1: Variance Reduction

SGD is well known for large-scale optimization. In my mind, there are two (and only two) fundamental improvements since the original introduction of SGD: (1) variance reduction, and (2) acceleration. In this guest post for Princeton’s OptiML group, I’d like to give a survey of (1), and I’d especially like to thank those ICML’16 participants who pushed me to write this post 🙂
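For readers who want something concrete before clicking through, below is a minimal SVRG-style sketch of variance reduction on a toy ridge-regression objective; the objective, epoch length, and step size are my own placeholder choices, not taken from the post. The key point is that the stochastic gradient is recentered by a full gradient computed at an occasional snapshot, which keeps it unbiased while shrinking its variance as the iterates converge.

import numpy as np

def svrg_sketch(A, b, lam=1e-3, epochs=20, seed=0):
    # SVRG-style variance-reduced SGD on ridge regression:
    # f(x) = (1/n) * sum_i [ 0.5*(a_i^T x - b_i)^2 + 0.5*lam*||x||^2 ].
    rng = np.random.default_rng(seed)
    n, d = A.shape
    L = np.max(np.sum(A * A, axis=1)) + lam     # smoothness of each component f_i
    eta = 0.1 / L                               # placeholder step size

    def grad_i(x, i):
        return A[i] * (A[i] @ x - b[i]) + lam * x

    x = np.zeros(d)
    for _ in range(epochs):
        x_snap = x.copy()                                  # snapshot of the current iterate
        mu = A.T @ (A @ x_snap - b) / n + lam * x_snap     # full gradient at the snapshot
        for _ in range(2 * n):
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(x_snap, i) + mu      # unbiased, low-variance estimate
            x = x - eta * g
    return x

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 50))
b = rng.standard_normal(500)
x_hat = svrg_sketch(A, b)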

Posted in Algorithm, Learning Theory, Optimization | Leave a comment

Faster Comp-Geometry via Optimization

If you want to compute the minimum enclosing ball (MinEB) of a set of points, would you believe that the running time can be improved by a significant factor if we randomly rotate the space? This seems very counter-intuitive, because a ball is still a ball after rotation, yet my coauthors and I proved that it is TRUE!

More specifically, we connect MinEB (and some other problems) to optimization, and develop much faster algorithms based on stochastic gradient ideas originally used for SVM and Lasso in machine learning. One ingredient of this method is the aforementioned random rotation. This is another big surprise to me, because we can now declare another victory for optimization geeks over classical (e.g., geometry-based) methods.
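Our algorithm is not reproduced here; as a point of reference, below is a sketch of the classical Badoiu-Clarkson (Frank-Wolfe-style) iteration for the MinEB objective \min_c \max_i \|c - a_i\|, together with the kind of random rotation mentioned above (a Haar-random orthogonal matrix from the QR decomposition of a Gaussian matrix). All names, sizes, and iteration counts are illustrative.

import numpy as np

def min_enclosing_ball(P, iters=1000):
    # Badoiu-Clarkson iteration for MinEB: repeatedly move the center a small
    # step toward the currently farthest point. P has one point per row.
    c = P.mean(axis=0)
    for t in range(1, iters + 1):
        far = np.argmax(np.linalg.norm(P - c, axis=1))   # index of the farthest point
        c = c + (P[far] - c) / (t + 1)                   # shrinking step size 1/(t+1)
    r = np.linalg.norm(P - c, axis=1).max()
    return c, r

def random_rotation(d, seed=0):
    # A Haar-random orthogonal matrix via QR of a Gaussian matrix.
    rng = np.random.default_rng(seed)
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))                       # sign fix for uniformity

# The ball itself is rotation-invariant: rotating the points rotates the
# center and leaves the radius (essentially) unchanged.
P = np.random.default_rng(2).standard_normal((1000, 30))
Q = random_rotation(30)
c1, r1 = min_enclosing_ball(P)
c2, r2 = min_enclosing_ball(P @ Q.T)
print(round(r1, 3), round(r2, 3))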

Continue reading

Posted in Algorithm, Optimization | Leave a comment

ICML posters available!

The third poster is mentioned in this blog post. The other two are coming!

PS: the videos of the three talks are on YouTube now.

Posted in Optimization | 1 Comment

Coordinate Descent in One Line, or Three if Accelerated

When minimizing a convex function f(x) using first-order methods, if full gradients \nabla f(x) are too costly to compute at each iteration, there are two alternatives that can reduce this per-iteration cost. One is to use a (random) coordinate gradient \nabla_i f(x), and the other is to use a stochastic gradient \tilde{\nabla} f(x) satisfying \mathbb{E}[\tilde{\nabla} f(x)] = \nabla f(x).

I focus on coordinate (gradient) descent (CD) in this tutorial, and will leave stochastic gradient descent (SGD) to another post in the future. Part of this survey has also appeared in my conference talk at ICML. Continue reading
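In the spirit of the title, the basic update really is one line. Below is a sketch of random coordinate descent on a toy quadratic f(x) = \frac{1}{2} x^\top M x - b^\top x (my own example, not from the post), where the coordinate gradient \nabla_i f(x) = (Mx - b)_i and the coordinate-wise smoothness L_i = M_{ii} are cheap to obtain.

import numpy as np

def coordinate_descent(M, b, iters=5000, seed=0):
    # Random coordinate descent on f(x) = 0.5 * x^T M x - b^T x (M symmetric PD).
    # Coordinate gradient: (M x - b)_i;  coordinate smoothness: L_i = M[i, i].
    rng = np.random.default_rng(seed)
    d = len(b)
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(d)
        x[i] -= (M[i] @ x - b[i]) / M[i, i]   # the "one line": x_i <- x_i - \nabla_i f(x) / L_i
    return x

# Toy usage: approximately solve M x = b.
rng = np.random.default_rng(3)
G = rng.standard_normal((40, 40))
M = G @ G.T + 40 * np.eye(40)                 # well-conditioned symmetric positive definite
b = rng.standard_normal(40)
x = coordinate_descent(M, b)

The accelerated variant maintains two extra coupled sequences updated with momentum, which is where the "three lines" of the title come from.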

Posted in Optimization | Leave a comment