People generally believe that a PC can handle SVD only for matrices of size up to thousands by thousands. Textbooks also suggest that it is unwise to compute singular vectors one by one. In this post, I'll refute both statements. In particular, on a PC, within 10 seconds, and without any parallelism, we can find the top 20 eigenvectors of a 260k x 260k matrix with 1.2 million non-zero entries. Moreover, the algorithm simply invokes rank-1 SVD 20 times.
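To make the recipe concrete, here is a minimal Python sketch of the "call rank-1 SVD k times and deflate" idea, using SciPy's `svds` as the rank-1 solver. The function and variable names are my own, and this is only an illustration of the recipe, not the accelerated solver behind the post's timing claim:

```python
import numpy as np
from scipy.sparse.linalg import svds, LinearOperator

def topk_by_rank1_svd(A, k):
    """Top-k singular triplets of a sparse matrix A, obtained by
    calling a rank-1 SVD solver k times and deflating the components
    found so far. Deflation is applied lazily through matrix-vector
    products, so the sparse matrix A is never densified."""
    m, n = A.shape
    us, ss, vs = [], [], []
    for _ in range(k):
        def mv(x):   # apply (A - sum_j s_j u_j v_j^T) to x
            x = np.ravel(x)
            y = A @ x
            for u, s, v in zip(us, ss, vs):
                y -= s * u * (v @ x)
            return y
        def rmv(y):  # apply (A - sum_j s_j u_j v_j^T)^T to y
            y = np.ravel(y)
            x = A.T @ y
            for u, s, v in zip(us, ss, vs):
                x -= s * v * (u @ y)
            return x
        op = LinearOperator((m, n), matvec=mv, rmatvec=rmv, dtype=A.dtype)
        u, s, vt = svds(op, k=1)   # rank-1 SVD of the deflated matrix
        us.append(u[:, 0]); ss.append(s[0]); vs.append(vt[0])
    return np.column_stack(us), np.array(ss), np.vstack(vs)
```

Since each rank-1 solve touches A only through matrix-vector products, the per-step cost scales with the number of non-zero entries, which is what makes a 1.2-million-entry matrix feasible on a PC.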
SGD is well-known for large-scale optimization. In my mind, there have so far been two fundamentally different improvements since the original introduction of SGD: (1) variance reduction, and (2) acceleration. In theory, the performance difference between naive SGD, variance-reduced SGD, and accelerated SGD can be predicted as follows.
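A sketch of the standard complexity bounds, stated for minimizing a finite sum where each component is L-smooth and the objective is sigma-strongly convex, with condition number kappa = L/sigma (these are textbook rates, with SVRG and Katyusha as representative examples of (1) and (2)):

```latex
% Number of stochastic gradient evaluations to reach accuracy epsilon
% for f(x) = (1/n) sum_i f_i(x); standard rates under L-smoothness of
% each f_i and sigma-strong convexity, with kappa = L / sigma.
\begin{align*}
\text{naive SGD:}                        &\quad O\!\left(\tfrac{1}{\sigma\varepsilon}\right) \\
\text{variance-reduced SGD (e.g. SVRG):} &\quad O\!\left((n+\kappa)\log\tfrac{1}{\varepsilon}\right) \\
\text{accelerated SGD (e.g. Katyusha):}  &\quad O\!\left(\big(n+\sqrt{n\kappa}\big)\log\tfrac{1}{\varepsilon}\right)
\end{align*}
```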
SGD is well-known for large-scale optimization. In my mind, there are two (and only two) fundamental improvements since the original introduction of SGD: (1) variance reduction, and (2) acceleration. In this guest post at Princeton's OptiML group, I'd love to conduct a survey regarding (1), and I'd especially like to thank those ICML'16 participants who pushed me to write this post 🙂
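As a refresher on what (1) means in code, below is a minimal Python sketch of SVRG (Johnson and Zhang, 2013), one representative variance-reduction method; the function names and default parameters here are mine, chosen purely for illustration:

```python
import numpy as np

def svrg(grad_i, x0, n, epochs=20, m=None, lr=0.1, seed=0):
    """Minimal SVRG sketch: one full gradient per epoch (the
    "snapshot"), followed by m cheap variance-reduced stochastic
    steps. grad_i(x, i) must return the gradient of the i-th
    component f_i at x, for f(x) = (1/n) * sum_i f_i(x)."""
    rng = np.random.default_rng(seed)
    m = m or 2 * n   # inner-loop length, a common choice
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = np.mean([grad_i(x_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # Unbiased gradient estimate whose variance shrinks as
            # both x and x_snap approach the minimizer:
            g = grad_i(x, i) - grad_i(x_snap, i) + full_grad
            x -= lr * g
    return x
```

The key line is the corrected estimate g: it is unbiased for the full gradient, yet its variance vanishes near the optimum, which is exactly what lets SVRG converge linearly where naive SGD cannot.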
If you want to compute the minimum enclosing ball (MinEB) of a set of points, would you believe that the running time can be improved by a significant factor if we randomly rotate the space? This seems very counter-intuitive, since a ball is still a ball after rotation, but my coauthors and I proved that it is TRUE!
More specifically, we connect MinEB (and some other problems) to optimization, and develop much faster algorithms based on stochastic gradient ideas originally used for SVM and Lasso in machine learning. One ingredient of the method is the aforementioned random rotation. This is another big surprise to me, because we can now declare one more victory for optimization geeks over classical (e.g., geometry-based) methods.
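To see what "MinEB as optimization" looks like, note that the center solves min over c of max_i ||c - a_i||^2, whose dual is a concave quadratic over the probability simplex. Here is a minimal Python sketch of the classical Frank-Wolfe (Badoiu-Clarkson style) baseline for that formulation; it is purely illustrative, and is neither the stochastic-gradient algorithm nor the random-rotation trick from the paper:

```python
import numpy as np

def mineb_frank_wolfe(A, T=1000):
    """Frank-Wolfe baseline for MinEB over points given as rows of A.
    Maintains weights p on the dual simplex; the candidate center is
    the weighted mean c = sum_i p_i * a_i."""
    n, d = A.shape
    p = np.full(n, 1.0 / n)
    for t in range(T):
        c = p @ A                                     # current center
        i = np.argmax(np.sum((A - c) ** 2, axis=1))   # farthest point
        gamma = 2.0 / (t + 2)                         # standard FW step size
        p *= (1.0 - gamma)
        p[i] += gamma                                 # move mass toward vertex i
    c = p @ A
    r = np.sqrt(np.max(np.sum((A - c) ** 2, axis=1)))
    return c, r
```

Each iteration of this baseline scans all n points to find the farthest one; avoiding that full scan at every step is exactly the kind of saving that stochastic gradient methods aim for.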
The third one is mentioned in this blog post. The other two are coming!
PS: the videos of the three talks are now on YouTube.
I am often asked what the best algorithm is to solve SVM, Lasso regression, logistic regression, and so on. At the same time, a growing number of first-order methods have been proposed recently, making it hard even for experts to keep track of the state of the art.
In this guest post on Princeton University’s OptiML blog, I finally have a chance to answer all these questions properly and simultaneously.