Monthly Archives: July 2016

Faster Than SGD 1: Variance Reduction

SGD (stochastic gradient descent) is the well-known workhorse of large-scale optimization. In my mind, there are two (and only two) fundamental improvements since SGD was originally introduced: (1) variance reduction, and (2) acceleration. In this guest post at Princeton’s OptiML group, I’d love to conduct … Continue reading
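The post itself is truncated here, but as a rough illustration of what variance reduction means, below is a minimal SVRG-style sketch of my own (not the post's code); the least-squares objective and all names are assumptions for the toy example.

```python
import numpy as np

def svrg(grad_i, w0, n, lr=0.05, epochs=10, m=None):
    """Minimal SVRG-style sketch of variance-reduced SGD.

    grad_i(w, i) returns the gradient of the i-th component function,
    where the objective is the average of n such components."""
    m = m or 2 * n
    w = w0.copy()
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot point, computed once per epoch.
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient: still unbiased, but
            # its variance shrinks as w approaches w_snap.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= lr * g
    return w

# Toy usage: least squares on synthetic data (all made up for illustration).
n, d = 100, 5
A = np.random.default_rng(1).normal(size=(n, d))
b = A @ np.ones(d)
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]
w_star = svrg(grad_i, np.zeros(d), n)
```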

Posted in Algorithm, Learning Theory, Optimization

Faster Comp-Geometry via Optimization

If you want to compute the minimum enclosing ball (MinEB) of a set of points, would you believe that the running time can be improved by a significant factor if we first randomly rotate the space? While this seems very counterintuitive, because a … Continue reading
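The excerpt is cut off, but the preprocessing step it alludes to is easy to illustrate: apply a uniformly random rotation to the point set before solving. This is a toy sketch of my own (not the post's code) showing one standard way to sample such a rotation; the MinEB computation itself is left to whatever solver one prefers.

```python
import numpy as np

def random_rotation(d, seed=0):
    """Sample a random orthogonal matrix in R^d via QR decomposition
    of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    Q, R = np.linalg.qr(rng.normal(size=(d, d)))
    # Fix column signs so Q is distributed by the Haar measure.
    return Q * np.sign(np.diag(R))

# Rotating the point set preserves all pairwise distances, hence the
# MinEB radius is unchanged; only the coordinate representation differs.
d = 3
P = np.random.default_rng(1).normal(size=(10, d))  # synthetic points
Q = random_rotation(d)
P_rot = P @ Q.T
```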

Posted in Algorithm, Optimization