Monthly Archives: June 2016
The third one is mentioned in this blog post. The other two are coming! PS: the videos of the three talks are on YouTube now.
When minimizing a convex function using first-order methods, if full gradients are too costly to compute at each iteration, there are two alternatives that can reduce this per-iteration cost. One is to use a (random) coordinate gradient, and the other is …
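To illustrate the first alternative, here is a minimal sketch of randomized coordinate gradient descent on a convex quadratic. The problem data, step sizes, and iteration count are illustrative assumptions, not taken from the post itself; the key point is that each update touches only one coordinate, costing O(n) instead of the O(n^2) of a full gradient step.

```python
import numpy as np

# Sketch: randomized coordinate descent on f(x) = 0.5 * x^T A x - b^T x,
# with A symmetric positive definite (so f is strongly convex).
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M.T @ M + n * np.eye(n)       # symmetric positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)    # exact minimizer, for comparison

x = np.zeros(n)
for _ in range(2000):
    i = rng.integers(n)           # pick one coordinate uniformly at random
    grad_i = A[i] @ x - b[i]      # partial derivative df/dx_i: O(n) work
    x[i] -= grad_i / A[i, i]      # exact minimization along coordinate i

print(np.linalg.norm(x - x_star))  # residual shrinks toward zero
```

For a quadratic, the step 1/A[i, i] exactly minimizes f along coordinate i; for general smooth convex f, one would instead use a step size tied to the coordinate-wise Lipschitz constant of the partial derivative.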
I am often asked what is the best algorithm to solve SVM, to solve Lasso Regression, to solve Logistic Regression, etc. At the same time, a growing number of first-order methods have been proposed recently, making it hard even for experts to …