The interplay of optimization and randomized linear algebra

Stephen Becker, IBM T.J. Watson Research Center

Date and Time: 

Wednesday, September 18, 2013 - 4:00pm

Event Location: 

Mohler Lab #453

The first part of this talk reviews some modern randomized linear algebra techniques. The goal of these methods is to perform approximate matrix multiplication or matrix factorizations (e.g., the SVD) at lower computational cost than conventional methods. We then discuss using these methods inside optimization algorithms. The two main questions are (1) whether the randomized approach is actually faster, and (2) whether the error introduced by the randomized linear algebra degrades the optimization algorithm. There are powerful recent results that bound the error of the linear algebra, but they are rarely applied to obtain rigorous guarantees for an optimization algorithm. We also touch on stochastic gradient descent, and apply all of these ideas to specific matrix recovery problems, such as quantum state tomography and matrix completion on the Netflix dataset.
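
As a rough illustration of the kind of randomized technique the abstract alludes to, below is a minimal Python sketch of a randomized SVD in the spirit of Halko, Martinsson, and Tropp: a Gaussian sketch captures the range of the matrix, and the factorization is then computed on a much smaller projected matrix. The function name, oversampling parameter, and target rank are illustrative choices, not details from the talk.

# A minimal sketch of a randomized SVD (Gaussian sketching); the names and
# the rank/oversampling values below are illustrative, not from the talk.
import numpy as np

def randomized_svd(A, rank, n_oversample=10, rng=None):
    """Approximate the top-`rank` SVD of A via a Gaussian random sketch."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    k = min(rank + n_oversample, n)

    # Sketch the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega                      # m x k sample of the column space
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for that sample

    # Project A onto the low-dimensional subspace and factor the small matrix.
    B = Q.T @ A                        # k x n, cheap to factor
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Example: relative error on a matrix that is exactly rank 50.
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 500))
U, s, Vt = randomized_svd(A, rank=50)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))

The appeal in an optimization context is that the expensive step is a single pass of matrix-matrix products with a thin random matrix, while the dense SVD is applied only to the small projected matrix; how the resulting approximation error propagates through an optimization algorithm is one of the questions the talk addresses.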

Bio Sketch: 

Stephen Becker received his B.A. degrees in mathematics and in physics from Wesleyan University in 2005, and his Ph.D. in Applied & Computational Mathematics from the California Institute of Technology in 2011. From 2011 to 2013 he was a postdoctoral fellow at the Jacques-Louis Lions laboratory at Paris 6, supported by a fellowship from the Fondation Sciences Mathématiques de Paris. He is currently the Goldstine Postdoctoral Fellow at IBM's T. J. Watson Research Center in Yorktown Heights, NY. His research interests are in large-scale optimization methods for signal processing and machine learning.