This talk is an “eyewitness account” of the evolution of nonlinear optimization methods over the last four decades. Starting from the early days of simplex-inspired methods and moving through augmented Lagrangian and interior-point methods, it highlights the need for new active-set methods and the opportunities (not challenges) provided by stochasticity. We conclude with a few observations about the formulation and solution of optimization problems in supervised learning.
Jorge Nocedal is the David and Karen Sachs Professor and the chair of the Industrial Engineering and Management Sciences Department at Northwestern University. He holds a Ph.D. in Mathematical Sciences from Rice University and a B.Sc. degree in Physics from the National University of Mexico. His research focuses on the theory, algorithms, and applications of nonlinear optimization in machine learning and in disciplines involving differential equations. He specializes in nonlinear optimization, both convex and non-convex, deterministic and stochastic. The motivation for his current algorithmic and theoretical research stems from applications in image and speech recognition, recommendation systems, and search engines. In 2012, he was awarded the George B. Dantzig Prize. His honors and distinctions include being named a SIAM Fellow and an ISI Highly Cited Researcher (Mathematics category). He served as Editor-in-Chief of the SIAM Journal on Optimization from 2010 to 2014, and was also on the editorial boards of SIAM Review, Mathematical Programming, and Mathematics of Computation. He is a co-author of the book “Numerical Optimization” and has developed widely used software, including KNITRO. He has been affiliated with various organizations, including Google Inc. and Ziena Optimization.