CS 201: On Optimization and the Miracle of Linearity in Deep Learning, MIKHAIL BELKIN, UC SAN DIEGO

Speaker: Mikhail Belkin
Affiliation: UC San Diego

ABSTRACT:

The success of deep learning is due, to a large extent, to the remarkable effectiveness of gradient-based optimization methods applied to large neural networks.

In this talk I will discuss some general mathematical principles that allow for efficient optimization in over-parameterized non-linear systems, a setting that includes deep neural networks. Remarkably, it seems that optimization of such systems is “easy”. In particular, the optimization problems corresponding to these systems are not convex, even locally, but instead locally satisfy the Polyak-Łojasiewicz (PL) condition, which allows for efficient optimization by gradient descent or SGD. We connect the PL condition of these systems to the condition number associated with the tangent kernel and develop a non-linear theory parallel to classical analyses of over-parameterized linear equations.
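(A minimal sketch of this connection, in notation chosen here rather than taken from the talk: for a model map F(w) fit to targets y with the square loss, the tangent kernel is K(w) = DF(w) DF(w)^T, and

\[
L(w) = \tfrac{1}{2}\,\|F(w) - y\|^2, \qquad
\|\nabla L(w)\|^2 = (F(w)-y)^{\top} K(w)\,(F(w)-y) \ \ge\ 2\,\lambda_{\min}(K(w))\,L(w),
\]

so the PL condition \(\|\nabla L(w)\|^2 \ge 2\mu\,L(w)\) holds with \(\mu = \lambda_{\min}(K(w))\). Under standard smoothness assumptions, gradient descent with a small enough step size \(\eta\) then satisfies \(L(w_t) \le (1-\eta\mu)^t\,L(w_0)\), i.e., it converges linearly as long as the tangent kernel stays well conditioned along the trajectory.)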

In a related but conceptually separate development, I will discuss a new perspective on the remarkable, recently discovered phenomenon of the transition to linearity (constancy of the neural tangent kernel, NTK) in certain classes of large neural networks. I will show how this transition to linearity results from the scaling of the Hessian with the size of the network. Yet, as we show, it is not a general property of large systems and is not necessary for successful optimization.
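(One common way to see why a small Hessian forces near-linearity, sketched here under assumptions stated in this note rather than quoted from the talk: Taylor-expanding a scalar network output f around the initialization w_0,

\[
f(w) = f(w_0) + \nabla f(w_0)^{\top}(w - w_0) + \tfrac{1}{2}\,(w - w_0)^{\top} H(\xi)\,(w - w_0),
\]

so within a ball \(\|w - w_0\| \le R\) the deviation from the linear, constant-tangent-kernel model is at most \(\tfrac{1}{2} R^2 \sup_{\xi}\|H(\xi)\|\), and the gradient \(\nabla f(w)\) changes by at most \(R \sup_{\xi}\|H(\xi)\|\), so the tangent kernel changes correspondingly little. If the spectral norm of the Hessian shrinks with the network width \(m\), say as \(O(1/\sqrt{m})\), both quantities vanish for wide networks, giving the transition to linearity; if it does not shrink, the model can remain genuinely non-linear while still satisfying the PL-type condition above.)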

Joint work with Chaoyue Liu and Libin Zhu.

BIO:

Mikhail Belkin received his Ph.D. in 2003 from the Department of Mathematics at the University of Chicago. His research interests are in the theory and applications of machine learning and data analysis. Some of his well-known work includes the widely used Laplacian Eigenmaps, Graph Regularization, and Manifold Regularization algorithms, which brought ideas from classical differential geometry and spectral analysis to data science. His recent work has been concerned with understanding remarkable mathematical and statistical phenomena observed in deep learning; this empirical evidence has necessitated revisiting some of the basic concepts in statistics and optimization. One of his key recent findings is the “double descent” risk curve, which extends the textbook U-shaped bias-variance trade-off curve beyond the point of interpolation.

Mikhail Belkin is a recipient of an NSF CAREER Award and a number of best paper and other awards. He has served on the editorial boards of the Journal of Machine Learning Research, IEEE Transactions on Pattern Analysis and Machine Intelligence, and the SIAM Journal on Mathematics of Data Science.

Hosted by Professor Quanquan Gu

Date/Time:
Date(s) - Jan 07, 2021
4:00 pm - 5:45 pm

Location:
Zoom Webinar
404 Westwood Plaza Los Angeles