CS 201: Computational / Statistical Gaps for Learning Neural Networks, ADAM KLIVANS, UT Austin

Speaker: Adam Klivans
Affiliation: University of Texas at Austin

ABSTRACT:

It has been known for decades that a polynomial-size training sample suffices for learning neural networks. Most theoretical results, however, indicate that these learning tasks are computationally intractable. Where does the truth lie? In this talk we consider one of the simplest and most well-studied settings for learning, where the marginal distribution on inputs is Gaussian, and show unconditionally that gradient descent cannot learn even one-layer neural networks. We then point to a potential way forward and sketch the first fixed-parameter tractable algorithm for learning deep ReLU networks: its running time is polynomial in the ambient dimension and exponential only in the network's parameters.
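
As a concrete illustration of the setting described above (not drawn from the talk itself), the following Python/NumPy sketch sets up the learning problem with a Gaussian input marginal: labels come from an unknown one-layer ReLU network, and a gradient-descent learner with the same architecture tries to fit them under square loss. All dimensions, step sizes, and iteration counts below are arbitrary choices made for illustration only.

import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 3, 5000  # ambient dimension, hidden ReLU units, sample size (illustrative values)

# Unknown target network: f(x) = sum_i relu(w_i . x), weights drawn at random
W_true = rng.standard_normal((k, d))

X = rng.standard_normal((n, d))                # Gaussian marginal on inputs
y = np.maximum(X @ W_true.T, 0.0).sum(axis=1)  # noiseless labels from the target network

# Learner: same architecture, random initialization, full-batch gradient descent
W = 0.1 * rng.standard_normal((k, d))
lr = 1e-3
for _ in range(2000):
    H = X @ W.T                                   # pre-activations, shape (n, k)
    resid = np.maximum(H, 0.0).sum(axis=1) - y    # prediction error, shape (n,)
    grad = ((H > 0) * resid[:, None]).T @ X / n   # gradient of 0.5 * mean squared error w.r.t. W
    W -= lr * grad

final = np.mean((np.maximum(X @ W.T, 0.0).sum(axis=1) - y) ** 2)
print("final mean squared error:", final)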

BIO:

Adam Klivans is a professor of computer science at UT Austin. He is the director of the new Machine Learning Laboratory (MLL). His research interests are in provably efficient algorithms for core tasks in machine learning.

Hosted by Professor Quanquan Gu

Via Zoom Webinar

Date/Time:
Date(s) - Dec 01, 2020
4:00 pm - 5:45 pm

Location:
Zoom Webinar
404 Westwood Plaza, Los Angeles