Speaker: Sanjoy Dasgupta
Affiliation: UC San Diego
An intriguing primitive representation, “expand-and-sparsify”, appears in the olfactory system of the fly and the sensory systems of several other organisms. It maps an input vector to a much higher-dimensional sparse representation, using a random linear transformation followed by winner-take-all thresholding.
I’ll show that this representation has a variety of formal properties, such as (1) locality preservation, (2) universal approximation, and (3) adaptivity to manifold structure, that make it an attractive data structure for algorithms and machine learning.
In particular, mimicking the fly’s circuitry yields algorithms for similarity search and for novelty detection that have provable guarantees and are effective in practice.
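The expand-and-sparsify map described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors’ implementation: the function name, the choice of a sparse binary random projection, and the dimensions (input dimension 50, expanded dimension 2000, sparsity 100) are assumptions made for the example, and the winner-take-all step is realized as a simple top-k threshold.

```python
import numpy as np

def expand_and_sparsify(x, projection, k):
    """Map x to a much higher-dimensional sparse representation:
    a random linear expansion followed by winner-take-all, keeping
    only the k strongest activations and zeroing the rest."""
    y = projection @ x                  # random linear expansion
    tag = np.zeros_like(y)
    top_k = np.argsort(y)[-k:]          # indices of the k strongest units
    tag[top_k] = y[top_k]               # sparse "tag" for the input
    return tag

rng = np.random.default_rng(0)
d, m, k = 50, 2000, 100                 # illustrative dimensions
# Sparse binary random projection (each expanded unit samples ~10%
# of the input coordinates) -- one plausible choice, assumed here.
M = (rng.random((m, d)) < 0.1).astype(float)
x = rng.random(d)
tag = expand_and_sparsify(x, M, k)
print(np.count_nonzero(tag))            # at most k nonzero entries
```

Because nearby inputs tend to activate overlapping sets of winners, comparing the sparse tags of two inputs gives a cheap proxy for their similarity, which is the intuition behind using this representation for similarity search.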
This talk is based on work with Saket Navlakha (Salk Institute), Chuck Stevens (Salk Institute), and Chris Tosh (Columbia). The published results appear in these two papers:
S. Dasgupta, C.F. Stevens and S. Navlakha.
A neural algorithm for a fundamental computing problem. Science, 2017.
S. Dasgupta, T.C. Sheehan, C.F. Stevens and S. Navlakha.
A neural data structure for novelty detection. PNAS, 2018.
Sanjoy Dasgupta is a Professor of Computer Science and Engineering at UC San Diego, where he has been since 2002. He works on algorithms for machine learning, with a particular focus on unsupervised and minimally supervised learning. He is a co-author of the textbook “Algorithms” (with Christos Papadimitriou and Umesh Vazirani).
Hosted by Professor Quanquan Gu
Date(s) - Jan 28, 2020
3400 Boelter Hall
420 Westwood Plaza, Los Angeles California 90095