CS 201: A Geometric Data Structure from Neuroscience, SANJOY DASGUPTA, UC San Diego

Speaker: Sanjoy Dasgupta
Affiliation: UC San Diego

ABSTRACT:

An intriguing primitive representation, “expand-and-sparsify”, appears in the olfactory system of the fly and the sensory systems of several other organisms. It maps an input vector to a much higher-dimensional sparse representation, using a random linear transformation followed by winner-take-all thresholding.
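For concreteness, a minimal sketch of such a map in NumPy appears below; the dimensions, the sparse binary random projection, and the top-k winner-take-all rule are illustrative assumptions rather than the exact parameters of the fly circuit or of the papers' algorithms.

import numpy as np

def expand_and_sparsify(x, W, k):
    # Expand: random linear projection into a much higher-dimensional space.
    y = W @ x
    # Sparsify: winner-take-all thresholding keeps only the k largest activations.
    tag = np.zeros_like(y)
    tag[np.argsort(y)[-k:]] = 1.0
    return tag

# Illustrative setup (all sizes chosen arbitrarily for this sketch):
rng = np.random.default_rng(0)
d, m, k = 50, 2000, 100                       # input dim, expanded dim, active units
W = (rng.random((m, d)) < 0.1).astype(float)  # sparse binary random projection
tag = expand_and_sparsify(rng.random(d), W, k)
print(int(tag.sum()))                         # 100: exactly k units fire

Similar inputs tend to activate overlapping sets of winners, which is what makes the sparse tag a locality-preserving code.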

I’ll show that this representation has a variety of formal properties, such as (1) locality preservation, (2) universal approximation, and (3) adaptivity to manifold structure, that make it an attractive data structure for algorithms and machine learning.

In particular, mimicking the fly’s circuitry yields algorithms for similarity search and for novelty detection that have provable guarantees and are effective in practice.
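As a hedged illustration (the indexing schemes in the papers may differ in detail), similarity search over such tags can be phrased as retrieving the stored item whose sparse tag shares the most active units with the query's tag:

import numpy as np

def tag_overlap(tag_a, tag_b):
    # Similarity of two sparse binary tags = number of shared active units.
    return int(np.dot(tag_a, tag_b))

def nearest_by_tag(query_tag, stored_tags):
    # Index of the stored tag with the largest overlap with the query.
    return int(np.argmax([tag_overlap(query_tag, t) for t in stored_tags]))

Because only a few units are active in each tag, such comparisons can operate on short lists of active indices rather than on full dense vectors.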

This talk is based on work with Saket Navlakha (Salk Institute), Chuck Stevens (Salk Institute), and Chris Tosh (Columbia). The published results appear in these two papers:

S. Dasgupta, C.F. Stevens and S. Navlakha.
A neural algorithm for a fundamental computing problem. Science, 2017.
https://science.sciencemag.org/content/sci/358/6364/793

S. Dasgupta, T.C. Sheehan, C.F. Stevens and S. Navlakha.
A neural data structure for novelty detection. PNAS, 2018.
https://www.pnas.org/content/115/51/13093

BIO:

Sanjoy Dasgupta is a Professor of Computer Science and Engineering at UC San Diego, where he has been since 2002. He works on algorithms for machine learning, with a particular focus on unsupervised and minimally supervised learning. He is a coauthor of the textbook “Algorithms” (with Christos Papadimitriou and Umesh Vazirani).

Hosted by Professor Quanquan Gu

Date/Time:
Date(s) - Jan 28, 2020
12:00 am

Location:
3400 Boelter Hall
420 Westwood Plaza, Los Angeles, California 90095