Speaker: Kamalika Chaudhuri
Affiliation: UC San Diego
As machine learning is increasingly deployed, there is a need for reliable and robust methods that go beyond simple test accuracy. In this talk, we will discuss two challenges that arise in reliable machine learning. The first is robustness to adversarial examples: small, imperceptible perturbations to legitimate test inputs that cause machine learning classifiers to misclassify. While recent work has proposed many attacks and defenses, why exactly adversarial examples arise remains a mystery. In this talk, we will take a closer look at this question.
The second problem is overfitting, which many generative models are known to be prone to. Motivated by privacy concerns, we formalize a form of overfitting that we call data-copying, where the generative model memorizes and outputs training samples or small variations thereof. We provide a three-sample test for detecting data-copying, and study the performance of our test on several canonical models and datasets.
Kamalika Chaudhuri received a Bachelor of Technology from the Indian Institute of Technology, Kanpur, and a PhD in Computer Science from the University of California, Berkeley. Currently, she is an Associate Professor at the University of California, San Diego. She is a recipient of the NSF CAREER Award, the Hellman Faculty Fellowship, and Google and Bloomberg Faculty Awards. Kamalika's research is on the foundations of trustworthy machine learning, which includes problems such as learning from sensitive data while preserving privacy, learning under sampling bias, and learning in the presence of an adversary. She is also broadly interested in a number of topics in learning theory, such as non-parametric methods, online learning, and active learning.
Hosted by Professor Quanquan Gu
Date(s) - Jan 19, 2021
4:00 pm - 5:45 pm
404 Westwood Plaza Los Angeles