CS 201 – Machine Learning Colloquium: Universal Adversarial Perturbations: Fooling Deep Networks with a Single Image, ALHUSSEIN FAWZI, UCLA – Computer Science Department – Vision Lab

Speaker: Alhussein Fawzi
Affiliation: UCLA - Computer Science Department - Vision Lab

ABSTRACT: The robustness of classifiers to small perturbations of the data points is a highly desirable property when the classifier is deployed in real and possibly hostile environments. Despite their excellent performance on recent visual benchmarks, state-of-the-art deep neural networks are, as I will show in this talk, highly vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. I will then analyze the implications of this vulnerability of deep neural networks and provide theoretical insights into their robustness.

BIO: Alhussein Fawzi is a postdoc in the Computer Science Department at UCLA, working with Prof. Stefano Soatto (Vision Lab). His research interests include machine learning and computer vision. He received his M.Sc. and Ph.D. degrees from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland. He has published in refereed conferences and journals including NIPS, CVPR, the International Journal of Computer Vision (IJCV), and the SIAM Journal on Imaging Sciences (SIIMS). He twice received the IBM Ph.D. Fellowship, in 2013 and 2015. More details can be found on his personal website: http://www.alhusseinfawzi.info/
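To make the notion of a universal perturbation concrete, the sketch below shows what "image-agnostic" means in code: a single fixed vector v is added to every input, and the fooling rate is the fraction of inputs whose predicted label changes. This is only an illustrative toy (a random linear classifier and a hand-picked perturbation direction chosen for this example), not the speaker's actual construction algorithm for deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear two-class classifier on 10-dimensional inputs
# (a stand-in for a real network; chosen purely for illustration).
W = rng.standard_normal((2, 10))

def predict(X):
    """Predicted class label for each row of X."""
    return (X @ W.T).argmax(axis=1)

def fooling_rate(X, v):
    """Fraction of samples whose prediction flips when the single,
    image-agnostic perturbation v is added to all of them."""
    return float(np.mean(predict(X) != predict(X + v)))

# 100 random "images"; the universal perturbation here is simply a small
# step along the normal of the two classes' score difference (an assumption
# of this toy setup, not the method from the talk).
X = rng.standard_normal((100, 10))
v = 0.5 * (W[1] - W[0]) / np.linalg.norm(W[1] - W[0])

rate = fooling_rate(X, v)
```

The point of the measurement is that v is computed once and reused for all inputs, in contrast to per-image adversarial perturbations, which are recomputed for each input.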

Date/Time:
Date(s) - May 11, 2017
4:15 pm - 5:45 pm

Location:
3400 Boelter Hall
420 Westwood Plaza, Los Angeles, CA 90095