CS 201 | Meisam Razaviyayn, USC

Denoising of Differentially Private Optimizers

Abstract:
Differential privacy (DP) offers a robust framework for safeguarding individual data privacy. To train modern machine learning models under DP, differentially private optimizers have been widely adopted in recent years. A popular approach to privatizing an optimizer is to clip the individual (per-sample) gradients and add sufficiently large noise to the clipped gradients. This approach has led to DP optimizers whose performance is comparable to that of their non-private counterparts in fine-tuning tasks or in tasks with a small number of trainable parameters. However, a significant performance drop is observed when these optimizers are applied to large-scale training. This degradation stems from the substantial noise injection required to maintain DP, which disrupts the optimizer's dynamics. This talk discusses approaches for denoising DP optimizers. We employ filtering strategies, such as low-pass filtering and Kalman filtering, to denoise the privatized gradients and generate progressively refined gradient estimates. To ensure practicality for large-scale training, we simplify the proposed mechanisms so that their memory and computational demands are minimal. We establish theoretical privacy-utility trade-off guarantees for the proposed methods and demonstrate provable improvements over standard DP optimizers such as DP-SGD in terms of iteration-complexity upper bounds. Extensive experiments across diverse tasks, including vision benchmarks such as CIFAR-100 and ImageNet-1k and language fine-tuning benchmarks such as GLUE, E2E, and DART, validate the effectiveness of our training methods. The results showcase the ability of this framework to significantly improve the performance of DP optimizers, surpassing state-of-the-art results on several benchmarks under the same privacy constraints.
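
For readers unfamiliar with the mechanism sketched in the abstract, the following PyTorch fragment is a minimal, illustrative sketch of one privatized gradient step (per-sample clipping plus Gaussian noise, as in DP-SGD) followed by a simple low-pass filter (an exponential moving average). All function names and hyperparameters here are hypothetical illustrations, not the algorithms or settings from the papers below.

```python
import torch

def privatize(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0):
    """DP-SGD-style privatization: clip each per-sample gradient to
    L2 norm at most `clip_norm`, average, then add Gaussian noise
    calibrated to the clipping threshold."""
    clipped = []
    for g in per_sample_grads:  # one gradient tensor per training example
        scale = (clip_norm / (g.norm() + 1e-12)).clamp(max=1.0)
        clipped.append(g * scale)
    avg = torch.stack(clipped).mean(dim=0)
    std = noise_multiplier * clip_norm / len(per_sample_grads)
    return avg + std * torch.randn_like(avg)

def low_pass(state, noisy_grad, beta=0.9):
    """Exponential moving average as a simple low-pass filter: i.i.d.
    DP noise is averaged down while the slowly varying gradient
    signal passes through."""
    if state is None:
        state = torch.zeros_like(noisy_grad)
    return beta * state + (1 - beta) * noisy_grad
```

Because the injected DP noise is independent across iterations while the true gradient changes slowly, an exponential moving average with coefficient beta reduces the stationary variance of the filtered noise by a factor of (1 - beta)/(1 + beta), at the price of some lag in tracking the gradient; the Kalman-filtering variant discussed in the talk adapts this trade-off over time.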

This talk is based on joint work with Xinwei Zhang (USC), Zhiqi Bu (Amazon), Borja Balle (DeepMind), Mingyi Hong (University of Minnesota), and Vahab Mirrokni (Google Research).

Papers and packages:
https://arxiv.org/pdf/2410.03883
https://arxiv.org/pdf/2408.13460
GitHub: https://github.com/pytorch/opacus/tree/main/research/disk_optimizer
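
As a rough illustration of the Kalman-filtering idea behind the DiSK optimizer linked above (and emphatically not its actual API), the sketch below models the true gradient as a random walk observed through DP noise. The process- and observation-noise variances q and r are assumed user-chosen constants.

```python
import torch

class ScalarKalmanDenoiser:
    """Per-iteration denoiser treating the true gradient as a latent
    random walk, x_t = x_{t-1} + w_t, observed through DP noise,
    y_t = x_t + v_t, with a single shared (scalar) error covariance."""

    def __init__(self, q=1e-4, r=1.0):
        self.q = q     # assumed process-noise variance (gradient drift)
        self.r = r     # assumed observation-noise variance (DP noise)
        self.x = None  # current gradient estimate (one buffer per parameter)
        self.p = 1.0   # scalar error covariance shared across coordinates

    def step(self, noisy_grad):
        if self.x is None:              # initialize from the first observation
            self.x = noisy_grad.clone()
            return self.x
        self.p += self.q                # predict: uncertainty grows with drift
        k = self.p / (self.p + self.r)  # Kalman gain in (0, 1)
        self.x = self.x + k * (noisy_grad - self.x)  # blend estimate with observation
        self.p = (1.0 - k) * self.p     # update: uncertainty shrinks after measuring
        return self.x
```

Note the memory footprint: one extra buffer per parameter plus two scalars, in line with the abstract's emphasis on low overhead for large-scale training. With a nonzero process-noise term q, the gain settles at a steady-state value that balances responsiveness to gradient drift against suppression of the injected DP noise.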

Bio:
Meisam Razaviyayn is the Andrew and Erna Viterbi Early Career Chair and an associate professor in the departments of Industrial and Systems Engineering, Computer Science, Quantitative and Computational Biology, and Electrical Engineering at the University of Southern California. He serves as the associate director of the USC-Meta Center for Research and Education in AI and Learning, is a Faculty Visitor at Google Research, and is a founding member of the NSF-supported USC OR-AI program. Before joining USC, he was a postdoctoral research fellow in the Department of Electrical Engineering at Stanford University. He earned his PhD in Electrical Engineering with a minor in Computer Science from the University of Minnesota, where he also received an M.Sc. in Mathematics. His research and teaching have been recognized with numerous awards, including the 2022 NSF CAREER Award, the 2022 Northrop Grumman Excellence in Teaching Award, the 2021 AFOSR Young Investigator Award, the 2021 3M Nontenured Faculty Award, the 2020 ICCM Best Paper Award in Mathematics, the 2019 IEEE DSW Best Paper Award, and the 2014 IEEE Signal Processing Society Young Author Best Paper Award. He was selected by the National Academy of Engineering for its 2023 Frontiers of Engineering Symposium, was a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization in 2013 and 2016, and is a silver medalist of Iran's National Mathematics Olympiad. His research focuses on the design and analysis of fundamental optimization algorithms for the modern AI era.

Date/Time:
Apr 17, 2025
4:00 pm - 5:45 pm

Location:
3400 Boelter Hall
420 Westwood Plaza, Los Angeles, California 90095