
Federated and Robust Parameter-Efficient Fine-Tuning of Large Language Models
Abstract:
While parameter-efficient fine-tuning techniques such as Low-Rank Adaptation (LoRA) offer computationally efficient adaptation of Large Language Models (LLMs), their practical deployment often assumes centralized data and training environments. However, real-world scenarios frequently involve distributed, privacy-sensitive datasets that require federated or decentralized solutions. In this talk, we propose a decentralized fine-tuning algorithm based on LoRA. We provide a rigorous theoretical analysis, proving that under standard assumptions of smoothness and bounded gradients for non-convex functions, the proposed algorithm converges to a stationary point. We then consider robust fine-tuning of Vision-Language Models (VLMs) using adversarial LoRA based on minimax optimization, and we establish convergence guarantees for the proposed optimization algorithm in the non-convex/strongly-concave setting. Throughout the talk, we present numerical results demonstrating the efficacy of the proposed methods.
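To make the setup concrete, here is a minimal numpy sketch of the two ingredients the abstract names: a LoRA-style low-rank update to a frozen weight matrix, and a simple server-side averaging of clients' LoRA factors. The dimensions, the factor-averaging heuristic, and the function names are illustrative assumptions for this sketch, not the algorithm presented in the talk.

```python
import numpy as np

def lora_delta(A, B):
    # LoRA update: instead of learning a full d_out x d_in matrix,
    # learn B (d_out x r) and A (r x d_in) with r << min(d_out, d_in).
    return B @ A

def fedavg_adapters(adapters):
    # Hypothetical federated step: the server averages the clients'
    # LoRA factors elementwise. Note this is a heuristic; averaging
    # factors is not the same as averaging the products B_k @ A_k.
    A_avg = np.mean([A for A, _ in adapters], axis=0)
    B_avg = np.mean([B for _, B in adapters], axis=0)
    return A_avg, B_avg

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 16, 2
W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
clients = [(rng.normal(size=(r, d_in)), rng.normal(size=(d_out, r)))
           for _ in range(3)]                # per-client LoRA factors (A_k, B_k)

A_avg, B_avg = fedavg_adapters(clients)
W_adapted = W + lora_delta(A_avg, B_avg)     # effective adapted weight
print(W_adapted.shape)                       # (8, 16)
```

Only the small factors `A` and `B` are communicated and averaged, which is what makes LoRA attractive in federated settings; the talk's decentralized algorithm and its convergence analysis go beyond this naive averaging.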
Bio:
Ramtin Pedarsani is an associate professor in the ECE department at UCSB. He obtained his Ph.D. in Electrical Engineering and Computer Sciences from UC Berkeley in 2015. He received his M.Sc. degree from EPFL in 2011 and his B.Sc. degree from the University of Tehran in 2009. His research interests include machine learning, optimization, information and coding theory, and stochastic networks. He is the recipient of the IEEE Communications Society and Information Theory Society Joint Paper Award in 2020 and the Best Paper Award at the IEEE International Conference on Communications (ICC) in 2014.
Date/Time:
May 22, 2025
4:00 pm - 5:45 pm
Location:
3400 Boelter Hall
420 Westwood Plaza, Los Angeles, CA 90095