“Maximally Informative, Minimally Demanding: Learning from Human Feedback”
The robot learning community has increasingly turned to large models in the hope of achieving strong generalization across tasks and environments. Yet even the most capable zero-shot models benefit substantially from in-domain fine-tuning with human feedback. For broad deployment of robots, we must therefore develop methods that extract the most information from the least amount of human feedback: maximally informative, minimally demanding. In this talk, I will present approaches to modeling diverse feedback signals, including comparative language, interventions, and eye gaze, that make robot learning algorithms more data-efficient without placing extra burden on users.
Erdem Bıyık is an assistant professor in the Thomas Lord Department of Computer Science at the University of Southern California, and in the Ming Hsieh Department of Electrical and Computer Engineering by courtesy. He leads the Learning and Interactive Robot Autonomy Lab (Lira Lab). Prior to joining USC, he was a postdoctoral researcher at UC Berkeley’s Center for Human-Compatible Artificial Intelligence. He received his Ph.D. and M.Sc. degrees in Electrical Engineering from Stanford University, working at the Stanford Artificial Intelligence Lab (SAIL), and his B.Sc. degree in Electrical and Electronics Engineering from Bilkent University in Ankara, Türkiye. During his studies, he worked at the research departments of Google and Aselsan. Erdem was an HRI 2022 Pioneer and received an honorable mention award for his work at HRI 2020. His TMLR 2023 paper was an outstanding paper finalist, and his RLC 2025 paper received the outstanding paper award on empirical reinforcement learning research. His work has been published in premier robotics and artificial intelligence journals and conferences, such as IJRR, CoRL, RSS, and NeurIPS.
Date/Time:
Nov 18, 2025
4:00 pm - 5:45 pm
Location:
3400 Boelter Hall
420 Westwood Plaza, Los Angeles, CA 90095