CS 201: Building Accountable NLP Models: On Social Bias Detection and Mitigation, JIEYU ZHAO, UCLA – Computer Science

Speaker: Jieyu Zhao
Affiliation: UCLA - Computer Science Department

ABSTRACT:

Natural Language Processing (NLP) plays an important role in many applications, including resume filtering, text analysis, and information retrieval. Despite the remarkable accuracy enabled by advances in machine learning, these techniques may pick up and generalize the societal biases implicit in their training data. For example, an automatic resume filtering system may inadvertently select candidates based on gender or race due to implicit associations between applicant names and job titles, reinforcing the societal disparities documented by researchers. Various laws and policies have been designed to ensure social equality and diversity, but no comparable mechanism exists for machine learning models deployed in sensitive applications. My research analyzes the potential stereotypes in various machine learning models and develops computational approaches to enhance fairness in a wide range of NLP applications. The broader impact of my research aligns with one of the central concerns of the machine learning community: how can we use AI for (social) good? In this talk, I will present examples of revealing and mitigating societal biases in different NLP models.

BIO:

Jieyu is a PhD candidate in the Department of Computer Science at UCLA, advised by Prof. Kai-Wei Chang. Her research interest lies in the fairness of ML/NLP models. Her work received the EMNLP Best Long Paper Award (2017), and she was a recipient of the 2020 Microsoft PhD Fellowship. She was invited by UN-WOMEN Beijing to a panel discussion on gender equity and social responsibility. More details can be found at https://jyzhao.net/.

Hosted by Professor Kai-Wei Chang

Date/Time:
Date(s) - Dec 02, 2021
4:15 pm - 5:45 pm

Location:
3400 Boelter Hall
420 Westwood Plaza, Los Angeles, CA 90095