Event date:
Feb 28, 2022, 11:30 am

Identification & Mitigation of Gender Bias in Sentiment Classification Task

Supervisor
Dr. Agha Ali Raza
Student
Rabia Atique
Venue
CS Board Room
Event
MS Synopsis defense

Abstract: 

Algorithmic bias refers to the observation that neural networks, and AI systems more broadly, are susceptible to significant biases that can lead to real and detrimental societal consequences. Today, more than ever, we are already seeing this manifest in society, from facial recognition to medical decision making to voice recognition. Moreover, algorithmic bias can perpetuate existing social and cultural biases, such as racial and gender biases. Gender biases are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. In this research work, we identify gender bias in a pretrained NLP model by fine-tuning it on a downstream NLP task, namely sentiment classification, and then apply strategies to measure this gender bias. We then mitigate the bias by re-training the fine-tuned BERT classifier on augmented data. We will also apply a Mixup Transformer-based data augmentation technique to mitigate gender bias.
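The data augmentation mentioned above is commonly realized as counterfactual (gender-swap) augmentation: each training sentence is paired with a copy whose gendered terms are flipped, so the re-trained classifier sees both variants with the same label. A minimal sketch, assuming a small illustrative word-pair lexicon (the pair list, function names, and case handling below are illustrative, not taken from the thesis):

```python
# Bidirectional map of gendered terms. Real systems use much larger
# lexicons and disambiguate forms like "her" (-> "him" or "his") with
# part-of-speech tags; here "her" is mapped to "his" as a simplification.
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "his", "his": "her",
    "man": "woman", "woman": "man",
    "men": "women", "women": "men",
}

def gender_swap(sentence: str) -> str:
    """Return the sentence with gendered tokens swapped, preserving an
    initial capital (punctuation-attached tokens are left unchanged)."""
    out = []
    for tok in sentence.split():
        swapped = GENDER_PAIRS.get(tok.lower(), tok)
        if tok[:1].isupper():
            swapped = swapped.capitalize()
        out.append(swapped)
    return " ".join(out)

def augment(dataset):
    """Pair each (text, label) example with its gender-swapped
    counterfactual, keeping the original label."""
    return dataset + [(gender_swap(text), label) for text, label in dataset]
```

The augmented set doubles the data while balancing gendered contexts across labels, which is the property the re-training step relies on.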

Evaluation Committee:

  • Dr. Agha Ali Raza (Advisor)
  • Dr. Asim Karim