Qualifying Exam Presentation – Hillary Dawkins

Join us on Thursday, June 2nd at 9:30 am (via Zoom) for the qualifying exam presentation of PhD student Hillary Dawkins. Hillary will discuss the detection and mitigation of gender bias in natural language processing.

If you are interested in attending the presentation, feel free to reach out.

Title: Detection and Mitigation of Gender Bias in Natural Language Processing

Abstract:

The goal of this research proposal is to mitigate gender-biased outcomes produced by NLP systems by debiasing pretrained resources (both static word embeddings and language models) via simple post-processing methods. We focus on post-processing methods because they require minimal additional computation and are easy to combine with existing methods. Throughout, the performance of a debiasing method is quantified by its ability to eliminate or reduce unequal outcomes across binary genders (e.g., differences in predictions across gender) without affecting task accuracy. As we will come to appreciate, mitigating bias in pretrained resources often requires an understanding of how intrinsic bias (some innate property of the pretrained resource) correlates with observable bias in downstream applications. Therefore, supporting contributions to this research are to propose and investigate intrinsic bias measures.
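To make the two ideas in the abstract concrete, here is a minimal toy sketch of (a) an intrinsic bias measure for static word embeddings and (b) a post-processing debiasing step. This is an illustration only, not Hillary's proposed method: the 4-dimensional vectors are made up, and the debiasing shown is the generic "neutralize" step (removing a word vector's component along a gender direction), with a projection-based bias score as the intrinsic measure.

```python
import numpy as np

# Toy 4-d "embeddings" -- purely illustrative, not real pretrained vectors.
emb = {
    "he":    np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":   np.array([-1.0, 0.2, 0.1, 0.0]),
    "nurse": np.array([-0.6, 0.5, 0.3, 0.1]),
}

def unit(v):
    return v / np.linalg.norm(v)

# A simple "gender direction": the normalized difference of a
# definitional pair (he - she).
g = unit(emb["he"] - emb["she"])

def intrinsic_bias(word):
    # Projection of the (normalized) word vector onto the gender
    # direction: a crude intrinsic bias score, where larger magnitude
    # means a more gendered embedding.
    return float(np.dot(unit(emb[word]), g))

def debias(word):
    # Post-processing step: remove the component along the gender
    # direction, leaving the rest of the vector untouched.
    v = emb[word]
    return v - np.dot(v, g) * g

print(intrinsic_bias("nurse"))   # nonzero before debiasing
emb["nurse"] = debias("nurse")
print(intrinsic_bias("nurse"))   # 0 after debiasing
```

Note that a near-zero intrinsic score like this does not by itself guarantee equal downstream outcomes; the abstract's point is precisely that the correlation between such intrinsic measures and observable downstream bias has to be established empirically.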

Examination Committee:

Chair: Dr. Joseph Sawada
Co-advisor: Dr. Daniel Gillis
Advisory Committee: Dr. Graham Taylor
External: Dr. Fei Song
External: Dr. Fattane Zarrinkalam
