Ph.D. candidate Hillary Dawkins will defend her dissertation on Monday, January 16, 2023, at 1 p.m. Hillary has spent the last few years developing new approaches to debiasing in natural language processing.
The defence will take place via Zoom. Anyone interested in attending can contact me. The title and abstract for the defence are provided below.
Title: Detection and Mitigation of Gender Bias in Natural Language Processing
This thesis contributes to our collective understanding of how gender bias arises in natural language processing systems, provides new detection and measurement tools, and develops mitigation methods. More specifically, we aim to quantify and reduce bias within pre-trained computational resources, both word embeddings and language models, so that unwanted outcomes produced by downstream systems are reduced.
On the theme of detection, we make two new observations on how gender bias can manifest in system predictions. First, gender words are shown to carry either marked or default values: default values may pass through systems undetected, while marked values influence prediction outcomes. Second, unwanted latent inferences arising from a shared gender association are detected. We contribute two new test sets, and one enhanced test set, for the purpose of gender bias detection.
On the theme of mitigation, we develop successful debiasing strategies that apply to both types of pre-trained resources.
Chair: Dr. Joe Sawada
Advisor: Dr. Judi McCuaig
Co-Advisor: Dr. Daniel Gillis
Non-Advisory Member: Dr. Fattane Zarrinkalam (School of Engineering)
External Examiner: Dr. Kathleen Fraser (National Research Council)