Join us on Wednesday, December 7 at 9 am (via Zoom) for a seminar by Hillary Dawkins, a Ph.D. candidate in Computational Sciences. Hillary has been investigating gender bias in pre-trained language models.
You can find the title and abstract for her seminar below. If you are interested in attending her seminar, please reach out.
Title: Detection and Mitigation of Gender Bias in Large Pre-trained Language Models
Abstract:
Mitigation of gender bias in NLP has a long history tied to debiasing static word embeddings. More recently, attention has shifted to debiasing pre-trained language models. We study to what extent the simplest projective debiasing methods, developed for word embeddings, can help when applied to BERT's internal representations. Projective methods are fast to implement, use a small number of saved parameters, and make no updates to the existing model parameters. We evaluate the efficacy of the methods in reducing both intrinsic bias, as measured by BERT's next sentence prediction task, and in mitigating observed bias in a downstream setting when fine-tuned.
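As background for the abstract, the basic idea behind projective debiasing is to estimate a bias direction in the embedding space and remove each vector's component along that direction. The sketch below is a minimal illustration of that general technique applied to static word vectors, not the specific method evaluated in the talk; the word pairs and helper names are illustrative assumptions.

```python
import numpy as np

def bias_direction(pairs, embed):
    """Estimate a bias axis from difference vectors of gendered word pairs
    (e.g. ("he", "she")), using the first principal component."""
    diffs = np.stack([embed[a] - embed[b] for a, b in pairs])
    _, _, vt = np.linalg.svd(diffs - diffs.mean(axis=0), full_matrices=False)
    direction = vt[0]
    return direction / np.linalg.norm(direction)

def project_out(vec, direction):
    """Remove the component of `vec` lying along the bias direction."""
    return vec - np.dot(vec, direction) * direction

# Toy usage with random stand-in embeddings (illustrative only).
rng = np.random.default_rng(0)
embed = {w: rng.normal(size=50) for w in ["he", "she", "man", "woman", "doctor"]}
d = bias_direction([("he", "she"), ("man", "woman")], embed)
debiased_doctor = project_out(embed["doctor"], d)
```

Because only the saved direction vector is needed at inference time, this kind of projection leaves the underlying model parameters untouched, which is what makes it cheap to apply to a pre-trained model's internal representations.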
Committee:
Advisor: Dr. Judi McCuaig
Co-advisor: Dr. Daniel Gillis
Committee: Dr. Stefan Kremer
Committee: Dr. Graham Taylor (School of Engineering)