Ph.D. Seminar – Hillary Dawkins

Join us on Wednesday, December 7 at 9 am (via Zoom) for a seminar by Hillary Dawkins, a Ph.D. candidate in Computational Sciences. Hillary has been investigating gender bias in pre-trained language models.

You can find the title and abstract for her seminar below. If you are interested in attending her seminar, please reach out.

Title: Detection and Mitigation of Gender Bias in Large Pre-trained Language Models


Abstract: Mitigation of gender bias in NLP has a long history tied to debiasing static word embeddings. More recently, attention has shifted to debiasing pre-trained language models. We study to what extent the simplest projective debiasing methods, developed for word embeddings, can help when applied to BERT’s internal representations. Projective methods are fast to implement, use a small number of saved parameters, and make no updates to the existing model parameters. We evaluate the efficacy of the methods in reducing both intrinsic bias, as measured by BERT’s next sentence prediction task, and in mitigating observed bias in a downstream setting when fine-tuned.
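For readers unfamiliar with projective debiasing, the core idea can be sketched in a few lines: estimate a bias direction in the embedding space and project it out of each vector. The sketch below is a generic illustration of that projection with toy random vectors, not the specific method or data from the thesis; the bias direction here is purely hypothetical.

```python
import numpy as np

def debias(vectors, bias_direction):
    """Remove the component of each row vector along the bias direction.

    Computes x - (x . g) g for unit vector g, leaving each vector
    orthogonal to the estimated bias direction.
    """
    g = bias_direction / np.linalg.norm(bias_direction)  # unit bias direction
    return vectors - np.outer(vectors @ g, g)            # subtract projection

# Toy example: a random "bias direction" and a few random embeddings.
rng = np.random.default_rng(0)
g = rng.normal(size=8)          # hypothetical bias direction
X = rng.normal(size=(3, 8))     # hypothetical embeddings

X_debiased = debias(X, g)

# After projection, every vector has zero component along g.
print(np.allclose(X_debiased @ (g / np.linalg.norm(g)), 0.0))  # True
```

Part of the appeal noted in the abstract is visible here: the only stored parameter is the bias direction itself, and the original model weights are never modified.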

Advisor: Dr. Judi McCuaig
Co-advisor: Dr. Daniel Gillis
Committee: Dr. Stefan Kremer
Committee: Dr. Graham Taylor (School of Engineering)
