Eddie Ungless, Measuring Bias is Pointless

An openly licensed seminar from the 2024 Controversies in the Data Society Seminar Series. Eddie Ungless, PhD candidate in the School of Informatics, presents ‘Measuring Bias is Pointless’.
Abstract
It has been consistently shown that existing bias measurement methods for natural language processing (NLP) technologies, like language models, have poor validity and reliability. That is to say, they don’t measure what they claim to measure in a consistent way. Coupled with the fact that these models exist as part of socio-technical systems in which stakeholders can introduce their own biases, this means that measuring bias upstream and in the abstract seems a fruitless exercise. It is only within specific use contexts that we can understand the negative impact of these models and collaborate with those impacted to develop meaningful solutions. By the end of the talk I hope to have convinced you that measuring bias in the abstract is pointless, and that we should refocus our efforts on measuring harms in context.
Speaker Bio
Eddie L. Ungless is a final-year PhD student in the Centre for Doctoral Training (CDT) in NLP, funded by UKRI. He has an interdisciplinary background spanning linguistics, psychology, digital media strategy and computer science. His work addresses social bias in NLP technologies, championing an approach that builds on social science research to centre human experiences in our understanding of AI harms. You can find out more about his interests, along with links to his published work, on his blog: https://mxeddie.github.io/
Watch ‘Measuring Bias is Pointless’ directly on Media Hopper Create
This seminar, presented by Eddie Ungless for the Controversies in the Data Society Seminar Series at The University of Edinburgh, is available under a Creative Commons Attribution-NoDerivatives 4.0 licence.
Header Image: Screen capture of a slide from the presentation