SSHRC Computational Social Science Seminar Series
In collaboration with the Office of Research and Innovation, the Social Sciences and Humanities Research Committee (SSHRC) will host a talk by Dr. Brent Mittelstadt as part of the Computational Social Science Seminar Series. Brent is a Senior Research Fellow and British Academy Postdoctoral Fellow in data ethics at the Oxford Internet Institute, a Turing Fellow at the Alan Turing Institute, and a member of the UK National Statistician's Data Ethics Advisory Committee. He is an ethicist whose work focuses on auditing, interpretability, and the ethical governance of complex algorithmic systems; his research primarily concerns digital ethics in relation to algorithms, machine learning, artificial intelligence, predictive analytics, Big Data, and medical expert systems.
The event will be held via Zoom on Friday, May 14, 2021, from 12:00 to 1:30 PM. If you are interested in attending, please register HERE by May 12. Zoom information will be sent to registrants on May 13.
Brent's talk is entitled "Bias Preservation in Fair Machine Learning." The abstract follows:
Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning models. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups. Recognising this problem, much work has emerged in recent years to test for bias in machine learning and AI systems using various fairness and bias metrics. Often these metrics address technical bias but ignore the underlying causes of inequality and take for granted the scope, significance, and ethical acceptability of existing inequalities. In this talk I will introduce the concept of "bias preservation" as a means to assess the compatibility of fairness metrics used in machine learning against the notions of formal and substantive equality. The fundamental aim of EU non-discrimination law is not only to prevent ongoing discrimination, but also to change society, policies, and practices to 'level the playing field' and achieve substantive rather than merely formal equality. Based on this, I will introduce a novel classification scheme for fairness metrics in machine learning based on how they handle pre-existing bias and thus align with the aims of substantive equality. Specifically, I will distinguish between 'bias preserving' and 'bias transforming' fairness metrics. This classification system is intended to bridge the gap between notions of equality, non-discrimination law, and decisions around how to measure fairness and bias in machine learning. Bias transforming metrics are essential to achieve substantive equality in practice. To conclude, I will discuss how to choose appropriate metrics to measure bias and fairness in practice.
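For readers less familiar with the metrics at issue, the minimal sketch below (an illustration assumed for this announcement, not code or data from the talk) computes two widely used fairness metrics on toy arrays. A metric that conditions on observed ground-truth labels, such as the true-positive-rate component of equalized odds, carries any historical bias in those labels into its "fair" target, which is one sense in which a metric can preserve bias; a metric such as demographic parity does not condition on those labels. How each specific metric is classified under the talk's scheme is the speaker's to present.

```python
import numpy as np

# Toy data: group membership (0/1), observed labels, and model predictions.
# These values are illustrative placeholders only.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 0, 1, 0, 1, 0, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups.

    Ignores the (possibly biased) observed labels, so it does not take
    historical outcomes as given.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def tpr_gap(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups
    (one component of equalized odds).

    Conditions on the observed labels, so any bias already embedded in
    those labels is carried forward into the fairness target.
    """
    tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_a - tpr_b)

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("TPR gap (equalized odds):", tpr_gap(y_true, y_pred, group))
```

On these toy arrays the two metrics disagree sharply (gaps of 0.5 and 1.0, respectively), which is the practical point: the choice of metric, not just the model, determines what counts as "fair."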
The Social Sciences and Humanities Research Committee (SSHRC) aims to build a vibrant social sciences and humanities research community at Drexel University. We are open to faculty members from all academic fields. If you are interested in joining the SSHRC's 365 Group (on Outlook) to receive information related to social science research, please contact Brooklyn Daly at Bmd88@drexel.edu.