Mathematical models and algorithms are often viewed as fair and objective, and therefore free of bias or prejudice. A common perception is that if artificial intelligence (AI) and machine learning technologies base their decisions on big-data patterns and statistical correlations, they can’t be harmful.
This assumption was proven wrong when, in 2018, the American Civil Liberties Union (ACLU) uncovered that Amazon’s face recognition system falsely matched 28 members of the U.S. Congress with mugshots — and the false matches were disproportionately of people of color.
As part of the 11th Annual Philly Tech Week, Drexel’s College of Computing & Informatics (CCI) hosted a May 11 panel discussion on fighting bias in AI and machine learning.
Led by Jerry Overton, CEO of appliedAIstudio, and co-sponsored by CCI’s Diversity, Equity & Inclusion Council, the event featured perspectives from AI experts Logan Wilt (data scientist and senior manager at DXC Technology's AI Practice) and Edward Kim, PhD (associate professor in CCI’s Department of Computer Science).
The panel discussion centered on three main problems in fighting bias in AI, which Overton called “the data, the machine and the organization.”
In recent years, studies have shown that algorithms can exhibit racial and gender bias, discriminate in computer-vision facial recognition systems, and encode gender bias in natural language processing.
As AI becomes more pervasive in consumer-facing technology, it is important to take steps to prevent bias in algorithmic decision-making. However, the panelists explained, avoiding bias in AI and machine learning is a complex problem without a simple solution.
Watch the panel discussion below and/or read more on Technically Philly.