
NSF-Backed Study Investigates the Ethics of Algorithms

By Kylie Gray

January 8, 2018

Computer algorithms — the sequences of instructions or rules computers follow to solve problems — influence many aspects of our lives, from the products we buy to the people we date and even the jobs we are offered. But who makes algorithms and code, and how do their values translate into the work they do? That’s what Kelly Joyce, PhD, wanted to find out in 2013, when she and a fellow researcher were awarded a grant from the National Science Foundation for their study, “The Ethics of Algorithms.”

“There is now a lot of talk about algorithms discriminating, but at the time, the issue was just starting to bubble up,” says Joyce, Director of Drexel’s Center for Science, Technology and Society and professor of sociology.

Joyce partnered with Kristene Unsworth, PhD, a visiting assistant professor at Stockton University, for the four-year study. They spent two years studying teams of computer scientists and engineers who build big data sets — conducting interviews and sitting in on dozens of meetings. With the second half of the grant, they took what they learned from these conversations to write data-driven scenarios that engage future computer scientists in ethical problem solving.


The innovative project was one of the first of its kind to create scenarios specific to algorithms and big data, employing a critical “upstream” approach that focused on education. The decision to focus on big data was influenced by examples of algorithms increasing inequalities in a variety of fields — from health care to law enforcement and business.

A prime example is facial recognition software, which police have used to identify criminal suspects. Because computer scientists build on earlier techniques and training data that focused largely on white faces, the software is more likely to misidentify people of color.
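The disparity is measurable even from the outside. As a rough illustration, here is a minimal Python sketch of how an evaluator might score a recognition model separately for each demographic group; the function and data are hypothetical, not drawn from the study:

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Measure identification accuracy separately per demographic group.

    predictions, labels and groups are parallel lists: the model's
    predicted identity, the true identity, and the demographic group
    annotation for each test face.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for predicted, actual, group in zip(predictions, labels, groups):
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# A model trained on data skewed toward one group might return, say,
# {'overrepresented': 0.96, 'underrepresented': 0.78}, a gap that a
# single overall accuracy number would hide.
```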

As Joyce points out, although data about humans may be perceived as “neutral, inclusive or representative,” it is often more complicated than it appears. To be meaningful, data needs context.

“Any time you automate a process, you are going to have the potential for bias. We don’t know the decisions that were made about what to include or exclude in an algorithm — we just encounter the effects of them,” Joyce says. “We don’t want to keep reacting. We want to figure out what is causing the effects and try to prevent them.”

Joyce emphasizes that such bias is often unintentional, reflecting the fact that computer scientists and engineers are not yet trained to deal with the range of ways big data affects human subjects. Because the field itself acknowledges this gap, Joyce and Unsworth were met with enthusiasm when they approached three Philadelphia-area labs about observing their data teams.

A Two-Part Study

For the first half of the project, the investigators, along with Michael Dickard, PhD candidate in Information Studies, and Matthew Lesser, MS Science, Technology & Society ’15, embedded themselves in the world of data scientists for two years. They attended big and small group meetings, paying attention to when and how ethical considerations surfaced, and conducted and analyzed over 20 in-depth interviews.

“One finding, which is perhaps not surprising, is that the values of computer scientists and engineers are shaped by their field,” Joyce says. “They love speed and modularity — pieces of code that can be used elsewhere. They appreciate the elegance of certain kinds of code. I found it interesting to see how creative and artistic computer scientists are when they’re working with big data.”

Not among the highest values of those observed? The ability to recognize and anticipate some of the issues related to data about humans. That’s where the second half of the project became crucial: The team set out not to change the values of computer scientists, but to expose those scientists to strategies for recognizing and dealing with the ethical challenges they may face in their work.

“Ethics has historically been taught to STEM students through philosophy and concepts like ‘beneficence,’ but NSF wanted to see more empirically based ethical discussions that come right out of the experiences of computer scientists. The hope is that the scenarios will be useful for them because they’re designed to speak directly to their worlds,” Joyce says.

Taking on the scenarios was a team of three Drexel master’s students in Science, Technology and Society: Kendall Darfler ’17, Dalton George ’17 and Jason Ludwig ’17. The “dream team” — so called by Joyce due to their creativity and willingness to take on the difficult task — wrote and tested six scenarios. Already bridging the domains of social and computer science, the researchers added fiction writing to the list, as they evoked the voice and terminology of computer scientists and engineers to describe ethical dilemmas drawn from real-life observations.

The scenarios touch on a range of topics, from data validity to discrimination to sensitive information; several depict junior employees raising ethical concerns about company products or policies to their skeptical superiors. The scenarios are followed by open-ended questions for conversation and debate.

The researchers took the scenarios to five universities to see how they would play out in real classrooms. While Joyce and Unsworth initially facilitated the conversations, the “dream team” gradually took the helm, learning how to frame discussion to encourage debate.

“Since we were sort of disciplinary outsiders, I had expected a lot of the computer scientists to be suspicious of our work. In reality, it was nearly the complete opposite,” says Ludwig. “Many of the people we spoke to mentioned that they, too, thought it was important to discuss the ethics and politics of algorithm design, and felt that there was a need for the kind of project we were doing.”

George adds, “Working on this project has given me a greater appreciation for the disciplines of computer science and software engineering, and the understanding that cooperation across the boundaries of vastly different disciplines is possible.”

Students at each of the five universities were generally willing to engage with both sides of the dilemma, Joyce says, and understood that there was no clear right or wrong answer. Importantly, these conversations took place without the added pressures of bosses, business interests and deadlines, illustrating the value of ethical education before entering the workforce.

The team collected 59 evaluations from course attendees and professors, which they used to further refine the scenarios. They also worked closely with a computer scientist to make sure the scenarios were as accurate and on-voice as possible.

The hard work paid off; four of the scenarios, which are now accessible on the project website, were recently posted on the National Academy of Engineering’s Online Ethics Center. One of them, “Taking a Product to Market,” was highlighted as a featured resource in the Computer, Math and Physical Sciences section. The distinction, in addition to the show of support from the NAE, means that more educators interested in STEM ethics will be exposed to the scenarios.

A Seat at the Table for Social Scientists

“Black box” systems — those that reveal input and output data but not the internal workings of their algorithms — are relied upon for functions like hospital grading and city planning, illustrating the necessity for this kind of ethical STEM research. In recent years, researchers like Joyce and Unsworth have taken up the call for this important work in the field of Science and Technology Studies.
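Auditing such a system from the outside is possible precisely because its inputs and outputs stay visible. As an illustration, assuming a hypothetical scoring function and records rather than any real system, an auditor might compare a black-box model’s average output across groups without ever seeing its code:

```python
from collections import defaultdict

def audit_black_box(score, records, attribute):
    """Compare a black-box model's average score across values of a
    sensitive attribute, using only its inputs and outputs.

    score is an opaque callable mapping a record (dict) to a number;
    records is a list of dicts; attribute is the key whose values
    partition the records (a neighborhood, for example).
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for record in records:
        value = record[attribute]
        sums[value] += score(record)
        counts[value] += 1
    return {value: sums[value] / counts[value] for value in sums}
```

A systematic gap between groups does not by itself prove the model is unfair, but it flags the system for exactly the kind of scrutiny Joyce and Unsworth advocate.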

Still up for debate is how much computer scientists, engineers and mathematicians should be expected to know about the human subjects that their algorithms affect. In a 2016 article posted on Technical.ly Philly, Unsworth called for technologists to take responsibility for the fairness of their creations. Her comment was in response to a controversial algorithm, funded by the City of Philadelphia, that predicts an inmate’s likelihood of reoffending, which she feared could reinforce racist criminal justice practices.

Joyce echoes Unsworth’s concerns, but also believes that instead of asking computer scientists to become experts in social science, we should rely on multidisciplinary teams. While training STEM students to recognize and anticipate the complexity of working with data about humans is important, she says, project design and implementation require the deep expertise provided by a range of fields.

“A lot of universities are starting data science programs, and if you look at who is considered an expert, there are computer scientists, mathematicians, but rarely social scientists. Even though we’re starting to have this broader discussion about the politics and unintended consequences of big data, we are creating programs that are not going to train students to think about that,” Joyce says. “In the end, you make a case for trying to train students in STEM to think about these issues, but also in terms of research teams, to really think about who is at the table.”