In an April 21 article on the use of decision-making technologies in law enforcement, public benefit eligibility determination and other government functions, Law360 cited scholarship by Professor Hannah Bloch-Wehba.
The article notes that journalists and advocacy groups have raised concerns that artificial intelligence algorithms may incorporate unrecognized biases, producing faulty conclusions about DNA test results, the likelihood of recidivism among those with criminal convictions and more. Lawsuits have been filed seeking to uncover the algorithms government agencies use in various systems.
Bloch-Wehba has explored this problem, the article notes, citing her forthcoming article in the Fordham Law Review. In “Access to Algorithms,” Bloch-Wehba argues that legislation protecting the public at large is a more effective strategy for promoting transparency than litigation brought to address an individual’s concerns.
“The people who are directly affected by these kinds of tools — whether it’s risk assessment or Medicaid or predictive policing — are not always going to be in a position to seek access to information about how they function,” she told Law360, citing barriers such as access to legal representation and time constraints.
The problem is complex, Bloch-Wehba explained, because governments usually procure algorithmic decision systems from private vendors, who treat their systems as trade secrets and require nondisclosure agreements as a condition of use.
“It basically puts the government to an impossible choice,” she told Law360. “They can’t reveal an algorithm they already, through contract, promised to keep secret.”
The article notes that policymakers in several states are studying the issue and exploring legislation that would set guidelines for the procurement and use of automated decision systems by government agencies.