Drexel Joins DARPA Effort to Develop AI Algorithm to Mimic Human Decision Making in Complex, Rapidly Changing Scenarios
Deciding what to do in situations where there is no right answer is an unenviable challenge for any leader. Doing it in a matter of seconds under the pressure of a rapidly changing scenario — like a mass-casualty event or natural disaster — requires a great deal of training, experience, and sound judgment. A new project, funded by the Defense Advanced Research Projects Agency, aims to make the guidance of experienced decision-makers available to people who are thrust into leadership positions during crisis situations. Researchers from Drexel University’s College of Computing & Informatics are part of a team led by Parallax Advanced Research Corporation, which also includes Knexus Research Corporation and the U.S. Naval Medical Research Unit – Dayton, working to train and test the artificial intelligence algorithms that will drive the technology.
DARPA’s $4 million “In the Moment” (ITM) program presents the challenge of developing AI algorithms that can both replicate the decision-making process of experienced leaders and apply it to new, rapidly evolving crisis situations. According to the Agency’s announcement, the goal of the program is to create the foundation for trusted algorithmic decision-making in challenging domains, such as medical triage, where there is no right answer, and, as a result, there is no preexisting ground truth on which to train the program.
“ITM seeks to use the algorithmic expression of key human attributes as the basis for trust in algorithmic decision-makers,” according to the program announcement. “ITM will investigate this basis for trust in the context of human off-the-loop decision-making in difficult domains and seeks to enable the development, evaluation, and fielding of algorithmic decision-makers in difficult domains.”
The project will test the AI algorithms in increasingly complex situations: first, triage for small military units in austere environments; and second, triage for a mass-casualty event.
“The triage domain allows us to get at core issues around trust and delegating decision-making to go beyond the state of the art in AI,” said Matt Turek, PhD, DARPA’s ITM program manager and deputy director of the Agency’s Information Innovation Office. “The focus on triage will encourage research teams to work directly on some of the hardest decision-making challenges possible.”
As part of the Parallax team, Drexel’s researchers will apply their expertise in explainable artificial intelligence and case-based reasoning to extract, train, augment and test the AI system, which Parallax calls the Trustworthy Algorithmic Delegate and which combines various AI components. The Drexel team is led by Rosina Weber, PhD, a professor in the College of Computing & Informatics and an expert in case-based reasoning and explainable AI. Combining these two techniques enables the program to produce justifications for its decisions, which are made based on previous similar experiences.
“A case-based reasoning approach is ideal for a technological challenge like this, because there is a fairly clear justification for the decisions the program is making,” Weber said. “This makes it easier for the human decision-maker to understand its logic — which is expected to help make the human decision maker willing to delegate to the algorithm.”
Case-based reasoning differs from other artificial intelligence methods, such as neural networks, which “train” on massive amounts of input data and draw on the patterns they extract to produce an output — though the precise reasoning behind that output may be difficult to define. By contrast, case-based reasoning is more similar to the process of referencing legal precedent when making a legal argument. The program is trained on a set of previous situations, or cases — essential units of knowledge representation that can be augmented with contextual scenarios and domain knowledge while still benefiting from other machine learning techniques — and from these previous cases it can generalize to produce a decision for a new situation using only the local information that is available.
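To make the precedent analogy concrete, the retrieve-and-reuse core of case-based reasoning can be sketched in a few lines of Python. This is a minimal illustration only, not the actual Trustworthy Algorithmic Delegate; the case features, similarity measure, and decisions here are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One prior triage decision: the situation plus the choice made."""
    features: dict   # observable situation attributes, scaled 0-1 (hypothetical)
    decision: str    # what the trusted human decided
    rationale: str   # why, in the decision-maker's own words

def similarity(a: dict, b: dict) -> float:
    """Average closeness over the features both situations share."""
    keys = a.keys() & b.keys()
    if not keys:
        return 0.0
    return sum(1 - abs(a[k] - b[k]) for k in keys) / len(keys)

def retrieve_and_reuse(case_base: list, query: dict):
    """Find the most similar past case and reuse its decision,
    citing that case as the justification (the 'precedent' step)."""
    best = max(case_base, key=lambda c: similarity(c.features, query))
    score = similarity(best.features, query)
    justification = (f"Reusing decision '{best.decision}' "
                     f"(similarity {score:.2f}): {best.rationale}")
    return best.decision, justification

# A tiny, entirely hypothetical case base of prior decisions.
case_base = [
    Case({"bleeding": 0.9, "conscious": 0.0}, "treat first",
         "severe hemorrhage outweighs other injuries"),
    Case({"bleeding": 0.1, "conscious": 1.0}, "delay treatment",
         "stable and alert; resources needed elsewhere"),
]

# A new situation is resolved against the nearest precedent.
decision, why = retrieve_and_reuse(
    case_base, {"bleeding": 0.8, "conscious": 0.2})
print(decision)  # prints: treat first
print(why)
```

Because the output is tied to a specific retrieved case and its recorded rationale, the justification comes essentially for free, which is the property the article highlights for building human trust.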
“I've seen firsthand deployed soldiers having to deal with casualties when evacuation to the nearest medical facility wasn't an immediate option,” said Christopher Rauch, JD, a military veteran, and doctoral researcher in the College of Computing & Informatics, who is a member of Weber’s team. “The ITM program would bring the experience of senior medical staff to the field and has the real potential to help save lives.”
According to Rauch, who also holds an advanced legal degree, reasoning in these types of scenarios shares some characteristics of complex litigation.
“Not only does the information available change rapidly, but there is no numerical algorithm that can take the place of human judgment,” Rauch said. “That is why the enhanced case-based reasoning approach pioneered by Dr. Weber, which bases difficult decisions in part on the past decisions of trusted humans, is the cornerstone of the decision-making process.”
The team plans to test the Trustworthy Algorithmic Delegate in two phases. The first will focus on aligning the program’s decisions with those of a group of trusted human decision-makers. The second, more complex phase will look at how the program can align with one specific trusted human decision-maker.
“The idea is that, where necessary, the human operator will trust that the AI decision-maker will make decisions that align with what varied experts think should happen,” said Viktoria Greanya, PhD, chief scientist at Parallax. “The assumption is that everybody makes their decisions in a different manner, and the AI should be able to align with the specific person who’s delegating to the algorithmic decision-maker.”
If it’s successful, the program is also intended to produce a framework for creating other algorithms that can express key attributes that are aligned with trusted humans, according to DARPA.
In addition to Weber and Rauch, Drexel College of Computing & Informatics doctoral students Mallika Mainali, Ximing Wen, Prateek Goel, and Anik Sen are participating in this research. For more information about the DARPA In the Moment program visit: https://www.darpa.mil/program/in-the-moment
Drexel News is produced by University Marketing and Communications.