Researchers who study how AI systems explain their decisions report persistent problems and disagreements in the field; some even argue that most of the published research adds little value.
Rosina Weber, professor of information science and computer science at CCI, and her three co-authors have published the cover article in the latest issue of AI Magazine, arguing that explainable AI (XAI) faces serious problems. In "XAI is in Trouble," the authors highlight four main issues:
- Disagreement about what XAI should cover.
- Lack of clear definitions and inconsistent use of terminology.
- Questionable motivations behind XAI research.
- Inconsistent and limited methods for evaluating XAI.
Weber and her co-authors suggest that these problems may stem from AI researchers succumbing to the pitfalls of interdisciplinary work. The article offers recommendations for addressing these challenges and improving the quality of XAI research.
Access the full article here.