A year after OpenAI publicly released ChatGPT, the artificial intelligence language model that captured the world’s imagination about the possibilities of AI, the European Union and the White House have been working to establish standards and safety measures to address the risks of the technology’s rapid proliferation. The Blueprint for an AI Bill of Rights, issued by the White House Office of Science and Technology Policy in October 2022, and the recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence place the onus on AI developers and companies to ensure their programs are transparent and trustworthy, and cannot be used in ways that jeopardize citizens’ privacy or safety.
But getting a program capable of sifting through unimaginable volumes of data in seconds to also explain what it’s doing in a way that human users can understand is just the first of many challenges facing developers as they expand the applications of AI, according to Rosina Weber, PhD, an information science professor in Drexel University’s College of Computing & Informatics who studies explainable artificial intelligence. Weber’s research looks at ways AI technology can be designed to be transparent in its decision-making, enabling it to support users as they manage challenging problems.