Drexel Joins U.S. Department of Commerce's AI Safety Consortium

Drexel University has joined more than 200 of the nation’s leading artificial intelligence stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. The effort, led by the Department of Commerce’s National Institute of Standards and Technology, will bring together academics, government and industry researchers, civil society organizations and AI creators and users to form the U.S. AI Safety Institute Consortium.

“Drexel researchers have been at the forefront of developing and applying AI technology to address some of society’s biggest challenges, as well as establishing parameters for its safe and responsible deployment in these efforts,” said Gwynne Grasberger, Drexel’s associate vice provost for Research Development. “We are encouraged by the creation of this consortium, as an important step toward preparing the country for the future of artificial intelligence, and happy to participate in realizing its vision.”

The Consortium was created in support of the Biden administration’s executive order last fall calling for the safe, secure and trustworthy development and use of artificial intelligence. Its work will focus on evaluating risks related to security, misuse and control in advanced AI models and developing guidance on how to safely mitigate them.

It will function as a critical pillar of the NIST-led U.S. Artificial Intelligence Safety Institute and will ensure that the Institute’s research and testing work is integrated with the broader AI safety community around the country and the world. Members will help NIST develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help prepare the U.S. to address the capabilities of the next generation of AI models and systems, from frontier models to new applications and approaches, with appropriate risk management strategies.

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said Secretary of Commerce Gina Raimondo. “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

Drexel will be represented on the consortium by Grasberger, Matthew Stamm, PhD, an associate professor in the College of Engineering; Edward Kim, PhD, an associate professor, and Kaidi Xu, PhD, an assistant professor, both in the College of Computing & Informatics; and Asta Zelenkauskaite, PhD, a professor in the College of Arts and Sciences.

This group is part of a cohort from across the University that has been establishing Drexel’s leadership in applying AI technology, opening a conversation about how to put it to use safely, ethically and equitably, and helping society better understand its impact.

Drexel has also forged ahead in developing guidance and training for faculty as they integrate the technology into their curricula. In addition to convening a working group on generative AI to produce the University’s “Report on Generative AI’s Educational Impact,” Drexel provides the latest information and strategies through its Teaching and Learning Center for faculty who are teaching students about the technology or using it as a teaching tool. The University has also identified artificial intelligence as one of its areas of strength and opportunity for which it will support interdisciplinary collaborative research.

As part of their work with the Consortium, the Drexel researchers will share their expertise in assessing safety and security threats posed by synthetic media, like deepfakes; authenticating content that may have been manipulated or falsified using AI; and understanding how fake and AI-generated media can be used to spread misinformation and disinformation.

“The applications of AI are developing at an unprecedented pace, and they are already having a massive impact on our way of life,” Grasberger said. “Harnessing this technology will require not only a better understanding of the benefits and pitfalls of using it, but also a set of guardrails that center the public interest.”


For more information about the Consortium visit: https://www.federalregister.gov/documents/2023/11/02/2023-24216/artificial-intelligence-safety-institute-consortium