COLLEGE PARK, Md. - The University of Maryland (UMD) has joined with more than 200 of the nation’s leading artificial intelligence (AI) stakeholders in a wide-ranging new federal effort to improve the trustworthiness and safety of AI systems.
The AI Safety Institute Consortium (AISIC), announced Thursday, Feb. 8 by U.S. Department of Commerce Secretary Gina M. Raimondo, brings together AI creators and users, academics, government and industry researchers, and civil society organizations—all focused on improving the technical and societal benefits of AI, while simultaneously reducing its misuse and any related risks.
“The AI Safety Institute Consortium will allow us to work closely with the federal government and multiple other stakeholders to implement, sustain, and extend priority projects involving research, testing and guidance on AI safety,” said Gregory F. Ball, UMD’s vice president for research. “Given that AI tools and applications are growing at an unprecedented pace, transforming society and changing our way of life, the potential benefits and risks of AI require a much closer examination and a more complete understanding if we are going to truly reap the benefits of this technology.”
In her announcement, Raimondo said that aligning AI with the nation’s societal norms and values—and keeping the public safe—requires a broad, human-centered focus; specific policies, processes and guardrails informed by community stakeholders across various levels of society; and a bold commitment from the public sector.
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” she said. “By working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”
At UMD, activity for the new consortium will be led by both the Institute for Trustworthy AI in Law & Society (TRAILS) and the Applied Research Laboratory for Intelligence and Security (ARLIS). TRAILS launched last year with a $20 million grant from the National Science Foundation and the National Institute of Standards and Technology (NIST), and already has a strong record of research and scholarship related to trustworthy AI underway. ARLIS, a Department of Defense (DoD) University Affiliated Research Center within UMD, supports research and development, policy, academic outreach and workforce development in AI for the DoD and intelligence community.
The university also has more than 200 researchers and a growing list of centers and programs spanning multiple disciplines that develop AI tools, explore the safety and ethics of AI, and examine human interactions with AI, including:
The Industrial AI Center, working to bring the potential of AI to a wide range of industries, including aerospace, energy, health care, manufacturing and others.
The Values Centered Artificial Intelligence initiative, supported by a UMD Grand Challenges grant, dedicated to developing theories, practices and tools to ensure that AI respects human values.
The Center for Governance of Technology and Systems (GoTech), committed to exploring the development, governance and sustainment of complex critical infrastructure technologies and networks through rigorous interdisciplinary research.
The Maryland Neuroimaging Center, capable of using brain imaging and scans to assess cognitive markers of trust in, and the usefulness of, AI systems.