The Biden administration is proposing a reporting requirement for leading artificial intelligence (AI) developers and cloud providers to allow the federal government to evaluate their technology’s safety and defense capabilities, the Commerce Department announced.
The department said Monday the proposed rule from its Bureau of Industry and Security would require developers of “frontier” AI models and computing clusters to provide detailed reporting on their developmental activities and cybersecurity measures.
The rule would also require developers to report results from red teaming, the process of testing an AI system for flaws and vulnerabilities. The bureau said that would include “testing for dangerous capabilities like the ability to assist in cyberattacks or lower the barriers to entry for non-experts to develop chemical, biological, radiological, or nuclear weapons.”
The proposal comes amid a wider push from the federal government to better understand the capabilities and risks of AI as the technology develops. Commerce Secretary Gina Raimondo noted in a statement that AI is “progressing rapidly” with both “tremendous promise and risk.”
“The information collected through the proposed reporting requirement will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors,” the Commerce Department security bureau said in a release.
The proposal follows a pilot survey of AI developers conducted earlier this year, the bureau said.
The conversation over guardrails for AI has spread across government, including Congress, where lawmakers have held various hearings and meetings with experts to understand the technology’s risks and benefits.
President Biden last year issued a sweeping executive order on AI safety, risks and data privacy. The AI Safety Institute was launched within the Commerce Department as part of that order.
Last month, leading AI companies OpenAI and Anthropic signed agreements with the U.S. government allowing their AI models to be used for research, testing and evaluation.