
WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, and Thom Tillis (R-NC) – the bipartisan co-chairs of the Senate Cybersecurity Caucus – introduced the Secure Artificial Intelligence Act of 2024, legislation to improve the tracking and processing of security and safety incidents and risks associated with Artificial Intelligence (AI). Specifically, this legislation aims to improve information sharing between the federal government and private companies by updating cybersecurity reporting systems to better incorporate AI systems. The legislation would also create a voluntary database to record AI-related cybersecurity incidents, including so-called “near miss” events.

As the development and use of AI grow, so does the potential for security and safety incidents that harm organizations and the public. Currently, efforts within the federal government – led by the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) – play a crucial role in tracking cybersecurity vulnerabilities through the National Vulnerability Database (NVD) and the Common Vulnerabilities and Exposures (CVE) Program, respectively. The National Security Agency (NSA), through its Cybersecurity Collaboration Center, also provides intel-driven cybersecurity guidance for emerging and chronic cybersecurity challenges through open, collaborative partnerships. However, these systems do not currently reflect the ways in which AI systems can differ dramatically from traditional software, including how exploits developed to subvert AI systems (a body of research often known as “adversarial machine learning” or “counter-AI”) often do not resemble conventional information security exploits. This legislation updates current standards for cyber incident reporting and information sharing at these agencies to include and better protect against the risks associated with AI. The legislation also establishes an Artificial Intelligence Security Center at the NSA to drive counter-AI research, provide an AI research test-bed to the private sector and academic researchers, develop guidance to prevent or mitigate counter-AI techniques, and promote secure AI adoption.

“As we continue to embrace all the opportunities that AI brings, it is imperative that we continue to safeguard against the threats posed by – and to – this new technology, and information sharing between the federal government and the private sector plays a crucial role,” said Sen. Warner. “By ensuring that public-private communications remain open and up-to-date on current threats facing our industry, we are taking the necessary steps to safeguard against this new generation of threats facing our infrastructure.”

"Safeguarding organizations from cybersecurity risks involving AI requires collaboration and innovation from both the private and public sector,” said Sen. Tillis. "This commonsense legislation creates a voluntary database for reporting AI security and safety incidents and promotes best practices to mitigate AI risks. Additionally, this bill would establish a new Artificial Intelligence Security Center, within the NSA, tasked with promoting secure AI adoption as we continue to innovate and embrace new AI technologies."  

Specifically, the Secure Artificial Intelligence Act would:

· Require NIST to update the NVD and require CISA to update the CVE Program or develop a new process to track voluntary reports of AI security vulnerabilities;

· Establish a public database to track voluntary reports of AI security and safety incidents;

· Create a multi-stakeholder process that encourages the development and adoption of best practices that address supply chain risks associated with training and maintaining AI models; and

· Establish an Artificial Intelligence Security Center at the NSA to provide an AI research test-bed to the private sector and academic researchers, develop guidance to prevent or mitigate counter-AI techniques, and promote secure AI adoption.

“IBM is proud to support the Secure AI Act that expands the current work of NIST, DHS, and NSA and addresses safety and security incidents in AI systems. We commend Senator Warner and Senator Tillis for building upon existing voluntary mechanisms to help harmonize efforts across the government. We urge Congress to ensure these mechanisms are adequately funded to track and manage today’s cyber vulnerabilities, including risks associated with AI,” said Christopher Padilla, Vice President, Government and Regulatory Affairs, IBM Corporation.

“Ensuring the safety and security of AI systems is paramount to facilitating public trust in the technology. ITI commends U.S. Senators Warner and Tillis for introducing the Secure Artificial Intelligence Act, which will advance AI security, encourage the use of voluntary standards to disclose vulnerabilities, and promote public-private collaboration on AI supply chain risk management. ITI also appreciates that this legislation establishes the National Security Agency’s AI Security Center and streamlines coordination with existing AI-focused entities,” said ITI President and CEO Jason Oxman.

“AI security is too big of a task for any one company to tackle alone,” said Jason Green-Lowe, Executive Director of the Center for AI Policy. “AI developers have much to learn from each other about how to keep their systems safe, and it’s high time they started sharing that information. That’s why the Center for AI Policy is pleased to see Congress coordinating a standard format and shared database for AI incident reporting. We firmly support Senator Warner and Tillis’s new bill.”

Full text of the legislation is available here. A one-page summary of the legislation is available here.

###