
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA), Vice Chair of the Senate Intelligence Committee, led five of his colleagues in pressing six generative artificial intelligence (GenAI) companies for answers regarding their engagements with the Department of Defense (DoD), the rules under which DoD can access and use their technology, and the internal controls that exist in the event their technology is misused by DoD. These letters come after the Trump administration’s unprecedented decision to designate Anthropic as a supply chain risk following a dispute over DoD’s demand to use Anthropic’s AI systems with zero restrictions, including to surveil Americans and engage fully autonomous weapons.

In addition to Sen. Warner, the letters were signed by Sens. Kirsten Gillibrand (D-NY), Mark Kelly (D-AZ), Elissa Slotkin (D-MI), Tim Kaine (D-VA), and Chris Coons (D-DE). The letters were sent to xAI, OpenAI, Alphabet, Meta, AWS, and Microsoft AI.

In the letters, the senators expressed their support for the modernization of national security technologies that ensures the U.S. is defense ready and benefiting from collaboration with the country’s leading AI innovators. However, they stressed the need to anticipate the potential failure modes of transformative technologies like AI, whether those failures stem from intentional misuse or insufficient oversight. The senators are concerned about the apparent retaliation by the Trump administration against private sector partners seeking to ensure the existence of adequate safeguards, particularly against the backdrop of a DoD AI Strategy that appears to downplay longstanding AI governance measures.

“Recent developments concerning the Department of Defense’s approach to AI suggest a troubling disregard for the kinds of safeguards in place to ensure that AI is being adopted with robust accountability. For instance, while foundational documents – such as the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework, the Office of Management and Budget’s Memorandum on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, and the Final Report of the National Security Commission on Artificial Intelligence – underscore the central role of governance in effective AI utilization, the Department of Defense’s January 9th Artificial Intelligence Strategy for the Department of War [sic] is conspicuously silent on this fundamental mission-enabler. While the Strategy’s emphasis on rapid adoption of commercial capabilities represents a legitimate objective, its clear disregard for mechanisms to ensure proper legal oversight (as well as meaningful test, evaluation, and validation technical safeguards) undermines U.S. security and American values,” the senators stated.

The senators continued, “A recent highly publicized dispute between the Department and a leading American AI firm further suggests that this inattention towards – or even deliberate flouting of – AI governance may represent a systemic problem. Specifically, the Department recently rejected an existing vendor’s request to memorialize a restriction on the use of its models for fully autonomous weapons or to facilitate bulk surveillance of Americans. These concerns are not unreasonable: against the recent backdrop of DoD lethal activity in Latin America – with the routine sidelining of military attorneys and the subversion of longstanding norms on the use of lethal force – the Department’s aggressive insistence on an 'any lawful use' standard creates unacceptable reputational risk and legal uncertainty for American companies.”

“Equally concerning, Defense Secretary Hegseth has taken an extraordinary and unprecedented step to designate a leading American tech company as a supply-chain risk to national security, with the ostensible intent of intimidating those prospective and existing government commercial and academic partners who might seek to ensure adequate safeguards for AI in military operations. An American company fulfilling its contractual duties to the Department of Defense, while exercising its prerogative to ensure its products are lawfully, ethically, and appropriately used by the Department of Defense, is not a risk to national security or to America’s supply chain. While the ultimate responsibility for establishing robust and binding mechanisms to ensure lawful, appropriate, and effective use of AI rests with Congress, in the interim it is reasonable for commercial providers to ensure that products with outsized impact are governed with appropriate compliance mechanisms. Ultimately, strong AI governance for military and intelligence activity ensures the safety of servicemembers, the nation, and our allies and partners, while promoting clear and predictable norms in the face of less scrupulous adversaries,” the senators added.

The senators concluded the letters with a list of questions intended to bring transparency to the required human oversight of AI models used by DoD; the legal guidelines in place to ensure that AI models are not conducting mass domestic surveillance of Americans; the circumstances in which AI technology companies could acquiesce to any unlawful uses of their products by DoD, and what responsibility they would have to notify Congress of such use; and what oversight AI technology companies would have of DoD’s military judgments, decision-making, or operations. The senators requested a response from the companies by April 3, 2026.

Read the full letters here or below.

As some of Congress’s most vocal proponents for the modernization of national security missions with transformative technology, we have actively sought to ensure that the Department of Defense and Intelligence Community are equipped with capabilities drawn from the nation’s leading innovators. These mission users – whose work has been guided by longstanding norms, legal procedures, and accountability mechanisms – benefit greatly from close collaboration with America’s leading AI and advanced compute providers.

Correspondingly, American companies generate enduring public trust when Americans associate their products with efforts that enhance national security in effective, ethical, and lawful ways. At the same time – particularly against the backdrop of numerous pressures on those longstanding norms, procedures, and accountability mechanisms – it is imperative to anticipate potential failure modes for transformational technologies like AI, whether stemming from intentional misuse or insufficient oversight. 

Recent developments concerning the Department of Defense’s approach to AI suggest a troubling disregard for the kinds of safeguards in place to ensure that AI is being adopted with robust accountability. For instance, while foundational documents – such as the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework, the Office of Management and Budget’s Memorandum on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, and the Final Report of the National Security Commission on Artificial Intelligence – underscore the central role of governance in effective AI utilization, the Department of Defense’s January 9th Artificial Intelligence Strategy for the Department of War [sic] is conspicuously silent on this fundamental mission-enabler. While the Strategy’s emphasis on rapid adoption of commercial capabilities represents a legitimate objective, its clear disregard for mechanisms to ensure proper legal oversight (as well as meaningful test, evaluation, and validation technical safeguards) undermines U.S. security and American values.

A recent highly publicized dispute between the Department and a leading American AI firm further suggests that this inattention towards – or even deliberate flouting of – AI governance may represent a systemic problem. Specifically, the Department recently rejected an existing vendor’s request to memorialize a restriction on the use of its models for fully autonomous weapons or to facilitate bulk surveillance of Americans. These concerns are not unreasonable: against the recent backdrop of DoD lethal activity in Latin America – with the routine sidelining of military attorneys and the subversion of longstanding norms on the use of lethal force – the Department’s aggressive insistence on an “any lawful use” standard creates unacceptable reputational risk and legal uncertainty for American companies.

Equally concerning, Defense Secretary Hegseth has taken an extraordinary and unprecedented step to designate a leading American tech company as a supply-chain risk to national security, with the ostensible intent of intimidating those prospective and existing government commercial and academic partners who might seek to ensure adequate safeguards for AI in military operations. An American company fulfilling its contractual duties to the Department of Defense, while exercising its prerogative to ensure its products are lawfully, ethically, and appropriately used by the Department of Defense, is not a risk to national security or to America’s supply chain. While the ultimate responsibility for establishing robust and binding mechanisms to ensure lawful, appropriate, and effective use of AI rests with Congress, in the interim it is reasonable for commercial providers to ensure that products with outsized impact are governed with appropriate compliance mechanisms. Ultimately, strong AI governance for military and intelligence activity ensures the safety of servicemembers, the nation, and our allies and partners, while promoting clear and predictable norms in the face of less scrupulous adversaries.

Furthermore, the unprecedented designation of an American company as a risk to the national security of the United States, especially under such a weak policy and legal rationale, creates uncertainty among our allies and partners. Many of these countries are looking to incorporate American technology into their own national security and other government functions, and the specter of the Secretary utilizing a very serious sanction against more American companies, seemingly out of a sense of pique, will harm American companies in these global markets.

Your company has reportedly agreed in principle to have your AI model deployed for military purposes or to facilitate such deployment, subject to an “any lawful use” standard. Accordingly, we respectfully request your response to the following questions by April 3, 2026:

  1. Which specific models has your company made available to the Department of Defense, including Combat Support Agencies? Please specify the computing environments and associated classification levels (via classified courier, if necessary).
  2. Have the models made available to the Department of Defense been trained or tested to deploy lethal autonomous warfare without human oversight or to conduct bulk surveillance of Americans? If so, please specify the training or testing that was conducted, and provide the results of any such training or testing.
  3. Does provision of your product include a contractual requirement for a human on the loop for autonomous kinetic operations? If not, please provide a clear rationale.
  4. Does provision of your product include any specific, legally enforceable protections ensuring your AI model is not used to conduct bulk surveillance on Americans in violation of the law? If so, please specify which laws these provisions explicitly reference. If not, please provide a clear rationale.
  5. What additional forms of AI governance – including documentation, testing and validation, auditability, and performance monitoring – do you ensure through contractual or technical controls for products used in high-impact national security contexts?
  6. Under what circumstances would your company acquiesce to any unlawful uses of its product by the Department of Defense?
  7. To the extent that your contract permits appropriately cleared Forward Deployed Engineers, does your company have internal reporting mechanisms or procedures to enable cleared staff to alert uncleared corporate leadership of potential misuse? Please provide documentation sufficient to substantiate any such mechanisms or procedures.
  8. Under what circumstances would your company inform the appropriate Congressional Committees of unlawful or unethical use of your products by the Department of Defense? If there is a contractual limitation on notifying Congress, please identify that limitation and provide documentation sufficient to substantiate it.
  9. Does your model Usage Policy provide you with special capabilities to control, oversee, second-guess, impede, or intervene in the Department of Defense’s military judgments, decision-making, or operations?

Thank you for your attention to this matter.

###

* High-quality photographs of Sen. Mark R. Warner are available for download here *

Photos may be used online and in print, and can be attributed to ‘The Office of Sen. Mark R. Warner’