WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA) and Josh Hawley (R-MO) announced they will introduce the AI-Related Job Impacts Clarity Act. This legislation would require major companies and federal agencies to report AI-related layoffs to the Department of Labor, which would compile them into a publicly available report.
“Good policy starts with good data. This bipartisan legislation will finally give us a clear picture of AI’s impact on the workforce – what jobs are being eliminated, which workers are being retrained, and where new opportunities are emerging. Armed with this information, we can make sure AI drives opportunity instead of leaving workers behind,” said Sen. Warner.
“Artificial intelligence is already replacing American workers, and experts project AI could drive unemployment up to 10-20% in the next five years,” said Sen. Hawley. “The American people need to have an accurate understanding of how AI is affecting our workforce, so we can ensure that AI works for the people, not the other way around.”
The AI-Related Job Impacts Clarity Act would:
- Require major companies and federal agencies to report AI-related job effects quarterly – including layoffs and job displacement – to the Department of Labor (DOL).
- Require the DOL to compile data on AI-related job effects and publish a report to Congress and the public.
Read the full bill text here.
###
Warner, Coons, Shaheen, Reed, Kelly, Himes, Krishnamoorthi on Trump’s Middle East AI Giveaway
May 16 2025
WASHINGTON – U.S. Sens. Mark R. Warner (D-VA), Chris Coons (D-DE), Jeanne Shaheen (D-NH), Jack Reed (D-RI), and Mark Kelly (D-AZ), as well as Congressmen Jim Himes (D-CT) and Raja Krishnamoorthi (D-IL), issued the following statement in response to the artificial intelligence deals with the United Arab Emirates and Saudi Arabia that President Trump announced this week:
“Democrats and Republicans have long agreed that American companies must remain the undisputed leader in AI, a rapidly developing technology critical to the future of everything from our national security to manufacturing, finance to health care. We have worked hard to ensure the most powerful AI systems are built here, and we have fought to restrict the most sophisticated chips from reaching China – or those who would grant remote access to China – given Beijing’s use of AI to strengthen its military, crack down on domestic dissent, and compete with the U.S.
“President Trump announced deals to export very large volumes of advanced AI chips to the UAE and Saudi Arabia without credible security assurances to prevent U.S. adversaries from accessing those chips. These deals pose a significant threat to U.S. national security and fundamentally undermine bipartisan efforts to ensure the United States remains the global leader in AI. Rather than putting America first, this deal puts the Gulf first.
“The volume of AI chips Trump is offering for export would deprive American AI developers of highly sought-after chips needed here and slow the U.S. AI buildout. Under this deal, data centers and AI systems that would otherwise be built in America will be built in the Middle East – at the exact time that President Trump says he wants to bring jobs and key industries back home. This deal would incentivize U.S. firms to build the factories of the future overseas, creating significant vulnerabilities in our AI supply chain. If our leading AI firms offshore their frontier computing infrastructure to the Middle East, we could become as reliant on the Middle East for AI as we are on Taiwan for advanced semiconductors – and as we used to be on the Middle East for oil. We should not foster new dependencies on foreign countries for this premier technology.
“Additionally, these deals will provide our highest end chips to G42, a company with a well-documented history of cooperation with the People’s Republic of China. We applaud the administration's efforts to limit exports of advanced AI chips to China, including recent actions to further restrict exports of Nvidia chips. However, these efforts will be for nothing if G42 or other companies with ties to China are given large quantities of our most advanced chips.
“Proponents of the deal argue that China will fill the gap if we do not sell substantial quantities of advanced chips to these countries. This is false. China cannot and will not because China makes fewer chips as a nation than these deals offer, and each is inferior to its U.S.-designed equivalent. This is thanks to the bipartisan efforts under both the Trump and Biden administrations to cut off China’s access to advanced chip manufacturing equipment. These efforts have worked, and we should double down on this success rather than squander the leverage we have won.
“If this deal succeeds, the offshoring of frontier American AI will be recorded as an historic American blunder. People around the world deserve to enjoy the benefits we will reap from AI. However, AI chips must only be exported to trusted companies, in reasonable numbers, and in concert with credible security standards and assurances. We welcome the opportunity to work with the administration to meet these objectives and urge our colleagues in Congress to do the same.”
Senator Warner is Vice Chair of the Senate Intelligence Committee. Senator Coons is Ranking Member of the Senate Appropriations Subcommittee on Defense. Senator Shaheen is Ranking Member of the Senate Foreign Relations Committee. Senator Reed is Ranking Member of the Senate Armed Services Committee. Senator Kelly is a member of the Senate Intelligence Committee. Congressman Himes is Ranking Member of the House Intelligence Committee. Congressman Krishnamoorthi is Ranking Member of the House Select Committee on the Chinese Communist Party.
###
Warner, Blackburn Introduce Legislation to Strengthen U.S. Immersive Technology Leadership
Mar 25 2025
WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA) and Marsha Blackburn (R-TN) introduced the United States Leadership in Immersive Technology Act, legislation that would establish an advisory panel tasked with creating a national immersive technology strategy.
Virtual reality (VR) and augmented reality (AR), collectively known as immersive technology (XR), allow us to blend the digital and physical worlds into one integrated experience. From automatically generating closed captions of live conversations for the hearing impaired to creating personalized, hands-on training modules for students, the benefits of XR are limitless.
Despite being home to some of the world’s largest XR content and hardware producers, the U.S. lags behind other countries in applying these technologies to commercial and personal use. South Korea, the United Kingdom, the European Union, and China have already adopted strategies to embrace immersive technologies. Without a strategy of its own, the U.S. is relinquishing its role in guiding the creation of XR standards on the international stage.
“As the use of immersive technology continues to rapidly rise, it’s essential that we do not get left behind. I’m proud to introduce this legislation that will help to create a national strategy surrounding XR to ensure that the United States remains competitive globally in this crucial industry,” said Sen. Warner.
“We need to stay two steps ahead of our adversaries when it comes to applying immersive technology in American industries and stimulating economic growth,” said Sen. Blackburn. “Our United States Leadership in Immersive Technology Act would make certain we can compete with adversaries like the Chinese Communist Party and safeguard national security as virtual reality and augmented reality become more prevalent on the world stage.”
“The Trump administration has made it clear that American technology leadership is a top priority, and the XR industry is poised for immense growth. This bill is an important step towards achieving that goal,” said Liz Hyman, CEO, XR Association. “XR is being rapidly adopted to tackle challenges in various sectors including workforce training, healthcare, education and agency operations. We are excited by the efforts of this Congress to recognize XR’s potential as a tool to increase efficiency and meet challenges across a broad range of issues.”
The United States Leadership in Immersive Technology Act is endorsed by Google, Meta, HTC, Sony, University of Wyoming, ReframeXR, Qualcomm, NC East Alliance, Transfr, Mynd, Immersive, MediView, Lakeside Metaverse, CareerViewXR, Chocolate Milk & Donuts, XR Association, and the George Washington University’s Digital Trade and Data Governance Hub.
A copy of the bill can be found here.
###
U.S. Sen. Mark R. Warner (D-VA), Vice Chairman of the Senate Select Committee on Intelligence, urged the leaders of federal departments and agencies to promote data collection and transparency around their adoption of artificial intelligence (AI). In a series of letters to 23 department and agency heads, he emphasized the critical importance of collecting data on AI’s impacts in promoting productivity and improving government outcomes, and he posed a series of questions about how agencies are making decisions that weigh the benefits and risks of the technology alongside the experiences of federal employees.
AI is having a profound impact on the workforce across both the public and private sectors, often allowing workers to complete tasks more efficiently and cost-effectively. In Fiscal Year 2022, 20 of 23 federal agencies reported almost 1,200 current and planned AI use cases. However, agencies have not been forthcoming with data on how AI has changed their outcomes, or on how that data is being used to inform future decision-making.
The letter highlights several of these potential and actual use cases, saying, “Per the AI use case inventory, the utilization of artificial intelligence across federal departments and agencies has allowed the federal workforce and contractors to work efficiently and creatively – improving government operations and delivering better results for the American people. These examples include the Social Security Administration using AI to expedite determinations for disability benefits, the Department of Veterans Affairs utilizing AI to capture trends and facilitate processing of veteran feedback, and the Department of Justice applying AI to accurately identify and process threat tips.”
In the letter, Sen. Warner stressed the urgent need for departments and agencies to release data on their AI adoption so they can implement best practices and work to deliver better results for the American people. The letter also asks how feedback from federal employees and contractors is being considered in making decisions that best support both workers and outcomes.
“While government-sourced, publicly-available information provides sector or task-specific summaries of how the aforementioned federal departments and agencies are adopting artificial intelligence, I am concerned about the limitations of this information with respect to the broader adoption at scale of AI in the federal government, including the need for measurable data and conclusive assessments on how individual AI use cases are enhancing the missions of federal departments and agencies,” Sen. Warner continued. “Establishing data collection standards that track the progress of AI’s adoption in the federal government will help better understand the state of integration, assess its effectiveness, implications, and appropriate usages, and guide the direction of future adoption plans.”
Sen. Warner sent the letter to the following federal departments and agencies: the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Justice, Labor, State, the Interior, Transportation, the Treasury, and Veterans Affairs; the Environmental Protection Agency; the General Services Administration; NASA; the Nuclear Regulatory Commission; the National Science Foundation; the Small Business Administration; the Social Security Administration; and the U.S. Agency for International Development.
Sen. Warner, a former tech entrepreneur and Vice Chairman of the Senate Select Committee on Intelligence, has been a longtime leader on AI-related issues, particularly within the federal government. He led bipartisan legislation to help the federal government mitigate risks associated with AI while still being able to benefit from this emerging technology. In May 2024, he introduced bipartisan legislation to improve the tracking and processing of security and safety incidents and risks associated with AI, including through improving information sharing between the federal government and private companies. He also has repeatedly pushed companies to keep their promises to promote security and safety throughout the rollout of novel AI technologies.
A copy of the letter is available here and below:
I write to you today regarding your agency’s utilization of artificial intelligence (AI) systems and AI-enabled technologies, and to request information on your department or agency’s use of those systems and technologies. Specifically, I request sufficient information to understand the purposes for which your department or agency uses those systems, the analyses of the possible and actual uses of AI applications by your department or agency, and the metrics by which your department or agency evaluates the use of those systems, including by federal workers and contractors.
In a world where we are still working to understand the full capabilities and impact of advancements in artificial intelligence, it is critical that the federal government lead in data collection and evidence-based decision-making in the adoption of these technologies. In that same vein, the adoption of AI tools by the federal government should be based on measurable outcomes, such as productivity gains.
The use of artificial intelligence across various occupations and industries is transforming the labor market and broadly impacting the global economy. More specifically, the application of AI in the workforce has yielded promising results, including the potential for increased worker productivity. In many instances, artificial intelligence has allowed tasks to be completed faster and more efficiently, allowing workers to focus on high-value responsibilities and expand their range of work.
The private sector – particularly innovative artificial intelligence companies and the businesses that use their products – is leading the charge in measuring and providing real-time, dynamic data on the impact of artificial intelligence technologies on the workforce and worker productivity. This data includes, but is not limited to, specific measurements of time saved on particular tasks, production volume, error rates, and customer satisfaction. These metrics and subsequent analyses are useful in evaluating the impact and value of artificial intelligence.
As of the 118th Congress, the federal government employs over 2 million individuals, with the Commonwealth of Virginia home to the third-largest population of federal civilian employees. In FY2024, the federal government executed over 104 million contracts, directly and indirectly employing additional individuals to carry out the missions of federal departments and agencies. These public servants perform essential work for our country, and as detailed below, some of their work is complemented by and supplemented through the integration of artificial intelligence systems and technologies.
The Government Accountability Office (GAO) found that in FY2022, 20 of 23 agencies reported about 1,200 current and planned artificial intelligence use cases. Per the AI use case inventory, the utilization of artificial intelligence across federal departments and agencies has allowed the federal workforce and contractors to work efficiently and creatively – improving government operations and delivering better results for the American people. These examples include the Social Security Administration using AI to expedite determinations for disability benefits, the Department of Veterans Affairs utilizing AI to capture trends and facilitate processing of veteran feedback, and the Department of Justice applying AI to accurately identify and process threat tips. The use case inventory applies the definition of artificial intelligence as provided in Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019.
While government-sourced, publicly-available information provides sector or task-specific summaries of how the aforementioned federal departments and agencies are adopting artificial intelligence, I am concerned about the limitations of this information with respect to the broader adoption at scale of AI in the federal government, including the need for measurable data and conclusive assessments on how individual AI use cases are enhancing the missions of federal departments and agencies. Establishing data collection standards that track the progress of AI’s adoption in the federal government will help better understand the state of integration, assess its effectiveness, implications, and appropriate usages, and guide the direction of future adoption plans.
As such, I respectfully request that you respond to this letter with detailed answers to the following questions by January 17, 2025:
- Does your department or agency utilize AI?
- If yes, please provide a high-level summary of the utilization of AI, including uses by administrative or operational units of your department or agency.
- If no, please detail how your department or agency reached the decision to not utilize AI.
- How does your department or agency identify use cases, needs, or other instances in which it deems the use of AI to be appropriate? Please describe in detail the decision-making process that your department or agency has undertaken, is undertaking, or plans to undertake when determining if the use of AI is appropriate.
- Regarding future or planned uses of AI, how does your department or agency incorporate data collection and identify measurable outcomes when determining if the use of AI is appropriate? What metrics does your department or agency use when determining the appropriateness of AI?
- Regarding current uses of AI, how does your department or agency incorporate data collection and identify measurable outcomes when determining if the use of AI is productive or effective? What metrics does your department or agency utilize when determining the productivity or effectiveness of current AI applications? How do these metrics and data collection guide decision-making on future applications of AI?
- Does your department or agency measure worker productivity or productivity gains as a result of the application of AI?
- If yes, please detail how your agency measures worker productivity. How does this guide your department or agency’s decision-making on future applications of AI?
- If no, please detail why your agency does not measure this.
- Please describe in detail the process that your department or agency uses to solicit input or feedback from the federal workers or the contractors who will be directly utilizing the planned AI technology.
- When determining if the use of AI by your department or agency is appropriate, please describe in detail how your department or agency considers the need for additional training for the federal workers and contractors who will be directly applying the AI technology as part of their job duties and responsibilities.
- If your department or agency is utilizing AI, please describe in detail how those uses inform your department or agency’s considerations on adjusting mission approach or allocating tasks among the department or agency’s workforce, including, but not limited to, adjusting job responsibilities, daily tasks, or team compositions.
I appreciate your thoughtful consideration of this matter and look forward to your response.
Statement of Sen. Mark R. Warner On OpenAI’s Efforts to Integrate Additional Safety and Security Measures
Dec 09 2024
WASHINGTON – Today, Senate Select Committee on Intelligence Chairman Mark R. Warner (D-VA) released the following statement on OpenAI’s new safeguards against malicious misuse of its artificial intelligence products:
“I’m pleased to see OpenAI heed my call for additional safeguards as it releases powerful new features like video generation – including specific measures I have advocated for, such as new detection mechanisms for violative outputs, clear mechanisms to identify and catalogue synthetic content, and public-facing reporting mechanisms that allow victims of impersonation campaigns and other Terms of Service violations to seek redress. Ultimately, the efficacy of these new policies will be measured in the kinds of resources OpenAI invests in enforcing them, but I appreciate these new steps.”
WASHINGTON – Today, Senate Select Committee on Intelligence Chairman Mark R. Warner (D-VA) issued the following statement in response to President Biden’s National Security Memorandum (NSM) on Artificial Intelligence:
“As we have seen just over the last two years, AI technology is rapidly evolving in a way that will have massive consequences for our economy, national security, and even democracy. I am heartened to see the administration recognize this very fact and take a leadership role to advance AI capabilities while simultaneously promoting responsible research, strong governance that ensures trust and safety, and the protection of human and civil rights.
“I am also gratified to see that the NSM appears to implement many of the legislative proposals I have advanced, including requirements to promote AI security research and address AI cyber vulnerabilities. However, as the chair of the Senate Intelligence Committee, I am also acutely aware of the many threats to our AI efforts. I encourage the administration to work in the coming months with Congress to advance a clearer strategy to engage the private sector on national security risks directed at AI systems across the AI supply chain.”
###
Chairman Warner Shares Responses from AI Companies on Efforts to Crack Down on Malicious Use
Aug 07 2024
WASHINGTON – With under 100 days until the U.S. presidential election, Senate Intelligence Committee Chairman Mark R. Warner (D-VA) today shared responses from tech companies about their efforts to crack down on malicious uses of AI and released the statement below on their ramifications for the election and beyond. In February, a group of technology companies (including generative AI vendors, social media platforms, chipmakers, and research firms) signed the Munich Tech Accord to Combat Deceptive Use of AI in 2024 Elections, a high-level roadmap for a variety of new initiatives, investments, and interventions that could improve the information ecosystem surrounding this year’s elections. In May, Sen. Warner pushed for specific answers about the actions that companies are taking to make good on the Tech Accord, including its applicability to combat misuse of generative AI products outside the election context.
“I appreciate the thoughtful engagement from the signatories of the Munich Tech Accord. Their responses indicated promising avenues for collaboration, information-sharing, and standards development, but also illuminated areas for significant improvement.
“While many of the companies indicated that they have clear policies against a wide range of misuses, and have undertaken red-teaming and other pre-deployment testing measures, there is a very concerning lack of specificity and resourcing on enforcement of those policies. Additionally, companies offered little indication of detailed and sustained efforts to engage local media, civic institutions, and election officials and equip them with resources to identify and address misuse of generative AI tools in their communities. Leading social media platforms and gen-AI vendors have commendably posted resources to their websites and have had extensive engagement with legislative and regulatory bodies at the national level, but the failure modes of this technology require sustained relationship-building with local institutions.
“I’m disappointed that few of the companies provided users with clear reporting channels and remediation mechanisms against impersonation-based misuses. Generative AI tools are already harming vulnerable communities – including seniors, who are often victims of financial fraud, and teens, who are vulnerable to appalling acts of non-consensual image generation and extortion.
“Lastly – and perhaps most relevant ahead of the 2024 Presidential Election – I am deeply concerned by the lack of robust and standardized information-sharing mechanisms within the ecosystem. With the election less than 100 days away, we must prioritize real action and robust communication to systematically catalogue harmful AI-generated content. While this technology offers significant promise, generative AI still poses a grave threat to the integrity of our elections, and I’m laser-focused on continuing to work with public and private partners to get ahead of these real and credible threats.”
Responses by each of the companies are available here: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Intuit, LG, McAfee, Microsoft, Meta, OpenAI, Snap, Stability AI, TikTok, Trend, True Media, Truepic, and X. Gen, Inflection, NetApp, and Nota did not provide responses.
Ahead of the 2024 election, Sen. Warner has repeatedly raised the alarm about the potential for AI tools and tech platforms to be used to create and disseminate credible misinformation that could influence election results. Last week, he issued a statement on the most recent election security update from the Director of National Intelligence. He has also held open hearings in the Intelligence Committee on this critical issue.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence and co-chair of the Senate Cybersecurity Caucus, wrote to the U.S. Copyright Office in support of expanding the existing good-faith research exemption within the Digital Millennium Copyright Act (DMCA).
Every three years, the DMCA goes through a rulemaking process to authorize exemptions that allow individuals and researchers to circumvent technical protection measures on copyrighted material without risking liability. This year, artificial intelligence (AI) researchers have petitioned for a new exemption relating to “Security Research Pertaining to Generative AI Bias.” Sen. Warner has led the charge in the Senate to explore the capabilities of AI technology while advocating for reasonable guardrails around its usage. He argues that expanding the current good-faith research exemption to cover research that falls outside of traditional security concerns, such as bias and other harmful outputs, is the best way to ensure safe and equitable AI while enabling continued innovation, public trust, and adoption.
Sen. Warner wrote, “Due to the difficulty in understanding the full range of behaviors in AI systems – particularly as models are introduced in contexts that diverge from their intended use – the scope of good-faith research has expanded to the identification of safety flaws caused by misaligned AI systems, as well as research into how AI systems can reflect and reproduce socially and economically harmful biases…it is crucial that we allow researchers to test systems in ways that demonstrate how malfunctions, misuse, and misoperation may lead to an increased risk of physical or psychological harm.”
He continued, “At the same time, as the Department of Justice letter emphasized, a hallmark of the research exemption has been the good faith of security researchers. In the absence of regulation, many AI firms have voluntarily adopted measures to address abuse, security, and deception risks posed by their products. Given the growing use of generative AI systems for fraud, non-consensual intimate image generation, and other harmful and deceptive activity, measures such as watermarks and content credentials represent especially important consumer protection safeguards. While independent research can meaningfully improve the robustness of these kinds of authenticity and provenance measures, it is vital that the Copyright Office ensure that expansion of the exemption does not immunize research that intends to undermine these vital measures; absent very clear indicia of good faith, efforts that undermine provenance technology should not be entitled to the expanded exemption.”
This is the latest step in Sen. Warner’s efforts to rein in big tech and better understand the impacts of rapidly expanding usage of AI. Earlier this month, he introduced the Secure Artificial Intelligence Act of 2024, legislation to improve the tracking and processing of security and safety incidents and risks associated with Artificial Intelligence (AI).
A copy of the letter is available here and below:
Dear Ms. Perlmutter,
I write today in response to the petition submitted to your office that proposes a new exemption for “Security Research Pertaining to Generative AI Bias” as part of the Copyright Office’s ninth triennial rulemaking proceeding under the Digital Millennium Copyright Act (DMCA). I understand a number of stakeholders have submitted public comments to weigh in on this petition, including a letter from the Department of Justice. Ultimately, I urge the Copyright Office to consider expanding the existing good-faith security research exemption to cover both security and safety flaws or vulnerabilities, where safety includes bias and other harmful outputs.
As the leader of bipartisan legislation to improve the security of AI systems and the Co-Chair of the Senate Cybersecurity Caucus, I recognize the importance of independent security research. The existing DMCA exemption for good-faith security researchers plays a critical role in empowering a robust security research ecosystem that identifies vulnerabilities and risks to systems around the world, facilitates their remediation, and prevents future exploitation by threat actors that could lead to incidents. We must continue to promote this important work and understand that, although AI is software at its core, the non-deterministic nature of AI systems means that security vulnerabilities are no longer the only type of flaw that can be introduced and enable misuse. As the AI Risk Management Framework, developed by the National Institute of Standards and Technology (NIST), emphasizes, AI risks differ from traditional software risks in key ways – including increased opacity and barriers to reproducibility, complex and non-deterministic system dependencies, more nascent testing and evaluation frameworks and controls, and a “higher degree of difficulty in predicting failure modes” for so-called “emergent properties” of AI systems.
Due to the difficulty in understanding the full range of behaviors in AI systems – particularly as models are introduced in contexts that diverge from their intended use – the scope of good-faith research has expanded to the identification of safety flaws caused by misaligned AI systems, as well as research into how AI systems can reflect and reproduce socially and economically harmful biases. This research into bias and other harmful outputs is essential to ensuring public safety and equity while enabling continued innovation, public trust, and adoption of AI. Therefore, it is crucial that we allow researchers to test systems in ways that demonstrate how malfunctions, misuse, and misoperation may lead to an increased risk of physical or psychological harm.
At the same time, as the Department of Justice letter emphasized, a hallmark of the research exemption has been the good faith of security researchers. In the absence of regulation, many AI firms have voluntarily adopted measures to address abuse, security, and deception risks posed by their products. Given the growing use of generative AI systems for fraud, non-consensual intimate image generation, and other harmful and deceptive activity, measures such as watermarks and content credentials represent especially important consumer protection safeguards. While independent research can meaningfully improve the robustness of these kinds of authenticity and provenance measures, it is vital that the Copyright Office ensure that expansion of the exemption does not immunize research that intends to undermine these vital measures; absent very clear indicia of good faith, efforts that undermine provenance technology should not be entitled to the expanded exemption.
The existing exemption has been an important contributor to the multistakeholder effort to improve information security by enabling the “good-faith testing, investigation, and/or correction of a security flaw or vulnerability” in computer programs. As you review the public comments on this new petition, I urge you to consider expanding the good-faith security research definition to include both security and safety flaws or vulnerabilities, where safety includes bias and other harmful outputs. In considering this expansion, I urge the Copyright Office to continue to bind the exemption to research that is conducted in a safe environment, primarily to enhance the security or safety of computer programs, without facilitating copyright infringement. Further, I encourage careful consideration of the exemption’s application to any research on technical measures that protect the authenticity or provenance of content from generative AI models.
Sincerely,
###
Senate Intel Chairman Pushes Companies to Follow Through On Commitments to Combat Deceptive Use of AI
May 14 2024
WASHINGTON – With under six months until the U.S. general election, Intelligence Committee Chairman Mark R. Warner (D-VA) today pushed tech companies to follow up on commitments made at the Munich Security Conference and take concrete measures to combat malicious misuses of generative artificial intelligence (AI) that could impact elections. In February, a group of AI companies signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, a high-level roadmap for a variety of new initiatives, investments, and interventions that could improve the information ecosystem surrounding this year’s elections. Following that initial agreement, Sen. Warner is pushing for specific answers about the actions that companies are taking to make good on the Tech Accord.
“Against the backdrop of the worldwide proliferation of malign influence activity – with an ever-growing range of malign actors embracing social media and wider digital communications technologies to undermine trust in public institutions, markets, democratic systems, and the free press – generative AI (and related media-manipulation) tools can impact the volume, velocity, and believability of deceptive election information,” Sen. Warner wrote.
This year, elections are taking place in over 40 countries representing more than 4 billion people, even as AI companies release a range of powerful and untested new tools that could rapidly spread believable misinformation and be abused by a range of bad actors. While the Tech Accord represented a positive, public-facing first step toward recognizing and addressing this novel challenge, Sen. Warner is pushing for effective, durable protections to ensure that malign actors cannot use AI to craft misinformation campaigns, and to prevent the dissemination of such content on social media platforms. To that end, he posed a series of questions seeking specific information on the actions that companies are taking to prevent the creation and rapid spread of AI-enabled disinformation and election deception.
“While high-level, the commitments your company announced in conjunction with the Tech Accord offer a clear roadmap for a variety of new initiatives, investments, and interventions that can materially enhance the information ecosystem surrounding this year’s election contests. To that end, I am interested in learning more about the specific measures your company is taking to implement the Tech Accord. While the public pledge demonstrated your company’s willingness to constructively engage on this front, ultimately the impact of the Tech Accord will be measured in the efficacy – and durability – of the initiatives and protection measures you adopt,” Sen. Warner continued.
The letter concludes by pointing out that several of the proposed measures to combat malicious misuse in elections would also help address adjacent misuses of AI technology, including the creation of non-consensual intimate imagery, child sexual abuse material, and online bullying and harassment campaigns. Sen. Warner has been consistently calling attention to and pushing for action from AI companies on these and other potential misuses. On Wednesday, Sen. Warner will host a public Intelligence Committee hearing where leaders from the FBI, CISA, and the ODNI will provide updates on threats to the 2024 election.
Sen. Warner sent letters to every signatory of the Tech Accord: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Gen, GitHub, Google, IBM, Inflection, Intuit, LG, LinkedIn, McAfee, Microsoft, Meta, NetApp, Nota, OpenAI, Snap, Stability AI, TikTok, Trend, True Media, Truepic, and X.
A copy of every letter is available here and one example is included below:
Earlier this year, I joined to amplify and applaud your company’s commitment to advance election integrity worldwide through the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. As generative artificial intelligence (AI) products proliferate for both commercial and general users, a multi-stakeholder approach is needed to ensure that industry, governments, and civil society adequately anticipate – and counteract – misuse of these products in ways that cause harm to vulnerable communities, public trust, and democratic institutions. The release of a range of powerful new AI tools – many enabled or directly offered by your [company/organization] – coincides with an unprecedented number of elections worldwide. As memorialized during the Munich Summit, elections have occurred – or will occur – in over 40 countries worldwide, with more than four billion global citizens exercising their franchise. Since the signing of the Tech Accord on February 16th, the first round of India’s elections has already concluded. European Parliament elections will take place in early June and – as primary contests are already well underway – the U.S. general election will take place on November 5th.
While policymakers worldwide have begun the process of developing measures to ensure that generative AI technologies (and related media manipulation tools) serve the public interest, the private sector can – particularly in collaboration with civil society – dramatically shape the usage and wider impact of these technologies through proactive measures. Against the backdrop of the worldwide proliferation of malign influence activity – with an ever-growing range of malign actors embracing social media and wider digital communications technologies to undermine trust in public institutions, markets, democratic systems, and the free press – generative AI (and related media-manipulation) tools can impact the volume, velocity, and believability of deceptive election information.
While high-level, the commitments your company announced in conjunction with the Tech Accord offer a clear roadmap for a variety of new initiatives, investments, and interventions that can materially enhance the information ecosystem surrounding this year’s election contests. To that end, I am interested in learning more about the specific measures your company is taking to implement the Tech Accord. While the public pledge demonstrated your company’s willingness to constructively engage on this front, ultimately the impact of the Tech Accord will be measured in the efficacy – and durability – of the initiatives and protection measures you adopt. Indeed, many of these measures will be vital in addressing adjacent misuses of generative AI products, such as the creation of non-consensual intimate imagery, child sexual abuse material, or content generated for online harassment and bullying campaigns. I request that you provide answers to the following questions no later than May 24, 2024.
- What steps is your company taking to attach content credentials, and other relevant provenance signals, to any media created using your products? To the extent that your product is incorporated in a downstream product offered by a third party, do license terms or other terms of use stipulate the adoption of such measures? To the extent you distribute content generated by others, does your company attach labels to content that you assess – based on either internal classifiers or credible third-party reports – to be machine-generated or machine-manipulated?
- What specific public engagement and education initiatives have you initiated in countries holding elections this year? What has the engagement rate been thus far and what proactive steps are you undertaking to raise user awareness on the availability of new tools hosted by your platform?
- What specific resources has your company provided for independent media and civil society organizations to assist in their efforts to verify media, generate authenticated media, and educate the public?
- What has been your company’s engagement with candidates and election officials with respect to anticipating misuse of your products, as well as the effective utilization of content credentialing or other media authentication tools for their public communications?
- Has your company worked to develop widely-available detection tools and methods to identify, catalogue, and/or continuously track the distribution of machine-generated or machine-manipulated content?
- (To the extent your company offers social media or other content distribution platforms) What kinds of internal classifiers and detection measures are you developing to identify machine-generated or machine-manipulated content? To what extent do these measures depend on collaboration or contributions from generative AI vendors?
- (To the extent your company offers social media or other content distribution platforms) What mechanisms has your platform implemented to enable victims of impersonation campaigns to report content that may violate your Terms of Service? Do you maintain separate reporting tools for public figures?
- (To the extent your company offers generative AI products) What mechanisms has your platform implemented to enable victims of impersonation campaigns that may have relied on your models to report activity that may violate your Terms of Service?
- (To the extent your company offers social media or other content distribution platforms) What is the current status of information sharing between platforms on detecting machine-generated or machine-manipulated content that may be used for malicious ends (such as election disinformation, non-consensual intimate imagery, online harassment, etc.)? Will your company commit to participation in a common database of violative content?
Thank you for your attention to these important matters, and I look forward to your response.
###
Warner, Tillis Introduce Legislation to Advance Security of Artificial Intelligence Ecosystem
May 01 2024
WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, and Thom Tillis (R-NC) – the bipartisan co-chairs of the Senate Cybersecurity Caucus – introduced the Secure Artificial Intelligence Act of 2024, legislation to improve the tracking and processing of security and safety incidents and risks associated with Artificial Intelligence (AI). Specifically, this legislation aims to improve information sharing between the federal government and private companies by updating cybersecurity reporting systems to better incorporate AI systems. The legislation would also create a voluntary database to record AI-related cybersecurity incidents, including so-called “near miss” events.
As the development and use of AI grow, so does the potential for security and safety incidents that harm organizations and the public. Currently, efforts within the federal government – led by the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) – play a crucial role in tracking cybersecurity vulnerabilities through the National Vulnerability Database (NVD) and the Common Vulnerabilities and Exposures (CVE) Program, respectively. The National Security Agency (NSA), through its Cybersecurity Collaboration Center, also provides intelligence-driven cybersecurity guidance for emerging and chronic cybersecurity challenges through open, collaborative partnerships. However, these systems do not currently reflect the ways in which AI systems can differ dramatically from traditional software, including the ways in which exploits developed to subvert AI systems (a body of research often known as “adversarial machine learning” or “counter-AI”) often do not resemble conventional information security exploits. This legislation updates current standards for cyber incident reporting and information sharing at these organizations to include and better protect against the risks associated with AI. The legislation also establishes an Artificial Intelligence Security Center at the NSA to drive counter-AI research, provide an AI research test-bed to the private sector and academic researchers, develop guidance to prevent or mitigate counter-AI techniques, and promote secure AI adoption.
“As we continue to embrace all the opportunities that AI brings, it is imperative that we continue to safeguard against the threats posed by – and to – this new technology, and information sharing between the federal government and the private sector plays a crucial role,” said Sen. Warner. “By ensuring that public-private communications remain open and up-to-date on current threats facing our industry, we are taking the necessary steps to safeguard against this new generation of threats facing our infrastructure.”
"Safeguarding organizations from cybersecurity risks involving AI requires collaboration and innovation from both the private and public sector,” said Sen. Tillis. "This commonsense legislation creates a voluntary database for reporting AI security and safety incidents and promotes best practices to mitigate AI risks. Additionally, this bill would establish a new Artificial Intelligence Security Center, within the NSA, tasked with promoting secure AI adoption as we continue to innovate and embrace new AI technologies."
Specifically, the Secure Artificial Intelligence Act would:
- Require NIST to update the NVD and require CISA to update the CVE program or develop a new process to track voluntary reports of AI security vulnerabilities;
- Establish a public database to track voluntary reports of AI security and safety incidents;
- Create a multi-stakeholder process that encourages the development and adoption of best practices that address supply chain risks associated with training and maintaining AI models; and
- Establish an Artificial Intelligence Security Center at the NSA to provide an AI research test-bed to the private sector and academic researchers, develop guidance to prevent or mitigate counter-AI techniques, and promote secure AI adoption.
“IBM is proud to support the Secure AI Act that expands the current work of NIST, DHS, and NSA and addresses safety and security incidents in AI systems. We commend Senator Warner and Senator Tillis for building upon existing voluntary mechanisms to help harmonize efforts across the government. We urge Congress to ensure these mechanisms are adequately funded to track and manage today’s cyber vulnerabilities, including risks associated with AI,” said Christopher Padilla, Vice President, Government and Regulatory Affairs, IBM Corporation.
“Ensuring the safety and security of AI systems is paramount to facilitating public trust in the technology. ITI commends U.S. Senators Warner and Tillis for introducing the Secure Artificial Intelligence Act, which will advance AI security, encourage the use of voluntary standards to disclose vulnerabilities, and promote public-private collaboration on AI supply chain risk management. ITI also appreciates that this legislation establishes the National Security Agency’s AI Security Center and streamlines coordination with existing AI-focused entities,” said ITI President and CEO Jason Oxman.
“AI security is too big of a task for any one company to tackle alone,” said Jason Green-Lowe, Executive Director of the Center for AI Policy. “AI developers have much to learn from each other about how to keep their systems safe, and it’s high time they started sharing that information. That’s why the Center for AI Policy is pleased to see Congress coordinating a standard format and shared database for AI incident reporting. We firmly support Senators Warner and Tillis’s new bill.”
Full text of the legislation is available here. A one-page summary of the legislation is available here.
###
Sens. Warner, Moran Introduce Legislation to Establish AI Guidelines for Federal Government
Nov 02 2023
WASHINGTON – U.S. Sens. Mark R. Warner (D-VA) and Jerry Moran (R-KS) today introduced legislation to establish guidelines to be used within the federal government to mitigate risks associated with Artificial Intelligence (AI) while still benefiting from new technology. U.S. Rep. Ted W. Lieu (D-CA-36) plans to introduce companion legislation in the U.S. House of Representatives.
Congress directed the National Institute of Standards and Technology (NIST) to develop an AI Risk Management Framework that organizations, public and private, could employ to ensure they use AI systems in a trustworthy manner. This framework was released earlier this year and is supported by a wide range of public and private sector organizations, but federal agencies are not currently required to use this framework to manage their use of AI systems.
The Federal Artificial Intelligence Risk Management Act would require federal agencies to incorporate the NIST framework into their AI management efforts to help limit the risks that could be associated with AI technology.
“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries,” said Sen. Warner. “But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly. It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”
“AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector,” said Sen. Moran. “However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data. The sensible guidelines established by NIST are already being utilized in the private sector and should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions.”
“Okta is a strong proponent of interoperability across technical standards and governance models alike and as such we applaud Senators Warner and Moran for their bipartisan Federal AI Risk Management Framework Act,” said Michael Clauser, Director, Head of US Federal Affairs, Okta. “This bill complements the Administration’s recent Executive Order on Artificial Intelligence (AI) and takes the next steps by providing the legislative authority to require federal software vendors and government agencies alike to develop and deploy AI in accordance with the NIST AI Risk Management Framework (RMF). The RMF is a quality model for what public-private partnerships can produce and a useful tool as AI developers and deployers govern, map, measure, manage, and mitigate risk from low- and high-impact AI models alike.”
“IEEE-USA heartily supports the Federal Artificial Intelligence Risk Management Act of 2023,” said Russell Harrison, Managing Director, IEEE-USA. “Making the NIST Risk Management Framework (RMF) mandatory helps protect the public from unintended risks of AI systems yet permits AI technology to mature in ways that benefit the public. Requiring agencies to use standards, like those developed by IEEE, will protect both public welfare and innovation by providing a useful checklist for agencies implementing AI systems. Required compliance does not interfere with competitiveness; it promotes clarity by setting forth a ‘how-to.’”
“Procurement of AI systems is challenging because AI evaluation is a complex topic and expertise is often lacking in government,” said Dr. Arvind Narayanan, Professor of Computer Science, Princeton University. “It is also high-stakes because AI is used for making consequential decisions. The Federal Artificial Intelligence Risk Management Act tackles this important problem with a timely and comprehensive approach to revamping procurement by shoring up expertise, evaluation capabilities, and risk management.”
"Risk management in AI requires making responsible choices with appropriate stakeholder involvement at every stage in the technology's development; by requiring federal agencies to follow the guidance of the NIST AI Risk Management Framework to that end, the Federal AI Risk Management Act will contribute to making the technology more inclusive and safer overall,” said Yacine Jernite, Machine Learning & Society Lead, Hugging Face. “Beyond its direct impact on the use of AI technology by the Federal Government, this will also have far-reaching consequences by fostering more shared knowledge and development of necessary tools and good practices. We support the Act and look forward to the further opportunities it will bring to build AI technology more responsibly and collaboratively."
“The Enterprise Cloud Coalition supports the Federal AI Risk Management Act of 2023, which mandates agencies adopt the NIST AI Risk Management Framework to guide the procurement of AI solutions,” said Andrew Howell, Executive Director, Enterprise Cloud Coalition. “By standardizing risk management practices, this act ensures a higher degree of reliability and security in AI technologies used within our government, aligning with our coalition's commitment to trust in technology. We believe this legislation is a critical step toward advancing the United States' leadership in the responsible use and development of artificial intelligence on the global stage.”
A one-page explanation of the legislation can be found here.
###
Senate Intel Chairman Warner on President Biden's Executive Order on Artificial Intelligence
Oct 30 2023
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) – Chairman of the Senate Select Committee on Intelligence, Co-Chair of the Senate Cybersecurity Caucus, and former technology entrepreneur – issued the statement below after President Joe Biden announced a new executive order on Artificial Intelligence.
“I am impressed by the breadth of this Executive Order – with sections devoted to increasing the AI workforce inside and outside of government, federal procurement, and global engagement. I am also happy to see a number of sections that closely align with my efforts around AI safety and security and the federal government’s use of AI. At the same time, many of these just scratch the surface – particularly in areas like health care and competition policy. Other areas overlap with pending bipartisan legislation, such as the provision related to national security use of AI, which duplicates some of the work in the past two Intel Authorization Acts related to AI governance. While this is a good step forward, we need additional legislative measures, and I will continue to work diligently to ensure that we prioritize security, combat bias and harmful misuse, and responsibly roll out technologies.”
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, wrote to President Biden, urging the Administration to boost the federal government’s tech workforce in order to address the challenges of rapidly advancing AI, building on previous government initiatives to draw in engineers, product managers, and other digital policy experts to revamp the government’s approach to technology. In his letter, Sen. Warner stressed the need for a similar arrangement specifically targeting AI.
“It is clear to me that we will not be able to meet the need in this rapidly advancing field without a diverse and representative group of talented minds,” Sen. Warner wrote. “These individuals should possess technical knowledge, but also a keen understanding of the social impact of AI.”
He continued, “Your administration has taken a number of practical and important steps to advance the safe deployment of AI technologies. To supplement these efforts, I urge you to use your existing authority to bring the best and brightest minds to the table to help our nation grapple with the wide-ranging impact that AI will have on our society. I look forward to working with you on this endeavor.”
Sen. Warner, a former tech entrepreneur, has been a leading voice in the Senate calling for increased efforts to appropriately regulate and address the threats of AI while still harnessing its full potential. He has engaged directly with AI companies to push for responsible development and deployment. Last month, he sent a series of letters to major AI companies urging them to take additional action to promote safety and prevent malicious misuse of their products. In April, Sen. Warner called on AI CEOs to develop practices that would ensure that their products and systems are secure. In July, he also pushed the Biden administration to keep working with AI companies to expand the scope of the voluntary commitments.
Additionally, Sen. Warner wrote to Google last month to raise concerns about its testing of new AI technology in medical settings. Separately, he urged the CEOs of several AI companies to address a concerning report that generative chatbots were producing instructions on how to exacerbate an eating disorder.
Text of the letter can be found here and below.
Dear President Biden,
I write today regarding the need to bolster our Federal workforce and build capacity within the government to address artificial intelligence (AI). Already, excellent work related to AI is happening across the Federal government – from the National Institute of Standards and Technology (NIST) to the National Institutes of Health – but given the work that needs to be done, we undoubtedly need more expertise and more capacity. The rapid advancements in AI technologies underscore the need to build a robust knowledge base within the Federal government to grapple with AI applications across various sectors of our economy and society. Given the speed of innovation in this space, I urge you to use the powers of your office to launch a new initiative focused on bringing the best and brightest minds into government service to meet the challenges and harness the benefits of AI.
In recent years, we have seen successful examples of innovative initiatives that bring talented individuals together within the Federal government to serve the public and solve some of our government’s most pressing needs. For example, 18F has brought together a team of designers, software engineers, strategists, and product managers to collaborate with federal agencies in order to improve and modernize government technology. Similarly, the U.S. Digital Service (USDS) has brought together engineers, product managers, and digital policy experts to be paired with leading civil servants in order to impact our government’s approach to technology and address some of the most critical government services. What these initiatives have in common – and what I believe we must focus on in a similar initiative for AI – is bringing together a group of bright minds, with diverse backgrounds and experiences, to lend their expertise to the federal government on issues of national importance.
It is clear to me that we will not be able to meet the need in this rapidly advancing field without a diverse and representative group of talented minds. These individuals should possess technical knowledge but also a keen understanding of the social impact of AI. Furthermore, a dedicated group of individuals focused solely on AI can help the federal government think through the opportunities to harness AI technologies to meet federal objectives while also working collaboratively with agencies to guard against AI-generated risks within their purview.
Your Administration has taken a number of practical and important steps to advance the safe deployment of AI technologies. To supplement these efforts, I urge you to use your existing authority to bring the best and brightest minds to the table to help our nation grapple with the wide-ranging impact that AI will have on our society. I look forward to working with you on this endeavor.