Press Releases
Warner & Kaine Announce $10.5 Million in Federal Funding to Promote Digital Equity in Virginia
Jan 07 2025
WASHINGTON – Today, U.S. Sens. Mark R. Warner and Tim Kaine (both D-VA) announced $10,500,000 in federal funding for the Virginia Department of Education to promote digital equity across Virginia. This funding was made possible by the Bipartisan Infrastructure Law, which both senators helped pass. It was awarded through the National Telecommunications and Information Administration’s (NTIA) Digital Equity Competitive Grant Program.
“We are proud to have helped pass the Bipartisan Infrastructure Law, which continues to deliver for Virginians,” the senators said. “We are glad that this federal funding will help close the digital divide and help ensure all Virginians have the resources and digital skills to fully take advantage of the opportunities high-speed internet provides.”
Specifically, the funding will be used to support Digital Navigators, who are trained staff that work across Virginia to help individuals access the internet, find devices, and learn crucial digital skills.
Sens. Warner and Kaine have long worked to expand broadband access and promote digital equity. This announcement complements $18.3 million provided to Virginia last month through the Digital Equity Capacity Grant Program, which also works to help close the digital divide and was made possible through the Bipartisan Infrastructure Law.
###
Warner Announces $275 Million Manufacturing Investment for Virginia Thanks to CHIPS Law He Wrote
Dec 10 2024
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) announced that the U.S. Department of Commerce has signed a preliminary agreement for up to $275 million in federal funding for Micron Technology to expand and modernize its manufacturing facility in Manassas, Va. The funding is the result of bipartisan legislation Warner wrote and successfully passed into law over many years to expand American production of semiconductor chips.
“I am proud to announce that $275 million should soon be headed to Virginia for Micron Technology to manufacture more cutting-edge semiconductors here in Virginia,” said Sen. Warner, Chairman of the Senate Select Committee on Intelligence. “Making more of these chips in America will strengthen our national security and create jobs, which is why I pushed to pass this funding through Congress, why I am working with Micron and the Biden administration to secure this investment in Virginia, and why I’m going to be making the case to the incoming administration that we need to keep investing in domestic manufacturing of critical and emerging technologies like semiconductors.”
Nearly everything that has an “on” switch – from cars to phones to washing machines to ATMs to electric toothbrushes – contains a semiconductor, but just a small percentage of these ‘chips’ are currently made in America. Sen. Warner first introduced the Creating Helpful Incentives to Produce Semiconductors (CHIPS) for America Act to restore semiconductor manufacturing back to American soil in 2020, and 2022, Congress passed into law the CHIPS and Science Act, which included billions in funding championed by Sen. Warner to implement the law he wrote to boost domestic semiconductor manufacturing.
As a result, the Department of Commerce has signed a Preliminary Memorandum of Terms (PMT) with Micron Technology for up to $275 million in proposed funding to expand and modernize its facility in Manassas. The proposed project would onshore Micron’s 1-alpha technology to its Manassas facility, significantly increasing output of more efficient, more powerful chips. Micron’s project in Manassas would create over 400 manufacturing jobs and up to 2700 community jobs at the peak of the project.
###
Statement of Sen. Mark R. Warner On Open AI’s Efforts to Integrate Additional Safety and Security Measures
Dec 09 2024
WASHINGTON – Today, Senate Select Committee on Intelligence Chairman Mark R. Warner (D-VA) released the following statement on Open AI’s new safeguards against malicious misuse of their artificial intelligence products:
“I’m pleased to see OpenAI heed my call for additional safeguards as it releases powerful new features like video generation – including specific measures I have advocated for, including adding new detection mechanisms for violative outputs, clear mechanisms to identify and catalogue synthetic content, and public-facing reporting mechanisms for victims of impersonation campaigns and other Terms of Service violations to seek redress. Ultimately the efficacy of these new policies will be measured in the kinds of resources OpenAI invests in enforcing them, but I appreciate these new steps.”
WASHINGTON — U.S. Sen. Mark R. Warner (D-VA) released the statement below, following an announcement by the Biden-Harris administration that TSMC will receive up to $6.6 billion in direct funding, which will be paired with over $65 billion in private investment to support three leading-edge facilities in Arizona that will manufacture the world’s most advanced semiconductor process technologies. This funding was awarded through the Department of Commerce’s CHIPS Incentives Program and appropriated through the CHIPS and Science Act – legislation negotiated and championed by Sen. Warner.
“Congress originally passed the CHIPS and Science Act because we knew that our national security depended on it. Today’s $6.6 billion investment will help support production of the most advanced chips, used for advanced applications like Artificial Intelligence. This is a win for American workers, for our advanced manufacturing industry, and for the resilience and security of our supply chains,” said Sen. Warner.
At full capacity, TSMC’s three fabs are expected to manufacture tens of millions of leading-edge logic chips that will power products like 5G/6G smartphones, autonomous vehicles, and high-performance computing and AI applications. Reshoring and rebuilding production of these most advanced chips in the United States will help maintain our national security by strengthening our qualitative advantage against foreign adversaries.
Sen. Warner, co-chair of the Senate Cybersecurity Caucus and former technology entrepreneur, has long sounded the alarm about the importance of investing in domestic semiconductor manufacturing. Sen. Warner first introduced the Creating Helpful Incentives to Produce Semiconductors (CHIPS) for America Act in June 2020 along with Sen. John Cornyn (R-TX).
###
Warner Presses Valve to Crack Down on Hateful Accounts and Rhetoric Proliferating on Steam
Nov 15 2024
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) today urged leadership at Valve, a prominent video game company, to respond to reports that their gaming distribution and social networking platform, Steam, is hosting extremist and hateful content – including over 1.5 million users and tens of thousands of groups that share and amplify antisemitic, Nazi, sexuality- or gender-based hate, and white supremacist content. Sen. Warner called for broad action from Valve to bring its content moderation standards in line with industry standards and crack down on the rampant proliferation of hate-based content.
“I write to you today regarding the hate and extremism that has recently been identified on your gaming digital distribution and social networking platform Steam,” Warner wrote. “Recently, the Anti-Defamation League (ADL) released a report where ADL identified over 1 million unique user accounts and nearly 100,000 user-created groups that glorified antisemitic, Nazi, white supremacist, gender- and sexuality-based hatred, and other extremist ideologies on Valve’s Steam platform.”
The letter notes that Steam has millions of active users that are now exposed to extremist ideologies. According to the ADL report, Steam hosts almost 900,000 users with extremist or antisemitic profile pictures, 40,000 groups with names that included hateful words, and rampant use of text-based images, particularly of swastikas, resulting in over 1 million unique hate-images.
“My concern is elevated by the fact that Steam is the largest single online gaming digital distribution and social networking platform in the world with over 100 million unique user accounts and a userbase similar in scale to that of the ‘traditional’ social media and social network platforms. Steam is financially successful, with a dominant position in its sector, and makes Valve billions of dollars in annual revenue. Until now, Steam has largely not received its due attention as a de facto major social network where its users engage in many of the same activities expected of a social media platform,” Warner continued.
“We have seen on other social networking platforms that lax enforcement of the letter of user conduct agreements, when coupled with a seeming reluctance by those companies to embrace the spirit (namely providing users with a safe, welcoming place to socialize) of those same agreements, leads to toxic social environments that elevate harassment and abuse. You should want your users (and prospective users) to not have to wonder if they or their children will be harassed, intimidated, ridiculed or otherwise face abuse,” Warner concluded.
The letter ends with a series of questions for Valve regarding their enforcement of their own terms of service and their commitment to reining in toxic content.
For years, Sen. Warner, a former tech entrepreneur, has been raising the alarm about rise of hate-fueled content proliferating online, as well as the threat posed by domestic and foreign bad actors circulating disinformation. Recently, he pressed directly for action from Discord, another video game-based social networking site that is hosting violent predatory groups that coerce minors into self-harm and suicide. He has also called attention to the rise of pro-eating disorder content on AI platforms. A leader in the tech space, Sen. Warner has also lead the charge for broad Section 230 reform to allow social media companies to be held accountable for enabling cyber-stalking, harassment, and discrimination on their platforms.
A copy of the letter is available here and below.
Dear Mr. Newell:
I write to you today regarding the hate and extremism that has recently been identified on your gaming digital distribution and social networking platform Steam. Recently, the Anti-Defamation League (ADL) released a report where ADL identified over 1 million unique user accounts and nearly 100,000 user-created groups that glorified antisemitic, Nazi, white supremacist, gender- and sexuality-based hate, and other extremist ideologies on Valve’s Steam platform.
It has been brought to your attention before that extremist ideologies seem to find a home on Steam. In 2022, Valve received a Senate letter identifying nearly identical activity on your platform, and yet two years later it appears that Valve has chosen to continue a ‘hands off’-type approach to content moderation that favors allowing some users to engage in sustained bouts of disturbing and violent rhetoric rather than ensure that all of its users can find a welcoming and safe environment across your platform.
My concern is elevated by the fact that Steam is the largest single online gaming digital distribution and social networking platform in the world with over 100 million unique user accounts and a userbase similar in scale to that of the ‘traditional’ social media and social network platforms. Steam is financially successful, with a dominant position in its sector, and makes Valve billions of dollars in annual revenue. Until now, Steam has largely not received its due attention as a de facto major social network where its users engage in many of the same activities expected of a social media platform.
ADL also found that, in addition to the extremely concerning number of hateful account and user groups:
- Almost 900,000 users with extremist or antisemitic profile pictures
- 40,000 groups with names that included hateful words, with the most prominent being “1488”, “shekel” and “white power”
- Rampant use of text-based images (so-called “copypasta” or “ASCII art”), particularly of swastikas, resulting in over 1 million unique hate-images.
Valve has a Steam Online Conduct policy (“Conduct Policy”) and a Steam Subscriber Agreement (“Agreement”) that Steam subscribers agree to abide by as a condition of using the service. The Conduct Policy requires that “[in] general, as a Steam user you should be a good online citizen and not do anything that prevents any other Steam user from using and enjoying Steam”. The Conduct Policy explicitly directs subscribers to not:
- “Engage in unlawful activity [including] encouraging real-world violence…”
- “Upload or post illegal or inappropriate content [including] [real] or disturbing depictions of violence…”
- “Violate others’ personal rights”
- “Harass other users or Steam personnel [which includes not engaging in] trolling; baiting; threatening; spamming; intimidating; and using abusive language or insults.”
It is reasonable to question how committed Valve is to effectively implement and enforce Valve’s own, self-created Conduct Policy for its users, in light of the 1 million Steam user accounts and 100,000 user-created groups glorifying hateful ideologies that ADL found. We have seen on other social networking platforms that lax enforcement of the letter of user conduct agreements, when coupled with a seeming reluctance by those companies to embrace the spirit (namely providing users with a safe, welcoming place to socialize) of those same agreements, leads to toxic social environments that elevate harassment and abuse. You should want your users (and prospective users) to not have to wonder if they or their children will be harassed, intimidated, ridiculed or otherwise face abuse.
As Black Friday and the holiday buying season approaches, the American public should know that not only is Steam an unsafe place for teens and young adults to purchase and play online games, but also that, absent a change in Valve’s approach to user moderation and the type of behavior that it welcomes on its platform, Steam is playing a clear role in allowing harmful ideologies to spread and take root among the next generation.
Valve must bring its content moderation practices in line with industry standards or face more intense scrutiny from the federal government for its complicity in allowing hate groups to congregate and engage in activities that undoubtedly puts Americans at risk.
Please provide answers to the following questions no later than December 13, 2024. Please provide answers in-line with the questions, and not a narrative that attempts to answer multiple questions.
- Please describe Valve’s current practices used to enforce its terms of service.
- Please provide the definition that Valve uses internally to define each of the following terms and/or behaviors from the Conduct Policy in order to evaluate potential violations of said policy:
- “Encouraging real-world violence”;
- “Inappropriate content”;
- “Real or disturbing depictions of violence”;
- “Violate others’ personal rights”; and
- “Harass other users or Steam personnel”, including:
i. trolling;
ii. baiting;
iii. threatening;
iv. intimidating; and
v. abusive language or insults.
- How many allegations did Valve receive from users about potential violations of the Conduct Policy? Include in your response each date when the Conduct Policy was changed, updated, or otherwise modified. Please provide data sufficient to answer this question for each of the following:
- Each month of each of the years of 2014 to 2024;
- Each category of violation (however Valve tracks types or categories of violations of the policy;
- Each category of violation for each month of each of the years of 2014 to 2024;
- The disposition and/or any findings of each complaint received (this may be presented in aggregate) by Valve, whether through Steam’s internal reporting mechanisms or any other means, and subsequent action taken by Valve in response to each complaint (this may be presented in aggregate).
- The number of unique user accounts that were subject to adverse, punitive, or corrective actions by Valve:
i. In response to a user-generated complaint; and
ii. In response to violations identified by Valve moderators of their own accord.
- For item e, above, please provide data on unique payment methods (e.g. credit card accounts, PayPal or similar payment method accounts, JCB, Klarna, Paysafecard, and any other payment methods accepted on Steam that are uniquely identifiable) associated with each account subject to adverse, punitive, or corrective actions by Valve that was subsequently used for any other account (this may be presented on aggregate).
- Approximately how many human content moderators work for Steam?
- How many of those moderators are in-house Valve employees?
- How many of those moderators are contracted by Valve?
- Does Steam supplement this work with AI-content moderation systems? If so, describe the ways in which any AI system is deployed for that purpose, including any evaluation process that Valve carried out to test any such system and the results that demonstrate the efficacy of any such system in identifying and/or removing content that violates the Conduct Policy and Subscriber Agreement.
- What steps does Valve take to prevent, monitor, and mitigate extremist, white supremacist, and terrorism-related content?
- What commitments will Steam make to ensure that it has meaningfully curbed white supremacist, antisemitic, terroristic, Nazi, homophonic, transphobic, misogynist, and hateful content by November 15, 2025?
- What transparency measures does Valve plan to implement to inform users and the public about content moderation actions related to extremists and behavior that could be reasonably interpreted as endorsing extremist thoughts, beliefs, and/or actions on the platform?
- The research shows a period from late 2019 to mid-2020 where it appears Valve may have stepped up its moderation of certain types of hateful content on Steam. Can you provide more detail on your content moderation practices during this time?
- How frequently does Valve evaluate its content moderation practices related to extremism?
- How frequently do those evaluations result in changes, updates, or other modifications to Valve’s content moderation practices related to extremism?
- Has Steam, or Valve, made policy, enforcement, or practical decisions that have had the effect of limiting or its content moderation? If so, provide the date(s) of each decision and enough information to understand the context and analysis that led to each decision.
I greatly appreciate your swift attention to this matter and look forward to reviewing your response.
Sincerely,
###
Warner Demands Answers from Discord Over Violent Predatory Groups Targeting Virginia Teens
Aug 12 2024
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) pressed Discord, an instant messaging social platform, about the company’s failure to safeguard minors and stop the proliferation of violent predatory groups who target children with the goal of forcing them to end their own lives and livestream the act online.
This letter follows a September warning from the FBI alerting Americans to the existence of these violent online groups, which exist on messaging platforms and deliberately extort children into producing child sexual abuse material (CSAM) or sharing acts of self-harm online. According to the warning, issued by the FBI’s Internet Crime Complaint Center, these groups target minors between the ages of 8 and 17 years old and focus on racial and ethnic minorities, LGBTQ+ youth, and those who struggle with a variety of mental health issues.
“I am extremely concerned about this abuse, and I am profoundly saddened that it has affected Virginia families, including the daughter of a military family who was coerced into self-harm and to attempt suicide,” Sen. Warner wrote. “I recognize that Discord’s Trust & Safety team is aware of this type of activity and has taken some actions to detect and remove some of these violent groups from their platforms. However, despite increased moderation, predators continue to target minors on your platform.”
“As a teenager, I fell victim to the cruel manipulation of violent predatory groups on Discord. During a period in my life where I struggled with anxiety, depression, and eating disorders, they took advantage of my feelings of isolation, and encouraged me to self-harm and even end my life. While I’m deeply grateful to have escaped their abuse, I’m heartbroken to know that this violent, dangerous behavior persists on Discord,” said Abrielle, the Virginia teenager who was coerced by “King” into attempting suicide before being found by first responders in time to save her life. “Enough is enough – tech companies need to do more to crack down on the predatory groups that nearly took my life. Discord owes it to a generation of kids and teens to eliminate the extremely harmful content that abounds on their platforms.”
Sen. Warner continued, “I urge you to devote more resources to this problem, including dedicating a greater number of content moderators, investigators, engineers, and legal professionals to it. It is my understanding that Discord currently enforces its policies through actions like suspending policy-violating users’ accounts and servers, as well as banning their Internet Protocol (IP) addresses and email addresses. I also understand that there are far more sophisticated measures, such as device-based or cookie-based bans, that could be taken to prevent identified malign users from returning to your platform. Further, I am aware of measures that could be used to proactively detect harmful activity and initiate an early intervention to prevent harm and loss of life.”
In the letter, Sen. Warner demands answers to a series of questions about the company’s efforts to address these predatory groups. Specifically, he asks that Discord outline its policies and procedures around content that violates Discord’s Terms of Service, and that it share more information on its detection mechanisms, enforcement actions, measures to prevent the re-entry of malicious actors, and more. He also requests answers on the number of accounts that have been removed over the last four years, and the quantity of suicide ideation or depiction content.
Today’s letter also follows recommendations issued in July by the Biden-Harris Administration’s Kids Online Health and Safety Task Force to address the online health and safety for children and youth, with specific recommendations made to industry. It also comes on the heels of the Senate passage of the Kids Online Safety Act (“KOSA”) and the Children and Teens' Online Privacy Protection Act (“COPPA 2.0”), which will require online platforms to take specific measures to protect the safety and privacy of children using their platforms.
A copy of the letter is available here and below.
Dear Mr. Citron:
I write today regarding disturbing reports that Discord is being used by violent predatory groups to coerce children into self-harm. The failure of your company to stop this activity is deeply troubling, and the lack of adequate safeguards to protect vulnerable individuals, especially teens and children, from this degrading and violent form of abuse is of grave concern. I urge you to quickly take steps to remove malicious actors from your platform, prevent their future access, and collaborate with law enforcement officials to bring safety and justice to the victims.
On September 12, 2023, the FBI’s Internet Crime Complaint Center (IC3) issued a warning to the public that violent online groups are deliberately targeting minor victims on messaging platforms to extort them into recording or live-streaming acts of self-harm and producing child sexual abuse material (CSAM). IC3 noted that these groups are targeting minors between the ages of 8 and 17 years old, especially LGBTQ+ youth, racial and ethnic minorities, and those who struggle with a variety of mental health issues. The warning further noted that these groups often control their victims through inflicting extreme fear, extorting them through threats of sharing sexually explicit videos or photos of the minor victims with their friends and families, and many have an end-goal of forcing these minors into completing suicide on live-stream to view and record for their own entertainment or sense of fame.
I am extremely concerned about this abuse, and I am profoundly saddened that it has affected Virginia families, including the daughter of a military family who was coerced into self-harm and to attempt suicide. The severe harm that the family’s daughter faced from a predatory user going by the name “King” closely mirrored a story published in the Washington Post. This report detailed how one of these violent online groups misused your platform, engaging in pervasive harassing conduct that resulted in the deaths of several minors. It further described how Discord’s Trust & Safety team has struggled to keep this specific group off the platform despite knowing of its existence. “King” ultimately coerced the Virginia minor into attempting suicide. Fortunately, first responders were able to reach her in time to save her life.
I recognize that Discord’s Trust & Safety team is aware of this type of activity and has taken some actions to detect and remove some of these violent groups from their platforms. However, despite increased moderation, predators continue to target minors on your platform. I urge you to devote more resources to this problem, including dedicating a greater number of content moderators, investigators, engineers, and legal professionals to it. It is my understanding that Discord currently enforces its policies through actions like suspending policy-violating users’ accounts and servers, as well as banning their Internet Protocol (IP) addresses and email addresses. I also understand that there are far more sophisticated measures, such as device-based or cookie-based bans, that could be taken to prevent identified malign users from returning to your platform. Further, I am aware of measures that could be used to proactively detect harmful activity and initiate an early intervention to prevent harm and loss of life.
On July 22, 2024, the Biden-Harris Administration’s Kids Online Health and Safety Task Force issued a report providing guidance to address the online health and safety for children and youth with specific recommendations made to industry. Several recommendations that address the harm detailed in this letter were made; including developing and deploying mechanisms and strategies to counter child sexual exploitation and abuse, using data-driven methods to detect and prevent online harassment and abuse, and providing age-appropriate parental control tools. The findings and recommendations of this task force underscore the need for platforms like Discord to act on the self-harm extortion of minors. I urge Discord to review the detailed recommendations made in the report and to take them seriously.
I respectfully request that you respond to this letter with detailed answers to the following questions:
- What processes, procedures, plans, or other organizational policies are in place to identify, review, and remove content involving activity that violates Discord’s Terms of Service and other user agreements with respect to harassing, manipulative, abusive, harmful, or dangerous user activity? Your response should address content and behavior relating to coerced self-harm, to grooming, to CSAM production, to user-to-user extortion of a sexual and of a non-sexual nature, to physical, mental, or sexual abuse, and any other category of behavior that is responsive to this question (e.g. animal cruelty extortion and abuse).
- What enforcement actions may Discord utilize in response to the harmful activities noted in Question 1? How were these enforcement action options developed, and how does Discord determine the appropriate enforcement action for a given violation?
- How many violations and of what type (grooming, sharing of CSAM, extortion, etc.) are identified before each enforcement action is made?
- For a given enforcement action, what is the lowest employee position of authority (e.g. manager, director, vice president, etc.) at which that given action may be approved and carried out? Is there a process for internally reviewing and redetermining a given enforcement action? If so, please describe that process.
- What types of detection mechanisms (e.g. technical indicators, content, behavior, social network, server membership composition, etc.) does Discord employ for activities noted in Question 1? Does Discord utilize machine learning technologies for detecting violations of company policy?
- Does Discord employ user identification methods, including device-specific or cookie-based detection methods, that enables identification of returning violators who take simple evasive measures like changing their username, email address, and IP address?
- Please describe policies, processes, or procedures used by Discord to ensure that violators are consistently tracked and information is shared across the security and trust and safety officials.
- What is the mean time to detection (from content creation to identification by Discord’s detection tools) for this activity?
- Once Discord has removed a violating account or server, does Discord collect and store technical indicators to detect the return of the malicious actor(s) and creator of the server?
- What is the mean time to live (from account creation to account suspension) for the accounts engaging in this activity? What about the servers?
- How many accounts have been removed over the last four years (provide a breakdown by year for each violation category that resulted in account removal of activities noted in Question 1?
- How many of these removed accounts were initially identified via a reporting mechanism vs. a detection mechanism?
- How many accounts in total were flagged for removal by a detection mechanism? For those accounts flagged by that mechanism, describe the review process for determining if the account violates policy.
- How many unique images or videos have been shared in these servers depicting or ideating suicide or that could be reasonably interpreted as depicting or ideating suicide?
- Have you identified activities of the types noted in Question 1 from a user going by the name of “King” (or any successor, related, or otherwise affiliated account or accounts) and what actions has Discord carried out in order to prevent ongoing and future malicious activity from this user?
- Please describe any actions, communications, or deliberations that Discord has taken with respect to the violent groups identified in the September 2023 FBI warning: 676; 764; CVLT; Court; Kaskar; Harm Nation; Leak Society; and H3ll?
Thank you for your prompt attention to this letter, and I look forward to reviewing your response.
Sincerely,
###
WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA) and Marsha Blackburn (R-TN) applauded committee passage of their Promoting United States Leadership in Standards Act of 2024, legislation aimed at restoring the U.S.’s position as the leader in international standards-setting for Artificial Intelligence and other critical emerging technologies (CETs). This legislation passed through the Senate Committee on Commerce, Science, and Transportation by a unanimous voice vote.
The legislation, first introduced in February, comes in response to the rising influence of Chinese-government affiliated companies and organizations on international technology standards and practices. For decades, the United States led the world in developing new technologies, which allowed our country to set the standards that guided the use and development of those technologies around the globe. However, in recent years, companies and organizations backed by the Chinese Communist Party have overtaken the U.S. in some key areas, which has allowed the Chinese government to influence standards in ways that further its own interests.
“I am thrilled to see this important legislation pass through the Commerce Committee with overwhelming bipartisan support,” said Sen. Warner. “This legislation clearly outlines steps we must take to reestablish our leadership and ensure that we are doing all we can to set the global standards for critical and emerging technologies. I look forward to a full Senate vote.”
“The Communist Chinese Party has made it their mission to undermine the U.S. and our interests around the globe by exploiting our deficiencies,” said Sen. Blackburn. “As they ramp up their efforts to dominate global standards for emerging technologies, the U.S. must be a global leader in innovation, and that includes setting standards that reflect our interests and values.”
Specifically, the Promoting United States Leadership in Standards Act would:
- Require the National Institute of Standards and Technology (NIST) to submit a report to Congress that identifies current U.S. participation in standards development activities for AI and other CETs;
- Create an easy-to-access web portal to help stakeholders navigate and actively engage in international standardization efforts. The portal would include a list of relevant standards and information about how to participate in standardization activities related to AI and other CETs;
- Establish a pilot program to award $5 million in grants over 5 years to support the hosting of standards meetings for AI and other CETs in the U.S.;
- Create a report to Congress, during the third year of the program, that identifies grant recipients, provides a summary of expenses, assesses the effectiveness of the program to grow the number of standards meetings in the U.S, and shows the geographic distribution of event attendees.
###
Sens. Mark Warner, Rick Scott Lead Bill to Crack Down on Chinese-Made Drones in the U.S.
Jul 30 2024
WASHINGTON – Today, U.S. Sens. Mark R. Warner and Rick Scott announced the introduction of the bipartisan Countering CCP Drones and Supporting Drones for Law Enforcement Act.The legislation would blacklist dangerous Chinese drone companies Da-Jiang Innovations (DJI) Technologies, Autel Robotics, and other CCP-linked drone industry participants and cut them off from U.S. telecommunication infrastructure by including these companies on the Federal Communications Commission’s (FCC) Covered List, which identifies telecommunication equipment that poses an unacceptable risk to the national security of the United States. The legislation also creates a short-term Department of Transportation grant program, specifically designed for first responders, to replace any existing Chinese drones and purchase American-made ones. Senators Scott and Warner also filed this legislation as an amendment to the FY2025 National Defense Authorization Act.
Sen. Mark Warner said, “Drones have tremendous potential to support agriculture, make our communities safer, and grow our economy. Yet without further intervention, the drone industry could be susceptible to massive intervention from the Communist Party of China, directly threatening our national security and economy. I’m proud to introduce bipartisan legislation to restore American leadership in the drone industry and ensure that the CCP can’t wreak havoc by spying on Americans or otherwise disrupting key functions of drone technology.”
Sen. Rick Scott said, “Drones made in Communist China pose a significant threat to our freedoms and security and cannot be allowed to continue operating in American skies. Companies based in Communist China are at the will of Xi’s evil regime, meaning one of the United States’ greatest adversaries has total access to every bit of data collected by devices. It should terrify every single American that the Chinese Communist Party, known for spying, stealing and espionage, could have access to footage of Americans, their land, their businesses and their families without their knowledge. I was glad to successfully pass my and Senator Warner’s American Security Drone Act to stop the use of drones made by companies in adversarial nations, like Communist China’s DJI, in the United States Government and military, which is critical to protecting our national security. Now, we must pass the Countering CCP Drones and Supporting Drones for Law Enforcement Act as a necessary next step to eliminate the threats we face from Communist China and further protect the security of the United States and every American family.”
A copy of the legislation is available here.
###
Warner, Colleagues Introduce Bipartisan Legislation to Keep Kids Safe, Healthy, Off Social Media
May 01 2024
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) joined Sens. Brian Schatz (D-HI), Ted Cruz (R-TX), and a bipartisan group of colleagues in introducing the Kids Off Social Media Act, legislation that would set a minimum age of 13 to use social media platforms and prevent social media companies from feeding algorithmically-targeted content to users under the age of 17. Joining Sens. Warner, Schatz and Cruz in introduction are U.S. Sens. Chris Murphy (D-CT), Katie Britt (R-AL), Peter Welch (D-VT), Ted Budd (R-NC), John Fetterman (D-PA), and Angus King (I-ME).
The Kids Off Social Media Act aims to address concerns regarding the mental health crisis of children and teens in relation to their use of social media. No age demographic is more affected by the ongoing mental health crisis in the United States than kids, especially young girls. The Centers for Disease Control and Prevention’s Youth Risk Behavior Survey found that 57 percent of high school girls and 29 percent of high school boys felt persistently sad or hopeless in 2021, with 22 percent of all high school students—and nearly a third of high school girls—reporting they had seriously considered attempting suicide in the preceding year.
Studies have shown a strong relationship between social media use and poor mental health, especially among children. From 2019 to 2021, overall screen use among teens and tweens (ages 8 to 12) increased by 17 percent, with tweens using screens for five hours and 33 minutes per day and teens using screens for eight hours and 39 minutes. Based on the clear and growing evidence, the U.S. Surgeon General issued an advisory last year, calling for new policies to set and enforce age minimums and highlighting the importance of limiting the use of features, like algorithms, that attempt to maximize time, attention, and engagement.
“Parents across the country are struggling to protect their kids from the harmful effects of too much social media, and studies show that today’s unregulated social media landscape has fostered a toxic environment for young people, promoting bullying, eating disorders, and mental health struggles unchecked,” said Sen. Warner. “I’m proud to join this bipartisan effort to enact some common sense guardrails for kids and teens using social media platforms.”
Specifically, the Kids Off Social Media Act would:
- Prohibit children under the age of 13 from creating or maintaining social media accounts, consistent with the current practices of major social media companies;
- Prohibit social media companies from pushing targeted content using algorithms to users under the age of 17;
- Provide the FTC and state attorneys general authority to enforce the provisions of the bill; and
- Follow existing CIPA framework to require schools to block and filter social media on their federally funded networks, which many schools already do.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate building a safer online environment, specifically for young people. Last year, he introduced the Kids Online Safety Act, legislation that provides young people and parents with the tools, safeguards, and transparency they need to protect against online harms. He has also introduced several pieces of legislation aimed at holding Big Tech accountable, including the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads; and most recently, seeing through the passage of the national security supplemental aid package, which included a requirement that the prominent social media platform TikTok divest from China-owned parent company ByteDance within one year.
“Public Citizen stands in strong support of this legislation intended to protect the nation’s children from the pernicious impacts of social media. Frequent use of social media can harm vulnerable children and teens as their identities and feelings of self-worth are forming. A straightforward ban for younger children and stopping abusive algorithmic engagement with teens just makes sense. We applaud Senator Schatz for his commonsense bill,” said Lisa Gilbert, Executive Vice President of Public Citizen.
“We survey mothers on pressing issues they face and on the federal bills that seek to address them. We do this because mothers’ first-hand experiences and knowledge are critical sources of information in the policy-making process. This bill, newly renamed the ‘Kids Off Social Media Act,’ had more support by mothers -- across the political spectrum -- than any bill we've studied. Mothers are on the frontlines of this issue, and according to our quantitative and qualitative study, they overwhelmingly believe that social media companies' products and practices should be regulated using age limits and guardrails, similar to other harmful substances,” said Jennifer Bransford, Founder of Count on Mothers.
“Our nation is facing a severe crisis in children’s mental health,” said Dr. Regena Spratling, President of the National Association of Pediatric Nurse Practitioners. “Every day pediatric nurse practitioners (PNPs) and other advanced practice registered nurses (APRNs) focused on children’s health see the serious impact that social media can have on our young people’s well-being. The ‘Kids Off Social Media Act’ will help to provide parents the tools they need to safeguard their children from threats in the digital world.”
“Preparing nurses to help address our country’s growing mental health problems is one of nursing education’s highest priorities,” said Dr. Beverly Malone, President and CEO of the National League for Nursing. “The National League for Nursing is pleased to support the ‘Kids Off Social Media Act’ as an important step to help parents and health care professionals shield our young people from harmful online content that can lead to behavioral health problems.”
“KIDS TOO strongly supports comprehensive legislation that protects kids on social media. Senator Schatz's Kids Off Social Media Act solidifies prohibiting youth under 13 from maintaining or creating social media accounts. This bill gets to the root of the issue by eliminating the chance of young kids being vulnerable to harmful tactics by predators, bullies and drug dealers,” said Tania Haigh, Executive Director of KIDS TOO.
“We’re still learning about the long-term implications that unfettered access to social media has on children and adolescents. Until then, especially considering evidence showing that the way people use social media can impact mental health outcomes, it makes sense to put safeguards in place. As we learn more, we can modify these safeguards as needed. But we need to begin somewhere, and this legislation would provide an opportunity to more clearly understand whether modest safeguards can protect children and adolescents and what responsible measures look like,” said Chuck Ingoglia, President and CEO of the National Council for Mental Wellbeing.
The Kids Off Social Media Act is supported by the American Counseling Association, KidsToo, National Association of Social Workers, National Association of Pediatric Nurse Practitioners, Tyler Clementi Foundation, National Council for Mental Wellbeing, Count on Mothers, Parents Television and Media Council, Parents Who Fight, Public Citizen, National Federation of Families, National Organization for Women, National Association of School Nurses, National League for Nursing, and American Academy of Child & Adolescent Psychiatry.
Full text of the legislation is available here.
###
WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA) and Marsha Blackburn (R-TN) introduced the Promoting United States Leadership in Standards Act of 2024, legislation aimed at restoring the U.S.’s position as a leader in international standards-setting for emerging technologies.
For decades, the United States led the world in developing new technologies, which allowed our country to set the rules of the road when it came to those technologies’ global standards. However, in recent years, Chinese companies backed by the Communist Party of China have overtaken the U.S., which has allowed the Chinese government to influence standards in ways that further their own interests.
“In recent years, the Communist Party of China has asserted their dominance in the global technology space, and as their status has risen, our authority and influence has fallen,” said Sen. Warner. “This legislation clearly outlines steps we must take to reestablish our leadership and ensure that we are doing all we can to set the global standards for critical and emerging technologies.”
“The Communist Chinese Party has made it their mission to undermine the U.S. and our interests around the globe by exploiting our deficiencies,” said Sen. Blackburn. “As they ramp up their efforts to dominate global standards for emerging technologies, the U.S. must be a global leader in innovation, and that includes setting standards that reflect our interests and values.”
Standards-setting bodies make critical decisions not only relating to technical specifications, but also relating to values, such as openness, safety, and accessibility, embedded in emerging technologies.
Specifically, the Promoting United States Leadership in Standards Act would:
- Require the National Institute of Standards and Technology (NIST) to submit a report to Congress that identifies current U.S. participation in standards development activities for AI and other CETs;
- Create an easy-to-access web portal to help stakeholders navigate and actively engage in international standardization efforts. The portal would include a list of relevant standards and information about how to participate in standardization activities related to AI and other CETs;
- Establish a pilot program to award $10 million in grants over 4 years to support the hosting of standards meetings for AI and other CETs in the U.S.;
- Create a report to Congress, after the third year of the program, that identifies grant recipients, provides a summary of expenses, assesses the effectiveness of the program to grow the number of standards meetings in the U.S, and shows the geographic distribution of event attendees.
“The United States must continue to lead global technical standardization. IEEE-USA supports Senator Warner's and Senator Blackburn's Promoting United States Leadership in Standards Act of 2024 to enable necessary increased stakeholder access to the standards development process, especially for those who may not have the resources to fully engage in the development activities. Enabling access for underrepresented actors increases the diversity of voices and ensures democratization of the process, thus strengthening the open markets in which the U.S. is highly competitive,” said Keith Moore, President, IEEE-USA.
“Cisco is engaged in the proper development and deployment of AI across all aspects of the ecosystem, and we firmly believe U.S. leadership is fundamental in the development of global standards for AI and other critical technologies. This legislation will not only foster U.S. participation in standards-setting bodies but also help create a policy environment that unlocks the benefits of responsible and trustworthy use of AI. We applaud the bipartisan efforts of Senators Warner and Blackburn and look forward to engaging them and other stakeholders on this important issue,” said Nicole Isaac, Vice President, Global Public Policy, Government Affairs, Cisco.
“We applaud Senators Warner and Blackburn for introducing the Promoting United States Leadership in Standards Act, which can better position standards development organizations and standards participants for success,” said Morgan Reed, President of ACT | The App Association. “A strong, yet nimble approach to technical standards development is a foundational imperative for ACT | The App Association’s members as they create tomorrow’s innovations. Nurturing open and global participation in standardization activities, especially when hosted in the United States, can address shared technical challenges while advancing American technology leadership. This legislation represents a decisive step in the right direction. We look forward to working with the sponsors to ensure the language best achieves Congress’ goals as the bill moves forward.”
“XRA is proud to support the Promoting United States Leadership in Standards Act of 2024. Emerging technologies like XR drive economic growth and help the U.S. address strategic challenges like workforce development, industrial productivity, and healthcare delivery. Foreign governments, particularly competitors of the U.S., see immersive technology and other emerging technologies as their chance to shape the future of computing and grow their economic influence. These competitors are actively engaged in the development of technical standards and governance frameworks and understand that early leadership in these bodies yield long-term advantage. Unfortunately, the United States Government’s participation in these critical international standards bodies has not kept pace," says the XR Association’s Senior Vice President of Public Policy, Joan O’Hara. “This legislation will strengthen the United States’ leadership role in the development, adoption, and governance of critical emerging technologies like XR.”
Full text of the legislation is available here. A one-page summary of the legislation is available here.
###
Warner, Kennedy Introduce Legislation to Require Financial Regulators to Respond to AI Market Threats
Dec 19 2023
WASHINGTON — U.S. Sens. Mark R. Warner (D-VA) and John Kennedy (R-LA), both members of the Senate Committee on Banking, Housing, and Urban Affairs, introduced the Financial Artificial Intelligence Risk Reduction Act, bipartisan legislation to require financial regulators to address uses of AI-generated content that could disrupt financial markets.
“AI has tremendous potential but also enormous disruptive power across a variety of fields and industries – perhaps none more so than our financial markets,” said Sen. Warner, a former business executive and venture capitalist. “The time to address those vulnerabilities is now.”
“AI is moving quickly, and our laws should do the same to prevent AI manipulation from rattling our financial markets. Our bill would help ensure that AI threats do not put Americans’ investments and retirement dreams at risk,” Sen. Kennedy said.
The legislation requires the Financial Stability Oversight Council (FSOC) to coordinate financial regulators’ response to threats to the stability of the markets posed by AI, including the use of “deepfakes” by malign actors and other practices associated with the use of AI tools that could undermine the financial system, such as trading algorithms. The legislation also requires FSOC to identify gaps in existing regulations, guidance, and exam standards that could hinder effective responses to AI threats, and implement specific recommendations to address those gaps.
In response to the potential magnitude of the threat, the Financial Artificial Intelligence Risk Reduction Act would also provide for treble penalties when AI is used in violations of Securities and Exchange Commission (SEC) rules, including acts of market manipulation and fraud. The legislation also makes clear that anyone who uses an AI model is responsible for making sure that everything that model does complies with all securities laws.
The legislation also provides the National Credit Union Administration (NCUA) and Federal Housing Finance Agency (FHFA) with the authority necessary to oversee AI service providers, similar to the authority the other financial regulators have had for decades.
A copy of the legislation is available here.
###
WASHINGTON – U.S. Sens. Mark R. Warner and Tim Kaine (both D-VA) joined Sens. Sheldon Whitehouse (D-RI), Lisa Murkowski (R-AK), and Marsha Blackburn (R-TN) in introducing the Telehealth Response for E-prescribing Addiction Therapy Services (TREATS) Act, legislation that would increase access to telehealth services for individuals with substance use disorder (SUD). During the COVID-19 pandemic, the Drug Enforcement Administration (DEA) temporarily removed an in-person exam requirement for providers to prescribe SUD treatments. This change expanded access to care and reduced the risk of overdose, but it is set to expire at the end of next year. The TREATS Act would make this flexibility permanent.
“Over the course of the COVID-19 pandemic we learned valuable lessons in how to adapt our health care system in order to better care for patients, including the successful treatment of patients with opioid addiction using telehealth services,” said Sen. Warner. “The TREATS Act would make permanent commonsense, safe telehealth practices that will expand care options for those battling with substance use disorder.”
“Telehealth has helped many Virginians get the health care they need, including access to treatments for substance use disorder,” said Sen. Kaine. “By permanently allowing doctors to prescribe life-saving treatments via telehealth, the TREATS Act would better support individuals in recovery and help reduce the risk of overdoses.”
In 2021, 2,622 Virginians died from overdose, averaging seven Virginians per day. Despite strong evidence that medication is the most effective treatment for SUD, only one in five Americans with SUD receive medication treatment that would help them quit and stay in recovery. The TREATS Act would make life-saving medication like buprenorphine more accessible and save lives.
Joining the senators in cosponsoring this legislation are Sens. Catherine Cortez Masto (D-NV), Thom Tillis (R-NC), Shelley Moore Capito (R-WV), Amy Klobuchar (D-MN), Mark Kelly (D-AZ), and Cory Booker (D-NJ). U.S. Representatives David Trone (D-MD-6), Jay Obernolte (R-CA-23), and Brian Fitzpatrick (R-PA-1) led the introduction of the legislation in the House.
Full text of the bill is available here.
###
WASHINGTON – Today, U.S. Sens. Mark R. Warner and Tim Kaine (both D-VA) announced $2,483,817 in federal funding for the Commonwealth to provide distance learning services for rural areas. The funding was awarded through U.S. Department of Agriculture Rural Development Distance Learning & Telemedicine Grants, which provide rural communities with advanced telecommunications technology. In all, these grants will provide 197,010 Virginia students with the technology they need to take advantage of education opportunities through local colleges and universities.
“Over the past several years, we have seen the tremendous capabilities of distance learning to extend opportunities to students that have previously been limited by their geography,” said the senators. “This funding will provide 197,010 Virginia students with the technology and infrastructure they need to continue taking advantage of distance learning.”
The funding is broken down as follows:
- $952,388 for Germanna Community College in order to equip 10 locations throughout Spotsylvania, Stafford, Orange, Culpeper, Wise, Page, and Madison counties with video conferencing equipment. Instructors at Germanna Community College will use that technology to deliver mental health and healthcare educational courses to benefit 5,372 students;
- $740,793 for Lee County School District in order to equip 12 locations throughout Lee County with interactive teleconferencing equipment. Instructors at Lee County Public Schools will use that technology to deliver instructional resources, professional development courses, and mental health services to benefit 5,545 students;
- $475,122 for Southside Virginia Community College in order to equip six locations throughout Mecklenburg, Brunswick, Charlotte, Nottoway and Greensville counties with a synchronous interactive video conferencing system. Instructors at Southside Virginia Community College will use that technology to deliver nursing and emergency management services simulation labs, and shared college courses to benefit 2,805 students; and
- $315,5134 for Virginia State University in order to equip 15 locations throughout Petersburg, Roanoke, Prince George, Sussex, Dinwiddie, Henry, Southampton, Franklin, Halifax, Louisa, Brunswick, Greensville and Mecklenburg counties with integrated interactive teaching rooms at the college sites and interactive digital white boards at the high school sites. Instructors at Virginia State University will use that technology to deliver dual credit college courses to benefit 183,288 students.
Sens. Warner and Kaine have long supported efforts to better connect rural Virginia, including through significant funding to extend broadband capabilities to every corner of the Commonwealth.
###
Sens. Warner, Moran Introduce Legislation to Establish AI Guidelines for Federal Government
Nov 02 2023
WASHINGTON – U.S. Sens. Mark R. Warner (D-VA) and Jerry Moran (R-KS) today introduced legislation to establish guidelines to be used within the federal government to mitigate risks associated with Artificial Intelligence (AI) while still benefiting from new technology. U.S. Rep. Ted W. Lieu (D-CA-36) plans to introduce companion legislation in the U.S. House of Representatives.
Congress directed the National Institute of Standards and Technology (NIST) to develop an AI Risk Management Framework that organizations, public and private, could employ to ensure they use AI systems in a trustworthy manner. This framework was released earlier this year and is supported by a wide range of public and private sector organizations, but federal agencies are not currently required to use this framework to manage their use of AI systems.
The Federal Artificial Intelligence Risk Management Act would require federal agencies to incorporate the NIST framework into their AI management efforts to help limit the risks that could be associated with AI technology.
“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries,” said Sen. Warner. “But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly. It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”
“AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector,” said Sen. Moran. “However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data. The sensible guidelines established by NIST are already being utilized in the private sector and should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions.”
“Okta is a strong proponent of interoperability across technical standards and governance models alike and as such we applaud Senators Warner and Moran for their bipartisan Federal AI Risk Management Framework Act,” said Michael Clauser, Director, Head of US Federal Affairs, Okta. “This bill complements the Administration’s recent Executive Order on Artificial Intelligence (AI) and takes the next steps by providing the legislative authority to require federal software vendors and government agencies alike to develop and deploy AI in accordance with the NIST AI Risk Management Framework (RMF). The RMF is a quality model for what public-private partnerships can produce and a useful tool as AI developers and deployers govern, map, measure, manage, and mitigate risk from low- and high-impact AI models alike.”
“IEEE-USA heartily supports the Federal Artificial Intelligence Risk Management Act of 2023,” said Russell Harrison, Managing Director, IEEE-USA. “Making the NIST Risk Management Framework (RMF) mandatory helps protect the public from unintended risks of AI systems yet permits AI technology to mature in ways that benefit the public. Requiring agencies to use standards, like those developed by IEEE, will protect both public welfare and innovation by providing a useful checklist for agencies implementing AI systems. Required compliance does not interfere with competitiveness; it promotes clarity by setting forth a ‘how-to.’”
“Procurement of AI systems is challenging because AI evaluation is a complex topic and expertise is often lacking in government.” said Dr. Arvind Narayanan, Professor of Computer Science, Princeton University. “It is also high-stakes because AI is used for making consequential decisions. The Federal Artificial Intelligence Risk Management Act tackles this important problem with a timely and comprehensive approach to revamping procurement by shoring up expertise, evaluation capabilities, and risk management.”
"Risk management in AI requires making responsible choices with appropriate stakeholder involvement at every stage in the technology's development; by requiring federal agencies to follow the guidance of the NIST AI Risk Management Framework to that end, the Federal AI Risk Management Act will contribute to making the technology more inclusive and safer overall,” said Yacine Jernite, Machine Learning & Society Lead, Hugging Face. “Beyond its direct impact on the use of AI technology by the Federal Government, this will also have far-reaching consequences by fostering more shared knowledge and development of necessary tools and good practices. We support the Act and look forward to the further opportunities it will bring to build AI technology more responsibly and collaboratively."
“The Enterprise Cloud Coalition supports the Federal AI Risk Management Act of 2023, which mandates agencies adopt the NIST AI Risk Management Framework to guide the procurement of AI solutions,” said Andrew Howell, Executive Director, Enterprise Cloud Coalition. “By standardizing risk management practices, this act ensures a higher degree of reliability and security in AI technologies used within our government, aligning with our coalition's commitment to trust in technology. We believe this legislation is a critical step toward advancing the United States' leadership in the responsible use and development of artificial intelligence on the global stage.”
A one-page explanation of the legislation of the legislation can be found here.
# # #
Virginia Lawmakers Applaud Selection of Jefferson Lab to Lead High Performance Data Facility
Oct 16 2023
WASHINGTON – Today, Virginia lawmakers gathered at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) to celebrate the U.S. Department of Energy (DOE)’s meritorious selection of Jefferson Lab as the Hub Director for the new High Performance Data Facility (HPDF) – a scientific user facility that will specialize in advanced infrastructure for data-intensive science. The project to build the HPDF Hub will be a partnership between Jefferson Lab and Lawrence Berkeley National Laboratory (LBNL), with the two labs forming a joint project team led by Jefferson Lab and charged to create an integrated HPDF Hub design.
U.S. Sens. Mark R. Warner and Tim Kaine and their colleagues have worked tirelessly to engage the DOE, stress the extent of Jefferson Lab’s capabilities and potential for growth, and best position Virginia to be selected to host the HPDF. As part of this effort, the lawmakers worked with the General Assembly and Governor Youngkin to secure over $40 million in Commonwealth funds for the planning and construction of a shell building to house the HPDF – a bipartisan feat that demonstrated Virginia’s extraordinary support of Jefferson Lab’s mission and commitment to this project.
The High Performance Data Facility is envisioned as a national resource that will serve as the foundation for advancing DOE’s ambitious Integrated Research Infrastructure (IRI) program, which aims to provide researchers the ability to seamlessly meld DOE’s unique data resources, experimental user facilities, and advanced computing resources to accelerate the pace of discovery. The mission of the HPDF will be to enable and accelerate scientific discovery by delivering state-of-the-art data management infrastructure, capabilities, and tools. The HPDF will provide a crucial national resource for artificial intelligence (AI) research, opening new approaches for the nation’s researchers to attack fundamental problems in science and engineering that require nimble, shared access to large data sets, and real-time analysis of streamed data from experiments. DOE is the leading producer of scientific data in the world, and the HPDF will deliver a platform for a broad spectrum of data-intensive research as we enter the era of exascale supercomputing and exascale data.
Today’s news follows an announcement last year by Sens. Warner and Kaine that over $76 million in federal funding was headed to Jefferson Lab for project support and infrastructure upgrades. Those investments were made possible by the Inflation Reduction Act, which passed by one vote and was supported by both senators.
“The selection of Jefferson Lab as the location and lead of the High Performance Data Facility is a monumental win for the Lab, Hampton Roads, and the Commonwealth of Virginia,” said U.S. Senator Mark R. Warner (D-VA). “Since my days as Governor, I have pushed to broaden the mission and responsibilities of Jefferson Lab to reflect the current needs of our nation. Today’s announcement is a massive step towards realizing the goal of diversifying the mission of Jefferson Lab by providing the Lab with a critical national resource that will be used to tackle fundamental problems in science and engineering, including artificial intelligence research. I’m thankful for Secretary Granholm and the Department of Energy’s commitment to ensuring the U.S. can pave the way for the next generation of advanced data management and for providing Jefferson Lab the opportunity to lead this world-class project. I look forward to working with Jefferson Lab, the Department of Energy, and my colleagues in advancing this project as quickly as possible and look forward to seeing the innumerable scientific advancements that are sure to follow.”
“Jefferson Lab’s designation as the leader of the High Performance Data Facility is a powerful recognition of the contributions Virginians make to the research we need to remain at the cutting-edge of technological innovation,” said U.S. Senator Tim Kaine (D-VA). “I’m proud to have helped advocate for this designation, and for years have gone to bat through the annual government funding process to support Jefferson Lab’s work. I will continue to do all that I can to secure the resources Virginia scientists need to advance America’s competitiveness and supercomputing capabilities.”
“From Day One of my Administration, we’ve been working with leaders in our delegation, in our General Assembly, and at Jefferson Lab to secure the High Performance Data Facility, an asset that will accelerate research driven economic development in the Commonwealth. I was proud to work with General Assembly leaders to make a $40 million investment to help land this prize that will catalyze our economy for decades to come. Our Administration will continue to support the cutting-edge technological research that has established the Commonwealth as a nationwide leader in innovation,” said Virginia Governor Glenn Youngkin.
“We are honored to be selected by the DOE’s Advanced Scientific Computing Research program to lead this project,” said Jefferson Lab Director Stuart Henderson. “Building on our extensive experience with large data sets and high performance computing, and our new and ongoing partnerships exploring state-of-the-art approaches to data and data science, we will build a new facility that will revolutionize the way we make scientific discoveries.”
“Today’s announcement is great news for all those committed to innovation and scientific discovery. Since its founding, Jefferson Lab (JLab) has established itself as a world-leader in nuclear physics research while building acumen in the computing realm to manage, store, and interpret data. These two areas of expertise have proven synergistic and advanced the lab’s mission. The location of the High Performance Data Facility will create new opportunities at Jefferson Lab and in Hampton Roads while bringing JLab’s expertise to bear for the entire network of National Labs,” said Congressman Bobby Scott (VA-03).
“I am thrilled to see Jefferson Lab selected as the Hub Director for the Department of Energy’s High Performance Data Facility,” Congressman Rob Wittman (VA-01) said. “Jefferson Lab is a leader in nuclear research, and these investments will unlock vital data science advancements for the Hampton Roads region, the Commonwealth, and our nation. I am proud to have advocated for this important investment alongside my colleagues at the local, state, and federal levels over the years, and I look forward to the future developments that will follow the completion of this critical project.”
“Jefferson Lab’s High Performance Data Facility is a once-in-a-generation initiative that will catapult Newport News and the Commonwealth to the international frontlines of data analytics and advanced computing,” said Newport News Mayor Phillip Jones. “This revolutionary facility will transform scientific research and discovery. In addition to workforce and economic development impacts, there are innumerable opportunities for higher education, research, STEM learning, and commercial investments. This exciting new project, coupled with Jefferson Lab’s already robust scientific and educational offerings, will make Newport News an even greater hub of innovation and research.”
“This project will be one of the greatest economic development projects to come to Newport News in recent memory,” said State Senator Monty Mason, who represents Jefferson Lab as Senator of the 1st District. “As a steadfast advocate on the state level, I am proud to have secured critical state funding for the planning and preparation of this project. The city and the entire peninsula will be strengthened by this facility, bringing 100s of new jobs with salaries well over the region's median income, boosting our local economy, and further solidifying the Virginia peninsula as a leader in science innovation.”
“Today’s announcement is the culmination of years of collaboration between members of the Virginia General Assembly and our federal delegation to bring a High Performance Data Facility to Jefferson Lab. As Chairman of the House Appropriations Committee, I am excited that the Commonwealth’s investment will leverage between $300 million to $500 million in federal funds for this transformative opportunity,” said State Delegate and Chairman of the House Appropriations Committee Barry Knight.
“The investments made today by the Department of Energy (DOE) and the Commonwealth of Virginia into the High-Performance Data Facility mark the beginning of an unparalleled chapter for the laboratory and the wider educational community,” emphasized Dr. Sean J. Hearne, President and CEO of the Southeastern Universities Research Association (SURA). “This cutting-edge research facility serves as a gateway to explore the rapidly expanding realm of data science, offering extensive research and educational opportunities that are poised to redefine our world.”
“The Friends of Jefferson Lab, a coalition of business leaders spanning from Richmond to the oceanfront, are delighted Jefferson Lab has been chosen as the site for the high performance data facility,” said Alan Witt, Chair of Friends of JLab. “Jefferson Lab is a vital asset to Hampton Roads and the addition of this facility will add greatly to the economic, scientific, and educational fabric of the Virginia Peninsula, Hampton Roads, and the Commonwealth of Virginia.”
Specifically, the HPDF will have a “hub-and-spoke” model in which Jefferson Lab and LBNL will host mirrored centralized resources. It will enable high priority DOE mission applications at “spoke” sites by deploying and orchestrating distributed infrastructure at the spokes or other locations. Under Jefferson Lab’s leadership, the Jefferson Lab/LBNL partnership will assemble a world class HPDF Hub project team to deliver a geographically resilient and innovative HPDF core infrastructure capable of meeting the needs of a wide diversity of users, institutions, and use cases. This Jefferson Lab-led partnership will itself provide the template for the first spokes partnerships and blaze new paths in institutional engagement and outreach in the emerging era of AI- enabled integrated science.
As identified in the DOE’s Mission Need Statement for the High Performance Data Facility approved August 2020, DOE anticipates that the total project cost of the HPDF project, including the hub and spokes, will be between $300 million and $500 million in current and future year funds, subject to the availability of future year appropriations.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged several artificial intelligence (AI) companies to take additional action to promote safety and prevent malicious misuse of their products. In a series of letters, Sen. Warner applauded certain companies for publicly joining voluntary commitments proposed by the Biden administration, but encouraged them to broaden their efforts, and called on companies that have not taken this public step to commit to making their products more secure.
As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. In July, the Biden administration announced that several AI companies had agreed to a series of voluntary commitments that would promote greater security and transparency. However, the commitments were not fully comprehensive in scope or in participation, with many companies not publicly participating and several exploitable aspects of the technology left untouched by the commitments.
In a series of letters sent today, Sen. Warner pushed directly on companies that did not participate, including Apple, Midjourney, Mistral AI, Databricks, Scale AI, and Stability AI, requesting a response detailing the steps they plan to take to increase the security of their products and prioritize transparency. Sen. Warner additionally sent letters to companies that were involved in the Biden administration’s commitments, including Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI, asking that they extend commitments to less capable models and also develop consumer-facing commitments – such as development and monitoring practices – to prevent the most serious forms of misuse.
“While representing an important improvement upon the status quo, the voluntary commitments announced in July can be bolstered in key ways through additional commitments,” Sen. Warner wrote.
Sen. Warner also called specific attention to the urgent need for all AI companies to make additional commitments to safeguard against a few highly sensitive potential misuses, including non-consensual intimate image generation (including child sexual abuse material), social-scoring, real-time facial recognition, and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.
The letters follow up on Sen. Warner’s previous efforts to engage directly with AI companies to push for responsible development and deployment. In April, Sen. Warner directly called on AI CEOs to develop practices that would ensure that their products and systems are secure. In July, he also pushed on the Biden administration to keep working with AI companies to expand the scope of the voluntary commitments.
Additionally, Sen. Warner wrote to Google last week to raise concerns about their testing of new AI technology in real medical settings. Separately, he urged the CEOs of several AI companies to address a concerning report that generative chatbots were producing instructions on how to exacerbate an eating disorder. Additionally, he has introduced several pieces of legislation aimed at making tech safer and more humane, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
Copies of each of the letters can be found here.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence and author of the bipartisan law to invest in domestic semiconductor manufacturing, today released a statement on the one-year anniversary of the CHIPS and Science Act:
“I fought to pass the CHIPS and Science Act because it’s good for our supply chains, our families, and our national security to make semiconductors here at home. In the year since, the law has bolstered innovation, helped America to compete against countries like China for the technology of the future, and created good-paying manufacturing jobs that will grow the middle class.”
Nearly everything that has an “on” switch – from electric toothbrushes and calculators to airplanes and satellites – contains a semiconductor. One year ago, President Biden signed into law the CHIPS and Science Act, a law co-authored by Warner to make a nearly $53 billion investment in U.S. semiconductor manufacturing, research and development, and workforce, and create a 25 percent tax credit for capital investments in semiconductor manufacturing.
Semiconductors were invented in the United States, but today we produce only about 12 percent of global supply – and none of the most advanced chips. Similarly, investments in research and development have fallen to less than 1 percent of GDP from 2 percent in the mid-1960s at the peak of the space race. TheCHIPS and Science Act aims to change this by driving American competitiveness, making American supply chains more resilient, and supporting our national security and access to key technologies. In the one year since it was signed into law, companies have announced over $231 billion in commitments in semiconductor and electronics investments in the United States.
Last month, Sen. Warner co-hosted the CHIPS for Virginia Summit, convening industry, federal and state government, and academic leaders for a series of strategic discussions on how to propel Virginia forward in the booming U.S. semiconductor economy.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged Google CEO Sundar Pichai to provide more clarity into his company’s deployment of Med-PaLM 2, an artificial intelligence (AI) chatbot currently being tested in health care settings. In a letter, Sen. Warner expressed concerns about reports of inaccuracies in the technology, and called on Google to increase transparency, protect patient privacy, and ensure ethical guardrails.
In April, Google began testing Med-PaLM2 with customers, including the Mayo Clinic. Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While the technology has shown some promising results, there are also concerning reports of repeated inaccuracies and of Google’s own senior researchers expressing reservations about the readiness of the technology. Additionally, much remains unknown about where Med-PaLM 2 is being tested, what data sources it learns from, to what extent patients are aware of and can object to the use of AI in their treatment, and what steps Google has taken to protect against bias.
“While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” Sen. Warner wrote.
The letter raises concerns over AI companies prioritizing the race to establish market share over patient well-being. Sen. Warner also emphasizes his previous efforts to raise the alarm about Google skirting health privacy as it trained diagnostic models on sensitive health data without patients’ knowledge or consent.
“It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI,” Sen. Warner continued.
The letter poses a broad range of questions for Google to answer, requesting more transparency into exactly how Med-PaLM 2 is being rolled out, what data sources Med-PaLM 2 learns from, how much information and agency patients have over how AI is involved in their care, and more.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In April, Sen. Warner directly expressed concerns to several AI CEOs – including Sundar Pichai – about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure. Last month, he called on the Biden administration to work with AI companies to develop additional guardrails around the responsible deployment of AI. He has also introduced several pieces of legislation aimed at making tech more secure, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
A copy of the letter can be found here are below.
Dear Mr. Pichai,
I write to express my concern regarding reports that Google began providing Med-PaLM 2 to hospitals to test early this year. While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors.
Over the past year, large technology companies, including Google, have been rushing to develop and deploy AI models and capture market share as the technology has received increased attention following OpenAI’s launch of ChatGPT. Numerous media outlets have reported that companies like Google and Microsoft have been willing to take bigger risks and release more nascent technology in an effort to gain a first mover advantage. In 2019, I raised concerns that Google was skirting health privacy laws through secretive partnerships with leading hospital systems, under which it trained diagnostic models on sensitive health data without patients’ knowledge or consent. This race to establish market share is readily apparent and especially concerning in the health care industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in health care institutions in recent years, and the sensitivity of health information. One need look no further than AI pioneer Joseph Weizenbaum’s experiments involving chatbots in psychotherapy to see how users can put premature faith in even basic AI solutions.
According to Google, Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While AI models have previously been used in medical settings, the use of generative AI tools presents complex new questions and risks. According to the Wall Street Journal, a senior research director at Google who worked on Med-PaLM 2 said, “I don’t feel that this kind of technology is yet at a place where I would want it in my family’s healthcare journey.” Indeed, Google’s own research, released in May, showed that Med-PaLM 2’s answers contained more inaccurate or irrelevant information than answers provided by physicians. It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI
Given these serious concerns and the fact that VHC Health, based in Arlington, Virginia, is a member of the Mayo Clinic Care Network, I request that you provide answers to the following questions.
- Researchers have found large language models to display a phenomenon described as “sycophany,” wherein the model generates responses that confirm or cater to a user’s (tacit or explicit) preferred answers, which could produce risks of misdiagnosis in the medical context. Have you tested Med-PaLM 2 for this failure mode?
- Large language models frequently demonstrate the tendency to memorize contents of their training data, which can risk patient privacy in the context of models trained on sensitive health information. How has Google evaluated Med-PaLM 2 for this risk and what steps has Google taken to mitigate inadvertent privacy leaks of sensitive health information?
- What documentation did Google provide hospitals, such as Mayo Clinic, about Med-PaLM 2? Did it share model or system cards, datasheets, data-statements, and/or test and evaluation results?
- Google’s own research acknowledges that its clinical models reflect scientific knowledge only as of the time the model is trained, necessitating “continual learning.” What is the frequency with which Google fully or partially re-trains Med-PaLM 2? Does Google ensure that licensees use only the most up-to-date model version?
- Google has not publicly provided documentation on Med-PaLM 2, including refraining from disclosing the contents of the model’s training data. Does Med-PaLM 2’s training corpus include protected health information?
- Does Google ensure that patients are informed when Med-PaLM 2, or other AI models offered or licensed by, are used in their care by health care licensees? If so, how is the disclosure presented? Is it part of a longer disclosure or more clearly presented?
- Do patients have the option to opt-out of having AI used to facilitate their care? If so, how is this option communicated to patients?
- Does Google retain prompt information from health care licensees, including protected health information contained therein? Please list each purpose Google has for retaining that information.
- What license terms exist in any product license to use Med-PaLM 2 to protect patients, ensure ethical guardrails, and prevent misuse or inappropriate use of Med-PaLM 2? How does Google ensure compliance with those terms in the post-deployment context?
- How many hospitals is Med-PaLM 2 currently being used at? Please provide a list of all hospitals and health care systems Google has licensed or otherwise shared Med-Palm 2 with.
- Does Google use protected health information from hospitals using Med-PaLM 2 to retrain or finetune Med-PaLM 2 or any other models? If so, does Google require that hospitals inform patients that their protected health information may be used in this manner?
- In Google’s own research publication announcing Med-PaLM 2, researchers cautioned about the need to adopt “guardrails to mitigate against over-reliance on the output of a medical assistant.” What guardrails has Google adopted to mitigate over-reliance on the output of Med-PaLM 2 as well as when it particularly should and should not be used? What guardrails has Google incorporated through product license terms to prevent over-reliance on the output?
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged the Biden administration to build on its recently announced voluntary commitments from several prominent artificial intelligence (AI) leaders in order to promote greater security, safety, and trust in the rapidly developing AI field.
As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. On Friday, the Biden administration announced that several AI companies had agreed to a series of measures that would promote greater security and transparency. Sen. Warner wrote to the administration to applaud these efforts and laid out a series of next steps to bolster this progress, including extending commitments to less capable models, seeking consumer-facing commitments, and developing an engagement strategy to better address security risks.
“These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks,” Sen. Warner wrote. “As the current commitments stand, leading vendors do not appear inclined to extending these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models.”
The letter builds on Sen. Warner’s continued advocacy for the responsible development and deployment of AI. In April, Sen. Warner directly expressed concerns to several AI CEOs about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure.
The letter also affirms Congress’ role in regulating AI, and expands on the annual Intelligence Authorization Act, legislation that recently passed unanimously through the Sente Select Committee on Intelligence. Sen. Warner urges the administration to adopt the strategy outlined in this pending bill as well as work with the FBI, CISA, ODNI, and other federal agencies to fully address the potential risks of AI technology.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In addition to his April letters, has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
A copy of the letter can be found here and below.
Dear President Biden,
I write to applaud the Administration’s significant efforts to secure voluntary commitments from leading AI vendors related to promoting greater security, safety, and trust through improved development practices. These commitments – largely applicable to these vendors’ most advanced products – can materially reduce a range of security and safety risks identified by researchers and developers in recent years. In April, I wrote to a number of these same companies, urging them to prioritize security and safety in their development, product release, and post-deployment practices. Among other things, I asked them to fully map dependencies and downstream implications of compromise of their systems; focus greater financial, technical and personnel resources on internal security; and improve their transparency practices through greater documentation of system capabilities, system limitations, and training data.
These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks. Moreover, a growing roster of highly-capable open source models have been released to the public – and would benefit from similar pre-deployment commitments contained in a number of the July 21st obligations. As the current commitments stand, leading vendors do not appear inclined to extending these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models.
To be sure, responsibility ultimately lies with Congress to develop laws that advance consumer and patient safety, address national security and cyber-crime risks, and promote secure development practices in this burgeoning and highly consequential industry – and in the downstream industries integrating their products. In the interim, the important commitments your Administration has secured can be bolstered in a number of important ways.
First, I strongly encourage your Administration to continue engagement with this industry to extend these all of these commitments more broadly to less capable models that, in part through their wider adoption, can produce the most frequent examples of misuse and compromise.
Second, it is vital to build on these developer- and researcher-facing commitments with a suite of lightweight consumer-facing commitments to prevent the most serious forms of abuse. Most prominent among these should be commitments from leading vendors to adopt development practices, licensing terms, and post-deployment monitoring practices that prevent non-consensual intimate image generation, social-scoring, real-time facial recognition (in contexts not governed by existing legal protections or due process safeguards), and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.
Lastly, the Administration’s successful high-level engagement with the leadership of these companies must be complemented by a deeper engagement strategy to track national security risks associated with these technologies. In June, the Senate Select Committee on Intelligence on a bipartisan basis advanced our annual Intelligence Authorization Act, a provision of which directed the President to establish a strategy to better engage vendors, downstream commercial users, and independent researchers on the security risks posed by, or directed at, AI systems.
This provision was spurred by conversations with leading vendors, who confided that they would not know how best to report malicious activity – such as suspected intrusions of their internal networks, observed efforts by foreign actors to generate or refine malware using their tools, or identified activity by foreign malign actors to generate content to mislead or intimidate voters. To be sure, a highly-capable and well-established set of resources, processes, and organizations – including the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Office of the Director of National Intelligence’s Foreign Malign Influence Center – exist to engage these communities, including through counter-intelligence education and defensive briefings. Nonetheless, it appears that these entities have not been fully activated to engage the range of key stakeholders in this space. For this reason, I would encourage you to pursue the contours of the strategy outlined in our pending bill.
Thank you for your Administration’s important leadership in this area. I look forward to working with you to develop bipartisan legislation in this area.
###
Warner Announces $1.8 Million for Virginia Universities to Train AI to Fight Cyberattacks
May 04 2023
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) today announced $1,820,000 for Virginia universities to research and develop AI capabilities to mitigate cyberattacks. Federal funding will allow the University of Virginia and Norfolk State University to study innovative AI-based approaches to cybersecurity. Researchers from these institutions will collaborate with teams at 10 additional educational institutions and 20 private industry partners to develop revolutionary methods to counter cyberattacks in which AI-enabled intelligent security agents will cooperate with humans to build more resilient networks.
“Addressing the cybersecurity threats that our nation faces requires constant adaptation and innovation, and utilizing AI to counter these threats is an incredibly exciting use-case for this emerging technology,” said Sen. Warner. “This funding will allow teams at the University of Virginia and Norfolk State to do groundbreaking research on ways AI can help safeguard against cyberattacks. I congratulate UVA and NSU on receiving this funding, and I can’t wait to see what they discover and develop.
The funding is distributed as follows:
· Norfolk State University will receive $975,000.
· University of Virginia will receive $845,000.
Funding for these awards is provided jointly by the National Science Foundation, the Department of Homeland Security, and IBM. Investments are designed to build a diverse AI workforce across the United States.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for improving cybersecurity and security-oriented design by AI companies. In April, he sent a series of letters to CEOs of several AI companies urging them to prioritize security, combat bias, and responsibly roll out new technologies. In November 2022, he published “Cybersecurity is Patient Safety,” a policy options paper that outlined current cybersecurity threats facing health care providers and offering a series of policy solutions to improve cybersecurity. As Chairman of the Senate Select Committee on Intelligence, Sen. Warner co-authored legislation that requires companies responsible for U.S. critical infrastructure report cybersecurity incidents to the government. He has also introduced several pieces of legislation aimed at building a more secure internet, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries and the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) released the statement below after the Drug Enforcement Agency (DEA) announced that it would extend current flexibilities around telehealth prescriptions of controlled substances, including those that treat opioid use disorder and anxiety, while it reviews a record number of comments received in response to its new proposed telemedicine rules. This move follows strong advocacy by Sen. Warner, who spoke out in March about the need to ensure that patients can continue getting their medications and sent a letter to DEA in August 2022 asking them to explain their plan for continuity of care after the COVID-19 Public Health Emergency.
“I’m pleased to see that the DEA is taking additional time to consider the comments to their proposed rule, which I believe overlooked the key benefits and lessons learned during the pandemic. This proposed rule could counterproductively exacerbate the opioid crisis and push patients to seek dangerous alternatives to proper health care, such as self-medicating, by removing a telehealth option in many cases. I’m working with my colleagues in Congress on a response to DEA’s proposed rule, and I look forward to further robust discussion on this critical issue.”
During COVID-19, patients widely adopted telehealth as a convenient and accessible way to get care remotely. This was made possible by the COVID-19 Public Health Emergency, which allowed for a number of flexibilities, including utilizing an exception to the in-person medical evaluation requirement under the Ryan Haight Online Pharmacy Consumer Protection Act, legislation regulating the online prescription of controlled substances. With the Public Health Emergency set to expire, patients will soon lose the ability to reap the benefits of a mature telehealth system in which responsible providers know how to take care of their patients remotely when appropriate.
Since 2008, Congress has directed the DEA to set up a special registration process, another exception process under the Ryan Haight Act, that would open up the door for quality health care providers to evaluate a patient and prescribe controlled substances over telehealth safely, as they’ve done during the pandemic. This special registration process has yet to be established, and DEA wrote they believe this proposed rule fulfills those Congressional mandates, despite not proposing such a registration.
Sen. Warner, a former tech entrepreneur, has been a longtime advocate for increased access to telehealth. He is a co-author of the CONNECT for Health Act, which would expand coverage of telehealth services through Medicare, make COVID-19 telehealth flexibilities permanent, improve health outcomes, and make it easier for patients to safely connect with their doctors. He previously wrote to both the Biden and Trump administrations, urging the DEA to finalize regulations long-delayed by prior administrations allowing doctors to prescribe controlled substances through telehealth. Sen. Warner also sent a letter to Senate leadership during the height of the COVID-19 crisis, calling for the permanent expansion of access to telehealth services.
In 2018, Sen. Warner included a provision to expand financial coverage for virtual substance use treatment in the Opioid Crisis Response Act of 2018. In 2003, then-Gov. Warner expanded Medicaid coverage for telemedicine statewide, including evaluation and management visits, a range of individual psychotherapies, the full range of consultations, and some clinical services, including in cardiology and obstetrics. Coverage was also expanded to include non-physician providers. Among other benefits, the telehealth expansion allowed individuals in medically underserved and remote areas of Virginia to access quality specialty care that isn’t always available at home.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) joined 27 colleagues in introducing the Kids Online Safety Act, comprehensive bipartisan legislation to protect children online.
The Kids Online Safety Act provides young people and parents with the tools, safeguards, and transparency they need to protect against online harms. The bill requires social media platforms to by default enable a range of protections against addictive design and algorithmic recommendations. It also requires privacy protections, dedicated channels to report harm, and independent audits by experts and academic researchers to ensure that social media platforms are taking meaningful steps to address risks to kids.
“Experts are clear: kids and teens are growing up in a toxic and unregulated social media landscape that promotes bullying, eating disorders, and mental health struggles,” said Sen. Warner. “The Kids Online Safety Act would give kids and parents the long-overdue ability to control some of the least transparent and most damaging aspects of social media, creating a safer and more humane online environment.”
Reporting has shown that social media companies have proof that their platforms contribute to mental health issues in children and teens, and that young people have demonstrated a precipitous rise in mental health crises over the last decade.
Specifically, the Kids Online Safety Act would:
· Require that social media platforms provide minors with options to protect their information, disable addictive product features, and opt out of algorithmic recommendations. Platforms would be required to enable the strongest settings by default.
· Give parents new controls to help support their children and identify harmful behaviors, and provides parents and children with a dedicated channel to report harms to kids to the platform.
· Create a responsibility for social media platforms to prevent and mitigate harms to minors, such as promotion of suicide, eating disorders, substance abuse, sexual exploitation, and unlawful products for minors (e.g. gambling and alcohol).
· Require social media platforms to perform an annual independent audit that assesses the risks to minors, their compliance with this legislation, and whether the platform is taking meaningful steps to prevent those harms.
· Provide academic and public interest organizations with access to critical datasets from social media platforms to foster research regarding harms to the safety and well-being of minors.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and building a safer online environment. He has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology and social media platforms from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
The one-page summary of the bill can be found here, the section-by-section summary can be found here, and the full text of the Senate bill can be found here.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged CEOs of several artificial intelligence (AI) companies to prioritize security, combat bias, and responsibly roll out new technologies. In a series of letters, Sen. Warner expressed concerns about the potential risks posed by AI technology, and called on companies to ensure that their products and systems are secure.
In the past several years, AI technology has rapidly advanced while chatbots and other generative AI products have simultaneously widened the accessibility of AI products and services. As these technologies are rolled out broadly, open source researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in the prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques.
“[W]ith the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work,” Sen. Warner wrote. “Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field.”
Sen. Warner highlighted several specific security risks associated with AI, including data supply chain security and data poisoning attacks. He also expressed concerns about algorithmic bias, trustworthiness, and potential misuse or malicious use of AI systems.
The letters include a series of questions for companies developing large-scale AI models to answer, aimed at ensuring that they are taking appropriate measures to address these security risks. Among the questions are inquiries about companies' security strategies, limits on third-party access to their models that undermine the ability to evaluate model fitness, and steps taken to ensure secure and accurate data inputs and outputs. Recipients of the letter include the CEOs of OpenAI, Scale AI, Meta, Google, Apple, Stability AI, Midjourney, Anthropic, Percipient.ai, and Microsoft.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. He has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
A copy of the letters can be found here and below.
I write today regarding the need to prioritize security in the design and development of artificial intelligence (AI) systems. As companies like yours make rapid advancements in AI, we must acknowledge the security risks inherent in this technology and ensure AI development and adoption proceeds in a responsible and secure way. While public concern about the safety and security of AI has been on the rise, I know that work on AI security is not new. However, with the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work. Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field.
I recognize the important work you and your colleagues are doing to advance AI. As a leading company in this emerging technology, I believe you have a responsibility to ensure that your technology products and systems are secure. I have long advocated for incorporating security-by-design, as we have found time and again that failing to consider security early in the product development lifecycle leads to more costly and less effective security. Instead, incorporating security upfront can reduce costs and risks. Moreover, the last five years have demonstrated that the ways in which the speed, scale, and excitement associated with new technologies have frequently obscured the shortcomings of their creators in anticipating the harmful effects of their use. AI capabilities hold enormous potential; however, we must ensure that they do not advance without appropriate safeguards and regulation.
While it is important to apply many of the same security principles we associate with traditional computing services and devices, AI presents a new set of security concerns that are distinct from traditional software vulnerabilities. Some of the AI-specific security risks that I am concerned about include the origin, quality, and accuracy of input data (data supply chain), tampering with training data (data poisoning attacks), and inputs to models that intentionally cause them to make mistakes (adversarial examples). Each of these risks further highlighting the need for secure, quality data inputs. Broadly speaking, these techniques can effectively defeat or degrade the integrity, security, or performance of an AI system (including the potential confidentiality of its training data). As leading models are increasingly integrated into larger systems, often without fully mapping dependencies and downstream implications, the effects of adversarial attacks on AI systems are only magnified.
In addition to those risks, I also have concerns regarding bias, trustworthiness, and potential misuse or malicious use of AI systems. In the last six months, we have seen open source researchers repeatedly exploit a number of prominent, publicly-accessible generative models – crafting a range of clever (and often foreseeable) prompts to easily circumvent a system’s rules. Examples include using widely-adopted models to generate malware, craft increasingly sophisticated phishing techniques, contribute to disinformation, and provide harmful information. It is imperative that we address threats to not only digital security, but also threats to physical security and political security.
In light of this, I am interested in learning about the measures that your company is taking to ensure the security of its AI systems. I request that you provide answers to the following questions no later than May 26, 2023.
Questions:
1. Can you provide an overview of your company’s security approach or strategy?
2. What limits do you enforce on third-party access to your model and how do you actively monitor for non-compliant uses?
3. Are you participating in third party (internal or external) test & evaluation, verification & validation of your systems?
4. What steps have you taken to ensure that you have secure and accurate data inputs and outputs? Have you provided comprehensive and accurate documentation of your training data to downstream users to allow them to evaluate whether your model is appropriate for their use?
5. Do you provide complete and accurate documentation of your model to commercial users? Which documentation standards or procedures do you rely on?
6. What kind of input sanitization techniques do you implement to ensure that your systems are not susceptible to prompt injection techniques that pose underlying system risks?
7. How are you monitoring and auditing your systems to detect and mitigate security breaches?
8. Can you explain the security measures that you take to prevent unauthorized access to your systems and models?
9. How do you protect your systems against potential breaches or cyberattacks? Do you have a plan in place to respond to a potential security incident? What is your process for alerting users that have integrated your model into downstream systems?
10. What is your process for ensuring the privacy of sensitive or personal information you that your system uses?
11. Can you describe how your company has handled past security incidents?
12. What security standards, if any, are you adhering to? Are you using NIST’s AI Risk Management Framework?
13. Is your company participating in the development of technical standards related to AI and AI security?
14. How are you ensuring that your company continues to be knowledgeable about evolving security best practices and risks?
15. How is your company addressing concerns about AI trustworthiness, including potential algorithmic bias and misuse or malicious use of AI?
16. Have you identified any security challenges unique to AI that you believe policymakers should address?
Thank you for your attention to these important matters and I look forward to your response.
###
WASHINGTON – U.S. Sens. Mark R. Warner (D-VA) and John Hoeven (R-ND) this week introduced legislation to support the research and development of unmanned aerial systems (UAS) technologies at the nation’s UAS test sites, including the site at Virginia Tech.
“Unmanned Aerial Systems have the potential to transform the way we manage disasters, maintain our infrastructure, administer medicine, tackle national security threats, and conduct day-to-day business,” said Sen. Warner. “UAS test sites, such as the one located at Virginia Tech, are crucial to the research and development of these technologies and I am glad to continue building on the progress we have made over the last decade.”
“UAS play a crucial role in our country’s defense, and there is tremendous potential yet to be realized, benefiting our national security as well as our economy,” said Sen. Hoeven. “The UAS test sites, including the Northern Plains UAS Test Site in North Dakota, are at the center of our efforts to ensure these aircraft can be safely integrated into our national airspace. This legislation supports their ongoing work and dovetails with the new BVLOS waivers we recently secured for our test site, further strengthening North Dakota’s position in this dynamic industry.”
Specifically, this legislation:
- Extends the authorization for the Federal Aviation Administration’s (FAA) UAS test sites for an additional five years through 2028;
- Formally authorizes research grants through the FAA for the purpose of demonstrating or validating technology related to the integration of UAS in the national airspace system (NAS);
- Requires a grant recipient to have a contract with an FAA UAS test site;
- Identifies key research priorities, including: detect and avoid capabilities; beyond visual line of sight (BVLOS) operations; operation of multiple unmanned aircraft systems; unmanned systems traffic management; command and control; and UAS safety standards.
This legislation builds on Sen. Warner’s efforts to expand the domestic production of unmanned systems, including driverless cars, drones, and unmanned maritime vehicles and make Virginia a national leader in this growing sector. Earlier this year, he introduced the Increasing Competitiveness for American Drones Act, legislation that will clear the way for drones to be used for commercial transport of goods across the country. As Chairman of the Senate Intelligence Committee, he has led efforts in Congress to shore up U.S. national and cybersecurity against hostile foreign governments through unmanned air systems. Last month, Sen. Warner introduced legislation to prohibit the federal government from purchasing drones manufactured in countries identified as national security threats, such as the People’s Republic of China.
###
Senators Introduce Bipartisan Bill to Tackle National Security Threats from Foreign Tech
Mar 07 2023
WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, and John Thune (R-SD), ranking member of the Commerce Committee’s Subcommittee on Communications, Media and Broadband, led a group of 12 bipartisan senators to introduce the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, legislation that will comprehensively address the ongoing threat posed by technology from foreign adversaries by better empowering the Department of Commerce to review, prevent, and mitigate information communications and technology transactions that pose undue risk to our national security.
“Today, the threat that everyone is talking about is TikTok, and how it could enable surveillance by the Chinese Communist Party, or facilitate the spread of malign influence campaigns in the U.S. Before TikTok, however, it was Huawei and ZTE, which threatened our nation’s telecommunications networks. And before that, it was Russia’s Kaspersky Lab, which threatened the security of government and corporate devices,” said Sen. Warner. “We need a comprehensive, risk-based approach that proactively tackles sources of potentially dangerous technology before they gain a foothold in America, so we aren’t playing Whac-A-Mole and scrambling to catch up once they’re already ubiquitous.”
“Congress needs to stop taking a piecemeal approach when it comes to technology from adversarial nations that pose national security risks,” said Sen. Thune. “Our country needs a process in place to address these risks, which is why I’m pleased to work with Senator Warner to establish a holistic, methodical approach to address the threats posed by technology platforms – like TikTok – from foreign adversaries. This bipartisan legislation would take a necessary step to ensure consumers’ information and our communications technology infrastructure is secure.”
The RESTRICT Act establishes a risk-based process, tailored to the rapidly changing technology and threat environment, by directing the Department of Commerce to identify and mitigate foreign threats to information and communications technology products and services.
In addition to Sens. Warner and Thune, the legislation is co-sponsored by Sens. Tammy Baldwin (D-WI), Deb Fischer (R-NE), Joe Manchin (D-WV), Jerry Moran (R-KS), Michael Bennet (D-CO), Dan Sullivan (R-AK), Kirsten Gillibrand (D-NY), Susan Collins (R-ME), Martin Heinrich (D-NM), and Mitt Romney (R-UT).
The Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act would:
- Require the Secretary of Commerce to establish procedures to identify, deter, disrupt, prevent, prohibit, and mitigate transactions involving information and communications technology products in which any foreign adversary has any interest and poses undue or unacceptable risk to national security;
- Prioritize evaluation of information communications and technology products used in critical infrastructure, integral to telecommunications products, or pertaining to a range of defined emerging, foundational, and disruptive technologies with serious national security implications;
- Ensure comprehensive actions to address risks of untrusted foreign information communications and technology products by requiring the Secretary to take up consideration of concerning activity identified by other government entities;
- Educate the public and business community about the threat by requiring the Secretary of Commerce to coordinate with the Director of National Intelligence to provide declassified information on how transactions denied or otherwise mitigated posed undue or unacceptable risk.
“We need to protect Americans’ data and keep our country safe against today and tomorrow’s threats. While many of these foreign-owned technology products and social media platforms like TikTok are extremely popular, we also know these products can pose a grave danger to Wisconsin’s users and threaten our national security,” said Sen. Baldwin. “This bipartisan legislation will empower us to respond to our fast-changing environment – giving the United States the tools it needs to assess and act on current and future threats that foreign-owned technologies pose to Wisconsinites and our national security.”
“There are a host of dangerous technology platforms – including TikTok – that can be manipulated by China and other foreign adversaries to threaten U.S. national security and abuse Americans’ personal data. I’m proud to join Senator Warner in introducing bipartisan legislation that would put an end to disjointed interagency responses and strengthen the federal government’s ability to counter these digital threats,” said Sen. Fischer.
“Over the past several years, foreign adversaries of the United States have encroached on American markets through technology products that steal sensitive location and identifying information of U.S. citizens, including social media platforms like TikTok. This dangerous new internet infrastructure poses serious risks to our nation’s economic and national security,” said Sen. Manchin. “I’m proud to introduce the bipartisan RESTRICT ACT, which will empower the Department of Commerce to adopt a comprehensive approach to evaluating and mitigating these threats posed by technology products. As Chairman of the Senate Armed Services Subcommittee on Cybersecurity, I will continue working with my colleagues on both sides of the aisle to get this critical legislation across the finish line.”
“Foreign adversaries are increasingly using products and services to collect information on American citizens, posing a threat to our national security,” said Sen. Moran. “This legislation would give the Department of Commerce the authority to help prevent adversarial governments from introducing harmful products and services in the U.S., providing us the long-term tools necessary to combat the infiltration of our information and communications systems. The government needs to be vigilant against these threats, but a comprehensive data privacy law is needed to ensure Americans are able to control who accesses their data and for what purpose.”
“We shouldn’t let any company subject to the Chinese Communist Party’s dictates collect data on a third of our population – and while TikTok is just the latest example, it won’t be the last. The federal government can’t continue to address new foreign technology from adversarial nations in a one-off manner; we need a strategic, enduring mechanism to protect Americans and our national security. I look forward to working in a bipartisan way with my colleagues on the Senate Select Intelligence Committee to send this bill to the floor,” said Sen. Bennet.
“Our modern economy, communication networks, and military rely on a range of information communication technologies. Unfortunately, some of these technology products pose a serious risk to our national security,” said Sen. Gillibrand. “The RESTRICT Act will address this risk by empowering the Secretary of Commerce to carefully evaluate these products and ensure that they do not endanger our critical infrastructure or undermine our democratic processes.”
“China’s brazen incursion of our airspace with a sophisticated spy balloon was only the most recent and highly visible example of its aggressive surveillance that has targeted our country for years. Through hardware exports, malicious software, and other clandestine means, China has sought to steal information in an attempt to gain a military and economic edge,” said Sen. Collins. “Rather than taking a piecemeal approach to these hostile acts and reacting to each threat individually, our legislation would create a wholistic, government-wide response to proactively defend against surveillance attempts by China and other adversaries. This will directly improve our national security as well as safeguard Americans’ personal information and our nation’s vital intellectual property.”
"Cybersecurity is one of the most serious economic and national security challenges we face as a nation. The future of conflict is moving further away from the battlefield and closer to the devices and the networks everyone increasingly depends on. We need a systemic approach to addressing potential threats posed by technology from foreign adversaries. This bill provides that approach by authorizing the Administration to review and restrict apps and services that pose a risk to Americans’ data security. I will continue to push for technology defenses that the American people want and deserve to keep our country both safe and free,” said Sen. Heinrich.
“The Chinese Communist Party is engaged in a multi-generational, multi-faceted, and systematic campaign to replace the United States as the world’s superpower. One tool at its disposal—the ability to force social media companies headquartered in China, like TikTok’s parent company, to hand over the data it collects on users,” said Sen. Romney. “Our adversaries—countries like China, Russia, Iran—are increasingly using technology products to spy on Americans and discover vulnerabilities in our communications infrastructure, which can then be exploited. The United States must take stronger action to safeguard our national security against the threat technology products pose and this legislation is a strong step in that direction.”
A two-page summary of the bill is available here. A copy of the bill text is available here.
###