WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged several artificial intelligence (AI) companies to take additional action to promote safety and prevent malicious misuse of their products. In a series of letters, Sen. Warner applauded certain companies for publicly joining voluntary commitments proposed by the Biden administration, but encouraged them to broaden their efforts, and called on companies that have not taken this public step to commit to making their products more secure.
As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. In July, the Biden administration announced that several AI companies had agreed to a series of voluntary commitments that would promote greater security and transparency. However, the commitments were not fully comprehensive in scope or in participation, with many companies not publicly participating and several exploitable aspects of the technology left untouched by the commitments.
In a series of letters sent today, Sen. Warner pushed directly on companies that did not participate, including Apple, Midjourney, Mistral AI, Databricks, Scale AI, and Stability AI, requesting a response detailing the steps they plan to take to increase the security of their products and prioritize transparency. Sen. Warner additionally sent letters to companies that were involved in the Biden administration’s commitments, including Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI, asking that they extend commitments to less capable models and also develop consumer-facing commitments – such as development and monitoring practices – to prevent the most serious forms of misuse.
“While representing an important improvement upon the status quo, the voluntary commitments announced in July can be bolstered in key ways through additional commitments,” Sen. Warner wrote.
Sen. Warner also called specific attention to the urgent need for all AI companies to make additional commitments to safeguard against a few highly sensitive potential misuses, including non-consensual intimate image generation (including child sexual abuse material), social-scoring, real-time facial recognition, and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.
The letters follow up on Sen. Warner’s previous efforts to engage directly with AI companies to push for responsible development and deployment. In April, Sen. Warner directly called on AI CEOs to develop practices that would ensure that their products and systems are secure. In July, he also pushed on the Biden administration to keep working with AI companies to expand the scope of the voluntary commitments.
Additionally, Sen. Warner wrote to Google last week to raise concerns about its testing of new AI technology in real medical settings. Separately, he urged the CEOs of several AI companies to address a concerning report that generative chatbots were producing instructions on how to exacerbate an eating disorder. He has also introduced several pieces of legislation aimed at making tech safer and more humane, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
Copies of each of the letters can be found here.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence and author of the bipartisan law to invest in domestic semiconductor manufacturing, today released a statement on the one-year anniversary of the CHIPS and Science Act:
“I fought to pass the CHIPS and Science Act because it’s good for our supply chains, our families, and our national security to make semiconductors here at home. In the year since, the law has bolstered innovation, helped America to compete against countries like China for the technology of the future, and created good-paying manufacturing jobs that will grow the middle class.”
Nearly everything that has an “on” switch – from electric toothbrushes and calculators to airplanes and satellites – contains a semiconductor. One year ago, President Biden signed into law the CHIPS and Science Act, a law co-authored by Warner to make a nearly $53 billion investment in U.S. semiconductor manufacturing, research and development, and workforce, and create a 25 percent tax credit for capital investments in semiconductor manufacturing.
Semiconductors were invented in the United States, but today we produce only about 12 percent of global supply – and none of the most advanced chips. Similarly, investments in research and development have fallen to less than 1 percent of GDP from 2 percent in the mid-1960s at the peak of the space race. The CHIPS and Science Act aims to change this by driving American competitiveness, making American supply chains more resilient, and supporting our national security and access to key technologies. In the one year since it was signed into law, companies have announced over $231 billion in commitments in semiconductor and electronics investments in the United States.
Last month, Sen. Warner co-hosted the CHIPS for Virginia Summit, convening industry, federal and state government, and academic leaders for a series of strategic discussions on how to propel Virginia forward in the booming U.S. semiconductor economy.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged Google CEO Sundar Pichai to provide more clarity into his company’s deployment of Med-PaLM 2, an artificial intelligence (AI) chatbot currently being tested in health care settings. In a letter, Sen. Warner expressed concerns about reports of inaccuracies in the technology, and called on Google to increase transparency, protect patient privacy, and ensure ethical guardrails.
In April, Google began testing Med-PaLM 2 with customers, including the Mayo Clinic. Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While the technology has shown some promising results, there are also concerning reports of repeated inaccuracies and of Google’s own senior researchers expressing reservations about the readiness of the technology. Additionally, much remains unknown about where Med-PaLM 2 is being tested, what data sources it learns from, to what extent patients are aware of and can object to the use of AI in their treatment, and what steps Google has taken to protect against bias.
“While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” Sen. Warner wrote.
The letter raises concerns over AI companies prioritizing the race to establish market share over patient well-being. Sen. Warner also emphasizes his previous efforts to raise the alarm about Google skirting health privacy laws as it trained diagnostic models on sensitive health data without patients’ knowledge or consent.
“It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI,” Sen. Warner continued.
The letter poses a broad range of questions for Google to answer, requesting more transparency into exactly how Med-PaLM 2 is being rolled out, what data sources Med-PaLM 2 learns from, how much information and agency patients have over how AI is involved in their care, and more.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In April, Sen. Warner directly expressed concerns to several AI CEOs – including Sundar Pichai – about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure. Last month, he called on the Biden administration to work with AI companies to develop additional guardrails around the responsible deployment of AI. He has also introduced several pieces of legislation aimed at making tech more secure, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
A copy of the letter can be found here and below.
Dear Mr. Pichai,
I write to express my concern regarding reports that Google began providing Med-PaLM 2 to hospitals to test early this year. While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors.
Over the past year, large technology companies, including Google, have been rushing to develop and deploy AI models and capture market share as the technology has received increased attention following OpenAI’s launch of ChatGPT. Numerous media outlets have reported that companies like Google and Microsoft have been willing to take bigger risks and release more nascent technology in an effort to gain a first mover advantage. In 2019, I raised concerns that Google was skirting health privacy laws through secretive partnerships with leading hospital systems, under which it trained diagnostic models on sensitive health data without patients’ knowledge or consent. This race to establish market share is readily apparent and especially concerning in the health care industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in health care institutions in recent years, and the sensitivity of health information. One need look no further than AI pioneer Joseph Weizenbaum’s experiments involving chatbots in psychotherapy to see how users can put premature faith in even basic AI solutions.
According to Google, Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While AI models have previously been used in medical settings, the use of generative AI tools presents complex new questions and risks. According to the Wall Street Journal, a senior research director at Google who worked on Med-PaLM 2 said, “I don’t feel that this kind of technology is yet at a place where I would want it in my family’s healthcare journey.” Indeed, Google’s own research, released in May, showed that Med-PaLM 2’s answers contained more inaccurate or irrelevant information than answers provided by physicians. It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI.
Given these serious concerns and the fact that VHC Health, based in Arlington, Virginia, is a member of the Mayo Clinic Care Network, I request that you provide answers to the following questions.
- Researchers have found large language models to display a phenomenon described as “sycophancy,” wherein the model generates responses that confirm or cater to a user’s (tacit or explicit) preferred answers, which could produce risks of misdiagnosis in the medical context. Have you tested Med-PaLM 2 for this failure mode?
- Large language models frequently demonstrate the tendency to memorize contents of their training data, which can risk patient privacy in the context of models trained on sensitive health information. How has Google evaluated Med-PaLM 2 for this risk and what steps has Google taken to mitigate inadvertent privacy leaks of sensitive health information?
- What documentation did Google provide hospitals, such as Mayo Clinic, about Med-PaLM 2? Did it share model or system cards, datasheets, data-statements, and/or test and evaluation results?
- Google’s own research acknowledges that its clinical models reflect scientific knowledge only as of the time the model is trained, necessitating “continual learning.” What is the frequency with which Google fully or partially re-trains Med-PaLM 2? Does Google ensure that licensees use only the most up-to-date model version?
- Google has not publicly provided documentation on Med-PaLM 2, including refraining from disclosing the contents of the model’s training data. Does Med-PaLM 2’s training corpus include protected health information?
- Does Google ensure that patients are informed when Med-PaLM 2, or other AI models offered or licensed by Google, are used in their care by health care licensees? If so, how is the disclosure presented? Is it part of a longer disclosure or more clearly presented?
- Do patients have the option to opt-out of having AI used to facilitate their care? If so, how is this option communicated to patients?
- Does Google retain prompt information from health care licensees, including protected health information contained therein? Please list each purpose Google has for retaining that information.
- What license terms exist in any product license to use Med-PaLM 2 to protect patients, ensure ethical guardrails, and prevent misuse or inappropriate use of Med-PaLM 2? How does Google ensure compliance with those terms in the post-deployment context?
- How many hospitals is Med-PaLM 2 currently being used at? Please provide a list of all hospitals and health care systems Google has licensed or otherwise shared Med-PaLM 2 with.
- Does Google use protected health information from hospitals using Med-PaLM 2 to retrain or finetune Med-PaLM 2 or any other models? If so, does Google require that hospitals inform patients that their protected health information may be used in this manner?
- In Google’s own research publication announcing Med-PaLM 2, researchers cautioned about the need to adopt “guardrails to mitigate against over-reliance on the output of a medical assistant.” What guardrails has Google adopted to mitigate over-reliance on the output of Med-PaLM 2 as well as when it particularly should and should not be used? What guardrails has Google incorporated through product license terms to prevent over-reliance on the output?
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged the Biden administration to build on its recently announced voluntary commitments from several prominent artificial intelligence (AI) leaders in order to promote greater security, safety, and trust in the rapidly developing AI field.
As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. On Friday, the Biden administration announced that several AI companies had agreed to a series of measures that would promote greater security and transparency. Sen. Warner wrote to the administration to applaud these efforts and laid out a series of next steps to bolster this progress, including extending commitments to less capable models, seeking consumer-facing commitments, and developing an engagement strategy to better address security risks.
“These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks,” Sen. Warner wrote. “As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models.”
The letter builds on Sen. Warner’s continued advocacy for the responsible development and deployment of AI. In April, Sen. Warner directly expressed concerns to several AI CEOs about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure.
The letter also affirms Congress’ role in regulating AI, and expands on the annual Intelligence Authorization Act, legislation that recently passed unanimously through the Senate Select Committee on Intelligence. Sen. Warner urges the administration to adopt the strategy outlined in this pending bill as well as work with the FBI, CISA, ODNI, and other federal agencies to fully address the potential risks of AI technology.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In addition to his April letters, he has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
A copy of the letter can be found here and below.
Dear President Biden,
I write to applaud the Administration’s significant efforts to secure voluntary commitments from leading AI vendors related to promoting greater security, safety, and trust through improved development practices. These commitments – largely applicable to these vendors’ most advanced products – can materially reduce a range of security and safety risks identified by researchers and developers in recent years. In April, I wrote to a number of these same companies, urging them to prioritize security and safety in their development, product release, and post-deployment practices. Among other things, I asked them to fully map dependencies and downstream implications of compromise of their systems; focus greater financial, technical and personnel resources on internal security; and improve their transparency practices through greater documentation of system capabilities, system limitations, and training data.
These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks. Moreover, a growing roster of highly-capable open source models has been released to the public – and would benefit from similar pre-deployment commitments contained in a number of the July 21st obligations. As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models.
To be sure, responsibility ultimately lies with Congress to develop laws that advance consumer and patient safety, address national security and cyber-crime risks, and promote secure development practices in this burgeoning and highly consequential industry – and in the downstream industries integrating their products. In the interim, the important commitments your Administration has secured can be bolstered in a number of important ways.
First, I strongly encourage your Administration to continue engagement with this industry to extend all of these commitments more broadly to less capable models that, in part through their wider adoption, can produce the most frequent examples of misuse and compromise.
Second, it is vital to build on these developer- and researcher-facing commitments with a suite of lightweight consumer-facing commitments to prevent the most serious forms of abuse. Most prominent among these should be commitments from leading vendors to adopt development practices, licensing terms, and post-deployment monitoring practices that prevent non-consensual intimate image generation, social-scoring, real-time facial recognition (in contexts not governed by existing legal protections or due process safeguards), and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.
Lastly, the Administration’s successful high-level engagement with the leadership of these companies must be complemented by a deeper engagement strategy to track national security risks associated with these technologies. In June, the Senate Select Committee on Intelligence on a bipartisan basis advanced our annual Intelligence Authorization Act, a provision of which directed the President to establish a strategy to better engage vendors, downstream commercial users, and independent researchers on the security risks posed by, or directed at, AI systems.
This provision was spurred by conversations with leading vendors, who confided that they would not know how best to report malicious activity – such as suspected intrusions of their internal networks, observed efforts by foreign actors to generate or refine malware using their tools, or identified activity by foreign malign actors to generate content to mislead or intimidate voters. To be sure, a highly-capable and well-established set of resources, processes, and organizations – including the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Office of the Director of National Intelligence’s Foreign Malign Influence Center – exists to engage these communities, including through counter-intelligence education and defensive briefings. Nonetheless, it appears that these entities have not been fully activated to engage the range of key stakeholders in this space. For this reason, I would encourage you to pursue the contours of the strategy outlined in our pending bill.
Thank you for your Administration’s important leadership in this area. I look forward to working with you to develop bipartisan legislation in this area.
###
Warner Announces $1.8 Million for Virginia Universities to Train AI to Fight Cyberattacks
May 04 2023
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) today announced $1,820,000 for Virginia universities to research and develop AI capabilities to mitigate cyberattacks. Federal funding will allow the University of Virginia and Norfolk State University to study innovative AI-based approaches to cybersecurity. Researchers from these institutions will collaborate with teams at 10 additional educational institutions and 20 private industry partners to develop revolutionary methods to counter cyberattacks in which AI-enabled intelligent security agents will cooperate with humans to build more resilient networks.
“Addressing the cybersecurity threats that our nation faces requires constant adaptation and innovation, and utilizing AI to counter these threats is an incredibly exciting use-case for this emerging technology,” said Sen. Warner. “This funding will allow teams at the University of Virginia and Norfolk State to do groundbreaking research on ways AI can help safeguard against cyberattacks. I congratulate UVA and NSU on receiving this funding, and I can’t wait to see what they discover and develop.”
The funding is distributed as follows:
· Norfolk State University will receive $975,000.
· University of Virginia will receive $845,000.
Funding for these awards is provided jointly by the National Science Foundation, the Department of Homeland Security, and IBM. Investments are designed to build a diverse AI workforce across the United States.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for improving cybersecurity and security-oriented design by AI companies. In April, he sent a series of letters to CEOs of several AI companies urging them to prioritize security, combat bias, and responsibly roll out new technologies. In November 2022, he published “Cybersecurity is Patient Safety,” a policy options paper that outlined current cybersecurity threats facing health care providers and offered a series of policy solutions to improve cybersecurity. As Chairman of the Senate Select Committee on Intelligence, Sen. Warner co-authored legislation that requires companies responsible for U.S. critical infrastructure to report cybersecurity incidents to the government. He has also introduced several pieces of legislation aimed at building a more secure internet, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries, and the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) joined 27 colleagues in introducing the Kids Online Safety Act, comprehensive bipartisan legislation to protect children online.
The Kids Online Safety Act provides young people and parents with the tools, safeguards, and transparency they need to protect against online harms. The bill requires social media platforms to enable, by default, a range of protections against addictive design and algorithmic recommendations. It also requires privacy protections, dedicated channels to report harm, and independent audits by experts and academic researchers to ensure that social media platforms are taking meaningful steps to address risks to kids.
“Experts are clear: kids and teens are growing up in a toxic and unregulated social media landscape that promotes bullying, eating disorders, and mental health struggles,” said Sen. Warner. “The Kids Online Safety Act would give kids and parents the long-overdue ability to control some of the least transparent and most damaging aspects of social media, creating a safer and more humane online environment.”
Reporting has shown that social media companies have proof that their platforms contribute to mental health issues in children and teens, and that young people have experienced a precipitous rise in mental health crises over the last decade.
Specifically, the Kids Online Safety Act would:
· Require that social media platforms provide minors with options to protect their information, disable addictive product features, and opt out of algorithmic recommendations. Platforms would be required to enable the strongest settings by default.
· Give parents new controls to help support their children and identify harmful behaviors, and provide parents and children with a dedicated channel to report harms to kids to the platform.
· Create a responsibility for social media platforms to prevent and mitigate harms to minors, such as promotion of suicide, eating disorders, substance abuse, sexual exploitation, and unlawful products for minors (e.g. gambling and alcohol).
· Require social media platforms to perform an annual independent audit that assesses the risks to minors, their compliance with this legislation, and whether the platform is taking meaningful steps to prevent those harms.
· Provide academic and public interest organizations with access to critical datasets from social media platforms to foster research regarding harms to the safety and well-being of minors.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and building a safer online environment. He has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology and social media platforms from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
The one-page summary of the bill can be found here, the section-by-section summary can be found here, and the full text of the Senate bill can be found here.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) released the statement below after the Drug Enforcement Administration (DEA) announced that it would extend current flexibilities around telehealth prescriptions of controlled substances, including those that treat opioid use disorder and anxiety, while it reviews a record number of comments received in response to its new proposed telemedicine rules. This move follows strong advocacy by Sen. Warner, who spoke out in March about the need to ensure that patients can continue getting their medications and sent a letter to the DEA in August 2022 asking the agency to explain its plan for continuity of care after the COVID-19 Public Health Emergency.
“I’m pleased to see that the DEA is taking additional time to consider the comments to their proposed rule, which I believe overlooked the key benefits and lessons learned during the pandemic. This proposed rule could counterproductively exacerbate the opioid crisis and push patients to seek dangerous alternatives to proper health care, such as self-medicating, by removing a telehealth option in many cases. I’m working with my colleagues in Congress on a response to DEA’s proposed rule, and I look forward to further robust discussion on this critical issue.”
During COVID-19, patients widely adopted telehealth as a convenient and accessible way to get care remotely. This was made possible by the COVID-19 Public Health Emergency, which allowed for a number of flexibilities, including utilizing an exception to the in-person medical evaluation requirement under the Ryan Haight Online Pharmacy Consumer Protection Act, legislation regulating the online prescription of controlled substances. With the Public Health Emergency set to expire, patients will soon lose the ability to reap the benefits of a mature telehealth system in which responsible providers know how to take care of their patients remotely when appropriate.
Since 2008, Congress has directed the DEA to set up a special registration process, another exception process under the Ryan Haight Act, that would open the door for quality health care providers to evaluate a patient and prescribe controlled substances over telehealth safely, as they’ve done during the pandemic. This special registration process has yet to be established, yet DEA wrote that it believes the proposed rule fulfills those Congressional mandates, despite not proposing such a registration process.
Sen. Warner, a former tech entrepreneur, has been a longtime advocate for increased access to telehealth. He is a co-author of the CONNECT for Health Act, which would expand coverage of telehealth services through Medicare, make COVID-19 telehealth flexibilities permanent, improve health outcomes, and make it easier for patients to safely connect with their doctors. He previously wrote to both the Biden and Trump administrations, urging the DEA to finalize regulations long-delayed by prior administrations allowing doctors to prescribe controlled substances through telehealth. Sen. Warner also sent a letter to Senate leadership during the height of the COVID-19 crisis, calling for the permanent expansion of access to telehealth services.
In 2018, Sen. Warner included a provision to expand financial coverage for virtual substance use treatment in the Opioid Crisis Response Act of 2018. In 2003, then-Gov. Warner expanded Medicaid coverage for telemedicine statewide, including evaluation and management visits, a range of individual psychotherapies, the full range of consultations, and some clinical services, including in cardiology and obstetrics. Coverage was also expanded to include non-physician providers. Among other benefits, the telehealth expansion allowed individuals in medically underserved and remote areas of Virginia to access quality specialty care that isn’t always available at home.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged CEOs of several artificial intelligence (AI) companies to prioritize security, combat bias, and responsibly roll out new technologies. In a series of letters, Sen. Warner expressed concerns about the potential risks posed by AI technology, and called on companies to ensure that their products and systems are secure.
In the past several years, AI technology has rapidly advanced while chatbots and other generative AI products have simultaneously widened the accessibility of AI products and services. As these technologies are rolled out broadly, open source researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques.
“[W]ith the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work,” Sen. Warner wrote. “Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field.”
Sen. Warner highlighted several specific security risks associated with AI, including data supply chain security and data poisoning attacks. He also expressed concerns about algorithmic bias, trustworthiness, and potential misuse or malicious use of AI systems.
The letters include a series of questions for companies developing large-scale AI models to answer, aimed at ensuring that they are taking appropriate measures to address these security risks. Among the questions are inquiries about companies' security strategies, limits on third-party access to their models that undermine the ability to evaluate model fitness, and steps taken to ensure secure and accurate data inputs and outputs. Recipients of the letter include the CEOs of OpenAI, Scale AI, Meta, Google, Apple, Stability AI, Midjourney, Anthropic, Percipient.ai, and Microsoft.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. He has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
A copy of the letters can be found here and below.
I write today regarding the need to prioritize security in the design and development of artificial intelligence (AI) systems. As companies like yours make rapid advancements in AI, we must acknowledge the security risks inherent in this technology and ensure AI development and adoption proceeds in a responsible and secure way. While public concern about the safety and security of AI has been on the rise, I know that work on AI security is not new. However, with the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work. Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field.
I recognize the important work you and your colleagues are doing to advance AI. As a leading company in this emerging technology, I believe you have a responsibility to ensure that your technology products and systems are secure. I have long advocated for incorporating security-by-design, as we have found time and again that failing to consider security early in the product development lifecycle leads to more costly and less effective security. Instead, incorporating security upfront can reduce costs and risks. Moreover, the last five years have demonstrated the ways in which the speed, scale, and excitement associated with new technologies have frequently obscured the shortcomings of their creators in anticipating the harmful effects of their use. AI capabilities hold enormous potential; however, we must ensure that they do not advance without appropriate safeguards and regulation.
While it is important to apply many of the same security principles we associate with traditional computing services and devices, AI presents a new set of security concerns that are distinct from traditional software vulnerabilities. Some of the AI-specific security risks that I am concerned about include the origin, quality, and accuracy of input data (data supply chain), tampering with training data (data poisoning attacks), and inputs to models that intentionally cause them to make mistakes (adversarial examples). Each of these risks further highlights the need for secure, quality data inputs. Broadly speaking, these techniques can effectively defeat or degrade the integrity, security, or performance of an AI system (including the potential confidentiality of its training data). As leading models are increasingly integrated into larger systems, often without fully mapping dependencies and downstream implications, the effects of adversarial attacks on AI systems are only magnified.
In addition to those risks, I also have concerns regarding bias, trustworthiness, and potential misuse or malicious use of AI systems. In the last six months, we have seen open source researchers repeatedly exploit a number of prominent, publicly-accessible generative models – crafting a range of clever (and often foreseeable) prompts to easily circumvent a system’s rules. Examples include using widely-adopted models to generate malware, craft increasingly sophisticated phishing techniques, contribute to disinformation, and provide harmful information. It is imperative that we address threats to not only digital security, but also threats to physical security and political security.
In light of this, I am interested in learning about the measures that your company is taking to ensure the security of its AI systems. I request that you provide answers to the following questions no later than May 26, 2023.
Questions:
1. Can you provide an overview of your company’s security approach or strategy?
2. What limits do you enforce on third-party access to your model and how do you actively monitor for non-compliant uses?
3. Are you participating in third party (internal or external) test & evaluation, verification & validation of your systems?
4. What steps have you taken to ensure that you have secure and accurate data inputs and outputs? Have you provided comprehensive and accurate documentation of your training data to downstream users to allow them to evaluate whether your model is appropriate for their use?
5. Do you provide complete and accurate documentation of your model to commercial users? Which documentation standards or procedures do you rely on?
6. What kind of input sanitization techniques do you implement to ensure that your systems are not susceptible to prompt injection techniques that pose underlying system risks?
7. How are you monitoring and auditing your systems to detect and mitigate security breaches?
8. Can you explain the security measures that you take to prevent unauthorized access to your systems and models?
9. How do you protect your systems against potential breaches or cyberattacks? Do you have a plan in place to respond to a potential security incident? What is your process for alerting users that have integrated your model into downstream systems?
10. What is your process for ensuring the privacy of sensitive or personal information that your system uses?
11. Can you describe how your company has handled past security incidents?
12. What security standards, if any, are you adhering to? Are you using NIST’s AI Risk Management Framework?
13. Is your company participating in the development of technical standards related to AI and AI security?
14. How are you ensuring that your company continues to be knowledgeable about evolving security best practices and risks?
15. How is your company addressing concerns about AI trustworthiness, including potential algorithmic bias and misuse or malicious use of AI?
16. Have you identified any security challenges unique to AI that you believe policymakers should address?
Thank you for your attention to these important matters and I look forward to your response.
###
WASHINGTON – U.S. Sens. Mark R. Warner (D-VA) and John Hoeven (R-ND) this week introduced legislation to support the research and development of unmanned aerial systems (UAS) technologies at the nation’s UAS test sites, including the site at Virginia Tech.
“Unmanned Aerial Systems have the potential to transform the way we manage disasters, maintain our infrastructure, administer medicine, tackle national security threats, and conduct day-to-day business,” said Sen. Warner. “UAS test sites, such as the one located at Virginia Tech, are crucial to the research and development of these technologies and I am glad to continue building on the progress we have made over the last decade.”
“UAS play a crucial role in our country’s defense, and there is tremendous potential yet to be realized, benefiting our national security as well as our economy,” said Sen. Hoeven. “The UAS test sites, including the Northern Plains UAS Test Site in North Dakota, are at the center of our efforts to ensure these aircraft can be safely integrated into our national airspace. This legislation supports their ongoing work and dovetails with the new BVLOS waivers we recently secured for our test site, further strengthening North Dakota’s position in this dynamic industry.”
Specifically, this legislation:
- Extends the authorization for the Federal Aviation Administration’s (FAA) UAS test sites for an additional five years through 2028;
- Formally authorizes research grants through the FAA for the purpose of demonstrating or validating technology related to the integration of UAS in the national airspace system (NAS);
- Requires a grant recipient to have a contract with an FAA UAS test site;
- Identifies key research priorities, including: detect and avoid capabilities; beyond visual line of sight (BVLOS) operations; operation of multiple unmanned aircraft systems; unmanned systems traffic management; command and control; and UAS safety standards.
This legislation builds on Sen. Warner’s efforts to expand the domestic production of unmanned systems, including driverless cars, drones, and unmanned maritime vehicles, and to make Virginia a national leader in this growing sector. Earlier this year, he introduced the Increasing Competitiveness for American Drones Act, legislation that will clear the way for drones to be used for commercial transport of goods across the country. As Chairman of the Senate Intelligence Committee, he has led efforts in Congress to shore up U.S. national security and cybersecurity against hostile foreign governments’ use of unmanned air systems. Last month, Sen. Warner introduced legislation to prohibit the federal government from purchasing drones manufactured in countries identified as national security threats, such as the People’s Republic of China.
###
Senators Introduce Bipartisan Bill to Tackle National Security Threats from Foreign Tech
Mar 07 2023
WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, and John Thune (R-SD), ranking member of the Commerce Committee’s Subcommittee on Communications, Media and Broadband, led a group of 12 bipartisan senators to introduce the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, legislation that will comprehensively address the ongoing threat posed by technology from foreign adversaries by better empowering the Department of Commerce to review, prevent, and mitigate information and communications technology transactions that pose undue risk to our national security.
“Today, the threat that everyone is talking about is TikTok, and how it could enable surveillance by the Chinese Communist Party, or facilitate the spread of malign influence campaigns in the U.S. Before TikTok, however, it was Huawei and ZTE, which threatened our nation’s telecommunications networks. And before that, it was Russia’s Kaspersky Lab, which threatened the security of government and corporate devices,” said Sen. Warner. “We need a comprehensive, risk-based approach that proactively tackles sources of potentially dangerous technology before they gain a foothold in America, so we aren’t playing Whac-A-Mole and scrambling to catch up once they’re already ubiquitous.”
“Congress needs to stop taking a piecemeal approach when it comes to technology from adversarial nations that pose national security risks,” said Sen. Thune. “Our country needs a process in place to address these risks, which is why I’m pleased to work with Senator Warner to establish a holistic, methodical approach to address the threats posed by technology platforms – like TikTok – from foreign adversaries. This bipartisan legislation would take a necessary step to ensure consumers’ information and our communications technology infrastructure is secure.”
The RESTRICT Act establishes a risk-based process, tailored to the rapidly changing technology and threat environment, by directing the Department of Commerce to identify and mitigate foreign threats to information and communications technology products and services.
In addition to Sens. Warner and Thune, the legislation is co-sponsored by Sens. Tammy Baldwin (D-WI), Deb Fischer (R-NE), Joe Manchin (D-WV), Jerry Moran (R-KS), Michael Bennet (D-CO), Dan Sullivan (R-AK), Kirsten Gillibrand (D-NY), Susan Collins (R-ME), Martin Heinrich (D-NM), and Mitt Romney (R-UT).
The Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act would:
- Require the Secretary of Commerce to establish procedures to identify, deter, disrupt, prevent, prohibit, and mitigate transactions involving information and communications technology products in which any foreign adversary has any interest and poses undue or unacceptable risk to national security;
- Prioritize evaluation of information and communications technology products used in critical infrastructure, integral to telecommunications products, or pertaining to a range of defined emerging, foundational, and disruptive technologies with serious national security implications;
- Ensure comprehensive actions to address risks of untrusted foreign information and communications technology products by requiring the Secretary to take up consideration of concerning activity identified by other government entities;
- Educate the public and business community about the threat by requiring the Secretary of Commerce to coordinate with the Director of National Intelligence to provide declassified information on how transactions denied or otherwise mitigated posed undue or unacceptable risk.
“We need to protect Americans’ data and keep our country safe against today and tomorrow’s threats. While many of these foreign-owned technology products and social media platforms like TikTok are extremely popular, we also know these products can pose a grave danger to Wisconsin’s users and threaten our national security,” said Sen. Baldwin. “This bipartisan legislation will empower us to respond to our fast-changing environment – giving the United States the tools it needs to assess and act on current and future threats that foreign-owned technologies pose to Wisconsinites and our national security.”
“There are a host of dangerous technology platforms – including TikTok – that can be manipulated by China and other foreign adversaries to threaten U.S. national security and abuse Americans’ personal data. I’m proud to join Senator Warner in introducing bipartisan legislation that would put an end to disjointed interagency responses and strengthen the federal government’s ability to counter these digital threats,” said Sen. Fischer.
“Over the past several years, foreign adversaries of the United States have encroached on American markets through technology products that steal sensitive location and identifying information of U.S. citizens, including social media platforms like TikTok. This dangerous new internet infrastructure poses serious risks to our nation’s economic and national security,” said Sen. Manchin. “I’m proud to introduce the bipartisan RESTRICT ACT, which will empower the Department of Commerce to adopt a comprehensive approach to evaluating and mitigating these threats posed by technology products. As Chairman of the Senate Armed Services Subcommittee on Cybersecurity, I will continue working with my colleagues on both sides of the aisle to get this critical legislation across the finish line.”
“Foreign adversaries are increasingly using products and services to collect information on American citizens, posing a threat to our national security,” said Sen. Moran. “This legislation would give the Department of Commerce the authority to help prevent adversarial governments from introducing harmful products and services in the U.S., providing us the long-term tools necessary to combat the infiltration of our information and communications systems. The government needs to be vigilant against these threats, but a comprehensive data privacy law is needed to ensure Americans are able to control who accesses their data and for what purpose.”
“We shouldn’t let any company subject to the Chinese Communist Party’s dictates collect data on a third of our population – and while TikTok is just the latest example, it won’t be the last. The federal government can’t continue to address new foreign technology from adversarial nations in a one-off manner; we need a strategic, enduring mechanism to protect Americans and our national security. I look forward to working in a bipartisan way with my colleagues on the Senate Select Intelligence Committee to send this bill to the floor,” said Sen. Bennet.
“Our modern economy, communication networks, and military rely on a range of information communication technologies. Unfortunately, some of these technology products pose a serious risk to our national security,” said Sen. Gillibrand. “The RESTRICT Act will address this risk by empowering the Secretary of Commerce to carefully evaluate these products and ensure that they do not endanger our critical infrastructure or undermine our democratic processes.”
“China’s brazen incursion of our airspace with a sophisticated spy balloon was only the most recent and highly visible example of its aggressive surveillance that has targeted our country for years. Through hardware exports, malicious software, and other clandestine means, China has sought to steal information in an attempt to gain a military and economic edge,” said Sen. Collins. “Rather than taking a piecemeal approach to these hostile acts and reacting to each threat individually, our legislation would create a wholistic, government-wide response to proactively defend against surveillance attempts by China and other adversaries. This will directly improve our national security as well as safeguard Americans’ personal information and our nation’s vital intellectual property.”
"Cybersecurity is one of the most serious economic and national security challenges we face as a nation. The future of conflict is moving further away from the battlefield and closer to the devices and the networks everyone increasingly depends on. We need a systemic approach to addressing potential threats posed by technology from foreign adversaries. This bill provides that approach by authorizing the Administration to review and restrict apps and services that pose a risk to Americans’ data security. I will continue to push for technology defenses that the American people want and deserve to keep our country both safe and free,” said Sen. Heinrich.
“The Chinese Communist Party is engaged in a multi-generational, multi-faceted, and systematic campaign to replace the United States as the world’s superpower. One tool at its disposal—the ability to force social media companies headquartered in China, like TikTok’s parent company, to hand over the data it collects on users,” said Sen. Romney. “Our adversaries—countries like China, Russia, Iran—are increasingly using technology products to spy on Americans and discover vulnerabilities in our communications infrastructure, which can then be exploited. The United States must take stronger action to safeguard our national security against the threat technology products pose and this legislation is a strong step in that direction.”
A two-page summary of the bill is available here. A copy of the bill text is available here.
###
WASHINGTON – Today, Chairman of the Senate Select Committee on Intelligence U.S. Sen. Mark R. Warner (D-VA) appeared on FOX News Sunday to discuss how the U.S. needs to tackle rising threats posed by the Communist Party of China.
On how the United States needs to address the rise of the Chinese Communist Party on the world stage:
“We have never had a potential adversary like China. The Soviet Union, Russia, was military or ideological, China is investing in economic areas. They have $500 billion in intellectual property theft, and we are in a competition not just on a national security basis but on a technology basis. That's why national security now includes telecommunications, satellites, artificial intelligence, quantum computing. Each of these domains, we have got to make the kind of investments to stay ahead. I think we are starting that in a bipartisan way. We did the CHIPS bill to try to bring semiconductor manufacturing back, we have kicked out Huawei out of our telecom systems. This week, I have a broad bipartisan bill that I am launching with my friend John Thune, the Republican lead, where we are going to say, in terms of foreign technology coming into America, we’ve got to have a systemic approach to make sure we can ban or prohibit it when necessary.”
On the influence of TikTok:
“Listen, you have 100 million Americans on TikTok, 90 minutes a day…They are taking data from Americans, not keeping it safe, but what worries me more with TikTok is that this could be a propaganda tool. The kind of videos you see would promote ideological issues. If you look at what TikTok shows to the Chinese kids, which is all about science and engineering, versus what our kids see, there’s a radical difference.”
On China’s support for Putin’s war in Ukraine:
“…if China moves forward to support Russia in Ukraine, I can't understand some of my colleagues who are willing to say, ‘I don't really care about Ukraine, but I'm concerned about China.’ Well, China and Russia, these authoritarian regimes, are linked, and we have to make sure Putin is not successful in Ukraine and that Xi doesn't further his expansion plans around Taiwan.”
Video of Sen. Warner on FOX News Sunday can be found here. A transcript follows.
FOX News Sunday
SHANNON BREAM: Joining us now, Virginia Democratic Senator Mark Warner, Chairman of the Senate Intelligence Committee. Welcome back. This week, you all have a hearing on worldwide threat assessments. You will have the DNI, the director of the CIA there. You have long been warning about China on multiple fronts. Do you think that we have lost valuable time in assessing the threat accurately? Will you talk about that this week?
SENATOR MARK WARNER: Well I think for a long time conventional wisdom was, the more you bring China into the world order, the more they’re going to change. That assumption was just plain wrong. China even changed their laws in 2016 to make it explicitly clear that every company in China, their first obligation is to the Communist Party. So we have never had a potential adversary like China. The Soviet Union, Russia, was military or ideological, China is investing in economic areas. They have $500 billion in intellectual property theft, and we are in a competition not just on a national security basis but on a technology basis. That's why national security now includes telecommunications, satellites, artificial intelligence, quantum computing. Each of these domains, we have got to make the kind of investments to stay ahead. I think we are starting that in a bipartisan way. We did the CHIPS bill to try to bring semiconductor manufacturing back, we have kicked out Huawei out of our telecom systems. This week, I have a broad bipartisan bill that I am launching with my friend John Thune, the Republican lead where we are going to say, in terms of foreign technology coming into America, we’ve got to have a systemic approach to make sure we can ban or prohibit it when necessary.
BREAM: Does that mean TikTok?
SEN. WARNER: That means TikTok is one of the potentials. Listen, you have 100 million Americans on TikTok, 90 minutes a day. Even you guys would like that kind of return, 90 minutes a day. They are taking data from Americans, not keeping it safe, but what worries me more with TikTok is that this could be a propaganda tool. The kind of videos you see would promote ideological issues. If you look at what TikTok shows to the Chinese kids, which is all about science and engineering, versus what our kids see, there’s a radical difference.
BREAM: We will watch that, because that's a bipartisan offering potentially this week. This past week we got information, it was revealed that both the Department of Energy and FBI believe that the origins of COVID were most likely a leak from the Wuhan Institute of Virology. This is something that early on was called a conspiracy theory, you were racist if you talked about it. The Senate has actually unanimously passed a measure that would call on this administration to declassify information that we have about the origins. The White House won't say whether the president will veto it or not if it gets to his desk. Do Americans, worldwide, do people not have a right to see that information?
SEN. WARNER: Shannon, here is again an example of what we are dealing with, with the Communist Party in China. If this virus had originated virtually anywhere else, we would have had world scientists there. The Chinese Communist Party has been totally opaque about letting in outside scientists to figure this out. Now, you’ve still got some parts of the intelligence community that think it originated in a wet market, others saying that it could have gotten out from a lab, although I would say that one entity says it came from one lab in Wuhan, another said from another. At the end of the day, we’ve got to keep looking and we've got to make sure, in terms of future pandemics, that we can have access to the source of where these diseases originate a lot earlier on in the system. We’re three and a half years later, and we still don't have access to Wuhan.
BREAM: They're not going to cooperate with that, especially if they assess internally they were at fault. How do they pay for this? Now, billions probably trillions in damages and losses for people, millions and millions of lives. How do they pay?
SEN. WARNER: Well I think again, this is where we’ve got to have that united front of countries all around the world, that there has to be consequences. There has to be consequences potentially in terms of sanctions, it’s one of the reasons why, if China moves forward to support Russia in Ukraine, I can't understand some of my colleagues who are willing to say, “I don't really care about Ukraine, but I'm concerned about China.” Well, China and Russia, these authoritarian regimes, are linked, and we have to make sure Putin is not successful in Ukraine and that Xi doesn't further his expansion plans around Taiwan.
BREAM: Well, we know that even if they are not sending bullets over to Russia, they are buying up copious amounts of Russian oil. They are sending dual-use products that could actually be used on the battlefield. Xi doesn't seem very worried about the warnings from the U.S. at this point. They haven't even acknowledged or apologized for the balloon that went across America, we think capturing information as it went. Is Xi afraid of this administration? Do our warnings mean anything?
SEN. WARNER: Well I think Xi, as Putin thought, thought that with the invasion of the Ukraine, that the West would basically throw in the towel. The fact that we’ve not, the fact that you've got, for example, the German chancellor here just this past week, Germany’s dramatically increasing their defense budget. The fact that we've got nations like Finland and Sweden trying to join NATO. I think Putin made a major miscalculation and I do think Xi is watching the West stand up against Putin and is taking some lessons from that.
BREAM: You're just back from India, among many other countries you visited. They abstained from the U.N. vote that condemned Russia's invasion of Ukraine and called for an end to this. How important is it, a critical place like India, that they choose a side, and with the West?
SEN. WARNER: I think it’s time. Look, India is a great nation, as a matter of fact, I’m chair of the India Caucus, I'm a big supporter of India. India is now a major, major power. Fifth-largest economy in the world, and a place where remarkable things are happening. My message to the Indians has been, we understand that you have historic ties to Russia, and you still get a lot of your arms, but you cannot be a world leader, and attempt to be a moral world leader, without picking a side. And in this case, I think the younger Indians get that. Some of the older generation, I think we still have work to do.
BREAM: Okay, let's turn to continued funding for Ukraine. Another $400 million was announced on Friday. There are questions, there'll be more requests from Congress no doubt in the coming weeks about that. While there is strong support, here across the U.S. and across the West, the polls show that it's pulling back a little bit. Here's the reality from one analyst, “funding for the Ukrainian government has not demanded any tough bureaucratic trade-offs between funding priorities. It's not requiring balancing needs for Ukraine against domestic spending.” We’ve hit our ceiling, we have some kind of negotiation that’s got to happen very shortly. There are competing needs and they are very real, so where do we assess our financial commitment?
SEN. WARNER: Well Shannon, let's look at this. We have allocated $113 billion to Ukraine. We have actually only given them actually less than half of that, and on the military side, about $30 billion of roughly $60 billion. We’ve still got some runway to go there. But I think we need to keep that commitment, and the truth is the Russian army is being chewed up by the Ukrainians. We spent $800 billion a year on defense, in most of my lifetime to prevent Russia from exploiting that. We are having Ukrainians do that right now, in a sense, for us. I think we need to continue that. I think we will see the vast majority of members of Congress in both parties, there are some loudmouths on both sides that are pulling back, but if we are going to keep in this competition against Russia and China, Putin cannot be successful. At the same time, we have to realize as we look at China that national security is no longer simply tanks and trucks and guns and ships. It's also telecom and AI and quantum computing and advanced synthetic biology. We have to make investments in those domains, as well, which is both an economic investment and I believe, national security investment.
BREAM: Speaking of another national security interest, Iran, this report on their nuclear capabilities came out this week and it’s kind of getting lost in all the other foreign policy headlines, but basically what the International Atomic Energy Agency told us is that they have hit 84% as far as enriching uranium. They said that’s just short of the 90% that you would need for a weapon. Britain, France, and Germany say they want to censure Iran over this. The U.S. is kind of hesitant. The reporting is that the Biden administration doesn’t want to go there. Are we now, then, softer on Iran's nuclear program than Europe?
SEN. WARNER: I do not believe that. We have made it explicitly clear – and I was just in Israel recently with a group of senators – that we agree with Israel. Iran cannot be a nuclear power. I think that has been our policy and it will continue to be our policy. There are two steps in this process, one is the enrichment issue, and I believe we will be tougher than the Europeans. We historically always have been –
BREAM: So then why are we against censuring, reportedly?
SEN. WARNER: We have already sanctioned and censured more Iranian companies by far than our European friends. But there is also a question around delivery systems. Again, I think we and our Israeli friends are following this very closely. Again, we will not allow Iran to become a nuclear power.
BREAM: I've got to hit this, Havana Syndrome. The reporting out this week, an assessment from several intelligence agencies that they don't think – that it's unlikely there was a foreign adversary carrying out these attacks, whatever they were, where our people, diplomats or Intel officers around the world in U.S. missions have suffered really debilitating symptoms from this. Senator Rubio, your colleague, tweeted this: “The CIA took the investigation of Havana syndrome seriously. But when you read about the devastating injuries it's hard to accept that it was by AC units and loud cicadas. Something happened here and just because we don’t have all the answers doesn’t mean it didn’t happen.” Will you continue trying to pursue answers?
SEN. WARNER: Absolutely. First of all, the most important thing is anyone who got sick, whatever the source was, whether they are CIA, DoD, State Department officials, we owe them the world's best health care and I think we are providing that now. Initially frankly, under the last administration, this whole issue was attempted to be swept under the rug. We are now making sure that health care is provided. I know how, particularly the CIA, how extensive the investigation has been. And I've made very clear to them, if they need to continue that investigation, if new facts come to light, they ought to pursue that. But at this moment in time, I know how thorough they have been, and they have not found the evidence that I think perhaps they thought they would have found. We've got to follow the facts. At the end of the day that's what we owe the members of this intel community, who protect our nation, and that means giving them the health care. If it ends up being some other source than what has been discovered so far, we have to pursue it.
BREAM: Senator, Chairman, thanks for coming back to Fox News Sunday.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, issued a statement after the Department of Commerce released the first Notice of Funding Opportunity (NOFO) for CHIPS Act incentives, welcoming the announcement:
“The projects that will be made possible by the CHIPS Act will strengthen our national security and create good-paying manufacturing jobs here in the United States. With limited funding available, I urge the Department of Commerce to be strategic in selecting projects in order to ensure that funding advances U.S. economic and national security objectives.”
Nearly everything that has an “on” switch – from cars to phones to washing machines to ATMs to electric toothbrushes – contains a semiconductor, but just 12 percent of these ‘chips’ are currently made in America. The CHIPS and Science Act includes $52 billion in funding championed by Sen. Warner to manufacture chips here on American soil – a move that will increase economic and national security and help America compete against countries like China for the technology of the future.
Sen. Warner, co-chair of the Senate Cybersecurity Caucus and former technology entrepreneur, has long sounded the alarm about the importance of investing in domestic semiconductor manufacturing. Sen. Warner first introduced the Creating Helpful Incentives to Produce Semiconductors (CHIPS) for America Act in June 2020 along with Sen. John Cornyn (R-TX).
###
Warner Presses Meta on Facebook's Role in Inciting Violence and Spreading Misinformation Around the World
Feb 22 2023
WASHINGTON – Senate Select Committee on Intelligence Chairman Mark R. Warner (D-VA) sent a letter to Meta CEO Mark Zuckerberg, pressing the company on its efforts to combat the spread of misinformation, hate speech, and incitement content around the world. Reporting indicates that Facebook devotes 84 percent of its misinformation budget to the United States, where only ten percent of its users reside.
“In its pursuit of growth and dominance in new markets, I worry that Meta has not adequately invested in the technical, organizational, and human safeguards necessary to ensuring that your platform is not used to incite violence and real-world harm,” wrote Sen. Warner, pointing to evidence, acknowledged by Meta, that the platform was used to foment genocide in Myanmar. “I am concerned that Meta is not taking seriously the responsibility it has to ensure that Facebook and its other platforms do not inspire similar events in other nations around the world.”
In his letter, Sen. Warner noted that Facebook supported more than 110 languages on its platform as of October 2021, and users and advertisers posted on the platform in over 160 languages. However, Facebook’s community standards, the policies that outline what is and isn’t allowed on the platform, were available in less than half of the languages that Facebook offered at that time. Facebook has previously said that it uses artificial intelligence to proactively identify hate speech in more than 50 languages and that it has native speakers reviewing content in more than 70 languages.
“Setting aside the efficacy of Facebook’s AI solutions to detect hate speech and violent rhetoric in all of the languages that it offers, the fact that Facebook does not employ native speakers in dozens of languages officially welcomed on its platform is troubling – indicating that Facebook has prioritized growth over the safety of its users and the communities Facebook operates in,” Sen. Warner wrote, citing documents provided by Facebook whistleblower Frances Haugen. “Of particular concern is the lack of resources dedicated to what Facebook itself calls ‘at-risk countries’ – nations that are especially vulnerable to misinformation, hate speech, and incitement to violence.”
Warner noted that in Ethiopia, Facebook reportedly did not have automated systems capable of flagging harmful posts in Amharic and Oromo, the country’s two most spoken languages. A March 2021 internal report said that armed groups within Ethiopia were using Facebook to incite violence against ethnic minorities, recruit, and fundraise.
“In the wake of Facebook’s role in the genocide of the Rohingya in Myanmar – where UN investigators explicitly described Facebook as playing a ‘determining role’ in the atrocities – one would imagine more resources would be dedicated to places like Ethiopia. Even in languages where Meta does have experience, the systems in place appear woefully inadequate at preventing violent hate speech from appearing on Facebook,” observed Sen. Warner, citing an investigation conducted by the non-profit Global Witness, which was able to post ads in Swahili and English ahead of the 2022 general elections in Kenya that violated Facebook’s stated Community Standards for hate speech and ethnic-based calls to violence.
“Unfortunately, these are not isolated cases – or new revelations. For nearly six years, Facebook’s role in fueling, amplifying, and accelerating racial, religious, and ethnic violence has been documented across the globe – including in Bangladesh, Indonesia, South Sudan, and Sri Lanka. In other developing countries – such as Cambodia, Vietnam and the Philippines – Facebook has reportedly courted autocratic parties and leaders in order to ensure its continued penetration of those markets,” wrote Sen. Warner. “Across many of these cases, Facebook’s global success – an outgrowth of its business strategy to cultivate high levels of global dependence through efforts like Facebook Free Basics and Internet.org – has heightened the effects of its misuse. In many developing countries, Facebook, in effect, constitutes the internet for millions of people, and serves as the infrastructure for significant social, political, and economic activity.”
“Ultimately, the destabilizing impacts of your platform on fragile societies across the globe poses a set of regional – if not global – security risks,” concluded Warner, posing a series of questions to Zuckerberg about the company’s investments in foreign language content moderation and requesting a response by March 15, 2023.
A full copy of the letter is available here.
###
Warner & Rubio Urge Biden Admin to Prevent Flow of U.S. Innovation to China's Military Industrial Complex
Feb 21 2023
WASHINGTON – Senate Select Committee on Intelligence Chairman Mark R. Warner (D-VA) and Vice Chairman Marco Rubio (R-FL) wrote to the Biden administration to request that it expand the use of existing tools and authorities at the Departments of Treasury and Commerce to prevent China’s military industrial complex from benefiting from U.S. technology, talent and investments.
In a pair of letters, the Senators expressed concern with the flow of U.S. innovation, talent, and capital into the People’s Republic of China (PRC), which seeks to exert control over global supply chains, achieve technological superiority, and rise as the dominant economic and military power in the world. They also stressed the need to utilize the authorities at the government’s disposal to protect U.S. interests and ensure that American businesses, investors, and consumers are not inadvertently advancing China’s authoritarian interests or supporting its ongoing genocide in Xinjiang and human rights abuses in Tibet and Hong Kong.
In their letter to Treasury Secretary Janet Yellen, the Senators wrote, “It is widely known that the PRC’s Military-Civil Fusion (MCF) program targets technological advancements in the U.S., as well as university and research partnerships with the U.S., for the PRC’s military development. U.S. technology, talent, and capital continue to contribute—through both lawful and unlawful means, including theft—to the PRC’s development of critical military-use industries, technologies, and related supply chains. The breadth of the MCF program’s ambitions and reach creates dangerous vulnerabilities for U.S. national and economic security as well as undermines respect for democratic values globally.”
The Senators also posed a number of questions for Sec. Yellen regarding Treasury’s internal Specially Designated Nationals and Blocked Persons (SDN) lists, which do not include a number of entities and individuals who have been identified by the U.S. Government as posing national security risks or human rights concerns.
In their letter to Commerce Secretary Gina Raimondo, the Senators wrote, “Despite recent restrictions on the export of sensitive technologies critical to U.S. national security, we remain deeply concerned that American technology, investment, and talent continue to support the People’s Republic of China’s (PRC’s) military industrial complex, intelligence and security apparatus, its ongoing genocide, and other PRC efforts to displace United States economic leadership. As such, we urge the Department of Commerce to immediately use its authorities to more broadly restrict these activities.”
The Senators also requested answers from Sec. Raimondo regarding America’s most critical high-technology sectors, as well as the Department’s ability and authority to evaluate companies’ reliance on China and assess the flow of U.S. innovation to PRC entities.
A copy of the letter to the Department of Treasury is available here. A copy of the letter to the Department of Commerce is available here.
###
WASHINGTON – Last week, U.S. Sens. Mark R. Warner (D-VA) and Rick Scott (R-FL) introduced the American Security Drone Act of 2023, legislation to prohibit the purchase of drones from countries identified as national security threats, such as China.
“I am a staunch supporter of unmanned systems and drone investment here in the United States, and I wholeheartedly believe that we must continue to invest in domestic production of drones,” said Sen. Warner. “But the purchase of drones from foreign countries, especially those that have been deemed a national security threat, is dangerous. I am glad to introduce legislation that takes logical steps to protect our data from foreign adversaries and meanwhile supports American manufacturers.”
“I’ve been clear for years: the United States should never spend taxpayer dollars on anything made in Communist China, especially drones which pose a significant threat to our national security,” said Sen. Scott. “Xi and the Communist Party of China are on a quest for global domination and whether it’s with spy balloons, TikTok or drones, they will stop at nothing to infiltrate our society and steal our data. I’m proud to join my colleagues to reintroduce the bipartisan American Security Drone Act to STOP the U.S. from buying drones manufactured in nations identified as national security threats. This important bill is critical to our national security and should be passed by the Senate, House and signed into law IMMEDIATELY.”
Specifically, the American Security Drone Act:
- Prohibits federal departments and agencies from procuring certain foreign commercial off-the-shelf drones or covered unmanned aircraft systems manufactured or assembled in countries identified as national security threats, and provides a timeline to end current use of these drones.
- Prohibits the use of federal funds awarded through certain contracts, grants, or cooperative agreements to state or local governments from being used to purchase foreign commercial off-the-shelf drones or covered unmanned aircraft systems manufactured or assembled in a country identified as a national security threat.
- Requires the Comptroller General of the United States to submit a report to Congress detailing the amount of foreign commercial off-the-shelf drones and covered unmanned aircraft systems procured by federal departments and agencies from countries identified as national security threats.
In addition to Sens. Warner and Scott, the legislation is cosponsored by Sens. Marco Rubio (R-FL), Richard Blumenthal (D-CT), Marsha Blackburn (R-TN), Chris Murphy (D-CT), Tom Cotton (R-AR), and Josh Hawley (R-MO).
Sen. Warner is a strong supporter of the domestic production of unmanned systems, including driverless cars, drones, and unmanned maritime vehicles. Earlier this month, Sen. Warner introduced the Increasing Competitiveness for American Drones Act, legislation that will clear the way for drones to be used for commercial transport of goods across the country.
Full text of the legislation is available here.
###
WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA) and John Thune (R-SD) introduced the Increasing Competitiveness for American Drones Act of 2023, comprehensive legislation to streamline the approvals process for beyond visual line of sight (BVLOS) drone flights and clear the way for drones to be used for commercial transport of goods across the country – making sure that the U.S. remains competitive globally in a growing industry increasingly dominated by competitors like China.
Currently, each aircraft and each BVLOS operation that takes flight requires unmanned aerial system (UAS) operators to seek waivers from the Federal Aviation Administration (FAA), but the FAA has not laid out any consistent set of criteria for the granting of waivers, making the process for approving drone flights slow and unpredictable. The bipartisan Increasing Competitiveness for American Drones Act will require the FAA to issue a new rule allowing BVLOS operations under certain circumstances.
“Drones have the ability to transform so much of the way we do business. Beyond package delivery, drones can change the way we grow crops, manage disasters, maintain our infrastructure, and administer medicine,” said Sen. Warner. “If we want the drones of tomorrow to be manufactured in the U.S. and not in China, we have to start working today to integrate them into our airspace. Revamping the process for approving commercial drone flight will catapult the United States into the 21st century, allowing us to finally start competing at the global level as technological advancements make drone usage ever more common.”
“Drones have the potential to transform the economy, with innovative opportunities for transportation and agriculture that would benefit rural states like South Dakota,” said Sen. Thune. “I’m proud to support this legislation that provides a clear framework for the approval of complex drone operations, furthering the integration of these aircraft into the National Airspace System.”
Specifically, the bill requires the FAA to establish a “risk methodology,” which will be used to determine what level of regulatory scrutiny is required:
- Operators of small UAS under 55lbs simply have to declare that they conducted a risk assessment and meet the standard, subject to audit by the FAA for compliance.
- Operators of UAS between 55lbs and 1320lbs must submit materials based on the risk assessment to the FAA to seek a “Special Airworthiness Certificate.” UAS in this category may be limited to operating no more than 400 feet above ground level.
- Finally, operators of UAS over 1320lbs must undergo the full “type certification” process—the standard approval process for crewed aircraft.
In addition, the Increasing Competitiveness for American Drones Act would create the position of “Associate Administrator of UAS Integration” as well as a UAS Certification Unit that would have the sole authority to issue all rulemakings, certifications, and waivers. This new organizational structure would create a central rulemaking body for UAS, allowing for a more uniform process.
“Commercial drone operations provide valuable services to the American public and workforce – but significant regulatory hurdles are hampering these benefits from reaching their fullest potential and jeopardize U.S. global leadership in aviation. The regulatory challenges are not driven by safety, they are hampered by bureaucracy. We accordingly have urged Congress to prioritize drone integration, and we are grateful for the support of Senators Warner and Thune in this cause. AUVSI is proud to endorse this legislation, and we urge Congress to include it as part of their critical work this year to pass a multi-year FAA Reauthorization,” Michael Robbins, Chief Advocacy Officer of the Association for Uncrewed Vehicle Systems International (AUVSI), said.
“The Coalition is grateful for the leadership of Senators Thune and Warner, and this bill comes at a pivotal time for the drone industry. Since 2012, Congress has worked to progress the law and regulation around commercial drone use, but now, in 2023, this progress has slowed as regulations and approvals continue to be delayed. With reauthorization of Federal Aviation Administration (FAA) programs required by September 30, this year is a critical time for the drone industry,” said The Small UAV Coalition.
“The Commercial Drone Alliance applauds the introduction of the Increasing Competitiveness for American Drones Act of 2023, and we commend and thank Senator Warner and Senator Thune for their leadership on these important issues. While the U.S. has lagged behind other countries in developing and deploying uncrewed aircraft systems (UAS), this legislation provides the U.S. with the opportunity to reestablish its prominence as a global leader in advanced aviation and compete more effectively in the global economy,” said The Commercial Drone Alliance.
Sen. Warner has been a strong supporter of research and investment in unmanned systems, including driverless cars, drones, and unmanned maritime vehicles. He previously introduced legislation designed to advance the development of UAS and build on the FAA’s efforts to safely integrate them into the National Airspace System. Virginia is home to one of seven FAA-approved sites across the country where researchers are testing the safest and most effective ways to incorporate UAS into the existing airspace – including the first-ever package delivery by drone to take place in the United States. Last October, Sen. Warner visited the headquarters of DroneUp, a leader in independent drone delivery contracting, in Hampton Roads, Virginia.
Full text of the legislation is available here.
###
WASHINGTON – Today, Senate Select Committee on Intelligence Chairman Mark Warner (D-VA) and Vice Chairman Marco Rubio (R-FL) wrote to Meta CEO Mark Zuckerberg, questioning the company about recently released documents revealing that the company knew, as early as 2018, that hundreds of thousands of developers in what Facebook classified as “high-risk jurisdictions” including the People’s Republic of China (PRC) and Russia had access to user data that could have been used to facilitate espionage. The documents were released as part of ongoing litigation against the company related to its lax handling of personal data after revelations regarding Cambridge Analytica.
Under pressure from Congress, Facebook revealed in 2018 that it provided access to key application programming interfaces (APIs) to device-makers based in the PRC, including Huawei, OPPO, TCL, and others. In the wake of those disclosures, Facebook met repeatedly with the staffs of both senators and the Senate Intelligence Committee to discuss access to this data and what controls Facebook was putting in place to protect user data in the future.
Wrote the bipartisan leaders of the Senate Intelligence Committee in today’s letter, “Given those discussions, we were startled to learn recently, as a result of this ongoing litigation and discovery, that Facebook had concluded that a much wider range of foreign-based developers, in addition to the PRC-based device-makers, also had access to this data. According to at least one internal document, this included nearly 90,000 separate developers in the People’s Republic of China (PRC), which is especially remarkable given that Facebook has never been permitted to operate in the PRC. The document also refers to discovery of more than 42,000 developers in Russia, and thousands of developers in other ‘high-risk jurisdictions,’ including Iran and North Korea, that had access to this user information.”
The newly available documents reveal that Facebook internally acknowledged in 2018 that this access could be used for espionage purposes.
“As the Chairman and Vice Chairman of the Senate Select Committee on Intelligence, we have grave concerns about the extent to which this access could have enabled foreign intelligence service activity, ranging from foreign malign influence to targeting and counter-intelligence activity,” wrote Warner and Rubio, posing a series of questions to the company about the implications of the access, including:
1) The unsealed document notes that Facebook conducted separate reviews on developers based in the PRC and Russia “given the risk associated with those countries.” What additional reviews were conducted on these developers? When was this additional review completed and what were the primary conclusions? What percentage of the developers located in the PRC and Russia was Facebook able to definitively identify? What communications, if any, has Facebook had with these developers since its initial identification? What criteria does Facebook use to evaluate the “risk associated with” operation in the PRC and Russia?
2) For the developers identified as being located within the PRC and Russia, please provide a full list of the types of information to which these developers had access, as well as the timeframes associated with such access.
3) Does Facebook have comprehensive logs on the frequency with which developers from high-risk jurisdictions accessed its APIs and the forms of data accessed?
4) Please provide an estimate of the number of discrete Facebook users in the United States whose data was shared with a developer located in each country identified as a “high-risk jurisdiction” (broken out by country).
5) The internal document indicates that Facebook would establish a framework to identify the “developers and apps determined to be most potentially risky[.]” How did Facebook establish this rubric? How many developers and apps based in the PRC and Russia met this threshold? How many developers and apps in other high-risk jurisdictions met this threshold? What were the specific characteristics of these developers that gave rise to this determination? Did Facebook identify any developers as too risky to safely operate with? If so, which?
6) The internal document references your public commitment to “conduct a full audit of any app with suspicious activity.” How does Facebook characterize “suspicious activity” and how many apps triggered this full audit process?
7) Does Facebook have any indication that any developers’ access enabled coordinated inauthentic activity, targeting activity, or any other malign behavior by foreign governments?
8) Does Facebook have any indication that developers’ access enabled malicious advertising or other fraudulent activity by foreign actors, as revealed in public reporting?
The full text of today’s letter is available here and below.
Dear Mr. Zuckerberg,
We write you with regard to recently unsealed documents in connection with pending litigation your company, Meta, is engaged in. It appears from these documents that Facebook has known, since at least September 2018, that hundreds of thousands of developers in countries Facebook characterized as “high-risk,” including the People’s Republic of China (PRC), had access to significant amounts of sensitive user data. As leaders of the Senate Intelligence Committee, we write today with a number of questions regarding these documents and the extent to which developers in these countries were granted access to American user data.
In 2018, the New York Times revealed that Facebook had provided privileged access to key application programming interfaces (APIs) to Huawei, OPPO, TCL, and other device-makers based in the PRC. Under the terms of agreements with Facebook dating back to at least 2010, these device manufacturers were permitted to access a wealth of information on Facebook’s users, including profile data, user IDs, photos, as well as contact information and even private messages. In the wake of these revelations, as well as broader revelations concerning Facebook’s lax data security policies related to third-party applications, our staffs held numerous meetings with representatives from your company, including with senior executives, to discuss who had access to this data and what controls Facebook was putting in place to protect user data in the future.
Given those discussions, we were startled to learn recently, as a result of this ongoing litigation and discovery, that Facebook had concluded that a much wider range of foreign-based developers, in addition to the PRC-based device-makers, also had access to this data. According to at least one internal document, this included nearly 90,000 separate developers in the People’s Republic of China (PRC), which is especially remarkable given that Facebook has never been permitted to operate in the PRC. The document also refers to discovery of more than 42,000 developers in Russia, and thousands of developers in other “high-risk jurisdictions,” including Iran and North Korea, that had access to this user information.
As Facebook’s own internal materials note, those jurisdictions “may be governed by potentially risky data storage and disclosure rules or be more likely to house malicious actors,” including “states known to collect data for intelligence targeting and cyber espionage.” As the Chairman and Vice Chairman of the Senate Select Committee on Intelligence, we have grave concerns about the extent to which this access could have enabled foreign intelligence service activity, ranging from foreign malign influence to targeting and counter-intelligence activity.
In light of these revelations, we request answers to the following questions on the findings of Facebook’s internal investigation:
1) The unsealed document notes that Facebook conducted separate reviews on developers based in the PRC and Russia “given the risk associated with those countries.”
- What additional reviews were conducted on these developers?
- When was this additional review completed and what were the primary conclusions?
- What percentage of the developers located in the PRC and Russia was Facebook able to definitively identify?
- What communications, if any, has Facebook had with these developers since its initial identification?
- What criteria does Facebook use to evaluate the “risk associated with” operation in the PRC and Russia?
2) For the developers identified as being located within the PRC and Russia, please provide a full list of the types of information to which these developers had access, as well as the timeframes associated with such access.
3) Does Facebook have comprehensive logs on the frequency with which developers from high-risk jurisdictions accessed its APIs and the forms of data accessed?
4) Please provide an estimate of the number of discrete Facebook users in the United States whose data was shared with a developer located in each country identified as a “high-risk jurisdiction” (broken out by country).
5) The internal document indicates that Facebook would establish a framework to identify the “developers and apps determined to be most potentially risky[.]”
- How did Facebook establish this rubric?
- How many developers and apps based in the PRC and Russia met this threshold? How many developers and apps in other high-risk jurisdictions met this threshold?
- What were the specific characteristics of these developers that gave rise to this determination?
- Did Facebook identify any developers as too risky to safely operate with? If so, which?
6) The internal document references your public commitment to “conduct a full audit of any app with suspicious activity.”
- How does Facebook characterize “suspicious activity” and how many apps triggered this full audit process?
7) Does Facebook have any indication that any developers’ access enabled coordinated inauthentic activity, targeting activity, or any other malign behavior by foreign governments?
8) Does Facebook have any indication that developers’ access enabled malicious advertising or other fraudulent activity by foreign actors, as revealed in public reporting?
Thank you for your prompt attention.
###
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) and Rep. Elissa Slotkin (D-MI) wrote to Sundar Pichai – the CEO of Alphabet Inc. and its subsidiary Google – urging him to curb deceptive advertisements and ensure that users receive accurate information when searching for abortion services on the platform. This letter comes on the heels of an investigation that reveals how Google regularly fails to apply disclaimer labels to misleading ads by anti-abortion clinics. It also follows a successful effort by Sen. Warner and Rep. Slotkin who previously urged Google to take action to prevent misleading search results for anti-abortion clinics. This push ultimately led Google to clearly label facilities that provide abortions and prevent users from being misled by fake clinics or crisis pregnancy centers.
“We are encouraged by and appreciative of the recent steps Google has taken to protect those searching for abortion services from being mistakenly directed to clinics that do not offer comprehensive reproductive health services. However, we ask you to address issues with misrepresentation in advertising on Google’s site and take a more expansive, proactive approach to addressing violations of Google’s stated policy,” wrote the lawmakers.
“According to an investigation by Bloomberg News and the Center for Countering Digital Hate (CCDH), depending on the search term used, Google does not consistently apply disclaimer labels to ads by anti-abortion clinics. CCDH recently conducted searches that returned 132 misleading ads for such clinics that lacked disclaimers. Specifically, researchers found that queries for terms such as ‘Plan C pills,’ ‘pregnancy help,’ and ‘Planned Parenthood’ often returned results with ads that are not labeled accurately,” they continued. “Furthermore, the Tech Transparency Project found that for some ads from ‘crisis pregnancy centers,’ even when they were properly labeled, the ads themselves included deliberately deceptive verbiage aimed at tricking users into believing that they offer abortion services. For example, ads for ‘crisis pregnancy centers’ were found to contain language such as ‘Free Abortion Pill’ and ‘First Trimester Abortion.’ Such deceptive advertising likely reduces the effectiveness of labels and may lead to detrimental health outcomes for users who receive delayed treatment.”
In addition to urging Google to rectify these issues, the lawmakers also requested answers to the following questions:
- What specific search terms does Google consider related to “getting an abortion”?
- What criteria does Google use to determine whether specific queries are related to “getting an abortion”?
- What additional steps will Google take to identify and remove ads with misleading verbiage that violates Google’s policies against misrepresentation?
A copy of the letter is available here and full text of the letter can be found below:
Dear Mr. Pichai,
We write today regarding the responsibility that Google has to ensure users receive accurate information when searching for abortion services on your platform. We are encouraged by and appreciative of the recent steps Google has taken to protect those searching for abortion services from being mistakenly directed to clinics that do not offer comprehensive reproductive health services. However, we ask you to address issues with misrepresentation in advertising on Google’s site and take a more expansive, proactive approach to addressing violations of Google’s stated policy.
On June 17, 2022, we wrote to you, along with 19 other senators and representatives, regarding research that showed Google results for searches such as “abortion services near me” often included links to clinics that are anti-abortion, sometimes called “crisis pregnancy centers.” We were extremely concerned with this practice of directing users toward “crisis pregnancy centers” without any disclaimer indicating those businesses do not provide abortions.
We were pleased to see the changes you have made in response to our letter, such as the new refinement tool that allows users to only see facilities verified to offer abortion services, while still preserving the option to see a broader range of search results. The steps you have taken will help prevent users from mistakenly being sent to organizations that attempt to deceive individuals into thinking they provide comprehensive health services and instead, regularly provide users with disinformation regarding the risks of abortion. As many states are increasingly narrowing the window between getting a positive pregnancy test and when you can terminate a pregnancy, every day counts.
But we find ourselves again asking that Google live up to its promises with regards to preventing misleading ads on its platform. According to an investigation by Bloomberg News and the Center for Countering Digital Hate (CCDH), depending on the search term used, Google does not consistently apply disclaimer labels to ads by anti-abortion clinics. CCDH recently conducted searches that returned 132 misleading ads for such clinics that lacked disclaimers. Specifically, researchers found that queries for terms such as “Plan C pills,” “pregnancy help,” and “Planned Parenthood” often returned results with ads that are not labeled accurately. We believe Google’s failure to apply disclaimer labels to these common searches appears to be a violation of your June 2019 policy that requires “advertisers who want to run ads using keywords related to getting an abortion” to go through a verification process and be labeled as a provider that “Provides abortions” or “Does not provide abortions.”
Furthermore, the Tech Transparency Project found that for some ads from “crisis pregnancy centers,” even when they were properly labeled, the ads themselves included deliberately deceptive verbiage aimed at tricking users into believing that they offer abortion services. For example, ads for “crisis pregnancy centers” were found to contain language such as “Free Abortion Pill” and “First Trimester Abortion.” Such deceptive advertising likely reduces the effectiveness of labels and may lead to detrimental health outcomes for users who receive delayed treatment. These ads appear to violate Google’s policy on misrepresentation, which prohibits ads that “deceive users.” Your responsiveness to our first letter gives us hope that you are willing to see this issue through. We, therefore, would appreciate answers to the following questions:
- What specific search terms does Google consider related to “getting an abortion”?
- What criteria does Google use to determine whether specific queries are related to “getting an abortion”?
- What additional steps will Google take to identify and remove ads with misleading verbiage that violates Google’s policies against misrepresentation?
We urge you to take proactive action to rectify these and any additional issues surrounding misleading ads, and help ensure users receive search results that accurately address their queries and are relevant to their intentions.
Thanks for your consideration, and we look forward to your timely response.
###
Sens. Warner & Kaine Announce Over $76 Million in Federal Funding For Jefferson Lab in Newport News
Nov 04 2022
WASHINGTON – Today, U.S. Sens. Mark R. Warner and Tim Kaine announced $76,530,000 in federal funding for the Thomas Jefferson National Accelerator Facility, also known as Jefferson Lab, in Newport News to support multiple projects that are critical to ensuring the U.S. remains a leader in science and technology. The funding was made possible by the Inflation Reduction Act, legislation Sens. Warner and Kaine helped pass in August to lower costs for Virginians and build a strong foundation for future national security and economic growth, in part by accelerating scientific programs and national laboratory infrastructure projects.
“This funding is a powerful example of how the Inflation Reduction Act, which we proudly helped pass earlier this year, will accelerate the development of key technologies,” said the Senators. “We’re glad Jefferson Lab’s research programs and infrastructure projects are receiving this support and look forward to seeing Virginians at the lab continue to lead the way in technological innovation.”
This funding will help make critical laboratory upgrades and support Jefferson Lab’s cutting-edge work in various fields, including projects that will help increase our understanding of the fundamental building blocks and forces at work in our universe—information that can play a key role in the development of an array of technologies, including those with clean energy and medical implications. It is part of $1.5 billion from the Inflation Reduction Act for national laboratories to research and develop new technologies to help the U.S. meet its energy, climate, and security needs.
Sens. Warner and Kaine have consistently advocated for funding for Jefferson Lab and its programs.
###
WASHINGTON – As the Biden administration works to establish two crucial semiconductor initiatives authorized by the CHIPS and Science Act, U.S. Sens. Mark R. Warner (D-VA), John Cornyn (R-TX), and Mark Kelly (D-AZ) are leading eight of their colleagues in urging the U.S. Department of Commerce to take full advantage of the contributions, assets, and expertise available in states nationwide.
In a letter to Commerce Secretary Gina Raimondo, the Senators advocate for a decentralized “hub-and-spoke” model for the National Semiconductor Technology Center (NSTC) and the National Advanced Packaging Manufacturing Program (NAPMP). This model would establish various centers of excellence around the country, as opposed to a single centralized facility that is limited to the resources and strengths of a single state or region.
“Allowing the NSTC and NAPMP to draw upon experts, institutions, entrepreneurs, and private-sector partners spread across the country would best position these programs to fulfill their missions of driving semiconductor and advanced packaging research forward, coordinating and scaling up the ongoing workforce development efforts, promoting geographic diversity, and ensuring long-term U.S. competitiveness in this critical technology sector,” wrote the lawmakers.
They continued, “Such a model would allow them to draw upon the strengths of experts, research facilities, and private-sector partnerships and consortia from across the country. This model would consist of central research facilities with centers of excellence in various locations across the country where there is particular expertise in memory, logic, packaging, testing, or other elements of the semiconductor ecosystem.”
In their letter, the Senators also note that this approach was recommended by the President’s Council of Advisors on Science and Technology in a report to President Biden. This report stated, “the Secretary of Commerce should ensure the NSTC founding charter includes establishing prototyping capabilities in a geographically distributed model encompassing up to six centers of excellence (COEs) aligned around major technical thrusts.”
The NSTC and NAPMP – designed to accelerate U.S. semiconductor production and advance research and development – were championed by Sens. Warner, Cornyn, and Kelly, who authored the CHIPS law signed by President Biden in August. In addition to Sens. Warner, Cornyn and Kelly, the letter was signed by Sens. Tim Kaine (D-VA), Rob Portman (R-OH), Sherrod Brown (D-OH), Amy Klobuchar (D-MN), Kyrsten Sinema (D-AZ), Ben Ray Luján (D-NM), Ron Wyden (D-OR), and Dianne Feinstein (D-CA).
A copy of the letter can be found here and below.
October 14, 2022
Dear Secretary Raimondo,
As the Department of Commerce begins implementing the CHIPS and Science Act, we respectfully urge your department to consider using a decentralized, so-called “hub-and-spoke” model as the basis for the National Semiconductor Technology Center (NSTC) and the National Advanced Packaging Manufacturing Program (NAPMP). Allowing the NSTC and NAPMP to draw upon experts, institutions, entrepreneurs, and private-sector partners spread across the country would best position these programs to fulfill their missions of driving semiconductor and advanced packaging research forward, coordinating and scaling up the ongoing workforce development efforts, promoting geographic diversity, and ensuring long-term U.S. competitiveness in this critical technology sector.
When Congress passed the Creating Helpful Incentives to Produce Semiconductors for America Act in January 2021 and provided $11 billion in funding through the recently-passed CHIPS and Science Act, it recognized the need for increased investment in research and development (R&D). This R&D will include prototyping of advanced semiconductor tools, technology, and packaging capabilities to advance both U.S. economic competitiveness and the security of our domestic supply chain.
The NSTC was established as a way to drive this research forward, bringing together the Department of Commerce, Department of Defense, Department of Energy, the National Science Foundation, and the private sector in a public-private consortium. Congress created the NAPMP to “strengthen semiconductor advanced test, assembly, and packaging capability in the domestic ecosystem” in coordination with the NSTC.
Incredibly diverse knowledge and expertise will be required to ensure that the NSTC and NAPMP are successful. We believe that it would be in the best interests of the long-term success of these programs if the Department of Commerce was to embrace a “hub-and-spoke” model for these programs. In fact, the President’s Council of Advisors on Science and Technology recommended such an approach in their report to President Biden titled, “Revitalizing the U.S. Semiconductor Ecosystem.” The report states, “The Secretary of Commerce should ensure the NSTC founding charter includes establishing prototyping capabilities in a geographically distributed model encompassing up to six centers of excellence (COEs) aligned around major technical thrusts.” Such a model would allow them to draw upon the strengths of experts, research facilities, and private-sector partnerships and consortia from across the country. This model would consist of central research facilities with centers of excellence in various locations across the country where there is particular expertise in memory, logic, packaging, testing, or other elements of the semiconductor ecosystem. Doing so would ensure that a broader range of expertise is captured by the NSTC and NAPMP and ensure entrepreneurs and researchers across the country can take advantage of these programs to drive America’s semiconductor ecosystem forward.
Thank you for your consideration and for all of the work that you and your team are doing to implement this important legislation.
Sincerely,
###
Warner, Rubio Urge DNI to Review Risk Chinese Chipmaker YMTC Presents to National Security
Sep 22 2022
Earlier this month, Apple publicly acknowledged that it is considering procuring NAND memory chips for future iPhones from Yangtze Memory Technologies Co. (YMTC), a state-owned company with extensive links to the Chinese Communist Party (CCP) and its armed wing, the People’s Liberation Army (PLA).
U.S. Sens. Mark R. Warner (D-VA) and Marco Rubio (R-FL), Chairman and Vice Chairman of the U.S. Senate Select Committee on Intelligence, sent a letter to U.S. Director of National Intelligence Avril Haines calling for a public analysis and review of YMTC and the risks it presents to U.S. national security.
- “[W]e write to convey that any decision to partner with YMTC, no matter the intended market of the product offerings developed by such a partnership, would affirm and reward the PRC’s distortive and unfair trade practices, which undermine U.S. companies globally by creating significant advantages to Chinese firms at the expense of foreign competitors. Last year, the Biden Administration described YMTC as China’s ‘national champion memory chip producer,’ which supports the CCP’s efforts to counter U.S. innovation and leadership in this space.”
- “Policymakers have for several years now conveyed to the American public the importance of a competitive semiconductor industry to U.S. national and economic security. A partnership between Apple and YMTC would endanger this critical sector and risk nullifying efforts to support it, jeopardizing the health of chipmakers in the U.S. and allied countries and advancing Beijing’s goal of controlling the global semiconductor market. Buoyed by a major contract with a leading global equipment vendor such as Apple, YMTC’s success would threaten the 24,000 American jobs that support memory chip production. More broadly, such a partnership would also threaten the opportunities this market provides for research at U.S. universities and further development of memory chips for civilian and military uses.”
Majority Leader Chuck Schumer (D-NY) and Senator John Cornyn (R-TX) also signed the letter.
Full text of the letter is available here and below.
Dear Director Haines:
We write to convey our extreme concern about the possibility that Apple Inc. will soon procure 3D NAND memory chips from the People’s Republic of China (PRC) state-owned manufacturer Yangtze Memory Technologies Co. (YMTC). Such a decision would introduce significant privacy and security vulnerabilities to the global digital supply chain that Apple helps shape given YMTC’s extensive, but often opaque, ties to the Chinese Communist Party (CCP) and concerning PRC-backed entities. In addition, we write to convey that any decision to partner with YMTC, no matter the intended market of the product offerings developed by such a partnership, would affirm and reward the PRC’s distortive and unfair trade practices, which undermine U.S. companies globally by creating significant advantages to Chinese firms at the expense of foreign competitors. Last year, the Biden Administration described YMTC as China’s “national champion memory chip producer,” which supports the CCP’s efforts to counter U.S. innovation and leadership in this space.
In July 2022, we wrote to Commerce Secretary Gina Raimondo to warn of the threat YMTC poses to U.S. national security and to request that it be added to the Bureau of Industry and Security’s Entity List. We made these arguments based on the company’s central role in CCP efforts to supplant U.S. technological leadership, including through unfair trade practices. YMTC also appears to have strong ties to the PRC’s military-civil fusion program, as shown through its investors and partnerships; its parent company, Tsinghua Unigroup, allegedly supplies the PRC military.
The PRC has heavily subsidized YMTC for several years, enabling the company to rapidly expand production and sales in China and internationally. Since the company’s formation in 2016, nearly $24 billion in PRC subsidies has fueled explosive growth, positioning YMTC to launch a second plant in Wuhan as early as the end of this year. At a time when overcapacity is potentially disrupting the market for chipmakers, these subsidies could enable YMTC to distort this often highly cyclical market, selling memory chips below cost in an effort to push out competitors. In addition, in April, reports alleged that YMTC may have breached the U.S. foreign direct product rule by supplying smartphone and electronics components to Huawei.
For these reasons, we request that you coordinate among the relevant intelligence community (IC) components a comprehensive review and analysis of YMTC and the threat that a supplier partnership arrangement between it and Apple would pose to U.S. national and economic security. The review should consider, among other issues:
- How the CCP supports YMTC as part of its plan to bolster and indigenize China’s semiconductor industry and to displace chipmakers from the United States and allied and partnered nations;
- YMTC’s role in assisting other Chinese firms, including Huawei, to evade U.S. sanctions;
- YMTC’s role in the PRC’s military-civil fusion program and its linkages to the People’s Liberation Army; and
- The risks to U.S. national and economic security of this potential procurement.
Policymakers have for several years now conveyed to the American public the importance of a competitive semiconductor industry to U.S. national and economic security. A partnership between Apple and YMTC would endanger this critical sector and risk nullifying efforts to support it, jeopardizing the health of chipmakers in the U.S. and allied countries and advancing Beijing’s goal of controlling the global semiconductor market. Buoyed by a major contract with a leading global equipment vendor such as Apple, YMTC’s success would threaten the 24,000 American jobs that support memory chip production. More broadly, such a partnership would also threaten the opportunities this market provides for research at U.S. universities and further development of memory chips for civilian and military uses.
We once again request that you convene the relevant IC components to review and assess YMTC’s ties to the CCP and produce a comprehensive public report on YMTC, which can be used to inform federal agencies and the public as to the nature and risks associated with YMTC and similar companies.
We look forward to your attention to this critical matter and request a response by October 1, 2022.
Sincerely,
###
WASHINGTON— Today, U.S. Sens. Mark R. Warner and Tim Kaine—who serves on the Senate Health, Education, Labor & Pensions Committee—teamed up with 28 of their colleagues to call on the Department of Health and Human Services (HHS) to take immediate action to safeguard women’s privacy and their ability to safely and confidentially get the health care they need. Specifically, the Senators urged the Biden Administration to strengthen federal privacy protections under the Health Insurance Portability and Accountability Act (HIPAA) to broadly restrict providers from sharing patients’ reproductive health information without their explicit consent—particularly with law enforcement or in legal proceedings over accessing abortion care.
Since the Dobbs decision, the new patchwork of state abortion bans has caused widespread confusion among health care providers over whether they are required to turn over patients’ health information to state and local law enforcement. As a result, patients may delay or avoid seeking the care they need out of fear their sensitive health information could be weaponized against them.
In recent weeks, states have investigated and sought to punish patients and providers for seeking and providing abortion care. While abortion is not currently criminalized in Virginia, Governor Youngkin has said he would “happily and gleefully” sign “any bill” limiting reproductive freedom, and has tapped Virginia state legislators to introduce legislation to that effect in 2023. Should that legislation be signed into law, the Senators’ push could help prevent personal health data from being used against Virginia women in legal proceedings. This letter makes clear that additional privacy protections are needed to protect this data so it cannot be used by prosecutors or law enforcement seeking to enforce an abortion ban.
“Our nation faces a crisis in access to reproductive health services, and some states have already begun to investigate and punish women seeking abortion care. It is critical that HHS take all available action to fully protect women’s privacy and their ability to safely and confidentially seek medical care,” wrote the Senators.
In their letter to Secretary Xavier Becerra, the Senators urge HHS to take immediate action to strengthen federal privacy protections under HIPAA, bolster enforcement of the protections, educate providers about their obligations, and ensure patients understand their rights. Shortly after the Dobbs decision, Becerra pledged to work to protect patient and provider privacy.
“To safeguard the privacy of women’s personal health care decisions and ensure patients feel safe seeking medical care, including reproductive health care, we urge you to quickly initiate the rulemaking process to strengthen privacy protections for reproductive health information,” urged the Senators. “In particular, HHS should update the HIPAA Privacy Rule to broadly restrict regulated entities from sharing individuals’ reproductive health information without explicit consent, particularly for law enforcement, civil, or criminal proceedings premised on the provision of abortion care.”
Joining Sens. Warner and Kaine in sending the letter were Senators Murray (D-WA), Baldwin (D-WI), Blumenthal (D-CT), Booker (D-NJ), Brown (D-OH), Cantwell (D-WA), Casey (D-PA), Duckworth (D-IL), Durbin (D-IL), Gillibrand (D-NY), Heinrich (D-NM), Hickenlooper (D-CO), Hirono (D-HI), Klobuchar (D-MN), Luján (D-NM), Markey (D-MA), Menendez (D-NJ), Merkley (D-OR), Padilla (D-CA), Reed (D-RI), Rosen (D-NV), Sanders (I-VT), Shaheen (D-NH), Smith (D-MN), Stabenow (D-MI), Van Hollen (D-MD), Warren (D-MA), and Wyden (D-OR).
The Senators’ full letter is available below:
Dear Secretary Becerra:
Since the Supreme Court’s decision to strip away the constitutional right to abortion, patients across the country have lost access to reproductive health care, and providers have scrambled to adapt to the immense confusion, fear, and upheaval this ruling has caused. In some states, legislators and prosecutors have already sought to investigate and punish women seeking abortion care. To protect patients, and their providers, from having their health information weaponized against them, we urge you to take immediate action to strengthen education on and enforcement of federal health privacy protections, and to initiate the rulemaking process to augment privacy protections under Health Insurance Portability and Accountability Act (HIPAA) regulations.
Every day, health care personnel across this nation care for patients who are pregnant or may become pregnant. This care may include anything from an annual check-up to obstetrical visits to emergency care. In order for patients to feel comfortable seeking care, and for health care personnel to provide this care, patients and providers must know that their personal health information, including information about their medical decisions, will be protected. Recognizing this critical need, in 1996, Congress passed HIPAA, which directed the Department of Health and Human Services (HHS) to issue privacy regulations for personal health information. HHS issued corresponding privacy regulations (the “HIPAA Privacy Rule”) in 2000, with several subsequent updates over the years.
The Dobbs v. Jackson Women’s Health Organization decision has caused widespread confusion among health care providers on health privacy protections, and whether they are required to turn over health information to state and local law enforcement. Stakeholders have told us about providers who have felt uncertain about whether they must turn over personal health information to state and local law enforcement officials, including cases where providers believed they had to turn over information when doing so is only permitted—but not required—under the HIPAA Privacy Rule. In other cases, providers did not know that certain disclosures are actually impermissible. Stakeholders have even described clashes between providers and health care system administrators on whether certain information must be shared. Many of these issues seem to arise from misunderstandings of what the HIPAA Privacy Rule requires of regulated entities and their employees.
This confusion is likely to grow as state lawmakers continue to implement a patchwork of laws restricting access to abortion and other reproductive health care services. Already, some states have laws in effect criminalizing abortion providers, and some states have enacted laws that penalize anyone who “aids or abets” an abortion, potentially exposing everyone from a referring provider to a receptionist to legal liability. Some state legislators have even proposed to bar women from traveling to another state for abortion care. And even before Dobbs, states had already prosecuted women following their abortions or miscarriages. In many cases, these laws have been used to disproportionately criminalize or surveil women of color for their pregnancy loss.
Actions to prohibit abortion access and undermine health privacy are likely to have devastating consequences for women’s health. Out of concern that their reproductive health information may be used against them, women may delay or avoid disclosing a pregnancy or obtaining prenatal care. They may fear initiating treatments for conditions like cancer or arthritis, where treatment could impact a pregnancy, even as health care providers may hesitate to provide them. And women who experience complications from a pregnancy or abortion may avoid seeking desperately needed emergency care, risking devastating health consequences and even death. These concerns are not without justification – in recent years, numerous medical providers have reported women to law enforcement for seeking care following an abortion, a miscarriage, or other pregnancy-related medical issue.
HHS has the tools to protect patients and health care providers, even in the wake of this devastating decision. For over twenty years, the HIPAA Privacy Rule has protected the privacy of individuals’ health information, laying out when health information may or may not be shared without a patient’s explicit consent. In addition, the HIPAA Privacy Rule has long recognized that stronger protections may be needed for particularly sensitive health information, such as psychotherapy notes. We commend you for the actions the Department has already taken to clarify privacy protections in the wake of the Dobbs decision, including the issuance of additional guidance on the HIPAA Privacy Rule. However, given the growing likelihood that women’s personal health information may be used against them, HHS must also take proactive steps to strengthen patient privacy protections.
To safeguard the privacy of women’s personal health care decisions and ensure patients feel safe seeking medical care, including reproductive health care, we urge you to quickly initiate the rulemaking process to strengthen privacy protections for reproductive health information. In particular, HHS should update the HIPAA Privacy Rule to broadly restrict regulated entities from sharing individuals’ reproductive health information without explicit consent, particularly for law enforcement, civil, or criminal proceedings premised on the provision of abortion care.
In addition, while HHS moves forward with the rulemaking process, the Department should take the following steps to improve awareness and enforcement of current privacy protections in the HIPAA Privacy Rule:
- HHS should increase its efforts to engage and educate the health care community about regulated entities’ obligations under the HIPAA Privacy Rule, including the difference between permissible and required disclosures, best practices for educating patients and health plan enrollees on their privacy rights, how HIPAA interacts with state laws (including those related to prescriptions), and potential legal consequences for violations of the HIPAA Privacy Rule, including civil and criminal penalties. As part of this effort, HHS should engage the full range of health care personnel, including providers, senior executives, and smaller health care organizations, as well as pharmacists, health plan administrators and sponsors, legal and compliance personnel, and entities that provide HIPAA training. These efforts should include listening sessions, additional guidance and FAQs with specific examples, webinars, and additional avenues for individuals at regulated entities to seek confidential advice.
- HHS should expand its efforts to educate patients about their rights under the HIPAA Privacy Rule, including when information may be shared without patient consent, the ability to request additional restrictions or corrections, and how to file a complaint with HHS.
- HHS should ensure cases involving reproductive health information receive timely, appropriate attention for compliance and enforcement activities.
Our nation faces a crisis in access to reproductive health services, and some states have already begun to investigate and punish women seeking abortion care. It is critical that HHS take all available action to fully protect women’s privacy and their ability to safely and confidentially seek medical care. Thank you for your attention to this urgent matter.
Sincerely,
###
WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA), Tim Kaine (D-VA) and Rep. A. Donald McEachin (D-VA) celebrated $52.9 million in funding from the federal government for the Petersburg/Richmond region to support job creation and increase American independence from foreign drug manufacturers.
This funding was recently awarded through the Economic Development Administration and funded by the American Rescue Plan, which was supported by the three lawmakers and passed through the Senate by a vote of 50 – 49 and the House by a vote of 220 – 211.
“The American Rescue Plan is the gift that keeps on giving – this time with $52.9 million that will go towards establishing Central Virginia as a hub for pharmaceutical manufacturing. This unparalleled federal investment will help boost American production of essential drugs and active pharmaceutical ingredients while creating 21st century jobs for Virginians and tackling our nation’s dangerous overreliance on foreign supply chains for medicines,” said the lawmakers.
The Virginia Advanced Pharma Manufacturing (APM) and R&D Cluster – led by the Virginia Biotechnology Research Partnership Authority – is one of 21 winners of the $1 billion Build Back Better Regional Challenge – the most impactful regional economic development competition in decades. The projects funded as part of this award include expanding a nascent pharmaceutical manufacturing corridor in Central Virginia through investment in new wet lab space, development of critical infrastructure to sustain industrial capacity in Petersburg, and engagement with local businesses to enhance the regional pharmaceutical supply chain. The project will also catalyze a new partnership between Virginia Commonwealth University and Virginia State University to create new pathways for underserved residents to high-quality training and jobs in the pharmaceutical industry.
The Build Back Better Regional Challenge (BBBRC) is an unprecedented competitive federal grant program that provides each regional coalition with significant investments to tackle a wide variety of projects – including entrepreneurial support, workforce development, infrastructure, and innovation – to drive inclusive economic growth. Each coalition’s collection of projects aims to develop and strengthen regional industry clusters – all while embracing economic equity, creating good-paying jobs, and enhancing U.S. competitiveness globally. Projects span 24 states and include $87 million to two primarily Tribal coalitions and over $150 million for projects serving communities impacted by the declining use of coal.
Sen. Warner helped negotiate portions of the American Rescue Plan and directly advocated for this project. In March, he sat down with the Virginia Biotechnology Research Partnership Authority and other pharmaceutical industry professionals for a roundtable discussion on the need to manufacture more prescription drugs in Virginia.
While on Richmond City Council, Sen. Kaine played a major role in the formation and growth of the Virginia Biotechnology Research Partnership Authority, served on its board when he was Mayor of Richmond, and appointed board members while he was Governor. In addition to advocating for the American Rescue Plan, which provided the funding for the EDA Build Back Better Regional Challenge, Sen. Kaine specifically advocated for this project to win this grant. He also visited the project’s facilities in Richmond and Petersburg in April of this year.
Rep. McEachin proudly supported the American Rescue Plan and engaged with the Biden administration throughout the BBBRC application and selection process in support of the Virginia Biotechnology Research Partnership Authority. He sent multiple letters to Secretary of Commerce Gina Raimondo advocating for this project and held briefings with relevant stakeholders to keep them apprised of developments and receive timely updates.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) joined Sens. John Hickenlooper (D-CO), Deb Fischer (R-NE), Gary Peters (D-MI), and Cynthia Lummis (R-WY) and more than 30 of their colleagues in a bipartisan letter to Senate leadership in support of closing the $3 billion funding shortfall impacting the Secure & Trusted Communications Networks Act’s Reimbursement Program. The shortfall leaves wireless networks—often in rural areas—vulnerable to espionage or disruption.
Due to security concerns, in 2020 the FCC prohibited the purchase of equipment manufactured by Chinese telecom companies Huawei and ZTE and also prohibited the use of FCC-administered funds to expand or maintain networks with Huawei or ZTE equipment already present.
The reimbursement program helps small telecommunications providers remove and replace suspect Chinese network equipment manufactured by Huawei and ZTE. If the funding shortfall for the program is not closed, the FCC will not be able to fully cover the costs of removing, disposing of, and replacing suspect network equipment, which will leave U.S. wireless networks vulnerable to espionage and disruption.
“The highest priority class of telecommunications providers in the Reimbursement Program serve the most rural areas of the United States where wireless connectivity is a vital lifeline to accessing telehealth services, receiving emergency notifications, and participating in the 21st century economy,” wrote the Senators.
The Secure and Trusted Communications Networks Act was enacted in 2020 and given a $1.9 billion appropriation for the Federal Communications Commission (FCC) to help small network providers remove and replace high-risk network equipment. While the initial $1.9 billion was based on a voluntary survey of possible costs small network providers would incur, supply chain disruptions and additional program requirements (such as proper disposal of suspect equipment) added to the overall costs within the Reimbursement Program.
“The bipartisan Senate support – thirty-four Senators! – in favor of a well-resourced Reimbursement Program sends a clear message, and I applaud the letter signatories, especially Senators Hickenlooper, Fischer, Peters, and Lummis, for their leadership on this critical national security issue. The funding shortfall must be addressed as soon as possible to ensure eligible small and rural carriers are adequately reimbursed for costs associated with removing, destroying, and replacing affected equipment. These carriers serve some of the most rural and hard-to-reach places across the country and, without adequate reimbursements, their ability to provide ongoing service to customers is seriously jeopardized,” said Steven K. Berry, president and CEO, Competitive Carriers Association (CCA).
Text of the letter is available here and below.
Dear Leader Schumer and Leader McConnell,
We write to express our support for the Federal Communications Commission’s (FCC) Reimbursement Program under the Secure and Trusted Communications Networks Act (Secure Networks Act). The program’s success is critical to maintaining network resiliency in rural America and our national security. Since the Secure Networks Act was signed into law in 2020, Congress has appropriated $1.9 billion to support the FCC’s ongoing implementation of the Secure Networks Act and the establishment of the Reimbursement Program to reimburse eligible small and rural telecommunications providers for costs associated with removing, destroying, and replacing “threats to the security of our nation’s communications networks posed by certain communications equipment providers.”
On February 4, 2022, the FCC announced that providers, using guidance provided by the FCC, had requested close to $5.6 billion to remove and replace equipment in their networks—nearly three times more than a previous projection for the Reimbursement Program, creating a significant financial shortfall of $3.7 billion. On July 15, 2022, the FCC informed Congress that, following an extensive review of applications submitted under the Reimbursement Program, the amount of supplemental funding needed to fully fund approved cost estimates is $3.08 billion. Pursuant to the Secure Networks Act, a funding shortfall requires the FCC to issue pro-rated reimbursements to eligible telecommunications providers—resulting in only 39.5% of approved costs being allocated for reimbursement.
The highest priority class of telecommunications providers in the Reimbursement Program serve the most rural areas of the United States where wireless connectivity is a vital lifeline to accessing telehealth services, receiving emergency notifications, and participating in the 21st century economy. Due to significant national security risks to U.S. communications infrastructure, the FCC has already prohibited monies from the Universal Service Fund (USF) from supporting the maintenance or expansion of any wireless network that has covered equipment from Huawei and ZTE present. While these actions are necessary, small rural wireless telecommunications providers rely upon USF funds, and rural America faces a perilous situation. Currently, rural wireless carriers may not use USF support to maintain, service, or upgrade networks with Huawei and ZTE equipment still present. We are jeopardizing vital communications networks nationwide and our national security.
Recognizing the importance of a well-resourced Reimbursement Program to maintaining critical telecommunications service in rural communities, we are committed to working with you on legislative solutions to promptly provide the financial resources necessary to mitigate national security vulnerabilities emanating from network equipment manufactured by untrusted companies such as Huawei Technologies and ZTE Corporation.
Thank you for your attention to this urgent matter. We look forward to working with you to find a swift solution.
Sincerely,
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) today welcomed an announcement from Google that it will make changes to its search results to clearly label facilities that provide abortions so that users seeking abortions are not misled by anti-abortion fake clinics or crisis pregnancy centers. Today’s action follows a bicameral June 17 letter led by Warner and Rep. Elissa Slotkin (D-MI) to the CEO of Alphabet Inc. and its subsidiary Google, Sundar Pichai, urging him to take action to prevent misleading Google search results and ads that lead to anti-abortion clinics.
In response to Warner and Slotkin’s letter, today Google announced that those who search for “abortion clinics near me” will only see facilities that have been verified to provide abortions in the Local Search results box on Google, unless they affirmatively choose to see additional, potentially less relevant results. Additionally, Google will clearly label results for searches such as “abortion clinics” to indicate whether the facility provides abortions.
“I welcome the changes that Google has announced today so that women seeking abortion services aren’t directed towards fake clinics that traffic in misinformation and don’t provide comprehensive health services. Importantly, this isn’t about silencing voices or restricting speech – it’s about returning search results that accurately address a user’s query and giving users information that is relevant to their searches,” said Sen. Warner today.
###