Press Releases

WASHINGTON — U.S. Sens. Mark R. Warner (D-VA) and John Kennedy (R-LA), both members of the Senate Committee on Banking, Housing, and Urban Affairs, introduced the Financial Artificial Intelligence Risk Reduction Act, bipartisan legislation to require financial regulators to address uses of AI-generated content that could disrupt financial markets.

“AI has tremendous potential but also enormous disruptive power across a variety of fields and industries – perhaps none more so than our financial markets,” said Sen. Warner, a former business executive and venture capitalist. “The time to address those vulnerabilities is now.”

“AI is moving quickly, and our laws should do the same to prevent AI manipulation from rattling our financial markets. Our bill would help ensure that AI threats do not put Americans’ investments and retirement dreams at risk,” Sen. Kennedy said.

The legislation requires the Financial Stability Oversight Council (FSOC) to coordinate financial regulators’ response to threats to the stability of the markets posed by AI, including the use of “deepfakes” by malign actors and other practices associated with the use of AI tools that could undermine the financial system, such as trading algorithms. The legislation also requires FSOC to identify gaps in existing regulations, guidance, and exam standards that could hinder effective responses to AI threats, and implement specific recommendations to address those gaps.

In response to the potential magnitude of the threat, the Financial Artificial Intelligence Risk Reduction Act would also provide for treble penalties when AI is used in violations of Securities and Exchange Commission (SEC) rules, including acts of market manipulation and fraud. The legislation also makes clear that anyone who uses an AI model is responsible for making sure that everything that model does complies with all securities laws.

The legislation also provides the National Credit Union Administration (NCUA) and Federal Housing Finance Agency (FHFA) with the authority necessary to oversee AI service providers, similar to the authority the other financial regulators have had for decades.

A copy of the legislation is available here.

###

WASHINGTON – U.S. Sens. Mark R. Warner and Tim Kaine (both D-VA) joined Sens. Sheldon Whitehouse (D-RI), Lisa Murkowski (R-AK), and Marsha Blackburn (R-TN) in introducing the Telehealth Response for E-prescribing Addiction Therapy Services (TREATS) Act, legislation that would increase access to telehealth services for individuals with substance use disorder (SUD). During the COVID-19 pandemic, the Drug Enforcement Administration (DEA) temporarily removed an in-person exam requirement for providers to prescribe SUD treatments. This change expanded access to care and reduced the risk of overdose, but it is set to expire at the end of next year. The TREATS Act would make this flexibility permanent.

“Over the course of the COVID-19 pandemic we learned valuable lessons in how to adapt our health care system in order to better care for patients, including the successful treatment of patients with opioid addiction using telehealth services,” said Sen. Warner.  “The TREATS Act would make permanent commonsense, safe telehealth practices that will expand care options for those battling with substance use disorder.”

“Telehealth has helped many Virginians get the health care they need, including access to treatments for substance use disorder,” said Sen. Kaine. “By permanently allowing doctors to prescribe life-saving treatments via telehealth, the TREATS Act would better support individuals in recovery and help reduce the risk of overdoses.”

In 2021, 2,622 Virginians died from overdose, averaging seven Virginians per day. Despite strong evidence that medication is the most effective treatment for SUD, only one in five Americans with SUD receive medication treatment that would help them quit and stay in recovery. The TREATS Act would make life-saving medication like buprenorphine more accessible and save lives.

Joining the senators in cosponsoring this legislation are Sens. Catherine Cortez Masto (D-NV), Thom Tillis (R-NC), Shelley Moore Capito (R-WV), Amy Klobuchar (D-MN), Mark Kelly (D-AZ), and Cory Booker (D-NJ). U.S. Representatives David Trone (D-MD-6), Jay Obernolte (R-CA-23), and Brian Fitzpatrick (R-PA-1) led the introduction of the legislation in the House. 

Full text of the bill is available here.

###

WASHINGTON – Today, U.S. Sens. Mark R. Warner and Tim Kaine (both D-VA) announced $2,483,817 in federal funding for the Commonwealth to provide distance learning services for rural areas. The funding was awarded through U.S. Department of Agriculture Rural Development Distance Learning & Telemedicine Grants, which provide rural communities with advanced telecommunications technology. In all, these grants will provide 197,010 Virginia students with the technology they need to take advantage of education opportunities through local colleges and universities.

“Over the past several years, we have seen the tremendous capabilities of distance learning to extend opportunities to students that have previously been limited by their geography,” said the senators. “This funding will provide 197,010 Virginia students with the technology and infrastructure they need to continue taking advantage of distance learning.”

The funding is broken down as follows:

  1. $952,388 for Germanna Community College to equip 10 locations throughout Spotsylvania, Stafford, Orange, Culpeper, Wise, Page, and Madison counties with video conferencing equipment. Instructors at Germanna Community College will use that technology to deliver mental health and healthcare educational courses to benefit 5,372 students;
  2. $740,793 for Lee County School District to equip 12 locations throughout Lee County with interactive teleconferencing equipment. Instructors at Lee County Public Schools will use that technology to deliver instructional resources, professional development courses, and mental health services to benefit 5,545 students;
  3. $475,122 for Southside Virginia Community College to equip six locations throughout Mecklenburg, Brunswick, Charlotte, Nottoway, and Greensville counties with a synchronous interactive video conferencing system. Instructors at Southside Virginia Community College will use that technology to deliver nursing and emergency management services simulation labs and shared college courses to benefit 2,805 students; and
  4. $315,514 for Virginia State University to equip 15 locations throughout Petersburg, Roanoke, Prince George, Sussex, Dinwiddie, Henry, Southampton, Franklin, Halifax, Louisa, Brunswick, Greensville, and Mecklenburg counties with integrated interactive teaching rooms at the college sites and interactive digital whiteboards at the high school sites. Instructors at Virginia State University will use that technology to deliver dual credit college courses to benefit 183,288 students.

Sens. Warner and Kaine have long supported efforts to better connect rural Virginia, including through significant funding to extend broadband capabilities to every corner of the Commonwealth.

###

WASHINGTON – U.S. Sens. Mark R. Warner (D-VA) and Jerry Moran (R-KS) today introduced legislation to establish guidelines to be used within the federal government to mitigate risks associated with Artificial Intelligence (AI) while still benefiting from new technology. U.S. Rep. Ted W. Lieu (D-CA-36) plans to introduce companion legislation in the U.S. House of Representatives. 

Congress directed the National Institute of Standards and Technology (NIST) to develop an AI Risk Management Framework that organizations, public and private, could employ to ensure they use AI systems in a trustworthy manner. This framework was released earlier this year and is supported by a wide range of public and private sector organizations, but federal agencies are not currently required to use this framework to manage their use of AI systems.

The Federal Artificial Intelligence Risk Management Act would require federal agencies to incorporate the NIST framework into their AI management efforts to help limit the risks that could be associated with AI technology.

“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries,” said Sen. Warner. “But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly. It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”

“AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector,” said Sen. Moran. “However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data. The sensible guidelines established by NIST are already being utilized in the private sector and should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions.”

“Okta is a strong proponent of interoperability across technical standards and governance models alike and as such we applaud Senators Warner and Moran for their bipartisan Federal AI Risk Management Framework Act,” said Michael Clauser, Director, Head of US Federal Affairs, Okta. “This bill complements the Administration’s recent Executive Order on Artificial Intelligence (AI) and takes the next steps by providing the legislative authority to require federal software vendors and government agencies alike to develop and deploy AI in accordance with the NIST AI Risk Management Framework (RMF). The RMF is a quality model for what public-private partnerships can produce and a useful tool as AI developers and deployers govern, map, measure, manage, and mitigate risk from low- and high-impact AI models alike.”

“IEEE-USA heartily supports the Federal Artificial Intelligence Risk Management Act of 2023,” said Russell Harrison, Managing Director, IEEE-USA. “Making the NIST Risk Management Framework (RMF) mandatory helps protect the public from unintended risks of AI systems yet permits AI technology to mature in ways that benefit the public. Requiring agencies to use standards, like those developed by IEEE, will protect both public welfare and innovation by providing a useful checklist for agencies implementing AI systems. Required compliance does not interfere with competitiveness; it promotes clarity by setting forth a ‘how-to.’”

“Procurement of AI systems is challenging because AI evaluation is a complex topic and expertise is often lacking in government,” said Dr. Arvind Narayanan, Professor of Computer Science, Princeton University. “It is also high-stakes because AI is used for making consequential decisions. The Federal Artificial Intelligence Risk Management Act tackles this important problem with a timely and comprehensive approach to revamping procurement by shoring up expertise, evaluation capabilities, and risk management.”

"Risk management in AI requires making responsible choices with appropriate stakeholder involvement at every stage in the technology's development; by requiring federal agencies to follow the guidance of the NIST AI Risk Management Framework to that end, the Federal AI Risk Management Act will contribute to making the technology more inclusive and safer overall,” said Yacine Jernite, Machine Learning & Society Lead, Hugging Face. “Beyond its direct impact on the use of AI technology by the Federal Government, this will also have far-reaching consequences by fostering more shared knowledge and development of necessary tools and good practices. We support the Act and look forward to the further opportunities it will bring to build AI technology more responsibly and collaboratively."

“The Enterprise Cloud Coalition supports the Federal AI Risk Management Act of 2023, which mandates agencies adopt the NIST AI Risk Management Framework to guide the procurement of AI solutions,” said Andrew Howell, Executive Director, Enterprise Cloud Coalition. “By standardizing risk management practices, this act ensures a higher degree of reliability and security in AI technologies used within our government, aligning with our coalition's commitment to trust in technology. We believe this legislation is a critical step toward advancing the United States' leadership in the responsible use and development of artificial intelligence on the global stage.”

A one-page explanation of the legislation can be found here.

###

WASHINGTON – Today, Virginia lawmakers gathered at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) to celebrate the U.S. Department of Energy (DOE)’s meritorious selection of Jefferson Lab as the Hub Director for the new High Performance Data Facility (HPDF) – a scientific user facility that will specialize in advanced infrastructure for data-intensive science. The project to build the HPDF Hub will be a partnership between Jefferson Lab and Lawrence Berkeley National Laboratory (LBNL), with the two labs forming a joint project team led by Jefferson Lab and charged to create an integrated HPDF Hub design.

U.S. Sens. Mark R. Warner and Tim Kaine and their colleagues have worked tirelessly to engage the DOE, stress the extent of Jefferson Lab’s capabilities and potential for growth, and best position Virginia to be selected to host the HPDF. As part of this effort, the lawmakers worked with the General Assembly and Governor Youngkin to secure over $40 million in Commonwealth funds for the planning and construction of a shell building to house the HPDF – a bipartisan feat that demonstrated Virginia’s extraordinary support of Jefferson Lab’s mission and commitment to this project.

The High Performance Data Facility is envisioned as a national resource that will serve as the foundation for advancing DOE’s ambitious Integrated Research Infrastructure (IRI) program, which aims to provide researchers the ability to seamlessly meld DOE’s unique data resources, experimental user facilities, and advanced computing resources to accelerate the pace of discovery. The mission of the HPDF will be to enable and accelerate scientific discovery by delivering state-of-the-art data management infrastructure, capabilities, and tools. The HPDF will provide a crucial national resource for artificial intelligence (AI) research, opening new approaches for the nation’s researchers to attack fundamental problems in science and engineering that require nimble, shared access to large data sets, and real-time analysis of streamed data from experiments. DOE is the leading producer of scientific data in the world, and the HPDF will deliver a platform for a broad spectrum of data-intensive research as we enter the era of exascale supercomputing and exascale data.

Today’s news follows an announcement last year by Sens. Warner and Kaine that over $76 million in federal funding was headed to Jefferson Lab for project support and infrastructure upgrades. Those investments were made possible by the Inflation Reduction Act, which passed by one vote and was supported by both senators.

“The selection of Jefferson Lab as the location and lead of the High Performance Data Facility is a monumental win for the Lab, Hampton Roads, and the Commonwealth of Virginia,” said U.S. Senator Mark R. Warner (D-VA). “Since my days as Governor, I have pushed to broaden the mission and responsibilities of Jefferson Lab to reflect the current needs of our nation. Today’s announcement is a massive step towards realizing the goal of diversifying the mission of Jefferson Lab by providing the Lab with a critical national resource that will be used to tackle fundamental problems in science and engineering, including artificial intelligence research. I’m thankful for Secretary Granholm and the Department of Energy’s commitment to ensuring the U.S. can pave the way for the next generation of advanced data management and for providing Jefferson Lab the opportunity to lead this world-class project. I look forward to working with Jefferson Lab, the Department of Energy, and my colleagues in advancing this project as quickly as possible and look forward to seeing the innumerable scientific advancements that are sure to follow.”

“Jefferson Lab’s designation as the leader of the High Performance Data Facility is a powerful recognition of the contributions Virginians make to the research we need to remain at the cutting-edge of technological innovation,” said U.S. Senator Tim Kaine (D-VA). “I’m proud to have helped advocate for this designation, and for years have gone to bat through the annual government funding process to support Jefferson Lab’s work. I will continue to do all that I can to secure the resources Virginia scientists need to advance America’s competitiveness and supercomputing capabilities.”

“From Day One of my Administration, we’ve been working with leaders in our delegation, in our General Assembly, and at Jefferson Lab to secure the High Performance Data Facility, an asset that will accelerate research driven economic development in the Commonwealth. I was proud to work with General Assembly leaders to make a $40 million investment to help land this prize that will catalyze our economy for decades to come. Our Administration will continue to support the cutting-edge technological research that has established the Commonwealth as a nationwide leader in innovation,” said Virginia Governor Glenn Youngkin.

“We are honored to be selected by the DOE’s Advanced Scientific Computing Research program to lead this project,” said Jefferson Lab Director Stuart Henderson. “Building on our extensive experience with large data sets and high performance computing, and our new and ongoing partnerships exploring state-of-the-art approaches to data and data science, we will build a new facility that will revolutionize the way we make scientific discoveries.”

“Today’s announcement is great news for all those committed to innovation and scientific discovery. Since its founding, Jefferson Lab (JLab) has established itself as a world-leader in nuclear physics research while building acumen in the computing realm to manage, store, and interpret data. These two areas of expertise have proven synergistic and advanced the lab’s mission. The location of the High Performance Data Facility will create new opportunities at Jefferson Lab and in Hampton Roads while bringing JLab’s expertise to bear for the entire network of National Labs,” said Congressman Bobby Scott (VA-03).

“I am thrilled to see Jefferson Lab selected as the Hub Director for the Department of Energy’s High Performance Data Facility,” Congressman Rob Wittman (VA-01) said. “Jefferson Lab is a leader in nuclear research, and these investments will unlock vital data science advancements for the Hampton Roads region, the Commonwealth, and our nation. I am proud to have advocated for this important investment alongside my colleagues at the local, state, and federal levels over the years, and I look forward to the future developments that will follow the completion of this critical project.”

“Jefferson Lab’s High Performance Data Facility is a once-in-a-generation initiative that will catapult Newport News and the Commonwealth to the international frontlines of data analytics and advanced computing,” said Newport News Mayor Phillip Jones. “This revolutionary facility will transform scientific research and discovery. In addition to workforce and economic development impacts, there are innumerable opportunities for higher education, research, STEM learning, and commercial investments. This exciting new project, coupled with Jefferson Lab’s already robust scientific and educational offerings, will make Newport News an even greater hub of innovation and research.”

“This project will be one of the greatest economic development projects to come to Newport News in recent memory,” said State Senator Monty Mason, who represents Jefferson Lab as Senator of the 1st District. “As a steadfast advocate on the state level, I am proud to have secured critical state funding for the planning and preparation of this project. The city and the entire peninsula will be strengthened by this facility, bringing hundreds of new jobs with salaries well over the region’s median income, boosting our local economy, and further solidifying the Virginia Peninsula as a leader in science innovation.”

“Today’s announcement is the culmination of years of collaboration between members of the Virginia General Assembly and our federal delegation to bring a High Performance Data Facility to Jefferson Lab. As Chairman of the House Appropriations Committee, I am excited that the Commonwealth’s investment will leverage between $300 million and $500 million in federal funds for this transformative opportunity,” said State Delegate and Chairman of the House Appropriations Committee Barry Knight.

“The investments made today by the Department of Energy (DOE) and the Commonwealth of Virginia into the High-Performance Data Facility mark the beginning of an unparalleled chapter for the laboratory and the wider educational community,” emphasized Dr. Sean J. Hearne, President and CEO of the Southeastern Universities Research Association (SURA). “This cutting-edge research facility serves as a gateway to explore the rapidly expanding realm of data science, offering extensive research and educational opportunities that are poised to redefine our world.”

“The Friends of Jefferson Lab, a coalition of business leaders spanning from Richmond to the oceanfront, are delighted Jefferson Lab has been chosen as the site for the high performance data facility,” said Alan Witt, Chair of Friends of JLab. “Jefferson Lab is a vital asset to Hampton Roads and the addition of this facility will add greatly to the economic, scientific, and educational fabric of the Virginia Peninsula, Hampton Roads, and the Commonwealth of Virginia.”

Specifically, the HPDF will have a “hub-and-spoke” model in which Jefferson Lab and LBNL will host mirrored centralized resources. It will enable high priority DOE mission applications at “spoke” sites by deploying and orchestrating distributed infrastructure at the spokes or other locations. Under Jefferson Lab’s leadership, the Jefferson Lab/LBNL partnership will assemble a world-class HPDF Hub project team to deliver a geographically resilient and innovative HPDF core infrastructure capable of meeting the needs of a wide diversity of users, institutions, and use cases. This Jefferson Lab-led partnership will itself provide the template for the first spoke partnerships and blaze new paths in institutional engagement and outreach in the emerging era of AI-enabled integrated science.

As identified in the DOE’s Mission Need Statement for the High Performance Data Facility approved August 2020, DOE anticipates that the total project cost of the HPDF project, including the hub and spokes, will be between $300 million and $500 million in current and future year funds, subject to the availability of future year appropriations.

###

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged several artificial intelligence (AI) companies to take additional action to promote safety and prevent malicious misuse of their products. In a series of letters, Sen. Warner applauded certain companies for publicly joining voluntary commitments proposed by the Biden administration, but encouraged them to broaden their efforts, and called on companies that have not taken this public step to commit to making their products more secure.

As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. In July, the Biden administration announced that several AI companies had agreed to a series of voluntary commitments that would promote greater security and transparency. However, the commitments were not fully comprehensive in scope or in participation, with many companies not publicly participating and several exploitable aspects of the technology left untouched by the commitments.

In a series of letters sent today, Sen. Warner pushed directly on companies that did not participate, including Apple, Midjourney, Mistral AI, Databricks, Scale AI, and Stability AI, requesting a response detailing the steps they plan to take to increase the security of their products and prioritize transparency. Sen. Warner additionally sent letters to companies that were involved in the Biden administration’s commitments, including Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI, asking that they extend commitments to less capable models and also develop consumer-facing commitments – such as development and monitoring practices – to prevent the most serious forms of misuse. 

“While representing an important improvement upon the status quo, the voluntary commitments announced in July can be bolstered in key ways through additional commitments,” Sen. Warner wrote.

Sen. Warner also called specific attention to the urgent need for all AI companies to make additional commitments to safeguard against a few highly sensitive potential misuses, including non-consensual intimate image generation (including child sexual abuse material), social-scoring, real-time facial recognition, and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.

The letters follow up on Sen. Warner’s previous efforts to engage directly with AI companies to push for responsible development and deployment. In April, Sen. Warner directly called on AI CEOs to develop practices that would ensure that their products and systems are secure. In July, he also pushed on the Biden administration to keep working with AI companies to expand the scope of the voluntary commitments.

Additionally, Sen. Warner wrote to Google last week to raise concerns about their testing of new AI technology in real medical settings. Separately, he urged the CEOs of several AI companies to address a concerning report that generative chatbots were producing instructions on how to exacerbate an eating disorder. Additionally, he has introduced several pieces of legislation aimed at making tech safer and more humane, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

Copies of each of the letters can be found here.

###

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence and author of the bipartisan law to invest in domestic semiconductor manufacturing, today released a statement on the one-year anniversary of the CHIPS and Science Act: 

“I fought to pass the CHIPS and Science Act because it’s good for our supply chains, our families, and our national security to make semiconductors here at home. In the year since, the law has bolstered innovation, helped America to compete against countries like China for the technology of the future, and created good-paying manufacturing jobs that will grow the middle class.”

Nearly everything that has an “on” switch – from electric toothbrushes and calculators to airplanes and satellites – contains a semiconductor. One year ago, President Biden signed into law the CHIPS and Science Act, a law co-authored by Warner to make a nearly $53 billion investment in U.S. semiconductor manufacturing, research and development, and workforce, and create a 25 percent tax credit for capital investments in semiconductor manufacturing. 

Semiconductors were invented in the United States, but today we produce only about 12 percent of global supply – and none of the most advanced chips. Similarly, investments in research and development have fallen to less than 1 percent of GDP from 2 percent in the mid-1960s at the peak of the space race. The CHIPS and Science Act aims to change this by driving American competitiveness, making American supply chains more resilient, and supporting our national security and access to key technologies. In the one year since it was signed into law, companies have announced over $231 billion in commitments in semiconductor and electronics investments in the United States.

Last month, Sen. Warner co-hosted the CHIPS for Virginia Summit, convening industry, federal and state government, and academic leaders for a series of strategic discussions on how to propel Virginia forward in the booming U.S. semiconductor economy.

### 

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged Google CEO Sundar Pichai to provide more clarity into his company’s deployment of Med-PaLM 2, an artificial intelligence (AI) chatbot currently being tested in health care settings. In a letter, Sen. Warner expressed concerns about reports of inaccuracies in the technology, and called on Google to increase transparency, protect patient privacy, and ensure ethical guardrails.

In April, Google began testing Med-PaLM 2 with customers, including the Mayo Clinic. Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While the technology has shown some promising results, there are also concerning reports of repeated inaccuracies and of Google’s own senior researchers expressing reservations about the readiness of the technology. Additionally, much remains unknown about where Med-PaLM 2 is being tested, what data sources it learns from, to what extent patients are aware of and can object to the use of AI in their treatment, and what steps Google has taken to protect against bias.

“While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” Sen. Warner wrote. 

The letter raises concerns over AI companies prioritizing the race to establish market share over patient well-being. Sen. Warner also emphasizes his previous efforts to raise the alarm about Google skirting health privacy as it trained diagnostic models on sensitive health data without patients’ knowledge or consent.

“It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI,” Sen. Warner continued.

The letter poses a broad range of questions for Google to answer, requesting more transparency into exactly how Med-PaLM 2 is being rolled out, what data sources Med-PaLM 2 learns from, how much information and agency patients have over how AI is involved in their care, and more.

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In April, Sen. Warner directly expressed concerns to several AI CEOs – including Sundar Pichai – about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure. Last month, he called on the Biden administration to work with AI companies to develop additional guardrails around the responsible deployment of AI. He has also introduced several pieces of legislation aimed at making tech more secure, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

A copy of the letter can be found here and below. 

Dear Mr. Pichai,

I write to express my concern regarding reports that Google began providing Med-PaLM 2 to hospitals to test early this year. While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors.

Over the past year, large technology companies, including Google, have been rushing to develop and deploy AI models and capture market share as the technology has received increased attention following OpenAI’s launch of ChatGPT. Numerous media outlets have reported that companies like Google and Microsoft have been willing to take bigger risks and release more nascent technology in an effort to gain a first mover advantage. In 2019, I raised concerns that Google was skirting health privacy laws through secretive partnerships with leading hospital systems, under which it trained diagnostic models on sensitive health data without patients’ knowledge or consent. This race to establish market share is readily apparent and especially concerning in the health care industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in health care institutions in recent years, and the sensitivity of health information. One need look no further than AI pioneer Joseph Weizenbaum’s experiments involving chatbots in psychotherapy to see how users can put premature faith in even basic AI solutions.

According to Google, Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While AI models have previously been used in medical settings, the use of generative AI tools presents complex new questions and risks. According to the Wall Street Journal, a senior research director at Google who worked on Med-PaLM 2 said, “I don’t feel that this kind of technology is yet at a place where I would want it in my family’s healthcare journey.” Indeed, Google’s own research, released in May, showed that Med-PaLM 2’s answers contained more inaccurate or irrelevant information than answers provided by physicians. It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI.

Given these serious concerns and the fact that VHC Health, based in Arlington, Virginia, is a member of the Mayo Clinic Care Network, I request that you provide answers to the following questions. 

  1. Researchers have found large language models to display a phenomenon described as “sycophancy,” wherein the model generates responses that confirm or cater to a user’s (tacit or explicit) preferred answers, which could produce risks of misdiagnosis in the medical context. Have you tested Med-PaLM 2 for this failure mode?
  2. Large language models frequently demonstrate the tendency to memorize contents of their training data, which can risk patient privacy in the context of models trained on sensitive health information. How has Google evaluated Med-PaLM 2 for this risk and what steps has Google taken to mitigate inadvertent privacy leaks of sensitive health information?
  3. What documentation did Google provide hospitals, such as Mayo Clinic, about Med-PaLM 2? Did it share model or system cards, datasheets, data-statements, and/or test and evaluation results?
  4. Google’s own research acknowledges that its clinical models reflect scientific knowledge only as of the time the model is trained, necessitating “continual learning.” What is the frequency with which Google fully or partially re-trains Med-PaLM 2? Does Google ensure that licensees use only the most up-to-date model version?
  5. Google has not publicly provided documentation on Med-PaLM 2, including refraining from disclosing the contents of the model’s training data. Does Med-PaLM 2’s training corpus include protected health information?
  6. Does Google ensure that patients are informed when Med-PaLM 2, or other AI models offered or licensed by Google, are used in their care by health care licensees? If so, how is the disclosure presented? Is it part of a longer disclosure, or is it presented more clearly?
  7. Do patients have the option to opt-out of having AI used to facilitate their care? If so, how is this option communicated to patients?
  8. Does Google retain prompt information from health care licensees, including protected health information contained therein? Please list each purpose Google has for retaining that information.
  9. What license terms exist in any product license to use Med-PaLM 2 to protect patients, ensure ethical guardrails, and prevent misuse or inappropriate use of Med-PaLM 2? How does Google ensure compliance with those terms in the post-deployment context? 
  10. How many hospitals is Med-PaLM 2 currently being used at? Please provide a list of all hospitals and health care systems Google has licensed or otherwise shared Med-PaLM 2 with.
  11. Does Google use protected health information from hospitals using Med-PaLM 2 to retrain or finetune Med-PaLM 2 or any other models? If so, does Google require that hospitals inform patients that their protected health information may be used in this manner?
  12. In Google’s own research publication announcing Med-PaLM 2, researchers cautioned about the need to adopt “guardrails to mitigate against over-reliance on the output of a medical assistant.” What guardrails has Google adopted to mitigate over-reliance on the output of Med-PaLM 2 as well as when it particularly should and should not be used? What guardrails has Google incorporated through product license terms to prevent over-reliance on the output?

 

### 

 

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged the Biden administration to build on its recently announced voluntary commitments from several prominent artificial intelligence (AI) leaders in order to promote greater security, safety, and trust in the rapidly developing AI field.

As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. On Friday, the Biden administration announced that several AI companies had agreed to a series of measures that would promote greater security and transparency. Sen. Warner wrote to the administration to applaud these efforts and laid out a series of next steps to bolster this progress, including extending commitments to less capable models, seeking consumer-facing commitments, and developing an engagement strategy to better address security risks.

“These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks,” Sen. Warner wrote. “As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models.”

The letter builds on Sen. Warner’s continued advocacy for the responsible development and deployment of AI. In April, Sen. Warner directly expressed concerns to several AI CEOs about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure.

The letter also affirms Congress’ role in regulating AI, and expands on the annual Intelligence Authorization Act, legislation that recently passed unanimously through the Senate Select Committee on Intelligence. Sen. Warner urges the administration to adopt the strategy outlined in this pending bill as well as work with the FBI, CISA, ODNI, and other federal agencies to fully address the potential risks of AI technology.

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In addition to his April letters, he has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

A copy of the letter can be found here and below. 

Dear President Biden,

I write to applaud the Administration’s significant efforts to secure voluntary commitments from leading AI vendors related to promoting greater security, safety, and trust through improved development practices. These commitments – largely applicable to these vendors’ most advanced products – can materially reduce a range of security and safety risks identified by researchers and developers in recent years. In April, I wrote to a number of these same companies, urging them to prioritize security and safety in their development, product release, and post-deployment practices. Among other things, I asked them to fully map dependencies and downstream implications of compromise of their systems; focus greater financial, technical and personnel resources on internal security; and improve their transparency practices through greater documentation of system capabilities, system limitations, and training data.

These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks. Moreover, a growing roster of highly-capable open source models have been released to the public – and would benefit from pre-deployment commitments similar to those contained in a number of the July 21st obligations. As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models. 

To be sure, responsibility ultimately lies with Congress to develop laws that advance consumer and patient safety, address national security and cyber-crime risks, and promote secure development practices in this burgeoning and highly consequential industry – and in the downstream industries integrating their products. In the interim, the important commitments your Administration has secured can be bolstered in a number of important ways. 

First, I strongly encourage your Administration to continue engagement with this industry to extend all of these commitments more broadly to less capable models that, in part through their wider adoption, can produce the most frequent examples of misuse and compromise.

Second, it is vital to build on these developer- and researcher-facing commitments with a suite of lightweight consumer-facing commitments to prevent the most serious forms of abuse. Most prominent among these should be commitments from leading vendors to adopt development practices, licensing terms, and post-deployment monitoring practices that prevent non-consensual intimate image generation, social-scoring, real-time facial recognition (in contexts not governed by existing legal protections or due process safeguards), and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.

Lastly, the Administration’s successful high-level engagement with the leadership of these companies must be complemented by a deeper engagement strategy to track national security risks associated with these technologies. In June, the Senate Select Committee on Intelligence on a bipartisan basis advanced our annual Intelligence Authorization Act, a provision of which directed the President to establish a strategy to better engage vendors, downstream commercial users, and independent researchers on the security risks posed by, or directed at, AI systems.

This provision was spurred by conversations with leading vendors, who confided that they would not know how best to report malicious activity – such as suspected intrusions of their internal networks, observed efforts by foreign actors to generate or refine malware using their tools, or identified activity by foreign malign actors to generate content to mislead or intimidate voters.  To be sure, a highly-capable and well-established set of resources, processes, and organizations – including the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Office of the Director of National Intelligence’s Foreign Malign Influence Center – exist to engage these communities, including through counter-intelligence education and defensive briefings. Nonetheless, it appears that these entities have not been fully activated to engage the range of key stakeholders in this space. For this reason, I would encourage you to pursue the contours of the strategy outlined in our pending bill. 

Thank you for your Administration’s important leadership in this area. I look forward to working with you to develop bipartisan legislation in this area.

###

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) today announced $1,820,000 for Virginia universities to research and develop AI capabilities to mitigate cyberattacks. Federal funding will allow the University of Virginia and Norfolk State University to study innovative AI-based approaches to cybersecurity. Researchers from these institutions will collaborate with teams at 10 additional educational institutions and 20 private industry partners to develop revolutionary methods to counter cyberattacks in which AI-enabled intelligent security agents will cooperate with humans to build more resilient networks.

“Addressing the cybersecurity threats that our nation faces requires constant adaptation and innovation, and utilizing AI to counter these threats is an incredibly exciting use-case for this emerging technology,” said Sen. Warner. “This funding will allow teams at the University of Virginia and Norfolk State to do groundbreaking research on ways AI can help safeguard against cyberattacks. I congratulate UVA and NSU on receiving this funding, and I can’t wait to see what they discover and develop.”

The funding is distributed as follows:

·         Norfolk State University will receive $975,000.

·         University of Virginia will receive $845,000.

Funding for these awards is provided jointly by the National Science Foundation, the Department of Homeland Security, and IBM. Investments are designed to build a diverse AI workforce across the United States. 

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for improving cybersecurity and security-oriented design by AI companies. In April, he sent a series of letters to CEOs of several AI companies urging them to prioritize security, combat bias, and responsibly roll out new technologies. In November 2022, he published “Cybersecurity is Patient Safety,” a policy options paper that outlined current cybersecurity threats facing health care providers and offered a series of policy solutions to improve cybersecurity. As Chairman of the Senate Select Committee on Intelligence, Sen. Warner co-authored legislation that requires companies responsible for U.S. critical infrastructure to report cybersecurity incidents to the government. He has also introduced several pieces of legislation aimed at building a more secure internet, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries, and the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms.

### 

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) joined 27 colleagues in introducing the Kids Online Safety Act, comprehensive bipartisan legislation to protect children online.

The Kids Online Safety Act provides young people and parents with the tools, safeguards, and transparency they need to protect against online harms. The bill requires social media platforms to by default enable a range of protections against addictive design and algorithmic recommendations. It also requires privacy protections, dedicated channels to report harm, and independent audits by experts and academic researchers to ensure that social media platforms are taking meaningful steps to address risks to kids. 

“Experts are clear: kids and teens are growing up in a toxic and unregulated social media landscape that promotes bullying, eating disorders, and mental health struggles,” said Sen. Warner. “The Kids Online Safety Act would give kids and parents the long-overdue ability to control some of the least transparent and most damaging aspects of social media, creating a safer and more humane online environment.”

Reporting has shown that social media companies have proof that their platforms contribute to mental health issues in children and teens, and that young people have experienced a precipitous rise in mental health crises over the last decade. 

Specifically, the Kids Online Safety Act would: 

·         Require that social media platforms provide minors with options to protect their information, disable addictive product features, and opt out of algorithmic recommendations. Platforms would be required to enable the strongest settings by default.

·         Give parents new controls to help support their children and identify harmful behaviors, and provide parents and children with a dedicated channel to report harms to kids to the platform. 

·         Create a responsibility for social media platforms to prevent and mitigate harms to minors, such as promotion of suicide, eating disorders, substance abuse, sexual exploitation, and unlawful products for minors (e.g. gambling and alcohol).

·         Require social media platforms to perform an annual independent audit that assesses the risks to minors, their compliance with this legislation, and whether the platform is taking meaningful steps to prevent those harms. 

·         Provide academic and public interest organizations with access to critical datasets from social media platforms to foster research regarding harms to the safety and well-being of minors. 

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and building a safer online environment. He has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology and social media platforms from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

The one-page summary of the bill can be found here, the section-by-section summary can be found here, and the full text of the Senate bill can be found here.

###

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) released the statement below after the Drug Enforcement Administration (DEA) announced that it would extend current flexibilities around telehealth prescriptions of controlled substances, including those that treat opioid use disorder and anxiety, while it reviews a record number of comments received in response to its new proposed telemedicine rules. This move follows strong advocacy by Sen. Warner, who spoke out in March about the need to ensure that patients can continue getting their medications and sent a letter to DEA in August 2022 asking it to explain its plan for continuity of care after the COVID-19 Public Health Emergency.

“I’m pleased to see that the DEA is taking additional time to consider the comments to their proposed rule, which I believe overlooked the key benefits and lessons learned during the pandemic. This proposed rule could counterproductively exacerbate the opioid crisis and push patients to seek dangerous alternatives to proper health care, such as self-medicating, by removing a telehealth option in many cases. I’m working with my colleagues in Congress on a response to DEA’s proposed rule, and I look forward to further robust discussion on this critical issue.”

During COVID-19, patients widely adopted telehealth as a convenient and accessible way to get care remotely. This was made possible by the COVID-19 Public Health Emergency, which allowed for a number of flexibilities, including utilizing an exception to the in-person medical evaluation requirement under the Ryan Haight Online Pharmacy Consumer Protection Act, legislation regulating the online prescription of controlled substances. With the Public Health Emergency set to expire, patients will soon lose the ability to reap the benefits of a mature telehealth system in which responsible providers know how to take care of their patients remotely when appropriate.  

Since 2008, Congress has directed the DEA to set up a special registration process, another exception process under the Ryan Haight Act, that would open up the door for quality health care providers to evaluate a patient and prescribe controlled substances over telehealth safely, as they’ve done during the pandemic. This special registration process has yet to be established, and DEA wrote they believe this proposed rule fulfills those Congressional mandates, despite not proposing such a registration.

Sen. Warner, a former tech entrepreneur, has been a longtime advocate for increased access to telehealth. He is a co-author of the CONNECT for Health Act, which would expand coverage of telehealth services through Medicare, make COVID-19 telehealth flexibilities permanent, improve health outcomes, and make it easier for patients to safely connect with their doctors. He previously wrote to both the Biden and Trump administrations, urging the DEA to finalize regulations long-delayed by prior administrations allowing doctors to prescribe controlled substances through telehealth. Sen. Warner also sent a letter to Senate leadership during the height of the COVID-19 crisis, calling for the permanent expansion of access to telehealth services.

In 2018, Sen. Warner included a provision to expand financial coverage for virtual substance use treatment in the Opioid Crisis Response Act of 2018. In 2003, then-Gov. Warner expanded Medicaid coverage for telemedicine statewide, including evaluation and management visits, a range of individual psychotherapies, the full range of consultations, and some clinical services, including in cardiology and obstetrics. Coverage was also expanded to include non-physician providers. Among other benefits, the telehealth expansion allowed individuals in medically underserved and remote areas of Virginia to access quality specialty care that isn’t always available at home.

### 

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged CEOs of several artificial intelligence (AI) companies to prioritize security, combat bias, and responsibly roll out new technologies. In a series of letters, Sen. Warner expressed concerns about the potential risks posed by AI technology, and called on companies to ensure that their products and systems are secure.

In the past several years, AI technology has rapidly advanced while chatbots and other generative AI products have simultaneously widened the accessibility of AI products and services. As these technologies are rolled out broadly, open source researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques.

“[W]ith the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work,” Sen. Warner wrote. “Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field.”

Sen. Warner highlighted several specific security risks associated with AI, including data supply chain security and data poisoning attacks. He also expressed concerns about algorithmic bias, trustworthiness, and potential misuse or malicious use of AI systems.

The letters include a series of questions for companies developing large-scale AI models to answer, aimed at ensuring that they are taking appropriate measures to address these security risks. Among the questions are inquiries about companies' security strategies, limits on third-party access to their models that undermine the ability to evaluate model fitness, and steps taken to ensure secure and accurate data inputs and outputs. Recipients of the letter include the CEOs of OpenAI, Scale AI, Meta, Google, Apple, Stability AI, Midjourney, Anthropic, Percipient.ai, and Microsoft.

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. He has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

A copy of the letters can be found here and below. 

I write today regarding the need to prioritize security in the design and development of artificial intelligence (AI) systems. As companies like yours make rapid advancements in AI, we must acknowledge the security risks inherent in this technology and ensure AI development and adoption proceeds in a responsible and secure way. While public concern about the safety and security of AI has been on the rise, I know that work on AI security is not new. However, with the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work. Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field.

I recognize the important work you and your colleagues are doing to advance AI. As a leading company in this emerging technology, I believe you have a responsibility to ensure that your technology products and systems are secure. I have long advocated for incorporating security-by-design, as we have found time and again that failing to consider security early in the product development lifecycle leads to more costly and less effective security. Instead, incorporating security upfront can reduce costs and risks. Moreover, the last five years have demonstrated that the speed, scale, and excitement associated with new technologies have frequently obscured the shortcomings of their creators in anticipating the harmful effects of their use. AI capabilities hold enormous potential; however, we must ensure that they do not advance without appropriate safeguards and regulation. 

While it is important to apply many of the same security principles we associate with traditional computing services and devices, AI presents a new set of security concerns that are distinct from traditional software vulnerabilities. Some of the AI-specific security risks that I am concerned about include the origin, quality, and accuracy of input data (data supply chain), tampering with training data (data poisoning attacks), and inputs to models that intentionally cause them to make mistakes (adversarial examples). Each of these risks further highlights the need for secure, quality data inputs. Broadly speaking, these techniques can effectively defeat or degrade the integrity, security, or performance of an AI system (including the potential confidentiality of its training data). As leading models are increasingly integrated into larger systems, often without fully mapping dependencies and downstream implications, the effects of adversarial attacks on AI systems are only magnified.

In addition to those risks, I also have concerns regarding bias, trustworthiness, and potential misuse or malicious use of AI systems. In the last six months, we have seen open source researchers repeatedly exploit a number of prominent, publicly-accessible generative models – crafting a range of clever (and often foreseeable) prompts to easily circumvent a system’s rules. Examples include using widely-adopted models to generate malware, craft increasingly sophisticated phishing techniques, contribute to disinformation, and provide harmful information. It is imperative that we address threats to not only digital security, but also threats to physical security and political security.

In light of this, I am interested in learning about the measures that your company is taking to ensure the security of its AI systems. I request that you provide answers to the following questions no later than May 26, 2023.

Questions: 

1.     Can you provide an overview of your company’s security approach or strategy?

2.     What limits do you enforce on third-party access to your model and how do you actively monitor for non-compliant uses?

3.     Are you participating in third party (internal or external) test & evaluation, verification & validation of your systems?

4.     What steps have you taken to ensure that you have secure and accurate data inputs and outputs? Have you provided comprehensive and accurate documentation of your training data to downstream users to allow them to evaluate whether your model is appropriate for their use?

5.     Do you provide complete and accurate documentation of your model to commercial users? Which documentation standards or procedures do you rely on?

6.     What kind of input sanitization techniques do you implement to ensure that your systems are not susceptible to prompt injection techniques that pose underlying system risks?

7.     How are you monitoring and auditing your systems to detect and mitigate security breaches?

8.     Can you explain the security measures that you take to prevent unauthorized access to your systems and models?

9.     How do you protect your systems against potential breaches or cyberattacks? Do you have a plan in place to respond to a potential security incident? What is your process for alerting users that have integrated your model into downstream systems? 

10. What is your process for ensuring the privacy of sensitive or personal information that your system uses?

11. Can you describe how your company has handled past security incidents?

12. What security standards, if any, are you adhering to? Are you using NIST’s AI Risk Management Framework?

13. Is your company participating in the development of technical standards related to AI and AI security?

14. How are you ensuring that your company continues to be knowledgeable about evolving security best practices and risks? 

15. How is your company addressing concerns about AI trustworthiness, including potential algorithmic bias and misuse or malicious use of AI?

16. Have you identified any security challenges unique to AI that you believe policymakers should address?

Thank you for your attention to these important matters, and I look forward to your response.

###

WASHINGTON – U.S. Sens. Mark R. Warner (D-VA) and John Hoeven (R-ND) this week introduced legislation to support the research and development of unmanned aerial systems (UAS) technologies at the nation’s UAS test sites, including the site at Virginia Tech.

“Unmanned Aerial Systems have the potential to transform the way we manage disasters, maintain our infrastructure, administer medicine, tackle national security threats, and conduct day-to-day business,” said Sen. Warner. “UAS test sites, such as the one located at Virginia Tech, are crucial to the research and development of these technologies, and I am glad to continue building on the progress we have made over the last decade.”

“UAS play a crucial role in our country’s defense, and there is tremendous potential yet to be realized, benefiting our national security as well as our economy,” said Sen. Hoeven. “The UAS test sites, including the Northern Plains UAS Test Site in North Dakota, are at the center of our efforts to ensure these aircraft can be safely integrated into our national airspace. This legislation supports their ongoing work and dovetails with the new BVLOS waivers we recently secured for our test site, further strengthening North Dakota’s position in this dynamic industry.”

Specifically, this legislation:

  • Extends the authorization for the Federal Aviation Administration’s (FAA) UAS test sites for an additional five years through 2028;
  • Formally authorizes research grants through the FAA for the purpose of demonstrating or validating technology related to the integration of UAS in the national airspace system (NAS);
  • Requires a grant recipient to have a contract with an FAA UAS test site;
  • Identifies key research priorities, including: detect and avoid capabilities; beyond visual line of sight (BVLOS) operations; operation of multiple unmanned aircraft systems; unmanned systems traffic management; command and control; and UAS safety standards.

This legislation builds on Sen. Warner’s efforts to expand the domestic production of unmanned systems – including driverless cars, drones, and unmanned maritime vehicles – and make Virginia a national leader in this growing sector. Earlier this year, he introduced the Increasing Competitiveness for American Drones Act, legislation that would clear the way for drones to be used for commercial transport of goods across the country. As Chairman of the Senate Intelligence Committee, he has led efforts in Congress to shore up U.S. national security and cybersecurity against threats posed by hostile foreign governments through unmanned air systems. Last month, Sen. Warner introduced legislation to prohibit the federal government from purchasing drones manufactured in countries identified as national security threats, such as the People’s Republic of China.

###

WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, and John Thune (R-SD), ranking member of the Commerce Committee’s Subcommittee on Communications, Media and Broadband, led a group of 12 bipartisan senators to introduce the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, legislation that will comprehensively address the ongoing threat posed by technology from foreign adversaries by better empowering the Department of Commerce to review, prevent, and mitigate information and communications technology transactions that pose undue risk to our national security.

“Today, the threat that everyone is talking about is TikTok, and how it could enable surveillance by the Chinese Communist Party, or facilitate the spread of malign influence campaigns in the U.S. Before TikTok, however, it was Huawei and ZTE, which threatened our nation’s telecommunications networks. And before that, it was Russia’s Kaspersky Lab, which threatened the security of government and corporate devices,” said Sen. Warner. “We need a comprehensive, risk-based approach that proactively tackles sources of potentially dangerous technology before they gain a foothold in America, so we aren’t playing Whac-A-Mole and scrambling to catch up once they’re already ubiquitous.”

“Congress needs to stop taking a piecemeal approach when it comes to technology from adversarial nations that pose national security risks,” said Sen. Thune. “Our country needs a process in place to address these risks, which is why I’m pleased to work with Senator Warner to establish a holistic, methodical approach to address the threats posed by technology platforms – like TikTok – from foreign adversaries. This bipartisan legislation would take a necessary step to ensure consumers’ information and our communications technology infrastructure is secure.”

The RESTRICT Act establishes a risk-based process, tailored to the rapidly changing technology and threat environment, by directing the Department of Commerce to identify and mitigate foreign threats to information and communications technology products and services.

In addition to Sens. Warner and Thune, the legislation is co-sponsored by Sens. Tammy Baldwin (D-WI), Deb Fischer (R-NE), Joe Manchin (D-WV), Jerry Moran (R-KS), Michael Bennet (D-CO), Dan Sullivan (R-AK), Kirsten Gillibrand (D-NY), Susan Collins (R-ME), Martin Heinrich (D-NM), and Mitt Romney (R-UT).

The Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act would:

  • Require the Secretary of Commerce to establish procedures to identify, deter, disrupt, prevent, prohibit, and mitigate transactions involving information and communications technology products in which any foreign adversary has any interest and that pose undue or unacceptable risk to national security;
  • Prioritize evaluation of information and communications technology products used in critical infrastructure, integral to telecommunications products, or pertaining to a range of defined emerging, foundational, and disruptive technologies with serious national security implications;
  • Ensure comprehensive actions to address risks of untrusted foreign information and communications technology products by requiring the Secretary to take up consideration of concerning activity identified by other government entities;
  • Educate the public and business community about the threat by requiring the Secretary of Commerce to coordinate with the Director of National Intelligence to provide declassified information on how transactions denied or otherwise mitigated posed undue or unacceptable risk.

“We need to protect Americans’ data and keep our country safe against today and tomorrow’s threats. While many of these foreign-owned technology products and social media platforms like TikTok are extremely popular, we also know these products can pose a grave danger to Wisconsin’s users and threaten our national security,” said Sen. Baldwin. “This bipartisan legislation will empower us to respond to our fast-changing environment – giving the United States the tools it needs to assess and act on current and future threats that foreign-owned technologies pose to Wisconsinites and our national security.”

“There are a host of dangerous technology platforms – including TikTok – that can be manipulated by China and other foreign adversaries to threaten U.S. national security and abuse Americans’ personal data. I’m proud to join Senator Warner in introducing bipartisan legislation that would put an end to disjointed interagency responses and strengthen the federal government’s ability to counter these digital threats,” said Sen. Fischer.

“Over the past several years, foreign adversaries of the United States have encroached on American markets through technology products that steal sensitive location and identifying information of U.S. citizens, including social media platforms like TikTok. This dangerous new internet infrastructure poses serious risks to our nation’s economic and national security,” said Sen. Manchin. “I’m proud to introduce the bipartisan RESTRICT ACT, which will empower the Department of Commerce to adopt a comprehensive approach to evaluating and mitigating these threats posed by technology products. As Chairman of the Senate Armed Services Subcommittee on Cybersecurity, I will continue working with my colleagues on both sides of the aisle to get this critical legislation across the finish line.”

“Foreign adversaries are increasingly using products and services to collect information on American citizens, posing a threat to our national security,” said Sen. Moran. “This legislation would give the Department of Commerce the authority to help prevent adversarial governments from introducing harmful products and services in the U.S., providing us the long-term tools necessary to combat the infiltration of our information and communications systems. The government needs to be vigilant against these threats, but a comprehensive data privacy law is needed to ensure Americans are able to control who accesses their data and for what purpose.”

“We shouldn’t let any company subject to the Chinese Communist Party’s dictates collect data on a third of our population – and while TikTok is just the latest example, it won’t be the last. The federal government can’t continue to address new foreign technology from adversarial nations in a one-off manner; we need a strategic, enduring mechanism to protect Americans and our national security. I look forward to working in a bipartisan way with my colleagues on the Senate Select Intelligence Committee to send this bill to the floor,” said Sen. Bennet.

“Our modern economy, communication networks, and military rely on a range of information communication technologies. Unfortunately, some of these technology products pose a serious risk to our national security,” said Sen. Gillibrand. “The RESTRICT Act will address this risk by empowering the Secretary of Commerce to carefully evaluate these products and ensure that they do not endanger our critical infrastructure or undermine our democratic processes.”

“China’s brazen incursion of our airspace with a sophisticated spy balloon was only the most recent and highly visible example of its aggressive surveillance that has targeted our country for years. Through hardware exports, malicious software, and other clandestine means, China has sought to steal information in an attempt to gain a military and economic edge,” said Sen. Collins. “Rather than taking a piecemeal approach to these hostile acts and reacting to each threat individually, our legislation would create a holistic, government-wide response to proactively defend against surveillance attempts by China and other adversaries. This will directly improve our national security as well as safeguard Americans’ personal information and our nation’s vital intellectual property.”

“Cybersecurity is one of the most serious economic and national security challenges we face as a nation. The future of conflict is moving further away from the battlefield and closer to the devices and the networks everyone increasingly depends on. We need a systemic approach to addressing potential threats posed by technology from foreign adversaries. This bill provides that approach by authorizing the Administration to review and restrict apps and services that pose a risk to Americans’ data security. I will continue to push for technology defenses that the American people want and deserve to keep our country both safe and free,” said Sen. Heinrich.

“The Chinese Communist Party is engaged in a multi-generational, multi-faceted, and systematic campaign to replace the United States as the world’s superpower. One tool at its disposal—the ability to force social media companies headquartered in China, like TikTok’s parent company, to hand over the data it collects on users,” said Sen. Romney. “Our adversaries—countries like China, Russia, Iran—are increasingly using technology products to spy on Americans and discover vulnerabilities in our communications infrastructure, which can then be exploited. The United States must take stronger action to safeguard our national security against the threat technology products pose and this legislation is a strong step in that direction.”

A two-page summary of the bill is available here. A copy of the bill text is available here.

### 

WASHINGTON – Today, Chairman of the Senate Select Committee on Intelligence U.S. Sen. Mark R. Warner (D-VA) appeared on FOX News Sunday to discuss how the U.S. needs to tackle rising threats posed by the Communist Party of China.

On how the United States needs to address the rise of the Chinese Communist Party on the world stage:

“We have never had a potential adversary like China. The Soviet Union, Russia, was military or ideological, China is investing in economic areas. They have $500 billion in intellectual property theft, and we are in a competition not just on a national security basis but on a technology basis. That's why national security now includes telecommunications, satellites, artificial intelligence, quantum computing. Each of these domains, we have got to make the kind of investments to stay ahead. I think we are starting that in a bipartisan way. We did the CHIPS bill to try to bring semiconductor manufacturing back, we have kicked Huawei out of our telecom systems. This week, I have a broad bipartisan bill that I am launching with my friend John Thune, the Republican lead, where we are going to say, in terms of foreign technology coming into America, we’ve got to have a systemic approach to make sure we can ban or prohibit it when necessary.”

On the influence of TikTok:

“Listen, you have 100 million Americans on TikTok, 90 minutes a day…They are taking data from Americans, not keeping it safe, but what worries me more with TikTok is that this could be a propaganda tool. The kind of videos you see would promote ideological issues. If you look at what TikTok shows to the Chinese kids, which is all about science and engineering, versus what our kids see, there’s a radical difference.”

On China’s support for Putin’s war in Ukraine:

“…if China moves forward to support Russia in Ukraine, I can't understand some of my colleagues who are willing to say, ‘I don't really care about Ukraine, but I'm concerned about China.’ Well, China and Russia, these authoritarian regimes, are linked, and we have to make sure Putin is not successful in Ukraine and that Xi doesn't further his expansion plans around Taiwan.”

Video of Sen. Warner on FOX News Sunday can be found here. A transcript follows.

FOX News Sunday 

SHANNON BREAM: Joining us now, Virginia Democratic Senator Mark Warner, Chairman of the Senate Intelligence Committee, welcome back. This week, you all have a hearing on worldwide threat assessments. You will have the DNI, the director of the CIA there. You have long been warning about China on multiple fronts. Do you think that we have lost valuable time in assessing the threat accurately? Will you talk about that this week?

SENATOR MARK WARNER: Well I think for a long time conventional wisdom was, the more you bring China into the world order, the more they’re going to change. That assumption was just plain wrong. China even changed their laws in 2016 to make it explicitly clear that every company in China, their first obligation is to the Communist Party. So we have never had a potential adversary like China. The Soviet Union, Russia, was military or ideological, China is investing in economic areas. They have $500 billion in intellectual property theft, and we are in a competition not just on a national security basis but on a technology basis. That's why national security now includes telecommunications, satellites, artificial intelligence, quantum computing. Each of these domains, we have got to make the kind of investments to stay ahead. I think we are starting that in a bipartisan way. We did the CHIPS bill to try to bring semiconductor manufacturing back, we have kicked Huawei out of our telecom systems. This week, I have a broad bipartisan bill that I am launching with my friend John Thune, the Republican lead, where we are going to say, in terms of foreign technology coming into America, we’ve got to have a systemic approach to make sure we can ban or prohibit it when necessary.

BREAM: Does that mean TikTok?

SEN. WARNER: That means TikTok is one of the potentials. Listen, you have 100 million Americans on TikTok, 90 minutes a day. Even you guys would like that kind of return, 90 minutes a day. They are taking data from Americans, not keeping it safe, but what worries me more with TikTok is that this could be a propaganda tool. The kind of videos you see would promote ideological issues. If you look at what TikTok shows to the Chinese kids, which is all about science and engineering, versus what our kids see, there’s a radical difference.

BREAM: We will watch that, because that's a bipartisan offering potentially this week. This past week we got information, it was revealed that both the Department of Energy and FBI believe that the origins of COVID were most likely a leak from the Wuhan Institute for Virology. This is something that early on this was called a conspiracy theory, you were racist if you talked about it. The Senate has actually unanimously passed a measure that would call on this administration to declassify information that we have about the origins. The White House won't say whether the president will veto it or not if it gets to his desk. Do Americans, worldwide, do people not have a right to see that information?

SEN. WARNER: Shannon, here is again an example of what we are dealing with, with the Communist Party in China. If this virus had originated virtually anywhere else, we would have had world scientists there. The Chinese Communist Party has been totally opaque about letting in outside scientists to figure this out. Now, you’ve still got some parts of the intelligence community that think it originated in a wet market, others saying that it could have gotten out from a lab, although I would say that one entity says it came from one lab in Wuhan, another said from another. At the end of the day, we’ve got to keep looking and we've got to make sure, in terms of future pandemics, that we can have access to the source of where these diseases originate a lot earlier on in the system. We’re three and a half years later, and we still don't have access to Wuhan.

BREAM: They're not going to cooperate with that, especially if they assess internally they were at fault. How do they pay for this? Now, billions probably trillions in damages and losses for people, millions and millions of lives. How do they pay?

SEN. WARNER: Well I think again, this is where we’ve got to have that united front of countries all around the world, that there has to be consequences. There has to be consequences potentially in terms of sanctions, it’s one of the reasons why, if China moves forward to support Russia in Ukraine, I can't understand some of my colleagues who are willing to say, “I don't really care about Ukraine, but I'm concerned about China.” Well, China and Russia, these authoritarian regimes, are linked, and we have to make sure Putin is not successful in Ukraine and that Xi doesn't further his expansion plans around Taiwan.

BREAM: Well, we know that even if they are not sending bullets over to Russia, they are buying up copious amounts of Russian oil. They are sending dual-use products that could actually be used on the battlefield. Xi doesn't seem very worried about the warnings from the U.S. at this point. They haven't even acknowledged or apologized for the balloon that went across America, we think capturing information as it went. Is Xi afraid of this administration? Do our warnings mean anything?

SEN. WARNER: Well I think Xi, as Putin thought, thought that with the invasion of the Ukraine, that the West would basically throw in the towel. The fact that we’ve not, the fact that you've got, for example, the German chancellor here just this past week, Germany’s dramatically increasing their defense budget. The fact that we've got nations like Finland and Sweden trying to join NATO. I think Putin made a major miscalculation and I do think Xi is watching the West stand up against Putin and is taking some lessons from that.

BREAM: You're just back from India, among many other countries you visited. They abstained from the U.N. vote that condemned Russia's invasion of Ukraine and called for an end to this. How important is it, a critical place like India, that they choose a side, and with the West?

SEN. WARNER: I think it’s time. Look, India is a great nation, as a matter of fact, I’m chair of the India Caucus, I'm a big supporter of India. India is now a major, major power. Fifth-largest economy in the world, and a place where remarkable things are happening. My message to the Indians has been, we understand that you have historic ties to Russia, and you still get a lot of your arms, but you cannot be a world leader, and attempt to be a moral world leader, without picking a side. And in this case, I think the younger Indians get that. Some of the older generation, I think we still have work to do.

BREAM: Okay, let's turn to continued funding for Ukraine. Another $400 million was announced on Friday. There are questions, there'll be more requests from Congress no doubt in the coming weeks about that. While there is strong support, here across the U.S. and across the West, the polls show that it's pulling back a little bit. Here's the reality from one analyst, “funding for the Ukrainian government has not demanded any tough bureaucratic trade-offs between funding priorities. It's not requiring balancing needs for Ukraine against domestic spending.” We’ve hit our ceiling, we have some kind of negotiation that’s got to happen very shortly. There are competing needs and they are very real, so where do we assess our financial commitment?

SEN. WARNER: Well Shannon, let's look at this. We have allocated $113 billion to Ukraine. We have actually only given them actually less than half of that, and on the military side, about $30 billion of roughly $60 billion. We’ve still got some runway to go there. But I think we need to keep that commitment, and the truth is the Russian army is being chewed up by the Ukrainians. We spent $800 billion a year on defense, in most of my lifetime to prevent Russia from exploiting that. We are having Ukrainians do that right now, in a sense, for us. I think we need to continue that. I think we will see the vast majority of members of Congress in both parties, there are some loudmouths on both sides that are pulling back, but if we are going to keep in this competition against Russia and China, Putin cannot be successful. At the same time, we have to realize as we look at China that national security is no longer simply tanks and trucks and guns and ships. It's also telecom and AI and quantum computing and advanced synthetic biology. We have to make investments in those domains, as well, which is both an economic investment and I believe, national security investment.

BREAM: Speaking of another national security interest, Iran, this report on their nuclear capabilities came out this week and it’s kind of getting lost in all the other foreign policy headlines, but basically what the International Atomic Energy Agency told us is that they have hit 84% as far as enriching uranium. They said that’s just short of the 90% that you would need for a weapon. Britain, France, and Germany say they want to censure Iran over this. The U.S. is kind of hesitant. The reporting is that the Biden administration doesn’t want to go there. Are we now softer on Iran's nuclear program than Europe?

SEN. WARNER: I do not believe that. We have made it explicitly clear – and I was just in Israel recently with a group of senators – that we agree with Israel. Iran cannot be a nuclear power. I think that has been our policy and it will continue to be our policy. There are two steps in this process, one is the enrichment issue, and I believe we will be tougher than the Europeans. We historically always have been –

BREAM: So then why are we against censuring, reportedly?

SEN. WARNER: We have already sanctioned and censured more Iranian companies by far than our European friends. But there is also a question around delivery systems. Again, I think we and our Israeli friends are following this very closely. Again, we will not allow Iran to become a nuclear power.

BREAM: I've got to hit this, Havana Syndrome. The reporting out this week, an assessment from several intelligence agencies that they don't think – that it's unlikely there was a foreign adversary carrying out these attacks, whatever they were, where our people, diplomats or Intel officers around the world in U.S. missions have suffered really debilitating symptoms from this. Senator Rubio, your colleague tweeted this: “The CIA took the investigation of Havana syndrome seriously. But when you read about the devastating injuries it's hard to accept that it was by AC units and loud cicadas. Something happened here and just because we don’t have all the answers doesn’t mean it didn’t happen.” Will you continue trying to pursue answers?

SEN. WARNER: Absolutely. First of all, the most important thing is anyone who got sick, whatever the source was, whether they are CIA, DoD, State Department officials, we owe them the world's best health care and I think we are providing that now. Initially, frankly, under the last administration, this whole issue was attempted to be swept under the rug. We are now making sure that health care is provided. I know how, particularly the CIA, how extensive the investigation has been. And I've made very clear to them, if they need to continue that investigation, if new facts come to light, they ought to pursue that. But at this moment in time, I know how thorough they have been, and they have not found the evidence that I think perhaps they thought they would have found. We've got to follow the facts. At the end of the day that's what we owe the members of this intel community, who protect our nation, and that means giving them the health care. If it ends up being some other source than what has been discovered so far, we have to pursue it.

BREAM: Senator, Chairman, thanks for coming back to Fox News Sunday.

###

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, issued a statement after the Department of Commerce released the first Notice of Funding Opportunity (NOFO) for CHIPS Act incentives, welcoming the announcement:

“The projects that will be made possible by the CHIPS Act will strengthen our national security and create good-paying manufacturing jobs here in the United States. With limited funding available, I urge the Department of Commerce to be strategic in selecting projects in order to ensure that funding advances U.S. economic and national security objectives.”

Nearly everything that has an “on” switch – from cars to phones to washing machines to ATMs to electric toothbrushes – contains a semiconductor, but just 12 percent of these ‘chips’ are currently made in America. The CHIPS and Science Act includes $52 billion in funding championed by Sen. Warner to manufacture chips here on American soil – a move that will increase economic and national security and help America compete against countries like China for the technology of the future.

Sen. Warner, co-chair of the Senate Cybersecurity Caucus and former technology entrepreneur, has long sounded the alarm about the importance of investing in domestic semiconductor manufacturing. Sen. Warner first introduced the Creating Helpful Incentives to Produce Semiconductors (CHIPS) for America Act in June 2020 along with Sen. John Cornyn (R-TX).

###

WASHINGTON – Senate Select Committee on Intelligence Chairman Mark R. Warner (D-VA) sent a letter to Meta CEO Mark Zuckerberg, pressing the company on its efforts to combat the spread of misinformation, hate speech, and incitement content around the world. Reporting indicates that Facebook devotes 84 percent of its misinformation budget to the United States, where only ten percent of its users reside.

“In its pursuit of growth and dominance in new markets, I worry that Meta has not adequately invested in the technical, organizational, and human safeguards necessary to ensuring that your platform is not used to incite violence and real-world harm,” wrote Sen. Warner, pointing to evidence, acknowledged by Meta, that the platform was used to foment genocide in Myanmar. “I am concerned that Meta is not taking seriously the responsibility it has to ensure that Facebook and its other platforms do not inspire similar events in other nations around the world.”

In his letter, Sen. Warner noted that Facebook supported more than 110 languages on its platform as of October 2021, and users and advertisers posted on the platform in over 160 languages. However, Facebook’s community standards, the policies that outline what is and isn’t allowed on the platform, were available in less than half of the languages that Facebook offered at that time. Facebook has previously said that it uses artificial intelligence to proactively identify hate speech in more than 50 languages and that it has native speakers reviewing content in more than 70 languages.

“Setting aside the efficacy of Facebook’s AI solutions to detect hate speech and violent rhetoric in all of the languages that it offers, the fact that Facebook does not employ native speakers in dozens of languages officially welcomed on its platform is troubling – indicating that Facebook has prioritized growth over the safety of its users and the communities Facebook operates in,” Sen. Warner wrote, citing documents provided by Facebook whistleblower Frances Haugen. “Of particular concern is the lack of resources dedicated to what Facebook itself calls ‘at-risk countries’ – nations that are especially vulnerable to misinformation, hate speech, and incitement to violence.”

Warner noted that in Ethiopia, Facebook reportedly did not have automated systems capable of flagging harmful posts in Amharic and Oromo, the country’s two most spoken languages.  A March 2021 internal report said that armed groups within Ethiopia were using Facebook to incite violence against ethnic minorities, recruit, and fundraise.

“In the wake of Facebook’s role in the genocide of the Rohingya in Myanmar – where UN investigators explicitly described Facebook as playing a ‘determining role’ in the atrocities  – one would imagine more resources would be dedicated to places like Ethiopia. Even in languages where Meta does have experience, the systems in place appear woefully inadequate at preventing violent hate speech from appearing on Facebook,” observed Sen. Warner, citing an investigation conducted by the non-profit Global Witness, which was able to post ads in Swahili and English ahead of the 2022 general elections in Kenya that violated Facebook’s stated Community Standards for hate speech and ethnic-based calls to violence.

“Unfortunately, these are not isolated cases – or new revelations. For nearly six years, Facebook’s role in fueling, amplifying, and accelerating racial, religious, and ethnic violence has been documented across the globe – including in Bangladesh, Indonesia, South Sudan, and Sri Lanka. In other developing countries – such as Cambodia, Vietnam and the Philippines  – Facebook has reportedly courted autocratic parties and leaders in order to ensure its continued penetration of those markets,” wrote Sen. Warner. “Across many of these cases, Facebook’s global success – an outgrowth of its business strategy to cultivate high levels of global dependence through efforts like Facebook Free Basics and Internet.org – has heightened the effects of its misuse. In many developing countries, Facebook, in effect, constitutes the internet for millions of people, and serves as the infrastructure for significant social, political, and economic activity.”

“Ultimately, the destabilizing impacts of your platform on fragile societies across the globe poses a set of regional – if not global – security risks,” concluded Warner, posing a series of questions to Zuckerberg about the company’s investments in foreign language content moderation and requesting a response by March 15, 2023.

A full copy of the letter is available here.

### 

WASHINGTON – Senate Select Committee on Intelligence Chairman Mark R. Warner (D-VA) and Vice Chairman Marco Rubio (R-FL) wrote to the Biden administration to request that it expand the use of existing tools and authorities at the Departments of Treasury and Commerce to prevent China’s military industrial complex from benefiting from U.S. technology, talent, and investments.

In a pair of letters, the Senators expressed concern with the flow of U.S. innovation, talent, and capital into the People’s Republic of China (PRC), which seeks to exert control over global supply chains, achieve technological superiority, and rise as the dominant economic and military power in the world. They also stress the need to utilize the authorities at the government’s disposal to protect U.S. interests and ensure that American businesses, investors, and consumers are not inadvertently advancing China’s authoritarian interests or supporting its ongoing genocide in Xinjiang and human rights abuses in Tibet and Hong Kong.

In their letter to Treasury Secretary Janet Yellen, the Senators wrote, “It is widely known that the PRC’s Military-Civil Fusion (MCF) program targets technological advancements in the U.S., as well as university and research partnerships with the U.S., for the PRC’s military development.  U.S. technology, talent, and capital continue to contribute—through both lawful and unlawful means, including theft—to the PRC’s development of critical military-use industries, technologies, and related supply chains. The breadth of the MCF program’s ambitions and reach creates dangerous vulnerabilities for U.S. national and economic security as well as undermines respect for democratic values globally.”

The Senators also posed a number of questions for Sec. Yellen regarding Treasury’s internal Specially Designated Nationals and Blocked Persons (SDN) lists, which do not include a number of entities and individuals who have been identified by the U.S. Government as posing national security risks or human rights concerns.  

In their letter to Commerce Secretary Gina Raimondo, the Senators wrote, “Despite recent restrictions on the export of sensitive technologies critical to U.S. national security, we remain deeply concerned that American technology, investment, and talent continue to support the People’s Republic of China’s (PRC’s) military industrial complex, intelligence and security apparatus, its ongoing genocide, and other PRC efforts to displace United States economic leadership. As such, we urge the Department of Commerce to immediately use its authorities to more broadly restrict these activities.”

The Senators also requested answers from Sec. Raimondo regarding America’s most critical high-technology sectors and the Department’s ability and authority to evaluate companies’ reliance on China and assess the flow of U.S. innovation to PRC entities.

A copy of the letter to the Department of Treasury is available here. A copy of the letter to the Department of Commerce is available here.  

### 

WASHINGTON – Last week, U.S. Sens. Mark R. Warner (D-VA) and Rick Scott (R-FL) introduced the American Security Drone Act of 2023, legislation to prohibit the purchase of drones from countries identified as national security threats, such as China.  

“I am a staunch supporter of unmanned systems and drone investment here in the United States, and I wholeheartedly believe that we must continue to invest in domestic production of drones,” said Sen. Warner. “But the purchase of drones from foreign countries, especially those that have been deemed a national security threat, is dangerous. I am glad to introduce legislation that takes logical steps to protect our data from foreign adversaries and meanwhile supports American manufacturers.” 

“I’ve been clear for years: the United States should never spend taxpayer dollars on anything made in Communist China, especially drones which pose a significant threat to our national security,” said Sen. Scott. “Xi and the Communist Party of China are on a quest for global domination and whether it’s with spy balloons, TikTok or drones, they will stop at nothing to infiltrate our society and steal our data. I’m proud to join my colleagues to reintroduce the bipartisan American Security Drone Act to STOP the U.S. from buying drones manufactured in nations identified as national security threats. This important bill is critical to our national security and should be passed by the Senate, House and signed into law IMMEDIATELY.”

Specifically, the American Security Drone Act:

  • Prohibits federal departments and agencies from procuring certain foreign commercial off-the-shelf drones or covered unmanned aircraft systems manufactured or assembled in countries identified as national security threats, and provides a timeline to end current use of these drones.
  • Prohibits the use of federal funds awarded through certain contracts, grants, or cooperative agreements to state or local governments from being used to purchase foreign commercial off-the-shelf drones or covered unmanned aircraft systems manufactured or assembled in a country identified as a national security threat.
  • Requires the Comptroller General of the United States to submit a report to Congress detailing the number of foreign commercial off-the-shelf drones and covered unmanned aircraft systems procured by federal departments and agencies from countries identified as national security threats.

In addition to Sens. Warner and Scott, the legislation is cosponsored by Sens. Marco Rubio (R-FL), Richard Blumenthal (D-CT), Marsha Blackburn (R-TN), Chris Murphy (D-CT), Tom Cotton (R-AR), and Josh Hawley (R-MO).

Sen. Warner is a strong supporter of the domestic production of unmanned systems, including driverless cars, drones, and unmanned maritime vehicles. Earlier this month, Sen. Warner introduced the Increasing Competitiveness for American Drones Act, legislation that will clear the way for drones to be used for commercial transport of goods across the country.

Full text of the legislation is available here.

###

WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA) and John Thune (R-SD) introduced the Increasing Competitiveness for American Drones Act of 2023, comprehensive legislation to streamline the approvals process for beyond visual line of sight (BVLOS) drone flights and clear the way for drones to be used for commercial transport of goods across the country – making sure that the U.S. remains competitive globally in a growing industry increasingly dominated by competitors like China.

Currently, unmanned aerial system (UAS) operators must seek a waiver from the Federal Aviation Administration (FAA) for each aircraft and each BVLOS operation, but the FAA has not laid out a consistent set of criteria for granting waivers, making the process for approving drone flights slow and unpredictable. The bipartisan Increasing Competitiveness for American Drones Act would require the FAA to issue a new rule allowing BVLOS operations under certain circumstances.

“Drones have the ability to transform so much of the way we do business. Beyond package delivery, drones can change the way we grow crops, manage disasters, maintain our infrastructure, and administer medicine,” said Sen. Warner. “If we want the drones of tomorrow to be manufactured in the U.S. and not in China, we have to start working today to integrate them into our airspace. Revamping the process for approving commercial drone flight will catapult the United States into the 21st century, allowing us to finally start competing at the global level as technological advancements make drone usage ever more common.”

“Drones have the potential to transform the economy, with innovative opportunities for transportation and agriculture that would benefit rural states like South Dakota,” said Sen. Thune. “I’m proud to support this legislation that provides a clear framework for the approval of complex drone operations, furthering the integration of these aircraft into the National Airspace System.”

Specifically, the bill requires the FAA to establish a “risk methodology,” which will be used to determine what level of regulatory scrutiny is required:  

  • Operators of small UAS under 55 lbs simply have to declare that they conducted a risk assessment and meet the standard, subject to audit compliance by the FAA.
  • Operators of UAS between 55 lbs and 1,320 lbs must submit materials based on the risk assessment to the FAA to seek a “Special Airworthiness Certificate.” UAS in this category may be limited to operating no more than 400 feet above ground level.
  • Finally, operators of UAS over 1,320 lbs must undergo the full “type certification” process—the standard approval process for crewed aircraft.

In addition, the Increasing Competitiveness for American Drones Act would create the position of “Associate Administrator of UAS Integration” as well as a UAS Certification Unit that would have the sole authority to issue all rulemakings, certifications, and waivers. This new organizational structure would create a central rulemaking body for UAS, allowing for a more uniform process.

“Commercial drone operations provide valuable services to the American public and workforce – but significant regulatory hurdles are hampering these benefits from reaching their fullest potential and jeopardize U.S. global leadership in aviation. The regulatory challenges are not driven by safety, they are hampered by bureaucracy. We accordingly have urged Congress to prioritize drone integration, and we are grateful for the support of Senators Warner and Thune in this cause. AUVSI is proud to endorse this legislation, and we urge Congress to include it as part of their critical work this year to pass a multi-year FAA Reauthorization,”  Michael Robbins, Chief Advocacy Officer of the Association for Uncrewed Vehicle Systems International (AUVSI), said.

“The Coalition is grateful for the leadership of Senators Thune and Warner, and this bill comes at a pivotal time for the drone industry. Since 2012, Congress has worked to progress the law and regulation around commercial drone use, but now, in 2023, this progress has slowed as regulations and approvals continue to be delayed. With reauthorization of Federal Aviation Administration (FAA) programs required by September 30, this year is a critical time for the drone industry,” said The Small UAV Coalition.

“The Commercial Drone Alliance applauds the introduction of the Increasing Competitiveness for American Drones Act of 2023, and we commend and thank Senator Warner and Senator Thune for their leadership on these important issues. While the U.S. has lagged behind other countries in developing and deploying uncrewed aircraft systems (UAS), this legislation provides the U.S. with the opportunity to reestablish its prominence as a global leader in advanced aviation and compete more effectively in the global economy,” said The Commercial Drone Alliance.

Sen. Warner has been a strong supporter of research and investment in unmanned systems, including driverless cars, drones, and unmanned maritime vehicles. He previously introduced legislation designed to advance the development of UAS and build on the FAA’s efforts to safely integrate them into the National Airspace System. Virginia is home to one of seven FAA-approved sites across the country where researchers are testing the safest and most effective ways to incorporate UAS into the existing airspace – including the first-ever package delivery by drone to take place in the United States. Last October, Sen. Warner visited the headquarters of DroneUp, a leader in independent drone delivery contracting, in Hampton Roads, Virginia.

Full text of the legislation is available here.

###

WASHINGTON – Today, Senate Select Committee on Intelligence Chairman Mark Warner (D-VA) and Vice Chairman Marco Rubio (R-FL) wrote to Meta CEO Mark Zuckerberg, questioning the company about recently released documents revealing that the company knew, as early as 2018, that hundreds of thousands of developers in what Facebook classified as “high-risk jurisdictions,” including the People’s Republic of China (PRC) and Russia, had access to user data that could have been used to facilitate espionage. The documents were released as part of ongoing litigation against the company related to its lax handling of personal data after revelations regarding Cambridge Analytica.

Under pressure from Congress, Facebook revealed in 2018 that it provided access to key application programming interfaces (APIs) to device-makers based in the PRC, including Huawei, OPPO, TCL, and others. In the wake of those disclosures, Facebook met repeatedly with the staffs of both senators and the Senate Intelligence Committee to discuss access to this data and what controls Facebook was putting in place to protect user data in the future.

Wrote the bipartisan leaders of the Senate Intelligence Committee in today’s letter, “Given those discussions, we were startled to learn recently, as a result of this ongoing litigation and discovery, that Facebook had concluded that a much wider range of foreign-based developers, in addition to the PRC-based device-makers, also had access to this data. According to at least one internal document, this included nearly 90,000 separate developers in the People’s Republic of China (PRC), which is especially remarkable given that Facebook has never been permitted to operate in the PRC.  The document also refers to discovery of more than 42,000 developers in Russia, and thousands of developers in other ‘high-risk jurisdictions,’ including Iran and North Korea, that had access to this user information.”

The newly available documents reveal that Facebook internally acknowledged in 2018 that this access could be used for espionage purposes.

“As the Chairman and Vice Chairman of the Senate Select Committee on Intelligence, we have grave concerns about the extent to which this access could have enabled foreign intelligence service activity, ranging from foreign malign influence to targeting and counter-intelligence activity,” wrote Warner and Rubio, posing a series of questions to the company about the implications of the access, including:

1) The unsealed document notes that Facebook conducted separate reviews on developers based in the PRC and Russia “given the risk associated with those countries.” What additional reviews were conducted on these developers? When was this additional review completed and what were the primary conclusions? What percentage of the developers located in the PRC and Russia was Facebook able to definitively identify? What communications, if any, has Facebook had with these developers since its initial identification? What criteria does Facebook use to evaluate the “risk associated with” operation in the PRC and Russia?

2) For the developers identified as being located within the PRC and Russia, please provide a full list of the types of information to which these developers had access, as well as the timeframes associated with such access.

3) Does Facebook have comprehensive logs on the frequency with which developers from high-risk jurisdictions accessed its APIs and the forms of data accessed?

4) Please provide an estimate of the number of discrete Facebook users in the United States whose data was shared with a developer located in each country identified as a “high-risk jurisdiction” (broken out by country).

5) The internal document indicates that Facebook would establish a framework to identify the “developers and apps determined to be most potentially risky[.]” How did Facebook establish this rubric? How many developers and apps based in the PRC and Russia met this threshold? How many developers and apps in other high-risk jurisdictions met this threshold? What were the specific characteristics of these developers that gave rise to this determination? Did Facebook identify any developers as too risky to safely operate with? If so, which?

6) The internal document references your public commitment to “conduct a full audit of any app with suspicious activity.” How does Facebook characterize “suspicious activity” and how many apps triggered this full audit process?

7) Does Facebook have any indication that any developers’ access enabled coordinated inauthentic activity, targeting activity, or any other malign behavior by foreign governments?

8) Does Facebook have any indication that developers’ access enabled malicious advertising or other fraudulent activity by foreign actors, as revealed in public reporting?

The full text of today’s letter is available here and below.

Dear Mr. Zuckerberg,

We write you with regard to recently unsealed documents in connection with pending litigation your company, Meta, is engaged in. It appears from these documents that Facebook has known, since at least September 2018, that hundreds of thousands of developers in countries Facebook characterized as “high-risk,” including the People’s Republic of China (PRC), had access to significant amounts of sensitive user data. As leaders of the Senate Intelligence Committee, we write today with a number of questions regarding these documents and the extent to which developers in these countries were granted access to American user data. 

In 2018, the New York Times revealed that Facebook had provided privileged access to key application programming interfaces (APIs) to Huawei, OPPO, TCL, and other device-makers based in the PRC.  Under the terms of agreements with Facebook dating back to at least 2010, these device manufacturers were permitted to access a wealth of information on Facebook’s users, including profile data, user IDs, photos, as well as contact information and even private messages.  In the wake of these revelations, as well as broader revelations concerning Facebook’s lax data security policies related to third-party applications, our staffs held numerous meetings with representatives from your company, including with senior executives, to discuss who had access to this data and what controls Facebook was putting in place to protect user data in the future.

Given those discussions, we were startled to learn recently, as a result of this ongoing litigation and discovery, that Facebook had concluded that a much wider range of foreign-based developers, in addition to the PRC-based device-makers, also had access to this data. According to at least one internal document, this included nearly 90,000 separate developers in the People’s Republic of China (PRC), which is especially remarkable given that Facebook has never been permitted to operate in the PRC.  The document also refers to discovery of more than 42,000 developers in Russia, and thousands of developers in other “high-risk jurisdictions,” including Iran and North Korea, that had access to this user information.

As Facebook’s own internal materials note, those jurisdictions “may be governed by potentially risky data storage and disclosure rules or be more likely to house malicious actors,” including “states known to collect data for intelligence targeting and cyber espionage.”  As the Chairman and Vice Chairman of the Senate Select Committee on Intelligence, we have grave concerns about the extent to which this access could have enabled foreign intelligence service activity, ranging from foreign malign influence to targeting and counter-intelligence activity. 

In light of these revelations, we request answers to the following questions on the findings of Facebook’s internal investigation:

1) The unsealed document notes that Facebook conducted separate reviews on developers based in the PRC and Russia “given the risk associated with those countries.”

  • What additional reviews were conducted on these developers?
  • When was this additional review completed and what were the primary conclusions?
  • What percentage of the developers located in the PRC and Russia was Facebook able to definitively identify?
  • What communications, if any, has Facebook had with these developers since its initial identification?
  • What criteria does Facebook use to evaluate the “risk associated with” operation in the PRC and Russia?

2) For the developers identified as being located within the PRC and Russia, please provide a full list of the types of information to which these developers had access, as well as the timeframes associated with such access.

3) Does Facebook have comprehensive logs on the frequency with which developers from high-risk jurisdictions accessed its APIs and the forms of data accessed?

4) Please provide an estimate of the number of discrete Facebook users in the United States whose data was shared with a developer located in each country identified as a “high-risk jurisdiction” (broken out by country).

5) The internal document indicates that Facebook would establish a framework to identify the “developers and apps determined to be most potentially risky[.]”

  • How did Facebook establish this rubric?
  • How many developers and apps based in the PRC and Russia met this threshold? How many developers and apps in other high-risk jurisdictions met this threshold?
  • What were the specific characteristics of these developers that gave rise to this determination?
  • Did Facebook identify any developers as too risky to safely operate with? If so, which?

6) The internal document references your public commitment to “conduct a full audit of any app with suspicious activity.”

  • How does Facebook characterize “suspicious activity” and how many apps triggered this full audit process? 

7) Does Facebook have any indication that any developers’ access enabled coordinated inauthentic activity, targeting activity, or any other malign behavior by foreign governments?

8) Does Facebook have any indication that developers’ access enabled malicious advertising or other fraudulent activity by foreign actors, as revealed in public reporting? 

Thank you for your prompt attention.

 

###

WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) and Rep. Elissa Slotkin (D-MI) wrote to Sundar Pichai – the CEO of Alphabet Inc. and its subsidiary Google – urging him to curb deceptive advertisements and ensure that users receive accurate information when searching for abortion services on the platform. This letter comes on the heels of an investigation that reveals how Google regularly fails to apply disclaimer labels to misleading ads by anti-abortion clinics. It also follows a successful effort by Sen. Warner and Rep. Slotkin, who previously urged Google to take action to prevent misleading search results for anti-abortion clinics. That push ultimately led Google to clearly label facilities that provide abortions and prevent users from being misled by fake clinics or crisis pregnancy centers.

“We are encouraged by and appreciative of the recent steps Google has taken to protect those searching for abortion services from being mistakenly directed to clinics that do not offer comprehensive reproductive health services. However, we ask you to address issues with misrepresentation in advertising on Google’s site and take a more expansive, proactive approach to addressing violations of Google’s stated policy,” wrote the lawmakers.

“According to an investigation by Bloomberg News and the Center for Countering Digital Hate (CCDH), depending on the search term used, Google does not consistently apply disclaimer labels to ads by anti-abortion clinics. CCDH recently conducted searches that returned 132 misleading ads for such clinics that lacked disclaimers. Specifically, researchers found that queries for terms such as ‘Plan C pills,’ ‘pregnancy help,’ and ‘Planned Parenthood’ often returned results with ads that are not labeled accurately,” they continued. “Furthermore, the Tech Transparency Project found that some ads from ‘crisis pregnancy centers,’ even when properly labeled, included deliberately deceptive verbiage aimed at tricking users into believing that they offer abortion services. For example, ads for ‘crisis pregnancy centers’ were found to contain language such as ‘Free Abortion Pill’ and ‘First Trimester Abortion.’ Such deceptive advertising likely reduces the effectiveness of labels and may lead to detrimental health outcomes for users who receive delayed treatment.”

In addition to urging Google to rectify these issues, the lawmakers also requested answers to the following questions:

 

  1. What specific search terms does Google consider related to “getting an abortion”?
  2. What criteria does Google use to determine whether specific queries are related to “getting an abortion”?
  3. What additional steps will Google take to identify and remove ads with misleading verbiage that violates Google’s policies against misrepresentation?

A copy of the letter is available here and full text of the letter can be found below:

Dear Mr. Pichai,

We write today regarding the responsibility that Google has to ensure users receive accurate information when searching for abortion services on your platform. We are encouraged by and appreciative of the recent steps Google has taken to protect those searching for abortion services from being mistakenly directed to clinics that do not offer comprehensive reproductive health services. However, we ask you to address issues with misrepresentation in advertising on Google’s site and take a more expansive, proactive approach to addressing violations of Google’s stated policy.

On June 17, 2022, we wrote to you, along with 19 other senators and representatives, regarding research that showed Google results for searches such as “abortion services near me” often included links to clinics that are anti-abortion, sometimes called “crisis pregnancy centers.”   We were extremely concerned with this practice of directing users toward “crisis pregnancy centers” without any disclaimer indicating those businesses do not provide abortions.

We were pleased to see the changes you have made in response to our letter, such as the new refinement tool that allows users to only see facilities verified to offer abortion services, while still preserving the option to see a broader range of search results.  The steps you have taken will help prevent users from mistakenly being sent to organizations that attempt to deceive individuals into thinking they provide comprehensive health services and instead, regularly provide users with disinformation regarding the risks of abortion.  As many states are increasingly narrowing the window between getting a positive pregnancy test and when you can terminate a pregnancy, every day counts.

But we find ourselves again asking that Google live up to its promises with regards to preventing misleading ads on its platform. According to an investigation by Bloomberg News and the Center for Countering Digital Hate (CCDH), depending on the search term used, Google does not consistently apply disclaimer labels to ads by anti-abortion clinics.  CCDH recently conducted searches that returned 132 misleading ads for such clinics that lacked disclaimers. Specifically, researchers found that queries for terms such as “Plan C pills,” “pregnancy help,” and “Planned Parenthood” often returned results with ads that are not labeled accurately.  We believe Google’s failure to apply disclaimer labels to these common searches appears to be a violation of your June 2019 policy that requires “advertisers who want to run ads using keywords related to getting an abortion” to go through a verification process and be labeled as a provider that “Provides abortions” or “Does not provide abortions.”

Furthermore, the Tech Transparency Project found that some ads from “crisis pregnancy centers,” even when properly labeled, included deliberately deceptive verbiage aimed at tricking users into believing that they offer abortion services.  For example, ads for “crisis pregnancy centers” were found to contain language such as “Free Abortion Pill” and “First Trimester Abortion.” Such deceptive advertising likely reduces the effectiveness of labels and may lead to detrimental health outcomes for users who receive delayed treatment. These ads appear to violate Google’s policy on misrepresentation, which prohibits ads that “deceive users.”  Your responsiveness to our first letter gives us hope that you are willing to see this issue through. We, therefore, would appreciate answers to the following questions:

  1. What specific search terms does Google consider related to “getting an abortion”?
  2. What criteria does Google use to determine whether specific queries are related to “getting an abortion”?
  3. What additional steps will Google take to identify and remove ads with misleading verbiage that violates Google’s policies against misrepresentation?

We urge you to take proactive action to rectify these and any additional issues surrounding misleading ads, and help ensure users receive search results that accurately address their queries and are relevant to their intentions.

Thanks for your consideration, and we look forward to your timely response. 

### 

WASHINGTON – Today, U.S. Sens. Mark R. Warner and Tim Kaine (both D-VA) announced $76,530,000 in federal funding for the Thomas Jefferson National Accelerator Facility, also known as Jefferson Lab, in Newport News to support multiple projects that are critical to ensuring the U.S. remains a leader in science and technology. The funding was made possible by the Inflation Reduction Act, legislation Sens. Warner and Kaine helped pass in August to lower costs for Virginians and build a strong foundation for future national security and economic growth, in part by accelerating scientific programs and national laboratory infrastructure projects.

“This funding is a powerful example of how the Inflation Reduction Act, which we proudly helped pass earlier this year, will accelerate the development of key technologies,” said the Senators. “We’re glad Jefferson Lab’s research programs and infrastructure projects are receiving this support and look forward to seeing Virginians at the lab continue to lead the way in technological innovation.”

This funding will help make critical laboratory upgrades and support Jefferson Lab’s cutting-edge work in various fields, including projects that will help increase our understanding of the fundamental building blocks and forces at work in our universe—information that can play a key role in the development of an array of technologies, including those with clean energy and medical implications. It is part of $1.5 billion from the Inflation Reduction Act for national laboratories to research and develop new technologies to help the U.S. meet its energy, climate, and security needs.

Sens. Warner and Kaine have consistently advocated for funding for Jefferson Lab and its programs. 

###

WASHINGTON – As the Biden administration works to establish two crucial semiconductor initiatives authorized by the CHIPS and Science Act, U.S. Sens. Mark R. Warner (D-VA), John Cornyn (R-TX), and Mark Kelly (D-AZ) are leading eight of their colleagues in urging the U.S. Department of Commerce to take full advantage of the contributions, assets, and expertise available in states nationwide.

In a letter to Commerce Secretary Gina Raimondo, the Senators advocate for a decentralized “hub-and-spoke” model for the National Semiconductor Technology Center (NSTC) and the National Advanced Packaging Manufacturing Program (NAPMP). This model would establish various centers of excellence around the country, as opposed to a single centralized facility that is limited to the resources and strengths of a single state or region.

“Allowing the NSTC and NAPMP to draw upon experts, institutions, entrepreneurs, and private-sector partners spread across the country would best position these programs to fulfill their missions of driving semiconductor and advanced packaging research forward, coordinating and scaling up the ongoing workforce development efforts, promoting geographic diversity, and ensuring long-term U.S. competitiveness in this critical technology sector,” wrote the lawmakers.

They continued, “Such a model would allow them to draw upon the strengths of experts, research facilities, and private-sector partnerships and consortia from across the country. This model would consist of central research facilities with centers of excellence in various locations across the country where there is particular expertise in memory, logic, packaging, testing, or other elements of the semiconductor ecosystem.”

In their letter, the Senators also note that this approach was recommended by the President’s Council of Advisors on Science and Technology in a report to President Biden. This report stated, “the Secretary of Commerce should ensure the NSTC founding charter includes establishing prototyping capabilities in a geographically distributed model encompassing up to six centers of excellence (COEs) aligned around major technical thrusts.” 

The NSTC and NAPMP – designed to accelerate U.S. semiconductor production and advance research and development – were championed by Sens. Warner, Cornyn, and Kelly, who authored the CHIPS law signed by President Biden in August. In addition to Sens. Warner, Cornyn and Kelly, the letter was signed by Sens. Tim Kaine (D-VA), Rob Portman (R-OH), Sherrod Brown (D-OH), Amy Klobuchar (D-MN), Kyrsten Sinema (D-AZ), Ben Ray Luján (D-NM), Ron Wyden (D-OR), and Dianne Feinstein (D-CA).

A copy of the letter can be found here and below.

October 14, 2022

Dear Secretary Raimondo,

As the Department of Commerce begins implementing the CHIPS and Science Act, we respectfully urge your department to consider using a decentralized, so-called “hub-and-spoke” model as the basis for the National Semiconductor Technology Center (NSTC) and the National Advanced Packaging Manufacturing Program (NAPMP). Allowing the NSTC and NAPMP to draw upon experts, institutions, entrepreneurs, and private-sector partners spread across the country would best position these programs to fulfill their missions of driving semiconductor and advanced packaging research forward, coordinating and scaling up the ongoing workforce development efforts, promoting geographic diversity, and ensuring long-term U.S. competitiveness in this critical technology sector.

When Congress passed the Creating Helpful Incentives to Produce Semiconductors for America Act in January 2021 and provided $11 billion in funding through the recently passed CHIPS and Science Act, it recognized the need for increased investment in research and development (R&D). This R&D will include prototyping of advanced semiconductor tools, technology, and packaging capabilities to advance both U.S. economic competitiveness and the security of our domestic supply chain.

The NSTC was established as a way to drive this research forward, bringing together the Department of Commerce, Department of Defense, Department of Energy, the National Science Foundation, and the private sector in a public-private consortium. Congress created the NAPMP to “strengthen semiconductor advanced test, assembly, and packaging capability in the domestic ecosystem” in coordination with the NSTC.

Incredibly diverse knowledge and expertise will be required to ensure that the NSTC and NAPMP are successful. We believe that it would be in the best interests of the long-term success of these programs if the Department of Commerce were to embrace a “hub-and-spoke” model for these programs. In fact, the President’s Council of Advisors on Science and Technology recommended such an approach in its report to President Biden titled, “Revitalizing the U.S. Semiconductor Ecosystem.” The report states, “The Secretary of Commerce should ensure the NSTC founding charter includes establishing prototyping capabilities in a geographically distributed model encompassing up to six centers of excellence (COEs) aligned around major technical thrusts.” Such a model would allow the NSTC and NAPMP to draw upon the strengths of experts, research facilities, and private-sector partnerships and consortia from across the country. This model would consist of central research facilities with centers of excellence in various locations across the country where there is particular expertise in memory, logic, packaging, testing, or other elements of the semiconductor ecosystem. Doing so would ensure that a broader range of expertise is captured by the NSTC and NAPMP and that entrepreneurs and researchers across the country can take advantage of these programs to drive America’s semiconductor ecosystem forward.

Thank you for your consideration and for all of the work that you and your team are doing to implement this important legislation.

Sincerely,

###