WASHINGTON – Today, Democratic senators released a framework for market structure legislation. The group of senators includes U.S. Sens. Mark R. Warner (D-VA), Ruben Gallego (D-AZ), Kirsten Gillibrand (D-NY), Cory Booker (D-NJ), Catherine Cortez Masto (D-NV), Ben Ray Luján (D-NM), John Hickenlooper (D-CO), Raphael Warnock (D-GA), Adam Schiff (D-CA), Andy Kim (D-NJ), Lisa Blunt Rochester (D-DE), and Angela Alsobrooks (D-MD).
“The digital asset sector has grown to a $4 trillion global market. We owe it to the millions of Americans who participate in this market to create clear rules of the road that protect consumers and safeguard our markets. We also must ensure that digital assets are not used to finance illicit activities or to line the pockets of politicians and their families,” the senators said.
“Today, we’re releasing a framework for a market structure bill that would regulate digital asset markets in the U.S., ensure responsible innovation, and create a safe and level playing field for all market participants. The framework is a substantive road map to guide what we hope will be robust and fruitful bipartisan negotiations and ultimately, a bipartisan product.
“Achieving a strong, bipartisan outcome will require time and cannot be rushed. We look forward to working on this with our Republican colleagues.”
The framework contains seven key pillars that any market structure legislation should include:
- Closing the Gap in the Spot Market for Non-Security Digital Assets;
- Clarifying the Legal Status of Digital Assets and Regulator Jurisdiction;
- Incorporating Digital Asset Issuers into the Regulatory Framework;
- Incorporating Digital Asset Platforms into the Regulatory Framework;
- Preventing Illicit Finance;
- Preventing Corruption and Abuse; and
- Ensuring Fair, Effective Regulation.
The full framework can be viewed here.
###
WASHINGTON, D.C. – Today, U.S. Senators Mark R. Warner and Tim Kaine (both D-VA) released the following statement regarding this week’s National Transportation Safety Board (NTSB) hearing about the fatal January 29 crash near DCA. Yesterday, Potomac radar facility manager Bryan Lehman testified that he voiced concern that there were too many flights departing and arriving at the airport, but was then rebuffed by Federal Aviation Administration (FAA) managers who cited ongoing congressional work on the FAA Reauthorization Act of 2024, which ultimately—over the vocal and repeated objections of Warner and Kaine—added even more flight slots to the airport:
“We owe it to the loved ones of those we lost in the fatal mid-air collision near DCA on January 29 that we make sure another tragedy like this never happens again. In the hours, days, and months since the accident, we have engaged with the families that were directly impacted, the National Transportation Safety Board (NTSB), Federal Aviation Administration (FAA) and the Army. We have been following the ongoing collision investigation closely, and were struck by testimony during yesterday’s NTSB hearing from an FAA official who voiced concerns over congestion at the airport years ago and proposed reducing traffic, but was rebuffed by his managers because of conversations in Congress at the time about adding even more flights into and out of DCA. While we recognize that various factors contributed to the crash and continue to work with our colleagues to address them, this testimony underscores a clear takeaway: Congress must act to reduce dangerous congestion by removing flights into and out of DCA.”
Warner and Kaine have been closely involved with the investigation of the January 29 collision, meeting with first responders and offering condolences to the families and loved ones of the 67 lives lost immediately following the tragedy. The senators also saw through the passage of legislation to remember the victims of the crash. Warner and Kaine also requested answers from the FAA on its plans to protect the flying public in the wake of the January 29 collision. In March of this year, the senators responded to the preliminary National Transportation Safety Board (NTSB) report on the crash. In June, the senators introduced the Safe Operation and Shared Airspace Act of 2025 to strengthen aviation safety. Last month, Kaine successfully secured a provision in the Senate Armed Services Committee’s draft National Defense Authorization Act that would require all Department of Defense aircraft that operate near commercial airports to be equipped with broadcast positioning technology.
###
WASHINGTON — U.S. Sens. Mark R. Warner (D-VA) and John Thune (R-SD) today reintroduced the Equitable Community Access to Pharmacist Services (ECAPS) Act, bipartisan legislation that would ensure seniors can continue to access important clinical services from their pharmacist. The bill would allow Medicare to reimburse for certain pharmacist-administered tests, treatments, and vaccinations for illnesses like influenza, respiratory syncytial virus (RSV), and strep throat, in accordance with state scope-of-practice laws.
“Seniors across South Dakota rely on the care and support they receive from their community pharmacists,” said Thune. “I am proud to lead this commonsense legislation that would allow these services and other important treatments to remain a reliable option for seniors, particularly in our rural communities.”
“During the pandemic, we saw firsthand how pharmacists stepped up to meet urgent health care needs, especially in underserved and rural communities,” said Warner. “This bill builds on that progress by making sure seniors can continue to count on their local pharmacists for routine tests, vaccines, and treatments for common illnesses like flu and COVID. This is a practical step to improve access to care, reduce the burden on hospitals and clinics, and make our health system work better for seniors.”
“In rural states like South Dakota, pharmacists are often the most accessible – and sometimes the only – health care provider available to patients,” said Amanda Bacon, executive director of the South Dakota Pharmacists Association. “The ECAPS Act recognizes the vital role pharmacists play on the front lines of care, especially in areas where access is limited by geography, provider shortages, or both. The South Dakota Pharmacists Association strongly supports this legislation and the critical role it plays in strengthening our rural health care system. The ECAPS Act helps keep care close to home – and in South Dakota, that makes all the difference.”
“We applaud Senator Warner and Senator Thune for championing the reintroduction of the ECAPS Act,” said Jamie Fisher, executive director of the Virginia Pharmacy Association. “This bipartisan legislation recognizes what patients across Virginia already know – pharmacists are vital, trusted, and accessible members of the health care team. By ensuring Medicare beneficiaries can receive essential services like flu, COVID-19, RSV, and strep testing and treatment from their local pharmacist, the ECAPS Act will improve health outcomes, particularly in rural and underserved communities where access to care is often limited. We strongly support this effort to expand access and equity in health care.”
“The Future of Pharmacy Care Coalition commends Senate Majority Leader John Thune and Senator Mark Warner for championing the ECAPS Act to ensure seniors, including those living in rural areas and vulnerable communities, can turn to their local pharmacists for testing and treatment services that can protect them from certain common respiratory conditions,” said the Future of Pharmacy Care Coalition. “Congress must move with urgency to provide seniors with Medicare coverage in states where pharmacists can offer testing and treatment services for conditions that, although common, can quickly become life-threatening if not properly managed.”
###
Warner, Kaine, and Colleagues Press FAA on Federal Workforce Cuts and Use of AI on Aviation Safety
Jul 23 2025
WASHINGTON – U.S. Senators Mark R. Warner and Tim Kaine (both D-VA) joined Senator Edward J. Markey (D-MA) and nine of their Senate colleagues in sending a letter to Federal Aviation Administration (FAA) Administrator Bryan Bedford requesting answers on the impact of FAA workforce reductions on aviation safety, including among analytical staff who proactively identify safety risks. The senators also inquired about comments by FAA officials suggesting the agency is using artificial intelligence (AI) to analyze safety data to identify risks.
“The tragic crash of American Airlines flight 5342 highlighted serious gaps in our aviation safety system and demonstrated the need for a robust and experienced analytical workforce at the Federal Aviation Administration (FAA). Unfortunately, over the past six months, your agency has significantly reduced its workforce. We are deeply concerned about these reductions’ impact on aviation safety,” the lawmakers wrote.
“The National Transportation Safety Board (NTSB) investigation into the crash of American Airlines flight 5342 has demonstrated the need for a robust FAA workforce, beyond the air traffic controllers and other FAA personnel on the front lines of our aviation system. According to the NTSB investigation, more than 15,000 ‘close proximity events’ occurred at Ronald Reagan Washington National Airport over the last five years—reflecting a shockingly high trend that the FAA should have identified…It’s critical that this Administration ensures the FAA has the workforce capacity to proactively and properly analyze aviation safety data to prevent another crash like the American Airlines flight 5342 tragedy,” the senators continued.
“In the aftermath of the crash, the FAA should be analyzing the near miss data from events at Reagan National Airport and reviewing the sufficiency of FAA staffing. Instead, the agency has moved ahead with workforce reductions. In particular, FAA fired hundreds of probationary employees in critical support roles key to assisting air traffic controllers in doing their jobs,” the lawmakers wrote.
The lawmakers requested the following information by August 11, 2025:
- For each FAA line of business and its relevant suboffices, please provide the (a) number of employees employed as of January 1, 2025, (b) number of employees employed as of July 1, 2025, and (c) the current number of job openings.
- For each FAA line of business and its relevant suboffices, please indicate whether any of its job positions have been subject to a hiring freeze since January 20, 2025.
- Please provide the analysis conducted by the Office of Airports related to the impact of workforce cuts on its safety mission.
- Besides the Office of Airports, please explain if any other FAA line of business has conducted an analysis of the impact of workforce cuts on its ability to deliver its mission. If so, please provide those analyses.
- Please explain all relevant FAA lines of business and relevant suboffices charged with identifying aviation safety trends and possible safety risks affecting airport operations in congested airspace.
- What specific AI tools is the FAA using to analyze aviation safety impacts and flight data and how is this improving FAA’s analysis? Does the FAA have adequate staff, familiar with these tools, to manage this analysis and ensure the security of the data used and generated by AI?
In addition to Warner, Kaine, and Markey, the letter was cosigned by Senators Angela Alsobrooks (D-MD), Richard Blumenthal (D-CT), Cory Booker (D-NJ), Mazie Hirono (D-HI), Jeff Merkley (D-OR), Bernie Sanders (I-VT), Chris Van Hollen (D-MD), Elizabeth Warren (D-MA), and Peter Welch (D-VT).
Warner and Kaine have long championed aviation safety and spoken out against federal workforce reductions at the FAA and other agencies. Following the January 29, 2025 collision between an Army Black Hawk helicopter and American Airlines flight 5342 near Ronald Reagan Washington National Airport (DCA), Warner and Kaine demanded answers from the FAA on additional safety measures to protect the public and expressed concerns about the impact of the “Department of Government Efficiency” on efforts to address the issues that led to the mid-air collision. The senators also introduced legislation to strengthen aviation safety. Kaine, a member of the Senate Armed Services Committee, secured a provision in the committee-passed Fiscal Year 2026 National Defense Authorization Act to require that all Department of Defense aircraft that operate near commercial airports be equipped with broadcast positioning technology. Earlier this year, Kaine invited Jason King, a veteran from Fairfax who was fired from his position in the FAA’s safety division, as his guest to the State of the Union address. King was rehired after the State of the Union.
Full text of the letter is available here and below:
Dear Administrator Bedford:
The tragic crash of American Airlines flight 5342 highlighted serious gaps in our aviation safety system and demonstrated the need for a robust and experienced analytical workforce at the Federal Aviation Administration (FAA). Unfortunately, over the past six months, your agency has significantly reduced its workforce. We are deeply concerned about these reductions’ impact on aviation safety. We therefore write to request information on changes in the FAA workforce and their impact on aviation safety, including any analyses that the FAA has conducted on the effects of workforce reductions on the agency’s safety mission.
The National Transportation Safety Board (NTSB) investigation into the crash of American Airlines flight 5342 has demonstrated the need for a robust FAA workforce, beyond the air traffic controllers and other FAA personnel on the front lines of our aviation system. According to the NTSB investigation, more than 15,000 “close proximity events” occurred at Ronald Reagan Washington National Airport over the last five years — reflecting a shockingly high trend that the FAA should have identified. At a Senate Commerce Committee hearing in March, the then-Acting FAA Administrator Chris Rocheleau acknowledged that the agency missed this warning sign, in part because of the sheer volume of data that FAA personnel must analyze. The Acting Administrator’s testimony illustrated the need for an FAA workforce robust and experienced enough to analyze all relevant data and identify safety risks. It’s critical that this Administration ensures the FAA has the workforce capacity to proactively and properly analyze aviation safety data to prevent another crash like the American Airlines flight 5342 tragedy.
Despite this clear need for enhanced analytical capacity, the FAA has instead moved to reduce its workforce during this critical period. In the aftermath of the crash, the FAA should be analyzing the near miss data from events at Reagan National Airport and reviewing the sufficiency of FAA staffing. Instead, the agency has moved ahead with workforce reductions. In particular, FAA fired hundreds of probationary employees in critical support roles key to assisting air traffic controllers in doing their jobs. With the Department of Transportation (DOT) pushing personnel to leave via two rounds of the Deferred Resignation Program — under which employees could elect to resign and receive pay until September 2025 — coupled with the federal hiring freeze, federal officials are leaving their jobs and it may be difficult for the FAA to attract new, qualified employees. Although the DOT assured Senators that key FAA safety staff were exempt from firings and the Deferred Resignation Program, the FAA has still not clarified whether it has the staff it needs to ensure the safety of the American public. Estimates from the DOT suggest that between 1,000 and 3,000 employees may leave the agency once the Deferred Resignation Program offers are finalized. According to an internal presentation to FAA management: “Employees are departing the agency in mass quantities across all skill levels.” Most recently, the Supreme Court’s ruling allowing federal agencies to move forward with staffing cuts consistent with existing federal law may clear the way for the Department of Transportation to proceed with a large Reduction in Force. This moment — after a tragic crash highlighted critical gaps in aviation safety — seems like precisely the wrong time for the FAA to aggressively shrink its workforce.
Moreover, the FAA’s recent announcement that it is using artificial intelligence (AI) to analyze its data — without explaining whether such AI tools are reliable or effective — provides little reassurance to the public. While we support the use of technology to improve how aviation safety data is used, the decision to rely on technological fixes while simultaneously moving ahead with staffing reductions is deeply worrisome. The FAA has not been transparent with Congress about the types of technology it is now using, whether those technologies are replacing, augmenting, or otherwise impacting the FAA workforce, or whether it requires human review of AI analyses before using any analysis in a safety-related decision. This reliance on technological fixes — without a transparent analysis of the FAA’s workforce levels and capacity — raises questions about the FAA’s commitment to prioritizing safety.
If the FAA lacks the staff to identify safety risks before future incidents occur, Congress must be informed of this as soon as possible. At a recent Senate Commerce Committee hearing, Senators questioned FAA officials from the Office of Airports, the Office of Aviation Safety, and the Air Traffic Organization about the personnel reductions at their respective offices and whether their offices had conducted any analysis on the impact of these workforce cuts on aviation safety. Only the head of the FAA Office of Airports — which is charged with planning and developing a safe and efficient national airport system — responded that his Office had conducted such an analysis. Senators urged the FAA to turn over that analysis to the Committee, along with data on any workforce reductions, but to date it has not. It is essential that Congress have sufficient information to understand the impact of recent FAA personnel changes on aviation safety.
To better understand the impact of FAA workforce reductions on aviation safety, please provide written responses to the following questions and requests for information by August 11, 2025:
- For each FAA line of business and its relevant suboffices, please provide the (a) number of employees employed as of January 1, 2025, (b) number of employees employed as of July 1, 2025, and (c) the current number of job openings.
- For each FAA line of business and its relevant suboffices, please indicate whether any of its job positions have been subject to a hiring freeze since January 20, 2025.
- Please provide the analysis conducted by the Office of Airports related to the impact of workforce cuts on its safety mission.
- Besides the Office of Airports, please explain if any other FAA line of business has conducted an analysis of the impact of workforce cuts on its ability to deliver its mission. If so, please provide those analyses.
- Please explain all relevant FAA lines of business and relevant suboffices charged with identifying aviation safety trends and possible safety risks affecting airport operations in congested airspace.
- What specific AI tools is the FAA using to analyze aviation safety impacts and flight data and how is this improving FAA’s analysis?
- Does the FAA have adequate staff, familiar with these tools, to manage this analysis and ensure the security of the data used and generated by AI?
- How were these AI tools selected? Please describe the specific testing or evaluation conducted in advance of the implementation of the tools and provide a copy of any reports or conclusions produced. If no testing or evaluation occurred, please explain why not.
Thank you in advance for your attention to this matter.
Sincerely,
###
Warner & Colleagues Demand Answers from Delta on Use of AI to Set Individualized Ticket Prices
Jul 22 2025
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA), along with Sens. Ruben Gallego (D-AZ) and Richard Blumenthal (D-CT), led their colleagues in demanding answers from Delta Air Lines CEO Ed Bastian after the company announced that it plans to ramp up its use of artificial intelligence to set surveillance-based ticket prices.
“Individualized pricing, or surveillance-based price setting, eliminates a fixed or static price in favor of prices that are tailored to an individual consumer’s willingness to pay. Delta’s current and planned individualized pricing practices not only present data privacy concerns, but will also likely mean fare price increases up to each individual consumer’s personal ‘pain point’ at a time when American families are already struggling with rising costs,” wrote the senators. “The technology making that determination is trained using ‘all the data we can get our hands on,’ according to Fetcherr CEO Roy Cohen, and the company’s website claims that AI adoption and usage could increase aviation industry profits by up to $4.4 trillion annually.”
“The implications for individual consumer privacy are severe on their own. Surveillance pricing has been shown to utilize extensive personal information obtained through a variety of third-party channels, including data about a passenger’s purchase history, web browsing behavior, geolocation, social media activity, biometric data, and financial status,” they continued. “Former FTC Chair Lina Khan has cautioned against a particularly egregious but conceivable example of an airline using AI to charge a higher fare to a passenger ‘because the company knows that they just had a death in the family and need to fly across the country.’”
In the letter, the senators demanded answers on the company’s plans to protect Americans from pricing discrimination. They also requested answers to a series of questions about the types and sources of data Delta will use to train this AI system, how many passengers and which routes will be impacted, and what steps the company has taken to ensure compliance with all applicable federal and state laws.
The full text of the letter is available here.
###
WASHINGTON — U.S. Sen. Mark R. Warner (D-VA) joined Sen. Jack Reed (D-RI) and a number of their Senate colleagues in a letter demanding that the Consumer Financial Protection Bureau (CFPB) perform its essential work supervising and investigating violations of consumer financial protection laws and taking forceful enforcement actions against scammers and payday lenders. This letter comes on the heels of an ill-advised move by the Trump administration to shutter the CFPB, which collects, investigates, and monitors consumer complaints about financial products and services, and provides relief to consumers who have been wronged by unscrupulous financial providers.
As a consumer watchdog, the CFPB looks out for Americans’ financial wellbeing, preventing scams and holding offenders accountable. This is especially true for servicemembers, veterans, and their families. Since the agency’s inception, the CFPB has returned over $21 billion to consumers who have fallen victim to abusive and illegal activity.
“This morning, in your capacity as Acting Director of the Consumer Financial Protection Bureau (CFPB), you issued a directive to employees to cease all work without your express written approval. This includes investigations, supervision, enforcement, and litigation activities, as well as all stakeholder engagement and public communications. This decision leaves all Americans susceptible to predatory lending and other abusive practices, but in particular, it eliminates protections that prevent servicemembers from being exploited,” wrote the senators.
In this letter, the senators also note that the Trump Administration’s decision to stop supervision, enforcement, and litigation eliminates key protections enacted by Congress through the Military Lending Act (MLA) and Servicemembers Civil Relief Act (SCRA) to protect servicemembers, who are disproportionately targeted by predatory lenders and schemes and often face greater financial risks than civilian borrowers due to the nature of their military service. The financial and legal protections in these bipartisan laws – most notably a temporary reduction in interest rates on mortgages, credit cards, and auto loans – are critical to national defense and military readiness.
“Nullifying the MLA and imperiling servicemembers’ rights under the SCRA will degrade military readiness, cost taxpayers money, and tarnish servicemembers’ records. The Department of Defense (DOD) has stated that ‘high-cost debt can detract from mission focus, reduce productivity, and require the attention of supervisors and commanders.’ Morale suffers when servicemembers and their families are trapped in cycles of debt. And taxpayers are on the hook when our servicemembers leave the military due to avoidable personal issues like financial insecurity. According to DOD, each separated servicemember costs the Pentagon more than $58,000,” they continued.
“Accordingly, we request that the CFPB continue to supervise and investigate violations of the consumer financial protection laws and take forceful enforcement actions against lenders that violate the law, especially when it comes to predatory lending that harms our military readiness. We also request that the CFPB continue to make public communications to consumers, especially to servicemembers regarding the rights that they are owed under the SCRA,” the letter concluded.
In addition to Sens. Warner and Reed, the letter was signed by U.S. Sens. Jeanne Shaheen (D-NH), Ben Ray Luján (D-NM), Gary Peters (D-MI), Jeff Merkley (D-OR), Jon Ossoff (D-GA), Cory Booker (D-NJ), John Hickenlooper (D-CO), and Edward Markey (D-MA).
A copy of the letter is available here and below:
Dear Director Vought:
This morning, in your capacity as Acting Director of the Consumer Financial Protection Bureau (CFPB), you issued a directive to employees to cease all work without your express written approval. This includes investigations, supervision, enforcement, and litigation activities, as well as all stakeholder engagement and public communications. This decision leaves all Americans susceptible to predatory lending and other abusive practices, but in particular, it eliminates protections that prevent servicemembers from being exploited.
This funding, supervision, enforcement, and communications freeze will hit military families especially hard. Without a functional CFPB, military families will be stripped of their financial protections under the bipartisan Military Lending Act (MLA) that they have earned and deserve by serving our Nation. The CFPB is the primary agency responsible for supervising and enforcing the MLA against nonbank financial companies, including payday lenders, pawnshops, and debt collectors who have charged servicemembers interest rates as high as 600% and who have threatened to derail their careers if they do not pay up.
The agency’s supervision and enforcement program has delivered concrete results for the military. The CFPB has resolved 39 cases involving harm to servicemembers and veterans, returning $363 million to victims, including six enforcement actions for violations of the MLA. Two additional MLA cases are currently pending in court, alleging that a pawn shop and an installment lender charged sky-high interest rates to military families and engaged in deceptive practices to illegally harvest fees. With these cases frozen, no supervision, staff locked out, and additional enforcement off the table, unscrupulous lenders will exploit these circumstances to engage in additional predatory lending. The actions that you have taken since being installed as Acting Director betray our servicemembers and empower scammers who want to rip them off.
Further, recent CFPB research identified a long-running pattern of lenders failing to decrease servicemembers’ interest rates while on active duty as required by the Servicemembers Civil Relief Act (SCRA). These failures cost servicemembers thousands of dollars per year. The CFPB’s public communications have held lenders accountable and helped servicemembers exercise their rights under Federal law.
Nullifying the MLA and imperiling servicemembers’ rights under the SCRA will degrade military readiness, cost taxpayers money, and tarnish servicemembers’ records. The Department of Defense (DOD) has stated that “high-cost debt can detract from mission focus, reduce productivity, and require the attention of supervisors and commanders.” Morale suffers when servicemembers and their families are trapped in cycles of debt. And taxpayers are on the hook when our servicemembers leave the military due to avoidable personal issues like financial insecurity. According to DOD, each separated servicemember costs the Pentagon more than $58,000.
Accordingly, we request that the CFPB continue to supervise and investigate violations of the consumer financial protection laws and take forceful enforcement actions against lenders that violate the law, especially when it comes to predatory lending that harms our military readiness. We also request that the CFPB continue to make public communications to consumers, especially to servicemembers regarding the rights that they are owed under the SCRA.
We request your commitment no later than February 12, 2025. Thank you for your attention to this important matter.
Sincerely,
###
Warner Presses Valve to Crack Down on Hateful Accounts and Rhetoric Proliferating on Steam
Nov 15 2024
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) today urged leadership at Valve, a prominent video game company, to respond to reports that its gaming distribution and social networking platform, Steam, is hosting extremist and hateful content – including over 1.5 million users and tens of thousands of groups that share and amplify antisemitic, Nazi, sexuality- or gender-based hate, and white supremacist content. Sen. Warner called for broad action from Valve to bring its content moderation practices in line with industry standards and crack down on the rampant proliferation of hate-based content.
“I write to you today regarding the hate and extremism that has recently been identified on your gaming digital distribution and social networking platform Steam,” Warner wrote. “Recently, the Anti-Defamation League (ADL) released a report where ADL identified over 1 million unique user accounts and nearly 100,000 user-created groups that glorified antisemitic, Nazi, white supremacist, gender- and sexuality-based hatred, and other extremist ideologies on Valve’s Steam platform.”
The letter notes that Steam has millions of active users who are now exposed to extremist ideologies. According to the ADL report, Steam hosts almost 900,000 users with extremist or antisemitic profile pictures, 40,000 groups with names that included hateful words, and rampant use of text-based images, particularly of swastikas, resulting in over 1 million unique hate-images.
“My concern is elevated by the fact that Steam is the largest single online gaming digital distribution and social networking platform in the world with over 100 million unique user accounts and a userbase similar in scale to that of the ‘traditional’ social media and social network platforms. Steam is financially successful, with a dominant position in its sector, and makes Valve billions of dollars in annual revenue. Until now, Steam has largely not received its due attention as a de facto major social network where its users engage in many of the same activities expected of a social media platform,” Warner continued.
“We have seen on other social networking platforms that lax enforcement of the letter of user conduct agreements, when coupled with a seeming reluctance by those companies to embrace the spirit (namely providing users with a safe, welcoming place to socialize) of those same agreements, leads to toxic social environments that elevate harassment and abuse. You should want your users (and prospective users) to not have to wonder if they or their children will be harassed, intimidated, ridiculed or otherwise face abuse,” Warner concluded.
The letter ends with a series of questions for Valve regarding their enforcement of their own terms of service and their commitment to reining in toxic content.
For years, Sen. Warner, a former tech entrepreneur, has been raising the alarm about the rise of hate-fueled content proliferating online, as well as the threat posed by domestic and foreign bad actors circulating disinformation. Recently, he pressed directly for action from Discord, another video game-based social networking site, which is hosting violent predatory groups that coerce minors into self-harm and suicide. He has also called attention to the rise of pro-eating disorder content on AI platforms. A leader in the tech space, Sen. Warner has also led the charge for broad Section 230 reform to allow social media companies to be held accountable for enabling cyber-stalking, harassment, and discrimination on their platforms.
A copy of the letter is available here and below.
Dear Mr. Newell:
I write to you today regarding the hate and extremism that has recently been identified on your gaming digital distribution and social networking platform Steam. Recently, the Anti-Defamation League (ADL) released a report where ADL identified over 1 million unique user accounts and nearly 100,000 user-created groups that glorified antisemitic, Nazi, white supremacist, gender- and sexuality-based hate, and other extremist ideologies on Valve’s Steam platform.
It has been brought to your attention before that extremist ideologies seem to find a home on Steam. In 2022, Valve received a Senate letter identifying nearly identical activity on your platform, and yet two years later it appears that Valve has chosen to continue a ‘hands off’-type approach to content moderation that favors allowing some users to engage in sustained bouts of disturbing and violent rhetoric rather than ensuring that all of its users can find a welcoming and safe environment across your platform.
My concern is elevated by the fact that Steam is the largest single online gaming digital distribution and social networking platform in the world with over 100 million unique user accounts and a userbase similar in scale to that of the ‘traditional’ social media and social network platforms. Steam is financially successful, with a dominant position in its sector, and makes Valve billions of dollars in annual revenue. Until now, Steam has largely not received its due attention as a de facto major social network where its users engage in many of the same activities expected of a social media platform.
ADL also found that, in addition to the extremely concerning number of hateful accounts and user groups, Steam hosts:
- Almost 900,000 users with extremist or antisemitic profile pictures
- 40,000 groups with names that included hateful words, with the most prominent being “1488,” “shekel,” and “white power”
- Rampant use of text-based images (so-called “copypasta” or “ASCII art”), particularly of swastikas, resulting in over 1 million unique hate-images.
Valve has a Steam Online Conduct policy (“Conduct Policy”) and a Steam Subscriber Agreement (“Agreement”) that Steam subscribers agree to abide by as a condition of using the service. The Conduct Policy requires that “[in] general, as a Steam user you should be a good online citizen and not do anything that prevents any other Steam user from using and enjoying Steam”. The Conduct Policy explicitly directs subscribers to not:
- “Engage in unlawful activity [including] encouraging real-world violence…”
- “Upload or post illegal or inappropriate content [including] [real] or disturbing depictions of violence…”
- “Violate others’ personal rights”
- “Harass other users or Steam personnel [which includes not engaging in] trolling; baiting; threatening; spamming; intimidating; and using abusive language or insults.”
It is reasonable to question how committed Valve is to effectively implement and enforce Valve’s own, self-created Conduct Policy for its users, in light of the 1 million Steam user accounts and 100,000 user-created groups glorifying hateful ideologies that ADL found. We have seen on other social networking platforms that lax enforcement of the letter of user conduct agreements, when coupled with a seeming reluctance by those companies to embrace the spirit (namely providing users with a safe, welcoming place to socialize) of those same agreements, leads to toxic social environments that elevate harassment and abuse. You should want your users (and prospective users) to not have to wonder if they or their children will be harassed, intimidated, ridiculed or otherwise face abuse.
As Black Friday and the holiday buying season approach, the American public should know that not only is Steam an unsafe place for teens and young adults to purchase and play online games, but also that, absent a change in Valve’s approach to user moderation and the type of behavior that it welcomes on its platform, Steam is playing a clear role in allowing harmful ideologies to spread and take root among the next generation.
Valve must bring its content moderation practices in line with industry standards or face more intense scrutiny from the federal government for its complicity in allowing hate groups to congregate and engage in activities that undoubtedly put Americans at risk.
Please provide answers to the following questions no later than December 13, 2024. Please provide answers in-line with the questions, and not a narrative that attempts to answer multiple questions.
- Please describe Valve’s current practices used to enforce its terms of service.
- Please provide the definition that Valve uses internally to define each of the following terms and/or behaviors from the Conduct Policy in order to evaluate potential violations of said policy:
- “Encouraging real-world violence”;
- “Inappropriate content”;
- “Real or disturbing depictions of violence”;
- “Violate others’ personal rights”; and
- “Harass other users or Steam personnel”, including:
i. trolling;
ii. baiting;
iii. threatening;
iv. intimidating; and
v. abusive language or insults.
- How many allegations did Valve receive from users about potential violations of the Conduct Policy? Include in your response each date when the Conduct Policy was changed, updated, or otherwise modified. Please provide data sufficient to answer this question for each of the following:
- Each month of each of the years of 2014 to 2024;
- Each category of violation (however Valve tracks types or categories of violations of the policy);
- Each category of violation for each month of each of the years of 2014 to 2024;
- The disposition and/or any findings of each complaint received by Valve (this may be presented in aggregate), whether through Steam’s internal reporting mechanisms or any other means, and subsequent action taken by Valve in response to each complaint (this may be presented in aggregate).
- The number of unique user accounts that were subject to adverse, punitive, or corrective actions by Valve:
i. In response to a user-generated complaint; and
ii. In response to violations identified by Valve moderators of their own accord.
- For item e, above, please provide data on unique payment methods (e.g. credit card accounts, PayPal or similar payment method accounts, JCB, Klarna, Paysafecard, and any other payment methods accepted on Steam that are uniquely identifiable) associated with each account subject to adverse, punitive, or corrective actions by Valve that was subsequently used for any other account (this may be presented in aggregate).
- Approximately how many human content moderators work for Steam?
- How many of those moderators are in-house Valve employees?
- How many of those moderators are contracted by Valve?
- Does Steam supplement this work with AI-content moderation systems? If so, describe the ways in which any AI system is deployed for that purpose, including any evaluation process that Valve carried out to test any such system and the results that demonstrate the efficacy of any such system in identifying and/or removing content that violates the Conduct Policy and Subscriber Agreement.
- What steps does Valve take to prevent, monitor, and mitigate extremist, white supremacist, and terrorism-related content?
- What commitments will Steam make to ensure that it has meaningfully curbed white supremacist, antisemitic, terroristic, Nazi, homophobic, transphobic, misogynist, and hateful content by November 15, 2025?
- What transparency measures does Valve plan to implement to inform users and the public about content moderation actions related to extremism and behavior that could be reasonably interpreted as endorsing extremist thoughts, beliefs, and/or actions on the platform?
- The research shows a period from late 2019 to mid-2020 during which it appears Valve may have stepped up its moderation of certain types of hateful content on Steam. Can you provide more detail on your content moderation practices during this time?
- How frequently does Valve evaluate its content moderation practices related to extremism?
- How frequently do those evaluations result in changes, updates, or other modifications to Valve’s content moderation practices related to extremism?
- Has Steam, or Valve, made policy, enforcement, or practical decisions that have had the effect of limiting its content moderation? If so, provide the date(s) of each decision and enough information to understand the context and analysis that led to each decision.
I greatly appreciate your swift attention to this matter and look forward to reviewing your response.
Sincerely,
###
WASHINGTON – Today, ahead of the Fourth of July holiday, U.S. Sens. Mark R. Warner and Tim Kaine (both D-VA) are urging the Consumer Product Safety Commission (CPSC) to work with ASTM International (formerly the American Society for Testing and Materials), a nonprofit that recently released safety standards to address hazards posed by detached and flyaway beach umbrellas, to evaluate and finalize these new standards.
CPSC estimates that nearly 3,000 individuals across the country are sent to the emergency room each year due to umbrella-related accidents. Recent incidents include last week’s accident in Cocoa Beach, Florida, as well as the tragic 2016 death of Lottie Michelle Belk of Chester, Va., who was struck in the torso and killed while vacationing in Virginia Beach with her family.
“As we enter the Fourth of July holiday weekend, with millions of Americans enjoying our country’s beaches, lakes, and rivers, it is vital that beachgoers are safe from dislodged beach umbrellas. Improperly secured umbrellas can result in death or serious injury. We believe that recent actions by CPSC and the American Society of Testing Materials (ASTM), including the release earlier this year of the ASTM F3681-24 safety standard, are good steps forward. We urge CPSC and industry to work together quickly to finish any outstanding suggested improvements or other modifications to this standard,” the Senators wrote.
They continued, “As the Beach Umbrellas Task Group continues to consider additional guidance and safety improvements to the ASTM F3681-24 standard, it is important that the group move swiftly and with thoroughness. We encourage the group to evaluate the full scope and harm of unsecured beach umbrellas to maximize safety for beachgoers.”
This letter is the latest push by Sens. Warner and Kaine to ensure the safety and wellbeing of beachgoers. In a 2019 letter to CPSC, they drew attention to the unexpected danger that flying beach umbrellas pose to beachgoers, and in 2021 the senators pushed ASTM International for increased safety measures. These efforts culminated in ASTM’s release of new safety standards earlier this year.
Full text of the letter is available here and below:
Dear Chair Hoehn-Saric:
We write today requesting swift action by the Consumer Product Safety Commission (CPSC) to ensure that beach umbrellas sold in the United States are safe for consumers and the public. To achieve this goal, we urge the Beach Umbrellas Task Group within CPSC to finalize strong and clear consumer safety guidance for the design, manufacture, and use of beach umbrellas and anchor devices.
As we enter the Fourth of July holiday weekend, with millions of Americans enjoying our country’s beaches, lakes, and rivers, it is vital that beachgoers are safe from dislodged beach umbrellas. Improperly secured umbrellas can result in death or serious injury. We believe that recent actions by CPSC and the American Society of Testing Materials (ASTM), including the release earlier this year of the ASTM F3681-24 safety standard, are good steps forward. We urge CPSC and industry to work together quickly to finish any outstanding suggested improvements or other modifications to this standard.
Following our sustained engagement on this issue over the last several years, the CPSC has worked with ASTM and other industry stakeholders to develop the ASTM F3681-24 safety standard and released it in April 2024. ASTM F3681-24 establishes minimum requirements for the safe anchoring of beach umbrellas, which should protect beachgoers from dislodged and airborne umbrellas.
As the Beach Umbrellas Task Group continues to consider additional guidance and safety improvements to the ASTM F3681-24 standard, it is important that the group move swiftly and with thoroughness. We encourage the group to evaluate the full scope and harm of unsecured beach umbrellas to maximize safety for beachgoers.
We appreciate your attention to this critical and timely matter and urge CPSC to work with industry stakeholders to strengthen protections for consumers across the Commonwealth and nation.
Sincerely,
###
Warner, Colleagues Introduce Bipartisan Legislation to Keep Kids Safe, Healthy, Off Social Media
May 01 2024
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) joined Sens. Brian Schatz (D-HI), Ted Cruz (R-TX), and a bipartisan group of colleagues in introducing the Kids Off Social Media Act, legislation that would set a minimum age of 13 to use social media platforms and prevent social media companies from feeding algorithmically targeted content to users under the age of 17. Joining Sens. Warner, Schatz, and Cruz in introducing the bill are U.S. Sens. Chris Murphy (D-CT), Katie Britt (R-AL), Peter Welch (D-VT), Ted Budd (R-NC), John Fetterman (D-PA), and Angus King (I-ME).
The Kids Off Social Media Act aims to address concerns regarding the mental health crisis of children and teens in relation to their use of social media. No age demographic is more affected by the ongoing mental health crisis in the United States than kids, especially young girls. The Centers for Disease Control and Prevention’s Youth Risk Behavior Survey found that 57 percent of high school girls and 29 percent of high school boys felt persistently sad or hopeless in 2021, with 22 percent of all high school students—and nearly a third of high school girls—reporting they had seriously considered attempting suicide in the preceding year.
Studies have shown a strong relationship between social media use and poor mental health, especially among children. From 2019 to 2021, overall screen use among tweens (ages 8 to 12) and teens increased by 17 percent, with tweens using screens for five hours and 33 minutes per day and teens using screens for eight hours and 39 minutes. Based on the clear and growing evidence, the U.S. Surgeon General issued an advisory last year, calling for new policies to set and enforce age minimums and highlighting the importance of limiting the use of features, like algorithms, that attempt to maximize time, attention, and engagement.
“Parents across the country are struggling to protect their kids from the harmful effects of too much social media, and studies show that today’s unregulated social media landscape has fostered a toxic environment for young people, promoting bullying, eating disorders, and mental health struggles unchecked,” said Sen. Warner. “I’m proud to join this bipartisan effort to enact some common sense guardrails for kids and teens using social media platforms.”
Specifically, the Kids Off Social Media Act would:
- Prohibit children under the age of 13 from creating or maintaining social media accounts, consistent with the current practices of major social media companies;
- Prohibit social media companies from pushing targeted content using algorithms to users under the age of 17;
- Provide the FTC and state attorneys general authority to enforce the provisions of the bill; and
- Follow the existing Children’s Internet Protection Act (CIPA) framework to require schools to block and filter social media on their federally funded networks, which many schools already do.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for building a safer online environment, specifically for young people. Last year, he introduced the Kids Online Safety Act, legislation that provides young people and parents with the tools, safeguards, and transparency they need to protect against online harms. He has also introduced several pieces of legislation aimed at holding Big Tech accountable, including the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms, and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads. Most recently, he saw through the passage of the national security supplemental aid package, which included a requirement that TikTok’s China-based parent company, ByteDance, divest the platform within one year.
“Public Citizen stands in strong support of this legislation intended to protect the nation’s children from the pernicious impacts of social media. Frequent use of social media can harm vulnerable children and teens as their identities and feelings of self-worth are forming. A straightforward ban for younger children and stopping abusive algorithmic engagement with teens just makes sense. We applaud Senator Schatz for his commonsense bill,” said Lisa Gilbert, Executive Vice President of Public Citizen.
“We survey mothers on pressing issues they face and on the federal bills that seek to address them. We do this because mothers’ first-hand experiences and knowledge are critical sources of information in the policy-making process. This bill, newly renamed the ‘Kids Off Social Media Act,’ had more support by mothers -- across the political spectrum -- than any bill we've studied. Mothers are on the frontlines of this issue, and according to our quantitative and qualitative study, they overwhelmingly believe that social media companies' products and practices should be regulated using age limits and guardrails, similar to other harmful substances,” said Jennifer Bransford, Founder of Count on Mothers.
“Our nation is facing a severe crisis in children’s mental health,” said Dr. Regena Spratling, President of the National Association of Pediatric Nurse Practitioners. “Every day pediatric nurse practitioners (PNPs) and other advanced practice registered nurses (APRNs) focused on children’s health see the serious impact that social media can have on our young people’s well-being. The ‘Kids Off Social Media Act’ will help to provide parents the tools they need to safeguard their children from threats in the digital world.”
“Preparing nurses to help address our country’s growing mental health problems is one of nursing education’s highest priorities,” said Dr. Beverly Malone, President and CEO of the National League for Nursing. “The National League for Nursing is pleased to support the ‘Kids Off Social Media Act’ as an important step to help parents and health care professionals shield our young people from harmful online content that can lead to behavioral health problems.”
“KIDS TOO strongly supports comprehensive legislation that protects kids on social media. Senator Schatz's Kids Off Social Media Act solidifies prohibiting youth under 13 from maintaining or creating social media accounts. This bill gets to the root of the issue by eliminating the chance of young kids being vulnerable to harmful tactics by predators, bullies and drug dealers,” said Tania Haigh, Executive Director of KIDS TOO.
“We’re still learning about the long-term implications that unfettered access to social media has on children and adolescents. Until then, especially considering evidence showing that the way people use social media can impact mental health outcomes, it makes sense to put safeguards in place. As we learn more, we can modify these safeguards as needed. But we need to begin somewhere, and this legislation would provide an opportunity to more clearly understand whether modest safeguards can protect children and adolescents and what responsible measures look like,” said Chuck Ingoglia, President and CEO of the National Council for Mental Wellbeing.
The Kids Off Social Media Act is supported by the American Counseling Association, KidsToo, National Association of Social Workers, National Association of Pediatric Nurse Practitioners, Tyler Clementi Foundation, National Council for Mental Wellbeing, Count on Mothers, Parents Television and Media Council, Parents Who Fight, Public Citizen, National Federation of Families, National Organization for Women, National Association of School Nurses, National League for Nursing, and American Academy of Child & Adolescent Psychiatry.
Full text of the legislation is available here.
###
WASHINGTON, D.C. – U.S. Senators Mark R. Warner and Tim Kaine joined Senators Bob Casey (D-PA), John Fetterman (D-PA), Sherrod Brown (D-OH), and Joe Manchin (D-WV) and U.S. Representative Bobby Scott (D-VA-3) in introducing the Black Lung Benefits Improvement Act, which would help miners who have suffered from black lung disease and their survivors access the workers’ compensation they are entitled to receive under the Black Lung Benefits Program. This legislation would remove barriers that prevent miners and their survivors from accessing their benefits, such as lengthy processing times, lack of a legal representative, and inflation.
“For generations, coal miners across Virginia have made tremendous sacrifices to power America, literally risking their lives and their health to electrify our nation,” said Senator Warner. “Miners living with black lung and their survivors need easy access to the benefits they’ve earned – but far too often, red tape gets in the way. The Black Lung Benefits Improvement Act would take important steps to make sure miners can access legal representation, have protection against inflation, and more so America can keep making good on the debt it owes to victims of black lung.”
“Many of our nation’s miners have developed black lung disease, and we owe it to them to provide them with the care and support they need,” said Senator Kaine. “The Black Lung Benefits Improvement Act is critical to helping more miners, miner retirees, and their families receive the benefits and compensation they’ve earned following their tremendous sacrifices.”
Many miners have developed coal workers’ pneumoconiosis—commonly referred to as “black lung”—a debilitating and deadly disease caused by the long-term inhalation of coal dust in underground and surface coal mines. In response, Congress passed the Black Lung Benefits Act in 1976 to provide monthly compensation and medical coverage for coal miners who develop black lung disease and are disabled. The Black Lung Benefits Improvement Act makes needed updates to ensure Congress is fulfilling its commitment to the Nation’s coal miners by:
- Restoring cost-of-living benefit increases for black lung beneficiaries and ensuring cost-of-living increases are never withheld in the future,
- Helping miners and their survivors secure legal representation by providing interim attorney fees for miners who prevail at various stages of their claim,
- Allowing miners or their survivors to reopen their cases if they had been wrongly denied benefits because of errors in medical interpretations, and
- Prohibiting unethical conduct by attorneys and doctors in the black lung claims process, such as withholding evidence of black lung, and helping miners review and rebut potentially biased or inaccurate medical evidence developed by coal companies.
Warner and Kaine have long worked to support miners and their families. The Senate-passed draft of the Fiscal Year 2024 government funding bill includes $12.19 million in federal funding for black lung clinics, which the senators are working to ensure is included in the final version of the bill. The Inflation Reduction Act, which the senators helped pass, included a permanent extension of the Black Lung Disability Trust Fund’s excise tax at a higher rate, providing certainty for miners, miner retirees, and their families who rely on the fund to access benefits. This followed Warner and Kaine’s successful efforts to ensure that miners receive the pensions and health care they earned. In July, the senators reintroduced the Relief for Survivors of Miners Act, which would ease restrictions to make it easier for miners’ survivors to successfully claim benefits. Warner and Kaine also urged the Biden Administration to issue new silica standards to protect miners across America – a push that contributed to the release of those standards.
A one-pager on the bill is available here.
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) wrote to Federal Trade Commission (FTC) Chairwoman Lina Khan urging the Commission to take action against Google and Meta over their failure to remove graphic videos depicting the murders of Alison Parker and Adam Ward from YouTube, Facebook, and Instagram. In August 2015, Alison Parker and Adam Ward, employees at CBS affiliate WDBJ, were murdered by a former co-worker while reporting live on WDBJ’s morning broadcast. The live footage, as well as the killer’s own recorded video, has circulated online ever since. For years, Andy Parker, Alison’s father, has been vocal about the damaging impact that this footage has had on his family, including during testimony before the Senate Judiciary Committee.
“I am deeply troubled by this response, as the burden of finding and removing harmful content should not fall to victims’ families who are grieving their loved ones,” Sen. Warner wrote. “This approach only serves to retraumatize them and inflict additional pain. Instead, I firmly believe that the responsibility lies solely with the platform to ensure that any content violating its own Terms of Service is removed expeditiously.”
In March 2020 and October 2021, Mr. Parker submitted complaints to the FTC and requested a Section 5 investigation of deceptive practices in connection with YouTube and Meta (then Facebook). The complaints argue that YouTube and Meta have failed to enforce their terms of service by neglecting to remove videos of the murders of Alison Parker and Adam Ward from their platforms. Section 5 of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce,” with a deceptive act defined as one that misleads or is likely to mislead a consumer acting reasonably.
Sen. Warner continued, “It has been over three years since Mr. Parker and the Georgetown University Law Clinic filed their first complaint regarding this case, and Mr. Parker continues to endure harassment as a result of the videos remaining on these platforms. Given the practices outlined above, I ask that your agency consider all possible avenues to ensure that companies like Google and Meta uphold their Terms of Service, not only in Mr. Parker’s case but also in other instances where their platforms may host violent and harmful content.”
Sen. Warner is one of Congress’ leading voices in demanding accountability and user protections from social media companies and has previously pressed Meta on Facebook's role in inciting violence around the world.
Text of the letter can be found here and below.
Dear Chairwoman Khan,
I write today in support of my constituent, Mr. Andy Parker, and his urgent requests for the Federal Trade Commission (FTC) to take action against Google and Meta over their failure to remove videos depicting the tragic murders of Alison Parker and Adam Ward from YouTube, Facebook, and Instagram. In light of this behavior, I ask that your agency engage closely with Mr. Parker regarding his complaints and explore all possible avenues to ensure that Google and Meta uphold their Terms of Service in relation to violent and harmful content.
In August 2015, journalist Alison Parker and photojournalist Adam Ward were shot and killed during a live television interview in Moneta, Virginia. Following this horrifying event, footage captured by the assailant, as well as video from the live news broadcast, was uploaded to several online platforms, including YouTube, Facebook, and Instagram. Despite the platforms’ policies banning violent content and repeated requests from Mr. Parker and volunteers acting on his behalf to remove this distressing footage, these videos remain on all three platforms to this day. Even more troubling, the footage has been circulated widely by conspiracy theorists who subject the victims’ families to further harm and harassment by falsely claiming the attack was a hoax.
While both Google and Meta purport to have robust content moderation protocols, Mr. Parker's experience demonstrates that the responsibility of removing harmful content often falls upon the victims’ families. It is my understanding that Google responded to Mr. Parker’s complaints by directing him to flag and report each individual video of the attack on YouTube. Further, Instagram’s policy states, “If you see a video or picture on Instagram that depicts the violent death of a family member, you can ask us to remove it. Once we've been notified, we will remove that specific piece of content.” I am deeply troubled by this response, as the burden of finding and removing harmful content should not fall to victims’ families who are grieving their loved ones. This approach only serves to retraumatize them and inflict additional pain. Instead, I firmly believe that the responsibility lies solely with the platform to ensure that any content violating its own Terms of Service is removed expeditiously.
For years, volunteers from the Coalition For A Safer Web have reported videos of Alison’s murder and repeatedly requested that the videos be taken down on Mr. Parker’s behalf. Disturbingly, only some of the flagged videos have been removed, with many still viewable on YouTube, Facebook, and Instagram. While Meta has responded that it removed certain videos from Facebook and Instagram, there are still clear violations of their Terms of Service present on their platforms, with videos of Alison Parker’s murder, filmed by the perpetrator, still accessible on Instagram. While YouTube appears to have more thoroughly removed content of Alison Parker’s murder filmed by the perpetrator, content containing disturbing footage of the moment of attack is widespread. Through the continued hosting of videos showing the heinous attack on Alison Parker and Adam Ward and other violence, these platforms fail to provide users with an experience free of harmful content despite claiming to do so.
It has been over three years since Mr. Parker and the Georgetown University Law Clinic filed their first complaint regarding this case, and Mr. Parker continues to endure harassment as a result of the videos remaining on these platforms. Given the practices outlined above, I ask that your agency consider all possible avenues to ensure that companies like Google and Meta uphold their Terms of Service, not only in Mr. Parker’s case but also in other instances where their platforms may host violent and harmful content. Further, I ask that you engage closely with Mr. Parker as you consider this request and provide him with a prompt response to his complaints.
I look forward to further engagement with you regarding Mr. Parker’s complaints. Thank you for your urgent attention to this matter.
Sincerely,
###
WASHINGTON – U.S. Sen. Mark Warner joined Sens. Ben Ray Luján (D-NM), Edward Markey (D-MA), and others to urge the Federal Communications Commission (FCC) to enforce its existing regulations regarding consent for receiving telemarketing calls, also known as robocalls. The letter also asks the FCC to issue guidance along the lines of the Federal Trade Commission’s (FTC) recent Business Guidance restating the FCC’s long-held requirements for these unwanted telemarketing calls. By issuing guidance similar to the FTC’s, the FCC will assist telemarketers and sellers in complying with these requirements.
“While the consideration of new regulations may be appropriate in some instances, we believe that the FCC’s current regulations already prohibit many of the activities that lead to the proliferation of unwanted telemarketing calls,” wrote the Senators. “Both the regulations issued in 2003 delineating the rules for telemarketers to obtain consent for calls to lines subscribed to the Do Not Call Registry, and those issued in 2012 governing consent to receive telemarketing calls made with an artificial or prerecorded voice or an automated telephone dialing system, clearly set out the types of protections intended by Congress to eliminate unwanted telemarketing calls.”
The Senators concluded, “As Congress instructed the FCC ‘to maximize consistency with the rule promulgated by the Federal Trade Commission’ relating to the implementation of the Do-Not-Call Registry, we respectfully urge the FCC to issue guidance along the lines of the FTC’s recent Business Guidance restating its long-held requirements for these unwanted telemarketing calls. As inconsistent rules governing the same activity would be problematic, by issuing guidance similar to the FTC’s, the FCC will assist telemarketers and sellers in complying with these requirements.”
Sen. Warner, a former cell phone entrepreneur, has been active in fighting robocalls for many years. He sponsored the Telephone Robocall Abuse Criminal Enforcement and Deterrence (TRACED) Act to give regulators – including the FCC – more time to find scammers, increase civil forfeiture penalties, require service providers to adopt call authentication and blocking, and bring relevant federal agencies and state attorneys general together to address impediments to criminal prosecution of robocallers. Former President Trump signed the TRACED Act into law in 2019. In July, he applauded new efforts from the FTC to crack down on spam calls.
In addition to Sens. Warner, Luján, and Markey, the letter is signed by U.S. Senators Chris Van Hollen (D-MD), Peter Welch (D-VT), Elizabeth Warren (D-MA), Angus King (I-ME), Richard Durbin (D-IL), Martin Heinrich (D-NM), Amy Klobuchar (D-MN), Ron Wyden (D-OR), and Gary Peters (D-MI). This letter is endorsed by Appleseed, Consumer Action, Consumer Federation of America, Electronic Privacy Information Center, National Association of State Utility Consumer Advocates, National Consumers League, Public Citizen, Public Knowledge, and U.S. PIRG.
Full text of the letter is available here and below.
Dear Chairwoman Rosenworcel:
We are heartened that the Federal Communications Commission (FCC) is considering ways to curtail the number of unwanted telemarketing calls—currently over 1.25 billion every month—in a proceeding pending under the Telephone Consumer Protection Act (TCPA). As the Commission recognizes, the continued onslaught of illegal calls threatens the trustworthiness and usefulness of our nation’s telephone system.
While the consideration of new regulations may be appropriate in some instances, we believe that the FCC’s current regulations already prohibit many of the activities that lead to the proliferation of unwanted telemarketing calls. Both the regulations issued in 2003 delineating the rules for telemarketers to obtain consent for calls to lines subscribed to the Do Not Call Registry, and those issued in 2012 governing consent to receive telemarketing calls made with an artificial or prerecorded voice or an automated telephone dialing system, clearly set out the types of protections intended by Congress to eliminate unwanted telemarketing calls. Both of these regulations allow robocalls only if the call recipients sign a written agreement relating to calls from a single seller.
Additionally, the FCC’s 2003 regulation for telemarketing calls to lines registered on the Do Not Call Registry requires that the “signed, written agreement” must be “between the consumer and the seller.” This requirement provides two protections. First, it means that only the seller, or the seller’s agent, and not a telemarketer, lead generator, or any other third party, may be party to the agreement with the consumer. Second, it limits the calls that are covered by the agreement to calls related only to the seller that was the party to the agreement. Enforcement of the current limitations applicable to agreements providing consent for telemarketing calls under the existing regulations would eliminate the sale and trading of these consents, which have led to the proliferation of unwanted telemarketing robocalls.
Moreover, as many of these agreements are entered into online, the Electronic Signatures in Global and National Commerce Act (the E-Sign Act) requires specific protections for consumers who receive writings through electronic records. One example of these protections in the E-Sign Act is the prohibition of oral communication as a substitute for a writing. Although telemarketers routinely ignore the requirements of the E-Sign Act, the legislation’s mandate in 15 U.S.C. § 7001(c) for E-Sign consent before writings can be provided in electronic records is fully applicable.
Finally, as Congress instructed the FCC “to maximize consistency with the rule promulgated by the Federal Trade Commission” relating to the implementation of the Do-Not-Call Registry, we respectfully urge the FCC to issue guidance along the lines of the FTC’s recent Business Guidance restating its long-held requirements for these unwanted telemarketing calls. As inconsistent rules governing the same activity would be problematic, by issuing guidance similar to the FTC’s, the FCC will assist telemarketers and sellers in complying with these requirements. This guidance should also emphasize that the obligations imposed by the E-Sign Act apply when these agreements are entered into online.
We appreciate your work to curb unwanted and illegal robocalls. Issuing guidance that emphasizes the meaningful requirements of current regulations as well as the requirements of the federal E-Sign Act will go a long way to reduce the number of unwanted robocalls. Thank you for your consideration of this request.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged Google CEO Sundar Pichai to provide more clarity into his company’s deployment of Med-PaLM 2, an artificial intelligence (AI) chatbot currently being tested in health care settings. In a letter, Sen. Warner expressed concerns about reports of inaccuracies in the technology, and called on Google to increase transparency, protect patient privacy, and ensure ethical guardrails.
In April, Google began testing Med-PaLM 2 with customers, including the Mayo Clinic. Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While the technology has shown some promising results, there are also concerning reports of repeated inaccuracies and of Google’s own senior researchers expressing reservations about the readiness of the technology. Additionally, much remains unknown about where Med-PaLM 2 is being tested, what data sources it learns from, to what extent patients are aware of and can object to the use of AI in their treatment, and what steps Google has taken to protect against bias.
“While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” Sen. Warner wrote.
The letter raises concerns over AI companies prioritizing the race to establish market share over patient well-being. Sen. Warner also emphasizes his previous efforts to raise the alarm about Google skirting health privacy as it trained diagnostic models on sensitive health data without patients’ knowledge or consent.
“It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI,” Sen. Warner continued.
The letter poses a broad range of questions for Google to answer, requesting more transparency into exactly how Med-PaLM 2 is being rolled out, what data sources Med-PaLM 2 learns from, how much information and agency patients have over how AI is involved in their care, and more.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In April, Sen. Warner directly expressed concerns to several AI CEOs – including Sundar Pichai – about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure. Last month, he called on the Biden administration to work with AI companies to develop additional guardrails around the responsible deployment of AI. He has also introduced several pieces of legislation aimed at making tech more secure, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
A copy of the letter can be found here and below.
Dear Mr. Pichai,
I write to express my concern regarding reports that Google began providing Med-PaLM 2 to hospitals to test early this year. While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors.
Over the past year, large technology companies, including Google, have been rushing to develop and deploy AI models and capture market share as the technology has received increased attention following OpenAI’s launch of ChatGPT. Numerous media outlets have reported that companies like Google and Microsoft have been willing to take bigger risks and release more nascent technology in an effort to gain a first mover advantage. In 2019, I raised concerns that Google was skirting health privacy laws through secretive partnerships with leading hospital systems, under which it trained diagnostic models on sensitive health data without patients’ knowledge or consent. This race to establish market share is readily apparent and especially concerning in the health care industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in health care institutions in recent years, and the sensitivity of health information. One need look no further than AI pioneer Joseph Weizenbaum’s experiments involving chatbots in psychotherapy to see how users can put premature faith in even basic AI solutions.
According to Google, Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While AI models have previously been used in medical settings, the use of generative AI tools presents complex new questions and risks. According to the Wall Street Journal, a senior research director at Google who worked on Med-PaLM 2 said, “I don’t feel that this kind of technology is yet at a place where I would want it in my family’s healthcare journey.” Indeed, Google’s own research, released in May, showed that Med-PaLM 2’s answers contained more inaccurate or irrelevant information than answers provided by physicians. It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI.
Given these serious concerns and the fact that VHC Health, based in Arlington, Virginia, is a member of the Mayo Clinic Care Network, I request that you provide answers to the following questions.
- Researchers have found large language models to display a phenomenon described as “sycophancy,” wherein the model generates responses that confirm or cater to a user’s (tacit or explicit) preferred answers, which could produce risks of misdiagnosis in the medical context. Have you tested Med-PaLM 2 for this failure mode?
- Large language models frequently demonstrate the tendency to memorize contents of their training data, which can risk patient privacy in the context of models trained on sensitive health information. How has Google evaluated Med-PaLM 2 for this risk and what steps has Google taken to mitigate inadvertent privacy leaks of sensitive health information?
- What documentation did Google provide hospitals, such as Mayo Clinic, about Med-PaLM 2? Did it share model or system cards, datasheets, data statements, and/or test and evaluation results?
- Google’s own research acknowledges that its clinical models reflect scientific knowledge only as of the time the model is trained, necessitating “continual learning.” What is the frequency with which Google fully or partially re-trains Med-PaLM 2? Does Google ensure that licensees use only the most up-to-date model version?
- Google has not publicly provided documentation on Med-PaLM 2, including refraining from disclosing the contents of the model’s training data. Does Med-PaLM 2’s training corpus include protected health information?
- Does Google ensure that patients are informed when Med-PaLM 2, or other AI models offered or licensed by Google, are used in their care by health care licensees? If so, how is the disclosure presented? Is it part of a longer disclosure, or is it presented more clearly on its own?
- Do patients have the option to opt-out of having AI used to facilitate their care? If so, how is this option communicated to patients?
- Does Google retain prompt information from health care licensees, including protected health information contained therein? Please list each purpose Google has for retaining that information.
- What license terms exist in any product license to use Med-PaLM 2 to protect patients, ensure ethical guardrails, and prevent misuse or inappropriate use of Med-PaLM 2? How does Google ensure compliance with those terms in the post-deployment context?
- At how many hospitals is Med-PaLM 2 currently in use? Please provide a list of all hospitals and health care systems with which Google has licensed or otherwise shared Med-PaLM 2.
- Does Google use protected health information from hospitals using Med-PaLM 2 to retrain or fine-tune Med-PaLM 2 or any other models? If so, does Google require that hospitals inform patients that their protected health information may be used in this manner?
- In Google’s own research publication announcing Med-PaLM 2, researchers cautioned about the need to adopt “guardrails to mitigate against over-reliance on the output of a medical assistant.” What guardrails has Google adopted to mitigate over-reliance on the output of Med-PaLM 2 as well as when it particularly should and should not be used? What guardrails has Google incorporated through product license terms to prevent over-reliance on the output?
###
Warner, Fischer Lead Bipartisan Reintroduction of Legislation to Ban Manipulative 'Dark Patterns'
Jul 28 2023
WASHINGTON – This week, U.S. Sens. Mark R. Warner (D-VA) and Deb Fischer (R-NE), joined by Sens. Amy Klobuchar (D-MN) and John Thune (R-SD), introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act to prohibit large online platforms from using deceptive user interfaces, known as “dark patterns,” to trick consumers into handing over their personal data. The bill would also require these platforms to obtain consent from users for covered research and prohibit them from using features that result in compulsive usage by children and teens.
The term “dark patterns” is used to describe online interfaces in websites and apps designed to intentionally manipulate users into taking actions they otherwise would not. These design tactics are frequently used by social media platforms to mislead consumers into agreeing to settings and practices more beneficial to the company.
“Dark patterns – manipulative online designs that trick you into signing up for services you don’t want or spending money you don’t mean to – are everywhere online, and they make user experience worse, and data less secure. The DETOUR Act will end this practice while working to instill transparency and oversight that the tech world lacks,” said Sen. Warner. “Consumers shouldn’t have to navigate intentionally misleading interfaces and design features in order to protect their privacy.”
“Manipulative 'dark pattern' interfaces trick users – including children – online. The ‘choices’ platforms present can often be deceptively obscured to exploit users' personal data and behavior,” said Sen. Fischer. “It’s wrong, and our bipartisan bill will finally crack down on this harmful practice. I encourage my colleagues to support the DETOUR Act to increase trust online and protect consumer privacy.”
Dark patterns can take various forms, pushing users into agreeing to terms stacked in favor of the service provider. These deceptive practices can include deliberately obscuring alternate choices or settings through design or other means, or using privacy settings that push users to ‘agree’ as the default option while more privacy-friendly options can be found only through a much longer process, detouring through multiple screens. Frequently, users cannot find the alternate option, if it exists at all, and simply give up looking.
The result is that large online platforms have an unfair advantage over users and often force consumers to give up personal data such as their contacts, messages, web activity, or location for the benefit of the company.
The Deceptive Experiences To Online Users Reduction (DETOUR) Act aims to curb this manipulative behavior by prohibiting large online platforms (those with over 100 million monthly active users) from relying on user interfaces that intentionally impair user autonomy, decision-making, or choice. The legislation:
- Prohibits large online operators from designing, modifying, or manipulating a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.
- Prohibits subdividing or segmenting consumers for the purposes of behavioral experiments without a consumer’s informed consent, which cannot be buried in a general contract or service agreement. The bill also requires large online operators to disclose to users and the public, not less than once every 90 days, any behavioral or psychological experiments they conduct. Additionally, the bill would require large online operators to create an internal Independent Review Board to provide oversight on these practices to safeguard consumer welfare.
- Prohibits user interface design intended to create compulsive usage among children and teens under the age of 17.
“Social media companies often trick users into giving up their personal data – everything from their thoughts and fears to their likes and dislikes – which they then sell to advertisers. These practices are designed to exploit people, not to serve them better. Senator Warner and Senator Fischer’s DETOUR Act would put a stop to the destructive and deceptive use of dark patterns,” said Imran Ahmed, CEO of the Center for Countering Digital Hate.
“Momentum is building, in Congress and across the states, to force tech companies to reduce the serious harm to kids and teens caused by the way that these companies design and operate their platforms," said James P. Steyer, founder and CEO of Common Sense Media. “The reintroduction of the DETOUR Act comes at just the right time to add another important element of protection for children and their families. We applaud Senators Warner and Fischer for working together to try to stop companies from utilizing manipulative design features that trick kids into giving up more personal information and into compulsive usage of their platforms for the sake of increasing their profits and engagement without regard for the harm it inflicts on kids.”
“The proposed legislation represents an important step towards reducing big tech companies’ use of dark patterns that prioritize user engagement over well-being. As a developmental scientist, I’m hopeful the DETOUR Act will encourage companies to adopt a child-centered approach to design that places children’s well-being front and center, reducing the burden on parents to look out for and avoid dark patterns in their children’s technology experiences,” said Katie Davis, EdD, Associate Professor at the University of Washington.
“The DETOUR Act proposed by Sen. Warner and co-sponsors represents a positive and important step to protect American consumers,” said Colin M. Gray, PhD, Associate Professor, Indiana University. “DETOUR provides a mechanism for independent oversight over large technology companies, curtailing the ability of these companies to use deceptive and manipulative design practices, such as ‘dark patterns,’ which have been shown to produce substantial harms to users. This legislation provides a foothold for regulators to better guard against deceptive and exploitative practices that have become rampant in many large technology companies, and which have had outsized impacts on children and underserved communities.”
Sen. Warner, a former tech entrepreneur, has been one of Congress’s leading voices calling for accountability in Big Tech. He has introduced several pieces of legislation aimed at addressing these issues, including the ACCESS Act, introduced earlier this week, which would promote competition in social media by making it easier to port user data to new sites; the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Kids Online Safety Act, which would protect children online by providing young people and parents with the tools, safeguards, and transparency they need to protect against online harms.
Full text of the bill is available here.
###
Warner, Colleagues Reintroduce Bipartisan Legislation to Encourage Competition in Social Media
Jul 26 2023
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, led a bipartisan group of colleagues in reintroducing the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, legislation that would encourage market-based competition with major social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings. Sen. Warner was joined in introduction by Sens. Richard Blumenthal (D-CT), Lindsey Graham (R-SC), Josh Hawley (R-MO), and Amy Klobuchar (D-MN).
“Consumers are currently locked into the social media platforms that they use, unable to move to a different platform for fear of losing years’ worth of data and interactions,” said the senators. “Interoperability and portability are powerful tools to promote innovative new companies and limit anti-competitive behaviors. By making it easier for social media users to move their data or to continue to communicate with their friends after switching platforms, startups will be able to compete on equal terms with the biggest social media companies. This bill will create long-overdue requirements that will boost competition and give consumers more power.”
Online communications platforms have become vital to the economic and social fabric of the nation, but network effects and consumer lock-in have solidified a select number of companies’ dominance in the digital market and enhanced their control over consumer data, even as the social media landscape changes by the day and platforms’ user experiences become more and more unpredictable.
The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act would increase market competition, encourage innovation, and increase consumer choice by requiring large communications platforms (products or services with over 100 million monthly active users in the U.S.) to:
· Make their services interoperable with competing communications platforms;
· Permit users to easily port their personal data in a structured, commonly used, and machine-readable format;
· Allow users to delegate trusted custodial services, which are required to act in a user’s best interests through a strong duty of care, with the task of managing their account settings, content, and online interactions.
“Markets work only when consumers know what they give up and get in any transaction with a seller and have the option to take their business elsewhere. By supporting organizations that can uncover what tech firms are actually doing and by mandating portability, the ACCESS Act will restore the conditions needed for the market in tech services to work,” Paul Romer, Boston College University Professor and Nobel Prize winner in Economics, said.
“The ACCESS Act is a critical, bipartisan first step in requiring large technology platforms to incorporate interoperability into their products, which is fundamental to a dynamic and competitive technology industry. Innovators, consumers, and society as a whole all benefit when people have the right to move their data if they choose to switch platforms. Without interoperability, innovation is held captive by the market power of large platforms. Our economy needs innovation to thrive – and innovation is stifled if our most promising startups must compete in a world where consumers are locked into the largest platforms because they can't move their own data. That is in no one's interest,” Garry Tan, president and CEO of Y Combinator, said.
“Interoperability is a key tool for promoting competition on and against dominant digital platforms. For social networks in particular, interoperability is needed to make it easy for users to switch to a new social network. Until we have clear and effective interoperability requirements, it will be hard for users to leave a social network that fails to reflect their values, protect their privacy, or offer the best experience. Whatever our reasons for switching to a new social network, the ACCESS Act can make it easier by requiring the largest platforms to offer interoperability with competitors. We all stand to benefit from the greater competition that an interoperable world can create,” Charlotte Slaiman, Competition Policy Director at Public Knowledge, said.
“The reintroduction of the ACCESS Act in the Senate is a critically important step forward for empowering consumers with the freedom to control their own data and enable consumers to leave the various walled gardens of today’s social media platforms. The ACCESS Act literally does what it says—it would give consumers the option to choose better services without having to balance the unfair choice of abandoning their personal network of family and friends in order to seek better products in the market. The Senate needs to move forward as soon as possible to vote on the ACCESS Act,” Eric Migicovsky, Founder and CEO of Beeper, said.
“Consumers must have control of their own personal data. You should be able to easily access it, share it, revoke access, and interact with it how you see fit. Putting individuals in charge of what is best for them is vital to balance out the ongoing wave of technological innovation. This has broad implications beyond just social media – Congress must pass the ACCESS Act,” David Pickerell, Co-founder and CEO of Para, said.
Sen. Warner first introduced the ACCESS Act in 2019 and, as a former tech entrepreneur, has been one of Congress’s leading voices calling for accountability in Big Tech. He has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Kids Online Safety Act, which would protect children online by providing young people and parents with the tools, safeguards, and transparency they need to protect against online harms.
Full text of the bill is available here. One-pager of the legislation is available here.
Bipartisan U.S. Senators Unveil Crypto Anti-Money Laundering Bill to Stop Illicit Transfers
Jul 19 2023
WASHINGTON – In an effort to prevent money laundering and stop crypto-facilitated crime and sanctions violations, a leading group of U.S. Senators is introducing new, bipartisan legislation requiring decentralized finance (DeFi) services to meet the same anti-money laundering (AML) and economic sanctions compliance obligations as other financial companies, including centralized crypto trading platforms, casinos, and even pawn shops. The legislation also modernizes key Treasury Department anti-money laundering authorities, and sets new requirements to ensure that “crypto kiosks” don’t become a vector for laundering the proceeds of illicit activities.
DeFi generally refers to applications that facilitate peer-to-peer financial transactions that are recorded on blockchains. The most prominent examples of DeFi are so-called “decentralized exchanges,” where automated software purportedly allows users to trade cryptocurrencies without using intermediaries.
By design, DeFi provides anonymity. This can allow malicious and criminal actors to evade traditional financial regulatory tools, including longstanding and well-developed rules requiring financial institutions to monitor all transactions and report suspected money laundering and financial crime to the Financial Crimes Enforcement Network (FinCEN), which is a bureau of the U.S. Treasury Department. As a result, DeFi can be used to launder criminal proceeds and fund further crime.
Criminals, drug traffickers, and hostile state actors such as North Korea have all demonstrated a propensity for using DeFi as a preferred method of transferring and laundering ill-gotten gains. These bad actors have been quick to recognize how DeFi can be exploited to advance nefarious activities like cross-border fentanyl trafficking and financing the development of weapons of mass destruction.
According to the most recent U.S. National Money Laundering Risk Assessment: “DeFi services often involve no AML or other processes to identify customers.” According to another recent Treasury Department report, “illicit actors, including ransomware cybercriminals, thieves, scammers, and Democratic People’s Republic of Korea (DPRK) cyber actors, are using DeFi services in the process of transferring and laundering their illicit proceeds. To accomplish this, illicit actors are exploiting vulnerabilities in the U.S. and foreign AML regulatory, supervisory, and enforcement regimes as well as the technology underpinning DeFi services.”
Noting that transparency and sensible rules are vital for protecting the financial system from crime, U.S. Senators Jack Reed (D-RI), Mike Rounds (R-SD), Mark Warner (D-VA), and Mitt Romney (R-UT) today unveiled the Crypto-Asset National Security Enhancement and Enforcement (CANSEE) Act (S. 2355). This legislation targets money laundering and sanctions evasion involving DeFi.
The CANSEE Act would end special treatment for DeFi by applying the same national security laws that apply to banks and securities brokers, casinos and pawn shops, and even other cryptocurrency companies like centralized trading platforms. That means DeFi services would be forced to meet basic obligations, most notably to maintain AML programs, conduct due diligence on their customers, and report suspicious transactions to FinCEN.
These requirements will close an attractive avenue for money laundering that has been routinely exploited over the past several months by the North Korean government, Chinese chemicals manufacturers, Mexican drug cartels, cybercriminals, ransomware attackers, scammers, and a host of other bad actors.
The legislation also makes clear that if a sanctioned person, like a Russian oligarch, uses a DeFi service to evade U.S. sanctions, then anyone who controls that project will be liable for facilitating that violation. If nobody controls a DeFi service, then—as a backstop—anyone who invests more than $25 million in developing the project will be responsible for these obligations.
The CANSEE Act would also require operators of crypto kiosks (also known as crypto ATMs) to improve traceability of funds by verifying the identities of each counterparty to each transaction using a kiosk. Unless these vulnerabilities are addressed, criminals will continue to exploit these kiosks to launder money from drug trafficking, human trafficking, scams, and other crimes.
Featuring an interface similar to regular ATMs, crypto ATMs are often found at convenience stores, laundromats, and gas stations. Users can insert cash or a debit card into the machine to turn their real money into cryptocurrency, which is then transferred into a digital wallet that scammers can access. Once a transfer is complete, users cannot get their money back. Currently, there are about 30,600 crypto ATMs across the country – up from 1,200 in 2018, according to Coin ATM Radar.
Finally, the CANSEE Act makes important updates to the Treasury Department’s authority to require participants in the U.S. financial system to take special measures against money laundering threats. Currently, these authorities are limited to transactions conducted in the traditional banking system. But as new technologies like cryptocurrency increasingly enable new ways to conduct financial transactions, it is critical to extend Treasury’s authority to crack down on illicit financial activity that may occur outside the banking sector.
“DeFi and crypto ATMs are part of a largely unregulated technology that needs stronger oversight and guardrails to prevent rampant money laundering and sanctions evasion,” said Sen. Reed. “This legislation bolsters the Treasury Department’s tools to protect our national and economic security. Drug cartels, sex traffickers, and the like shouldn’t be able to use DeFi platforms to avoid justice – their victims deserve better. Our bill will also ensure that law enforcement has access to better information about cryptocurrency transactions, which they need to fight crimes like cross-border drug trafficking, weapons proliferation, and ransomware attacks. We must protect the integrity of the financial system from new and emerging threats from the worst criminal organizations and malicious state actors.”
“Our adversaries and criminals worldwide are using creative ways every day to take advantage of the United States financial system, and we should not allow them to exploit American innovation to evade sanctions and launder money,” said Sen. Rounds. “As more Americans start to use and invest in cryptocurrency, both DeFi platforms and crypto kiosks remain in the blind spot of regulation. This targeted legislation kicks off an important debate on how to protect our financial system and give law enforcement the tools they need to prosecute bad actors.”
“As Chair of the Senate Intelligence Committee, I remain deeply concerned that criminals and rogue states continue to use crypto to launder money, evade sanctions, and conceal illicit activity. The targeted package we’re introducing today will help address specific problems in decentralized finance and crypto kiosks, and incorporates the Special Measures to Address Modern Threats bill I introduced in the last Congress to modernize FinCEN’s existing anti-money laundering authorities,” said Sen. Warner. “I believe these focused measures will help maintain the robust AML and sanctions enforcement we need to protect our national security, while allowing participants who play by the rules to continue to take advantage of the potential of distributed ledger technologies.”
“Malign actors—including China-based fentanyl manufacturers and drug cartels operating along the southern border—are capitalizing on existing loopholes under current law to evade sanctions using decentralized finance services,” said Sen. Romney. “By fortifying U.S. anti-money laundering frameworks, our legislation cracks down on crypto-facilitated crimes and ultimately reinforces our national security.”
###
WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA), Mazie Hirono (D-HI), Amy Klobuchar (D-MN), Tim Kaine (D-VA), and Richard Blumenthal (D-CT), along with U.S. Reps. Kathy Castor (D-FL-14) and Mike Levin (D-CA-49), reintroduced the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act to reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms.
“For too long, Section 230 has given cover to social media companies as they turn a blind eye to the harmful scams, harassment, and violent extremism that run rampant across their platforms,” said Sen. Warner, a former technology entrepreneur and the Chairman of the Senate Select Committee on Intelligence. “When Section 230 was enacted over 25 years ago, the internet we use today was not even fathomable. This legislation takes strides to update a law that was meant to encourage service providers to develop tools and policies to support effective moderation and allows them to finally be held accountable for the harmful, often criminal behavior that exists on their platforms.”
“Social media platforms allow people to connect all across the world—but they can also cause great pain and suffering when used as a tool for cyberbullying, stalking, spreading hate, and more. The way we communicate as a society has changed drastically over the last 25 years; it’s time for our laws to catch up,” said Sen. Hirono, a member of the Senate Judiciary Committee. “The SAFE TECH Act targets the worst abuses perpetrated on internet platforms to better protect our children and our communities from the very real harms of social media.”
“We need to be asking more from big tech companies, not less. How they operate has a real-life effect on the safety and civil rights of Americans and people around the world, as well as our democracy. Our legislation will hold these platforms accountable for ads and content that can lead to real-world harm,” said Sen. Klobuchar.
“Congress has acted in the past to ensure that social media companies don’t get blanket immunity after hosting information on their websites aimed at facilitating human or sex trafficking,” said Sen. Kaine. “I’m fully supportive of using that precedent as a roadmap to require social media companies to moderate dangerous content linked to other crimes—like cyber-stalking, discrimination, and harassment—in a responsible way. This is critical to keep our communities safe.”
“Section 230’s blanket immunity has prioritized Big Tech over Americans’ civil rights and safety. Platforms’ refusal to be held accountable for the dangerous and harmful content they host has real-life implications for users – leaving many vulnerable to threats like stalking, intimidation, and harassment, as well as discrimination,” said Sen. Blumenthal. “Our legislation is needed to safeguard consumers and ensure social media giants aren’t shielded from the legal consequences of failing to act. These common sense protections are essential in today’s online world.”
“For too long, big tech companies have treated the internet like the wild west while users on their platforms violate civil and human rights, defraud consumers and harass others. These companies have shown over and over again that they are unwilling to make their platforms safe for Americans. It is long past time for consumers to have legal recourse when big tech companies harm them or their families. Our bill will ensure they are held accountable,” said Rep. Castor.
“Social media companies continue to allow malicious users to go unchecked, harm other users, and violate laws. This cannot go on and it is clear federal reform is necessary,” said Rep. Levin. “Our bicameral legislation makes much needed updates to Section 230 to ensure Americans can safely use online platforms and have legal recourse when they are harmed. It’s long past time that these legislative fixes are made and I look forward to this bill moving through the Congress.”
Specifically, the SAFE TECH Act would force online service providers to address misuse on their platforms or face civil liability. The legislation would make clear that Section 230:
- Doesn’t apply to ads or other paid content – ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers with ads enabling frauds and scams;
- Doesn’t bar injunctive relief – allowing victims to seek court orders where misuse of a provider’s services is likely to cause irreparable harm;
- Doesn’t impair enforcement of civil rights laws – maintaining the vital and hard-fought protections from discrimination even when activities or services are mediated by internet platforms;
- Doesn’t interfere with laws that address stalking/cyber-stalking or harassment and intimidation on the basis of protected classes – ensuring that victims of abuse and targeted harassment can hold platforms accountable when they directly enable harmful activity;
- Doesn’t bar wrongful death actions – allowing the family of a decedent to bring suit against platforms where they may have directly contributed to a loss of life;
- Doesn’t bar suits under the Alien Tort Claims Act – potentially allowing victims of platform-enabled human rights violations abroad to seek redress in U.S. courts against U.S.-based platforms.
Sen. Warner first introduced the SAFE TECH Act in 2021 and is one of Congress’ leading voices in demanding accountability and user protections from social media companies. Last week, Sen. Warner pressed Meta on Facebook's role in inciting violence around the world. In addition to the SAFE TECH Act, Sen. Warner has introduced and written numerous bills aimed at improving transparency, privacy, and accountability on social media. These include the Deceptive Experiences to Online Users Reduction (DETOUR) Act – legislation to prohibit large online platforms from using deceptive user interfaces, known as “dark patterns,” to trick consumers into handing over their personal data – and the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, legislation that would encourage market-based competition with dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings.
“The onslaught of misinformation and discriminatory attacks across social media platforms continues unabated. It is essential that the tech companies that run these platforms protect their users and end the rampant civil rights violations of Black users and other users of color. Social media remains a virtually unchecked home for hateful content and discrimination, especially through the manipulation of algorithms that lead to both the targeting and limiting of which users see certain types of advertisements and opportunities. Congress can take a step in the right direction by strengthening Section 230 and ensuring that online communities are not safe harbors for discrimination and civil rights violations. LDF supports Senator Warner and Senator Hirono’s bill to address these critical concerns,” said Lisa Cylar Barrett, Director of Policy, Legal Defense Fund (LDF).
“There needs to be real clarity on Section 230. The hate that festers online – antisemitism, Islamophobia, racism, misogyny, and disinformation – leads to real violence, real lives targeted, real people put at risk. ADL supports the ability for people affected by violence to hold perpetrators accountable – and that includes social media companies. ADL appreciates the efforts of Senators Warner, Hirono, Klobuchar, and Kaine to tackle this complex challenge. We look forward to working with them to refine this legislation to ensure a safer and less hate-filled internet for all users,” said Jonathan A. Greenblatt, CEO of ADL (Anti-Defamation League).
“Platforms should not profit from targeting employment ads toward White users, or from targeting voter suppression ads toward Black users. The SAFE TECH Act makes it clear that Section 230 does not give platforms a free pass to violate civil rights laws, while also preserving the power of platforms to remove harmful disinformation,” said Spencer Overton, President, Joint Center for Political and Economic Studies.
“I applaud the SAFE TECH Act introduced by Sens. Warner and Hirono which provides useful modifications to section 230 of the 1996 Communications Decency Act to limit the potential negative impacts of commercial advertising interests while continuing to protect anti-harassment and civil and human rights interests of those who may be wrongfully harmed through wrongful online activity,” Ramesh Srinivasan, Professor at the UCLA Department of Information Studies and Director of UC Digital Cultures Lab, said.
“It is glaringly apparent that we cannot rely on the tech companies to implement common sense policies that reflect common decency on their own. We thank and commend Senators Warner, Hirono, Klobuchar, and Kaine for their foresight and for showing their commitment to the safety of our citizens by putting forth the SAFE TECH Act. The SAFE TECH Act will continue to protect free speech and further protect our civil rights while sensibly amending Section 230, an outdated law that the tech companies hide behind in their refusal to take responsibility for real-life consequences,” said Wendy Via, Co-founder, Global Project Against Hate and Extremism.
“The Cyber Civil Rights Initiative welcomes this effort to protect civil rights in the digital age and to hold online intermediaries accountable for their role in the silencing and exploitation of vulnerable communities. This bill addresses the urgent need to limit and correct the overzealous interpretation of Section 230 that has granted a multibillion dollar industry immunity and impunity for profiting from irreparable injury,” said Mary Anne Franks, President, Cyber Civil Rights Initiative and Danielle K. Citron, Vice President, Cyber Civil Rights Initiative.
“Social media companies have enabled hate, threats and even genocide against Muslims with virtual impunity. The SAFE TECH Act would bring needed and long-overdue accountability to these companies,” said Muslim Advocates Senior Policy Counsel Sumayyah Waheed. “We thank Sens. Warner, Hirono, Klobuchar, Kaine and Blumenthal for leading on this important bill. Every day, Muslims are profiled, discriminated against, attacked and worse just for engaging in public life. Passing this bill would bring us one step closer to ensuring that Muslims and other marginalized communities can hold social media companies accountable for the reckless way they violate people’s rights and threaten their safety on and offline.”
“The SAFE TECH Act is an important step forward for platform accountability and for the protection of privacy online. Providing an opportunity for victims of harassment, privacy invasions, and other violations to remove unlawful content is critical to stopping its spread and limiting harm,” said Caitriona Fitzgerald, Deputy Director, Electronic Privacy Information Center (EPIC).
“The SAFE TECH Act is a Section 230 reform America needs now. Troubling readings of Section 230 have encouraged reckless and negligent shirking by platforms of basic duties toward their users. Few if any of the drafters of Section 230 could have imagined that it would be opportunistically used to, for example, allow dating sites to ignore campaigns of harassment and worse against their users. The SAFE TECH Act reins in the cyberlibertarian ethos of over-expansive interpretations of Section 230, permitting courts to carefully weigh and assess evidence in cases where impunity is now preemptively assumed,” said Frank Pasquale, Author of The Black Box Society and Professor at Brooklyn Law School.
“It is unacceptable that courts have interpreted Section 230 to provide Big Tech platforms with blanket immunity from wrongdoing. Congress never intended Section 230 to shield companies from all civil and criminal liability. Reforms proposed by Sens. Warner and Hirono are an important step in the right direction. It is time to hold Big Tech accountable for the harms they cause children and families and other vulnerable populations,” said James P. Steyer, Founder and CEO, Common Sense.
“The SAFE TECH Act aims to hold social media giants accountable for spreading harmful misinformation and hateful language that affects Black communities and limits our voting power,” said Brandon Tucker, Sr. Director of Policy & Government Affairs at Color Of Change. “Social media companies have used Section 230 as a shield against legal repercussions for their continued civil rights violations across their platforms. When we released our Black Tech Agenda and Scorecard last year, we made sure that the SAFE TECH Act was a key criterion in marking legislators’ progress toward advancing tech policy solutions with a racial justice framework. We call on members of Congress to support this critical legislation to protect Black people’s rights and safety online.”
“It has become abundantly clear that disinformation and hate on social media can create real-world harms. Whether it’s anti-vaxx misinformation, election-related lies or hate, it is now clear that there is a significant threat to human life, civil rights and national security. The problem is crazy incentives, where bad actors can freely spread hate and misinformation, platforms profit from traffic regardless of whether it is productive or damaging, but the costs are borne by the public and society at large. This timely bill forensically delineates the harms and ensures perpetrators and enablers pay a price for the harms they create. In doing so, it reflects our desire for better communication technologies, which enhance our right to speak and be heard, and that also respect our fundamental rights to life and safety,” said Imran Ahmed, CEO, Center for Countering Digital Hate.
“Senator Mark Warner is a leader in ensuring that technology supports democracy even as it advances innovation. This legislation removes obstacles to enforcement against online discrimination, cyber-stalking, and targeted harassment and incentivizes platforms to move past the current, ineffective whack-a-mole approach to harms,” said Karen Kornbluh, Former US Ambassador to the Organization for Economic Co-operation and Development.
Full text of legislation is available here.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), a member of the Senate Banking Committee and a lead author of the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010, which created the Consumer Financial Protection Bureau (CFPB), released the following statement after the Supreme Court announced it will hear arguments next term in a case with far-reaching implications for the constitutionality of the CFPB, CFPB v. Community Financial Services Association of America:
“Congress created the Consumer Financial Protection Bureau after the financial crisis to enforce consumer protection laws and make sure the banks, credit card companies and other financial institutions aren’t abusing their powers to take advantage of everyday Americans. If the Fifth Circuit’s decision, which could make every rule put forward by the CFPB unconstitutional, is permitted to stand, there will be financial chaos as all sorts of transactions governed by CFPB policies could grind to a halt, and consumers would be left without the protections they expect and deserve.”
Since its creation in 2010, the CFPB has recovered nearly $15 billion in financial relief for consumers.
###
WASHINGTON – With the privacy debate receiving renewed attention in Congress, U.S. Sens. Mark R. Warner (D-VA), Deb Fischer (R-NE), Amy Klobuchar (D-MN), and John Thune (R-SD) and Reps. Lisa Blunt Rochester (D-DE-AL) and Anthony Gonzalez (R-OH-16) today announced that their bipartisan, bicameral DETOUR Act – legislation that would prevent large online platforms from using deceptive user interfaces, known as “dark patterns,” to trick consumers into handing over their personal data – has picked up several new endorsements.
“We are pleased to see growing momentum behind our bipartisan effort to ban these manipulative practices,” said the members of Congress today. “There’s an increasing consensus in Congress that Americans should be able to make informed choices about handing over their data to large platform companies.”
The term “dark patterns” is used to describe online interfaces in websites and apps designed to intentionally manipulate users into taking actions they would otherwise not. These design tactics, drawn from extensive behavioral psychology research, are frequently used by social media platforms to mislead consumers into agreeing to settings and practices advantageous to the company.
The DETOUR Act would also prohibit large platforms from deploying features that encourage compulsive usage by children and from conducting behavioral experiments without a consumer’s consent.
"The American Psychological Association supports the efforts of Senators Mark Warner, Deb Fischer, Amy Klobuchar and John Thune to reduce harmful practices and deceptive tactics by social media companies. These practices can be especially harmful to children, but adults are also susceptible,” said Mitch Prinstein, PhD, Chief Science Officer at the American Psychological Association. “Through my research and that of my colleagues in psychological science, we increasingly understand how these companies can mislead individuals. This is why we support the DETOUR Act and its aim to protect social media users.”
“Social media companies often trick users into giving up their personal data – everything from their thoughts and fears to their likes and dislikes – which they then sell to advertisers. These practices are designed to exploit people, not to serve them better. Senator Warner and Senator Fischer’s DETOUR Act would put a stop to the destructive and deceptive use of dark patterns,” said Imran Ahmed, CEO of the Center for Countering Digital Hate.
“The DETOUR Act is an important step towards curbing Big Tech’s unfair design choices that manipulate users into acting against their own interests. We are particularly excited by the provision that prohibits designs that cultivate compulsive use in children,” said Josh Golin, Executive Director of Fairplay. “Over the past year, we’ve heard a lot of talk from members of Congress about the need to protect children and teens from social media harms. It’s time to put those words into action – pass the DETOUR Act!”
“The DETOUR Act proposed by Sen. Warner and co-sponsors represents a positive and important step to protect American consumers. DETOUR provides a mechanism for independent oversight of large technology companies and for curtailing the ability of these companies to use deceptive and manipulative design practices, such as ‘dark patterns,’ which have been shown to produce substantial harms to users,” said Colin M. Gray, PhD, Associate Professor at Purdue University. “This legislation provides a foothold for regulators to better guard against deceptive and exploitative practices that have become rampant in many large technology companies, and which have had outsized impacts on children and underserved communities.”
“The proposed legislation represents an important step towards reducing big tech companies’ use of dark patterns that prioritize user engagement over well-being,” said Katie Davis, EdD, Associate Professor at the University of Washington. “As a developmental scientist, I’m hopeful the DETOUR Act will encourage companies to adopt a child-centered approach to design that places children’s well-being front and center, reducing the burden on parents to look out for and avoid dark patterns in their children’s technology experiences.”
The legislation was also previously supported by Mozilla, Common Sense, and the Center for Digital Democracy. Full text of the DETOUR Act is available here.
###
WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) led a bipartisan group of colleagues in reintroducing the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, legislation that will encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings, if they so choose. Sens. Richard Blumenthal (D-CT), Lindsey Graham (R-SC), Josh Hawley (R-MO), and Amy Klobuchar (D-MN) joined Sen. Warner in introducing the legislation.
“The tremendous dominance of a handful of large social media platforms has major downsides – including few options for consumers who face a marketplace with just a few major players and little in the way of real competition,” the senators said. “As we learned in the Microsoft antitrust case, interoperability and portability are powerful tools to restrain anti-competitive behaviors and promote innovative new companies. By making it easier for social media users to easily move their data or to continue to communicate with their friends after switching platforms, startups will be able to compete on equal terms with the biggest social media companies. Additionally, empowering trusted custodial companies to step in on behalf of users to better manage their accounts across different platforms will help balance the playing field between consumers and companies. In other words – by enabling portability, interoperability, and delegatability, this bill will create long-overdue requirements that will boost competition and give consumers the power to move their data from one service to another.”
Online communications platforms have become vital to the economic and social fabric of the nation, but network effects and consumer lock-in have entrenched a select number of companies’ dominance in the digital market and enhanced their control over consumer data.
The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act would increase market competition, encourage innovation, and increase consumer choice by requiring large communications platforms (products or services with over 100 million monthly active users in the U.S.) to:
- Make their services interoperable with competing communications platforms.
- Permit users to easily port their personal data in a structured, commonly used and machine-readable format.
- Allow users to delegate trusted custodial services, which are required to act in a user’s best interests through a strong duty of care, with the task of managing their account settings, content, and online interactions.
“Markets work when consumers have a choice and know what’s going on. The ACCESS Act is an important step toward reestablishing this dynamic in the market for tech services. We must get back to the conditions that make markets work: when consumers know what they give a firm and what they get in return; and if they don’t like the deal, they can take their business elsewhere. By giving consumers the ability to delegate decisions to organizations working on their behalf, the ACCESS Act gives consumers some hope that they can understand what they are giving up and getting in the opaque world that the tech firms have created. By mandating portability, it also gives them a realistic option of switching to another provider,” Paul Romer, New York University Professor of Economics and Nobel Prize winner in Economics, said.
“Interoperability is a key tool for promoting competition on and against dominant digital platforms. For social networks in particular, interoperability is needed to make it easy for users to switch to a new social network. Until we have clear and effective interoperability requirements, it will be hard for users to leave a social network that fails to reflect their values, protect their privacy, or offer the best experience. Whatever our reasons for switching to a new social network, the ACCESS Act can make it easier by requiring the largest platforms to offer interoperability with competitors. We all stand to benefit from the greater competition that an interoperable world can create,” Charlotte Slaiman, Competition Policy Director at Public Knowledge, said.
"We now understand that the dominant tech platforms' exclusive control over the data we create as we interact with them is the source of extraordinary market power. That power distorts markets, reduces innovation and limits consumer choice. By requiring interoperability, the ACCESS Act empowers consumers, levels the playing field and opens the market to competition. Anyone who believes that markets work best when consumers are able to make informed choices should support this Act,” Brad Burnham, Partner and Co-Founder at Union Square Ventures, said.
“The reintroduction of the ACCESS Act in the Senate is a critically important step forward for empowering consumers with the freedom to control their own data and enable consumers to leave the various walled gardens of today’s social media platforms. The ACCESS Act literally does what it says—it would give consumers the option to choose better services without having to balance the unfair choice of abandoning their personal network of family and friends in order to seek better products in the market. The Senate needs to move forward as soon as possible to vote on the ACCESS Act,” Eric Migicovsky, Founder and CEO of Beeper, said.
Sen. Warner first introduced the ACCESS Act in 2019 and has been raising concerns about the implications of the lack of competition in social media for years.
Sen. Warner is one of Congress’ leading voices in demanding accountability and user protections from social media companies. In addition to the ACCESS Act, Sen. Warner has introduced and written numerous bills designed to improve transparency, privacy, and accountability on social media. These include the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act – legislation that would allow social media companies to be held accountable for enabling cyber-stalking, targeted harassment, and discrimination across platforms; the Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data (DASHBOARD) Act, bipartisan legislation that would require data harvesting companies to tell consumers and financial regulators exactly what data they are collecting from consumers and how it is being leveraged by the platform for profit; and the Deceptive Experiences to Online Users Reduction (DETOUR) Act, bipartisan and bicameral legislation that would prohibit large online platforms from using deceptive user interfaces, known as “dark patterns,” to trick consumers into handing over their personal data and would prohibit these platforms from using features that result in compulsive usage by children.
Full text of the bill is available here. One-pager of the legislation is available here.
###
Lawmakers Reintroduce Bipartisan Bicameral Legislation to Ban Manipulative 'Dark Patterns'
Dec 08 2021
WASHINGTON – Ahead of Wednesday’s Senate hearing with the head of Instagram, U.S. Sens. Mark R. Warner (D-VA), Deb Fischer (R-NE), Amy Klobuchar (D-MN), and John Thune (R-SD) along with Reps. Lisa Blunt Rochester (D-DE-AL) and Anthony Gonzalez (R-OH-16) have re-introduced the Deceptive Experiences to Online Users Reduction (DETOUR) Act to prohibit large online platforms from using deceptive user interfaces, known as “dark patterns,” to trick consumers into handing over their personal data. The DETOUR Act would also prohibit these platforms from using features that result in compulsive usage by children.
The term “dark patterns” is used to describe online interfaces in websites and apps designed to intentionally manipulate users into taking actions they would otherwise not. These design tactics, drawn from extensive behavioral psychology research, are frequently used by social media platforms to mislead consumers into agreeing to settings and practices advantageous to the company.
“For years dark patterns have allowed social media companies to use deceptive tactics to convince users to hand over personal data without understanding what they are consenting to. The DETOUR Act will end this practice while working to instill some level of transparency and oversight that the tech world currently lacks,” said Sen. Warner, Chairman of the Senate Select Committee on Intelligence and former technology executive. “Consumers should be able to make their own informed choices on when to share personal information without having to navigate intentionally misleading interfaces and design features deployed by social media companies.”
“Manipulative user interfaces that confuse people and trick consumers into sharing access to their personal information have become all too common online. Our bipartisan legislation would rein in the use of these dishonest interfaces and boost consumer trust. It’s time we put an end to ‘dark patterns’ and other manipulative practices to protect children online and ensure the American people can better protect their personal data,” said Sen. Fischer, a member of the Senate Commerce Committee.
“Dark patterns are manipulative tactics used to trick consumers into sharing their personal data. These tactics undermine consumers’ autonomy and privacy, yet they are becoming pervasive on many online platforms. This legislation would help prevent the major online platforms from using such manipulative tactics to mislead consumers, and it would prohibit behavioral experiments on users without their informed consent,” said Sen. Klobuchar, a member of the Senate Commerce and Judiciary Committees.
“We live in an environment where large online operators often deploy manipulative practices or ‘dark patterns’ to obtain consent to collect user data,” said Sen. Thune, ranking member of the Senate Commerce Committee’s Subcommittee on Communications, Media, and Broadband. “This bipartisan legislation would create a path forward to strengthen consumer transparency by holding large online operators accountable when they subject their users to behavioral or psychological research for the purpose of promoting engagement on their platforms.”
“My colleagues and I are introducing the DETOUR Act because Congress and the American public are tired of tech companies evading scrutiny and avoiding accountability for their actions. Despite congressional hearings and public outcries, many of these tech companies continue to trick and manipulate people into making choices against their own self-interest,” said Rep. Lisa Blunt Rochester. “Our bill would address some common tactics these companies use, like intentionally deceptive user interfaces that trick people into handing over their personal information. Our children, seniors, veterans, people of color, even our very way of life is at stake. We must act. And today, we are.”
“Social media has connected our communities, but also had detrimental effects on our society. Big tech companies that control these platforms currently have unregulated access to a wealth of information about their users and have used nontransparent methods, such as dark patterns, to gather additional information and manipulate users,” said Rep. Anthony Gonzalez. “The DETOUR Act would make these platforms more transparent through prohibiting the use of dark patterns. We live in a transformative period of technology, and it is important that the tech which permeates our day to day lives is transparent.”
Dark patterns can take various forms, often exploiting the power of defaults to push users into agreeing to terms stacked in favor of the service provider. Some examples of these actions include: a deliberate obscuring of alternative choices or settings through design or other means; the use of privacy settings that push users to ‘agree’ as the default option, while users looking for more privacy-friendly options often must click through a much longer process, detouring through multiple screens. Other times, users cannot find the alternative option, if it exists at all, and simply give up looking.
The result is that large online platforms have an unfair advantage over users and potential competitors in forcing consumers to give up personal data such as their contacts, messages, web activity, or location to the benefit of the company.
“Tech companies have clearly demonstrated that they cannot be trusted to self-regulate. So many companies choose to utilize manipulative design features that trick kids into giving up more personal information and into compulsive usage of their platforms, for the sake of increasing their profits and engagement without regard for the harm it inflicts on kids,” said Jim Steyer, CEO of Common Sense. “Common Sense supports Senators Warner and Fischer and Representatives Blunt Rochester and Gonzalez on this bill, which would rightfully hold companies accountable for these practices so kids can have a healthier and safer online experience.”
“‘Dark patterns’ and manipulative design techniques on the internet deceive consumers. We need solutions that protect people online and empower consumers to shape their own experience. We appreciate Senator Warner and Senator Fischer’s work to address these misleading practices,” said Jenn Taylor Hodges, Head of U.S. Public Policy at Mozilla.
“Manipulative design, efforts to undermine users’ independent decision making, and secret psychological experiments conducted by corporations are everywhere online. The exploitative commercial surveillance model thrives on taking advantage of unsuspecting users. The DETOUR Act would put a stop to this: prohibiting online companies from designing their services to impair autonomy and to cultivate compulsive usage by children under 13. It would also prohibit companies from conducting online user experiments without consent. If enacted, the DETOUR Act will make an important contribution to living in a fairer and more civilized digital world,” said Katharina Kopp, Director of Policy at Center for Digital Democracy.
The Deceptive Experiences To Online Users Reduction (DETOUR) Act aims to curb manipulative behavior by prohibiting the largest online platforms (those with over 100 million monthly active users) from relying on user interfaces that intentionally impair user autonomy, decision-making, or choice. The legislation:
- Prohibits large online operators from designing, modifying, or manipulating a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice in order to obtain consent or user data.
- Prohibits subdividing or segmenting consumers for the purposes of behavioral experiments without a consumer’s informed consent, which cannot be buried in a general contract or service agreement. This includes routine disclosures by large online operators, not less than once every 90 days, of any behavioral or psychological experiments to users and the public. Additionally, the bill would require large online operators to create an internal Independent Review Board to provide oversight on these practices to safeguard consumer welfare.
- Prohibits user design intended to create compulsive usage among children under the age of 13 (as currently defined by the Children’s Online Privacy Protection Act).
- Directs the FTC to create rules within one year of enactment to carry out the requirements related to informed consent, Independent Review Boards, and Professional Standards Bodies.
Sen. Warner first introduced the DETOUR Act in 2019 and has been raising concerns about the implications of social media companies’ reliance on dark patterns for years. In 2014, Sen. Warner asked the FTC to investigate Facebook’s use of dark patterns in an experiment involving nearly 700,000 users designed to study the emotional impact of manipulating information on their News Feeds.
Sen. Warner is one of Congress’ leading voices in demanding accountability and user protections from social media companies. In addition to the DETOUR Act, Sen. Warner has introduced and written numerous bills designed to improve transparency, privacy, and accountability on social media. These include the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act – legislation that would allow social media companies to be held accountable for enabling cyber-stalking, targeted harassment, and discrimination across platforms; the Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data (DASHBOARD) Act, bipartisan legislation that would require data harvesting companies to tell consumers and financial regulators exactly what data they are collecting from consumers and how it is being leveraged by the platform for profit; and the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, legislation that would encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings, if they so choose.
Full text of the bill is available here.
###
WASHINGTON – As Labor Day weekend approaches, U.S. Sens. Mark Warner and Tim Kaine (both D-VA) along with Sens. Bob Menendez and Cory Booker (both D-NJ) are pressing product safety regulators to include beach umbrellas in their testing protocols as they work to develop new safety standards for umbrellas sold to consumers. It’s the latest push in the senators’ continued effort to protect beachgoers following multiple accidents involving wind-swept beach umbrellas, including in 2016, when Lottie Michelle Belk of Chester, Va., was struck in the torso and killed while vacationing in Virginia Beach with her family.
Sens. Warner and Kaine have previously pushed for increased safety measures in a 2019 letter to the U.S. Consumer Product Safety Commission (CPSC). In addition, the senators have called for a public safety campaign to educate the public about the dangers of beach umbrellas.
“Given the grave danger posed by beach umbrellas, we feel it is imperative that ASTM include beach umbrellas in any new test methods,” the senators wrote to ASTM International Subcommittee Chair Ben Favret. “Summer is in full swing, and as millions of newly vaccinated Americans emerge from their homes to spend time at the shore, we must do all we can to ensure the safety of beach umbrellas.”
ASTM International—a nonprofit that often partners with the U.S. Consumer Product Safety Commission (CPSC) to develop technical standards for a wide range of materials, products, systems, and services—last year began testing the safety and durability of market umbrellas in various wind conditions. Unfortunately, it has continued to exclude beach umbrellas from this testing regimen, instead limiting it to patio and weighted-base umbrellas.
Assessing the risks associated with using certain products under specific conditions is a critical step towards developing new product safety standards, recommendations, and best practices to mitigate the risk.
According to the U.S. Consumer Product Safety Commission, an estimated 2,800 people sought treatment at emergency rooms for beach umbrella-related injuries from 2010-2018.
Full text of the letter is below and can be downloaded here:
Ben Favret
Subcommittee Chair, ASTM F15.79
ASTM International
100 Barr Harbor Drive
West Conshohocken, PA 19428
Dear Mr. Favret:
We write to urge ASTM International to update its testing method standard to account for wind speed as it relates to beach umbrellas.
As you note on your website, “[t]he deleterious effects of a Market Umbrellas [sic] being blow[n] over or broken by wind forces can range from acute injury, such as cuts or bruises to blunt force trauma, such as concussions or broken bones and in some cases death.” Further, you state that “[t]he lack of any voluntary standard for the safe performance of Market Umbrellas puts millions of consumers and employees around the world at risk unnecessarily.” Indeed, as the Consumer Product Safety Commission (CPSC) stated in a June 2019 letter to the Senate, over the nine-year period from 2010-2018, an estimated 2,800 people sought treatment in emergency rooms for injuries related to beach umbrellas. A majority of those injuries were caused by a wind-blown beach umbrella.
In March 2021, the CPSC wrote to ASTM requesting that it “expand the standard to address fully the hazards of injuries and death due to beach umbrellas implanted in the sand.” In addition, the agency suggested “mentioning the known fatality in the introduction of the standard, along with the injury data already there.” We could not agree more. Given the grave danger posed by beach umbrellas, we feel it is imperative that ASTM include beach umbrellas in any new test methods.
Summer is in full swing, and as millions of newly vaccinated Americans emerge from their homes to spend time at the shore, we must do all we can to ensure the safety of beach umbrellas. We appreciate ASTM’s willingness to consider this issue. Should you have further questions, please contact Shelby Boxenbaum in Senator Menendez’s office at 202-224-4744.
Sincerely,
###
Warner, Colleagues Press Facebook on Decision to Remove Independent Researchers from Platform
Aug 09 2021
WASHINGTON – U.S. Senator Mark Warner (D-VA), Chairman of the Senate Select Committee on Intelligence; Senator Amy Klobuchar (D-MN), Chairwoman of the Senate Subcommittee on Competition Policy, Antitrust, and Consumer Rights; and Senator Chris Coons (D-DE), Chairman of the Subcommittee on Privacy, Technology, and the Law, sent a letter to Facebook CEO Mark Zuckerberg asking about Facebook’s decision to terminate the ability of researchers at New York University’s Ad Observatory Project to access its platform.
The independent researchers were studying political advertising on Facebook. Their research has produced several key discoveries, including highlighting a lack of transparency in how advertisers target political ads on Facebook.
“We were surprised to learn that Facebook has terminated access to its platform for researchers connected with the NYU Ad Observatory project. The opaque and unregulated online advertising platforms that social media companies maintain have allowed a hotbed of disinformation and consumer scams to proliferate, and we need to find solutions to those problems,” the senators wrote.
The senators continued later in the letter: “...independent researchers are a critical part of the solution. While we agree that Facebook must safeguard user privacy, it is similarly imperative that Facebook allow credible academic researchers and journalists like those involved in the Ad Observatory project to conduct independent research that will help illuminate how the company can better tackle misinformation, disinformation, and other harmful activity that is proliferating on its platforms.”
The full text of the letter can be found below and HERE.
Dear Mr. Zuckerberg,
As you know, we are committed to protecting privacy for all Americans while eliminating the scourge that is disinformation and misinformation, particularly with regard to elections and the COVID-19 pandemic.
We were surprised to learn that Facebook has terminated access to its platform for researchers connected with the NYU Ad Observatory project. The opaque and unregulated online advertising platforms that social media companies maintain have allowed a hotbed of disinformation and consumer scams to proliferate, and we need to find solutions to those problems. The Ad Observatory project describes itself as “nonpartisan [and] independent…focused on improving the transparency of online political advertising.” Research efforts studying online advertising have helped inform consumers and policymakers about the extent to which your ad platform has been a vector for consumer scams and frauds, enabled hiring discrimination and discriminatory ads for financial services, and circumvented accessibility laws. Such work to improve the integrity of online advertising is critical to strengthening American democracy.
We appreciate Facebook’s ongoing efforts to address misinformation and disinformation on its platforms. But there is much more to do, and independent researchers are a critical part of the solution. While we agree that Facebook must safeguard user privacy, it is similarly imperative that Facebook allow credible academic researchers and journalists like those involved in the Ad Observatory project to conduct independent research that will help illuminate how the company can better tackle misinformation, disinformation, and other harmful activity that is proliferating on its platforms.
We therefore ask that you provide written answers to the following questions by August 20, 2021:
- How many accounts of researchers and journalists were terminated or otherwise disabled during 2021, including but not limited to researchers from the NYU Ad Observatory?
- Please explain why you terminated those accounts referenced in question 1. If you believe that the researchers violated Facebook’s terms of service, please describe how, in detail.
- If the researchers’ access violated Facebook’s terms of service, what steps are you taking to revise these terms to better accommodate research that improves the security and integrity of your platform?
- Facebook’s public statement about its decision to terminate the Ad Observatory researchers’ access said that research should not “compromis[e] people’s privacy.” Please explain how the researchers’ work compromised privacy of end-users.
- The Ad Observatory project asked Facebook users to voluntarily install a browser extension that would provide information available to that user about the ads that the user was shown. Facebook’s public statement says that the extension “collected data about Facebook users who did not install it or consent to the collection.” Were these non-consenting “users” advertisers whose advertising information was being collected and analyzed, other individual Facebook users, or both?
- Facebook has suggested that the NYU researchers potentially violated user privacy because the browser extension could have exposed the identity of users who liked or commented on an advertisement. However, both researchers at NYU and other independent researchers have confirmed that the extension did not collect information beyond the frame of the ad, and that the program could not collect personal posts. Given these technical constraints, what evidence does Facebook have to suggest that this research exposed personal information of non-consenting individuals?
- Facebook’s public statement explaining its decision to revoke access for the NYU researchers states that Facebook made this decision “in line with our privacy program under the FTC Order.” FTC Acting Bureau Director Samuel Levine sent you a letter dated August 5, 2021 in which he noted that “Had you honored your commitment to contact us in advance, we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest. Indeed, the FTC supports efforts to shed light on opaque business practices.”
  - Why didn’t Facebook contact the FTC about its plans to disable researchers’ accounts?
  - Does Facebook maintain that the FTC consent decree or other orders required it to disable access for the Ad Observatory researchers? If so, please explain with specificity which sections of which decree(s) compel that response.
  - Are there measures Facebook could take to authorize the Ad Observatory research while remaining in compliance with FTC requirements?
  - In light of Mr. Levine’s statement that the FTC Order does not require Facebook to disable the access of the Ad Observatory researchers, does Facebook intend to restore the Ad Observatory researchers’ access?
- In its public statement, Facebook highlighted tools that it offers to the academic community, including its Facebook Open Research and Transparency (FORT) initiative. However, public reporting suggests that the tool only includes data from the three-month period before the November 2020 election, and further that it does not include ads seen by fewer than 100 people.
  - Why does Facebook limit this data set to the three months prior to the November 2020 election?
  - Why does Facebook limit this data set to ads seen by more than 100 people?
  - What percentage of unique ads on Facebook are seen by more than 100 people?
We look forward to your prompt responses.
###
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, released the statement below, following a report that Facebook disabled the accounts of researchers studying political ads on the social network:
“This latest action by Facebook to cut off an outside group’s transparency efforts – efforts that have repeatedly facilitated revelations of ads violating Facebook’s Terms of Service, ads for frauds and predatory financial schemes, and political ads that were improperly omitted from Facebook’s lackluster Ad Library – is deeply concerning. For several years now, I have called on social media platforms like Facebook to work with, and better empower, independent researchers, whose efforts consistently improve the integrity and safety of social media platforms by exposing harmful and exploitative activity. Instead, Facebook has seemingly done the opposite. It’s past time for Congress to act to bring greater transparency to the shadowy world of online advertising, which continues to be a major vector for fraud and misconduct.”
###
WASHINGTON – U.S. Senators Mark Warner (D-Va.), Bob Menendez (D-N.J.), and Mazie Hirono (D-Hawaii) today slammed Facebook for failing to remove vaccine misinformation from its platforms. The rapid spread of dangerous misinformation across social media could hamper the efforts of public health officials as they work to vaccinate hard-to-reach communities and hesitant individuals, representing a serious concern for public safety. Studies show that roughly 275,000 Facebook users belong to anti-vaccine groups on the platform.
“As public health experts struggle to reach individuals who are vaccine hesitant, epidemiologists warn that low vaccination rates coupled with the relaxing of mask mandates could result in new COVID-19 outbreaks,” the senators wrote in a letter to Facebook CEO Mark Zuckerberg. “Moreover, most public health officials agree that because herd immunity in the U.S. is now unlikely, ‘continued immunizations, especially for people at highest risk because of age, exposure or health status, will be crucial to limiting the severity of outbreaks, if not their frequency.’ In short, ‘vaccinations remain the key to transforming the virus into a controllable threat.’”
A recent report from Markup.org’s “Citizen Browser project” found that there were 117 active anti-vaccine groups on Facebook. Combined, the groups had roughly 275,000 members. The study also found that Facebook was recommending health groups to its users, including anti-vaccine groups and pages that spread COVID-19 misinformation and propaganda.
The lawmakers asked Zuckerberg a series of questions, including why users were recommended vaccine misinformation; how long anti-vaccine groups and pages remained on the platform before being taken down; and what specific steps the company is taking to ensure its platforms do not recommend vaccine misinformation to its users.
A copy of the letter can be found here and below:
Dear Mr. Zuckerberg,
We write to express our concern over recent reporting alleging that Facebook failed to remove vaccine misinformation from its platforms. As the U.S. struggles to reach vaccine hesitant individuals and the world grapples with new variants, it is more important than ever that social media companies such as Facebook ensure that their platforms are free from disinformation.
In a February 2021 blog post, Facebook promised to expand “the list of false claims [it] will remove to include additional debunked claims about the coronavirus and vaccines. This includes claims such as: COVID-19 is man-made or manufactured; Vaccines are not effective at preventing the disease they are meant to protect against; It’s safer to get the disease than to get the vaccine; [and] Vaccines are toxic, dangerous or cause autism.” According to data from Markup.org’s “Citizen Browser project,” misinformation regarding COVID-19 and vaccines is readily available on Facebook. Madelyn Webb, a senior researcher at Media Matters, found 117 active anti-vaccine groups on Facebook as late as April 2021. Combined, those groups had roughly 275,000 members. Even more troubling is the finding that Facebook “continued to recommend health groups to its users, including blatantly anti-vaccine groups and pages explicitly founded to propagate lies about the pandemic.” As public health experts struggle to reach individuals who are vaccine hesitant, epidemiologists warn that low vaccination rates coupled with the relaxing of mask mandates could result in new COVID-19 outbreaks. Moreover, most public health officials agree that because herd immunity in the U.S. is now unlikely, “[c]ontinued immunizations, especially for people at highest risk because of age, exposure or health status, will be crucial to limiting the severity of outbreaks, if not their frequency.” In short, “vaccinations remain the key to transforming the virus into a controllable threat.”
In March 2021, Senator Warner wrote to you expressing these same concerns. Your April 2021 response failed to directly answer the questions posed in his letter. Specifically, you failed to respond to a question as to why posts with content warnings about health misinformation were promoted into Instagram feeds. Given Facebook’s continued failure to remove vaccine misinformation from its platforms, we seek answers to the following questions no later than July 5, 2021.
1. In calendar year 2021, how many users viewed vaccine-related misinformation?
2. In calendar year 2021, how many users were recommended anti-vaccine information or vaccine-related misinformation?
a. Why were these users recommended such information?
3. In calendar year 2021, how many vaccine-related posts has Facebook removed due to violations of its vaccine misinformation policy? How many pages were removed? How many accounts were removed? How many groups were removed?
a. On average, how long did these pages or posts remain on the platform before Facebook removed them?
4. What steps is Facebook taking to ensure that its platforms do not recommend vaccine-related misinformation to its users? Please be specific.
5. What steps is Facebook taking to ensure that individuals who search out anti-vaccine content are not subsequently shown additional misinformation?
6. In March 2019, Facebook said it would stop recommending groups that contained vaccine-related misinformation content. It wasn’t until February 2021 that the company announced it would remove such content across the platform. Why did it take Facebook nearly two years to make this decision?
Thank you in advance for your prompt response to the above questions.
Sincerely,
###