Press Releases

WASHINGTON, D.C. – U.S. Senators Mark R. Warner and Tim Kaine joined Senators Bob Casey (D-PA), John Fetterman (D-PA), Sherrod Brown (D-OH), and Joe Manchin (D-WV) and U.S. Representative Bobby Scott (D-VA-3) in introducing the Black Lung Benefits Improvement Act, which would help miners who have suffered from black lung disease and their survivors access the workers’ compensation they are entitled to receive under the Black Lung Benefits Program. This legislation would remove barriers that prevent miners and their survivors from accessing their benefits, such as lengthy processing times, lack of legal representation, and the erosion of benefits by inflation.

“For generations, coal miners across Virginia have made tremendous sacrifices to power America, literally risking their lives and their health to electrify our nation,” said Senator Warner. “Miners living with black lung and their survivors need easy access to the benefits they’ve earned – but far too often, red tape gets in the way. The Black Lung Benefits Improvement Act would take important steps to make sure miners can access legal representation, have protection against inflation, and more so America can keep making good on the debt it owes to victims of black lung.”

“Many of our nation’s miners have developed black lung disease, and we owe it to them to provide them with the care and support they need,” said Senator Kaine. “The Black Lung Benefits Improvement Act is critical to helping more miners, miner retirees, and their families receive the benefits and compensation they’ve earned following their tremendous sacrifices.”

Many miners have developed coal workers’ pneumoconiosis—commonly referred to as “black lung”—a debilitating and deadly disease caused by the long-term inhalation of coal dust in underground and surface coal mines. In response, Congress passed the Black Lung Benefits Act in 1976 to provide monthly compensation and medical coverage for coal miners who develop black lung disease and are disabled. The Black Lung Benefits Improvement Act makes needed updates to ensure Congress is fulfilling its commitment to the Nation’s coal miners by: 

  • Restoring cost-of-living benefit increases for black lung beneficiaries and ensuring cost-of-living increases are never withheld in the future,
  • Helping miners and their survivors secure legal representation by providing interim attorney fees for miners who prevail at various stages of their claim,
  • Allowing miners or their survivors to reopen their cases if they had been wrongly denied benefits because of errors in medical interpretations, and
  • Prohibiting unethical conduct by attorneys and doctors in the black lung claims process, such as withholding evidence of black lung, and helping miners review and rebut potentially biased or inaccurate medical evidence developed by coal companies.

 

Warner and Kaine have long worked to support miners and their families. The Senate-passed draft of the Fiscal Year 2024 government funding bill includes $12.19 million in federal funding for black lung clinics, which the senators are working to ensure is included in the final version of the bill. The Inflation Reduction Act, which the senators helped pass, included a permanent extension of the Black Lung Disability Trust Fund’s excise tax at a higher rate, providing certainty for miners, miner retirees, and their families who rely on the fund to access benefits. This followed Warner and Kaine’s successful efforts to ensure that miners receive the pensions and health care they earned. In July, the senators reintroduced the Relief for Survivors of Miners Act, which would ease restrictions to make it easier for miners’ survivors to successfully claim benefits. Warner and Kaine also urged the Biden Administration to issue new silica standards to protect miners across America – a push that contributed to the release of those standards.

A one-pager on the bill is available here.

WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) wrote to Chairwoman of the Federal Trade Commission (FTC) Lina Khan urging the Commission to take action against Google and Meta over their failure to remove graphic videos depicting the murders of Alison Parker and Adam Ward from YouTube, Facebook, and Instagram. In August 2015, Alison Parker and Adam Ward, employees at CBS affiliate WDBJ, were murdered by a former co-worker while reporting live on WDBJ’s morning broadcast. The live footage as well as the killer’s own recorded video have circulated online since. For years, Andy Parker, Alison’s father, has been vocal about the damaging impact that this footage has had on his family, including during testimony before the Senate Judiciary Committee.

“I am deeply troubled by this response, as the burden of finding and removing harmful content should not fall to victims’ families who are grieving their loved ones,” Sen. Warner wrote. “This approach only serves to retraumatize them and inflict additional pain. Instead, I firmly believe that the responsibility lies solely with the platform to ensure that any content violating its own Terms of Service is removed expeditiously.”

In March 2020 and October 2021, Mr. Parker submitted complaints to the FTC and requested a Section 5 investigation of deceptive practices in connection with YouTube and Meta (then Facebook). The complaints argue that YouTube and Meta have failed to enforce their terms of service by neglecting to remove videos of the murders of Alison Parker and Adam Ward from their platforms. Section 5 of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce,” with a deceptive act defined as one that misleads or is likely to mislead a consumer acting reasonably.

Sen. Warner continued, “It has been over three years since Mr. Parker and the Georgetown University Law Clinic filed their first complaint regarding this case, and Mr. Parker continues to endure harassment as a result of the videos remaining on these platforms. Given the practices outlined above, I ask that your agency consider all possible avenues to ensure that companies like Google and Meta uphold their Terms of Service, not only in Mr. Parker’s case but also in other instances where their platforms may host violent and harmful content.”

Sen. Warner is one of Congress’ leading voices in demanding accountability and user protections from social media companies and has previously pressed Meta on Facebook's role in inciting violence around the world.

Text of the letter can be found here and below.

Dear Chairwoman Khan,

I write today in support of my constituent, Mr. Andy Parker, and his urgent requests for the Federal Trade Commission (FTC) to take action against Google and Meta over their failure to remove videos depicting the tragic murders of Alison Parker and Adam Ward from YouTube, Facebook, and Instagram. In light of this behavior, I ask that your agency engage closely with Mr. Parker regarding his complaints and explore all possible avenues to ensure that Google and Meta uphold their Terms of Service in relation to violent and harmful content.

In August 2015, journalist Alison Parker and photojournalist Adam Ward were shot and killed during a live television interview in Moneta, Virginia. Following this horrifying event, footage captured by the assailant, as well as video from the live news broadcast, were uploaded to several online platforms, including YouTube, Facebook, and Instagram. Despite the platforms having policies banning violent content and repeated requests from Mr. Parker and volunteers acting on his behalf to remove this distressing footage, these videos continue to exist on all three platforms to this day. Even more troubling, the footage has been circulated widely by conspiracy theorists who subject the victims’ families to further harm and harassment by falsely claiming the attack was a hoax.

While both Google and Meta purport to have robust content moderation protocols, Mr. Parker's experience demonstrates that the responsibility of removing harmful content often falls upon the victims’ families. It is my understanding that Google responded to Mr. Parker’s complaints by directing him to flag and report each individual video of the attack on YouTube. Further, Instagram’s policy states, “If you see a video or picture on Instagram that depicts the violent death of a family member, you can ask us to remove it. Once we've been notified, we will remove that specific piece of content.” I am deeply troubled by this response, as the burden of finding and removing harmful content should not fall to victims’ families who are grieving their loved ones. This approach only serves to retraumatize them and inflict additional pain. Instead, I firmly believe that the responsibility lies solely with the platform to ensure that any content violating its own Terms of Service is removed expeditiously.

For years, volunteers from the Coalition For A Safer Web have reported videos of Alison’s murder and requested repeatedly for the videos to be taken down on Mr. Parker’s behalf. Disturbingly, only some of the flagged videos have been removed, with many still viewable on YouTube, Facebook, and Instagram. While Meta has responded that it removed certain videos from Facebook and Instagram, there are still clear violations of their Terms of Service present on their platforms, with videos of Alison Parker’s murder, filmed by the perpetrator, still accessible on Instagram. While YouTube appears to have more thoroughly removed content of Alison Parker’s murder filmed by the perpetrator, content containing disturbing footage of the moment of attack is widespread. Through the continued hosting of videos showing the heinous attack on Alison Parker and Adam Ward and other violence, these platforms fail to provide users with an experience free of harmful content despite claiming to do so.

It has been over three years since Mr. Parker and the Georgetown University Law Clinic filed their first complaint regarding this case, and Mr. Parker continues to endure harassment as a result of the videos remaining on these platforms. Given the practices outlined above, I ask that your agency consider all possible avenues to ensure that companies like Google and Meta uphold their Terms of Service, not only in Mr. Parker’s case but also in other instances where their platforms may host violent and harmful content. Further, I ask that you engage closely with Mr. Parker as you consider this request and provide him with a prompt response to his complaints.

I look forward to further engagement with you regarding Mr. Parker’s complaints. Thank you for your urgent attention to this matter. 

Sincerely,

###

WASHINGTON – U.S. Sen. Mark Warner joined Sens. Ben Ray Luján (D-NM), Edward Markey (D-MA), and others to urge the Federal Communications Commission (FCC) to enforce its existing regulations regarding consent for receiving telemarketing calls, also known as robocalls. The letter also asks the FCC to issue guidance along the lines of the Federal Trade Commission’s (FTC) recent Business Guidance restating the FCC’s long-held requirements for these unwanted telemarketing calls. By issuing guidance similar to the FTC’s, the FCC will assist telemarketers and sellers in complying with these requirements. 

“While the consideration of new regulations may be appropriate in some instances, we believe that the FCC’s current regulations already prohibit many of the activities that lead to the proliferation of unwanted telemarketing calls,” wrote the Senators. “Both the regulations issued in 2003 delineating the rules for telemarketers to obtain consent for calls to lines subscribed to the Do Not Call Registry, and those issued in 2012 governing consent to receive telemarketing calls made with an artificial or prerecorded voice or an automated telephone dialing system, clearly set out the types of protections intended by Congress to eliminate unwanted telemarketing calls.”

The Senators concluded, “As Congress instructed the FCC ‘to maximize consistency with the rule promulgated by the Federal Trade Commission’ relating to the implementation of the Do-Not-Call Registry, we respectfully urge the FCC to issue a guidance along the lines of the FTC’s recent Business Guidance restating its long-held requirements for these unwanted telemarketing calls. As inconsistent rules governing the same activity would be problematic, by issuing guidance similar to the FTC’s, the FCC will assist telemarketers and sellers in complying with these requirements.”

Sen. Warner, a former cell phone entrepreneur, has been active in fighting robocalls for many years. He sponsored the Telephone Robocall Abuse Criminal Enforcement and Deterrence (TRACED) Act to give regulators – including the FCC – more time to find scammers, increase civil forfeiture penalties, require service providers to adopt call authentication and blocking, and bring relevant federal agencies and state attorneys general together to address impediments to criminal prosecution of robocallers. Former President Trump signed the TRACED Act into law in 2019. In July, he applauded new efforts from the FTC to crack down on spam calls.

In addition to Sens. Warner, Lujan, and Markey, the letter is signed by U.S. Senators Chris Van Hollen (D-MD), Peter Welch (D-VT), Elizabeth Warren (D-MA), Angus King (I-ME), Richard Durbin (D-IL), Martin Heinrich (D-NM), Amy Klobuchar (D-MN), Ron Wyden (D-OR), and Gary Peters (D-MI). This letter is endorsed by Appleseed, Consumer Action, Consumer Federation of America, Electronic Privacy Information Center, National Association of State Utility Consumer Advocates, National Consumers League, Public Citizen, Public Knowledge, and U.S. PIRG.

Full text of the letter is available here and below.

Dear Chairwoman Rosenworcel:

We are heartened that the Federal Communications Commission (FCC) is considering ways to curtail the number of unwanted telemarketing calls—currently over 1.25 billion every month—in a proceeding pending under the Telephone Consumer Protection Act (TCPA). As the Commission recognizes, the continued onslaught of illegal calls threatens the trustworthiness and usefulness of our nation’s telephone system.

While the consideration of new regulations may be appropriate in some instances, we believe that the FCC’s current regulations already prohibit many of the activities that lead to the proliferation of unwanted telemarketing calls. Both the regulations issued in 2003 delineating the rules for telemarketers to obtain consent for calls to lines subscribed to the Do Not Call Registry, and those issued in 2012 governing consent to receive telemarketing calls made with an artificial or prerecorded voice or an automated telephone dialing system, clearly set out the types of protections intended by Congress to eliminate unwanted telemarketing calls. Both of these regulations allow robocalls only if the call recipients sign a written agreement relating to calls from a single seller.

Additionally, the FCC’s 2003 regulation for telemarketing calls to lines registered on the Do Not Call Registry requires that the “signed, written agreement” must be “between the consumer and the seller.” This requirement provides two protections. First, it means that the seller, or the seller’s agent, and no one else — not a telemarketer or a lead generator — must be party to the agreement with the consumer. Second, it limits the calls that are covered by the agreement to calls related only to the seller that was the party to the agreement. Enforcement of the current limitations applicable to agreements providing consent for telemarketing calls under the existing regulations would eliminate the sale and trading of these consents, which have led to the proliferation of unwanted telemarketing robocalls.

Moreover, as many of these agreements are entered into online, current federal law requires specific protections for consumers receiving writings through electronic records in the Electronic Signatures in Global and National Commerce Act (the E-Sign Act). One example of these protections in the E-Sign Act is the prohibition of oral communication as a substitute for a writing. Although telemarketers routinely ignore the requirements of the E-Sign Act, the legislation’s mandate for E-Sign consent before writings can be provided in electronic records in 15 U.S.C. § 7001(c) is fully applicable.

Finally, as Congress instructed the FCC “to maximize consistency with the rule promulgated by the Federal Trade Commission” relating to the implementation of the Do-Not-Call Registry, we respectfully urge the FCC to issue a guidance along the lines of the FTC’s recent Business Guidance restating its long-held requirements for these unwanted telemarketing calls. As inconsistent rules governing the same activity would be problematic, by issuing guidance similar to the FTC’s, the FCC will assist telemarketers and sellers in complying with these requirements. This guidance should also emphasize that the obligations imposed by the E-Sign Act apply when these agreements are entered into online.

We appreciate your work to curb unwanted and illegal robocalls. Issuing guidance that emphasizes the meaningful requirements of current regulations as well as the requirements of the federal E-Sign Act will go a long way to reduce the number of unwanted robocalls. Thank you for your consideration of this request.

 
###

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged Google CEO Sundar Pichai to provide more clarity into his company’s deployment of Med-PaLM 2, an artificial intelligence (AI) chatbot currently being tested in health care settings. In a letter, Sen. Warner expressed concerns about reports of inaccuracies in the technology, and called on Google to increase transparency, protect patient privacy, and ensure ethical guardrails.

In April, Google began testing Med-PaLM 2 with customers, including the Mayo Clinic. Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While the technology has shown some promising results, there are also concerning reports of repeated inaccuracies and of Google’s own senior researchers expressing reservations about the readiness of the technology. Additionally, much remains unknown about where Med-PaLM 2 is being tested, what data sources it learns from, to what extent patients are aware of and can object to the use of AI in their treatment, and what steps Google has taken to protect against bias.

“While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” Sen. Warner wrote. 

The letter raises concerns over AI companies prioritizing the race to establish market share over patient well-being. Sen. Warner also emphasizes his previous efforts to raise the alarm about Google skirting health privacy as it trained diagnostic models on sensitive health data without patients’ knowledge or consent.

“It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI,” Sen. Warner continued.

The letter poses a broad range of questions for Google to answer, requesting more transparency into exactly how Med-PaLM 2 is being rolled out, what data sources Med-PaLM 2 learns from, how much information and agency patients have over how AI is involved in their care, and more.

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In April, Sen. Warner directly expressed concerns to several AI CEOs – including Sundar Pichai – about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure. Last month, he called on the Biden administration to work with AI companies to develop additional guardrails around the responsible deployment of AI. He has also introduced several pieces of legislation aimed at making tech more secure, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

A copy of the letter can be found here and below.

Dear Mr. Pichai,

I write to express my concern regarding reports that Google began providing Med-PaLM 2 to hospitals to test early this year. While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors.

Over the past year, large technology companies, including Google, have been rushing to develop and deploy AI models and capture market share as the technology has received increased attention following OpenAI’s launch of ChatGPT. Numerous media outlets have reported that companies like Google and Microsoft have been willing to take bigger risks and release more nascent technology in an effort to gain a first mover advantage. In 2019, I raised concerns that Google was skirting health privacy laws through secretive partnerships with leading hospital systems, under which it trained diagnostic models on sensitive health data without patients’ knowledge or consent. This race to establish market share is readily apparent and especially concerning in the health care industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in health care institutions in recent years, and the sensitivity of health information. One need look no further than AI pioneer Joseph Weizenbaum’s experiments involving chatbots in psychotherapy to see how users can put premature faith in even basic AI solutions.

According to Google, Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While AI models have previously been used in medical settings, the use of generative AI tools presents complex new questions and risks. According to the Wall Street Journal, a senior research director at Google who worked on Med-PaLM 2 said, “I don’t feel that this kind of technology is yet at a place where I would want it in my family’s healthcare journey.” Indeed, Google’s own research, released in May, showed that Med-PaLM 2’s answers contained more inaccurate or irrelevant information than answers provided by physicians. It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI.

Given these serious concerns and the fact that VHC Health, based in Arlington, Virginia, is a member of the Mayo Clinic Care Network, I request that you provide answers to the following questions. 

  1. Researchers have found large language models to display a phenomenon described as “sycophancy,” wherein the model generates responses that confirm or cater to a user’s (tacit or explicit) preferred answers, which could produce risks of misdiagnosis in the medical context. Have you tested Med-PaLM 2 for this failure mode?
  2. Large language models frequently demonstrate the tendency to memorize contents of their training data, which can risk patient privacy in the context of models trained on sensitive health information. How has Google evaluated Med-PaLM 2 for this risk and what steps has Google taken to mitigate inadvertent privacy leaks of sensitive health information?
  3. What documentation did Google provide hospitals, such as Mayo Clinic, about Med-PaLM 2? Did it share model or system cards, datasheets, data-statements, and/or test and evaluation results?
  4. Google’s own research acknowledges that its clinical models reflect scientific knowledge only as of the time the model is trained, necessitating “continual learning.” What is the frequency with which Google fully or partially re-trains Med-PaLM 2? Does Google ensure that licensees use only the most up-to-date model version?
  5. Google has not publicly provided documentation on Med-PaLM 2, including refraining from disclosing the contents of the model’s training data. Does Med-PaLM 2’s training corpus include protected health information?
  6. Does Google ensure that patients are informed when Med-PaLM 2, or other AI models offered or licensed by Google, are used in their care by health care licensees? If so, how is the disclosure presented? Is it buried in a longer disclosure, or presented clearly on its own?
  7. Do patients have the option to opt-out of having AI used to facilitate their care? If so, how is this option communicated to patients?
  8. Does Google retain prompt information from health care licensees, including protected health information contained therein? Please list each purpose Google has for retaining that information.
  9. What license terms exist in any product license to use Med-PaLM 2 to protect patients, ensure ethical guardrails, and prevent misuse or inappropriate use of Med-PaLM 2? How does Google ensure compliance with those terms in the post-deployment context? 
  10. How many hospitals is Med-PaLM 2 currently being used at? Please provide a list of all hospitals and health care systems Google has licensed or otherwise shared Med-PaLM 2 with.
  11. Does Google use protected health information from hospitals using Med-PaLM 2 to retrain or finetune Med-PaLM 2 or any other models? If so, does Google require that hospitals inform patients that their protected health information may be used in this manner?
  12. In Google’s own research publication announcing Med-PaLM 2, researchers cautioned about the need to adopt “guardrails to mitigate against over-reliance on the output of a medical assistant.” What guardrails has Google adopted to mitigate over-reliance on the output of Med-PaLM 2 as well as when it particularly should and should not be used? What guardrails has Google incorporated through product license terms to prevent over-reliance on the output?

 

### 

 

WASHINGTON – This week, U.S. Sens. Mark R. Warner (D-VA) and Deb Fischer (R-NE), joined by Sens. Amy Klobuchar (D-MN) and John Thune (R-SD), introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act to prohibit large online platforms from using deceptive user interfaces, known as “dark patterns,” to trick consumers into handing over their personal data. The bill would also require these platforms to obtain consent from users for covered research and prohibit them from using features that result in compulsive usage by children and teens.

The term “dark patterns” is used to describe online interfaces in websites and apps designed to intentionally manipulate users into taking actions they otherwise would not. These design tactics are frequently used by social media platforms to mislead consumers into agreeing to settings and practices more beneficial to the company. 

“Dark patterns – manipulative online designs that trick you into signing up for services you don’t want or spending money you don’t mean to – are everywhere online, and they make user experience worse, and data less secure. The DETOUR Act will end this practice while working to instill transparency and oversight that the tech world lacks,” said Sen. Warner. “Consumers shouldn’t have to navigate intentionally misleading interfaces and design features in order to protect their privacy.” 

“Manipulative 'dark pattern' interfaces trick users – including children – online. The ‘choices’ platforms present can often be deceptively obscured to exploit users' personal data and behavior,” said Sen. Fischer. “It’s wrong, and our bipartisan bill will finally crack down on this harmful practice. I encourage my colleagues to support the DETOUR Act to increase trust online and protect consumer privacy.”

Dark patterns can take various forms, pushing users into agreeing to terms stacked in favor of the service provider. These deceptive practices can include deliberately obscuring alternate choices or settings through design or other means or the use of privacy settings to push users to ‘agree’ as the default option while more privacy-friendly options can only be found through a much longer process, detouring through multiple screens. Frequently, users cannot find the alternate option, if it exists at all, and simply give up looking.

The result is that large online platforms have an unfair advantage over users and often force consumers to give up personal data such as their contacts, messages, web activity, or location for the benefit of the company.

The Deceptive Experiences To Online Users Reduction (DETOUR) Act aims to curb this manipulative behavior by prohibiting large online platforms (those with over 100 million monthly active users) from relying on user interfaces that intentionally impair user autonomy, decision-making, or choice. The legislation:

  • Prohibits large online operators from designing, modifying, or manipulating a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.
  • Prohibits subdividing or segmenting consumers for the purposes of behavioral experiments without a consumer’s informed consent, which cannot be buried in a general contract or service agreement. This includes routine disclosures for large online operators, not less than once every 90 days, on any behavioral or psychological experiments to users and the public. Additionally, the bill would require large online operators to create an internal Independent Review Board to provide oversight on these practices to safeguard consumer welfare.
  • Prohibits user interface design intended to create compulsive usage among children and teens under the age of 17.

“Social media companies often trick users into giving up their personal data – everything from their thoughts and fears to their likes and dislikes – which they then sell to advertisers. These practices are designed to exploit people; not to serve them better. Senator Warner and Senator Fischer’s DETOUR Act would put a stop to the destructive and deceptive use of dark patterns,” said Imran Ahmed, CEO of the Center for Countering Digital Hate.

“Momentum is building, in Congress and across the states, to force tech companies to reduce the serious harm to kids and teens caused by the way that these companies design and operate their platforms," said James P. Steyer, founder and CEO of Common Sense Media. “The reintroduction of the DETOUR Act comes at just the right time to add another important element of protection for children and their families. We applaud Senators Warner and Fischer for working together to try to stop companies from utilizing manipulative design features that trick kids into giving up more personal information and compulsive usage of their platforms for the sake of increasing their profits and engagement without regard for the harm it inflicts on kids.”

“The proposed legislation represents an important step towards reducing big tech companies’ use of dark patterns that prioritize user engagement over well-being. As a developmental scientist, I’m hopeful the DETOUR Act will encourage companies to adopt a child-centered approach to design that places children’s well-being front and center, reducing the burden on parents to look out for and avoid dark patterns in their children’s technology experiences,” said Katie Davis, EdD, Associate Professor at the University of Washington.

“The DETOUR Act proposed by Sen. Warner and co-sponsors represents a positive and important step to protect American consumers,” said Colin M. Gray, PhD, Associate Professor, Indiana University. “DETOUR provides a mechanism for independent oversight of large technology companies and for curtailing the ability of these companies to use deceptive and manipulative design practices, such as ‘dark patterns,’ which have been shown to produce substantial harms to users. This legislation provides a foothold for regulators to better guard against deceptive and exploitative practices that have become rampant in many large technology companies, and which have had outsized impacts on children and underserved communities.”

Sen. Warner, a former tech entrepreneur, has been one of Congress’s leading voices calling for accountability in Big Tech. He has introduced several pieces of legislation aimed at addressing these issues, including the ACCESS Act earlier this week, which would promote competition in social media by making it easier to transport user data to new sites; the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Kids Online Safety Act, which would protect children online by providing young people and parents with the tools, safeguards, and transparency they need to protect against online harms.

Full text of the bill is available here.

 

###

WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, led a bipartisan group of colleagues in reintroducing the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, legislation that would encourage market-based competition with major social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings. Sen. Warner was joined in introduction by Sens. Richard Blumenthal (D-CT), Lindsey Graham (R-SC), Josh Hawley (R-MO), and Amy Klobuchar (D-MN).

“Consumers are currently locked into the social media platforms that they use, unable to move to a different platform for fear of losing years’ worth of data and interactions,” said the senators. “Interoperability and portability are powerful tools to promote innovative new companies and limit anti-competitive behaviors. By making it easier for social media users to move their data or to continue to communicate with their friends after switching platforms, startups will be able to compete on equal terms with the biggest social media companies. This bill will create long-overdue requirements that will boost competition and give consumers more power.”

Online communications platforms have become vital to the economic and social fabric of the nation, but network effects and consumer lock-in have solidified a select number of companies’ dominance in the digital market and enhanced their control over consumer data, even as the social media landscape changes by the day and platforms’ user experiences become more and more unpredictable.

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act would increase market competition, encourage innovation, and increase consumer choice by requiring large communications platforms (products or services with over 100 million monthly active users in the U.S.) to:

  • Make their services interoperable with competing communications platforms;

  • Permit users to easily port their personal data in a structured, commonly used, and machine-readable format;

  • Allow users to delegate trusted custodial services, which are required to act in a user’s best interests through a strong duty of care, with the task of managing their account settings, content, and online interactions.

“Markets work only when consumers know what they give up and get in any transaction with a seller and have the option to take their business elsewhere. By supporting organizations that can uncover what tech firms are actually doing and by mandating portability, the ACCESS Act will restore the conditions needed for the market in tech services to work,” Paul Romer, Boston College University Professor and Nobel Prize winner in Economics, said.

“The ACCESS Act is a critical, bipartisan first step in requiring large technology platforms to incorporate interoperability into their products, which is fundamental to a dynamic and competitive technology industry. Innovators, consumers, and society as a whole all benefit when people have the right to move their data if they choose to switch platforms. Without interoperability, innovation is held captive by the market power of large platforms. Our economy needs innovation to thrive – and innovation is stifled if our most promising startups must compete in a world where consumers are locked into the largest platforms because they can’t move their own data. That is in no one’s interest,” Garry Tan, president and CEO of Y Combinator, said.

“Interoperability is a key tool for promoting competition on and against dominant digital platforms. For social networks in particular, interoperability is needed to make it easy for users to switch to a new social network. Until we have clear and effective interoperability requirements, it will be hard for users to leave a social network that fails to reflect their values, protect their privacy, or offer the best experience. Whatever our reasons for switching to a new social network, the ACCESS Act can make it easier by requiring the largest platforms to offer interoperability with competitors. We all stand to benefit from the greater competition that an interoperable world can create,” Charlotte Slaiman, Competition Policy Director at Public Knowledge, said.

“The reintroduction of the ACCESS Act in the Senate is a critically important step forward for empowering consumers with the freedom to control their own data and enable consumers to leave the various walled gardens of today’s social media platforms. The ACCESS Act literally does what it says—it would give consumers the option to choose better services without having to balance the unfair choice of abandoning their personal network of family and friends in order to seek better products in the market. The Senate needs to move forward as soon as possible to vote on the ACCESS Act,” Eric Migicovsky, Founder and CEO of Beeper, said.

“Consumers must have control of their own personal data. You should be able to easily access it, share it, revoke access, and interact with it how you see fit. Putting individuals in charge of what is best for them is vital to balance out the ongoing wave of technological innovation. This has broad implications beyond just social media - Congress must pass the ACCESS Act,” David Pickerell, Co-founder and CEO of Para, said.

Sen. Warner first introduced the ACCESS Act in 2019 and, as a former tech entrepreneur, has been one of Congress’s leading voices calling for accountability in Big Tech. He has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Kids Online Safety Act, which would protect children online by providing young people and parents with the tools, safeguards, and transparency they need to protect against online harms. 

Full text of the bill is available here. One-pager of the legislation is available here.

WASHINGTON - In an effort to prevent money laundering and stop crypto-facilitated crime and sanctions violations, a leading group of U.S. Senators is introducing new, bipartisan legislation requiring decentralized finance (DeFi) services to meet the same anti-money laundering (AML) and economic sanctions compliance obligations as other financial companies, including centralized crypto trading platforms, casinos, and even pawn shops.  The legislation also modernizes key Treasury Department anti-money laundering authorities, and sets new requirements to ensure that “crypto kiosks” don’t become a vector for laundering the proceeds of illicit activities.

DeFi generally refers to applications that facilitate peer-to-peer financial transactions that are recorded on blockchains.  The most prominent example of DeFi is so called “decentralized exchanges,” where automated software purportedly allows users to trade cryptocurrencies without using intermediaries.

By design, DeFi provides anonymity.  This can allow malicious and criminal actors to evade traditional financial regulatory tools, including longstanding and well-developed rules requiring financial institutions to monitor all transactions and report suspected money laundering and financial crime to the Financial Crimes Enforcement Network (FinCEN), which is a bureau of the U.S. Treasury Department.  This allows DeFi to be used to launder criminal proceeds and fund more crime.

Criminals, drug traffickers, and hostile state actors such as North Korea have all demonstrated a propensity for using DeFi as a preferred method of transferring and laundering ill-gotten gains.  These bad actors have been quick to recognize how DeFi can be exploited to advance nefarious activities like cross-border fentanyl trafficking and financing the development of weapons of mass destruction. 

According to the most recent U.S. National Money Laundering Risk Assessment: “DeFi services often involve no AML or other processes to identify customers.”  According to another recent Treasury Department report, “illicit actors, including ransomware cybercriminals, thieves, scammers, and Democratic People’s Republic of Korea (DPRK) cyber actors, are using DeFi services in the process of transferring and laundering their illicit proceeds. To accomplish this, illicit actors are exploiting vulnerabilities in the U.S. and foreign AML regulatory, supervisory, and enforcement regimes as well as the technology underpinning DeFi services.”

Noting that transparency and sensible rules are vital for protecting the financial system from crime, U.S. Senators Jack Reed (D-RI), Mike Rounds (R-SD), Mark Warner (D-VA), and Mitt Romney (R-UT) today unveiled the Crypto-Asset National Security Enhancement and Enforcement (CANSEE) Act (S. 2355).  This legislation targets money laundering and sanctions evasion involving DeFi.

The CANSEE Act would end special treatment for DeFi by applying the same national security laws that apply to banks and securities brokers, casinos and pawn shops, and even other cryptocurrency companies like centralized trading platforms.  That means DeFi services would be forced to meet basic obligations, most notably to maintain AML programs, conduct due diligence on their customers, and report suspicious transactions to FinCEN.

These requirements will close an attractive avenue for money laundering that has been routinely exploited over the past several months by the North Korean government, Chinese chemicals manufacturers, Mexican drug cartels, cybercriminals, ransomware attackers, scammers, and a host of other bad actors. 

The legislation also makes clear that if a sanctioned person, like a Russian oligarch, uses a DeFi service to evade U.S. sanctions, then anyone who controls that project will be liable for facilitating that violation.  If nobody controls a DeFi service, then—as a backstop—anyone who invests more than $25 million in developing the project will be responsible for these obligations.

The CANSEE Act would also require operators of crypto kiosks (also known as crypto ATMs) to improve traceability of funds by verifying the identities of each counterparty to each transaction using a kiosk.  Unless these vulnerabilities are addressed, criminals will continue to exploit these kiosks to launder money from drug trafficking, human trafficking, scams, and other crimes.

Featuring an interface similar to regular ATMs, crypto ATMs are often found at convenience stores, laundromats, and gas stations.  Users can insert cash or a debit card into the machine to turn their real money into cryptocurrency, which is then transferred into a digital wallet that can then be accessed by scammers.   Once a transfer is complete, users cannot get their money back.  Currently, there are about 30,600 crypto ATMs across the country – up from 1,200 in 2018, according to Coin ATM Radar.

Finally, the CANSEE Act makes important updates to the Treasury Department’s authority to require participants in the U.S. financial system to take special measures against money laundering threats.  Currently, these authorities are limited to transactions conducted in the traditional banking system.  But as new technologies like cryptocurrency increasingly enable new ways to conduct financial transactions, it is critical to extend Treasury’s authority to crack down on illicit financial activity that may occur outside the banking sector.

“DeFi and crypto ATMs are part of a largely unregulated technology that needs stronger oversight and guardrails to prevent rampant money laundering and sanctions evasion,” said Sen. Reed. “This legislation bolsters the Treasury Department’s tools to protect our national and economic security. Drug cartels, sex traffickers, and the like shouldn’t be able to use DeFi platforms to avoid justice – their victims deserve better.  Our bill will also ensure that law enforcement has access to better information about cryptocurrency transactions, which they need to fight crimes like cross-border drug trafficking, weapons proliferation, and ransomware attacks.  We must protect the integrity of the financial system from new and emerging threats from the worst criminal organizations and malicious state actors.”

“Our adversaries and criminals worldwide are using creative ways every day to take advantage of the United States financial system, and we should not allow them to exploit American innovation to evade sanctions and launder money,” said Sen. Rounds. “As more Americans start to use and invest in cryptocurrency, both DeFi platforms and crypto kiosks remain in the blind spot of regulation. This targeted legislation kicks off an important debate on how to protect our financial system and give law enforcement the tools they need to prosecute bad actors.” 

“As Chair of the Senate Intelligence Committee, I remain deeply concerned that criminals and rogue states continue to use crypto to launder money, evade sanctions, and conceal illicit activity. The targeted package we’re introducing today will help address specific problems in decentralized finance and crypto kiosks, and incorporates the Special Measures to Address Modern Threats bill I introduced in the last Congress to modernize FinCEN’s existing anti-money laundering authorities,” said Sen. Warner. “I believe these focused measures will help maintain the robust AML and sanctions enforcement we need to protect our national security, while allowing participants who play by the rules to continue to take advantage of the potential of distributed ledger technologies.”

“Malign actors—including China-based fentanyl manufacturers and drug cartels operating along the southern border—are capitalizing on existing loopholes under current law to evade sanctions using decentralized finance services,” said Sen. Romney. “By fortifying U.S. anti-money laundering frameworks, our legislation cracks down on crypto-facilitated crimes and ultimately reinforces our national security.”

###

WASHINGTON – Today, U.S. Sens. Mark R. Warner (D-VA), Mazie Hirono (D-HI), Amy Klobuchar (D-MN), Tim Kaine (D-VA), and Richard Blumenthal (D-CT), along with U.S. Reps. Kathy Castor (D-FL-14) and Mike Levin (D-CA-49), reintroduced the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act to reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms.

“For too long, Section 230 has given cover to social media companies as they turn a blind eye to the harmful scams, harassment, and violent extremism that run rampant across their platforms,” said Sen. Warner, a former technology entrepreneur and the Chairman of the Senate Select Committee on Intelligence. “When Section 230 was enacted over 25 years ago, the internet we use today was not even fathomable. This legislation takes strides to update a law that was meant to encourage service providers to develop tools and policies to support effective moderation and allows them to finally be held accountable for the harmful, often criminal behavior that exists on their platforms.” 

“Social media platforms allow people to connect all across the world—but they also cause great pain and suffering, being used as a tool for cyberbullying, stalking, spreading hate, and more. The way we communicate as a society has changed drastically over the last 25 years; it’s time for our laws to catch up,” said Sen. Hirono, a member of the Senate Judiciary Committee. “The SAFE TECH Act targets the worst abuses perpetrated on internet platforms to better protect our children and our communities from the very real harms of social media.”  

“We need to be asking more from big tech companies, not less. How they operate has a real-life effect on the safety and civil rights of Americans and people around the world, as well as our democracy. Our legislation will hold these platforms accountable for ads and content that can lead to real-world harm,” said Sen. Klobuchar.

“Congress has acted in the past to ensure that social media companies don’t get blanket immunity after hosting information on their websites aimed at facilitating human or sex trafficking,” said Sen. Kaine. “I’m fully supportive of using that precedent as a roadmap to require social media companies to moderate dangerous content linked to other crimes—like cyber-stalking, discrimination, and harassment—in a responsible way. This is critical to keep our communities safe.”

“Section 230’s blanket immunity has prioritized Big Tech over Americans’ civil rights and safety. Platforms’ refusal to be held accountable for the dangerous and harmful content they host has real-life implications for users – leaving many vulnerable to threats like stalking, intimidation, and harassment, as well as discrimination,” said Sen. Blumenthal. “Our legislation is needed to safeguard consumers and ensure social media giants aren’t shielded from the legal consequences of failing to act. These common sense protections are essential in today’s online world.”

“For too long, big tech companies have treated the internet like the wild west while users on their platforms violate civil and human rights, defraud consumers and harass others. These companies have shown over and over again that they are unwilling to make their platforms safe for Americans. It is long past time for consumers to have legal recourse when big tech companies harm them or their families. Our bill will ensure they are held accountable,” said Rep. Castor.

“Social media companies continue to allow malicious users to go unchecked, harm other users, and violate laws. This cannot go on and it is clear federal reform is necessary,” said Rep. Levin. “Our bicameral legislation makes much needed updates to Section 230 to ensure Americans can safely use online platforms and have legal recourse when they are harmed. It’s long past time that these legislative fixes are made and I look forward to this bill moving through the Congress.”

Specifically, the SAFE TECH Act would force online service providers to address misuse on their platforms or face civil liability. The legislation would make clear that Section 230:

  • Doesn’t apply to ads or other paid content – ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers with ads enabling frauds and scams;
  • Doesn’t bar injunctive relief – allowing victims to seek court orders where misuse of a provider’s services is likely to cause irreparable harm;
  • Doesn’t impair enforcement of civil rights laws – maintaining the vital and hard-fought protections from discrimination even when activities or services are mediated by internet platforms;
  • Doesn’t interfere with laws that address stalking/cyber-stalking or harassment and intimidation on the basis of protected classes – ensuring that victims of abuse and targeted harassment can hold platforms accountable when they directly enable harmful activity;
  • Doesn’t bar wrongful death actions – allowing the family of a decedent to bring suit against platforms where they may have directly contributed to a loss of life;
  • Doesn’t bar suits under the Alien Tort Claims Act – potentially allowing victims of platform-enabled human rights violations abroad to seek redress in U.S. courts against U.S.-based platforms.

Sen. Warner first introduced the SAFE TECH Act in 2021 and is one of Congress’ leading voices in demanding accountability and user protections from social media companies. Last week, Sen. Warner pressed Meta on Facebook’s role in inciting violence around the world. In addition to the SAFE TECH Act, Sen. Warner has introduced and written numerous bills aimed at improving transparency, privacy, and accountability on social media. These include the Deceptive Experiences to Online Users Reduction (DETOUR) Act – legislation to prohibit large online platforms from using deceptive user interfaces, known as “dark patterns,” to trick consumers into handing over their personal data – and the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, legislation that would encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings.

“The onslaught of misinformation and discriminatory attacks across social media platforms continues unabated. It is essential that the tech companies that run these platforms protect their users and end the rampant civil rights violations of Black users and other users of color. Social media remains a virtually unchecked home for hateful content and discrimination, especially through the manipulation of algorithms that lead to both the targeting and limiting of which users see certain types of advertisements and opportunities. Congress can take a step in the right direction by strengthening Section 230 and ensuring that online communities are not safe harbors for discrimination and civil rights violations. LDF supports Senator Warner and Senator Hirono’s bill to address these critical concerns,” said Lisa Cylar Barrett, Director of Policy, Legal Defense Fund (LDF).

“There needs to be real clarity on Section 230. The hate that festers online: antisemitism, Islamophobia, racism, misogyny and disinformation – leads to real violence, real lives targeted, real people put at risk. ADL supports the ability for people affected by violence to hold perpetrators accountable – and that includes social media companies. ADL appreciates the efforts of Senators Warner, Hirono, Klobuchar, and Kaine to tackle this complex challenge. We look forward to working with them to refine this legislation to ensure a safer and less hate-filled internet for all users.” said Jonathan A. Greenblatt, CEO of ADL (Anti-Defamation League).

“Platforms should not profit from targeting employment ads toward White users, or from targeting voter suppression ads toward Black users. The SAFE TECH Act makes it clear that Section 230 does not give platforms a free pass to violate civil rights laws, while also preserving the power of platforms to remove harmful disinformation,” said Spencer Overton, President, Joint Center for Political and Economic Studies.

“I applaud the SAFE TECH Act introduced by Sens. Warner and Hirono which provides useful modifications to section 230 of the 1996 Communications Decency Act to limit the potential negative impacts of commercial advertising interests while continuing to protect anti-harassment and civil and human rights interests of those who may be wrongfully harmed through wrongful online activity,” Ramesh Srinivasan, Professor at the UCLA Department of Information Studies and Director of UC Digital Cultures Lab, said.

“It is glaringly apparent that we cannot rely on the tech companies to implement common sense policies that reflect common decency on their own. We thank and commend Senators Warner, Hirono, Klobuchar, and Kaine for their foresight and for showing their commitment to the safety of our citizens by putting forth the SAFE TECH Act. The SAFE TECH Act will continue to protect free speech and further protect our civil rights while sensibly amending section 230, an outdated law that the tech companies hide behind in their refusal to take responsibility for real-life consequences,” said Wendy Via, Cofounder, Global Project Against Hate and Extremism.

“The Cyber Civil Rights Initiative welcomes this effort to protect civil rights in the digital age and to hold online intermediaries accountable for their role in the silencing and exploitation of vulnerable communities. This bill addresses the urgent need to limit and correct the overzealous interpretation of Section 230 that has granted a multibillion dollar industry immunity and impunity for profiting from irreparable injury,” said Mary Anne Franks, President, Cyber Civil Rights Initiative and Danielle K. Citron, Vice President, Cyber Civil Rights Initiative.

“Social media companies have enabled hate, threats and even genocide against Muslims with virtual impunity. The SAFE TECH Act would bring needed and long-overdue accountability to these companies,” said Muslim Advocates Senior Policy Counsel Sumayyah Waheed. “We thank Sens. Warner, Hirono, Klobuchar, Kaine and Blumenthal for leading on this important bill. Every day, Muslims are profiled, discriminated against, attacked and worse just for engaging in public life. Passing this bill would bring us one step closer to ensuring that Muslims and other marginalized communities can hold social media companies accountable for the reckless way they violate people’s rights and threaten their safety on and offline.”

“The SAFE TECH Act is an important step forward for platform accountability and for the protection of privacy online. Providing an opportunity for victims of harassment, privacy invasions, and other violations to remove unlawful content is critical to stopping its spread and limiting harm,” said Caitriona Fitzgerald, Deputy Director, Electronic Privacy Information Center (EPIC).

“The SAFE TECH Act is a Section 230 reform America needs now. Troubling readings of Section 230 have encouraged reckless and negligent shirking by platforms of basic duties toward their users. Few if any of the drafters of Section 230 could have imagined that it would be opportunistically used to, for example, allow dating sites to ignore campaigns of harassment and worse against their users. The SAFE TECH Act reins in the cyberlibertarian ethos of over-expansive interpretations of Section 230, permitting courts to carefully weigh and assess evidence in cases where impunity is now preemptively assumed,” said Frank Pasquale, Author of The Black Box Society and Professor at Brooklyn Law School.

“It is unacceptable that courts have interpreted Section 230 to provide Big Tech platforms with blanket immunity from wrongdoing. Congress never intended Section 230 to shield companies from all civil and criminal liability. Reforms proposed by Sens. Warner and Hirono are an important step in the right direction. It is time to hold Big Tech accountable for the harms they cause children and families and other vulnerable populations," said James P. Steyer, Founder and CEO, Common Sense.

“The SAFE TECH Act aims to hold social media giants accountable for spreading harmful misinformation and hateful language that affects Black communities and limits our voting power,” said Brandon Tucker, Sr. Director of Policy & Government Affairs at Color Of Change. “Social media companies have used Section 230 as a shield against legal repercussions for their continued civil rights violations across their platforms. When we released our Black Tech Agenda and Scorecard last year, we made sure that the SAFE TECH Act was a key criterion in marking legislators’ progress toward advancing tech policy solutions with a racial justice framework. We call on members of Congress to support this critical legislation to protect Black people’s rights and safety online.”

“It has become abundantly clear that disinformation and hate on social media can create real-world harms – whether it’s anti-vaxx misinformation, election-related lies, or hate, it is now clear that there is a significant threat to human life, civil rights, and national security. The problem is crazy incentives, where bad actors can freely spread hate and misinformation, platforms profit from traffic regardless of whether it is productive or damaging, but the costs are borne by the public and society at large. This timely bill forensically delineates the harms and ensures perpetrators and enablers pay a price for the harms they create. In doing so, it reflects our desire for better communication technologies, which enhance our right to speak and be heard, and that also respect our fundamental rights to life and safety,” said Imran Ahmed, CEO, Center for Countering Digital Hate. 

“Senator Mark Warner is a leader in ensuring that technology supports democracy even as it advances innovation. This legislation removes obstacles to enforcement against online discrimination, cyber-stalking, and targeted harassment and incentivizes platforms to move past the current, ineffective whack-a-mole approach to harms,” said Karen Kornbluh, Former US Ambassador to the Organization for Economic Co-operation and Development.

Full text of the legislation is available here.

###

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), a member of the Senate Banking Committee and a lead author of the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010, which created the Consumer Financial Protection Bureau (CFPB), released the following statement after the Supreme Court announced it will hear arguments next term in a case with far-reaching implications for the constitutionality of the CFPB, CFPB v. Community Financial Services Association of America:

“Congress created the Consumer Financial Protection Bureau after the financial crisis to enforce consumer protection laws and make sure that banks, credit card companies, and other financial institutions aren’t abusing their powers to take advantage of everyday Americans. If the Fifth Circuit’s decision, which could make every rule put forward by the CFPB unconstitutional, is permitted to stand, there will be financial chaos as all sorts of transactions governed by CFPB policies could grind to a halt, and consumers would be left without the protections they expect and deserve.”

Since its creation in 2010, the CFPB has recovered nearly $15 billion in financial relief for customers.

###

WASHINGTON – With the privacy debate receiving renewed attention in Congress, U.S. Sens. Mark R. Warner (D-VA), Deb Fischer (R-NE), Amy Klobuchar (D-MN), and John Thune (R-SD) and Reps. Lisa Blunt Rochester (D-DE-AL) and Anthony Gonzalez (R-OH-16) today announced that their bipartisan, bicameral DETOUR Act – legislation that would prevent large online platforms from using deceptive user interfaces, known as “dark patterns,” to trick consumers into handing over their personal data – has picked up several new endorsements.

“We are pleased to see growing momentum behind our bipartisan effort to ban these manipulative practices,” said the members of Congress today. “There’s an increasing consensus in Congress that Americans should be able to make informed choices about handing over their data to large platform companies.”

The term “dark patterns” is used to describe online interfaces in websites and apps designed to intentionally manipulate users into taking actions they would otherwise not. These design tactics, drawn from extensive behavioral psychology research, are frequently used by social media platforms to mislead consumers into agreeing to settings and practices advantageous to the company.

The DETOUR Act would also prohibit large platforms from deploying features that encourage compulsive usage by children and from conducting behavioral experiments without a consumer’s consent.

"The American Psychological Association supports the efforts of Senators Mark Warner, Deb Fischer, Amy Klobuchar and John Thune to reduce harmful practices and deceptive tactics by social media companies. These practices can be especially harmful to children, but adults are also susceptible,” said Mitch Prinstein, PhD, Chief Science Officer at the American Psychological Association. “Through my research and that of my colleagues in psychological science, we increasingly understand how these companies can mislead individuals. This is why we support the DETOUR Act and its aim to protect social media users.”

“Social media companies often trick users into giving up their personal data – everything from their thoughts and fears to their likes and dislikes – which they then sell to advertisers. These practices are designed to exploit people; not to serve them better. Senator Warner and Senator Fischer’s DETOUR Act would put a stop to the destructive and deceptive use of dark patterns,” said Imran Ahmed, CEO of the Center for Countering Digital Hate.

“The DETOUR Act is an important step towards curbing Big Tech's unfair design choices that manipulate users into acting against their own interests. We are particularly excited by the provision that prohibits designs that cultivate compulsive use in children,” said Josh Golin, Executive Director of Fairplay. “Over the past year, we've heard a lot of talk from members of Congress about the need to protect children and teens from social media harms. It's time to put those words into action - pass the DETOUR Act!”

“The DETOUR Act proposed by Sen. Warner and co-sponsors represents a positive and important step to protect American consumers. DETOUR provides a mechanism for independent oversight over large technology companies and curtailing the ability of these companies to use deceptive and manipulative design practices, such as ‘dark patterns,’ which have been shown to produce substantial harms to users,” said Colin M. Gray, PhD, Associate Professor at Purdue University. “This legislation provides a foothold for regulators to better guard against deceptive and exploitative practices that have become rampant in many large technology companies, and which have had outsized impacts on children and underserved communities.”

“The proposed legislation represents an important step towards reducing big tech companies’ use of dark patterns that prioritize user engagement over well-being,” said Katie Davis, EdD, Associate Professor at the University of Washington. “As a developmental scientist, I’m hopeful the DETOUR Act will encourage companies to adopt a child-centered approach to design that places children’s well-being front and center, reducing the burden on parents to look out for and avoid dark patterns in their children’s technology experiences.”

The legislation was also previously supported by Mozilla, Common Sense, and the Center for Digital Democracy. Full text of the DETOUR Act is available here.

###

 

WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) led a bipartisan group of colleagues in reintroducing the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, legislation that will encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings, if they so choose. Sens. Richard Blumenthal (D-CT), Lindsey Graham (R-SC), Josh Hawley (R-MO), and Amy Klobuchar (D-MN) joined Sen. Warner in introducing the legislation.

“The tremendous dominance of a handful of large social media platforms has major downsides – including few options for consumers who face a marketplace with just a few major players and little in the way of real competition,” the senators said. “As we learned in the Microsoft antitrust case, interoperability and portability are powerful tools to restrain anti-competitive behaviors and promote innovative new companies. By making it easier for social media users to easily move their data or to continue to communicate with their friends after switching platforms, startups will be able to compete on equal terms with the biggest social media companies. Additionally, empowering trusted custodial companies to step in on behalf of users to better manage their accounts across different platforms will help balance the playing field between consumers and companies. In other words – by enabling portability, interoperability, and delegatability, this bill will create long-overdue requirements that will boost competition and give consumers the power to move their data from one service to another.”

Online communications platforms have become vital to the economic and social fabric of the nation, but network effects and consumer lock-in have entrenched a select number of companies’ dominance in the digital market and enhanced their control over consumer data. 

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act would increase market competition, encourage innovation, and increase consumer choice by requiring large communications platforms (products or services with over 100 million monthly active users in the U.S.) to:

  • Make their services interoperable with competing communications platforms.
  • Permit users to easily port their personal data in a structured, commonly used and machine-readable format.
  • Allow users to delegate trusted custodial services, which are required to act in a user’s best interests through a strong duty of care, with the task of managing their account settings, content, and online interactions.  

“Markets work when consumers have a choice and know what's going on. The ACCESS Act is an important step toward reestablishing this dynamic in the market for tech services. We must get back to the conditions that make markets work: when consumers know what they give a firm and what they get in return; and if they don't like the deal, they can take their business elsewhere. By giving consumers the ability to delegate decisions to organizations working on their behalf, the ACCESS Act gives consumers some hope that they can understand what they are giving up and getting in the opaque world that the tech firms have created. By mandating portability, it also gives them a realistic option of switching to another provider,” Paul Romer, New York University Professor of Economics and Nobel Prize winner in Economics, said.

“Interoperability is a key tool for promoting competition on and against dominant digital platforms. For social networks in particular, interoperability is needed to make it easy for users to switch to a new social network. Until we have clear and effective interoperability requirements, it will be hard for users to leave a social network that fails to reflect their values, protect their privacy, or offer the best experience. Whatever our reasons for switching to a new social network, the ACCESS Act can make it easier by requiring the largest platforms to offer interoperability with competitors. We all stand to benefit from the greater competition that an interoperable world can create,” Charlotte Slaiman, Competition Policy Director at Public Knowledge, said.

"We now understand that the dominant tech platforms' exclusive control over the data we create as we interact with them is the source of extraordinary market power. That power distorts markets, reduces innovation and limits consumer choice. By requiring interoperability, the ACCESS Act empowers consumers, levels the playing field and opens the market to competition. Anyone who believes that markets work best when consumers are able to make informed choices should support this Act,” Brad Burnham, Partner and Co-Founder at Union Square Ventures, said.

“The reintroduction of the ACCESS Act in the Senate is a critically important step forward for empowering consumers with the freedom to control their own data and enable consumers to leave the various walled gardens of today’s social media platforms. The ACCESS Act literally does what it says—it would give consumers the option to choose better services without having to balance the unfair choice of abandoning their personal network of family and friends in order to seek better products in the market. The Senate needs to move forward as soon as possible to vote on the ACCESS Act,” Eric Migicovsky, Founder and CEO of Beeper, said.

Sen. Warner first introduced the ACCESS Act in 2019 and has been raising concerns about the implications of the lack of competition in social media for years.

Sen. Warner is one of Congress’ leading voices in demanding accountability and user protections from social media companies. In addition to the ACCESS Act, Sen. Warner has introduced and written numerous bills designed to improve transparency, privacy, and accountability on social media. These include the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act, legislation that would allow social media companies to be held accountable for enabling cyber-stalking, targeted harassment, and discrimination across platforms; the Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data (DASHBOARD) Act, bipartisan legislation that would require data harvesting companies to tell consumers and financial regulators exactly what data they are collecting from consumers and how it is being leveraged by the platform for profit; and the Deceptive Experiences to Online Users Reduction (DETOUR) Act, bipartisan and bicameral legislation that would prohibit large online platforms from using deceptive user interfaces, known as “dark patterns,” to trick consumers into handing over their personal data and would prohibit these platforms from using features that result in compulsive usage by children.

Full text of the bill is available here. One-pager of the legislation is available here.

###

 

WASHINGTON – Ahead of Wednesday’s Senate hearing with the head of Instagram, U.S. Sens. Mark R. Warner (D-VA), Deb Fischer (R-NE), Amy Klobuchar (D-MN), and John Thune (R-SD) along with Reps. Lisa Blunt Rochester (D-DE-AL) and Anthony Gonzalez (R-OH-16) have re-introduced the Deceptive Experiences to Online Users Reduction (DETOUR) Act to prohibit large online platforms from using deceptive user interfaces, known as “dark patterns,” to trick consumers into handing over their personal data. The DETOUR Act would also prohibit these platforms from using features that result in compulsive usage by children.

The term “dark patterns” is used to describe online interfaces in websites and apps designed to intentionally manipulate users into taking actions they would otherwise not. These design tactics, drawn from extensive behavioral psychology research, are frequently used by social media platforms to mislead consumers into agreeing to settings and practices advantageous to the company. 

“For years dark patterns have allowed social media companies to use deceptive tactics to convince users to hand over personal data without understanding what they are consenting to. The DETOUR Act will end this practice while working to instill some level of transparency and oversight that the tech world currently lacks,” said Sen. Warner, Chairman of the Senate Select Committee on Intelligence and former technology executive. “Consumers should be able to make their own informed choices on when to share personal information without having to navigate intentionally misleading interfaces and design features deployed by social media companies.” 

“Manipulative user interfaces that confuse people and trick consumers into sharing access to their personal information have become all too common online. Our bipartisan legislation would rein in the use of these dishonest interfaces and boost consumer trust. It’s time we put an end to ‘dark patterns’ and other manipulative practices to protect children online and ensure the American people can better protect their personal data,” said Sen. Fischer, a member of the Senate Commerce Committee.

“Dark patterns are manipulative tactics used to trick consumers into sharing their personal data. These tactics undermine consumers’ autonomy and privacy, yet they are becoming pervasive on many online platforms. This legislation would help prevent the major online platforms from using such manipulative tactics to mislead consumers, and it would prohibit behavioral experiments on users without their informed consent,” said Sen. Klobuchar, a member of the Senate Commerce and Judiciary Committees.

“We live in an environment where large online operators often deploy manipulative practices or ‘dark patterns’ to obtain consent to collect user data,” said Sen. Thune, ranking member of the Senate Commerce Committee’s Subcommittee on Communications, Media, and Broadband. “This bipartisan legislation would create a path forward to strengthen consumer transparency by holding large online operators accountable when they subject their users to behavioral or psychological research for the purpose of promoting engagement on their platforms.”

“My colleagues and I are introducing the DETOUR Act because Congress and the American public are tired of tech companies evading scrutiny and avoiding accountability for their actions. Despite congressional hearings and public outcries, many of these tech companies continue to trick and manipulate people into making choices against their own self-interest,” said Rep. Lisa Blunt Rochester. “Our bill would address some common tactics these companies use, like intentionally deceptive user interfaces that trick people into handing over their personal information. Our children, seniors, veterans, people of color, even our very way of life is at stake. We must act. And today, we are.”

“Social media has connected our communities, but also had detrimental effects on our society. Big tech companies that control these platforms currently have unregulated access to a wealth of information about their users and have used nontransparent methods, such as dark patterns, to gather additional information and manipulate users,” said Rep. Anthony Gonzalez. “The DETOUR Act would make these platforms more transparent through prohibiting the use of dark patterns. We live in a transformative period of technology, and it is important that the tech which permeates our day to day lives is transparent.”

Dark patterns can take various forms, often exploiting the power of defaults to push users into agreeing to terms stacked in favor of the service provider. Some examples of these actions include: a deliberate obscuring of alternative choices or settings through design or other means; the use of privacy settings that push users to ‘agree’ as the default option, while users looking for more privacy-friendly options often must click through a much longer process, detouring through multiple screens. Other times, users cannot find the alternative option, if it exists at all, and simply give up looking.

The result is that large online platforms have an unfair advantage over users and potential competitors in forcing consumers to give up personal data such as their contacts, messages, web activity, or location to the benefit of the company.

“Tech companies have clearly demonstrated that they cannot be trusted to self-regulate.  So many companies choose to utilize manipulative design features that trick kids into giving up more personal information and compulsive usage of their platforms for the sake of increasing their profits and engagement without regard for the harm it inflicts on kids,” said Jim Steyer, CEO of Common Sense. “Common Sense supports Senators Warner and Fischer and Representatives Blunt Rochester and Gonzalez on this bill, which would rightfully hold companies accountable for these practices so kids can have a healthier and safer online experience.”

“'Dark patterns' and manipulative design techniques on the internet deceive consumers. We need solutions that protect people online and empower consumers to shape their own experience. We appreciate Senator Warner and Senator Fischer's work to address these misleading practices,” said Jenn Taylor Hodges, Head of U.S. Public Policy at Mozilla.

“Manipulative design, efforts to undermine users’ independent decision making, and secret psychological experiments conducted by corporations are everywhere online. The exploitative commercial surveillance model thrives on taking advantage of unsuspecting users. The DETOUR Act would put a stop to this: prohibiting online companies from designing their services to impair autonomy and to cultivate compulsive usage by children under 13. It would also prohibit companies from conducting online user experiments without consent. If enacted, the DETOUR Act will make an important contribution to living in a fairer and more civilized digital world,” said Katharina Kopp, Director of Policy at Center for Digital Democracy.

The Deceptive Experiences To Online Users Reduction (DETOUR) Act aims to curb manipulative behavior by prohibiting the largest online platforms (those with over 100 million monthly active users) from relying on user interfaces that intentionally impair user autonomy, decision-making, or choice. The legislation:

  • Prohibits large online operators from designing, modifying, or manipulating user interfaces with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.
  • Prohibits subdividing or segmenting consumers for the purposes of behavioral experiments without a consumer’s informed consent, which cannot be buried in a general contract or service agreement. It also requires large online operators to disclose to users and the public, not less than once every 90 days, any behavioral or psychological experiments. Additionally, the bill would require large online operators to create an internal Independent Review Board to provide oversight on these practices to safeguard consumer welfare.
  • Prohibits user design intended to create compulsive usage among children under the age of 13 (as currently defined by the Children’s Online Privacy Protection Act).
  • Directs the FTC to create rules within one year of enactment to carry out the requirements related to informed consent, Independent Review Boards, and Professional Standards Bodies.

Sen. Warner first introduced the DETOUR Act in 2019 and has been raising concerns about the implications of social media companies’ reliance on dark patterns for years. In 2014, Sen. Warner asked the FTC to investigate Facebook’s use of dark patterns in an experiment involving nearly 700,000 users designed to study the emotional impact of manipulating information on their News Feeds.

Sen. Warner is one of Congress’ leading voices in demanding accountability and user protections from social media companies. In addition to the DETOUR Act, Sen. Warner has introduced and written numerous bills designed to improve transparency, privacy, and accountability on social media. These include the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act, legislation that would allow social media companies to be held accountable for enabling cyber-stalking, targeted harassment, and discrimination across platforms; the Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data (DASHBOARD) Act, bipartisan legislation that would require data harvesting companies to tell consumers and financial regulators exactly what data they are collecting from consumers and how it is being leveraged by the platform for profit; and the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, legislation that would encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings, if they so choose.

Full text of the bill is available here.

###

 

WASHINGTON – As Labor Day weekend approaches, U.S. Sens. Mark Warner and Tim Kaine (both D-VA) along with Sens. Bob Menendez and Cory Booker (both D-NJ) are pressing product safety regulators to include beach umbrellas in their testing protocols as they work to develop new safety standards for umbrellas sold to consumers. It’s the latest push in the senators’ continued effort to protect beachgoers following multiple accidents involving wind-swept beach umbrellas, including in 2016, when Lottie Michelle Belk of Chester, Va., was struck in the torso and killed while vacationing in Virginia Beach with her family.

Sens. Warner and Kaine have previously pushed for increased safety measures in a 2019 letter to the U.S. Consumer Product Safety Commission (CPSC). In addition, the senators have called for a public safety campaign to educate the public about the dangers of beach umbrellas.  

“Given the grave danger posed by beach umbrellas we feel it is imperative that ASTM include beach umbrellas in any new test methods,” the senators wrote to ASTM International Subcommittee Chair Ben Favret. “Summer is in full swing, and as millions of newly vaccinated Americans emerge from their homes to spend time at the shore, we must do all we can to ensure the safety of beach umbrellas.”

ASTM International—a nonprofit that often partners with the U.S. Consumer Product Safety Commission (CPSC) to develop technical standards for a wide range of materials, products, systems, and services—last year began testing the safety and durability of market umbrellas in various wind conditions. Unfortunately, it has continued to exclude beach umbrellas from this testing regimen, instead limiting it to patio and weighted-base umbrellas.

Assessing the risks associated with using certain products under specific conditions is a critical step towards developing new product safety standards, recommendations, and best practices to mitigate the risk.    

According to the U.S. Consumer Product Safety Commission, an estimated 2,800 people sought treatment at emergency rooms for beach umbrella-related injuries from 2010 to 2018.

 Full text of the letter is below and can be downloaded here:

Ben Favret

Subcommittee Chair, ASTM F15.79

ASTM International

100 Barr Harbor Drive

West Conshohocken, PA 19428

 

Dear Mr. Favret:

We write to urge ASTM International to update its testing method standard to account for wind speed as it relates to beach umbrellas.

As you note on your website, “[t]he deleterious effects of a Market Umbrellas [sic] being blow[n] over or broken by wind forces can range from acute injury, such as cuts or bruises to blunt force trauma, such as concussions or broken bones and in some cases death.”  Further, you state that “[t]he lack of any voluntary standard for the safe performance of Market Umbrellas puts millions of consumers and employees around the world at risk unnecessarily.”  Indeed, as the Consumer Product Safety Commission (CPSC) stated in a June 2019 letter to the Senate, over the nine-year period from 2010-2018, an estimated 2,800 people sought treatment in emergency rooms for injuries related to beach umbrellas.  A majority of those injuries were caused by a wind-blown beach umbrella. 

In March 2021, the CPSC wrote to ASTM requesting that it “expand the standard to address fully the hazards of injuries and death due to beach umbrellas implanted in the sand.”  In addition, the agency suggested “mentioning the known fatality in the introduction of the standard, along with the injury data already there”.  We could not agree more. Given the grave danger posed by beach umbrellas we feel it is imperative that ASTM include beach umbrellas in any new test methods.

Summer is in full swing, and as millions of newly vaccinated Americans emerge from their homes to spend time at the shore, we must do all we can to ensure the safety of beach umbrellas. We appreciate ASTM’s willingness to consider this issue.  Should you have further questions please contact Shelby Boxenbaum in Senator Menendez’s office at 202-224-4744.

Sincerely, 

###

WASHINGTON – U.S. Senator Mark Warner (D-VA), Chairman of the Senate Select Committee on Intelligence; Senator Amy Klobuchar (D-MN), Chairwoman of the Senate Subcommittee on Competition Policy, Antitrust, and Consumer Rights; and Senator Chris Coons (D-DE), Chairman of the Subcommittee on Privacy, Technology, and the Law, sent a letter to Facebook CEO Mark Zuckerberg asking about Facebook’s decision to terminate the ability of researchers at New York University’s Ad Observatory Project to access its platform.

The independent researchers were studying political advertising on Facebook. Their research has produced several key discoveries, including highlighting a lack of transparency in how advertisers target political ads on Facebook.

“We were surprised to learn that Facebook has terminated access to its platform for researchers connected with the NYU Ad Observatory project. The opaque and unregulated online advertising platforms that social media companies maintain have allowed a hotbed of disinformation and consumer scams to proliferate, and we need to find solutions to those problems,” the senators wrote.

The senators continued later in the letter: “...independent researchers are a critical part of the solution. While we agree that Facebook must safeguard user privacy, it is similarly imperative that Facebook allow credible academic researchers and journalists like those involved in the Ad Observatory project to conduct independent research that will help illuminate how the company can better tackle misinformation, disinformation, and other harmful activity that is proliferating on its platforms.”

The full text of the letter can be found below and HERE.

 

Dear Mr. Zuckerberg,  

As you know, we are committed to protecting privacy for all Americans while eliminating the scourge that is disinformation and misinformation, particularly with regard to elections and the COVID-19 pandemic.

We were surprised to learn that Facebook has terminated access to its platform for researchers connected with the NYU Ad Observatory project. The opaque and unregulated online advertising platforms that social media companies maintain have allowed a hotbed of disinformation and consumer scams to proliferate, and we need to find solutions to those problems. The Ad Observatory project describes itself as “nonpartisan [and] independent…focused on improving the transparency of online political advertising.” Research efforts studying online advertising have helped inform consumers and policymakers about the extent to which your ad platform has been a vector for consumer scams and frauds, enabled hiring discrimination and discriminatory ads for financial services, and circumvented accessibility laws. Such work to improve the integrity of online advertising is critical to strengthening American democracy.

We appreciate Facebook’s ongoing efforts to address misinformation and disinformation on its platforms. But there is much more to do, and independent researchers are a critical part of the solution. While we agree that Facebook must safeguard user privacy, it is similarly imperative that Facebook allow credible academic researchers and journalists like those involved in the Ad Observatory project to conduct independent research that will help illuminate how the company can better tackle misinformation, disinformation, and other harmful activity that is proliferating on its platforms.

We therefore ask that you provide written answers to the following questions by August 20, 2021:

  1. How many accounts of researchers and journalists were terminated or otherwise disabled during 2021, including but not limited to researchers from the NYU Ad Observatory?
  2. Please explain why you terminated those accounts referenced in question 1. If you believe that the researchers violated Facebook’s terms of service, please describe how, in detail.
  3. If the researchers’ access violated Facebook’s terms of service, what steps are you taking to revise these terms to better accommodate research that improves the security and integrity of your platform?
  4. Facebook’s public statement about its decision to terminate the Ad Observatory researchers’ access said that research should not “compromis[e] people’s privacy.” Please explain how the researchers’ work compromised privacy of end-users.
  5. The Ad Observatory project asked Facebook users to voluntarily install a browser extension that would provide information available to that user about the ads that the user was shown. Facebook’s public statement says that the extension “collected data about Facebook users who did not install it or consent to the collection.” Were these non-consenting “users” advertisers whose advertising information was being collected and analyzed, other individual Facebook users, or both?
  6. Facebook has suggested that the NYU researchers potentially violated user privacy because the browser extension could have exposed the identity of users who liked or commented on an advertisement.  However, both researchers at NYU and other independent researchers have confirmed that the extension did not collect information beyond the frame of the ad, and that the program could not collect personal posts.  Given these technical constraints, what evidence does Facebook have to suggest that this research exposed personal information of non-consenting individuals?
  7. Facebook’s public statement explaining its decision to revoke access for the NYU researchers states that Facebook made this decision “in line with our privacy program under the FTC Order.” FTC Acting Bureau Director Samuel Levine sent you a letter dated August 5, 2021 in which he noted that “Had you honored your commitment to contact us in advance, we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest. Indeed, the FTC supports efforts to shed light on opaque business practices.”
    1. Why didn’t Facebook contact the FTC about its plans to disable researchers’ accounts?
    2. Does Facebook maintain that the FTC consent decree or other orders required it to disable access for the Ad Observatory researchers? If so, please explain with specificity which sections of which decree(s) compel that response.
    3. Are there measures Facebook could take to authorize the Ad Observatory research while remaining in compliance with FTC requirements?
    4. In light of Mr. Levine’s statement that the FTC Order does not require Facebook to disable the access of the Ad Observatory researchers, does Facebook intend to restore the Ad Observatory researchers’ access?
  8. In its public statement, Facebook highlighted tools that it offers to the academic community, including its Facebook Open Research and Transparency (FORT) initiative.  However, public reporting suggests that this tool only includes data from the three-month period before the November 2020 election, and further that it does not include ads seen by fewer than 100 people.
    1. Why does Facebook limit this data set to the three months prior to the November 2020 election?
    2. Why does Facebook limit this data set to ads seen by more than 100 people?
    3. What percentage of unique ads on Facebook are seen by more than 100 people?

We look forward to your prompt responses.

###

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, released the statement below, following a report that Facebook disabled the accounts of researchers studying political ads on the social network: 

“This latest action by Facebook to cut off an outside group’s transparency efforts – efforts that have repeatedly facilitated revelations of ads violating Facebook’s Terms of Service, ads for frauds and predatory financial schemes, and political ads that were improperly omitted from Facebook’s lackluster Ad Library – is deeply concerning. For several years now, I have called on social media platforms like Facebook to work with, and better empower, independent researchers, whose efforts consistently improve the integrity and safety of social media platforms by exposing harmful and exploitative activity. Instead, Facebook has seemingly done the opposite. It’s past time for Congress to act to bring greater transparency to the shadowy world of online advertising, which continues to be a major vector for fraud and misconduct.”

###

WASHINGTON – U.S. Senators Mark Warner (D-Va.), Bob Menendez (D-N.J.), and Mazie Hirono (D-Hawaii) today slammed Facebook for failing to remove vaccine misinformation from its platforms. The rapid spread of dangerous misinformation across social media could hamper the efforts of public health officials as they work to vaccinate hard-to-reach communities and hesitant individuals, representing a serious concern for public safety. Studies show that roughly 275,000 Facebook users belong to anti-vaccine groups on the platform. 

“As public health experts struggle to reach individuals who are vaccine hesitant, epidemiologists warn that low vaccination rates coupled with the relaxing of mask mandates could result in new COVID-19 outbreaks,” the senators wrote in a letter to Facebook CEO Mark Zuckerberg. “Moreover, most public health officials agree that because herd immunity in the U.S. is now unlikely, ‘continued immunizations, especially for people at highest risk because of age, exposure or health status, will be crucial to limiting the severity of outbreaks, if not their frequency’. In short, ‘vaccinations remain the key to transforming the virus into a controllable threat’.” 

A recent report from Markup.org’s “Citizen Browser project” found that there are 117 active anti-vaccine groups on Facebook. Combined, the groups had roughly 275,000 members. The study also found that Facebook was recommending health groups to its users, including anti-vaccine groups and pages that spread COVID-19 misinformation and propaganda.

The lawmakers asked Zuckerberg a series of questions, including why users were recommended vaccine misinformation; how long anti-vaccine groups and pages remained on the platform before being taken down; and what specific steps the company is taking to ensure its platforms do not recommend vaccine misinformation to its users.

A copy of the letter can be found here and below:

 

Dear Mr. Zuckerberg,

We write to express our concern over recent reporting alleging that Facebook failed to remove vaccine misinformation from its platforms. As the U.S. struggles to reach vaccine hesitant individuals and the world grapples with new variants, it is more important than ever that social media companies such as Facebook ensure that their platforms are free from disinformation.

In a February 2021 blog post, Facebook promised to expand “the list of false claims [it] will remove to include additional debunked claims about the coronavirus and vaccines. This includes claims such as: COVID-19 is man-made or manufactured; Vaccines are not effective at preventing the disease they are meant to protect against; It’s safer to get the disease than to get the vaccine; [and] Vaccines are toxic, dangerous or cause autism.” Yet according to data from Markup.org’s “Citizen Browser project,” misinformation regarding COVID-19 and vaccines is readily available on Facebook. Madelyn Webb, a senior researcher at Media Matters, found 117 active anti-vaccine groups on Facebook as late as April 2021. Combined, those groups had roughly 275,000 members. Even more troubling is the finding that Facebook “continued to recommend health groups to its users, including blatantly anti-vaccine groups and pages explicitly founded to propagate lies about the pandemic.” As public health experts struggle to reach individuals who are vaccine hesitant, epidemiologists warn that low vaccination rates coupled with the relaxing of mask mandates could result in new COVID-19 outbreaks. Moreover, most public health officials agree that because herd immunity in the U.S. is now unlikely, “[c]ontinued immunizations, especially for people at highest risk because of age, exposure or health status, will be crucial to limiting the severity of outbreaks, if not their frequency.” In short, “vaccinations remain the key to transforming the virus into a controllable threat.”

In March 2021, Senator Warner wrote to you expressing these same concerns. Your April 2021 response failed to directly answer the questions posed in his letter. Specifically, you failed to respond to a question as to why posts with content warnings about health misinformation were promoted into Instagram feeds. Given Facebook’s continued failure to remove vaccine misinformation from its platforms, we seek answers to the following questions no later than July 5, 2021.

1.    In calendar year 2021, how many users viewed vaccine-related misinformation? 

2.    In calendar year 2021, how many users were recommended anti-vaccine information or vaccine-related misinformation? 

a.    Why were these users recommended such information?

3.    In calendar year 2021, how many vaccine-related posts has Facebook removed due to violations of its vaccine misinformation policy? How many pages were removed? How many accounts were removed? How many groups were removed?

a.    On average, how long did these pages or posts remain on the platform before Facebook removed them?  

4.    What steps is Facebook taking to ensure that its platforms do not recommend vaccine-related misinformation to its users? Please be specific. 

5.    What steps is Facebook taking to ensure that individuals who search out anti-vaccine content are not subsequently shown additional misinformation?

6.    In March 2019, Facebook said it would stop recommending groups that contained vaccine-related misinformation content. It wasn’t until February 2021 that the company announced it would remove such content across the platform. Why did it take Facebook nearly two years to make this decision? 

Thank you in advance for your prompt response to the above questions. 

Sincerely, 

###

WASHINGTON – Today U.S. Sens. Mark R. Warner (D-VA), Mazie Hirono (D-HI) and Amy Klobuchar (D-MN) announced the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act to reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, targeted harassment, and discrimination on their platforms.  

“When Section 230 was enacted in 1996, the Internet looked very different than it does today. A law meant to encourage service providers to develop tools and policies to support effective moderation has instead conferred sweeping immunity on online providers even when they do nothing to address foreseeable, obvious and repeated misuse of their products and services to cause harm,” said Sen. Warner, a former technology entrepreneur and the Chairman of the Senate Select Committee on Intelligence. “Section 230 has provided a ‘Get Out of Jail Free’ card to the largest platform companies even as their sites are used by scam artists, harassers and violent extremists to cause damage and injury. This bill doesn’t interfere with free speech – it’s about allowing these platforms to finally be held accountable for harmful, often criminal behavior enabled by their platforms to which they have turned a blind eye for too long.” 

“Section 230 was passed in 1996 to incentivize then-nascent internet companies to voluntarily police illegal and harmful content posted by their users. Now, twenty-five years later, the law allows some of the biggest companies in the world to turn a blind eye while their platforms are used to violate civil and human rights, stalk and harass people, and defraud consumers—all without accountability,” Sen. Hirono said. “The SAFE TECH Act brings Section 230 into the modern age by creating targeted exceptions to the law’s broad immunity. Internet platforms must either address the serious harms they impose on society or face potential civil liability.”

“We need to be asking more from big tech companies, not less. How they operate has a real-life effect on the safety and civil rights of Americans and people around the world, as well as our democracy. Holding these platforms accountable for ads and content that can lead to real-world harm is critical, and this legislation will do just that,” said Sen. Klobuchar. 

The SAFE TECH Act would make clear that Section 230:

·       Doesn’t apply to ads or other paid content – ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers with ads enabling frauds and scams;

·       Doesn’t bar injunctive relief – allowing victims to seek court orders where misuse of a provider’s services is likely to cause irreparable harm; 

·       Doesn’t impair enforcement of civil rights laws – maintaining the vital and hard-fought protections from discrimination even when activities or services are mediated by internet platforms; 

·       Doesn’t interfere with laws that address stalking/cyber-stalking or harassment and intimidation on the basis of protected classes – ensuring that victims of abuse and targeted harassment can hold platforms accountable when they directly enable harmful activity;

·       Doesn’t bar wrongful death actions – allowing the family of a decedent to bring suit against platforms where they may have directly contributed to a loss of life;

·       Doesn’t bar suits under the Alien Tort Claims Act – potentially allowing victims of platform-enabled human rights violations abroad (like the survivors of the Rohingya genocide) to seek redress in U.S. courts against U.S.-based platforms.

These changes to Section 230 do not guarantee that platforms will be held liable in all, or even most, cases. The proposed changes do not subject platforms to strict liability, and the current legal standards for plaintiffs still present steep obstacles. Rather, these reforms ensure that victims have an opportunity to raise claims without Section 230 serving as a categorical bar to their efforts to seek legal redress for harms they suffer – even when directly enabled by a platform’s actions or design. 

Bill text is available here. A three-page summary is available here. Frequently asked questions about the bill are available here. A redline of Section 230 is available here.

“Social media platforms and the tech companies that run them must protect their users from the growing and dangerous combination of misinformation and discrimination. As we have repeatedly seen, these platforms are being used to violate the civil rights of Black users and other users of color by serving as virtually-unchecked homes for hateful content and in areas such as housing and employment discrimination through the targeting and limiting of who can see certain advertisements. Section 230 must be strengthened to ensure that these online communities are not safe harbors for the violations of civil rights laws. LDF supports Senator Warner and Senator Hirono’s bill as it addresses these critical concerns,” said Lisa Cylar Barrett, Director of Policy, NAACP Legal Defense and Educational Fund, Inc. (LDF)

“Tech companies must be held accountable for their roles in facilitating genocide, extremist violence and egregious civil rights abuses. We applaud Senators Hirono and Warner for their leadership in introducing a robust bill that focuses on supporting targets of civil and human rights abuses on social media while also addressing cyber-harassment and other crimes stemming from the spread of hate and disinformation. The sweeping legal protections enjoyed by tech platforms cannot continue,” said Jonathan A. Greenblatt, CEO of ADL (Anti-Defamation League).

“Platforms should not profit from targeting employment ads toward White users, or from targeting voter suppression ads toward Black users. Senator Warner and Senator Hirono’s comprehensive bill makes it clear that Section 230 does not give platforms a free pass to violate civil rights laws, while also preserving the power of platforms to remove harmful disinformation,” said Spencer Overton, President, Joint Center for Political and Economic Studies.

“I applaud the SAFE TECH Act introduced by Sens. Warner and Hirono, which provides useful modifications to Section 230 of the 1996 Communications Decency Act to limit the potential negative impacts of commercial advertising interests while continuing to protect the anti-harassment and civil and human rights interests of those who may be harmed through wrongful online activity,” said Ramesh Srinivasan, Professor at the UCLA Department of Information Studies and Director of UC Digital Cultures Lab.

“Congress enacted 47 USC 230 in the mid-1990s to support online innovation and free speech, but the way in which courts have very generously read Section 230 has meant there is no legal mechanism that has done more to insulate intermediaries from legal accountability for distributing, amplifying, and carefully delivering unlawful content and facilitating dangerous antisocial connections. Racist, misogynist, and violent antidemocratic forces coalesce online because intermediaries rarely have to account for their social impacts. Senator Warner and Senator Hirono’s proposed changes create a new and necessary incentive for such companies to be far more mindful of the social impacts of their services in areas of law that are of vital importance to the health of the networked information environment. It does this while not abandoning the protection for intermediaries' distribution of otherwise lawful content,” said Olivier Sylvain, Professor at Fordham Law School and Director of the McGannon Center for Communications Research.

“We applaud Senator Warner and Senator Hirono’s important effort to reform Section 230 and thus bring greater accountability to the tech sector. Warner’s proposed reforms are crucial to protecting civil rights and making the web safer for those who have been negatively impacted by much that happens there, both online and off.  We thank Senator Warner and Senator Hirono for tackling this critically important issue,” Wendy Via, Cofounder, Global Project Against Hate and Extremism, said.

“The Cyber Civil Rights Initiative welcomes this effort to protect civil rights in the digital age and to hold online intermediaries accountable for their role in the silencing and exploitation of vulnerable communities. This bill offers urgently needed provisions to limit and correct the overzealous interpretation of Section 230 that has granted a multibillion dollar industry immunity and impunity for profiting from irreparable injury,” said Mary Anne Franks, President, Cyber Civil Rights Initiative and Danielle K. Citron, Vice President, Cyber Civil Rights Initiative.

“For too long, companies like Facebook and YouTube have undermined the rights and safety of Muslims and communities of color in the U.S. and around the world. We have urged them to take responsibility for the targeted hate and violence, including genocide, facilitated by their platforms but these companies have refused to act,” said Madihha Ahussain, Muslim Advocates Special Counsel for Anti-Muslim Bigotry. “We appreciate Senators Warner and Hirono for introducing the SAFE TECH Act, which includes essential adjustments to Section 230 and will finally hold these companies accountable for violating people’s rights.”

“The SAFE TECH Act is an important step forward for platform accountability and for the protection of privacy online. Providing an opportunity for victims of harassment, privacy invasions, and other violations to remove unlawful content is critical to stopping its spread and limiting harm,” said Caitriona Fitzgerald, Interim Associate Director and Policy Director, Electronic Privacy Information Center (EPIC).

“The SAFE TECH Act is the Section 230 reform America needs now. Over-expansive readings of Section 230 have encouraged reckless and negligent shirking by platforms of basic duties toward their users. Few if any of the drafters of Section 230 could have imagined that it would be opportunistically seized on to deregulate online arms sales, protect sellers of defective merchandise, permit genocidaires to organize online with impunity, or allow dating sites to ignore campaigns of harassment and worse against their users. The SAFE TECH Act reins in the cyberlibertarian ethos of Section 230 imperialism, permitting courts to carefully weigh and assess evidence in cases where impunity is now preemptively assumed,” Frank Pasquale, Author of The Black Box Society and Professor at Brooklyn Law School, said.

“For far too long online platforms have placed profit over accountability and decency, and allowed misinformation, algorithmic discrimination, and online hate to be weaponized. When the Communications Decency Act was passed in 1996, no one imagined it would be used to shield the most valuable companies in the world from basic civil rights compliance,” said David Brody, Counsel and Senior Fellow for Privacy and Technology, Lawyers’ Committee for Civil Rights Under Law. “This bill would make irresponsible big tech companies accountable for the digital pollution they knowingly and willfully produce, while continuing to protect free speech online. Black Americans and other communities of color are frequent targets of online hate, threats and discrimination, and many of these online behaviors would not be tolerated if they occurred in a brick-and-mortar business. It is time that big tech stop treating our communities of color like second-class citizens, and give them the protection they deserve.”

“It is unacceptable that Big Tech enjoys near total legal immunity from the harm that their platforms expose to children and families. Tech companies should not be able to hide behind Section 230 to avoid abiding by civil rights laws, court injunctions, and other protections for families and the most vulnerable in society. Reforms proposed by Sens. Warner and Hirono begin to change that. It is time to hold these companies accountable for the harms their platforms have unleashed on society,” said James P. Steyer, CEO and Founder, Common Sense.

“The deadly insurrection at the Capitol made clear that lawmakers must take immediate action to ensure multi-billion-dollar social media companies, whose business models incentivize the unchecked spread of hate-fueled misinformation and violent clickbait conspiracies, can no longer abuse Section 230’s broad protections to evade civil rights laws,” said Arisha Hatch, Color Of Change Vice President and Chief of Campaigns. “The SAFE TECH Act from Sen. Warner and Sen. Hirono is critical. The proposed reform would not only prevent power-hungry social media companies from leveraging Section 230 to turn a blind eye to civil rights violations on their platforms, but it would also incentivize them to take down dangerous paid and organic content — and establish better protections against real world harms like cyberstalking, which disproportionately impacts Black women. We strongly encourage members of Congress to support this legislation, which represents a significant step towards finally holding Big Tech accountable for their years-long role in enabling civil rights violations against Black communities.”

“After 2020, no one is asking whether online misinformation creates real-world harms - whether it's COVID and anti-vaxx misinformation, election-related lies or hate, it is now clear that action is needed to deal with unregulated digital platforms. Users can freely spread hate and misinformation, platforms profit from the traffic regardless of whether it is productive or damaging, and the costs are borne by the public and society at large. This timely bill forensically delineates the harms and ensures perpetrators and enablers pay a price for the harms they create. In doing so, it reflects our desire for richer communication technologies, which enhance our right to speak and be heard, and that also respect our fundamental rights to life and safety,” said Imran Ahmed, CEO, Center for Countering Digital Hate. 

“Our lives are at stake because hate and white supremacy are flourishing online. On January 6th we saw the results of what continuous disinformation and hate online can do with the insurrection and domestic terrorist attack on the U.S. Capitol, where five lives were lost,” said Brenda Victoria Castillo, President & CEO, National Hispanic Media Coalition. “It is time to hold online platforms accountable for their role in the radicalization and spread of extremist ideologies in our country. NHMC is proud to support Senator Warner's limited reform of Section 230, and applauds his efforts to safeguard our democracy and the Latinx community.”

“Senator Mark Warner is a leader in ensuring that technology supports democracy even as it advances innovation. His and Senator Hirono’s new Section 230 reform bill now removes obstacles to enforcement against discrimination, cyber-stalking, and targeted harassment in the online world. The events of Jan 6 demonstrated that what happens online isn’t just a game. Online conspiracy theories, discrimination, and harassment are a public danger. The Warner-Hirono bill would go a long way toward addressing these dangers, and incentivizing platforms to move past the current, ineffective whack-a-mole approach to these important online harms,” said Karen Kornbluh, Director of the Digital Innovation and Democracy Initiative at the German Marshall Fund of the US and Former US Ambassador to the Organization for Economic Co-operation and Development.

###

WASHINGTON - As tech companies and public health agencies deploy new tools to fight the spread of COVID-19 – including contact tracing apps, digital monitoring, home tests, and vaccine appointment booking – U.S. Sens. Mark R. Warner (D-VA), Richard Blumenthal (D-CT) and U.S. Representatives Anna G. Eshoo (D-CA), Jan Schakowsky (D-IL), and Suzan DelBene (D-WA) introduced the Public Health Emergency Privacy Act to set strong and enforceable privacy and data security rights for health information.

After decades of data misuse, breaches, and privacy intrusions, Americans are reluctant to trust tech firms to protect their sensitive health information – according to a recent poll, more than half of Americans would not use a contact tracing app and similar tools from Google and Apple over privacy concerns. The bicameral Public Health Emergency Privacy Act would protect Americans who use this kind of technology during the pandemic and safeguard civil liberties. Strengthened public trust will empower health authorities and medical experts to leverage new health data and apps to fight COVID-19. 

“Technologies like contact tracing, home testing, and online appointment booking are absolutely essential to stop the spread of this disease, but Americans are rightly skeptical that their sensitive health data will be kept safe and secure,” Blumenthal said. “Legal safeguards protecting consumer privacy failed to keep pace with technology, and that lapse is costing us in the fight against COVID-19. This measure sets strict and straightforward privacy protections and promises: Your information will be used to stop the spread of this disease, and no more. The Public Health Emergency Privacy Act’s commitment to civil liberties is an investment in our public health.”

“Our health privacy laws have not kept pace with what Americans have come to expect for their sensitive health data,” Warner said. “Strong privacy protections for COVID health data will only be more vital as we move forward with vaccination efforts and companies begin experimenting with things like ‘immunity passports’ to gate access to facilities and services. Absent a clear commitment from policymakers to improving our health privacy laws, as this important legislation seeks to accomplish, I fear that creeping privacy violations and discriminatory uses of health data could become the new status quo in health care and public health.” 

“I’m exceedingly proud of the American innovators, many of whom are in my congressional district, who have built technologies to combat the coronavirus. As these technologies are used, they must be coupled with policies to protect the civil liberties that define who we are as a nation,” said Eshoo. “The Public Health Emergency Privacy Act is a critical bill that will prohibit privacy invasions by preventing misuse of pandemic-related data for unrelated purposes like marketing, prohibiting the data from being used in discriminatory ways, and requiring data security and integrity measures. The legislation will give the American people confidence to use technologies and systems that can aid our efforts to combat the pandemic.”

“As we continue to respond to the devastating suffering caused by COVID-19, our country’s first and foremost public health response must be testing, testing, testing, AND manual contact tracing. Digital contact tracing can and should complement these efforts, but it is just that – complementary. However, if we do pursue digital contact tracing, consumers need clearly-defined privacy rights and strong enforcement to safeguard these rights. I am proud to re-introduce this bill with my friend and fellow Energy & Commerce Subcommittee Chairwoman Eshoo and Congresswoman DelBene, along with Senators Blumenthal and Warner,” said Schakowsky. “It’s our shared belief that the Trump Administration missed an opportunity when it failed to advocate for swift passage of this legislation. Based on how poorly the Trump Administration’s contact tracing scheme went, we all know this legislation would go a long way towards establishing the trust American consumers need – and which Big Tech has squandered, time and again – for digital contact tracing to be a worthwhile auxiliary to the Biden Administration’s plan for widespread testing and manual contact tracing.” 

“Technology has become one of our greatest tools in responding to the COVID-19 pandemic but we need to build trust with the broader public if we are going to reach its full potential. Americans need to be certain their sensitive personal information will be protected when using tracing apps and other COVID-19 response technology and this pandemic-specific privacy legislation will help build that trust,” said DelBene. “Data privacy should not end with the pandemic. We need comprehensive privacy reform to protect Americans at all times, including state preemption to create a strong, uniform national standard. I hope that this crisis has shed light on the lack of adequate digital privacy policies in our country and look forward to working with these lawmakers and others to create the necessary standards moving forward.”

The bill is co-sponsored in the Senate by U.S. Senators Michael Bennet (D-CO), Amy Klobuchar (D-MN), Edward J. Markey (D-MA), Tammy Baldwin (D-WI), Mazie K. Hirono (D-HI), Cory Booker (D-NJ), Robert Menendez (D-NJ), Angus King (I-ME), Elizabeth Warren (D-MA) and Dick Durbin (D-IL).

The bill is co-sponsored in the House of Representatives by Don Beyer (D-VA), Jerry McNerney (D-CA), Nanette Diaz Barragán (D-CA), Mark Pocan (D-WI), Bobby Rush (D-IL), Peter Welch (D-VT), Mary Gay Scanlon (D-PA), Doris Matsui (D-CA), Ted Lieu (D-CA), Mark DeSaulnier (D-CA), Jahana Hayes (D-CT), Ro Khanna (D-CA), Jesús ''Chuy'' García (D-IL), Stephen Lynch (D-MA), Raúl Grijalva (D-AZ), Barbara Lee (D-CA), Debbie Dingell (D-MI), and Peter DeFazio (D-OR). 

The Public Health Emergency Privacy Act would:

·       Ensure that data collected for public health is strictly limited for use in public health;

·       Explicitly prohibit the use of health data for discriminatory, unrelated, or intrusive purposes, including commercial advertising, e-commerce, or efforts to gate access to employment, finance, insurance, housing, or education opportunities;

·       Prevent the potential misuse of health data by government agencies with no role in public health;

·       Require meaningful data security and data integrity protections – including data minimization and accuracy – and mandate deletion by tech firms after the public health emergency;

·       Protect voting rights by prohibiting conditioning the right to vote based on a medical condition or use of contact tracing apps;

·       Require regular reports on the impact of digital collection tools on civil rights;

·       Give the public control over their participation in these efforts by mandating meaningful transparency and requiring opt-in consent; and

·       Provide for robust private and public enforcement, with rulemaking from an expert agency while recognizing the continuing role of states in legislation and enforcement.

The Public Health Emergency Privacy Act is endorsed by Access Now, Electronic Privacy and Information Center (EPIC), the Center for Digital Democracy, Color of Change, Common Sense Media, New America’s Open Technology Institute, and Public Knowledge.

“A public health crisis is not the time to give up on our privacy rights, and this bill would go a long way toward protecting those rights. COVID-19 response apps are already out there, and this bill will help ensure that the apps are distributed and used in a responsible manner that will limit the new and expansive surveillance systems companies are building. Allowing these apps to proceed unchecked would create serious privacy violations that will never be undone,” said Eric Null, U.S. Policy Manager at Access Now.

“The Public Health Emergency Privacy Act shows that privacy and public health are complementary goals. The bill requires companies to limit the collection of health data to only what is necessary for public health purposes, and crucially, holds companies accountable if they fail to do so,” said Caitriona Fitzgerald, Interim Associate Director and Policy Director with Electronic Privacy Information Center (EPIC).

“Public health measures to contain the deadly spread of COVID-19 must be effective and protect those most at risk. Where data are collected or used, they should not be misused to undermine privacy, fairness and equity, or place our civil rights in peril. The Public Health Emergency Privacy Act ensures that efforts to limit the spread of the virus truly protect all our interests,” said Katharina Kopp, Director of Policy for the Center for Digital Democracy.

“Color Of Change strongly supports the Public Health Emergency Privacy Act, as it would prevent corporate profiteering and government misuse of health data to help ensure Black people — who are disproportionately exposed to the dangers of surveillance — can operate online without fear. Profit-incentivized corporations should not be allowed to exploit loopholes to gather and sell sensitive health and location data without any regard to the safety of our communities. As the COVID-19 pandemic rages on, we need stringent and enforceable safeguards in place to protect private health information of Black people and other marginalized communities, who are most at risk of both COVID-19 and surveillance. We thank Senators Blumenthal and Warner for their leadership on this legislation, and we will continue to advocate for the highest standard of protection against the abuse of personal data,” said Color Of Change President Rashad Robinson.

“Common Sense calls on Congress to pass meaningful privacy safeguards for families. More than ever, the pandemic has highlighted how important it is that families can trust how their information is being collected, used, and shared. PHEPA is an important proposal to ensure technologies and data being used to combat COVID are used in privacy-protective ways, and it also can serve as a model for how Congress can comprehensively protect privacy in the near future,” said Ariel Fox Johnson, Senior Counsel for Global Policy with Common Sense Media. 

“OTI welcomes the re-introduction of this legislation that would establish strong safeguards to prevent personal data from being used for non-public health purposes and prevent the data from being used in a discriminatory manner. The ongoing privacy threats and urgency of the pandemic make these protections more important than ever,” said Christine Bannan, Policy Counsel at New America’s Open Technology Institute.

“As contact tracing apps and other types of COVID-19 surveillance become commonplace in the United States, this legislation will protect the privacy of Americans regardless of the type of technology used or who created it. It is critical that Congress continue to work to prevent this type of corporate or government surveillance from becoming ubiquitous and compulsory,” said Sara Collins, Policy Counsel at Public Knowledge.

###

Washington, D.C. – Today, U.S. Sen. Mark R. Warner (D-Va.) joined Sens.  Catherine Cortez Masto (D-Nev.) and Sherrod Brown (D-Ohio) and 13 of their Senate colleagues in sending a letter to Consumer Financial Protection Bureau (CFPB) Director Kathleen Kraninger regarding the Bureau’s recent public enforcement actions against mortgage originators offering Veterans Administration (VA)-guaranteed loans. Between July 2020 and September 2020, the CFPB announced consent orders against eight different mortgage lenders for deceptive and misleading advertising of VA mortgages. In each case, the CFPB found that the originators’ advertisements contained false, misleading, or inaccurate statements that violated the Consumer Financial Protection Act’s prohibition against deceptive acts and practices, the Mortgage Acts and Practices Advertising Rule, and Regulation Z. The CFPB collected approximately $2.8 million in civil penalties from these eight violators, but did not require any of these companies to provide restitution to harmed consumers.

The lawmakers wrote, “We write to you regarding the Consumer Financial Protection Bureau (Bureau)’s recent public enforcement actions against mortgage originators offering Veterans Administration (VA)-guaranteed loans. We are deeply concerned by the Bureau’s failure to obtain restitution for consumers who were targeted by these companies’ deceptive marketing practices.”

“Unfortunately, because of extended travel and multiple relocations, often related to their service, servicemembers and veterans are particularly vulnerable to scams. The VA and the Bureau have long been aware of one such scam: direct-mail advertisements that contained inadequate disclosures or misleading and deceptive statements pertaining to VA home loans,” the lawmakers continued. “For instance, in 2016, the Bureau released a snapshot of servicemember complaints and highlighted that veterans had reported receiving misleading advertisements. And in November 2017, the VA and the Bureau issued a ‘Warning Order’ alerting servicemembers and veterans to offers of mortgage refinancing that contained deceptive or false advertising.”

“As servicemembers, veterans, and their families make sacrifices for our country, they expose themselves to a number of financial risks and challenges; the Bureau must be clear that it is looking out for them in return. We are concerned that there has been no effort to ensure that thousands of servicemembers and veterans are made whole or at least compensated for damages caused by unscrupulous lenders seeking to profit by misleading homeowners,” wrote the lawmakers. 

The full text of the letter can be found here.

BACKGROUND:

Since the beginning of the coronavirus pandemic, complaints to the CFPB have increased 50 percent over the 2019 levels, including thousands of complaints about credit reporting, debt collection, credit cards and prepaid cards, and mortgages. 

###

WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA), former technology entrepreneur and Vice Chairman of the Senate Select Committee on Intelligence, applauded the House passage of the Internet of Things (IoT) Cybersecurity Improvement Act – legislation to establish minimum security requirements for IoT devices purchased by the U.S. government. Sen. Warner authored and introduced this legislation in the Senate back in August 2017. He reintroduced the bill in the 116th Congress with a House companion led by U.S. Reps. Robin Kelly and Will Hurd. That legislation passed through the Senate Homeland Security and Governmental Affairs Committee in June 2019 and now awaits consideration in the Senate.

“The House passage of this legislation is a major accomplishment in combatting the threats that insecure IoT devices pose to our individual and national security. Frankly, manufacturers today just don’t have the appropriate market incentives to properly secure the devices they make and sell – that’s why this legislation is so important,” said U.S. Sen. Mark R. Warner. “I commend Congresswoman Kelly and Congressman Hurd for their efforts to push this legislation forward over the past two years. I look forward to continuing to work to get this bipartisan, bicameral bill across the finish line in the Senate.”

Specifically, the Internet of Things (IoT) Cybersecurity Improvement Act introduced by Sen. Warner would:

  • Require the National Institute of Standards and Technology (NIST) to issue recommendations addressing, at a minimum, secure development, identity management, patching, and configuration management for IoT devices.
  • Direct the Office of Management and Budget (OMB) to issue guidelines for each agency that are consistent with the NIST recommendations, and charge OMB with reviewing these policies at least every five years.
  • Require any Internet-connected devices purchased by the federal government to comply with those recommendations.
  • Direct NIST to work with cybersecurity researchers, industry experts, and the Department of Homeland Security (DHS) to publish guidance on coordinated vulnerability disclosure to ensure that vulnerabilities related to agency devices are addressed.
  • Require contractors and vendors providing information systems to the U.S. government to adopt coordinated vulnerability disclosure policies, so that if a vulnerability is uncovered, that can be effectively shared with a vendor for remediation.


Sen. Warner, the Vice Chairman of the Senate Select Committee on Intelligence and former technology executive, is the co-founder and co-chair of the bipartisan Senate Cybersecurity Caucus and a leader in Congress on security issues related to the Internet of Things.

###

WASHINGTON, DC – As communities across the country grapple with how to reopen as safely as possible, U.S. Sen. Mark R. Warner joined Sens. Tom Carper (D-Del.), Bill Cassidy, M.D. (R-La.) and a bipartisan group of senators in calling on the Department of Health and Human Services (HHS) and the Centers for Disease Control and Prevention (CDC) to improve, automate and modernize COVID-19 data collection and management. In a letter sent to Secretary Azar and Dr. Redfield, the lawmakers specifically called on the agencies to harness technologically advanced systems and build on existing data sources in order to provide public health officials and community leaders with more accurate, real-time information as they make critical decisions about reopening.

Unfortunately, recent reports have shown that case reporting and contact tracing across the country are being hampered by a fragmented health system and antiquated technology, including manual entry of patients’ data and results and sharing of such results through paper and pencil or fax. In Texas, some patients were having to wait 10 days to find out if they had been infected with coronavirus because their results were being faxed to public health officials and then entered into a database by hand.

In their letter, the lawmakers wrote, “During an emergency such as the current pandemic, scaling up and using existing systems to the greatest extent possible can improve data collection and contact tracing efforts. We therefore ask that you and your colleagues utilize and build on existing data sources, such as electronic health record (EHR) and laboratory information management systems (LIMS), claims databases, and other automated systems to provide government leaders, public health officials, community leaders, and others with actionable, easy-to-interpret data from a wide-ranging set of sources. Data generated by contact tracing, syndromic surveillance, and large-scale testing can help inform decisions on how to safely reopen communities and bring economies back online. Modernizing and automating data collection should augment detection, testing, and contact tracing plans, while also helping to prevent and improve the management of new outbreaks.”

The bipartisan group highlighted the fact that some of these tools are already being successfully utilized in communities across the country. They noted, “Fortunately, software-based systems providing data management for state public health entities and major testing laboratories already exist, and they are more efficient and accurate while reducing the burden of excess paperwork. For example, North Carolina and Florida have taken steps to modernize and improve patients’ Covid-19 test results and other infectious disease symptoms. In Florida, nurses can register patients for Covid testing in the field using tablet computers that are connected to a HIPAA compliant cloud. By managing the patient and order requisition information electronically, lab processing time is reduced and transcription errors are eliminated.”

Joining Sens. Warner, Carper and Cassidy in sending this letter are Sens. Michael Bennet (D-Colo.), Richard Blumenthal (D-Conn.), Bob Casey (D-Pa.), Susan Collins (R-Maine), Chris Coons (D-Del.), Tina Smith (D-Minn.), and Thom Tillis (R-N.C.).

The letter is available here.


###

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA) joined Sen. Amy Klobuchar (D-MN), a senior member of the Senate Commerce Committee and Ranking Member of the Senate Judiciary Subcommittee on Antitrust, Competition Policy and Consumer Rights, and Sen. Jerry Moran (R-KS), Chairman of the Senate Commerce Subcommittee on Manufacturing, Trade, and Consumer Protection, in sending a letter to Federal Trade Commission (FTC) Chairman Joseph Simons urging the FTC to take action to address the troubling data collection and sharing practices of the mobile application (“app”) Premom.

Premom is a mobile app that helps users track their fertility cycles to determine the best time to get pregnant, relying on personal and private health information. As of November 2019, the app has been downloaded over half a million times, and it is one of the top search results among fertility apps in the Apple App Store and Google Play Store.

In addition to Sen. Warner, Sens. Klobuchar and Moran were joined by Ranking Member of the Senate Commerce Committee, Maria Cantwell (D-WA), Richard Blumenthal (D-CT), Shelley Moore Capito (R-WV), and Elizabeth Warren (D-MA).

“A recent investigation from the International Digital Accountability Council (IDAC) indicated that Premom may have engaged in deceptive consumer data collection and processing, and that there may be material differences between Premom’s stated privacy policies and its actual data-sharing practices. Most troubling, the investigation found that Premom shared its users’ data without their consent,” Klobuchar and her colleagues wrote.

The full text of the letter can be found HERE and below:

Dear Chairman Simons:

We write to express our serious concerns regarding recent reports about the data collection and sharing practices of the mobile application (“app”) Premom and to request information on the steps that the Federal Trade Commission (FTC) plans to take to address this issue.

Premom is a mobile app that helps users track their fertility cycles to determine the best time to get pregnant. As of November 2019, the app has been downloaded over half a million times, and it is one of the top search results among fertility apps in the leading app stores. To use Premom, users provide the app extensive personal and private health information.

A recent investigation from the International Digital Accountability Council (IDAC) indicated that Premom may have engaged in deceptive consumer data collection and processing, and that there may be material differences between Premom’s stated privacy policies and its actual data-sharing practices. Most troubling, the investigation found that Premom shared its users’ data without their consent. IDAC sent a letter to the FTC on August 6, 2020, to describe these undisclosed data transmissions along with other concerning allegations including conflicting privacy policies and questionable representations related to their collection of installed apps for functionality purposes.

While Premom claimed to only share “nonidentifiable” information in its privacy policy, the IDAC report found that Premom collected and shared—with three third-party advertising companies based in China including Jiguang, UMSNS, and Umeng—non-resettable unique user device identifiers that can be used to build profiles of consumer behavior. Additionally, users of the Premom app were not given the option to opt out of sharing their personal data with these advertising companies, and reports also allege that one of the companies that received user data from Premom concealed the data being transferred—which privacy experts say is an uncommon practice for apps that is used primarily to conceal their data collection practices.

While we understand that Premom has taken steps to update its app to halt the sharing of its users’ information with these companies, it is concerning that Premom may have engaged in these deceptive practices and shared users’ personal data without their consent. Additionally, there may still be users who have not yet updated the Premom app, which could still be sharing their personal data—without their knowledge or consent. 

In light of these concerning reports, and given the critical role that the FTC plays in enforcing federal laws that protect consumer privacy and data under Section 5 of the Federal Trade Commission Act and other sector specific laws, we respectfully ask that you respond to the following questions:

1.  Does the FTC treat persistent identifiers, such as the non-resettable device hardware identifiers discussed in the IDAC report, as personally identifiable information in relation to its general consumer data security and privacy enforcement authorities under Section 5 of the FTC Act?

2.  Is the FTC currently investigating or does it plan to investigate Premom’s consumer data collection, transmission, and processing conduct described in the IDAC report to determine if the company has engaged in deceptive practices?

3.  Does the FTC plan to take any steps to educate users of the Premom app that the app may still be sharing their personal data without their permission if they have not updated the app? If not, does the FTC plan to require Premom to conduct such outreach?

4.  Please describe any unique or practically uncommon uses of encryption by the involved third-party companies receiving information from Premom that could be functionally interpreted to obfuscate oversight of the involved data transmissions.

5.  How can the FTC use its Section 5 authority to ensure that mobile apps are not deceiving consumers about their data collection and sharing practices and to preempt future potentially deceptive practices like those Premom may have engaged in? 

Thank you for your time and attention to this important matter. We look forward to working with you to improve American consumers’ data privacy protections.

Sincerely, 

###


WASHINGTON – Today, U.S. Sen. Mark R. Warner (D-VA) and Sen. Richard Blumenthal (D-CT), along with Sens. Michael Bennet (D-CO), Mazie Hirono (D-HI), Angus King (I-ME), Bob Menendez (D-NJ), Kamala Harris (D-CA), Ed Markey (D-MA), Cory Booker (D-NJ), Tammy Baldwin (D-WI), Elizabeth Warren (D-MA), Amy Klobuchar (D-MN), and Dick Durbin (D-IL), sent a letter to Senate leaders urging them to include the Public Health Emergency Privacy Act in the next coronavirus relief package as negotiations between Senate Republicans and Democrats are underway. Inclusion of the legislation will help strengthen the public’s trust to participate in critical screening and contact tracing efforts to aid in the fight against COVID-19.

“As you begin negotiations on another coronavirus stimulus package, we write to urge inclusion of commonsense privacy protections for COVID health data. Building public trust in COVID screening tools will be essential to ensuring meaningful participation in such efforts. With research consistently showing that Americans are reluctant to adopt COVID screening and tracing apps due to privacy concerns, the lack of health privacy protections could significantly undermine efforts to contain this virus and begin to safely re-open – particularly with many screening tools requiring a critical mass in order to provide meaningful benefits,” the Senators wrote in a letter to Senate Majority Leader Mitch McConnell, Senate Minority Leader Chuck Schumer, and the Chairman and Ranking Member of the Senate Committee on Health, Education, Labor, and Pensions.

According to a recent survey, 84 percent of Americans feel uneasy about sharing their personal health information for COVID-19 related mitigation efforts. Public reluctance can be attributed to a myriad of investigative reports and congressional hearings that have exposed widespread secondary use of Americans’ data over the years. The Senators noted that with the inclusion of their bill, Congress can establish commonsense targeted rules to ensure the collection, retention, and use of data by COVID screening tools are focused on combatting COVID and not for extraneous, invasive, or discriminatory purposes.

“Our urgent and forceful response to COVID-19 can coexist with protecting and even bolstering our health privacy. If not appropriately addressed, these issues could lead to a breakdown in public trust that could ultimately thwart successful public health surveillance initiatives. Privacy experts, patient advocates, civil rights leaders, and public interest organizations have resoundingly called for strong privacy protections to govern technological measures offered in response to the COVID-19 crisis. In the absence of a federal privacy framework, experts and enforcers – including the Director of the Bureau of Consumer Protection of Federal Trade Commission – have encouraged targeted rules on this sensitive health data. The Public Health Emergency Privacy Act meets the needs raised by privacy and public health communities, and has been resoundingly endorsed by experts and civil society groups,” the Senators continued.

A copy of the letter can be found here and below.


Dear Leader McConnell, Leader Schumer, Chairman Alexander, and Ranking Member Murray,

As you begin negotiations on another coronavirus stimulus package, we write to urge inclusion of commonsense privacy protections for COVID health data. Building public trust in COVID screening tools will be essential to ensuring meaningful participation in such efforts. With research consistently showing that Americans are reluctant to adopt COVID screening and tracing apps due to privacy concerns, the lack of health privacy protections could significantly undermine efforts to contain this virus and begin to safely re-open – particularly with many screening tools requiring a critical mass in order to provide meaningful benefits. According to one survey, 84% of Americans “fear that data collection efforts aimed at helping to contain the coronavirus cost too much in the way of privacy.”

Public health experts have consistently pointed to health screening and contact tracing as essential elements of a comprehensive strategy to contain and eradicate COVID. Since the onset of the pandemic, employers, public venue operators, and consumer service providers have introduced a range of tools and resources to engage in symptom monitoring, contact tracing, exposure notification, temperature checks, and location tracking. Increasingly, we have seen higher education institutions mandate the use of these applications for incoming students and employers mandate participation in these programs among employees.

Health data is among the most sensitive data imaginable and even before this public health emergency, there has been increasing bipartisan concern with gaps in our nation’s health privacy laws. While a comprehensive update of health privacy protections is unrealistic at this time, targeted reforms to protect health data – particularly with clear evidence that a lack of privacy protections has inhibited public participation in screening activities – is both appropriate and necessary.

Our legislation does not prohibit or otherwise prevent employers, service providers, or any other entity from introducing COVID screening tools. Rather, it provides commonsense and widely understood rules related to the collection, retention, and usage of that information – most notably, stipulating that sensitive data collected under the auspices of efforts to contain COVID should not be used for unrelated purposes. As a litany of investigative reports, Congressional hearings, and studies have increasingly demonstrated, the widespread secondary use of Americans’ data – including sensitive health and geolocation data – has become a significant public concern. The legislation also ensures that Americans cannot be discriminated against on the basis of COVID health data – something particularly important given the disproportionate impact of this pandemic on communities of color.

Efforts by public health agencies to combat COVID-19, such as manual contact tracing, health screenings, interviews, and case investigations, are not restricted by our bill. And the legislation would allow for the collection, use, and sharing of data for public health research purposes and makes clear that it does not restrict use of health information for public health or other scientific research associated with a public health emergency.

Our urgent and forceful response to COVID-19 can coexist with protecting and even bolstering our health privacy. If not appropriately addressed, these issues could lead to a breakdown in public trust that could ultimately thwart successful public health surveillance initiatives. Privacy experts, patient advocates, civil rights leaders, and public interest organizations have resoundingly called for strong privacy protections to govern technological measures offered in response to the COVID-19 crisis. In the absence of a federal privacy framework, experts and enforcers – including the Director of the Bureau of Consumer Protection of Federal Trade Commission – have encouraged targeted rules on this sensitive health data. The Public Health Emergency Privacy Act meets the needs raised by privacy and public health communities, and has been resoundingly endorsed by experts and civil society groups.

Providing Americans with assurance that their sensitive health data will not be misused will give Americans more confidence to participate in COVID screening efforts, strengthening our common mission in containing and eradicating COVID-19. For this reason, we urge you to include the privacy protections contained in the Public Health Emergency Privacy Act in any forthcoming stimulus package.

Thank you for your attention to this important matter.

Sincerely,

###

WASHINGTON - U.S. Sen. Mark R. Warner (D-VA) joined Sen. Sherrod Brown (D-OH) and six of their Senate colleagues in a letter requesting additional information on the Borrower Protection Program that the Consumer Financial Protection Bureau (CFPB) and the Federal Housing Finance Agency (FHFA) announced in April. The agencies’ announcement stated that the CFPB and FHFA would share data under the program but did not say how that data would be used to protect borrowers. The Senators asked the agencies what information they would share and how each agency would use the new program to avoid unnecessary borrower defaults and foreclosures, address misinformation and unequal treatment of borrowers, and otherwise respond to servicers not complying with the law.

“It is critical that the CFPB and FHFA act quickly to ensure homeowners across the country can access the relief they need during this national emergency. Any delay could result in unnecessary delinquencies and foreclosures that will set consumers back, rather than helping them recover,” wrote the lawmakers.

In addition to Sens. Warner and Brown, the letter was signed by Sens. Jack Reed (D-RI), Elizabeth Warren (D-MA), Brian Schatz (D-HI), Chris Van Hollen (D-MD), Catherine Cortez Masto (D-NV), and Tina Smith (D-MN).

A copy of the letter appears here and below:


We are writing regarding the Consumer Financial Protection Bureau (CFPB) and the Federal Housing Finance Agency’s (FHFA) joint announcement of the Borrower Protection Program. The announcement states that the CFPB will share consumer complaint data and analytics with FHFA, and FHFA will provide the CFPB with its internal data on mortgage forbearances, modifications, and other loss mitigation.

Sharing information between your agencies is an important first step to ensure that homeowners are getting the help they need. The CFPB’s supervisory, research, and market monitoring tools and consumer-oriented perspective coupled with FHFA’s loan-level data could provide unique insights into borrowers’ experiences.

But information sharing alone will not protect borrowers. Once information is shared, the CFPB and FHFA must also have plans to use their respective tools and authorities to immediately address trends that indicate borrowers are receiving inaccurate information or unequal treatment, or that servicers are not complying with the law. Timeliness of the CFPB and FHFA’s oversight is critical to avoid unnecessary borrower defaults and foreclosures. Just a few weeks of delay could have disastrous outcomes for consumers who may lose the ability to access an affordable modification after just two months or face foreclosure after four months.

To help us better understand what steps your agencies will take to protect homeowners through the Borrower Protection Program, please respond to the following questions:

1.      It has been more than nine weeks since the COVID-19 national emergency declaration, and borrowers may already have experienced weeks of financial hardship.

a.      When will the CFPB and FHFA first share data under the Borrower Protection Program?  

b.      What specific actions will the CFPB and FHFA take, respectively, if either agency identifies noncompliance or consumer harm both to get consumers accurate information and to address noncompliance? Please list all tools that could be used by each agency.  

2.      Consumer complaint data is an important source of information, but it is not the CFPB’s only tool to monitor consumer harm. In addition to consumer complaint data, what other information will the FHFA receive from the CFPB?

3.      The CFPB has regulatory and supervisory authority over many of the largest mortgage servicers, including depositories with more than $10 billion in assets and nonbank mortgage servicers.

a.      Will the information examined under the Borrower Protection Program show data by loan servicer? If so, how will the CFPB use any servicer-specific data to inform its supervisory activities?

b.      Will any servicer-specific data distinguish between loans in forbearance and delinquent loans? If so, how will the CFPB or FHFA monitor and address disparities in delinquency rates amongst servicers to ensure that those borrowers who are facing a financial hardship and eligible for forbearance can receive it?

c.      To the extent that the CFPB or FHFA receives information or identifies trends among mortgage servicers that do not fall within the CFPB’s supervisory authority, will the CFPB or FHFA communicate those findings to the appropriate regulator to ensure compliance with servicing laws and policies? If not, why not?

4.      Will information provided to the CFPB include borrower demographic information when available, including race, ethnicity, English proficiency, age, or other protected classes under the Fair Housing Act to facilitate fair lending oversight?   

a.      How will the CFPB use any available information to ensure that mortgage servicing policies and practices result in equal treatment for all borrowers? Will the CFPB monitor forbearance rates, delinquency rates, loan modifications, non-retention loss mitigation options, and foreclosures by protected class? 

b.      What tools will the CFPB and FHFA use to address any disparate outcomes?

5.      Will any information provided to either agency include a borrower’s servicemember status, when available, to monitor compliance with the Servicemembers Civil Relief Act (SCRA)? If possible violations of the SCRA are identified, which agency will address those violations? 

6.      Many mortgage servicers service not just Fannie Mae and Freddie Mac loans, but also FHA, VA, USDA, and HUD Section 184 loans, as well as loans in private-label securities. 

a.      Will the CFPB enter into agreements with the other federal agencies, which collectively insure or guarantee more than 25 percent of loans, to share data and inform those agencies’ supervision of their servicers? If not, why not?

b.      Borrowers whose loans are not guaranteed by Fannie Mae or Freddie Mac or insured or guaranteed through a federal program are not assured to receive forbearance or other relief if they face a hardship, and information about outcomes for these borrowers will be limited. How will the Borrower Protection Program protect borrowers whose loans are not guaranteed by Fannie Mae or Freddie Mac or insured or guaranteed through a federal program? 

7.      Will the CFPB and FHFA publish regular, public updates on the Borrower Protection Program to share findings and actions? If not, why not?

It is critical that the CFPB and FHFA act quickly to ensure homeowners across the country can access the relief they need during this national emergency. Any delay could result in unnecessary delinquencies and foreclosures that will set consumers back, rather than helping them recover. Thank you for your prompt attention to this request. 

Sincerely,  

###

WASHINGTON - As tech companies and public health agencies deploy contact tracing apps and digital monitoring tools to fight the spread of COVID-19, U.S. Sens. Mark R. Warner (D-VA) and Richard Blumenthal (D-CT) and U.S. Reps. Anna G. Eshoo (D-CA), Jan Schakowsky (D-IL), and Suzan DelBene (D-WA) introduced the Public Health Emergency Privacy Act to set strong and enforceable privacy and data security rights for health information.

After decades of data misuse, breaches, and privacy intrusions, Americans are reluctant to trust tech firms to protect their sensitive health information – according to a recent poll, more than half of Americans would not use a contact tracing app or similar tools from Google and Apple over privacy concerns. The bicameral Public Health Emergency Privacy Act would protect Americans who use this kind of technology during the pandemic and safeguard civil liberties. Strengthened public trust will empower health authorities and medical experts to leverage new health data and apps to fight COVID-19.

“This measure sets strict and straightforward privacy protections and promises: Your information will be used to stop the spread of this disease, and no more,” Blumenthal said. “Legal safeguards protecting consumer privacy failed to keep pace with technology, and that lapse is costing us in the fight against COVID-19. Americans are rightly skeptical that their sensitive health data will be kept safe and secure, and as a result, they’re reluctant to participate in contact tracing programs essential to halt the spread of this disease. The Public Health Emergency Privacy Act’s commitment to civil liberties is an investment in our public health.”

“Communications technology has obviously played an enormously important role for Americans in coping with and navigating the new reality of COVID-19 and new technology will certainly play an important role in helping to track and combat the spread of this virus. Unfortunately, our health privacy laws have not kept pace with the protections Americans have come to expect for their sensitive health data,” Warner said. “Absent a clear commitment from policymakers to improving our health privacy laws, as this important legislation seeks to accomplish, I fear that creeping privacy violations could become the new status quo in health care and public health. The credibility – and indeed efficacy – of these technologies depends on public trust.” 

“I’m thankful that our country is blessed with the world’s best innovators and technologists, many of whom I represent in the House, and that they have joined the effort to combat the coronavirus by using technology to control the spread of the virus,” said Eshoo. “As we consider new technologies that collect vast amounts of sensitive personal data, we must not lose sight of the civil liberties that define who we are as a nation. I’m proud to join my colleagues to introduce the Public Health Emergency Privacy Act, strong and necessary legislation that protects the privacy of every American while ensuring that innovation can aid important public health efforts.”

“As we continue to respond to the devastating suffering caused by COVID-19, our country’s first and foremost public health response must be testing, testing, testing, AND manual contact tracing. Digital contact tracing can and should complement these efforts, but it is just that – complementary. However, if we do pursue digital contact tracing, consumers need clearly defined privacy rights and strong enforcement to safeguard these rights. I am proud to introduce this bill with my friend and fellow Energy & Commerce Subcommittee Chairwoman Eshoo, along with Senators Blumenthal and Warner,” said Schakowsky. “It’s our shared belief that swift passage of this legislation would go a long way towards establishing the trust American consumers need – and which Big Tech has squandered, time and again – for digital contact tracing to be a worthwhile auxiliary to widespread testing and manual contact tracing.”

“We must use every tool available to us to respond to the COVID-19 pandemic. Contact tracing and testing are the cornerstones of a science-based approach to addressing this historic crisis. We can protect our public health response and personal data privacy,” said DelBene. “I have been calling on the Trump administration and the private sector to adopt data privacy principles since the start of this outbreak. It is time for Congress to lead the way in assuring we have a strong national contact tracing system and that Americans’ personal data is protected. This bill will achieve this mutual goal.”

Eshoo, Schakowsky, and DelBene introduced House legislation with original co-sponsors House Energy and Commerce Committee Vice Chair Yvette Clarke (D-NY), Health Subcommittee Vice Chair G. K. Butterfield (D-NC), and Consumer Protection & Commerce Subcommittee Vice Chair Tony Cárdenas (D-CA).

The Public Health Emergency Privacy Act would:

·       Ensure that data collected for public health is strictly limited for use in public health;

·       Explicitly prohibit the use of health data for discriminatory, unrelated, or intrusive purposes, including commercial advertising, e-commerce, or efforts to gate access to employment, finance, insurance, housing, or education opportunities;

·       Prevent the potential misuse of health data by government agencies with no role in public health;

·       Require meaningful data security and data integrity protections – including data minimization and accuracy – and mandate deletion by tech firms after the public health emergency;

·       Protect voting rights by prohibiting conditioning the right to vote on a medical condition or use of contact tracing apps;

·       Require regular reports on the impact of digital collection tools on civil rights;

·       Give the public control over their participation in these efforts by mandating meaningful transparency and requiring opt-in consent; and

·       Provide for robust private and public enforcement, with rulemaking from an expert agency while recognizing the continuing role of states in legislation and enforcement.

The Public Health Emergency Privacy Act is endorsed by Lawyers’ Committee for Civil Rights Under Law, Public Knowledge, New America’s Open Technology Institute, Consumer Reports, Free Press, Electronic Privacy Information Center (EPIC), Public Citizen, health privacy scholar Frank Pasquale, and privacy scholar Ryan Calo.

“African Americans and other marginalized communities are suffering disproportionately from coronavirus and its economic effects. They do not need further harm from snake oil surveillance tech. This bill protects the most vulnerable—it ensures that any technology used to track the virus is not used to unfairly discriminate in employment, voting, housing, education, and everyday commerce,” said David Brody, Counsel and Senior Fellow for Privacy & Technology at the Lawyers’ Committee for Civil Rights Under Law.

“As contact tracing apps and other types of COVID-19 surveillance become commonplace in the United States, this legislation will protect the privacy of Americans regardless of the type of technology used or who created it. It is critical that Congress continue to work to prevent this type of corporate or government surveillance from becoming ubiquitous and compulsory,” said Sara Collins, Policy Counsel at Public Knowledge. 

“OTI welcomes this effort to protect privacy as lawmakers consider pandemic response plans that gather vast quantities of data. The bill would establish strong safeguards that would prevent personal data from being used for non-public health purposes and prevent the data from being used in a discriminatory manner,” said Christine Bannan, Policy Counsel at New America’s Open Technology Institute.

“When it comes to tracking and collecting people’s data, we want to make sure there are basic protections for people’s privacy, and this bill is a positive step to establish the trust and balance that’s needed. The bill smartly requires that data collected to fight coronavirus can only be used for public health purposes – and nothing else. Importantly, the bill ensures an individual's right to seek redress for violations, and it bars the use of pre-dispute arbitration agreements. These measures will help individuals trust contact-tracing or proximity-tracing programs, and they can serve as a model for more comprehensive protections down the road,” said Justin Brookman, Director of Consumer Privacy and Technology Policy for Consumer Reports.

“Digital contact tracing and exposure notification systems may be important tools in combating the spread of coronavirus. But they must be deployed responsibly and with adequate safeguards that protect the privacy and civil rights of the people that use them. The Public Health Emergency Privacy Act is a serious effort at ensuring our rights are protected while giving public health officials the tools they need to track and notify those exposed to COVID-19. These rules must apply to everyone using these systems, whether that’s state or local governments, employers, or other tech companies. This bill protects the civil rights of the most vulnerable essential workers, the disproportionately Black and Latinx people most exposed to the virus, and will help ensure they’re not also subject to invasive and unnecessary surveillance that will linger long after this crisis passes,” said Gaurav Laroia, Senior Policy Counsel with Free Press.

“The Public Health Emergency Privacy Act shows that privacy and public health are complementary goals. The bill requires companies to limit the collection of health data to only what is necessary for public health purposes, and crucially, holds companies accountable if they fail to do so,” said Caitriona Fitzgerald, Interim Associate Director and Policy Director with Electronic Privacy Information Center (EPIC). 

“What we need more than anything during this global emergency is to feel less vulnerable, to be sure not just that our health is protected, but that our rights are protected as well. This bill will ensure that whatever technological innovation emerges during the pandemic, we will feel safer knowing that our rights to privacy, to our day in court and to access to the ballot box won’t be threatened,” said Robert Weissman, President of Public Citizen.

“This bill establishes critical protections for patients whose health data is released in the context of the public health emergency. To build a trusted data infrastructure, the US needs to ensure that any entity which accesses such data is held accountable and does not abuse the public trust. The Public Health Emergency Privacy Act is a big step in the right direction,” said Frank Pasquale, Piper & Marbury Professor of Law at University of Maryland Carey School of Law. 

“This draft legislation addresses two of my biggest privacy concerns about the use of technology and information to respond to COVID-19. As the Act makes clear, the emergency health data of Americans should only be used to fight the pandemic and should never be used to discriminate or deny opportunity,” said Ryan Calo, Lane Powell & D. Wayne Gittinger Endowed Professor at University of Washington School of Law.