MURTHY v. MISSOURI
No. 23–411 (2024)
Majority Opinion Author
Amy Coney Barrett
SUPREME COURT OF THE UNITED STATES
Syllabus
MURTHY, SURGEON GENERAL, et al. v. MISSOURI et al.
certiorari to the united states court of appeals for the fifth circuit
No. 23–411. Argued March 18, 2024 – Decided June 26, 2024
Under their longstanding content-moderation policies, social-media platforms have taken a range of actions to suppress certain categories of speech, including speech they judge to be false or misleading. In 2020, with the outbreak of COVID-19, the platforms announced that they would enforce these policies against users who post false or misleading content about the pandemic. The platforms also applied misinformation policies during the 2020 election season. During that period, various federal officials regularly spoke with the platforms about COVID-19 and election-related misinformation. For example, White House officials publicly and privately called on the platforms to do more to address vaccine misinformation. Surgeon General Vivek Murthy issued a health advisory that encouraged the platforms to take steps to prevent COVID-19 misinformation “from taking hold.” The Centers for Disease Control and Prevention alerted the platforms to COVID-19 misinformation trends and flagged example posts. The Federal Bureau of Investigation and the Cybersecurity and Infrastructure Security Agency communicated with the platforms about election-related misinformation in advance of the 2020 Presidential election and the 2022 midterms.
Respondents are two States and five individual social-media users who sued dozens of Executive Branch officials and agencies, alleging that the Government pressured the platforms to censor their speech in violation of the First Amendment. Following extensive discovery, the District Court issued a preliminary injunction. The Fifth Circuit affirmed in part and reversed in part. The court held that both the state plaintiffs and the individual plaintiffs had Article III standing to seek injunctive relief. On the merits, the court held that the Government entities and officials, by “coerc[ing]” or “significantly encourag[ing]” the platforms’ moderation decisions, transformed those decisions into state action. The court then modified the District Court’s injunction to state that the defendants shall not coerce or significantly encourage social-media companies to suppress protected speech on their platforms.
Held: Neither the individual nor the state plaintiffs have established Article III standing to seek an injunction against any defendant. Pp. 8–29.
(a) Article III’s “case or controversy” requirement is “fundamental” to the “proper role” of the Judiciary. Raines v. Byrd, 521 U.S. 811, 818. A proper case or controversy exists only when at least one plaintiff “establish[es] that [she] ha[s] standing to sue,” ibid. – i.e., that she has suffered, or will suffer, an injury that is “concrete, particularized, and actual or imminent; fairly traceable to the challenged action; and redressable by a favorable ruling,” Clapper v. Amnesty Int’l USA, 568 U.S. 398, 409. Here, the plaintiffs’ theories of standing depend on the platforms’ actions – yet the plaintiffs do not seek to enjoin the platforms from restricting any posts or accounts. Instead, they seek to enjoin the Government agencies and officials from pressuring or encouraging the platforms to suppress protected speech in the future.
The one-step-removed, anticipatory nature of the plaintiffs’ alleged injuries presents two particular challenges. First, it is a bedrock principle that a federal court cannot redress “injury that results from the independent action of some third party not before the court.” Simon v. Eastern Ky. Welfare Rights Organization, 426 U.S. 26, 41–42. Second, because the plaintiffs request forward-looking relief, they must face “a real and immediate threat of repeated injury.” O’Shea v. Littleton, 414 U.S. 488, 496. Putting these requirements together, the plaintiffs must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant. Here, at the preliminary injunction stage, they must show that they are likely to succeed in carrying that burden. On the record in this case, that is a tall order. Pp. 8–10.
(b) The plaintiffs’ primary theory of standing involves their “direct censorship injuries.” Pp. 10–26.
(1) The Court first considers whether the plaintiffs have demonstrated traceability for their past injuries. Because the plaintiffs are seeking only forward-looking relief, the past injuries are relevant only for their predictive value. The primary weakness in the record of past restrictions is the lack of specific causation findings with respect to any discrete instance of content moderation. And while the record reflects that the Government defendants played a role in at least some of the platforms’ moderation choices, the evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment. The Fifth Circuit, by attributing every platform decision at least in part to the defendants, glossed over complexities in the evidence. The Fifth Circuit also erred by treating the defendants, plaintiffs, and platforms each as a unified whole. Because “standing is not dispensed in gross,” TransUnion LLC v. Ramirez, 594 U.S. 413, 431, “plaintiffs must demonstrate standing for each claim they press” against each defendant, “and for each form of relief they seek,” ibid. This requires a threshold showing that a particular defendant pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic. Complicating the plaintiffs’ effort to demonstrate that each platform acted due to Government coercion, rather than its own judgment, is the fact that the platforms began to suppress the plaintiffs’ COVID-19 content before the defendants’ challenged communications started. Pp. 10–14.
(2) The plaintiffs fail, by and large, to link their past social-media restrictions and the defendants’ communications with the platforms. The state plaintiffs, Louisiana and Missouri, refer only to action taken by Facebook against a Louisiana state representative’s post about children and the COVID-19 vaccine. But they never say when Facebook took action against the official’s post – a critical fact in establishing a causal link. Nor have the three plaintiff doctors established a likelihood that their past restrictions are traceable to either the White House officials or the CDC. They highlight restrictions imposed by Twitter and LinkedIn, but point only to Facebook’s communications with White House officials. Plaintiff Jim Hoft, who runs a news website, experienced election-related restrictions on various platforms. He points to the FBI’s role in the platforms’ adoption of hacked-material policies and claims that Twitter restricted his content pursuant to those policies. Yet Hoft’s declaration reveals that Twitter took action according to its own rules against posting private, intimate media without consent. Hoft does not provide evidence that his past injuries are likely traceable to the FBI or CISA. Plaintiff Jill Hines, a healthcare activist, faced COVID-19-related restrictions on Facebook. Though she makes the best showing of all the plaintiffs, most of the lines she draws are tenuous. Plus, Facebook started targeting her content before almost all of its communications with the White House and the CDC, thus weakening the inference that her subsequent restrictions are likely traceable to Government-coerced enforcement of Facebook’s policies. Even assuming Hines can eke out a showing of traceability, the past is relevant only insofar as it predicts the future. Pp. 14–21.
(3) To obtain forward-looking relief, the plaintiffs must establish a substantial risk of future injury that is traceable to the Government defendants and likely to be redressed by an injunction against them. The plaintiffs who have not pointed to any past restrictions likely traceable to the Government defendants (i.e., everyone other than Hines) are ill suited to the task of establishing their standing to seek forward-looking relief. But even Hines, with her superior showing on past harm, has not shown enough to demonstrate likely future harm at the hands of these defendants. On this record, it appears that the frequent, intense communications that took place in 2021 between the Government defendants and the platforms had considerably subsided by 2022, when Hines filed suit. Thus it is “no more than conjecture” to assume that Hines will be subject to Government-induced content moderation. Los Angeles v. Lyons, 461 U.S. 95, 108.
The plaintiffs’ counterarguments are unpersuasive. First, they argue that they suffer “continuing, present adverse effects” from their past restrictions, as they must now self-censor on social media. O’Shea, 414 U. S., at 496. But the plaintiffs “cannot manufacture standing merely by inflicting harm on themselves based on their fears of hypothetical future harm that is not certainly impending.” Clapper, 568 U. S., at 416. Second, the plaintiffs suggest that the platforms continue to suppress their speech according to policies initially adopted under Government pressure. But the plaintiffs have a redressability problem. Without evidence of continued pressure from the defendants, the platforms remain free to enforce, or not to enforce, their policies – even those tainted by initial governmental coercion. And the available evidence indicates that the platforms have continued to enforce their policies against COVID-19 misinformation even as the Federal Government has wound down its own pandemic response measures. Enjoining the Government defendants, therefore, is unlikely to affect the platforms’ content-moderation decisions. Pp. 21–27.
(c) The plaintiffs next assert a “right to listen” theory of standing. The individual plaintiffs argue that the First Amendment protects their interest in reading and engaging with the content of other speakers on social media. This theory is startlingly broad, as it would grant all social-media users the right to sue over someone else’s censorship – at least so long as they claim an interest in that person’s speech. While the Court has recognized a “First Amendment right to receive information and ideas,” the Court has identified a cognizable injury only where the listener has a concrete, specific connection to the speaker. Kleindienst v. Mandel, 408 U.S. 753, 762. Attempting to satisfy this requirement, the plaintiffs emphasize that hearing unfettered speech on social media is critical to their work as scientists, pundits, and activists. But they do not point to any specific instance of content moderation that caused them identifiable harm. They have therefore failed to establish an injury that is sufficiently “concrete and particularized.” Lujan v. Defenders of Wildlife, 504 U.S. 555, 560. The state plaintiffs assert a sovereign interest in hearing from their citizens on social media, but they have not identified any specific speakers or topics that they have been unable to hear or follow. And States do not have third-party “standing as parens patriae to bring an action against the Federal Government” on behalf of their citizens who have faced social-media restrictions. Haaland v. Brackeen, 599 U.S. 255, 295. Pp. 27–28.
83 F. 4th 350, reversed and remanded.
Barrett, J., delivered the opinion of the Court, in which Roberts, C. J., and Sotomayor, Kagan, Kavanaugh, and Jackson, JJ., joined. Alito, J., filed a dissenting opinion, in which Thomas and Gorsuch, JJ., joined.
NOTICE: This opinion is subject to formal revision before publication in the United States Reports. Readers are requested to notify the Reporter of Decisions, Supreme Court of the United States, Washington, D. C. 20543, pio@supremecourt.gov, of any typographical or other formal errors.
SUPREME COURT OF THE UNITED STATES
_________________
No. 23–411
_________________
VIVEK H. MURTHY, SURGEON GENERAL, et al., PETITIONERS v. MISSOURI, et al.
on writ of certiorari to the united states court of appeals for the fifth circuit
[June 26, 2024]
Justice Barrett delivered the opinion of the Court.
During the 2020 election season and the COVID-19 pandemic, social-media platforms frequently removed, demoted, or fact-checked posts containing allegedly false or misleading information. At the same time, federal officials, concerned about the spread of “misinformation” on social media, communicated extensively with the platforms about their content-moderation efforts.
The plaintiffs, two States and five social-media users, sued dozens of Executive Branch officials and agencies, alleging that they pressured the platforms to suppress protected speech in violation of the First Amendment. The Fifth Circuit agreed, concluding that the officials’ communications rendered them responsible for the private platforms’ moderation decisions. It then affirmed a sweeping preliminary injunction.
The Fifth Circuit was wrong to do so. To establish standing, the plaintiffs must demonstrate a substantial risk that, in the near future, they will suffer an injury that is traceable to a Government defendant and redressable by the injunction they seek. Because no plaintiff has carried that burden, none has standing to seek a preliminary injunction.
I
A
With their billions of active users, the world’s major social-media companies host a “staggering” amount of content on their platforms. Twitter, Inc. v. Taamneh, 598 U.S. 471, 480 (2023). Yet for many of these companies, including Facebook, Twitter, and YouTube, not everything goes.[1] Under their longstanding content-moderation policies, the platforms have taken a range of actions to suppress certain categories of speech. They place warning labels on some posts, while deleting others. They also “demote” content so that it is less visible to other users. And they may suspend or ban users who frequently post content that violates platform policies.
For years, the platforms have targeted speech they judge to be false or misleading. For instance, in 2016, Facebook began fact-checking and demoting posts containing misleading claims about elections. Since 2018, Facebook has removed health-related misinformation, including false claims about a measles outbreak in Samoa and the polio vaccine in Pakistan. Likewise, in 2019, YouTube announced that it would “demonetize” channels that promote anti-vaccine messages.
In 2020, with the outbreak of COVID-19, the platforms announced that they would enforce their policies against users who post false or misleading content about the pandemic. As early as January 2020, Facebook deleted posts it deemed false regarding “cures,” “treatments,” and the effect of “physical distancing.” 60 Record on Appeal 19,035 (Record). And it demoted posts containing what it described as “conspiracy theories about the origin of the virus.” Id., at 19,036. Twitter and YouTube began applying their policies in March and May 2020, respectively. Throughout the pandemic, the platforms removed or reduced posts questioning the efficacy and safety of mask wearing and the COVID-19 vaccine, along with posts on related topics.
The platforms also applied their misinformation policies during the 2020 Presidential election season. Facebook, in late 2019, unveiled measures to counter foreign interference campaigns and voter suppression efforts. One month before the election, multiple platforms suppressed a report about Hunter Biden’s laptop, believing that the story originated from a Russian hack-and-leak operation. After the election, the platforms took action against users or posts that questioned the integrity of the election results.
Over the past few years, various federal officials regularly spoke with the platforms about COVID-19 and election-related misinformation. Officials at the White House, the Office of the Surgeon General, and the Centers for Disease Control and Prevention (CDC) focused on COVID-19 content, while the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) concentrated on elections.
White House. In early 2021, and continuing primarily through that year, the Director of Digital Strategy and members of the COVID-19 response team interacted with the platforms about their efforts to suppress vaccine misinformation. They expressed concern that Facebook in particular was “one of the top drivers of vaccine hesitancy,” due to the spread of allegedly false or misleading claims on the platform. App. 659–660. Thus, the officials peppered Facebook (and, to a lesser extent, Twitter and YouTube) with detailed questions about their policies, pushed them to suppress certain content, and sometimes recommended policy changes. Some of these communications were more aggressive than others. For example, the Director of Digital Strategy, frustrated that Facebook had not removed a particular post, complained: “[L]ast time we did this dance, it ended in an insurrection.” Id., at 698. Another official, unhappy with Facebook’s supposed lack of transparency about its vaccine misinformation problems, wrote: “Internally we have been considering our options on what to do about it.” Id., at 657. Publicly, White House communications officials called on the platforms to do more to address COVID-19 misinformation – and, perhaps as motivation, raised the possibility of reforms aimed at the platforms, including changes to the antitrust laws and 47 U. S. C. §230.
Surgeon General. In July 2021, Surgeon General Vivek Murthy issued a health advisory on misinformation. The advisory encouraged platforms to “[r]edesign recommendation algorithms to avoid amplifying misinformation,” “[i]mpose clear consequences for accounts that repeatedly violate platform policies,” and “[p]rovide information from trusted and credible sources to prevent misconceptions from taking hold.” 3 Record 662. At a press conference to announce the advisory, Surgeon General Murthy argued that the platforms should “operate with greater transparency and accountability.” 2 id., at 626. The following year, the Surgeon General issued a “Request for Information,” seeking, among other things, reports on each platform’s “COVID-19 misinformation policies.” Impact of Health Misinformation in the Digital Information Environment in the United States Throughout the COVID-19 Pandemic Request for Information (RFI), 87 Fed. Reg. 12714 (Mar. 7, 2022).
CDC. Like the White House, the CDC frequently communicated with the platforms about COVID-19 misinformation. In early 2020, Facebook reached out to the agency, seeking authoritative information about the virus that it could post on the platform. The following year, the CDC’s communications expanded to other platforms, including Twitter and YouTube. The CDC hosted meetings and sent reports to the platforms, alerting them to misinformation trends and flagging example posts. The platforms often asked the agency for fact checks on specific claims.
FBI and CISA. These agencies communicated with the platforms about election-related misinformation. They hosted meetings with several platforms in advance of the 2020 Presidential election and the 2022 midterms. The FBI alerted the platforms to posts containing false information about voting, as well as to pernicious foreign influence campaigns that might spread on their sites. Shortly before the 2020 election, the FBI warned the platforms about the potential for a Russian hack-and-leak operation. Some companies then updated their moderation policies to prohibit users from posting hacked materials. Until mid-2022, CISA, through its “switchboarding” operations, forwarded third-party reports of election-related misinformation to the platforms. These communications typically stated that the agency “w[ould] not take any action, favorable or unfavorable, toward social media companies based on decisions about how or whether to use this information.” 72 Record 23,223.
B
Respondents are two States and five individual social-media users. They were the plaintiffs below, and for the sake of narrative clarity, we will refer to them as “plaintiffs” in this opinion. (Likewise, we will refer to the Government individuals and agencies as “defendants” rather than petitioners.) The individual plaintiffs – three doctors, the owner of a news website, and a healthcare activist – allege that various platforms removed or demoted their COVID-19 or election-related content between 2020 and 2023. The States, Missouri and Louisiana, claim that the platforms have suppressed the speech of state entities and officials, as well as their citizens’ speech.
Though the platforms restricted the plaintiffs’ content, the plaintiffs maintain that the Federal Government was behind it. Acting on that belief, the plaintiffs sued dozens of Executive Branch officials and agencies, alleging that they pressured the platforms to censor the plaintiffs’ speech in violation of the First Amendment. The States filed their complaint on May 5, 2022. The next month, they moved for a preliminary injunction, seeking to stop the defendants from “taking any steps to demand, urge, encourage, pressure, or otherwise induce” any platform “to censor, suppress, remove, de-platform, suspend, shadow-ban, de-boost, restrict access to content, or take any other adverse action against any speaker, content, or viewpoint expressed on social media.” 1 id., at 253. The individual plaintiffs joined the suit on August 2, 2022.
After granting extensive discovery, the District Court issued a preliminary injunction. Missouri v. Biden, 680 F. Supp. 3d 630, 729 (WD La. 2023). The court held that officials at the White House, the Surgeon General’s Office, the CDC, the FBI, and CISA likely “coerced” or “significantly encouraged” the platforms “to such extent that the[ir content-moderation] decision[s] should be deemed to be the decisions of the Government.” Id., at 694 (internal quotation marks omitted). It enjoined those agencies, along with scores of named and unnamed officials and employees, from taking actions “for the purpose of urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech posted on social-media platforms.” Missouri v. Biden, 2023 WL 5841935, *1–*2 (WD La., July 4, 2023).[2]
Following a grant of panel rehearing, the Fifth Circuit affirmed in part and reversed in part. Missouri v. Biden, 83 F. 4th 350 (2023). It first held that the individual plaintiffs had Article III standing to seek injunctive relief, reasoning that the social-media companies had suppressed the plaintiffs’ speech in the past and were likely to do so again in the future, id., at 367–369, and that both of these injuries were “traceable to government-coerced enforcement” of the platforms’ policies and “redressable by an injunction against the government officials,” id., at 373. The court also concluded that the States had standing, both because the platforms had restricted the posts of individual state officials and because the States have the “right to listen” to their citizens on social media. Id., at 371–372.
On the merits, the Fifth Circuit explained that “a private party’s conduct may be state action if the government coerced or significantly encouraged it.” Id., at 380 (citing Blum v. Yaretsky, 457 U.S. 991, 1004 (1982); emphasis deleted). To identify coercion, it asked whether “the government compelled the [private party’s] decision by . . . intimating that some form of punishment will follow a failure to comply.” 83 F. 4th, at 380. The court explained that the Government significantly encourages a private party’s choice when it exercises “active, meaningful control, whether by entanglement in the party’s decision-making process or direct involvement in carrying out the decision itself.” Id., at 377.[3]
Applying those tests, the Fifth Circuit determined that White House officials, in conjunction with the Surgeon General’s Office, likely both coerced and significantly encouraged the platforms to moderate content. Id., at 388. The court concluded that the same was true for the FBI. Ibid. It held that the CDC and CISA significantly encouraged (but did not coerce) the platforms’ moderation decisions. Id., at 389, 391.
The Fifth Circuit agreed with the District Court that the equities favored the plaintiffs. Id., at 392–394. It then modified the District Court’s injunction to state that the defendants, and their employees and agents, shall not “coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech.” Id., at 397. The court did not limit the injunction to the platforms that the plaintiffs use or the topics that the plaintiffs wish to discuss, explaining that the harms stemming from the defendants’ conduct “impac[t] every social-media user.” Id., at 398.
The federal agencies and officials applied to this Court for emergency relief. We stayed the injunction, treated the application as a petition for a writ of certiorari, and granted the petition. 601 U. S. ___ (2023).
II
We begin – and end – with standing. At this stage, neither the individual nor the state plaintiffs have established standing to seek an injunction against any defendant. We therefore lack jurisdiction to reach the merits of the dispute.
A
Article III of the Constitution limits the jurisdiction of federal courts to “Cases” and “Controversies.” The “case or controversy” requirement is “fundamental to the judiciary’s proper role in our system of government.” Raines v. Byrd, 521 U.S. 811, 818 (1997) (quoting Simon v. Eastern Ky. Welfare Rights Organization, 426 U.S. 26, 37 (1976)). Federal courts may review statutes and executive actions only when necessary “to redress or prevent actual or imminently threatened injury to persons caused by . . . official violation of law.” Summers v. Earth Island Institute, 555 U.S. 488, 492 (2009). As this Court has explained, “[i]f a dispute is not a proper case or controversy, the courts have no business deciding it, or expounding the law in the course of doing so.” DaimlerChrysler Corp. v. Cuno, 547 U.S. 332, 341 (2006).
A proper case or controversy exists only when at least one plaintiff “establish[es] that [she] ha[s] standing to sue.” Raines, 521 U. S., at 818; Department of Commerce v. New York, 588 U.S. 752, 766 (2019). She must show that she has suffered, or will suffer, an injury that is “concrete, particularized, and actual or imminent; fairly traceable to the challenged action; and redressable by a favorable ruling.” Clapper v. Amnesty Int’l USA, 568 U.S. 398, 409 (2013) (internal quotation marks omitted). These requirements help ensure that the plaintiff has “such a personal stake in the outcome of the controversy as to warrant [her] invocation of federal-court jurisdiction.” Summers, 555 U. S., at 493 (internal quotation marks omitted).
The plaintiffs claim standing based on the “direct censorship” of their own speech as well as their “right to listen” to others who faced social-media censorship. Brief for Respondents 19, 22. Notably, both theories depend on the platforms’ actions – yet the plaintiffs do not seek to enjoin the platforms from restricting any posts or accounts. They seek to enjoin Government agencies and officials from pressuring or encouraging the platforms to suppress protected speech in the future.
The one-step-removed, anticipatory nature of their alleged injuries presents the plaintiffs with two particular challenges. First, it is a bedrock principle that a federal court cannot redress “injury that results from the independent action of some third party not before the court.” Simon, 426 U. S., at 41–42. In keeping with this principle, we have “been reluctant to endorse standing theories that require guesswork as to how independent decisionmakers will exercise their judgment.” Clapper, 568 U. S., at 413. Rather than guesswork, the plaintiffs must show that the third-party platforms “will likely react in predictable ways” to the defendants’ conduct. Department of Commerce, 588 U. S., at 768. Second, because the plaintiffs request forward-looking relief, they must face “a real and immediate threat of repeated injury.” O’Shea v. Littleton, 414 U.S. 488, 496 (1974); see also Susan B. Anthony List v. Driehaus, 573 U.S. 149, 158 (2014) (“An allegation of future injury may suffice if the threatened injury is certainly impending, or there is a substantial risk that the harm will occur” (internal quotation marks omitted)). Putting these requirements together, the plaintiffs must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant. On this record, that is a tall order.
Before we evaluate the plaintiffs’ different theories, a few preliminaries: The plaintiff “bears the burden of establishing standing as of the time [s]he brought th[e] lawsuit and maintaining it thereafter.” Carney v. Adams, 592 U.S. 53, 59 (2020). She must support each element of standing “with the manner and degree of evidence required at the successive stages of the litigation.” Lujan v. Defenders of Wildlife, 504 U.S. 555, 561 (1992). At the preliminary injunction stage, then, the plaintiff must make a “clear showing” that she is “likely” to establish each element of standing. See Winter v. Natural Resources Defense Council, Inc., 555 U.S. 7, 22 (2008) (emphasis deleted). Where, as here, the parties have taken discovery, the plaintiff cannot rest on “mere allegations,” but must instead point to factual evidence. See Lujan, 504 U. S., at 561 (internal quotation marks omitted).
B
1
The plaintiffs’ primary theory of standing involves their “direct censorship injuries.” They claim that the restrictions they have experienced in the past on various platforms are traceable to the defendants and that the platforms will continue to censor their speech at the behest of the defendants. So we first consider whether the plaintiffs have demonstrated traceability for their past injuries.
Here, a note of caution: If the plaintiffs were seeking compensatory relief, the traceability of their past injuries would be the whole ball game. But because the plaintiffs are seeking only forward-looking relief, the past injuries are relevant only for their predictive value. See O’Shea, 414 U. S., at 495–496 (“Past exposure to illegal conduct” can serve as evidence of threatened future injury but “does not in itself show a present case or controversy regarding injunctive relief”). If a plaintiff demonstrates that a particular Government defendant was behind her past social-media restriction, it will be easier for her to prove that she faces a continued risk of future restriction that is likely to be traceable to that same defendant. Conversely, if a plaintiff cannot trace her past injury to one of the defendants, it will be much harder for her to make that showing. See Clapper, 568 U. S., at 411. In the latter situation, the plaintiff would essentially have to build her case from scratch, showing why she has some newfound reason to fear that one of the named defendants will coerce her chosen platform to restrict future speech on a topic about which she plans to post – in this case, either COVID-19 or the upcoming election. Keep in mind, therefore, that the past is relevant only insofar as it is a launching pad for a showing of imminent future injury.
The primary weakness in the record of past restrictions is the lack of specific causation findings with respect to any discrete instance of content moderation. The District Court made none. Nor did the Fifth Circuit, which approached standing at a high level of generality. The platforms, it reasoned, “have engaged in censorship of certain viewpoints on key issues,” while “the government has engaged in a years-long pressure campaign” to ensure that the platforms suppress those viewpoints. 83 F. 4th, at 370. The platforms’ “censorship decisions” – including those affecting the plaintiffs – were thus “likely attributable at least in part to the platforms’ reluctance to risk” the consequences of refusing to “adhere to the government’s directives.” Ibid.
We reject this overly broad assertion. As already discussed, the platforms moderated similar content long before any of the Government defendants engaged in the challenged conduct. In fact, the platforms, acting independently, had strengthened their pre-existing content-moderation policies before the Government defendants got involved. For instance, Facebook announced an expansion of its COVID-19 misinformation policies in early February 2021, before White House officials began communicating with the platform. And the platforms continued to exercise their independent judgment even after communications with the defendants began. For example, on several occasions, various platforms explained that White House officials had flagged content that did not violate company policy. Moreover, the platforms did not speak only with the defendants about content moderation; they also regularly consulted with outside experts.
This evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment. To be sure, the record reflects that the Government defendants played a role in at least some of the platforms' moderation choices. But the Fifth Circuit, by attributing every platform decision at least in part to the defendants, glossed over complexities in the evidence.[4]
The Fifth Circuit also erred by treating the defendants, plaintiffs, and platforms each as a unified whole. Our decisions make clear that "standing is not dispensed in gross." TransUnion LLC v. Ramirez, 594 U.S. 413, 431 (2021). That is, "plaintiffs must demonstrate standing for each claim that they press" against each defendant, "and for each form of relief that they seek." Ibid. Here, for every defendant, there must be at least one plaintiff with standing to seek an injunction. This requires a certain threshold showing: namely, that a particular defendant pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff's speech on that topic.
Heeding these conditions is critically important in a sprawling suit like this one. The plaintiffs faced speech restrictions on different platforms, about different topics, at different times. Different groups of defendants communicated with different platforms, about different topics, at different times. And even where the plaintiff, platform, time, content, and defendant line up, the links must be evaluated in light of the platform's independent incentives to moderate content. As discussed, the platforms began to suppress the plaintiffs' COVID-19 content before the defendants' challenged communications started, which complicates the plaintiffs' effort to demonstrate that each platform acted due to "government-coerced enforcement" of its policies, 83 F. 4th, at 370 (emphasis deleted), rather than in its own judgment as an "'independent acto[r],'" Lujan, 504 U. S., at 562. With these factors in mind, we proceed to untangle the mass of the plaintiffs' injuries and Government communications.
2
The plaintiffs rely on allegations of past Government censorship as evidence that future censorship is likely. But they fail, by and large, to link their past social-media restrictions to the defendants' communications with the platforms. Thus, the events of the past do little to help any of the plaintiffs establish standing to seek an injunction to prevent future harms.
Louisiana and Missouri. The state plaintiffs devote minimal attention to restriction of their own social-media content, much less to a causal link between any such restriction and the actions of any Government defendant. They refer only to Facebook's "flagg[ing] . . . and de-boost[ing]" of a Louisiana state representative's post about children and the COVID-19 vaccine. Brief for Respondents 20; App. 635–636. We need not decide whether an injury to a state representative counts as an injury to the State, because evidence of causation is lacking.[5] The States assert only that in November 2021, Facebook, "as a result of [its] work [with the CDC]," updated its policies "to remove additional false claims about the COVID-19 vaccine for children." 37 Record 11,457. But they never say when Facebook took action against the official's post – and a causal link is possible only if the removal occurred after Facebook's communication with the CDC. There is therefore no evidence to support the States' allegation that Facebook restricted the state representative pursuant to the CDC-influenced policy.
Jayanta Bhattacharya, Martin Kulldorff, and Aaron Kheriarty. These plaintiffs are doctors who questioned the wisdom of then-prevailing COVID-19 policies, including lockdowns and mask and vaccine mandates. Each faced his first social-media restriction in 2020, before the White House and the CDC entered discussions with the relevant platforms. Plaintiffs highlight restrictions imposed by Twitter and LinkedIn, starting in 2021, on Dr. Kulldorff's posts about natural immunity. They also point out that Twitter restricted the visibility of Dr. Kheriarty's posts about vaccine safety and efficacy, as well as the ethics surrounding vaccine mandates. Attempting to show causation, the plaintiffs emphasize that in January 2022, Facebook reported to White House officials that it had recently demoted one post advocating for natural immunity over vaccine immunity. But neither the timing nor the platforms line up (nor, in Dr. Kheriarty's case, does the content), so the plaintiffs cannot show that these restrictions were traceable to the White House officials. In fact, there is no record evidence that White House officials ever communicated at all with LinkedIn.
Drs. Bhattacharya and Kulldorff claim that, after disagreeing with the CDC and other federal health officials, they faced a "relentless covert campaign of social-media censorship." App. 585 (emphasis deleted). They refer to the platforms' suppression of the Great Barrington Declaration, their coauthored report calling for an end to lockdowns. But their declarations do not suggest that anyone at the CDC was involved; rather, they point to officials at the National Institutes of Health and the NIAID. Those entities are not before us. With nothing else to show, Drs. Bhattacharya, Kulldorff, and Kheriarty have not established a likelihood that their past restrictions are traceable to either the White House officials or the CDC.
Jim Hoft. Both Hoft and his news website, "The Gateway Pundit," experienced election and COVID-19-related restrictions on various platforms. Hoft tries to demonstrate his standing to sue only the FBI and CISA, which means that only the suppression of his election-related posts is relevant. (As already discussed, the record contains no evidence that either the FBI or CISA engaged with the platforms about the pandemic.) First, Hoft points to the FBI's role in the platforms' adoption of hacked-material policies. And he claims that Twitter, in December 2020, censored content about the Hunter Biden laptop story under such a policy. The post was titled: "Where's Hunter? How is Hunter Celebrating the New Year? New Photos of Hunter Biden Pushing Drugs on Women Emerge." Hoft's own declaration reveals that Twitter acted according to its "rules against posting or sharing privately produced/distributed intimate media of someone without their express consent." Id., at 608. Hoft provides no evidence that Twitter adopted a policy against posting private, intimate content in response to the FBI's warnings about hack-and-leak operations. Plus, it was Hoft's brother, Joe Hoft, who posted this tweet; Twitter therefore suspended Joe Hoft's account. It is unclear why Jim Hoft would have standing to sue for his brother's injury.
Hoft claims that his content appears on a CISA document tracking posts that various entities had flagged for the platforms as misinformation. The spreadsheet shows that a private entity, the Election Integrity Partnership – not CISA – alerted Twitter to an unidentified article from the Gateway Pundit. And the spreadsheet does not reveal whether Twitter removed or otherwise suppressed that post. This evidence does not support the conclusion that Hoft's past injuries are likely traceable to the FBI or CISA.
Jill Hines. Of all the plaintiffs, Hines makes the best showing of a connection between her social-media restrictions and communications between the relevant platform (Facebook) and specific defendants (CDC and the White House). That said, most of the lines she draws are tenuous, particularly given her burden of proof at the preliminary injunction stage – recall that she must show that her restrictions are likely traceable to the White House and the CDC.
A healthcare activist, Hines codirects "Health Freedom Louisiana," a group that advocated against COVID-19 mask and vaccine mandates. In October 2020 – before the start of communications with the White House and the bulk of communications with the CDC – Facebook began to reduce the reach of Hines' and Health Freedom's pages. Hines tries to connect Facebook's subsequent actions against her to both the White House officials and the CDC.
First, Facebook "deplatformed" (i.e., deleted) one of Health Freedom's groups in July 2021. The last post in the group asked members to contact state legislators about health freedom legislation. Three months earlier, a White House official sent Facebook several "suggestions" that were "circulating around the building and informing thinking," including that the platform should "end group recommendations for groups with a history of COVID-19 or vaccine misinformation." 54 Record 16,870–16,871. A week later, Facebook replied that it had "already removed all health groups from our recommendation feature." App. 716. It is hard to know what to make of this. Facebook reported that it had already acted, which tends to imply that Facebook made its decision independently of the White House. Moreover, Facebook and the White House communicated about removing groups from recommendation features, not deleting them altogether – further weakening the inference that Facebook was implementing White House policy rather than its own.[6]
Next, in April 2023, Facebook gave Hines a warning after she reposted content from Robert F. Kennedy, Jr. Two years earlier, White House officials had pushed Facebook to remove the accounts of the "disinformation dozen," 12 people (including Kennedy) supposedly responsible for a majority of COVID-19-related misinformation. Hines tries to link the warning she received to this earlier White House pressure. Again, though, the link is weak. There is no evidence that the White House asked Facebook to censor every user who reposts a member of the disinformation dozen, nor did Facebook change its policies to do so. Facebook's 2023 warning to Hines bears only a tangential relationship to the White House's 2021 directive to Facebook.
Hines traces her remaining restrictions to the CDC. Beginning in October 2020, Facebook fact checked Hines' posts about pregnant women taking the COVID-19 vaccine, along with posts including data from the Vaccine Adverse Event Reporting System (VAERS). And in March 2021, the CDC flagged several misinformation trends for Facebook, including claims related to pregnancy and VAERS data. Because Hines does not provide dates for the fact checks, we cannot know whether the CDC could be responsible.
In May 2022, Facebook restricted Hines' account for posting an article discussing increased rates of myocarditis in teenagers following vaccination. A little over a year earlier, the CDC warned Facebook against claims of "unsubstantiated links to new [vaccine] side effects," including "'irritab[ility],'" "'auto-immune issues, infertility,'" and "'neurological damage including lowered IQ.'" 54 Record 17,042–17,043 (emphasis deleted). There is no evidence that the CDC ever listed myocarditis as an unsubstantiated side effect – but because it is an alleged side effect, it at least falls under the same umbrella as the CDC's communication. Health Freedom's February 2023 violation, by contrast, was for posting that vaccine manufacturers would not compensate those with vaccine-related injuries – a topic that bears little resemblance to the content that the CDC flagged.
In April 2023, Hines received violations for posts about children and the vaccine. In November 2021, Facebook worked with the CDC to update its policies to remove additional false claims, including that "'the COVID vaccine is not safe for kids.'" 37 id., at 11,457. It is not clear that either of Hines' posts violated the CDC-influenced policy against false claims related to children and the vaccine. One simply referred to the World Health Organization's COVID-19 vaccine recommendations for children, and the other discussed the role of children within the "predatory" pharmaceutical industry. App. 789–790. Given the loose match between the policy and the posts, it is hard to call it "likely" that Facebook was enforcing the CDC's preferences rather than its own.[7]
With one or two potentially viable links, Hines makes the best showing of all the plaintiffs. Still, Facebook was targeting her pages before almost all of its communications with the White House and the CDC, which weakens the inference that her subsequent restrictions are likely traceable to "government-coerced enforcement" of Facebook's policies, 83 F. 4th, at 370 (emphasis deleted), rather than to Facebook's independent judgment.[8] Even assuming, however, that Hines has eked out a showing of traceability for her past injuries, the past is relevant only insofar as it predicts the future. And this weak record gives her little momentum going forward.
3
To obtain forward-looking relief, the plaintiffs must establish a substantial risk of future injury that is traceable to the Government defendants and likely to be redressed by an injunction against them. To carry that burden, the plaintiffs must proffer evidence that the defendants' "allegedly wrongful behavior w[ould] likely occur or continue." Friends of the Earth, Inc. v. Laidlaw Environmental Services (TOC), Inc., 528 U.S. 167, 190 (2000). At the preliminary injunction stage, the plaintiffs must show that they are likely to succeed in carrying that burden. See Winter, 555 U. S., at 22. But without proof of an ongoing pressure campaign, it is entirely speculative that the platforms' future moderation decisions will be attributable, even in part, to the defendants.
The plaintiffs treat the defendants as a monolith, claiming broadly that "'the governmen[t]'" continues to communicate with the platforms about "'content-moderation issues.'" Brief for Respondents 29 (quoting 83 F. 4th, at 369). But we must confirm that each Government defendant continues to engage in the challenged conduct, which is "coercion" and "significant encouragement," not mere "communication." Plus, the plaintiffs have only explicitly identified an interest in speaking about COVID-19 or elections – so the defendants' discussions about content-moderation issues must focus on those topics.
We begin with the plaintiffs who have not pointed to any past restrictions likely traceable to the Government defendants. This failure to establish traceability for past harms – which can serve as evidence of expected future harm – "substantially undermines [the plaintiffs'] standing theory." Clapper, 568 U. S., at 411. These plaintiffs (i.e., everyone other than Hines) are thus particularly ill suited to the task of establishing their standing to seek forward-looking relief.
Take Hoft, the only plaintiff who has expressed interest in speaking about elections (and thus the only plaintiff with potential standing to sue the FBI and CISA). The FBI's challenged conduct was ongoing at the time of the complaint, as the agency worked with the platforms during the 2022 midterm election season. Still, Hoft must rely on a "speculative chain of possibilities" to establish a likelihood of future harm traceable to the FBI. Id., at 414. Hoft's future posts (presumably about the 2024 Presidential election) must contain content that falls within a misinformation trend that the FBI has identified or will identify in the future. The FBI must pressure the platforms to remove content within that category. The platform must then suppress Hoft's post, and it must do so at least partly in response to the FBI, rather than in keeping with its own content-moderation policy. Hoft cannot satisfy his burden with such conjecture. CISA, meanwhile, stopped switchboarding in mid-2022, and the Government has represented that it will not resume operations for the 2024 election. Especially in light of his poor showing of traceability in the past, Hoft has failed to demonstrate likely future injury at the hands of the FBI or CISA – so the injunction against those entities cannot survive.
The doctors and the state plaintiffs, who focus on COVID-19 content, have a similarly uphill battle vis-à-vis the White House, the Surgeon General's Office, and the CDC. Hines, with her superior showing on past harm, is in a slightly better position to demonstrate likely future harm at the hands of these defendants. Still, she has not shown enough.
Starting with the White House and Surgeon General's Office, the vast majority of their public and private engagement with the platforms occurred in 2021, when the pandemic was still in full swing. By August 2022, when Hines joined the case, the officials' communications about COVID-19 misinformation had slowed to a trickle. Publicly, the White House Press Secretary made two statements in February and April 2022. First, she said that the platforms should continue "call[ing] out misinformation and disinformation." 3 Record 758. Two months later, she spoke generally about §230 and antitrust reform, but did not mention content moderation or COVID-19 misinformation. In March 2022, the Surgeon General issued a voluntary "Request for Information" from the platforms about their misinformation policies.[9]
Privately, Facebook sent monthly "Covid Insights" reports to officials in the White House and the Surgeon General's Office, at least until July 2022. These reports contained information about the top 100 vaccine-related posts in the United States, including whether Facebook took action against any of them. In June, Facebook asked if it should continue sending these reports, as it had stopped seeing "problematic vaccine related" content in the top posts. 50 id., at 15,645–15,646. The official replied that, though he would "normally say we are good to discontinue," the reports would be helpful "as we start to ramp up . . . vaccines" for children under five. Id., at 15,645. The record contains no other evidence of private contact with respect to COVID-19 misinformation.
On this record, it appears that the frequent, intense communications that took place in 2021 had considerably subsided by 2022. (Perhaps unsurprisingly, given the changed state of the pandemic.) It is thus very difficult for Hines to show that she faces future harm that is traceable to officials in the White House and the Surgeon General's Office. Recall the Fifth Circuit's reasoning regarding traceability for past harms: In the face of a governmental "pressure campaign," the "platforms' censorship decisions were likely attributable at least in part to [their] reluctance to risk the adverse legal or regulatory consequences that could result from a refusal to adhere to the government's directives." 83 F. 4th, at 370. But in the months leading up to this suit, these officials issued no directives and threatened no consequences. They only asked for information about the most popular vaccine-related posts. Hines does not allege that her content has fallen, or is likely to fall, in that category.
In these circumstances, Hines cannot rely on "the predictable effect of Government action on the decisions of third parties"; rather, she can only "speculat[e] about the decisions of third parties." Department of Commerce, 588 U. S., at 768. It is "no more than conjecture" to assume that Hines will be subject to White House-induced content moderation. Los Angeles v. Lyons, 461 U.S. 95, 108 (1983). Hines (along with the other plaintiffs) has therefore failed to establish a likelihood of future injury traceable to the White House or the Surgeon General's Office. Likewise, the risk of future harm traceable to the CDC is minimal. The CDC stopped meeting with the platforms in March 2022. Thereafter, the platforms sporadically asked the CDC to verify or debunk several claims about vaccines. But the agency has not received any such message since the summer of 2022.[10]
The plaintiffs' counterarguments do not persuade. First, they argue that they suffer "continuing, present adverse effects" from their past restrictions, as they must now self-censor on social media. O'Shea, 414 U. S., at 496. But the plaintiffs "cannot manufacture standing merely by inflicting harm on themselves based on their fears of hypothetical future harm that is not certainly impending." Clapper, 568 U. S., at 416. And as we explained, the plaintiffs have not shown that they are likely to face a risk of future censorship traceable to the defendants. Indeed, even before the defendants entered the scene, the plaintiffs "had a similar incentive to engage in" self-censorship, given the platforms' independent content moderation. Id., at 417. So it is "difficult to see how" the plaintiffs' self-censorship "can be traced to" the defendants. Ibid.
Second, the plaintiffs and the dissent suggest that the platforms continue to suppress their speech according to policies initially adopted under Government pressure. Post, at 21. That may be true. But the plaintiffs have a redressability problem. "To determine whether an injury is redressable," we "consider the relationship between 'the judicial relief requested' and the 'injury' suffered." California v. Texas, 593 U.S. 659, 671 (2021). The plaintiffs assert several injuries – their past social-media restrictions, current self-censorship, and likely social-media restrictions in the future. The requested judicial relief, meanwhile, is an injunction stopping certain Government agencies and employees from coercing or encouraging the platforms to suppress speech. A court could prevent these Government defendants from interfering with the platforms' independent application of their policies. But without evidence of continued pressure from the defendants, it appears that the platforms remain free to enforce, or not to enforce, those policies – even those tainted by initial governmental coercion. The platforms are "not parties to the suit, and there is no reason they should be obliged to honor an incidental legal determination the suit produced." Lujan, 504 U. S., at 569 (plurality opinion); see also Haaland v. Brackeen, 599 U.S. 255, 293–294 (2023).
Indeed, the available evidence indicates that the platforms have enforced their policies against COVID-19 misinformation even as the Federal Government has wound down its own pandemic response measures. For instance, Hines reports that Facebook imposed several restrictions on her vaccine-related posts in the spring of 2023. Around the same time, in April 2023, President Biden signed a joint resolution that ended the national COVID-19 emergency. See Pub. L. 118–3, 137 Stat. 6. The next month, the White House disbanded its COVID-19 Response Team, which was responsible for many of the challenged communications in this case. Enjoining the Government defendants, therefore, is unlikely to affect the platforms' content-moderation decisions.[11]
C
We conclude briefly with the plaintiffs' "right to listen" theory. The individual plaintiffs claim an interest in reading and engaging with the content of other speakers on social media. The First Amendment, they argue, protects that interest. Thus, the plaintiffs assert injuries based on the restrictions that countless other social-media users have experienced.
This theory is startlingly broad, as it would grant all social-media users the right to sue over someone else's censorship – at least so long as they claim an interest in that person's speech. This Court has "never accepted such a boundless theory of standing." Already, LLC v. Nike, Inc., 568 U.S. 85, 99 (2013). While we have recognized a "First Amendment right to 'receive information and ideas,'" we have identified a cognizable injury only where the listener has a concrete, specific connection to the speaker. Kleindienst v. Mandel, 408 U.S. 753, 762 (1972). For instance, in Mandel, we agreed that a group of professors had a First Amendment interest in challenging the visa denial of a person they had invited to speak at a conference. Id., at 762–765. And in Virginia Bd. of Pharmacy v. Virginia Citizens Consumer Council, Inc., we concluded that prescription-drug consumers had an interest in challenging the prohibition on advertising the price of those drugs. 425 U.S. 748, 756–757 (1976).
Attempting to satisfy this requirement, the plaintiffs emphasize that hearing unfettered speech on social media is critical to their work as scientists, pundits, and activists. But they do not point to any specific instance of content moderation that caused them identifiable harm. They have therefore failed to establish an injury that is sufficiently "concrete and particularized." Lujan, 504 U. S., at 560.
The state plaintiffs, claiming their own version of the "right to listen" theory, assert a sovereign interest in hearing from their citizens on social media. See 83 F. 4th, at 372–373. But this theory suffers from the same flaws as the individual plaintiffs' theory. The States have not identified any specific speakers or topics that they have been unable to hear or follow.
The States cite this supposed sovereign injury as a basis for asserting third-party standing on behalf of "the citizens they would listen to." Brief for Respondents 30. But "[t]his argument is a thinly veiled attempt to circumvent the limits on parens patriae standing." Brackeen, 599 U. S., at 295, n. 11. Namely, States do not have "'standing as parens patriae to bring an action against the Federal Government.'" Id., at 295.
The States, like the individual plaintiffs, have failed to establish a likelihood of standing.
*  *  *
The plaintiffs, without any concrete link between their injuries and the defendants' conduct, ask us to conduct a review of the years-long communications between dozens of federal officials, across different agencies, with different social-media platforms, about different topics. This Court's standing doctrine prevents us from "exercis[ing such] general legal oversight" of the other branches of Government. TransUnion, 594 U. S., at 423–424. We therefore reverse the judgment of the Fifth Circuit and remand the case for further proceedings consistent with this opinion.
It is so ordered.
Notes
[1] Since the events of this suit, Twitter has merged into X Corp. and is now known as X. Facebook is now known as Meta Platforms. For the sake of clarity, we will refer to these platforms as Twitter and Facebook, as they were known during the vast majority of the events underlying this suit.
[2] The District Court also enjoined the National Institute of Allergy and Infectious Diseases (NIAID) and the State Department, along with their officials and employees. 680 F. Supp. 3d, at 700–701, 704–705. The Fifth Circuit removed these entities and individuals from the injunction, however, so they are not before us. Missouri v. Biden, 83 F. 4th 350, 391 (2023).
[3] Because we do not reach the merits, we express no view as to whether the Fifth Circuit correctly articulated the standard for when the Government transforms private conduct into state action.
[4] The Fifth Circuit relied on the District Court's factual findings, many of which unfortunately appear to be clearly erroneous. The District Court found that the defendants and the platforms had an "efficient report-and-censor relationship." Missouri v. Biden, 680 F. Supp. 3d 630, 715 (WD La. 2023). But much of its evidence is inapposite. For instance, the court says that Twitter set up a "streamlined process for censorship requests" after the White House "bombarded" it with such requests. Ibid., n. 662 (internal quotation marks omitted). The record it cites says nothing about "censorship requests." See App. 639–642. Rather, in response to a White House official asking Twitter to remove an impersonation account of President Biden's granddaughter, Twitter told the official about a portal that he could use to flag similar issues. Ibid. This has nothing to do with COVID-19 misinformation. The court also found that "[a] drastic increase in censorship . . . directly coincided with Defendants' public calls for censorship and private demands for censorship." 680 F. Supp. 3d, at 715. As to the "calls for censorship," the court's proof included statements from Members of Congress, who are not parties to this suit. Ibid., and n. 658. Some of the evidence of the "increase in censorship" reveals that Facebook worked with the CDC to update its list of removable false claims, but these examples do not suggest that the agency "demand[ed]" that it do so. Ibid. Finally, the court, echoing the plaintiffs' proposed statement of facts, erroneously stated that Facebook agreed to censor content that did not violate its policies. Id., at 714, n. 655. Instead, on several occasions, Facebook explained that certain content did not qualify for removal under its policies but did qualify for other forms of moderation.
[5] The Fifth Circuit held that States "sustain a direct injury when the social-media accounts of state officials are censored due to federal coercion." 83 F. 4th, at 372. Because the State failed to show that its official was censored, we need not express a view on this theory.
[6] Hines tries to link this restriction to the Surgeon General's Office as well, suggesting that the White House and Surgeon General together pressured Facebook. But the record reveals that a White House official sent the relevant email, and Facebook responded only to White House officials. The Surgeon General's Office was seemingly uninvolved. Thus, Hines cannot demonstrate that her past restriction is traceable to the Surgeon General's Office. The plaintiffs do not attempt to draw any other connections between their restrictions and the Surgeon General's Office.
[7] The dissent does not dispute the Court's assessment of these asserted links. Instead, the dissent draws links that Hines herself has not set forth, often based on injuries that Hines never claimed. Compare post, at 19–20, with Brief for Respondents 19–20; App. 628–632. For instance, the dissent says that in May 2021, Facebook began demoting content from accounts that repeatedly shared misinformation, purportedly due to White House pressure. Post, at 10, 19. Because Facebook frequently fact checked Hines' posts, the dissent simply assumes (without citing Hines' declarations) that her content was subsequently hidden from her friends' feeds. Post, at 19. Likewise, pointing to an August 2021 policy change, the dissent concludes that the mid-July 2021 deplatforming of one of Hines' groups rendered her other pages "non-recommendable." Ibid. Hines, however, never claimed as much – and the plaintiffs bear the burden to establish standing by setting forth "specific facts." Lujan v. Defenders of Wildlife, 504 U.S. 555, 561 (1992) (internal quotation marks omitted). It is especially important to hold the plaintiffs to their burden in a case like this one, where the record spans over 26,000 pages and the lower courts did not make any specific causation findings. As the Seventh Circuit has memorably put it, "[j]udges are not like pigs, hunting for truffles buried [in the record]." Gross v. Cicero, 619 F.3d 697, 702 (2010) (internal quotation marks omitted).
[8] By acknowledging the real possibility that Facebook acted independently in suppressing Hines' content, we are not applying a "new and heightened standard," as the dissent claims. Post, at 20. The whole purpose of the traceability requirement is to ensure that "in fact, the asserted injury was the consequence of the defendants' actions," rather than of "the independent action" of a third party. Simon v. Eastern Ky. Welfare Rights Organization, 426 U.S. 26, 42, 45 (1976). Nor is our analysis inconsistent with Department of Commerce v. New York, 588 U.S. 752 (2019). See post, at 19. There, the plaintiffs, including several States, challenged the Secretary of Commerce's decision to reinstate a citizenship question on the census. 588 U. S., at 761, 764. They argued that this question would make noncitizens less likely to respond to the census, leading to an inaccurate population count and the concomitant loss of congressional seats and federal funding. Id., at 766–767. The plaintiffs' injuries thus depended on the actions of third parties. Id., at 767–768. The District Court found that noncitizens had historically responded at lower rates than citizens to previous versions of the census (and other surveys) that included a citizenship question and that noncitizens were disproportionately likely to stop responding to those questionnaires once they reached the citizenship question. New York v. United States Dept. of Commerce, 351 F. Supp. 3d 502, 578–579 (SDNY 2019). Crediting those findings, this Court concluded that the plaintiffs "met their burden of showing that third parties will likely react in predictable ways to the citizenship question." Department of Commerce, 588 U. S., at 768. The dissent suggests that it "would have been difficult for [the plaintiffs] to determine which noncitizen households failed to respond to the census because of a citizenship question and which had other reasons." Post, at 20.
But the evidence made clear that the citizenship question drove noncitizens' lower response rates; the District Court made no findings about noncitizens' response rates to the census generally. Here, by contrast, the evidence is murky. Facebook targeted Hines' posts (and others like hers) before the White House entered the picture, meaning that Facebook had independent incentives to restrict Hines' content. It is therefore difficult to say that the White House was responsible (even in part) for all of Hines' later restrictions – especially absent clear links between White House content-moderation requests to Facebook and Facebook's actions toward Hines. Cf. post, at 21.
[9] According to a declaration submitted by the Surgeon General's Chief of Staff, no one in that office met with the platforms to discuss their submissions "or otherwise had substantive communications with social media companies about the RFI." 61 Record 19,480.
[10] The dissent claims that the future injury prong is satisfied because Facebook continued to censor Hines at the time of her complaint and thereafter. Post, at 17. But the dissent gives short shrift to the key point: By the time Hines filed suit in August 2022, the White House was no longer engaged in any sort of "pressure campaign" toward Facebook. (Note that the dissent, in its 10-page recounting of the record, devotes only one paragraph to the events of 2022. Post, at 14.) Thus, when Hines sued, it was unlikely that Facebook's actions were fairly traceable to the White House at the time – or would be going forward.
[11] As with traceability, the dissent is wrong to claim that we are applying a "new and elevated standard for redressability." Post, at 22. Far from holding plaintiffs to a "certainty" standard, ibid., we simply conclude that an injunction against the Government defendants is unlikely to stop the platforms from suppressing the plaintiffs' speech. And while traceability and redressability are " 'often "flip sides of the same coin," ' " post, at 22 (quoting FDA v. Alliance for Hippocratic Medicine, 602 U.S. 367, 380 (2024); emphasis added), that is not always the case. Facebook might continue to remove Hines' posts under a policy that it adopted at the White House's behest (thus satisfying traceability). But if the White House officials have already abandoned their pressure campaign, enjoining them is unlikely to prompt Facebook to stop enforcing the policy (thus failing redressability). Finally, by invoking Massachusetts v. EPA, it is the dissent that applies a new and loosened standard for redressability. Post, at 22. In that case, we explained that state plaintiffs are "entitled to special solicitude" when it comes to standing, and we conducted our analysis accordingly. 549 U.S. 497, 520 (2007). That "special solicitude" does not apply to Jill Hines, an individual.
SUPREME COURT OF THE UNITED STATES
_________________
No. 23–411
_________________
VIVEK H. MURTHY, SURGEON GENERAL, et al., PETITIONERS v. MISSOURI, et al.
on writ of certiorari to the united states court of appeals for the fifth circuit
[June 26, 2024]
Justice Alito, with whom Justice Thomas and Justice Gorsuch join, dissenting.
This case involves what the District Court termed "a far-reaching and widespread censorship campaign" conducted by high-ranking federal officials against Americans who expressed certain disfavored views about COVID-19 on social media. Missouri v. Biden, 680 F. Supp. 3d 630, 729 (WD La. 2023). Victims of the campaign perceived by the lower courts brought this action to ensure that the Government did not continue to coerce social media platforms to suppress speech. Among these victims were two States, whose public health officials were hampered in their ability to share their expertise with state residents; distinguished professors of medicine at Stanford and Harvard; a professor of psychiatry at the University of California, Irvine School of Medicine; the owner and operator of a news website; and Jill Hines, the director of a consumer and human rights advocacy organization. All these victims simply wanted to speak out on a question of the utmost public importance.
To protect their right to do so, the District Court issued a preliminary injunction, App. 278–285, and the Court of Appeals found ample evidence to support injunctive relief. See Missouri v. Biden, 83 F. 4th 350 (CA5 2023).
If the lower courts' assessment of the voluminous record is correct, this is one of the most important free speech cases to reach this Court in years. Freedom of speech serves many valuable purposes, but its most important role is protection of speech that is essential to democratic self-government, see Snyder v. Phelps, 562 U.S. 443, 451–452 (2011), and speech that advances humanity's store of knowledge, thought, and expression in fields such as science, medicine, history, the social sciences, philosophy, and the arts, see United States v. Alvarez, 567 U.S. 709, 751 (2012) (Alito, J., dissenting).
The speech at issue falls squarely into those categories. It concerns the COVID-19 virus, which has killed more than a million Americans.[1] Our country's response to the COVID-19 pandemic was and remains a matter of enormous medical, social, political, geopolitical, and economic importance, and our dedication to a free marketplace of ideas demands that dissenting views on such matters be allowed. I assume that a fair portion of what social media users had to say about COVID-19 and the pandemic was of little lasting value. Some was undoubtedly untrue or misleading, and some may have been downright dangerous. But we now know that valuable speech was also suppressed.[2] That is what inevitably happens when entry to the marketplace of ideas is restricted.
Of course, purely private entities like newspapers are not subject to the First Amendment, and as a result, they may publish or decline to publish whatever they wish. But government officials may not coerce private entities to suppress speech, see National Rifle Association of America v. Vullo, 602 U.S. 175 (2024), and that is what happened in this case.
The record before us is vast. It contains evidence of communications between many different government actors and a variety of internet platforms, as well as evidence regarding the effects of those interactions on the seven different plaintiffs. For present purposes, however, I will focus on (a) just a few federal officials (namely, those who worked either in the White House or the Surgeon General's office), (b) only one of the most influential social media platforms, Facebook, and (c) just one plaintiff, Jill Hines, because if any of the plaintiffs has standing, we are obligated to reach the merits of this case. See Rumsfeld v. Forum for Academic and Institutional Rights, Inc., 547 U.S. 47, 52, n. 2 (2006).
With the inquiry focused in this way, here is what the record plainly shows. For months in 2021 and 2022, a coterie of officials at the highest levels of the Federal Government continuously harried and implicitly threatened Facebook with potentially crippling consequences if it did not comply with their wishes about the suppression of certain COVID-19-related speech. Not surprisingly, Facebook repeatedly yielded. As a result, Hines was indisputably injured, and due to the officials' continuing efforts, she was threatened with more of the same when she brought suit. These past and threatened future injuries were caused by and traceable to censorship that the officials coerced, and the injunctive relief she sought was an available and suitable remedy. This evidence was more than sufficient to establish Hines's standing to sue, see Lujan v. Defenders of Wildlife, 504 U.S. 555, 561–562 (1992), and consequently, we are obligated to tackle the free speech issue that the case presents. The Court, however, shirks that duty and thus permits the successful campaign of coercion in this case to stand as an attractive model for future officials who want to control what the people say, hear, and think.
That is regrettable. What the officials did in this case was more subtle than the ham-handed censorship found to be unconstitutional in Vullo, but it was no less coercive. And because of the perpetrators' high positions, it was even more dangerous. It was blatantly unconstitutional, and the country may come to regret the Court's failure to say so. Officials who read today's decision together with Vullo will get the message. If a coercive campaign is carried out with enough sophistication, it may get by. That is not a message this Court should send.
In the next section of this opinion, I will recount in some detail what was done by the officials in this case, but in considering the coercive impact of their conduct, two prominent facts must be kept in mind.
First, social media have become a leading source of news for many Americans,[3] and with the decline of other media, their importance may grow.
Second, internet platforms, although rich and powerful, are at the same time far more vulnerable to Government pressure than other news sources. If a President dislikes a particular newspaper, he (fortunately) lacks the ability to put the paper out of business. But for Facebook and many other social media platforms, the situation is fundamentally different. They are critically dependent on the protection provided by §230 of the Communications Decency Act of 1996, 47 U. S. C. §230, which shields them from civil liability for content they spread. They are vulnerable to antitrust actions; indeed, Facebook CEO Mark Zuckerberg has described a potential antitrust lawsuit as an "existential" threat to his company.[4] And because their substantial overseas operations may be subjected to tough regulation in the European Union and other foreign jurisdictions, they rely on the Federal Government's diplomatic efforts to protect their interests.
For these and other reasons,[5] internet platforms have a powerful incentive to please important federal officials, and the record in this case shows that high-ranking officials skillfully exploited Facebook's vulnerability. When Facebook did not heed their requests as quickly or as fully as the officials wanted, the platform was publicly accused of "killing people" and subtly threatened with retaliation.
Not surprisingly, these efforts bore fruit. Facebook adopted new rules that better conformed to the officials' wishes, and many users who expressed disapproved views about the pandemic or COVID-19 vaccines were "deplatformed" or otherwise injured.
I
A
I begin by recounting the White House-led campaign to coerce Facebook. The story starts in early 2021, when White House officials began communicating with Facebook about the spread of misinformation about COVID-19 on its platform. Their emails started as questions, e.g., "Can you also give us a sense of misinformation that might be falling outside of your removal policies?" 10 Record 3397. But when the White House did not get the results it wanted, its questions quickly turned to virtual demands. And sometimes, those statements were paired with explicit references to potential consequences.
We may begin this account with an exchange that occurred in March 2021, when the Washington Post reported that Facebook was conducting a study that examined whether posts on the platform questioning COVID-19's severity or the vaccines' efficacy dissuaded some Americans from being vaccinated.[6] The study noted that Facebook's rules permitted some of this content to circulate. Rob Flaherty, the White House Director of Digital Strategy, promptly emailed Facebook about the report. The subject line of his email contained this accusation: "You are hiding the ball." 30 id., at 9366. Flaherty noted that the White House was "gravely concerned that [Facebook] is one of the top drivers of vaccine hesitancy," and he demanded to know how Facebook was trying to solve the problem. Id., at 9365. In his words, "we want to know that you're trying, we want to know how we can help, and we want to know that you're not playing a shell game with us when we ask you what is going on." Ibid.
Andy Slavitt, the White House Senior Advisor for the COVID-19 Response, chimed in with similar complaints. "[R]elative to othe[r]" platforms, he said, "interactions with Facebook are not straightforward" even though the misinformation problems there, in his view, were "worse." Id., at 9364. According to Slavitt, the White House did not believe that Facebook was "trying to solve the problem," so he informed Facebook that "[i]nternally we have been considering our options on what to do about it." Ibid.
Facebook responded apologetically to this and other missives. It acknowledged that "[w]e obviously have work to do to gain your trust." Id., at 9365. And after a follow-up conversation, the platform promised Flaherty and Slavitt that it would adopt additional policies to "reduc[e] virality of vaccine hesitancy content." Id., at 9369. In particular, Facebook promised to "remove [any] Groups, Pages, and Accounts" that "disproportionately promot[e] . . . sensationalized content" about the risks of vaccines, even though it acknowledged that user stories about their experiences and those of family members or friends were "ofte[n] true." Ibid. Facebook also promised to share additional data with the White House, ibid., but Flaherty was not fully satisfied. He said that the additional data Facebook offered was not "going to get us the info we're looking for," but "it shows to me that you at least understand the ask." Id., at 9368.
In April, Flaherty again demanded information on the "actions and changes" Facebook was taking "to ensure you're not making our country's vaccine hesitancy problem worse." Id., at 9371. To emphasize his urgency, Flaherty likened COVID-19 misinformation to misinformation that led to the January 6 attack on the Capitol. Ibid. Facebook, he charged, had helped to "increase skepticism" of the 2020 election, and he claimed that "an insurrection . . . was plotted, in large part, on your platform." Ibid. He added: "I want some assurances, based in data, that you are not doing the same thing again here." Ibid. Facebook was surprised by these remarks because it "thought we were doing a better job" communicating with the White House, but it promised to "more clearly respon[d]" in the future. Ibid.
The next week, Facebook officers spoke with Slavitt and Flaherty about reports of a rare blood clot caused by the Johnson & Johnson vaccine. Id., at 9385. The conversation quickly shifted when the White House noticed that one of the most-viewed vaccine-related posts from the past week was a Tucker Carlson video questioning the efficacy of the Johnson & Johnson vaccine. Id., at 9376, 9388. Facebook informed the White House that the video did not "qualify for removal under our policies" and thus would be demoted instead, ibid., but that answer did not please Flaherty. "How was this not violative?" he queried, and "[w]hat exactly is the rule for removal vs demoting?" Id., at 9387. Then, for the second time in a week, he invoked the January 6 attack: "Not for nothing, but last time we did this dance, it ended in an insurrection." Id., at 9388. When Facebook did not respond promptly, he made his demand more explicit: "These questions weren't rhetorical." Id., at 9387.
If repeated accusations that Facebook aided an insurrection did not sufficiently convey the White House's displeasure, Flaherty and Slavitt made sure to do so by phone.[7] In one call, both officials chided Facebook for not being "straightforward" and not "play[ing] ball." Committee Report 141–142. Flaherty also informed Facebook that he was reporting on the COVID-19 misinformation problem to the President. Id., at 136.
After a second call, a high-ranking Facebook executive perceived that Slavitt was "outraged – not too strong a word to describe his reaction" – that the platform had not removed a fast-spreading meme suggesting that the vaccines might cause harm. Id., at 295. The executive had "countered that removing content like that would represent a significant incursion into traditional boundaries of free expression in the US," but Slavitt was unmoved, in part because he presumed that other platforms "would never accept something like this." Ibid.
A few weeks later, White House Press Secretary Jen Psaki was asked at a press conference about Facebook's decision to keep former President Donald Trump off the platform. See Press Briefing by Press Secretary Jen Psaki and Secretary of Agriculture Tom Vilsack (May 5, 2021) (hereinafter May 5 Press Briefing).[8] Psaki deflected that question but took the opportunity to call on platforms like Facebook to " 'stop amplifying untrustworthy content . . . , especially related to COVID-19, vaccinations, and elections.' " 78 Record 25170. In the same breath, Psaki reminded the platforms that President Biden " 'supports . . . a robust anti-trust program.' " Id., at 25171 (emphasis deleted); May 5 Press Briefing.
Around this same time, Flaherty and Slavitt were interrogating Facebook on the mechanics of its content-moderation rules for COVID-19 misinformation. 30 Record 9391, 9397. Flaherty also forwarded to Facebook a "COVID-19 Vaccine Misinformation Brief" that had been drafted by outside researchers and was "informing thinking" in the White House on what Facebook's policies should be. 52 id., at 16186. This document recommended that Facebook strengthen its efforts against misinformation in several ways. It recommended the adoption of "progressively severe penalties" for accounts that repeatedly posted misinformation, and it proposed that Facebook make it harder for users to find "anti-vaccine or vaccine-hesitant propaganda" from other users. Ibid. Facebook declined to adopt some of these suggestions immediately, but it did "se[t] up more dedicated monitoring for [COVID] vaccine content" and adopted a policy of "stronger demotions [for] a broader set of content." 30 id., at 9396.
The White House responded with more questions. Acknowledging that he sounded "like a broken record," Flaherty interrogated Facebook about "how much content is being demoted, and how effective [Facebook was] at mitigating reach, and how quickly." Id., at 9395. Later, Flaherty chastised Facebook for failing to prevent some vaccine-hesitant content from showing up through the platform's search function. Id., at 9400. " '[R]emoving bad information from search' is one of the easy, low-bar things you guys do to make people like me think you're taking action," he said. Id., at 9399. "If you're not getting that right, it raises even more questions about the higher bar stuff." Ibid. A few weeks after this latest round of haranguing, Facebook expanded penalties for individual Facebook accounts that repeatedly shared content that fact-checkers deemed misinformation; henceforth, all of those individuals' posts would show up less frequently in their friends' news feeds. See 9 id., at 2697; Facebook, Taking Action Against People Who Repeatedly Share Misinformation (May 26, 2021).[9]
Perhaps the most intense period of White House pressure began a short time later. On July 15, Surgeon General Vivek Murthy released an advisory titled "Confronting Health Misinformation." 78 Record 25171, 25173. Dr. Murthy suggested, among other things, algorithmic changes to demote misinformation and additional consequences for misinformation " 'super-spreaders.' " U. S. Public Health Service, Confronting Health Misinformation: The U. S. Surgeon General's Advisory on Building a Healthy Information Environment 12 (2021).[10] Dr. Murthy also joined Psaki at a press conference, where he asked the platforms to take "much, much more . . . aggressive action" to combat COVID-19 misinformation "because it's costing people their lives." Press Briefing by Press Secretary Jen Psaki and Surgeon General Dr. Vivek H. Murthy (July 15, 2021).[11]
At the same press conference, Psaki singled out Facebook as a primary driver of misinformation and asked the platform to make several changes. Facebook "should provide, publicly and transparently, data on the reach of COVID-19 [and] COVID vaccine misinformation." Ibid. It "needs to move more quickly to remove harmful, violative posts." Ibid. And it should change its algorithm to promote "quality information sources." Ibid. These recommendations echoed Slavitt's and Flaherty's private demands from the preceding months – as Psaki herself acknowledged. The White House "engage[s] with [Facebook] regularly," she said, and Facebook "certainly understand[s] what our asks are." Ibid. Apparently, the White House had not gotten everything it wanted from those private conversations, so it was turning up the heat in public.
Facebook responded by telling the press that it had partnered with the White House to counter misinformation and that it had "removed accounts that repeatedly break the rules" and "more than 18 million pieces of COVID misinformation." 78 Record 25174. But at another press briefing the next day, Psaki said these efforts were "[c]learly not" sufficient and expressed confidence that Facebook would "make decisions about additional steps they can take." See id., at 25175; Press Briefing by Press Secretary Jen Psaki (July 16, 2021).[12]
That same day, President Biden told reporters that social media platforms were " 'killing people' " by allowing COVID-related misinformation to circulate. 78 Record 25174, 25212. At oral argument, the Government suggested that the President later disclaimed any desire to hold the platforms accountable for misinformation, Tr. of Oral Arg. 34–35, but that is not so. The President's so-called clarification, like many other statements by Government officials, called on " 'Facebook' " to " 'do something about the misinformation' " on its platform. B. Klein, M. Vazquez, & K. Collins, Biden Backs Away From His Claim That Facebook Is 'Killing People' by Allowing COVID Misinformation, CNN (July 19, 2021).[13]
And far from disclaiming potential regulatory action, the White House confirmed that it had not " 'taken any options off the table.' " Ibid. In fact, the day after the President's supposed clarification, the White House Communications Director commended the President for "speak[ing] very aggressively" and affirmed that platforms "certainly . . . should be held accountable" for publishing misinformation. 61 Record 19400–19401. Indeed, she said that the White House was "reviewing" whether §230 should be amended to open the platforms to suit. Id., at 19400.
Facebook responded quickly. The same day the President made his "killing people" remark, the platform reached out to Dr. Murthy to determine "the scope of what the White House expects from us on misinformation going forward." 9 id., at 2690. The next day, Facebook asked officials about how to "get back to a good place" with the White House. 30 id., at 9403. And soon after, Facebook sent an email saying that it "hear[d]" the officials' "call for us to do more," and promptly assured the White House that it would comply. 9 id., at 2706. In spite of the White House's inflammatory rhetoric, Facebook at all times went out of its way to strike a conciliatory tone. Only two days after the President's remark – and before his supposed clarification – Facebook assured Dr. Murthy that, though "it's not great to be accused of killing people," Facebook would "find a way to deescalate and work together collaboratively." Id., at 2713.
Concrete changes followed in short order. In early August, the Surgeon General's Office reached out to Facebook for "an update of any new/additional steps you are taking with respect to health misinformation in light of" the July 15 advisory. Id., at 2703. In response, Facebook informed the Surgeon General that it would soon "expan[d] [its] COVID policies to further reduce the spread of potentially harmful content." Id., at 2701.
White House-Facebook conversations about misinformation did not end there. In September, the Wall Street Journal wrote about the spread of misinformation on Facebook, and Facebook preemptively reached out to the White House to clarify. 8 id., at 2681. Flaherty asked (again) for information on "how big the problem is, what solutions you're implementing, and how effective they've been." Ibid.
Then in October, the Washington Post published yet another story suggesting that Facebook knew more than it let on about the spread of misinformation. Flaherty emailed the link to Facebook with the subject line: "not even sure what to say at this point." Id., at 2676. And the Surgeon General's Office indicated both publicly and privately that it was disappointed in Facebook. See @Surgeon_General, X (Oct. 29, 2021) (accusing Facebook of "lacking . . . transparency and accountability");[14] 9 Record 2708. Facebook offered to speak with both the White House and the Surgeon General's Office to assuage concerns. 8 id., at 2676.
Interactions related to COVID-19 misinformation continued until at least June 2022. Id., at 2663. At that point, Facebook proposed discontinuing its reports on misinformation, but assured the White House that it would be "happy to continue, or to pick up at a later date, . . . if we hear from you that this continues to be of value." Ibid. Flaherty asked Facebook to continue reporting on misinformation because the Government was preparing to roll out COVID-19 vaccines for children under five years old and, "[o]bviously," that rollout "ha[d] the potential to be just as charged" as other vaccine-related controversies. Ibid. Flaherty added that he "[w]ould love to get a sense of what you all are planning here," and Facebook agreed to provide information for as long as necessary. Ibid.
What these events show is that top federal officials continuously and persistently hectored Facebook to crack down on what the officials saw as unhelpful social media posts, including not only posts that they thought were false or misleading but also stories that they did not claim to be literally false but nevertheless wanted obscured. See, e.g., 30 id., at 9361, 9365, 9369, 9385–9388. And Facebook's reactions to these efforts were not what one would expect from an independent news source or a journalistic entity dedicated to holding the Government accountable for its actions. Instead, Facebook's responses resembled those of a subservient entity determined to stay in the good graces of a powerful taskmaster. Facebook told White House officials that it would "work . . . to gain your trust." Id., at 9365. When criticized, Facebook representatives whimpered that they "thought we were doing a better job" but promised to do more going forward. Id., at 9371. They pleaded to know how they could "get back to a good place" with the White House. Id., at 9403. And when denounced as "killing people," Facebook responded by expressing a desire to "work together collaboratively" with its accuser. 9 id., at 2713; 78 id., at 25174. The picture is clear.
B
While all this was going on, Jill Hines and others were subjected to censorship. Hines serves as the co-director of Health Freedom Louisiana, an organization that advocated against vaccine and mask mandates during the pandemic. Over the course of the pandemic – and while the White House was pressuring Facebook – the platform repeatedly censored Hines's speech.
For instance, in the summer and fall of 2021, Facebook removed two groups that Hines had formed to discuss the vaccine. 4 id., at 1313–1315. In January 2022, Facebook restricted posts from Hines's personal page "for 30 days . . . for sharing the image of a display board used in a legislative hearing that had Pfizer's preclinical trial data on it." Id., at 1313. In late May, Facebook restricted Hines for 90 days for sharing an article about "increased emergency calls for teens with myocarditis following [COVID] vaccination." Id., at 1313–1314. Hines's public pages, Reopen Louisiana and Health Freedom Louisiana, were subjected to similar treatment. Facebook's disciplinary actions meant that both public pages suffered a drop in viewership; as Hines put it, "Each time you build viewership up [on a page], it is knocked back down with each violation." Id., at 1314. And from February to April 2023, Facebook issued warnings and violations for several vaccine-related posts shared on Hines's personal and public pages, including a post by Robert F. Kennedy, Jr., and an article entitled " 'Some Americans Shouldn't Get Another COVID-19 Vaccine Shot, FDA Says.' " 78 id., at 25503–25506. The result was that "[n]o one else was permitted to view or engage with the[se] post[s]." Id., at 25503.
II
Hines and the other plaintiffs in this case brought this suit and asked for an injunction to stop the censorship campaign just described. To maintain that suit, they needed to show that they (1) were imminently threatened with an injury in fact (2) that is traceable to the defendants and (3) that could be redressed by the court. Lujan, 504 U. S., at 560–561; O'Shea v. Littleton, 414 U.S. 488, 496 (1974). Hines satisfied all these requirements.
A
Injury in fact. Because Hines sought and obtained a preliminary injunction, it was not enough for her to show that she had been injured in the past. Instead, she had to identify a "real and immediate threat of repeated injury" that existed at the time she sued – that is, on August 2, 2022. O'Shea, 414 U. S., at 496; see also Friends of the Earth, Inc. v. Laidlaw Environmental Services (TOC), Inc., 528 U.S. 167, 191 (2000); Mollan v. Torrance, 9 Wheat. 537, 539 (1824).
The Government concedes that Hines suffered past injury, but it claims that she did not make the showing needed to obtain prospective relief. See Brief for Petitioners 17. Both the District Court and the Court of Appeals rejected this argument and found that Hines had shown that she was likely to be censored in the future. 680 F. Supp. 3d, at 713; 83 F. 4th, at 368–369. We have previously examined such findings under the "clearly erroneous" test. See Duke Power Co. v. Carolina Environmental Study Group, Inc., 438 U.S. 59, 77 (1978). But no matter what test is applied, the record clearly shows that Hines was still being censored when she sued – and that the censorship continued thereafter. See supra, at 15–16. That was sufficient to establish the type of injury needed to obtain injunctive relief. O'Shea, 414 U. S., at 496; see also County of Riverside v. McLaughlin, 500 U.S. 44, 51 (1991).
B
Traceability. To sue the White House officials, Hines had to identify a "causal connection" between the actions of those officials and her censorship. Bennett v. Spear, 520 U.S. 154, 169 (1997). Hines did not need to prove that it was only because of those officials' conduct that she was censored. Rather, as we held in Department of Commerce v. New York, 588 U.S. 752 (2019), it was enough for her to show that one predictable effect of the officials' action was that Facebook would modify its censorship policies in a way that affected her. Id., at 768.
Hines easily met that test, and her traceability theory is at least as strong as the State of New York's in the Department of Commerce case. There, the State claimed that it would be hurt by a census question about citizenship. The State predicted that the question would dissuade some noncitizen households from complying with their legal duty to complete the form, and it asserted that this in turn could cause the State to lose a seat in the House of Representatives, as well as federal funds that are distributed on the basis of population. Id., at 766–767. Although this theory depended on illegal conduct by third parties and an attenuated chain of causation, the Court found that the State had established traceability. It was enough, the Court held, that the failure of some aliens to respond to the census was "likely attributable" to the Government's introduction of a citizenship question. Id., at 768.
This is not a demanding standard, and Hines made the requisite showing – with room to spare. Recall that officials from the White House and Surgeon General's Office repeatedly hectored and implicitly threatened Facebook to suppress speech expressing the viewpoint that Hines espoused. See supra, at 6–15. Censorship of Hines was the "predictable effect" of these efforts. Department of Commerce, 588 U. S., at 768. Or, to put the point in different terms, Facebook would "likely react in predictable ways" to this unrelenting pressure. Ibid.
This alone was sufficient to show traceability, but here there is even more direct proof. On numerous occasions, the White House officials successfully pushed Facebook to tighten its censorship policies, see supra, at 7, 10, 13, and those policies had implications for Hines.[15] First, in March 2021, the White House pressured Facebook into implementing a policy of removing accounts that "disproportionately promot[e] . . . sensationalized content" about vaccines. Supra, at 7. Later that year, Facebook removed two of Hines's groups, which posted about vaccines. Supra, at 15. And when Hines sued in August 2022, she reported that her personal page was "currently restricted" for sharing vaccine-related content and, thus, that she was "under constant threat of being completely deplatformed." 4 Record 1314.
Second, in May, Facebook told Slavitt that it would "se[t] up more dedicated monitoring" of vaccine content and apply demotions to "a broader set of content." Supra, at 10. Then, a few weeks later, Facebook also increased demotions of posts by individual Facebook accounts that repeatedly shared misinformation. Ibid. Hines says that she was repeatedly fact-checked for posting about the vaccines, see supra, at 15–16; 4 Record 1314, so these policy changes apparently increased the risk that posts from her personal account would have been hidden from her friends' Facebook feeds.
Third, in response to the July 2021 comments from the White House and the Surgeon General, Facebook made more changes. Supra, at 13. And from the details Hines provides about her posting history, this policy change would have affected her. For one thing, Facebook "rendered 'non-recommendable' " any page linked to another account that had been "removed" for spreading misinformation about COVID-19. 9 Record 2701. Hines says that two of her groups were removed for alleged COVID misinformation around this time. Supra, at 15; 4 Record 1315. So under the new policy, her other pages would apparently be non-recommendable. Perhaps for this reason, though Hines attempted to convince members of her deplatformed group to migrate to a substitute group, only about a quarter of its membership made the move before the substitute group too was removed. Ibid.
For another, Facebook "increas[ed] the strength of [its] demotions for COVID and vaccine-related content that third party fact checkers rate[d] as 'Partly False' or 'Missing Context.' " 9 id., at 2701. And Facebook "ma[de] it easier to have Pages/Groups/Accounts demoted for sharing COVID and vaccine-related misinformation by . . . counting content removals" under Facebook's COVID-19 policies "towards their demotion threshold." Ibid. Under this new policy, Facebook would now consider Hines's "numerous" community standards violations, 4 id., at 1314, when determining whether to make her posts less accessible to other users. So, for instance, when Hines received several citations in early 2023, this amendment would have governed Facebook's decision to "downgrad[e] the visibility of [her] posts in Facebook's News Feed (thereby limiting its reach to other users)." 78 id., at 25503. The record here amply shows traceability.
The Court reaches the opposite conclusion by applying a new and heightened standard. The Court notes that Facebook began censoring COVID-19-related misinformation before officials from the White House and the Surgeon General's Office got involved. Ante, at 20; see also Brief for Petitioners 18. And in the Court's view, that fact makes it difficult to untangle Government-caused censorship from censorship that Facebook might have undertaken anyway. See ante, at 20. That may be so, but in the Department of Commerce census case, it also would have been difficult for New York to determine which noncitizen households failed to respond to the census because of a citizenship question and which had other reasons. Nevertheless, the Court did not require New York to perform that essentially impossible operation because it was clear that a citizenship question would dissuade at least some noncitizen households from responding. As we explained, "Article III 'requires no more than de facto causality,' " so a showing that a citizenship question affected some aliens sufficed. Department of Commerce, 588 U. S., at 768.
Here, it is reasonable to infer (indeed, the inference leaps out from the record) that the efforts of the federal officials affected at least some of Facebook's decisions to censor Hines. All of Facebook's demotion, content-removal, and deplatforming decisions are governed by its policies.[16] So when the White House pressured Facebook to amend some of the policies related to speech in which Hines engaged, those amendments necessarily impacted some of Facebook's censorship decisions. Nothing more is needed. What the Court seems to want is a series of ironclad links—from a particular coercive communication to a particular change in Facebook's rules or practice and then to a particular adverse action against Hines. No such chain was required in the Department of Commerce case, and neither should one be demanded here.
In addition to this heightened linkage requirement, the Court argues that Hines lacks standing because the threat of future injury dissipated at some point during summer 2022 when the officials' pressure campaign tapered off. Ante, at 25, n. 10. But this argument errs in two critical respects. First, the effects of the changes the officials coerced persisted. Those changes controlled censorship decisions before and after Hines sued.
Second, the White House threats did not come with expiration dates, and it would be silly to assume that the threats lost their force merely because White House officials opted not to renew them on a regular basis. Indeed, the record suggests that Facebook did not feel free to chart its own course when Hines sued; rather, the platform had promised to continue reporting to the White House and remain responsive to its concerns for as long as the officials requested. Supra, at 14.
In short, when Hines sued in August 2022, there was still a link between the White House and the injuries she was presently suffering and could reasonably expect to suffer in the future. That is enough for traceability.
C
Redressability. Finally, Hines was required to show that the threat of future injury she faced when the complaint was filed "likely would be redressed" by injunctive relief. FDA v. Alliance for Hippocratic Medicine, 602 U.S. 367, 380 (2024). This required proof that a preliminary injunction would reduce Hines's "risk of [future] harm . . . to some extent." Massachusetts v. EPA, 549 U.S. 497, 526 (2007) (emphasis added). And as we recently explained, "[t]he second and third standing requirements—causation and redressability—are often 'flip sides of the same coin.' " Alliance for Hippocratic Medicine, 602 U. S., at 380. Therefore, "[i]f a defendant's action causes an injury, enjoining the action or awarding damages for the action will typically redress that injury." Id., at 381.
Hines easily satisfied that requirement. For the reasons just explained, there is ample proof that Hines's past injuries were a "predictable effect" of the Government's censorship campaign, and the preliminary injunction was likely to prevent the continuation of the harm to at least "some extent." Massachusetts v. EPA, 549 U. S., at 526.
The Court disagrees because Facebook "remain[s] free to enforce . . . even those [policies] tainted by initial governmental coercion." Ante, at 26. But as with traceability, the Court applies a new and elevated standard for redressability, which has never required plaintiffs to be "certain" that a court order would prevent future harm. Larson v. Valente, 456 U.S. 228, 243–244, n. 15 (1982). In Massachusetts v. EPA, for example, no one could say that the relief sought—reconsideration by the EPA of its decision not to regulate the emission of greenhouse gases—would actually remedy the Commonwealth's alleged injuries, such as the loss of land due to rising sea levels. The Court's decision did not prevent the EPA from adhering to its prior decision, 549 U. S., at 534–535, and there was no way to know with any degree of certainty that any greenhouse gas regulations that the EPA might eventually issue would prevent the oceans from rising. Yet the Court found that the redressability requirement was met.
Similarly, in Department of Commerce, no one could say with any certainty that our decision barring a citizenship question from the 2020 census questionnaire would prevent New York from losing a seat in the House of Representatives, 588 U. S., at 767, and in fact that result occurred despite our decision. S. Goldmacher, New York Loses House Seat After Coming Up 89 People Short on Census, N. Y. Times, Apr. 26, 2021.[17]
As we recently proclaimed in FDA v. Alliance for Hippocratic Medicine, Article III standing is an important component of our Constitution's structural design. See 602 U. S., at 378–380. That doctrine is cheapened when the rules are not evenhandedly applied.
*  *  *
Hines showed that, when she sued, Facebook was censoring her COVID-related posts and groups. And because the White House prompted Facebook to amend its censorship policies, Hines's censorship was, at least in part, caused by the White House and could be redressed by an injunction against the continuation of that conduct. For these reasons, Hines met all the requirements for Article III standing.
III
I proceed now to the merits of Hines's First Amendment claim.[18] Government efforts to "dictat[e] the subjects about which persons may speak," First Nat. Bank of Boston v. Bellotti, 435 U.S. 765, 784–785 (1978), or to suppress protected speech are " 'presumptively unconstitutional,' " Rosenberger v. Rector and Visitors of Univ. of Va., 515 U.S. 819, 830 (1995). And that is so regardless of whether the Government carries out the censorship itself or uses a third party " 'to accomplish what . . . is constitutionally forbidden.' " Norwood v. Harrison, 413 U.S. 455, 465 (1973).
As the Court held more than 60 years ago in Bantam Books, Inc. v. Sullivan, 372 U.S. 58 (1963), the Government may not coerce or intimidate a third-party intermediary into suppressing someone else's speech. Id., at 67. Earlier this Term, we reaffirmed that important principle in National Rifle Association v. Vullo, 602 U. S., at 187–191. As we said there, "a government official cannot do indirectly what she is barred from doing directly," id., at 190, and while an official may forcefully attempt to persuade, "[w]hat she cannot do . . . is use the power of the State to punish or suppress disfavored expression," id., at 188.
In Vullo, the alleged conduct was blunt. The head of the state commission with regulatory authority over insurance companies allegedly told executives at Lloyd's directly and in no uncertain terms that she would be " 'less interested' " in punishing the company's regulatory infractions if it ceased doing business with the National Rifle Association. Id., at 183. The federal officials' conduct here was more subtle and sophisticated. The message was delivered piecemeal by various officials over a period of time in the form of aggressive questions, complaints, insistent requests, demands, and thinly veiled threats of potentially fatal reprisals. But the message was unmistakable, and it was duly received.
The principle recognized in Bantam Books and Vullo requires a court to distinguish between permissible persuasion and unconstitutional coercion, and in Vullo, we looked to three leading factors that are helpful in making that determination: (1) the authority of the government officials who are alleged to have engaged in coercion, (2) the nature of the statements made by those officials, and (3) the reactions of the third party alleged to have been coerced. 602 U. S., at 189–190, and n. 4, 191–194. In this case, all three factors point to coercion.
A
I begin with the authority of the relevant officials—high-ranking White House officials and the Surgeon General. High-ranking White House officials presumably speak for and may have the ability to influence the President, and as discussed earlier, a Presidential administration has the power to inflict potentially fatal damage on social-media platforms like Facebook. See supra, at 5. Facebook appreciates what the White House could do, and President Biden has spoken openly about that power—as he has every right to do. For instance, he has declared that the "policy of [his] Administration [is] to enforce the antitrust laws to meet the challenges posed by . . . the rise of the dominant Internet platforms," and he has directed the Attorney General and other agency heads to "enforce the antitrust laws . . . vigorously." Promoting Competition in the American Economy, Executive Order No. 14036, 3 CFR 609 (2021).[19] He has also floated the idea of amending or repealing §230 of the Communications Decency Act. See, e.g., B. Klein, White House Reviewing Section 230 Amid Efforts To Push Social Media Giants To Crack Down on Misinformation, CNN (July 20, 2021)[20]; R. Kern, White House Renews Call To 'Remove' Section 230 Liability Shield, Politico (Sept. 8, 2022).[21]
Previous administrations have also wielded significant power over Facebook. In a data-privacy case brought jointly by the Department of Justice and the Federal Trade Commission, Facebook was required "to pay an unprecedented $5 billion civil penalty," which is "among the largest civil penalties ever obtained by the federal government." Press Release, Dept. of Justice, Facebook Agrees To Pay $5 Billion and Implement Robust New Protections of User Information in Settlement of Data-Privacy Claims (July 24, 2019).[22]
A matter that may well have been prominent in Facebook's thinking during the period in question in this case was a dispute between the United States and the European Union over international data transfers. In 2020, the Court of Justice of the European Union invalidated the mechanism for transferring data between the European Union and United States because it did not sufficiently protect EU citizens from Federal Government surveillance. Data Protection Comm'r v. Facebook Ireland Limited, Case C–311/18 (2020). The EU-U. S. conflict over data privacy hindered Facebook's international operations, but Facebook could not "resolve [the conflict] on its own." N. Clegg & J. Newstead, Our Response to the Decision on Facebook's EU-US Data Transfers, Meta (May 22, 2023).[23] Rather, the platform relied on the White House to negotiate an agreement that would preserve its ability to maintain its trans-Atlantic operations. K. Mackrael, EU Approves Data-Transfer Deal With U. S., Averting Potential Halt in Flows, Wall Street Journal, July 10, 2023.[24]
It is therefore beyond any serious dispute that the top-ranking White House officials and the Surgeon General possessed the authority to exert enormous coercive pressure.
B
1
Second, I turn to the nature of the officials' communications with Facebook, which possess all the hallmarks of coercion that we identified in Bantam Books and Vullo. Many of the White House's emails were "phrased virtually as orders," Bantam Books, 372 U. S., at 68, and the officials' frequent follow-ups ensured that they were understood as such, id., at 63. To take a few examples, after Flaherty read an article about content causing vaccine hesitancy, he demanded "to know that [Facebook was] trying" to combat the issue and "to know that you're not playing a shell game with us when we ask you what is going on." 30 Record 9365; see supra, at 7. The next month, he requested "assurances, based in data," that Facebook was not "making our country's vaccine hesitancy problem worse." 30 Record 9371; see supra, at 7–8. A week after that, he questioned Facebook about its policies "for removal vs demoting," and when the platform did not promptly respond, he added: "These questions weren't rhetorical." 30 Record 9387; see supra, at 8. When Facebook provided the White House with some data it asked for, Flaherty thanked Facebook for demonstrating "that you at least understand the ask." 30 Record 9368; see supra, at 7.
Various comments during the July pressure campaign likewise reveal that the White House and the Surgeon General's Office expected compliance. At the press conference announcing the Surgeon General's recommendations related to misinformation, Psaki noted that the White House "engage[s] with [Facebook] regularly," and Facebook "certainly understand[s] what our asks are." Supra, at 11. The next day, she expressed confidence that Facebook would "make decisions about additional steps they can take." 78 Record 25175; see supra, at 12. And eventually, the Surgeon General's Office prompted Facebook for "an update of any new/additional steps you are taking with respect to health misinformation" in light of the July 15 advisory. 9 Record 2703; see supra, at 13.
These demands were coupled with "thinly veiled threats" of legal consequences. Bantam Books, 372 U. S., at 68. Three instances stand out. Early on, when the White House first expressed skepticism that Facebook was effectively combatting misinformation, Slavitt informed the platform that the White House was "considering our options on what to do about it." 30 Record 9364; see supra, at 7. In other words, if Facebook did not "solve" its "misinformation" problem, the White House might unsheathe its potent authority. 30 Record 9364.
The threat was made more explicit in May, when Psaki paired a request for platforms to " 'stop amplifying untrustworthy content' " with a reminder that President Biden " 'supports . . . a robust anti-trust program.' " 78 id., at 25170–25171 (emphasis deleted); May 5 Press Briefing; see also supra, at 9. The Government casts this reference to legal consequences as a defense of individual Americans against censorship by the platforms. See Reply Brief 9. But Psaki's full answer undermines that interpretation. Immediately after noting President Biden's support for antitrust enforcement, Psaki added, "So his view is that there's more that needs to be done to ensure that this type of . . . life-threatening information is not going out to the American public." May 5 Press Briefing. The natural interpretation is that the White House might retaliate if the platforms allowed free speech, not if they suppressed it.
Finally, in July, the White House asserted that the platforms "should be held accountable" for publishing misinformation. 61 Record 19400; see supra, at 11–13. The totality of this record—constant haranguing, dozens of demands for compliance, and references to potential consequences—evinces "a scheme of state censorship." Bantam Books, 372 U. S., at 72.
2
The Government tries to spin these interactions as fairly benign. In its telling, Flaherty, Slavitt, and other officials merely "asked the platforms for information" and then "publicly and privately criticized the platforms for what the officials perceived as a . . . failure to live up to the platforms' commitments." Brief for Petitioners 31. References to consequences, the Government claims, were "fleeting and general" and "cannot plausibly be characterized as coercive threats." Id., at 32.
This characterization is not true to what happened. Slavitt and Flaherty did not simply ask Facebook for information. They browbeat the platform for months and made it clear that if it did not do more to combat what they saw as misinformation, it might be called to account for its shortcomings. And as for the supposedly "fleeting" nature of the numerous references to potential consequences, death threats can be very effective even if they are not delivered every day.
The Government also defends the officials' actions on the ground that "[t]he President and his senior aides are entitled to speak out on such matters of pressing public concern." Reply Brief 11. According to the Government, the officials were simply using the President's "bully pulpit" to "inform, persuade, and protect the public." Brief for Petitioners 5, 24.
This argument introduces a new understanding of the term "bully pulpit," which was coined by President Theodore Roosevelt to denote a President's excellent (i.e., "bully"[25]) position (i.e., his "pulpit") to persuade the public.[26] But Flaherty, Slavitt, and other officials who emailed and telephoned Facebook were not speaking to the public from a figurative pulpit. On the contrary, they were engaged in a covert scheme of censorship that came to light only after the plaintiffs demanded their emails in discovery and a congressional Committee obtained them by subpoena. See Committee Report 1–2. If these communications represented the exercise of the bully pulpit, then everything that top federal officials say behind closed doors to any private citizen must also represent the exercise of the President's bully pulpit. That stretches the concept beyond the breaking point.
In any event, the Government is hard-pressed to find any prior example of the use of the bully pulpit to threaten censorship of private speech. The Government cites four instances in which past Presidents commented publicly about the performance of the media. President Reagan lauded the media for "tough reporting" on drugs. Reagan Presidential Library & Museum, Remarks to Media Executives at a White House Briefing on Drug Abuse (Mar. 7, 1988).[27] But he never threatened to do anything to media outlets that were soft on the issue of drugs. President Theodore Roosevelt "lambasted 'muck-raking' journalists" as " 'one of the most potent forces for evil' " and encouraged journalists to speak truth, rather than slander. Brief for Petitioners 24 (quoting The American Presidency Project, Remarks at the Laying of the Cornerstone of the Office Building of the House of Representatives (Apr. 14, 1906)).[28] But his comment did not threaten any action against the muckrakers, see Goodwin 480–487, and it is unclear what he could have done to them. President George W. Bush denounced pornography as "debilitating" for "communities, marriages, families, and children." Presidential Proclamation No. 7725, 3 CFR 129 (2003 Comp.). But he never threatened to take action against pornography that was not "obscene" within the meaning of our precedents.
The Government's last example is a 1915 speech in which President Wilson deplored false reporting that the Japanese were using Turtle Bay, California, as a naval base. The American Presidency Project, Address at the Associated Press Luncheon in New York City (Apr. 20, 1915).[29] Speaking to a gathering of reporters, President Wilson proclaimed: "We ought not to permit that sort of thing to use up the electrical energy of the [telegraph] wires, because its energy is malign, its energy is not of the truth, its energy is mischief." Ibid. Wilson's comment is best understood as metaphorical and hortatory, not as a legal threat. And in any event, it is hard to see how he could have brought about censorship of telegraph companies because the Mann-Elkins Act, enacted in 1910, deemed them to be common carriers, and that meant that they were obligated to transmit all messages regardless of content. See 36 Stat. 544–545; T. Wu, A Brief History of American Telecommunications Regulation, in 5 Oxford International Encyclopedia of Legal History 95 (2007). Thus, none of these examples justifies the conduct at issue here.
C
Finally, Facebook's responses to the officials' persistent inquiries, criticisms, and threats show that the platform perceived the statements as something more than mere recommendations. Time and time again, Facebook responded to an angry White House with a promise to do better in the future. In March, Facebook attempted to assuage the White House by acknowledging "[w]e obviously have work to do to gain your trust." 30 Record 9365. In April, Facebook promised to "more clearly respon[d] to [White House] questions." Id., at 9371. In May, Facebook "committed to addressing the defensive work around misinformation that you've called on us to address." 9 id., at 2698. In July, Facebook reached out to the Surgeon General after "the President's remarks about us" and emphasized its efforts "to better understand the scope of what the White House expects from us on misinformation going forward." Id., at 2690. And of course, as we have seen, Facebook repeatedly changed its policies to better address the White House's concerns. See supra, at 7, 10, 13.
The Government's primary response is that Facebook occasionally declined to take its suggestions. Reply Brief 11; see, e.g., supra, at 10. The implication is that Facebook must have chosen to undertake all of its anti-misinformation efforts entirely of its own accord.
That is bad logic, and in any event, the record shows otherwise. It is true that Facebook voluntarily undertook some anti-misinformation efforts and that it declined to make some requested policy changes. But the interactions recounted above unmistakably show that the White House was insistent that Facebook should do more than it was doing on its own, see, e.g., supra, at 11–12, and Facebook repeatedly yielded—even if it did not always give the White House everything it wanted.
Internal Facebook emails paint a clear picture of subservience. The platform quickly realized that its "handling of [COVID] misinformation" was "importan[t]" to the White House, so it looked for ways "to be viewed as a trusted, transparent partner" and "avoid . . . public spat[s]." Committee Report 181, 184, 188. After the White House blamed Facebook for aiding an insurrection, the platform realized that it was at a "crossroads . . . with the White House." Id., at 294. "Given what is at stake here," one Facebook employee proposed reevaluating the company's "internal methods" to "see what further steps we may/may not be able to take." Id., at 295. This reevaluation led to one of Facebook's policy changes. See supra, at 8–10.
Facebook again took stock of its relationship with the White House after the President's accusation that it was "killing people." Internally, Facebook saw little merit in many of the White House's critiques. One employee labeled the White House's understanding of misinformation "completely unclear" and speculated that "it's convenient for them to blame us" "when the vaccination campaign isn't going as hoped." Committee Report 473. Nonetheless, Facebook figured that its "current course" of "in effect explaining ourselves more fully, but not shifting on where we draw the lines," was "a recipe for protracted and increasing acrimony with the [White House]." Id., at 573. "Given the bigger fish we have to fry with the Administration," such as the EU-U. S. dispute over "data flows," that did not "seem like a great place" for Facebook-White House relations "to be." Ibid. So the platform was motivated to "explore some moves that we can make to show that we are trying to be responsive." Ibid. That brainstorming resulted in the August 2021 rule changes. See supra, at 13, 19–20.
In sum, the officials wielded potent authority. Their communications with Facebook were virtual demands. And Facebook's quavering responses to those demands show that it felt a strong need to yield.
For these reasons, I would hold that Hines is likely to prevail on her claim that the White House coerced Facebook into censoring her speech.
*  *  *
For months, high-ranking Government officials placed unrelenting pressure on Facebook to suppress Americans' free speech. Because the Court unjustifiably refuses to address this serious threat to the First Amendment, I respectfully dissent.
Notes
[1] Centers for Disease Control and Prevention, Deaths by Week and State (last accessed June 21, 2024).
[2] This includes information about the origin of the COVID-19 virus. When the pandemic began, Facebook began demoting posts supporting the theory that the virus leaked from a laboratory. See Interim Staff Report of the House Judiciary Committee, The Censorship-Industrial Complex: How Top Biden White House Officials Coerced Big Tech To Censor Americans, True Information, and Critics of the Biden Administration, p. 398 (May 1, 2024) (Committee Report), https://judiciary.house.gov/sites/evo-subsites/republicans-judiciary.house.gov/files/evo-media-document/Censorship-Industrial-Complex-WH-Report_Appendix.pdf. "In February 2021, in response to . . . tense conversations with the new Administration," Facebook changed its policy to instead remove posts about the lab leak theory wholesale. Ibid.; accord, id., at 463 (Facebook executive explained that the platform removed these posts "[b]ecause we were under pressure from the administration and others to do more and it was part of the 'more' package"). But since then, both the Federal Bureau of Investigation and the Department of Energy have found that the theory is probably correct. See, e.g., A. Kaur & D. Diamond, FBI Director Says Covid-19 "Most Likely" Originated From Lab Incident, Washington Post (Feb. 28, 2023); J. Herb & N. Bertrand, US Energy Department Assesses Covid-19 Likely Resulted From Lab Leak, Furthering US Intel Divide Over Virus Origin, CNN (Feb. 27, 2023). Facebook reversed its policy, and Mark Zuckerberg expressed regret that the platform had ever removed the posts: "This seems like a good reminder that when we compromise our standards due to pressure from an administration in either direction, we'll often regret it later." Committee Report 398.
[3] See, e.g., J. Liedke & L. Wang, News Platform Fact Sheet, Pew Research Center (Nov. 15, 2023); A. Watson, Most Popular Platforms for Daily News Consumption in the United States as of August 2022, by Age Group, Statista (Jan. 4, 2024).
[4] C. Newton, Read the Full Transcript of Mark Zuckerberg's Leaked Internal Facebook Meetings, The Verge (Oct. 1, 2019).
[5] For pending or potential legislation affecting internet platforms, see Congressional Research Service, C. Cho, L. Zhu, & K. Busch, Defining and Regulating Online Platforms (Aug. 25, 2023).
[6] E. Dwoskin, Massive Facebook Study on Users' Doubt in Vaccines Finds a Small Group Appears To Play a Big Role in Pushing the Skepticism, Washington Post (Mar. 14, 2021).
[7] Notes recounting these calls were released by the House Judiciary Committee after the District Court entered the preliminary injunction and were published in a Committee Report. See Committee Report; Fed. Rule Evid. 201.
[8] .
[9] .
[10] .
[11] .
[12] .
[13] .
[14] .
[15] The Court discounts this evidence because Hines did not draw the same links in her briefing. See ante, at 20, n. 7. But we have an "independent obligation" to assess standing, Summers v. Earth Island Institute, 555 U.S. 488, 499 (2009), and a "virtually unflagging obligation" to exercise our jurisdiction if standing exists, Colorado River Water Conservation Dist. v. United States, 424 U.S. 800, 817 (1976). "[A] case like this one, where the record spans over 26,000 pages" and the plaintiffs have provided numerous facts, deserves some scrutiny before we simply brush standing aside. Ante, at 20, n. 7. As it happens, Hines has said enough to establish standing. First, she says that, at the behest of the White House, Facebook announced new measures to combat misinformation about COVID-19 and the vaccines. Second, she says that her Facebook pages fell under those policies. Third, she says that she suffered the penalties imposed by Facebook, such as demotion of her posts and pages. See 4 Record 1315; 78 id., at 25503. She may not explicitly say that the policy changes caused the penalties she experienced. But what theory makes more sense—that a user falling within Facebook's amended policies was censored under those policies or that something else caused her injury?
[16] See Meta, Policies (last accessed June 19, 2024).
[17] .
[18] To obtain a preliminary injunction, Hines was required to establish that she is likely to succeed on the merits, that she would otherwise suffer irreparable harm, and that the equities cut in her favor. Winter v. Natural Resources Defense Council, Inc., 555 U.S. 7, 20 (2008). In a First Amendment case, the equities are bound up in the merits. See Elrod v. Burns, 427 U.S. 347, 373 (1976) (plurality opinion) ("The loss of First Amendment freedoms, for even minimal periods of time, unquestionably constitutes irreparable injury"). So I focus on Hines's likelihood of success.
[19] .
[20] .
[21] .
[22] .
[23] .
[24] .
[25] Webster's International Dictionary of the English Language 191 (1902).
[26] See D. Goodwin, The Bully Pulpit: Theodore Roosevelt, William Howard Taft, and the Golden Age of Journalism, pp. xi–xii (2013) (Goodwin).
[27] .
[28] .
[29] .