The FTC is overstepping its authority — and threatening free speech online

The Federal Trade Commission’s attempt to sidestep the Constitution by labeling editorial decisions it doesn’t like as “unfair or deceptive trade practices” won’t work. 
FTC logo seen on a display in a dark room. (mundissima / Shutterstock.com)

Federal Trade Commission Chair Andrew Ferguson put out a call yesterday asking for “public submissions from anyone who has been a victim of tech censorship (banning, demonetization, shadow banning, etc.), from employees of tech platforms.” His post was accompanied by an FTC press release and a formal request for public comment, both making the same requests.

This outreach is being conducted, according to Ferguson, “to better understand how technology platforms deny or degrade users’ access to services based on the content of their speech or affiliations, and how this conduct may have violated the law.”

In reality, the chair is angling to label editorial decisions he doesn’t like “unfair or deceptive trade practices.” But consumer protection law is no talisman against the First Amendment, and the FTC has no power here.

The simplified formulation of Ferguson’s argument is this: If social media platforms are not adhering to their content policies, or are not “consistent” (whatever that means) in enforcing them, they are engaging in “false advertising” that harms consumers.

Now, it is true that the FTC can generally act against deceptive marketing. That’s because pure commercial speech — that is, speech which does no more than propose a commercial transaction — possesses “a lesser protection” under the First Amendment than other forms of protected speech. And commercial speech that is false or misleading receives no First Amendment protection at all. But when speech does more than propose a commercial transaction — as a platform’s editorial judgments about what content to host do — government power over that speech gives way to the First Amendment.

Content policies and moderation decisions made by private social media platforms are inherently subjective editorial judgments. In the vast majority of cases, they convey opinions about social policy, as well as about what expression the platforms find desirable in their communities. Attempts to control or punish those editorial judgments violate the First Amendment.

The Supreme Court recently made clear that these subjective decisions enjoy broad First Amendment protection. In Moody v. NetChoice, the Court rebuffed direct attempts by Texas and Florida to regulate content moderation decisions to remediate allegedly “biased” enforcement of platform rules:

The interest Texas asserts is in changing the balance of speech on the major platforms’ feeds, so that messages now excluded will be included. To describe that interest, the State borrows language from this Court’s First Amendment cases, maintaining that it is preventing “viewpoint discrimination.” Brief for Texas 19; see supra, at 26–27. But the Court uses that language to say what governments cannot do: They cannot prohibit private actors from expressing certain views. When Texas uses that language, it is to say what private actors cannot do: They cannot decide for themselves what views to convey. The innocent-sounding phrase does not redeem the prohibited goal. The reason Texas is regulating the content-moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there. Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose.

This is no less true when the government attempts to regulate through the backdoor of “consumer protection.”

To illustrate the problem: Imagine a claim that platforms are engaging in unfair trade practices by removing some “hate speech,” but not speech that aligns with a certain view. What constitutes “hate speech” is entirely subjective. For the FTC to assess whether a “hate speech” policy has been applied “consistently” (or at all), it would have to supplant the platform’s subjective judgment with the government’s own “official” definition of “hate speech” — which, as you can probably guess, will likely not match anyone else’s.

And this illustration is not the product of wild imagination. In fact, FIRE is litigating this very question before the U.S. Court of Appeals for the Second Circuit right now. In Volokh v. James, FIRE is challenging a New York law requiring social media platforms to develop and publish policies for responding to “hateful conduct” and to provide a mechanism for users to complain about the same. Our motion for a preliminary injunction, which the district court granted, argued that the First Amendment prohibits the government from substituting its judgments about what expression should be permitted for a platform’s own:

Labeling speech as “hateful” requires an inherently subjective judgment, as does determining whether speech serves to “vilify, humiliate, or incite violence.” The Online Hate Speech Act’s definition is inescapably subjective—one site’s reasoned criticism is another’s “vilification”; one site’s parody is another’s “humiliation”—and New York cannot compel social media networks to adopt it. . . . The definition of “hateful,” and the understanding of what speech is “vilifying,” “humiliating,” or “incites violence,” will vary from person to person . . .

The First Amendment empowers citizens to make these value judgments themselves, because speech that some might consider “hateful” appears in a wide variety of comedy, art, journalism, historical documentation, and commentary on matters of public concern. 

The FTC’s actions under Ferguson are particularly egregious given that it has been made perfectly — and repeatedly — clear in the past that these kinds of editorial decisions are outside the agency’s authority.

In 2004, the political advocacy groups MoveOn and Common Cause petitioned the FTC to act against Fox News’s “Fair and Balanced” slogan, arguing that it was false and misleading. Then-FTC Chair Tim Muris appropriately replied, “There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.”

In 2020, the nonprofit advocacy group PragerU claimed that YouTube violated its free speech rights by restricting access to some of its videos and limiting its advertising. It argued that, as a result, the platform’s statements that “everyone deserves to have a voice” and “people should be able to speak freely” were false or misleading. However, the U.S. Court of Appeals for the Ninth Circuit rejected this claim, holding that the platform’s statements are “impervious to being quantifiable” and, as a result, were non-actionable.

The bottom line is this: Calling something censorship doesn’t make it so, and framing content moderation as “unfair or deceptive trade practices” does not magically sidestep the First Amendment. And as always, beware — authority claimed while one is in power will still exist when one is not.
