FIRE tells Ninth Circuit that social media companies cannot be forced to weigh in on culture war.

Friend-of-the-court brief says California cannot force companies to report their views on “racism,” “disinformation,” and other controversial subjects.

What is a state legislator to do when he wants to change how social media companies decide what speech to permit (or limit) on their platforms, but knows the First Amendment would bar the government from dictating the “right” answer?

In California, a new law (AB 587) resolves this conundrum by forcing social media platforms to post their moderation policies and to submit detailed reports to the attorney general twice a year explaining how they define a swath of hot-button political topics, including “hate speech or racism,” “extremism or radicalization,” “disinformation or misinformation,” and “foreign political interference.” The reports must provide detailed information about how the companies applied their policies to the millions of posts on their platforms and describe what actions they took.

The law doesn’t tell companies precisely how to define these subject areas, but it makes clear Big Brother will be watching by empowering the attorney general and city attorneys to seek stiff fines and injunctive relief if platforms “materially omit or misrepresent required information in a report.” Given the inherently subjective content categories and the degree of political polarization surrounding the issue of social media moderation, you don’t have to be Nostradamus to foresee enforcement actions in the future. 

But then, that was the point. Knowing it could not regulate moderation decisions directly, the California legislature chose to “nudge” social media platforms to adopt content policies more to its liking — or else. According to the state assembly’s judiciary committee report, AB 587’s sponsor saw the law as an “important first step” in protecting against divisive content on social media. The report notes that the bill will serve the purpose of “pressur[ing]” social media companies to “eliminate hate speech and disinformation.”

Late last year, X, formerly Twitter, asked a federal court to enjoin the law. The court refused, and X appealed. Now, FIRE is asking the U.S. Court of Appeals for the Ninth Circuit to reverse the lower court’s decision and stop AB 587’s enforcement.

FIRE’s friend-of-the-court brief argues that the First Amendment prohibits the government from accomplishing indirectly what it cannot do directly. Sixty years ago, Rhode Island tried a scheme using similar “nudge” tactics. Knowing the First Amendment limited its ability to ban “racy” books, the state established a “Commission to Encourage Morality in Youth” to “advise” booksellers on titles it would be best to avoid. But the Supreme Court was not fooled by this effort to evade the law, and in Bantam Books, Inc. v. Sullivan, held this type of “informal regulation” violates the First Amendment. The same principle limits California’s attempt to evade the Constitution.

But AB 587 is even worse. In addition to the law’s unconstitutional purpose of putting the government’s thumb on the moderation scale, it compounds the problem by compelling the social media companies to speak on the controversial topics prescribed by the law.

It is basic law under the First Amendment that Americans have a right to refuse to speak and cannot be compelled to opine on controversial topics. Social media companies are in the business of editing and curation: Like newspaper or television editors, they choose what their users see. The government cannot compel them to report their editorial policies and practices to the attorney general.

For its part, the federal trial court ruled that California’s law was no different from requiring advertisements to disclose truthful information about the product — the so-called “commercial speech doctrine.” But there are two problems with that analogy. 

First, users of social media aren’t purchasing a product or making a commercial transaction. Rather, they are readers or viewers, just like someone reading a magazine or watching the news. And moderation policies are not advertisements; they are statements of editorial policy. 

Second, the Supreme Court has only permitted these sorts of requirements if the speech is factual and non-controversial. But the disclosures compelled by AB 587 are the opposite of this. The subjective judgments that go into moderation policies are not “facts,” and the content categories prescribed by the law touch on the most polarizing issues of the day. The legislature selected those topics because they are controversial.

Forcing a platform or publisher to disclose its editorial views, especially on heated political subjects, violates the First Amendment. That’s doubly so when the intent behind the requirement is to invite public “pressure” on the platform for those views. The Ninth Circuit should enjoin AB 587.
