How to kill online free speech
This article originally appeared on May 30, 2023.
By Jacob Mchangama and Jeff Kosseff
On Cinco de Mayo, New York City Mayor Eric Adams streamed a celebration on his Facebook account, and the comments section turned into a dumpster fire. Many came to vent — often hatefully — about the city’s migrant crisis. One commenter lamented the “destruction” of New York by “all, these unwashed illegals immigrants,” and another told the mayor, “Don´t send your garbage upstate! Keep it down there!”
Anti-immigrant and xenophobic comments are frequent on Adams’ Facebook account. When Adams streamed a celebration of Eid at his official residence featuring several Muslim speakers, a user replied “Stupid. Here to ruin another state. Another non American here to ruin within.”
However offensive to Mexicans and Muslims, these comments are perfectly legal under the First Amendment. And as a public official, Mayor Adams is prohibited by the First Amendment from suppressing specific viewpoints on a public forum such as his official Facebook account. But had Adams been the mayor of Paris, the irate users could potentially have landed Adams in legal limbo.
According to a recent ruling by the European Court of Human Rights (ECHR), freedom of expression does not immunize public officials from criminal liability if they fail to promptly remove manifestly illegal content (such as “hate speech”) posted on their accounts by followers. The decision reveals the censorial route that Europe’s judiciary is choosing, and it provides a cautionary tale as the U.S. Supreme Court faces many challenges to its robust protections for online speech.
In Sanchez v. France, the mayor of a French village, local councilor, and parliamentary candidate for the right-wing Rassemblement National Party used his Facebook account to mock a rival political party’s failing website. Some users commented on the posting, accusing the rival political party of being “allies of the muslims” and describing the city of Nîmes as being infested by “Drug trafficking run by the muslims” and where “stones get thrown at cars belonging to ‘white people.’”
Even though one of the comments was quickly deleted by the user and the politician warned his Facebook followers to “be careful with the content of [their] comments,” he was convicted of incitement to religious hatred and fined 3,000 euros. In its decision, the ECHR stressed that due to a politician’s “particular status and position in society,” he or she is more likely to “influence voters, or even to incite them, directly or indirectly, to adopt positions and conduct that may prove unlawful” and therefore politicians must be “all the more vigilant” in policing content.
The ECHR’s decision is deeply antithetical to the egalitarian ideals of online free speech, and it’s likely to skew the public sphere in favor of the powerful and platformed to the disadvantage of the voiceless and marginalized. One of social media’s most empowering aspects is that it gives ordinary citizens and voters the ability to tell their “betters” what they think of them without pleasantries.
If politicians and public officials risk criminal liability for comments made by third-party users, they will be strongly incentivized to simply disable comments. This is especially true for the most prominent politicians. Eric Adams, for example, has 362,000 Facebook followers, and his posts often attract hundreds of comments, which no single person — even with staff — could reasonably be expected to proactively review for compliance with existing laws, while also serving in office.
This problem is compounded by the fact that the ECHR has never defined “hate speech,” which may include such vague and inherently subjective categories as “insulting” and “hurtful” comments. In 2018, the ECHR ruled that a Russian journalist had exceeded the limits of free speech and “stir[red] up a deep-seated and irrational hatred” towards the Russian army by comparing its soldiers to “maniacs” and “murderers.” This decision hasn’t aged well given the horrors inflicted by Russian troops in Ukraine.
Even worse, the ECHR’s decision might actually legitimize the selective and viewpoint-based repression of dissent and political criticism. A politician might, for instance, discriminate systematically against his or her most vociferous online critics by defining their comments as “hate speech” or other forms of illegal content. The result is likely to reduce the social media accounts of politicians and public officials to one-sided public relations platforms where they can spread their messages with little opportunity for public criticism by the people they’re supposed to represent. After all, no politician likes to be mocked and criticized in public.
U.S. protections for political speech are vastly more expansive than those of Europe. Although the First Amendment does not protect certain narrow categories of speech, such as true threats and imminent incitement of lawless action, it does not have a general “hate speech” exception. In 1969, the Supreme Court overturned the conviction of an Ohio Ku Klux Klan leader who organized a rally and cross burning, writing that the law under which he was prosecuted “purports to punish mere advocacy.”
Even if hate speech were not constitutionally protected, Section 230 of the Communications Decency Act immunizes platforms from civil claims and state criminal prosecutions arising from the platforms’ failure to take down (or keep up) user content.
In 1997, the Supreme Court struck down a law that criminalized online indecency, recognizing that the internet is “a unique and wholly new medium of worldwide human communication.” In the quarter-century since, the Court has consistently rejected attempts to allow lawmakers, officials, and courts to micromanage online speech. This month, the U.S. Supreme Court declined to decide whether Section 230 applies to algorithmic promotion of user content, leaving in place more than a quarter century of broad interpretations of the statute’s immunity.
But the U.S.鈥檚 hands-off approach to the internet is increasingly threatened. In the next term, the Supreme Court will likely hear challenges to Florida and Texas laws that restrict the ability of platforms to moderate content. Some states have passed laws that require people to verify their age before using social media, making it far more difficult for people to speak anonymously online. And in the past few years, members of Congress have introduced dozens of proposals to scale back or eliminate Section 230.
The U.S. faces serious online harms and real challenges, and the tech giants that dominate centralized social media platforms are often part of the problem. But new challenges should not cause the nation to deviate from more than a century of robust free speech ideals. The ECHR’s ruling in Sanchez offers a window into a dystopian future of online free speech in America. It is a window that we hope the U.S. will shut for good.
Mchangama is the CEO of The Future of Free Speech, a Senior Fellow at FIRE, and the author of "Free Speech: A History from Socrates to Social Media."
Kosseff is a senior legal fellow at The Future of Free Speech. His next book, "Liar in a Crowded Theater: Freedom of Speech in a World of Misinformation," will be published this fall.