
Wave of state-level AI bills raises First Amendment problems

There’s no ‘artificial intelligence’ exception to the First Amendment

AI is enhancing our ability to communicate, much like the printing press and the internet did before it. And lawmakers nationwide are rushing to regulate its use, introducing hundreds of bills in states across the country. Unfortunately, many AI bills we’ve reviewed would violate the First Amendment, just as FIRE warned last month. It’s worth repeating that First Amendment doctrine does not reset itself after each technological advance. It protects speech created or modified with artificial intelligence software just as it protects speech created without it.

On the flip side, AI’s involvement doesn’t change the illegality of acts already forbidden by existing law. There are some narrow, well-defined categories of speech not protected by the First Amendment — such as fraud, defamation, and speech integral to criminal conduct — that states can and do already restrict. In that sense, the use of AI is already regulated, and policymakers should first look to enforcement of those existing laws to address their concerns with AI. Further restrictions on speech are both unnecessary and likely to face serious First Amendment problems, which I detail below.

Constitutional background: Watermarking and other compelled disclosure of AI use

We’re seeing a lot of AI legislation that would require a speaker to disclose their use of AI to generate or modify text, images, audio, or video. Generally, this includes requiring watermarks on images created with AI, mandating disclaimers in audio and video generated with AI, and forcing developers to add metadata to images created with their software. 
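To make the metadata idea concrete, here is a rough sketch of what a machine-readable disclosure could look like in practice. It is only an illustration under my own assumptions, not language from any bill: it uses the Python Pillow library and a hypothetical "ai-disclosure" text field written into a PNG file.

# A minimal sketch, not any bill's actual requirement: attach a plain-text
# AI-use disclosure to a PNG image's metadata, then read it back.
# Assumes the Pillow library ("pip install Pillow"); the "ai-disclosure"
# field name is hypothetical.
from PIL import Image, PngImagePlugin


def save_with_disclosure(img, path, disclosure):
    """Save `img` to `path` with a PNG text chunk carrying the disclosure."""
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai-disclosure", disclosure)
    img.save(path, pnginfo=meta)


def read_disclosure(path):
    """Return the disclosure text chunk if present, otherwise None."""
    with Image.open(path) as img:
        return img.text.get("ai-disclosure")  # .text exposes PNG text chunks


if __name__ == "__main__":
    demo = Image.new("RGB", (64, 64), "gray")  # stand-in for AI-generated output
    save_with_disclosure(demo, "demo.png", "This image was generated with AI.")
    print(read_disclosure("demo.png"))

A plain text chunk like this is also trivially stripped or lost when an image is re-encoded, which is part of why mandated "latent" disclosures, and the detection tools meant to find them, are a much harder technical problem than these bills assume.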

Many of these bills violate the First Amendment by compelling speech. Government-compelled speech — whether an opinion, a fact, or even just metadata — is generally anathema to the First Amendment. That’s for good reason: Compelled speech undermines everyone’s freedom of conscience and fundamental autonomy to control their own expression.

To illustrate: Last year, in X Corp. v. Bonta, the U.S. Court of Appeals for the Ninth Circuit reviewed a California law that required social media companies to post and report information about their content moderation practices. FIRE filed an amicus curiae — “friend of the court” — brief in that case, arguing the posting and reporting requirements unconstitutionally compel social media companies to speak about topics on which they’d like to remain silent. The Ninth Circuit agreed, holding the law was likely unconstitutional. While acknowledging the state had an interest in providing transparency, the court reaffirmed that “even ‘undeniably admirable goals’ ‘must yield’ when they ‘collide with the . . . Constitution.’”

There are (limited) exceptions to the principle that the state cannot compel speech. In some narrow circumstances, the government may compel the disclosure of information. For example, for speech that proposes a commercial transaction, the government may require the disclosure of purely factual and uncontroversial information to prevent consumer deception. (Under this principle, the D.C. Circuit allowed federal regulators to require disclosure of country-of-origin information about meat products.)

But none of those recognized exceptions would permit the government to mandate blanket disclosure of AI-generated or modified speech. States seeking to require such disclosures will face heightened scrutiny beyond what is required for commercial speech.

AI disclosure and watermarking bills

This year, we’re also seeing lawmakers introduce many bills that require certain disclosures whenever speakers use AI to create or modify content, regardless of the nature of the content. These bills include Washington’s HB 1170 and proposals in Massachusetts, New York, and Texas.

At a minimum, the First Amendment requires these kinds of regulations to be tailored to address a particular state interest. But these bills are not aimed at any specific problem at all, much less tailored to one; instead, they require nearly all AI-generated media to bear a digital disclaimer.

For example, FIRE recently testified against Washington’s HB 1170, which requires covered AI providers to include in any AI-generated image, video, or audio a latent disclosure detectable by an AI detection tool that the bill also requires developers to offer.

Of course, developers and users can choose to disclose their use of AI voluntarily. But bills like HB 1170 force disclosure in constitutionally suspect ways because they aren’t aimed at furthering any particular governmental interest and they burden a wide range of speech.

In fact, if the government’s goal is addressing fraud or other unlawful deception, there are ways these disclosures could make things worse. First, the disclosure requirement will taint the speech of non-malicious AI users by fostering the false impression that their speech is deceptive, even if it isn’t. Second, bad actors can and will find ways around the disclosure mandate — including using AI tools in other states or countries, or just creating photorealistic content through other means. False content produced by bad actors will then carry a much greater imprimatur of legitimacy than it would in a world without mandated disclosures, because people will assume that content lacking the required disclosure was not created with AI.

Constitutional background: Categorical ‘deepfake’ regulations

A handful of bills introduced this year seek to categorically ban “deepfakes.” In other words, these bills would make it unlawful to create or share AI-generated content depicting someone saying or doing something that the person did not in reality say or do.

Categorical exceptions to the First Amendment exist, but these exceptions are few, narrow, and carefully defined. Take, for example, false or misleading speech. There is no general First Amendment exception for misinformation, disinformation, or other false speech. Such an exception would invite abuse by officials seeking to suppress dissent and criticism.

There are, however, narrow exceptions for deceptive speech that constitutes fraud, defamation, or appropriation. In the case of fraud, the government can impose liability on speakers who knowingly make factual misrepresentations to obtain money or some other material benefit. For defamation, the government can impose liability for false, derogatory speech made with the requisite intent to harm another’s reputation. For appropriation, the government can impose liability for using another person’s name or likeness without permission, for commercial purposes.


Like an email message or social media post, AI-generated content can fall under one of these categories of unprotected speech, but the Supreme Court has never recognized a categorical exception for creating photorealistic images or video of another person. Context always matters.

Although some people will use AI tools to produce unlawful or unprotected speech, the Court has never permitted the government to institute a broad technological ban that would stifle protected speech on the grounds that the technology has a potential for misuse. Instead, the government must tailor its regulation to the problem it’s trying to solve — and even then, the regulation will still fail judicial scrutiny if it burdens too much protected speech.

AI-generated content has a wide array of potential applications, spanning from political commentary and parody to art, entertainment, education, and outreach. Users have deployed AI technology to create political commentary, like the viral deepfake of Mark Zuckerberg discussing his control over user data — and for parody, as seen in the surreal AI-generated pizza commercial and the TikTok account dedicated to satirizing Tom Cruise. In the realm of art and entertainment, the Dalí Museum used deepfake technology to bring the artist back to life, and the TV series “The Mandalorian” recreated a young Luke Skywalker. Deepfakes have even been used for education and outreach, with a deepfake of David Beckham raising awareness about malaria.

These examples should not be taken to suggest that AI is always a positive force. It’s not. But not only will categorical bans on deepfakes restrict protected expression such as the examples above, they’ll also face — and are highly unlikely to survive — the strictest judicial scrutiny under the First Amendment.

Categorical deepfake prohibition bills

Bills with categorical deepfake prohibitions include North Dakota’s HB 1320 and a similar bill in Kentucky.

North Dakota’s HB 1320, a failed bill that FIRE opposed, is a clear example of what would have been an unconstitutional categorical ban on deepfakes. The bill would have made it a misdemeanor to “intentionally produce, possess, distribute, promote, advertise, sell, exhibit, broadcast, or transmit” a deepfake without the consent of the person depicted. It defined a deepfake as any digitally altered or AI-created “video or audio recording, motion picture film, electronic image, or photograph” that deceptively depicts something that did not occur in reality and includes the digitally altered or AI-created voice or image of a person.

The bill was overbroad and would have criminalized vast amounts of protected speech. It was akin to making it illegal to paint a realistic image of a busy public park without first obtaining the consent of everyone depicted. Why should it be illegal for that same painter to bring their realistic painting to life with AI?


HB 1320 would have prohibited the creation and distribution of deepfakes regardless of whether they cause actual harm. But, as noted, there isn’t a categorical exception to the First Amendment for false speech, and deceptive speech that causes specific, targeted harm to individuals is already punishable under narrowly defined First Amendment exceptions. If, for example, someone creates and distributes a deepfake that shows a person doing something they never actually did, effectively making a false statement of fact, the person depicted could sue for defamation if they suffered reputational harm. But that doesn’t require a new law.

Even if HB 1320 were limited to defamatory speech, enacting new, technology-specific laws where existing, generally applicable laws already suffice risks sowing confusion that will ultimately chill protected speech. Such technology-specific laws are also easily rendered obsolete and ineffective by rapidly advancing technology.

HB 1320’s overreach clashed with clear First Amendment protections. Fortunately, the bill failed to pass.

Constitutional background: Election-related AI regulations

Another large bucket of bills that we’re seeing would criminalize or create civil liability for the use of AI-generated content in election-related communications, without regard to whether the content is actually defamatory.

Like categorical bans on AI, regulations of political speech have serious difficulty passing constitutional muster. Political speech receives strong First Amendment protection and the Supreme Court has recognized it as essential for our system of government: “Discussion of public issues and debate on the qualifications of candidates are integral to the operation of the system of government established by our Constitution.”

As noted above, the First Amendment protects a great deal of false speech, so these regulations will be subject to strict scrutiny when challenged in court. This means the government must prove the law is necessary to serve a compelling state interest and is narrowly tailored to achieving that interest. Narrow tailoring in strict scrutiny requires that the state meet its interest using the least speech-restrictive means.

This high bar protects the American people from poorly tailored regulations of political speech that chill vital forms of political discourse, including satire and parody. Vigorously protecting free expression ensures robust democratic debate, which can counter deceptive speech more effectively than any legislation.

Under strict scrutiny, prohibitions or restrictions on AI-modified or generated media relating to elections will face an uphill battle. No elections in the United States have been decided, or even materially impacted, by any AI-generated media, so the threat — and the government’s interest in addressing it — remains hypothetical. Even if that connection were established, many of the current bills are not narrowly tailored; they would burden all kinds of AI-generated political speech that poses no threat to elections. Meanwhile, laws against defamation already provide an alternative means for candidates to address deliberate lies that damage their reputations.

Already, a court has blocked one of these laws on First Amendment grounds. In a First Amendment challenge from a satirist who uses AI to generate parodies of political figures, a federal court recently applied strict scrutiny and blocked a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content.

Election-related AI bills

Unfortunately, many states have jumped on the bandwagon to regulate AI-generated media relating to elections. In December, I wrote about two bills in Texas — HB 556 and HB 228 — that would criminalize AI-generated content related to elections. Other bills have since been introduced in Alaska, Arkansas, Illinois, Maryland, Massachusetts, Mississippi, Missouri, Montana, Nebraska, New York, South Carolina, Vermont, and Virginia.

For example, a Vermont bill bans a person from seeking to “publish, communicate, or otherwise distribute a synthetic media message that the person knows or should have known is a deceptive and fraudulent synthetic media of a candidate on the ballot.” According to the bill, synthetic media means content that creates “a realistic but false representation” of a candidate created or manipulated with “the use of digital technology, including artificial intelligence.”

Under this bill (and many others like it), if someone merely reposted a viral AI-generated meme of a presidential candidate that portrayed that candidate “saying or doing something that did not occur,” the candidate could sue the reposter to block them from sharing it further, and the reposter could face a substantial fine should the state pursue the case further. This would greatly burden private citizens’ political speech, and would burden candidates’ speech by giving political opponents a weapon to wield against each other during campaign season. 

Because no reliable technology exists to detect whether media has been produced by AI, candidates can easily weaponize these laws to challenge all campaign-related media that they simply do not like. To cast a serious chill over electoral discourse, a motivated candidate need only file a bevy of lawsuits or complaints that raise the cost of speaking out to an unaffordable level.

Instead of voter outreach, political campaigning would turn into lawfare.

Concluding Thoughts

That’s a quick round-up of the AI-related legislation I’m seeing at the moment and how it impacts speech. We’ll keep you posted!
