Texas legislators file unconstitutional bills to prohibit use of AI in election campaigns
Earlier this year, Rep. Jennifer Wexton addressed the House of Representatives using a voice entirely generated by AI, because a progressive disease has weakened her natural speaking voice. However, if she used her AI-generated voice in a campaign ad in Texas, newly introduced legislation would likely make that a crime.
The Texas bills would broadly criminalize AI-generated content related to elections. But one major obstacle to this effort is that a Texas state appeals court already struck down a similar law on First Amendment grounds, writing that the law was an overly broad content-based regulation of core political speech. That law, known as the "True Source" law, made it a criminal offense to pretend that campaign communications are coming from someone else with the intent to cause injury to a candidate or influence the election's outcome.
The court did the right thing, but that hasn't stopped Texas legislators. If enacted, these bills will lead to more censorship in the Lone Star State.
Ex Parte Stafford
The doomed True Source law was used to prosecute John Stafford, a self-described Democratic party activist who was indicted for sending text messages that looked like they came from conservative or Republican campaigns with the intent of exposing the political leanings of candidates running in local nonpartisan races.
The law made it a crime to "knowingly represent[] in a campaign communication that the communication emanates from a source other than its true source . . . with intent to injure a candidate or influence the result of an election."
The Texas Court of Criminal Appeals agreed with a lower court ruling that the True Source statute was a content-based restriction on political speech, on the grounds that the law burdened "core political speech." Under First Amendment doctrine, this meant the law had to satisfy strict scrutiny, which requires the state to show that the law is necessary to serve a compelling government interest and is narrowly tailored to serve that interest.
While the state was able to show a compelling interest in protecting the election process, the court found that the law's burden on freedom of speech was neither "sufficiently narrow" nor did it impose "as few restrictions as possible to meet the state's goals." The law was overly broad, encompassing communications that were neither false nor misleading.
The court emphasized the law's broad sweep, observing that it's hard to imagine political speech that would not "influence the result of an election." The statute covered even neutral and accurate statements. It also reached parody, which is squarely protected by the First Amendment.
Moreover, the law covered statements merely "related" to a campaign, casting an even wider net over "innocuous statements." The vague language left too much to interpretation, leaving people at the mercy of local prosecutors.
Texas legislation related to use of AI in election campaigns
A major takeaway from Stafford is the court's recognition of the True Source law as a content-based regulation. Political speech in particular receives strong First Amendment protection because it is essential to our system of government. The Supreme Court stated this explicitly in Buckley v. Valeo (1976): "Discussion of public issues and debate on the qualifications of candidates are integral to the operation of the system of government established by our Constitution."
Stafford demonstrates how broad regulations of speech, particularly core political speech, have difficulty passing constitutional muster. Despite this, Texas legislators have begun filing bills for the 2025 legislative session that seek to broadly regulate election-related, AI-generated content. These bills do little, if anything at all, to avoid the major constitutional issues the court found with the True Source statute.
Below are two examples of bills that have been filed so far.
Texas House Bill 556
HB 556 would make it a criminal offense to create "artificially generated media" and then to publish or distribute it within 30 days of an election with the "intent to injure a candidate or influence the result of an election."
The bill defines the term "artificially generated media" broadly: it includes pictures, audio recordings (such as a person's voice), videos, or text "created or modified using generative artificial intelligence technology or other computer software with the intent to deceive."
Given the decision in Stafford, it is difficult to imagine this bill surviving a court challenge. The bill similarly targets core political speech by broadly going after media that seeks to influence an election, if it was made or modified with software "with intent to deceive."
This could include all kinds of speech that is "deceptive" but does no harm to the integrity of elections. For example, candidates use software to produce and send thousands of communications at once purporting to be from the candidate and personally addressed to individual supporters. Is it deceptive to make it look like candidates are the ones actually sending these messages when software helps them rather than campaign staff?
Or consider an AI reproduction of a candidate's voice, as illustrated by Rep. Wexton's House floor speech. The AI-generated voice of Rep. Wexton, who was diagnosed in 2023 with a rare brain disease known as progressive supranuclear palsy, is an example of AI providing greater opportunities for those who have a speech-related impairment or disability to connect with other people. Although arguably deceptive under HB 556, since it was not her actual voice, this use of AI helped her communicate. If the same voice were used in a campaign ad instead of a floor speech, it could run afoul of HB 556.
AI can also improve the production quality of campaign ads. It can make it easier to create high-quality ads for candidates who cannot afford professional audio and video production, lowering the economic threshold to seek public office. Whether this means upscaling the pixel quality of images or suggesting layouts to make an ad look more professional, AI may actually further democratize our elections by giving everyday Americans more effective tools to run competitive races.
While these examples paint a rosy picture of AI as a positive force for shaping political discourse, they should not be taken to suggest that AI is incapable of being used for nefarious purposes. On the contrary, we've said before that exceptions to the First Amendment apply to AI-generated speech the same as they do to other speech. These exceptions include incitement to imminent lawless action, true threats, fraud, defamation, and speech integral to criminal conduct.
But speech like fraud, defamation, and true threats is already prohibited under existing Texas law. HB 556 instead criminalizes a broad swath of political speech beyond what is necessary to protect elections, and so it would very likely fail the narrow tailoring that the First Amendment requires.
Even if the bill could survive judicial review, it would chill election-related speech in troubling ways. As described above, it would ban benign uses of AI or software in elections.
And while the intent might be to prevent other people from altering a video of a candidate for some unlawful purpose, the criminal prohibition might very well apply to candidates who alter their own image or audio. If that's the case, a law like this could easily be weaponized by political opponents to chill the opposing side's speech. It could also empower prosecutors to investigate candidates they oppose on any suspicion that they used any software in their campaign ads or other messages that might count as "deceptive."
Texas Senate Bill 228
Another proposed piece of legislation, Senate Bill 228, would prohibit a person from knowingly distributing political advertising that has been changed by AI technology. SB 228 would make it a criminal offense to publish or share "political advertising that includes an image, audio recording, or video recording" of a politician's "appearance, speech, or conduct that has been intentionally manipulated using generative artificial intelligence technology" to cause harm. This specifically criminalizes "a realistic but false or inaccurate" image or recording that results in depicting something that didn't happen in a way that would give a reasonable person "a fundamentally different understanding or impression" than the original, unaltered version.
Like HB 556, this bill is not narrowly tailored, meaning all of the examples of legitimate and helpful uses of AI above could very well be prohibited under this bill as well.
A similarly worded California law has actually been blocked by a federal court on First Amendment grounds. In that case, the judge assessed the California law as a content-based speech restriction and concluded that it likely failed strict scrutiny review because "counter speech is a less restrictive alternative to prohibiting videos." In other words, the cure for false or deceptive speech is more speech.
The court declared that fears of AI-generated content, while justified, do "not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment." The court stated further that "YouTube videos, Facebook posts, and X tweets are the newspaper advertisements and political cartoons of today, and the First Amendment protects an individual's right to speak regardless of the new medium these critiques may take."
Texas shouldn't take a page from a California law that's already been blocked. As Stafford affirmed, overly broad regulations that stifle protected speech simply won't pass constitutional muster.
Looking ahead
While FIRE is just beginning to explore AI legislation that might be introduced in 2025, fighting against content-based regulations is not new to us.
We will keep our readers updated on any further developments.