

AI is new ā€” the laws that govern it donā€™t have to be

AI regulation failures bolster case for states to use the tools they already have


On Monday, Virginia Governor Glenn Youngkin vetoed HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act. The bill would have set up a broad legal framework for AI, adding restrictions to its development and its expressive outputs that, if enacted, would have put the law on a direct collision course with the First Amendment.

This veto is the latest in a number of setbacks for a multistate movement to regulate AI development that originated with a working group put together last year. In February, that group suffered its own setback ā€” further indicating upheaval in a once-ascendant regulatory push.


At the same time, another movement has gained steam. A number of states are turning to old laws, including those prohibiting fraud, forgery, discrimination, and defamation, which have long managed the same purported harms stemming from AI in the context of older technology.

Gov. Youngkinā€™s veto statement on HB 2094 echoed the notion that existing laws may suffice, stating, ā€œThere are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more.ā€ FIRE has pointed to these capabilities of current law in previous statements, part of a number of AI-related comments weā€™ve made as the technology has come to dominate state legislative agendas, including in states like Virginia.

The simple idea that current laws may be sufficient to deal with AI initially eluded many lawmakers. Now it is quickly becoming common sense in a growing number of states.

While existing laws may be applied prudently or not, the emerging trend away from hasty lawmaking and toward more deliberation bodes well for the intertwined future of AI and free speech.

The regulatory landscape

AI offers the promise of a new era of knowledge generation and expression, and these developments come at a critical juncture as AI continues to advance toward that vision. Companies are updating their models at a rapid pace.

Public and political interest, fueled by fascination, may thus continue to intensify over the next two years ā€” a period during which AI, still emerging from its nascent stage, will remain acutely vulnerable to threats of new regulation. Mercatus Center Research Fellow and leading AI policy analyst Dean W. Ball has argued that 2025 and 2026 could represent the last two years to enact the laws that will be in place before AI systems with ā€œqualitatively transformative capabilitiesā€ are released.

With AIā€™s rapid development and deployment as the backdrop, states have rushed to propose new legal frameworks, hoping to align AIā€™s coming takeoff with state policy objectives. Last year saw the introduction of around 700 bills related to AI, covering everything from ā€œdeepfakesā€ to the use of AI in elections. This year, that number is already higher.

The Texas Responsible Artificial Intelligence Governance Act has been the highest-profile example from this yearā€™s wave of restrictive AI bills. Sponsored by Republican State Rep. Giovanni Capriglione, TRAIGA has been one of several ā€œalgorithmic discriminationā€ bills that would impose liability on developers, deployers, and often distributors of AI systems that may introduce a risk of ā€œalgorithmic discrimination.ā€ Other examples include the recently vetoed HB 2094 in Virginia and similar bills in New York and Nebraska.

While the bills have several problems, most concerning is their inclusion of a ā€œreasonable careā€ negligence standard that would hold AI developers and users liable if there is a greater than 50% chance they could have ā€œreasonablyā€ prevented discrimination.

Such liability provisions incentivize AI developers to handicap their models to avoid any possibility of offering recommendations that some might deem discriminatory or simply offensive ā€” even if doing so curtails the modelsā€™ usefulness or capabilities. The ā€œchillā€ of these kinds of provisions threatens a broad array of important applications. 

In Connecticut, for instance, childrenā€™s hospitals have warned that the vagueness and breadth of such regulations could limit health care providersā€™ ability to use AI to improve cancer screenings. These bills also compel regular risk reports on the modelsā€™ expressive outputs, similar to requirements that were held unconstitutional under the First Amendment in other contexts last year.

So far, only Colorado has enacted such a law. Its implementation, spearheaded by the statutorily authorized Colorado Artificial Intelligence Impact Task Force, wonā€™t assuage any skeptics. Even Gov. Jared Polis, who conceived the task force and signed the bill, has noted that it deviates from standard anti-discrimination laws ā€œby regulating the results of AI system use, regardless of intent,ā€ and has encouraged the legislature to ā€œreexamine the conceptā€ as the law is finalized.

With a mandate to resolve this and other points of tension, the task force has come up almost empty-handed. In its report, it reached consensus on only ā€œminor . . . changes,ā€ while remaining deadlocked on substantive areas such as the lawā€™s equivalent to TRAIGAā€™s reasonable-care language.

The sponsors of TRAIGA reached a similar impasse as the bill came under intense scrutiny. Rep. Capriglione responded earlier this month by dropping TRAIGA in favor of a new bill, HB 149. Among HB 149ā€™s provisions, many of which run headlong into protected expression, is a proposed statute providing that ā€œan artificial intelligence system shall not be developed or deployed in a manner that intentionally results in political viewpoint discriminationā€ or that ā€œintentionally infringes upon a personā€™s freedom of association or ability to freely express the personā€™s beliefs or opinions.ā€

But this new language overlooks a landmark Supreme Court ruling just last year finding that laws in Texas and Florida imposing similar prohibitions on political discrimination by social media platforms raised significant First Amendment concerns.

A more modest alternative

An approach different from that taken in Colorado and Texas appears to be taking root in Connecticut. Last year, Gov. Ned Lamont opposed Connecticut Senate Bill 2, a bill similar to the law Colorado passed, noting: ā€œYou got to know what youā€™re regulating and be very strict about it. If itā€™s, ā€˜I donā€™t like algorithms that create biased responses,ā€™ that can go any of a million different ways.ā€

At a press conference at the time of the billā€™s consideration, his office suggested applying existing laws to AI use in relevant areas like housing, employment, and banking.


Scholars Jeffrey Sonnenfeld and Stephen Henriques of Yaleā€™s School of Management have made a similar case, noting Connecticutā€™s Unfair Trade Practices Act would seem to cover major AI developers and small ā€œdeployersā€ alike. They argue that a preferable route to new legislation would be for the state attorney general to clarify how existing laws can remedy the harms to consumers that sparked Senate Bill 2 in the first place.

Connecticut isnā€™t alone. In California, which often sets the standard for tech law in the United States, two bills ā€” one focusing on liability for algorithmic discrimination in the same manner as the Colorado and Texas bills, and SB 1047, focusing on liability for ā€œhazardous capabilitiesā€ ā€” both failed. Gov. Gavin Newsom, echoing Lamont, wrote in his veto statement for SB 1047, ā€œAdaptability is critical as we race to regulate a technology still in its infancy.ā€

Newsomā€™s attorney general followed up by issuing guidance explaining how existing California laws ā€” such as the Unruh Civil Rights Act, California Fair Employment and Housing Act, and California Consumer Credit Reporting Agencies Act ā€” already provide consumer protections for issues that many worry AI will exacerbate, such as consumer deception and unlawful discrimination.

Attorneys general in other states have offered similar guidance, with Massachusetts Attorney General Andrea Joy Campbell noting, ā€œExisting state laws and regulations apply to this emerging technology to the same extent as they apply to any other product or application.ā€ And in Texas, where HB 149 still sits in the legislature, Attorney General Ken Paxton is currently pursuing enforcement in cases about the misuse of AI products in violation of existing consumer protection law.

Addressing problems

The application of existing laws, to be sure, must comport with the First Amendmentā€™s broad protections. Not all conceivable applications will be constitutional. But the core principle remains: states that are hitting the brakes and reflecting on the tools already available give AI developers and users the benefit of operating within established, predictable legal frameworks. 

And if enforcement of existing laws runs afoul of the First Amendment, there is an ample body of legal precedent to provide guidance. Some might argue that AI poses a fundamental departure from prior technology covered by existing laws, but it departs in neither essence nor purpose. Properly understood, AI is a communicative tool used to convey ideas, like the typewriter and the computer before it.

If there are perceived gaps in existing laws as AI and its uses evolve, legislatures may try targeted fixes. Last year, for example, one state enacted a provision clarifying that generative AI cannot serve as a defense to violations of state tort law ā€” a party cannot claim immunity from liability simply because an AI system ā€œmade the violative statementā€ or ā€œundertook the violative act.ā€

Rather than introducing entirely new layers of liability, this provision clarifies accountability under existing statutes. 

Other ideas floated include ā€œregulatory sandboxes,ā€ a voluntary way for private firms to test applications of AI technology in collaboration with the state in exchange for certain regulatory mitigation. The aim is to offer a learning environment for policymakers to study how law and AI interact over time, with emerging issues addressed by a regulatory scalpel rather than a hatchet.

This reflects an important point. The trajectory of AI is largely unknowable, as is how rules imposed now will affect this early-stage technology down the line. Well-meaning laws to prevent discrimination this year could preclude broad swathes of significant expressive activity in coming years.

FIRE does not endorse any particular course of action, but this is perhaps the most compelling reason lawmakers should consider the more restrained approach outlined above. Attempting to solve all of AIā€™s theoretical problems before their contours become clear is not only impractical but risks stifling innovation and expression in ways that may be difficult to reverse. History also teaches that many initial worries never materialize.

As President Calvin Coolidge observed, ā€œIf you see 10 troubles coming down the road, you can be sure that nine will run into the ditch before they reach you and you have to battle with only one of them.ā€ We can address those that do materialize in a targeted manner as the full scope of the problems become clear.

The wisest course of action may be patience. Let existing laws do their job and avoid premature restrictions. Like weary parents, lawmakers should take a breath ā€” and maybe a vacation ā€” while giving AI time to grow up a little.
