Greg Lukianoff testimony before the House Judiciary Committee, February 6, 2024

WRITTEN TESTIMONY of GREG LUKIANOFF 

President & CEO, 

FIRE 

Before the 

UNITED STATES HOUSE OF REPRESENTATIVES 

COMMITTEE ON THE JUDICIARY

SELECT SUBCOMMITTEE ON THE WEAPONIZATION OF THE FEDERAL GOVERNMENT

February 6, 2024 hearing on 

The Weaponization of the Federal Government 

Chairman Jordan, Ranking Member Plaskett, and distinguished members of the select subcommittee: Good morning. My name is Greg Lukianoff and I am the CEO of the Foundation for Individual Rights and Expression, or "FIRE," where I've worked for 23 years. FIRE is a nonpartisan nonprofit that uses litigation, scholarship, and public outreach to defend and promote the value of free speech for all Americans. We proudly defend free speech regardless of a speaker's viewpoint or identity, and we have represented people across the political spectrum.

I'm here to address the risk AI and AI regulation pose to freedom of speech and the creation of knowledge.

We have good reason to be concerned. FIRE regularly fights government attempts to stifle speech on the internet. FIRE is in federal court challenging a New York law that forces websites to "address" online speech that someone, somewhere finds humiliating or vilifying. We're challenging a new Utah law that requires age verification of all social media users. We've raised concerns about the federal government funding development of AI tools to target speech, including microaggressions. And later this week, FIRE will file a brief with the Supreme Court explaining the danger of "jawboning": the use of government pressure to force social media platforms to censor protected speech.

But the most chilling threat that the government poses in the context of emerging AI is regulatory overreach that limits its potential as a tool for contributing to human knowledge. A regulatory panic could result in a small number of Americans deciding for everyone else what speech, ideas, and even questions are permitted in the name of "safety" or "alignment."

I've dedicated my life to defending freedom of speech because it is an essential human right. However, free speech is more than that; it's nothing less than essential to our ability to understand the world. A giant step for human progress was the realization that, despite what our senses tell us, knowledge is hard to attain. It's a never-ending, arduous, necessarily decentralized process of testing and retesting, of chipping away at falsity to edge a bit closer to truth. It's not just about the proverbial "marketplace of ideas"; it's about allowing information, independent of idea or argument, to flow freely so that we can hope to know the world as it really is. This means seeing value in expression even when it appears to be wrongheaded or useless.

This process has been aided by new technologies that have made communication easier. From the printing press, to the telegraph and radio, to phones and the internet: each one has accelerated the development of new knowledge by making it easier to share information. 

But AI offers even greater liberating potential, empowered by First Amendment principles, including freedom to code, academic freedom, and freedom of inquiry. We are on the threshold of a revolution in the creation and discovery of knowledge. AI's potential is humbling; indeed, even frightening. But as the history of the printing press shows, attempts to put the genie back in the bottle will fail. Despite the profound disruption the printing press caused in Europe in the short term, its long-term contribution to art, science, and, again, knowledge was without equal.

Yes, we may have some fears about the proliferation of AI. But what those of us who care about civil liberties fear more is a government monopoly on advanced AI. Or, more likely, regulatory capture and a government-empowered oligopoly that privileges a handful of existing players. The end result of pushing too hard on AI regulation will be the concentration of AI influence in an even smaller number of hands. Far from reining in the government's misuse of AI to censor, we will have created the framework not only to censor but also to dominate and distort the production of knowledge itself.

"But why not just let OpenAI or a handful of existing AI engines dominate the space?" you may ask. Trust in expertise and in higher education, another important developer of knowledge, has plummeted in recent years, due largely to self-inflicted wounds borne of the ideological biases shared by much of the expert class. That same bias is often found baked into existing AI, and without competing AI models we may create a massive body of purported official facts that we can't actually trust. We've seen on campus that attempts to regulate hate speech have led to absurd results like punishing people for simply reading about controversial topics like racism; similarly, AI programs flag or refuse to answer questions about prohibited topics.

And, of course, the potential end result of America tying the hands of the greatest programmers in the world would be to lose our advantage to our most determined foreign adversaries.

But with decentralized development and use of AI, we have a better chance of defeating our staunchest rivals or even Skynet or Big Brother. And it's what gives us our best chance for understanding the world without being blinded by our current orthodoxies, superstitions, or darkest fears.

Thank you for the invitation to testify, and I look forward to your questions.
