"So to Speak" podcast transcript: Artificial intelligence: Is it protected by the First Amendment?
Note: This is an unedited rush transcript. Please check any quotations against the audio recording.
Nico Perrino: You're listening to So to Speak, the free speech podcast, brought to you by ¹ū¶³“«Ć½app¹Ł·½, the Foundation for Individual Rights and Expression. Welcome back to So to Speak, the free speech podcast, hosted by me, Nico Perrino, where, every other week, we dive into the world of free expression through personal stories and candid conversations.
Today, we're focusing on the rapid advancement of artificial intelligence and what it means for the future of free speech and the First Amendment. Joining us as we navigate the complex and ever-evolving landscape of free expression in the digital age are our guests: Eugene Volokh, a First Amendment scholar and law professor at UCLA; David Greene, the senior staff attorney and civil liberties director at the Electronic Frontier Foundation; and Alison Schary, a partner at the law firm of Davis Wright Tremaine. Eugene, David, and Alison, welcome onto the show.
Alison Schary: Thanks, Nico.
Eugene Volokh: Thanks for having me.
David Greene: Yes, glad to be here.
Nico: I should say that introduction was not written by me, but rather by OpenAI's popular artificial intelligence chatbot, ChatGPT, with just, I think, one or two tweaks from me there on the back end. I think it's fair to say artificial intelligence technologies have advanced quite a bit here in the past few months, or even past few weeks, and taken many people by surprise. It's not just ChatGPT, of course, which you can ask questions to and it can churn out surprisingly cogent responses, but there's also DALL-E, which creates AI images, VALL-E, which turns text into surprisingly human-sounding speech, and then there are tools like QuickVid and Meta's Make-A-Video, which produce AI video.
So, to start this conversation, I wanna begin by asking the basics. Should AI be granted the same constitutional rights as human beings when it comes to freedom of speech and expression, and if so, how would you define the criteria for AI to be considered as having First Amendment rights? And I should say, that question itself was generated by AI, and I know, Eugene, over email, you had a quick take on this one, so I'll let you kick it off if you're willing.
Eugene: Yeah. So, rights are secured to persons – let's not say people necessarily, but persons.
Nico: Well, what's the difference?
Eugene: Right. So, they reflect our sense that certain entities ought to have rights because they have desires, they have beliefs, they have thought, so that's why humans have rights. You could even imagine, if somebody were to show that orangutans, let's say, or whales, or whatever else are sufficiently mentally sophisticated, we could say they have rights. Corporations have rights because the humans who make up those corporations have rights, but computer software doesn't have rights as such.
So, the real question would be whether, at some point, we conclude that some AIs are enough like us – thinking enough that they have desires, that things matter to them – and then, of course, we'd need to ask what rights they should have, which is often very much based on what matters to us. So, for example, humans generally have parental rights because we know how important parenting is to us. I don't know what would be important to AIs. Perhaps it's different for different AIs. I think it's way premature to try to figure it out.
Now, there's a separate question, which is whether the First Amendment should protect AI-generated content because of the rights of the creators of the AI or the rights of users of the AI. I'll just give you an example. Dead people don't have rights, not in any meaningful sense, but if somebody were to ban publication of, say, Mein Kampf, it wouldn't be because we think that the dead Adolf Hitler has rights, it's because all of us need to be able to – excuse me, it would be unconstitutional, I think, because all of us need to be able to read this very important, although very harmful, book.
So, I think there's a separate question as to whether AI-generated speech should have rights, but at this point, it's not because AIs are rights-bearing entities. Maybe one day, we'll conclude that, but I think it's way too early to make that decision now.
Nico: Well, as a basic constitutional question – maybe, David, you can chime in here – what does the First Amendment protect? We talk about the freedom of speech. Is it the speech itself, or is it because the speech is produced by a human that it therefore needs protection, or is there just value in words strung together, for example?
David: Well, I think the First Amendment, and freedom of expression as a concept more broadly, protects the rights of people to express themselves – or persons, or collections of people that form entities. I agree with Eugene that AI is best looked at as a tool that people – persons who have rights – use to express themselves, and so, when we're talking about the rights framework around AI, we're talking about the rights of the users: of those who want to use it to create expression and to disseminate expression, and those who want to receive it. And I do think, in every other context of the First Amendment, that's what we're looking at. We're looking at the value in expressing oneself to others.
Nico: But is there a reason we've historically looked at it that way? Is it because it's only ever been sentient people who have produced the expression, right? It could be produced no other way. Now, you're creating a technology that can take on a life of its own and produce expression that was never anticipated by the so-called creators of that artificial intelligence, and if speech and expression have informational value, and we protect them because of the informational value they provide to, say, democracy or knowledge, is there an argument that that should be protected as well?
Alison: Well, I'm gonna come at this, I think, as a litigator. Just to be totally practical, we can hypothesize about the constitutional, theoretical underpinnings, but when push comes to shove, the way the law gets made is in lawsuits, and you have to have a person to sue, and so, if somebody is gonna bring this case, they're going to sue the owner of the – the person who distributed the speech, or the person who created, or the entity that takes ownership of or develops, the AI system. That's what's happening with current ones.
So, I think as a practical matter, when these First Amendment arguments get made, they're inevitably going to be made by people, and it's either going to be the person distributing saying, "This is my speech," or it's going to be the developers of the algorithm saying, "This is my speech in the way it's put together, and it's also the speech of people who give the prompts to it, etc.," but I just have trouble thinking of – other than a monkey selfie kind of situation, which got squashed by – that's kind of the most on-point precedent –
Nico: What is that case, for our listeners who aren't familiar with it?
Alison: Oh, sorry. So, the monkey selfie is when a monkey – a macaque, I think – took a nature photographer's camera and took a selfie, and then the photographer was distributing it, and I think it was PETA that sued as the next friend of the monkey, arguing that the monkey had the copyright because the monkey was the one who pressed the button, and they lost: there was no standing for a monkey to assert copyright because it doesn't have the rights that are contemplated by the Copyright Act. So, I have trouble seeing where it's gonna get in the door to even adjudicate a First Amendment right for an AI in and of itself.
Eugene: Right. I like the litigation frame, I think it's very helpful, so let's imagine a couple of things. Let's imagine somebody sues ChatGPT, or essentially the owners of that – sues the company that runs it – and they say, "You are liable because your product said things that are defamatory," let's say. One thing it could do is it could raise its own First Amendment rights – "That's our speech" – but it could also raise third-party First Amendment rights.
So, it could say, "It's not our speech, but it's the AI's speech, and we are here to protect that," and there are quite a few cases where, for example, a book publisher could raise the rights of its authors, let's say, or, in fact, a speaker can raise the rights of its listeners. There are cases along those lines. I don't think that that's gonna get very far, because again, courts should say, "Huh, why should we think it has rights? Do we even know that it's the kind of thing that has rights?"
So, I think instead, the publisher will say – excuse me, the company that's being sued will say, "We're going to assert the rights of listeners. If it's publishing these things, it's gonna be valuable to our readers, and it's true, the readers aren't the ones being sued here, but if you shut us down, if you hold us liable, readers will be denied this information, so we're gonna assert their rights."
And again, courts are quite open to that sort of thing, and I think the answer will be yes, readers have the right to read material generated by AIs regardless of whether AIs have the right to convey it. The same thing would apply if, for example, Congress were to pass a law restricting the use of AI software – either prohibiting altogether the use of it to answer questions, or maybe imagine a rule that says that AI software can't make statements about medical care, like people would be asking it, "What should I do about these symptoms?" We don't want it to do that.
Alison: It would have to be disclosed.
Eugene: Right, you could have a disclosure requirement, or the easiest example would be a prohibition. Again, I think the AI software manufacturers would say not so much "Oh, the AI has a right to play doctor." Rather, it's that listeners have a right to gather information, for whatever it's worth, with whatever disclaimers are there, and the statute interferes with listeners' rights. I think that's the way it would actually play out in court.
David: And I do think, Nico, that your question sort of assumes that inevitability of sentience from AI, and I don't know how close we are to that or whether we will ever get there, but we are certainly not there right now. And I hate to be one of those tech lawyers who tries to make analogies to the Stone Age and things like that, but there have always been tools that speakers have used to help them compose and create their speech, and I do think that AI is best thought of in that way now. Maybe there will be some tipping point of sentience, or we'll have to think about whether or not there's no remedy for a speech harm because there's no one to defend the speech, and maybe we would get there, but I don't think we're there yet. And I think it's actually a bit dangerous from a human rights perspective to give AI this decision-making that's independent of those who do the inputs into it and those who give the prompts.
It leads us to sort of a magical thinking about AI – that it's inherently objective or that it's always making the correct decision – and I don't think it's great to actually disengage it from the responsibilities, when we're in the harms framework, of those who are actually inputting data into it and giving prompts. There's really still a very significant human role in what AI spits out.
Nico: Yeah, I asked that question because as I was preparing for this conversation, I talked with some of my colleagues and said, "Hey, what would you wanna ask David, Alison, and Eugene?", and as we know from popular culture, movies about artificial intelligence often involve AI that reaches a sentience that passes the Turing test, where it has an intelligence that is indistinguishable from human intelligence, and there's this popular horror flick out right now called M3GAN, starring an AI-powered doll that gains sentience and murders a bunch of people while doing TikTok dances, and so, they were like, "Well, what are Megan's free speech rights?", putting the murders aside.
And then, of course, there's The Positronic Man, written by Isaac Asimov a while back, which became Bicentennial Man, featuring Robin Williams, when it was made into a movie. That was essentially the story of a lifelike robot petitioning the state for equal rights. I never like to close my mind to the possibility that technology will do miraculous things in the coming years and decades.
I think if you had asked someone 150 years ago about virtual reality, they just wouldn't have even been able to conceive of it, and with the advancement of AI in the past three months, the sort of images that tools like DALL-E are turning out in some cases just blow my mind – images that look like a portrait someone drew that you would have paid thousands of dollars for. But I want to get back to this question about liability. So, the classic worry from those who are very worried about artificial intelligence is: okay, you'll ask AI to do something, and you won't be able to anticipate its solution to that problem.
So, you ask AI to eliminate cancer in humans, and it decides the best option is to kill all humans, right? That'll eliminate the cancer for sure. So, when AI takes a turn, is it the programmer who is responsible, for example, if the AI defames someone or incites violence, or is it the person who takes the generative product and publishes it to the world? How should we think about that?
Alison: I think that what David was saying about thinking of AI as a tool is the right approach here. If you just let a robot come up with how we're gonna solve cancer, and then just go with it, and have no humans in the chain checking whether this is a good idea, that seems pretty negligent to me.
We have claims that can account for that, all the way up through all kinds of torts. But having an algorithm that's able to run the numbers and come up with solutions, and then having a human look at those and cull through them – the algorithm doing the computation – I think that's a tool, and so, if a human then makes the decision to publish something or to act on something, you have a person to hold liable, because it's the person who took that recommendation and went with it, which is the same thing regardless of who's making that recommendation.
Eugene: So, one of my favorite poems is Kipling's "Hymn of Breaking Strain." It's got some deeper things going on in it which, actually, I don't much care for – those don't work well for me – but it's, in some measure, a poem – it starts out as a poem – about engineers. Here are the opening eight lines.
"The careful text-books measure / (let all who build beware) / the load, the shock, the pressure / material can bear. / So, when the buckled girder / lets down the grinding span, / the blame of loss, or murder, / is laid upon the man. / Not on the stuff – the man." I used it for my torts class when I used to teach torts: the bridge isn't negligent; the creator of the bridge may be negligent; maybe the owner of the bridge is negligent in not maintaining it; maybe the user of the bridge is negligent in driving a truck over it that exceeds the posted limits.
Now, to be sure, note there's one difference. The careful textbooks do not exactly measure what AIs are going to be able to do. In fact, one of the things that we think about modern AIs is precisely that they have these emergent properties that are not anticipatable by the people who create them. But it is the job of the creators to anticipate that at least to some extent, and if they are careless – this is a negligence standard, generally speaking, for these kinds of things – if they are negligent in their design – if, for example, they design an AI that can actually do things, that can actually inject people with things, and then they're careless in the failsafes they put in, careless in what the AI could inject people with – then, in that case, the creators will be liable, or perhaps the users, if the carelessness comes on the part of the user.
Alison: But the user's gonna sign a release no matter what – you're not gonna do that in the real world without somebody signing away every possible right.
Eugene: Well, my understanding – and I'm sure it varies sharply from jurisdiction to jurisdiction, but at least in my own state of California – is that there are limits to releases as to, for example, bodily injury. They're not always enforceable. In fact, in many situations, they're not enforceable. So, for example, a hospital can't say, "As a condition of coming to this hospital, you waive malpractice liability." You can't do that.
So, again, it may vary from jurisdiction to jurisdiction, and what if the AI is not even in the U.S.? What if the AI is in Slovenia, and who knows what Slovenian law is on this kind of thing? Maybe it's in a place that specifically, deliberately has law that is relatively producer-friendly rather than relatively consumer-friendly. But the important thing is, generally speaking, the creator is going to be subject to a negligence standard – or, again, it doesn't have to be the creator; it could be the user, it could be whoever it is who contributes to this.
Now, one difficulty, of course, is often in trying to figure out what is negligent. What if the AI does have some capacity to manipulate things, and experts come to the stand, and they say, "Well, they did as good a job as they could have, we think"? Will the jury believe that it wasn't negligent, or will they say, "No, no, surely you must have been careless in not anticipating this particular harm"? Interesting question.
There's also the question of what if the AI only provides recommendations? Does the First Amendment provide some sort of defense against a negligence cause of action in the absence of a knowing or reckless falsehood – let's say, the libel actual malice standard and such? So, those are interesting questions, but in principle, again, I think we need to look to the people behind the AI – whether, again, its creation, or its adaptation, or its use – and not to the AI itself.
David: Yeah, I agree. I do think the answer here is the nerdy lawyer answer: that it is going to depend on the mens rea of whatever the tort claim is, and whether that's going to be a negligence claim or, as we often have in free expression cases, a higher, more demanding mens rea standard, a subjective intent standard.
And then, to what extent any act is going to be a negligent act is really going to depend on that particular AI – that tool at that moment in time – and what the known risks are, and all the context about what the user knew about the tool and its propensity to give wrong answers or say harmful things. I do think it will end up playing out that way. We'll be looking at this just as a standard mens rea problem.
Nico: I wanna ask about fraud and misrepresentation. I've seen some futurists posit online that we'll be able to eliminate a lot of our email inbox by just training artificial intelligence on how we typically respond to emails and having it go through and respond for you. Do you think there are any concerns about fraud or misrepresentation there?
Another example: protected under the First Amendment is petitioning the government for a redress of grievances. I'm just thinking here about activists at organizations, not unlike ¹ū¶³“«Ć½app¹Ł·½, who might train artificial intelligence to make it seem like there are more activists in support of them – who write and call their congressman or -woman with unique emails generated by AI, or even unique voicemails left at the congressional office, generated by AI – but it's really just one organization or one person. It's kind of like the bot problem that you have on social media.
Alison: Yeah, I feel like this exists. There are nominally –
David: Yeah, SBC, I think, had leveled some accusations about it.
Alison: Yeah. I think this exists. This is just a more efficient version of "Here, we have a bunch of letters, just sign your name here and we'll send them all out." It's slightly different because there is a human attached to each of them, but in terms of being organized by a central organizing force, I think, it's not a totally new issue – it's probably just the volume.
David: And I think it's totally possible under the law for someone to commit fraud through the use of an AI tool. There's nothing in the law that I can think of that would bar liability because the fraud was committed through the use of an AI tool as opposed to any other tool, so I think it's certainly possible, and there are probably lots of examples, but I don't see any obstacle to that.
Alison: I don't think I would trust AI to respond to my email, certainly not at this point, certainly not as a lawyer.
Eugene: So, all that sounds right to me, but let me point out a couple of things that I think are implicit, Nico, in your question. One is: what if we're not dealing with what would normally be actionable fraud or misrepresentation – like somebody signing "Eugene Volokh" when it's not actually me, it's actually an AI; that might be, in some situations, fraud – but what if it's an unsigned letter, and it looks like it's from a human, but it doesn't say that, and maybe it's not reasonable to just assume that it's a human who's sending it?
So, what about disclosure mandates? What about a law that says any email sent by an AI has to be flagged "sent by an AI" – which, again, means that any email that a human authorizes to be sent by an AI has to have this disclaimer? Is that an impermissible speech compulsion – again, an impermissible violation of the rights of the human who is using AI to create this – or is this a permissible way of trying to prevent people from being misled?
A second, related question relates to the fact that there is a right to petition the government, but there is no obligation, as a constitutional matter, on the government's part to respond to the petitions. So, for example, if you were a government agency, you might say, "We're not gonna prosecute you for sending us AI comments on some rule or some such. If you wanna do it, that's fine. You have every right to clog our inboxes. That's not enough of a harm to justify punishing it – at least, unless it's the equivalent of a denial-of-service attack – but we will ignore – we will just not pay any attention to – anything that doesn't say at the bottom, 'I certify as a human being that this was written by me, signed, the name of the person.'" And then, if I send that certification through an AI, then I am committing, possibly, for example, the crime of making a false statement to the government on a matter within its jurisdiction. That's 18 U.S.C. Section 1001, perhaps.
So, it may well be that the government and others will have to set up similar such rules to say, "Look, I'm only going to respond to messages that aren't from AIs." More broadly, you can imagine email providers that actually do say, "Look, at least with things sent by people whom you don't know, one feature we will offer our users is the option of blocking all material that isn't certified as being from a human, because the last thing you want is your email box clogged by all this bot mail." And if that's so, then, again, somebody bypassing that with a false certification would be committing fraud.
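For the curious, here is a minimal sketch of the opt-in filter Eugene describes – keep mail from people you know, and for strangers, require a human certification. The X-Human-Certified header is hypothetical; no real mail standard defines it, and the scheme assumes a false certification would itself be actionable fraud, as discussed above.

```python
import email
from email.message import Message

# Hypothetical header name for illustration only; no actual
# mail standard defines a human-certification header.
CERT_HEADER = "X-Human-Certified"

def allow_message(raw: bytes, sender_is_known: bool) -> bool:
    """Keep mail from known senders; for strangers, require the
    (hypothetical) human-certification header to be present."""
    msg: Message = email.message_from_bytes(raw)
    if sender_is_known:
        return True  # the filter only applies to people you don't know
    return msg.get(CERT_HEADER, "").strip().lower() == "true"
```

A real deployment would of course need some way to make the certification hard to forge – the header alone is just a promise, which is exactly why Eugene frames the backstop as fraud liability rather than technology.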
Alison: I think, related to this, is how people might solve the problem, because the problem of all this is the generation of junk – just the creation of junk mail, just the drowning out of the real people in the mass of – the cacophony of speech created by nonhuman means. And I think what's going to happen, potentially, is systems that place a premium on verification – not necessarily somebody clicking a box, but maybe you're holding more town meetings, you're holding more hearings in person, in a way that can't be gamed as much.
It can also mean – if you're making policy, maybe you're not reading the comments, and you're really talking to the stakeholders, where you know who they are, and that's kind of how a lot of laws have been made for a long time. Maybe that's not so different from what's already going on. I'm not sure how diligently every random person's letter to the agency is being read, as opposed to the briefs – the papers that are submitted by people that they know, who have connections and have the ability to go in and push for their position.
So, I'm not sure that – I think it's going to exacerbate a problem that already exists, and what we might lose, potentially, is some of the democratic access that comes with being able to petition the government, or show up as somebody who doesn't already have a way to get in the door, because you might be drowned out in the unverified mass.
David: And you wonder whether the big problem ends up being that the government doesn't believe that there's popular support for or popular opposition to something because they're making some assumption that it's some bot that's just spitting these things out. "That was a beautiful letter in opposition, probably written by AI, so I can just ignore it." I get more concerned about lawmakers having some excuse to ignore really, really valuable input because they've been convinced that it's not the work of real humans.
Alison: Well, they're not convinced, but they understand that they can dismiss it in that manner – not to be cynical. I think I'm the cynical voice on this podcast.
Nico: I have family that work or worked in congressional offices, and when constituents or anyone calls into the office, one of the first questions they ask is "Are you a constituent? Are you in this district?" If the answer is no, then they don't really continue the conversation, but if the answer is yes, they hear the complaints and they log them. And then, for emails, they log all those emails, too. It actually surprises me how many of these offices log every constituent concern or complaint, but the problem, of course, with AI is: are they really a constituent?
When you're talking about text-to-audio, they might sound like a constituent or say they're a constituent, but that would be – speaking to Eugene's point – a misrepresentation or fraud that's already accounted for by the law. I think a big concern there would mostly be the denial-of-service-type thinking that Eugene was talking about. You only have so many people who can answer the phones and so many hours in a day, and if you keep bombarding them with AI –
Alison: Unless you use AI to sort through it. It's turtles all the way down.
David: Exactly! Although I wonder whether it might be good to think a little bit about human-AI partnerships. So, my sense is that there are a lot of people who might say, "I have some thoughts about this, but I know I'm not the most articulate person. I'm not sure I have the best arguments for this, but I'm going to use AI to create a better thing than I would have done myself, and I will endorse it, or maybe I'll edit it a little bit, or I'll review it and endorse it."
Or there may even be people who will say, "I need to write up a letter about something, so I'm just gonna let it do the first draft," the way I think a lot of translators use translation software. They realize translation software is far from perfect, but it provides a good first cut, and that cuts down the total translation time. So, one interesting question is: what should we think of that?
Alison: I think it's good. You have only so much mental capacity in a day, in a week, and saving that for the tasks that are its highest and best use can be good. Within my job, I'm a really good editor, but it takes me forever to write a first draft. Maybe it's a way to get a bunch of words on paper – I love when someone else writes a first draft.
David: Right, but let's think about this specifically with an eye towards the submission of comments to, say, an administrative agency or whatever else. So, on one hand, we could say it's actually fairer for people who may not be as articulate, or may not be as experienced with doing this, to use the comment-writing bot, and then they have to review it and then sign it.
On the other hand, what if it turns out that a lot of people, as a practical matter, don't really review it, and what they do is they get advocacy groups to send out little prompts saying, "All of our fans, why don't you use an AI running this prompt, and then, of course, edit the results as you think is necessary?", but as a practical matter, people just submit it as-is, and you get all the problems. It's true, you do have somebody's at least formal statement, "I am a human and I endorse this message." There may not be, practically, that much of a human judgment there.
The other problem is, to the extent we do use AIs to detect AI-written stuff – which, in fact, we academics have been thinking about, whether we can do that to deal with AI-based plagiarism; like, what if somebody submits a paper to us, how do we know whether it was written by an AI? Well, we may run it through an AI-based AI detector.
Part of the problem, though, is that presumably it would detect material that was written by an AI as a first draft and then only slightly modified as AI-generated material. If you think a submission should be accepted so long as a human endorses it, then you wouldn't really have the option of using an AI to sort through all of the spam AI-submitted things, because the human-endorsed version looks just like the one that is simply submitted by a bot.
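The speakers don't say how such detectors work, but one common heuristic is perplexity: text that a language model finds highly predictable is more likely to be machine-generated. Here is a rough sketch of that idea using GPT-2 via the Hugging Face transformers library; the threshold is purely illustrative, and the heavy overlap between human and machine scores is exactly the weakness being discussed above.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt",
                    truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels makes the model return the cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def looks_machine_generated(text: str, threshold: float = 40.0) -> bool:
    # Illustrative threshold only: model-generated text tends to score
    # lower (more predictable) than human text, but the distributions
    # overlap badly, so false positives and negatives are common.
    return perplexity(text) < threshold
```

Note that a lightly human-edited AI draft will score almost identically to the raw AI output, which is David's point: a detector like this cannot distinguish "endorsed by a human" from "submitted by a bot."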
Alison: Yeah. I think we're not gonna out-computer the computer. It's always gonna be a race. It's like encryption. Somebody's gonna come up with something better, and then it's gonna get – it's just gonna be kind of like this – sorry, this is a podcast – this is me saying "one after the other."
Nico: There is a video component of it too, Alison.
Alison: Oh, good.
David: So, it's turtles all the way down – you mean elephants all the way up.
Alison: Elephants all the way up, exactly. But I think – there was an op-ed, I think it was in the Times, recently that I thought was pretty compelling. It was taking the opposite view: rather than "Let's try to stamp out plagiarism," it said, "People are going to use this. This is a new tool. It's important to have literacy. Let's use it to ask: when we give a prompt of 'write X in the style of Y,' what is it drawing on? Why is it in the style of Y? What elements of it are reminiscent? What things is it getting wrong?" – and having that, rather than this fear of technology, which is very old – the radio, the TV, phones, everything – we're always afraid it's gonna be the end of the world. Maybe it's about learning what to do with it and what its best use is.
I think trying to trip it up is not necessarily gonna be a productive or reliable thing, as we can't always know that we have the best algorithm – and it may be wrong – and so, maybe just assume people might do it and have that not be the point. Learn how to incorporate it even affirmatively into a lesson.
David: Yeah, I think that's right, and my understanding is that journalists have been experimenting with using ChatGPT to do first drafts of articles, just to see – and I think we have these tools, and we should learn to become comfortable with them, and those of us who are just naturally beautiful writers, who have had this advantage over everyone else because we can effortlessly spit out beautiful text – this might be an equalizer. We're going to have to find our advantage someplace else, right? But I agree, we can't be running away from these things or trying to impose equities on them that really aren't necessary.
Eugene: So, as a general matter, I think it's right that this stuff is coming, it'll be here to stay, and it'll be getting better and better. We have to figure out how to adapt to it. At the same time, if my goal as a professor is to measure a student's mastery of particular knowledge, I can't accomplish that goal now through an essay that they write, especially at home, if ChatGPT can write comparably good essays.
Alison: Or you could do an oral exam still.
Nico: Yeah, that's how they do it in Europe, right?
Eugene: Right, so we may need to change things to do that. So, in a sense, we're not running away from the technology, but we are essentially saying that this technology threatens the accomplishment of a particular function. I don't think the solution is to ban the technology generally. The solution may have to be, though, to change the function so it can live with the technology.
So, as to oral exams, I appreciate the value of them. That's not as good a solution, I think, in many ways – partly because it's more time-consuming for the graders, and partly because my sense is that oral exams are further away from what lawyers do. Most of lawyers' work is written, not oral, so oral exams measure things a little bit less well. And there are lots of opportunities for bias in oral exams that are, in considerable measure, mitigated in written exams – there are other opportunities for bias in written exams, but many of them are mitigated there.
And I'm not even just talking about race and sex; it's also appearance and manner. Everybody likes the good-looking and the fun-seeming, and in writing, thankfully, I can't tell what somebody looks like or how fun they are; I just look to see what they're actually saying. So, in any event, I do think that the AI stuff is going to be potentially quite harmful to the way we do academic evaluation – again, I agree we shouldn't ban it, but we also shouldn't ignore the fact that it should lead us to think hard about how to prevent this kind of cheating.
David: No, I agree that the cheating thing is something we have to deal with. There was the calculator conundrum. When I was a student, calculators became mass-available, and there was this big question about whether to allow their use in class. Ultimately, they exist, and it's better to actually assess people on their facility with the tool than to pretend the tool doesn't exist and to require that people have the underlying capability. And then we did the same thing with search.
I remember when I first started teaching, there were questions about whether people could use Wikipedia because it was too easy. One of the things we have to do as educators is get over this idea that there's some nobility in having people do it the difficult way, and one of the things we can do is teach them how to use available tools to do an excellent job, right? I understand the assessment changes – we have to change what we're assessing, and maybe we're assessing the output with the use of the tool instead of the output before the use of the tool.
Nico: I was watching an Instagram reel – or maybe it was a TikTok video – the other day of an attorney – maybe a real estate attorney – in New York City who asked ChatGPT to essentially draw up a real estate contract with specific terms: standard New York language with this force majeure clause and this jurisdiction. It spit it out in four minutes, and then he split-screened it and went through, clause by clause, each term, and he's like, "This is pretty good. This would save me hours of work, and I'd just go in here and tweak around the edges." So, he was viewing it as kind of an augment to his work.
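For readers curious what that workflow looks like under the hood, here is a minimal sketch of the kind of API call involved, using OpenAI's Python client. The model name and the prompt are illustrative stand-ins, not the attorney's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt; the terms below stand in for whatever the
# attorney in the anecdote actually specified.
prompt = (
    "Draft a standard New York residential real estate purchase "
    "contract with a force majeure clause and New York jurisdiction. "
    "Present it clause by clause."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
print(draft)  # a first draft for a human lawyer to review and tweak
```

The design point matches the discussion: the model produces the draft in minutes, but the human review pass – the "tweak around the edges" step – is where the professional responsibility, and the liability analysis from earlier in the episode, still sits.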
And I will say, as I was asking ChatGPT to write up the introduction to this show – introducing these guests, saying "every other week," this is the tagline – it did a pretty good job, but I wanted to add my own kind of language around the edges because it sounded a little bit stilted, and you'll hear that as I ask some of the questions during the show. I asked ChatGPT to write the questions, but I needed to tweak them a little so they sounded more authentic.
Alison: Well, it's based on what's been asked before, so it's gonna sound a little – it's not necessarily creating a new thing. I wanna also add, just to maybe be an optimist and allay the fears about academic cheating: my husband is a professor and actually wrote what I thought was a pretty interesting article about this in The Atlantic at the end of last year.
He was making the point that these are all free tools right now – it's a sandbox, it's a playground, and everyone can kind of go and make their college essay on it – but it's extraordinarily expensive to run these tools, and they're not going to stay free. Eventually, they're going to be incorporated into something where there's money, where there's a use case, and that's where they're going to be used, or you're going to have to pay for them, and that's also going to make it easier to tell who is using them. So, I'm not sure that it's always gonna be the case that anyone can just hop on and use the best ChatGPT generator to generate their – it might be a now problem, but I'm not sure it's a forever problem.
Nico: Well, that was actually interesting. I saw an exchange on Twitter where someone in the tech space – because a lot of us see this, and it's free, and we assume it costs nothing to produce, just like prior to paywalls on news websites, we were like, "Oh, yeah." But a smart tech person asked, "Well, what are the server costs associated with each use of this?", and I thought that was a smart question because it shows that there are limits to how free this technology is gonna be.
Alison: It is apparently extraordinarily expensive to run this, to offer it free right now – but it's getting a lot of buzz, and people are learning about it, and there's –
Eugene: And the costs always decline. Remember how, when CD players first came around, I think people would say, "Oh, well, this is just for the rich."
Alison: Yeah, but it's the computing cost of it that is very high, so maybe it comes down over time, but right now, it's expensive, and I'm not sure how long it's going to be that anyone can just screw around with DALL-E.
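As a back-of-the-envelope illustration of the "server cost per use" point – every number below is an assumption for the sake of the arithmetic, not a real price sheet – the per-query inference cost is tiny, but it multiplies fast at scale:

```python
# All numbers are illustrative assumptions, not actual provider costs.
price_per_1k_tokens = 0.002   # dollars per 1,000 tokens, assumed
tokens_per_query = 1_000      # prompt plus response, assumed
queries_per_day = 10_000_000  # assumed traffic for a popular free tool

cost_per_query = price_per_1k_tokens * tokens_per_query / 1_000
daily_cost = cost_per_query * queries_per_day
print(f"~${cost_per_query:.4f} per query, ~${daily_cost:,.0f} per day")
# ~$0.0020 per query, ~$20,000 per day: trivial per use,
# substantial in aggregate, which is the speakers' point.
```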
Nico: So, I have two more topics that I wanna get to because I know we've got a hard stop in 10 minutes, and I think David has to hop off here in five minutes, which is okay because he said he didn't wanna talk about the IP stuff, or didn't have much to add on it. We'll cover that in the last question here. But I wanna ask about deepfakes, and I wanna start by playing some audio for you all.
Video: I am not Morgan Freeman, and what you see is not real – well, at least in contemporary terms, it is not. What if I were to tell you that I'm not even a human being? Would you believe me? What is your perception of reality? Is it the ability to capture, process, and make sense of the information our senses receive? If you can see, hear, taste, or smell something, does that make it real, or is it simply the ability to feel? I would like to welcome you to the era of synthetic reality. Now, what do you see?
Nico: So, that's actually a video that I saw on social media – we'll cut that into the video version of the podcast – but it's someone standing in front of a camera, and that someone is Morgan Freeman, except it wasn't actually Morgan Freeman saying those words. And I was talking to one of my colleagues about this, and he says that right now, AI-based deepfake-detecting technology has kept up with deepfake production, so it's pretty easy – if you have the technology – to determine what is a deepfake. But this looked exactly like – to my untrained eye – Morgan Freeman, and it sounded exactly like Morgan Freeman.
With all new technologies, of course, there's scaremongering, but could we have a real War of the Worlds-type panic as a result of deepfakes? I imagine none of you would say that this sort of thing would be protected speech – or maybe I'm wrong. It could be fraud or misrepresentation, depending on how it's used. In that case, the AI-generated Morgan Freeman said, "I am not Morgan Freeman" – full disclosure – but you could imagine a world where they do that with, say, then-President Barack Obama, and people think it is actually him. What are your thoughts on deepfakes?
Eugene: So, this is an important subject. It's also, like so much, not really a new subject. My understanding is that when writing was much less common, people were much more likely to look at what seems to be a formal written document and just presume that it must be accurate because, after all, it's written down – maybe it was filed in a court, and so on and so forth – but then, of course, as writing became more common, we all became familiar with the possibility of forgeries.
It's true that we kind of grant most documents an informal presumption of validity if they look serious, but if somebody says, "Wait a minute, how do you know it's not forged?", I think it's pretty easy – or very likely – that people react, "Oh, yeah, right, we need to have some proof, we need to have some sort of authentication."
Often, in testimony, someone says, "Yeah, I'm the one who wrote it," or, "Here are the mechanisms we have for detecting forgeries," and the like. So, I do think that if somebody puts up a video that purports to be some government official doing something bad, and it turns out it's a deepfake, the person can say, "Look, we all know about deepfakes. This is one of them. I never did that."
Just like if you were to post a letter that I supposedly wrote, the answer is I didn't write it. I believe in the late 1800s, there was some forged letter, I want to say purportedly by then-candidate James Garfield, that played a big role in the election campaign, and it turns out it was a forgery, and I think it was denounced promptly as a forgery. One interesting question is to what extent people who really are guilty of what's depicted in a video will say, "Oh, no, no, it's all deepfake, I didn't do that. Why do you believe this nonsense? It's obviously fake."
Alison: It's kind of like the "I've been hacked" defense.
Eugene: Right, exactly – the "I've been hacked" defense. So, one possible problem isn't so much that people will believe too much – although it may be that, at some visceral level, the fact that we've seen something, even if we know it's fake, means we'll still kind of absorb it; it will still color our perception of the person, maybe. Part of the problem may be that people will become even more skeptical for fear of becoming too credulous.
They'll become too skeptical, and as a result, people will become very hesitant to believe even really important and genuine allegations of misconduct by people, or by governments, or by others. So, I do think it's gonna be a serious problem. I do think it's important to realize that this is just a special case of the broader problem of forgery, and if you think of deepfakes as basically video and audio forgery, then I think you see the connection more than if you just have a completely new label for it. In fact, I just came up with that – I think I'll blog it later today.
David: I agree, and going back, one of the reasons that libel was a more serious offense than slander was the inherent reliability of the written word, and fortunately, I guess, the common law has had a thousand years of creating a series of legal remedies based on falsity – whether that's damaged reputation, or emotional distress, or whatever – and I do think, in terms of legal frameworks, we'll look to see whether those remain sufficient for this new type of false statement, and it'll be interesting. But I agree. I think, societally, the idea that maybe we just don't know what to believe anymore is going to be a much more difficult thing to get used to than tort law.
Nico: Well, that's one of the things you see in societies where there's – I read Peter Pomerantsev's book Nothing Is True and Everything Is Possible, which is about the state of modern Russia, where they just flood the zone with shit and nobody knows what to believe anymore, and so, as a result, they just become cynical about everything. You could see that sort of situation happening, where people –
Eugene: To be fair, Russians – we've been cynical about everything for a very long time.
Alison: Not to be a media lawyer, but let me put in a plug here. One way out of this is excellent journalism, because media literacy is important, and not just believing things because you see a picture of them is not necessarily a bad thing, but you can authenticate – you can say, "Here's this thing, here's what we did. We talked to this person, we examined XYZ."
It's showing and explaining why it's consistent – why you feel comfortable reporting this and why you think it's authentic, or why you're not sure. I think that's helpful to show people, and good journalists can use that. It's okay to have some healthy skepticism about sources – audiovisual sources like this. I don't think that's necessarily a bad thing, and I think explaining and showing people how to evaluate them is good media literacy and good journalism.
Eugene: So, I think that's right, although one problem is that this issue has come up at a time when, my sense is, people are much more distrustful of the mainstream media than ever before, with good reason. I think we would need to regain a notion of an ideologically impartial media to do that – not to say that the First Amendment should only protect ideologically impartial media.
I think ideologically partial media are fundamentally important – that is, ideologically one-side-or-the-other media are a fundamentally important part of public debate – but when you're getting to questions about basic fact, like is this real or is this not, people are afraid: "Oh, well, maybe the reason they're not investigating this is because they have some agenda – some social justice agenda, let's say, or some traditional values agenda – that's keeping them from doing it," or, when they say it's fake, is that being colored by their preferences?
Part of the problem is that those are really serious concerns – bye, David – and today, I think they're much more serious than ever, at a time when we need impartial media more than ever.
Nico: David, we appreciate you joining us. I know you have to run.
David: Yeah, I'm sorry I have to run. I have my canned answer, also, for the IP stuff, if you want me to say it so you have the recording.
Nico: No, that's okay. I think we'll probably cover what you were gonna say anyway. The question on IP – and David, if you need to hop off, I'll let you – is: artificial intelligence has the ability to generate – and this is an AI-generated question right here – artificial intelligence has the ability to generate original work, such as music, art, and writing, and some have raised concerns that this could potentially lead to violations of intellectual property laws.
So, what are your thoughts on that? You say, "We want this written in the style of so-and-so," or there's this thing going around social media where DALL-E generated images of Kermit the Frog or Mickey Mouse, or there's this Al-Qaeda-inspired Muppet that has been going around and is kind of burned into some of my colleagues' brains. How do we think about that? I'm assuming what you'll say is we already have a legal framework for addressing that – fair use –
Alison: Or substantial – or copyright – to me, it doesn't so much matter how you came up with the thing that looks exactly like the copyrighted work. If you are distributing it and doing one of the things that is covered by the Copyright Act, then I don't think it necessarily matters whether you used a brush to make it or a computer to make it. We have a framework for that.
David: Yeah, I think that's right, and I'll give you my last bit before I hop off. In terms of outputs from AI, sure, AI could spit out potentially infringing materials the same way that any other tool could. The more difficult question – or, I don't think it's difficult, but an interesting question – is in terms of the training of AI tools, using copyrighted images for training. I certainly think that using those as inputs for training purposes is a fair use – that using copyrighted images in order to train an AI tool would be a fair use of those images. But then the output certainly could be infringing, and you would have to look at each individual output to determine whether it was or wasn't.
Nico: That's an interesting question there, David. I hadn't even thought of that.
David: Now I have to go.
Nico: Well, we'll let you go.
Alison: I've gotta go in five.
Nico: Yeah, Alison has a hard stop in five minutes. Okay, so, you're using a copyrighted work to produce a commercial product, right? I think of when I'm going to USAA, my insurer, and we're talking about what I need to insure in my home; they say, "I can't look at your home on Google Street View because we haven't created a license with Google to be able to look at your home through that product." Eugene's looking skeptical.
Eugene: That's a strange thing for them to say, I think, although who knows?
Nico: That's what they told me. It sounded strange to me.
Eugene: People say all sorts of things.
Nico: I said, "I have a split-level home. You can go on Google Street View and look at it." They're like, "No, we can't, because we haven't licensed that technology for use in our insurance underwriting business." I'm like, "Okay."
Alison: Maybe that's a liability issue.
Eugene: Maybe there are some terms of use in Google Street View that we don't pay attention to. But I will say – while I agree with what people have said generally, I do think there are gonna be some important legal questions that are different, so let me give you an example.
So, the Supreme Court, in the Sony v. Universal case, held that VCR manufacturers couldn't be held liable for copyright infringement done using the VCRs because that's just a tool, so you could say, well, likewise, AI developers shouldn't be held liable for copyright infringement done using their AIs – like, for example, if you generate something and then use it in an ad or some such.
Alison: As long as there's a substantial non-infringing use.
Eugene: Right, right. But it's possible that the analysis might be different for AIs. You might say, well, first, we can expect more copyright-infringement detection, as it were, from AIs than we could from just a VCR. Another thing is, the VCR manufacturer had done nothing at all in developing its VCR that used anybody else's copyrighted work, so it was only the use that might be infringement.
Maybe you might say – I'm not sure this is right, but you might say – if you are using other people's copyrighted work in developing, essentially, and training your AI, then that is a fair use, but only if you then also try to make sure that you're preventing it from being used by your users to infringe. Of course, there's also the complication that a lot of users' stuff will be just kind of for fun, and maybe it will look exactly – it is Mickey Mouse, and it's just for home use, for noncommercial use; maybe that's a fair use, whereas if you put it in an ad, it's not a fair use, and the AI may not know what the person will ultimately use it for.
So, those are interesting questions, but at the very least, I think we can't assume that all the existing doctrines, such as contributory liability, will play out in quite the same way. One other factor is that copyright law, unlike patent law, does provide that independent creation is not infringement. So, if the AI creates something that just happens to look just like a Picasso, that's not an infringement – but of course, you might say, if the training data included the Picasso, maybe that was fair use at the outset, in the training, but now you can no longer say it's independent creation because, after all, it's not independent. It's very much dependent.
Then, what happens if you deliberately exclude Picasso from the training data, but you end up using all sorts of other artists who were influenced by Picasso – maybe even including works that had some infringing elements, but that nobody sued over? In any event, I do think this will raise interesting and complicated questions, because the existing doctrines were developed in a particular environment that's now shifting.
Alison: I'm also gonna throw in, and then I do have to drop – to be the practical lawyer on this, one thing that I see as affecting the way this may play out is that the Copyright Office, at least so far, has refused to register things that were solely generated by AI unless there was substantial human involvement, and that's gonna affect what you can do with these kinds of works. Rather than using an artist you can hire and license work from, or do a work-for-hire with, if you use an AI, you have no ability to copyright the output, and then it's not gonna necessarily be of valuable commercial use if you want to be able to protect what you're using the AI to create on the other end, in a commercial sense.
Nico: Yeah, that was kind of the flip side, right? Who gets to copyright works produced by AI?
Alison: Yeah, it sounds like nobody right now.
Nico: Well, I know Alison has to drop off, so if she needs to drop –
Alison: This was really fun.
Nico: Yeah, that was fun, Alison. And then there was one, Eugene. I'll let you finish up and give your thoughts.
Eugene: Sure. So, I'm not terribly worried about people not being able to copyright AI-generated works – that is to say, users not being able to copyright them. It's an interesting question whether they could, based on their creativity in creating the prompt, but let's say they can't.
The whole point of copyright protection is not that copyright is valuable in the abstract; it's that it makes it possible for people to invest a lot of time, effort, and money in making a new movie, or writing a novel, or whatever else. If indeed it's very easy for you to create a work, we don't really need to stimulate the creation of that work through copyright protection – "very easy for you" meaning the user. It may be very difficult for OpenAI to do it; that's a separate matter.
So, to be sure, copyright law does indeed protect even things that are easy to create. I can write an email that'll take me half a minute and no real creative effort, and that email is protected by copyright law, but that's a side effect of copyright law generally protecting textual works that people write, which is motivated by a desire to have an incentive to create. If indeed a picture is easy to create, with just the relatively modest effort required to select a prompt and then sort through the results, it's not such a big problem, I think, if that's not copyrightable.
Now, for commercial purposes, it may be important that the result could be used as a trademark – I oversimplify here, but basically, if I create a logo using OpenAI's tools, I should be able to stop other people from selling products using a very similar logo. But I think trademark law would already do that. Trademark law already protects things that are not protected by copyright.
Nico: Do you think – Alison said, as someone who works in the copyright space, that the government isn't copyrighting anything produced by AI. Do you think eventually, we'll get to a place where it will?
Eugene: I'm sorry – that the law does not provide for this protection? You said "the government."
Nico: Yeah, what is it? It's not the patent… What government agency issues copyrights? And I should know this because I have some.
Eugene: There is no government agency that issues copyrights. A work is copyrighted when you write it down, when you fix it in a tangible medium. You can write an email, and that's protected by copyright the moment you write it, at least under modern American law. Now, before you sue, you have to register it, but that's just a condition of filing a lawsuit; it's also a condition of getting some remedies.
So, the question isn't so much whether somebody is registering these copyrights; the question is whether the law offers this kind of protection. Do we say that an AI-generated image is protected? And there, the question is to what extent does it reflect the expression provided by the supposed author. So, if I just say, "Show me a red fish and a blue fish," at most, what I've provided is the idea of having a red fish and a blue fish. That's not enough to be expression.
On the other hand, if I were to give enough details, then it may be that it's protected, at least as against a literal copy that includes all of the details I've asked for; a copy that doesn't take those details might not be infringing. So, I do think there's gonna be some degree of protection if the prompt is sufficiently creative.
Nico: Well, I think, Eugene, we should leave it there, since it's just the two of us left. A lot of interesting thoughts to chew on, and I imagine we'll have to return to this subject in the next couple of years, because there will be litigation surrounding artificial intelligence and the First Amendment. But thanks for spending the time today, and I hope to talk with you again soon.
Eugene: Likewise, likewise. Very much my pleasure.
Nico: That was Eugene Volokh, a First Amendment scholar and law professor at UCLA; David Greene, the senior staff attorney and civil liberties director at the Electronic Frontier Foundation; and Alison Schary, a partner at the law firm of Davis Wright Tremaine. This podcast is hosted by me, Nico Perrino, produced by me and my colleague, Carrie Robison, and edited by my colleagues Aaron Reese and Ella Ross.
To learn more about So to Speak, you can follow us on Twitter or Instagram by searching for the handle "free speech talk," and you can like us on Facebook at Facebook.com/SoToSpeakPodcast. We also have video versions of this podcast available on So to Speak's YouTube channel, and clips available on ¹ū¶³“«Ć½app¹Ł·½'s YouTube channel. If you have feedback, you can email us at sotospeak@thefire.org, and you can leave a review. Reviews help us attract new listeners to the show, so please do leave one on any podcast app where you listen. Until next time, I thank you all again for listening.