'So to Speak' podcast transcript: Artificial intelligence: Is it protected by the First Amendment?

Note: This is an unedited rush transcript. Please check any quotations against the audio recording.

Nico Perrino: You're listening to So to Speak, the free speech podcast, brought to you by FIRE, the Foundation for Individual Rights and Expression. Welcome back to So to Speak, the free speech podcast, hosted by me, Nico Perrino, where, every other week, we dive into the world of free expression through personal stories and candid conversations.

Today, we're focusing on the rapid advancement of artificial intelligence and what it means for the future of free speech and the First Amendment. Joining us as we navigate the complex and ever-evolving landscape of free expression in the digital age are our guests, Eugene Volokh, a First Amendment scholar and law professor at UCLA, David Greene, the senior staff attorney and civil liberties director at the Electronic Frontier Foundation, and Alison Schary, a partner at the law firm of Davis Wright Tremaine. Eugene, David, and Alison, welcome onto the show.

Alison Schary: Thanks, Nico.

Eugene Volokh: Thanks for having me.

David Greene: Yes, glad to be here.

Nico: I should say that introduction was not written by me, but rather by OpenAI's popular artificial intelligence chatbot, ChatGPT, with just, I think, one or two tweaks from me there on the back end. I think it's fair to say artificial intelligence technologies have advanced quite a bit here in the past few months, or even past few weeks, and taken many people by surprise. It's not just ChatGPT, of course, which you can ask questions to and it can churn out surprisingly cogent responses, but there's also DALL-E, which creates AI images, VALL-E, which turns text into surprisingly human-sounding speech, and then there are tools like QuickVid and Make-A-Video from Meta, which produce AI video.

So, to start this conversation, I wanna begin by asking the basics. Should AI be granted the same constitutional rights as human beings when it comes to freedom of speech and expression, and if so, how would you define the criteria for AI to be considered as having First Amendment rights? And I should ask – that question itself was generated by AI, and I know, Eugene, over email, you had a quick take on this one, so I'll let you kick it off if you're willing.

Eugene: Yeah. So, rights are secured to persons – let's not say people necessarily, but persons.

Nico: Well, what's the difference?

Eugene: Right. So, they reflect our sense that certain entities ought to have rights because they have desires, they have beliefs, they have thought, so that's why humans have rights. You could even imagine, if somebody were to show that orangutans, let's say, or whales, or whatever else are sufficiently mentally sophisticated, we could say they have rights. Corporations have rights because humans who make up those corporations have rights, but computer software doesn't have rights as such.

So, the real question would be whether, at some point, we conclude that some AIs are like enough to us or thinking enough that they have desires, that things matter to them, and then, of course, we'd need to ask what rights they should have, which is often very much based on what matters to us. So, for example, humans generally have parental rights because we know how important parenting is to us. I don't know what would be important to AIs. Perhaps it's different for different AIs. I think it's way premature to try to figure it out.

Now, there's a separate question, which is whether the First Amendment should protect AI-generated content because of the rights of the creators of the AI or the rights of users of the AI. I'll just give you an example. Dead people don't have rights, not in any meaningful sense, but if somebody were to ban publication of, say, Mein Kampf, it wouldn't be because we think that the dead Adolf Hitler has rights, it's because all of us need to be able to – excuse me, it would be unconstitutional, I think, because all of us need to be able to read this very important, although very harmful book.

So, I think there's a separate question as to whether AI-generated speech should have rights, but at this point, it's not because AIs are rights-bearing entities. Maybe one day, we'll conclude that, but I think it's way too early to make that decision now.

Nico: Well, as a basic constitutional question – maybe, David, you can chime in here – what does the First Amendment protect? We talk about the freedom of speech. Is it the speech itself, or is it because the speech is produced by a human that it therefore needs protection, or is there just value in words strung together, for example?

David: Well, I think the First Amendment and freedom of expression as a concept more broadly protects the rights of people to express themselves, or persons, or collections of people that form entities. I agree with Eugene that I think AI is best looked at as a tool that people – persons who have rights – use to express themselves, and so, when we're talking about the rights framework around AI, we're talking about the rights of the users, of those who want to use it to create expression and to disseminate expression, and those who want to receive it. And, I do think in every other context of the First Amendment, that's what we're looking at. We're looking at the value in expressing oneself to others.

Nico: But is there a reason we've historically looked at it that way – it's because it's only ever sentient people that have produced the expression, right? It can be produced no other way. Now, you're creating a technology that can take on a life of its own and produce expression that was never anticipated by the so-called creators of that artificial intelligence, and if speech and expression has an informational value and we protect it because of the informational value it provides to, say, democracy or knowledge, is there an argument that that should be protected as well?

Alison: Well, I'm gonna come at this, I think, as a litigator. Just to be totally practical, we can hypothesize about the constitutional, theoretical underpinnings, but when push comes to shove, the way the law gets made is in lawsuits, and you have to have a person to sue, and so, if somebody is gonna bring this case, they're going to sue the owner of the – the person who distributed the speech, or the person who created, or the entity that takes ownership of or develops the AI system. That's what's happening with current ones.

So, I think as a practical matter, when these First Amendment arguments get made, they're inevitably going to be made by people, and it's either going to be the person distributing saying, "This is my speech," or it's going to be the developers of the algorithm saying, "This is my speech in the way it's put together and it's also the speech of people who give the prompts to it, etc.," but I just have trouble thinking of – other than a monkey selfie kind of situation, which got squashed by – that's kind of the most on-point precedent –

Nico: What is that case, for our listeners who aren't familiar with it?

Alison: Oh, sorry. So, the monkey selfie is when a monkey – a macaque, I think – took a nature photographer's camera, took a selfie, and then the photographer was distributing it, and I think it was PETA that sued as the next best friend of the monkey, arguing that the monkey had copyright because the monkey was the one who pressed the button, and they lost, and there is no standing for a monkey to assert copyright because it doesn't have the rights that are contemplated by the Copyright Act. So, I have trouble seeing where it's gonna get in the door to even adjudicate a First Amendment right for an AI in and of itself.

Eugene: Right. I like the litigation frame, I think it's very helpful, so let's imagine a couple of things. Let's imagine somebody sues ChatGPT, or essentially the owners of that – sue the company that runs it – and they say, "You are liable because your product said things that are defamatory," let's say. One thing it could do is it could raise its own First Amendment rights – "That's our speech" – but it could also raise third-party First Amendment rights.

So, it could say, "It's not our speech, but it's the AI's speech, and we are here to protect that," and there are quite a few cases where, for example, a book publisher could raise the rights of its authors, let's say, or, in fact, a speaker can raise the rights of its listeners. There are cases along those lines. I don't think that that's gonna get very far, because again, courts should say, "Huh, why should we think it has rights? Do we even know that it's the kind of thing that has rights?"

So, I think instead, the publisher will say – excuse me, the company that's being sued will say, "We're going to assert the rights of listeners, that if it's publishing these things, it's gonna be valuable to our readers, and it's true, the readers aren't the ones being sued here, but if you shut us down, if you hold us liable, readers will be denied this information, so we're gonna assert their rights."

And again, courts are quite open to that sort of thing, and I think the answer will be yes, readers have the rights to read material generated by AIs regardless of whether AIs have the right to convey it. The same thing would apply if, for example, Congress were to pass a law prohibiting, let's say, the use of AI software to – either prohibiting it altogether for the use of it to answer questions, or maybe imagine a rule that says that AI software can't make statements about medical care, like people would be asking it, "What should I do about these symptoms?" We don't want it to do that.

Alison: It would have to be disclosed.

Eugene: Right, you could have a disclosure requirement, or the easiest example would be a prohibition. Again, I think the AI software manufacturers would say not so much "Oh, the AI has a right to play doctor." Rather, it's that listeners have a right to gather information, for whatever it's worth with whatever disclaimers are there, and the statute interferes with listeners' rights. I think that's the way it would actually play out in court.

David: And I do think, Nico, that your question sort of assumes the inevitability of sentience from AI that I don't know how close we are to or whether we will ever get there, but we are certainly not there right now, and I hate to be one of those tech lawyers who talks tech – who tries to make analogies to the Stone Age and things like that, but there have always been tools that speakers have used to help them compose and create their speech, and I do think that AI is best thought of in that way now, and maybe there will be some tipping point of sentience or we'll have to think about whether or not there's no remedy for speech harm because there's no one to defend the speech, and maybe we would get there, but I don't think we're there yet, and I think it's actually – I think that it's a bit dangerous from a human rights perspective to give AI this decision-making that's independent of those who do the inputs into it and those who give the prompts.

It leads us to sort of a magical thinking about AI in that it's inherently objective or that it's always making the correct decision, and I don't think that's great to actually disengage it from the responsibilities – when we're in the harms framework – from those who are actually inputting data into it and giving prompts. There's really still a very significant human role in what AI spits out.

Nico: Yeah, I asked that question because as I was preparing for this conversation, I talked with some of my colleagues and said, "Hey, what would you wanna ask David, Alison, and Eugene?", and as we know from popular culture surrounding movies made about artificial intelligence, often they involve artificial intelligence that reaches a sentience that passes the Turing test, where they have an intelligence that is indistinguishable from human intelligence, and there's this popular horror flick out right now called M3GAN starring an AI-powered doll that gains sentience and murders a bunch of people while doing TikTok dances, and so, they were like, "Well, what are Megan's free speech rights?", putting the murders aside.

And then, of course, there's The Positronic Man written by Isaac Asimov a while back, which became The Bicentennial Man, featuring Robin Williams, when it was made into a movie. That was essentially the story of a lifelike robot petitioning the state for equal rights. I never like to close my mind to the possibility that technology will do miraculous things in the coming years and decades.

I think if you would ask someone 150 years ago about virtual reality, they just wouldn't have even been able to conceive it, and with the advancement of AI in the past three months, the sort of images that tools like DALL-E are turning out in some cases just blow my mind, images that look like someone drew a portrait that you would have paid thousands of dollars to see. But I want to get back to this question about liability. So, the classic example for artificial intelligence, or the classic worry from those who are very worried about artificial intelligence is, okay, you'll ask AI to do something, and you won't be able to anticipate its solution to that problem.

So, you ask AI to eliminate cancer in humans, and it decides the best option is to kill all humans, right? That'll eliminate the cancer for sure. So, when AI takes a turn, is it the programmer who is responsible, for example, if they defame someone, or if the artificial intelligence incites violence, or is it the person who takes the generative product and publishes it to the world? How should we think about that?

Alison: I think that what David was saying about thinking of AI as a tool is the right thing here. If you just let a robot come up with how are we gonna solve cancer, and then just go with it, and have no kind of humans in the chain checking whether this is a good idea, that seems pretty negligent to me.

We have claims that can account for that all the way up to all kinds of torts, but having an algorithm – being able to run the numbers and come up with solutions, and then having a human to look at those and cull through it, but kind of doing the computation – I think that's a tool, and so, if, then, a human makes the decision to publish something or to act on something, you have a person to hold liable because it's the person who took that recommendation and went with it, which is the same thing regardless of who's making that recommendation.

Eugene: So, one of my favorite poems is Kipling's The Hymn of the Breaking Strain. It's got some deeper things going on in it which, actually, I don't much care for – those don't work well for me – but it's, in some measure, a poem – it starts out as a poem about engineers. Here are the opening eight lines.

"The careful textbooks measure/let all who build beware/the load, the shock, the pressure/material can bear/So, when the buckled girder/lets down the grinding span/the blame of loss or murder/is laid upon the man/not on the stuff, the man." I used it for my torts class when I used to teach torts class, that the bridge isn't negligent, the creator of the bridge may be negligent, maybe the owner of the bridge is negligent in not maintaining it. Maybe the user of the bridge is negligent in driving a truck over it that exceeds the posted limits.

Now, to be sure, note there's one difference. The careful textbooks do not exactly measure what AIs are going to be able to do. In fact, one of the things that we think about modern AIs is precisely that they have these emergent properties that are not anticipatable by the people who create them, but it is the job of the creators to anticipate it at least to some extent, and if they are careless – this is a negligence standard, generally speaking, for these kinds of things – if they are negligent in their design – if, for example, they design an AI that can actually do things, that can actually inject people with things, and then they're careless in any failsafes they put in, they're careless in what the AI could inject people with, then, in that case, the creators will be liable, or perhaps the users, if the carelessness comes on the part of the user.

Alison: But the user's gonna sign a release no matter – you're not gonna do that in the real world without somebody signing away every possible right.

Eugene: Well, my understanding – and I'm sure it varies sharply from jurisdiction to jurisdiction, but at least in my own state of California, there are limits to releases as to, for example, bodily injury. They're not always enforceable. In fact, in many situations, they're not enforceable. So, for example, a hospital can't say, "As a condition of coming to this hospital, you waive malpractice liability." You can't do that.

So, again, it may vary from jurisdiction to jurisdiction, and what if the AI is not even in the U.S.? What if the AI is in Slovenia, and who knows what Slovenian law is on this kind of thing? Maybe it's in a place which specifically, deliberately has law that is relatively producer-friendly rather than relatively consumer-friendly. But the important thing is, generally speaking, the creator is going to be subject to a negligence standard, or, again, it doesn't have to be the creator, it could be the user, it could be whoever it is who contributes to this.

Now, one difficulty, of course, is often in trying to figure out what is negligent. What if the AI does have some capacity to manipulate things, and experts come to the stand, and they say, "Well, they did as good a job as they could have, we think"? Will the jury believe that it wasn't negligent, or will they say, "No, no, surely you must have been careless in not anticipating this particular harm"? Interesting question.

There's also the question of what if the AI only provides recommendations? Does the First Amendment provide some sort of defense against a negligence cause of action in the absence of a knowing or reckless falsehood, let's say, the libel actual malice standard and such? So, those are interesting questions, but in principle, again, I think we need to look to the people behind the AI, whether, again, its creation, or its adaptation, or its use, and not to the AI itself.

David: Yeah, I agree. I do think the answer here is the nerdy lawyer answer, that it is going to depend on the mens rea of whatever the tort claim is and whether that's going to be a negligence claim, or, as we often have in free expression cases, a higher, more demanding mens rea standard, a subjective intent standard.

And then, to what extent any act is going to be a negligent act is really going to depend on that particular AI, that tool at the moment of time, and what the known risks are, and all the context about whether someone – whether the user – what they knew about the tool and its propensity to give wrong answers or say harmful things. I do think it will end up playing out that way. We'll be looking at this just as a standard mens rea problem.

Nico: I wanna ask about fraud and misrepresentation. I've seen some futurists posit online that we'll be able to eliminate a lot of our email inbox by just training artificial intelligence on how we typically respond to emails and having it go through and respond for you. Do you think there are any concerns about fraud or misrepresentation there?

Another example protected under the First Amendment is the petition of government for a redress of grievances. I'm just thinking here about activists at organizations, not unlike FIRE, who might train artificial intelligence to make it seem like there are more activists in support of them who write and call their congressman or -woman with unique emails generated by AI, or even unique voicemails that are left at the congressional office generated by AI, but it's really just one organization or one person trying to – it's kind of like the bot problem that you have on social media.

Alison: Yeah, I feel like this exists. There are nominally –

David: Yeah, SBC, I think, had leveled some accusations about it.

Alison: Yeah. I think this exists, this is just a more efficient version of "Here, we have a bunch of letters, just sign your name here and we'll send them all out." It's slightly different because there is a human attached to each of them, but in terms of being organized by a central organizing force, I think, it's not a totally new issue, it's just probably the volume.

David: And I think it's totally possible under the law for someone to commit fraud through the use of an AI tool. There's nothing in the law that I can think of that would bar liability because the fraud was committed through the use of an AI tool as opposed to any other tool, so I think it's certainly possible, and there's probably lots of examples, but I don't see any obstacle to that.

Alison: I don't think I would trust AI to respond to my email, certainly not at this point, certainly not as a lawyer.

Eugene: So, all that sounds right to me, but let me point out a couple of things that I think are implicit, Nico, in your question. One is what if we're not after what would normally be actionable fraud or misrepresentation, like somebody signing "Eugene Volokh" and it's not actually me, it's actually an AI. That might be – in some situations – fraud. But what if it's an unsigned letter and it looks like it's a human, but it doesn't say that, and maybe it's not reasonable to just assume that it's a human who's sending it.

So, what about disclosure mandates? What about a law that says any email sent by an AI has to be flagged "sent by an AI," which, again, means that any email that a human authorizes to be sent by an AI has to have this disclaimer? Is that impermissible speech compulsion – again, impermissible violation of the rights of the human who is using AI to create this – or is this a permissible way of trying to prevent people from being misled?

A second related question relates to the fact that there is a right to petition the government, but there is no obligation as a constitutional matter on the government's part to respond to the petitions. So, for example, if you were a government agency, you might say, "We're not gonna prosecute you for sending us AI comments on some rule or some such. If you wanna do it, that's fine. You have every right to clog our inboxes.

That's not enough of a harm to justify – at least, unless it's like the equivalent of a denial-of-service attack – not enough of a harm to justify punishing that, but we will ignore – we will just not pay any attention – to anything that doesn't say at the bottom, 'I certify as a human being that this was written by me, signed, the name of the person.'" And then, if I send that certification through an AI, then I am committing, possibly, for example, the crime of a false statement to the government on a matter within its jurisdiction. That's 18 USC Section 1001, perhaps.

So, it may well be that the government and others will have to set up similar such rules to say, "Look, I'm only going to respond to messages that aren't from AIs." More broadly, you can imagine email facilities that actually do say, "Look, at least with things sent by people whom you don't know, one feature we will offer our users is the option of saying 'block all material that isn't certified as being from a human' because the last thing you want is your email box clogged by all this bot mail." And if that's so, then, again, somebody bypassing that by false certification would be committing fraud.

Alison: I think related to this is how do you solve – the way people might solve the problem, because the problem of all this is the generation of junk, just creation of junk mail, just the drowning out of the real people in the mass of – the cacophony of speech created by nonhuman means, and I think what's going to happen, potentially, is systems that place a premium on verification – not necessarily something like clicking a box, but maybe you're holding more town meetings, you're holding more hearings in person in a way that can't be gamed as much.

It can also cause – if you're making policy, maybe you're not reading the comments and you're really talking to the stakeholders who you know who they are, and that's kind of how a lot of laws have been made for a long time. Maybe that's not so different than what's already going on. I'm not sure how diligently every random person's letter into the agency is being read as opposed to the briefs – the papers that are submitted by people that they know, and have connections, and have the ability to go in and push for their position.

So, I'm not sure that – I think it's going to exacerbate a problem that already exists, and maybe what we might lose, potentially, is some of the ability for the democratic access that comes with being able to petition the government or show up as somebody who doesn't have a way to get in the door already because you might be drowned out in the unverified mass.

David: And you wonder whether the big problem ends up being that the government doesn't believe that there's popular support or popular opposition to something because they're making some assumption that it's some bot that's just spitting out these things. "That was a beautiful letter in opposition, probably written by AI, so I can just ignore it." I get more concerned about official lawmakers having some excuse to ignore really, really valuable input because they've been convinced that it's not the work of real humans.

Alison: Well, they're not convinced, but they understand that they can dismiss it in that manner, not to be cynical. I think I'm the cynical voice on this podcast.

Nico: I have family that work and/or worked in congressional offices, and one of the things when constituents or anyone calls into the office – one of the first questions they ask is "Are you a constituent? Are you in this district?" If the answer is no, then they don't really continue the conversation, but if the answer is yes, they hear the complaints and they log it. And then, for emails, they log all those emails, too. It actually surprises me how many of these offices log every constituent concern or complaint, but the problem, of course, with AI is: are they really a constituent?

When you're talking about text-to-audio, they might sound like a constituent or say they're a constituent, but that would be – speaking to Eugene's point – a misrepresentation or fraud that's already accounted for by the law. I think a big concern there would mostly be the denial-of-service-type thinking that Eugene was talking about. You only have so many people that can answer the phones and so many hours in a day, and if you keep bombarding them with AI –

Alison: Unless you use AI to sort through it. It's turtles all the way down.

David: Exactly! Although I wonder whether it might be good to think a little bit about human-AI partnerships. So, my sense is that there are a lot of people who might say, "I have some thoughts about this, but I know I'm not the most articulate person. I'm not sure I have the best arguments for this, but I'm going to use AI to create a better thing than I would have myself done, but I will endorse it, or maybe I'll edit it a little bit, or I'll review it and endorse it."

Or there may even be people who will say, "I need to write up a letter about something, so I'm just gonna let it do the first draft," the way I think a lot of translators use translation software. They realize translation software's far from perfect, but it provides a good first cut, and then that cuts down the total translation time. So, one interesting question is what should we think of that?

Alison: I think it's good. I think that saving – you have so much mental capacity in a day, in a week, and saving that for the tasks that are the highest and best use can be good if you're talking about, within my job, I'm a really good editor, but it takes me forever to write that first draft. Or maybe it's a way to get a bunch of words on a paper, and then – I love when someone else writes a first draft.

David: Right, but let's think about this specifically with an eye towards the submission of comments to, say, an administrative agency or whatever else. So, on one hand, we could say it's actually fairer to people who may not be as articulate or may not be as experienced with doing this to use the comment-writing bot, and then they have to review it and then sign it.

On the other hand, if it turns out that a lot of people, as a practical matter, don't sign it, and what they do is they get advocacy groups to submit little prompts saying, "All of our fans, why don't you use an AI running this prompt, and then, of course, edit the results as you think is necessary?", but as a practical matter, people just submit it this way, and you get all the problems – it's true, you do have somebody's at least formal statement, "I am a human and I endorse this message." It may not be practically that realistic – that much of a human judgement there.

The other problem is to the extent we do use AIs to detect AI-written stuff, which, in fact, we academics have been thinking about, whether we can do that to deal with AI-based plagiarism, like what if somebody submits to us a paper, how do we know that it's written by an AI? Well, we may run it through the AI-based AI detector.

Part of the problem, though, is that presumably, it would detect material that is written in its first draft and then only slightly modified as AI-generated material. If you think it should be accepted so long as a human endorses it, then you wouldn't really have the option of an AI being used to sort through all of those spam AI-submitted things because the human-endorsed version looks just like the one that is just submitted by a bot.

Alison: Yeah. I think we're not gonna out-computer the computer. It's always gonna be a race. It's like encryption. Somebody's gonna come up with something better, and then it's gonna get – it's just gonna be kind of like this – sorry, this is a podcast – this is me saying "one after the other."

Nico: There is a video component of it too, Alison.

Alison: Oh, good.

David: So, it's turtles all the way down – you mean elephants all the way up.

Alison: Elephants all the way up, exactly. But I think – there was an op-ed – I think it was in the Times – recently that I thought was pretty compelling. It was about people who are – it was taking the opposite view of rather than "Let's try to stamp out plagiarism," instead being like, "People are going to use this. This is a new tool. It's important to have literacy. Let's use it to say when we give a prompt of 'write X in the style of Y,' what is it drawing on? Why is it in the style of Y? What elements of it are reminiscent? What things is it getting wrong?" and having that – rather than this fear of technology, which I think is old, very old – the radio, the TV, phones, everything – we're always afraid it's gonna be the end of the world, and maybe learning how to – what to do with it and what its best use is.

I think trying to trip it up is not gonna necessarily be a productive or reliable thing, as we can't always know that we have the best algorithm to – and it may be wrong, and so, maybe just assume people might do it and have that not be the point. Learn how to incorporate it even affirmatively into a lesson.

David: Yeah, I think that's right, and my understanding is that journalists have been experimenting with using ChatGPT to do first drafts of articles, to just see – and I think we have these tools, and we should learn to become comfortable with them, and those of us who are just naturally beautiful writers and have had this advantage over everyone else because we can effortlessly spit out beautiful text – this might be an equalizer. We're going to have to find our advantage someplace else, right? But I agree, we can't be running away from these things or trying to impose equities on them that really aren't necessary.

Eugene: So, as a general matter, I think it's right that the stuff is coming, it'll be here to stay, it'll be getting better and better. We have to figure out how to adapt to it. At the same time, if my goal as a professor is to measure a student's mastery of particular knowledge, I can't accomplish that goal now through an essay that they write, especially at home, if ChatGPT can write comparably good essays.

Alison: Or you could do an oral exam still.

Nico: Yeah, that's how they do it in Europe, right?

Eugene: Right, so we may need to change things to do that. So, in a sense, we're not running away from the technology, but we are essentially saying that this technology threatens the accomplishment of a particular function. I don't think the solution is to ban the technology generally. The solution may have to be, though, to change the function so it can use the technology.

So, as to oral exams, I appreciate the value of them. That's not as good a solution, I think, in many ways, partly because it's more time-consuming for the graders, and partly because my sense is that oral exams, first of all, are further away from what lawyers do. Most of lawyers' work is written, not oral, so they measure things a little bit less, and there are also lots of opportunities for bias in oral exams that are, in some measure, mitigated in written exams. There are other opportunities for bias in written exams, but they are, in a considerable measure, mitigated.

And I'm not even just talking about race and sex, it's just appearance and manner. Everybody likes the good-looking and the fun-seeming, and in writing, thankfully, I can't tell what somebody looks like or how fun they are, I just look to see what they're actually saying. So, in any event, I do think that the AI stuff is going to be potentially quite harmful to the way we do academic evaluation – again, I agree we shouldn't ban it, but we shouldn't also ignore the fact that it should lead us to think hard about how to prevent this kind of cheating.

David: No, I agree that I think the cheating thing is something we have to deal with. There was the calculator conundrum. When I was a student was when calculators became mass available, and there was this big question about whether to allow their use in class, or was it better – and ultimately, they exist, and it's better to actually assess people with their facility with the tool than to pretend the tool doesn't exist and to require that people have this capability, and then we did the same thing with search.

I remember when I first started teaching, there were questions about whether people could use Wikipedia because it was too easy. One of the things I think we have to do as educators is get over this idea that there's some nobility in having people do it the difficult way, and one of the things we can do is to teach them how do you use available tools to do your excellent job, right? I understand the assessment gets – we have to change what we're assessing, and maybe we're assessing the output with the use of the tool instead of the output before the use of the tool.

Nico: I was watching an Instagram reel – or maybe it was a TikTok video – the other day of an attorney – maybe a real estate attorney – in New York City who asked ChatGPT to essentially draw up a real estate contract with these specific terms, like standard New York language with this force majeure and this jurisdiction, and it spit it out in four minutes, and then he splitscreened it, and he goes through – clause by clause – each term, and he's like, "This is pretty good. This would save me hours of work, and I just go in here and tweak around the edges." So, he was viewing it as kind of an augment to his work.

And I will say, as I was asking ChatGPT to write up the introduction to this show introducing these guests, saying "every other week, this is the tagline," it did a pretty good job, but I wanted to add my own kind of language around the edges because it sounded a little bit stilted, and you'll hear that as I'm asking some of the questions during the show. I asked ChatGPT to write the questions, but I needed to tweak them a little bit so it sounded more authentic.

Alison: Well, and it's based on what's been asked before, so it's gonna sound a little – it's not creating necessarily a new thing. I wanna also add, just to maybe be an optimist, just to allay the fears about academic cheating, my husband is a professor and actually wrote what I thought was a pretty interesting article about this in The Atlantic at the end of last year.

He was making a point that these are all free tools right now, and it's a sandbox, and it's a playground, and everyone can kind of go and make their college essay on it, but it's extraordinarily expensive to run these tools, and they're not going to stay free, and eventually, they're going to be incorporated into something where there is money, there's a use case for it, and that's where they're going to be used, or you're going to have to pay for it, and that's going to also make it easier to tell who is using it. So, I'm not sure that it's always gonna be the case that anyone can just hop on and use the best ChatGPT generator to generate their – it might be a now problem, but I'm not sure if it's a forever problem.

Nico: Well, that was actually interesting. I saw an exchange on Twitter where someone in the tech space – because a lot of us see this, and it's free, and we assume it costs nothing to produce, just like prior to paywalls on news websites. We were like, "Oh, yeah." But, a smart tech person asked, "Well, what are the server costs associated per use on this?", and I thought that was a smart question because it shows that there are limits to how free this technology is gonna be.

Alison: It is apparently extraordinarily expensive to run this, to do it free right now, especially with – but it's getting a lot of buzz, and people are learning about it, and there's –

Eugene: And the costs always decline. Remember how, when CD players first came around, I think people would say, "Oh, well, this is just for the rich."

Alison: Yeah, but it's the computing cost of it that is very high, so maybe it comes down over time, but right now, it's expensive, and I'm not sure how long it's going to just be anyone can screw around with DALL-E.

Nico: So, I have two more topics that I wanna get to because I know we've got a hard stop in 10 minutes, and I think David has to hop off here in five minutes, which is okay because he said he didn't wanna talk about the IP stuff or didn't have much to add on the IP stuff. We'll cover that on the last question here. But I wanna ask about deepfakes, and I wanna start by playing some audio for you all.

Video: I am not Morgan Freeman, and what you see is not real – well, at least in contemporary terms, it is not. What if I were to tell you that I'm not even a human being? Would you believe me? What is your perception of reality? Is it the ability to capture, process, and make sense of the information our senses receive? If you can see, hear, taste, or smell something, does that make it real, or is it simply the ability to feel? I would like to welcome you to the era of synthetic reality. Now, what do you see?

Nico: So, that's actually a video that I saw on social media, and we'll cut that into the video version of the podcast, but it's someone standing in front of a camera, and that someone is Morgan Freeman, but it wasn't actually Morgan Freeman saying those words, and I was talking to one of my colleagues about this, and he says right now, there's AI-based deepfake-detecting technology that has kept up with deepfake production, so it's pretty easy – if you have the technology – to determine what is a deepfake, but this looked exactly like – to my untrained eye – Morgan Freeman, and it sounded exactly like Morgan Freeman.

With all new technologies, of course, there's scaremongering, but we could have a real War of the Worlds-type panic happening as a result of deepfakes, and I imagine none of you would say that this sort of thing would be protected speech, or maybe I'm wrong. It could be fraud or misrepresentation, depending how it's used. In that case, Morgan Freeman – the AI-generated Morgan Freeman – said, "I am not Morgan Freeman, full disclosure," but you could imagine a world where they do that with then-President Barack Obama, and people think it is actually him. What are the thoughts on deepfakes?

Eugene: So, this is an important subject. It's also, like so much, not really a new subject. My understanding is that when writing was much less common, people were much more likely to look at what seems to be a formal written document and just presume that it must be accurate because, after all, it's written down, maybe it was filed in a court, and so on and so forth, but then, of course, as it became more common – I think we are all familiar with the possibility of forgeries.

It's true that we kind of grant most documents an informal presumption of validity if it looks serious, but if somebody says, "Wait a minute, how do you know it's not forged?", I think it's pretty easy for people to say, "Oh, yeah," or very likely that people react, "Oh, yeah, right, we need to have some proof, we need to have some sort of authentication."

Often, in a testimony, someone says, "Yeah, I'm the one who wrote it" or "Here are the mechanisms we have for detecting forgeries and the like." So, I do think that if somebody puts up a video that purports to be some government official doing something bad, and it turns out it's a deepfake, I think the person can say, "Look, we all know about deepfakes. This is one of them. I never did that."

Just like if you were to post a letter that I supposedly wrote, the answer is I didn't write it. I believe in the late 1800s, there was some forged letter, I want to say by then-candidate James Garfield, that played a big role in the election campaign, and it turns out it was a forgery, and I think it was denounced promptly as a forgery. One interesting question is to what extent people who really are guilty of what's depicted in the video will say, "Oh, no, no, all deepfake, I didn't do that. Why do you believe this nonsense? It's obviously fake."

Alison: It's kind of like the "I've been hacked" defense.

Eugene: Right, exactly, the "I've been hacked" defense. So, one possible problem isn't so much that people will believe too much, although it may be that, at some visceral level, the fact we've seen that even if we know it's fake, we'll still kind of absorb it, it will still color our perception of the person – maybe – part of the problem may be that people will become even more skeptical for fear of becoming too credulous.

They'll become too skeptical, and as a result, people will become very hesitant to believe even really important and genuine allegations about misconduct by people, or by governments, or by others. So, I do think it's gonna be a serious problem. I do think it's important to realize that this is just a special case of the broader problem of forgery, and if you think of deepfakes as basically video and audio forgery, then I think you see the connection more than if you just sort of have a completely new label for it. In fact, I just came up with this. I think I'll blog it later today.

David: I agree, and going back, one of the reasons that libel was a more serious offense than slander was because of the inherent reliability of the written word, and fortunately, I guess, we've had – common law has had a thousand years of creating a series of legal remedies based on falsity, whether that's damaged reputation, or emotional distress, or whatever, and I do think in terms of legal frameworks, we look to see whether those remain sufficient for this new type of false statement, and it'll be interesting, but I agree. I think societally, the idea that maybe we just don't know what to believe anymore is going to be the much more difficult thing to get used to than tort law.

Nico: Well, that's one of the things you see in societies where there's – I read Peter Pomerantsev's book Nothing is True and Everything is Possible, which is about the state of modern Russia, where they just flood the zone with shit and nobody knows what to believe anymore, and so, as a result, they just become cynical about everything. You could see that sort of situation happening, where people –

Eugene: To be fair, Russians – we've been cynical about everything for a very long time.

Alison: Not to be a media lawyer, but let me put in a plug here. One way out of this is excellent journalism because I think media literacy is important, and not just believing things because you see a picture of it is not necessarily a bad thing, but you can authenticate, you can say, "Here's this thing, here's what we did. We talked to this, we examined XYZ."

Here's showing/explaining why it's consistent with – why you feel comfortable reporting this and why you think it's authentic, or why you're not sure. I think that's helpful to show people, and good journalists can use – it's okay to have some healthy skepticism about sources – audiovisual sources like this. I don't think that's necessarily a bad thing, and I think explaining and showing people how to evaluate them is good media literacy and is good journalism.

Eugene: So, I think that's right, although one problem is this issue has come up at a time when my sense is people are much more distrustful of the mainstream media than ever before, with good reason. I think we would need to regain a notion of an ideologically impartial media to do that, not to say that the First Amendment should only protect ideologically impartial media.

I think ideologically partial media are fundamentally important – that is, ideologically one-side-or-the-other media are a fundamentally important part of public debate, but when you're getting to questions about basic fact, like is this real or is this not, and people are afraid, "Oh, well, maybe the reason that they're not investigating this is because they have some agenda, some social justice agenda, let's say, or some traditional values agenda that's keeping it from doing it," or when they say it's fake, is it being colored by their preferences?

Part of the problem is that those are really serious concerns – bye, David – and today, I think they're much more serious than ever, at a time when we need impartial media more than ever.

Nico: David, we appreciate you joining us. I know you have to run.

David: Yeah, I'm sorry I have to run. I have my canned answer, also, for the IP stuff if you want me to say it so you have the recording.

Nico: No, that's okay. I think we'll probably cover what you were gonna say anyway. The question as to IP – and David, if you need to hop off, I'll let you – is artificial intelligence has the ability to generate – and this is an artificial intelligence question right here – artificial intelligence has the ability to generate original work, such as music, art, and writing, and some have raised concerns that this could potentially lead to violations of intellectual property laws.

So, what are your thoughts on that? You say, "We want this written in the style of so-and-so," or there's this thing going around social media where DALL-E generated images of Kermit the Frog or Mickey Mouse, or there's this Al-Qaeda-inspired Muppet that has been going around and is kind of burned into some of my colleagues' brains. How do we think about that? I'm assuming what you'll say is we already have a legal framework for addressing that – fair use –

Alison: Or substantial – or copyright – to me, it doesn't so much matter how you came up with the thing that looks exactly like the copyrighted work. If you are distributing it and doing one of the things that is covered by the Copyright Act, then I don't think it necessarily matters if you used a brush to make it or if you used a computer to make it. We have a framework for that.

David: Yeah, I think that's right, and I'll give you my last bit before I hop off. I think in terms of outputs from AI, sure, AI could spit out potentially infringing materials the same way that any other tool could as well. I think the more difficult question – or, I don't think it's difficult, but I think an interesting question is in terms of the training of AI tools, using copyrighted images for training. I certainly think that using those as inputs for training purposes – I think that's a fair use, that using copyrighted images in order to train an AI tool would be a fair use of those images, but then, the output certainly could be infringing, and you would have to look at each individual output to determine whether it was or wasn't.

Nico: That's an interesting question there, David. I hadn't even thought of that.

David: Now I have to go.

Nico: Well, we'll let you go.

Alison: I've gotta go in five.

Nico: Yeah, Alison has a hard stop in five minutes. Okay, so, you're using a copyrighted work to produce a commercial product, right? I think of when I'm going to USAA, my insurer, and we're talking about what I need to insure with my home, they say, "I can't look at your home on Google Street View because we haven't created a license with Google to be able to look at your home through that product." Eugene's looking skeptical.

Eugene: That's a strange thing for them to say, I think, although who knows?

Nico: That's what they told me. It sounded strange to me.

Eugene: People say all sorts of things.

Nico: I said, "I have a split-level home. You can go on Google Street View and look at it." They're like, "No, we can't, because we haven't licensed that technology to use in our insurance underwriting business." I'm like, "Okay."

Alison: Maybe that's a liability issue.

Eugene: Maybe there's some terms of use that we don't pay attention to in Google Street View, but I will say – so, while I agree with what people have said generally, I do think there's gonna be some important legal questions that are different, so let me give you an example.

So, the Supreme Court, in the Sony v. Universal case, held that VCR manufacturers couldn't be held liable for copyright infringement done using the VCRs because that's just a tool, so you could say, well, likewise, AI developers shouldn't be held liable for the copyright infringement done using them, like, for example, if you run it and then use it in an ad or some such.

Alison: As long as there's a substantial non-infringing use.

Eugene: Right, right. But it's possible that the analysis might be different for AIs. You might say, well, first, we can expect more copyright infringement detection, as it were, from AIs than we could from just a VCR. Another thing is the VCR manufacturer had done nothing at all in developing its VCR that used anybody else's copyrighted work, so it was only the use that might be infringement.

Maybe you might say – I'm not sure this is right, but you might say if you are using other people's copyrighted work in developing, essentially, and training your AI, then that is a fair use, but only if you then also try to make sure that you're preventing it from being used by your users to infringe. Of course, there's also the complication that it may be that a lot of users' stuff will be just kind of for fun, and maybe it will look exactly – it is Mickey Mouse, and it's just for home use, for noncommercial use, maybe that's a fair use, whereas you put it in an ad, it's not a fair use, and the AI may not know what the person will ultimately use it for.

So, those are interesting questions, but at the very least, I think we can't assume that all the existing doctrines, such as contributory liability, will play out in quite the same way. One other factor is copyright law, unlike patent law, does provide that independent creation is not infringement. So, if they create something that happens to look just like a Picasso – just happens to look just like a Picasso – that's not an infringement, but of course, you might say if the training data included the Picasso, maybe that was fair use at the outset in the training, but now you can no longer say it's independent creation because, after all, it's not independent. It's very much dependent.

Then, what happens if you deliberately exclude Picasso from there, but you end up using all sorts of other artists who were influenced by Picasso, maybe even including that they had some sort of infringing elements, but that nobody sued over? In any event, I do think this will raise interesting and complicated questions because the existing doctrines have been developed in a particular environment that's now shifting.

Alison: I am also gonna throw in, and then I do have to drop – I think to be the practical lawyer angle on this, one thing that I see as impacting the way this may play out is kind of that the Copyright Office – at least so far – has refused to copyright things that were just solely generated by AI, unless there was substantial human involvement, and that's gonna affect what you can do with these kinds of works because no one's gonna wanna – rather than using an artist that you can hire and license their work from or do as a work-for-hire, use an AI, and then they have no ability to copyright the output, and then it's gonna be – it's not gonna necessarily be of valuable commercial use if you want to be able to protect what you're using the AI to create on the other end – in a commercial sense.

Nico: Yeah, that was kind of the flip side, right? Who gets to copyright works produced by AI?

Alison: Yeah, it sounds like nobody right now.

Nico: Well, I know Alison has to drop off, so if she needs to drop –

Alison: This was really fun.

Nico: Yeah, that was fun, Alison. And then there was one, Eugene. I'll let you finish up and give your thoughts.

Eugene: Sure. So, I'm not terribly worried about people not being able to copyright AI-generated works – that is to say, users not being able to copyright them. It's an interesting question whether they could, based on their creativity in creating the prompt, but let's say they can't.

The whole point of copyright protection is not copyright being valuable in the abstract, it's that it makes it possible for people to invest a lot of time, effort, and money in making a new movie, or writing a novel, or whatever else. If indeed it's very easy for you to create a work, we don't really need to stimulate the creation of that work through copyright protection – that is to say, very easy for you being the user. It may be very difficult for OpenAI to do it. That's a separate matter.

So, to be sure, copyright law does indeed protect even things that are easy to create, like I can write down an email that'll take me half a minute and no real creative effort. That email is protected by copyright law, but that's a side effect of copyright law generally protecting textual works that people write, which is motivated by a desire to have an incentive to create. If indeed a picture is easy to create with just the relatively modest effort required to select a prompt and then sort through the results, not such a big problem, I think, if that's not copyrightable.

Now, for commercial purposes, it may be important that the result could be used as a trademark, essentially – I oversimplify here, but basically, if I create a logo using OpenAI, I should be able to stop other people from selling products using a very similar logo, but I think trademark law would already do that. Trademark law already protects things that are not protected by copyright.

Nico: Do you think – Alison said, as someone who works in the copyright space, that the government isn't copyrighting anything produced by AI. Do you think eventually, we'll get to a place where it will?

Eugene: I'm sorry, that the law does not provide for this protection? You said "the government."

Nico: Yeah, what is it? It's not the patent… What government agency issues copyrights? And I should know this because I have some.

Eugene: There is no government agency that issues copyrights. A work is copyrighted when you write it down, when you fix it in a tangible medium. You can write an email, and that's protected by copyright the moment you write it, at least under modern American law. Now, before you sue, you have to register it, but that's just a condition of filing a lawsuit; it's also a condition of getting some remedies.

So, the question isn't so much if somebody is registering these copyrights, the question is whether the law offers this kind of protection. Do we say that an AI-generated image is protected? And there, the question is to what extent does that reflect the expression provided by the supposed author? So, if I just say, "Show me a red fish and a blue fish," at most, what I've provided is the idea of having a red fish and a blue fish. That's not enough to be expression.

On the other hand, if I were to give enough details, then it may be that it's protected, at least insofar as a literal copy that includes all of the details that I've asked for. It might not be infringing. So, I do think there's gonna be some degree of protection if the prompt is sufficiently creative.

Nico: Well, I think, Eugene, we should leave it there. It's just left to the two of us. A lot of… interesting thoughts to chew on, and I imagine we'll have to return to the subject in the next couple of years because there will be litigation surrounding artificial intelligence and the First Amendment, but thanks for spending the time today, and I hope to talk with you again soon.

Eugene: Likewise, likewise. Very much my pleasure.

Nico: That was Eugene Volokh, a First Amendment scholar and law professor at UCLA, David Greene, the senior staff attorney and civil liberties director at the Electronic Frontier Foundation, and Alison Schary, a partner at the law firm of Davis Wright Tremaine. This podcast is hosted by me, Nico Perrino, produced by me and my colleague, Carrie Robison, and edited by my colleagues Aaron Reese and Ella Ross.

To learn more about So to Speak, you can follow us on Twitter or Instagram by searching for the handle "free speech talk," and you can like us on Facebook at Facebook.com/SoToSpeakPodcast. We also have video versions of this podcast available on So to Speak's YouTube channel and clips available on FIRE's YouTube channel. If you have feedback, you can email us at sotospeak@thefire.org, and you can leave a review. Reviews help us attract new listeners to the show, so please do, and you can leave those reviews on any podcast app where you listen to this show, and until next time, I thank you all again for listening.
