So to Speak Podcast Transcript: Debating social media content moderation

Jonathan Rauch and Renée DiResta

Note: This is an unedited rush transcript. Please check any quotations against the audio recording.

Nico Perrino: Welcome back to "So to Speak," the free speech podcast, where every other week we take an uncensored look at the world of free expression through personal stories and candid conversations. I am, as always, your host, Nico Perrino. Now, over the summer, I received a pitch from regular "So to Speak" guest Jonathan Rauch. He wanted to debate the idea that social media companies have a positive obligation to moderate the speech they host, including based on content and sometimes viewpoint.

Jonathan recognized that his view on the issue may be at odds with some of his friends and allies within the free speech space, so why not have it out on "So to Speak"? It was a timely pitch, after all. For one, the Supreme Court was considering multiple high-profile cases involving social media content moderation: two cases dealing with states' efforts to limit social media companies' ability to moderate content, and another case dealing with allegations that the government pressured or "jawboned" social media companies into censoring constitutionally protected speech.

Of course, there was and still is an active debate surrounding Elon Musk's purchase of Twitter (now called X) and his professed support for free speech on the platform. And finally, a new book was published in June by Renée DiResta that takes a look at social media movements and their ability to destabilize institutions, manipulate events, and, in her telling, distort reality.

That book is called "Invisible Rulers: The People Who Turn Lies into Reality." Renée may be a familiar name to some in the free speech world. Her work examines rumors and propaganda in the digital age, and most famously, or perhaps infamously, depending on how you look at it, Renée was the technical research manager at the Stanford Internet Observatory, which captured headlines for its work tracking and reporting on election and COVID-related internet trends.

And as part of that effort, the observatory submitted tickets to social media companies flagging content the observatory thought might violate their content moderation policies. For that work, particularly the ticketing work, Renée is seen by some as a booster of censorship online, a description I suspect she rejects. Fortunately, Renée is here with us today to tell us how she sees it. She is joined, of course, by Jonathan Rauch, who is a writer, author, and senior fellow at the Brookings Institution. He may be best known to our audiences as the author of the 1995 classic "Kindly Inquisitors: The New Attacks on Free Thought" and the 2021 book "The Constitution of Knowledge: A Defense of Truth." Renée and Jonathan, welcome to the show.

Renée DiResta: Thanks for having me.

Jonathan Rauch: Happy to be here.

Nico Perrino: So, Jonathan, as the precipitator of this conversation, let's start with you. What is the general framework for how you think about social media content moderation?

Jonathan Rauch: Well, let's begin, if it's okay, with what I, and I think many of your listeners and folks at FIRE, agree on, which is that it is generally not a good idea for social media companies to pull a lot of stuff offline because they disagree with it or because they don't like it. That usually only heightens the visibility of the material that you are taking down, and it should be treated as a last resort. So, one reason I wanted to do this is I noticed a steady disagreement I was having with friends in the free speech community, including a lot of FIRE folks, who apply a First Amendment model to social media platforms. You know, X and Facebook and Instagram and the rest.

And they use the term censorship as a way to describe what happens when a company moderates content. And the lens they apply is: unless this is an absolute violation of a law, it should stay up, because these companies shouldn't be in the business of, quote-unquote, censorship. Well, there are some problems with that framework. One of them is that social media companies are a hybrid of four different kinds of institutions.

One of them is, yes, they are platforms, which is what we call them. They are places for people to express themselves. And in that capacity, sure, they should let a thousand flowers bloom, but they are three other things at the same time. The first is they're a corporation, so they have to make a profit, which means that they need to attract advertisers, which means that they need to have environments that advertisers want to be in, and therefore users want to be in.

Second, they are communities, meaning that they are places where people have to want to come and feel safe and like the product. And third, they are publishers. Publishers aggregate content, but then curate it in order to assemble audiences to sell to advertisers. Now, in those three capacities that are not platforms, they're not only allowed to moderate content and pick and choose and decide what's going to be up there and what's not, and what the rules and standards are going to be; they must do that. They are positively obligated to do that. And if they don't, they will fail.

So that means this is a wicked hard problem because, on the one hand, yes, free speech values are important. We don't disagree about that. But on the other hand, just saying, "Content moderation bad, boo, hiss," that will never fly. So we are in a conversation where what we need to be talking about is getting content moderation right or doing it better, not doing away with it.

Nico Perrino: Renée, I'd love to get your thoughts on content moderation generally. Presumably, you think there's some sort of ethical imperative, like Jonathan, to moderate content. That was part of your work with the Stanford Internet Observatory, right?

Renée DiResta: Yes, though not in the way that some of you all have framed it. So, let's just dive right in with that. So I agree with John and what he said. Content moderation is an umbrella term for an absolutely vast array of topical areas that platforms choose to engage around. Some of them are explicitly illegal, right? CSAM, child sexual abuse materials, terrorist content. There are rigorous laws and platform determinations where they do take that kind of content down. Then there are the things that are what Daphne Keller refers to as lawful but awful: things like brigading, harassment, pro-anorexia content, cancer quackery, things that are perceived to have some sort of harm on the public.

That question of what is harmful, I think, is where the actual focus should be: what defines a harm, and how should we think about that? There are other areas in content moderation that do refer to particular policy areas that the platforms choose to engage in, oftentimes in bounded ways. Right. I think it's also important to emphasize that these are global platforms serving a global audience base. And while a lot of the focus on content moderation here in the U.S. is viewed through the lens of the culture wars, the rules that they put in place must be applicable to a very, very broad audience. So, for example, the set of policies that they create around elections applies globally. The set of policies that they created around COVID applied globally.

I don't think that they're always good. I think that there are areas where the harm is too indeterminate, too fuzzy, like the lab leak hypothesis. The decision to impose a content block on that during COVID, I think, was a very bad call because there was no demonstrable harm, in my opinion, that was an outgrowth of that particular moderation area. When they moderate, as Jon alluded to, they have three mechanisms they can use for enforcement. There is "remove," which, as he alluded to, is takedowns, right? I agree; for years I've been writing that takedowns just create a backfire effect. They create forbidden knowledge. They're largely counterproductive. But then there are two others. There is "reduce," where the platform temporarily throttles something or permanently throttles it, and depending on what it is, right, spam is often rigorously throttled. And then "inform" is the last one. "Inform" is where they'll put up an interstitial or a fact check label or something that, in my opinion, adds additional context and that, again, in my opinion, is a form of counter speech.

These three things, content moderation writ large, have all been reframed as censorship. That's where I think you're not having a nuanced conversation among people who actually understand either the mechanisms or the harms or the means of engagement and enforcement around it. You're having, we might say, a rather propagandistic redefinition of the term, particularly from people who are using it as a political cudgel to activate their particular political fandom around this particular form of the grievance narrative that they've spun around it.

Nico Perrino: Well, what I want to get at is whether, normatively, you think content moderation is an imperative, Jonathan, because you talk about how it's essential for creating a community, to maintain advertisers, for other reasons. But you can build a social media company around a model that doesn't require advertisers to sustain itself, for example, a subscription model. You can build your community by professing free speech values. For example, Twitter, when it first got started, said it was the free speech wing of the free speech party. I remember Mark Zuckerberg gave a speech at Georgetown, I believe in 2018, talking about how Facebook is a free speech platform, making arguments for the imperative of free speech so they can define their communities. It seems like they're almost trying to define them in multiple ways. And by trying to please everyone, they're not pleasing anyone. So for a platform like X now, where Elon Musk says that free speech must reign, and we've talked extensively on this podcast about how sometimes it doesn't reign, do you think it's okay to have a platform where pretty much any opinion is allowed to be expressed? Or do you see that as a problem more broadly for society?

Jonathan Rauch: Yes, I think it is good to have a variety of approaches to content management. And one of those certainly should be, if companies want to be, you know, I'm a gay Jew, so I don't enjoy saying this, but if one of them is going to be a Nazi, white supremacist, anti-Semitic content platform, it can do that. Normatively, I don't like it, if that's what you're asking. But on the other hand, also normatively, it is important to recognize that these are private enterprises, and the First Amendment and the ethos of the First Amendment protect the freedom to edit and exclude just as much as they do the freedom to speak and include.

And that means that normatively, we have to respect both kinds of platforms. And the point that I think Renée and I are making is that the big commercial platforms, which are all about aggregating large numbers of users and moving large amounts of dollars and content, are going to have to make decisions about what is and is not allowed in their communities. The smaller platforms can do all kinds of things.

Nico Perrino: Renée, I mean, presumably you have a normative position on topics that the social media companies should moderate around. I mean, otherwise why would the Election Integrity Partnership or your Virality Project be moderating content at all or submitting tickets to these social media companies? Not moderating content yourself, of course, but submitting tickets to the social media companies identifying posts that violate the companies' policies. Presumably you support those policies; otherwise you wouldn't be submitting URLs to the companies, right?

Renée DiResta: So, the Election Integrity Partnership... let me define the two projects as they actually are, not as, unfortunately, your blog described them; it had some erroneous information about them as well. So, the Election Integrity Partnership was started to look at election rumors related to the very narrowly scoped area of voting misinformation, meaning things that said vote on Wednesday, not on Tuesday. Text-to-vote suppression efforts, that sort of thing. It did not look at what candidate A said about candidate B. It did not have any opinion on Hunter Biden's laptop. It did, in fact, absolutely nothing related to that story. The other big topical area that the Election Integrity Partnership looked at was narratives that sought to delegitimize the election absent evidence, or preemptive delegitimization. That was the focus of the Election Integrity Partnership.

The platforms had independently set policies, and we actually started the research project by making a series of tables, and these are, again, public in our 200-and-something page report that sat on the internet for two years before people got upset about it, and we sort of coded, basically: here's the policy; here are the platforms that have implemented this policy. You see a lot of similarities across them. You see some differences, which, to echo John's point, is fine; I think that in an ideal world, we have a proliferation of marketplaces and people can go and engage where they choose, and platforms can set their terms of service according to, you know, again, moderating explicitly illegal content. But they can set their own sort of speech and tolerance determinations for other topic areas.

So, within the realm of this sort of rubric of these are the policies and these are the platforms that have them: as we observed election narratives from about August until late November of 2020, every now and then students would file tickets. This was a student-led research project. It wasn't an AI censorship Death Star superweapon or any of the things you've heard. It was students, who would sit there, and they would file tickets when they saw something that fell within our scope, meaning content that we saw as interesting in the context of our academic research project on these two types of content, right? The procedural and the delegitimization content. When there were things that began to go viral that also violated a platform policy, there were occasionally determinations made to tag a platform into them.

So, that was us as academics with free speech rights, engaging with another private enterprise with its own moderation regime, if you will. And we had no power to determine what happened next. And as you read in our Election Integrity Partnership report after the election, and so after the election was over, in, you know, February or so of the following year, we went and we looked at the 4,000 or so URLs that we had sent in this kind of escalation ticketing.

So, what wound up happening was that 65% of the URLs, when we looked at them after the fact, had not been actioned at all.

Renée DiResta: Nothing had happened. Of the 35% that had been actioned, approximately 13% came down, and the remainder stayed up and got a label. So overwhelmingly, the platforms did nothing, right? Which is interesting, because they have policies, but they don't seem to be enforcing them uniformly. Or we got it wrong. Right? That's possible too. But when they did enforce, they erred on the side of, again, what I consider to be counter speech: slapping on a label, appending a fact check, doing something that indicates... And oftentimes when you actually read the text, it says, this claim is disputed. It's a very neutral "This claim is disputed. Read more in our election center here." And that was the project, right? So, the ways that people have tried to make this controversial include alleging that it was government funded. It was not. That I was a secret government agent.

Nico Perrino: The government was involved in some respects, right?

Renée DiResta: The government was involved in it in the sense that the Global Engagement Center sent over a few tickets, under 20, related to things that it thought were foreign interference. Now, keep in mind, we also exercised discretion. So, just because the Global Engagement Center sends a ticket doesn't mean that we're like, oh, let's jump on this, let's rush it over to a platform. No, that didn't happen at all. And what you see in the Jira, which we turned over to Jim Jordan and he very helpfully leaked, so now anyone can go read it, is that a ticket comes in. And again, just because it comes in doesn't mean that we take it seriously. We are doing our own independent analysis to determine whether we think a) this is real and important, and b) this rises to the level of something a platform should know.

So, there are several degrees of analysis. And again, you can see the very, very detailed notes in the Jira, again, mostly by students, sometimes by postdocs. I was an analyst on the project too; I was a second-tier analyst. And then a manager would have to make a determination about whether it rose to the level of something that a platform should see.

The other government-related entity that we engaged with was actually a nonprofit that engaged with state and local election officials. So, when state and local election officials are seeing claims in their jurisdiction, and again, this is all 50 states represented in this consortium, the Election Integrity ISAC, when they see something in their jurisdiction that says, for example, a tweet saying, "I'm working the polls in Pennsylvania and I'm shredding Trump ballots," which is an actual thing that happened, that's the kind of thing where they get very concerned. They get very upset about that. They can engage with CIS. They can file a ticket with us. And again, the ticketing was open to the RNC. It was open to the DNC. It was a very broad invitation to participate. And what they could do when they sent something to us is, again, we would evaluate it and we would decide if we thought that further action was necessary.

There is nothing in the ticketing in which a government agency sends something to us and says, "You need to tell the platforms to take this down." So again, for a lot of what FIRE wrote about in the context of jawboning, we had no power to censor. We're not a government agency. We weren't government funded. I'm not a secret CIA agent.

Nico Perrino: I don't think we said that (laughs).

Renée DiResta: No (laughs). But other people that unfortunately have been boosted as defenders of free speech very much did say that. And that's why, when I'm trying to explain what actually happened versus the right-wing media narrative and the sort of Substack parody narrative of what happened, it's not borne out by the actual facts.

Nico Perrino: I think what people have a problem with... because telling people to vote on a certain day and doing it with the intent to deceive is not First Amendment protected activity. There are some exceptions for this. And of course, I'm just talking about the First Amendment here broadly. I know these are private platforms. They can do what they please.

CSAM, child sexual abuse material, is not protected under the First Amendment. I think what people have a problem with is the policing of opinion. Even if it's wrongheaded, it's dumb, and it can lead to deleterious effects throughout society. It can destabilize. So, when you're talking about your work in the Election Integrity Partnership, and you're starting by saying, like, people deceiving about where voting locations are or the day to vote or "text in your vote here," that makes sense. But submitting tickets about trying to delegitimize the election before it happens: that's an expression of opinion. Now, we all in this room, I suspect, think that that's a bad idea and it's dumb. But it's still the expression of opinion. And I think that's where folks get most frustrated.

Renée DiResta: Can I go back to this for one second? You filed an amicus brief in "NetChoice," and you have a sentence in there, describing the sort of state theory in "NetChoice," that says: "First, it confuses private editorial decisions with censorship." So, let's be totally clear. We had no power over Facebook. I have no coercive power over a tech platform. If anything, as you've seen in my writing over the years, we're, like, constantly appealing to them to do basic things like share your data or be more transparent.

So first, there is no coercive power. Second, the platform sets its moderation policies. The platform makes that decision. And you, or not you personally, but FIRE, have acknowledged the private editorial decisions, the speech rights of the platforms, the right of the platforms to curate what shows up. So the platform is saying, "We consider election delegitimization," and again, this is not only in the United States, these policies are global, "We consider election delegitimization to be a problem. We consider it to be a harm. We consider it to be something that we are going to action." And then we, as a private academic institution, say, "Hey, maybe you want to take a look at this."

Nico Perrino: But you agree with them, presumably, otherwise you wouldn't be coordinating with them on it.

Renée DiResta: Well, it wasn't... it wasn't like I was coordinating with them.

Nico Perrino: I mean, okay, so you're an academic institution. You can either research something, right, and learn more about it and study the trends. But then you take the second step where you're...

Renée DiResta: We exercise our free speech.

Nico Perrino: Yes, and nobody's saying that you shouldn't be able to do that.

Renée DiResta: Many people have been saying that I shouldn't be able to do something else. I've been subpoenaed and sued multiple times.

Nico Perrino: I'm not saying you shouldn't be able to...

Renée DiResta: Okay.

Nico Perrino: And in fact, we have reached out to certain researchers who are involved in the project, who are having their records FOIAed, for example. And we've always created a carve-out for public records.

Jonathan Rauch: Can I try a friendly amendment here to see if we can sort this out? You're both right. Yes, people get uncomfortable, especially if there is a government actor somewhere in the mix, even in an advisory or informal capacity, when posts have to do with opinion. You used the term "policing opinion." I don't like that, because we're not generally talking about taking stuff down. We're talking about counter speech; is that "policing opinion"? But on the other hand, the fact that something is an opinion also does not mean that it's going to be acceptable to a platform. There are a lot of places that are going to say, "We don't want content like 'Hitler should have finished the job.'" That's an opinion. It's constitutionally protected. And there are lots of reasons why Facebook and Instagram and others might want to take it down, or dis-amplify it. And if it's against their terms of service, and we know it's against their terms of service, it is completely legitimate for any outside group to go to Facebook and say, "This is against your terms of service. Why is it here?" and hold them accountable to their terms of service. All of that is fine. It's protected. If I'm at FIRE, I'm for it. If the government's doing it, it gets more difficult. But we can come back to that.

So, question for you, Nico: how much better would you and your audience feel if everything that a group, let's say an academic group or other outsiders, calls to platforms' attention was done all the time in full public view? There's no private communication with platforms at all. Everything is put on a public registry as well as conveyed to the platform, so everyone can see what it is that the outside person is calling attention to. Would that solve the problem by getting rid of the idea that there's subterfuge going on?

Nico Perrino: Well, I think so. And I might take issue with the word "problem," right. Like, I don't know that academics should be required to do that, right? To the extent it's a voluntary arrangement between academic institutions and private companies, I think the confusion surrounding, like, the Election Integrity Partnership...

Jonathan Rauch: The problem is the confusion, not the...

Nico Perrino: And the fact that there is a government somewhere in the mix, right now, of course...

Jonathan Rauch: Set that aside. Separate issue. Yes. But just in terms of people doing what the Stanford Internet Observatory or other private actors do, of bringing stuff to platforms' attention, would it help to make that more public and transparent?

Nico Perrino: I'm sure it would help, and I'm sure it would create a better sense of trust among the general public. But again, I don't know that it's required or that we think it's a good thing, normatively. I think it probably is, but I can't say definitively, right?

Renée DiResta: On the government front, I think we're in total agreement, right? You all have a bill or bill template, I'm not sure where we are in the legislative process, which I agree with, just to be clear. And as...

Jonathan Rauch: Yeah, me too. I think it's the right framework.

Renée DiResta: I think that's the reason I argued with Greg about this on, like, Threads or something. But, no, I think it's the right framework. Look, the platforms have proactively chosen to disclose government takedown requests. We've seen them from, you know, Google; you can go and you can see. There are a number of different areas where, when the government is making the request, I think the transparency is warranted, and I have no problem with that being codified law in some way. The private actor thing, you know, it's very interesting, because we thought that we were being fairly transparent: we had a Twitter account, we had a blog, we had... I mean, we were constantly... I mean, I was...

Nico Perrino: You had a 200-page report.

Renée DiResta: We had a 200-page report where you can... I mean, you can... The only thing we didn't do was release the Jira, and not because it's secret. Now that Jim Jordan has helpfully released it for you, you can go try to read it, right, and you're going to see just a lot of people debating, you know, "Hey, what do we think of this? What do we think of this?"

Jonathan Rauch: The Jira is your internal...

Renée DiResta: The Jira was just an internal ticketing system. It's a project management tool. And, you know, again, you can go read it; you can see the internal debates about "Is this in scope? Is this of the right threshold?" I'll say one more thing about the Virality Project, which was a different type of project. The Virality Project sought to put out a weekly briefing, which again went on the website every single week in PDF form. Why did that happen? Because I knew that at some point we were going to get, you know, some records request. We are not subject to FOIA at Stanford. But I figured that, again, the recipients of the briefings that we were putting up included anyone who signed up for the mailing list, and government officials did sign up for the mailing list.

Renée DiResta: So people at Health and Human Services or the CDC or the office of the Surgeon General signed up to receive our briefings. So, we put them on the website. Again, anybody could go look at them. And what you see in the briefings is we're describing the most viral narratives of the week. It is literally as basic as: here are the narratives that we considered in scope for our study of election rumors. And, you know, there they are. And we saw the project as: how can we enable officials to actually understand what is happening on the internet? Because we are not equipped to be counter speakers. We are not physicians, we are not scientists, we are not public health officials, but the people who are don't necessarily have the understanding of what is actually viral, what is moving from community to community, where that response should come in.

And so, we worked with a group of physicians that called themselves "This Is Our Shot," right? Just literally a bunch of frontline doctors who decided they wanted to counter speak, and they wanted to know what they should counter speak about. So again, in the interest of transparency, the same briefings that we sent to them sat on our website for two years before people got mad about them. And then this again was turned into some sort of, "Oh, the DHS established the Virality Project." Complete bullshit. Absolutely not true. The only way that DHS engaged with it, if at all, is if somebody signed up for the mailing list and read the briefings.

Nico Perrino: So, there wasn't any Jira ticketing system...

Renée DiResta: There was a Jira ticketing system so that we internally could keep track of what was going on.

Nico Perrino: It wasn't sent on to the platforms?

Renée DiResta: In the Virality Project, I think there were 900-something tickets. I think about a hundred were tagged for the platforms, if I recall.

Nico Perrino: What were those tickets associated with?

Renée DiResta: So, one of the things that we did was we sent out an email to the platforms in advance, you know, as the project began, and we said, "These are the categories that we are going to be looking at for this project." And I'm trying to remember what they are off the top of my head: it was, like, vaccine safety narratives; vaccine conspiracy theories, you know, metal, like "it makes you magnetic," the "mark of the beast," these sorts of things; oh, and narratives around access, who gets it and when. So, again, these sorts of big, overarching, long-term vaccine hesitancy narratives. We pulled the narratives that we looked at from past academic work on what sorts of narratives enhanced vaccine hesitancy.

And what we did after that was we reached out to the platforms. We said, "These are the categories we're looking at. Which of these are you interested in receiving tickets on, when something is going viral on your platform that, again, seems to violate your policies?" Because you'll recall, they all had an extremely large set of COVID policies. Lab leak was not in scope for us. It's not a vaccine hesitancy narrative. And so, in that capacity, again, there were about a hundred or so tickets that went to the platforms. And again, they were all turned over to Jim Jordan. And you can go look at them all.

Nico Perrino: This conversation is coalescing around "can" versus "should." I think we're all in agreement that social media companies can police this content. The question is, should they? Right. So, should they have done...

Jonathan Rauch: I prefer "moderate" to "police."

Nico Perrino: Okay. I mean, but they are out there looking for people posting these narratives or violating their terms of service. So, I mean, we could debate the semantics of whether "police" is the right word, but they're out there looking, and they're moderating this content. Should they? Should they have done all the content moderation they did surrounding COVID, for example?

Renée DiResta: Well, as I've said, some of it, I think, like the lab leak, was rather pointless. But again, I didn't see the risk there, the harm, the impact that justified that particular moderation area. The Facebook Oversight Board, interestingly, wrote a very comprehensive review of Facebook's COVID policies. And one thing that I found very interesting reading it was that you really see gaps in opinion between people who are in the global majority or the global south and people who are in the United States.

And that comes through. And this again is where you saw a desire for, if anything, more moderation from some of the people who were kind of representing the opinions of, you know, Africa or Latin America, saying, "No, we needed more moderation, not less," versus here, where moderation had already by that point, beginning in 2017, become a kind of American culture war flashpoint. The very idea of whether moderation is legitimate had been sort of established as a question in the United States. That's not how people see it in Europe. That's not how people see it in other parts of the world. So, you do see that question of should they moderate, and how, in there.

Renée DiResta: I want to address one other thing, though, because for me, I got into this in part looking at vaccine hesitancy narratives as a mom back in 2014.

My cards have always been on the table around, you know, my extremely pro-vaccine stance. But one of the things that I wrote about and talked about in the context of vaccines specifically for many, many, many years, right, it's all in "Wired," you can read it, was the idea of platforms having a right to moderate. In my opinion, there's a difference between that and what they were doing for a very, very, very long time, which was they would push the content to you. So, you, as a new parent, had never searched for vaccine content. They were like, "You know what you might want to see? You might want to join this anti-vaccine group over here." Right?

So there are ways in which platforms curate content and have an impact further downstream. The sort of rise in vaccine hesitancy online over a period of about a decade, actually, you know, six years, give or take, before COVID began, is something that people, including platforms, were very, very concerned about because of its impact on public health, long before the COVID vaccines themselves became a culture war flashpoint.

So, do I think that they have an obligation to establish policies related to public health? I think it's a reasonable ethical thing for them to do, yes. And where I struggle with some of the conversation, you know, from your point of view, I think, or maybe what I intuit is your point of view, based on FIRE's writings on this, is that you both acknowledge that platforms have their own free speech rights, and then I see a bit of a tension here with, "Well, they have their own free speech rights, but we don't want them to do anything with those speech rights. We don't want them to do anything with setting curation or labeling or counter speech policies. We just want them to do nothing, in fact." Because then you have this secondary concern, or maybe dual concern, about the speech rights of the users on the platforms. And these two things are in tension for the reasons that John raised when we first started.

Nico Perrino: Well, do you worry that efforts to label, block, or ban content based on opinion, viewpoint, what's true or false, create martyrs and supercharge conspiracy theories? You had mentioned, Jonathan, the "forbidden fruit" idea. Or maybe that was you, Renée. I worry that doing so, rather than creating a community that everyone wants to be a part of, creates this sort of erosion of trust. I suspect that the actions taken by social media companies during the COVID era eroded trust in the CDC and other institutions. And I think if the goal is trust and if the goal is institutional stability, it would have been much better to let the debate happen without social media companies placing their thumb on the scale, particularly in the area of emerging science.

Jonathan, I remember we were at a University of California event, right as COVID was picking up, and we were talking about masks, just regular cloth face masks. And I think it was you or I who said, like, "Oh, no, those don't actually work. You need an N95 mask," right? And then that changed, right? Then the guidance was that you should wear cloth masks, that they do have some ameliorating effect.

Andrew Callaghan created a great documentary called "This Place Rules" about January 6th and kind of conspiracy theory movements. And he said in his reporting that when you take someone who talks about a deep state conspiracy to silence him and his followers, and then you silence him and his followers, it only adds to his credibility. Now, here we're not talking about the deep state; we're talking about private platforms. But I think the idea surrounding trust is still there. So I'd love to get your thoughts on that. Like, sure, we all have an agreement around COVID or the election, for example, but the moderation itself could backfire.

Jonathan Rauch: Well, I'll make a big point about that and then a narrower point. The big point is I think we're getting somewhere in this conversation, because it does seem to me, correct me if I'm wrong, that the point Renée just made is something we agree on: that there are tensions between these roles that social media companies play.

The first thing I said is there are multiple roles and tensions between them, and that means that simple answers like, "They should always do X and Y," or "They should never do X and Y," are just not going to fly. And if we can establish that as groundwork, we're way ahead of the game. Until now, it's all been about what they should or should not ever, ever do. So I'm very happy with that.

So, then there's the narrower question, which is a different conversation from what they should be able to do in principle, and that's, "What should they do in practice?", which is Jonathan, Nico, and Renée all sitting here and saying, "Well, if we were running Facebook, what would the best policy be? How do we build trust with our audience? Did we do too much or too little about this and that?" And the answer to those questions is, "I don't know." This is a wicked hard problem. I will be happy if we can get the general conversation about this just to the point of people understanding, "This is a wicked hard problem," and that simple bromides about censorship, freedom of speech, policing speech won't help us.

Once we're in the zone of saying, "Okay, how do we tackle this in a better way than we did in COVID?", and I'm perfectly content to say that there were major mess-ups there. Who would deny that? But once we're having that conversation, we can actually get started on understanding how to improve this situation. And thank you for that.

Nico Perrino: Well, I mean, let's take a real-world example, right? The deplatforming of Donald Trump. Do you think that was the right call? That's the first question. The second question is: was that consistent with the platform's policies?

Renée DiResta: That was an interesting one, because there was this question around incitement that was happening right in the context in which it happened. It was as January 6th was unfolding; as I recall it, maybe it was 48 hours later that he actually got shut down.

Nico Perrino: I should add that the question as to whether Donald Trump's speech on the Ellipse that day met the standard for incitement under the First Amendment is, like, the hottest debate topic...

Renée DiResta: Right, where do people come down on it?

Nico Perrino: All sides. I mean, both sides.

Renée DiResta: Interesting, interesting.

Nico Perrino: Yeah, there's not a unity of opinion within the First Amendment community on that one. You have really smart people...

Jonathan Rauch: Is that true at FIRE as well?

Nico Perrino: Yes, yes, it is.

Renée DiResta: That's interesting. So, within the kind of tech platform, I mean, tech policy community, there were, like, literally entire conferences dedicated to that. I felt like, as a moderate, I kind of maybe punted. I wrote something in "Columbia Journalism Review" as it came out about "with great power comes great responsibility."

And one of the things, when I talk about moderation enforcement: again, with the Election Integrity Partnership, when we went and looked after the fact at whether they had done anything with these things that we saw as violative, we found that 60, 65% of the time, the answer was "No." This was an interesting signal to us, because when you look at some of the ways that moderation enforcement is applied, it is usually the lower-level, nondescript, ordinary people that get moderated for inciting-type speech...

Nico Perrino: You talk about borderline speech?

Renée DiResta: Yeah, when the President of the United States does not. Right. There's a protection given. You're, like, too big to moderate.

Nico Perrino: A public figure privilege.

Renée DiResta: Sure, yeah. And I believe...

Nico Perrino: Actually, some of the platforms had that.

Renée DiResta: Absolutely. They did, yes. And this was the one thing where, occasionally, you know, every now and then something interesting would come out of the Twitter files. And it would be things like the sort of internal debates about what to do about some of the high-profile figures, where there were questions about whether language veered into borderline incitement or violated their hate speech policies or whatever else.

So, there is this question. If I recall correctly, my argument was that if you're going to have these policies, you should enforce them. And it seemed like this was one of the areas where... you have to remember also, in the context of the time, there was very significant concern that there would be continued political violence. Facebook had imposed what it calls the break-glass measures. I think I talk about this in my book.

Nico Perrino: Yeah, you do.

Renée DiResta: Yeah. And that's because, and this is something also worth highlighting, the platforms are not neutral; curation is not neutral. There is no baseline of neutrality that exists somewhere in the world when you have a ranked feed. You just can't be neutral, right? This is a very big argument in tech policy: how much transparency should they be obligated to disclose around how they curate? Not only how they moderate, but how they curate, how they decide what to show you. Because that's a different...

Nico Perrino: You have some states that are trying to mandate that by law, around their algorithms, which from FIRE's perspective would create a compelled speech issue.

Renée DiResta: But see, this is an interesting thing. I was going to raise that with you; like, the AB 587 court case is an interesting one, right? The California transparency disclosures, where, you know, the platforms have their First Amendment rights, but they can't be compelled to actually show people what's going on. But also, maybe they shouldn't be moderating. But if they moderate, they should be transparent, but they shouldn't be compelled to be transparent.

Like, we wind up in these weird circles here where I feel like we just get nowhere. We just sort of always point to the First Amendment and say, "No, no, no, we can't do that."

Nico Perrino: Well, in the political sphere, you have some transparency requirements, for example, around political contributions and whatnot.

Renée DiResta: Right, exactly. And I think there's the question of how transparency laws should be designed to do the thing that you're asking, right, to do the thing that Jon referenced also, which is, if you want to know how often a platform is actually enforcing, or on what topics, right now that is a voluntary disclosure. That is something that an unaccountable private power decides benevolently to do. In Europe, they're saying, "No, no, no, this is no longer a thing that's going to be benevolent. It's going to be mandated," right, for very large online platforms.

Renée DiResta: It's going to be a topic that's litigated quite extensively. And it again comes back to: in a free society, there are things that the law shouldn't compel, but that we, as individual citizens, should advocate for. Right? And where is that line? You can feel uncomfortable advocating for things that the law doesn't require, but I think that's just kind of part of living in a free society as well.

Nico Perrino: Jonathan, I would like to get your take on the Trump deplatforming.

Jonathan Rauch: Well, my take is I don't have a take. It depends on the platform and what their policies are. My general view is a view I got long ago from a man named Alan Charles Kors.

Nico Perrino: FIRE co-founder.

Jonathan Rauch: Yeah, you may have heard of him. And that's in reference to universities, private universities, which is: yeah, private universities ought to tell us what their values are and then enforce those commitments. So, if a university says, "We are a free speech university, for the robust and unhindered pursuit of knowledge through debate and discourse," it should not have a speech code.

Nico Perrino: Sure.

Jonathan Rauch: But if they want to say, "We are a Catholic university, and we enforce certain norms and ideas," fine.

Nico Perrino: Like Brigham Young University.

Jonathan Rauch: You know, so the first thing I want to look at when anyone is deplatformed is, "Okay, what are the rules? And are they enforcing them in a consistent way?" And the answer is, I don't know the particular rules in the particular places relating to the Donald Trump decision. People that I respect, who have looked at it, have said that Donald Trump, and now we're talking about Twitter specifically as it was, was in violation of Twitter's terms of service and had been multiple times over a long period, and that they were coming at last to enforce what they should have enforced earlier. I can't vouch for that. Reasonable people said it. So what do you think?

Nico Perrino: Well, I think there needs to be truth in advertising if you're looking at some of these social media platforms. We had talked about Mark Zuckerberg before. I think I have a quote here. He says, "We believe the public has the right to the broadest possible access to political speech, even controversial speech." He says, "Accountability only works if we can see what those seeking our votes are saying, even if we viscerally dislike what they say." Big speech at Georgetown about free speech and how it should be applied on social media platforms.

Then, at the same time, he gets caught on a hot mic with Angela Merkel, who's asking him what he's going to do about all the anti-immigrant posts on Facebook. This was when the migrant crisis in Europe was really at its peak in the mid-2010s. I think what really frustrates people about social media is the perception, and maybe the reality, of double standards. And I think that's what you also see in the academic space as well. So, you have Claudine Gay going before Congress and, I think, giving the correct answer from at least a First Amendment perspective, that context does matter anytime you're talking about exceptions to speech. In that case, they were being asked about calls for Jewish genocide, which was immediately preceded by discussion of chants of "intifada" or "from the river to the sea," which I think should be protected in isolation if it's not part of a broader pattern of conduct that would fall under, for example, harassment or something.

With the social media companies and Twitter, for example, right? So you have Donald Trump get taken down, but Iran's Ayatollah Khamenei is not taken down. You have Venezuelan President Nicolás Maduro, who's still on Facebook and Twitter. The office of the President of Russia is still operating a Twitter account. Twitter allows the Taliban spokesperson to remain on the platform. You have the Malaysian and Ethiopian prime ministers not being banned despite what many argue was incitement to violence. So, I think it's these double standards that really erode trust in these institutions, and that lead to the sort of criticism that they've received over the years. And I think it's why you saw Mark Zuckerberg responding to Jim Jordan and the House committee and saying, "We're going to kind of stop doing this. We're going to kind of get out of this game."

Jonathan Rauch: There always will be. Well, first, I would retreat to my main point that I want to leave people with, which is this problem is wicked hard, and simple templates just won't work. But in response to what you just said, I would point out that the efforts to be consistent and eliminate double standards could lead to more lenient policies, which is what's happened on Facebook, or less lenient policies. They could, for example, have taken down Ayatollah Khamenei, or Nicolás Maduro, or lots of other people. And I'm guessing FIRE would have said, "Bad, bad, bad. Leave them up." I don't know. But the search for consistency is difficult.

And if you take your terms of service seriously, and if you're saying, "We're a community that does not allow incitement or hate," or "We're a community that respects the rights of LGBT people, and defines that as 'a trans woman is a woman,' and to say differently is hate," well, then that means they're going to be removing or dis-amplifying more stuff.

Nico Perrino: Well, it depends how they define some of those terms.

Jonathan Rauch: Well, that's right. But the whole point of this is that it's going to be very customized processes. And what I'm looking for here is, "Okay, at least tell us what you want, what you think your policy is, and then show us what you're doing." So, at least we can see, to some extent, how you're applying these policies. And therefore, when we're on these platforms, we can try to behave in ways that are consistent with these policies without getting slapped in seemingly capricious or random or partisan or biased directions.

Nico Perrino: Do you think the moderation that social media companies did during the pandemic, for example, has led to vaccine hesitancy in the country?

Renée DiResta: That's a really interesting question. I don't think I've seen anything; I don't think that that study has been done yet. You know, it's very hard, I would say, to say that this action led to that increase. One of the things that has always been very fuzzy about this is: is it the influencers you hear that are undermining the... You know, the vaccines became very partisan, very clear lines. You can see, expressed in vaccination rates, conservatives having a much lower uptake. Is that because of, you know, some concern about censorship, or is that because the influencers that they were hearing from were telling them not to get vaccinated anyway?

I think it's also important to note that there was no mass censorship in terms of actual impact writ large on these narratives. You can open up any social media platform and go and look, and you will find the content there. If you were following these accounts, as I was during COVID, you saw it constantly. It was not a hidden or suppressed narrative. This is one of the things that I have found sort of curious about this argument. The idea that somehow every vaccine hesitancy narrative was summarily deleted from the internet is just not true. The same thing with the election stuff. You can go and you can find the delegitimization content up there because, again, most of the time they didn't actually do anything. So, I have an entire chapter in the book on where I think their policies were not good, where I think their enforcement was middling, where I think the institutions fell down. I mean, you know, nobody covered themselves in glory in that moment.

But do I think that the sort of backfire effect of suppression led to increased hesitancy? It'd be an interesting thing to investigate.

Nico Perrino: Do you have any insight into whether content that was posted on YouTube, for example, or Facebook, that mentioned the word COVID during COVID, was de-amplified? Because there was a big narrative...

Renée DiResta: We can't see that. We can't see that. And this again is where I think the strongest policy advocacy I've done as an individual over the last, you know, seven years or so has been on the transparency front, basically saying we can't answer those questions. One of the ways that you find out, you know, why Facebook elected to implement break-glass measures around January 6th, for example, comes from internal leaks from employees who worked there. That's how we get visibility into what's happening. And so while I understand First Amendment concerns about compelled speech in the transparency realm, I do think there are ways to thread that needle, because the value to the public of knowing, of understanding, you know, what happens on unaccountable private power platforms is worthwhile. In fact, in my opinion, it meets that threshold of the value of what is compelled being more important than the sort of First Amendment prohibitions against compulsion.

Jonathan Rauch: Incidentally, footnote, some of the momentum around compulsion for transparency could presumably be relieved if these companies would just voluntarily disclose more, which they could, and they've been asked to do many times, including by scholars who've made all kinds of rigorous commitments about what would and would not be revealed to the public.

Nico Perrino: Doesn't X open-source its algorithm?

Renée DiResta: That doesn't actually show you how they curate...

Nico Perrino: Yeah, sure. I mean, it shows how the algorithm might moderate content, but it presumably wouldn't show how human moderators would get involved, right?

Renée DiResta: Well, there are two different things there. One is curation. One is moderation. The algorithm is a curation function, not a moderation function. So, these are two different things that you're talking about that are both, I think, worthwhile. You know, some of the algorithmic transparency arguments have been that you should show what the algorithm does. The algorithm is a very complicated series of things; of course, it means multiple things depending on which feature, which mechanism, what they're using machine learning for. So there are the algorithmic transparency efforts, and then there are basic, and what I think John is describing more of, transparency reports. Now, the platforms were voluntarily doing transparency reports. And I know that Meta continues to. I meant to actually check and see if Twitter did last night, and I forgot. It was on my list.

Nico Perrino: You still call it Twitter too.

Renée DiResta: Sorry, I know. I know. But some people refuse to call it X.

Nico Perrino: No, no. It's not a refusal. It's just a...

Renée DiResta: No, no, no. It's just where it comes to mind. But no, Twitter actually did, sorry, whatever, did some interesting things regarding transparency, where there was a database, or there is a database, called Lumen. You must be familiar with Lumen.

Nico Perrino: Yes.

Renée DiResta: Lumen is the database where, if a government or an entity reached out with a copyright takedown under the DMCA, for a very long time, platforms were proactively disclosing this to Lumen. So, if you wanted to see whether it was, you know, a movie studio, a music studio, or the government of India, for example, requesting a takedown, using copyright as the argument, as the justification, those requests would go into the Lumen database. Interestingly, somebody used, I believe, that database and noticed that X was responding to copyright takedowns from the Modi government. This was in April of last year.

Nico Perrino: I think there was a documentary about, for example...

Renée DiResta: Yes. And Twitter did comply with that request. There was a media cycle about it. They were embarrassed by it because, of course, of the rhetoric around, you know, free speech relative to what they had just done, or what had been, you know, sort of revealed to have been done. And again, they operated in accordance with the law. This is a complicated thing, you know. We can talk about sovereignty if you want at some point. But what happened there, though, was that the net effect was that they simply stopped contributing to the database. So what you're seeing is the nice-to-have of, like, "Well, let's all hope that our, you know, benevolent private platform overlords simply proactively disclose." Sometimes they do. And then, you know, something embarrasses them, or the regime changes, and then the transparency goes away.

And so I actually don't know where Twitter is on this. We could check, but, you know, put it in the show notes or something. But it is an interesting question, because there have been a lot of walk-backs related to transparency, in part, I would argue, because of the chilling effect of Jordan's committee.

Jonathan Rauch: But maybe, maybe for now the best crutch is going to be private outside groups and universities and nonprofits that do their best to look at what's going up on social media sites, and then compare that with their policies, and report that to social media companies and the public. And that's exactly what Renée was doing.

Renée DiResta: And look what happened.

Jonathan Rauch: Incidentally, if we want to talk about entities that look at what private organizations are doing regarding their policies, looking for inconsistencies between the policies and the practices, and reporting that to the institutions and saying, "You need to get your practices in line with your policies," we could talk about FIRE, because that's exactly FIRE's model for dealing with private universities that have one policy on speech and do something else.

Nico Perrino: Sure.

Jonathan Rauch: It's perfectly legitimate. And it's, in many ways, very constructive.

Nico Perrino: Well, that's one of the reasons we criticize these platforms normatively, right? It's because you do have platforms that say, "We're the free speech wing of the free speech party," or "We're the public square." Or you have Elon Musk saying that Twitter, now X, is a free speech platform, but then censorship happens, or, since you guys don't like using the word censorship in the context of private online platforms, moderation happens, that is inconsistent with those policies. And we will criticize him, as we have. We'll criticize Facebook, as we have in the past.

Jonathan Rauch: Yes. I jumped on your use of the word "policing" earlier. You mentioned it's a semantic difference, and I don't think it is, because I think it would be unfair and inaccurate to describe what FIRE is doing, for example, as policing. I think that's the wrong framework. And that's really the big point I'm trying to make.

Renée DiResta: So, one of the things I think about is that you have private enterprise and then you have state regulators, and everybody agrees that they don't want the state regulators weighing in on the daily adjudication of content moderation, right? It's bad. I think we all agree on that.

Nico Perrino: And the DOJ just came out with a whole big website, I believe, on best practices for its...

Renée DiResta: Yeah, for how government should engage. And I think that's a positive thing. But the other side of it is that private enterprise makes all those decisions, and a lot of people are also uncomfortable with that, because normally when you have unaccountable private power, you also have some sort of regulatory oversight and accountability, and that isn't really happening here, particularly not in the US. The Europeans are doing it in their own way. So you have "nobody wants the government to do it, nobody wants the platforms to do it." One interesting question then becomes: well, how does it get done? When you want to change a platform policy, when somebody says, "Hey, I think that this policy is bad." I'll give a specific example.

Early on in COVID, I was advocating for state media labeling, right? That was because a lot of weird stuff was coming out from Chinese accounts and Russian accounts related to COVID. And I said, "Hey, in the interest of the public being informed, don't take these accounts down, but just throw a label on them," right? Almost like a digital equivalent of the Foreign Agents Registration Act: "Hey, state actor." Just so that when you see the content in your feed, you know that this particular speaker is literally the editor of a Chinese propaganda publication. That, I think, is, again, an improvement in transparency.
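The mechanism she is describing, attaching provenance rather than removing the account, can be sketched minimally like this. The handles, label text, and list are invented for illustration; a real system would rely on vetted attribution research, not a hardcoded set:

```python
# A minimal sketch of "label, don't remove."
STATE_MEDIA_LABELS = {
    "example_state_daily": "China state-affiliated media",
    "example_state_wire": "Russia state-affiliated media",
}

def render_post(author: str, text: str) -> str:
    # The post always renders; provenance is attached when the author is on the
    # state media list, rather than the account being taken down.
    label = STATE_MEDIA_LABELS.get(author)
    tag = f" [{label}]" if label else ""
    return f"@{author}{tag}: {text}"

print(render_post("example_state_daily", "New claims about the virus's origin..."))
print(render_post("ordinary_user", "Walked the dog today."))
```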

Those platforms did, in fact, for a while, move in that direction. And they did it in part because academics wrote op-eds in The Washington Post saying, "Hey, it'd be really great if this happened. Here's why we think it should happen. Here's the value by which we think it should happen." So, that wasn't a determination made by some regulatory effort. That was civil society and academia making an argument for a platform to behave in a certain way. You see this with advertiser boycotts too. I've never been involved in any of those in any firsthand way. But again, it's an entity that has some ability to say, "Hey, in an ideal world, I think it would look like this." And the platform can reject the suggestion. Twitter imposed state media labels and then walked them back after Elon got in a fight with NPR, right?

Nico Perrino: Sure, sure. I know, for example, that they don't always listen to us when we reach out to them, okay. But I think we're all in agreement that the biggest problem is when the government gets involved and does the so-called jawboning. I just want to read from Mark Zuckerberg's August 26th letter to Jim Jordan and the House Committee on the Judiciary, in which he writes in one paragraph:

"In 2021, senior officials from the Biden administration, including the White House, repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire, and expressed a lot of frustration with our teams when we didn鈥檛 agree. Ultimately, it was our decision whether or not to take content down, and we made our own decisions, including鈥攁nd we own our decisions鈥擟OVID-19-related changes we made to our enforcement in the wake of this pressure. I believe that government pressure was wrong, and I regret that we were not more outspoken about it. I also think we made some choices that, with the benefit of hindsight and new information, we wouldn鈥檛 make."

And then in some of the releases coming out of this committee, you see emails, including from Facebook. There's one email that showed Facebook executives discussing how they managed users' posts about the origins of the pandemic, posts the administration was seeking to control. Here's a quote:

"Can someone quickly remind me why we were removing, rather than demoting, labeling claims that COVID is man-made?" asked Nick Clegg, the company president of global affairs, in a July 2021 email to colleagues.

"We were under pressure from the administration and others to do more," responded a Facebook vice president in charge of content policy.

Speaking of the Biden administration, "We shouldn鈥檛 have done it." There are other examples, for example, Amazon employees strategizing for a meeting where the White House openly asked whether the administration wanted the retailer to remove books from its catalog:

"Is the admin asking us to remove books, or are they more concerned about the search results order or both?" one employee asked.

And this was just in the wake of the "When Harry Became Sally" incident, where the platform, or Amazon in this case, removed a book on transgender issues and got incredible backlash. And so they were reluctant to remove any books, even books promoting vaccine hesitancy.

So, I think we're all in agreement that the sort of activity you talked about there...

Jonathan Rauch: It's in a different category. And this is frankly not a difficult problem to address. FIRE's bill is in the right direction. There are lots of proposals, and they're all versions, as I understand it (you guys can correct me), of the same thing, which is: instead of having someone in some federal agency or the White House pick up the phone and yell at someone at some social media company, there should be a publicly disclosed record of any such transactions. It should be done in a systematic and formal way, and those records should be, after some reasonable interval, inspectable by the public. And that's it. Problem solved.
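A minimal sketch of what such a disclosure record might look like, assuming an invented 90-day interval and invented field names; any real statute would define both:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

DISCLOSURE_DELAY = timedelta(days=90)  # hypothetical "reasonable interval"

@dataclass
class GovernmentContact:
    agency: str
    platform: str
    summary: str
    received: date
    public_after: date = field(init=False)

    def __post_init__(self):
        # Every contact becomes publicly inspectable after the fixed interval.
        self.public_after = self.received + DISCLOSURE_DELAY

log = [
    GovernmentContact("Hypothetical Agency", "ExamplePlatform",
                      "Flagged three posts for policy review", date(2024, 1, 15)),
]

today = date(2024, 6, 1)
for contact in log:
    if today >= contact.public_after:
        print(f"{contact.received}: {contact.agency} -> {contact.platform}: "
              f"{contact.summary}")
```

The design point is the one he makes in prose: the record is created at the moment of contact, systematically, rather than depending on anyone's later goodwill.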

Renée DiResta: Yeah, I would agree with that also. I mean, this is why we wrote amicus briefs in Murthy. My residual gripe with that is that this is an important topic, an important area for there to be more jurisprudence. And yet that particular case was built on such an egregiously garbage body of fact, and so many lies, that, unfortunately, it was tossed for standing, because it just wasn't the case that people wanted it to be.

And my frustration with FIRE and others was that there was no reckoning with that. Even as, you know, Greg was talking about the bill, and I absolutely support those types of legislation, it was like, "Well, the Supreme Court punted and didn't come down where they needed to come down on this issue. They didn't give us guidance." That happened because the case was garbage, unambiguously. And, you know, I thought it was very interesting to read FIRE's amicus in that case, where it points out quite clearly and transparently, over and over again, the hypocrisy of the attorneys general of Missouri and Louisiana, and the extent to which this was a politically motivated case.

And I wish we could have held both ideas in our head at the same time: that jawboning is bad, that bills like what you're broaching are good, and also that we could have had a more honest conversation about what was actually shown in Murthy and the lack of evidence. There were very few, and for many of the plaintiffs I think none, actual mentions of their names in a government email to a platform. That through-line is just not there. So, I think we had a lousy case, and it left us worse off.

Jonathan Rauch: Bad cases make bad law. Just restating my earlier point: this decision should be made in Congress, not the courts. This could be solved statutorily. And by the way, I don't necessarily believe jawboning is bad if it's done in a regular, transparent way. I think it's important for private actors to be able to hear from the government.

I'm a denizen of old media. I came up in newsrooms. It is not uncommon for editors at The Washington Post to get a call from someone at the CIA or FBI or National Security Council saying, "If you publish this story as we think you're going to publish it, some people could die in Russia." And then a conversation ensues. But there are channels and guidance for how to do that. We know the ropes. But it is important for private entities to be able to receive valuable information from the government. We just need to have systems to do it.

Nico Perrino: I'm going to wrap up here soon, but I got an article in my inbox that was published on September 10th in the "Chronicle of Higher Education," I believe, titled "Why Scholars Should Stop Studying Misinformation," by Jacob Shapiro and Sean Norton. Are either of you familiar with this article?

Renée DiResta: Jake Shapiro from Princeton?

Nico Perrino: I don't know. Let's see if there's a byline here at the bottom. No, I don't have it here. Jacob Shapiro and Sean Norton. Anyway, the argument is that while the term "misinformation" may seem simple and self-evident at first glance, it is, in fact, highly ambiguous as an object of scientific study. It combines judgments about the factuality of claims with arguments about the intent behind their distribution, and inevitably leads scholars into hopelessly subjective territory. It continues later in the paragraph: "It would be better, we think, for researchers to abandon the term as a way to define their field of study and focus instead on specific processes in the information environment." And I was like, "Oh, okay, this is interesting. I just so happen to be having a misinformation researcher on the podcast today."

Renée DiResta: No, I'm not a misinformation researcher. I hate that term. I say that in the book over and over and over again. This is echoing a point that I've made for years now. It is a garbage term because it turns everything into a debate about a problem of fact, and it is not a debate about a problem of fact. One of the things we emphasize over and over again, and the reason I use the terms "rumors" and "propaganda" in the book, is that we have long had terms for this. Unofficial information with a high degree of ambiguity, passed from person to person: that's a rumor. Politically motivated, incentivized speech by political actors in pursuit of political power, often where the motivations are obscured: that's propaganda. That's another term that we've had.

Nico Perrino: So, you wouldn't call...

Renée DiResta: Since 1300 (laughs).

Nico Perrino: ...vaccine hesitancy or the anti-vaccine crowd that got you into this work early on as engaging in misinformation? Because you did...

Renée DiResta: I did study... here's the nuance that I'll add, right? Misinformation, and the reason I think it was a useful term for that particular kind of content, is that in the early vaccine conversations, the debate was about whether vaccines caused autism.

At some point, we do have an established body of fact. And at some point, there are things that are simply wrong, right? And again, and I try to get into this in the book, the difference between some of the vaccine narratives around routine childhood shots versus COVID is that the body of evidence was very, very clear on routine childhood vaccines: they're safe, they're effective, right?

And most of the kind of hesitancy-inducing content on the platforms around those things is rooted in false information, lies, and, you know, a degree of propaganda. Candidly, with COVID, and this is the reason we did those weekly reports, the point was to say: people don't actually know the truth. The information is unclear. We don't know the facts yet. Misinformation is not the right term. Is there some sloppiness in terms of how I used it? Probably. I'm sure I have been sloppy in the past. But I have to go read Jake's article, because, again, why do we have to keep making up new terms? Malinformation was by far the stupidest.

Nico Perrino: Malinformation is true but insidious information, right?

Renée DiResta: Yes. I mean, you can use true claims to lie. This is actually the art of propaganda going back decades, right? You take a grain of truth, you spin it up, you apply it in a way it wasn't intended. You decontextualize a fact, right? Again, I don't know why that term had to come into existence. I feel like propaganda is quite a ready and available term for that sort of thing.

Nico Perrino: I want to end here, Jonathan, by asking you about your two books, "Kindly Inquisitors" and "The Constitution of Knowledge." In "Kindly Inquisitors," you lay out two rules for liberal science: no one gets final say, and no one has personal authority. My understanding is that "The Constitution of Knowledge" is an expansion of that, right? Because if we're talking here about vaccine hesitancy and vaccines' supposed connection with autism, I guess I'm asking: how do those two rules affect a conversation like that? Because presumably, if you're taking that approach and you want to be a platform devoted to liberal science, you probably shouldn't moderate that conversation, because if you do, you are claiming final say or giving someone personal authority.

Jonathan Rauch: Well, boy, that's a big subject. So let me try to think of something simple to say about it. Those two rules, and the entire constitution of knowledge, which spins off of them, set up an elaborate, decentralized, open-ended public system for distinguishing, over time, truer statements from false statements, thus creating what we call objective knowledge. I'm in favor of objective knowledge. It's human beings' most valuable creation by far. As a journalist, I have devoted my life and my career to the collection and verification of objective knowledge. And I think platforms in general, not all of them, but social media platforms and other media products, generally serve their users better if what's on them is more true than it is false. If you look something up online and the answers are reliably false, we normally call that system broken. And unfortunately, that's what's happening in a lot of social media right now.

So, the question is, of course: these platforms are all kinds of things, right? They're not truth-seeking enterprises. They're about these other four things I talked about. Would it be helpful if they were more oriented toward truth? Yes, absolutely. Do they have some responsibility to truth? I think yes, as a matter of policy. One of the things they should try to do is promote truth over falsehood. I don't think you do that by taking down everything you think is untrue, but by adding context, amplifying stuff that's been fact-checked, as Google has done, for example. There are lots of ways to try to do that. And I, unlike Renée, am loath to give up the term misinformation, because it's another way of anchoring ourselves in an important distinction, which is that some things are false and some things are true. It can be hard to tell the difference. But if we lose the vocabulary for insisting that there is a difference, it's going to be a lot harder for anyone to insist that anything is true. And that, alas, is the world that we live in now.

Nico Perrino: All right, folks, I think we're going to leave it there. That was Jonathan Rauch, author of the aforementioned books "Kindly Inquisitors" and "The Constitution of Knowledge," both of which are available at fine bookstores everywhere. Renée DiResta has a new book that came out in June called "Invisible Rulers: The People Who Turn Lies Into Reality." Jonathan, Renée, thanks for coming on the show.

Jonathan Rauch: Thank you for having us.

Renée DiResta: Thank you.

Nico Perrino: I am Nico Perrino, and this podcast is recorded and edited by a rotating roster of my FIRE colleagues, including Aaron Reese and Chris Maltby, and co-produced by my colleague Sam Li. To learn more about "So to Speak," you can subscribe to our YouTube channel or Substack page, both of which feature video of this conversation. You can also follow us on X by searching for the handle @FreeSpeechTalk, and you can find us on Facebook. Feedback can be sent to SoToSpeak@thefire.org. Again, that's SoToSpeak@thefire.org. And if you enjoyed this episode, please consider leaving us a review on Apple Podcasts or Spotify. Reviews help us attract new listeners to the show. And until next time, thanks again for listening.
