
Why decentralization can help solve the content moderation problem | Ep. 3

Why is content moderation a cosmically huge problem, do humans need to be involved, and how can the decentralized web ensure it’s a place for everyone?

Powered by the Filecoin Foundation, The Future Rules is hosted by Forkast.News Editor-in-Chief Angie Lau alongside Marta Belcher, a top legal mind in blockchain and Filecoin Foundation Board Chair. Together with some of the most renowned names in the industry as their special guests, they dive into the future, the ethical issues that technology will raise, and how to address them today before they determine our tomorrow. From NFTs to CBDCs and beyond, the team explores issues of civil liberties, law, compliance, human rights, and regulation that will shape the world to come.

Find more episodes in the podcast series: The Future Rules

In this episode, Alex Feerst takes a deep dive into content moderation. Feerst co-launched the Digital Trust and Safety Partnership, which establishes best practices for ensuring online safety, and now leads Murmuration Labs, which works with tech companies to create and improve online trust and safety practices. He explains what content moderation involves and why it’s a cosmically huge problem, and he considers the need for human involvement and the nuanced tools necessary to ensure the decentralized web is a place for everyone.

Highlights

  • Why content moderation is a tricky problem: “When we think about like content moderation, in some ways, I think the first thing to realize is like the enormity of the problem, which is like having issued over a billion printing presses and film studios to everybody in their phones and all this technology to distribute it to each other and to bounce off each other. We’re talking about the problem of how are we going to interact in a way that doesn’t end in total chaos. Content moderation is a sort of modest word, I think, for what people do when they design platforms and network systems to try to channel human conduct in ways that are going to mitigate some risk and keep people as healthy as possible.” (Alex Feerst)
  • The problem with automating content moderation: “The goal of trying to automate as much as possible and trying to help humans do the work more efficiently has been something people tried to do for 15, 20 years already, but I think there’s a couple of things going on. One of them is that when it comes to human judgment around the subtleties, there’s things like irony and infinite types of context – machines are still pretty bad at understanding all of these different lenses that can make one form of speech more OK in context, and something else really problematic. And so a lot of times we’re nowhere near having anything that approaches human judgment when it comes to figuring out how to think about human expression. There are ways to use machine learning and natural language processing and other tools to look at a network and see trends emerging that are problematic. But when it comes to sorting through individual pieces of expression, the solution is surprisingly manual.” (Alex Feerst)
  • The spirit of Web 3.0: “When I think about Web 3.0, it fits extremely well with the spirit of having lots of communities and lots of different norm sets that you can opt into and ensuring that they might have different outcomes and that you can have all these different environments coexisting and resist the flattening impulse to centralize them all into one thing.” (Alex Feerst)
  • The importance of decentralization for content moderation: “I think the design challenge and what we’re spending a lot of time working on is how to make sure that we keep this spirit of decentralization very core to the way we’re implementing content moderation so that we don’t wind up with the same outcomes that we would have had, that you do get the sort of diversity of choices and that you get different actors in the network acting differently and having different incentives to experiment with different things.” (Alex Feerst)
  • Solving versus managing content moderation: “I think content moderation is cosmically interesting and cosmically hard because I think it is not a problem to be solved – it’s sort of an emanation of the human condition, and human expression is a condition to be managed, not a problem to be solved. And so you’re not going to have perfection, just as you’re never going to have a crime rate that is zero, or you’re not going to have a society that is perfectly happy. I would maybe think about it this way, there are some things which are illegal and taboo virtually everywhere, and then there are some things which are illegal in some places, taboo in other places, merely awful in other places… What we’re really doing is trying to implement tools that allow for all of these dimmers and proportionality and norms to exist so that people can have different environments where different things are acceptable and that there’s fewer capacities for things to get out of control.” (Alex Feerst)
  • The dangers of direct speech regulation: “Part of the fundamental misunderstanding and lack of trust now is the notion that direct speech regulation could be good. My view on that is it’s virtually never good to do direct speech regulation and that there’s a long and bad history on this and that speech, for all the reasons we’ve been talking about, is too nuanced and too subtle. If you try to directly regulate speech, it’s incredibly dangerous. The incredible danger of this current moment is that some of the laws that Europe and the US are looking at admirably attempt to try to mitigate risk and encourage companies to do that, but they do it in a language that is virtually identical to the laws that would be usable for censorship. So it’s an incredibly dangerous temptation, it leads you down a bad road and it really gives cloud cover to illiberal regimes that want to censor, and could do so, using very similar language to the sort of public safety language that’s being used.” (Alex Feerst)

Transcript:

Angie: Welcome to The Future Rules. I’m Angie Lau, Editor-in-Chief and Founder of Forkast News.

Marta: And I’m Marta Belcher, Chair of the Filecoin Foundation. Today we’re going to talk about content moderation on the decentralized web.

Angie: What does it do for us, should we be concerned about free speech, and how will it evolve in the future?

Marta: Exactly, we’re lucky to have with us Alex Feerst, who leads Murmuration Labs, which builds tools for trust and safety for the decentralized web. He’s also an advisor to the Filecoin Foundation and was previously Head of Legal and of Trust and Safety for Medium. 

So, let’s first talk about content moderation in general, before we dive into content moderation on the decentralized web specifically.

Angie: Alex, welcome to the show. What is content moderation and why should we care about it?

Alex: Thanks for inviting me to talk about this. I was so drawn to this problem and I’ve been working on it for seven-odd years now because I think content moderation is another word for just other people’s expression, just like the hell of other people’s conduct and expression. It’s a cosmically huge problem. All the different things that human beings do, and are motivated to say, now that we’re much more connected, are all under this giant bucket. So I think when we think about like content moderation, in some ways, I think the first thing to realize is like the enormity of the problem, which is like having issued over a billion printing presses and film studios to everybody in their phones and all this technology to distribute it to each other and to bounce off each other. We’re talking about the problem of how are we going to interact in a way that doesn’t end in total chaos. Content moderation is a sort of modest word, I think, for what people do when they design platforms and network systems to try to channel human conduct in ways that are going to mitigate some risk and keep people as healthy as possible.

Angie: It’s just the earliest form of human relationship, right? It’s how we want to talk to each other, how we want to express ourselves. But then you just scale it and magnify it to, you know, to a magnitude of a million. And here we are in the 21st century where everybody can be exposed to everybody else’s opinions and thoughts and ideas. And sometimes that’s a great thing and sometimes that’s not a good thing. I mean, let’s talk about the internet even – it’s come a long way from bulletin boards and AOL chat rooms. Take us back perhaps, for those of us who maybe weren’t there at the time, and tell us some of the key changes.

Alex: I think there’s a couple of ways to talk about internet history. It’s amazing that the last 20 or 30 years feels like a very, very long time. But in some ways we see the same challenges and the same patterns of behavior over and over again, even in the short time we’ve had. So you start with things like your AOLs and your Prodigys, and you have things that, from today’s perspective, are relatively centralized services – they used to call them portals – where people enter in and you have the ability to interact with all these other people. And I think there’s a couple of themes that I would mention, because I think they keep coming up and we’re still dealing with them. One of them is how structured you want your environment to be.

You have environments like Twitter, which are largely sort of undifferentiated masses of people that are able to encounter each other through serendipity and through topic and through sometimes very magical random interactions and sometimes very unmagical random interactions, but that’s part of what makes it so wonderful. And then with something like AOL in the early days, you had the idea of, OK, we should have rooms, we have topics, we should create some structure for channel and human interaction because we have a sense, not even necessarily, things will go wrong, but just that people want to talk about relevant things in the right place with each other. 

To me, that’s one of the big themes over the past 30 years, which is structure versus letting things lack structure. Especially in an environment that’s totally artificial, like being online, you have the ability to sort of selectively repeal or partially repeal some of the laws of nature – you can allow one person to communicate to millions and you can allow things to scale or go viral or move very quickly. And so a lot of the laws of the natural world that keep speech working in a certain way – like how loud my voice can go, how many people can hear at the same time, its ephemerality, the fact it’s not permanent – they’re different online.

And so those structures can help us sometimes, like try to replicate the ways that norms work in the real world. Another thing is realizing that somebody needs to oversee the community, or that people have a role of moderating – and I think you have sort of community in the more positive sense before you start calling it moderation – but there’s the question of whether they should be professionals, whether they should be volunteers, whether you just have elders or mentors who sort of rise up and start to take responsibility for teaching people how to act or suggesting how to act in a certain space.

Even from the early days of AOL you have tools, like blocking and suspending accounts. You have a sort of legalistic approach eventually of making a rule set. Like, OK, let’s try to channel the behavior in a way that makes sense and have our own little sort of common law of our online space that’s going to help people. And then over the years, I think really we’ve seen these things recapitulate in a lot of different ways, but I don’t know that any of those have actually changed so much – if you look at Reddit and subreddits, that’s one particular version of how do we structure the online space so that people have clarity around what to do and know how to interact. But when you move to the social web, it becomes much, much easier for people to interact, the number of people gets way higher, and with the conditions of going viral and getting distributed, you can become much more present for a lot more people.

Marta: I think that brings up this issue of just the scale at which content moderation is done today and how difficult it is to do content moderation at that scale. So could you talk a little bit about how content moderation has scaled over the years and the challenges that arise from that?

Alex: The law of large numbers is one of these things. I always try to constantly remind myself of these sorts of things – five hundred million tweets per day and X millions of hours of video on YouTube, X billions of images uploaded to Facebook and Instagram. And I think a couple of things become clear. One of them is the amount of activity in these networks is so teeming that you have humans bouncing off each other in ways that are both like very, very large numbers, but then also stimulating each other into further scale and further velocity.

I think a truth of content moderation also is that, even if it is possible to determine what an error rate is, let’s say you have an error rate of whatever, 0.01, or you’re inconsistent – sometimes you get these right, sometimes you get those right.

You know, 0.01 percent, when you’re dealing with five hundred million tweets a day or billions of images, there’s inevitably going to be errors, and those problems are going to be human and messy and there’s human dignity at stake. So, I think one of the largest challenges of all of it is when you’re attempting to regulate this amount of sheer human conduct and expression going through the pipes. I sometimes say that it is inevitable that everybody will eventually be mad at you.

Marta: I would add that I think part of the problem with the scaling here is that so much of this has to be human, which is sort of counterintuitive, because you would think this is something that machines ought to be able to do. But, you know, I’ve seen this done in various conferences and other places where we’re talking about scaling content moderation, where you actually put in front of the people in the room – OK, how would you make this call or that call? 

And it turns out, once you actually get down to looking at making decisions about what gets blocked and what doesn’t, it’s hard, even as a human, to make those decisions. And you can imagine it’s completely impossible for a machine to do. So I guess maybe you want to talk a little bit about that, about why machines can’t be the ones to primarily do this, or the extent to which they’re obviously assisted by machines, and talk a little bit about what’s machine, what’s human here.

Alex: I think, of course, like the goal of trying to automate as much as possible and trying to help humans do the work more efficiently has been something people tried to do for 15, 20 years already, but I think there’s a couple of things going on. One of them is that when it comes to human judgment around the subtleties, there’s things like irony and infinite types of context – machines are still pretty bad at understanding all of these different lenses that can make one form of speech more OK in context, and something else really problematic.

And so a lot of times we’re nowhere near having anything that approaches human judgment, when it comes to figuring out how to think about human expression. There’s ways to use machine learning and natural language processing and other tools to look at a network and see trends emerging that are problematic. But when it comes to sorting through individual pieces of expression, the solution is surprisingly manual. And again, it’s like a human machine collaboration. 

But you really have very large numbers of people, and I think from the early days at YouTube and Facebook, it was understood there was no way to simply automate out of existence the need for human judgment. And I do not see any time soon the magical A.I. solution, or the true automation of it. And I just want to also mention I think there’s also a subjective dignitary aspect to it, which is I think when you talk to people who have had something that they’ve written removed or taken down or whatever, they experience it differently if they believe that it was done through an automated means versus if they believe that a human did it.

Not that they’re thrilled that a human did it, but there’s really this dignitary feeling of who are you to tell me that, because of algorithmic reason X. And that’s another thing that I think is very hard to automate.

Angie: And so when you take a look at the promise of Web 3.0, and the promise of blockchain as it pertains to just this growing mass of really social issues that arise out of something as complex and humanly deep as content moderation, what are the benefits and the promise of Web 3.0 here?

Alex: I think one of the things that is so exciting about blockchain applications and Web 3.0 is the spirit of decentralization, you’re going to want a diversity of approaches, you’re going to want ways for different communities to achieve different versions of consensus on what is OK and what is not OK. And you sort of are building an architecture that resists the impulse to centralize and to have one size fits all and to flatten out those differences. 

And I think if you look at a lot of the critique now of the larger platforms and also the new excitement around antitrust law in the US, part of what I think you’re seeing is the belief that there needs to be a diversity of venues, and a diversity of networks, and as many possible ways for communities to think about norms and think about what’s OK, because then for somebody, there’s not no place for you. There may be fewer places or more places for different types of things, but there’s a lot less of a chance of going from having some place to having no place. Right. 

And so when I think about Web 3, it fits extremely well with the spirit of having lots of communities and lots of different norm sets that you can opt into and ensuring that they might have different outcomes and that you can have all these different environments coexisting and resist the flattening impulse to centralize them all into one thing.

Angie: That’s utopia, but on the other side is the dystopic version of that – there’s always a black to the white, and we live in a world of gray. Marta, I’m just conscious of the fact that, as you think about the future rules, as it were – civil rights, civil liberties and free expression – philosophically, where does this leave us? Is this something that enables us, that allows freer speech, or do we lose control?

Marta: Well, I think that having the decentralized web means that you don’t have to have central intermediaries that are doing content moderation. So everyone doesn’t have to abide by just a handful of companies’ rules. That doesn’t mean that there aren’t going to be any rules. That doesn’t mean you’re able to share horrible content without any restrictions. 

But it just means that the power to do that content moderation, to make those decisions, doesn’t need to be centralized in just a few players’ hands. So the way that we’ve been thinking about it at the Filecoin Foundation and working closely with Alex at Murmuration Labs is setting up ways to do content moderation at scale that are decentralized, where you have a bunch of different nodes that are all able to essentially make their own decisions about what kind of content they are hosting, but making that really easy for them.

So, for example, creating tools that allow people to build lists that are – well, here’s the list of content that’s bad for this reason, or here’s the list of content that’s bad in a particular jurisdiction, and having a bunch of different lists that people can effectively opt into at a node by node level. 
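[Editor’s note: As a rough illustration of the node-level, opt-in lists Marta describes – this is a hypothetical sketch, not Murmuration Labs’ or Filecoin’s actual tooling, and every name and identifier below is invented – a storage node might subscribe to named denylists and check requested content against them like this:]

```go
package main

import "fmt"

// Hypothetical, illustrative types only – not an actual Filecoin or
// Murmuration Labs API. A Denylist is a named list of content
// identifiers (CIDs) an operator can opt into, plus the reason the
// list exists (e.g. "illegal everywhere", "restricted in jurisdiction X").
type Denylist struct {
	Name   string
	Reason string
	CIDs   map[string]bool
}

// Node models a single storage node and the lists its operator has
// chosen to subscribe to.
type Node struct {
	Subscribed []Denylist
}

// ShouldServe reports whether this node will host or serve a given CID,
// based only on the lists its operator opted into, and explains why not.
func (n *Node) ShouldServe(cid string) (bool, string) {
	for _, list := range n.Subscribed {
		if list.CIDs[cid] {
			return false, fmt.Sprintf("blocked by list %q (%s)", list.Name, list.Reason)
		}
	}
	return true, ""
}

func main() {
	universal := Denylist{
		Name:   "universal",
		Reason: "illegal virtually everywhere",
		CIDs:   map[string]bool{"example-cid-1": true},
	}
	jurisdictionX := Denylist{
		Name:   "jurisdiction-x",
		Reason: "restricted in jurisdiction X",
		CIDs:   map[string]bool{"example-cid-2": true},
	}

	// Two operators make different choices: nodeA opts into both lists,
	// nodeB only into the universal one.
	nodeA := Node{Subscribed: []Denylist{universal, jurisdictionX}}
	nodeB := Node{Subscribed: []Denylist{universal}}

	fmt.Println(nodeA.ShouldServe("example-cid-2")) // false, blocked by "jurisdiction-x"
	fmt.Println(nodeB.ShouldServe("example-cid-2")) // true, this node still serves it
}
```

The point of the sketch is only that each operator chooses its own set of lists, so two nodes can reach different answers for the same piece of content – which is the diversity of outcomes the conversation turns to next.]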

Alex: It’s a really interesting question for this kind of work. If you have several thousand actors in the network who are each able to make decisions, you want to give them, as Marta said, the tools to make their individual decisions, and then you also want to acknowledge they don’t have infinite time.

So you want to give them a head start, and you want to give them the ability to make the decisions they have in the time that they have, without also running into what one scholar calls the content cartel problem – which is, if you have lists and then everybody opts into everybody else’s lists, you wind up with an outcome where you have this more decentralized network, maybe, but then everybody copies each other and winds up with a similar outcome, as if you had had a few centralized players.

And so part of, I think the design challenge and what we’re spending a lot of time working on is how to make sure that we keep this spirit of decentralization very core to the way we’re implementing content moderation, so that we don’t wind up with the same outcomes that we would have had, that you do get the sort of diversity of choices and that you get different actors in the network acting differently and having different incentives to experiment with different things.

Angie: Is it a design problem? And what is the problem that we’re trying to fix? Are we optimizing for free speech? What kind of compromises are acceptable – or is nothing acceptable, do we not want to compromise on it at all? How do we make sure we don’t hurt people? And, you know, I guess it’s one person’s utopia and another person’s dystopia. What is it that we are trying to design for?

Alex: I think this goes back to why I think content moderation is cosmically interesting and cosmically hard, because I think it is not a problem to be solved – it’s sort of an emanation of the human condition, and like human expression is a condition to be managed, not a problem to be solved. You’re not going to have perfection, just as you’re never going to have a crime rate that is zero, or you’re not going to have a society that is perfectly happy. 

I would maybe think about it this way, there are some things which are illegal and taboo virtually everywhere, and then there are some things which are illegal in some places, taboo in other places, merely awful in other places. And then there are all forms of speech that, depending on their context, are unacceptable, embarrassing, abusive – like people say, “lawful, but awful.” And you want to accept that there are dimmers on these things so that, whenever possible, human beings and their social norms can act to scale them and keep them in proportion to how risky and terribly harmful they are.

What we’re really doing is trying to implement tools that allow for all of these dimmers and proportionality and norms to exist so that people can have different environments where different things are acceptable and that there’s fewer sort of capacities for things to get out of control. 

Marta: There’s a lot to be said about content moderation from private actors versus content moderation from government actors, but I think those are things that often get conflated, and I would love to hear your take, especially because these types of questions are coming up very, very frequently in the legislative world these days in the US.

Alex: It’s been a long, hard road on this because I think we’re at a level of low trust right now between government and tech and other actors, all of whom are trying to do what they view as the right thing in protecting the public. I think part of the fundamental misunderstanding and lack of trust now is the notion that direct speech regulation could be good. 

My view on that is it’s virtually never good to do direct speech regulation, and there’s a long and bad history on this – speech, for all the reasons we’ve been talking about, is too nuanced and too subtle. If you try to directly regulate speech, it’s incredibly dangerous. There’s a history with authoritarian, illiberal regimes: they do not pass a censorship law, they pass a public safety law, and then apply the ostensible public safety law to crack down on political criticism. And so the incredible danger, I think, of this current moment is that some of the laws that Europe and the US are looking at admirably attempt to mitigate risk and encourage companies to do that, but they do it in language that is virtually identical to the laws that would be usable for censorship.

So it’s an incredibly dangerous temptation, it leads you down a bad road and it really gives cloud cover to illiberal regimes that want to censor, and could do so, using very similar language to the sort of public safety language that’s being used. That said, I think there are things that the government can and should do, and I think part of this comes down to – and it took me a long time of thinking about this to come to this conclusion – the fact that platform design and system design is a very indirect art. A media company creates content, puts out content, distributes it and has some level of control and oversight over what it does, and that’s true for lots of companies.

The thing about platform companies in this flavor of it is that you create incentives and structures and you design the way that they work, but it’s really normal people’s conduct. You’re really creating an environment that fosters and discourages and creates probabilities of different things happening or not happening, and that to me is incredibly different and incredibly important, because if you’re going to regulate that, you’re really regulating normal people’s expression and conduct. What you’re really doing is stopping citizens, right? It’s a pass-through, right? You’re going to restrict the way that platforms do X, Y and Z with speech regulation, and you would normally never let the government directly regulate speech that way. If you’re using the platforms as a pass-through to regulate speech, you’re getting into a very dangerous game.

But there are processes and other things that I think it makes total sense to potentially regulate. So, of course, transparency is one of them – having rules around platforms, like publicly saying: what interventions do you take, how do you take these interventions, how do your rulesets work, how many things have been taken down and how many things have been suppressed. And the reason I think these sorts of process-oriented rules are so important and better is that they put us into an ecosystem where journalists can report on it, citizens know what’s going on, researchers can study it.

And you’re having a much larger conversation that drifts around the norms, as opposed to something that attempts to do something very directly and efficiently, and winds up with some really potentially bad outcomes, if that makes sense. I think things around sort of transparency and standards and other best practices for platforms are, for me, a way to try to ensure that this type of development work is being done thoughtfully and carefully.

Angie: It’s a learning curve, right? I mean, I think we all experienced it first hand, in real time, when we saw platforms actually take down the voice of the single representative of the US government, and I might be opening Pandora’s box here by just bringing it up, but I’m curious, Alex, how you viewed Twitter coming in and kicking President Donald Trump off of its platform?

Alex: Twitter and Facebook had a very hard decision to make at that point. They’d also had years of like prior behavior that they were sort of dealing with, so I think it came at a particular moment. Maybe I would start this way, by acknowledging that a lot of content moderation rulesets often have a newsworthiness safety valve, which sort of says: if you have conduct or content that’s otherwise harmful or abusive or intentionally creating misinformation or trying to foment harm or something like that, if it seems newsworthy, we might relax those rules a little bit, because it’s important for people to know about newsworthy things, right? And that rule sort of served well, and serves well a lot of the time.

But what I think arguably happens with the case of Trump, and some politicians, is that you get the ability to hack that rule by saying – if the president does something, it is de facto newsworthy because – or if the White House announces something, it is de facto newsworthy, and therefore whatever is said should be amplified and should not be treated on the basis of what it actually says. 

I think part of what led to this flash point right around the election was the acknowledgment that you have rulesets that are designed to mitigate harm and then they’re sort of hackable and you have to try to keep up with these edge cases, where it seems like the rules that you’ve tried to put in place to do things well, are having things fall through the cracks. This question maybe feels more momentous if you are led to believe that Twitter is everything, but it’s not, it’s one of many platforms. 

And so the more platforms you have and the more places you can go to hear things and to speak and to have your speech amplified, if it’s the right context, the less any one of these things really matters. Part of those sorts of discussions is really a symptom of people’s perception, which may be correct or incorrect, that there’s not that many venues where their speech can be amplified. I think also there’s a separate conversation that has been categorized under “free speech doesn’t mean free reach,” which means that you may be entitled to express yourself in lots of different venues, but you’re not entitled to the maximum amount of amplification that you think you’re entitled to at any given moment.

And so I think there’s a lot of nuance around what really happens – for example, putting a label over something, saying OK, this may be contested, or maybe the statement goes against something that the CDC has put out – and then having people call that, quote unquote, censorship, I think shows a little bit of a degradation of some of the rigor around what we’re really talking about, because I think with a lot of these non-binary solutions, you don’t need to take something down – we can put up a label, we can put up a screen that you click through for friction, we can create other ways of trying to draw people’s attention to something that’s potentially harmful.

I think lots of folks understandably don’t appreciate some of those things, but I think it’s worth taking into account all these different strategies short of de-platforming that exist, as platforms try to sort of, you know, mitigate harm and nudge things in different directions.

Angie: And therein lies the challenge of The Future Rules. Marta, I’m going to let you have the last word as you think about just what the future holds in terms of free speech and all of the things that currently we face and experience in real time, and how you’re thinking about decentralization along with Alex and everybody else in Web 3.

Marta: The initial reaction to decentralization is, well, if you decentralize, how are we going to prevent bad content from proliferating? I think that’s the knee-jerk response to talking about decentralized technologies. But I think it’s actually the opposite – decentralization actually gives us the ability to take content moderation decisions and scale them in a way that they haven’t been scaled before, with a diversity of views and a diversity of actors who are making those content moderation decisions, instead of just a couple.

 And so I think it’s actually a really exciting new area and all we need to do here is build the tools on top of decentralized technology, which is the very important work that Murmuration Labs and Alex are doing. So very, very grateful to get to have this conversation with him and for all of his work on these decentralized content moderation tools.

Angie: Thank you so much, Alex Feerst, lead of Murmuration Labs and advisor to the Filecoin Foundation. And of course, my co-host, Marta Belcher, Chair of the Filecoin Foundation. Guys, thank you so much.

Alex: Thank you. Great talking to you.

Marta: Thank you, Alex and Angie.

Angie: You can listen and subscribe to The Future Rules anywhere you get your podcast fix and find the full series on the Forkast website. We hope to meet you all here again in the future.