09.24.2021

YouTube’s Chief Product Officer on Misinformation Crisis


CHRISTIANE AMANPOUR: With over half the American population fully vaccinated, a hesitancy similar to the one around climate, which we discussed, lingers on, due in large part to misinformation spread online. Neal Mohan is the chief product officer at YouTube and is in charge of combating this issue at the platform. Here he is speaking to Hari Sreenivasan about striking the balance between free speech and dangerous lies.

(BEGIN VIDEO CLIP)

HARI SREENIVASAN: Thanks, Christiane. Neal Mohan, thanks for joining us.

NEAL MOHAN, CHIEF PRODUCT OFFICER, YOUTUBE: Hi, Hari. It’s great to be with you.

SREENIVASAN: So, Neal, for our audience to have some sense of how big YouTube is, some of the stats we should keep in mind are that there are about 2 billion users of YouTube, they consume about a billion hours of video every day, and about 500 hours of new content is uploaded to YouTube every single minute. While there's no doubt that there are phenomenal YouTube videos for all kinds of things that would be helpful in my life, that lack of a gatekeeper also creates an opportunity for people to use your platform to misinform or disinform audiences that might not know any better. So, what kinds of steps have you taken in the past few years to try to deprioritize that when it comes to someone searching for the answer to a question?

MOHAN: So, that responsibility that you describe, Hari, in terms of ensuring our platform is not a place for misinformation or other types of, you know, what we deem to be violative content to spread is my number one priority. It’s the top priority of — you know, of all of us at YouTube. If you’ll indulge me for a minute, I’ll give you sort of the — our full approach to that challenge. The first step, of course, is that, yes, we are an open platform and I’m a firm believer in the power of that open platform. Our mission is to give everyone a voice and show them the world, which is about giving everybody a creative outlet, a business outlet, but also giving people, as you’re describing, access to information that helps them make their lives better, and I think that comes from an open platform. But it has never been anything goes. We’ve always had community guidelines on our platform. And those community guidelines, whether it’s around misinformation or hate speech or harassment, child safety, violent extremism, govern the content that can remain on our platform or that will get removed. In addition to the first R of remove, we have three other R’s that I would argue are equally if not more important. The next one we call raise. And that’s about, when users are looking for information around a fast-breaking news event, for example, or a health crisis like the COVID-19 pandemic, we endeavor to raise up content from authoritative sources, from channels like our news partners or health authorities, whether it’s the CDC or the World Health Organization or other local health authorities in countries around the world, and you’ve seen that run on YouTube. Every time you open the app, you’ve probably noticed that COVID-19 news shelf running there; it’s been running for over a year and a half. The third R we call reduce. That’s about reducing recommendations of content that might not have been quite policy (INAUDIBLE), might be in a blurry space, might be in a faster moving space where there aren’t yet community guideline policies. We endeavor to reduce recommendations of that type of content in our home feed and in the videos that we recommend watching after you’re done consuming a video on YouTube. That’s the third R. And the final R is called reward. And that’s the recognition that, you know, 99.9 percent of the creators on our platform are looking to do the right thing. They’re looking to build an audience, build a business. And we want to reward those creators, we want to direct, you know, the financial resources that are generated through what we talked about in terms of advertising, et cetera, to those creators that are looking to do the right thing. And it’s that four R’s approach, remove, raise, reduce and reward, that is our comprehensive approach not just to misinformation, but to other types of content that we deem to be problematic and (INAUDIBLE) on our platform.
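
For readers who think in code, here is a minimal sketch of the "four R's" routing Mohan outlines: content is removed if it violates guidelines, raised when it comes from authoritative sources, reduced in recommendations when it is borderline, or rewarded with monetization. The signal names and the decision order are illustrative assumptions, not YouTube's actual policy logic.

```python
from enum import Enum

class Action(Enum):
    REMOVE = "remove"   # violates community guidelines
    RAISE = "raise"     # surface in news/health shelves
    REDUCE = "reduce"   # recommend less often (borderline content)
    REWARD = "reward"   # eligible for monetization

def route(video: dict) -> Action:
    """Route a video to one of the four R's based on hypothetical signals."""
    if video["violates_guidelines"]:
        return Action.REMOVE
    if video["authoritative_source"]:
        return Action.RAISE
    if video["borderline"]:
        return Action.REDUCE
    return Action.REWARD

# Example: an authoritative health channel's upload would be raised.
print(route({"violates_guidelines": False,
             "authoritative_source": True,
             "borderline": False}))
```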

SREENIVASAN: It’s kind of a fine line to walk between what is free speech, what is safety, what’s an unpopular opinion versus a dangerous one. And in a way, you are the app, the company, that is tasked with figuring that out on the fly, with 500 hours of video coming at you every minute.

MOHAN: Yes. I mean, I think I will go back to what I alluded to before, which is, I am a firm believer, and I think many of us at YouTube are, in the power of an open platform. You know, one of my favorite examples of the power of our open platform is the gold medal winner in the javelin at the recent Olympic Games in Tokyo. It was a young Indian man who learned to throw the javelin by watching YouTube videos. It was an incredibly powerful, inspiring story, and that is the power of an open platform. But you’re pointing out something that I completely agree with. Misinformation in particular, by its nature, is fast moving; it changes regularly. We have seen that play out in the midst of this global pandemic. There’s been all sorts of types of misinformation that I simply would not have been able to predict, even a few days before they became a trend. You know, for example, it sounds like ancient history now, but when the coronavirus was associated with 5G cell towers, that was a piece of information that sort of came out of the blue. But the reason why we were able to act is that we had in place a framework around medical misinformation, even before the pandemic, that was based on, you know, things like false cures, based on actions that would prevent people from — or content that would prevent people from seeking timely medical intervention. And we used that framework to write the specific rules around COVID-19 misinformation. And in fact, we were the first platform, as early as February 2020, to have a comprehensive COVID-19 misinformation policy. Now, did that mean that we needed to continue to adjust that with new types of misinformation, you know, popping up in the midst of the pandemic? Of course we did, to make sure that viewers around the world were getting credible, high-quality information. But again, it’s not just about the content that we remove. Also, remember, when users were looking for that information, either searching on YouTube or watching videos, we were recommending content that came from authoritative sources, health outlets. We ran information panels on the order of hundreds of billions of times for viewers all over the world that had links to local health authorities with the most recent information about how you could protect your health, protect your families, in the midst of this very fast-moving pandemic.

SREENIVASAN: Do you have any sense of how well those panels work? Because at the same time, there was research that looked into, I think, about a dozen people who spread enormous amounts of vaccine and coronavirus misinformation, and they were on all kinds of social platforms. They were profiting off of misinforming others, in some cases very intentionally so. So, I’m wondering, if one of those bad actors is on YouTube and you have this panel at the bottom there that says, here’s the best information from the CDC, right below the video, how many actually click through from a video where they could have been misinformed to a better source that you had taken the time to curate?

MOHAN: The core way that we measure it, and the way that we hold ourselves accountable at YouTube, is a metric called the violative view rate. And that is based on a random sampling of videos across our corpus that we look at and evaluate to see if those videos contain misinformation, contain content that is violative of our policies. And in a nutshell, we try to drive that number down as close as possible to zero. And, of course, in a fast-moving world of misinformation, other types of content, speech, et cetera, it is not quite zero. In fact, the latest metric there is, I think, between 0.19 percent and 0.21 percent. So, relatively small. But that means that out of every 10,000 views, about 20 were of videos of the nature that you were describing. And in the interest of transparency, a few months ago we actually started to publish that number, the violative view rate, on a quarterly basis externally as well.
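
To make the arithmetic concrete, here is a minimal sketch of how a view-weighted metric like this could be estimated from a uniform random sample of views labeled by human reviewers. The function name, labels, and numbers are illustrative assumptions, not YouTube's actual measurement pipeline.

```python
def estimate_violative_view_rate(sampled_views, is_violative):
    """Return the fraction of sampled views that landed on violative videos.

    sampled_views: list of video IDs, one entry per sampled view.
    is_violative:  dict mapping video ID -> bool label from human review.
    """
    if not sampled_views:
        return 0.0
    violative = sum(1 for video_id in sampled_views if is_violative[video_id])
    return violative / len(sampled_views)

# Toy example: 10,000 sampled views, 20 of which hit violative videos -> 0.20%,
# inside the 0.19-0.21 percent range quoted above.
sample = ["ok_video"] * 9980 + ["bad_video"] * 20
labels = {"ok_video": False, "bad_video": True}
print(f"violative view rate: {estimate_violative_view_rate(sample, labels):.2%}")
```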

SREENIVASAN: Now, you know, you mentioned the 0.19, the 0.21 number. If it was 10,000, you’d have 20 videos, right. But you’ve got billions and billions of videos that are on there. So, if it’s 0.19 to 0.20 percent, that’s still hundreds of thousands, if not millions, of videos that are kind of slipping through the cracks. I mean, how do you improve that?

MOHAN: We’ve made, I think, a dramatic improvement in this area over the last few years. For example, recommendations of that sort of harmful misinformation, borderline content, are down 70 percent. We’ve made dozens of changes to our recommendation algorithms to do that, in addition to the content we remove. But again, our work is by no means done. We’re not perfect, and we’re going to continue to chip away at it.

SREENIVASAN: People are concerned about what they would call the rabbit hole, how the recommendation engine leads down paths that are more sensational and more sticky. On the super harmful side, you can see increased political radicalization. On the less harmful side, you’ve watched a child start with a soccer video, and then 15 minutes later you come back in the room and it’s, like, a gory video of animals eating other animals that they shouldn’t have been watching. But it’s hard for people to wrap their heads around what is at the core of this recommendation engine, and how is it structured to get to these other end results, and why can’t we stop that?

MOHAN: When we have looked at it on an aggregate basis, we haven’t really seen evidence of that. Again, that doesn’t mean it hasn’t happened, you know, for individual viewers anecdotally, et cetera, but we have looked at that, and actually, third-party researchers have looked at that as well, and have not seen our recommendations sort of moving people towards more of this polarized type of content. But what I will say, and this is even, I would say, sort of one level above that, is: we don’t want that to happen. We don’t want our recommendations to push people to those extremes. That is not, I think, good from a responsibility standpoint as a global platform, and I would also make the point that it is fundamentally not good for us as a business.

SREENIVASAN: I get that the researchers might not see it, but we have interviewed former white supremacists on this program, and there have been plenty of cases where people tell you that here’s the way to beat the algorithm, here’s the way to be more sensational, get more clicks and, you know, be more edgy. And that these rabbit holes exist. And once you are watching, you know, a vaccine misinformation thing, the next day you are kind of shown a flat earth thing. And the next, you realize you’re in a 9/11 conspiracy truther thing. No human could ever sit there and watch all the incoming YouTube videos and, you know, moderate and say, OK, this is going to be OK and this is questionable. Let’s put this in this bin. Let’s put this in that bin. How do you train an algorithm to do that?

MOHAN: We are by no means perfect. We are going to continue to chip away. We make improvements to our recommendation algorithms on a regular basis. In a fundamental sense, the way that we do this at our scale is twofold. The first is, we are not making these decisions on our own. We work with external evaluators all over the world. They are given a set of guidelines. We actually publish those guidelines, which are externally available, and they are the means by which the evaluators assess a sample of videos that we give them. And a video is evaluated by nine different, you know, raters that come from this, you know, kind of broad external pool, based on these external guidelines. In cases of specialized content, for example, health related, we actually rely on medical doctors. And it is, you know, these sorts of external evaluators that create the ratings of a certain set of videos, and we use those ratings to train well-tested machine learning algorithms that allow us to apply those principles and those guidelines that are developed by the raters to our entire corpus. The vast majority of, you know, videos that we act on are acted on, you know, with minimal views. You mentioned flat earth, for example. So, what I can tell you is that there are a lot more videos being created about the earth being flat than videos of the earth being round. You know, you make a video of the earth being round, you don’t need to make too many of those, right? You make one that sort of becomes the canonical video. But the views on those, on average, are dramatically, dramatically lower than the, you know, kind of round earth videos. And so, I do think it’s a bit of a myth that a video that’s a conspiracy video, by its nature, is actually going to draw more engagement, more interaction. We actually don’t see that bearing out, and that’s a very concrete example of a long-standing conspiracy where that hasn’t been true.
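
The general pattern Mohan describes, where human raters label a sample of videos against written guidelines and those labels are used to train a classifier that is then applied across the wider corpus, can be sketched roughly as follows. This assumes scikit-learn; the features, model choice, and example labels are illustrative, not YouTube's actual system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical rater-labeled sample: video metadata text -> 1 if judged
# borderline/violative under the published guidelines, else 0.
rated_texts = [
    "miracle cure guaranteed to stop the virus overnight",
    "local health authority explains booster eligibility",
    "the earth is flat and they are hiding it from you",
    "highlights from the men's javelin final in tokyo",
]
rater_labels = [1, 0, 1, 0]

# Train a simple text classifier on the rater labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(rated_texts, rater_labels)

# Apply the trained model to the (much larger) unrated corpus and route
# high-scoring videos for reduced recommendation or further human review.
corpus = ["new video claiming 5g towers spread the virus"]
for text, score in zip(corpus, model.predict_proba(corpus)[:, 1]):
    print(f"{score:.2f}  {text}")
```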

SREENIVASAN: So, what happens in the political context? Recently, there were cases in Brazil with their president and some of his supporters. You decided to take some of those videos down, I think under a violation of the health policy guidelines, because it was in the context of coronavirus, right? And you had mentioned that kind of all users have to live by the same community guidelines. So, in the case where you are taking down a president’s video, is that not an inherently political act to all of that president’s supporters? I mean, let’s bring it closer to home. President Trump is not on YouTube right now. Is he going to come back, and what would the threshold be?

MOHAN: The best way for us to address that on an open platform like YouTube is to have a clear set of community guidelines, to have those guidelines apply universally, regardless of who the speaker is, and then also to have a framework by which we act. And so, one of the long-standing frameworks that we’ve had at YouTube is what’s called a three strikes framework. You’re given three chances, if you will, around policy violations in your content before your channel can be terminated from YouTube. And of course, we terminate channels for violations on a regular basis, again, regardless of who the speaker is. It’s based on the content on the platform. In the case of the examples that you’re describing, we have taken action on videos that have come from heads of state. They have been violations, clear violations, of our community guidelines. And when that happens, we will remove the video and issue a strike. And in the case of President Trump’s channel, that’s what happened back in January. There was a video that was uploaded, it was a violation of our community guidelines, and as a result, the video came down and the channel received a strike. Now, normally, our strike regime is one where, for the first strike, the channel is, you know, kind of in a suspended state for about a week, for seven days. But we reserve the right to also take into account external factors like the risk, as I said at the very beginning, of things like real-world harm, egregious physical violence, et cetera. And when that’s the case, we will maintain that kind of suspended state under the strike regime that we described, and that happens to be the position that that channel is in right now.
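
The three strikes framework as described here behaves like a small state machine: each confirmed violation removes the video and adds a strike, early strikes suspend the channel for a period, a third strike terminates it, and an assessed risk of real-world harm can hold the suspension open. The sketch below is a hedged illustration of that flow; the field names, the seven-day figure applied as a default, and the risk flag are assumptions drawn from the interview, not YouTube's actual enforcement code.

```python
from dataclasses import dataclass

FIRST_STRIKE_SUSPENSION_DAYS = 7  # "about a week," per the interview

@dataclass
class Channel:
    name: str
    strikes: int = 0
    suspended: bool = False
    terminated: bool = False

def apply_strike(channel: Channel, elevated_real_world_risk: bool = False) -> str:
    """Record a confirmed violation and return the resulting action."""
    channel.strikes += 1
    if channel.strikes >= 3:
        channel.terminated = True
        return "channel terminated"
    channel.suspended = True
    if elevated_real_world_risk:
        # Suspension held open until the external risk is judged to have passed.
        return "suspended indefinitely pending risk review"
    return f"suspended for {FIRST_STRIKE_SUSPENSION_DAYS} days"

channel = Channel("example_channel")
print(apply_strike(channel, elevated_real_world_risk=True))
```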

SREENIVASAN: Neal Mohan, thanks so much for joining us.

MOHAN: Thank you. It’s great to be here.

About This Episode

Climate scientist Katharine Hayhoe discusses her new book “Saving Us.” Colombia’s outgoing president Iván Duque reflects on his time in office. The New York Times’ Berlin bureau chief Katrin Bennhold reflects on the end of Angela Merkel’s 16-year chancellorship in Germany. YouTube’s Chief Product Officer Neal Mohan explains how the platform is handling the misinformation crisis.
