09.24.2020

Facebook’s Failures to Stop Misinformation

Facebook finds itself under fire yet again, as whistleblower Sophie Zhang alleges that the company is turning a blind eye to political manipulation on a global scale. Facebook says it has been investigating these issues carefully, including those that Ms. Zhang raises. BuzzFeed media editor and misinformation expert Craig Silverman joins Hari Sreenivasan to discuss this latest scandal.


CHRISTIANE AMANPOUR: Now we turn to Facebook, which has found itself again under fire. Whistle-blower Sophie Zhang alleges that it’s turning a blind eye to political manipulation on a global scale. Facebook says it’s been investigating these issues carefully, including those that Ms. Zhang raises. Well, our next guest is the man who published the accusations. BuzzFeed’s media editor Craig Silverman is an expert when it comes to online misinformation. Here he is speaking to our Hari Sreenivasan about all of the details.

HARI SREENIVASAN: Christiane, thanks. Craig Silverman, thanks for joining us. First, let’s talk about the most recent article, which was based on a memo that a whistle-blower at Facebook put out to a large group of employees. For people who haven’t read that article, who was she, and what did she say?

CRAIG SILVERMAN, MEDIA EDITOR, BUZZFEED NEWS: Yes, the memo was written by a data scientist, a relatively low-level data scientist at the company named Sophie Zhang. And in the memo, she details how, over the past three years that she had worked at the company, she had come across repeated examples of fake accounts and other kinds of manipulation on the platform being used to really influence public debate and, in some cases, elections. So, for example, she found that fake accounts linked to the president of Honduras had actually been manipulating discussions on Facebook. She talked about findings in countries like Bolivia and Ecuador, and an election in India early this year where multiple parties were using fake accounts and other types of manipulation on Facebook. And so part of it is talking about everything she found. But she’s also saying that it’s her last day at Facebook, because Facebook just didn’t, in her view, take these things seriously enough, act on them, further investigate them, give resources to go after this kind of manipulation in countries other than the U.S., and other than countries in Western Europe. And so she’s really expressing this personal toll and the weight that was on her of saying: I suddenly am one person responsible for finding this stuff in these countries, and my colleagues aren’t able to follow up on it or aren’t interested in doing that. And so the memo was really a condemnation of the unwillingness of the company to take these kinds of things seriously in different parts of the world.

SREENIVASAN: Then, tell us a little bit about this employee. What was the context in which she published this memo? Was she fired? Is she disgruntled? Did she have anything to lose by going public?

SILVERMAN: So, one of the things that she noted in the memo, which was posted on kind of an internal discussion board at Facebook on her last day there, was that Facebook had offered her a $64,000 severance package, which included a non-disparagement agreement. And she said that she turned that down because she wanted to be able to send this memo. She wanted to be able to alert her colleagues. And her motivation, as she described it, was to help the company be better and take this stuff seriously. And, in the end, she admitted that in some ways her performance had suffered because she was spending time on these things, as opposed to her usual kind of spam work. And so, in the end, she was fired by the company. They offered her a severance package, conditional on a non-disparagement agreement. But, in trying to verify this memo, we did speak to people who knew her, who had engaged with her within the company, and they really had glowing things to say about the work that she was doing in this area. And they felt that the company ultimately had kind of mistreated her and marginalized her. One person said they should have given her a team, rather than kind of pushing her aside and telling her to stop working on this stuff.

SREENIVASAN: Now, how much of this is new? I mean, Facebook has literally had to answer some of these questions to members of Congress on what kind of manipulation is happening on their platform, what responsibility they have and what steps they’re taking.

SILVERMAN: Yes. I mean, Facebook obviously has been under a lot of pressure for roughly four years now about how people, whether it’s politicians, or for-profit people, or trolls, abuse its platform. And so, in that sense, it’s a continuation of the story we have been hearing about for years and years and years. But I think what’s really key about this is, one, it’s coming from someone inside the company who was very skilled at identifying this stuff, even though it wasn’t actually her job, and who is saying that the company that says publicly, yes, we care about this stuff, we have hired lots of people, we have specialized teams, she’s saying, well, then how come you’re not paying attention to these countries? And so I think it reinforces perhaps a perception that was already there that Facebook is very U.S.-focused and Western Europe-focused, but it gives us the goods and the insight of someone on the inside saying that, yes, that’s the case. And, in her exact words, it leaves her feeling like she has blood on her hands.

SREENIVASAN: So, what’s Facebook’s response been to this?

SILVERMAN: Facebook has basically given us a relatively generic statement saying that, you know, we do a lot of work in this area, we have specialized teams, we take this seriously, we have dedicated people in countries around the world. And that was the official statement. I think what was actually even more interesting than that is that the company’s vice president of integrity, who oversees all of these efforts to kind of secure the platform, commented to my colleague Ryan Mac on Twitter saying, You know, guys, this was really just about fake likes, and we have to prioritize, and this just wasn’t that important. And so I thought that was a really telling message, because it’s true. Some of what she had found was by following fake likes that had been put on content and been used to help spread it. But, at the core of it, she still identified governments doing this, she still identified fake accounts, she still identified manipulation of elections. So, for the V.P. to come out and say something that’s dismissive was really quite telling, I think.

SREENIVASAN: So, just so people understand how this kind of manipulation occurs, give us one of the examples that she was talking about. What are these networks of people that are working to tip an election one way or another? What would we be likely to see if we were on their Facebook pages or in that ecosystem in that country?

SILVERMAN: Yes, so there are a few things that you would see. Let’s take the example of what the Facebook V.P. said was going on, fake likes. So, if you have networks of fake accounts, one thing that you can do is direct them to go and engage with certain types of content. And because Facebook pays attention to the engagement levels with content to decide what to kind of promote and spread more, if you hammer one particular post or one particular account with tons of fake likes and reactions, that’s going to send signals to Facebook’s systems that, hey, this content is popular, people are engaging with it. And it might spread that to more people. So these are attempts to use fake engagement, like likes and reactions and comments, to get content to spread more on the platform.

SREENIVASAN: So, let’s connect this kind of story that was just published back to the United States. So, we have this employee detailing how Facebook is not living up to its own standards, its own self-professed standards, in several parts of the world. How does that connect to what’s happening in the United States?

SILVERMAN: Well, I guess there’s a couple ways of looking at it. One is that this person is saying that, really, nothing is more important to Facebook than the U.S. elections when it comes to this kind of work. And so, if that’s the case, but there are still examples of Facebook kind of failing to spot things and failing to act on things in the U.S. election, well, that reinforces to us that, even in its top-priority areas, we’re dealing with these cases, and they still can’t get their arms fully around it. So this is kind of the reality of the world we’re in. The other part, I think, is just remembering that Facebook is a massive global corporation, but, being an American company and having, as she said, a bias towards American things, I think that connects back to the U.S., in that they’re putting a lot of effort there. And what’s falling through the cracks are literally countries around the world, which is a crazy thing to think about. This is what kept her up at night, literally: sitting there and realizing that she didn’t have the time to look more into Bolivia, didn’t have the time to look more into Ecuador, and then seeing, in some cases, deadly civil unrest happen in these countries. That’s where she felt she had blood on her hands. And so you have employees, junior-level employees in America, feeling like they’re responsible for the public debate and public safety of countries around the world.

SREENIVASAN: Now, we have seen in reporting that Facebook uses human beings in some instances to try and figure out what content to moderate and what not, whether it’s horrendous videos that people shouldn’t be seeing, et cetera. But, in this case, is it all the computers? Is it the algorithms, the machine learning, that tries to figure out what’s a fake like vs. a real like, and whether that post really should be elevated to more people or not? And then, if that’s the case, why isn’t it working?

SILVERMAN: I think a lot of Facebook’s efforts take on the form of kind of hybrid efforts, where you have typically highly trained specialists around disinformation, fake engagement, those kinds of things, using their behind-the-scenes access to the platform to run really kind of sophisticated programs or queries to see if something looks inauthentic. And, in this case, what’s interesting about Sophie Zhang is that she was working on a team that, yes, was focused on fake engagement, but it was really looking at spam. So it was looking at commercially oriented efforts to use fake accounts to help push a product, for example. And so, as she was doing that work, she kind of saw some patterns that were unrelated to spam, but actually connected to kind of political discussions. And so, in her case, we have a very highly trained human who has one job, but comes across other things. At the same time, yes, Facebook operates systems that it runs to analyze engagement patterns and to kind of sniff out and elevate stuff that looks suspicious that human analysts and data scientists like Sophie Zhang would then look at more closely. And if the question is, why does stuff get missed, I think the answer is to remember the scale of Facebook as a platform. We’re talking about roughly two billion users in most of the countries around the world, people who are uploading tremendous amounts of content, as well as ads, as well as commercial offers, and a wide range of things on the platform. So, it is just so big that, even though Facebook builds tools to supposedly deal with that scale, there’s no way for it to actually know everything that’s going on on its platform. And that just creates amazing opportunities for scammers, for hackers, for all kinds of bad actors to exploit the platform.

SREENIVASAN: I can hear someone arguing on behalf of Facebook or perhaps taking a contrarian view, saying, look, well, are we expecting perfection in a way that is unfair? Is there anything that’s 100 percent? There is no government agency that’s that solid. There’s maybe no other private company or public company either.

SILVERMAN: Yes, I mean, it’s true that no company and no platform, for that matter, can sort of 100 percent enforce on things. But if we look at what’s going on inside the company and what they are prioritizing, if it’s basically saying, we don’t have the resources to deal with this election in that country and the election in this country, and if you’re showing us fake accounts connected to the president of a country, we just can’t — it’s going to take us nine months to deal with that, I think there are kind of basic standards of competence and oversight that people expect. And this memo, I think, really underlines that Facebook is just not delivering on a lot of the things it says it’s doing around the world. And so, as a minimum bar, are they doing the things they say they’re doing, and are they able to enforce the policies that they have? And I think, on those two questions, the amount of information that’s been coming out from this memo and other things says there are still some serious problems there. And it’s also a question of resources. Facebook makes billions of dollars in profit every quarter. And they’re basically saying, well, we don’t have all the people we need to be able to look at every country. I think it’s reasonable to say, then you need to step up, and you need to use your significant resources to make sure that you have people in the country with domain expertise, language expertise, to actually make sure you’re supporting your users there.

SREENIVASAN: Let’s pivot a little bit to what’s happening with the platform in the United States. So, if Ms. Zhang’s concerns were with elections overseas, here in the United States, our intelligence agencies pretty much established several years ago that the Russians were involved in trying to manipulate our elections. And, even now, the FBI has said that there is active participation from foreign actors that are trying to manipulate how we think and what we do come November. So, how is Facebook being manipulated now and in the next 50 days in a way that’s perhaps different than what was happening four years ago?

SILVERMAN: Well, I think one of the things that’s changed since four years ago is that people have to be a lot more clever. The easy, simple, obvious approaches to spreading false information, to using fake accounts, that actually has gotten harder. And so you can give Facebook some credit for that. Where it really was not doing a good job on these things, it certainly has improved in some areas. So, what does that mean? Well, it means people have to be more creative with the ways they go about this. A recent example was that Turning Point USA, a conservative student group in the U.S., apparently was using these kind of student ambassadors, people they were paying, to run accounts on Facebook and to kind of spread messages, sometimes in a very coordinated fashion, literally copying and pasting the same kind of message. And so I think one of the things that campaigns and people trying to influence public debate have done is try to think about, well, what is Facebook good at spotting? Dumb automated accounts. So, what do we need to do? Well, we need to coordinate real humans to behave in ways that are going to look authentic. I also think that Facebook continues to miss political ads that are meant to influence people and that should have a specific disclaimer and be in a database. So there are still people able to do things like, for example, paying to rent someone’s Facebook account, and using that to set up an ads account to then run ads through them. And we know, for example, that Russia did that in Ukraine’s election last year. So, a lot of the tactics that have often been used by spammers and advanced kind of commercial operators are now being adopted to cover the tracks of disinformation operators and to use real humans to try and influence other humans in a more coordinated fashion.

SREENIVASAN: I mean, it seems that, by kind of the architecture itself of the platforms, the very things that make them so successful, that make ideas and thoughts go viral, are the things that support mis- and disinformation.

SILVERMAN: There are certain things about the designs of these systems that are, yes, amplifying and creating problems that are worse than we have experienced in these areas ever before. And there’s also the question of what decisions they make and what priorities they have. And when you look at Mark Zuckerberg, one of the things that I think he’s made very clear is that, if he has a bias around this stuff, what he says is, his bias is that he would rather leave content up than ever remove it and take it down. He thinks that there are, of course, thorny issues around free speech. And there certainly are around that. We don’t want Facebook to turn into the world’s biggest censor because they’re trying to overcorrect for these issues. But him sending that message from the top has also created an atmosphere and a culture within the organization where things like the QAnon conspiracy theory and other things have really taken flight on their platform, and been able to really grow and pull in and recruit more people than ever before, because they tend to be slow to react, and they tend to prefer not to remove, not to try and restrain things in a big way.

SREENIVASAN: Let’s talk a little bit about the money that’s behind some of this, the motivation here. I mean, these are very successful companies that, if they told their shareholders, hey, guys, listen, you know what, we’re turning off all political ads from now until the election, we’re going to be, for lack of a better word, censoring anything that even smells close to political, we’re just going to turn it off, right, they could continually do this for 60 days, and it might hit their bottom line for a little while. But what else can they actually do? Because it seems that, even when they deploy engineers and new tweaks to the algorithms, we still have what could be edge cases become very successful, where they — whether it’s the QAnon conspiracy, whether it’s, horribly, someone taking their own life on the platform and livestreaming it, we’re still not able to catch these incredibly painful incidents.

SILVERMAN: I asked a former engineer at Facebook about this. I’m like, is it just too big? Is it impossible to really get your arms around this problem? And their point of view was actually that there still remains a lack of will at Facebook and perhaps at some of these other companies. Their view was, for example, when the U.S. government went to Facebook years ago and said, you have an Islamic State problem on your platform, you have terrorists recruiting, you have them here, and you need to deal with this, Facebook leapt into action, and it was priority number one for all of these teams. And they have done a good job of keeping ISIS off the platform. And so the engineer’s point was that there’s still room for them to make even more of a commitment to these things than what they have said. There’s room for them to grow these teams even more. There’s room for them to invest even more, and to make it even more of a priority. And I think that that’s something that Facebook pushes back on a lot. They talk about how much money they spend, but people inside the company still look at it and say that there is more we could be doing that we’re not in terms of making this a priority, investing in people, building tools and doing things to stop it.

SREENIVASAN: Craig Silverman of BuzzFeed, thanks so much for joining us.

SILVERMAN: Thank you.
