11.20.2023

Attorney: Elon Musk’s X “Is Unable to Rein in Hate”

CHRISTIANE AMANPOUR, INTERNATIONAL HOST: Now, in times of war, disinformation spreads like wildfire. And tech mogul Elon Musk is often criticized for fanning those flames. Several companies have pulled their ads from his platform X, formerly Twitter, after he personally endorsed an antisemitic post. Musk denies any wrongdoing, saying, “Nothing could be further from the truth. I wish only the best for humanity.” Now, Nora Benavidez, senior counsel and director of Digital Justice and Civil Rights at Free Press, joins Hari Sreenivasan to discuss why content moderation is crucial for democracy.

(BEGIN VIDEO CLIP)

HARI SREENIVASAN, INTERNATIONAL CORRESPONDENT: Christiane, thanks. Nora Benavidez, thanks so much for joining us. You are a free speech attorney, yet you also work with the platforms to try to figure out how to navigate these waters these days. And I’m wondering what you consider the biggest threats to, I guess, balancing digital freedoms with preserving democracy.

NORA BENAVIDEZ, SENIOR COUNSEL AND DIRECTOR OF DIGITAL JUSTICE AND CIVIL RIGHTS, FREE PRESS: Well, thanks so much for having me, Hari. It’s an honor. Look at where we are right now. We’re in a moment of one of the biggest geopolitical crises that we’ve seen, and it’s playing out on our digital platforms. The biggest social media companies are not stepping up to do their jobs. In fact, over the last year they’ve been rolling back their policies and laying off critical teams that are tasked with keeping the platforms healthy and functional. And so, disinformation is rising, hate is rising on these platforms, and people are hungry for credible information. It’s a very dangerous combination. It leaves people unsure of what the stakes really are and of what’s fact or fiction.

SREENIVASAN: You know, recently we have seen tremendous activity on social media, especially in the context of the war between Israel and Gaza right now. And you recently wrote a piece called “Social Media Platform Integrity Matters in Times of War,” in which you drew particular attention to the platform X. What do you see there that’s troubling?

BENAVIDEZ: Well, Elon Musk has taken one of the most widely used platforms, which wasn’t perfect to begin with, but it was functional, and he has driven it into the ground. He has cut teams that are critical to keeping a platform functional. He’s also made a number of design changes that just make it harder for users to make sense of what they’re seeing. He’s removed the headlines from link cards. And the biggest change in this context, with respect to Israel and Gaza, is the blue checkmark feature. It used to be a sign of authenticity, of credibility and trust, and it’s now a subscription feature. So, if you pay $8 a month, you have your content boosted in people’s feeds. That feature has been weaponized over the last five or six weeks, and the people who are willing to pay that amount are often bad actors. Sometimes they are spreading unauthenticated content, videos and images misappropriated from totally different places and times in history. So, it’s been a chaotic frenzy on Twitter, now X, all thanks to a year of bad decisions by Elon Musk.

SREENIVASAN: Last week, he essentially agreed with and supported a post from one of the platform’s users that accused Jewish people of “hatred against whites,” and I don’t want to go into the rest of the message and give it a platform it doesn’t deserve. But Elon Musk amplified it, replying that the user had, in fact, spoken the truth. What does that do? Because his tweets, almost by engineering, reach practically everyone on the platform.

BENAVIDEZ: It’s so troubling that there almost aren’t words for it, that the world’s richest man, who bought a platform out of boredom or interest, has used it to exert his power and influence. He has an outsized following, and I believe he is not above rejiggering the algorithms to make sure that his content is seen more widely than anyone else’s. So, the content that he posts, the values, the opinions that he promotes, are seen by more people than ever. What is he doing with that power and influence? He’s promoting hate. He picks people who promote Nazi sentiment, things that are frankly just terrible to say out loud, and he gives them amplification. He interacts with them. It’s so dangerous. And in the face of that, the new CEO, Linda Yaccarino, has turned around and said, we have no room for antisemitism on this platform, which flies in the face of everything that her boss has done. So, it’s a platform that is unable to rein in hate because the very top is promoting it.

SREENIVASAN: What’s been the ripple effect of Elon Musk’s decisions in how he chooses to value trust and safety, or the veracity of claims made on the platform? How does that impact Meta, which owns Facebook, Instagram and WhatsApp, or Alphabet, which owns Google and YouTube?

BENAVIDEZ: Well, Elon Musk loves to champion that he is a free speech advocate, but he’s really anything but that. And his behavior, his decisions regarding Twitter over the last year, point to his almost authoritarian tactics. You know, he decides something is true if he likes it. And if he doesn’t like it, it isn’t true. You know, just the last few days of frenzy, as he has amplified antisemitic sentiments, all flow from very weird, veiled exchanges with white supremacist and Nazi accounts. That’s dangerous to begin with. But several advertisers have started pulling their ad revenue from the platform because of his bad behavior. They’ve seen that their ads are being featured next to hateful content, and they’re saying it’s too much. So, IBM has pulled its ads. And the researchers who found that IBM’s ads were featured next to hate work for an organization called Media Matters. Elon Musk’s response was to call that organization evil, solely for doing its job. So, the climate here is that we have a leader in place at a massive, massive social media platform eager to cherry-pick speech when he likes it. And if he doesn’t like it, he’s going to silence it.

SREENIVASAN: You know, I’d be remiss to have such a conversation with you and not bring up the elephant in the social media space, which is TikTok. It is, by a long shot, where so many young people are getting their entertainment, and a lot of them are turning to it for news. And recently there’s been a lot more scrutiny on whether ByteDance, the company that runs TikTok, has been in any way influenced by the Chinese government putting its thumb on the scale of what type of content surfaces in larger volumes, especially in the context of the Israel-Gaza war.

BENAVIDEZ: Many people are speculating that TikTok has tried to not get caught. There are so many risks that TikTok and its parent company, ByteDance, are facing even just in the United States. State and federal lawmakers are toying with whether to ban the platform here in the U.S. In one state, that legislation went through, and app stores there are banned from featuring the platform. There will, of course, be workarounds, but it poses the question for the platform: how do they maintain good behavior? And frankly, TikTok has actually rolled back fewer policies than any of the other platforms over the last year. They’ve tried to correct course whenever their algorithms produce bad results of some sort or down-rank certain activists’ content. And so, there’s really a question here of, does the threat of regulation help prompt better behavior? If these companies will not regulate themselves to enhance and protect platform integrity, other actors need to step in; lawmakers and regulators around the world have to take up that task.

SREENIVASAN: You know, while there is bipartisan concern over what role the Chinese government may be playing in TikTok, there’s also severe pushback from conservative Republicans who say that, essentially, social media is trying to censor Americans with more conservative points of view. Is there any evidence for what Representative Jim Jordan and so many others on Capitol Hill say?

BENAVIDEZ: This is such an excellent question, Hari. This is one of the most concerning threats that we’re seeing right now in the tech accountability space. While we have the likes of Elon Musk threatening researchers, we also have lawmakers saying that these platforms are somehow not neutral, that they are attacking conservatives. That’s wrong. And much of the evidence that we have from research studies and elsewhere actually shows the opposite: conservative values and content are not being censored. That lie, frankly, has seeped into our public consciousness. It has allowed people to feel like there is some kind of partisanship happening within these platforms, to the point where government officials are now nervous to even communicate with the platforms, because it may seem that they are pressuring them to take action. All of that, however, gives the companies license to do less: license to ignore requests for takedowns of violative content, and less coordination on national security threats. None of this is good as we look toward 2024.

SREENIVASAN: We’ve heard recently that Meta, the company that owns Facebook, is going to allow political advertising in the coming 2024 cycle in the United States that claims the previous election was stolen, even though there’s no factual basis to support that. Now, should that be protected political speech? What are the ramifications or implications of that? And didn’t Facebook have policies against spreading something like that in the first place?

BENAVIDEZ: That Meta policy isn’t even new, unfortunately. It has sadly been in effect for many months. And this question around political ads and free speech is a good one. Advertisers have every right to submit any kind of content they want, because that’s their free speech, right? They can submit whatever it is; it can contain lies or bigotry or truth. But the companies, the social media platforms, then also have a right and a duty to uphold basic platform integrity features. Most of the major platforms have weakened or relaxed how they deal with political ads, which will absolutely lead to disinformation thriving on their platforms. So, that calls into question, you know, how will users interact with ads that contain falsity? Who will even see that content? Currently, these companies operate in almost total opacity. We have very little insight into virality, visibility, how people engage with content. Researchers are trying to investigate in order to better understand it, but they’re getting attacked for that. So, even understanding the scope of the problem, of what happens when these political ad policies are relaxed, comes with so many barriers to tech accountability.

SREENIVASAN: So, what steps can these platforms take? Because oftentimes the response is challenged by this notion of, well, this just falls under free speech. You know, we’re not here to monitor and police what you might call hate speech. We’re going to be here for freedom of expression in all its forms.

BENAVIDEZ: Well, first, the platforms should not be going after researchers who are trying to investigate and understand what’s happening on them. There is a very horrific chilling effect happening now within the tech accountability field. People are afraid to investigate and dig into the data of what’s happening on these platforms because they’re afraid of being retaliated against or even sued. Elon Musk has personally gone after researchers for doing their jobs. But there’s so much that the companies can do. And I feel I would be remiss if I didn’t mention that we’re looking at, in the next 12 months, a massive election season. We’re going to see over 40 national elections around the world, and most people will use social media to understand the issues of the day. It is incumbent on the major platforms to reinvest in content moderation and their trust and safety teams. Elon Musk, just to name one, gutted his board of directors and his trust and safety council; there is no director for the trust and safety team, and he’s cut thousands of jobs, most of them critical to ethical engineering, A.I., and trust and safety. We need those people back. We need all of these companies to reinvest, to make platform integrity a priority. When they don’t do that, we’ve seen that the real-world impact falls on users and democracies.

SREENIVASAN: Where’s the incentive for these platforms to take these steps? Because it seems that, structurally, there’d be a disincentive here. I mean, if I allow, for example, news or political speech on my platform, then there is an expectation that I maintain the integrity of what’s being said. Even though I’m not directly responsible for it, most people are going to think I am, right? So, I wonder what gives Facebook or Google or X any incentive to say, I should be in this space, when the return, the way they probably look at it, is very low and the headache quotient is very high?

BENAVIDEZ: Well, you know, over the last year, Free Press and other organizations have been working on an initiative called Stop Toxic Twitter. Part of our work was partnering with brands that ultimately divested from spending on Twitter. For many of the reasons that we’ve discussed today, they saw that hate was rising, and that has made a massive financial dent in Twitter. That company is now valued at less than half of what Elon Musk bought it for, which helps reinforce that content moderation, that platform integrity, these values are good for someone’s bottom line. And it’s a question now of, will other platforms follow suit? Will they see the writing on the wall, that Elon Musk’s complete erosion of a functional platform has decimated his bottom line? I hope so. That’s the work that we’re now doing.

SREENIVASAN: So, is there something that U.S. regulators can and should do, considering that the E.U. has taken a very different approach? And, you know, I don’t know exactly how to measure success on what’s working there and what’s not working here, but are there some lessons that we can take?

BENAVIDEZ: Well, we’re waiting to see what compliance will look like for Europe’s Digital Services Act, and I’m eager to see that. For violations, a company can be fined up to 6 percent of its annual revenue. That’s a lot of money. I don’t even know if Elon Musk could survive that, given how many horrific actions Twitter has taken. But here in the United States, I would say we have to start with minimizing the data collected about us, because the data collected about us is ultimately what allows platforms to create different, sometimes discriminatory, experiences for us. They use their A.I. tools, their algorithms, to sort which content users see, and that ultimately leads to echo chambers, silos and misunderstanding. So, we need lawmakers to take up that task: to minimize the data collected about us and to prioritize human rights.

SREENIVASAN: Senior Counsel and Director of Digital Justice and Civil Rights at Free Press Nora Benavidez, thanks so much for joining us.

BENAVIDEZ: Thanks so much, Hari.

(END VIDEO CLIP)

About This Episode

Former prime minister of Israel Ehud Barak on the news out of al-Shifa Hospital and what must happen next. Paul Caruana Galizia, author of “A Death in Malta,” tells the story of his mother, renowned Maltese journalist Daphne Caruana Galizia, who was killed by a car bomb. Nora Benavidez, senior counsel at Free Press, on the responsibility of social media platforms when it comes to misinformation.
