CHRISTIANE AMANPOUR, INTERNATIONAL HOST: Now, some Israeli parents have been told today to remove social media from their children’s phones, with horrific videos spreading online and, of course, for fear of grisly hostage videos emerging. The misinformation and hate speech that can fill these platforms create a dangerous environment that extends well beyond the internet. Imran Ahmed is the CEO of the Center for Countering Digital Hate. And he is joining Hari Sreenivasan to discuss the challenges of regulating this damaging content.
(BEGIN VIDEO CLIP)
HARI SREENIVASAN, INTERNATIONAL CORRESPONDENT: Christiane, thanks. Imran Ahmed, thanks so much for joining us. Your Center for Countering Digital Hate works kind of overtime when it comes to wars and propaganda campaigns that are happening. What have you been seeing online over the past 48, 72 hours as this war has broken out?
IMRAN AHMED, CEO, CENTER FOR COUNTERING DIGITAL HATE: We’ve seen a wave of disinformation that is really drowning out the information on social media platforms. I mean, it’s kind of unusable at the moment. We are seeing, you know, two different things. We’ve got bad actors, and that’s, you know, foreign states, it’s extremists, both foreign and domestic. There are clout chasers, people who are literally trying to get as much engagement as possible to boost their followers, which is incredibly cynical, but it’s a business model for some people. And also, people who just enjoy causing pain. So, hate actors, trolls, et cetera. And then you’ve got a bad platform problem. We have platforms whose algorithms amplify the most extreme content, which provokes the most emotion. So, that will get amplified into lots of newsfeeds. They don’t enforce their rules. And in the very worst case, in the case of X, once known as Twitter, Elon Musk himself, the owner, told people to follow an anti-Semite and a disinformation actor to get the real truth. So, you see this tidal wave of disinformation amplified by the way the platforms work, and it is really overwhelming us. But I don’t think we are ready for what happens next, because these platforms have been shedding trust and safety staff hand over fist in the last few months and years. Think what will happen once Hamas starts live streaming executions.
SREENIVASAN: Do you think that is plausible? Or, technologically, I actually don’t know of a way to stop it. I mean, YouTube says that they’ve gotten better at their algorithms. They’re trying to figure it out. But usually, you can’t predict what is going to be on a live stream. So, I don’t know how you stop it.
AHMED: I think that’s the problem, is that the platforms have never spent any time thinking about safety by design. They have prioritized the amount of content that they can get out there so they can place ads next to it. But, you know, I’ve just come from a meeting with advertisers where I’ve warned them that we are going to be seeing adverts next to a child, an elderly person, a woman being hurt. And I don’t think we’re ready for the way in which corporate, monetized disinformation is going to disfigure our society, our geopolitics, in this instance, even further.
SREENIVASAN: You know, so I want to point out, I know that Elon Musk, I think, deleted those specific tweets, and then he tweeted, as always, please try to stay as close to the truth as possible, even for stuff you don’t like. Now, I mean, going to the platform that is X, that is controlled by him, his role in spreading disinformation seems singular. I mean, you don’t see, you know, Sundar Pichai or Mark Zuckerberg or anyone on their platforms doing the kind of stuff that he does.
AHMED: But he’s a personification of a systemic problem. Like, you know, he’s a single figure that we focus our attention on. I have no real issue with Elon Musk, the man. I’m sure that he is a quite brilliant engineer. And I’m sure that to his many, many children he’s a great dad. But, you know, what I do know about is the industry that he represents. And really, what he represents is the indifference to human safety. The indifference to the harm done on these platforms. And the greed that prioritizes advertising bucks over human safety. And I think that that’s why he’s been the focal point, because he is saying out loud what Mark Zuckerberg, the head of Facebook and Instagram, and Sundar Pichai, the head of Google and YouTube, haven’t ever said out loud but actually do all the time, which is profits before people.
SREENIVASAN: So, in the case of X, now there has been, in the past six months, a sort of total flip. I once had a verified blue check mark because I was a journalist, I proved who I was. That check mark went away and was replaced by check marks for people who are willing to pay a few dollars a month, myself not included. So, what does that do to how we think of a fact or something that is verifiable?
AHMED: So, I mean, this is something that’s been going on for some years. And when I set up CCDH seven years ago, what had become apparent to me was that the primary place in which societies around the world now, you know, do things like share information, set and develop norms of attitude and behavior, negotiate our values, but most importantly, negotiate the information that we decide to call facts, had shifted to online spaces, to groups, to the discourse that was controlled by social media algorithms. And what was being prioritized was the most extreme voices, so the lens through which you see the world was actually distorted to bring the fringes further towards the middle. You know, it’s kind of like a rearview mirror, but without the warning you would get on a rearview mirror: things may appear closer than they really are. Actually, what you were being told was, this is the global debate. And as a result, a lot of people were fooled into thinking that things that really are quite fringe are quite mainstream, and it has reshaped our politics globally as a result. Now, that’s true of journalists and politicians, as well as members of the public. So, none of us can say that we saw the problem and dealt with it in time.
SREENIVASAN: So, when this misinformation, and sometimes disinformation, spreads as rapidly as it does on these social platforms, walk us through the consequences here. What does that do to, I guess, the consumer of the information? But what does that also do to the actors in the field if they also fall prey to it?
AHMED: So, I mean, there are sort of different modes of disinformation. There’s the disinformation that is drip-fed over time. The drip, drip of disinformation that recolors the lens through which someone sees the world, makes them see the world in a different way, and gives them the precursors to hate. You see, lies and hate have always been deeply interconnected. Lies underpin hate, they create the conditions for hate, and then they lead to the operationalization of hate: they give people the motivation to act in a hateful way as well. The creation of fake emergencies, of, you know, threats to life, says, well, we must do something about these people that I’ve been telling you about for a long time. So, the truth is, and no one knows that better than Jewish people, believe me, whether it be the (INAUDIBLE) 2,000 years ago or it be the Protocols of the Elders of Zion that informed Adolf Hitler’s ideology, lies have always been a critical part of hate. Now, we’ve worked with platforms over the years urging them to adopt the strictest standards possible of not amplifying or monetizing the lies that underpin hate, the incredibly familiar conspiracy theories, because we know they lead directly to people doing terrible things. And, you know, let’s take the Great Replacement theory, which is the theory that Jews are trying to bring in Muslims and black people to destroy the white race through migration. That very conspiracy theory led to the slaughter of Jews in Pittsburgh, at the Tree of Life synagogue, it led directly to the slaughter of Muslims in Christchurch, and it led to the murder of my colleague, Jo Cox, MP, a 35-year-old mother of two, during the British E.U. referendum, which started me on my journey in this work seven years ago.
SREENIVASAN: So, now, I guess in the past 72 hours, have you seen examples of anti-Jewish social media content, anti-Arab, anti-Palestinian?
AHMED: There’s an overwhelming wave of it. I mean, the truth is that platforms in the last few years have become worse at that: they’ve reduced the amount of staff that they have working on trust and safety. And because they’ve become less transparent, in part as a reaction to the growing awareness amongst legislators, the media, and others that these platforms are actually quite problematic at times, they’ve actually made themselves harder to study. And in the most extreme cases, they are suing people who try to study them, as X has (INAUDIBLE), so it’s very difficult for me to quantify in this particular war (ph) what things actually look like in terms of the overall universe of disinformation. What I can say, though, is that if you speak to anyone, as you and I have, and you and I both use social media, if you look at your news feed, it’s almost unusable, because there is such a huge amount of disinformation intermingling seamlessly with good information that it makes it almost a job unto itself to read social media and try to work out what is actually true. You know, it’s the first conflict I can remember where my first instinct was to switch on social media, and, you know, very quickly I switched it off and turned on instead CNN or the BBC, because I needed to have access to high quality, maybe not super, super up to date, but still timely, fact-checked, and well-curated information.
SREENIVASAN: You know, I don’t want you to amplify disinformation or misinformation that you are seeing, but give me some examples of the kinds of things that you are seeing online now.
AHMED: We have a policy internally not to talk about individual memes and to only ever talk about themes. But we are seeing things which dehumanize Israelis and dehumanize Jews. And so, it is the typical stuff, which will be to inflate the numbers of dead on one side, to deflate the numbers of dead on the other side, but there are a lot of images. And the truth is, it’s been very difficult to parse between reality and falsehood. But the truth is that we are, you know, we are also seeing real images which are beyond human understanding. You know, atrocities, women raped, people murdered. And we are also seeing people reacting to those with joy. Now, in any other walk of life, if you reacted to seeing someone being murdered brutally by yippee-ing and jumping for joy, if you did it in a park, on the street, anywhere else, you would very quickly realize that there are consequences for behavior like that. On social media, what it gives you is more amplification, more clout, more followers, more money. That is a series of, you know, dangerous incentives which are misaligned with the public good.
SREENIVASAN: Now, I should mention that Elon Musk’s platform basically is trying to sue you. And they say that you are unjustly, I’m just quoting from their blog here, targeting people you don’t agree with, attempting to affect their business by attacking free speech, and illegally gaining people’s passwords. So, I know you have to respond legally in some cases, but what can you say about not just their lawsuit but what was the initial report that drew this action?
AHMED: The initial report that drew the action was a study that we did on Twitter looking at the number of times that seven of the most extreme slurs, against black people, the N-word, against gay people, against women, against Jews, were used on the platform on a daily basis, on average, in the year before he took over and the month after he took over. And what we found was that the use of the N-word tripled after he took over. Because he put up the bat signal, did he not, to hate actors. He let thousands of them back onto the platform who had previously been suspended. He said, this is a free speech zone. And in doing so, he gave them license to be as racist as possible, saying, we are not really going to enforce our rules anymore. And as a result, there was an increase in hate. And we quantified it. Now, I think that that was basically us putting up a mirror to these platforms, saying, do you like the reflection you see in it? And whereas, you know, most people, you or I, if we don’t like the reflection in the mirror, we comb our hair or go on a diet, you know, he sued the mirror. He said, this mirror, how dare you show my image to be ugly, because I must be perfect, for I am Elon Musk. And I think that’s the problem. He’s literally suing us because he is annoyed that we reflected back the reality of his platform to him. So, you know, he’s taking us to court, and we have absolutely every bit of confidence we will defeat him in court. And, thankfully, the truth still matters in the courts of the United States. And so, we hope that when he takes his argument off Twitter, where he is king of the castle and he can say whatever nonsense he wants and, you know, literally reprogram the algorithms to boost himself, in a court, I have an equal voice.
SREENIVASAN: You know, Jim Jordan, a representative in the U.S. House, he also subpoenaed your organization because he believed that you were working with the government and big tech to censor Americans. Now, you’ve turned over those documents.
AHMED: We’ve done it. We’ve sent him about 100 e-mails. So, in total, over three or four years, we’ve had 100 e-mails or so between us and the government, both the Trump administration and the Biden administration. With the Trump administration, we worked on reducing the amount of antisemitism online. And, you know, Mike Pompeo wrote me a very nice letter saying, thank you for the work you’re doing combatting hate. And we’ve had similarly cordial but not friendly conversations with the government. The truth is I’m very critical of governments all around the world for failing to get to grips with social media and the harms there. Now, the E.U. and U.K. have legislated; I’m very proud of that. We don’t take any money from governments or from tech companies. We are that rarest of flowers: an independent organization that represents the people.
SREENIVASAN: You know, I’m wondering, considering how you are working with legislators in the U.K. or the E.U., what is the role for the regulatory environment? And what can the United States do? Because it seems that across the pond, so to speak, there are attempts to try to figure this out, whereas in the U.S., not so much.
AHMED: It’s really interesting, actually. You know, five or six years ago, I said to the British government, if you could just give me a law that said that companies have to abide by their own rules, I would go off and retire to Antigua with my cat and leave you alone. And, you know, five or six years later, they’ve delivered us something which is really interesting. It basically says there are four components to safety online, to a healthier and more productive and more prosperous existence online. The first is transparency, and without transparency, you can’t have meaningful accountability to a democratic body. Now, Lindsey Graham and Elizabeth Warren actually have a bill, a joint bipartisan bill, for a new accountability regulator in the U.S. The U.K. and the E.U. have one now. But you need the transparency, otherwise they are not looking at the correct data. And then, if there are harms and the platforms are clearly negligent in not doing anything about them, they should be held partly responsible, because then you create an economic disincentive for them doing the wrong thing. Now, across all three of those components, transparency, accountability, and responsibility, if you get them in place, then you actually don’t need them anymore, because you create a culture of safety by design. And that’s what every other industry has. Every other industry has to think about safety before they release products. It’s uniquely social media companies that believe they’re free of it. And that STAR framework, safety by design, transparency, accountability, responsibility, is what governments around the world are implementing, and we’d like to see it in the U.S. too. I think we are some years away from it. But I’m going to spend the next few years showing how it works overseas. It hasn’t restricted freedom of speech. What it’s actually delivered is healthier platforms online, in which the algorithms, the enforcement, the way that they run actually helps humanity, it doesn’t hinder it.
SREENIVASAN: Imran Ahmed, co-founder and CEO of the Center for Countering Digital Hate, thanks so much for joining us.
AHMED: It was my pleasure.
About This Episode
Former Israeli prime minister Ehud Barak on the war with Hamas. Abbey Onn, an American Israeli, reports that five members of her family were kidnapped. Gaza doctor Khamis Elessi condemns the death of civilians on both sides. Imran Ahmed on how misinformation about the war is being amplified by social media. John Kirby on Israel’s right to defend itself — and what role the U.S. is playing.