03.14.2024

How AI-Generated Content Is Impacting Elections


CHRISTIANE AMANPOUR, CHIEF INTERNATIONAL ANCHOR: And now, we’ve already seen digitally manipulated video and audio files, or deepfakes, infiltrating the 2024 election cycle. But just how will it impact voters? Sam Gregory, executive director of WITNESS, and Claire Wardle, co-director of the Information Futures Lab at Brown University, are disinformation experts. And they’re joining Hari Sreenivasan now to discuss what’s at stake politically and technologically.

(BEGIN VIDEO CLIP)

HARI SREENIVASAN, CORRESPONDENT: Christiane, thanks. Sam Gregory, Claire Wardle, thank you both for joining us. You are both experts in studying misinformation and disinformation, and we want to help our audience unpack not just some examples, but maybe what they can learn about how to process information so that they don’t get taken for a ride, especially during this election year. Claire, I want to start with you. In the longer arc of misinformation, where are we when it comes to disinformation or misinformation online? Because around election years, you know, lies are pretty common, and making the other team out to be horrible is just par for the course. But how is this changing in this digital landscape?

CLAIRE WARDLE, CO-DIRECTOR, INFORMATION FUTURES LAB, BROWN UNIVERSITY: So, the world that we live in now means that it’s much easier than ever to create this kind of content and much, much easier to spread it. But you’re right, lies, as humans, that’s something that we’re used to. But what we are not used to is the amount of content and how fast it can spread.

SREENIVASAN: Sam, I want to tell our audience a little bit about your work at WITNESS. It’s a human rights organization that tries to use video to defend people’s human rights. But you’re also trying to use technology to make sure that videos aren’t undermining those basic rights as well. And you’ve got yourself a deepfake rapid response force, for example. What are the tools that are available to you to try to vet these videos that can be generated so quickly, as Claire said?

SAM GREGORY, EXECUTIVE DIRECTOR, WITNESS: So, we’re in a really complicated arms race, because as Claire says, it’s super easy to create certain forms of photorealistic and audio-realistic media, like the Biden robocall, but it is not super easy to technically detect them yet, and that capacity isn’t with most ordinary people and many journalists. And so, what we work on is really how do we bridge the gap that exists now in terms of the technologies and tools that are available to, you know, our front line of defense against mis- and disinformation, because the technologies aren’t there yet. And I would echo what Claire says. I think, when we start to think about how we detect A.I., we need to start with recognizing that this builds on previous problems, and we also need to build on previous solutions as well.

SREENIVASAN: I want to share a couple of examples of images that were created using A.I. tools. These were generated by third parties, not associated with the Trump campaign at all. But just in the past couple of weeks, there were these photos of Former President Trump at what looks like a Christmas party with a bunch of African-American voters. They look like they’re having a good time. And if you weren’t careful enough to look at the hat and notice the misspellings, or to look for something like that, you would have thought, wow, he’s at a Christmas party with a bunch of African-Americans. And then there was another photo of him with a group of young men. Now, again, I want to offer a caveat to our audience: we’re trying to be careful, we will definitely be framing these online and on air with very visible, easy-to-spot captions, because we don’t want to amplify disinformation. And Claire, I want to ask, look, what is the harm in these? On the one hand, we know that these were not real human beings. On the other hand, President Trump does have support from real-life African-Americans, right? So, where’s the danger here?

WARDLE: Well, the first thing I’ll say is that this was generated by A.I., but the same kind of content could have been created with more basic editing. So, let’s just say that first of all. But the second thing is, if we as a society just go, well, it doesn’t really matter, does it, he does have some friends, this could have been the case, then we kind of lose the foundation upon which we’re all making decisions and understanding the world around us. So even, I think, in these kinds of examples, where you ask, well, what’s the harm? There is harm in the idea that we don’t know what to trust, if people just go, well, it doesn’t matter. So, I think we have to label, we have to make it clear when this is A.I.-generated or if it has been photoshopped. It’s important that we know and have an accurate historical record of what did actually happen or not. And that’s what we have to keep reminding each other. Many disinformation actors don’t have a very big audience. They might be on a kind of niche online site and they might have a couple of people who follow them. What they’re desperate for is the megaphone that the media brings. So, so much of the tactics and techniques that we see shared, it’s not a particularly clever use of technology. The vulnerability is, how can I get the media to cover it, how can I get the outrage, how can I get people to hate it, like it on Twitter? And that’s what we have to be careful about, which is ultimately having our brains hijacked. The attention economy is all about that. And unfortunately, that is a bigger problem, I think, than the technology itself.

SREENIVASAN: Claire, I want to ask a little bit about the Biden robocall that became quite famous before the New Hampshire primary. And that’s a long way away from the general election. And I think it was the first time a lot of people understood how good the technology was. I want to roll a clip of this audio here. This is really not Joe Biden’s voice. Again, for our audience, this is a piece of A.I.-generated audio; it is not the president.

(BEGIN VIDEO CLIP)

AI RENDERING OF JOE BIDEN’S VOICE: What a bunch of malarkey. You know the value of voting Democratic and our votes count. It’s important that you save your vote for the November election. We’ll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.

(END VIDEO CLIP)

SREENIVASAN: Now, that is allegedly the voice of Joe Biden, which it is not, asking people not to go out and vote. I mean, that’s a pretty powerful tool when it comes to a close election, whether it’s in New Hampshire or the general election. So, Claire, I wonder, what are you studying when it comes to how audio is being used to manipulate people? Because honestly, for most people, if it wasn’t the content that was a little suspicious, I would have thought that was Joe Biden’s voice.

WARDLE: Yes. And as Sam has said, unfortunately, we see generative audio messages used quite frequently, because our eyes aren’t very good at this, but our ears are even worse. We don’t have good checks in terms of saying, is that real or not. And I think when it comes to scams, and we’re already seeing this in other countries, you get a call and it sounds like your sister or your mom saying they’re in trouble. So, we have to worry about well-known voices. But the other concern here, beyond family and friends, and I would argue especially in an election context, is the use of local trusted messengers, maybe a local faith leader whose voice gets used to say, I wouldn’t bother voting, or, I’m worried if you vote, it might be dangerous. So, that does mean in this election cycle, we all have to be aware of how our communities might potentially be at risk through some of these technologies. Not to make people overly concerned, but to say, that is a potential threat, let’s be aware: if we hear something, double check it before we just trust it implicitly.

SREENIVASAN: Sam, what’s your tip to people, especially when it comes to audio? Why is it so hard for us to discern fact from fiction?

GREGORY: The first thing we say, and I think, unfortunately, often the guidance we give people places all the blame or pressure on them to detect the little glitch in the audio or spot something in one of those images we just saw, and that’s not the right strategy in the long run. These keep getting better. There are glitches, and if we listen closely, there are things an expert might hear. So, first of all, we’ve got to take the pressure off people to say, look out for the glitch. That makes it hard, though, because it means that our first strategy when we listen to something, which is just to listen closer, doesn’t necessarily work well. It’s particularly hard then to do the next stage with audio, which is, for example, to see if this comes from a manipulated original or is something else, right? With, for example, the Donald Trump A.I. images before, you could do a reverse image search, you could try and see if there’s another source. You can’t do that with the audio. We don’t have a way to search for other audio sources.
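
[To make the reverse-image-search idea concrete, here is a minimal sketch of the underlying concept: comparing a perceptual hash of a suspect picture against hashes of known originals. This is an editorial illustration using the open-source imagehash library, not a tool either guest named, and the file names are placeholders.]

```python
# Rough sketch of the idea behind a reverse image search: perceptual hashes of
# visually similar images are close, so a suspect photo can be matched against
# known originals. File names here are placeholders, not real data.
import imagehash
from PIL import Image

suspect = imagehash.phash(Image.open("suspect_photo.jpg"))
known = {name: imagehash.phash(Image.open(name))
         for name in ["original_a.jpg", "original_b.jpg"]}

for name, h in known.items():
    distance = suspect - h  # Hamming distance between perceptual hashes
    # A small distance suggests the suspect image is a copy or light edit of a
    # known original. There is no equally simple equivalent for a stray audio
    # clip, which is the gap Gregory describes.
    print(name, distance, "likely related" if distance <= 10 else "probably unrelated")
```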

SREENIVASAN: Sam, you’ve also been looking into what you call resurrection deepfakes. First, our audience might be surprised that the idea even exists. But we saw this really being played out in the Indonesian elections. Explain what this A.I. trend is.

GREGORY: So, this is a trend that’s been gaining speed over the last two or three years, which is when you recreate someone who has passed and use them, typically for political purposes. So, we’ve seen resurrected journalists calling out state violence in Mexico, deceased victims of the Parkland shooting speaking from beyond the grave. And most recently, in the Indonesian election, one of the parties brought back the former Indonesian dictator, Suharto, to ask people to support the party. And it’s very complicated, because it’s all about taking someone’s previous presence and making them say things they never said in real life. And so, it goes back to a lot of the issues that we really need to grapple with when we look at A.I. in the public domain: consent, like who consents to the use of these images and to turning them into this audio and video, and disclosure.

SREENIVASAN: There was recently a deepfake that went around, and I want to play a clip of it. This is of Paul Harvey, who was a respected broadcaster who worked in the United States for decades. He had a real signature sound. And a speech he once made has been completely manipulated for political gain here. Let’s roll this.

(BEGIN VIDEO CLIP)

AI RENDERING OF PAUL HARVEY’S VOICE: And on June 14, 1946, God looked down on his planned paradise and said, I need a caretaker. So, God gave us Trump. God said, I need somebody willing to get up before dawn, fix this country, work all day, fight the Marxists, eat supper, then go to the Oval Office and stay past midnight at a meeting of the heads of state. So, God made Trump.

(END VIDEO CLIP)

SREENIVASAN: Claire, originally Paul Harvey had made that speech, and it was “So God Made a Farmer,” right? And it’s that completely different speech that has been altered. Paul Harvey died in 2009. This was not done with his consent. And I wonder how much this notion of nostalgia and emotion factors into whether a piece of misinformation or disinformation seems more believable.

WARDLE: We know from psychological studies that the way our brains work is that we rely very much on heuristics, particularly at a time when we’re overwhelmed. So, when we have heard a voice before, or it reminds us of something, even with misinformation, the more you see something, even if it’s fact-checked, there’s something to that. So, this kind of playback, this nostalgia, this, oh, I’ve heard that voice before and I trusted it before, all of that is exceptionally powerful. So, we’re seeing a pattern here of relying on another time, on the feelings people have around earlier political moments that were less charged, or of using figures that people had relationships with. So, that’s what’s happening here. And I would argue it’s exceptionally powerful.

SREENIVASAN: Yes. Sam, we’re also starting to see politicians use technology as sort of an excuse to cover up things that actually might have happened. There was a Lincoln Project video and it showed — well, let me just roll that clip here.

(BEGIN VIDEO CLIP)

UNIDENTIFIED FEMALE: Hey, Donald, we noticed something. More and more people are saying it, you’re weak. You seem unsteady. You need help getting around. And wow.

DONALD TRUMP, FORMER U.S. PRESIDENT AND U.S. REPUBLICAN PRESIDENTIAL CANDIDATE: An anomonis — really anomonis country and —

(END VIDEO CLIP)

SREENIVASAN: Sam, what’s interesting is, right after that compilation of videos that the Lincoln Project team had assembled of President Trump looking these ways, taking these excerpts, he went on his own social media platform and said, look, “these are losers,” they’re using A.I., these are all fake TV commercials. And I wonder whether or not this kind of plausible deniability changes our understanding and expectation of what is real and what is not. How do we maintain some integrity, that there is a real fact versus what’s manipulated? It seems that if you see 20 of these examples, after a while, you’re just going to assume it’s all bad.

GREGORY: This is a phenomenon we’re seeing globally, this plausible deniability. And it really relies on the fact that people are often confused about what A.I. can do, and they’re confused about their ability to detect or discern it. So, it’s incredibly easy for people in power, when there’s something compromising, to say, hey, A.I. could have made it. Hey, A.I. is capable of this. And to some extent, that’s not true. Some of the examples we see are people exploiting our fears of A.I., right? And I think a lot of the exploitation is actually around people’s fears of A.I. versus the reality. But it also ties into people’s very deep sense that maybe they were fooled by the pope in the puffer jacket last year, this sense that maybe we can’t discern. And I think this is a really challenging phenomenon, also because it’s very easy to say you can’t believe this image, this audio, or you can’t believe any image or audio. But it’s increasingly hard to conclusively prove that something was made with A.I. So, one of the experiences we’ve had in our deepfakes rapid response force is, you know, we’ll get cases where someone has been caught on a compromising tape saying something, and then, when it becomes public, they instantly come out and say, this was made with A.I. And then it may take several days for experts to verify that, in fact, it is 90 percent likely to be A.I.-made or 90 percent likely to be authentic. And in the gap, the public hears, well, A.I. can be used to fake almost anything. And it does, I think, start to undermine our trust.

SREENIVASAN: Sam, what’s your tip? Whether it’s a seasoned journalist or one of Claire’s students, what do you say to somebody who’s trying to verify a fact? What are the simplest tools you suggest they use, and what’s the mindset they should approach it with?

GREGORY: So, I always describe it as, we need to go back to thinking about how A.I. adds on to our media literacy and our verification skills already, you know, how it complicates that, right? So, I use the acronym SIFT, which is: first, Stop. Don’t let your emotions carry you away when you see something that seems too good to be true or too convincing to be true. Then, Investigate the source, try to find out where this comes from. Then see if anyone else is covering it: F, Find alternative coverage. And then Trace the original, see if there’s an original. And the reason I’m sharing those is, I think, for many of the examples we’re looking at, for example, if we traced back those deepfakes, we’d see they came from a satirical site. If we looked at the Trump images on reverse image search, we’d find that they had news coverage around them, right? So, doing those steps first, so we don’t have to all do the same work of trying to be forensic analysts, I think is absolutely critical, and it’s learning from our existing skills. Then for journalists, there is a gap, and one of the things we’ve been calling out is that there’s a gap in access to the more technical tools for a broad range of journalists to do the analysis and then know how to explain it to the public. And that’s something we’ve got to address as A.I. tools get better. And that also requires really putting the onus on platforms and other people who are creating the A.I. tools to make it as easy as possible, both to detect the presence of A.I., to be able to label it, and also to authenticate the real, right? Again, I said I don’t want us to place pressure on the individual to be a forensic analyst, but we shouldn’t place all the pressure on news organizations to do this all themselves. We need to find much better ways to make A.I. detectable, to make it easy to label it when we see it in our timelines or when we encounter it in the wild, and also to make it easier to authenticate the real, to be able to show when something was made in a particular time and place. And if we do those together, we’ll be in a much more resilient place.
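
[As an editorial aside, the SIFT habit Gregory describes — Stop, Investigate the source, Find alternative coverage, Trace the original — can be pictured as a simple checklist. The sketch below is our own illustration, not a WITNESS tool; the code only structures the questions, and the answers still come from the reader.]

```python
# A minimal sketch of the SIFT checklist (Stop, Investigate, Find, Trace).
# The field values in the example are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class SiftCheck:
    claim: str
    stopped_before_sharing: bool = False          # S: pause before reacting or resharing
    source_investigated: str = ""                 # I: who posted this, and what is their track record?
    alternative_coverage: list[str] = field(default_factory=list)  # F: is any known outlet covering it?
    original_traced: str = ""                     # T: earliest version found (e.g., via reverse image search)

    def ready_to_share(self) -> bool:
        """Only pass a claim along once every SIFT step has been filled in."""
        return (self.stopped_before_sharing
                and bool(self.source_investigated)
                and bool(self.alternative_coverage)
                and bool(self.original_traced))

check = SiftCheck(claim="Photo of a candidate at a Christmas party")
check.stopped_before_sharing = True
check.source_investigated = "Anonymous account created last month"
check.alternative_coverage = ["News-outlet verification piece on the image set"]  # hypothetical entry
check.original_traced = "Earliest post traced to a satire page"
print(check.ready_to_share())  # True only because every step was completed
```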

SREENIVASAN: Claire, where are we in that conversation with the technology platforms and the big technology companies that are creating these tools in the first place? Are they able to add appropriate watermarks that are easy to trace or find? Or, if you’re Google, can you update the Chrome browser to automatically flag that this is a synthetic piece of media?

WARDLE: So, about two weeks ago, OpenAI launched a new tool, Sora, which allows you to essentially write a sentence and get a 60-second video. And for the first five minutes, I saw many people scrolling in awe of what had been created. And then, in the next 10 minutes, I was like, how is this allowed? It’s like putting a really, really fast car that can drive twice as fast on the interstate, but nobody has to wear a seatbelt and, you know, there are no rules of the road. I mean, I find it astonishing that this can just be rolled out without any of those kinds of safeguards. And we knew this was coming. And so, we can’t just, you know, put a watermark on that can be photoshopped out. We need really significant and sophisticated technology that would embed those kinds of markings. But the idea that they can launch these new products without that is astonishing. And we don’t have a regulatory framework right now. I mean, imagine a new food or a new car, we’d have to have that. These guys are, you know, creating all sorts of things. So, I am concerned that in this very short time period before the election, these new tools are being rolled out at a pace that’s much faster than the speed at which we as consumers can catch up and adapt.
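
[Wardle’s point that a simple watermark “can be photoshopped out” can be illustrated with a toy example. The sketch below is our own, deliberately naive scheme — not any vendor’s actual watermarking method — that hides a marker in an image’s least significant bits and then shows that an ordinary re-encode wipes it out; robust provenance needs something far more sophisticated.]

```python
# Toy illustration of a fragile watermark: hide a marker in pixel LSBs, then
# show that a lossy re-encode (like a screenshot or re-upload) destroys it.
import numpy as np
from PIL import Image

MARKER = "AI-GENERATED"

def embed_marker(img: Image.Image, marker: str = MARKER) -> Image.Image:
    """Write the marker's bits into the least significant bits of the first pixels."""
    bits = [int(b) for byte in marker.encode() for b in f"{byte:08b}"]
    arr = np.array(img.convert("RGB")).copy()
    flat = arr.reshape(-1)
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits  # overwrite LSBs with marker bits
    return Image.fromarray(arr)

def read_marker(img: Image.Image, length: int = len(MARKER)) -> str:
    """Read back the first `length` bytes hidden in the LSBs."""
    flat = np.array(img.convert("RGB")).reshape(-1)
    bits = flat[:length * 8] & 1
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8))
    return data.decode(errors="replace")

if __name__ == "__main__":
    original = Image.new("RGB", (64, 64), "gray")
    marked = embed_marker(original)
    print(read_marker(marked))                    # marker survives a lossless copy

    marked.save("marked.jpg", quality=85)         # lossy JPEG re-encode
    print(read_marker(Image.open("marked.jpg")))  # marker is typically destroyed: garbage bytes
```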

GREGORY: I’m nervous. I’m worried that we’re panicking and our panicking is driving hasty actions and a real degradation of trust. And in some ways, I’m also excited. I think — you know, I come from a context of really also thinking about the creativity of video and of images. And so, there’s potential there. But overall, I’m nervous and I want us to prepare much more actively, as Claire said, but not to panic because that plays into the hands of people who want to use these tools maliciously.

SREENIVASAN: Claire Wardle, the co-director of the Information Futures Lab at Brown University and Sam Gregory, the executive director of WITNESS. Thank you both for joining me.

WARDLE: Thank you very much.

GREGORY: Thank you.

About This Episode

Former U.S. Ambassador to Russia John Sullivan discusses Putin’s rule and the danger of waning U.S. support for Ukraine. Baroness Sayeeda Warsi and David Baddiel join the show to discuss their new podcast, “A Muslim and a Jew Go There.” Deepfakes are infiltrating the 2024 election cycle. Just how will this impact voters? Misinformation experts Sam Gregory and Claire Wardle discuss what’s at stake.
