09.25.2023

Will Big Tech End Privacy As We Know It?

Clearview AI claims to be able to identify anyone with 99% accuracy, based on just one photo of their face. The controversial software first came to the attention of New York Times reporter Kashmir Hill back in 2019. She joins Hari Sreenivasan to discuss her new book, “Your Face Belongs to Us,” a deep dive into her reporting on the company and the dangers of this new technology.


CHRISTIANE AMANPOUR, INTERNATIONAL HOST: Now, as A.I., which we’ve just been discussing, continues to change everyday life, it’s also rewriting our right to privacy. One small A.I. company in the U.S. claims to be able to identify anyone with just about 99 percent accuracy based on just one photo of their face. “New York Times” tech journalist Kashmir Hill has been reporting on the controversial software in her new book, “Your Face Belongs to Us,” and she joins Hari Sreenivasan to discuss the dangers of this technology.

(BEGIN VIDEO CLIP)

HARI SREENIVASAN, INTERNATIONAL CORRESPONDENT: Christiane, thanks. Kashmir Hill, thanks so much for joining us. First, let’s start with the title, “Your Face Belongs to Us.” Who is the us? How did they get my face?

HILL: Well, the us in the book is primarily Clearview AI, which is this company I heard about a few years ago that scraped billions of photos from the public internet without people’s permission to create this face recognition app that they were secretly selling to the police.

SREENIVASAN: And how successful is Clearview?

HILL: Well, Clearview works with thousands of police departments. They have $2 million in contracts with the Department of Homeland Security, and they have a contract with the FBI. They’ve received funding from the Air Force and the Army to work on facial recognition glasses, augmented reality glasses that you can wear to identify someone. So, they have had success selling their products to law enforcement agencies.

SREENIVASAN: So, give me an idea, in the grand scheme of biometrics, from fingerprints to taking a picture of your face and identifying it, what goes into facial recognition to make it work, and how good is the stuff you’re talking about?

HILL: Yes. Scientists and engineers have been working on this technology for decades. It used to not work very well; it was very flawed, particularly when it came to people who were not white men. What is happening is that a computer is looking at all the information from a face, from a digital image. And, you know, if it’s trained on enough faces, and there are a lot of faces now in the internet age, it’s able to figure out what is unique from one face to another. And so, these face recognition apps essentially go out and look for a face that matches the face they’re given. They can work pretty well at finding you, but they might also find doppelgangers. And that’s been a problem in police use of the app. Several people have been arrested for the crime of looking like someone else.
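For readers curious about the mechanics Hill is describing, here is a minimal, hypothetical Python sketch of the matching step. Everything in it is an assumption for illustration: the embedding model (any model that converts a face image into a numeric vector), the names, and the similarity threshold. Real systems search billions of vectors with specialized indexes; this is not Clearview’s implementation.

```python
# A minimal, hypothetical sketch of face matching: a model (assumed,
# not shown) turns each face into a numeric vector, an "embedding,"
# and search means finding the stored vector closest to the probe's.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two face embeddings; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, database: dict[str, np.ndarray],
               threshold: float = 0.6) -> str | None:
    """Return the name of the closest stored face, or None if no score
    exceeds the (arbitrary) threshold.

    A high score is only a lead, not proof of identity: doppelgangers
    can also score above the threshold, which is how look-alikes end
    up wrongly flagged.
    """
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```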

SREENIVASAN: This technology that the companies have doesn’t require a nice well-lit headshot of me looking directly into the camera, right?

HILL: Yes. This is what police officers told me when I first heard about Clearview AI. They said the facial recognition technology they had been using before that, which worked just on, you know, criminal mugshots and state driver’s licenses, didn’t work that well. But when they started using the Clearview AI app, which had newer technology, a fresher algorithm, it would work even when somebody was turned away from the camera, wearing a hat, wearing glasses. I talked to one financial crimes detective in Gainesville named Nick Ferrara, and he said, you know, it’s incredible. I had a stack of wanted fraudsters on my desk, I ran them through Clearview AI, and I got hit after hit after hit. And he said, I’d be their spokesperson if they wanted me.

SREENIVASAN: Is this leap forward that we’re seeing now, is it that the technology has gotten better or is it that essentially the ethics have gotten looser?

HILL: The technology has gotten better. But one thing I discovered while doing the research for the book is that both Google and Facebook developed this technology internally, as early as 2011. Eric Schmidt, the then-chairman of Google, said that it was the one technology Google had developed and decided to hold back because it was too dangerous. Facebook engineers, at one point, rigged up a smartphone on a baseball cap, and when you turned your head and the camera zoomed in on somebody, it would call out their name. But Facebook, too, decided to hold it back. And these are not companies known as privacy-protective organizations. They’ve pioneered many technologies that have changed our notions of privacy, but they felt this one was too dangerous. What was different about Clearview AI wasn’t that they made a technological breakthrough; it was an ethical one, in that they were willing to do what other companies hadn’t been willing to do.

SREENIVASAN: Wow. You know, I had a chance to interview the CEO of Clearview a couple of years ago. And at the time, he said they had not had any sort of falsehoods, any misidentifications. And yet, what you’re talking about and writing about in “The Times” these days is a series of instances where people have been misidentified by facial recognition software for crimes that they didn’t commit and had to, well, suffer because of it.

HILL: Yes. I discovered one case that appears to involve Clearview AI, a man named Randal Reid. He lives in Georgia. He gets pulled over one day by the police, and they say there’s a warrant for his arrest. He is arrested. He is held in jail for a week. The crime was committed in Louisiana, and he had never even been to Louisiana. And so, he’s sitting in jail waiting to be extradited, with no idea why he’s tied to this crime. It turns out that the detectives had run Clearview AI on surveillance footage, it had matched him, and he was arrested even though he lived hundreds of miles away from where the crime occurred.

SREENIVASAN: And what eventually happened? How was he kind of exonerated? He was sort of guilty until proven innocent by this technology.

HILL: So, he got a good lawyer. And that’s what happens to these people who are falsely arrested: they have to hire lawyers to defend them. The lawyer actually went to the consignment stores where his client was accused of using a stolen credit card to buy designer purses, and he asked to see the surveillance footage. One of the store owners showed it to him, and he said, wow, that guy actually does look a lot like my client, but it’s not him. He called the detective, and the detective revealed to him that they had used a facial recognition app in the case. And so, he got a bunch of photos of his client, and videos his client had made of his face, gave them to the police, and they realized that they had the wrong person, and the case was dropped.

SREENIVASAN: There was a recent case you wrote about, a woman who was eight months pregnant who was taken to jail after a misidentification.

HILL: Yes. There’s a woman named Porcha Woodruff. It happened on a Thursday morning in February. She was getting her two young children ready for school when police turned up at her door saying she was under arrest for robbery and carjacking. And she was just in shock. She couldn’t believe it. She said, well, is the person who committed this crime pregnant? You know, look at me. And she got taken to jail, spent the day in jail, was charged, and, again, had to hire a lawyer. It was, again, a case of mistaken identity. She was arrested for the crime of looking like someone else. And after she spent the day in jail, she went to the hospital because she was so dehydrated and stressed out from being accused of this crime. It is actually the third time that this has happened in Detroit, which is where Porcha Woodruff lives. And in all of the cases that we know about where someone has been falsely arrested, the person has been Black.

SREENIVASAN: You know, we have heard about kind of algorithmic bias and bias in the structure of systems. How does that work when it comes to facial recognition?

HILL: Yes. I mean, facial recognition technology for a long time was really flawed when it came to how well it worked on different groups of people. And the reason was that when it was initially being developed, the people working on it tended to be white men, and they tended to make sure the technology worked well on them and people who looked like them. And so, they would train it on photos of white men. This was pointed out, and people ignored it. Facial recognition technology was deployed in the real world with this basic flaw in it. But the vendors have since taken the criticism to heart, and they now train their algorithms on more diverse sets of faces. And so, the technology has come a long way. But as you see from these false arrests, there are still disturbing outcomes, racist outcomes, in the way the technology is being deployed and misused.

SREENIVASAN: I’ve pointed out a couple of the cases where it’s been used and it’s come out with horrible outcomes. What are some cases where facial recognition has been used to actually catch the correct bad guys, so to speak?

HILL: Yes. I mean, facial recognition technology is a powerful investigative tool; that’s what police officers told me. They said it can really be a gamechanger in an investigation when all you have is somebody’s face. It’s been particularly popular with child crime investigators, who are often working with, you know, basically photos of abuse, and they have photos of not just the abuser but also the child who is being abused. And they have been using Clearview AI to try to solve these cases. And I have heard of many success stories. One of the crazier stories I heard was from a Department of Homeland Security agent who had a photo and was trying to figure out who the abuser was. An agent friend of his ran it through Clearview AI and found the guy standing in the background of someone else’s Instagram photo.

SREENIVASAN: Wow.

HILL: And that was, you know, the crumb that led him to figure out who that man was, that he lived in Las Vegas. He was able to arrest him and remove the child so the man no longer had access to her.

SREENIVASAN: This is one conversation about how it’s used in policing. But you’ve pointed out in the book multiple situations where it goes beyond policing. It is in grocery stores in the U.K. right now. It is in department stores in America today. You know, tell us a little bit about what happened when you tried to go into a Rangers game with someone. Well, tell me that story at Madison Square Garden.

HILL: Yes. So, I went with a personal injury lawyer; it was actually a Knicks game at Madison Square Garden. And, you know, we put our bags on the security belt to go through the metal detector. And as we were collecting our bags, a security guard came over, pulled this personal injury attorney aside, and said, oh, you know, you’ve been flagged. We use a facial recognition system here, and my manager is going to come over and he’s going to need to talk to you. And this attorney was one of thousands of attorneys who have been placed on a ban list at Madison Square Garden, enforced with facial recognition technology, because she works at a firm that has a case against the company. She’s not working on that case, but this is something that the owner of Madison Square Garden, James Dolan, has decided to deploy to punish his enemies. And so, the manager came over and basically gave her a note kicking her out and said, you know, you’re not welcome here until your firm resolves that litigation, drops the case against us.

SREENIVASAN: Interestingly, you point out that, for example, the owner of MSG could not use this tool at a facility that he owns in Chicago. How come?

HILL: So, he can deploy the technology against lawyers at his New York venues, like Madison Square Garden and Radio City Music Hall, but not at his Chicago theater, because Illinois has a law called the Biometric Information Privacy Act, presciently passed in 2008, which says that people basically have control of their biometric information, including their faceprint. If a company wants to use it, they need to get consent, and if they don’t, the company could have to pay up to $5,000 per face or piece of biometric information that it uses. And so, yes, Madison Square Garden has a ban list in Chicago, but it does not enforce it by scanning people’s faces as they enter the venue.

SREENIVASAN: So, this is happening in Illinois because of regulation, and there are also European countries following suit, right? I mean, when it comes to Clearview AI, it’s banned from operating in several countries.

HILL: Yes. After I first exposed the existence of Clearview AI, a number of privacy regulators around the world announced investigations. And privacy regulators in Europe, Canada, and Australia all said that what the company had done was illegal, said that it couldn’t operate in their countries anymore, and ordered it to delete their citizens’ information from the database. They also issued some fines. While they haven’t been able to get their citizens’ information out of the database, they have effectively kept Clearview AI from operating in their countries. And so, yes, we do live in a world right now where your face is simply better protected in some places than in others.

SREENIVASAN: So, let’s fast-forward five years out. We seem to be at an inflection point where we ought to be thinking about the impact and the ramifications this technology has on society and maybe, you know, in a best-case world, creating policies around it. But at the pace at which technology is changing and the pace at which legislatures are actually responding, where do you see this going in five years?

HILL: Yes. I think unless privacy laws are more uniformly passed and enforced, we could have a world where facial recognition is pretty ubiquitous, where people could have an app on their phone, and it would mean that when you’re out in public, you could be readily identified, whether you’re buying, you know, hemorrhoid cream at the pharmacy, or you go into a bar and meet somebody you never want to see again and they just find out who you are, or you’re having a sensitive conversation over dinner, assuming you have the anonymity of being surrounded by strangers, and if you say something interesting, maybe somebody takes a picture of your face and now they understand what you’re talking about. I think if we don’t rein it in, it could really change what it is to be anonymous.

SREENIVASAN: Did you speak with Clearview AI about it? I mean, because in the beginning of the book, what was interesting was just literally how they knew you were working on this story. But did they eventually talk to you?

HILL: Yes. Originally, Clearview AI did not want to talk to me. They were not happy I was going to be writing about them. There were strange red flags about the company. They had an address on their website for a building that did not exist. They had one fake employee on LinkedIn. They didn’t want to talk to me. And I ended up talking to police officers who were using the app, and oftentimes the police officers would offer to run my photo to show me how well the app worked. And every time this happened, the police officer would eventually stop talking to me. Two of the police officers said, you don’t have any results; there are no photos in the app for you. It’s really strange. And eventually, I found out that even though Clearview AI wasn’t talking to me, it was tracking me. It had some kind of alert for when my photo was uploaded, and it had blocked results for me. And one of the officers I talked to, minutes after he ran my face, got a call from the company telling him that they knew he had done this and he wasn’t supposed to, and they deactivated his app. And it really freaked him out. He said, I didn’t realize that this company would know who I was looking for, that they know who law enforcement is searching for and that they can control whether someone can be found. It was really a pretty chilling start to the investigation.

SREENIVASAN: So, when you eventually did speak to them, what did they say about these cases of misidentification, or the possibility of them?

HILL: So, at the time I first started talking to Hoan Ton-That at Clearview AI, they didn’t know of any misidentifications yet. And they said, you know, it’s a risk, but our technology is never meant to be used to arrest somebody; we are just trying to give police a lead in a case, and then they have to do more investigating, they have to find evidence. And so, he kind of distanced the company from responsibility for when this goes wrong.

SREENIVASAN: And has this changed how you do your reporting?

HILL: Yes. I mean, the first time that Hoan Ton-That ran my own photo through Clearview AI, once the company had stopped blocking the results, I was really shocked by the photos that came up. Photos of me walking in the background of other people’s photos. A photo of me actually with a source, somebody I had been interviewing at the time for a story, that I didn’t realize was on the internet. And it made me think, wow, I might need to be more careful in public; you know, you can’t just leave your phone at home and meet at a dive bar and assume that no one will know about it. And this is something the federal government has realized as well. While I was working on the book, the CIA sent out a warning to all of its outposts and said, our informants are being identified, their identities are being compromised, by new artificial intelligence tools, including facial recognition technology.

SREENIVASAN: The book is called “Your Face Belongs to Us.” Kashmir Hill from “The New York Times,” thanks so much for joining us.

HILL: Thank you so much, Hari.


About This Episode

German Foreign Minister Annalena Baerbock explains how Germany’s vow to help Ukraine has shaped her nation’s affairs. Former Tory minister and Member of Parliament Rory Stewart joins the show to discuss his new memoir, “How Not to Be a Politician.” New York Times reporter Kashmir Hill joins the show to discuss her new book, “Your Face Belongs to Us.”
