12.17.2021

Fmr. Google Insider on Whistleblowers, Unions and AI Bias


BIANNA GOLODRYGA: Well, our next guest is one of the most prominent black women working in artificial intelligence today. Timnit Gebru was a leader of Google’s ethical AI team who says she was fired after speaking out about how some AI systems are reinforcing social inequalities. Google, however, says that she resigned. Now, she’s tackling the problem head on, as she’s been explaining to Hari Sreenivasan.

(BEGIN VIDEO CLIP)

HARI SREENIVASAN: Thanks, Bianna. Dr. Timnit Gebru, thanks for joining us. First, tell us a little bit about the new institute that you just formed. What does it study?

TIMNIT GEBRU, FOUNDER, DISTRIBUTED ARTIFICIAL INTELLIGENCE RESEARCH INSTITUTE (DAIR): I formed a new institute that’s called the Distributed Artificial Intelligence Research Institute, or DAIR, that’s the acronym. And I’m hoping for it to be a positive model for doing research in artificial intelligence in general. So, that means not just critiquing AI systems after they have been built, but also a positive model of how to do research in artificial intelligence that centers the voices of members of marginalized communities who are currently mostly harmed by the technology and not benefitting from it.

SREENIVASAN: So, explain that. Does artificial intelligence weigh differently on different communities?

GEBRU: I think that artificial intelligence systems affect different communities differently. So, for instance, some people talk about dataset bias. Some of the ways in which AI systems are currently trained use lots and lots of data to make inferences about things, whether it’s people or other things. And if that data is biased, then your inferences are going to be biased. I can give you a couple examples of that. For instance, there is an application called predictive policing that the LAPD actually used to use, something called PredPol, and this application is supposed to tell you crime hot spots before they occur. And then they send more police to those hot spots. So, now, imagine what kind of data they are using. They are using data to train their algorithm on who was arrested for a particular crime, not who committed the particular crime, but who was arrested for it. And knowing the kinds of issues the U.S. has with biased policing, that’s going to disproportionately target black and brown communities, right? So, now, you send more police to these neighborhoods, and then you arrest more people because you’ve sent more police, and you think that your algorithm is actually telling you something accurate. And so, it kind of amplifies the societal issues we already have. So, that’s one example. But the issue is not just about a biased dataset, right? In my view, whether or not the dataset is biased, I don’t think we should have things like predictive policing or other surveillance methodologies like face recognition and face surveillance. That’s another issue where people talk about the impact on people of darker skin, because we have shown that the error rates on people of darker skin are much higher, especially darker-skinned women. But even if this technology worked, you know, perfectly well and was able to surveil everybody equally, it doesn’t mean it would be something that we want to use, because certain communities are much more surveilled than others, and those communities are going to be negatively impacted.
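The feedback loop described here can be made concrete with a minimal sketch, assuming invented numbers and not any vendor’s actual algorithm: two neighborhoods have identical underlying crime rates, but patrols are repeatedly reallocated toward whichever neighborhood has produced more past arrests, so an initial policing skew compounds over time. A toy simulation in Python:

import random

random.seed(0)

TRUE_CRIME_RATE = [0.05, 0.05]   # identical underlying rates in both neighborhoods
patrols = [8, 2]                 # historical policing is uneven to start
arrests = [0, 0]

for day in range(365):
    for hood in (0, 1):
        # Arrests depend on patrol presence, not just on crimes committed.
        for _ in range(patrols[hood]):
            if random.random() < TRUE_CRIME_RATE[hood]:
                arrests[hood] += 1
    # "Predictive" step: allocate tomorrow's patrols in proportion to past arrests.
    total = sum(arrests) or 1
    patrols = [max(1, round(10 * arrests[h] / total)) for h in (0, 1)]

print("arrests:", arrests)  # heavily skewed toward neighborhood 0 despite equal crime rates

Running it shows the arrest record in neighborhood 0 pulling far ahead even though the underlying rates never differ, which is the sense in which the algorithm appears to confirm the bias in its own training data.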

SREENIVASAN: So, backing up just a step. You’re trying to train a computer to think by giving it, maybe, examples of what’s good and what’s bad and then kind of unleashing it on a whole pile of things. So, I guess, you know, with that initial bias of datasets that you’re talking about, it seems that whatever biases I might have in determining originally what’s good and what’s bad, I’m basically just mapping that directly onto the machine, and it’s going to be able to amplify that.
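The label bias Sreenivasan describes can be sketched the same way, with invented data: the two groups below behave identically, but a hypothetical labeler flags group B as "bad" far more often, and even the simplest learner, one that just fits the majority label per group, reproduces that skew rather than anything about the behavior itself.

from collections import Counter

# Invented data: identical behavior in both groups; only the labeler's flag rate differs.
examples = []
for group, flag_rate in (("A", 0.2), ("B", 0.7)):
    for i in range(100):
        label = "bad" if i < flag_rate * 100 else "ok"
        examples.append((group, label))

# The simplest possible "model": learn the majority label for each group.
majority = {
    g: Counter(lbl for grp, lbl in examples if grp == g).most_common(1)[0][0]
    for g in ("A", "B")
}
print(majority)  # {'A': 'ok', 'B': 'bad'} -- the learned rule mirrors the labeler, not the behavior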

GEBRU: So, I think there are disagreements between various communities about what people should try to achieve. So, for instance, there is something that people talk about called Artificial General Intelligence, AGI. And for some people in AI, this is the holy grail. What they want to achieve is this one superintelligent being. It seems kind of like a God to me, and I don’t agree with that. I think that even nature is much smarter, right? You don’t have one human that does everything. You have many different kinds of humans, right? So, I don’t believe that we should try to create a machine that thinks and then can do good and bad. I think that we should try to create machines that can help us with various tasks, but this technology can also be applied to things that we don’t want it to be applied to, right? So, one thing that I really don’t agree with is the use of autonomous weapons. And so, right now, AI is being developed by all sorts of regimes and countries and actors to create autonomous weapons, right? Like you said, we can amplify some of the societal issues that we have with this technology.

SREENIVASAN: The internet in some large part came about because there was an impetus from the Defense Department in the United States to try to create a network that wouldn’t fall down if we were attacked, right? Now, if government is one of those key drivers in how new technologies come to market, I wonder about the other side of the coin that you mentioned, big companies. If those are the only ones that have deep enough pockets to invest in this space, then aren’t we essentially catering to their incentives? Meaning, they are the ones that will decide, based on, say, their bottom line or their shareholder value, what comes to market?

GEBRU: Absolutely. And that’s — so, when I talk about how we can reduce the harms of AI, I always point to antitrust measures and worker protections as well. And people might not see the connection, but if the majority of the AI being developed is by corporations and if workers inside these companies can’t point out the issues without being fired, like I was, then we’re not going to see the harms right away, right? People on the inside can warn us about these issues because they see what’s going on on the inside, like, you know, the whistleblowers, Frances Haugen, for example, at Facebook. So, we need much tougher whistleblower protection laws than we have right now. And we need to punish companies when they go union busting and attack workers. So, a lot of the big tech companies that are currently developing AI that they unleash onto us without testing it properly are also engaged in, you know, antiworker organizing practices, right, and union busting practices. And unless we, you know, curb the power of the companies that produce these harmful products and increase the power of the people who work against them and the public, we will never be able, in my view, to stop the harms of AI. And on the other hand, it also means investing. So, investing in groups of people who are trying to have alternative futures and create technologies that actually benefit them, rather than the current two incentives that we have, which are defense, you know, or how to kill more people more efficiently, and how to get more money as a corporation. So, I think we need an alternative. We need our government. We know that our government can invest a lot of money in technology. The issue is, when our government invests a lot of money in technology, the impetus, as you mentioned, is always something to do with defense. Shouldn’t we have another impetus? Shouldn’t we say, how can we make the livelihoods of our people better? You know, how can we work on technology that does that? So, that’s what I’m hoping to see.

SREENIVASAN: How much of your world view do you think is shaped by the fact that you emigrated to the United States? You are (INAUDIBLE) who grew up in Ethiopia and came here via Ireland. You know, and you have a different perspective than most of the engineers sitting — or who used to sit in the cubicles next to you.

GEBRU: Yes. I mean, I remember in graduate school, one guy said, oh, it must be so great, you’re black and you’re a woman and, you know, you just get all this free stuff, you know. And I’m like, OK, that’s one way to think about it, right? That is what people in these positions, like I said, in engineering, et cetera, would say to me, right? Like it was like winning a lottery for them that, you know, I check all those boxes. But I tried to tell him to try, you know, being a refugee at 15 and, you know, leaving your country because of war. And so, when I look at my colleagues and their points of view about our societies, it’s very different from mine. And that point of view is what is imbued in technology. So, for instance, in 2016 there was a ProPublica article that talked about a startup that was selling software purporting to determine someone’s likelihood of committing a crime again. And this software was being used as input by judges either to set bail or for sentencing or whatever. And the moment I heard about that, I was panicking, because I have had negative experiences with police, but I have also had experiences with the people who would develop this type of technology, right, my colleagues, my lab mates and other students, right, and what their views are. So, I could see both of those things. And that’s one of the reasons that I wanted to make sure that I worked on minimizing the negative potential impacts of artificial intelligence.

SREENIVASAN: You know, one of the reasons that you started this institute, and frankly, one of the reasons that you are as widely known a name as you are now, is because of what happened with your time at Google. And right off the bat, there’s even a core disagreement in how they see your departure versus how you see it. They say they accepted your resignation, and you say you were fired. What happened?

GEBRU: I was fired. So, what happened was that I was at Google for two years and I was co-leading the Ethical AI team. It was a research team that was founded by my co-lead, Meg Mitchell, who was also later fired when she spoke up against my firing. So, we were a research team doing exactly the sort of thing my research institute hopes to do: work on reducing the harms of artificial intelligence and work on AI systems that we think would benefit people. I believe, and many of us believe, that in order to create beneficial AI, you need to have organizations that are not discriminatory. It starts from there. It starts from those values. And at Google, I faced a lot of discrimination and I faced a lot of hostility. And so, I spoke up about that a lot, about the harassment that women would face, about what we deal with as black people, et cetera. And so, because of that, there was always a lot of friction. We used to joke that there’s probably one of those detective, you know, whiteboard things that HR has and my face is on it, you know, somehow. What I’m trying to say is that what happened to me didn’t come out of the blue. So, I had that. And then, finally, the nail in the coffin was that I wrote a paper, just doing my job, that was trying to warn people about the dangers of what we call large language models. And people at Google, after my paper went through the internal processes just fine, asked me to either retract the paper or take the names of the Google authors off the scientific paper. And I said that I would agree to take the names of the Google authors off the paper if we had a conversation about why and how this decision was made and what process we were going to have in the future, because as a researcher, I can’t just work at a place where you can randomly tell me to retract a paper; that, you know, compromises my integrity as a researcher. And they said, oh, no, we can’t do that, and we accept your resignation as a result. And I found out from my direct report that I had apparently resigned, because my manager didn’t know either. That’s not how you resign at Google. There is a whole paperwork and a whole process you go through to resign. So, that’s what happened. And my guess is that they thought I would be quiet about it and I would just say, oh, I guess I resigned, and leave, you know. But that was not me. So —

SREENIVASAN: You know, not many employees, after they resign, cause such a response from the CEO. I just want to read Sundar Pichai’s statement in an e-mail to staff, saying in part, I’ve heard the reaction to Dr. Gebru’s departure loud and clear. It seeded doubts and led some in our community to question their place at Google. I want to say how sorry I am for that, and I accept the responsibility of working to restore your trust. Do you believe that?

GEBRU: No. It was like, I’m sorry you feel that way. This came after the uproar, right? After I got fired, they were hoping to keep everything quiet, but I wasn’t quiet. And a number of people came — you know, they did so much organizing on my behalf, former and current Google employees and many others. And they kept on doubling down. The first reaction was a very different reaction. A senior VP of research, Jeff Dean, wrote an e-mail to the research community, which he later made public, first attacking my work, saying it was subpar; then saying that I told people to stop working on diversity, which, you know, I don’t think anybody would believe, because I spent so much time working on diversity; and then misrepresenting how I approached the whole situation, saying I tried to doxx (ph) people in the review system, almost. And that created a lot of harassment for me. That sort of summoned a bunch of white supremacists from the dark web who did a lot of coordinated attacks against me.

SREENIVASAN: Why is it that, even at this stage of the tech industry, there are still so many people of color who feel not just that it is not a welcoming environment, but that it is an actively hostile one?

GEBRU: I think that even though we talk about all the progress that’s been made in civil rights, we’re still living in a white supremacist society. And inherently, if you don’t pay attention and you don’t intentionally do things differently, that’s what’s going to happen. That’s the default. It’s a patriarchal white supremacist society. So, the default, if you don’t act differently, is going to be that. So, you’re going to experience a lot of sexism, and when you talk about harassment, you’ll have retaliation. And the same with racism. And so, that’s, for me, why I don’t think that we should expect any of these corporations to do things out of their own good will. We have to have regulation that forces them to do something differently. And so, for me, that’s why I focus, for example, on worker organizing, because that is a shifting of power, right, from these multinational corporations to workers, who at least can have a say. And this is really not a radical idea, right? I mean, workers should have some amount of organizing power. So, to me, in a nutshell, that’s what it is. I mean, racism exists and it is the default, and if we don’t intentionally do something about it, that’s what’s going to come to the surface.

SREENIVASAN: Dr. Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute, thanks so much for joining us.

GEBRU: Thank you for having me.

About This Episode

Dr. Tom Frieden and Dr. Richard Horton give an update on the pandemic. NASA Acting Senior Adviser on Climate Gavin Schmidt discusses natural disasters and the climate crisis. “The Great Successor” author Anna Fifield looks back on the past decade of Kim Jong Un’s rule in North Korea. Former Google insider Timnit Gebru talks whistleblowers, unions and AI bias.
