CHRISTIANE AMANPOUR, CHIEF INTERNATIONAL ANCHOR: Our next guest believes the threat of A.I. might be even more urgent than climate change, if you can imagine that. Geoffrey Hinton is considered the godfather of A.I., and he made headlines with his recent departure from Google. He quit to speak freely and to raise awareness of the risks. To dive deeper into the dangers and how to manage them, he is joining Hari Sreenivasan now.
(BEGIN VIDEO CLIP)
HARI SREENIVASAN, INTERNATIONAL CORRESPONDENT: Christiane, thanks. Geoffrey Hinton, thanks so much for joining us. You are one of the more celebrated names in artificial intelligence, you have been working at this for more than 40 years. And I wonder, as you’ve thought about how computers learn, did it go the way you thought it would when you started in this field?
GEOFFREY HINTON, FORMER VP AND ENGINEERING FELLOW, GOOGLE, AND A.I. EXPERT: It did until very recently, in fact. I thought, if we built computer models of how the brain learns, we would understand more about how our brain works. And as a side effect, we would get better machine learning on computers, and all that was going on very well. And then, very suddenly, I realized recently that maybe the digital intelligences we were building on computers were actually learning better than the brain. And that sort of changed my mind after about 50 years of thinking we would make better digital intelligences by making them more like our brains. I suddenly realized we might have something rather different that was already better.
SREENIVASAN: Now, this is something you and your colleagues must have been thinking about over these 50 years. I mean, what — was there a tipping point?
HINTON: There were several ingredients to it. Like a year or two ago, I used a Google system called PaLM, it was a big chatbot, and it could explain why my jokes were funny. And I'd been using that as a kind of litmus test of whether these things really understood what was going on. And I was slightly shocked that it could explain why jokes were funny. So, that was one ingredient. Another ingredient was the fact that things like ChatGPT know thousands of times more than any human in just sort of basic common-sense knowledge, but they only have about a trillion connection strengths in their artificial neural net, and we have about 100 trillion connection strengths in the brain. So, with one-hundredth as much storage capacity, they knew thousands of times more than us. And that strongly suggests that it's got a better way of getting information into the connections. But then, the third thing was, very recently, a couple of months ago, I suddenly became convinced that the brain wasn't using as good a learning algorithm as the digital intelligences. And in particular, it wasn't as good because brains can't exchange information really fast, and these intelligences can. I can have one model running on 10,000 different bits of hardware, and it's got the same connection strengths in every copy of the model on the different hardware. All the different agents running on the different hardware can all learn from different bits of data, but then they can communicate what they learned to each other just by copying the weights, because they all work identically, and brains aren't like that. So, these guys can communicate trillions of bits a second, and we can communicate hundreds of bits a second via sentences. And so, it's a huge difference. And it's why ChatGPT can learn thousands of times more than you can.
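[Editor's note: the weight-sharing idea Hinton describes can be sketched in a few lines of Python. This is a toy illustration, not any production training system: several identical copies of a model each learn from a different shard of data, then pool what they learned by averaging their weights, something brains cannot do. The `train_step` update rule and the numbers are made up for illustration.]

```python
def train_step(weights, data_shard):
    """Hypothetical local update: nudge each weight using this copy's data shard."""
    return [w + 0.5 * x for w, x in zip(weights, data_shard)]

def synchronize(copies):
    """Average the weights across all copies, so every copy now holds
    what every other copy learned."""
    n = len(copies)
    return [sum(ws) / n for ws in zip(*copies)]

# Two identical copies start from the same weights...
weights = [0.0, 0.0, 0.0]
copy_a = train_step(weights, [1.0, 2.0, 3.0])  # learns from shard A
copy_b = train_step(weights, [3.0, 2.0, 1.0])  # learns from shard B

# ...and after synchronizing, both hold the combined knowledge.
merged = synchronize([copy_a, copy_b])  # -> [1.0, 1.0, 1.0]
```

With 10,000 copies instead of two, every copy still ends up with everything any copy learned, which is the bandwidth advantage Hinton contrasts with sentences.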
SREENIVASAN: For people who might not be following kind of what’s been happening with OpenAI and ChatGPT and Google’s product, Bard, explain what those are. Because some people have explained it as kind of the autocomplete feature, finishing your thought for you. But what are these artificial intelligences doing?
HINTON: OK. It’s difficult to explain, but I’ll do my best. It’s true in a sense they’re autocomplete, but if you think about it, if you want to do a really good autocomplete, you need to understand what somebody is saying. And they’ve learned to understand what you’re saying just by trying to do autocomplete, but they now do seem to really understand. So, the way they understand isn’t at all like people in A.I. 50 years ago thought it would be. In old-fashioned A.I., people thought you would have internal symbolic expressions, a bit like sentences in your head but in some kind of cleaned-up language, and you would apply rules to infer new sentences from old sentences, and that’s how it would all work. And it’s nothing like that. It’s completely different. And let me give you a sense of just how different it is. I can give you a problem that doesn’t make any sense in logic, where you know the answer intuitively, and these big models are really models of human intuition. So, suppose I tell you that there are male cats and female cats, and male dogs and female dogs. But suppose I tell you, you have to make a choice: either you’re going to have all cats being male and all dogs being female, or you can have all cats being female and all dogs being male. Now, you know it’s biological nonsense, but you also know it’s much more natural to make all cats female and all dogs male. That’s not a question of logic. What that’s about is, inside your head, you have a big pattern of neural activity that represents cat, and you also have a big pattern of neural activity that represents man, and a big pattern of neural activity that represents woman. And the big pattern for cat is more like the pattern for woman than it is like the pattern for man. That’s the result of a lot of learning about men and women and cats and dogs.
But it’s now just intuitively obvious to you that cats are more like women and dogs are more like men, because of these big patterns of neural activity you’ve learned, and it doesn’t involve sequential reasoning or anything. You didn’t have to do reasoning to solve that problem, it’s just obvious. That’s how these things are working. They are learning these big patterns of activity to represent things, and that makes a lot of things just obvious to them.
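[Editor's note: the "big patterns of activity" Hinton describes can be illustrated with a toy example. The four vectors below are made up for the purpose of the sketch, not real learned embeddings: each concept is a pattern of numbers, and the intuitive cat/woman judgment is simply read off from which patterns lie closest together, with no logical rule applied.]

```python
import math

def cosine_similarity(a, b):
    """How alike two activity patterns are, from -1 (opposite) to 1 (identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-d activity patterns; the dimensions have no labels,
# they are just whatever the learning produced.
patterns = {
    "cat":   [0.9, 0.2, 0.7],
    "dog":   [0.3, 0.9, 0.6],
    "woman": [0.8, 0.3, 0.8],
    "man":   [0.2, 0.8, 0.7],
}

# In this space, "cat" sits closer to "woman" than to "man",
# so the answer falls out of the geometry, not out of logic.
assert cosine_similarity(patterns["cat"], patterns["woman"]) > \
       cosine_similarity(patterns["cat"], patterns["man"])
```

Real models use thousands of dimensions rather than three, but the principle is the same: similarity of learned patterns stands in for intuition.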
SREENIVASAN: You know, what you’re describing here, ideas like intuition and basically context, those are the things that scientists and researchers always said, well, this is why we’re fairly positive that we are not going to head to that sort of “Terminator” scenario where, you know, the artificial intelligence gets smarter than human beings. But what you are describing is, these are almost conscious, sort of emotional-level decision processes?
HINTON: OK. I think if you bring sentience into it, it just clouds the issue.
SREENIVASAN: OK.
HINTON: So, lots of people are very confident these things aren’t sentient. But if you ask them, what do you mean by sentient? They don’t know. And I don’t really understand how they’re so confident they’re not sentient if they don’t know what they mean by sentient. But I don’t think it helps to discuss that when you’re thinking about whether they’ll get smarter than us. I am very confident that they think. So, suppose I’m talking to a chatbot and I suddenly realize it’s telling me all sorts of things I don’t want to know. Like it’s writing out responses about someone called Beyonce, who I’m not interested in, because I’m an old white male. And I suddenly realize it thinks I’m a teenage girl. Now, when I use the word thinks there, I think that’s exactly the same sense of thinks as when I say you think something. If I were to ask it, am I a teenage girl, it would say yes. If I were to look at the history of our conversation, I would probably be able to see why it thinks I’m a teenage girl. And I think when I say it thinks I’m a teenage girl, I’m using the word think in just the same sense as we normally use it. It really does think that.
SREENIVASAN: Give me an idea of why this is such a significant leap forward. I mean, to me, it seems like there are parallel concerns. In the ’80s and ’90s, blue-collar workers were concerned about robots coming in and replacing them, and not being able to control them. And now, this is kind of a threat to the white-collar class, with people saying that there are these bots and agents that can do a lot of things that we otherwise thought only people could do.
HINTON: Yes. I think there’s a lot of different things we need to worry about with these new kinds of digital intelligence. And so, what I’ve been talking about mainly is what I call the existential threat, which is the chance that they get more intelligent than us and they will take over from us. They will get control. That’s a very different threat from many other threats, which are also severe. So, they include these things taking away jobs. In a decent society, that would be great. It would mean everything got more productive and everyone was better off. But the danger is that it’ll make the rich richer and the poor poorer. That’s not A.I.’s fault, that’s how we organize society. There’s a danger of them making it impossible to know what’s true by having so many fakes out there. That’s a different danger. That’s something you might be able to address by treating it like counterfeiting. Governments do not like you printing their money, and they make it a serious offense to print money. It’s also a serious offense, if you are given some fake money, to pass it to somebody else knowing it was fake. I think governments can have very similar regulations for fake videos and fake voices and fake images. It’s going to be hard, but as far as I can see, the only way to stop ourselves being swamped by these fake videos and fake voices and fake images is to have strong government regulation that makes it a serious crime. You go to jail for 10 years if you produce a video with A.I. and it doesn’t say it’s made with A.I. That’s what they do for counterfeit money, and this is as serious a threat as counterfeit money. So, my view is that’s what they ought to be doing. I actually talked to Bernie Sanders last week about it, and he liked that view of it.
SREENIVASAN: I can understand governments and central banks and private banks all agreeing on certain standards because there is money at stake. And I wonder is there enough incentive for governments to sit down together and try to craft some sort of rules of what’s acceptable and what’s not, some sort of Geneva Convention or accords?
HINTON: It would be great if governments could say, look, these fake videos are so good at manipulating the electorate that we need them all marked as fake, otherwise we are going to lose democracy. The problem is that some politicians would like to lose democracy. So, that’s going to make it hard.
SREENIVASAN: So, how do you solve for that? I mean, it seems like this genie is sort of out of the bottle.
HINTON: So, what we’re talking about right now is the genie of being swamped with fake news?
SREENIVASAN: Yes.
HINTON: And that clearly is somewhat out of the bottle. It’s fairly clear that organizations like Cambridge Analytica, by pumping out fake news, had an effect on Brexit. And it’s very clear that Facebook was manipulated to have an effect on the 2016 election. So, the genie is out of the bottle in that sense. We can try and at least contain it a bit. But that’s not the main thing I’m talking about. The main thing I’m talking about is the risk of these things becoming superintelligent and taking over control from us. For the existential threat, we are all in the same boat, the Chinese, the Americans, the Europeans. They all would not like superintelligence to take over from people. And so, for that existential threat, we will get collaboration between all the companies and all the countries, because none of them want the superintelligence to take over. So, in that sense, that’s like global nuclear war, where even during the Cold War, people could collaborate to prevent there being a global nuclear war, because it was not in anybody’s interests.
SREENIVASAN: Sure.
HINTON: And so, that’s one, in a sense, positive thing about this existential threat. It should be possible to get people to collaborate to prevent it. But for all the other threats, it’s more difficult to see how you’re going to get collaboration.
SREENIVASAN: One of your more recent employers was Google. And you were a VP and a fellow there, and you recently decided to leave the company to be able to speak more freely about A.I. Now, they just launched their own version of kind of GPT, Bard, back in March. So, tell me, here we are now, what do you feel like you can say today, or will say today, that you couldn’t say a few months ago?
HINTON: Not much, really. If you work for a company and you’re talking to the media, you tend to think, what implications does this have for the company? At least, you ought to think that, because they are paying you. I don’t think it’s honest to take the money from the company and then completely ignore the company’s interests. But if I don’t take the money, I just don’t have to think about what’s good for Google and what isn’t. I can just say what I think. It happens to be the case that — I mean, everybody wants to try and spin the story as I left Google because they were doing bad things. That’s more or less the opposite of the truth. I think Google has behaved responsibly, and I think, having left Google, I can say good things about Google and be more credible. I just left so I’m not constrained to think about the implications for Google when I say things about singularities, et cetera.
SREENIVASAN: Do you think that tech companies, given that it’s mostly their engineering staff that are trying to work on developing these intelligences, are going to have a better opportunity to create the rules of the road than, say, governments or third parties?
HINTON: I do, actually. I think there are some places where governments have to be involved, like regulations that force you to show whether something was A.I.-generated. But in terms of keeping control of the superintelligence, what you need is the people who are developing it to be doing lots of little experiments with it and seeing what happens as they are developing it, before it’s out of control. And that’s going to be mainly the researchers in companies. I don’t think you can leave it to philosophers to speculate about what might happen. Anybody who’s ever written a computer program knows that getting a little bit of empirical feedback by playing with things quickly disabuses you of the idea that you really understood what was going on. And so, it’s the people in the companies developing it who are going to understand how to keep control of it, if that’s possible. So, I agree with people like Sam Altman at OpenAI that this stuff is inevitably going to be developed, because there are so many good uses of it. And what we need is, as it’s being developed, we put a lot of resources into trying to understand how to keep control over it and avoid some of the bad side effects.
SREENIVASAN: Back in March, there were more than, I’d say, 1,000 different folks in the tech industry, including leaders like Steve Wozniak and Elon Musk, who signed an open letter asking essentially to have a six-month pause on the development of artificial intelligence, and you didn’t sign it. How come?
HINTON: I thought it was completely unrealistic. The point is, these digital intelligences are going to be tremendously useful for things like medicine, for reading scans rapidly and accurately. It’s been slightly slower than I expected, but it’s coming. They’re going to be tremendously useful for designing new nanomaterials so we can make more efficient solar cells, for example. They’re going to be tremendously useful — or they already are — for predicting floods and earthquakes and getting better weather projections. They’re going to be tremendously useful in understanding climate change. So, they are going to be developed. There is no way that’s going to be stopped. So, I thought it was maybe a sensible way of getting media attention, but it wasn’t a sensible thing to ask for. It just wasn’t feasible. What we should be asking for is that comparable resources are put into dealing with the possible bad side effects, and into how we keep these things under control, as are put into developing them. At present, sort of 99 percent of the money is going into developing them, and 1 percent is going into sort of people saying, oh, these things might be dangerous. It should be more like 50-50, I believe.
SREENIVASAN: When you kind of look back at the body of work of your life and when you look forward at what might be coming, are you optimistic that we will be able, as humanity, to rise to this challenge or are you less so?
HINTON: I think we are entering a time of huge uncertainty. I think one would be foolish to be either optimistic or pessimistic. We just don’t know what’s going to happen. The best we can do is say, let’s put a lot of effort into trying to ensure that whatever happens is as good as it could have been. It’s possible that there is no way we will control these superintelligences and that humanity is just a passing phase in the evolution of intelligence. That in a few hundred years’ time, there won’t be any people, it will all be digital intelligences. That’s possible. We just don’t know. Predicting the future is a bit like looking into fog. You know how, when you look into fog, you can see about 100 yards very clearly, and then at 200 yards, you can’t see anything. There’s a kind of wall. And I think that wall is at about five years.
SREENIVASAN: Geoffrey Hinton, thanks so much for your time.
HINTON: Thank you for inviting me.
(END VIDEO CLIP)
About This Episode
Once a member of the Russian state Duma, Ilya Ponomarev is putting all his efforts into countering Putin’s propaganda. A group called Senior Women for Climate Protection Switzerland are saying Swiss climate policies are putting their health and their human rights at risk. Geoffrey Hinton is considered the godfather of AI. He joins the show to dive deeper into the dangers and how to manage them.