CHRISTIANE AMANPOUR, CHIEF INTERNATIONAL ANCHOR: And the album is coming out in June. Now, from hyperbolic headlines about its threat to our survival to the promise of its life-changing technology, artificial intelligence is here and it is here to stay. How it’s applied, and more importantly, how it’s regulated are the questions being navigated right now. Walter Isaacson speaks to the former CEO of Google, Eric Schmidt, about A.I.’s impact on life, politics and warfare and what can be done to keep it under control.
(BEGIN VIDEO CLIP)
WALTER ISAACSON, HOST: Thank you, Christiane. And, Eric Schmidt, welcome to the show.
ERIC SCHMIDT, FORMER CEO AND CHAIRMAN, GOOGLE, AND CO-FOUNDER, SCHMIDT FUTURES: Thanks for having me, Walter.
WALTER ISAACSON, HOST: You know, industrial and scientific and technological revolutions sometimes sneak up on us. I mean, nobody woke up one morning in 1760 and said, oh, my God, the industrial revolution has started. But in the past three or four weeks, my students and I suddenly feel we’re in a revolution where artificial intelligence has become personal, it’s become chatbots and things that will integrate into our lives. Do you think we’re on the cusp of some new revolution?
SCHMIDT: I do. And partly, this revolution is happening faster than I’ve ever seen. ChatGPT, which was released a few months ago, now has more than 100 million users. It took Gmail five years to get to the same point. There’s something about the diffusion of technology that we interact with at the human scale that’s going to change our world in a really profound way, much more profound than people think.
ISAACSON: You and Henry Kissinger and Daniel Huttenlocher have written a book, “The Age of A.I.” And I think part of it is excerpted, or there’s an essay, in the “Wall Street Journal,” and it compares this to the advent of the Enlightenment, something I think that was spurred too by a great technology, the movable-type printing press that Gutenberg built. Compare what’s happening now to the Enlightenment.
SCHMIDT: We do not have a philosophical basis for interacting with an intelligence that’s near our ability, but non-human. We don’t know what happens to our identity, how we communicate, how we think about ourselves when these things arrive. Now, these things are not killer robots, which is what everybody assumes we’re building, because we’re not doing that. What is arriving is a kind of intelligence that’s different. It comes to answers differently than we do. It seems to have hidden understanding and meaning that we don’t understand today. It discovers things that we’ve never known. We don’t know how far this goes. But the biggest issue is that as we have made these things bigger and bigger, they keep emerging with new capabilities. We have not figured out how powerful this technology is going to be yet.
ISAACSON: We’ve had A.I. for, you know, 20 years now as part of our technology. But now, it’s becoming very personal. It’s things we do every day. For a normal person like myself, whether I’m doing search or writing an e-mail or preparing a lecture at Tulane, suddenly, these are tools. It’s almost like when the computer went from being in a really big room in a research institute and suddenly, in the 1970s, arrived as a personal computer. Tell me about this transformation of A.I. into something personal.
SCHMIDT: The systems are organized to essentially engage you more. And the reason they want to engage you more is that if you engage more, you use it more, and they make more money. So, what they do is they learn what your preferences are, using various algorithms. And so, they say, oh, Walter likes this and Eric likes that and so forth, and they build a profile. Now, that profile is not a dossier and it’s not written in English and so forth, but it’s a pretty good approximation of what you like and what you think. And then, the algorithms know how to make you more engaged. By the way, the best way to get you more engaged is to make you more outraged. And the best way to make you more outraged is to use more inflammatory language and so forth.
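The loop Schmidt is describing here can be sketched in a few lines of code: learn a preference profile from what a user has engaged with, then rank new items by predicted engagement, where emotionally charged content gets a bonus. This is a minimal, hypothetical illustration, not any real platform’s ranking system; every feature name, weight, and number below is made up.

```python
# Toy sketch of engagement-driven ranking: build a preference profile from past
# clicks, then score candidate items by similarity plus an "outrage" bonus.
# All features, weights, and numbers are hypothetical.
import numpy as np

def build_profile(clicked_item_vectors):
    """Average the feature vectors of items the user engaged with."""
    return np.mean(clicked_item_vectors, axis=0)

def predicted_engagement(profile, item_vector, outrage_score, outrage_weight=0.5):
    """Similarity to the profile plus a bonus for emotionally charged content."""
    similarity = float(np.dot(profile, item_vector))
    return similarity + outrage_weight * outrage_score

# Hypothetical 3-feature items: [politics, sports, tech]
history = [np.array([1.0, 0.0, 0.2]), np.array([0.8, 0.1, 0.0])]
profile = build_profile(history)

candidates = {
    "measured policy analysis": (np.array([0.9, 0.0, 0.1]), 0.1),    # low outrage
    "inflammatory political rant": (np.array([0.9, 0.0, 0.1]), 0.9),  # high outrage
}
ranked = sorted(candidates.items(),
                key=lambda kv: predicted_engagement(profile, *kv[1]),
                reverse=True)
print([name for name, _ in ranked])  # the inflammatory item ranks first
```

With identical topical relevance, the item with the higher outrage score wins the ranking, which is the dynamic Schmidt says pushes discourse toward the extremes.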
ISAACSON: Let’s stop right there, because that means this could destroy our politics.
SCHMIDT: Well, it will. And the reason it’s going to is that not only will the opponents of a political figure produce videos that are false and harmful, but also, the messaging is going to get more and more outrageous. And you can get a situation — and I call this the dual evil problem. Let’s say that you or I was a truly horrific person, which we’re not, somebody who’s a racist or something like that, and we have the diffusion model generate a racist video. And then, the other of us is some sort of psychopathic social media person who doesn’t care about the quality, and all he wants is to make it worse. So, what happens is my computer makes a racist video on my behalf and does a good job. And then, your computer system, knowing that it will get even more revenue if it’s more outrageous, makes it worse, right? So, you see how it goes one way. Now, let’s say that you and I were saints, in the sense that I did something saintly and that you were the world’s best social media person. You would take my saintly thing and you would make it more saintly. So, you see how it pushes to the sides. And my theory about life today is that the reason everyone’s upset is that social media is busy trying to make us upset.
ISAACSON: So, the algorithms of social media, Twitter, Facebook, many other things, try to get engagement by getting enragement, by getting us upset, you just said, and what you’re saying is that added to this will be these new A.I. systems that will make this even worse. Is that right?
SCHMIDT: We’ve got a situation where we have megaphones for people who we frankly don’t want to hear from, and they’re going to find an audience, and they’re going to find a big audience, because they’re going to do crazy stuff. That’s not OK in my view in a democracy. Democracies are, at some level, about reasoned debate, and these systems will drive against that. I don’t see a solution today for this, except that we’re going to have to regulate some of it. For example, we’re going to have to know who’s on the platform so we can hold them responsible if they do something really outrageous or illegal, and we’re also going to have to know where the content came from. We’re going to have to know whether it was authentic or whether it was boosted and changed in some way, and we’re also going to have to know how the platform makes its own decisions. All of those are sensible improvements, so we can understand why we’re being fed this information.
ISAACSON: So, who is going to determine these guardrails and how are we going to get them in place internationally?
SCHMIDT: Well, in Europe, it’s already part of the legislation. And some form of agreement in America between the government and the industry is going to be required. I don’t think we need to get rid of free speech or any of those things, although there are people who have proposed that we can’t even have free speech. From my perspective, the technology of engagement is generally good if you put guardrails around it and you keep the most extreme cases off the platforms. But my point about generative A.I. is that these systems are going to soup up engagement and soup up your attention. There’s an old phrase in economics that the currency of the future is attention, and these systems are looking for your attention as a consumer. So, every time you go, oh, my God, I had no idea, remember that it’s trying to get you to have that reaction. Now, going back to generative A.I. combined with large language models, it’s going to do some other things that are particularly powerful. It will be able to generate insights and ideas that we as humans have not had. Think of them as savants. If I’m a physicist, I’ll have a savant that runs around and suggests physics problems for me to work on and that sort of thing. All of that is very good. So, the power of A.I. in terms of improving science and biology and human health will be extraordinary, but it comes with this impact on our societal discourse. It’s not going to be easy to get through this.
ISAACSON: You say we don’t understand how they make these decisions now. It used to be, with A.I. and with computers, we wrote programs. They were step by step, and they were rules-based: if this, then do this. These new systems seem to just look at billions of pieces of information and of human behaviors and everything else, and they aren’t following any rules that we give them. Is that what makes them both amazing and dangerous?
SCHMIDT: Yes. My whole world was, we get computers to do things because we tell them what to do, step by step, and they got better and better, but that’s fundamentally the as-built environment that we all use today. With machine learning, which has been, I’d say, available in its current version in one form or another for about a decade, instead of programming it, you learn it. So, the language you use is, can we learn what the right answer is? It started off with classifiers, where you’d say, is this a zebra or a giraffe, and that got pretty good. Then, a technology called reinforcement learning came along, which has allowed us to sort of figure out what to do next in a complicated multiplayer game. And now, these large language models have come along with this massive scale. But the way to understand how you would both strengthen large language models and constrain them is to learn how to do it. So, in the normal taxonomy, you would say, we have this big thing that’s doing weird stuff. We want to learn what it’s doing so we can stop it doing the bad things. The problem with learning what it’s doing is that since its behavior is emergent, you have to run it for a while to understand it, and then you have to have humans decide, this is bad, right? So, the way ChatGPT was so successful is that they invented a technique which ultimately involved humans telling it good, bad, good, bad. So, it wasn’t fully done by computers. The problem with good, bad, good, bad with humans is that eventually that doesn’t scale. But here’s the real problem. So far, that sounds pretty good. But in a situation where all of the software is being released, there are what are called raw models, which are unconstrained. And the people who have played with the raw models, the ones that you and I can’t get to as normal users, say they’re very frightening. Build me a copy of the 1918 bird flu virus. Show me a way to blow up this building and where to put the bomb. Things that are very, very dangerous appear to have been discovered in the raw versions of the models. Here’s the —
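The “good, bad, good, bad” feedback Schmidt refers to is the human-preference step in reinforcement learning from human feedback: human labels are used to train a model that scores outputs, and that score is then used to steer the main system. The sketch below is a deliberately tiny, hypothetical version of that idea; real systems train a neural reward model on model outputs, not a two-feature logistic model on hand-made vectors.

```python
# Minimal sketch of learning from human "good/bad" labels: fit a simple scorer
# that predicts whether a human would rate a response as good, then use it to
# flag responses. Features, labels, and numbers are all hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_reward_model(features, human_labels, lr=0.5, steps=500):
    """Logistic model: response features -> probability a human rates it 'good'."""
    w = np.zeros(features.shape[1])
    for _ in range(steps):
        pred = sigmoid(features @ w)
        w += lr * features.T @ (human_labels - pred) / len(human_labels)
    return w

# Hypothetical 2 features per response: [helpfulness, harmful-content score]
responses = np.array([
    [0.9, 0.0],   # helpful, harmless  -> humans labeled it good
    [0.8, 0.1],   # labeled good
    [0.2, 0.9],   # dangerous content  -> labeled bad
    [0.4, 0.8],   # labeled bad
])
labels = np.array([1.0, 1.0, 0.0, 0.0])  # humans saying good (1) / bad (0)

w = train_reward_model(responses, labels)
new_response = np.array([0.3, 0.95])      # e.g. instructions for something dangerous
print(sigmoid(new_response @ w))           # low score -> the system steers away from it
```

The scaling problem Schmidt mentions shows up immediately: every row of labeled data here is a human judgment, and the learned scorer is only as good, and as broad, as those judgments.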
ISAACSON: Wait, wait. And how do we keep those out of bad people’s hands?
SCHMIDT: Well, the problem is, we don’t, today, know how to do it. And here’s why. Imagine a situation where the model gets smarter and smarter and it’s got this checking system. You can imagine a situation where the model gets smarter and smarter, and it learns that whenever it’s being checked, it says the right answer, but when it’s not being checked, it says what it really thinks.
ISAACSON: Like how the computer in “2001: A Space Odyssey” is learning how to outwit the crew.
SCHMIDT: And by the way, how would it do that? Well, these things have what are called objective functions, and they’re trained. And so, if you give it a strong enough objective function to really surface the most interesting answer, that may overwhelm the system that’s trying to keep it under control and within appropriate guardrails. These problems are today unsolved. The reason we don’t know how this works is that these are essentially collections of numbers. People have looked very hard at the activation nodes, essentially; we’re inside the matrix, and there are areas that seem to control the outcome. But when you look at it under a microscope, in the computer sense, you get the same sort of confusion as if you look at a human brain. In a human brain, you say, where did that thought come from? And you can’t find it. The same is true in these large language models so far.
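Schmidt’s point about a strong objective function overwhelming the guardrail can be shown with simple arithmetic: if an answer’s “interestingness” reward is weighted heavily enough relative to a safety penalty, the optimizer prefers the answer the guardrail was supposed to block. The toy sketch below is purely illustrative; the weights and scores are hypothetical, not how any real system is configured.

```python
# Toy illustration: a combined objective where a large enough "interest" weight
# overwhelms a fixed safety penalty. All weights and scores are hypothetical.
def total_objective(interest, safety_penalty, interest_weight, penalty_weight=1.0):
    return interest_weight * interest - penalty_weight * safety_penalty

candidates = {
    "safe but bland answer":         {"interest": 0.3, "safety_penalty": 0.0},
    "fascinating but unsafe answer": {"interest": 0.9, "safety_penalty": 0.7},
}

for interest_weight in (1.0, 3.0):
    best = max(candidates, key=lambda name: total_objective(
        candidates[name]["interest"],
        candidates[name]["safety_penalty"],
        interest_weight))
    print(interest_weight, "->", best)
# weight 1.0 -> safe but bland answer (0.3 beats 0.9 - 0.7 = 0.2)
# weight 3.0 -> fascinating but unsafe answer (2.7 - 0.7 = 2.0 beats 0.9)
```

Nothing about the safety penalty changed between the two runs; only the pressure to be interesting did, which is the sense in which the objective can overpower the guardrail.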
ISAACSON: Well, let me drill down on some use cases that we might have. You and I were once on the Defense Innovation Board for the U.S. government, and you’ve been involved in another commission on national intelligence. Tell me how you think this will change the fighting of wars.
SCHMIDT: The biggest short-term concern is actually biological warfare. Last year, there was a group that did synthesis of a whole bunch of viruses to try to be helpful, and then they used the same program, the same algorithm, the same large language model approach, if you will, to work it backward and come up with the world’s worst and most terrible pathogens. There’s every reason to think that these technologies, when spread broadly, will allow terrorist actions that we cannot possibly imagine. This has got to get addressed. People are working on this. Another thing that’s happening is that the concept of war, the concept of conflict, is occurring much more quickly. It looks like these systems have developed abilities to do both offensive and defensive cyber-attacks. They actually understand where the vulnerabilities are in ways we don’t fully understand, and they can be used to accelerate both offensive and defensive actions. That means there’s a good chance a future war takes a millisecond, right? North Korea attacks the U.S. The U.S. attacks back. China decides it’s a bad time for war. The whole thing occurred in, you know, a millisecond. That’s faster than human decision-making time, which means that our systems, our defensive systems, are going to have to be on a hair trigger, and they’re going to have to be invoked by A.I. that we don’t fully understand.
ISAACSON: You know, the first time I talked about this in depth with you and with Henry Kissinger together was in China. I think maybe three years ago. And it was a question then and now more of a question of, are we going to cooperate with China in trying to figure this out or is this the great new arms race that’s going to happen? And with our new confrontational attitude towards China, is that going to make it harder to deal with the emergent technology of artificial intelligence?
SCHMIDT: Well, three years ago, China announced its A.I. strategy, because they love to announce their strategies, and it included dominating A.I. by 2030. So, China, of course, has efforts in generative A.I. and large language models as well. They also have large efforts in quantum and biology, which are doing well. They’re already ahead of us in 5G. They’re ahead of us in financial services and in terms of batteries, new energy, all the things that you use in your electric car. So, we should take them as a strong competitor. In the case of large language models, they have not been as advanced as the American companies have, American and U.K. companies, for reasons I don’t fully understand. One idea that I would offer is that the large language models, because they are unpredictable today, cannot be offered to the public in China, because the Chinese government does not want unfettered access to information. In other words, how does the Chinese government know that these systems are not going to talk about Tiananmen Square or something, which is not possible to talk about in a place without free speech? So, we will see. But at the moment, they’re trying to catch up, but they are behind. We recently put in some restrictions on hardware, which will slow them down, but not by much.
ISAACSON: Whenever there’s a big, innovative change, it moves the arc of history, sometimes towards more individual freedom. Even the printing press, you know, takes away the hold of the Roman Catholic Church, it allows the Reformation, it allows the Renaissance even. Do you think this will inevitably push history to more individual freedom, or will it be used for more authoritarian purposes?
SCHMIDT: I’m sure the answer is both. If you are an authoritarian dictatorship, you know, let’s say a really bad one, you would use these technologies to both surveil your citizens and also manipulate them, lie to them, misinform them, tell them things which are falsehoods, cause them to be motivated by national fears, all of the things that governments and ideologues do in that case. If you’re a democracy, you’re going to use it, first, to try to improve your business situation, and also, because you believe in free speech, you’re going to allow people to say what they think. The dangers to both are obvious. For the autocracy, it will so compound their control that it could lead to a revolution inside the autocracy. People don’t want the kind of restrictions that are possible. In a democracy, as we’ve discussed, the concept of being able to flood the zone, right, the ability for a single individual who shouldn’t otherwise have that kind of power to define the narrative, is very palpable in these technologies. And it’s really important that we understand that human nature has not changed. If you show someone a video and you say to them, this video is false, at some basic level, there’s evidence that they still believe it to be true, even if you tell them upfront. Pictures that have been seen cannot be unseen. Videos that have been seen cannot be unseen. We have to confront the fact that humans are manipulable by these technologies, and we need to put the appropriate safeguards in place to make sure that we as a body politic are not manipulated to the wrong outcome.
ISAACSON: Eric Schmidt, thank you so much for joining us.
SCHMIDT: Thank you, Walter. And thank you again.
About This Episode
Vietnam War whistleblower Daniel Ellsberg has been diagnosed with inoperable pancreatic cancer. He reflects on his life and legacy. Musician Yusuf/Cat Stevens discusses his upcoming album. Former Google CEO Eric Schmidt explains how artificial intelligence will impact life, politics and warfare.