BIANNA GOLODRYGA, HOST: Well, our next guests believe international cooperation is the key to managing artificial intelligence. Founder of the Eurasia Group, Ian Bremmer, and CEO of Inflection AI, Mustafa Suleyman, have analyzed the A.I. power paradox for Foreign Affairs. To discuss why countries and technology companies need to unite, they joined Walter Isaacson.
(BEGIN VIDEO CLIP)
WALTER ISAACSON, HOST: Thank you, Bianna. And Ian Bremmer and Mustafa Suleyman, welcome to the show.
IAN BREMMER, FOUNDER AND PRESIDENT, EURASIA GROUP: Thank you.
MUSTAFA SULEYMAN, CEO AND CO-FOUNDER, INFLECTION AI: Thanks for having us.
ISAACSON: You all have this piece that just came out in Foreign Affairs, called “The A.I. Power Paradox.” Let me start with you, Ian. Exactly what does that title mean?
BREMMER: Well, it means that for my lifetime, when I thought about power around the world, I thought in terms of governments. And, you know, a year ago, you talked to a head of state, you talked to the head of the U.N., or the IMF or any of those organizations, they weren’t asking you about A.I. Now, they all are, and it’s a top priority. I’ve never seen that before. And yet, the actors that actually marshal control, sovereignty, over artificial intelligence are not governments. They’re in the private sector. They’re technology companies. So, I mean, the most dramatic geopolitical change in my lifetime happens to be one that’s being developed completely outside of the governments that control — (INAUDIBLE) have controlled power. And it’s also the one piece that I’ve put together that I felt completely incapable of doing myself, precisely because I’m a political scientist. I’m not a technologist. Mustafa is actually driving this stuff. So, we really had to put our heads together here.
ISAACSON: Well, so, let me turn to Mustafa then about the technology. You’ve got a book coming out called “The Coming Wave” and it really (INAUDIBLE) explains the technology in-depth about artificial intelligence. But tell us, what will it do to society, and how long will it take? Are we talking about a dozen years from now? Are we talking about next year?
SULEYMAN: I think we’re talking about, in the next decade, everybody in the world getting access to an intelligent agent that is as good as the top professor, the most kind and supportive coach or counselor, as good as the very best research assistant that you might want as a scientist in your lab, helping you to synthesize information, provide you with summaries and reports. Think of it as a turbocharger, an amplifier, to everything that we might do in the world. Having the very best chief of staff in your pocket. Everybody is now going to get access to this new tool, just as, over the last 30 years, everybody in the developed world has got access to smartphones. No matter how rich you are, you know, whether you’re a billionaire or whether you earn 20,000 bucks a year, we all get the same super high-quality smartphone, and that’s an incredibly meritocratic moment for the top billion or so of us. We’re on the same trajectory for access to intelligence.
ISAACSON: Why should we be worried then?
SULEYMAN: I think just as it amplifies the good actors, just as it functions as a teacher and a support, and a motivator, and an educator, people with bad intentions will also use it to help go about their goals, right? I mean, these are tools that are for sure going to be used to spread misinformation, they’re going to be used to drive greater wedges between us as societies, to make us fear one another. They’re for sure going to be used to make cyberattacks easier. I’ve seen experimental test use cases where people have been using large language models as coaches for the development of bioweapons. This means that you don’t need to have the same undergraduate-level skills in biology to be able to synthesize a new dangerous compound, because you’ve got this intelligent aid, a coach, alongside you. So, it reduces the barrier to entry to be able to cause chaos, and that’s potentially extremely disruptive.
ISAACSON: So, Ian, your Eurasia Group and your consulting, you know, the writing you’ve done, has always been about how geopolitics works, and it’s always been based on nation states, you know, how countries deal with one another. And in this piece, you all write that A.I. can’t be regulated the way previous technologies have been regulated. I just watched “Oppenheimer.” Why is it more difficult than the atom bomb? We regulated that.
BREMMER: Well, I mean, with the atom bomb, it was very complex to get your hands on the material, it required a lot of expertise, there were very different components of it. Basically, only governments, of all actors, were really capable of getting a hold of nuclear weapons. And the Americans and the Soviets, even though we hated each other, were prepared to talk to each other because we recognized the terrible dangers that a nuclear Armageddon would entail. As you just heard from Mustafa, there are aspects of A.I. in the wrong hands that have, you know, sort of analogies to how we think of weapons of mass destruction, except that the proliferation danger of A.I. is — you know, it’s logarithmically greater. You’re talking about hundreds of millions of people. You’re talking about anybody with a smartphone or — you know, or with a decent computer at their hands. And they’re not just state actors, they’re obviously non-state actors. So, governments are going to have to recognize not only that this is very fast moving and massively proliferated, but also that the governments themselves are not in a position, they don’t have the expertise, to understand what these algorithms do. To the extent that anyone does, it’s the technology companies. So, the governance itself is going to have to be a hybrid model of technology companies and governments together. I mean, I don’t know if that means as treaty signatories, but you certainly won’t have governments that will be able to drive this regulatory environment, these new institutions, themselves. That would be a road to ruin.
ISAACSON: After World War II, after the dropping of the atom bomb, we created these great government structures that did it, like the United Nations or, you know, the World Bank, many other things, NATO. Those are all governments at the table. Are you actually, Ian, envisioning sort of a new type of United Nations organization that has both Google and Meta and Amazon as well as the U.S. and China and Russia sitting at the same table?
BREMMER: So, we’re envisioning a couple of things. One is something that looks like the United Nations Intergovernmental Panel on Climate Change. Walter, the one reason why we now all agree that there’s 1.2 degrees centigrade of global warming, despite all of the fake news and disinformation, is because you’ve had governments and scientists and corporate leaders and public policy types from every country in the world all together trying to understand where climate change is going. So, we know how many particles of carbon are in the atmosphere, we know how much methane, we know the deforestation. That’s a critical, critical aspect of fighting climate change. And with A.I., so much more urgent and fast-moving, you will need an organization like that, a multistakeholder one. But another thing we’re calling for is a techno-prudential approach, like you see in the global financial community. In other words, a geotechnology stability board, where you will have governments and non-state actors together being able to respond to crises in real-time as they occur. This isn’t like the United Nations. This is more like how the world responded to the 2008 financial crisis, where even though they have very different governments, the United States and China are both members of the IMF. They’re both members of the Bank for International Settlements. You will need A.I. institutions to be similarly inclusive and similarly non-politicized to be able to respond to challenges that are fundamentally global.
ISAACSON: Well, Mustafa, you’re talking about three layers of governance regimes that you need. And the first one is something to establish just what the facts are. Tell me what you think needs to be established in terms of the facts of the technology.
SULEYMAN: So, an intergovernmental panel on A.I. would be one that has access to all of the largest commercial labs and academic labs all around the world developing these large language models. They would be able to probe them and test them, audit them, look at what data they are, you know, using to do training, and try to find weaknesses and failure modes in the models. Once they discover those, they should then be able to share those with other national or international commercial competitors in order to improve the quality and performance of those models. But the first step is really just understanding and auditing and establishing the fact pattern of what are the boundaries that these models can’t cross today, and where are they headed in the future.
ISAACSON: And the second step, Ian, at least according to your piece, is to prevent an arms race. How do you do that?
BREMMER: Yes. You would think it would be hard, because the United States and China, we don’t even have direct high-level military-to-military conversations today. And yet, Walter, as you know and I know, during the Cold War, the Americans and the Soviets, with much less in common, with virtually no interdependence between the two great countries, nonetheless worked together to ensure that we understood what kind of nuclear capabilities we were developing, and we were putting limits on them. And so, when I look — when Mustafa and I look at how fast A.I. is being deployed, and the fact that everything is dual use, these general models, you can use them for civilian purposes and the same models you can use for national security, for defense purposes, it’s not a question of how — you know, how hard it’s going to be to talk with the Chinese, it’s the fact that there is no alternative. I mean, these are the two countries, and their technology corporations, that are driving these existentially important, for good and potentially for bad, technologies. If they don’t talk with each other, if they don’t create mechanisms to engage with each other around the dangers of these technologies, then we are going to end up destroying each other.
ISAACSON: Let me read a sentence from your piece that jumped out, which is, A.I. will empower those who wield it to surveil, to deceive and to even control populations. One assumes companies could do that, but also nation states could do that. In other words, it could supercharge, you say, the collection and use of personal data in democracies and sharpen the tools of repression that authoritarian governments use to subdue their populace. Tell me where you see that happening, Mustafa, and then I’ll ask Ian to compare and contrast the way the United States might be doing this and what it might mean in China.
SULEYMAN: Yes. I mean, I think that the way to understand this is that it actually empowers and amplifies power wherever it is. So, whatever the agenda, it is a tool for reducing the barrier to entry to action in that environment. And so, you know, you don’t have to be too imaginative to see how this is being misused in China or used for large-scale state surveillance. I mean, these are tools that basically process vast amounts of information, enabling you, you know, to make sense of video stream data, track faces, identify people as they’re moving around cities. And obviously, that is a very dystopian and dark outcome. It’s one that we really do want to avoid. And the challenge is that — so, harnessing the upside whilst mitigating the downsides is going to be the story over the next 20 years. And some states will take advantage of this to entrench their authoritarianism. And so, we in the West have to defend our values and not slip into authoritarianism. That’s going to be our great challenge, to withstand that pressure to want to surveil everything.
BREMMER: And the Chinese are certainly aware of the power of A.I. to increase the government’s ability to surveil, repress and nudge their population into so-called patriotic behaviors. The United States government has done virtually none of that domestically. It’s been in the hands of corporate actors. Those corporate actors are not interested in subverting democracy, but polarization happens to be very aligned with their commercial and business models, as they’re doing everything they can to generate more clicks, more engagement, build more data. So, the question is going to be, how will the United States, the Europeans and other governments ostensibly driven by rule of law work together with these technology companies to harness the productivity that comes from A.I. without slipping into authoritarianism? Because, as I mentioned before, the easiest way for governments to align with technology companies around A.I. is to forget about the common interest and will of the average citizen. It will be to use the power to surveil, to use the power to control and to nudge, and that is contrary to everything that we are as citizens of a representative democracy.
ISAACSON: Mustafa, you were at that meeting with President Biden when he convened the pioneers of A.I., including yourself, to the White House. What did you say to him and what was that meeting like?
SULEYMAN: The president was actually pretty well briefed on this issue. We spent quite a long time talking about some of the details of what we would do in practice together. He was very keen that we cooperate with one another as competitive technology companies. One of the ways that he’s proposed we do that is that we establish best practices for sharing information about the weaknesses of our own models. Because if my — if I identify that my large language model is vulnerable to a certain type of exploit, it might be that it’s good at generating attacks in a cybersecurity threat environment, then I should share that with my competitors, and with other nation states potentially, so that they can patch up those weaknesses. And that’s the kind of proactive behavior, I think, that is in the best interest of everybody, and one of the proposals that came out of the meeting with Biden. I mean, I was surprised actually at how forward-thinking he was, and the administration was, on this issue.
ISAACSON: And, Ian, is there something where we can find common ground with China and might even be a way that we can work together or are we inevitably going to be in conflict over this?
BREMMER: Look, not tomorrow. But I do believe that as this technology proliferates into the hands of more and more private sector actors, into the hands of individuals that can run these models on their smartphones, then you’re going to suddenly see the United States and China with very similar challenges. They’re both going to want to maintain sovereignty. I mean, the U.S. and China both have an interest in ensuring, for example, that cryptocurrencies don’t threaten fiat currencies, to, like, take away governance and power from the state. A.I. will do that in spades. And so, I do believe that it’s not just about avoiding a Cold War, it’s also about maintaining stability of the existing system. And in that regard, China is probably soon to be the largest economy in the world, part of a globalized trade system, a country that is the largest creditor for the developing world. I mean, China is just as much a part of incumbency, of wanting the present system to sustain itself, as the United States. This isn’t Russia, this isn’t North Korea. It is not a rogue state. And so, in that regard, these two countries do have very strong interest to ultimately work together around A.I. Unfortunately, domestic politics right now are all pointed in the opposite direction, which I am sure is a part of why you asked that question. I do think it will likely take a few slaps in the face, some crises that occur around A.I., before it becomes obvious how — just how far the Americans and the Chinese must work together. And Mustafa and I, our purpose in this piece is to really try to get some of the world leaders to start thinking in these ways before that crisis occurs, so it’s obvious how we pick it up.
ISAACSON: Ian Bremmer, Mustafa Suleyman, thank you all so much for joining us and for writing this piece.
BREMMER: Thank you.
SULEYMAN: Thank you. It’s been great.
About This Episode
Jared Bernstein discusses the positives and negatives of the Inflation Reduction Act. Hawaii’s Lieutenant Governor Sylvia Luke gives the latest on the wildfires. Former U.S. Ambassador to South Korea discusses what we can expect next from the South Korea-Japan relationship. Ian Bremmer and Mustafa Suleyman talk about why countries and technology companies need to unite to regulate A.I.