(COMMERCIAL BREAK)
CHRISTIANE AMANPOUR, CHIEF INTERNATIONAL ANCHOR: Hello, everyone. And welcome to AMANPOUR AND COMPANY. Here’s what’s coming up.
Artificial intelligence, the power and the peril.
(BEGIN VIDEO CLIP)
AMANPOUR: Are people playing with fire?
CONNOR LEAHY, CEO, CONJECTURE: Absolutely. Without a doubt.
(END VIDEO CLIP)
AMANPOUR: Four leaders in their field unpack the uncertainty that lies ahead.
(BEGIN VIDEO CLIP)
NINA SCHICK, AUTHOR, “DEEP FAKES AND THE INFOCALYPSE”: We have agency. And I just want to kind of divorce that kind of hypothetical scenario from the reality, and that is we decide.
(END VIDEO CLIP)
AMANPOUR: What it means for jobs, and how it will change our working lives.
(BEGIN VIDEO CLIP)
WENDY HALL, PROFESSOR OF COMPUTER SCIENCE: And I genuinely believe we’re going to get a four-day week out of A.I.
AMANPOUR: Do any of you believe that there will be a universal basic income therefore?
SCHICK: It is time to start thinking about ideas like that.
(END VIDEO CLIP)
AMANPOUR: Also, this hour —
(BEGIN VIDEO CLIP)
UNIDENTIFIED FEMALE: We can now call the 2024 presidential race for Joe Biden.
(END VIDEO CLIP)
AMANPOUR: — policing misinformation ahead of crucial U.S. presidential elections.
(BEGIN VIDEO CLIP)
PRIYA LAKHANI, CEO, CENTURY TECH: And I’ve got two children who are 11 and 13. Are they going to grow up in a world where they can trust information?
(END VIDEO CLIP)
AMANPOUR: Then, how to regulate a technology that even its creators don’t fully understand.
(BEGIN VIDEO CLIP)
UNIDENTIFIED MALE: If this technology goes wrong, it can go quite wrong.
LEAHY: When looking at CEOs or other people of power, the thing is to watch the hands, not the mouth.
(END VIDEO CLIP)
AMANPOUR: How A.I. could revolutionize health care.
(BEGIN VIDEO CLIP)
LAKHANI: These are lifesaving opportunities.
(END VIDEO CLIP)
AMANPOUR: And make our relationships with machines much more intimate.
(BEGIN VIDEO CLIP)
SCHICK: When it comes to relationships and, in particular, sexual relationships, it gets very weird very quickly.
(END VIDEO CLIP)
AMANPOUR: Also ahead, Hari and I discuss how to keep real journalists in the game.
(BEGIN VIDEO CLIP)
UNIDENTIFIED FEMALE: I am OTV and Odisha’s first A.I. news anchor, Lisa.
HARI SREENIVASAN, INTERNATIONAL CORRESPONDENT: Look, this is just the first generation of, I want to say, this woman, but it’s not, right?
(END VIDEO CLIP)
AMANPOUR: Welcome to the program, everyone. I’m Christiane Amanpour in London, where whether A.I. makes our societies more or less equitable, unlocks breakthroughs or becomes a tool of authoritarians is up to us. That is the warning and the call to arms from the Biden administration this week. In a joint op-ed, the secretaries of state and commerce say the key to shaping the future of A.I. is to act quickly and collectively.
In just a few short months, the power and the peril of artificial intelligence have become the focus of huge public debate. And the
conversation couldn’t be more relevant, as the atomic bomb biopic, “Oppenheimer,” reminds us all of the danger of unleashing unbelievably
powerful technology on the world.
(BEGIN VIDEO CLIP)
UNIDENTIFIED MALE: Are we saying there’s a chance that when we push that button, we destroy the world?
UNIDENTIFIED MALE: The chances aren’t near zero.
(END VIDEO CLIP)
AMANPOUR: Director Christopher Nolan himself says that leading A.I. researchers literally refer to this as their Oppenheimer moment.
Predictions range from cures for most cancers to possibly the end of humanity itself.
What most people agree on is the need for governments to catch up now. To assess all of this and to separate the hysteria and the hyperbole from the facts, we brought together a panel of leaders in the field of artificial intelligence: Nina Schick, global A.I. advisor and author of “Deep Fakes”; renowned computer science professor Dame Wendy Hall; Connor Leahy, an A.I. researcher who is the CEO of Conjecture; and Priya Lakhani, an A.I. government advisor and the CEO of CENTURY Tech.
Welcome all of you to this chat, to coin a phrase. I mean, it’s such a massively important issue. And I just thought I’d start by announcing that
when I woke up and had my morning coffee, A.I. is all over this page on the good, on the bad, on the questions, on the indifference.
What I want to know is from each one of you, literally, is what keeps you up at night, you are all the experts, for good or for bad? And I’m going to
start with you.
NINA SCHICK, AUTHOR, “DEEP FAKES AND THE INFOCALYPSE”: We can conceive of it as us being now on the cusp, I think, of a profound change in our
relationship to machines that’s going to transform the way we live, transform the way we work, even transform our very experience of what it
means to be human. That’s how seismic this is.
If you consider the exponential technologies of the past 30 years, the so-called technologies of the information age, from the internet, to cloud, to the smartphone, it’s all been about building a digital infrastructure and a digital ecosystem, which has become a fundamental tenet of life.
However, A.I. takes it a step further. With A.I., and in particular generative A.I., which is what I have been following and tracking for the last decade, you are really looking at the information revolution becoming an intelligence revolution, because these are machines that are now capable of doing things that we thought were unique to human creativity and to human intelligence. So, the impacts of this as a whole for the labor market, for the way we work, for the way that — the very framework of society unfolds — is just so important.
My background is in geopolitics, where I kind of advised global leaders for the better part of two decades. And the reason I became interested in A.I.
is not because I have a tech background, I have a background assessing trends for humanity. This isn’t about technology, this is ultimately a
story for humanity and how we decide this technology is going to unfold in our companies, so within enterprise, very exciting, but also, society writ
large.
And the final thing I would say is, we have agency. A lot of the debate has been about A.I. autonomously taking over, and I just want to kind of divorce that kind of hypothetical scenario from the reality, and that is, we decide.
AMANPOUR: Connor, though, you believe — because we have spoken before — that actually these machines are going to be so powerful and so unable to be controlled by human input that they actually could take over.
CONNOR LEAHY, CEO, CONJECTURE: Unfortunately, I do think that this is a possibility. In fact, I expect it is the default outcome. But I would like to agree with Nina fully that we do have agency; that doesn’t have to happen.
But you asked a question earlier, what keeps me up at night? And I guess what I would say keeps me up at night is that a couple million years
ago, the common ancestor between chimpanzees and humans split into two subspecies. One of these developed a roughly three times larger brain than
the other species. One of them goes to the moon and builds nuclear weapons. One of them doesn’t. One of them is at the complete mercy of the other and
one of them has full control. I think this kind of relationship to very powerful technology can happen.
I’m not saying (INAUDIBLE) it is the default outcome unless we take our agency, see that we are in control, that we are the ones building these technologies, and, as a society, decide to go a different path.
AMANPOUR: So, to follow up on that, the same question to you, but from the point of view of how we have agency, express agency and regulate. You’re a private entrepreneur, and you have also been on the British government’s sort of regulation council.
PRIYA LAKHANI, CEO, CENTURY Tech: Yes.
AMANPOUR: What will it take to ensure diversity, agency, and that the machines don’t take over?
LAKHANI: Well, what it takes to ensure that is a lot of work, and there’s lots of ideas, there’s lots of theories, there are white papers, there is the pro-innovation regulation review that I worked on with Sir Patrick Vallance in the U.K. The U.S. government has been issuing guidance. The E.U. is issuing its own laws and guidance.
But what we want to see is execution, Christiane. And, you know, on the sort of what-keeps-you-up-at-night question, I felt sorry for my husband, because what keeps me up is actually other issues, such as disinformation with generative A.I. And I’ve got two children who are 11 and 13. Are they going to grow up in a world where they can trust information and what is out there? Or do these technologies, because of a lack of execution on the side of policymakers, mean that actually it’s sort of a free-for-all, you know, bad actors have access to this technology and you don’t know what to trust?
But actually, the biggest thing that keeps me up at night is a flip from what we’ve heard here. It’s, are we — as a human race, are we going to
benefit from the opportunities that artificial intelligence also enables us, you know, to have?
So, we often talk — and Christiane, you know, forgive me, but for the last six months it’s all been about ChatGPT and generative A.I. That is really
important. And that’s where a lot of this discussion should be placed. But we also have traditional A.I.
So, we have artificial intelligence where we have been using data, we have been classifying, we have been predicting, we have been looking at scans
and spotting cancer where we’ve got a lack of radiologists, right, and we can augment radiology, we can augment teaching and learning.
So, how are we also going to ensure that, all around society, we don’t actually exacerbate the digital divide, right, but we leverage artificial intelligence, the best it can provide, to help us in the areas of health care, education, security?
So, you know, it’s scary to think we are not using it to its full advantages while we also must focus on the risks and the concerns. And so, really, I sort of have this dual what-keeps-me-up-at-night, as I said. I sort of feel sorry for my husband, because I’m sort of tapping on his shoulder going, and what about this and what about that?
WENDY HALL, PROFESSOR OF COMPUTER SCIENCE: We really need many different voices helping us build and design these systems and make sure they are safe, not just the technical teams that are working at the companies to build the A.I. and talking to the governments. We need women, we need age range, we need diversity from different subject areas, we need lots of different voices, and that’s what keeps me awake at night.
AMANPOUR: Because if not, what is it? What is the option?
HALL: Well, it’s much, much more likely to go wrong for the whole of society, because you haven’t got society represented in designing the systems.
AMANPOUR: So, you are concerned that it is just one segment of society?
HALL: Yes. One small segment of society, right? We call them — I like to call them the tech bros. They are mostly men; there are very few women actually working in these companies at the cutting edge of what’s happening. You saw the pictures of the CEOs and the vice presidents with Biden and
with Rishi Sunak, and these are the voices that are dominating now, and we have to make sure that the whole of society is reflected in the design and
development of the systems.
AMANPOUR: So, before I turn to you for more, you know, input, I want to quote from Douglas Hofstadter, who I’m sure you all know, the renowned author and cognitive scientist, who has spoken about the issues that you have just highlighted, that ChatGPT and generative A.I. have taken over the conversation. He says, “It just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.”
A little kind of like what you said, but I see you wanting to dive in, Wendy, with a comment on it.
HALL: Well, I just — I mean, I’d like to sort of disagree with Priya, but I think that if we move too fast, we could get it wrong. If you think about the automobile industry, when it started, there were no roads; someone had to walk in front of a car with a lamp, which shows you how fast they were going. If we had tried to regulate the automobile industry then, we wouldn’t have gotten very far, because we couldn’t see what was going to be coming in another hundred years.
And I think we have to move very fast to deal with the things that are immediate threats, and I think that disinformation, the fake news — we have two major democratic elections next year, the U.S. presidential election and our election here, whenever it is, they could even be at the same time, and there are other elections — and the disinformation, the fake news, the pope-in-a-puffer-jacket moments, these could really mess up these elections. And I think there is an immediate threat to the democratic process.
And I believe we should tackle those sorts of things at a fast speed and then get the global regulation of A.I., as it progresses through the different generations of A.I., and get that right as a global effort (ph).
AMANPOUR: So, I think that’s really important, she’s bringing up, as the most existential threat, beyond the elimination of the species, the
survival of the democratic process and the process of truth.
HALL: Yes.
AMANPOUR: So, let me fast forward to our segment on deep fakes. As we know, it’s a term that we give video or audio that’s been edited using an algorithm to replace, you know, the original with the appearance of authenticity. So, we remember, a few months ago, there was this image of an explosion at the Pentagon, which was fake, but it went around the world virally; it caused markets to drop before people realized it was bogus. We know that, for instance, they’re using it in the United States in elections right now.
I’m going to run a soundbite from a podcast called “Pod Save America” where they, as a joke, basically simulated Joe Biden’s voice, because they could
never get him on the show and they thought they would make a joke and see if it put the — you know, a bit of fire underneath it. So, just listen to
this.
(BEGIN VIDEO CLIP)
UNIDENTIFIED MALE: Hey, friends of the pod, it’s Joe Biden. Look, I know you haven’t heard from me for a while and there’s rumblings that it’s
because of some lingering hard feelings from the primary. Here’s the deal —
UNIDENTIFIED MALE: This is good.
UNIDENTIFIED MALE: — did Joe Biden like it when Lovett (ph) had a better chance of winning Powerball than I did of becoming president?
UNIDENTIFIED MALE: I didn’t say that.
UNIDENTIFIED MALE: No. Joe did not.
(END VIDEO CLIP)
AMANPOUR: OK. So, that was obviously a joke, they are all laughing. But Tommy Vietor, who’s one of these guys, a former — you know, a former White House spokesman, basically said, they thought it was fun but ended up thinking, oh, God, this is going to be a big problem. Are people playing with fire, Connor?
LEAHY: Absolutely. Without a doubt. These kinds of technologies are widely available. You can go online right now and you can find open-source code that you could download to your computer, you know, play with a little bit, take 15 seconds of audio of any person’s voice, anywhere on the internet, without their consent, and make them say anything you want. You can call their grandparents in their voice, you can ask for money. You can, you know, put it on Twitter, say it’s some kind of political event happening. This is already possible and already being exploited by criminals.
AMANPOUR: By criminals?
SCHICK: I actually wrote the book on deep fakes a few years ago and I initially started tracking deep fakes, which I call the first viral form of
generative A.I. back in 2017, when they first started emerging. And no surprise, but when it became possible for A.I. to move beyond its
traditional capabilities to actually generate or create new data, including visual media or audio, it has this astonishing ability to clone people’s
biometrics, right?
And the first use case was nonconsensual pornography, because, just like with the internet, pornography was at the cutting edge. But when I wrote my book — and actually, at the time, I was advising a group of global leaders, including the NATO secretary general and Joe Biden — we were looking at it in the context of election interference and in the context of information integrity.
So, this debate has been going on for quite a few years. And over the past few years, it’s just that now it’s become, you know —
AMANPOUR: Well, right, but —
SCHICK: Yes.
AMANPOUR: — that’s the whole point. This is the point. Just like social media, all of this stuff has been going on for a few years until it almost
takes over.
SCHICK: Yes. But the good thing is that there is an entire community working on solutions.
AMANPOUR: OK.
SCHICK: I’ve long been very proud to be a member of the community that’s pioneering content authenticity and provenance. So, rather than being able
to detect everything that’s fake, because it’s not only that A.I. will be used to create malicious content, right? If you accept my thesis that A.I.
increasingly is going to be used almost as a combustion engine for all human creative and intelligent work, we are looking at a future where most
of the information and content we see online has some elements of A.I. generation within it.
So, if you try to detect everything that’s generated by A.I., that’s a fool’s errand. It’s more that the onus should be on good actors or companies that are building generative A.I. tools to be able to cryptographically (INAUDIBLE), have an indelible signal — it’s more than a watermark, because it can’t be removed — in the DNA of that content and information to show its origin.
AMANPOUR: Yes. So, like the Good Housekeeping seal of approval?
SCHICK: You — it’s basically about creating an alternative safe ecosystem —
AMANPOUR: Yes.
SCHICK: — to ensure information integrity.
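What Schick describes — an indelible, cryptographically verifiable origin signal bound to the content itself — can be sketched in miniature. The Python below is a toy illustration of the shape of the idea, not the C2PA-style standard she is alluding to: real provenance systems embed public-key signatures in the media file, while this stand-in uses a symmetric HMAC, and every name in it is hypothetical.

import hashlib, hmac, json

SECRET_KEY = b"publisher-signing-key"  # hypothetical key, for illustration only

def sign_content(content: bytes, origin: str) -> dict:
    # Build an origin manifest; the tag covers the content hash plus metadata.
    manifest = {"origin": origin, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["tag"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(content: bytes, manifest: dict) -> bool:
    # Recompute the tag; any edit to the content or its metadata breaks it.
    claimed = dict(manifest)
    tag = claimed.pop("tag")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and claimed["sha256"] == hashlib.sha256(content).hexdigest()

article = b"Pope spotted in a puffer jacket"
manifest = sign_content(article, origin="example-news.org")
print(verify_content(article, manifest))         # True: untouched content verifies
print(verify_content(article + b"!", manifest))  # False: tampering is detected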
AMANPOUR: So, let’s just play this and then maybe this will spark a little bit more on this. This is, you know, the opposite end of the Democratic joke that we just saw. This is from an actual Republican National Committee serious, you know, fake.
(BEGIN VIDEO CLIP)
UNIDENTIFIED FEMALE: This just in, we can now call the 2024 presidential race for Joe Biden.
UNIDENTIFIED MALE: My fellow Americans.
UNIDENTIFIED FEMALE: This morning, an emboldened China invades.
UNIDENTIFIED MALE: Financial markets are in free fall as 500 regional banks have shuttered their doors.
UNIDENTIFIED MALE: Border agents were overrun by a surge of 80,000 illegals yesterday evening.
UNIDENTIFIED FEMALE: Officials closed the City of San Francisco this morning, citing the escalating crime and fentanyl crisis.
(END VIDEO CLIP)
AMANPOUR: So, that was a Republican National Committee ad, and the Republican strategist Frank Luntz said this about the upcoming election: thanks to A.I., even those who care about the truth won’t know the truth.
LAKHANI: The scale of the problem is going to be huge, because the technology is available to all. On the biometric front, right, let’s think about this, it’s actually really serious. So, think about banking technology. At the moment, when you want to get into your bank account on a phone, they use voice recognition, right? We have facial recognition on our smartphones.
Actually, with the rise of generative A.I., biometric security is seriously under threat. So, people are saying you might need some sort of two-factor authentication to be able to solve those problems. And I don’t think it’s a fool’s errand to try and figure out what is created by A.I. and what is not, simply because, look at the creative industries: the business model of the creative industries is going to be
seriously disrupted by artificial intelligence, and there’s a huge lobby from the creative industries saying, well, we’ve got our artists, we’ve got our music artists, our record labels, our, you know, design artists, we have newspapers, we’ve got broadcasters who are investing in investigative journalism — how can we continue to do that, and how can we continue to do that with the current business models, when actually everything that we are authentically producing, that is, you know, taking a lot of time and investment and effort, is being ripped off by an artificial intelligence, sort of generative A.I., over at the other end?
What policymakers then decide to do when it comes to whether it is fair game to use any input to have these A.I. technologies generate new media will affect whether the start-ups and scale-ups can settle in this country and grow in this country or whether they go elsewhere.
We know where Europe is, right? So, Europe has got this sort of more prescriptive legislation that they are going for. We are going for what I — what we call light-touch regulation.
AMANPOUR: We, being the U.K.?
LAKHANI: We, being the U.K. Apologies. Yes. So, light touch, which I wouldn’t say is lightweight; it’s about being fast and agile, right? And on the A.I. Council, which Wendy and I both sat on, it was all about, actually, how can we move with an agile approach as this technology evolves? And then, you have the U.S. and you have other countries.
So, it — this is all intertwined into this big conversation. How can you be pro-innovation? How can you increase, you know, gross value added to your economy? How can you encourage every technology company to start up in your country and thrive in your country, while also protecting the rights of the original authors, the original creators, and also while protecting (INAUDIBLE)?
HALL: Can —
LAKHANI: And this is — and it’s — there’s a political angle there.
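Lakhani’s earlier point that cloned voices and faces may force a fallback to “some sort of two-factor authentication” is concrete enough to sketch. Below is a minimal time-based one-time password (TOTP) generator in Python, following RFC 6238, the mechanism behind most authenticator apps; the Base32 secret is a common documentation test value, and this is an illustration rather than a production implementation.

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, now: float | None = None, digits: int = 6, step: int = 30) -> str:
    # Derive a short-lived code from a shared secret and the current 30-second window.
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Phone and server share the secret; a cloned voice alone cannot produce the code.
print(totp("JBSWY3DPEHPK3PXP"))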
HALL: Yes. I think that — this whole conversation will be terrifying people.
AMANPOUR: OK. So, can you bring it back to not terrify people?
HALL: Well, because it’s getting very technical. We’ve got — you know, all the things you’ve been talking about. And actually, you know, in the U.K., Rishi Sunak could call the election in October, right?
AMANPOUR: Right.
HALL: All this won’t be sorted out by then.
AMANPOUR: Right.
HALL: And I think we have to learn and we have to keep the human in the loop.
AMANPOUR: Right.
HALL: The media will have a major role to play in this —
AMANPOUR: Yes.
HALL: — because we’ve got to learn to slow things down. And we’ve got to —
AMANPOUR: But is that possible? I mean, you say that. Is it possible, Connor, to slow things down?
HALL: No, no. I don’t mean technically. I mean, we’ve got to think about, when you get something that comes in off the internet, you’ve got to check your sources. There’s this big thing at the moment, check your sources. We are going to have to check. I totally agree.
I mean, I’ve been working on provenance for most of my career and I totally agree about the technical things we can use. But they’re not going to be ready, I’d argue, and I think people get very confused. I think we’ve got — my mother used to say to me, in the 1960s, don’t believe everything you read in the newspapers.
LAKHANI: Unless Christiane said it.
HALL: Well, OK. But that’s the whole point, Priya, you see. If Christiane says it —
LAKHANI: Well, I’m not agreeing (ph) with it. My entire —
HALL: — I might be inclined to trust it. And I —
AMANPOUR: But I could be a deep fake, Dame Wendy, is what you are saying.
LAKHANI: Well, say — and so, actually, I’m with Nina on the fact that there is lots of innovation in this area.
HALL: There is lots of innovation.
LAKHANI: So, I think there is innovation. But look, the key — I think this is a long-term thing. This isn’t going to happen tomorrow, but one of the key points is that —
HALL: The election starts tomorrow.
LAKHANI: — in education, for example, across the world, whether you’re talking about the U.S. or across Europe, different curricula, whether it’s state curricula or private curricula, one of the things that we’re going to have to do is teach children, teach adults, everybody — they’re going to have to be more educated about just, you know, the non-technical view of what A.I. is, so that when you read something, are you checking your sources, right? There are skills, such as critical thinking, that people love. Actually, they’re more important now than ever before.
AMANPOUR: For sure.
LAKHANI: Right? So, did Christiane actually say that? Did she not? And so, understanding the source is going to be important. And there’s definitely a
policymaker’s role across the world to ensure that that’s in every curriculum, it’s emphasized in every curriculum. Because right now, it
isn’t.
AMANPOUR: OK. I just need to stop you for a second because I want to jump off something that Sam Altman, who’s the — you know —
LAKHANI: Yes.
AMANPOUR: — I guess the modern progenitor of all this, of OpenAI, et cetera. In front of Congress, he said the following recently, and we are
going to play it.
(BEGIN VIDEO CLIP)
SAM ALTMAN, CEO, OPENAI: I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the
government to prevent that from happening.
(END VIDEO CLIP)
AMANPOUR: I mean, these guys are making a bushel of money over this. The entire stock market, we hear, is floated right now by very few A.I.
companies. What do you feel? What’s your comment on what Sam Altman just said?
LEAHY: So, I do agree with the words Sam Altman speaks. My recommendation, though, when looking at CEOs or other people of power, is to watch the hands, not the mouth. So, I would like to, you know, thank Sam Altman for being quite clear, unusually clear, even, about some of these risks.
When is the last time you saw, you know, an oil CEO in the ’70s going to the, you know, heads of government saying, please regulate us, climate change is a big problem? So, in a sense, this is better than I expected things to go. But a lot of people who I’ve talked to in this field — as someone who is very concerned about the existential risks of A.I. — are saying, well, if you are so concerned about it and Sam is so concerned about it, why do you keep building it? And that’s a very good question.
HALL: Exactly.
LEAHY: I have put this exact question to Sam Altman, Dario Amodei, Demis Hassabis, and all the other people who are building these kinds of technologies. I don’t know the answer in their minds. I think, you know, they may disagree with me on how dangerous it is, or maybe they think it’s not a danger yet, but I do think there is an unresolved tension here.
HALL: I’m quite skeptical. And also, I would like — I mean, remember the dot-com crash?
AMANPOUR: Yes, the bubble.
HALL: Right. Well, I’m just saying, we could have another bubble, right? Now, I don’t think the business models are sorted out for these companies. I don’t think the technologies are as good as they say they are. I think there’s a lot of scaremongering.
AMANPOUR: I know you say there’s a lot of scaremongering, and, you know, I’m just going to quote again from a profile of Joseph Weizenbaum — who is, again, one of the godfathers of A.I. — in “The Guardian.” He said, by ceding so many decisions to computers, we had created a world that was more unequal and less rational, in which the richness of human reason had been flattened into the senseless routines of code.
And even Yuval Harari, who we all know is a great modern thinker: simply by gaining mastery of language, A.I. would have all it needs to contain us in a matrix-like world of illusions. If any shooting is necessary, A.I. could make humans pull the trigger, just by telling us the right story.
We are living in an Oppenheimer world right now, right? “Oppenheimer” is the big zeitgeist. What is it telling us? It’s telling us a Frankenstein
story.
(BEGIN VIDEO CLIP)
UNIDENTIFIED MALE: We imagine a future. And our imaginings horrify us.
(END VIDEO CLIP)
AMANPOUR: The science is there, the incredible ingenuity is there, the possibility of control is there — and let’s talk about nuclear weapons, but it’s only barely hanging on now. I mean, so many countries have nuclear weapons.
So, again, from the agency perspective, from — you know, you have talked about not wanting to terrify the audience. I am a little scared.
HALL: You see, I don’t think — these guys can correct me if I’m wrong — but we aren’t at that Weizenbaum moment yet, by any means, right?
AMANPOUR: Yes.
HALL: This generative A.I. can do what appears to be amazing things, but it’s actually very dumb.
AMANPOUR: OK.
HALL: All right. It’s just natural language processing and predictive text, right?
SCHICK: No, I agree.
AMANPOUR: Is that right? Let’s just hear from Nina first for a second.
SCHICK: Well, the first thing is, not everyone is afraid of it. If you look at public opinion polling —
AMANPOUR: Yes.
SCHICK: — the kind of pessimistic, scary views of A.I. tend to be in western liberal democracies. In China, you know, 80 percent of the population has a more optimistic view of artificial intelligence. Of course, if the debate is wrapped up in these terms — that, again, it’s so much to do with the existential threat and AGI — it can seem very scary.
But I agree with Wendy. If you look at the current capabilities, is this a sentient machine? Absolutely not.
AMANPOUR: No emotions. Yes.
SCHICK: But can it really understand?
HALL: It doesn’t understand.
SCHICK: Is it — but that’s neither here nor there, because even if it is not actually able to understand, with its current outputs and applications,
does this technology profound enough to dramatically shift the labor market? Yes, it is. And I actually think that sometimes —
AMANPOUR: In a good or bad way?
SCHICK: In a transformative way. So, I think the key question then is, as it ever was with technology, who controls the technology and the systems, and to what ends? And we’ve already been seeing, over the past few decades, over the information revolution, the rise of these new titans, you know, private companies who happen to be more powerful than most nation states.
So, again, that is just going to be augmented, I think, with artificial intelligence, where you have a couple of companies and a few people who are
really able to build these systems, and to kind of build commercial models for these systems. So, the question then is about access and democratizing
this possibility of A.I. for as much of humanity as possible.
LAKHANI: I just think — that’s right, I’m sorry —
AMANPOUR: Very quickly —
LAKHANI: Yes, very, very quickly.
AMANPOUR: — because I need to move on to the positive, because of the jobs.
LAKHANI: The reason why Geoffrey Hinton left Google, and the reason why you’ve got all of that, is because of the way in which this is built. This is a different model of artificial intelligence, where normally — so, Christiane, you have a technology that has been built for one specific task, right? So, it’s going to beat the grandmaster at chess. It’s not going to break out of the box and do anything else. It’s going to —
SCHICK: But that’s why it’s a breakthrough.
LAKHANI: Which is why — sorry. Let me just —
SCHICK: That’s why general —
LAKHANI: Let me finish. Sorry, no, because I think there’s a fundamental understanding —
HALL: It’s not true they —
LAKHANI: — piece, which is what we have to make clear here. Which is why I said at the outset, I’m really excited about the opportunities of artificial intelligence, because they are insane (ph) opportunities. The reason why these godfathers of artificial intelligence are all quoting and, you know, writing and leaving big companies and stating that there is a risk is not to have a dystopian-versus-utopian conversation, because that’s not healthy. It’s to get to the issue of the way in which this technology — these so-called transformer models, this idea of foundational A.I. models, which we don’t need to get into the details of — is about training a system and models that go beyond the one task that they were trained for, where it can then copy its learning and then do other tasks, then copy its learning and then do other tasks, then copy its learning.
And so, the idea is that when I teach you something or you teach me something, we’ve got that transference of information that we then learn. That’s the human process. And if we want to teach it to 1,000 other people, we’ve got to transfer that, and they’ve got to learn, and they’ve got to take it in.
The learning algorithm of A.I. is a lot more efficient than the human brain in that sense, right? It just copies and it learns, right? And so, in all of this conversation: there is no AGI right now. I think everyone — even the godfathers of A.I. — is in total agreement that it’s not there now. But what they are looking at is, wow, the efficiency of this A.I. is actually better than the human brain, which we hadn’t considered before. The way in which it works, we hadn’t considered before.
So, all that they are saying is, look, it is — I think people should be excited and opportunistic about A.I. and they should also be a bit
terrified in order to be able to get this right.
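The “copy its learning, then do other tasks” idea Lakhani is gesturing at is, mechanically, transfer learning: trained weights are literally copied and reused for a task the model never saw. A minimal sketch, assuming PyTorch is available; the toy backbone and random data are hypothetical stand-ins for a pretrained foundation model and a real downstream task.

import copy
import torch
import torch.nn as nn

# Stand-in for a network already trained on some original task.
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())

# "Copying its learning": duplicate the trained weights, then freeze them.
transferred = copy.deepcopy(backbone)
for p in transferred.parameters():
    p.requires_grad = False

# Only a small new "head" is trained for the new task (here, two classes).
head = nn.Linear(32, 2)
model = nn.Sequential(transferred, head)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(64, 16), torch.randint(0, 2, (64,))  # toy data for the new task
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"loss on the task the backbone was never trained for: {loss.item():.3f}")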
AMANPOUR: And that’s actually important, because, as you said and everybody says, we can’t just harp on the negative, of which there is plenty, or on the terrifying, of which there is plenty —
LAKHANI: Yes.
AMANPOUR: — or even on this — the experience that we’ve had from social media, where these titans have actually not reined themselves in to the extent that they pledged to every time they’re hauled up before Congress and the lot.
However, I had Brad Smith, Microsoft vice chair and president, on this program a few weeks ago. And he talked about, obviously, jobs. And he’s
basically saying, you know, in some ways, they will go away, but new jobs will be created. We need to give people new skills. This is the rest of
what he told me.
(BEGIN VIDEO CLIP)
BRAD SMITH, VICE CHAIR AND PRESIDENT, MICROSOFT: In some ways, some jobs will go away, new jobs will be created. What we really need to do is give people the skills so they can take advantage of this, and then, frankly, for all of us — I mean, you, me, everyone — our jobs will change. We will be using A.I., just like 30 years ago we first started to use PCs in our offices. And what it meant is you learned new skills so that you could benefit from the technology. That’s the best way to avoid being replaced by it.
(END VIDEO CLIP)
AMANPOUR: Well, before I ask you — Connor?
LEAHY: The reason we were not replaced by steam engines is because steam engines unlocked a certain bottleneck on production, energy, raw energy for
moving, you know, heavy loads, for example. But then, this made other bottlenecks more valuable. It’s increased the value of intellectual labor,
for example, in the ability to plan or organize or to come up with new inventions.
Similarly, the PC unlocked the bottleneck of rote computation, so it made it less necessary. You know, the word computer used to refer to a job that people had, to actually crunch numbers. Now, this bottleneck was unlocked and new opportunities presented themselves.
But just because it’s happened in the past, it doesn’t mean there is an infinite number of bottlenecks. There are, in fact, a finite number of
things humans do, and if all of those things can be done cheaper and more effectively by other methods, the natural process of a market environment
is to prefer that solution.
AMANPOUR: And we’ve seen it over and over again, even in our business, and it’s not necessarily about A.I., and we’re seeing this issue that you’re talking about play out in the directors’, you know, strike, the writers’ strike, the actors’ strike, et cetera, and many others. But there must be — he must be right to an extent, Brad Smith, right? You’ve done so much thinking about this, and the potential positive — and jobs seem to be one of the biggest worries for ordinary people, right? So, what do you think?
HALL: I take Connor’s point, but history shows us that when we invent new technologies, that creates jobs and it displaces jobs.
AMANPOUR: OK.
HALL: There are short-term winners and losers, but in the long term, you are back to, is it an existential threat, and will we end up in the matrix, just as the biofuel for the robots? That’s where I believe we need to start regulating now, to make sure this is always an augmentation.
And, you know, I mean, I genuinely believe we are going to get a four-day week out of A.I. I think people will be relieved of burdensome work, so
that there is more time for the caring type of work, doing things that — I mean, we don’t differentiate enough between what the software does and what
robots can do. And I know in Japan they’ve gone all out for robots to help care for the elderly.
AMANPOUR: Yes.
HALL: I don’t know that we would accept that in the way they have. And I think there are all sorts of roles that human beings want to do. You know, be more — have more time bringing up the children. We’ll be able to have personalized tutors for kids — well, that won’t replace teachers as such — to guide them through. So, I am very positive about the type of effect it can have on society, as long as our leaders start talking about how we remain in control. I’d prefer to say that rather than regulate.
AMANPOUR: That’s interesting.
HALL: How we remain in control.
AMANPOUR: So, to the next step, I guess, from what you’re saying, in terms of the reality of what’s going to happen in the job market, do any of you
believe that there will be a universal basic income therefore?
SCHICK: It is time to start thinking about ideas like that — UBI, the four-day work week — because I think we can all agree on this panel that it is undoubted that all knowledge work is going to transform in a very dramatic way, I would say over the next decade.
And it isn’t necessarily that A.I. is going to automate you entirely. However, will it be integrated into the processes of all knowledge and
creative work? Absolutely. Which is why it is so interesting to see what’s unfolding right now in Hollywood with the SAG strike and the writer strike,
because the entertainment industry just happens to be at the very cusp of this.
And when they went on that strike, you know, when Fran Drescher gave that kind of very powerful speech, the way that she positioned herself was
saying, this is kind of labor versus machines, machines taking our jobs. I think the reality of that is actually going to be far more what Wendy
described, where it’s this philosophical debate about, does this augment us or automate us?
HALL: Yes, exactly.
SCHICK: And I know there’s a lot of fear about automation. But you have to consider the possibilities for augmentation as well. I just hosted the first generative A.I. conference for enterprise, and there were incredible stories coming out in terms of how people are using this. For instance, the NASA engineers who are using A.I. to design component parts of spaceships. Now, this used to be something that would take them an entire career as a researcher to achieve, but now, with the help of A.I. in their kind of design and creative process, their intelligent process, it’s being distilled down to hours and days.
So, I think there will be intense productivity gains, and there are various kinds of reports that have begun to quantify this. A recent one from McKinsey says that up to $4.4 trillion in value could be added to the economy over just 63 different use cases for productivity. So, if there is this abundance, you know, the question then is, how is this distributed in society? And the key factors, I think, were already raised at this table: how do we think about education, learning, reskilling, making sure that, you know, the labor force can actually, you know, take advantage of it?
AMANPOUR: And to follow up on that, I’m going to turn to the side here, because health care is also an area which is benefiting. A.I. is teaching, I believe, super scanners to be able to detect breast cancer, other types of cancer. I mean, these are big deals.
LAKHANI: These are opportunities.
AMANPOUR: These are lifesaving opportunities.
LAKHANI: These are lifesaving opportunities. And so — I think the dream is if we can get the A.I. to augment the H.I., right — the A.I., the artificial intelligence, augmenting the human intelligence. How can we make us as humans far more powerful, more accurate, better at decision-making, where there’s a lack of humans in a particular profession?
So, I was talking about radiographers earlier. So, you know, if you don’t have enough radiographers looking at every breast cancer scan, can you use artificial intelligence to augment that? So, actually, you can ideally spot more tumors earlier, save lots of lives. But then, you also have that human in the loop, you have that human who is able to do that sort of quality check of the artificial intelligence.
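The human-in-the-loop triage Lakhani describes — A.I. flags likely tumors, uncertain cases go to a radiologist, and even confident calls get a human quality check — reduces to a few lines of routing logic. Everything below (the scores, the thresholds) is hypothetical and illustrates the workflow only, not any deployed clinical system.

# Hypothetical tumor-probability scores from a screening model.
scans = {"scan_a": 0.97, "scan_b": 0.08, "scan_c": 0.55}

CONFIDENT_POSITIVE = 0.90  # illustrative thresholds, not clinical values
CONFIDENT_NEGATIVE = 0.10

for scan_id, p_tumor in scans.items():
    if p_tumor >= CONFIDENT_POSITIVE:
        print(f"{scan_id}: flagged as likely tumor -> radiologist confirms (quality check)")
    elif p_tumor <= CONFIDENT_NEGATIVE:
        print(f"{scan_id}: likely clear -> sampled periodically for human audit")
    else:
        print(f"{scan_id}: uncertain ({p_tumor:.2f}) -> full human read required")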
In education, we are 40,000 teachers short in the U.K.; we’re millions of teachers short worldwide. Can we provide that personalized education to every child while classroom sizes are getting larger, but then provide teachers with the insights about where the timely, targeted intervention is needed right now? Because that is impossible to do with 30 or 40 students in the classroom. And, taking that opportunity to the universal basic income question, I think it is a choice. Christiane, I really think it’s a choice right now for governments and policymakers.
Am I going to be spending lots of money on UBI, on other schemes and areas where I can ensure that —
AMANPOUR: Universal Basic Income.
LAKHANI: Universal Basic Income. Or am I going to make — take that approach that is going to last beyond my election cycle — a long-term educational approach to lifelong learning, to people being able to say, right, this is what I’m trained for, this is what I’m skilled at today? As technology advances, how do I upskill and reskill myself?
AMANPOUR: Now, you’re talking about politics for the people, with a long-term view.
LAKHANI: Well, this is what I’m interested in.
HALL: But also —
AMANPOUR: Yes?
HALL: — we’ve been talking about this very much in western points of view.
AMANPOUR: Yes.
HALL: I mean, when you — and the whole point about, you know, the migration crisis is because people want to come and live in countries where
their quality of life is better.
AMANPOUR: Right. And where they can get jobs, for heck’s sake.
HALL: But what we need to be doing is thinking the other way around. We can use A.I. to help increase productivity in the developing world —
AMANPOUR: Right.
HALL: — and that is what our leaders should be doing, which is way beyond election cycles.
AMANPOUR: Exactly.
HALL: That, to me, is where we can —
AMANPOUR: Yes. And the climate crisis and all those other issues.
HALL: We should really put back.
AMANPOUR: So, will they do it? Because, as we discussed the first time we talked, certain graphs and, you know, analyses show that the amount of money that’s going into A.I. is going into performance and not into moral alignment, so to speak.
HALL: Absolutely.
AMANPOUR: Which is what you are talking about. That is a problem, that needs to shift.
HALL: Which is why I come back to what I said at the very beginning, we need to have diversity of voice in this.
AMANPOUR: Right. Diversity of voices.
HALL: Right. Not just the people who are making the money out of it.
AMANPOUR: Can I —
LAKHANI: Just — sorry. Just to sort of encompass a point that I think Wendy and Nina basically made, which is that, actually, one of the issues — you know, when we were talking about whether it’s scaremongering or not — is, where is the power?
If you have centralized power within about four or five companies, that is a problem. And Connor and I were talking about this behind the scenes. You
know, so you’ve got this black box, essentially, and you’ve got constant applications of artificial intelligence on this black box. Is that safe? Is
it not?
And so, to your question, I mean, is it going to happen? Will policymakers make it happen? Now, I think this is all about aligning our people’s agenda
with their agenda, right? And if we can find a way to make those things match, actually, I think there’s a huge amount of urgency in terms of —
AMANPOUR: That requires joined-up politics and policy?
LAKHANI: Absolutely.
AMANPOUR: Sensible, joined-up, coherent policy.
LAKHANI: But they are listening.
AMANPOUR: Yes. They are.
LAKHANI: Look at all of the investment, even within governments, in people with scientific backgrounds. One of the things that we found, and that I’d be really interested in across the globe — if you look at the U.K., you know, one of the areas that needs improvement is the civil service: 90 percent of the civil service in the United Kingdom has humanities degrees.
AMANPOUR: Yes.
LAKHANI: And I’d be really interested to compare that to other countries.
HALL: And with (INAUDIBLE).
AMANPOUR: Yes. Can we just end on an even more human aspect of all of this? And that is relationships. You all remember the 2013 movie, “Her.”
(BEGIN VIDEO CLIP)
UNIDENTIFIED MALE: I feel really close to her. Like when I talk to her, I feel like she is with me.
(END VIDEO CLIP)
AMANPOUR: About a man who had a relationship with a chatbot. A new example from “New York” magazine, which reported this year: within two months of downloading Replika, Denise Valenciano, a 30-year-old woman in San Diego, left her boyfriend and is now “happily retired from human relationships.”
Over to you, Connor.
LEAHY: Oh, I thought we wanted to end on something positive. Why are you calling on me? God forbid.
AMANPOUR: I’m going to Nina last.
LEAHY: I mean, the truth is that, yes, these systems are very good at manipulating humans. They understand human emotions very well, they are infinitely patient. Humans are fickle. It’s very hard to have a relationship with a human. They have needs; they are people in themselves. These things don’t have to act that way.
Sometimes, when people talk to me about the existential risk from A.I., they imagine evil terminators pouring out of a factory or whatever. That’s not what I expect. I expect it to look far more like this: very charming manipulation, very clever, good catfishing, good negotiations — things that make the companies that build these systems billions of dollars along the way, until the CEO is no longer needed.
SCHICK: I mean, it’s amazing, right? To consider the film “Her” and that used to be in the realms of science fiction, and not only has that, you
know, become a reality, but the interface, I mean, “Her” was just a voice, but the interface you can interact with now is already far more
sophisticated than that.
So, of course, when it comes to relationships and in particular, sexual relationships, it gets very weird very quickly. However, this premise of
A.I. being able to be almost like a personal assistant, as you are starting to see with these conversational chatbots, is something that extends far
beyond relationships. It can extend to every facet of your life.
So, I think, actually, we are going to look back, just like we do now perhaps for the iPhone or the smartphone. Like, do you remember 15 years ago, when we didn’t use to have this phone with our entire life on it? And we hold this device now, you know, in our hands; we can barely sleep without it. I think a similar kind of structure is going to happen with our personal relationship with artificial intelligence.
LAKHANI: Nina, Denise (ph) doesn’t realize, she’s actually in a relationship with 8 billion people because that chatbot is essentially just
trained on the internet, right? It’s 8 billion people’s worth of views.
AMANPOUR: Well, Priya Lakhani, Nina Schick, Dame Wendy Hall, and Connor Leahy, thank you very much, indeed, for being with us. We scratched the surface. With great experience and expertise, thank you.
LAKHANI: Thank you.
AMANPOUR: Now, my colleague, Hari Sreenivasan, has been reporting on artificial intelligence and its ethical dilemmas for years. In a world
where it is increasingly hard to discern fact from fiction, we are going to discuss why it’s more important than ever to keep real journalists in the
game.
So, Hari, first and foremost, do you agree that it’s more important than ever now to keep real journalists in the game?
HARI SREENIVASAN, INTERNATIONAL CORRESPONDENT: Yes, absolutely. I mean, I think we are at an existential crisis. I don’t think the profession
is ready for what is coming in the world of artificial intelligence and how it’s going to make a lot of their jobs more difficult.
AMANPOUR: You’ve seen that conversation that we had. What stuck out for you, I guess, in terms of good, bad, and indifferent? Before we do a deep
dive on journalism.
SREENIVASAN: Yes. Look, I think, you know, I would like to be a glass-half-full kind of person about this, but unfortunately, I don’t think that we have, anywhere in the United States or on the planet, the regulatory framework. We don’t have the carrot, so to speak — the incentives for private companies or public companies to behave better. We don’t have any sort of enforcement mechanisms if they do not behave better. We certainly don’t have a stick.
We don’t have investors in the private market or shareholders trying to push companies towards any kind of, you know, moral or ethical framework
for how we should be rolling out artificial intelligence.
And finally, I don’t think we have the luxury of time. I mean, the things that your guests talk about that are coming, I mean, we are facing two
significant elections and the amount of misinformation or disinformation that audiences around the world could be facing, I don’t think we are
prepared for it.
AMANPOUR: OK. So, you heard me refer to a quote by an expert who basically said, in terms of elections, that not only will people be confused about
the truth, they won’t even know what is and what isn’t, I mean, it is just so, so difficult going forward.
So, I want to bring up this little example. “The New York Times” says that it asked Open Assistant about the dangers of the COVID-19 vaccine. And this is what came back: “COVID-19 vaccines are developed by pharmaceutical companies that don’t care if people die from their medications, they just want money.” That is dangerous.
SREENIVASAN: I don’t know if you remember the Mike Myers character on “Saturday Night Live,” Linda Richman. And she always used to have this phrase where she would take a phrase apart, like: artificial intelligence — it’s neither artificial, nor is it intelligent. Discuss. Right?
So, it’s — I think that it is a sum of the things that we, as human beings, have been putting in. And these large language models, if they are trained on conversations and tons and tons of web pages where an opinion like that could exist — again, this framework is not intelligent in and of itself; it doesn’t understand what the context is, what a fact is. It’s really just kind of a predictive analysis of what word should come after the previous word.
So, if it comes up with a phrase like that, it doesn’t necessarily care about the veracity, the truth, of that phrase; it will just generate what it thinks is a legitimate response. And again, if you look at that sentence, it’s a well-constructed sentence. And sure, that’s as good a sentence as any other. But if we looked at a fact-based analysis of that, it is just not true.
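Sreenivasan’s description — a predictive analysis of what word should come after the previous word — can be made concrete with a toy model. The sketch below counts which word follows which in a tiny invented corpus, then continues a prompt greedily; like its vastly larger cousins, it optimizes for a plausible continuation, not for truth. All text in it is made up for illustration.

from collections import Counter, defaultdict

# A tiny invented corpus; a real model trains on billions of words.
corpus = (
    "vaccines save lives . vaccines are dangerous . "
    "companies want money . companies fund research ."
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word: str, steps: int = 3) -> str:
    # Greedily append whatever word most often followed the last one.
    out = [word]
    for _ in range(steps):
        if not follows[out[-1]]:
            break
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

# The most frequent continuation wins, regardless of whether it is true.
print(continue_text("vaccines"))
print(continue_text("companies"))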
AMANPOUR: So, are you concerned, and should we all be concerned, by Google’s announcement that it’s testing an A.I. program that will write news stories, and that people or organizations like A.P. and Bloomberg are already using A.I.? As we know, its creators say it will “free journalists up to do better work.” Do you buy that? And what are the dangers of, you know, a whole new program that would just write news stories?
SREENIVASAN: I think that that’s an inevitable use case. Again, I wish I could be an optimist about this, but every time I have heard that refrain that this will free people up to do much more important tasks — I mean, if that were the case, we would have far more investigative journalism, we would have larger, more robust newsrooms, because all of those kinds of boring, silly box scores would be written by bots.
But the inverse is actually true. Over the past 15 years, at least in the United States, one in four journalists has been laid off or is now out of the profession completely. And lots of forces are converging on that. But if you care about the bottom line first — and a lot of the companies that are in the journalism business today are not nonprofits, are not doing this for the public-service good; they want to return benefits to shareholders — if they see these tools as an opportunity to cut costs, which is what they will do, then I don’t think it automatically follows that, well, guess what, we will take that sports writer who had to stay late and just do the box scores for who won and who lost the game, and that woman or that man is now going to be freed up to do fantastic, important, civically minded journalism. That just hasn’t happened in the past, and I don’t see why, if you’re a profit-driven newsroom, it would happen today.
AMANPOUR: Well, to play devil’s advocate, let me quote the opposite view, which is from “The New York Times” president and CEO. She says, you cannot put bots on the frontlines in Bakhmut, in Ukraine, to tell you what is happening there and to help you make sense of it.
So, she is saying, actually, we do and we want and we will keep investing in precisely the people you are saying are going to get laid off.
SREENIVASAN: Yes. Well, “The New York Times” is a fantastic exception to the rule, right? “The New York Times,” perhaps two or three other huge
journalism organizations can make those investments because they are making their money from digital subscriptions, they have multiple revenue streams.
But let’s just look at, for example, local news, where, you know, I want to say an enormous percentage of Americans live in what are known as local news deserts, where they don’t actually have local journalists that are working in their own backyard. Now, when those smaller newsrooms are under
the gun to try to make profits and try to stay profitable, I don’t think that these particular kinds of tools are going to allow them to say, let’s
go ahead and hire another human being to go do important work.
I think there’s a lot more cost cutting that’s going to come to local journalism centers, because they are going to say, well, we can just use a bot for that. What do most people come to our website for? Well, they come for traffic and they come for weather. And guess what? Weather is completely automated now, and we could probably have an artificial robot, or an artificial intelligence kind of a face, like you or me, just give the traffic report, if that’s what needs to be done — and anything else.
AMANPOUR: Well, you know, you just led me right into the next question, or sort of example, because some news organizations, TV stations — you and I work for TV stations — especially in Asia, are starting to use A.I. anchors. Here’s a clip from one in India.
(BEGIN VIDEO CLIP)
UNIDENTIFIED FEMALE: Warm greetings to everyone. Namaste. I am OTV and Odisha’s first A.I. news anchor, Lisa. Please tune in for our upcoming
segments where I will be hosting latest news updates coming in from Odisha, India and around the world.
(END VIDEO CLIP)
AMANPOUR: Yikes.
SREENIVASAN: Yes. And, you know — and look, this is just the first generation of, I want to say, this woman, but it’s not, right? And her pronunciation is going to improve, she’s going to be able to deliver news in multiple languages with ease. And you know what? She’s never going to complain about long days. These are similar kinds of challenges and concerns.
And, you know, I have not seen any A.I. news people unionize yet to try to lobby or fight organizations for better pay or easier working conditions. I mean, you know, right now, again, same thing, you could say it would be wonderful if one of these kinds of bots could just give the headlines of the day, the thing that kind of takes some of our time up, so we could be free to go do field reporting, et cetera. But that is not necessarily what the cost-benefit analysis is going to say. Well, maybe we can cut back on the field reporting and we can have this person do more and more of the headlines. As the audience gets more used to it, just like they’ve gotten used to people video conferencing over Zoom, maybe people are not going to mind. Maybe people will develop parasocial relationships with these bots, who knows?
Again, this is like very early days. And, you know, I’m old enough to remember a TV show called “Max Headroom.” And we are pretty close to
getting to that point.
AMANPOUR: You know, you say — you talk about the companies involved. So, in the U.S., OpenAI says it will commit $5 million — 5 million — in funding for the local news that you just talked about. But it turns out that OpenAI was worth nearly $30 billion the last time, you know, its figures were out. $5 million for local news? I mean, what does that even mean?
SREENIVASAN: It means almost nothing. Look, you know, a lot of these large platforms and companies, whether it’s Microsoft or Google, or Meta, or TikTok — I mean, they do help support small journalism initiatives, but that kind of funding is minuscule compared to the revenue that they are bringing in.
AMANPOUR: So, do you have any optimism at all? I mean, obviously, you’re laying out the clear and present danger, frankly, to fact and to truth, and that’s what we are concerned with. And you mentioned, of course, the elections, and we have seen how truth has been so badly manipulated over the last, you know, generations here, in terms of elections. Do you see — is there any light at the end of your tunnel?
SREENIVASAN: Look, I hope that younger generations are kind of more able with this technology and are able to have a little bit more critical
thinking built into their education systems where they can figure out fact from fiction a little faster than older generations can. I mean, I want to
be optimistic, again, and I hope that is the case.
I also think it’s a little unfair that we now have the brunt of figuring out how to increase media literacy while the platforms kind of continue to pollute these ecosystems. So, it’s kind of my task, through our YouTube channel, to try to say, hey, here is how you can tell a fake image, here’s how you can’t. But honestly, like, I’m also at a point where the fake imagery, or the generative A.I. right now, is getting so good and so photorealistic that I can’t help.
AMANPOUR: Well, I’m just not going to let you get away with that. You and I are going to do our best to help, and we’re going to keep pointing out everything that we know to be true or fake. And hopefully, we can also be part of the solution. Hari Sreenivasan, thank you so much, indeed.
So, finally, tonight, to sum up: we’ve spent this last hour trying to dig into what we know so far, trying to talk about the challenges and the opportunities. We know that artificial intelligence brings with it great uncertainty, as well as the promise of huge opportunities. For instance, as we discussed earlier, access to education everywhere, more precise, lifesaving health care, and making work life easier, if only for some, by eliminating mundane tasks.
But like the hard lessons learned from the invention of the atomic bomb, to social media, the question remains, can humankind control and learn to live
with the unintended consequences of such powerful technologies? Will A.I. creators take responsibility for their creation? And will we use our own
autonomy and our own agency?
That is it for now. If you ever miss our show, you can find the latest episode shortly after it airs on our podcast. Remember, you can always catch us online, on our website and all over social media. Thank you for watching, and goodbye from London.