May 2, 2023

Connor Leahy and Marietje Schaake discuss the dangers of A.I. In his latest project, “Our Common Nature,” cellist Yo-Yo Ma seeks to enhance our humanity by deepening our ties with the natural world. Ben Smith, founding editor-in-chief of BuzzFeed News, explores the history of online journalism in his new book, “Traffic.”

CHRISTIANE AMANPOUR, CHIEF INTERNATIONAL ANCHOR: Hello everyone, welcome to AMANPOUR AND COMPANY. Here’s what’s coming up.

The godfather of artificial intelligence sounds the alarm about his own dangerous creation. Is A.I. a major threat to humanity or a world-saving breakthrough? I ask a senior A.I. researcher and the head of Cyber Policy at Stanford University.

Also —

(BEGIN VIDEO CLIP)

(MUSIC PLAYING)

(END VIDEO CLIP)

AMANPOUR: — an ode to mother nature. World-renowned cellist Yo-Yo Ma tells me about his new project, the harmony between music and our natural world.

Then —

(BEGIN VIDEO CLIP)

BEN SMITH, AUTHOR, “TRAFFIC”: We didn’t want to write things that weren’t true, and we were trying to do a kind of a traditional journalism in a new form.

(END VIDEO CLIP)

AMANPOUR: — Walter Isaacson asks the co-founder of “BuzzFeed News,” Ben Smith, where the billion-dollar race to go viral went wrong.

Welcome to the program, everyone. I’m Christiane Amanpour in London.

The man known as the godfather of artificial intelligence is now scared of the very technology that he helped pioneer. Geoffrey Hinton has left Google to warn the world about the dangers of A.I. Hinton’s decades-long research has shaped the A.I. products and systems that we use today. And in 2018, he was a co-winner of the Turing Prize, a sort of Nobel for computer science. Now he says he regrets his work. And here he is speaking to the BBC.

(BEGIN VIDEO CLIP)

DR. GEOFFREY HINTON, ARTIFICIAL INTELLIGENCE EXPERT: The issue is, now that we’ve discovered it works better than we expected a few years ago, what do we do to mitigate the long-term risks of things more intelligent than us taking control?

(END VIDEO CLIP)

AMANPOUR: And Hinton joins a growing chorus of experts worrying that bad A.I. could conceivably even lead to the extinction of the human race.

Today, Samsung banned its staff from using tools like ChatGPT, citing security concerns. Meanwhile, the I.T. giant IBM announced that it will pause hiring for roles that A.I. could potentially fill, which puts nearly 8,000 jobs at risk in the next five years.

So, how do we innovate and protect our future by ensuring the so-called moral alignment of this expanding technology? We’ll discuss public policy in a moment. But first, to an expert: the CEO of the A.I. company Conjecture, Connor Leahy, is joining me now here in London.

Welcome. Thank you very much indeed. Do you share Geoffrey Hinton’s worries?

CONNOR LEAHY, A.I. RESEARCHER AND CEO, CONJECTURE: Absolutely.

AMANPOUR: Do you believe, as he thinks, because I’m quoting him, that it is not inconceivable that it could actually lead to the extinction of the human race?

LEAHY: Not only is it not inconceivable, I think it is quite likely, unfortunately. And I’m not the only one saying this. More and more people, such as Hinton, who is really the godfather of this field, as you have already said, the closest we have to Einstein in the field of A.I., are now taking this risk extremely seriously and going to the public to actually speak about it.

AMANPOUR: OK. So, this is very dystopian. I mean, you say, you know, not just conceivably, it could do. How? In layman’s terms, what is the current danger and the nature of this technology that is so dangerous for us?

LEAHY: Companies that are working on this technology, you know, Google, OpenAI and others, state explicitly, in their goals, in what they say they are trying to do, that they are trying to build god-like intelligence. They are not trying to build just an autocomplete system. This is explicitly their goal, explicitly stated in their founding documents.

AMANPOUR: And it means what, god-like?

LEAHY: It means something that outstrips humans in every form of capability. It does better than humans at every type of reasoning task, every type of physical task, at some point every type of skill-based task; it is more creative in every way.

I believe that if we create a system of any kind that is just vastly more intelligent than the human race, I don’t expect that to end well.

AMANPOUR: So, what can be done now? Some of these people, I think Geoffrey Hinton may have been one of them, big A.I. and tech giants, names that we recognize, signed a letter, I think a couple of months ago, more than 1,000 or nearly 2,000 of them, to call for a pause. Do you remember that?

LEAHY: Yes.

AMANPOUR: And what were they saying, and what happened?

LEAHY: So, the point of that letter was to call for a moratorium, for at least six months, I personally push for longer, on the development of A.I. systems larger and more powerful than those that have been built so far.

So, I think it’s quite important to quickly explain the difference between A.I. systems and software systems. With a traditional software system, you write code: a programmer writes code which solves a problem. You have a problem, you want to do something, and you write the code to make it do that.

A.I. is very different. A.I.s are not really written. They are more like grown. You have a sample of data for what you want to accomplish. You don’t know how to solve the problem; you just have a description or samples of the problem. And then you use huge supercomputers to crunch these numbers, to kind of organically, almost, grow a program that solves these problems.

And importantly, we have no idea how these programs work internally. They are complete black boxes. We don’t understand at all how their internals work. This is an unsolved scientific problem, and we do not know how to control these things.

AMANPOUR: OK. So, this is the bit that I don’t understand. Because human beings are making the stuff, right, the hardware, the bits. So, how do you not know? This is the bit that I find very difficult to comprehend. How do you not know, and therefore, how are you not able to, you know, control it?

LEAHY: This is a great question. So, we can take examples from synthetic evolution in biology. In biology, sometimes you would like a bacterium that produces better milk, for example, right? We don’t really know how all the genes work in the bacterium, but we can select for good milk bacteria. You know, we can try different bacteria and keep the ones that make really good milk. And then we breed those, and then we get some more, and so on and so on.

It’s quite similar to this. It’s not exactly like this, but basically, instead of us writing a program, we just try an incredible number of programs, and we search for the ones that are the best programs. But the way these programs are written is not in human language, it’s not in code. It’s in what’s called neural weights, which you can kind of imagine as some massive list of numbers, billions of numbers. Like billions of knobs on a box, and you have a big supercomputer that twiddles all the knobs, you know, billions and billions and billions of times, really, really fast.

And then, eventually, it finds some setting of the knobs that works. But what do those knobs mean? It’s unclear.
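
To make Leahy’s knob-twiddling description concrete, here is a minimal sketch in Python, not Leahy’s code and vastly simplified from real training: a toy “program” whose entire behavior is a short list of numeric weights, tuned by blind trial and error against sample data rather than written by a programmer.

```python
import numpy as np

# Toy illustration of "growing" a program: we never write the logic,
# we only twiddle numeric knobs (weights) until outputs match the data.
rng = np.random.default_rng(0)

# Sample data describing WHAT we want (here: y = 3x + 1), not HOW to do it.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 3.0 * xs + 1.0

weights = rng.normal(size=2)  # two "knobs": a slope and an intercept

def predict(w, x):
    return w[0] * x + w[1]

def loss(w):
    # How far the current knob settings are from the sample data.
    return float(np.mean((predict(w, xs) - ys) ** 2))

# Blind trial and error: nudge the knobs at random, keep what helps.
for _ in range(20_000):
    candidate = weights + rng.normal(scale=0.1, size=2)
    if loss(candidate) < loss(weights):
        weights = candidate

print(weights)  # settles near [3.0, 1.0], yet nothing here "explains" why
```

Real systems differ in scale and method: billions of weights tuned by gradient descent rather than random nudges. But the end product is the same kind of object Leahy describes, a list of numbers that works, with no human-readable account of how.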

AMANPOUR: Can you, for those who are not critics of this, give an idea of how it can be used to the betterment of humanity? Can A.I. solve world peace? Can it solve the, you know, war in Russia? Can it solve — you know, between Israel and the Palestinians?

LEAHY: Well, at this point, definitely not. But I do think it’s very important to be clear: what makes humanity great, to a large degree, is our intelligence, you know? The reason we are not chimps is we have intelligence. We developed all this wonderful technology around us. We have language. We have culture. You know, we developed societies and art and all these beautiful things. These are all wonderful things. I love intelligence. You know, I love being human. I love all my human friends.

But — and A.I. can help us with this, of course. We’re seeing now, you know, a revolution: tools that allow us to automate simple tasks or complex tasks, that also generate new forms of art or media, that allow us to, you know, translate text much better than any previous method allowed us to. We’re really starting to break down the barriers between languages, to a pretty surprising degree, often.

So, you know, can intelligence, at some point, solve these problems you’ve described? Yes.

AMANPOUR: And global hunger?

LEAHY: I mean, yes. I mean, probably. I don’t know, obviously. Definitely not current systems. But, you know, if we have a system which is superior to humans in every conceivable facet, then I expect it to be capable of solving problems that we humans currently can’t solve.

AMANPOUR: Currently, what is its main positive? I mean, we hear the word efficiency, which to many means replacing humans, as we just saw IBM might, with whatever A.I. is.

LEAHY: Yes. And, you know, I wish I had a (INAUDIBLE) positive story, but there’s none (INAUDIBLE) positive story. I mean, this is a classic risk that always happens with modern technology: as better tools get developed, some people get replaced. Usually, new jobs are created, until they are not. You know, at some point, we will actually run out of things for humans to do. And I think we are approaching that.

You know, when we created the steam engine, it allowed humans to do lots more cognitive labor. You know, we got to think more. We could do more writing and speaking, because now the machines could do all the heavy lifting. But if the machines also do all the talking and all of the thinking, well, what’s left? I don’t know.

Currently, they’re still very useful. Like, there are many applications in science and medicine that benefit greatly from artificial intelligence technology, to develop better, you know, therapeutics, to understand proteins, to, you know, also generate art or write code. So, many, many software developers nowadays use products such as GitHub Copilot, which is an A.I. system which aids them. It doesn’t replace them, it aids them. It answers their questions, it makes writing the code faster, which is quite convenient.

AMANPOUR: In some of the reading I’ve done, it appears that what’s kind of scary is that the amount of resources put into the capability of this A.I. far outstrips — and the gap is getting wider — the resources put into the safety aspect of it, what they call the moral alignment, to make sure it’s not bad and destructive. Can you see that continuing like that?

LEAHY: It seems completely unsustainable to me. Billions of dollars and, you know, thousands, tens of thousands of our brightest engineers and scientists are working day in and day out to create, you know, ever more powerful systems, while the number of people who work full-time on, like, the alignment problem is probably less than 200 people, if I had to guess.

AMANPOUR: The alignment means making it safe, the moral alignment?

LEAHY: Yes, like controlling very, very powerful engines (ph). So, the A.I. safety field in general, which also includes other concerns, is a bit larger, not very much larger, but it is a bit larger. But the A.I. alignment field, the question of, if we have superhuman intelligence, if we have superintelligence, if we have god-like A.I., how do we make that go well, is very, very important. And, very importantly, this is a scientific problem, an engineering problem that we have to understand, and also a political problem to a large degree.

But the number of people working on this and the amount of funding accessible to them is extraordinarily small.

AMANPOUR: Can you put the genie back into the box, or how do you regulate? I know you are concerned about regulation. And what does your company do on this?

LEAHY: This is a great question. So, my feeling on regulations here, in general, is — you ask a good question: can you put the genie back in the bottle? And honestly, the truth is, I don’t know. I don’t know. I hope we can. I think this is going to be necessary to some degree. I think if we continue at this pace, and we just continue to let the bottle, you know, have smoke billowing out of it, this is not going to end well.

What I think is the first step I would advocate for is that the public deserves to know what is going on. I think this is still a topic that — people have been talking about these things for years. You know, the people that are the heads of these labs have publicly stated that they think there are extinction risks from these things, some of them as far back as 2011. These are old discussions that the public is not informed about.

I think that, you know, parliament and Congress in the U.S. should call upon these labs to testify under oath and actually state what is going on: how risky do you think these things actually are, and what can we do about it? I think this is the first step towards any kind of sensible regulation. And then, we also have to talk about: how do we put the genie back in the bottle? How do we progress safely? I think there are ways to do this.

AMANPOUR: There is this model — and I just want to know also what your company is doing — you know, the CERN model. The biggest particle physics lab in the world operates not necessarily on a profit model, but it’s intergovernmental, and they do their experiments and research on a sort of island, not in the public eye until they have developed the right things.

LEAHY: I would absolutely love this. I think this would be fantastic. I would love it if governments, especially intergovernmental bodies, could come together and control A.I. and AGI research in particular.

I think there are many small applications of A.I. which do not pose significant risks. But the type of superintelligence research, which is exactly what these large companies are currently doing — let me be very frank here, there’s currently more regulation on selling a sandwich to the public than there is on private companies building potentially god-like intelligence. There is no regulatory oversight, there are no audits, there are no licensing processes, there is nothing.

Anyone can just grab a billion dollars of easy money and a big supercomputer, start doing, you know, cutting-edge work on this, release it on the internet, and no one can stop them.

AMANPOUR: Why did that call for a six-month pause by the big giants of this field go nowhere? What happened?

LEAHY: That’s a good question. I would like to ask this question generally to the regulators and the wider people in the world. I think a lot of people are just not informed. So, there’s a very funny dynamic that happens very often when we talk to other people in this field. People suspect that, oh, we can’t stop this, there’s nothing we can do, people don’t care.

But I think people do care. This is something that affects all of us. This is not something that a few, you know, tech people, you know, like the people at the head of these companies, or even me, should be able to decide upon. This is something that affects all of us. This is something that affects all governments, all people.

And this is not something far away. You know, even Geoffrey Hinton himself has said that he used to think this was decades away, and he no longer thinks this. This is something that will probably affect you and me, and definitely our children.

AMANPOUR: Connor Leahy, CEO of Conjecture, thank you so much indeed.

So, now, let’s turn to a slightly deeper dive into regulation: how can we do this? Former member of the European Parliament Marietje Schaake now works with Stanford University’s Cyber Policy Center, and she says tech is facing a regulation revolution. She is joining me now from Stanford.

Marietje, welcome to the program. You heard Connor Leahy and myself talking about this, and how it’s not something in the distant future. So, first and foremost, what can you do? I mean, it goes back to almost like the splitting of the atom. People will make the progress and innovation that they can. What needs to be done, and what can be done, to regulate?

MARIETJE SCHAAKE, INTERNATIONAL POLICY DIRECTOR, STANFORD CYBER POLICY CENTER AND FORMER EUROPEAN PARLIAMENT MEMBER: Right. So, I think what we’re seeing is this race between companies that are really looking more at their competitors for how quickly they can turn out products and updates, and really release very under-researched A.I. applications into society just like that.

And I think the companies, Microsoft, Google, are losing track of the real issue here, which is the societal risk that we need to focus on. And the fact that these companies have so much power and agency to experiment in real-time, with all the risks that the experts are pointing to, is unacceptable.

And so, it is important that democratic lawmakers step up, both in terms of which laws already apply. I don’t agree with the notion that it is entirely a lawless space. For example, discrimination is illegal. And so, when an A.I. application discriminates, that is still illegal. But the question is, will we find out? Can we look into the inner workings of the apps that companies build to see whether we have been mistreated?

And part of that is known: there are systemic biases built into the way that data sets are formed, and the way in which products built on top of those data sets discriminate against Black people, for example. But part of it we may never know, because all of this powerful technology, all the insights into it, are in the hands of private companies. And that, in and of itself, is a risk to the rule of law.

AMANPOUR: OK. OK. So, let me play this from the Google — sorry, Apple co-founder, Steve Wozniak, who spoke on CNN this morning. You say private companies; he’s one of the co-founders, and he is speaking out about this. I’m just going to play this little bit.

(BEGIN VIDEO CLIP)

STEVE WOZNIAK, CO-FOUNDER, APPLE: Look at how many bad people out there are just hitting us with spam and trying to get our passwords and take over our accounts and mess up our lives. And, you know, now, A.I. is another more powerful tool, and it’s going to be used by those people, you know, for basically really evil purposes. And I hate to see technology being used that way. It shouldn’t be. And some — probably some types of regulation are needed.

(END VIDEO CLIP)

AMANPOUR: So, it really is interesting that the actual developers of all of this are the ones sounding the loudest alarm. So, is there anything underway right now by countries, intergovernmental or individual, like the E.U.? Is anything happening to regulate right now?

SCHAAKE: Absolutely. The E.U. is actually in the final stages of concluding an A.I. Act, a law that applies to A.I. applications the way they are used, for example, in screening people’s CVs when they apply for a job or when they apply for college, but also when there might be fraud detection through A.I. systems.

Very complicated and consequential applications, where the E.U. says there is a risk attached to how A.I. can make decisions about one’s liberties, one’s rights, one’s access to information or education or employment. And in that context, there need to be mitigating measures in place, depending on the level of risk.

And so, I think the question is, what will the final A.I. Act look like? It’s in the final stages of negotiation, with some last-minute changes, not least because of all of these generative A.I. developments that have taken place since the process of this law started.

So, the E.U. is definitely leading when it comes to developing laws specific to A.I.

AMANPOUR: So, you are now head of the Cyber Policy Unit at Stanford University. What is the United States doing, which, frankly, is the leader in all of this tech innovation?

SCHAAKE: Well, I don’t think the U.S. Congress or American leadership is doing enough in the interest of the public, in the interest of the rule of law, in preserving democracy. There’s been a long-term trend in the United States to trust market forces. That may explain why, for example, there is not a federal data protection law, and data is a very important research — resource, excuse me, for A.I.

So, if you don’t have laws governing the use of data, then that will also bear on the way in which data can be fed into the training of these A.I. applications. And so, you see, I think, almost a domino effect now happening in the United States from a lack of regulations. And the political climate is so polarized that I know very few people who have high expectations of what Congress can achieve.

I do have to say that the concerns about A.I. are now making for a coalition of concerned politicians that I have never seen before. There are Democrats and Republicans concerned. There are people in Europe and the United States that are concerned.

So, maybe, just maybe, the urgency that we hear expressed by the experts working in the companies should make us all wonder, as public leaders: what do they know that we don’t know, and how can we find out exactly what is going on, so that these risks — I mean, we have to realize what people are talking about: the destruction of the human race, the end of human civilization. Who would want to continue playing with that risk? It is preposterous. I think it is almost absurd when you think about it, but it’s happening today, and companies are continuing.

Letters may be written to say we need to pause, people may raise the alarm bells and resign their jobs, but there is not enough divestment, there is not enough real, meaningful action by the experts to say: we are going to change our behavior in the interest of protecting humanity.

AMANPOUR: I mean, it is extraordinary. And as you say, it just sounds absurd that serious people, like yourself, these tech people, can talk about the end of the human race. It really concentrates the mind.

In the meantime, the threat to democracy: you’ve seen the deepfake, A.I.-generated Republican ad that was launched in response to Joe Biden’s, you know, presidential reelection campaign. You probably have come across some of these things that have been commissioned, apparently, by the president of Venezuela, or at least his people.

There were some deepfake accounts trying to confuse everyone with fake anchors, fake news about how wonderful Venezuela is, how great the economy is, I mean, everything that it’s not right now, in terms of infrastructure and political dysfunction.

YouTube then took them down. And the company in question then said that they would ban people from using their A.I. for that kind of behavior. Is that enough, self-regulation?

SCHAAKE: Unfortunately, it’s not, because there are now many, many companies that are offering synthetic media options. So, ways in which people can just start creating things at home. Many of the viewers have probably experimented with creating images or creating text, with all the concerns we’ve heard about ChatGPT doing people’s homework or academic papers, or writing code.

And so, it’s increasingly easy to generate synthetic media, and the quality will become better and better. And, indeed, it will erode trust. It can amplify and make it much easier to generate a lot of disinformation, at a moment where we really don’t need more undermining of trust or confusion in our democratic societies. So, I think that that is definitely a source of concern.

And we heard people talking about, oh, we must be careful that bad actors don’t get their hands on these technologies. But the point is, of course, that it’s a very political question: what is good? What is bad? What is morally just? And those are political questions that are now all being answered by companies.

And I take issue with the notion of calling it a god-like capability, because we shouldn’t forget, these are not effects that fall from the heavens above. These are effects that are sought after, designed, improved, tested by people, over and over again, day in and day out. And I think it’s quite concerning that a lot of the people that have invested in this technology, that have researched this technology, have actually pushed the frontier to come to the point where we are now, only to suddenly realize, my goodness, what a risk it is to so many parts of society, including democracy.

So, it is definitely concerning. And I wish I had a good solution to make sure that people could detect synthetic media, but it will be incredibly difficult for people to discern the authentic from the computer-generated.

AMANPOUR: Yes. It’s really extraordinary. And we will definitely keep our spotlight on this. And presumably, you are doing stuff at your very, very powerful platform in the heart of technology land there at Stanford University. So, Marietje Schaake, thank you very much.

And, you know, as we prepare to move on to our next guest, I just want to read this by Robert Oppenheimer, who, of course, led the U.S. effort to develop the atomic bomb. He said: when you see something technically sweet, you go ahead and do it, and you argue about what to do about it only after you’ve achieved success.

Well, we’ve just heard why that’s a very, very dangerous pattern to follow. So, let’s turn now to how we can enhance our humanity by actually deepening our ties with the natural world, through music even. The ever-innovating, world-class cellist Yo-Yo Ma explores this unity in his latest project, “Our Common Nature.” He swapped playing for presidents and royalty, at least for the moment, to perform in and for nature.

Here is a clip of him playing Bach at the foothills of the Great Smoky Mountains.

(BEGIN VIDEO CLIP)

(MUSIC PLAYING)

(END VIDEO CLIP)

AMANPOUR: That is beautiful in sound and vision. And Yo-Yo Ma joins me now from Chicago. Welcome to our program. Welcome back, Yo-Yo Ma.

YO-YO MA, CELLIST: Thank you.

AMANPOUR: I just wonder, before we talk about your antidote to this craziness that we’ve just been discussing, what do you think? Does it come across your desk as well, the threat of A.I.?

MA: Well, I was fascinated listening to a little bit of your last conversation, because in your last interview you talked about the erosion of trust, which makes me think about what it is that we, as humans — what our purpose is. From looking at nature, or talking about A.I., or music, with all the technical means that we have to achieve something in music, we talk about how music starts to happen when we transcend technique. And right now, we’re talking about the technique of A.I. But what we are not talking about is what is our common human nature —

AMANPOUR: Right.

MA: — and purpose. And so, in music, again, another value that comes to me from music is the idea that you’re working towards something that’s bigger than yourself.

AMANPOUR: And that is the title. And —

MA: So, right now, we’re talking about —

AMANPOUR: Yes. Sorry. That’s your project, isn’t it?

MA: Yes. Go ahead.

AMANPOUR: Our Common Project — Our Common Nature project, rather. And you have been performing in these amazing landscapes, I mean, the Grand Canyon and Smoky Mountains, as I’ve said, and elsewhere. So, what motivates you? What made you think of going out and doing that there now?

MA: Well, Christiane, I have to admit, I’m a city boy. I’m an urban dweller. I lived in Paris, New York, Boston, you know. And lately, I have realized that the time that I spent in nature is what brings me back to something much bigger than myself.

And I’m going to ask you a question; it brings me to wonder. So, here’s a question for you. Who said this, a shaman, a scientist or an artist? “Nature has the greatest imagination, but she guards her secrets jealously.” Who said that?

AMANPOUR: OK. I’m going to say it was a scientist.

MA: You are so right.

AMANPOUR: Quizzed by Yo-Yo Ma.

MA: A plus.

AMANPOUR: OK. So, what is your message?

MA: Richard Feynman.

AMANPOUR: Yes. Go ahead.

MA: Well, Richard Feynman is the physicist that said that.

AMANPOUR: Yes.

MA: And the message that I am trying to figure out for myself is: are we part of nature, or are we separate from nature? And part of what I have found out so far is that there are two different groups of people that hold old knowledge and new knowledge, and I am fascinated by what happens when they come together, when we visit those natural spaces. And these people are indigenous folk, natives, and scientists.

So, I think that we know so much. We have such capacity. But in fact, so much of that capacity — what is the purpose of it? You know, if it’s to advance humanity, that is one thing. But if we are talking about, as your last interviewee said, you know, a distinct erosion of trust in A.I., then let’s go back — further back — to ask: why are we living? What is our purpose? To live, to care for — and what is our, you know, job as individuals or as citizens or as family and community members, to ourselves as well as to the world around us?

AMANPOUR: So —

MA: If we find ourselves as part of nature, then we start to care for it the way that we try to care for ourselves.

AMANPOUR: So, let’s give another beautiful example. We have cut some of your performance in Kentucky, which was just this past weekend. So, let’s see you there at the Mammoth Cave National Park.

(BEGIN VIDEO CLIP)

(MUSIC PLAYING)

(END VIDEO CLIP)

AMANPOUR: I mean, it’s extraordinary. I mean, we’re looking at this incredible picture. It’s all dark, and you’ve got the lights over the music, and we can see the audience behind. You have said that this is not transactional for you. You’re making relationships. You’re not going to end these relationships; you’re going to pursue this and maybe go on to other places, Antarctica or wherever. But what are you getting from the people who you encounter in these outdoor natural environments?

MA: Well, first of all, community building. I think everybody that we talked to, Teddy Abrams, the conductor of the Louisville Orchestra, you know, Devon Hines, the great singer, and Zach Viniker (ph), the staging director, everybody, down to the park rangers, to the citizens around, to the guides, said: oh my gosh, you must do this. For 1,500 people standing around at new performances, you need to tell the story of those caves, millions of years old, with 5,000 years of history of people, from natives, indigenous people, onward. Its story is written right in there, but it takes a musical narrative to bring it into the hearts and minds of the people who are listening.

In the War of 1812, all the ammunition, Jefferson said, would be available from the saltpeter dug out of that cave. It was the second-largest visitor site in the United States in the 1800s, after Niagara Falls.

AMANPOUR: Wow.

MA: It’s 400 miles of caves. And so, the descendants of both the owners of the land and of slaves, as well as seven generations of slaves, are the guides, who are friends, and who are leading thousands of people into the caves every month, and it tells a story of our country’s history. But much more so, it goes way beyond.

So, that’s one way of concretely using culture to show and to make us feel what a country’s history is, but in relationship to our planet. And I think, you know, to have that in concrete form, I think, changes lives and gives us a different perspective.

AMANPOUR: Yo-Yo Ma, I’ve only got a few seconds left, but you are known for performing in many, many instances, whether it’s an inauguration or times of global mourning or indeed celebration, and you bring the U.S. and the world together.

Here you are doing it in a completely different environment. What are you trying to — are you trying to bring us all together for the planet, or for culture, or for what?

MA: I think I am just, first of all, trying to explore what I am interested in. And I think, at my age, I’m very much interested in meaning and purpose. And I think, you know, if we go back to the founding of nations, which, by the way, isn’t a human invention, we need to examine: what is our purpose, and what is meaning, and what is our relationship to each other as well as to the world around us?

AMANPOUR: Amazing.

MA: If we can find that, then we can solve the problems of A.I. and other things. But it’s through building trust, searching for truth, and making sure that what we discover is in the service of us, very much like the CERN model that you talked about —

AMANPOUR: Yes. OK.

MA: — earlier on.

AMANPOUR: Amazing. Really, thank you so much, Yo-Yo Ma.

Now, the social media news revolution appears to be coming to an end. BuzzFeed News, one of the first to harness social media’s influence, is shutting down, while Vice Media is also reportedly filing for bankruptcy.

Ben Smith was the founding editor-in-chief of BuzzFeed News, and he explores this crisis in his new book. He’s joining Walter Isaacson to discuss what the past decade of digital news has shown us.

(BEGIN VIDEO CLIP)

WALTER ISAACSON, HOST: Thank you, Christiane. And, Ben Smith, welcome to the show.

BEN SMITH, AUTHOR, “TRAFFIC”: Thanks so much for having me.

ISAACSON: So, your great book, “Traffic,” coming out this week, is all about BuzzFeed, Gawker, that era of the internet where everybody was chasing traffic. It kind of feels like in the past few months that era may be ending, that BuzzFeed News, for example, which you were part of, has closed down. Tell me, is this the end of an era, and what do you think about what’s happening now?

SMITH: Yes. I mean, I think this era that was defined by social media in the 2010s, which is really what the book is about, you know, it felt — I would say, you know, when Joe Biden got elected, in a way, to me, that was a sign that people were tired of all the drama and the conflict that was, to me, defining that era. But I do think in the last, really the last few weeks, it has felt like, OK, this is drawing to a close, and it’s time to figure out what is next.

ISAACSON: You helped found BuzzFeed News, the news division. What happened? Why did BuzzFeed News close?

SMITH: I mean, you know, there are a lot of reasons. And it’s something I’m really heartsick about. The reason was that, you know, our goal was to build kind of a new news channel for the social web. We imagined that these new platforms, like Facebook and Twitter, were the new cable, and in the way that CNN had built itself on this new pipe called cable, we would do that on these social media platforms. These platforms, I don’t think, are proving enduring the way cable did. The whole era is changing and ending, and people — consumers — are moving away from them.

And so, I think the biggest problem was just that we were building for an age that never really arrived, or that came and went. But we also never — you know, media companies imagined that they would be the ones who made money off of this; ultimately, it was Facebook and Twitter. You know, Facebook, in particular, was the only company that got really rich off Facebook.

ISAACSON: Well, you talk in your book about Jonah Peretti, who you worked with, a wonderful guy who starts BuzzFeed, and his rivalry with Nick Denton, who starts Gawker. Tell me about the two of them; their personalities seem so different when you read the book.

SMITH: Yes. You know, so when I went back to try to figure out, like, what’s the origin moment of this whole era, it did seem to me that it was in this downtown media scene in Manhattan in the early 2000s, and with, among others, these two guys, one of whom had this very basically optimistic, positive view of a kind of internet that would, you know, ultimately produce Barack Obama, among other things.

ISAACSON: And that’s Jonah Peretti?

SMITH: And that’s Jonah Peretti, who started BuzzFeed. And the idea was, well, you know what, the kinds of things people are going to share on Facebook would ultimately be more positive, more constructive than the old media.

And then there was Nick Denton, his rival, who founded Gawker, whose basic premise was: this new internet journalism is going to allow people to express the things they wouldn’t express before, not the kind of polite old truisms of old media, but the kind of real things that journalists would say to each other in bars, and, by the way, the things that people would be too embarrassed to buy at the newsstand but could — but really the kind of (INAUDIBLE) and the gossip and the pornography that they really wanted.

ISAACSON: What was Jonah’s insight about going viral? Because that seems to be the core insight that drives this decade?

SMITH: Yes. I mean, the core insight was that where media had been distributed through cables, through newspaper printing presses, through broadcast towers, it was moving to being distributed essentially hand-to-hand on the internet, that we were our own distributors, and that the media companies that would succeed were the ones that produced things that people wanted to share with each other. And that was really the core insight.

It’s pretty — you know, it’s an insight that is neutral as to what that content is. That can be pictures of kittens; that can be antisemitic propaganda, as we learned, right?

ISAACSON: But wait. Did it really turn out to be neutral, or did it just edge into, as Steve Bannon says in your book, more enragement, more engagement?

SMITH: Yes. Well, it edged in a lot of different ways. And I think it began, actually, with a lot of very sweet, harmless stuff, mostly.

And by the mid-2010s, partly because of the systems the platforms, particularly Facebook and Twitter, had themselves set up, and the rules of the game as they had written them, what was most successful was the most, yes, “engaging” thing. And by engaging, it tended to mean: I say something unbelievably insulting to you, you reply by telling me to kill myself, and then we have a 15-comment exchange, and the platform says, fabulous, these people are engaged.

ISAACSON: Was it inevitable that the algorithms you talk about in social media had to inflame us and enrage us and engage us, or could the algorithms have been written in a way that Jonah Peretti would have wanted, which is to connect us and make us feel better about ourselves?

SMITH: I don’t think these were inevitable. I think these were technical choices. But they also certainly — you know, elements of human nature are not avoidable. And I do think that some of us — some of Jonah, but I think me, like a lot of people in the early days of the internet — imagined, you know, that people are basically better than they are, that people would never go out and publicly say the sorts of things that you see every second on the internet.

ISAACSON: The relationship between BuzzFeed, BuzzFeed News, and Facebook, and Jonah, Mark — Jonah Peretti and Mark Zuckerberg — seems to drive this book. Tell me how Facebook’s decisions affected the decade.

SMITH: Yes. I mean, Facebook’s, you know, engineers were trying to figure out: how do we get people to use our platform and click on ads on our platform and come to something called News Feed, that’s all this mixed-up interesting stuff, you know, baby pictures and hard news stories and everything and funny memes? And for a while, that felt kind of delightful to consumers.

And as it started to bleed into very, very controversial, difficult politics, Facebook got freaked out about it. Facebook started taking tons of criticism from people like you and me about, you know, what the hell is happening on this platform? And it started to try to figure out, you know, how can we keep our business, keep it really sticky, but engage people in things that don’t make them feel horrible about themselves. And they made a series of — I mean, they would now say, also — mistakes in how they went about this.

ISAACSON: Like what? What was a big mistake, you think?

SMITH: The most — I mean, the biggest was shifting after Donald Trump was elected and they felt that their platform had been poisoned by politics. They said, you know what, people are engaging in ways that are not meaningful to them, that are sort of ephemeral and that they feel bad about, and we’re going to switch to a technical measure called meaningful social interaction, that is about, you know, signs, like writing a comment, that mean you really care about this thing that you saw. And really, what it did was inflame the absolute worst and most divisive stuff.
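
As a purely hypothetical sketch of the engagement weighting Smith describes, consider the toy ranking function below. The field names, weights, and numbers are invented for illustration; this is not Facebook’s actual formula, only the general failure mode: counting comments as strong “signs you care” lets a hostile comment fight outrank a well-liked but quiet post.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int   # includes hostile back-and-forth exchanges
    reshares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: comments and reshares are treated as stronger
    # signals of "meaningful" interaction than a passive like.
    return 1.0 * post.likes + 15.0 * post.comments + 30.0 * post.reshares

feed = [
    Post("pleasant baby photo", likes=900, comments=12, reshares=5),
    Post("insulting political bait", likes=150, comments=400, reshares=80),
]

# The bait wins by a wide margin: 8,550 points versus 1,230.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.0f}  {post.title}")
```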

And there’s an e-mail in there from 2018 that Jonah Peretti, the founder of BuzzFeed, sent to a senior person at Facebook saying, hey, I don’t know if you guys see what you’re doing here, but we are finding that the things that spread most on Facebook are sort of inside jokes, about race in particular, that escape that inside audience.

You know, the post in particular was a post that was like, you know, things white people like to do. That was a funny joke among friends that, if it spread widely enough, people interpreted as insulting and offensive. And Facebook was seeing people being insulted, being offended, and saying, wow, this is so meaningful, this is great engagement, let’s show it to more people. And it was amplifying, in particular, the most racially divisive content it could find.

ISAACSON: You know, you look at Jonah Peretti and the BuzzFeed crowd and Kenny Lerer and yourself, it was generally trying to find a way that our country could be better; it was sort of a Barack Obama hope moment. And yet, it ends up producing not only a populism of the far-left and far-right, but a Donald Trump. How did that happen?

SMITH: You know, one of, to me, the most fascinating things about going back to this period is that all these people, who really, in the case of the Huffington Post, explicitly founded this thing to get someone like Obama elected or worked to elect him, thought that that represented the culmination of this internet they built.

Obama visits Facebook, kind of an obvious moment that Facebook is a Democratic Party thing, a progressive-people thing. And yet, all along, the people who would found the new far-right were hanging around. Andrew Breitbart, whose site was a key Trump promoter, was among the founders of the “Huffington Post.” The guy who founded a site called 4chan worked out of BuzzFeed’s offices for a while.

And they were sort of learning from all of these techniques that we were creating. But we were very constrained by, among other things, like, we didn’t want to write things that weren’t true, and we were trying to do a kind of traditional journalism in a new form, with all of the caveats and to-be-sures and questions about fairness that came with that.

And in fact, in 2016, I, you know, sat down with Steve Bannon, then Trump’s campaign chairman, in Trump Tower, and he was just totally perplexed that we had not turned into a Bernie Sanders propaganda outlet the way Breitbart had turned into a Trump propaganda outlet, not because he believed in Bernie Sanders, but just because that’s where the traffic was, and that’s the sort of heat and the signal he had followed. And because, I think, they had no constraints, because they were totally invested in really tearing down the existing system, they, in some ways, were much more successful in this social media ecosystem than anyone else.

ISAACSON: You talk about Andrew Breitbart, and in your book there’s a wonderful chapter on Matt Drudge. And in some ways, he’s the godfather of all of that, because he is just aggregating little things that people can click on, but doing it with a political slant. And I remember, and you certainly talk about it, a seminal moment in internet history.

When I was at “Time” magazine, “Newsweek” was beating us on the story of Monica Lewinsky, but nobody was publishing it yet because we hadn’t pinned it down. And after “Newsweek” doesn’t publish it one Saturday night, Drudge publishes it on a Sunday, and it just changes everything. There are no longer gatekeepers in the news business. Tell me about how that affected all this.

SMITH: Yes. I mean, I think that the early beginnings of this were this assault on the gatekeepers, certainly in the case of “Gawker,” and I think in the way that I saw my work at “BuzzFeed.” And remember, this was soon after the Iraq war. And I think the gatekeepers were seen as sclerotic (ph) and corrupt and having — you know, having really profoundly messed up the most important story of that generation.

And so, there was this real kind of positive energy around: we’ve got to build a new media that is more transparent, that’s open to outside voices, that’s going to listen to the people who say there are no weapons of mass destruction, even if they don’t have the rank in the White House. And I think that was actually feeding a lot of that energy.

I mean, I think, if you look back now, you say, wow, we really did a number on these institutions, and they’re in terrible shape, and the project now is, we’ve got to, you know, buttress the remaining ones and build new ones.

ISAACSON: Do you actually feel that — that maybe this whole thing did help undermine our institutions, and you are kind of sorry for that?

SMITH: Yes, I do. I mean, I think that the institutions — I mean, it’s complicated, right? I mean, the pendulum swings. These institutions had earned their undermining. I think that, you know, the anger at the mainstream media after the Iraq war was totally justified. And I think it was very healthy for them to face a challenge.

That said, you know, 15 years later, the damage — and I don’t think this is about a few blogs’ attack on the media particularly, right; like, all institutions in society have been really shaken and eroded by a number of different factors. But I do think that if you think about where we are now, the project is about building institutions. It’s about, you know, trying to buttress and strengthen the existing ones that came under this incredibly fierce assault, in part from social media, and in a way that kind of wrapped around social media.

ISAACSON: So, one of the most self-reflective chapters in the book is about the Steele dossier. And you’ve been sort of a minor character through the book. I love the way you sort of handle your role. But suddenly, you are the central player, and you publish this dossier that tries to connect not only the Trump campaign to Russia, but has all sorts of salacious things, and it turns out not to be fully vetted or fully true. Other journalists hadn’t published it. BuzzFeed News does it.

Tell me, in retrospect, whether you feel you did right or not.

SMITH: You know, in retrospect, I do, as I wrote, think that we should have published it. And I think that the specifics matter in these stories. I do think that probably the reason that we published it rather than somebody else is that we did have this instinct and this tendency, borne of the internet, to say, hey, like, we’re not gatekeepers. Our —

ISAACSON: Well, wait. Let me push back for a moment —

SMITH: Yes.

ISAACSON: — because it was wrong. It was misinformation.

SMITH: Oh, yes.

ISAACSON: What —

SMITH: And nobody thinks — and this is what I’m saying, nobody thinks that if I send you an e-mail full of crazy allegations, you should tweet it. The specific situation there, though, was that this document had been influencing American politics at the highest level for months. Harry Reid had written a letter to James Comey saying, I know you have these secrets, release them. John McCain had it and was acting based on it. And James Comey had then briefed it to two presidents: the sitting president, Barack Obama, and the president-elect, Donald Trump.

And then, CNN had reported that there is this document that’s been briefed to these two presidents, it’s affecting policy all over the place. And, by the way, it says the president of the United States has been compromised by the Russians. That’s the point at which I think —

ISAACSON: But —

SMITH: — you can’t sit there and say, I have in my hand a list of communists, I’m not going to show you the list. Once you have just characterized the document, as a court later found in part, the notion that it should just sit there, and that you and I should say to your viewers, to my readers, hey, we’ve seen it, it would burn your eyes out if you saw it, we don’t really trust you, doctor, lawyer, teacher, to look at it — I just think that’s actually just not tenable.

That said, when we published it, we wrote that we’d been trying for weeks to report on the allegations in Moscow and Prague, that we hadn’t been able to stand up or knock down the key ones, but that we had found errors in it. There was just some little descriptive stuff about Moscow that was wrong; Alfa-Bank was spelled wrong. And we wrote a kind of caveat emptor and published the story with the caveat. We published the document and the caveat, and the caveat was sort of cast aside; the document became this symbolic element of gospel.

And I don’t know if it would’ve made that much of a difference if we had tried to staple them together better, but I do regret that.

ISAACSON: The fundamental structure of this era you’ve talked about, the traffic era, we’ll call it, or the going-viral era, is trying to capture people’s attention. But there are only a certain number of eyeballs in the whole world that you can capture, and there’s only a certain amount of advertising. Was there something structurally wrong about this business model?

SMITH: Yes. There was a core mistake about traffic, which was, I think, that the people who first discovered it, wow, we can get people to click on our website, we can sell advertising, thought they had kind of struck digital oil: the more we get, the more money we’re going to make. And by the way, this thing is in its infancy.

You know, even in 2003, we’re selling these very rudimentary ads for $9 per thousand views, like, we’re going to be selling a thousand times more of them at 4,000 times more per view. And actually, it was not like oil, because oil is scarce and traffic is plentiful, and it’s not a commodity. And in fact, today, the price of the kinds of ads that they were selling in 2003 is lower than the price they were selling for in 2003, not adjusted for inflation.
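
The arithmetic behind that mistake is easy to sketch. In the worked example below, the $9 per thousand views (CPM) is from Smith’s account; the present-day rate is an assumed placeholder for illustration only.

```python
# CPM ("cost per mille") pricing: advertisers pay per thousand ad views.
def ad_revenue(page_views: int, cpm_dollars: float) -> float:
    """Revenue from showing one ad per page view at a given CPM."""
    return page_views / 1000 * cpm_dollars

# 2003-style rudimentary ads, per Smith: $9 per thousand views.
print(ad_revenue(1_000_000, 9.00))   # $9,000 for a million views

# Assumed modern display CPM (illustrative, not a quoted figure).
print(ad_revenue(1_000_000, 1.50))   # $1,500 for the same million views
```

Because views turned out to be plentiful, the price per view collapsed instead of multiplying, so more traffic never translated into proportionally more money for publishers.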

And so, the core notion that you could sell limited attention just was swallowed by the scale, in particular, of Google and Facebook, who had infinite access to people’s eyeballs and more sophisticated things they could do with those eyeballs.

ISAACSON: The new business model that seems to be emerging, once again, you’re helping to lead it. Semafor, your new publication, seems to be a new way of looking at how you do journalism valuable enough that people will pay for it, not totally beholden to clicks and advertising revenue. Explain what you’re doing with Semafor, you and Justin Smith, and how a few others seem to be saying this is the next wave.

SMITH: Well, I think in this new moment, kind of amid the rubble of social media and all the things that we built, what consumers want is so different. So, what we’re trying to do is hire journalists who really know what they’re talking about, who can really be fair, but who are also transparent about their own opinions.

And you can say: here is what I’ve reported, here’s the scoop, here’s what I think about it. And by the way, here’s what somebody who disagrees with me thinks about it, and here are some other pieces, views from around the world, from other perspectives, that we’re going to pull all together in one place for you, so you don’t have to read a story, wonder if it’s true, and Google 11 other stories on the same topic to kind of triangulate the truth, which I think is how a lot of people try to navigate at this moment.

ISAACSON: Thank you for being with us, Ben.

SMITH: Thank you so much, Walter.

(END VIDEO CLIP)

AMANPOUR: And finally, tonight, remembering a musician who very much embodied the spirit of Yo-Yo Ma. Gordon Lightfoot’s melodic and evocative songwriting made him one of the most successful artists of the 1970s. The Canadian singer-songwriter has died, aged 84.

Through poetic and autobiographical tales about heartbreak, loneliness and adventure, Lightfoot provided a soundtrack for a whole generation and leaves behind a vast catalog of hits, a number of them covered by artists like Johnny Cash, Elvis Presley and Harry Belafonte. Bob Dylan once said that he wished Lightfoot’s songs would last forever. After seven decades, maybe they will. And we leave you now with the sound of “Sundown.”

Thank you for watching and goodbye from London.