CHRISTIANE AMANPOUR: And now, let’s fast forward to 2041. The bestselling author Kai-Fu Lee knows the world of tech like no other. Once president of Google China and a senior executive at Microsoft and Apple, he’s now CEO of Sinovation Ventures. And he’s out with a new collection of short stories that imagines how artificial intelligence will shape the way we live, for better or for worse. And here he is speaking to our Hari Sreenivasan about why A.I. could in fact be the economic issue of our time.
(BEGIN VIDEO CLIP)
HARI SREENIVASAN: Christiane, thanks. Kai-Fu Lee, thanks for joining us. Now, we have spoken before about artificial intelligence a few years ago. You are out with a new book now which is kind of a mix of sci-fi as well as a projection into the technologies that might be impacting us in the next 20 years. And when we think about this kind of socially and structurally, usually, people fear what they don’t know. And one of the fears that a lot of people have is that artificial intelligence will create a massive shift in our economy, where it will leave lots and lots of people out of jobs.
KAI-FU LEE, CO-AUTHOR, “AI 2041”: Yes. So, on the one hand, I think because A.I. is trying to replicate human intelligence, it will obviously take over some jobs when it is successful. And it will be successful because A.I. is learning on data and there is more and more data. So, A.I. gets smarter. So, I think on the one side, we definitely see substantial job displacements for blue- and white-collar jobs, especially those that are routine. But at the same time, there are new jobs being created. People who have to write A.I. algorithms, who are repairing robots or who are collecting data and labeling data, and many more jobs will be created. We just don’t know what they are. So, we have two engines. One of job disruption and one of job creation. It is hard to tell which one will go faster. The pessimistic side of me would say, it doesn’t look good, because the disruptions will come before the creations. But the optimistic part of me would say, hey, look at all of this in the past, it has happened before. The industrial revolution, the invention of automobiles and so on. They were all destroying jobs and creating jobs. And ultimately, there are more jobs and there are more interesting jobs at the end of the day. So, I can’t tell you definitively, but there will be a lot of turmoil and change.
SREENIVASAN: So, what do you think? By 2041, what percentage of the economy, as we know it today, do you think will be impacted by artificial intelligence? And more important, how do governments and societies prepare for that? You have a whole chapter on this. And what kinds of things should we be doing and thinking about?
LEE: Well, A.I. will disrupt every imaginable industry. For example, we’ve already seen in the internet space that because of A.I., all these internet companies can make all that money and help us become more efficient, but we also sometimes become brainwashed and perhaps lose our privacy. All financial companies will also become A.I. driven. Stock trading will be done by A.I. And in transportation, we’ll be using autonomous vehicles all the time. We don’t have to buy cars or own cars. And garages and parking lots will no longer be needed. In manufacturing, A.I. will produce everything. And that will essentially drop the cost of labor down to near zero, thereby making products much more available more cheaply, thereby potentially eradicating poverty. So, this can — I can go on for healthcare and education and every industry. So, huge changes. And I think that governments really have to do several things. I think one is to regulate the use of A.I. so that it doesn’t do all the bad things that could happen, things that would be done by humans behind A.I., not A.I. itself. I think the other is to manage the transition of jobs. Jobs lost. Jobs gained. How to train the people and also deal with the wealth inequality. Because with the success of A.I., some tycoons will make a lot of money while many jobs will be lost. And also, governments will have to rethink what’s the future of education. You know, rote learning will be useless. We want kids to have creativity and compassion and teamwork, and how do you train for that? And finally, the economy will change. Because if the cost of goods comes down a lot, and if people currently believe that their lives revolve around having a good job and making money, but many jobs are gone and products are becoming cheaper, then accumulation of wealth is probably not the only thing that matters. How do we change that mindset, and what does the government need to do with the economy? So, the list goes on. There are many big headaches.
But I think at the end of the day, when this is all done, we’ll no longer have to do routine jobs and we can do work that we love, rather than the repetitive work that we currently hate. So, it is a good outcome but a very tough process getting there.
SREENIVASAN: You know, one of the interesting chapters you had was on education. And how essentially kids with different learning styles could have the equivalent of an A.I. companion that adapts, almost listens, and helps those students achieve in a way that perhaps individual teachers just do not have the time to focus on each student.
LEE: Yes, that is something I envision for the future. Because in today’s education, there just isn’t enough tuition to afford one teacher per student on an individual basis. Yet, we know everyone is different. Each child is slow in learning something, and an A.I. companion or teacher can find that and build a stronger foundation. Each child is excited by something, perhaps basketball, perhaps superheroes, and that could be integrated with the curriculum to make learning more fun. And also, each child may have some, you know, inner capability that needs to be distilled. And of course, a human can do that, but there isn’t enough time. So, I believe in the future, a lot of the optimizing will be done by A.I. A.I. can teach a course, give an exam, build the foundation, make things interesting. But the human teacher will actually evolve to be more of a mentor or coach to each student in terms of having the right values and knowing right and wrong, learning creativity, learning to work with other people and learning to be a good contributing member of the society and team. So, by taking over the routine and the individualizing, optimizing parts of teachers’ work today, I think we make everything better for both the teachers and the students. The teacher’s work is more interesting. The student learns what he or she needs from the A.I. and from the teacher simultaneously.
SREENIVASAN: Right now, there is a lot of talk about when, for example, autonomous driving will hit the streets. And at the core of that, besides the technology being refined enough, there is also this ethical dilemma that people wonder about. I mean, we call it the trolley problem, right? How do we program ethical decisions into a machine that is driving itself? Should I swerve to avoid this one person? And if the cost is that I hit these other two people, how am I making those kinds of decisions? I mean, because A.I. is only as good as the humans that program it.
LEE: Right. Those specific decisions about priority of, you know, two children’s lives versus two adults’ lives, that’s, I think, beyond the capability of the programmer. The programmer would set some goals, and then the A.I. would look at all the data and figure out for itself how to achieve those goals. So, those goals could be getting from place A to place B as quickly as possible and without hurting anybody in the process. And then the A.I. would, based on data, do a better and better job over time. And this is a product you have to launch, and then it gets better over time. So, the story in the book talks about what is the process of launching. What is good enough to launch? It seems like you have to be better than humans. But still, mistakes will be made. Then what happens? And they are different mistakes than humans would make. But the good thing is, it will improve over time. I mean, we’re seeing this today. When Tesla launched the summoning feature, people made a joke and said it was horrible in the first few weeks. Then they collected all the data, and the feature worked great. So, that is the same thing that will happen. So, the question is: do we, as the human race, have the courage or the audacity to accept that there is an intermediate process towards a future where perhaps 90 percent of all the fatalities can be cut down, but there is a price to pay along the way? So, that is a moral question we, the human race, have to answer.
SREENIVASAN: We have the capacity to automate defense systems. Would we ever allow an A.I. to determine who is an enemy and wipe them out?
LEE: Yes, all technologies are a double-edged sword. Think about drones that can automatically shoot to kill. There are some positive attributes, in the sense that if all wars were fought by autonomous weapons, then people, soldiers, don’t die as much. And autonomous weapons may be more accurate, so there would be less collateral damage. However, there is an overwhelming negative, which is that building a drone that can recognize someone and kill that person can be done for $1,000 today, and that lowers the barrier to assassination, one at a time, or genocide, a hundred thousand at a time, by terrorists and non-state actors. Furthermore, they are very likely not to get caught, because it is just a drone or a robot doing it. How can you tell who is behind it? So, I think there really needs to be an effort here. This is probably one exception in the book, where I generally feel technologies will tend to go in a positive direction, even if there are concerns. Autonomous weapons are the one area where I feel regulation is needed today, and people really need to put their minds to this problem, because it is weapons, it is lives, it is directly taking lives, and it is putting a powerful weapon in the hands of potentially malicious or evil people.
SREENIVASAN: So, I wonder. Right now, there is not — or at least, there doesn’t seem to be — any kind of global agreement on what the rules of the road for A.I. should be, what kind of ethical standards, you know, every researcher should be taking into consideration before they publish something, whether this can be used against you or against humanity. I mean, between now and 2041, that infrastructure seems more and more necessary.
LEE: Yes. There are so many problems. There’s, you know, how do we store personal data? And who gets the right to use it? What is the consequence for violation? And also, deepfakes. What if a video is distributed that says you committed a crime that you didn’t commit, but no human or A.I. can tell whether it is real or not? So, all those things, I think, need to be put in place. And also, fairness. How do we detect an A.I. that might discriminate against people or do things unfairly? And also, how do you hack into an A.I. system? What if you fool an A.I., fool the autonomous vehicle into thinking a stop sign is not a stop sign, and thereby have someone killed, though it looks like an accident? So, the list goes on. Yes.
SREENIVASAN: You know, the pandemic is part of the storylines that you have coming 20 years from now. But I wonder what the impact of the pandemic is today on the workplace. What has A.I. shown that it can do pretty well? Where has it fallen short?
LEE: Well, actually, A.I. has done a great job in advancing the combination of A.I. and healthcare. My day job is as an investor. And we’ve invested in a company that uses A.I. to find drugs for rare diseases. And it is able to do that much faster than humans. So, that has the long-term impact of potentially making rare diseases treatable. They were not economical enough for large pharmaceutical companies to go after, because of the cost to invent a drug to fix the problem, but now A.I. can reduce that cost. Another example is the automation of the laboratory. People might assume that factories are easier to automate, that assembly line workers are easier to replace. But actually, many jobs in the factory require a high degree of dexterity that is very hard to replace. But lab technicians, or the people who currently manually do the COVID tests, the jobs that they do, if you think about it, are relatively routine and repetitive. So, you know, we invested in a company that makes one giant robot that can do 120,000 COVID tests per day, and that robot, with some modifications, can work on CRISPR, growing organoids, can work in molecular biology, can do drug discovery. So, essentially, lab technicians are being replaced with robots, making it much faster to invent new drugs and treatments and do experiments. So, there are many technologies like that that are becoming faster. Robots for social distancing, the use of robotics, we see that a lot. And of course, also, people working from home, and this is more in the U.S. than perhaps in China, digitizes the workload. Then A.I. can be applied to either replace or enhance parts of those workloads. And I would predict that is probably what we’ll see in the coming years.
SREENIVASAN: Do you think that we are going to recognize, because of this pandemic and this disruption, that humans should be providing a different type of value: don’t compete with the computer, do something complementary, do something that the computer or the robot cannot do?
LEE: I think that is wise advice. A.I. is fundamentally limited in certain areas. And also, what A.I. is good at is generally things that we don’t want to do anyway, you know, routine work, repetitive work, whether it is blue collar or white collar. Those are not the most rewarding. They are not really delivering self-actualization to many people. So, we can be elevated from that, and it opens up a whole wide spectrum. If you are a creative, then go after it, because society has the wealth for people to explore their dreams. And if you are someone who’s very warm, then you should spread your warmth, whether it is in an elderly home or a foster home. So, I think people can really do what they are passionate about and find things that they can contribute to society, even though the contribution may not be measurable in money but might be measurable in some other way, such as making the world a better place.
SREENIVASAN: The book is called “AI 2041: Ten Visions for Our Future.” Co-author Kai-Fu Lee, thanks so much for joining us.
LEE: Thank you, Hari.
(END VIDEO CLIP)
About This Episode
Shkula Zadran; Carol Moseley Braun; Peter Baker & Susan Glasser; Kai-Fu Lee