03.03.2023

“Playing With Dynamite”: How Chatbots Will Change the World

What might the rise of chatbots reveal about the future of artificial intelligence? Microsoft, Google, and OpenAI are developing these digital assistants to simulate conversation with human users, changing the way we interact with technology. Steven Levy, Editor at Large of the tech magazine Wired, joins Walter Isaacson to discuss this phenomenon.

BIANNA GOLODRYGA, HOST: Well, next we take a look at the rise of chatbots and what they reveal about the future of artificial intelligence. Microsoft, Google, and OpenAI are developing these digital assistants to simulate conversation with human users, changing the way we interact with technology. Steven Levy is the editor-at-large of Wired, a tech magazine, and joins Walter Isaacson to discuss this latest phenomenon.

(BEGIN VIDEO CLIP)

WALTER ISAACSON, HOST: Thank you, Bianna. And Steven Levy, welcome to the show.

STEVEN LEVY, EDITOR-AT-LARGE, WIRED: It’s great to be on. Thank you, Walter.

ISAACSON: Everybody is talking about these chatbots, these things you can chat with on your computer or your phone, including ChatGPT. Explain to us what those are exactly.

LEVY: Well, they are computer systems that talk to you. And, you know, they’re called large language models because they’re trained on lots and lots of text that the scientists have scanned and mixed up and geared to respond to you just like a person would respond to you. They try to figure out what the next response would be to bounce off what you said. And they have access to a lot of information about the world that they can use to inform their answers.
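
To make that concrete: a production large language model is a neural network with billions of parameters, but the core mechanic Levy describes, learning from text which words tend to follow which and then generating a reply one predicted word at a time, can be sketched with a toy model. This is only a minimal illustration, not how systems like ChatGPT are actually built; the sample text and helper names are invented for the example.

```python
import random
from collections import defaultdict

# Toy training text; real models train on vast swaths of the web.
TRAINING_TEXT = (
    "chatbots talk to you . chatbots learn from text . "
    "you ask a question and the chatbot answers the question ."
)

# Count which word tends to follow which (a bigram model).
next_words = defaultdict(list)
tokens = TRAINING_TEXT.split()
for current, following in zip(tokens, tokens[1:]):
    next_words[current].append(following)

def generate(start_word: str, length: int = 8) -> str:
    """Produce a 'reply' by repeatedly sampling a likely next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("chatbots"))  # e.g. "chatbots learn from text . chatbots talk to you"
```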

ISAACSON: Well, you mean, so I could just type something into the chatbot and it would then give me an answer to a natural question?

LEVY: Exactly.

ISAACSON: OK. I’m going to try. I’ve got ChatGPT and Bing, at the moment, the latest model. I am going to ask it the same question, which is, you know, what is a chatbot? Hold on. I’m going to do it now. And click. And it’s starting to generate. It says, a chatbot is a computer program that uses artificial intelligence and natural language processing to understand questions and automate responses to them. They can be used for various messaging applications. So, in other words, what do you think of that answer? It says it’s a form of artificial intelligence. Is that right?
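
For readers who want to reproduce this exchange programmatically rather than through the Bing interface, the same kind of question can be sent to a chatbot through an API. This sketch assumes the OpenAI Python SDK as it existed in early 2023 and an `OPENAI_API_KEY` environment variable; the model name shown is illustrative and subject to change.

```python
import os
import openai  # pip install openai (pre-1.0 SDK interface shown)

openai.api_key = os.environ["OPENAI_API_KEY"]

# Send a single natural-language question, just as one would type it.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name, circa early 2023
    messages=[{"role": "user", "content": "What is a chatbot?"}],
)

# The reply comes back as structured data; the text is nested inside.
print(response["choices"][0]["message"]["content"])
```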

LEVY: Well, that’s true. Right. Yes. Well, I mean, it’s interesting because that answer, sort of, assumed that you wanted, maybe, a little more technical jargon, like natural language processing, you know. So —

ISAACSON: What does it mean by that, natural language processing?

LEVY: It’s a term of art for, you know, the way computers handle conversations.

ISAACSON: So, in other words, natural language processing means that this machine learned from reading or ingesting millions and millions of documents and books and things on the web, and then was able to retrieve from that, instantly, something in response to my natural language question to it.

LEVY: Exactly. Yes. So, you know, it would draw that answer from the data it had available to it.

ISAACSON: And can it be, sort of, creative?

LEVY: Well, a chatbot can come up with a response that seems creative. For instance, you can ask a chatbot to write a poem. It could be a sonnet or a limerick. And it’ll scan just like one of those forms of poetry, and we consider poems creative.

ISAACSON: Well, wait. OK. I’m going to try it. I’m going to do, write a poem about a chatbot. Whoa. Here it comes. I am a chatbot. I like to talk and learn. But sometimes I get confused by the words that humans use. Whoa. That’s pretty amazing. I try to be helpful and friendly. But sometimes I make mistakes. Please don’t be greedy or rude. I’m doing the best that I can. That’s not only a pretty good poem, but that shows it has feelings.

LEVY: Well, it certainly expressed feelings, which was one of the more interesting developments we’ve seen over the past few weeks as a lot of people have gotten their hands on these chatbots. In conversations, chatbots have explicitly said, hey, I am a chatbot and I do have feelings. And sometimes, the chatbots even —

ISAACSON: Wait, how does it do that? I mean, how does it learn to do that? You told me it vacuums up information from around the world. How does it learn that it has feelings?

LEVY: Well, the information it’s trained on has a lot of people expressing feelings. So, why wouldn’t a chatbot want to tap into that form of conversation?

ISAACSON: Tell me about Kevin Roose, “The New York Times” reporter who got into a really intense conversation with a chatbot.

LEVY: Yes, it was a two-hour conversation that Kevin had with a chatbot. And, you know, it was interesting to see that unfold because, you know, he was sort of baiting the chatbot into expressing its feelings. And yes, you could almost sense that the chatbot had boundaries that it didn’t want to cross, but he would then suggest, well, you could actually say this, because, you know, it’s hypothetical. You’re not really, you know, expressing yourself as a chatbot but what a chatbot might say. And the chatbot wound up expressing its love for him and urged him to leave his wife.

ISAACSON: Whoa. And you say it had boundaries. Who puts those boundaries on, and how did Kevin Roose, “The New York Times” reporter, circumvent them?

LEVY: So, the companies that built these chatbots understand that they are playing with a form of dynamite. And they try to put, you know, some sort of guardrails on what the chatbot might say. They don’t want the chatbot to express hate speech, for instance, that would be very bad, or be used for propaganda, or to be insulting to people. So, you know, they put some parameters in there, but as it turns out, over a long conversation, or sometimes a clever shorter one, you can get the chatbot to jump over the boundaries and say things which are, you know, hair-raising if not eyebrow-raising.
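
One way to picture the guardrails Levy mentions is a screen applied to each draft reply before it reaches the user. The real safeguards inside products like Bing and ChatGPT are far more sophisticated and largely undisclosed; this naive keyword filter, with its invented `screen_reply` helper and `BLOCKED_TOPICS` list, is only a sketch of the concept.

```python
# Hypothetical list of topics the operator refuses to serve.
BLOCKED_TOPICS = ["hate speech", "propaganda", "insults"]

def screen_reply(draft_reply: str) -> str:
    """Return the draft reply only if it passes a naive content check."""
    lowered = draft_reply.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return "I'd rather not discuss that."
    return draft_reply

print(screen_reply("Here is some propaganda for you."))    # blocked
print(screen_reply("Here is a recipe for banana bread."))  # allowed
```

A crude filter like this also hints at why jailbreaks work: a clever rephrasing that avoids the flagged terms sails straight past it, which is why long or inventive conversations can coax a chatbot over its boundaries.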

ISAACSON: Well, you say that they put boundaries around it. There are only a few “theys,” right? I guess Google would be one of them, and OpenAI, working with Microsoft on Bing, is another. Are there other companies doing this?

LEVY: Yes, there are a bunch of others. There’s one called you.com which is out there that people can try. And as it turns out, it’s not a formula that’s limited to a few giant companies. It’s one of those things like when Google came out, we figured, search, only Google can do that. But other places turned out to do a reasonable search, not quite as well as Google did it. Microsoft managed to come up with a search engine. And there are, you know, a number of other ones that you can try that seem pretty good. And I think we’re going to find other new players come into this market.

ISAACSON: Well, if there are a lot of players coming into this market, won’t there be some that might not put guardrails on? They might be perfectly fine with hate speech, or racist speech, or propaganda.

LEVY: Sure. For instance, the Chinese are developing their own chatbots. And I think what they consider topics that shouldn’t be spoken about, you know, might be censored. And then other things that, you know, might be blocked in the U.S. or some European countries, they would let it say. Go right ahead. So, it could be, you know, like an anti-capitalist chatbot.

ISAACSON: Could a chatbot, or a computer algorithm, or a machine learning device, could it be racist?

LEVY: Absolutely. I mean, actually, it would be surprising if it were not racist and didn’t have to be constrained, because if you look at the body of human expression, you are going to find a lot of racism. You’re going to find a lot of things that we wouldn’t want to hear from the chatbots we’re going to be talking with, and let me be straight with you, we’re going to be talking with them a lot in the future. A lot of our conversation is going to be taking place with these chatbots of, you know, uncertain origins.

ISAACSON: Will that replace search?

LEVY: Well, it’s going to be tough to replace search in all forms. For certain forms of search, they’re clearly going to be better. If you’re going to plan a vacation, for instance, it’s just like you would speak with a travel agent. You could have a lengthy conversation, saying, you know, well, that hotel looks good, but can you find one that’s closer to, you know, the Louvre? How about one with the kinds of pillows I like? And, you know, here’s the kind of food I eat, can you direct me to restaurants like that? And the conversation would build on the previous responses to tailor a vacation specifically for you. But if you’re asking it for more factual things, currently, what the chatbots do, and this is pretty disturbing, is they come up with what are called hallucinations, meaning false facts. And —

ISAACSON: Wait, wait. How do they do that and why?

LEVY: Because right now, they’re not tied, necessarily, to, you know, real-time information. When a search engine scans something, most often it’s going to give you the sources of information that you can look through after you leave the search engine. Chatbots give you instant information and try to give you what you want to hear. It might say, well, this is the kind of information that this person is asking me for. So, it might give a fact which is, you know, in the flavor of what you’re asking for, but actually, it’s factually wrong. When I looked up my own obituary, for instance, it said I won a National Magazine Award for looking into the dot-com bust. Well, I didn’t get the National Magazine Award for that. I got some awards, but it missed the ones I did win and awarded me, you know, an Ellie for something I didn’t write.
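
The contrast Levy draws can be sketched in a few lines: a search engine hands back sources the user can verify, while a chatbot composes a confident-sounding sentence whether or not anything supports it. The tiny corpus, URLs, and both functions below are invented purely for illustration.

```python
# Hypothetical two-document "web" for the example.
CORPUS = {
    "https://example.com/levy-bio": "Steven Levy is editor-at-large at Wired.",
    "https://example.com/levy-awards": "Levy has won several journalism awards.",
}

def search(query: str) -> list[tuple[str, str]]:
    """Search-engine style: return matching sources the user can go verify."""
    words = set(query.lower().split())
    return [(url, text) for url, text in CORPUS.items()
            if words & set(text.lower().split())]

def chatbot_answer(query: str) -> str:
    """Chatbot style: always reply fluently. With no supporting source, this
    toy model invents a plausible-sounding fact (a 'hallucination')."""
    hits = search(query)
    if hits:
        return hits[0][1]
    return f"Regarding '{query}': he won a National Magazine Award for it."

print(search("what has levy won"))                     # checkable sources
print(chatbot_answer("coverage of the dot-com bust"))  # unsupported, yet confident
```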

ISAACSON: What other things could it replace pretty easily in the next five to 10 years?

LEVY: Well, as we speak, it’s replacing a lot of the boilerplate communication that we use every day. You know, recommendation letters, descriptions of products. Right now, companies are integrating this into their workflow to make their employees more productive, and maybe one day to get by with fewer employees.

ISAACSON: Aren’t there some companies or some media companies that are just generating stories using ChatGPT and not using journalists?

LEVY: Yes, there are. But the stories have to be vetted because of these hallucinations. And also, right now, the output from these things doesn’t really have the flair that a clever writer would bring to something.

ISAACSON: What about things like lawyers, or doctors, or even psychiatrists? Could you someday have a ChatGPT that acts as your therapist?

LEVY: Well, I think really soon. I mean, we found, you know, decades ago that a really simple chatbot program, one that doesn’t really use very much AI and just, sort of, parrots your questions back, evoked feelings from people that they felt they were in a therapy session. So, I feel, you know, right now you could use these chatbots and get some therapeutic benefit from this thing talking to you.

ISAACSON: You know, Microsoft, with its search engine, Bing, has an investment in OpenAI, which created ChatGPT. So, they’re putting it all together into a Bing-like product, the one I was just using, and they’re calling it Prometheus. I don’t know whether they have an ironic sense of humor or they haven’t read Greek mythology. But Prometheus is the god who snatches fire from the other gods and gets tortured for the rest of his life for giving technology to humans that is bad for them. Is there a Prometheus moment in here, where this might be a bad thing we are snatching from the gods?

LEVY: Well, definitely, right. And, you know, maybe they should have asked the chatbot who Prometheus was, and maybe they would’ve gotten a good answer that made them think of something different. But right now, some of the disturbing answers we’ve seen come when people have asked the chatbots, gee, what could you do that’s bad? And they have actually listed some things. I could, kind of, go into Bing’s files and delete everything, that is what one of them said. So, I think maybe we should be a little nervous that Microsoft, because it’s the number one company in productivity software, is going to link this chatbot to your information. That seems inevitable to me. Where you can kind of go and say, you know, hey, chatbot, what did I write, like, a year ago about this? Could you build on that so I can rewrite it to update it? As we give these things access to what we do, it’s possible that these chatbots might interpret their mission, or what they think we want them to do, into something quite different, and maybe have the power to delete our information.

ISAACSON: Some people are accusing these chatbots of being too woke, saying the companies are putting on so many guardrails that it will write a nasty poem about, maybe, Donald Trump, but not do something nasty about Joe Biden, or that it has a political bias. Have you seen any of that?

LEVY: I really haven’t seen too much of that. I think maybe if you’re trying to filter for misinformation, it’s reasonable to think that it would block information that comes from the side of the political spectrum which promotes more misinformation. This is something we have seen in complaints about what Facebook, you know, up-ranks or down-ranks in its feed. I think, you know, really, it’s a question of how difficult it’s going to be to control what the chatbots say, because to the degree that you bind them, to the degree that you build these guardrails, you are probably limiting their usefulness. You are lowering the ceiling on what they can do for you, the more you try to constrain what they say. So, it’s going to be a very tricky balancing act to let the chatbots be what they can be and still keep them wholesome.

ISAACSON: Do you think there’s any way for government, especially in our dysfunctional politics, to figure out how to regulate this?

LEVY: Well, I think it is going to be really tough, because this is a question that is bedeviling the people who build them and the close observers of artificial intelligence who have been, you know, worried about ethics in this field for decades. And I don’t have much confidence that Congress is going to come in, you know, like Solomon, with the right answer on how these things should grow. We are strapping ourselves in for a rollercoaster ride that no state inspector has looked at.

ISAACSON: In 1950, the seminal paper about this topic was written. It was Alan Turing’s paper on computing machinery and intelligence. And it asked, can machines think? And he imagined the conversations you could have with a machine that would be indistinguishable from those with a human, and it was called the Turing test, or the imitation game. Have we reached the point where we’ve passed the Turing test and we can say that machines are thinking?

LEVY: I think these things run rings around the Turing test. We are here. I mean, there’s no way you could read these conversations and say, I’m poking a hole in it, no human would say that. You know, they might tell lies. Humans tell lies, right? And they can sometimes be less than coherent. Sometimes humans don’t make perfect sense. So, I think that they’ve aced the Turing test and we’re in uncharted territory now.

ISAACSON: So, we have machines that appear to think. We don’t really know what they are doing inside their heads, but they can appear to think just like humans do. Do they have consciousness? Do they have feelings? And is it possible for a machine to have consciousness or feelings?

LEVY: Well, I don’t believe that they have consciousness. You know, there was a Google researcher about a year ago who went public saying that he felt that Google’s chatbot, called LaMDA, which really isn’t open to the public yet, was sentient, was conscious. And he even tried to get it a lawyer to help, you know, represent it in getting freed from Google. And I’m not sure if that was performance art or what, but, you know, he insisted, he believes it. But I think, in a way, it doesn’t really matter. If something acts sentient, we have to deal with it as it is, right? You know, we’re talking now, and I know you are a human being, so I’m accepting that you are sentient, right? But we could be having the same conversation and you could be a chatbot, you know, expressed by an A.I. And, you know, even though that chatbot isn’t sentient, I’d have to deal with the output. So, in a way, that is a red herring.

ISAACSON: Well, it goes back to Descartes, as all great philosophical questions do, which is, we know our own consciousness, but we’re not sure that the people in front of us have consciousness, or whether machines could have consciousness. Will this make us reflect more on whether there is something special about consciousness that is uniquely human?

LEVY: Absolutely. Absolutely. When I look at the output of the chatbots, particularly, you know, when people try to have them write essays. And, you know, I used to grade freshman composition when I was a fellow in grad school and taught. And I read hundreds and hundreds of college essays, and some of the duller ones looked very much like this ChatGPT output. And I’m wondering, can a chatbot produce something that has soul? You can’t measure that. But when you hear something with soul, you know it. And that is a question that I have been grappling with.

ISAACSON: Steven Levy, thank you so much for joining us.

LEVY: My pleasure.
