03.12.2024

Josh Tyrangiel: “Let AI Remake the Whole U.S. Government”


CHRISTIANE AMANPOUR, CHIEF INTERNATIONAL ANCHOR: Now, the historic rapid response on the vaccine, as we said, was the game changer. Our next guest says that it also shows a perfect example of governments using artificial intelligence to help their citizens. Now, Josh Tyrangiel is The Washington Post’s A.I. columnist, and in his latest article he explores its untapped potential for government. And he’s joining Walter Isaacson now to discuss the benefits.

(BEGIN VIDEO CLIP)

WALTER ISAACSON, CO-HOST, AMANPOUR AND CO.: Thank you, Christiane. And Josh Tyrangiel, welcome to the show.

JOSH TYRANGIEL, ARTIFICIAL INTELLIGENCE COLUMNIST, THE WASHINGTON POST: My pleasure.

ISAACSON: You’ve been writing these columns on artificial intelligence for “The Washington Post,” and your latest column talks about the ability of A.I. to revolutionize the relationship between government and citizens. Give me an example of that.

TYRANGIEL: Sure. I mean, we have this sort of foundational example that was Operation Warp Speed. So, people may not recall, even though it was quite recent, but the entire reason that we were able to produce and distribute vaccines to all 50 states at the same time was that there was a general named Gus Perna, who was appointed to oversee Operation Warp Speed. He shows up in Washington with no — really nothing. No plan, no money, he has three colonels, and he does this series of conversations to try and figure out how he’s going to operationalize it. And it turns out the answer is A.I.-driven software. OK. And what A.I.-driven software is able to do at its best is operationalize excellence by creating and ingesting data from lots of different places. So, the challenge for Warp Speed was not just to be connected to all the pharmaceutical makers, but to the truckers, to all the state-level agencies, all the healthcare providers, all the pharmacies. So, you’re talking about thousands of different data inputs. And so Perna sits down, and ultimately, he needs A.I.-driven software to ingest and integrate all that data in a clean way, and then turn it into an app, so you can actually make decisions based on information. And now, that is a very complicated thing. I’m going to reduce it just for a second. Think about plastic, right? So — and you’re trying to get vaccines to everybody. Well, you can make the vaccine, but what is the national state of the output and production of plastic, right? So, just that alone, to put the vaccine into vials, requires a whole vision of a cosmos. And so, we did it, and it worked. But our politics have kind of buried that effort in the rearview mirror. And what I took away from that is, what if you could Operation Warp Speed entire government functions?

ISAACSON: But since Operation Warp Speed, we’ve had a major advance in A.I., which is large language models and generative A.I. that can answer questions and do things. How will that affect what you’re talking about?

TYRANGIEL: I think A.I. has had some of the worst P.R. you could possibly imagine for a technology. And there’s a reason for that, which is that the stakes are incredibly high financially for the companies that are driving it. And nobody has sort of stopped to explain it at its basics: A.I. is math. It is large amounts of probability driven around language and structured data. And it turns out that while, yes, some large language models will hallucinate and deliver weird math, if you have structured data, for instance, around the IRS, which is just a series of codes, and you can talk to the IRS about whether you deserve an exemption for this or that, an LLM is perfect for it. If you get SNAP benefits and you have questions about their delivery, what to do with them, again, structured data turned into language is perfect. And so, there’s this communications layer that private industry has begun to figure out, which is 24/7, it never shuts down, you can do it in any language, and it provides a level of service and interaction that is really good. And we don’t really want to talk about that at the government level, and there are a couple of reasons. One is, you know, on the Democratic side, it means there might not be as many people working for the federal government in the future, and that is a sacred cow for Democrats, for obvious political reasons. On the Republican side, you know, unfortunately, it seems like a lot of Republicans are invested in proving the illegitimacy of government, and they don’t want to make it stronger and restore it. But yes, this revolution in LLMs is perfect for these kinds of agencies and these kinds of benefit providers.

ISAACSON: You talk about how sometimes these LLMs, this artificial intelligence, will hallucinate and make mistakes, and you mentioned SNAP benefits, basically supplemental nutrition benefits, like food stamps. I was stunned to read in your piece that nowadays, humans get it wrong 44 percent of the time when they’re trying to figure out SNAP benefits. Do we just have to show that A.I. will be much better than humans, rather than showing it’ll be perfect?

TYRANGIEL: Yes, I think that there’s this sort of fundamental thing that we need to embrace, which is we are defending a status quo that is not worth defending, right? And so, for all sorts of emotional reasons, we defend the status quo. It’s not great when 44 percent of human-driven SNAP eligibility determinations end up being the wrong decision, and that’s food, right? There are people who are going hungry or are getting benefits they don’t deserve. Do we think an LLM or an A.I.-driven system can do better than that? For sure. And what I’m talking about is not having A.I. make the decisions, but operationalize the information so that humans can make better decisions.

ISAACSON: I was on the Defense Innovation Board with former Google CEO Eric Schmidt and Code for America’s Jennifer Pahlka, and you write about that, which is that it’s not just A.I., it’s that government can’t do software. Explain why government has problems procuring software.

TYRANGIEL: The thing about software is that software starts as a sort of abstract idea, which is we’re going to create something that will be fluid and respond to user demand and user services. And in the software industry, there’s a saying: software is never done, right? So, try telling a civil servant who is going to get raked over the coals that, no, no, no, we need you to approve appropriations and procurement for this product that will never be done, right? And so, it’s a real challenge. And on top of it, of course, you know, look, our system has only gotten more complicated. We have more people, more spending, and we insert rules and regulations. We never take them out, ever, because that’s really dull and unglamorous work. And it’s hard to do. And so, what we end up with — and Jennifer Pahlka, who’s brilliant, sort of walked me through this — in her mind, you get this kind of loop of absurdity and dysfunction. And it starts with procurement rules that are torturous and absurd. Then you hold public servants accountable for their ability to enforce absurd procurement rules. So, what ends up happening is you either get absurd and bad software, right, really bad software, or they take a chance, and then they end up in front of Congress. And each cycle of this drives more good, smart people out of government, and then you rinse and repeat. And so, that’s why, you know, as Eric Schmidt sort of said to me, basically, government is the perfect enemy of software. And so, we have to find a way, given that software is basically the most important noun in the American budget, to get the government to accept its role in providing software. And it means there’s some risk. And as you’ve seen right now, politics and risk don’t go very well together.

ISAACSON: One of the things that jumped out in your piece for me was the V.A., the Veterans Administration, and its hospitals. And there was a statistic that made my head snap, which is that 18 veterans per day commit suicide. And you were talking about how A.I. could maybe help that. But explain to me, isn’t something like psychological issues something we really want humans, the empathy of humans, to handle? Is that something A.I. can help deal with?

TYRANGIEL: So, 100 percent. We want to free human beings to actually solve human problems. I mean, I think everybody would agree with that, but there’s a corollary to that stat, which is, you know, ProPublica did some incredible work looking at the V.A.’s own inspector general reports, and it turns out that human beings were frequently misdiagnosing or ignoring signs of mental distress in veterans. I mean, to a really — if you’re an American who has any sort of patriotism — just a gut-wrenching degree. And the reasons, again, are that the system is incredibly complicated. You’re losing track of who has what condition. So much of the work they’re doing is still manual, let alone the stuff that’s digital not being functional. And so, we have to get to a place where, if we really value our veterans and their service, you know, they deserve a god view of their own care. They deserve to be able to log in and look at where they are, and someone on the other end deserves to have that same functionality, so they can see, oh, this is a person who has severe PTSD. How frequently are we checking in? And I would say that at the first level, an automated LLM that says, how are you feeling today, is not a bad thing, because any sort of even parasocial engagement that can ladder up immediately to a human when you detect distress is a positive. I’m not suggesting that that becomes the counselor by any means, but the system is so broken that just instituting some sort of basic check-in, some sort of order, can get that number down. And what I ultimately suggest in the piece is, you know, steal from software, right? So, as I said, software isn’t built all at once. You start with one small use case, and you learn from it. And we owe veterans the opportunity to learn and work with them first. And so, let’s solve that problem. You know, if Joe Biden was going to give a State of the Union about A.I., and we wanted to sort of take our moonshot, why don’t we reduce that number of suicides by half in one year? Could we do it? I mean, we manufactured and distributed vaccines in six months. So, I think we could. But, you know, Americans respond to big challenges, and it’s complicated. A.I. is a very technical subject. People have natural fear and confusion around it. We need to set a really, really big goal and see if we can accomplish it. And then I think people will understand the value of what A.I. can deliver in operational excellence.

ISAACSON: One of the things large language models could do is answer your questions about taxes, if you called up, since you can’t get an IRS person on the phone. It could do food nutrition benefits, it could do Social Security, and it could even make the judgments, probably better than humans could, of whether you deserve this and what the resolution of things should be. Do you think people would accept a machine making those decisions rather than having a human talk to them?

TYRANGIEL: I think they will, for sure. I think we’re a couple of years away from that happening everywhere. And taxes is a really good example because it’s math, right? It’s a series of inputs. There’s a limited amount of data, and you have on the other side of it a corresponding sort of regime of rules that is incredibly complicated. I promise you, your taxes in the next three years, if you’re using any of the big services, they’re going to be done by A.I. They just will be. The private firms are going to move to it. And so, the idea that the government wouldn’t take part in it makes no sense. And so, if you’ve got all of these individuals using A.I. to file taxes, maybe what we should do is centralize A.I. around that tax regime so that we don’t all have to go with our independent, very expensive A.I.s to figure out what we owe.

ISAACSON: Let me ask you the really big question on A.I. If it’s going to increase productivity enormously, does that mean it ends up destroying more jobs or creating more jobs?

TYRANGIEL: It’s a really good question. I think in the short-term, most of the economists I talk to say we’re in a three-to-five-year period where there’s probably not going to be dramatic change. And so, even the Bureau of Labor Statistics, for the jobs you would think would be most affected immediately, which are things like translators and transcriptionists, doesn’t show a great reduction in those job categories. But five to 10 years out, it does show those categories being eroded tremendously. Now, a lot of the most optimistic economists will tell you, that’s great. We have been through this many, many times, where a massive new canonical kind of technology comes along, and it eliminates job categories and new jobs are created, right? And every single time we go through the same crisis of faith: well, what if this time is different? I’m very sensitive to the fact that we don’t know the answer. But if we look back at previous changes, industrialization, even things like tractors, what you’ll find is that, yes, we end up leveling out. We create new job categories with each thing. I think a lot of people’s confusion and worry about A.I. is grounded in that, right? And I am very aware, when you talk about, oh, well, A.I. can eliminate drudgery, you know, that sounds great unless you put food on your table through the act of drudgery. And so, this is why, you know, my whole thing is really that we’re not having the appropriate conversations around what this technology can do, because unfortunately, we’re not in a place where mature conversation is what our politics provide. But we really need it, because this stuff is moving very quickly. And it is going to start impacting people’s lives, you know, it already has, but it’s going to impact what they bring home in the next six months, the next two years, the next five years.

ISAACSON: A lot of people, even in the A.I. industry, and certainly a lot of politicians keep calling for more regulation of A.I. Is that something that we should be frightened of, government trying to regulate it, or is that something that makes some sense? And if so, what type of regulations would you suggest?

TYRANGIEL: So, we have no regulation of it to this point. So, I wouldn’t be frightened of some regulation. And by the way, this is not strictly about A.I. You know, we don’t have a federal digital privacy law. It’s state by state. And so, we’ve been way behind the eight ball on this. Private industry is obviously — while some people are outwardly saying, please regulate us, there are a lot of people who behind the scenes will say, well, for as long as they don’t, let’s move as fast as we can. I’ve heard a couple of different ideas about regulation, some of which are about training data: what are people using to train their A.I., and how can we make sure it’s not discriminatory and it’s not stolen from creators? So, that’s one area. Another area, which is super ambitious, is that in order to do large-scale artificial intelligence work, you need very expensive chips. And so, there are some people who talk about, well, should we have a kind of atomic energy sort of regime that actually knows how many chips each company has and how many chips each individual has, because if you’re going to do something nasty with A.I., you’re going to need a massive amount of computing power, and chips are physical, right? So, in the same way that uranium or plutonium is a physical item, there are people out there suggesting that we regulate the number of chips people hold. As always, the European Union goes first. They are very good at going first, and American companies will tell you they’re too draconian. My question is, what are we waiting for, right? And I don’t really know why we haven’t seen the first steps on this.

ISAACSON: But wait, you say, what are we waiting for? And you say, we’re behind the eight ball on regulation. Maybe that’s why we’re ahead of every other country on developing A.I.

TYRANGIEL: It could be, although I’m not sure we are ahead of every other country. You know, I believe in private enterprise, and I believe in what it does. Competition is great. At the same time, you look at some of the state-run regimes. You know, China is doing very well on A.I. Abu Dhabi has a state-run LLM. They’re also surprisingly competitive. So, I don’t think what we’re talking about is putting a handbrake on A.I. development, but there’s literally no regulation on it right now. And I think we do need to at least get something on the board, particularly around privacy. There has to be — you know, you look at deepfakes, you look at all of these things that can be corrosive to people’s lives and to the trust of the technology, you’d think we could get something basic on the board before the end of the year.

ISAACSON: What do you think of President Biden’s executive order on A.I. and just in general, how the Biden administration is doing?

TYRANGIEL: You know, I think they’ve done a really good job on the facts that are in front of them. I think that, you know, there’s this multi-hundred-page executive order that they delivered, and it’s really smart. It’s really comprehensive, and it’s smart about sort of activating the federal government to begin to understand how A.I. is going to affect each department. The other thing that’s really smart about it is it doesn’t treat all departments and all department heads equally. A lot of the most important stuff has been delegated to Gina Raimondo, who’s the secretary of commerce, who really understands these issues, has a ton of faith and trust from private industry, but also, really, she just understands this stuff. And so, I do think that they’ve done well to engage. I think the president himself, you know, has done a nice job of calling out the compromises to election security and privacy that A.I. is capable of. What they haven’t done, and what really nobody in politics has done, is sketch out the big vision of what A.I. is going to mean for society and what it’s going to mean for the government itself.

ISAACSON: Josh Tyrangiel, thank you so much for joining us.

TYRANGIEL: My pleasure. Thank you, Walter.

About This Episode

Haitian Prime Minister Ariel Henry has announced that he will resign. Monique Clesca and Ambassador Pamela White join to discuss. Four years after WHO declared the coronavirus a global pandemic, Dr. Cornelia Griggs discusses her new memoir, “The Sky Was Falling.” Josh Tyrangiel joins Walter Isaacson to discuss his latest piece: “Let AI Remake the Whole U.S. Government (and Save the Country).”
