03.12.2019

Joi Ito of MIT Discusses Complex Moral Issues in New Tech

Walter Isaacson sits down with Joi Ito, Director of the MIT Media Lab, to discuss some of the most complex moral problems facing tech innovators.

Read Transcript

NOW WE TURN TO THE ACADEMICS STRUGGLING WITH SOME OF THE MOST COMPLEX MORAL PROBLEMS IN NEW TECH.

TECH INNOVATORS CAN FIND THEMSELVES IN A BUBBLE, IMMUNE TO THEIR REAL-WORLD IMPACT.

IT'S PARTLY WITH THAT IN MIND THAT THE MASSACHUSETTS INSTITUTE OF TECHNOLOGY, M.I.T., FOUNDED ITS FORWARD-THINKING MEDIA LAB.

THE CURRENT DIRECTOR SAT DOWN WITH OUR WALTER ISAACSON FOR A FASCINATING DISCUSSION ABOUT ALL THE MORAL CONUNDRUMS THEY'RE STRUGGLING WITH,

AND WHY HE REMAINS FUNDAMENTALLY OPTIMISTIC.

WELCOME TO THE SHOW.

THANK YOU.

TELL ME ABOUT WHAT THE LAB IS DOING.

THE MEDIA LAB WAS STARTED OVER 30 YEARS AGO IN A SORT OF BURST, AND MANY OF US LOVE THE PHRASE 'YOU PREDICT THE FUTURE BY INVENTING IT.' I THINK MANY OF US ARE AWARE THAT THE STUFF WE'VE BUILT OVER THE LAST 30 YEARS IS CAUSING SOCIETY SOME DIFFICULT PROBLEMS.

IT'S GONE FROM BEING COMPLETELY TECHNO-UTOPIAN TO BEING PARTIALLY, IF NOT WHOLLY, REFLECTIVE.

WE'VE ALSO PIVOTED A LITTLE FROM HUMAN-MACHINE INTERACTION TO SOCIETY AND TECHNOLOGY, AND WE'VE MOVED PROBABLY 30% INTO BIOENGINEERING.

MOST OF OUR SPACES NOW INVOLVE SOMETHING ABOUT SOCIETY AND TECHNOLOGY.

YOU WERE A DISC JOCKEY.

YES.

YOU GREW UP ORGANIZING ROCK CONCERTS AND PARTIES.

YOU DIDN'T HAVE A Ph.D. AND BOOM!

YOU'RE SUDDENLY IN CHARGE OF THE MIT MEDIA LAB.

UH-HUH.

HOW DID THAT HAPPEN, AND HOW DOES THAT AFFECT YOUR WAY OF THINKING?

I WASN'T THEIR FIRST PICK, FIRST OFF.

THEY WENT THROUGH A LOT OF CANDIDATES WITH Ph.D.s BEFORE THEY GOT TO ME.

IF YOU'RE INTERESTED IN EVERYTHING, YOU USUALLY DON'T HAVE A Ph.D. I SURVIVED DESPITE THE FACT THAT I WAS INTERESTED IN EVERYTHING.

I DON'T THINK I WOULD BE IN ANY OTHER LAB BUT I THINK IT'S DIFFICULT TO HAVE A SPECIALIST RUNNING THE MEDIA LAB.

ALSO, I GREW UP ON THE INTERNET.

I LEARNED EVERYTHING ON THE INTERNET. AND THE OTHER PART, THE DISC JOCKEY PART, IS THAT I LOVED TO WORK WITH COMMUNITIES.

RUNNING A NIGHTCLUB AND DISC JOCKEYING IS HOW YOU GET A SENSE OF THE ROOM AND TRY TO PRODUCE SOME KIND OF CREATIVE ENERGY IN THE ROOM, WHICH IS KIND OF WHAT THE MEDIA LAB IS ABOUT.

HOW DO I CREATE A CULTURE WHERE PEOPLE FEEL FREE AND PASSIONATE, WITHOUT BEING SO FOCUSED ON ONE THING THAT WE DON'T ALLOW JUST ABOUT ANYTHING TO HAPPEN?

TELL ME ABOUT WHAT YOU'RE DOING IN KENTUCKY.

IN KENTUCKY, WE'RE DOING AN INTERESTING SET OF EXPERIMENTS.

IT WAS TRIGGERED BY A STUDY THAT SHOWED RISK SCORES WERE BIASED AGAINST AFRICAN-AMERICANS.

AND SO WE STARTED THERE.

ALGORITHMS ARE BEING USED IN THE CRIMINAL JUSTICE SYSTEM TO DECIDE WHERE TO SEND POLICE, AND FOR SETTING BAIL, PAROLE, AND SENTENCING.

IT SOUNDED UNFAIR, BUT THE DATA IS VERY OPAQUE.

THESE ARE COMPANIES THAT HAVE TRADE-SECRET PROTECTIONS AROUND THEIR ALGORITHMS.

SO WE STARTED THERE, AND WHAT WE REALIZED IS THAT PREDICTION, WHICH IS WHAT MOST MACHINE LEARNING IS USED FOR RIGHT NOW, IS KIND OF A SYSTEMATIC WAY OF TAKING POWER FROM THE PREDICTEE AND GIVING IT TO THE PREDICTOR.

THAT'S TRUE WHETHER WE'RE TALKING ABOUT CHILD WELFARE SYSTEMS OR THE CRIMINAL JUSTICE SYSTEM.

IT'S LOOKING FOR WAYS TO MORE EFFICIENTLY AND EFFECTIVELY POLICE OR JAIL PEOPLE.

IN THE SHORT RUN THAT'S USEFUL IF YOU'RE TRYING TO EFFICIENTLY POLICE.

WHAT HASN'T HAPPENED IS LOOKING AT THE LONG-TERM EFFECTS OF THINGS LIKE POLICING, OR THE WAY THAT COURTROOMS ARE RUN, OR THE THING THAT WE'RE SPECIFICALLY LOOKING AT, CONDITIONS OF RELEASE, WHERE LEGISLATORS GO SAYING YOU HAVE TO DO DRUG TESTS, CURFEWS, OR GPS ANKLE BRACELETS.

THEY SOUND LIKE A GOOD IDEA BUT WE DON'T KNOW WHAT THE EFFECTS ARE.

IF YOU USE CAUSAL INFERENCE AND DATA, YOU CAN LOOK AT THE LONG RUN INSTEAD OF JUST PREDICTING WHERE THE NEXT CRIME IS GOING TO OCCUR.

YOU CAN PREDICT THE LONG TERM.

IT'S TURNING THE CAMERA AWAY FROM THE PREDICTEE AND FACING IT AT THE SYSTEM, AND TRYING TO ALLOW POLICY MAKERS AND THE PUBLIC TO UNDERSTAND HOW THE SYSTEMS WORK.

IF YOU USE MACHINE LEARNING TO ASSESS RISKS AND CREATE ALGORITHMS THAT SAY, HERE ARE THE RISKS, YOU'RE SAYING THE ALGORITHMS CAN BE RACIST?

A GREAT EXAMPLE IS THE IDEA OF RECIDIVISM RATES.

THAT'S THE LIKELIHOOD THAT YOU'RE GOING TO GET RE-ARRESTED FOR A CRIME.

OFTEN PEOPLE SAY IT'S THE LIKELIHOOD YOU'LL RE-COMMIT A CRIME.

THERE'S A GREAT STUDY BY TWO RESEARCHERS THAT TAKES THE CITY OF OAKLAND, AND WHEN YOU ASK THE POLICE WHERE THE DRUG CRIMES ARE, THEY SHOW THIS LITTLE HEAT MAP OF WHERE THEY ARREST PEOPLE FOR DRUG CRIMES.

BUT WHEN YOU DO A PUBLIC HEALTH STUDY, DRUG USE IS HAPPENING ALL OVER THE PLACE.

THEY MAKE -- THEY CONFOUND RECIDIVISM WITH CRIME.

WHAT HAPPENS IS THAT DATA GETS USED TO PREDICT CRIME TO SEND THE POLICE TO THOSE NEIGHBORHOODS AND ALSO TO PROJECT THAT POOR PEOPLE ARE MORE LIKELY TO COMMIT A CRIME BECAUSE THEY END UP IN A NEIGHBORHOOD WITH MORE POLICE.

IT'S OFTEN A SELF-FULFILLING PROPHECY.
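
A minimal sketch, in Python, of the feedback loop described above, assuming a toy city of two districts with identical underlying crime. The district names, rates, and counts are all hypothetical, chosen only to illustrate the mechanism, not taken from any real system.

```python
import random

random.seed(0)

# Two districts with IDENTICAL underlying crime rates (hypothetical numbers).
TRUE_CRIME_RATE = {"district_a": 0.3, "district_b": 0.3}

# Historical quirk: district_a happens to start with a few more recorded arrests.
arrests = {"district_a": 5, "district_b": 1}

for day in range(1000):
    # "Predictive" step: patrol wherever past arrests are highest.
    patrolled = max(arrests, key=arrests.get)
    # Arrests are only recorded where police are looking.
    if random.random() < TRUE_CRIME_RATE[patrolled]:
        arrests[patrolled] += 1

print(arrests)
# district_a ends up with hundreds of recorded arrests, district_b with one:
# the data now "shows" district_a is high-crime, though both were identical.
```

The arrest counts confound where crime happens with where police were sent, which is the Oakland heat-map point in miniature.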

ONE MORE THING: BY DEFINITION, THE CURRENT WAY THAT WE USE MACHINE LEARNING IS WE COLLECT A TON OF DATA AND USE THAT DATA TO PREDICT THE FUTURE.

ONE OF THE PROBLEMS IS THAT IF YOU TAKE ALL THIS OLD DATA, ALL IT'S DOING IS TAKING ALL THE OLD BIASES AND TRYING TO GENERATE PREDICTIONS OF THE FUTURE.

WHAT WE WANT TO BE DOING IS LEADING AND BEING PROGRESSIVE, RATHER THAN USING MACHINES TO LOCK IN OLD BIASES.

TELL ME HOW YOU DEAL WITH THE QUESTION OF RACISM.

IT'S REALLY IMPORTANT AND INTERESTING.

YOU START OUT WITH SLAVERY.

THEN YOU HAD SEGREGATION.

EACH TIME WE GO THROUGH THIS SORT OF UPHEAVAL.

1967 WAS THE YEAR OF RACE RIOTS.

AND EACH TIME WE THINK THAT WE'VE GOTTEN -- SOLVED IT, BUT WE'VE PUSHED IT DOWN ANOTHER LEVEL.

WHAT WE TALK ABOUT IS MASS INCARCERATION AS A PROXY FOR RACISM.

EACH TIME IT LOOKS MORE COLOR-BLIND.

WHAT'S HAPPENING WITH ALGORITHMS IS WE'RE ABOUT TO LOCK INTO ALGORITHMS A KIND OF SYSTEMIC RACISM THAT HAS JUST EVOLVED, AND I DON'T KNOW THAT IT'S GOTTEN -- IT'S GOTTEN BETTER IN SOME WAYS AND WORSE IN SOME WAYS.

RACISM WAS LOCKED INTO THE CODE OF INSURANCE POLICIES, FOR EXAMPLE, IN THE '70s AND IN CIVIL SOCIETY, AND THE FEMINISTS LOST THOSE BATTLES.

WE'RE FIGHTING FOR ALGORITHM FAIRNESS BUT I THINK IT'S IMPOSSIBLE TO LOCK THAT IN.

I'M INTERESTED IN ENGAGING THE HISTORIANS AND THE SCHOLARS AROUND THE HISTORY OF RACISM, AND HAVING THEM UNDERSTAND THE WORK OF THE SCHOLARS IN FAIRNESS, AND VICE VERSA.

A LOT OF THIS STUFF ISN'T NEW. IN 1950, NORBERT WIENER WROTE 'THE HUMAN USE OF HUMAN BEINGS,' AND HE SAID ORGANIZATIONS ARE MACHINES OF FLESH AND BLOOD AND WE CAN'T CONTROL CORPORATIONS.

THEY'RE SORT OF LIKE SUPER INTELLIGENCES.

WHAT WE'RE DEALING WITH IN ALGORITHMS AND A.I. IS A SUPERCHARGED VERSION OF THAT.

WHEN PEOPLE ASK ME WHAT I THINK ABOUT A.I. -- FIRST OF ALL, I CALL IT EXTENDED INTELLIGENCE AND NOT ARTIFICIAL INTELLIGENCE.

WHAT'S GOING TO HAPPEN IS WE'RE GOING TO GO CAREENING IN WHATEVER DIRECTION, WHETHER THAT'S GOOD OR BAD, AND WE REALLY NEED TO GET OUR HOUSE IN ORDER.

I THINK THAT GOING BACK AND LOOKING AT THINGS LIKE RACISM, AND HAVING THAT CONVERSATION NOW BEFORE WE LOCK IT IN, IS MORE IMPORTANT THAN ANY OF THIS SORT OF MATHEMATICAL STUFF WE'RE DOING.

HOW DO YOU LOOK AT AN ALGORITHM AND SAY I'M GOING TO REACH INTO THE ALGORITHM AND TWEAK IT SO IT BECOMES LESS RACIST?

IT'S VERY DIFFICULT.

FIRST OF ALL, IT'S GARBAGE IN, GARBAGE OUT.

SOCIETY IS RACIST, SO BY DEFINITION OUR ALGORITHMS ARE RACIST.

YOU COULD INCLUDE SOMETHING THAT LOOKS LIKE AFFIRMATIVE ACTION WHERE YOU BIAS IT MANUALLY TOWARDS GIVING MORE OPPORTUNITIES TO VULNERABLE PEOPLE.

YOU CAN TRY TO FIGURE OUT HOW TO COUNTERACT THE BIASES.

THAT'S DIFFICULT, BECAUSE EVEN IF YOU TAKE OUT PROTECTED THINGS LIKE GENDER AND RACE, OTHER VARIABLES END UP STANDING IN FOR THEM.
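
A hedged sketch of those two points in Python: dropping the race column does not remove the bias when another feature stands in for it (here a made-up zip code), while a manual, affirmative-action-style adjustment can counteract it. The data, feature names, score function, and thresholds are all invented for illustration.

```python
import random

random.seed(1)

def make_applicant():
    race = random.choice(["A", "B"])
    # Hypothetical segregation: 90% of group B lives in zip 1, 90% of group A in zip 0.
    if race == "B":
        zip_code = 1 if random.random() < 0.9 else 0
    else:
        zip_code = 0 if random.random() < 0.9 else 1
    qualified = random.random() < 0.5  # qualification is independent of race
    return {"race": race, "zip": zip_code, "qualified": qualified}

applicants = [make_applicant() for _ in range(10_000)]

# A "race-blind" score that inherited a penalty on zip 1 from biased
# historical decisions. Race is never an input, yet zip 1 is mostly group B.
def score(a):
    return (1.0 if a["qualified"] else 0.0) - (0.5 if a["zip"] == 1 else 0.0)

def approval_rate(group, threshold):
    pool = [a for a in applicants if a["race"] == group]
    return sum(score(a) >= threshold(a) for a in pool) / len(pool)

flat = lambda a: 0.75
print(approval_rate("A", flat), approval_rate("B", flat))  # roughly 0.45 vs 0.05

# Manual counteraction: lower the bar where the proxy penalty applies.
adjusted = lambda a: 0.25 if a["zip"] == 1 else 0.75
print(approval_rate("A", adjusted), approval_rate("B", adjusted))  # both near 0.5
```

The gap between groups appears and disappears without race ever entering the model, which is why auditing outcomes matters more than deleting columns.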

HOW WOULD YOU FIX IT?

TO NOT ALLOW PURELY ENGINEERING APPROACHES.

I THINK THE TEMPTATION IS TO COME UP WITH A DEFINITION FOR FAIRNESS, MAKE THAT A CHECKBOX, AND MOVE ON.

ENGAGING WITH THE ACLU IS A PAIN FOR PEOPLE WHO ARE BUILDING AND SUPPLYING THESE SYSTEMS.

WHAT WE NEED TO DO, FOR EVERY DECISION, WHETHER IT'S SCHOOL BUSES AND SCHOOL START TIMES OR HIRING SYSTEMS, IS TO MAKE A TRANSPARENT SYSTEM WHERE ALL OF US CAN UNDERSTAND WHAT DATA IS GOING IN, WHAT WE BELIEVE THE BIASES ARE, AND HOW WE SHOULD TUNE THESE OPTIMIZATIONS, AND IT SHOULD INVOLVE ALL THE STAKEHOLDERS.

RIGHT NOW THAT PROCESS ISN'T HAPPENING.

ONE REASON IS THAT THE PUBLIC DOESN'T UNDERSTAND THE SYSTEMS, AND THEY'RE NOT DESIGNED TO BE UNDERSTOOD.

AND THE DESIGNERS DON'T REALLY UNDERSTAND WHAT -- THEY DON'T KNOW WHAT THEY DON'T KNOW.

THEY DON'T KNOW THAT WHEN THEY LOOK AT RECIDIVISM RATES, IT'S NOT ACTUALLY THE CRIME RATE.

THEY DON'T KNOW THAT IF YOU DON'T HAVE BROWN AND BLACK FACES IN YOUR DATA SET, YOU WON'T BE ABLE TO DO FACE RECOGNITION ON THEM.
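
That data-set point is an auditing point: a single overall accuracy number can hide a per-group failure. A tiny sketch, with made-up counts standing in for a face-recognition test set:

```python
# Hypothetical evaluation results: (skin_group, prediction_was_correct).
results = (
    [("lighter", True)] * 900 + [("lighter", False)] * 50   # well represented
    + [("darker", True)] * 30 + [("darker", False)] * 20    # barely represented
)

def accuracy(rows):
    return sum(ok for _, ok in rows) / len(rows)

print("overall:", round(accuracy(results), 3))  # 0.93, looks fine in aggregate
for group in ("lighter", "darker"):
    subset = [r for r in results if r[0] == group]
    print(group, round(accuracy(subset), 3))    # 0.947 vs 0.6
```

Breaking accuracy out by group is the minimal audit that surfaces the gap being described.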

PEOPLE LIKE YOU NEED TO HELP US TALK TO PEOPLE ABOUT IT AND INCREASE THE AWARENESS, I THINK.

ONE OF THE TECHNOLOGIES THAT'S GOING TO DEFINE THE NEXT 50 YEARS IS BIOTECHNOLOGY AND IN PARTICULAR GENETIC EDITING TOOLS.

UH-HUH.

WHAT ARE YOU DOING WITH DNA?

WE'RE FOCUSED ON TRYING TO COME UP WITH FRAMEWORKS OF ETHICS, AND TRYING TO USE ALL MANNER OF WAYS TO GET PEOPLE USED TO IT.

IT'S DIFFICULT TO CONTROL. IF I WERE TO PICK THE NUMBER ONE CONCERN I HAVE, I THINK THE RISK OF A MISTAKE IN BIOTECHNOLOGY IS PROBABLY ONE OF OUR BIGGEST -- OTHER THAN THE FACT THAT THE ENVIRONMENT'S FALLING APART.

THAT'S A MORE KNOWN RISK.

I THINK THE WAY IN WHICH THE SCIENTIFIC COMMUNITY SHOULD AND HAS TO COME TOGETHER AROUND CREATING SAFETY SYSTEMS FOR THESE THINGS IS TOP OF MIND AT THE MEDIA LAB.

WE HAVE A GROUP WORKING ON THAT.

THROUGHOUT HUMAN HISTORY WE'VE HAD INNOVATIONS.

USUALLY OUR MORAL PROCESSING POWER KEEPS UP WITH THE NEW TECHNOLOGY.

EVERY NOW AND THEN, NOT QUITE.

LIKE DROPPING THE ATOM BOMB BEFORE WE HAD FULLY PROCESSED THAT NEW TECHNOLOGY, BUT GENERALLY IT'S WORKED PRETTY WELL, EVEN WITH THE INTERNET.

DO YOU THINK IT'S GOING TO WORK WITH BIOTECHNOLOGY, OR WILL THAT HAPPEN SO FAST, AS IS ALREADY HAPPENING WITH A DESIGNER BABY BORN IN CHINA, THAT WE WON'T HAVE RULES OF THE ROAD FOR THIS NEW TECHNOLOGY?

I WOULD SAY WE HAVEN'T BEEN ABLE TO DO IT EVEN WITH OTHER TECHNOLOGIES.

I THINK WHAT YOU DON'T WANT TO DO IS THROW OUT THE GOOD BECAUSE OF SORT OF A FEAR MONGERING REACTION TO THE BAD.

I THINK IT NEEDS TO BE CAUTIONARY VIGILANCE INSTEAD OF THE PRECAUTIONARY PRINCIPLE.

I THINK THIS REQUIRES A VERY EVIDENCE-BASED PUBLIC CONVERSATION, WHICH WE'RE VERY BAD AT, JUST LOOKING AT CLIMATE CHANGE, BUT I THINK IT'S IMPORTANT.

IF YOU LOOK AT -- REMEMBER THE COVER OF 'TIME' MAGAZINE ABOUT THE TEST TUBE BABY?

NOW, IVF IS COVERED BY INSURANCE.

WHICH IS A GOOD THING.

EDITING THE GENOMES OF BUGS TO CONTROL THEM IS A GOOD THING.

WE SHOULD NOT GET TOO REACTIONARY, AND SHOULD HAVE A BETTER CONVERSATION AND ENGAGE THE PUBLIC.

BUT WASN'T SOCIAL MEDIA, WASN'T THAT THE HOPE OF SOCIAL MEDIA?

IT IS STILL THE HOPE.

I THINK WE'RE GOING THROUGH A PHASE.

I'M HOPEFUL IN THE LONG RUN.

SORT OF LIKE AN ADOLESCENT PHASE.

I THINK WE'RE GOING THROUGH A PHASE WHERE A BUNCH OF ADOLESCENTS FIGURED OUT A WAY TO GAME THE SYSTEM AND BAD GUYS USED IT FOR BAD THINGS.

NOW IT'S BECOMING A FAIRLY REGULAR THING FOR GENERATION X KIDS TO USE, AND I DO HAVE FAITH THAT IT WILL EVOLVE INTO SOMETHING.

IT MIGHT NOT EVOLVE INTO SOMETHING THAT YOU AND I KNOW.

BUT THE WAY KIDS APPROACH THIS STUFF -- WHEN I TALK TO THE YOUNG KIDS AT MIT, THEY THINK ABOUT THIS STUFF IN A MUCH MORE SOBER WAY.

THANKS FOR BEING HERE.

THANK YOU.

About This Episode

In just over two weeks the UK is set to leave the EU, and there is still no deal in place. As parliament votes again on a potential deal, Irish political leader Mary Lou McDonald and Anthony Gardner, former U.S. Ambassador to the EU, join the program to discuss. Later, Mouaz Moustafa talks about the fallout from the Syrian war, and Walter Isaacson interviews MIT’s Joi Ito on tech’s moral quandaries.
