CHRISTIANE AMANPOUR, CHIEF INTERNATIONAL ANCHOR: Now, it is no secret that technology is getting smarter, faster, especially with the controversy over the safety of A.I., artificial intelligence. Our next guest is particularly concerned about how tech's rapid expansion could ramp up the assault on our privacy and effectively read our minds. In her new book, "The Battle for Your Brain," author Nita Farahany warns of the threats emerging neurotechnology poses to our freedom of thought. She joins Walter Isaacson to discuss how governments can protect cognitive liberty.
(BEGIN VIDEO CLIP)
WALTER ISAACSON, HOST: Thank you, Christiane. And Nita Farahany, welcome to the show.
NITA FARAHANY, AUTHOR, “THE BATTLE FOR YOUR BRAIN”: Thanks for having me.
ISAACSON: So, you have this great book, “The Battle for Your Brain.” What is the battle for our brain? Who’s battling us?
FARAHANY: Well, I think a battle is underway to really gain access to our brain activity and get to a world of greater brain transparency, that is, being able to peer into our brains, collect the data that's there, commodify that data, and really change our brains and mental experiences.
ISAACSON: But wait, wait. Who’s doing this?
FARAHANY: Well, fair enough. So, the battle that I’m referring to is really the broader battle by corporations and governments to gain access to the brain. So, this is everything from the coming age of brain wearables and neurotechnology, which we can talk about what that means, to government attempts to access the brain, whether that’s through interrogating criminal suspects or the development of brain biometric programs or even purported brain control weaponry that’s maybe underway in other countries.
ISAACSON: Well, let’s start with that technology. You talk about neurotechnology. That means that somebody, perhaps without permission, because we decided to buy some device, will sort of read our brain waves. Is that approximately right?
FARAHANY: That's approximately right. So, people are already accustomed to having sensors that pick up their heart rates or their breaths or even their sleep patterns. The idea is that brain sensors have already been put into devices. Right now, they're largely niche applications with headbands worn across the forehead, but companies from Meta, Snap, Microsoft and even Apple are starting to embed brain sensors that can pick up the electrical activity in our brain into everyday devices like earbuds or headphones, the soft cups around them with brain sensors, watches that could pick up brain activity as it goes from your brain down your arm to your wrist. And the hope of these technologies is to really open the brain up for ways for people to be able to track their own brain activity, reduce their stress levels, improve their focus, navigate through augmented reality and virtual reality, or even become the way in which we interface with the rest of our technology.
ISAACSON: So, tell me what information could it read?
FARAHANY: So, right now, the advances that have been made have been pretty startling, largely because of improvements in artificial intelligence and pattern recognition, plus the miniaturization of sensors that can start to pick up electrical activity. These aren't mind-reading devices. They're not literally decoding the thoughts in a person's mind. What they can do is pick up different brain states that reflect emotions. So, are you happy or sad, or are you bored? Is your mind wandering? Are you paying attention? Are you focused? Are you tired? With additional probes in the environment, so, for example, if you're playing a game and something is embedded into the game platform, like a subliminal message, researchers have shown it's possible to even probe the brain for information like a PIN or an address or your political preferences or beliefs or desires.
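The pipeline Farahany describes — sensors picking up electrical activity, then pattern recognition classifying coarse brain states rather than literal thoughts — can be sketched roughly as follows. This is a toy illustration, not any vendor's actual algorithm: the frequency bands are standard EEG conventions, but the threshold classifier is an invented stand-in for a trained model.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` within the band [low, high) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].mean()

def extract_features(signal, fs=256):
    """Reduce a raw EEG-like trace to canonical band-power features."""
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    return np.array([band_power(signal, fs, lo, hi) for lo, hi in bands.values()])

def classify_attention(features):
    """Toy stand-in for a trained classifier: beta dominance ~ focus."""
    theta, alpha, beta = features
    return "focused" if beta > alpha else "mind-wandering"
```

A real device would feed such features to a model trained on labeled sessions; the point of the sketch is only that these systems classify coarse states (focus, fatigue, mood) from band-level patterns, not decode sentences.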
ISAACSON: But if I’m wearing one of these things, it’s because I chose to, it’s because I want to have a better interaction with the machines around me or maybe with the game I play, maybe it’s an Oculus Rift, maybe it’s some virtual reality thing. You talk about cognitive liberty, isn’t that part of my liberty to say, hey, I want these things?
FARAHANY: Absolutely. So, I talk about cognitive liberty as the right to self-determination. That includes both a right to access the technology and learn what's happening there, or enhance your brain or change it, but also a right to be free from interference. So, Walter, you say you would intentionally make the choice, if you wanted to, to use the devices. That won't be true for everyone. And already, in workplaces worldwide, employees have been required to wear brain sensors to track their fatigue levels, their attention, their focus, or even their emotional levels in the workplace. And in China, there are reports that people have even been sent home from work based on what their brain activity reveals. Similarly, students in classrooms in China and other countries have been mandated to wear these brain wearables and have had their brain activity monitored. It's happening in criminal justice systems worldwide, where police are interrogating people's brains to see if there's recognition of crime scene details. So, cognitive liberty is about both your right to make a choice to navigate through a game, to be able to swipe with your mind or type on virtual keyboards, but also to not have the devices mandated, nor to have your brain data collected or your brain manipulated, which can happen too. These aren't just read devices; many of them are also write devices to the human brain.
ISAACSON: So, you’re talking about it happening in China, where it’s mandated that people have these wearables, is that done at all in the United States or in the West?
FARAHANY: So, I am not familiar with any specifically mandated case of it in the U.S., except for one. There's a company called SmartCap that has been selling a device embedded with electrodes, with sensors, that pick up brain activity. They've used this product with enterprises, with companies worldwide, who use it to track the fatigue levels of employees. It's not that different from driver-assist technology, which is in some trucks and cars; the difference is that it's trained on the brain to pick up the electrical activity that signals a person's fatigue level. They've reported that they've partnered for a trial with a North American trucking company to test out SmartCap, and I suspect there will be increasingly more examples of employers starting to integrate that. Employers during the COVID pandemic, especially during work from home, started to introduce significantly more productivity-tracking software programs on employees' work-from-home devices. I don't think it's a far stretch to imagine, in a world where surveillance in the workplace has increased significantly, that certain kinds of sensors might be integrated, at least in limited contexts, even here in the United States.
ISAACSON: Well, let me ask you about the SmartCaps that could be put on truckers to see if they're getting too fatigued. Once again, that sounds like a pretty good idea to me. Am I wrong?
FARAHANY: So, I actually think that, done well, at least for something like long-haul truck drivers or pilots or miners, the balance of the individual's interest in their mental privacy relative to the societal risk of somebody barreling down the highway while they're asleep may favor tracking the individual's fatigue. The right to cognitive liberty looks at the balance between societal and individual interests. And one thing that SmartCap is doing really well is minimizing the data that they collect. You could collect a lot more information from the brain and mine it if you're an employer. SmartCap overwrites all of that data on the device itself; they provide only an algorithmic interpretation, a score of one to five as to whether the person is wide awake or falling asleep. And those kinds of practices start to get to the responsible use of this technology. If you're going to have it in a setting where, for example, a truck driver has their brain activity monitored for fatigue levels, implementing those kinds of safeguards, I think, decreases the intrusion into their mental privacy.
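The data-minimization pattern she describes — raw brain data kept and overwritten on-device, with only a coarse one-to-five score ever reported — can be sketched as below. This is a hypothetical illustration; the class name and scoring rule are invented stand-ins, not SmartCap's actual implementation.

```python
class OnDeviceFatigueMonitor:
    """Sketch of on-device data minimization: raw samples never leave the
    device, and only an aggregate 1-5 fatigue score is reported."""

    def __init__(self):
        self._buffer = []  # raw samples, held only transiently on-device

    def ingest(self, samples):
        """Accumulate raw sensor readings on the device."""
        self._buffer.extend(samples)

    def report_score(self):
        """Return only an algorithmic 1-5 interpretation, then discard raw data."""
        if not self._buffer:
            return 1  # no evidence of fatigue; report wide awake
        # Invented stand-in for a fatigue algorithm: map mean amplitude
        # into a score from 1 (wide awake) to 5 (falling asleep).
        mean_amp = sum(abs(s) for s in self._buffer) / len(self._buffer)
        score = min(5, max(1, int(mean_amp // 10) + 1))
        self._buffer.clear()  # overwrite/discard raw data on-device
        return score
```

The design choice is the point: because only the derived score crosses the device boundary, an employer cannot later mine the raw signal for anything beyond fatigue.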
ISAACSON: The fundamental notion in your book seems to be freedom of thought. I mean, that’s what the cognitive liberty, I think, is aiming at, right? Why is it so important that we guard freedom of thought?
FARAHANY: So, I think freedom of thought is really, as you say, foundational to this concept of cognitive liberty. I interpret it more narrowly than the concept of mental privacy, which is why I include mental privacy as part of it. There's a lot that happens in our brains, from automatic responses to emotions to basic brain states, that mental privacy would cover. Freedom of thought gets at that inner monologue, that space for private reprieve, which I think is so fundamental to human flourishing. It's what gives us the space to decide who we are, develop our own self-identity, choose what we'll share and won't share with other people, define our own terms of vulnerability, have a place where we can think daring thoughts or thoughts that might go against the grain or, if you're in a tyrannical or authoritarian regime, dream a dream of resistance and of rising up against injustice. All of that requires that we have a space in which your thoughts are not accessed, your thoughts aren't manipulated, and you aren't being punished for what you're thinking. And I worry that when we breach that final domain of privacy, that space for private reprieve, it will be very difficult for people to continue to cultivate that kind of inner monologue. I worry that there will be a chilling of even our inner thoughts, in ways that could be, I think, devastating to humanity.
ISAACSON: It really is the stuff of science fiction, what we've been warned about by the great science fiction writers, obviously Orwell above all. But so much of this is about what happens when machines can read our brains, isn't it?
FARAHANY: I think that's right. And, you know, it's an anxiety you see repeated not just with neurotechnologies but increasingly with generative A.I. People worry a lot about our ability, for example, to resist manipulation. Take, for example, the cognitive biases or the shortcuts that our brains use to be able to pay selective attention to different things in our environment, or to be able to tell, of all the threats that are coming at me, that one is the tiger I need to pay attention to. When technologies are designed to take advantage of those brain shortcuts and heuristics, it can be very difficult for us to resist, difficult to not return to our phones over and over again, or to platforms where you say, OK, I'll watch one more episode. And cognitive liberty is about that too, which is to try to define the line between persuasion and manipulation, to try to enable us to think freely in a world in which technologies are being designed to compete for, if not dominate and take over, our attention.
ISAACSON: Well, you call it neuromarketing in your book, which is this notion that, you know, companies could sort of manipulate your thinking as it goes along. And this has — it's ever been thus. I mean, I think it was John Kenneth Galbraith who wrote about how advertising does that subliminally. Why is this much worse, and what are the technologies that are going to do this?
FARAHANY: So, neuromarketing is one point on a spectrum that I discuss in my chapter on mental manipulation in the book, and neuromarketing is really designed to try to figure out what people's actual responses, their actual preferences, or their biases are, well beyond what their self-reports say. But this idea of trying to figure out what people actually want or desire, bypassing their conscious preferences and desires, is not inherently different from what most other forms of marketing have done, except for its precision and its possibility for misuse. So, for example, one of the techniques that I talk about is a technique called dream incubation. This is another form of neuromarketing, but what it does is try to find the moment at which people are in the most suggestible state to market to them, when blood flow hasn't fully restored to all parts of their brain after being asleep, and then to use that moment to try to create positive associations with brands. This idea of trying to get to the brain when conscious awareness isn't there, again, when used just to sell us products that we may want or that are consistent with our preferences or desires, is not that different from what marketers have done in the past. What's different is when it's used for purposes that may harm us, or to try to intentionally overcome our ability to act otherwise. Then we need to draw a line and say this actually falls on the side of manipulation. And there are going to be a lot of technologies in this world of generative A.I. that we're going to need to look at and see whether they're being designed to do exactly that, to bypass our conscious decision-making.
ISAACSON: Let’s talk about some of the upsides maybe of this technology. How might it help us with mental health issues, for example?
FARAHANY: So, I think that's one of the biggest drivers, Walter, and the reason why I think people will embrace the technology: it does have extraordinary potential for our mental health and wellbeing. I think it's pretty stunning that, you know, people are able to tell you everything down to their cholesterol levels or the number of steps that they've taken each day, but they know virtually nothing about what's happening in their own brains. And that's true whether it's a person who suffers from epilepsy, who can't know in advance that they're going to have an epileptic seizure, or a person suffering from depression, or someone like me with chronic migraines, where I have some indications that one is coming on, but not many until I have a full-blown headache.
ISAACSON: Well, let me ask you on that. Suppose you’re fighting migraines, would you voluntarily start using a wearable device where people —
FARAHANY: I have.
ISAACSON: — could pull up the medical data?
FARAHANY: I have, very much so. So, I've used neurofeedback devices to try to decrease my stress levels, which is one of my triggers. I have used wearables that are neurostimulation devices, which, instead of taking medication, I can use to try to modulate and decrease my pain or interrupt my migraines. And for people with epileptic seizures, for example, there is the ability to longitudinally, over time, wear brain wearables so they can get a real-time and potentially lifesaving alert sent to a mobile device. Or someone who is suffering from depression is able to interrupt the patterns of electrical activity that make them the most symptomatic. Or even just the majority of us who have a difficult time paying attention during the day, or are increasingly distracted, as we've just talked about, by all of the different stimuli in our environment, can use these devices to train and reclaim our focus and to bring down our stress levels and cognitive load. I think the promise is really extraordinary. It's the reason that I think most people will be excited about it, and the reason why it's so urgent that we actually change the terms of service for this brand-new category of technology in favor of individual rights, in favor of being able to keep the data private and use it for our own personal wellbeing, rather than introducing a new form of surveillance, neural surveillance of the masses.
ISAACSON: You were on President Obama’s Commission for the Study of Bioethical Issues, right?
FARAHANY: Yes.
ISAACSON: President Biden doesn’t have one of those. What would you do if there were one right now? Do you think government can actually play in this field or is it something beyond the scope of our current politics?
FARAHANY: I think they can. So, first of all, I think it's really unfortunate that since our bioethics commission there has not been another presidential bioethics commission. There was one going back all the way to President Carter, under different titles and names, and it had the effect of taking these major technological and scientific advances and bringing them to the forefront of public discussion, convening experts to come up with specific recommendations, everything from, you know, the basics of what funding we need for different programs and making recommendations to funding agencies to, you know, what kind of expertise and oversight or adaptive regulation would help us get out ahead of these different products and technologies. I think we need something like that. We need something at the presidential level, at the executive level, that helps to both identify and flesh out those kinds of recommendations and builds a broader societal conversation and consensus around the pathway forward. Because, you know, whether it's the metaverse or A.I. or neurotechnology, all of these in combination are fundamentally changing our brains and mental experiences, and it's really important that we come up with a federal approach to how we're going to govern, think about, and enable people to have cognitive liberty in this digital age.
ISAACSON: Nita Farahany, thank you so much for joining us.
FARAHANY: Thank you so much for your time. I really enjoyed the conversation.
About This Episode
Fox News Corporation and Dominion Voting Systems go head to head. Leaked Pentagon documents reveal just how quickly Kyiv is running out of weapons, and how doubtful the U.S. government's hope of bringing the war to a rapid conclusion appears. In her new book, "The Battle For Your Brain," Nita Farahany warns of the threat posed by emerging neurotechnology to our freedom of thought.