Connecting Brains to Computers Is a Shortcut to Dystopia

If we expect the right to free political thought, and want to prevent corporations from controlling our minds, we need a robust A.I. policy framework now


Co-authored by Susan Schneider

It’s 2045. You stroll into the Center for Mind Design. There you can purchase a brain chip to augment your intelligence, or a bundle of several such chips. People wishing for savant-like mathematical abilities can purchase the “Human Calculator” chip, while those in the market for supreme serenity can buy “Zen Garden.” And that’s just the beginning. Enhanced attention, virtuoso musical abilities, telepathy to directly experience other augmented people’s thoughts, and so much more are all there for the choosing. Which would you pick?

If you’re unsure, how about mulling it over with a philosopher? Or two.

Evan Selinger is a professor of philosophy at Rochester Institute of Technology and co-author of Re-Engineering Humanity. Susan Schneider is the NASA-Baruch Blumberg chair in astrobiology and technological innovation at NASA and the Library of Congress and author of the new book Artificial You: A.I. and the Future of Your Mind. The following edited excerpt contains highlights of our conversation.

A.I. in your head

Evan: I’m impressed by the New York Times series “Op-Eds From the Future.” By crafting fictional opinion pieces on hypothetical situations that could happen 10–100 years from now, writers are freeing up our conceptual and moral imaginations so we can see the present with new eyes. I even used the model as the basis of a writing assignment in my surveillance seminar so that students could critically consider how pressing, contemporary problems concerning privacy and power might intensify or improve.

Your contribution to the series—“Should You Add a Microchip to Your Brain?”—depicts a future where advances in A.I. create a sense of desperation among the unemployed masses, leading them to consider purchasing brain chips to enhance their cognitive abilities and alter their emotional dispositions. Why should we care about this type of scenario now?

Susan: We should care because science fiction is quickly becoming science fact. Today, several large-scale A.I. research projects are trying to put artificial intelligence components inside the brain or connect them to the peripheral nervous system. For corporations like Facebook, Neuralink, and Kernel, your brain is an arena for future profit. They are trying to hook you up directly to the cloud without the intermediary of a keyboard. Without proper regulations, your innermost thoughts and biometric data will be sold to the highest bidder. Authoritarian dictatorships will finally have the ultimate way to ensure good citizenship: they can pipe Orwellianisms directly into dissidents’ heads and locate dissidents by scanning their brains.

Evan: That’s really troubling. As things stand, fighting for privacy is hard. A.I.-powered facial recognition technology is a perfect tool of oppression, one so dangerous Woodrow Hartzog and I argue it should be banned. Perhaps the last thing society needs is for tech companies to have yet another way to sell our data — especially if the technology reduces barriers for extracting deeply personal and highly intimate details. Do you think we should try to prevent A.I. from intruding into our heads?

Susan: I’m excited about the development of A.I. and neurotechnology for medical therapies. Existing research includes Ted Berger’s project to create an artificial hippocampus to aid those with radical memory loss, which is already in human clinical trials, and DARPA’s efforts on closed-loop neural implants to aid those with PTSD. But the idea that we should move beyond therapeutic uses to enhancement is a different beast.

So, should you enhance at a Mind Design Center? Science alone can’t determine this. How we should proceed is a matter of values. And as the second part of my book suggests, it is also a matter of getting the metaphysics right and understanding debates over the nature of consciousness, selfhood, and mindedness. We also need proper A.I. regulations; otherwise, the default route is to trust the future of the mind to companies like Facebook.

Evan: Absolutely. To be sure, we need to be vigilant about tech companies hooking us in with enticements about innovation making life more convenient and fun. It’s all too plausible to imagine Facebook hyping a service that lets consumers send status updates directly into the minds of their friends by emphasizing what a drag it is to say or type words. And if Mark Zuckerberg were to tout the medical benefits of brain-controlling wearable and implantable tech, watch out. It would be prudent to be skeptical and, as Katina Michael rightly recommends, ask how the tools associated with care can be repurposed for expanding social control.

Susan: Yes, this illustrates how crucial it is that all stakeholders are involved in discussions of A.I. and the future of the mind.

Evan: Beyond concerns associated with privacy, what’s at stake in a scenario like this — one that has preoccupied many cyberpunk science-fiction writers?

“The gap between an unenhanced human and a highly enhanced one, or an A.I., will be vast. It’s unclear how democracy will even work with such an intellectually fragmented citizenry.”

Susan: Elon Musk has suggested that in order to keep up with the complex computations of superintelligence and respond to future technological unemployment, we’ll have to merge with A.I. But imagine, decades from now, that you’re outmoded in the workforce by A.I. and you’re feeling pressure to upgrade your skills to keep your job. You might feel that you can’t opt out of “enhancements” from a Center for Mind Design. People should have a fundamental right to opt out. After revamping your mind, you might not even be yourself anymore.

How can we respond to this challenge? We need to move beyond a crude business model of the mind that views the brain, mind, and self as the next lucrative arena for the development of A.I. technology. We aren’t commodities. (If you ask me, the general lesson should be: Sentient beings aren’t commodities. We should apply the same standard to nonhuman animals.) And we will need to respect intellectual differences more. The gap between an unenhanced human and a highly enhanced one, or an A.I., will be vast. It’s unclear how democracy will even work with such an intellectually fragmented citizenry. The needs and mindsets of the enhanced and the unenhanced will be vastly different.

Changing today by thinking about tomorrow

Evan: If companies push automation to the limit, and conditions of dire material scarcity arise that lead the masses to see brain chips as a necessary and coercive evil, people won’t have the luxury to make free decisions about what to do with their lives and bodies. As Brett Frischmann and I argue in our book, people don’t want to be dehumanized and treated like robots in the workplace. But they already are. And they have been for a long time. Before digital Taylorism, there was assembly-line Taylorism.

Susan: That’s why public dialogue, especially about regulation, is key. Technological unemployment is a big concern in Washington, but there aren’t enough policymakers seriously considering the possibility that A.I. will eventually outmode us in every domain. Retraining workers may be feasible in the short term, but what if A.I. becomes cheaper and more efficient at every task? It’s unclear how more than a few humans will have any income.

I hear a lot about universal basic income. If that is needed, we’d better figure out how to get money from the few people who own the intellectual property behind some profitable algorithm and who can rapidly move offshore and/or claim massive tax breaks. And if we make A.I.s “persons,” perhaps because we mistakenly think they are sentient (whether A.I. can be sentient is an issue I believe is up for grabs), they will simply own the products of their labor. The digital economy could render humans obsolete. These issues aren’t intractable, but we need to get this right.

Evan: Do you think that considerations about the future of technology can motivate us to become better people today? To pick another example, let’s say the robot-rights people are correct that if sentient A.I. is ever created, it will deserve ethical treatment. Should one takeaway be that society should do a better job respecting animal rights? In other words, should we use the future as a lens for figuring out how to be more ethically consistent?

“Perhaps A.I.s will ask: Should we give special moral consideration to humans even though they are vastly less intelligent than A.I.?”

Susan: Absolutely. It’s wrong to care about the potential exploitation of sentient androids yet remain unconcerned about animal suffering, which is clearly happening all around us today. Further, we humans may soon find ourselves in the place of nonhuman animals. Suppose superintelligent A.I. outsmarts us, and the intellectual gap between humans and superintelligence is akin to the gap between you and your dog or cat. Perhaps A.I.s will ask: Should we give special moral consideration to humans even though they are vastly less intelligent than A.I.?

Evan: Isn’t there a danger that people will misuse hypotheticals to avoid doing the right thing here and now? Take the control problem, the worry that A.I.’s interests will be misaligned with our own. In Musk’s thought experiment, an A.I. designed to pick strawberries destroys civilization because it redesigns itself to become more efficient and plants strawberry fields everywhere — literally everywhere.

Science fiction writer Ted Chiang — who wrote the very first “Op-Ed From the Future” — calls bullshit on this. He suggests that Musk’s description of A.I. can be interpreted as psychological projection — a displaced description of how tech companies themselves behave when embracing the liberation ethos of pursuing maximum growth with minimal regulatory impediments. From this perspective, there’s something hypocritical about leadership at big tech companies worrying about problems that autonomous A.I. might cause. They don’t care enough about their own all-too-human misdeeds. And when they get worked up about saving humanity from the control problem, they get to look like our planetary saviors.

Susan: I doubt that those who have devoted their time, money, and energy to solving and publicizing the control problem, like Bill Gates and Elon Musk, are being disingenuous. But I’ve noticed that some business leaders tend to get nervous when the control problem comes up because they are wary of all the clickbait, Terminator-style news articles. Bad PR.

Anyway, setting aside speculations about people’s motives, the real issue is that the control problem is serious business. Even if there’s only a 6% risk that we could lose control over A.I. in the next 20 to 70 years, shouldn’t we lay the groundwork now to prevent this? Because the potential impact is so great, even a small chance that this could happen demands our attention. Doing so doesn’t make today’s issues, like privacy, algorithmic discrimination, and black-box algorithms, less important. They’re all important! Unfortunately, lots of people believe there’s a “worry budget.” From this perspective, being concerned about one set of issues detracts from making progress on the others. But that’s self-defeating.

Evan: It wouldn’t be a problem if there were enough resources to go around to address all of the important issues concerning A.I. But I don’t think this is the case. Since worry budgets go hand in hand with financial ones, it’s important that funding for possible problems down the road doesn’t come at the expense of devaluing more immediate concerns — especially when they involve justice issues impacting vulnerable and marginalized populations.

Susan: The long-term and short-term issues we’ve discussed today are intimately connected. The more we do now to protect privacy and ensure algorithmic fairness, the better off we will be when you really do walk into a Center for Mind Design. The policy framework we set up now, if done right, could help ensure that your thoughts aren’t tracked and that any apps running in your head don’t import biases or remain “black boxes” you don’t understand.

