A Bill of Rights for the Age of Artificial Intelligence

We should be concerned about the rights of all sentients as an unprecedented diversity of minds emerges

In 1950, Norbert Wiener’s The Human Use of Human Beings was at the cutting edge of vision and speculation, proclaiming:

[T]he machine like the djinnee, which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us… Whether we entrust our decisions to machines of metal, or to those machines of flesh and blood which are bureaus and vast laboratories and armies and corporations… [t]he hour is very late, and the choice of good and evil knocks at our door.

But this was his book’s denouement, and it has left us hanging now for 68 years, lacking not only prescriptions and proscriptions but even a well-articulated “problem statement.” We have since seen similar warnings about the threat of our machines, even in the form of outreach to the masses, via films like Colossus: The Forbin Project (1970), The Terminator (1984), The Matrix (1999), and Ex Machina (2015). But now the time is ripe for a major update with fresh, new perspectives — notably focused on generalizations of our “human” rights and our existential needs.

Concern has tended to focus on “us versus them” (robots) or “gray goo” (nanotech) or “monocultures of clones” (bio). To extrapolate current trends: What if we could make or grow almost anything and engineer any level of safety and efficacy desired? Any thinking being (made of any arrangement of atoms) could have access to any technology.

Probably we should be less concerned about us versus them and more concerned about the rights of all sentients in the face of an emerging unprecedented diversity of minds. We should be harnessing this diversity to minimize global existential risks, like supervolcanoes and asteroids.

While we may not know what ratio of bio/homo/nano/robo hybrids will be dominant at each step of our accelerating evolution, we can aim for high levels of humane, fair, and safe treatment (“use”) of one another.


What did the 33-year-old Thomas Jefferson mean in 1776 when he wrote, “We hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the Pursuit of Happiness”? The spectrum of current humans is vast. In 1776, “Men” did not include people of color or women. Even today, humans born with congenital cognitive or behavioral issues are destined for unequal (albeit in most cases compassionate) treatment — Down syndrome, Tay-Sachs disease, Fragile X syndrome, cerebral palsy, and so on.

And as we change geographical location and mature, our unequal rights change dramatically. Embryos, infants, children, teens, adults, patients, felons, gender identities and gender preferences, the very rich and very poor — all of these face different rights and socioeconomic realities. One path for new mind-types to obtain and retain rights similar to those of the most elite humans would be to keep a Homo component, like a human shield or figurehead monarch/CEO, blindly signing enormous technical documents and making snap financial, health, diplomatic, military, or security decisions. We will probably have great difficulty pulling the plug on, modifying, or erasing (killing) a computer and its memories — especially if it has befriended humans and made spectacularly compelling pleas for survival (as all excellent researchers fighting for their lives would do).

Radically divergent rules for humans versus nonhumans and hybrids

The divide noted above for intra–Homo sapiens variation in rights explodes into a riot of inequality as soon as we move to entities that overlap (or will soon) the spectrum of humanity. In Google Street View, people’s faces and car license plates are blurred out. Video devices are excluded from many settings, such as courts and committee meetings. Wearable and public cameras with facial recognition software touch taboos. Should people with hyperthymesia or photographic memories be excluded from those same settings?

Shouldn’t people with prosopagnosia (face blindness) or forgetfulness be able to benefit from facial recognition software and optical character recognition wherever they go? And since we all share those limitations to some extent, shouldn’t we all be able to benefit?

These scenarios echo Kurt Vonnegut’s 1961 short story “Harrison Bergeron,” in which exceptional aptitude is suppressed in deference to the mediocre lowest common denominator of society. Thought experiments like John Searle’s Chinese room and Isaac Asimov’s three laws of robotics all appeal to the sorts of intuitions plaguing human brains that Daniel Kahneman, Amos Tversky, and others have demonstrated. The Chinese room experiment posits that a mind composed of mechanical and Homo sapiens parts cannot be conscious, no matter how competent it is at intelligent human (Chinese) conversation, unless a human can identify the source of the consciousness and “feel” it. Enforced preference for Asimov’s first and second laws favors human minds over any other mind, which appears only meekly in his third law, self-preservation.

If robots don’t have exactly the same consciousness as humans, then this is used as an excuse to give them different rights, analogous to arguments that other tribes or races are less than human. Do robots already show free will? Are they already self-conscious? The robot Qbo has passed the “mirror test” for self-recognition, and the robot Nao has passed a related test of recognizing its own voice and inferring its internal state of being, mute or not.

For free will, we have algorithms that are neither fully deterministic nor random but aimed at nearly optimal probabilistic decision-making. One could argue that this is a practical Darwinian consequence of game theory. For many (not all) games/problems, if we’re totally predictable or totally random, then we tend to lose.

What is the appeal of free will anyway? Historically, it gave us a way to assign blame in the context of reward and punishment on earth or in the afterlife. The goals of punishment might include nudging the priorities of the individual to assist the survival of the species. In extreme cases, this could include imprisonment or other restrictions if Skinnerian positive/negative reinforcement is inadequate to protect society. Clearly, such tools can apply to free will, seen broadly — to any machine whose behavior we’d like to manage.

We could argue as to whether the robot actually experiences subjective qualia for free will or self-consciousness, but the same applies to evaluating a human. How do we know that a sociopath, a coma patient, a person with Williams syndrome, or a baby has the same free will or self-consciousness as our own? And what does it matter, practically? If humans (of any sort) convincingly claim to experience consciousness, pain, faith, happiness, ambition, and/or utility to society, should we deny them rights because their hypothetical qualia are hypothetically different from ours?

The sharp red lines of prohibition, over which we supposedly will never step, increasingly seem to be short-lived and not sensible. The line between humans and machines blurs, both because machines become more humanlike and because humans become more machinelike — not only because we increasingly and blindly follow GPS scripts, reflex tweets, and carefully crafted marketing, but also because we digest ever more insights into our brain and genetic programming mechanisms. The NIH BRAIN Initiative is developing innovative technologies and using these to map out the connections and activity of mental circuitry so as to improve electronic and synthetic neurobiological ware.

Various red lines depend on genetic exceptionalism, in which genetics is considered permanently heritable (although it is provably reversible), whereas exempt (and lethal) technologies, like cars, are, for all intents and purposes, irreversible due to social and economic forces. Within genetics, a red line makes us ban or avoid genetically modified foods but embrace genetically modified bacteria making insulin, or genetically modified humans — witness mitochondrial therapies approved in Europe for human adults and embryos.

For “human subject research,” we refer to the 1964 Declaration of Helsinki, keeping in mind the 1932–1972 Tuskegee syphilis experiment, possibly the most infamous biomedical research study in U.S. history. In 2015, the Nonhuman Rights Project filed a lawsuit with the New York State Supreme Court on behalf of two chimpanzees kept for research by Stony Brook University. The appellate court decision was that chimps are not to be treated as legal persons since they “do not have duties and responsibilities in society,” despite claims by Jane Goodall and others that they do, and despite arguments that such a decision could be applied to children and the disabled.

What prevents extension to other animals, organoids, machines, and hybrids? As we (e.g., Hawking, Musk, Tallinn, Wilczek, Tegmark) have promoted bans on “autonomous weapons,” we have demonized one type of “dumb” machine, while other machines — for instance, those composed of many Homo sapiens voting — can be more lethal and more misguided.

Do transhumans roam the earth already? Consider the “uncontacted peoples,” such as the Sentinelese and Andamanese of India, the Korowai of Indonesia, the Mashco-Piro of Peru, the Pintupi of Australia, the Surma of Ethiopia, the Ruc of Vietnam, the Ayoreo-Totobiegosode of Paraguay, the Himba of Namibia, and dozens of tribes in Papua New Guinea. How would they or our ancestors respond? We could define “transhuman” as people and cultures not comprehensible to humans living in a modern yet untechnological culture.

Such modern Stone Age people would have great trouble understanding why we celebrate the recent LIGO gravitational-wave evidence supporting the hundred-year-old general theory of relativity. They would scratch their heads as to why we have atomic clocks, or GPS satellites so we can find our way home, or why and how we have expanded our vision from a narrow optical band to the full spectrum from radio to gamma rays. We can move faster than any other living species; indeed, we can reach escape velocity from Earth and survive in the very cold vacuum of space.

If those characteristics (and hundreds more) don’t constitute transhumanism, then what would? If we feel that the judge of transhumanism should not be fully paleo-culture humans but recent humans, then how would we ever reach transhuman status? We “recent humans” may always be capable of comprehending each new technological increment — never adequately surprised to declare arrival at a (moving) transhuman target. The science fiction prophet William Gibson said, “The future is already here — it’s just not very evenly distributed.” While this underestimates the next round of “future,” certainly millions of us are transhuman already — with most of us asking for more. The question “what was a human?” has already transmogrified into “what were the many kinds of transhumans, and what were their rights?”

From The Rights of Machines by George M. Church. Adapted from POSSIBLE MINDS: Twenty-Five Ways of Looking at AI edited by John Brockman, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2019 by John Brockman.

Robert Winthrop Professor of Genetics at Harvard Medical School and Professor of Health Sciences and Technology, Harvard-MIT.
