Asimov’s Three Laws Helped Shape A.I. and Robotics. We Need Four More.
A leading expert in the emergent field of A.I. law argues it’s high time to update the three laws of robotics
Isaac Asimov’s three laws of robotics are probably the most famous and influential lines of science-fictional tech policy ever written. The renowned writer speculated that as machines took on greater autonomy and a greater role in human life, we would need strict rules to ensure they could not put us in harm’s way. Those laws hark back to 1942, when they first appeared in one of Asimov’s Robot stories. Now, with A.I., software automation, and factory robotics ascendant, the dangers posed by machines and their makers are even more complex and urgent.
In Frank Pasquale’s provocative and well-wrought book, New Laws of Robotics: Defending Human Expertise in the Age of AI, the Brooklyn Law School professor proposes adding four new principles to Asimov’s original three. Which, for those unfamiliar, are as follows:
- A robot must not harm humans, or, via inaction, allow a human to come to harm.
- A robot must obey orders given to it by humans, unless an order would violate the first law.
- A robot must protect its own existence, as long as doing so does not conflict with the first two laws.
Pasquale says we must push much further, arguing that the old laws should be expanded to include four new ones:
- Digital technologies ought to “complement professionals, not replace them.”
- A.I. and robotic systems “should not counterfeit humanity.”
- A.I. should be prevented from intensifying “zero-sum arms races.”
- Robotic and A.I. systems need to be forced to “indicate the identity of their creator(s), controller(s), and owner(s).”
The new laws stem from Pasquale’s insistence that the rise of robotics and A.I. — and the role these technologies play in society — should not be left for Silicon Valley alone to dictate. If these laws are followed, Pasquale says, it could be possible to use these technologies to enrich all kinds of jobs, rather than take them over, particularly in the health care and education sectors. For that to happen, however, he contends that policymakers and regulators must help shape and democratize those outcomes. Hampering “automation that controls, stigmatizes, and cheats innocent people is a vital role for 21st-century regulators,” he writes. “We do not simply need more A.I., we need better A.I.”
To dive deeper into these new proposed laws, OneZero reached out to Pasquale to discuss why he chose to update Asimov’s dictums, the reasoning behind his new ones, and more.
The chat below has been edited and condensed for clarity.
OneZero: Why did you deem it important to update Asimov’s three laws? Moreover, why does the book focus on democratizing decisions about how robotics and A.I. are used?
Frank Pasquale: Asimov’s laws were about robots not mistreating people. And I think that’s super important, certainly, but we need more ideas given our present moment and the future we face. Much more. We need to ensure lasting and democratized human control of the development of technology.
I think the big dream of a lot of folks in A.I. is to just let it take on the job of a doctor, nurse, journalist, teacher, and so on. And my view is that’s really not the goal we should be going for, right? The larger question is: should we even be doing this in the first place? I think no. For me, the proper role of a lot of these technologies is to complement and support professionals, not replace them.
Is it plausible to think the conversation around A.I. could be reframed like that and away from the thinking that “robots are going to take our jobs”?
I think it is. I honestly do. Take the field of medicine — 10 years ago, you had people saying, “Watch out, doctors, the robots are going to be coming in. They’ve got great pattern recognition, great data, and they will be replacing you.” And they particularly said, “If you’re a doctor in the fields of radiology, anesthesiology, dermatology, and pathology, all those things are just pattern recognition, and a robot or A.I. with access to millions of images of normal and diseased tissue is going to do a lot better than you can.”
One question becomes, “Where do they get that data?” And that data comes from people, and maybe eventually there will be a machine that can do it on its own, but even if there were, it’s not replacing any of those positions. It’s actually supplementing them in many ways, and it should stay that way.
Think of a dermatologist. Even if you have an app that can recognize your moles, you still are going to want to talk with a real person about, say, “How likely is it that this is a false positive or a false negative? Do we cut out this mole, or is it fine?” This is a critical decision for humans to make.
What’s the reasoning behind each of these proposed new laws of robotics?
This vision is very broad. If I had my way, the comparable breadth of undertaking would be something like the New Deal, where essentially you had an array of agencies established over decades — ones created by statutes that ranged from five to 100 or so pages long. And then agencies like the Securities and Exchange Commission, Federal Communications Commission, and others, over time, elaborated on what the statutes meant. And something similar, I would hope, would happen with the new laws of robotics — that there would be independent agencies empowered to apply these ideas.
Can you give a couple concrete examples of how some of your laws would work in practice?
Sure. Take the proposed first new law about professions and A.I. complementing each other: A robot should not be certified as a teacher. You could permit robots or A.I. to be used as technology that teachers might prescribe, the same way doctors prescribe drugs, but you would not aspire to have some combination of online lectures, monitored and curated solely by A.I., actually replace teachers. So, that’s the complementarity law at work.
The second new law, “no counterfeiting of humanity,” would require, for example, any bot that’s online to be clearly labeled as such. And to bring in the fourth new law, with respect to attribution, that bot would need to be attributed to a particular person. And actually, the bot example is a good one even for the third new law of robotics, which would say that to the extent these bots start proliferating because people are trying to grab attention online, we could take some legislative or administrative action to stop that arms race for attention among the bots.
Your third new law focuses on preventing robots from intensifying arms races. This one isn’t limited to military issues, right?
That’s correct. I think there are many areas in life where we are subject to control by technology and where large technology firms use A.I. to make profits by setting people on a rat race, scrambling against each other. I’ve written for a long time about ratings and rankings, and I think credit scoring was an early example of this. And with algorithmic lending, it’s only becoming more intense, where you’re going to see more and more lenders using A.I. to say, “Well, we could give you a 1% discount on your mortgage, but we’ll just need to see your entire cellphone history.” Or, “We’ll just need to see more data about you.” Or, “We’ll give you a discount on your health insurance, but you’ve just gotta have this wearable that’s going to use A.I. to compute exactly how healthy you are.”
This constant hunger for more and more data is itself an arms race among technology firms. It sets up an arms race among individuals, where they’re trying to just give more and more data about themselves to distinguish themselves from others. Because remember, whenever you’re being given a discount because of your behavior or something like that, somebody else is probably being charged more, right? It’s not as if this is being done out of the goodness of the company’s heart.
Could you discuss further your final new law — that A.I. and robotic systems need to identify their creators? Why is that important?
That one is essentially to stop even the idea or aspiration of A.I. being autonomous from humans. What I mean is that there are all these science-fictional or futuristic aspirations to create robots that could be like your children, say. We train them when they’re young, and then they are sent off into the world. Imagine the robot in the film Ex Machina, Ava, just sort of walking out into the world uncontrolled, and so forth. These things should never happen.
Essentially, any robot that is let out into the world or any A.I. that is put online as a bot — all of them need to be traceable back to their owner or controller, who could, at any time, shut them off or in other ways alter their behavior. So, then, at that point, it’s not autonomous. We just get rid of the idea of A.I. autonomy. Cancel it out as something we’re never going to do.
The reason I think that’s so important is because I worry about us living in, for instance, a world where sidewalk robots or drones or other things can sort of approach us and we don’t know what they’re doing or why they’re doing it. I think that whenever you’re approached by a drone or a sidewalk robot, you should be able to point your cellphone at it and immediately get something beamed to you that says, “Here’s the owner, here’s someone you can complain to,” like a Digital Millennium Copyright Act notice on a website for copyright infringement. Here, in other words, is the relevant governing authority that you can complain to if the owner doesn’t respond to your concerns.
What responsibilities do the creators of A.I. and robotics have for what they end up being used for?
It’s time for roboticists and A.I. folks to take responsibility for the incredible malleability and flexibility of their software, which could be programmed otherwise. Take this example: I shouldn’t be upset by, or empathize with, a crying robot, because it could just as easily be programmed to laugh at whatever made it artificially cry. And we have to be hardheaded here, because there’s this burgeoning field of affective computing, which to me is, in so many ways, a prelude to the manipulation and control of human beings by large corporations and governments through the mediating effects of technology. And we have to always keep our eyes on those very grim possibilities.
In the book, you focus mostly on the impact of A.I. and automation on “professional” jobs. But what about blue-collar ones? What about people who are working, say, at a warehouse or as a cashier in a grocery store?
It’s true. And it’s so interesting when thinking about the cashier job, because when there are automated kiosks that check people out, and one person watching three or four kiosks at the front of the store, you could say, “Well, that’s put four people out of work.” You could also say, though, “Well, this has allowed one person to be more productive, because now they’re in charge of these machines that are getting people through, trying to fix them and make sure everything works efficiently.”
That’s going to be a very interesting question, but ultimately, I don’t think the policy should stand in the way of automation there, unless people who are in those positions can say, “Hey, there’s a reason why you need human judgment and humans in control of this process. And if you take us out of the loop, there’s going to be a big problem.”
In your previous book, The Black Box Society: The Secret Algorithms That Control Money and Information, you scrutinize how platforms like YouTube and Facebook thrive by using algorithms to hook you and keep you on their sites via suggestions for other content loosely based on what you’re viewing. And as we’ve all seen by now, that often winds up fostering and spreading misinformation and conspiracy theories. In your view, is more regulation needed to address this as well?
Absolutely. First, in the new book, I look at the problem of A.I. driving media and taking over editorial functions especially, and even sometimes content-creating functions, which is so disturbing. It’s worrisome because the systems are being optimized in a crude and reductive way that doesn’t reflect social values like truth. And, as you mentioned, the conspiracy theories are being pushed out so substantially; it’s so easy to be routed into them. And the other problem is that these platforms will say, “Well, we’re just giving people what they want.” But, in fact, how you develop the choice architecture is hugely consequential in shaping what people want.
We shouldn’t live in a world where an A.I. can say, “All right, well, there are 10 more viewers if we show them these crazy pedophilic cabal videos or whatever.” It’s just a crude way to tally metrics of success.
What do you think makes humans want to create robots and A.I. that simulate us?
The main, dominant arguments for making robots that are like people come out of a post-human or transhumanist perspective. So, a post-humanist would say, “Well, we’ve had a good run on Earth, but you know, ultimately our brains are too slow. Robots have faster processors. They’re going to understand more and do more in the world. Let them.”
And the transhumanist would say, “Yeah, if you want to get beyond human limitations, you’ll probably have to go beyond the human body.” And part of that is to go into robotics, merge with them, that sort of thing. And my view is that transhumanist and post-humanist perspectives are ultimately very anti-humanist.
How, specifically, are these positions anti-humanist?
In part, an essential element of being human is accepting and understanding our limitations. Our frailties. And the effort to transcend them and say, “Well, here’s an immortal entity; let’s treat it as being above and beyond the human,” is problematic. It involves rejecting the fact that we are mortal. That we feel pain. That we have a limited number of things we can spend our attention on. Hey, when I die, there’ll be a billion books I haven’t read. Someone might say, “Well, imagine how much better off you’d be if you were a robot that could process 1 million books in a short timespan.” My short response is I don’t think it would be in any way similar to what happens when I, as a person, read a book, or when any human does. There is a unique way we engage with things, and that’s what makes us singular.
As a whole, what motivated you most to confront emerging A.I. systems and to write this book?
I really wanted to explore what it would mean to put human labor at the center of the economy and society. As opposed to human labor being just one more resource — like a natural resource — that gets fed into a machine, where what people care about doing becomes just the output.
Specifically, this quest is really important because our economy right now is geared toward things like financial assets, stocks, and the owners of those things, and that’s how we measure success in so many ways: Is the stock market going up or down? And my question is, “How could we think about the future of technology and work in a way that makes human labor the center of all this discussion?” And thinking about that hopefully gives an outlook on how the economy could be a lot fairer and more stable for us humans.