Who Killed Elaine Herzberg?
Who is ultimately to blame in the first self-driving car fatality — the technology, the victim, the safety driver, Uber, or the American city itself?
The National Transportation Safety Board (NTSB) has just released its final report on a crash in which a woman was killed after being hit by a self-driving car operated by Uber. The report excoriates Uber for its poor safety culture and takes issue with the threadbare rules that govern the testing of self-driving cars on public roads. But it also notes that methamphetamine was found in the victim’s system, which may have impaired her ability to react to an approaching vehicle.
In an excerpt from his new book, Who’s Driving Innovation?, Jack Stilgoe, a professor in the science and technology studies department at University College London, argues for the need to learn from this tragedy as we develop a more intelligent approach to new technologies.
Elaine Herzberg did not know that she was part of an experiment. She was walking her bicycle across the road at 10 p.m. on a dark desert night in Tempe, Arizona. Having crossed three lanes of a four-lane highway, Herzberg was run down by a Volvo SUV traveling at 38 miles per hour. She was pronounced dead at 10:30 p.m.
The next day, Tempe’s police chief rushed to blame the pedestrian. Sylvia Moir told a local newspaper, “It’s very clear it would have been difficult to avoid this collision… she came from the shadows right into the roadway… the driver said it was like a flash.” According to the rules of the road, Herzberg should not have been there. Had she been at the crosswalk just down the road, things would probably have turned out differently.
Rafaela Vasquez was behind the wheel, but she wasn’t driving. The car, operated by Uber, was in autonomous mode. Vasquez’s job was to monitor the computer that was doing the driving and to take over if anything went wrong. A few days after the crash, the police released video from a camera mounted on the rear-view mirror. It showed Vasquez looking down at her knees in the seconds before the crash, and for almost a third of the 21-minute journey that led up to it. Data taken from her phone suggested that she had been watching an episode of The Voice rather than the road. The police investigation calculated that, had Vasquez been looking at the road, she could have stopped more than 40 feet before impact.
Drivers and pedestrians make mistakes all the time. More than 90% of crashes are blamed on human error. The police report concluded that the crash had been caused by human frailties on both sides: Herzberg should not have been in the road; Vasquez should have seen the pedestrian, should have taken control of the car, and should have been paying attention to her job. In the crash investigation business, these are known as “proximate causes.” If we focus only on them, we fail to learn from the novelty of the situation. Herzberg was the first pedestrian to be killed by a self-driving car. Of course, the Uber crash was not just a case of human error; it was also a failure of technology.
Here was a car on a public road in which the driving had been delegated to a computer. A thing that had very recently seemed impossible had become, on the streets of Arizona, mundane — so mundane that the person who was supposed to be checking the system had, in effect, switched off. The car’s sensors — 360-degree radar, short- and long-range cameras, a lidar laser scanner on the roof, and a GPS system — were supposed to provide superhuman awareness of the surroundings. The car’s software was designed to interpret this information based on thousands of hours of similar experiences, identifying objects, predicting what they were going to do next and plotting a safe route. This was artificial intelligence in the wild: not playing chess or translating text, but steering two tons of metal.
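To make that loop concrete, here is a minimal sketch in Python of the perceive-predict-plan cycle such systems run many times a second. Every name and number in it is invented for illustration; Uber’s actual software was vastly more elaborate, but the basic shape, classify what the sensors see, predict where it is going, and adjust the plan, is the same.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in metres, relative to the car

@dataclass
class TrackedObject:
    label: str       # e.g. "vehicle", "bicycle", "pedestrian", "unknown"
    position: Point
    velocity: Point  # metres per second

def predict_path(obj: TrackedObject, horizon_s: float = 3.0,
                 step_s: float = 0.25) -> List[Point]:
    """Naive constant-velocity guess at where the object will be."""
    steps = int(horizon_s / step_s)
    return [(obj.position[0] + obj.velocity[0] * step_s * i,
             obj.position[1] + obj.velocity[1] * step_s * i)
            for i in range(1, steps + 1)]

def plan(ego_path: List[Point], objects: List[TrackedObject],
         clearance_m: float = 2.0) -> str:
    """Keep going unless a predicted object position crosses the planned path."""
    for obj in objects:
        for (ox, oy), (ex, ey) in zip(predict_path(obj), ego_path):
            if abs(ox - ex) < clearance_m and abs(oy - ey) < clearance_m:
                return "BRAKE"
    return "CONTINUE"

# A pedestrian 30 m ahead, walking a bicycle across the lane at 1.4 m/s,
# while the car holds roughly 38 mph (17 m/s) straight down the lane:
crossing = TrackedObject("pedestrian", position=(30.0, -2.5), velocity=(0.0, 1.4))
straight_ahead = [(17.0 * 0.25 * i, 0.0) for i in range(1, 13)]
print(plan(straight_ahead, [crossing]))  # BRAKE
```

The point of the sketch is the dependency it exposes: the prediction, and therefore the plan, hinges on the label the classifier assigns. Get the label wrong and everything downstream goes wrong with it.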
When high-profile transport disasters happen in the U.S., the NTSB is called in. The board is less interested in blame than in learning from mistakes to make things safer. Its investigations are part of the reason why air travel is so astonishingly safe: in 2017, for the first time, a whole year passed in which not a single person died in a commercial passenger jet crash. If self-driving cars are going to be as safe as airplanes, regulators need to listen to the NTSB. The board’s report on the Uber crash concluded that the car’s sensors had detected an object in the road six seconds before the crash, but the software “did not include consideration for jaywalking pedestrians.” The A.I. could not work out that Herzberg was a person, and the car continued on its path. A second before impact, Vasquez took the wheel but swerved only slightly; she hit the brakes only after the collision.
Beyond the vehicular impact, Elaine Herzberg’s death was the result of a set of more distant choices about technology and how it should be developed. Uber chose to test their system quickly and cheaply, claiming it was in a race against other manufacturers. Other self-driving car companies put two or more qualified engineers in each of their test vehicles. Vasquez was alone, and she was no test pilot. The only qualification she needed before starting work was a driver’s license.
Uber’s strategy filtered all the way down into its cars’ software, which was much less intelligent than the company’s hype had implied. As the company’s engineers worked out how to make sense of the information coming from the car’s sensors, they balanced the risk of a false positive (detecting a thing that isn’t really there) against the risk of a false negative (failing to react to an object that turns out to be dangerous). After earlier tests of self-driving cars in which software overreacted to things like steam, plastic bags, and shadows on the roads, engineers retuned their systems.
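At bottom, that balance is a single tunable number: the confidence threshold above which a detection is treated as real. The hypothetical sketch below, whose objects and scores are invented rather than drawn from Uber’s code, shows how raising the threshold suppresses phantom obstacles at the direct cost of missing real hazards.

```python
# Hypothetical detections with confidence scores in [0, 1]; the threshold
# decides which of them the planner is allowed to react to.
detections = [
    {"object": "plastic bag", "confidence": 0.35, "real_hazard": False},
    {"object": "steam plume", "confidence": 0.40, "real_hazard": False},
    {"object": "shadow",      "confidence": 0.30, "real_hazard": False},
    {"object": "pedestrian",  "confidence": 0.55, "real_hazard": True},
]

def evaluate(threshold: float):
    """Count false positives (phantoms acted on) and false negatives (hazards missed)."""
    false_positives = sum(1 for d in detections
                          if d["confidence"] >= threshold and not d["real_hazard"])
    false_negatives = sum(1 for d in detections
                          if d["confidence"] < threshold and d["real_hazard"])
    return false_positives, false_negatives

for threshold in (0.25, 0.45, 0.60):
    fp, fn = evaluate(threshold)
    print(f"threshold {threshold:.2f}: {fp} phantom obstacle(s), {fn} missed hazard(s)")
```

Tune the threshold high enough that the car stops flinching at shadows and, sooner or later, something real falls below it.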
The misidentification of Herzberg was partly the result of a conscious choice about how safe the technology needed to be in order to be safe enough. One engineer at Uber later told a journalist that the company had “refused to take responsibility. They blamed it on the homeless lady [Herzberg], the Latina with a criminal record driving the car [Vasquez], even though we all knew Perception [Uber’s software] was broken.”
The companies that had built the hardware also blamed Uber. The president of Velodyne, manufacturers of the car’s main sensors, told Bloomberg, “Certainly, our lidar is capable of clearly imaging Elaine and her bicycle in this situation. However, our lidar doesn’t make the decision to put on the brakes or get out of her way.” Volvo made clear that they were not part of the testing. They provided the body of the car, not its brain. An automatic braking system that was built into the Volvo — using well-established technology — would almost certainly have saved Herzberg’s life, but this had been switched off by Uber engineers, who were testing their own technology and didn’t want interference from another system.
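The principle behind that kind of automatic emergency braking is decades old and deliberately crude: if the time to collision with whatever is ahead drops below a safety margin, brake, and do not wait to work out what the object is. A back-of-the-envelope sketch, in which the 1.5-second margin and the helper functions are illustrative assumptions rather than Volvo’s actual calibration:

```python
def time_to_collision(distance_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact, assuming constant closing speed."""
    if closing_speed_ms <= 0:
        return float("inf")  # not closing on anything
    return distance_m / closing_speed_ms

def should_brake(distance_m: float, closing_speed_ms: float,
                 margin_s: float = 1.5) -> bool:
    """Brake when time to collision falls below the safety margin,
    regardless of what the object has been classified as."""
    return time_to_collision(distance_m, closing_speed_ms) < margin_s

# 38 mph is roughly 17 m/s; an obstacle 25 m ahead leaves about 1.5 s to impact.
print(should_brake(distance_m=25.0, closing_speed_ms=17.0))  # True
```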
We don’t know what Herzberg was thinking when she set off into the road. Nor do we know exactly what the car was thinking: The decisions made by machine learning systems are often inscrutable. The evidence from the crash, however, points to a reckless approach to the development of new technology. The company shouldered some of the blame, agreeing to an out-of-court settlement with the victim’s family and changing their approach to safety. But to point the finger only at the company would be to ignore the context. Roads are dangerous places, particularly in the U.S. and particularly for pedestrians. A century of decisions by policymakers and carmakers has produced a system that gives power and freedom to drivers. Tempe, part of the sprawling metropolitan area of Phoenix, is car-friendly. The roads are wide and neat and the weather is good. It is ideally suited to testing a self-driving car. For a pedestrian, the place and its infrastructure can feel hostile. Official statistics bear this out. In 2017, Arizona was the most dangerous state for pedestrians in the U.S.
In addition to the climate and the tidiness of the roads, Uber had been attracted to Tempe by the governor of Arizona, Doug Ducey. The company had started its testing in San Francisco, near its headquarters. But when one of their self-driving cars ran a red light, California regulators told Uber that they needed a $150 permit. Uber objected and Ducey seized his opportunity. With the governor’s blessing, the company had already been testing in secret on the streets of Phoenix. Ducey could now go public and claim that he had tempted a big tech company away from Silicon Valley. He tweeted “This is what over-regulation looks like #ditchcalifornia” and “Here in AZ we WELCOME this kind of technology & innovation! #ditchcalifornia #AZmeansBIZ.”
With almost no oversight, Uber moved their experiments to Arizona in 2016. When Herzberg was killed less than 18 months later, Ducey’s enthusiasm collapsed, and Uber was thrown out of its new laboratory. Members of Herzberg’s family thought that the design of the city’s streets and the governor’s embrace of Uber were causes of her death and sued the state.
Yet, two months after the crash, the governor of Ohio announced plans to make his state “the wild, wild West” for unregulated self-driving car testing.
This regulatory race to the bottom is unedifying and depressingly familiar. Policymakers are often seduced by the promise of new technologies, which arrive without instructions for how they should be governed. Despite claims that this technology is inevitable, it is unclear where we are going. It is not obvious what a future full of self-driving cars would look like but, as it stands, many policymakers in the U.S. and elsewhere are complicit, through passivity, in the development of a technology that looks set to widen existing injustices.
When technologies fail, it is often hard to find the person responsible and easy for those involved to blame others or claim it was a freak occurrence. It’s a symptom of a wider problem, which is that we aren’t clear who is in control of the development of new technologies. When technological dreams meet the real world, the results are often disappointing and messy. It is all too common for regulation to be an afterthought. In the world of aviation, it’s called a “tombstone mentality”: defects are noticed, lessons are learned, and rules are written in grim hindsight. In Arizona, policymakers allowed a private experiment to take place in public, with citizens as unwitting participants. It ended badly for everyone involved. Tragedies are opportunities for learning, opportunities to challenge claims made about technology, and opportunities to think about alternatives. We should ask if the technology is safe enough, but this means also asking, “Safe enough for what? Why are self-driving cars being developed? Where are they taking us?”
It is vital to investigate technologies at an early stage before they become just another fact of life. If we agree that technology is too important to be left to technology companies, we are left with the challenge of how to democratize innovation. New technologies suggest the need to update the question posed by the political scientist Robert Dahl: “Who governs?”
If we are to hang onto democracy in the 21st century, we should be asking “Who’s driving?”