Delivery Robots Can Save Jobs and Lives
Why self-driving vehicles and teleoperation are poised to thrive in a post-Covid-19 world
In the summer of 2018, I was walking near the campus of UC Berkeley when I turned a corner and came face-to-face with an irate robot.
I didn’t know it was a robot at first. It looked like a cheerful microwave oven with big, knobbly tires, sitting smack in the middle of a busy urban sidewalk. It had a tiny blue flag on the end of a whippy little pole, positioned right at eye level — presumably to get the attention of distracted pedestrians like me.
I had clearly disrupted its mission by stepping in front of it, and it wasn’t happy. The friendly digital face on its front morphed from a cheerful grin into a frustrated scowl as I stood there gaping and blocking its path. With some annoyed beeps, it turned its wheels, zipped around me in a surprisingly assertive way, and continued on with its day.
I would later learn that the high-strung little blue and yellow box I had encountered was a delivery robot from a startup called Kiwi. At UC Berkeley and other college campuses, Kiwi has been testing its robots as an alternative to human food delivery drivers from companies like Uber Eats and DoorDash.
When a student places a food order, the robots drive to selected restaurants, receive the order (which is stored in a locking compartment on the robot’s body), drive it to the student’s dorm or apartment, confirm their arrival via an app on the student’s phone, and unlock their compartment to complete the delivery.
The robot I encountered was likely trying to bring some hungry student their burrito bowl, and I had momentarily interfered with this crucial mission — thus the angry beeps and digital eye roll. By the time of my encounter in 2018, the company had already delivered 10,000 orders using this process.
Kiwi has created cool new tech and a clever business model for food delivery, one that is becoming more crucial as Covid-19 reshapes the gig economy. And the company has managed to imbue its robots with the intelligence to navigate city streets and deal with irritating meatspace obstacles like myself.
Kiwi’s robots appear to drive themselves and to operate autonomously. In reality, that’s partially an elaborate illusion.
The robots come with cameras and computer vision driven by machine learning. They can detect and classify what they see, and tell the difference between a car, a person, or a wall.
But they’re not fully autonomous. The robots are monitored and directed by remote workers in Colombia, who check in and provide guidance every 10 seconds. The workers are paid around $2 per hour, a standard wage for the region.
This allows the robots to navigate safely, and avoid colliding with innocent passersby. It also means the robots can use cheap hardware like cameras, instead of the pricey lidar units in real self-driving cars. And it means radically expanding the capacity of the bots’ human operators — since they’re just checking in periodically, each human worker can operate more than 10 bots at once.
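That supervision pattern is easy to picture in code. The sketch below is purely illustrative — `Bot`, `supervise`, and `give_guidance` are hypothetical names, not Kiwi’s actual software — but it shows why one operator checking in periodically can cover a fleet of 10 or more robots: most bots need no attention on any given pass.

```python
from dataclasses import dataclass

@dataclass
class Bot:
    """A hypothetical delivery bot as seen by a remote operator."""
    bot_id: str
    needs_guidance: bool = False  # set when the bot is unsure how to proceed

def supervise(bots, give_guidance):
    """One check-in cycle (every ~10 seconds) by a single remote operator.

    Only bots that have flagged themselves as uncertain get human attention;
    the rest keep driving autonomously. Returns how many bots needed help.
    """
    flagged = [b for b in bots if b.needs_guidance]
    for bot in flagged:
        give_guidance(bot)          # e.g. confirm a crossing, nudge the path
        bot.needs_guidance = False  # bot resumes autonomous driving
    return len(flagged)

# One operator, a dozen bots, only two of which are stuck this cycle:
fleet = [Bot(f"bot-{i}") for i in range(12)]
fleet[3].needs_guidance = True
fleet[7].needs_guidance = True
handled = supervise(fleet, lambda b: None)  # guidance stubbed out here
```

Because each check-in touches only the flagged bots, the operator’s workload scales with how often bots get confused, not with fleet size — which is the economic core of the teleoperation model.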
This new human/machine hybrid model is called teleoperation. And it’s exploding in popularity across multiple industries, as a cheaper, safer alternative to self-driving.
As I write this, Tortoise and scooter startup Go X are preparing to launch the first self-driving micro-mobility scooter business in Peachtree Corners, Georgia. (The impact of the coronavirus on these plans is unclear.)
If self-driving scooters sound even more terrifying than the normal kind, take a deep breath. As with Kiwi’s bots, the scooters are not truly controlling themselves. They’re also teleoperated by workers in a remote location. And as with the Birds and Limes of the world, human drivers will still be in charge once the scooters are rented — for better or worse.
Where does the teleoperation aspect come in? When a rider wants to use a Go X self-driving scooter, they’ll press a button in an app, and the scooter will drive to their location. Once the ride is done, the scooter will drive itself back to a safe parking spot.
Scooters abandoned on sidewalks are a major scourge on modern cities, and a big reason why scooter companies are often banned from city streets. Go X and Tortoise hope to avoid this by having their scooters return themselves to safe areas when they’re no longer needed.
These companies’ self-driving model also has the major advantage of reducing costs and avoiding many coronavirus-related risks. Most scooter companies have to pay an army of independent contractors to return and charge their scooters each day. The risk to these workers is a big part of why many scooter companies have closed up shop during the time of the coronavirus.
Go X’s teleoperated scooters could presumably drive to a central location at the end of the day, for overnight charging and maintenance by a single team. It would mean fewer workers to pay, and fewer people out and about potentially transmitting the virus.
If their trial in Peachtree Corners (which appears to have the support of the local government, unlike a challenging conventional scooter rollout in San Francisco) moves ahead and succeeds, they will presumably roll the program out to more cities worldwide.
Teleoperation is a relatively new model in the transportation world. But the concept of a human/machine hybrid is already ubiquitous in other industries, like artificial intelligence.
That fancy new deep learning API you’ve been using? It may be performing its magic using neural networks or some other machine learning tech. Or it may be using humans behind the scenes and essentially faking its output (or at least relying on humans to do a lot of QA and data cleansing).
At first, A.I. firms tried to hide the human involvement underlying many of their products. Then they realized that customers really don’t care how an operation is performed. An API could use advanced artificial intelligence, human operators, or magical gnomes — it really doesn’t matter to most customers, so long as the output is useful and the price is right.
Today, several A.I. companies have leaned into the idea of combining humans with deep learning systems using a model called hybrid intelligence. Cloudsight AI, for example, uses a hybrid human/A.I. platform to write sentence-length descriptions for images.
When the company launches a new engagement, humans initially provide up to 97% of the work. The A.I. learns from the humans’ output, though, and slowly takes over more of the work. By the time a particular hybrid system is mature, the A.I. is performing over 90% of the work, and the humans are basically just there for quality assurance. They’re like Kiwi’s remote operators — checking in to make sure the machine doesn’t run over a pedestrian, but otherwise leaving second-by-second choices up to the A.I.
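One common way to implement this kind of hybrid handoff — a generic sketch, not Cloudsight’s actual system — is confidence-based routing: each task ships directly if the model is confident enough, and falls back to a human otherwise. As the model improves, more tasks clear the bar and the human share shrinks from nearly all of the work toward a small QA remainder.

```python
def route_task(model_confidence, threshold):
    """Route one task: ship the AI's output, or escalate to a human."""
    return "ai" if model_confidence >= threshold else "human"

def human_share(confidences, threshold):
    """Fraction of tasks that end up with human workers at a given threshold."""
    routed = [route_task(c, threshold) for c in confidences]
    return routed.count("human") / len(routed)

# Early in an engagement, the model is weak and most tasks go to humans;
# later, higher confidences push the human share down to a QA-sized sliver.
early_confidences = [0.2, 0.35, 0.1, 0.4, 0.92]   # illustrative values
mature_confidences = [0.97, 0.99, 0.4, 0.95, 0.98]
early_share = human_share(early_confidences, threshold=0.9)    # 4 of 5 tasks
mature_share = human_share(mature_confidences, threshold=0.9)  # 1 of 5 tasks
```

The threshold itself is a business lever: raise it and quality goes up but human costs rise; lower it and the system leans harder on the model.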
If teleoperation works for scooters and delivery robots — and human/machine hybrids have proven themselves in other industries — why not use the same model to accelerate the arrival of true self-driving cars?
These lidar- and sensor-studded vehicles, currently undergoing testing by companies like Waymo and Cruise, are just the latest iteration of a technology that has long seemed “almost there.” The first self-driving car was developed in the 1920s, and GM seriously pursued the concept in the late 1950s.
Today’s self-driving vehicles are orders of magnitude more complex than those early prototypes. Yet they’ve still failed to deliver on predictions about a driverless future. And many experts agree that they’re still a ways off from commercial success.
Why? With most tech products, you can get your technology part of the way there and still release something usable. You can then iterate over time, improving the product while it’s out in the field. These early iterations are called minimum viable products (MVPs), and they’ve been all the rage in tech for nearly a decade. With a self-driving car, you can’t release an MVP. The technology has to be nearly perfect to be accepted by consumers and regulators.
With a tech product like a smart speaker, edge cases don’t matter very much. If you ask Alexa to add “Halo oranges” to your shopping list and she adds “Haitian songs” instead (this actually happened to me), it’s an amusing goof. If your Tesla thinks a wall is a nice stretch of open road, it’s a much bigger problem.
Self-driving vehicles are also held to far higher standards than human drivers. About half of Americans feel that self-driving cars are more dangerous than regular cars, and two-thirds of Americans feel they should be regulated more aggressively. Despite the fact that they’re actually safer than humans — who crash into things all the time — this public mistrust makes self-driving cars nearly impossible to release commercially.
Pair self-driving tech with teleoperation, though, and things might change. Let’s assume that true self-driving technology is about 90% of the way to its full potential. The vast majority of the time, it can drive safely with no issues. But sometimes it’s uncertain what to do. And very rarely, it makes major, fatal mistakes.
What if in those times of uncertainty, a remote human was able to step in and briefly take control over a self-driving vehicle — or at least give it some rapid input as to what it’s seeing?
Is that a plastic bag billowing across the highway, or am I about to hit a tree? The lane markers on this road are worn down and degraded — can you help me find them? Remote humans could quickly answer these questions.
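This fallback logic can be sketched in a few lines. Everything here is an assumption for illustration — the `0.95` confidence floor and the function names are invented, not drawn from any real autonomous-driving stack — but it captures the division of labor: the car acts on its own plan when perception is confident, and defers to a remote human when it isn’t.

```python
CONFIDENCE_FLOOR = 0.95  # assumed threshold; real systems would tune this

def plan_action(perception_confidence, ai_plan, ask_remote_operator):
    """Drive autonomously when confident; escalate to a human when not.

    ask_remote_operator stands in for the teleoperation link: a human
    briefly reviews the scene (plastic bag or tree? where's the lane?)
    and either confirms the car's plan or overrides it.
    """
    if perception_confidence >= CONFIDENCE_FLOOR:
        return ai_plan                     # the common case: no human needed
    return ask_remote_operator(ai_plan)    # the rare, uncertain case

# Confident: the car proceeds on its own.
action = plan_action(0.99, "continue", lambda plan: "brake")   # "continue"
# Uncertain: the remote operator's judgment wins.
action = plan_action(0.60, "continue", lambda plan: "brake")   # "brake"
```

The appeal of this architecture is that the human is only in the loop for the hard cases, so response latency and staffing costs stay manageable while the worst failure modes get a human backstop.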
Would drivers feel more comfortable stepping into a self-driving car if they knew its actions were being supervised by a real human (as with Kiwi’s setup), even if that human was remote? Several companies are betting that the answer is yes.
Designated Driver, a startup, is developing technology to allow remote operators to drive customers’ vehicles. At the moment, their tech is used primarily for applications away from public roads (such as equipment at a mine or farm), but they ultimately aim to allow remote operators to pilot cars through city traffic. Phantom Auto is another company developing similar tech.
Teleoperation of vehicles on public roads, of course, comes with its own challenges. There are obvious security concerns, for example. It’s one thing to imagine that someone could break into Tesla’s Autopilot system and make modifications that cause a vehicle to go haywire. But when cars are designed explicitly to allow remote control, it’s easy to see how they would become an almost irresistibly tempting target for hackers.
These risks, though, are surmountable. We already allow remote control of mission-critical transportation systems, like trains. The risk of hacking can be managed. The Muni rail system in San Francisco was briefly hacked in 2016, but the agency managed to continue safely operating trains. Within a few days, they had restored normal operations.
If self-driving cars could reach an acceptable level of reliability through the integration of teleoperation, it would be a major boon to road safety. It would also be a significant lifeline for seniors and the disabled, who may not be able to safely operate a traditional car.
Perhaps most importantly in a coronavirus-ravaged world, deploying self-driving and teleoperated vehicles could be an ideal way to shore up transportation systems for essential workers. It would also allow quarantined gig workers to continue delivering essential supplies by piloting remote vehicles from their homes, rather than remaining out in the world and risking infection.
Food delivery services like DoorDash and ride-hailing services like Uber have proven themselves an essential cog in the machinery of modern cities during recent outbreaks of the disease. But continuing to operate during a pandemic puts their delivery drivers (and possibly the public) at risk.
Self-driving vehicles (which were used to deliver food even before the pandemic) and delivery robots could still complete essential deliveries, while keeping human drivers out of harm’s way. Indeed, Kiwi has pivoted to delivering medical supplies since the Covid-19 pandemic began. Nuro, another startup, had already received approval to test self-driving delivery vehicles on Texas roads before the pandemic began in earnest.
In the short term, many self-driving and teleoperation companies have suspended testing during the outbreak. But as lockdowns wear on, more transit systems close, the need for delivery becomes increasingly acute, and a larger number of gig economy workers are confined to their homes, self-driving vehicles with teleoperation are likely to emerge as an increasingly valuable and resilient technology.
In many ways, there’s never been a better time to deploy the technology. Roads are nearly deserted, reducing the risk of accidents. Disaster declarations have led to the loosening of bureaucratic red tape in many other industries. With high unemployment, there is a glut of skilled workers who could be trained to supervise delivery robots, operate self-driving delivery cars, or remotely return micro-mobility scooters.
So far, self-driving tech has been applied behind the scenes by the Go Xs and Kiwis of the world. It has made possible smart forklifts and other warehouse tech. It has been used for drones and other UAVs, and enthusiastically applied by the military.
With demand spurred by Covid-19 and the addition of teleoperation systems, self-driving vehicles may finally go from an “almost there” technology to an essential part of our transportation system.
If they do, they have the potential to take millions of our most at-risk workers out of harm’s way, while maintaining the flow of food, groceries, medical supplies, and other essential goods we so desperately need to weather this pandemic at home.