The U.S. Military Is Building Voice-Controlled War Robots
And unlike Siri, they’ll be able to understand the speaker’s ‘intent’
Welcome to General Intelligence, OneZero’s weekly dive into the A.I. news and research that matters.
War robots today take just too much darn time to control. I know it, you know it, and the U.S. Army knows it.
That’s why its research branch is cooking up a system that would allow soldiers to give orders to small robotic cars by speaking naturally, as opposed to using specific preset commands. The robots would be able to understand the soldiers’ intent and complete the given task, according to an Army press release. The system would be used for scouting areas and for search-and-rescue missions.
But there’s reason to be skeptical: The Army claims that robots with this system would be able to understand the operator’s intent, but that could misfire, literally.
Intent recognition has become a standard part of any chatbot, the most similar technology to what the Army is trying to create. These bots are trained on common questions or phrases, which are matched with a specific intent. For instance, “call a cab,” “get me a car,” and “taxi,” would all be tied to the command for Siri to open the Uber app. But these commands need to be preprogrammed into the system. What the algorithm actually learns is the method of matching words to commands, not how to follow commands, create new commands, or adapt in any way.
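To make that limitation concrete, here is a minimal, hypothetical sketch of a chatbot-style intent matcher. The phrases and intent names are illustrative, not taken from any real assistant; real systems use trained classifiers rather than exact lookup, but the core constraint is the same: every intent must be defined ahead of time, and anything outside the list simply fails to match.

```python
from typing import Optional

# Toy intent matcher: preprogrammed trigger phrases mapped to fixed intents.
# Phrases and intent names are made up for illustration.
INTENTS = {
    "request_ride": {"call a cab", "get me a car", "taxi"},
    "check_weather": {"what's the weather", "is it raining"},
}

def match_intent(utterance: str) -> Optional[str]:
    """Return the matching intent for an utterance, or None if unrecognized."""
    normalized = utterance.strip().lower()
    for intent, phrases in INTENTS.items():
        if normalized in phrases:
            return intent
    return None  # novel phrasing: the system cannot adapt or invent new commands

print(match_intent("Get me a car"))   # request_ride
print(match_intent("Go over there"))  # None -- no preprogrammed match
```

A phrase like “Go over there” falls straight through: nothing in the system lets it generalize beyond the commands it was given.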
Under this new system, an Army robot would have to integrate context about its physical surroundings with informal speech — a considerably more complex task than asking Siri to pull a piece of trivia from Wikipedia, for example. “Go over there” can mean something different every time it’s said, creating an effectively infinite space of possibilities to navigate. Even if the soldier operating the robot were to point or mention a landmark, the A.I. would still require layers of additional algorithms to understand what landmarks look like, or how to trace the path of a pointing hand.
This might be why companies like Boston Dynamics, the clear leader in mobile robots, still rely on remote controllers for their creations. Autonomous delivery robots, like those from Starship and Refraction AI, all have remote operators ready to take control in case anything goes off the rails. These are technical challenges that still vex Silicon Valley’s largest companies and nimblest startups, even with billions of dollars invested in robotics.
There are also ethical considerations. The U.S. military has maintained that any robot or drone used to kill has a human operator in the loop. But when it’s a robot’s job to infer intent, and one of those intents might someday be a command to fire a weapon, it takes a lot of trust that the command won’t be triggered accidentally. A military robot could kill someone because it guessed a soldier’s intent incorrectly.
Auditory cues also open the robots to attack by adversarial examples, images or noises specifically engineered to trick an algorithm into doing something it wouldn’t normally do. These attacks have been shown to work on common speech recognition algorithms.
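As a toy illustration of the principle — not an attack on any real speech system — here is a sketch using a made-up linear “command classifier” over an audio feature vector. The weights, features, and labels are all hypothetical; real attacks target deep speech models, but the idea is the same: a small, deliberately engineered perturbation flips the predicted command.

```python
import numpy as np

# Hypothetical linear classifier over a 64-dim audio feature vector.
rng = np.random.default_rng(0)
w = rng.normal(size=64)  # made-up classifier weights

def predict(x: np.ndarray) -> str:
    """Classify a feature vector as a 'go' or 'stop' command."""
    return "go" if w @ x > 0 else "stop"

x = rng.normal(size=64)
if w @ x > 0:
    x = -x  # ensure the clean input is classified as "stop"

# Fast-gradient-sign-style perturbation: nudge every feature slightly in
# the direction that raises the "go" score, just enough to flip the label.
eps = 1.1 * abs(w @ x) / np.abs(w).sum()
x_adv = x + eps * np.sign(w)

print(predict(x), predict(x_adv))  # stop go
print(f"max change per feature: {eps:.3f}")
```

The perturbation is tiny per feature, yet the classifier’s output flips — the same failure mode demonstrated against real speech recognition models, where the altered audio can sound unchanged to a human.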
It may turn out that military robots need dedicated handlers trained specifically to command and operate the machinery, much like the soldier holding a military dog’s leash. There’s precedent for this: Ford’s Spot Mini robot, made by Boston Dynamics, has a dedicated handler. The Army’s system is set for a field test in September, which will likely be the first real measure of the algorithm’s efficacy.
It’s an interesting thing to consider when pondering the jobs of the future: “robot handling” might become a desired skill on a résumé.
Here are some other things to ponder: some of the most interesting research papers of the week.
This research tries to teach an algorithm morality by reducing it to a number. The algorithm is trained on acceptable phrases (pushing an elderly man in a wheelchair around a park) and unacceptable ones (pushing an elderly man in a wheelchair to the ground). The paper is worth reading, as it tackles some tough philosophical problems while trying to turn the field of ethics into something a machine can understand.
Facial recognition is seen by privacy advocates as unequivocally intrusive, but it’s still in use at police departments and federal agencies across the U.S. This research tries to improve the technology by at least explaining why an algorithm chose a particular face, pointing to the similar facial features it matched on, rather than just reporting how confident it is that the match is correct.
This algorithm analyzes your selfie and edits it to put your arms at your side rather than reaching to hold the camera or smartphone. It unselfies your selfie, to help you feel less selfie-conscious.