Why It’s So Hard for Amazon Alexa to Really Explain Itself
In an interview with OneZero, Amazon’s chief Alexa scientist explains the voice assistant’s complicated new feature
We’ve all heard stories about Alexa mysteriously answering a question you didn’t ask, or turning on the music randomly without anyone in the room saying anything.
Amazon insists that there’s always a perfectly reasonable explanation — and it’s soon giving Alexa the ability to state it.
Starting later this fall, you’ll be able to ask an Alexa device, “Why did you do that?” and it will offer you some sort of reasoning for why it took a certain action.
For now, the explanations will be sorted into basic categories, says Rohit Prasad, vice president and chief scientist for Alexa, who sat down for a one-on-one interview with OneZero following the company’s product event in Seattle on Sept. 25, 2019. If music starts playing seemingly at random, the explanation might be that someone started it from their phone or from another room, and Alexa would say something like, “Dave’s iPhone played the Black Keys.” You might also ask why Alexa didn’t do something, like turn off the lights after you asked.
But that’s just the start. The deep neural networks that Amazon uses to make Alexa’s decisions and model its voice are commonly described as black boxes — meaning that Amazon may know what data is considered when the algorithm makes a decision, but not how the decision itself is made. Amazon obviously isn’t the only company using this technology — an entire field of A.I. explainability exists to try to solve this fundamental problem, which plays out in our social media feeds, Google search results, and self-driving cars.
To understand why this is a problem, it’s helpful to know the basics of how a deep neural network works. Deep neural networks are roughly modeled after brains — not on a biological level, but on the idea that many tiny neurons can each process small pieces of information and together make a larger decision. With A.I., rather than chemical and biological processes, we have clusters of calculus. Each artificial neuron takes a piece of the data, whether that be audio or an image or text, applies some math to it, and passes it on. There are thousands to millions of these artificial neurons in programs like Alexa. These networks are “deep” because not all neurons have the same instructions; one level of neurons might be trying to find the shapes in an image, while another level could be looking for texture or color. The more levels, the deeper the network.
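The structure described above can be sketched in a few lines of code. This is a toy illustration, not anything resembling Alexa’s actual models: the weights below are made-up numbers, and a real network learns millions of them from data. It shows the core idea, though — each artificial neuron applies a bit of math to its inputs and passes the result to the next layer.

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs plus a bias,
    squashed through a simple nonlinearity (here, ReLU)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU: negative sums become zero

def layer(inputs, weight_rows, biases):
    """A layer is just many neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# "Deep" means layers feed into layers: the first might pick out
# low-level patterns, the next combines them into a decision.
x = [0.5, -1.2, 0.3]  # e.g. a tiny slice of an audio signal
h = layer(x, [[0.2, -0.5, 1.0], [0.7, 0.1, -0.3]], [0.1, 0.0])
y = layer(h, [[1.5, -2.0]], [0.05])  # final decision score
```

Tracing why `y` came out the way it did already means unwinding every weighted sum above it — and this toy network has nine weights, not millions.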
If this sounds like a complicated system that’s difficult to keep track of, you’ve hit on the exact reason why algorithmic explainability is a difficult field. To magnify this difficulty, the way that an algorithm learns is by changing these calculus formulas autonomously based on the information it’s seen. Every time an algorithm is trained on new information, millions of these neurons change in response, creating an entirely new problem for researchers to solve.
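The “formulas changing autonomously” part can also be sketched. The snippet below is a minimal, assumed example of the standard gradient-descent update on a single neuron — far simpler than what Amazon actually runs — but it shows why training compounds the interpretability problem: every example the model sees nudges the weights, so the math a researcher would need to explain is itself a moving target.

```python
def train_step(weights, bias, inputs, target, lr=0.1):
    """One learning step for a single linear neuron (squared-error loss)."""
    # Forward pass: the neuron's current guess.
    pred = sum(x * w for x, w in zip(inputs, weights)) + bias
    error = pred - target
    # Backward pass: shift each weight in proportion to how much
    # its input contributed to the error.
    new_weights = [w - lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * error
    return new_weights, new_bias

# Start from zero and repeatedly show the neuron one example.
w, b = [0.0, 0.0], 0.0
for _ in range(50):
    w, b = train_step(w, b, inputs=[1.0, 2.0], target=3.0)
# The weights have drifted to values that make the output near 3.0 --
# values no one chose by hand.
```

Scale this up to millions of weights and billions of examples, and no individual weight carries a human-readable meaning anymore.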
That’s the problem Amazon faces now: It’s easy to say Alexa played music because you asked it to, but it’s hard to explain how it understands the command at a mathematical level.
“[Black box explanations are], I would say, the next stage of the evolution of these explanations,” Prasad said. “So this is still very much in the early stages.”
But this technology could prove to be a powerful tool for getting customers to trust Alexa. Amazon is clearly worried about how Alexa’s privacy features are perceived. The company started its product launch by addressing various concerns, like Alexa hearing things you don’t intend it to, before pivoting to the explainability feature along with a slew of new Echo products, including Alexa-connected glasses and wireless earbuds.
This explainability feature — no matter how crude at launch — is an opportunity for users to actually understand what the always-on device in their home is doing. Coupled with other privacy features, like the ability to have your Alexa data deleted automatically on a rolling basis or by asking Alexa to delete the day’s commands, Amazon is giving customers even more control over their personal data after it’s collected.
If you’re already freaked out by having an always-scanning microphone in your house, these features probably aren’t going to change your mind. Any privacy controls that Amazon puts into the device won’t change the basic facts of owning an A.I. listening device in 2019: Alexa is going to get things wrong sometimes, and Amazon needs data to make it better.
But if you get a little weirded out by your Alexa device sometimes randomly turning on “Uptown Funk” or giving you the population of Brazil, the move toward transparency, however small, might let you rest a little easier.
More from our interview with Rohit Prasad: Here’s How Amazon Alexa Will Recognize When You’re Frustrated