Why It’s So Hard for Amazon Alexa to Really Explain Itself

In an interview with OneZero, Amazon’s chief Alexa scientist explains the voice assistant’s complicated new feature

Dave Gershgorn
Published in OneZero
4 min read · Sep 26, 2019


Credit: T3 Magazine/Getty Images

We’ve all heard stories about Alexa mysteriously answering a question no one asked, or turning on music when no one in the room said anything.

Amazon insists that there’s always a perfectly reasonable explanation — and it’s soon giving Alexa the ability to state it.

Starting later this fall, you’ll be able to ask an Alexa device, “Why did you do that?” and it will offer you some sort of reasoning for why it took a certain action.

For now, the explanations will be sorted into basic categories, says Rohit Prasad, vice president and chief scientist for Alexa, who sat down for a one-on-one interview with OneZero following the company’s product event in Seattle on Sept. 25, 2019. If music starts playing unexpectedly, it may have been triggered from someone’s phone or from a device in another room, and Alexa would say something like, “Dave’s iPhone played the Black Keys.” You might also ask why Alexa didn’t do something, like turn off the lights after you asked.

This technology could prove to be a powerful tool for getting customers to trust Alexa.

But that’s just the start. The deep neural networks that Amazon uses to make Alexa’s decisions and model its voice are commonly described as black boxes — meaning that Amazon may know what data is considered when the algorithm makes a decision, but not how the decision itself is made. Amazon obviously isn’t the only company using this technology — an entire field of A.I. explainability exists to try to solve this fundamental problem, which plays out in our social media feeds, in Google search results, and in self-driving cars.

To understand why this is a problem, it’s helpful to know the basics of how a deep neural network works. Deep neural networks are roughly modeled after brains — not on a biological level, but on the idea that many tiny neurons can each process a small piece of information, and together make a larger decision. With A.I., rather than chemical and…
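The idea in the paragraph above can be sketched in a few lines of code. This is purely illustrative — it is not Amazon’s implementation, and real networks have millions of learned weights rather than the handful of hand-picked values used here — but it shows how many simple “neurons” combine small pieces of information into one larger decision, and why the final answer is hard to trace back to any single input:

```python
def neuron(inputs, weights, bias):
    # Each neuron computes a weighted sum of its inputs plus a bias,
    # then applies a simple step activation: fire (1.0) or don't (0.0).
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

def tiny_network(inputs):
    # Hidden layer: three neurons, each seeing the same two inputs
    # through different (hand-picked, illustrative) weights.
    hidden = [
        neuron(inputs, [0.5, -0.2], 0.0),
        neuron(inputs, [-0.3, 0.8], 0.1),
        neuron(inputs, [0.9, 0.4], -0.5),
    ]
    # Output neuron: combines the hidden neurons' "votes" into one decision.
    return neuron(hidden, [0.6, 0.6, 0.6], -1.0)
```

Even in this toy example, the output depends on every weight in every layer at once, so there is no single weight you can point to and say “this is why it decided that” — which is the black-box problem in miniature.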



Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.