As major tech events go, Google I/O lacks the glamour of an iPhone launch, the tension and drama of a Facebook keynote, or the cringe-inducing, over-the-top spectacle of a Samsung unveiling. The company’s announcements tend to be wonky, incremental, and heavily focused on artificial intelligence, especially its confusing inner workings.
Yet I find Google’s annual developer conference the most consistently intriguing of the four, because the company isn’t just releasing nifty gadgets: It’s pushing the boundaries of what can be automated, down to the most quotidian tasks in our everyday lives. In the process, Google is giving us glimpses of a future that often looks like sci-fi, one we’re not really prepared to grapple with, even as it tries to reassure us with privacy and security measures that often feel like attempts to contain the can of worms it just opened.
Here are the announcements that stood out during Google’s opening keynote, held at Mountain View’s Shoreline Amphitheatre on Tuesday, May 7. I’ve ranked them not by their traditional news value, but by how interesting I find them — that is, by their potential to shake up the existing relationships between humans and machines.
1. A souped-up Google Assistant
It may lack the name recognition of Siri or Alexa, partly because it doesn’t have a catchy name of its own. But Google’s Assistant is one of the world’s most widely used consumer A.I. products, powering more than 1 billion devices around the world via Android phones, tablets, smart speakers, and smart displays. In many ways, it was already the most advanced — and now Google says it has found a way to make it 10 times faster, by pulling much of the complex computing out of the cloud and onto each user’s device.
Practically speaking, that means you can operate your Android phone faster by voice than you could by touch. In an onstage demo, a Google rep fired off a string of voice commands that required Google Assistant to access multiple apps, execute specific actions, and understand not only what the rep was saying, but what she actually meant. “Hey Google, what’s the weather today? What about tomorrow? Show me John Legend on Twitter. Get a Lyft ride to my hotel. Turn the flashlight on. Turn it off. Take a selfie.” Assistant executed the whole sequence flawlessly in a span of about 15 seconds. Further demos showed off its ability to compose texts and emails that drew on information about the user’s travel plans, traffic conditions, and photos.
All of that, of course, relies on users continuing to grant Google’s software deep access into their lives, which is why the company will have a hard time ever “pivoting to privacy,” as Facebook plans to. But Google’s push to perform this machine learning locally on your device will help control the flow of personal data to the cloud, and new privacy features, such as the ability to regularly delete old data, should help. Even so, Google’s vision of the future is still one in which it learns more and more about you all the time. How that squares with the preferences of an increasingly privacy-conscious public remains to be seen.
The next-generation Google Assistant will come first to Pixel phones later this year.
2. A big, powerful, scary, do-everything smart display
Sticking with the theme of managing your personal life, Google’s new Nest Hub Max exemplifies the type of potent, versatile hardware the company can build to take advantage of all that data and A.I. It basically throws every Google smart home device together into one, combining a Nest security camera, a Google Home Hub smart display, and Google Home Max smart speakers in a single gadget that’s meant to sit in your living room and act as a command center for your household.
The combination of all those features, especially the camera, opens some new possibilities that could make the Nest Hub Max more capable than the sum of its parts. For instance, it is beginning to incorporate some gesture controls, like the ability to raise one hand to pause a song or video — something that will come in handy for anyone who has ever tried to repeatedly yell “Hey Google, stop!” above the din of a noisy room. Face recognition allows it to distinguish between members of your family and personalize greetings and information for each, as well as alert you if it sees a stranger in your home when no one is supposed to be there.
Creepy? It sure has that vibe, which is why Google included a green indicator light to tell you when the camera is on and a switch that cuts off power to both the camera and its mic. But for the millions who have already decided to allow smart devices from Google, Amazon, and other tech companies into their home, the Nest Hub Max could have a lot of appeal. This is the closest Google has come yet to its longtime dream of building a real-life Star Trek computer.
The Nest Hub Max will launch this summer at $229.
3. Automatic, real-time captioning for video and audio
This is one of those features that might seem minor to some people but is crucial to others, and it could have far-reaching effects. Google’s latest mobile operating system, Android Q, can transcribe the words from any video or audio you play on your device — in real time — and overlay them on your screen. That means you can effectively turn on A.I.-generated closed captioning for everything from YouTube videos to autoplay clips in your social feeds to a video you took of your friends on vacation.
For the average user, it’s a relatively small convenience. But assuming it works and is widely used, it could be a big deal for mobile video more broadly: The format has arguably been held back by people’s reluctance to turn on the sound when they don’t have headphones in. And, of course, it’s even more of a breakthrough for the hundreds of millions of people around the world who are deaf or hard of hearing.
The Live Caption feature is built into Android Q, and you can activate it with a tap.
4. New transparency tools for A.I.
A fundamental problem with cutting-edge machine learning software is that A.I. can draw conclusions based on signals and features that are opaque to the people using it and, often, even the people building it. For example, you probably couldn’t say exactly why your Instagram feed is ordered as it is. Shut into a black box, algorithms can be dangerously biased or discriminatory, even if their creators didn’t intend to make them that way.
At Google I/O, CEO Sundar Pichai touted the company’s use of an approach called TCAV, or testing with concept activation vectors, to shed light on the conceptual “reasoning” that underlies the software’s outputs. For instance, in an example that Google offered, it could tell you that the software identified an image’s subject as a doctor partly because of the white coat and stethoscope, but also partly because the person appeared to be male — presumably because it was trained on a dataset in which men were more likely than women to be doctors. Just identifying that bias doesn’t fix the problem, of course, but it’s a necessary first step toward confronting and correcting it.
5. Incognito mode for Maps, Search, and YouTube
If Google is going to continue to build its business on the combination of A.I. and personal data — and I/O 2019 strongly suggests that it is — then it’s going to have to find ways to reconcile that with tougher privacy regulations, more intense media scrutiny, and greater public awareness of the trade-offs involved. Google already announced last week that it will let you auto-delete some of your sensitive data, including location and activity history.
At I/O, the company announced a new “incognito” mode for Google Maps, which will stop keeping records of your whereabouts while it’s enabled. That’s important, because your location data is some of your most sensitive, revealing behaviors that could be of interest not only to advertisers but also to stalkers and other malicious actors. It’s akin to the incognito mode that has long been part of Google’s Chrome browser. The company said it will also bring incognito features to Google Search and YouTube in the months to come.
Finally, a few more announcements worth a quick mention:
- A cheaper Pixel phone: While everyone else’s smartphones are getting more expensive, Google is heading the other way with its new Pixel 3a. It will be less powerful than the existing Pixel 3, but at a base price of $399, it will cost half as much. Reviewers are already recommending it as an option for buyers who want the best smartphone camera at the lowest price.
- Focus mode: A new feature coming to Android P and Q devices this summer will let you turn off your most distracting apps to focus on a task, while still allowing text messages, calls, and other important notifications through.
- Augmented reality in Google Maps: AR is one of those technologies that always seems to impress the tech companies that make it more than it impresses their actual users. But Google may finally be finding some practical uses for it, like overlaying walking directions when you hold up your phone’s camera to the street in front of you.
- Automatic rental car bookings and movie tickets: Google’s most controversial demo last year featured an A.I. system that could place a call to book restaurant reservations for you automatically. Ethicists wondered whether the receptionists on the other end would be informed that they’re talking to a machine and not a human. This year, Google found a less thorny application for its A.I. reservations bot: Duplex on the Web can rent you a car or buy your movie tickets online by filling in all the required fields with your information — no uncanny valley required.