A whole lot of news was announced at Google’s I/O developer conference this week, and unsurprisingly, much of it — including the new Pixel 3a smartphone and Nest Hub Max smart display — is centered on artificial intelligence and Google Assistant.
That’s no surprise for a company whose CEO has called AI more profound to humanity than fire or electricity, but it can be a challenge to follow all that news closely, so here’s every noteworthy or significant Google Assistant announcement.
Google Assistant 10x faster
Replay the keynote address by CEO Sundar Pichai and Google executives and you’ll notice a whole lot of mention of on-device machine learning. Edge computing, or the use of on-device compute power for inference, is growing in popularity and was a clear sub-theme of the I/O conference.
One place this is being applied: Google Assistant speech recognition. The company put the speech recognition and recurrent neural networks required for voice communication with Google Assistant onto smartphones to make performance up to 10 times faster.
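Google hasn’t published the models behind this, but the broad idea of on-device inference can be illustrated with Android’s public SpeechRecognizer API, which since API level 23 accepts a hint to prefer offline recognition. The Kotlin sketch below is purely illustrative and is not Google’s new Assistant stack; the activity name and log tag are invented, and the app would also need the RECORD_AUDIO permission.

```kotlin
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer
import android.util.Log
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity showing Android's public speech API with the
// "prefer offline" hint. Requires the RECORD_AUDIO runtime permission.
class OfflineDictationActivity : AppCompatActivity() {

    private lateinit var recognizer: SpeechRecognizer

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        recognizer = SpeechRecognizer.createSpeechRecognizer(this)
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle) {
                // Transcription candidates, ranked by confidence.
                val candidates =
                    results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                Log.d("OfflineDictation", "Heard: ${candidates?.firstOrNull()}")
            }
            override fun onError(error: Int) {
                Log.w("OfflineDictation", "Recognition error code: $error")
            }
            // Remaining callbacks are not needed for this sketch.
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })

        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
            // Ask the platform to run recognition on the device when it can,
            // avoiding the network round trip that adds latency.
            putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)
        }
        recognizer.startListening(intent)
    }

    override fun onDestroy() {
        recognizer.destroy()
        super.onDestroy()
    }
}
```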
That may all sound fairly dry on its surface, but it’s another thing entirely to see it in action.
Latency is a rarely discussed element in the AI assistant competition between Google Assistant, Alexa, and others, but an assistant that is too slow to respond or deliver results is a major hurdle to adoption. That’s true for AI researchers as well as consumers using the assistant on their smartphones.
Speed paired with continued conversation means you’re more likely to string multiple tasks together, and it can change the way people speak with an assistant.
The next-generation Google Assistant will be available on Pixel phones this summer.

Above: Nest Hub Max
Facial recognition for daily updates and smart display personalization
Nest Hub Max is Google’s newest smart display, and unlike Google Home Hub, which was renamed the Nest Hub, it has a camera. It uses facial recognition to identify up to six users in a household.
On-device machine learning powers the facial recognition software, and the Face Match setup process takes only a turn of your head to the left and right. Once it’s set up, the display can proactively speak up twice a day, without you needing to say anything, to deliver updates that keep you on track with your to-do list, your calendar, your morning commute, and other personalized information.
Though the “Max” in the product’s name may lead you to believe otherwise, the Nest Hub Max’s 10-inch HD screen puts it in roughly the same size class as Amazon’s Echo Show and Facebook’s Portal.
Duplex for web
Duplex burst onto the conversational commerce scene like the Kool-Aid Man through a wall last year. All at once, the technology accompanying Google Assistant drew both astonishment and fear from the public for the human-sounding voice it used in phone calls to book haircut appointments and restaurant reservations.
This year, Google introduced Duplex for the web, so the tech that accompanies Google Assistant will be able to do things like rent a car. Instead of making a phone call, Duplex on the web will automatically fill in your information and make a purchase in as little as a few taps.
Duplex on the web launches later this year on Android phones. Duplex is currently available for Pixel owners in many parts of the United States.
On-the-spot translation and help picking dinner
Google wants its computer vision tool Lens to begin to index the physical world the way it indexes web pages, and this week it took some steps in that direction. Lens will soon gain the ability to read text out loud in more than 100 languages, an ability that may be helpful for people traveling.
This builds on Interpreter Mode, a translation service introduced in January for Home speakers.
Lens is also gaining the ability to scan restaurant menus to highlight top-rated dishes and surface info from online reviews. When the meal’s over, Lens can scan the receipt and calculate the proper tip.
Once only available for Google’s flagship Pixel smartphones, Lens is now available in Google Photos, Google search results, and natively through the camera app on a number of smartphones.
Finally, Lens will increasingly be able to overlay video on the real world. This begins with powering a behind-the-scenes look at the de Young Museum in San Francisco and recipes in the next issue of Bon Appetit magazine. Apple and Adobe plan to do the same with their own AR tools.
This appears likely to accompany augmented reality experiences, which Google announced Tuesday will become part of Google search results, alongside podcasts and Google Assistant actions. Google introduced plans for a more visual search experience for the 20th anniversary of its search engine earlier this year.
App actions and built-in intents
A series of new tools was made available for Google Assistant voice app developers, including How-to markup, which lets voice apps, videos, and other content show up in search results and in Google Assistant responses to “how to” questions.
App actions, deep links for Android that closely resemble Apple’s Siri Shortcuts, were first introduced as a way to connect with Google Assistant last year.
When paired with built-in intents released Tuesday, app actions make it possible to create voice commands that trigger specific actions inside Android apps. Built-in intents were also introduced last year but were extended this year to specific app categories such as food ordering, ridesharing, and banking.
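In practice, when a user’s phrase matches a built-in intent, Assistant launches a deep link into the app, which the app handles like any other incoming intent. The Kotlin sketch below assumes a hypothetical food-ordering deep link such as https://example.com/order?item=burrito declared in the app’s actions.xml; the activity name, host, path, and parameter name are all invented for illustration.

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity registered (via an intent filter and actions.xml)
// to receive the deep link Assistant fires for a food-ordering built-in
// intent, e.g. https://example.com/order?item=burrito
class OrderActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val deepLink = intent?.data
        if (deepLink != null && deepLink.path == "/order") {
            // The spoken item arrives as an ordinary query parameter.
            val item = deepLink.getQueryParameter("item")
            startOrder(item)
        } else {
            showMenu() // Opened normally, not via Assistant.
        }
    }

    private fun startOrder(item: String?) {
        // App-specific ordering flow would go here (hypothetical).
    }

    private fun showMenu() {
        // Default experience when no deep link is present (hypothetical).
    }
}
```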
Google is pitching app actions to Android developers as a way to make access to an app easier and as a re-engagement opportunity. Android Slices were also introduced last year as a way to display content from an app so users can ask questions that are answered by Google Assistant.
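Slices live in the androidx.slice library: an app exposes a SliceProvider, and the system binds a URI to fetch a small piece of templated content it can show inline. Below is a minimal sketch assuming an invented order-status Slice; a production Slice would also be declared in the manifest and given a primary action.

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

// Hypothetical Slice surfacing an app's order status so Assistant or
// search can display it inline. Class name and content are made up.
class OrderStatusSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        return ListBuilder(context, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Your pizza order")              // invented content
                    .setSubtitle("Arriving in about 20 minutes")
            )
            .build()
    }
}
```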
Driving Mode
When in Driving Mode, Google Assistant’s visual display will change to show personal things like a map to your next appointment or your schedule.
Google Assistant in Driving Mode will also automatically help you start your podcast where you left off.
Google has gradually brought its assistant to more surfaces in cars, beginning with Android Auto in January 2018 and Google Maps earlier this year.
Driving Mode for Google Maps was introduced last year.
Picks for You and content personalization
Google Assistant can now remember more personal information about the people and things you care about most, so you can, for example, ask for directions to a friend’s house or say “Remind me to get flowers before mom’s birthday” instead of specifying a date.
Picks for You is also being introduced this summer for personalized recommendations beginning with podcasts, recipes, and local events. At launch, Picks for You is only available on smart displays.
Amazon’s Echo Show smart display is also able to make podcast recommendations.
Assign reminders
One Google Assistant feature added at I/O that may have gone unnoticed by many is the ability to assign reminders to specific members of a household.
When paired with Nest Hub Max’s twice-daily rundown of your reminders, that means a new way to share household chores and to-dos.
Combined with the ability to do things like send video messages, it seems like the sort of addition that could make a device like Nest Hub Max more viable as a hub of family activity.