Every Google I/O developers conference delivers a firehose of product news, from Android updates to new features added to Google’s assorted apps. This year’s edition of I/O was no exception, with Google putting an extra emphasis on how artificial intelligence will drive upcoming innovations.
It’s a lot of news from Google, and while some of it sounds promising — we’re pretty excited about the changes slated for Android P later this year — a few I/O announcements left us scratching our heads. (That’s a lot of time to devote to self-driving cars, Google.) Here’s a breakdown of the announcements from this year’s conference that impressed us, as well as the ones that distressed us. (Image Credit: Google)
Anyone who’s ever dreaded having to schedule an appointment or book a reservation over the phone can now turn to this soothing mantra: Let the robot do it. Google Duplex (an upcoming feature of Google Assistant) can call up a hair salon, restaurant or other place where you need to book something and hold an amazingly realistic — some might say “frighteningly realistic” — conversation with a real live person, complete with life-like pauses and “ums.”
Google Duplex can even understand context and respond to questions asked by people on the other end of the line, tapping into the natural conversation improvements Google is building into Google Assistant. Look for Google Duplex to debut in the fall after Google puts it through some more rigorous testing, but from what we heard at I/O, the feature seems pretty polished already.
That blue dot in the Google Maps app means well, but it doesn’t do a very good job of orienting you, or letting you know which direction you need to walk in. A massive improvement to directions in Maps that incorporates your phone’s camera figures to change that. Using your camera’s viewfinder, virtual arrows and street names appear superimposed on your phone’s screen, guiding you from Point A to Point B with no more guesswork.
Google detailed some of the changes coming to Android P later this year, but the one that seems like it could have the biggest impact on how you use Google’s mobile OS is App Actions. The new feature builds off the predicted-apps feature Google introduced last year, which surfaces apps Android thinks you’re going to use based on past behavior.
In the case of App Actions, Android P will give you tappable actions based on behavior like calling a family member if there’s a regular time you tend to check in with them or playing an album when you connect a pair of headphones. App developers will also be able to build App Actions into their software.
The flipside of App Actions is App Slices, a developer tool Google will offer for Android P that lets slivers of apps — hence, Slices — show up when you’re using the search feature on your phone. Say you search for “Hawaii” on your phone: in addition to a list of search results, Android P might also surface images from the Google Photos app that you took on your last trip to the Aloha State.
App makers can get into the act, too, as the I/O keynote included a Slices demo in which the ability to summon rides from Lyft appeared in searches for the ride-sharing app. Slices sounds like a real time-saver that cuts down on the taps separating you from the actions you want to perform or the information you need to access.
Google detailed plenty of changes coming to Google Lens, its AI-powered image recognition feature, but the one that really caught our eye was the ability to copy and paste text with nothing more than your smartphone’s camera. In an onstage demo, a Google rep was able to point a camera at a recipe and get editable text right on her smartphone.
Other Google Lens improvements coming this June include real-time content analysis that will let you use your camera to get a list of ingredients just by pointing it at a menu item and a Style Match feature that summons up similar-looking items to whatever you’re looking at. But it’s copy-and-paste that figures to add the greatest utility to Google Lens and keep it a few steps ahead of the improved image recognition features in Samsung’s Bixby assistant.
Most additions to Android focus on helping you get more out of your phone, but the new Digital Health tools want to make sure you’re doing less — or at least that the time you spend staring at a phone screen is meaningful. Some of that boils down to more information about how you’re spending time on your phone, from the number of times you’ve unlocked it to the time you’ve spent in assorted apps.
But other Digital Health tools are aimed at getting you to look up from your screen. You can set time limits on how long you’re allowed to use apps like Twitter or Instagram, while an improved Do Not Disturb feature — called Shush — makes sure you won’t be bothered by incoming notifications when you place your phone screen-down. Wind Down mode will even gray out your screen at night to discourage you from staring at your phone into the wee small hours of the morning.
Wear OS may have gotten a new name earlier this year — Android Wear, we hardly knew ye — and Google has introduced other changes like better integration with Google Assistant and support for third-party actions. But Google apparently decided that was sufficient, offering no further information about its wearable OS during the keynote.
In fairness, a lot of topics got left on the cutting room floor — Google didn’t talk about its virtual reality efforts or Android Auto during the keynote — and a Wear OS session is scheduled for later this week at the developer conference. But the lack of attention paid to Wear OS (particularly when there were new Wear OS features to talk about) isn’t doing much to alleviate suspicions that wearables aren’t much of a focus for Google these days, the way they are for Apple and its ever-improving watchOS software.