The first time you use the Pixel 4's new Motion Sense feature to skip a song or silence an alarm, it's a charming moment. Not a magical one, just quietly delightful.
That's appropriate, given the ambient computing initiative that Google hammered home during its hardware-focused event in New York on Oct. 15, where it announced the Pixel 4. Rather than dazzling users with specs and flashy features, like pop-up selfie cameras and a handful of lenses on the back, Google's latest flagship is about solving lots of little problems.
To that end, Motion Sense seems promising. Built on Google's Project Soli radar system, Motion Sense lets users perform hand gestures in midair, a short distance above or in front of the device, to trigger shortcuts. However, Motion Sense's applications are still quite limited, and it's in danger of becoming just a gimmick.
Yes, it really works...
I went hands-on with Motion Sense immediately following the Pixel 4 and Pixel 4 XL reveal. I was impressed with how natural it felt to wave my hand to move to the next track in a Spotify playlist, and how receptive the device was to my actions.
It wasn't a perfect experience — there is a slight learning curve, because if you swipe your hand too slowly, the motion won't register as intentional. I also found it harder to interact with the Pixel 4 this way while it was propped up on a Pixel Stand. But that minor frustration never compared to my smoldering rage when trying to use the LG G8 ThinQ’s ill-conceived gestures, which incorporate a time-of-flight camera (rather than a radar) in a fruitless attempt to pull off similar tricks.
Motion Sense first impressions are positive. Takes a minute to get the muscle memory down, but it seems to prefer fast, sweeping gestures rather than slow, deliberate ones. I just hope Google has bigger plans than simply skipping songs and alarms. #madebygoogle @tomsguide #Pixel4 pic.twitter.com/ttlIj7gCl6 (October 15, 2019)
Motion Sense works even better with alarms, because any hand movement near the Pixel 4 will immediately quiet it. I imagine this will be extremely useful when you wake up in the morning, and you're desperately fumbling to stop your phone from blaring, just to get a few more fleeting minutes of peace and quiet.
...but Motion Sense doesn't go far enough
Some might disparage Motion Sense as a meaningless gimmick, but I think the gestures themselves work well enough, even in this first iteration of the technology, and the practical benefits are far-reaching. Much as the world slowly discovered with voice control (which was similarly written off upon Siri's debut), once you eliminate the need for physical contact with a device, you open up a whole new class of experiences.
The problem is, Google hasn't delivered enough of those experiences yet.
Right now, Motion Sense is surprisingly limited in what it can actually do. As mentioned, you can wave your hand to skip songs, reject calls and silence alarms. Beyond that, the Pixel 4's new Face Unlock system uses the Soli radar to fire up the infrared cameras, dot projector and flood illuminator necessary for authentication before you lock eyes with the device, which speeds up the whole process.
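To make that unlock sequence concrete, here is a minimal, purely illustrative sketch in Python. None of these class or method names come from Google's software; they are assumptions standing in for the real pipeline, which is only described here at a high level: a radar "reach" event pre-arms the face sensors, so authentication can finish the moment your face is in view.

```python
# Illustrative sketch (not Google's code): how a radar "reach" event
# can pre-arm face-unlock sensors so authentication starts before the
# user has fully raised the phone. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class FaceUnlockPipeline:
    sensors_armed: bool = False
    unlocked: bool = False
    log: list = field(default_factory=list)

    def on_radar_reach(self):
        # Soli-style presence detection: the phone notices a hand
        # approaching and powers up the IR cameras, dot projector,
        # and flood illuminator ahead of time.
        self.sensors_armed = True
        self.log.append("sensors armed")

    def on_face_visible(self, face_matches: bool):
        # By the time the face is in view, the sensors are already
        # running, so a successful match unlocks immediately.
        if self.sensors_armed and face_matches:
            self.unlocked = True
            self.log.append("unlocked")

phone = FaceUnlockPipeline()
phone.on_radar_reach()        # user reaches toward the phone
phone.on_face_visible(True)   # face comes into view, match succeeds
print(phone.log)              # ['sensors armed', 'unlocked']
```

The point of the design is simply that the radar step removes sensor warm-up from the critical path; without the reach event, the face match could not proceed at all in this toy model.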
Those are tangible examples of ways in which Motion Sense can be useful, but they're the only examples. And if Google can't launch more shortcuts and gestures in the near future, Motion Sense may very well end up being Mountain View's version of the iPhone's 3D Touch — a new level of interaction with tremendous potential that is underused, forgotten and ultimately scrapped in two years because users weren't convinced of its value from the start.
"[Whether Motion Sense is] compelling or gimmicky will depend on what Google makes of it," Tuong Nguyen, senior principal analyst at Gartner, told Tom's Guide. "I think this feature is launched as a building block for Google to learn from — figure out what/how/when gesture should be applied to a multi-modal interface."
Nguyen makes a fair point, in that Motion Sense on the Pixel 4 is merely the beginning. Given Google's penchant for experimentation, the company may regard Motion Sense and Soli as a long game and an opportunity to learn from users and developers, rather than something that will prove its worth immediately. If Google is truly committed to the technology, that strategy may be enough to guarantee Motion Sense's long-term development. Still, it's too early to tell where the technology goes from here.
And yet, given the rather short list of use cases available out of the box to Pixel 4 owners, I'm worried Google may never realize Motion Sense's true potential. Early Soli demos depicted pressing imaginary buttons, sliding your thumb against your index finger to adjust volume and rubbing your fingers against each other to turn knobs.
I'd love to have these modes of interaction available to me when I'm cooking and my hands are dirty, for example, or when I'm in a meeting and I need to discreetly mute my handset. I take a lot of photos with smartphones, and on the bigger ones, it can be tough to reach the on-screen shutter button. Motion Sense could help with that, too.
Will it, though? I put the question I'd posed to Nguyen, whether Motion Sense seems compelling or gimmicky, to Avi Greengart, lead analyst at Techsponential. He thinks it's both.
"The implementation is actually useful: Google is mostly using Motion Sense to give the Pixel 4 spatial awareness for faster, more responsive unlocking along with more nuanced reaction to incoming calls," Greengart said. "However, Google is marketing Motion Sense as a way to wave at your phone or tickle Pokémon, which certainly seems like a gimmick."
The language surrounding Motion Sense seems a bit muddied, to say the least. If waving at Pikachu is supposed to endear people to the technology — as Google demonstrated during the Pixel 4 launch — it's not very illustrative of it. Google could accomplish as much using the Pixel's front-facing camera, for crying out loud. And if the company leads with those problem-solving use cases, there aren't very many of them to call out quite yet.
I really hope that changes, because I do believe in the potential of Motion Sense. Of course, gesture recognition has been done before on smartphones, though rarely to great success. In the past, you always had the overriding feeling that the hardware simply wasn't up to the task of reliably and accurately letting you use your device. I don't have that fear when I use Motion Sense.
Instead, I have more of a concern: What if Soli is the hardware that gestures have so desperately needed all this time, but ultimately nothing of substance evolves from it?