I just tried Google’s smart glasses built on Android XR — and Gemini is the killer feature

I don't know if smart glasses are the device that's going to make us put away our smartphones for good, as some in the tech world predict. But if smart glasses do enjoy their moment, it's going to be because they come equipped with a pretty good on-board assistant for helping you navigate the world.
I came to that conclusion after trying out a prototype of some smart glasses that Google built on its Android XR platform. The glasses themselves are pretty ordinary devices — instead, the standout feature is the AI-powered Gemini assistant that adds enough functionality to win over even smart glasses skeptics like me.
Google announced its AI-powered Android XR smart glasses during the company's Google I/O keynote yesterday (May 20). The inclusion of AI in the form of a Gemini assistant isn't exactly a surprise — a week ago, Google outlined plans to bring Gemini to more devices, including smart glasses and mixed reality headsets. But Google planning its own pair of glasses equipped with an on-board assistant that can see what you see and answer your questions is certainly noteworthy.
There's no timeframe for Google releasing its glasses. The company says the prototypes are with testers, who will provide feedback on the design and feature set that Google brings to the market. That means the finished product could be a lot different from what I briefly got the chance to wear in a demo area at the Google I/O conference.
But the real takeaway here is not what Google's glasses might look like and how they might compare to rival products, some of which will also be built on the Android XR platform. Instead, what I chose to focus on from my demo was what Gemini brings to the table when you try on a pair of smart glasses.
Using Google's smart glass prototypes
That said, I should spend a little bit of time talking about the glasses themselves. For a prototype design, they're not terribly bulky — certainly the frames didn't look as thick as the Meta Orion AR glasses I tried out last year nor the Snap Spectacles AR glasses I took for a test drive. I don't wear glasses, save for the occasional pair of cheaters at night when I'm enjoying a book, but Google's effort, while thicker than ordinary frames, didn't feel like something you'd be embarrassed to wear in public.
My demo time didn't leave a big window to talk specs — instead, I got a rundown of the controls. A button on top of the right side of the frame takes photos when you press it, while a button on the bottom turns the display off. There's also a touch pad on the side of the frame that you use to summon Gemini with a long press.
When I put on the glasses, my attention was drawn to a little info area down in the lower right of the frames, showing the time and temperature. This is the heads-up display, as Google calls it, and it sits close enough to your line of sight that you can check the information without breaking eye contact with people. That said, I found my eyes drawn to the area with text, though that might be something I'd be less inclined to look at over time.
Like I mentioned, I really didn't get a specs rundown from Google, and I'm not sure that it matters if Google winds up fine-tuning its glasses based on tester feedback. But the field of view seems narrow — decidedly more so than the 70-degree FOV that Orion provides. If I had to guess, I'd say that's so that there's no question about what you're looking at should you ask Gemini to provide you with more information or actions.
Gemini in action on Google's glasses
Once you tap and hold on the frame — it took me a moment to find the right spot, though I imagine I'd get used to that with more time — the AI logo appears and Gemini announces itself. You can immediately start asking it questions, and I decided to focus on some of the books Google had left around our demo room.
Gemini correctly identified the title of the first book and its subject matter when I asked it to name the book I was looking at. But when I asked how long the book was, the assistant thought I wanted to look up its price. OK, I decided, I'm game — how much does the book cost? Gemini then wanted to know where I was — maybe for currency purposes? — but my response that I was in the United States led Gemini to conclude that I was asking it to confirm whether the U.S. was one of the places featured in the book. So that was a fruitless conversation.
Things improved when I tried another book, this one with sumptuous pictures of various Japanese dishes. Gemini correctly identified a photo of sushi, then offered to look up nearby restaurants when I asked if there were any places nearby that served sushi. That turned out to be a much more rewarding interaction.
Gemini could also identify a painting hanging in the demo area, correctly telling me that it was an example of pointillism and even identifying the artist and the year he painted it. Using the button on the top of the frame, I was able to snap a picture, and a preview of the image I captured floated up before my eyes.
I do wonder if, in the process of snapping the photo, I also pressed the bottom button that turns off the display, because for a couple of queries, Gemini was unable to see what I was seeing. Tapping and holding on the frame put things right again, but maybe this is an instance where Google might want to think about the placement of its buttons. Or perhaps this was just one of those things that can happen when you're trying out a product prototype.
Even though I don't have the best hearing in the world, Gemini came through loud and clear on the speakers that appeared to be located in the frames. Even more impressive, my colleague Kate Kozuch was recording video of my demo and tells me that she couldn't hear any audio spillover — which means at least one end of your Gemini conversations should stay private.
Google XR glasses outlook
I could dwell on some of the mishaps with Gemini during my demo with Google's smart glasses, but I think it's fair to chalk those up to early demo jitters. There's a long way to go before these glasses are anywhere close to ready, and a lot can change for Google's AI in a short amount of time. Just look at how much Project Astra has improved in the year since its Google I/O 2024 debut, at least based on the video Google showed during Tuesday's keynote.
Instead, the thing that impressed me about Google's glasses was that I was interacting more or less completely with my voice, save for a button tap here or a frame press there. I didn't have to learn a set of new pinching gestures and hope that the cameras on the glasses would pick them up. Instead, I could just ask questions — and in a comfortably natural way, too.
I imagine the finished version of Google's XR glasses with Gemini is going to look and perform very differently from what I saw this week when it eventually hits the market, and I'll judge the product on those merits then. But whatever does happen, I bet that Gemini will be at the heart of this product, and that strikes me as a solid base to build on.