I tried Google’s Android XR prototype glasses, and they can’t do much yet, but Meta should still be terrified
2:32 pm | May 21, 2025


Google’s Android XR glasses can’t do very much… yet. At Google I/O 2025, I got to wear the new glasses and try some key features – three features, exactly – and then my time was up. These Android XR glasses aren’t the future, but I can certainly see the future through them, and my Ray-Ban Meta smart glasses can’t match anything I saw.

The Android XR glasses I tried had a single display, and it did not fill the entire lens. The glasses projected onto a small frame in front of my vision that was invisible unless filled with content.

To start, a tiny digital clock showed me the time and local temperature, information drawn from my phone. It was small and unobtrusive enough that I could imagine letting it stay active at the periphery.

Google Gemini is very responsive on this Android XR prototype

Google's Android XR prototype demonstrated at Google I/O 2025 (Image credit: Philip Berne / Future)

The first feature I tried was Google Gemini, which is making its way onto every device Google touches. Gemini on the Android XR prototype glasses is already more advanced than what you might have tried on your smartphone.

I approached a painting on the wall and asked Gemini to tell me about it. It described the pointillist artwork and the artist. I said I wanted to look at the art very closely and I asked for suggestions on interesting aspects to consider. It gave me suggestions about pointillism and the artist’s use of color.

The conversation was very natural. Google’s latest voice models for Gemini sound like a real human. The glasses also did a nice job pausing Gemini when somebody else was speaking to me. There wasn’t a long delay or any frustration. When I asked Gemini to resume, it said ‘no problem’ and started up quickly.

That’s a big deal! The responsiveness of smart glasses is a metric I haven’t considered before, but it matters. My Ray-Ban Meta smart glasses have an AI agent that can look through the camera, but it works very slowly: it’s slow to start responding, and then it takes a long time to answer the question. Google’s Gemini on Android XR was much faster, and that made it feel more natural.
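Google hasn’t published how the glasses handle interruptions, but the behavior I saw fits a simple pattern: pause the assistant’s speech whenever a voice-activity detector hears somebody else talking, and resume only when the wearer asks. Here’s a minimal sketch of that pattern in Kotlin – my reconstruction, not Google’s code, and every name in it is hypothetical:

```kotlin
// Hypothetical types standing in for whatever Android XR actually uses –
// none of these are real Google APIs.
interface TtsPlayer {
    val isSpeaking: Boolean
    fun pause()
    fun resume()
}

// A toy assistant session that pauses itself when someone else talks
// and picks back up when the wearer asks it to resume.
class AssistantSession(private val tts: TtsPlayer) {
    private var pausedByInterruption = false

    // Called by a voice-activity detector when it hears speech
    // that isn't directed at the assistant.
    fun onBystanderSpeech() {
        if (tts.isSpeaking) {
            tts.pause()
            pausedByInterruption = true
        }
    }

    // Called when the wearer says something like "Gemini, resume".
    fun onResumeCommand() {
        if (pausedByInterruption) {
            tts.resume()
            pausedByInterruption = false
        }
    }
}
```

Whatever Google is actually doing, the key to the demo’s natural feel was that both halves of this loop ran with almost no delay.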

Google Maps on Android XR wasn’t like any Google Maps I’ve seen

Celebrities Giannis Antetokounmpo and Dieter Bohn wear Android XR glasses and shake hands with the crowd (Image credit: Philip Berne / Future)

Then I tried Google Maps on the Android XR prototype. I did not get a big map dominating my view. Instead, I got a simple direction sign with an arrow telling me to turn right in a half mile. The coolest part of the whole XR demo was when the sign changed as I moved my head.

If I looked straight down at the ground, I could see a circular map from Google with an arrow showing me where I am and where I should be heading. The map moved smoothly as I turned around in circles to get my bearings. It wasn’t a very large map – about the size of a big cookie (or biscuit for UK friends) in my field of view.

As I lifted my head, the cookie-map moved upward. The Android XR glasses don’t just stick a map in front of my face. The map is an object in space. It is a circle that seems to remain parallel with the floor. If I look straight down, I can see the whole map. As I move my head upward, the map moves up and I see it from a diagonal angle as it lifts higher and higher with my field of view.

By the time I am looking straight ahead, the map has entirely disappeared, replaced by the directions and arrow. It’s a very natural way to get an update on my route. Instead of pulling out my phone and opening an app, I just look towards my feet and Android XR shows me where they should be pointing.
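Google didn’t explain how this works under the hood, but the effect can be driven by a single input: head pitch. As a rough sketch of the idea – my guess at the technique, with made-up thresholds, not anything Google has shared – the glasses could cross-fade between the floor-anchored map and the heads-up direction sign as your gaze rises from your feet to the horizon:

```kotlin
import kotlin.math.abs

// Head pitch in degrees: 0 = looking straight ahead, -90 = straight down.
// Both thresholds are guesses; Google hasn't published any of this.
const val MAP_FULL_PITCH = -60f    // gazing near your feet: full map
const val MAP_HIDDEN_PITCH = -15f  // near-level gaze: map gone, sign shown

data class NavUiState(
    val mapAlpha: Float,        // 1.0 = full circular map, 0.0 = hidden
    val signAlpha: Float,       // direction sign fades in as the map fades out
    val mapTiltDegrees: Float,  // the map stays parallel to the floor, so its
                                // apparent tilt tracks the head pitch
)

fun navUiForPitch(pitchDegrees: Float): NavUiState {
    // Normalize pitch into a 0..1 "how far down am I looking" factor.
    val t = ((pitchDegrees - MAP_HIDDEN_PITCH) /
            (MAP_FULL_PITCH - MAP_HIDDEN_PITCH)).coerceIn(0f, 1f)
    return NavUiState(mapAlpha = t, signAlpha = 1f - t, mapTiltDegrees = abs(pitchDegrees))
}
```

Looking down yields the full map; as the pitch rises toward level, the map fades out and the sign takes over – which matches what I saw in the demo.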

Showing off the colorful display with a photograph

Google's Android XR prototype demonstrated at Google I/O 2025 (Image credit: Philip Berne / Future)

The final demo I saw was a simple photograph taken with the camera on the Android XR glasses. After I took the shot, I got a small preview on the display in front of me. It was about 80% transparent, so it didn’t block my view, but I could still see the details clearly.

Sadly, that was all the time Google gave me with the glasses today, and the experience was underwhelming. In fact, my first thought was to wonder whether the Google Glass I had in 2014 had the exact same features as today’s Android XR prototype glasses. It was pretty close.

My old Google Glass could take photos and video, but it did not offer a preview on its tiny, head-mounted display. It had Google Maps with turn directions, but it did not have the animation or head-tracking that Android XR offers.

There was obviously no conversational AI like Gemini on Google Glass, and it could not look at what you see and offer information or suggestions. What makes the two similar? They both lack apps and features.

Which comes first, the Android XR software or the smart glasses to run it?

Google's Android XR prototype demonstrated at Google I/O 2025 (Image credit: Philip Berne / Future)

Should developers code for a device that doesn’t exist? Or should Google sell smart glasses even though there are no developers yet? Neither. The problem with AR glasses isn’t just a chicken-and-egg question of which comes first, the software or the device. AR hardware isn’t ready to lay eggs; we have neither a chicken nor eggs, so it’s no use debating what comes first.

Google’s Android XR prototype glasses are not the chicken, but they are a fine-looking bird. The glasses are incredibly lightweight, considering the display and all the tech inside. They are relatively stylish for now, and Google has great partners lined up in Warby Parker and Gentle Monster.

The display itself is the best smart glasses display I’ve seen, by far. It isn’t huge, but it has a better field of view than the rest; it’s positioned nicely just off-center from your right eye’s field of vision; and the images are bright, colorful (if translucent), and flicker-free.

The author looking dumbfounded in Ray-Ban Meta Smart Glasses, with Times Square reflected in the lenses (Image credit: Future / Philip Berne)

When I first saw the time and weather, it was a small bit of text and it didn’t block my view. I could imagine keeping a tiny heads-up display on my glasses all the time, just to give me a quick flash of info.

This is just the start, but it’s a very good start. Other smart glasses haven’t felt like they belonged at the starting line, let alone on retail shelves. Eventually, the display will get bigger, and there will be more software. Or any software, because the feature set felt incredibly limited.

Still, with just Gemini’s impressive new multi-modal capabilities and the intuitive (and very fun) Google Maps on XR, I wouldn’t mind being an early adopter if the price isn’t terrible.

How the Android XR prototype compares to Meta’s Ray-Ban smart glasses

My Ray-Ban Meta Smart Glasses are mostly just sunglasses now (Image credit: Future / Philip Berne)

Of course, the Ray-Ban Meta smart glasses lack a display, so they can’t do most of this. The Meta glasses have a camera, but the images are beamed to your phone. From there, your phone can save them to your gallery, or even use the glasses to broadcast live directly to Facebook. Just Facebook – this is Meta, after all.

With its Android provenance, I’m hoping whatever Android XR smart glasses we get will be much more open than Meta’s gear. They must be. Android XR runs apps, while Meta’s smart glasses are run by an app. Google intends Android XR to be a platform. Meta wants to gather information from the cameras and microphones you wear on your head.

I’ve had a lot of fun with the Ray-Ban Meta smart glasses, but I honestly haven’t turned them on and used their features in months. I was already a Ray-Ban Wayfarer fan, so I wear them as my sunglasses, but I never had much luck getting the voice recognition to wake up and respond on command. I liked using them as open-ear headphones, but not in New York City, where the street noise overpowers them.

I can’t imagine that I will stick with my Meta glasses once there is a full platform with apps and extensibility – the promise of Android XR. I’m not saying that I saw the future in Google’s smart glasses prototype, but I have a much better view of what I want that smart glasses future to look like.


Google Gemini hands-on: the new Assistant has plenty of ideas
5:00 pm | February 17, 2024


Google has replaced its Google Assistant with a new AI-based tool, Gemini, at least for those of us in the US daring enough to download the new app. I tried Gemini on my Pixel 8 Pro, testing it side by side against the older Google Assistant on my OnePlus 12. The experience is changing very quickly, and features that didn’t work yesterday may suddenly work tomorrow. Overall, Gemini is trying to be something very different from Assistant, without removing the features I’ve grown to rely upon.

Google Gemini hands-on: Design

Google Gemini screens on Google Pixel 8 Pro (Image credit: Philip Berne / Future)

It was obvious from the first time I opened Gemini that it’s trying a different approach. While Google Assistant asks “Hi, how can I help?”, Gemini posits “What would you like to do today?” Assistant waits for me to start speaking. Gemini listens, and also shows a prompt below the question telling me I can type, talk, or share a photo.

When I started using Gemini a week ago, there were many things it couldn’t do that Assistant could handle. Gemini couldn’t control my smart home equipment. It wouldn’t set a reminder. Gemini needed me to press a button after I gave it a command. There were many bugs at first, but in only a week the software has greatly improved. It can control my lights and thermostat, for instance, and its response is now automatic. 

If you want more than just a basic Assistant, you can open up the full Gemini app. Up top, Gemini offers suggestions for things to try, with interesting options that change frequently. 

Beneath sits a list of your three most recent queries. Gemini keeps track of everything you ask, and since it’s an AI it will also summarize the session and give it a title. You can look at your entire query history and delete an entry, or give it a more appropriate heading. You can also pin your best chat sessions. 

Gemini’s hidden strength is its ability to talk to other Google apps. It can replace Google Assistant because it uses Assistant as one of its many tools, along with Maps, Search, and others. You can save a chat session directly to Google Docs, or export it straight to a Gmail message.
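Google hasn’t documented how Gemini delegates to Assistant, Maps, or Search, but the behavior I saw matches the familiar tool-calling pattern: the model picks a named tool, and the app routes the request to it. A hedged sketch of that pattern in Kotlin – purely illustrative, with no real Google APIs involved:

```kotlin
// Purely illustrative tool-calling pattern; none of these are Google APIs.
interface Tool {
    val name: String
    fun run(query: String): String
}

class AssistantTool : Tool {
    override val name = "assistant"
    override fun run(query: String) = "Assistant handled: $query"
}

class MapsTool : Tool {
    override val name = "maps"
    override fun run(query: String) = "Maps result for: $query"
}

// In the real product, the model would decide which tool fits the request;
// a trivial keyword router stands in for that decision here.
class GeminiRouter(private val tools: List<Tool>) {
    fun handle(query: String): String {
        val toolName = if ("directions" in query || "navigate" in query) "maps" else "assistant"
        return tools.first { it.name == toolName }.run(query)
    }
}
```

Routing through another tool also helps explain the extra latency discussed below: every delegated request adds a hop.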

If you don’t want to use Gemini as your Assistant replacement whenever you press the Power button or yell “Hey Google,” you can choose Assistant instead in Settings.

Google Gemini hands-on: The Gemini differences

Google Gemini screens on Google Pixel 8 Pro (Image credit: Philip Berne / Future)

While Google Assistant is just that, an assistant to do things on your phone, Google Gemini is trying to be smarter, more like a human helper with ideas than a cold machine.

For instance, among the suggested activities, Gemini suggests I “Brainstorm team bonding activities for a work retreat,” and offers “Ideas to surprise a friend on their birthday.” When I tap on the birthday ideas option, it adds a “concert-loving” friend, which is clever because I can easily replace that with “table-top game loving” or whatever my friends are actually into.

For image generation, the suggestions from Google show the granularity of detail that Gemini can handle. To create a space hedgehog, Google started with a 36-word prompt with verbs, descriptions, and things to avoid in the final image. 

Gemini is smart enough to continue a conversation after a prompt. I asked for suggestions for plans in a specific town nearby and it offered four suggestions. I said I liked the fourth option and asked it to expand and it complied, offering more options that were similar. I had no problem referring to previous prompts in a single chat, even if I’d veered off-topic a bit.

Google Gemini screens on Google Pixel 8 Pro (Image credit: Philip Berne / Future)

So, is Google Gemini the new Google Assistant, or is it an app that runs Assistant on my behalf? Assistant doesn’t have a full-screen app; it’s always a pop-up window. Google Gemini starts as a pop-up, and you can open the app to dive into more detail.

Assistant can’t interact with photos the way Gemini can, though this is still a buggy feature. Gemini would often refuse to help with a photo task, telling me it couldn’t work with images, or that it wasn’t yet ready to handle photos of people. Sometimes these photos didn’t include humans, so I’m not sure what caused the error.

On some occasions, Gemini would tell me that it could not interpret an image, and then it would offer me detailed information. I asked about a bird in a photo I’d taken and it told me it couldn’t review the image, then offered me links for info about the Great Cormorant. I expect these bugs will be ironed out soon, but I’m still unclear what Gemini will be able to do with images I upload.

Google Gemini hands-on: Performance

Google Gemini screens on Google Pixel 8 Pro (Image credit: Philip Berne / Future)

Google Gemini is slow. When I tried the same tasks side by side with Google Assistant on my OnePlus 12, Assistant always finished first. That could be down to the faster Snapdragon 8 Gen 3 processor in the OnePlus 12, but I suspect there are bottlenecks slowing down Gemini. After all, Gemini isn’t replacing Assistant; it’s using Assistant, and that creates an extra step.

That said, there aren’t many tasks for which I need Gemini to respond with great haste. If I’m asking for weekend plans, I can wait an extra ten seconds for a good answer. If I’m turning off all the lights in my house, the longer pause is annoying. 

The Gemini results can be impressive, and Gemini can expand or adapt its answers. In fact, it always suggests ways it could expand to be more helpful. If I ask for a destination, it might offer a few ideas that are bad and one that’s great, and when I identify the choice I like, it can find similar options. Of course, that’s what a machine does best: matching patterns.

I tried using Gemini to plan a novel about a robbery, and it was surprisingly fun. Its suggestions were clichéd, but it did a great job offering pathways to expand. After I gave it an initial plot synopsis, it offered to flesh out storylines, create plot twists, and even devise motivations for different characters’ actions.

I’ll keep using and testing Gemini. It has a lot of room for performance improvement, but the experience is fun and satisfying, and the results are often worth the wait. The suggestions are not uniformly good, but they are occasionally great.

Google Gemini hands-on: First impressions

Google Gemini screens on Google Pixel 8 Pro (Image credit: Philip Berne / Future)

What is Gemini for? Approaching a new AI tool, it’s hard to know how to use it. It isn’t really a replacement for Assistant so much as a gateway to all of Google’s apps that provide answers, especially Maps and Search.

Following Google’s suggestions in the app helps open doors. Google suggests using Gemini to help make plans, and that’s what I did most often. I made date night plans, weekend plans, and I’ll be using Gemini to help with a road trip soon. Gemini offered ideas that pointed me in the right direction, even if I didn’t use the options listed. 

Google has also created a great tool for brainstorming. Gemini offered its most interesting results at the end of a session, when it suggested ways I could ask it to expand. There were no one-step conversations. Every query ended with a call to action to go further. I liked that; it was very helpful.

What did I not like? I asked Gemini for a recipe for moist and fluffy muffins, and it gave me a recipe with no attribution. An author can’t copyright a recipe, but Gemini didn’t invent muffins or the techniques to make them fluffy. It felt like something was being stolen.

I also didn’t like the faux humanity injected into every response. No matter what I suggested, I got a compliment from Gemini. Sometimes these were subtle words of encouragement; other times the praise was fawning and embarrassing.

Look, Gemini, I know that you’re a fake computer personality. It doesn’t make me feel good when you tell me I’m very creative and interesting. It’s less believable than when my Mom told me I was the most handsome … you get the idea. 

I use Google Assistant often for the basics – timers, weather, and smart home control. Gemini can do all of that, so I won’t stop using Gemini. I’ll also try Gemini for help expanding on ideas and plans. I’m very curious to see how it grows its capabilities with all of the other Google apps it can control.
