The San Francisco Frontier | Est. 2025
© 2026 dpi Media Group. All rights reserved.

Google's Gemini Can Now Order Your Uber and Coffee. And It's Actually Working

Google Gemini Live on a Pixel 9 Pro XL. Photo by Amanz on Unsplash

Your phone is about to get a lot lazier, and honestly, we’re here for it. Google and Samsung just rolled out task automation for Gemini, and it’s the kind of AI feature we’ve been hearing about for years that might actually… work. We’re talking about your assistant ordering food, calling rides, and handling all those annoying app interactions without you lifting a finger.

The feature just hit beta on Samsung’s Galaxy S26 Ultra, starting with food delivery and rideshare apps. So naturally, we had to see what all the hype was about. Spoiler alert: watching your phone use itself is genuinely trippy.

The first test run was straightforward: ask Gemini to order an Uber to the airport. The AI asked which airport (smart move), then navigated through the app's steps on its own. It even skipped unnecessary details like selecting an airline, since everything's in one terminal at the local airport. Before actually requesting the ride, Gemini paused and asked for approval. Safety first, automation second.

Things got more interesting with a vague request for coffee and a croissant from Starbucks. Gemini had to scroll through what felt like a million drink options to find the flat white, but it actually found it. Then came the real test: should the croissant be warmed or straight from the case? Without any prompting, Gemini made the right call and specified warmed. A year ago, this same assistant would’ve argued with you about your own calendar events, so this is genuinely impressive.

What makes this feel different from past AI assistant promises is that it's actually happening. The system keeps you in control: you can watch each step unfold, jump in whenever you want, or stop the whole thing if Gemini goes rogue. It's not like those sci-fi visions where your phone's just doing whatever it wants; you're always the one with final approval before anything actually gets ordered or booked.

Obviously, there's a lot more testing to come. The real question is whether Gemini can handle the weird edge cases: requests that don't fit neatly into templates, or situations where an app's interface changes. But right now, it's working better than anyone expected. It's still early days for this feature, but if this is what AI automation looks like when it actually ships, maybe all those "coming soon" promises from tech companies weren't completely full of it.

For Bay Area tech enthusiasts who’ve been hearing about AI assistants handling real tasks forever, this moment finally feels like something worth paying attention to.

AUTHOR: mls

SOURCE: The Verge