Google’s Gemini Live can talk to you, but I wouldn’t call the human-sounding AI chatbot a stimulating conversationalist. Despite that, Google wants you to, at the very least, imagine Gemini Live as a true companion. As the company’s AI models grow more capable, Google’s latest updates to Gemini make it seem like the bot is calling you on your phone rather than like you’re chatting with a cloud-based AI model.
Late last week, Google updated its Gemini 2.0 Flash model and made it available to anyone using the Gemini app, no paid subscription required. A quieter update appears to be hiding among those changes, as spotted by 9to5Google, which found that the company had redesigned Gemini Live notifications to make them appear far more human than before.
Previously, when you exited Gemini Live on Android while it kept running in the background, it appeared as a simple notification with a button to “End Live mode.” In the new version of the app, Gemini Live appears as a call, with options to “Hang Up” or put the AI on “Hold.” If you’re running the app from the lock screen, you’ll instead see a notification for “Live with Gemini” and a note that the AI is “Listening.”
It’s a small change, but it exemplifies how Google plans to position its AI in 2025. Gemini Live can talk with users, and it recently gained the ability to understand uploaded photo and video content. Eventually, Google wants to add vision capabilities from Google DeepMind’s Project Astra assistant. With those extra capabilities, Gemini Live will need to operate in the background without getting in the way of how you normally use your phone.
Google is also expanding Gemini 2.0 with several smaller and larger AI models for different use cases. In a Wednesday blog post, the company showed off a new “experimental” version of Gemini 2.0 Pro, claiming it is its most powerful consumer-facing model yet. Gemini 2.0 Pro is aimed mostly at coders and programmers, and it should be available in the app to anybody who pays for Gemini Advanced. If any coders want to use more AI to efficiently code themselves out of a job, here’s a new option for you.
Whether Google’s latest model is as good as it claims is another question. The company says it beats Gemini 2.0 Flash in most benchmarks, with one exception: a test of its ability to provide “factually correct responses given documents and diverse user requests.” At the other end of the lineup is Gemini 2.0 Flash-Lite, which Google claims has the same power requirements as Gemini 1.5 Flash but delivers far more accurate answers. The new models arrived after OpenAI showed the public its o3 reasoning model and, just last week, debuted its miniature o3-mini reasoning model.
The big launch feature of Samsung’s Galaxy S25 was its cross-app capabilities with Gemini. With a long press of the power button, you can perform simple actions hands-free, like turning a text message into a calendar invite. In Gizmodo’s own tests, however, the AI features proved far more mundane than advertised: the AI struggles with more complex tasks, and if you’re spending time checking its work, you may as well just perform the task yourself.
Google will save its best mobile AI features for later this year, closer to Google I/O 2025 and the anticipated release of the Pixel 10. Compared with the so-so showing on Samsung’s phones, Google will be looking to wow audiences with what its AI can do. We’ll have to wait and see whether all this effort and hype were worth the trouble.