The Google I/O 2025 keynote has just wrapped up, and if there’s one takeaway, it’s that the Gemini era is no longer on the horizon. It’s here. Everywhere. All at once.
Broadcast live from the Shoreline Amphitheatre in Mountain View, this year’s Google I/O opened with an AI-generated countdown set to the New Radicals’ You Get What You Give. Then came the flood.
We got our first look at Gemini 2.5 Pro and Flash, an upgraded Google Search with “AI Mode,” a 3D video calling device called Google Beam, and new tools like Flow for AI-generated filmmaking. Gemini is now baked into everything from Gmail to Chrome, Meet to Maps. Even your calendar knows what you need before you ask!
And just in case that wasn’t ambitious enough, Google rounded things off with a glimpse at Android XR-powered headsets and smart glasses, including the much-teased Project Moohan, co-developed with Samsung.
What became clear over the two-hour keynote is that Google no longer sees AI as an add-on. It’s now the foundation. And Gemini is the platform it’s betting on to make everything feel more proactive, more personalised, and more powerful.
ICYMI, here’s a recap of everything that went down at Google I/O 2025. Do note that Android 16 got its own showcase at the Android Show: I/O Edition this year.
Gemini 2.5 Pro and 2.5 Flash: The brains behind everything
Google’s latest AI models, Gemini 2.5 Pro and Gemini 2.5 Flash, are designed to be more than just generative tools. They’re the new infrastructure. Gemini now has over 400 million monthly active users, and Google says it is processing a staggering 480 trillion tokens a month across its products and APIs, roughly a fiftyfold increase over last year.
The 2.5 Flash model is built for speed and efficiency, making it ideal for lighter tasks like app integration and quick responses. Meanwhile, Gemini 2.5 Pro is the heavyweight – it handles complex reasoning, can turn sketches into code, and can even generate 3D visuals from flat images.
Gemini 2.5 Flash will be available in the Gemini API from early June 2025, while Gemini 2.5 Pro will follow soon after.
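If you’re a developer keen to kick the tyres once the models land in the Gemini API, a minimal sketch using Google’s google-genai Python SDK might look something like this; note that the model identifier “gemini-2.5-flash” is an assumption and could differ at launch.

```python
# Minimal sketch: calling Gemini 2.5 Flash through the Gemini API.
# Assumes the google-genai Python SDK (pip install google-genai) and an API key
# from Google AI Studio; the model ID below is assumed and may change at launch.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed identifier for Gemini 2.5 Flash
    contents="Summarise the biggest announcements from Google I/O 2025.",
)

print(response.text)
```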
One of the more striking announcements came from DeepMind CEO Demis Hassabis, who introduced Deep Think, an enhanced reasoning mode for Gemini 2.5 Pro that weighs multiple lines of reasoning in parallel before answering. While it’s still being tested with a select group of trusted testers, it promises a new frontier in reasoning and AI-driven decision making.
This was followed by another jaw-dropping moment when Google unveiled how deeply Gemini can integrate into your digital life. With user permission, Gemini will soon be able to draw from your calendar, past emails, and documents to proactively surface information or generate personalised content.
For instance, personalised replies in Gmail won’t just be generic suggestions. Gemini will craft responses that sound like you, based on your communication history and writing tone. It’s the kind of automation that feels both incredibly helpful and a little uncanny.
This same personal context engine is also coming to Google Search and the Gemini app. If Gemini sees a meeting scheduled, it can prep background notes in advance. If you have an exam approaching, it might remind you of study resources pulled from your Drive. Google insists these features are opt-in, but they mark a shift toward AI that doesn’t just react, it anticipates.
The new age of video calls with Google Beam
In a world that’s grown weary of grainy webcam boxes, Google is betting on a new kind of presence. Google Beam, the spiritual successor to Project Starline, uses an array of six cameras to create a 3D rendering of the person on the other end of your call.
The result is a remarkably lifelike virtual interaction, one that feels more like you’re sharing a room than a screen. Google confirmed that HP will be the first hardware partner bringing Beam to market later this year, although details like pricing remain under wraps.
Beam pairs with real-time speech translation in Google Meet, which is currently rolling out in English and Spanish. With this, the future of remote collaboration might actually start feeling human again.
Google Search, reinvented
Forget everything you know about typing a question into a box. With AI Mode in Google Search, queries are becoming conversations. Powered by Gemini 2.5, AI Mode lets you ask long, multi-part questions, follow up naturally, and receive responses that include tables, visuals, and fully cited reports. AI Mode is currently rolling out in the US.
A new feature called Deep Research, available today, can even pull from your uploaded documents or Google Drive to build out detailed, personalised answers. It’s like having a research assistant that never sleeps.
And it’s not just about information. With new agentic capabilities from Project Mariner baked into AI Mode, Search can now perform tasks for you, like finding affordable concert tickets, applying filters, and navigating to the checkout page. You can tell it what you want, and it will keep searching until it finds something that fits perfectly. These features will begin rolling out to users this summer.
Project Astra and the power behind Gemini Live
Of all the demos at I/O, the one that felt the most futuristic, yet weirdly grounded, was Gemini Live, powered by Project Astra. Unlike traditional assistants that wait for commands, Astra enables real-time, continuous perception and interaction.
When you share your camera or screen, it stays aware of what your phone is seeing and hearing, allowing Gemini Live to understand, respond, and act more like a human collaborator than a chatbot.
Gemini Live begins rolling out today via the Gemini app for both Android and iOS, expanding beyond its Pixel-exclusive origins.
In one demo, a user asked Gemini Live for help fixing a bike. It pulled up the correct page in a manual, located a relevant YouTube tutorial, scrolled to the appropriate section, and even offered to call a nearby bike shop.
Another video showed how Astra’s underlying tech is being used to assist someone with limited vision, reading signage aloud and providing real-time assistance in navigating the world.
It’s this Astra foundation that powers the new Search Live experience: essentially, a video-call-like interaction with Google Search. You show it what you’re looking at, and it responds conversationally and contextually in real time.
Smartphones will be getting these features sooner than you think, and some of the best phones to try them on are Google’s own Pixel devices.
Generative AI levels up: Imagen 4, Veo 3, and Flow
Creativity is no longer limited by your toolset; it’s limited by your prompts. Google introduced a suite of new generative models and creative tools designed to make content creation effortless.
Imagen 4 brings stunning realism to image generation, including the ability to render lifelike typography for invites and posters. Meanwhile, Veo 3 creates short video clips complete with sound effects, dialogue, and ambient audio. Improved physics simulation and lighting make its output almost indistinguishable from real footage. Imagen 4 and Veo 3 are available starting today.
The crown jewel is Flow, Google’s new AI filmmaking suite that combines Imagen, Veo, and Gemini. Users can build storyboards, extend scenes, adjust pacing, and layer in music, all with plain-language commands.
Google capped off the demo with Ancestra, a short film generated using Veo and Flow, featuring real actors blended seamlessly with AI-generated visuals. It was poignant and surreal – a glimpse at how storytelling could evolve in the AI age.
Gemini in Chrome, Canvas, and everyday tools
Gemini’s reach doesn’t stop at flashy demos. It’s now integrated into Chrome, where it can summarise web pages, provide context-based answers, and even assist with form filling. In Canvas, it serves as a co-creation partner for building infographics, podcasts, and more. Gemini in Chrome rolls out this week in the US for Gemini subscribers.
The Gemini app itself is getting a major overhaul, with deeper integration into Google Calendar, Keep, and Tasks. Whether you’re planning a project or just trying to remember your to-do list, Gemini is designed to work alongside you, often without you having to ask.
The Android XR future: Moohan and the glasses we’ve been waiting for
After nearly 90 minutes of AI overload, Google turned to hardware, specifically its upcoming XR ecosystem. Project Moohan, the long-anticipated Android XR headset developed with Samsung, will launch later this year.
Android XR will also support headsets and smart glasses, with partners like Warby Parker and Gentle Monster handling the style department.
The smart glasses demo was particularly ambitious. Google showed how the glasses could recall what you’ve seen, guide you through a 3D map, and even perform live translations on the fly. There were a few hiccups during the translation segment, but the potential was undeniable. NBA superstar Giannis Antetokounmpo even joined the demo to show off real-world use.
Google AI Pro and Ultra: New pricing tiers
Google introduced new AI subscription tiers. Google AI Pro, priced at $19.99 (approximately Rs 1,710) per month, unlocks Gemini 2.5 Pro, Flow with Veo 2, NotebookLM upgrades, and Gemini integrations across Gmail, Docs, and Chrome.
For power users, Google AI Ultra is a $249.99 (approximately Rs 21,388) per month package that bundles every advanced tool, including Gemini 2.5 Pro Deep Think, Veo 3, Flow with Veo 3, and up to 30TB of cloud storage. It’s expensive, but it’s aimed at professionals who need serious firepower. This tier is available now in the US and will roll out to more countries soon.
Unboxed Take: What we saw at Google I/O 2025
More than anything, I/O 2025 showed that Google’s vision of AI isn’t a chatbot or a widget; it’s an entire layer of intelligence woven into your daily life. Gemini won’t just answer your questions; it will anticipate them. It won’t just suggest what to buy; it will fill out the form, apply your preferences, and notify you when the price drops. It’s helpful. It’s powerful. And yes, it’s a little unsettling.
But if Google gets its way, Gemini won’t just run on your devices. It’ll run beside you – reading your cues, learning your style, and subtly shaping the way you work, think, and create.
Dhriti Datta