Google I/O 2024: Top 5 AI announcements from Sundar Pichai’s keynote address

The biggest announcements from the conference where AI was mentioned over 120 times

Google kicked off its annual developer conference today in Mountain View, California, and there were a number of noteworthy announcements. As is the current trend in the industry, the theme for CEO Sundar Pichai’s keynote was Artificial Intelligence.

To give a sense of how much AI means to Google this year, the term was used over 120 times during the two-hour keynote address. From advancements in the AI space to deeper integrations into Google’s suite of products and services, here are the top five AI announcements from the Google I/O keynote.


Veo

Veo is a generative AI tool that can generate videos from scratch, working a lot like OpenAI’s Sora. It can use text prompts to create videos at up to 1080p resolution. Built on Google DeepMind’s work, Veo can create incredibly photo-realistic videos.

The company also claims that the tool is great at understanding nuance in prompts, for a more accurate representation of users’ visualisations. Veo will initially be made available to select creators, while details of a wider rollout are currently under wraps.

Project Astra

Google’s Project Astra was created with the goal of developing a ‘universal AI’. As showcased by Google, the tool combines multiple input and output modalities with the power of AI to provide real-time information using visual cues, audio cues and even touch, all at once.

What this means is that you can point your camera at your room, use your finger to draw an arrow at a particular object in the frame, and speak to the AI tool to understand what you’re pointing at.

These magic-like AI features will be coming to Gemini later this year. Project Astra is one of the coolest announcements of the year, and you can check out how it will work in the video below.

AI in Google Search

Google Search is the world’s most used search engine, and Google is now powering it with Gemini’s multi-modal generative AI capabilities. This will not only make Google Search capable of using AI in real time for better results, but also enable users to take a multi-modal approach to using Search itself.

As shown at the event, users will be able to use AI-powered Google Search to do things like take a video of a problem they’re facing on their smartphone and ask Google what’s wrong, getting real-time results with solutions. This will allow Google to directly interpret the things around you, saving you the trouble of putting them into words.

Ask Photos

Google is integrating its powerful Gemini AI features right within the Photos app for a more intuitive experience. The new ‘Ask Photos’ feature will allow users to simply converse with Gemini and get answers drawn from the data in their Photos app, across years’ worth of photos.

As Pichai showcased, using Ask Photos, users can simply ask Gemini what their license plate number is, and Photos will find that one photo of the license plate, taken years ago, from their library.

You can also use the feature to dive into old memories, like revisiting your kids’ first swimming lesson and seeing how they have progressed since.

Improved Gemini 1.5 Pro

With deeper Gmail integration, Gemini 1.5 Pro is now capable of doing things like scouring your recent emails, finding all the important bits, and giving you a summary in seconds.

With NotebookLM, users can also use Audio Overviews to easily learn complex concepts in the form of conversations. Thanks to its multi-modal approach, Gemini 1.5 Pro is capable of, say, talking to your kids and teaching them about math, gravity or other subjects.

Gemini can also now integrate with multiple apps and services at once to make everyday tasks easier. Pichai demonstrated how Gemini can help you return a pair of shoes you bought by going through your mail for the receipt, contacting the company, and using Calendar to schedule a return pickup, all without requiring any manual intervention.

Google also announced Gemini 1.5 Flash, a lighter, faster version of Gemini 1.5 Pro that retains all of its multi-modal and contextual capabilities.

These were some of the biggest AI announcements from the Google I/O keynote, and more exciting announcements on other developments, including Android 15, are likely in store for tomorrow. Stay tuned to Unboxed by Croma for all the updates.


