Google’s Gemini just bagged a gold at the Math Olympiad, and it did it like a human

Gemini Deep Think cracks Olympiad like a prodigy

An advanced version of Google’s Gemini AI, running in a newly unveiled Deep Think mode, just scored a gold medal at the 2025 International Mathematical Olympiad (IMO).

Gemini solved five of the six hardcore problems, spanning geometry, combinatorics, algebra, and number theory. It earned 35 out of 42 points, enough to clear the gold-medal threshold, something the most brilliant human teens spend years training for.

And it did all of that in natural language. No formal proof languages like Lean. No help from symbolic solvers. No AlphaGeometry crunching away in the background. Just pure, unfiltered reasoning. Like a human. But faster.

The AI that thinks like a mathematician

Let’s break it down. Last year, Google DeepMind’s approach to high-level math was very different. It leaned on formal proof systems like AlphaProof, which managed silver, and even then some individual problems took days to solve.

This year? Gemini Deep Think pulled off full, rigorous proofs within the same four-and-a-half-hour window human contestants get.

The secret weapon is a new Deep Think mode built into Gemini 2.5 Pro. Introduced back in May, Deep Think is designed for exactly this kind of challenge – deep, multi-step, chain-of-thought reasoning. The model juggles multiple hypotheses in parallel and works through complex problems the way a seasoned Olympiad coach would.

Let’s get nerdy for a sec. The Gemini IMO variant wasn’t just some off-the-shelf chatbot. It was:

– Trained with reinforcement learning techniques designed for theorem-proving and problem-solving
– Given access to curated, high-quality solutions from past Olympiads, not for copying, but for inspiration
– Allowed more “thinking time” during its training cycles to simulate long-form reasoning
– Tuned to explore many paths at once (see the sketch below), combining them into clear, precise solutions, most of which, according to IMO judges, were impressively easy to follow
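To make “explore many paths at once” concrete, here’s a toy Python sketch of self-consistency sampling, a published research idea in the same spirit. Everything in it is hypothetical: the “model” is a random stub, the function names are made up, and this is not DeepMind’s actual pipeline or the Gemini API, just a minimal illustration of why parallel reasoning paths help.

```python
import random
from collections import Counter

def sample_answer(problem: str, rng: random.Random) -> str:
    """Stand-in for one independent reasoning path (hypothetical stub).

    A real system would sample a full chain-of-thought proof from a
    model; here each path answers a toy question, getting it right
    70% of the time and guessing wildly otherwise.
    """
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 41))

def solve_by_self_consistency(problem: str, n_paths: int = 16) -> str:
    """Sample n_paths independent answers and return the majority vote.

    Wrong paths tend to scatter across many different answers, while
    correct paths converge, so the vote filters out bad reasoning.
    """
    rng = random.Random()
    answers = [sample_answer(problem, rng) for _ in range(n_paths)]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer

print(solve_by_self_consistency("toy problem"))  # almost always "42"
```

The same intuition scales up: run many chains of thought in parallel, then let a vote, or a grader, pick the answer they converge on.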

In fact, the judges themselves reviewed the proofs and gave them the seal of approval. That’s a big deal. This wasn’t AI-generated filler; it was the kind of clean, elegant math that impresses PhDs.

For context, the standard Gemini 2.5 Pro (the kind you might chat with on a Pixel phone) managed a score of only about 31.5 percent on the same problems. Deep Think absolutely dunked on it.

We’ve seen AI write poetry, paint portraits, and generate fake Drake songs. But this? This is different. This is AI entering one of the most intellectually demanding arenas in the world and winning on its own terms.
