
Gemini Deep Think Wins Gold at Math Olympiad

Summary

– The International Math Olympiad (IMO) brings together the world's top young mathematical talents, who this year competed alongside advanced AI models such as Google's Gemini Deep Think.
– Google’s AI correctly answered five of six IMO questions, earning gold medal status, while adhering to competition rules.
– Last year, Google's AlphaProof and AlphaGeometry 2 systems solved four problems, achieving silver medal status.
– In 2025, Google introduced Gemini Deep Think, a more analytical model that runs parallel reasoning processes for better accuracy.
– Unlike previous AI models, Deep Think processes natural language end-to-end without requiring translation by experts.

Students competing in the International Math Olympiad (IMO) face some of the toughest mathematical challenges imaginable, but this year they had unexpected company: Google's Gemini Deep Think AI not only participated but secured gold medal status by solving five out of six problems correctly. The achievement marks a significant leap from last year's performance, when the company's previous system managed only four correct answers.

Google took a different approach compared to other AI developers by adhering strictly to the IMO’s competition rules. While some models rely on specialized adaptations, Gemini Deep Think was designed as a general-purpose reasoning system rather than a narrow math-solving tool. This makes its success even more noteworthy, demonstrating versatility beyond typical AI benchmarks.

The team behind the project, Google DeepMind, refined its approach after last year's competition. In 2024, their system required human experts to translate problems into a specialized formal format before processing. This time, Deep Think operated entirely in natural language, eliminating the need for manual intervention. Thang Luong, senior scientist at DeepMind, emphasized that the new model runs multiple reasoning paths simultaneously, cross-checking results before finalizing an answer, a method that mimics deeper analytical thinking.
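Google has not published the internals of Deep Think's parallel reasoning, but the general idea resembles self-consistency-style sampling: run several independent solution attempts and keep the answer the paths converge on. The sketch below is purely illustrative, not Google's implementation; `solve_once` and `parallel_reasoning` are hypothetical names, and the single reasoning pass is mocked so the example stays runnable.

```python
import random
from collections import Counter

def solve_once(problem: str, seed: int) -> str:
    """Hypothetical stand-in for one independent reasoning pass of a model.
    Returns a candidate final answer; here it is mocked with randomness."""
    rng = random.Random(seed)
    return rng.choice(["42", "42", "42", "17"])  # most paths agree on "42"

def parallel_reasoning(problem: str, num_paths: int = 8) -> str:
    """Run several reasoning paths in parallel (conceptually) and keep the
    answer that the largest number of paths agree on (majority voting)."""
    candidates = [solve_once(problem, seed=i) for i in range(num_paths)]
    answer, votes = Counter(candidates).most_common(1)[0]
    print(f"{votes}/{num_paths} paths agreed on answer {answer!r}")
    return answer

if __name__ == "__main__":
    parallel_reasoning("Find the smallest positive integer n such that ...")
```

Cross-checking between paths could be more elaborate than a simple vote, for example verifying each candidate proof before counting it, but majority agreement captures the basic intuition of why running parallel attempts improves accuracy.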

What sets this achievement apart is the level of difficulty. The IMO is notorious for its complex problems, with only about half of human participants earning any medal. For an AI to reach gold status suggests rapid advancements in machine reasoning. While some critics argue that AI lacks true understanding, the results speak for themselves: Gemini Deep Think outperformed most human competitors while playing by the same rules.

Looking ahead, Google’s work with the IMO could reshape how AI is evaluated in high-stakes problem-solving. If models like Deep Think continue improving, they may soon set new benchmarks not just in mathematics but across multiple disciplines requiring advanced reasoning. For now, though, the gold medal stands as proof that AI can compete, and win, at the highest levels.

(Source: Ars Technica)

Topics

International Math Olympiad (IMO), Google Gemini Deep Think, AI performance at the IMO, Google DeepMind, advancements in machine reasoning, natural language processing in AI, AI adherence to competition rules, future of AI problem-solving