
Google’s Olympiad-Winning Gemini 2.5 AI Now Public – With a Catch

Summary

– Google launched Gemini 2.5 Deep Think, a new AI model optimized for complex reasoning, but it’s a less powerful “bronze” version compared to the gold medal-winning IMO model.
– The bronze model is available through the Gemini mobile app for Google AI Ultra subscribers at $249.99/month, with a promotional rate of $124.99/month for the first three months.
– Gemini 2.5 Deep Think enhances problem-solving with parallel thinking and reinforcement learning, excelling in math, coding, and creative tasks like 3D design.
– The model outperforms Gemini 2.5 Pro and competitors like GPT-4 in benchmarks but trades speed for deeper reasoning, resulting in slower response times.
– While not the IMO gold model, Deep Think offers advanced capabilities for enterprise users, with the full gold version being tested by mathematicians.

Google has unveiled Gemini 2.5 Deep Think, a new AI model designed for advanced reasoning and problem-solving. This iteration builds on the success of its predecessor, which recently made history by securing a gold medal at the International Mathematical Olympiad, an unprecedented achievement for an AI system.

However, the version now available to the public isn’t the exact gold-winning model. Instead, Google has released a streamlined variant optimized for everyday use, described as a “bronze” version by Logan Kilpatrick, Product Lead for Google AI Studio. Kilpatrick clarified on social media that while this version is faster and more practical, the full gold-medal-winning model is being tested by mathematicians to explore its full potential.

Access to Gemini 2.5 Deep Think comes at a premium. It’s currently exclusive to subscribers of Google’s AI Ultra plan, priced at $249.99 per month, with an introductory offer of $124.99/month for the first three months. The model is accessible via the Gemini mobile app, offering enhanced reasoning for complex tasks like mathematical proofs, scientific research, and creative design.

What sets Deep Think apart? The model introduces parallel thinking, allowing it to evaluate multiple candidate ideas simultaneously, and incorporates reinforcement learning to refine its problem-solving approach over time. Early testers, including mathematicians and AI experts, have praised its ability to tackle intricate challenges, such as generating 3D graphics from abstract prompts, a feat Google says no other model has accomplished.

Performance benchmarks highlight its strength in coding, algorithm design, and scientific reasoning, where it outperforms competitors such as OpenAI’s GPT-4 and xAI’s Grok 4 by significant margins. These capabilities come with tradeoffs, however: Deep Think operates more slowly than standard models and has a higher refusal rate for ambiguous queries, making it better suited to deliberate, high-stakes tasks than to quick responses.

While the public version isn’t the Olympiad-winning model, its release signals a shift toward more sophisticated AI tools for professionals. Enterprises and researchers can leverage its analytical prowess, though widespread adoption may hinge on cost and accessibility. For now, Deep Think remains a premium feature, offering a glimpse into the future of AI-assisted problem-solving.

How to get it: Subscribers to Google’s AI Ultra plan can activate Deep Think in the Gemini app, unlocking extended reasoning and detailed outputs. Lower-tier plans, including the free version, do not include access, reinforcing its status as a high-end tool for specialized use cases.

For technical leaders and innovators, Gemini 2.5 Deep Think represents a tangible step forward, blending cutting-edge research with real-world applications, even if the full gold-standard model remains under wraps for now.

(Source: VentureBeat)
