Gemini AI Solves ICPC Problems That Stumped All but Four of 139 Human Teams

Summary
– Google’s Gemini 2.5 AI participated in the 2025 ICPC World Finals and achieved a gold medal performance, which the company views as a step toward artificial general intelligence.
– The ICPC is a large, long-running competition where college coders solve complex algorithmic puzzles over five hours, and Gemini competed in a remote online environment approved by the organizers.
– Human competitors were given a 10-minute head start before Gemini began working on the problems during the event.
– Google used the same general Gemini 2.5 model available in other applications but enhanced it to process thinking tokens continuously for the five-hour competition duration.
– Gemini solved 10 of the 12 problems correctly, a result matched by only four of the 139 human teams, and the ICPC director called it a pivotal moment for both AI development and academic problem-solving standards.

Google’s Gemini AI recently achieved a remarkable milestone by earning a gold medal at the 2025 International Collegiate Programming Contest (ICPC) World Finals, solving 10 out of 12 complex algorithmic problems within the strict five-hour time limit. This performance not only demonstrates the model’s advanced reasoning capabilities but also places it among an elite group of competitors: only four of the 139 human teams matched this result.
The ICPC is widely regarded as one of the most demanding programming competitions in the world, attracting thousands of top university students each year. Participants face a series of intricate coding challenges designed to test logic, efficiency, and creativity under intense time pressure. Google entered Gemini 2.5 into the contest using a remote online environment sanctioned by the competition organizers. Human contestants were given a ten-minute head start before the AI began working on the problems.
Unlike previous specialized AI models developed for events like the International Mathematical Olympiad, Gemini 2.5 was not custom-trained for this contest. Instead, Google used the same general-purpose model available in its consumer applications, though it was enhanced to sustain “deep thinking” over the extended competition period. This approach allowed the AI to work through a large number of reasoning steps, known as “thinking tokens,” as it searched for solutions.
By the end of the five hours, Gemini had successfully solved ten problems, securing a gold medal and tying with the best human teams. ICPC director Bill Poucher noted the significance of this achievement, stating that the event has always set the bar for problem-solving excellence. He emphasized that Gemini’s performance represents a pivotal moment in shaping both AI development and academic benchmarks for the future.
(Source: Ars Technica)