
AI Mode Hits 75M Users, Gemini 3 Flash Launches: SEO Pulse

Originally published on: December 19, 2025
Summary

– Google’s AI Mode has reached 75 million daily active users, but promised personal context features linking to Gmail and other apps remain delayed with no public timeline.
– AI Mode queries are two to three times longer than traditional searches, indicating users are engaging in more conversational, multi-turn interactions.
– Google launched Gemini 3 Flash as its new default AI model for search, prioritizing faster performance and immediate deployment to improve user experience.
– Ahrefs research found AI Mode and AI Overviews cite the same specific URLs only 13.7% of the time, meaning they operate as separate citation engines requiring distinct optimization.
– The developments show AI search is now a production-scale reality, requiring optimization for current features like longer queries and separate AI experiences, rather than future promises.

Google’s AI-powered search features have become a dominant force in the search landscape, with AI Mode reaching 75 million daily active users. This milestone signals a shift from experimental technology to a core component of the search experience that demands strategic attention. Alongside this growth, Google has rapidly deployed its new Gemini 3 Flash model to improve speed, while new data reveals significant differences in how the various AI features source their information.

A key development is the confirmed delay of highly anticipated personal context features for AI Mode. Announced seven months ago, these capabilities would connect search to a user’s Gmail, Calendar, and other apps. Google’s Nick Fox stated they remain in internal testing with no public release timeline. This changes the immediate optimization landscape. Marketers should focus on creating content that answers the longer, more conversational queries users are already typing, rather than preparing for a wave of hyper-personalized, automated searches. The absence of this automated layer means users are manually adding context, which influences the type of content that performs best.

The launch of Gemini 3 Flash as the new default model in AI Mode and the Gemini app underscores Google’s commitment to speed and efficiency. This model promises faster response times and improved reasoning, making multi-turn conversations with search more practical. For professionals, the immediate deployment highlights a new reality: AI model updates can now flow directly into search products without delay, potentially altering how AI features behave and respond to queries overnight. This necessitates a more agile approach to monitoring search performance.

Perhaps the most actionable insight comes from an Ahrefs analysis of 730,000 queries. The research found that while AI Mode and AI Overviews reach semantically similar conclusions 86% of the time, they cite the same specific URLs only 13.7% of the time. This reveals they operate as separate citation engines. Success in one AI experience does not guarantee visibility in the other, creating a split optimization target. For queries where AI Mode appears, factors like publishing frequency and freshness may carry more weight. For AI Overviews, traditional authority signals and in-depth resource coverage might be more critical.
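As a toy illustration of the kind of per-query URL-overlap measurement behind a figure like the Ahrefs 13.7%, here is a minimal sketch. The helper names and sample URLs are invented placeholders, not Ahrefs' methodology or data:

```python
def any_shared_url(citations_a, citations_b):
    """True if the two citation lists share at least one exact URL."""
    return bool(set(citations_a) & set(citations_b))

def overlap_rate(per_query_citations):
    """Fraction of queries where both AI experiences cite a common URL.

    per_query_citations: list of (ai_mode_urls, ai_overview_urls) pairs,
    one pair per query.
    """
    shared = sum(any_shared_url(a, b) for a, b in per_query_citations)
    return shared / len(per_query_citations)

# Invented sample: one query with a shared citation, one without.
sample = [
    (["https://a.com/x", "https://b.com/y"], ["https://b.com/y"]),
    (["https://a.com/x"], ["https://c.com/z"]),
]
print(overlap_rate(sample))  # → 0.5
```

Note that this counts exact-URL matches only; semantic similarity of the cited pages (the 86% figure) would require comparing page content, not URLs, which is why the two numbers can diverge so widely.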

The collective takeaway from this week’s developments is clear: AI search is an operational reality. With tens of millions of daily users, immediate model deployments, and distinct operational behaviors, these features are current infrastructure. The strategy now is to optimize for the present, catering to longer queries, adapting to rapid model changes, and targeting each AI experience independently based on how it sources and presents information today.

(Source: Search Engine Journal)
