Google’s Gemini blocked 99% of bad ads in 2025

▼ Summary
– Google is integrating its Gemini AI into ad enforcement, crediting it with an 80% reduction in incorrect advertiser suspensions.
– In 2025, Google blocked or removed 8.3 billion ads and suspended 24.9 million advertiser accounts, stopping over 99% of policy-violating ads before they ran.
– A major focus was on scams, with 602 million scam-related ads removed and 4 million scam-linked accounts suspended.
– The AI analyzes signals like account behavior to detect malicious intent faster than older keyword-based systems.
– Google aims for most Responsive Search Ads to be reviewed instantly, though some advertisers report issues with bulk false disapprovals.

The fight against malicious online advertising is increasingly being waged by artificial intelligence, with Google positioning its Gemini system as a central weapon. According to the company’s latest data, this AI-driven approach is dramatically improving both the scale and accuracy of ad enforcement, catching more scams while significantly reducing errors that impact legitimate businesses.
Google’s 2025 Ads Safety Report reveals the sheer volume of this effort. Last year, the company blocked or removed a staggering 8.3 billion ads and suspended 24.9 million advertiser accounts. Crucially, the report states that over 99% of policy-violating ads were intercepted before they could ever be shown to users. A key driver of this performance is the Gemini AI, which Google credits with an 80% reduction in incorrect advertiser suspensions. The system also processed four times more user reports than the previous year and can identify scam patterns faster by analyzing the deeper intent behind ad content.
Scam prevention was a primary focus. In 2025, Google removed 602 million scam-related ads and suspended 4 million accounts linked to fraudulent activity. The enforcement scope extends beyond just ads, with the company taking action on over 245,000 publisher sites and blocking or restricting 480 million web pages. These efforts are supported by 35 policy updates made throughout the year to address emerging threats.
In the United States alone, Google removed 1.7 billion ads and suspended 3.3 million advertiser accounts. The most frequent policy violations involved abuse of the ad network, misrepresentation, and unauthorized sexual content.
For advertisers, these developments carry significant implications. Google is clearly signaling that AI will play a bigger role in determining which campaigns go live and which accounts are shut down. This raises the stakes for policy compliance, but also offers the potential benefit of fewer disruptive and costly false suspensions. The technical shift is substantial. Instead of relying primarily on keyword matching and static rules, Gemini analyzes hundreds of billions of signals, including account history, behavioral patterns, and campaign activity, to detect malicious intent much earlier.
Operational changes are following this technological shift. By the end of last year, Google reported that most Responsive Search Ads were being reviewed instantly upon submission, allowing harmful creatives to be blocked before launch. The company plans to expand this instant review capability to additional ad formats throughout this year.
However, the push toward faster, more automated enforcement is not without challenges. Some advertisers in markets like the U.K. and U.S. have recently reported receiving bulk disapproval alerts without clear policy violations, highlighting potential growing pains. This places pressure on Google to demonstrate that its tighter AI systems will not inadvertently create new disruptions for compliant brands.
The ultimate goal for Google is for advertisers to view Gemini as a dual-purpose tool: a robust shield against scams and a more precise filter for legitimate activity. The success of this initiative will be measured by its ability to maintain that delicate balance as ad safety enforcement becomes increasingly swift and autonomous.
(Source: Search Engine Land)
