AI & TechArtificial IntelligenceCybersecurityDigital PublishingNewswireTechnology

Google AI Summaries: How to Spot Scammers Trying to Steal From You

▼ Summary

– Scammers are using AI tools like Google’s AI Overview and OpenAI’s ChatGPT to insert fake customer service numbers into search results.
– Victims have been tricked into calling these numbers and providing payment information, assuming the AI-generated results were legitimate.
– The problem is exacerbated because AI often presents a single result instead of multiple options, increasing the likelihood users will trust and use fraudulent information.
– Both Google and OpenAI acknowledge the issue and state they are taking action to remove fake numbers and improve their systems.
– To protect yourself, avoid using AI for contact searches and instead use traditional search methods or go directly to company websites.

Google’s AI summaries have become a prime target for scammers aiming to steal personal and financial information through fake customer service numbers. As more people rely on AI-powered search tools for quick answers, cybercriminals are exploiting these systems to present fraudulent contact details that appear legitimate. This growing threat highlights the importance of verifying information through trusted sources rather than relying solely on automated summaries.

Several individuals have recently shared alarming stories about falling victim to these sophisticated schemes. One real estate CEO, who considers himself cautious, searched for a Royal Caribbean customer service number using Google’s AI Overview. The result looked authentic, complete with accurate pricing and service terminology, so he provided his credit card information, only to later discover unauthorized charges. In another case, someone searching for Swiggy’s customer care number was instructed to share their screen via WhatsApp, raising immediate red flags. It turned out Swiggy doesn’t even offer phone support.

Scammers have long manipulated search results, but AI summaries intensify the risk by presenting a single, seemingly authoritative answer. Unlike traditional search results, which display multiple links allowing users to cross-reference information, AI overviews condense everything into one response. This streamlined approach can make fraudulent details harder to detect. Even OpenAI’s ChatGPT has been targeted through prompt injection attacks, where hackers embed malicious commands that force the AI to include fake numbers in its responses.

Security researchers have demonstrated how these attacks work. In one technique, bad actors use specially crafted prompts to instruct tools like Google Gemini to incorporate scam messages and bogus contact information into generated summaries. While companies like Google and OpenAI say they are addressing these vulnerabilities, the process of identifying and removing all fraudulent content takes time.

To protect yourself, avoid calling any phone number or using contact details pulled directly from an AI-generated summary. Instead, run a standard search without AI assistance by adding “-AI” to your query, or, even better, navigate directly to the official company website to find verified contact information. Staying vigilant and double-checking sources remains the most effective defense against these evolving scams.

(Source: ZDNET)

Topics

ai-powered search scams 95% fake customer service numbers 90% google ai overview vulnerabilities 85% openai chatgpt security risks 80% prompt injection attacks 75% financial fraud prevention 70% search engine manipulation 65% user protection measures 60%