
From Badgers to Peanut Butter Heels: Google AI Overviews Attempt to Explain the Unexplainable

Summary

– Google’s AI Overviews feature can generate confident but entirely fabricated explanations for nonsensical phrases and idioms.
– Examples include defining “you can’t lick a badger twice” as not being able to trick someone again and “peanut butter platform heels” as a scientific experiment involving peanut butter and diamonds.
– This behavior is due to large language models attempting to provide context for “data voids” where little relevant information exists online.
– The system struggles to distinguish between obscure slang and deliberate nonsense, leading to plausible-sounding but incorrect definitions.
– This issue highlights the challenge of AI “hallucinations,” and while Google has made improvements, refining the system to avoid misleading information remains ongoing.

Google’s AI Overviews feature, designed to provide quick summaries at the top of search results, has demonstrated an unusual and sometimes amusing capability: it attempts to define completely made-up phrases and idioms. Users have discovered that searching for nonsensical sayings, often appended with the word “meaning,” can prompt the AI to generate confident, yet entirely fabricated, explanations.

Confidently Incorrect

Examples shared online highlight the AI’s willingness to invent context. When asked for the meaning of “you can’t lick a badger twice,” a phrase with no known origin, AI Overviews reportedly explained it means you can’t trick someone a second time after they’ve already been deceived. Similarly, searching for the meaning of “peanut butter platform heels” allegedly yielded an explanation involving a scientific experiment using peanut butter to create diamonds under pressure.


Other invented explanations include defining “never wash a rabbit in a cabbage” as a humorous warning and suggesting “the bicycle eats first” is a playful idiom. ZDNET tested this by searching for “A duckdog never blinks twice,” a phrase invented by a staff member. The AI initially explained it referred to a hyper-focused dog hunting ducks (which sometimes sleep with one eye open), and on a subsequent search, claimed it emphasized something “so unusual or unbelievable that it’s almost impossible to accept.”

Someone on Threads noticed you can type any random sentence into Google, then add “meaning” afterwards, and you’ll get an AI explanation of a famous idiom or phrase you just made up. Here is mine.

Why Does This Happen?

This behaviour stems from the nature of large language models and the challenge of “data voids” – search queries where little relevant information exists online. Google stated that when faced with “nonsensical or ‘false premise’ searches,” its systems try to find the most relevant results from limited web content, and AI Overviews may trigger in an effort to provide context. Google maintains the feature aims to show information backed by top web results and generally has high accuracy, comparable to features like Featured Snippets.

However, the system appears to struggle distinguishing between genuinely obscure or newly emerging slang and utter nonsense deliberately fed to it. It attempts to find patterns or analogies in existing data, leading to these often plausible-sounding but ultimately incorrect definitions.


An Ongoing Challenge

This phenomenon underscores the ongoing issue of AI “hallucinations”: instances where AI models generate false information with confidence. While sometimes humorous, it highlights the potential for AI Overviews, still labelled as experimental, to provide misleading information, particularly when users input flawed or nonsensical queries. Google acknowledges the issue and states it has rolled out improvements to limit AI Overviews appearing for such queries, but the examples show that refining the system’s ability to discern genuine meaning from gibberish remains a work in progress.

(Source: Mashable)


The Wiz

Wiz Consults, home of the Internet, is led by “the twins”, Wajdi & Karim, experienced professionals who are passionate about helping businesses succeed in the digital world. With over 20 years of experience in the industry, they specialize in digital publishing and marketing, and have a proven track record of delivering results for their clients.