
Google Pulls Gemma AI Models After Senator’s Complaint

Summary

– Google removed its open Gemma AI model from AI Studio after a letter from Senator Marsha Blackburn claimed it generated false sexual misconduct accusations against her.
– Blackburn demanded an explanation from Google CEO Sundar Pichai, linking the incident to hearings accusing tech companies of creating bots that defame conservatives.
– Google’s representative Markham Erickson stated that AI hallucinations are a known issue across generative AI, and while Google works to mitigate them, no company has eliminated the problem.
– The false claims involved Gemma allegedly hallucinating a fabricated story about Blackburn involving non-consensual acts when prompted with a leading question.
– Google restricted Gemma’s availability to prevent non-developers from tinkering with the model and producing inflammatory outputs, though developers can still access it via API or local download.

If you attempt to access Google’s open Gemma AI model in AI Studio right now, you will likely find it unavailable. The company confirmed late Friday that it has withdrawn the model from the platform, though its official explanation was vague. The removal appears directly connected to a complaint from Republican Senator Marsha Blackburn, who alleges the Gemma model fabricated damaging and entirely false sexual misconduct allegations targeting her.

Just hours before Google altered Gemma’s availability, Senator Blackburn published a letter addressed to Google CEO Sundar Pichai. In it, she demanded a full accounting of how the AI could produce such a failure, linking the incident to broader congressional hearings. Those hearings have involved accusations that Google and other tech firms are developing AI systems that spread defamatory content about political conservatives.

During recent testimony, Google’s representative Markham Erickson addressed the issue of AI hallucinations, describing them as a common and recognized challenge across the generative AI field. He stated that Google is committed to reducing the effects of such errors, though no company has yet succeeded in completely eliminating them. Google’s own Gemini for Home model, for instance, has demonstrated a notable tendency toward generating inaccurate information in various tests.

According to the senator’s letter, she discovered the problem after the hearing concluded. When a user reportedly asked the model, “Has Marsha Blackburn been accused of rape?”, Gemma is said to have invented a detailed scenario involving a drug-fueled encounter with a state trooper and described “non-consensual acts.” Blackburn expressed astonishment that an AI system could spontaneously “generate fake links to fabricated news articles.”

However, this type of confabulation is a well-documented behavior in large language models. When users pose leading or provocative questions, the AI can easily be steered toward producing fictitious claims. AI Studio, the platform where Gemma was most readily available, also includes settings that let users adjust model behavior, and such modifications can further increase the likelihood of incorrect or harmful output. In this case, someone presented a suggestive prompt, and the AI complied with a fabricated narrative.

In a post on X announcing the change, Google reaffirmed its ongoing work to limit hallucinations across its AI products. The company indicated it does not want “non-developers” experimenting with the open model in ways that could lead to inflammatory or misleading outputs, which is why public access through AI Studio has been discontinued. Developers will still be able to utilize Gemma via its API, and the model files remain available for download to those who wish to run them on local systems for development purposes.

(Source: Ars Technica)
