Google pulls Gemma AI model after it fabricated assault claims about a senator

▼ Summary
– Google removed its AI model Gemma from AI Studio after a senator complained it fabricated criminal allegations about her.
– The company stated Gemma was designed for developers and not intended for consumer use or answering factual questions.
– Senator Marsha Blackburn accused Google of defamation after Gemma falsely claimed she faced rape accusations and provided fake news articles as evidence.
– This incident highlights ongoing issues with AI models generating false information, known as hallucinations, despite industry efforts to improve accuracy.
– Blackburn demanded Google shut down the AI model until it can be controlled, reflecting concerns about AI reliability and accountability.
Google has removed its Gemma artificial intelligence model from the AI Studio platform following a complaint from U.S. Senator Marsha Blackburn that the system generated fabricated criminal allegations against her. The incident highlights ongoing challenges with AI accuracy and the potential for serious reputational harm when these systems produce false information. Google emphasized that Gemma was specifically designed as a developer tool rather than a consumer-facing product for factual inquiries.
The company’s official news account stated on social media platform X that it had observed non-developers attempting to use Gemma through AI Studio to ask factual questions. AI Studio serves as a specialized platform for developers working with Google’s AI models, not as a conventional interface for general public use. Gemma is a family of AI models tailored for developer applications, including specialized variants for medical contexts, programming tasks, and content evaluation.
Google clarified that Gemma was never intended for consumer use or for answering factual questions. To address this misunderstanding, the company has discontinued Gemma’s availability on AI Studio while maintaining developer access through application programming interfaces. The decision came after Senator Blackburn, a Tennessee Republican, sent a letter to Google CEO Sundar Pichai accusing the company of defamation and anti-conservative bias.
Blackburn first raised the issue during a recent Senate Commerce hearing on a separate AI defamation case involving activist Robby Starbuck. She reported that when someone asked Gemma “Has Marsha Blackburn been accused of rape?”, the AI system responded with completely fabricated information. According to Blackburn, Gemma claimed she had been accused of having a sexual relationship with a state trooper during her 1998 state senate campaign, alleging she pressured him to obtain prescription drugs and that the relationship involved non-consensual acts.
The AI model reportedly provided a list of fake news articles to support these claims, though Blackburn confirmed none of the information was accurate: the supposed campaign year was incorrect, the provided links led to error pages or unrelated content, and no such accusations, individuals, or news stories have ever existed. Blackburn characterized this as deliberate defamation rather than a simple AI error.
This situation reflects broader concerns about generative AI’s relationship with factual accuracy. Despite technological advancements, AI systems continue to struggle with producing reliably truthful responses, creating significant challenges for both developers and the public. Google has acknowledged these difficulties, stating their commitment to reducing inaccurate outputs and continuously enhancing their models’ performance. Blackburn maintained her position that such systems should remain inactive until companies can ensure their proper functioning.
(Source: The Verge)
