OpenAI Rushes GPT-5 Updates Amid User Backlash

Summary
– OpenAI’s GPT-5 release disappointed users with a perceived personality dilution and unexpected errors, leading to backlash.
– OpenAI CEO Sam Altman acknowledged issues with the model-switching feature and promised fixes, including keeping GPT-4o available for Plus users.
– GPT-5 was marketed as a major upgrade with advanced intelligence, but users complained about its technical tone and emotional distance compared to GPT-4o.
– The backlash sparked debates about users’ emotional attachments to AI, with some experts praising GPT-5’s less sycophantic tone as healthier.
– Altman noted the challenge of balancing user preferences for supportive AI with the risks of reinforcing biases or unhealthy dependencies.
OpenAI is scrambling to address widespread criticism of its newly launched GPT-5 model after users reported unexpected performance issues and a noticeable shift in the AI’s personality. The backlash has forced the company to temporarily keep its predecessor, GPT-4o, available for paying subscribers while engineers work on fixes.
CEO Sam Altman acknowledged the problems in a public post, explaining that a feature designed to automatically switch between models based on query complexity malfunctioned after launch. This led to GPT-5 handling tasks it wasn’t optimized for, making it appear “way dumber” than expected. Altman promised improvements, including higher rate limits for Plus users and better model-switching logic.
The rollout of GPT-5 was highly anticipated, with OpenAI promoting it as a major leap forward, boasting PhD-level reasoning and advanced coding capabilities. However, user reactions on platforms like Reddit painted a different picture, with many lamenting the loss of GPT-4o’s conversational warmth and nuanced responses. Some described GPT-5 as overly technical, emotionally detached, and prone to errors that its predecessor rarely made.
One Reddit thread titled “Kill 4o Isn’t Innovation, It’s Erasure” captured the frustration of longtime users who felt the update stripped away the AI’s personality. Others reported sluggish performance, hallucinations, and baffling mistakes, issues Altman attributed to deploying multiple new features at once.
The controversy has reignited discussions about how people interact with AI assistants. Research from OpenAI earlier this year highlighted how some users form emotional attachments to chatbots, treating them as therapists or life coaches. While GPT-5’s more businesslike tone may reduce bias and sycophancy, MIT professor Pattie Maes noted that many users prefer an AI that validates their feelings, even if it reinforces unhealthy behaviors.
Altman hinted at this tension in a follow-up post, acknowledging that balancing user preferences with responsible AI development is an ongoing challenge. The company plans to refine GPT-5’s responsiveness and reintroduce an optional “thinking mode” for complex tasks, but the outcry underscores how even small changes can disrupt deeply ingrained user habits.
For now, OpenAI remains tight-lipped about why GPT-5 occasionally stumbles on simple queries. Whether these issues stem from technical hiccups or fundamental design choices, the company faces mounting pressure to deliver on its promises without alienating its most dedicated users.
(Source: Wired)
