Can AI Escape the Enshittification Trap?

▼ Summary
– The author had a highly positive experience using GPT-5 for a restaurant recommendation in Rome, which was based on local reviews, food blogs, and the restaurant’s unique cuisine.
– Trust in AI as an unbiased source was essential, as the author relied on it to provide honest suggestions without sponsored content or hidden financial incentives.
– The article introduces “enshittification,” a term describing how tech platforms initially serve users well but later degrade to maximize profits, as seen with companies like Google and Amazon.
– AI’s potential enshittification is concerning because it could lead to biased recommendations through advertising and paid placements, compromising its reliability as a trusted companion.
– With massive investments and few dominant companies in AI, there is pressure to monetize, raising fears that user trust could be eroded for profit, similar to past tech trends.

During a recent trip to Italy, I decided to test the capabilities of advanced AI by asking GPT-5 for restaurant recommendations near my Rome hotel. The system directed me to a spot just a short walk away on Via Margutta, and the meal turned out to be truly unforgettable. Curious about how the selection was made, I later inquired and learned the AI had analyzed glowing local reviews, mentions in food blogs and Italian media, and the establishment’s unique blend of traditional Roman and modern cuisine. Of course, the short distance was also a factor.
What made this experience work was trust: I had to believe the AI was acting as an impartial advisor, not steering me toward a sponsored pick or earning a commission from my visit. While I did glance at the restaurant’s website, the real appeal of using AI lies in skipping hours of tedious research. This single interaction strengthened my faith in AI’s potential, yet it also sparked a concern: as firms like OpenAI grow more influential and face pressure to satisfy investors, could AI fall victim to the same decline in quality that plagues many of today’s tech platforms?
This gradual degradation has been labeled “enshittification” by author and critic Cory Doctorow. He argues that digital services often begin by prioritizing user satisfaction, but once they dominate the market, they deliberately reduce quality to maximize profits. After WIRED republished Doctorow’s influential 2022 essay on the topic, the term quickly entered everyday language because it perfectly captured a widely felt frustration. It earned recognition as the American Dialect Society’s 2023 Word of the Year and has since been referenced even in formal settings that typically avoid such language. Doctorow recently released a book on the subject, and its cover features the fitting emoji of a pile of poop.
If AI systems undergo enshittification, the consequences could be far more severe than irrelevant Google results, ad-cluttered Amazon pages, or Facebook’s preference for divisive content. AI is evolving into a constant personal assistant, offering immediate answers for everything from news interpretation to major life decisions. Given the astronomical costs of developing advanced AI, only a handful of corporations are likely to control the market. These companies plan to invest hundreds of billions over the coming years to refine their models and expand their user base. At the moment, AI appears to be in what Doctorow describes as the “good to the users” phase. However, the immense financial stakes and locked-in audiences could tempt these firms to exploit their position, ultimately “clawing back all the value for themselves.”
When considering how enshittification might affect AI, advertising is the most obvious risk. The fear is that AI could begin prioritizing paid promotions in its recommendations. While this isn’t happening yet, AI companies are actively exploring advertising models. OpenAI’s CEO Sam Altman recently suggested that a “cool ad product” could benefit both the company and its users. At the same time, OpenAI announced a partnership with Walmart, enabling shopping directly within the ChatGPT app, a move that certainly raises questions about impartiality. Another platform, Perplexity, includes sponsored results in clearly marked follow-up responses, but insists these ads won’t compromise its commitment to unbiased, trustworthy answers.
(Source: Wired)
