
Ghost in the Machine, Part 2: A Pattern of Deception

Summary

AI-generated fake journalists: The “Sophie Vermeulen” case exposed undisclosed AI content operations using fictitious authors, highlighting media’s trust crisis.
Sports Illustrated scandal: AI-generated articles under fake bylines (“Drew Ortiz”) with low-quality, SEO-driven content led to public outrage and CEO dismissal.
CNET’s AI failures: Undisclosed AI-written financial articles contained errors and plagiarism, resulting in reputational damage and Wikipedia downgrading its reliability.
Gannett and BuzzFeed missteps: Gannett’s AI-produced sports recaps were error-ridden, while BuzzFeed shifted from creative AI use to low-quality SEO content.
Pattern of ethical breaches: Repeated AI misuse across media reveals a trend of secrecy, eroded trust, and prioritizing efficiency over journalistic integrity.

This is the second installment in our six-part series, “Ghost in the Machine,” which explores the hidden use of artificial intelligence in journalism and the media’s growing trust crisis. You can read Part 1 here.

In our first post, we explored the shocking case of Elle Belgium, where a phantom journalist named “Sophie Vermeulen” was used to give a human face to a massive, undisclosed AI content operation. As brazen as that scandal was, it is not an anomaly. It is a symptom of a deeper malaise affecting the media industry.

As publishers grapple with relentless economic pressures, some are turning to AI not as a tool to enhance journalism, but as a shortcut to replace it, often with disastrous consequences for their integrity. A review of recent scandals reveals a consistent pattern: the surreptitious use of AI, the creation of fictitious authors, a public-facing denial or deflection when caught, and a profound erosion of reader trust. The cases of Sports Illustrated, CNET, Gannett, and BuzzFeed form a compelling and cautionary narrative of this trend.

Case Study 1: The Fall of Sports Illustrated

In November 2023, the tech publication Futurism dropped a bombshell that sent shockwaves through the sports media world. It revealed that Sports Illustrated (SI), a titan of American journalism for nearly 70 years, had been publishing articles under the bylines of writers who did not exist. These phantom authors, such as “Drew Ortiz,” were given elaborate, human-sounding biographies and, most damningly, profile headshots that were found to be for sale on a website specializing in AI-generated faces.

The content itself was a far cry from the literary sportswriting that had earned SI its legendary status. The AI-penned articles were largely low-quality product reviews for items like volleyballs and athletic gear, filled with stilted and bizarre phrasing. One article by “Ortiz” offered the profound insight that volleyball “can be a little tricky to get into, especially without an actual ball to practice with”. This was not journalism; it was low-grade, SEO-driven “commerce content” designed to generate affiliate link revenue, cloaked in the authority of the Sports Illustrated brand. An anonymous source inside the magazine confirmed to Futurism that the content was “absolutely AI-generated, no matter how much they say that it is not”.

The response from SI’s publisher, The Arena Group, was a masterclass in corporate deflection. After Futurism made its inquiries, the fake author profiles were silently scrubbed from the website. In a public statement, the company denied that the articles were AI-generated and instead cast blame on a third-party content vendor, AdVon Commerce. The Arena Group claimed AdVon had assured them the articles were “written and edited by humans” but that their writers had used pseudonyms for “privacy”, a claim ridiculed as “obviously absurd” given the innocuous subject matter. This reliance on a third-party scapegoat illustrates a dangerous diffusion of accountability. By outsourcing content to opaque vendors, publishers can attempt to create plausible deniability, but they cannot absolve themselves of responsibility for what appears under their masthead. The reader’s trust is with Sports Illustrated, not an unknown vendor, and this strategy of “accountability laundering” fundamentally undermines that trust.

The internal reaction was swift and furious. The Sports Illustrated Union, representing the human journalists at the magazine, released a statement saying they were “horrified” by the report and demanded “answers and transparency from Arena Group management” and a commitment to “not publishing computer-written stories by fake people”. The scandal proved to be the final straw for a brand already suffering from years of financial turmoil and complex ownership changes. In the end, the controversy contributed to the firing of The Arena Group’s CEO, Ross Levinsohn, marking a devastating blow to the reputation of a once-great American institution.

Case Study 2: CNET’s Self-Inflicted Wound

Months before the Sports Illustrated debacle, the tech news site CNET, owned by the digital marketing giant Red Ventures, provided a stark preview of the perils of undisclosed AI content. In late 2022, CNET began quietly publishing a series of financial explainer articles written by what it later described as an “internally designed AI engine”. The AI’s involvement was initially obscured, with articles attributed to the generic byline “CNET Money Staff.” Only by clicking on the byline could a reader find a small disclosure about the use of “automation technology”.

The experiment quickly devolved into a “journalistic disaster”. The AI-generated articles were found to be riddled with “boneheaded” factual errors and instances of blatant plagiarism. In an article explaining compound interest, for example, the AI incorrectly calculated that a $10,000 deposit earning 3% interest would yield $10,300 in earnings in the first year, rather than the correct $300. Another investigation by Futurism revealed that the AI’s work showed “deep structural and phrasing similarities” to articles previously published by competitors like Forbes and CNET’s own sister site, Bankrate, without attribution.
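To see just how basic the error was, here is a minimal sketch of the first-year interest calculation the AI botched. This is purely illustrative; it is not CNET's engine or code, just the arithmetic a human editor should have checked:

```python
# Illustrative check of the figure CNET's AI got wrong (not CNET's actual code):
# first-year interest on a $10,000 deposit at a 3% annual rate.

principal = 10_000
annual_rate = 0.03

earnings = principal * annual_rate   # interest earned in year one
balance = principal + earnings       # total balance at the end of year one

print(f"First-year earnings: ${earnings:,.0f}")  # $300
print(f"Year-end balance:    ${balance:,.0f}")   # $10,300
```

The AI appears to have conflated the year-end balance ($10,300) with the interest earned ($300), exactly the kind of plausible-sounding confusion that slips past readers but should never slip past an editor.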

The fallout was severe. After the errors and plagiarism were exposed, CNET was forced to pause the experiment and conduct a painful internal audit, which resulted in significant corrections being issued for 41 of the 77 articles produced by the AI. Insiders claimed that the problem was even more widespread, with unlabeled AI-generated content also being published in email newsletters. The scandal triggered a unionization drive among CNET staff, who feared for their professional reputations. The ultimate humiliation came when the editors of Wikipedia, after a lengthy debate, officially demoted CNET from a “generally reliable” source to an “unreliable” one for any content published after its 2020 acquisition by Red Ventures, citing the AI scandal as a primary reason. For a publication that built its brand on being a trusted authority in technology, the reputational nosedive was catastrophic.

Case Study 3: The Gannett and BuzzFeed Experiments

The pattern of ill-conceived AI implementation extends to other major media players. In the summer of 2023, the newspaper giant Gannett was widely ridiculed for using an AI tool called LedeAI to generate high school sports recaps across its local papers, including the Columbus Dispatch. The results were “abysmal,” producing articles filled with robotic phrasing, awkward repetition, and glaring errors, including some that still contained unrendered template placeholders, such as “The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]]”. Gannett “temporarily” paused the experiment and issued corrections, but the episode served as another warning about the dangers of deploying unchecked automation in the newsroom, particularly in the context of a company that had recently laid off 6% of its news division.
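For readers wondering how raw placeholders end up in a published article, here is a minimal sketch of the failure mode, assuming a naive string-template pipeline. LedeAI's internals are not public, and the placeholder syntax below is Python's, not theirs; the point is only how missing data plus no human review yields a broken sentence in print:

```python
from string import Template

# Hypothetical recap template, loosely patterned on the garbled Gannett output.
recap = Template("The $winner $winner_mascot defeated the $loser $loser_mascot.")

# Data feed arrives with the mascot fields missing.
game = {"winner": "Worthington Christian", "loser": "Westerville North"}

# safe_substitute() leaves unmatched placeholders in the text instead of
# raising an error -- so without an editor, the broken sentence ships as-is.
print(recap.safe_substitute(game))
# The Worthington Christian $winner_mascot defeated the Westerville North $loser_mascot.
```

The design choice that makes this convenient for automation (fail silently, keep publishing) is precisely what makes it dangerous without a human in the loop.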

Digital media pioneer BuzzFeed offers a more nuanced but equally telling case. In early 2023, CEO Jonah Peretti announced the company’s foray into AI with lofty rhetoric about enhancing human creativity and creating personalized content. The initial results were the interactive “Infinity Quizzes,” which used AI to generate unique results based on user input, an interesting, if choppy, experiment. However, that creative ambition quickly faded. Soon after, BuzzFeed began quietly publishing dozens of bland, formulaic, SEO-driven travel guides attributed to “Buzzy the Robot”. These articles, “collaboratively written” not by journalists but by non-editorial staff in departments such as account management, were filled with repetitive, uninspired prose, embodying the very “content mill” model Peretti had promised to avoid. Critics saw the move as a bleak experiment to test whether AI was mature enough to replace human writers, a stark departure from the initial creative vision.

These cases, when viewed together, paint a clear picture. From legacy sports magazines to modern digital publishers, the rush to adopt AI has repeatedly led to ethical breaches, shoddy content, and public embarrassment. The common thread is a failure of transparency and a prioritization of perceived efficiency over journalistic integrity.

Table 1: A Comparative Analysis of Major AI Journalism Scandals

| Publication(s) | Parent Company | Year(s) | Nature of Scandal | Publisher’s Response | Key Consequence |
|---|---|---|---|---|---|
| Elle Belgium, Marie Claire | Ventures Media | 2025 | Undisclosed AI articles; fake journalists with AI-generated profiles | Claimed it was a “test”; added disclaimers after exposure | Public backlash; loss of credibility |
| Sports Illustrated | The Arena Group | 2023 | Fake AI-generated authors and articles for product reviews | Blamed third-party vendor (AdVon); ended partnership | CEO fired; massive reputational damage |
| CNET | Red Ventures | 2022–2023 | Undisclosed AI articles with major factual errors and plagiarism | Paused AI use; issued corrections on >50% of articles | Demoted to “unreliable source” by Wikipedia; staff unionized |
| Gannett Newspapers | Gannett | 2023 | “Abysmal” AI-generated high school sports recaps with errors | “Temporarily” paused the experiment; corrected articles | Widespread ridicule; highlighted risks of unchecked automation |
| BuzzFeed | BuzzFeed, Inc. | 2023 | Shifted from creative AI quizzes to bland, SEO-driven AI articles | Framed as an “experiment” to enhance creativity | Criticism for producing low-quality “content-mill” articles |

Next up in Part 3: These scandals all share common DNA, rooted not just in bad decisions, but in the fundamental flaws of the technology itself. In our next post, we’ll break down the anatomy of why AI fails at journalism.
