Ghost in the Machine, Part 1: The Tip of the Iceberg
When Elle Lost Its Face

Summary
– The series “Ghost in the Machine” investigates AI’s hidden role in journalism and the media’s trust crisis, starting with a scandal involving Belgian magazines.
– Elle, Marie Claire, and Psychologies published AI-generated content under fake author personas, like “Sophie Vermeulen,” whose profile used a computer-generated face.
– Over half of Elle Belgium’s articles in three months were AI-generated, with similar practices at Psychologies and Marie Claire under fabricated expert identities.
– Ventures Media initially defended the practice as a “test” but later removed fake profiles and added disclaimers, damaging reader trust irreparably.
– The scandal highlights a broader trend of media deception using AI, risking credibility, with more cases to be explored in the series’ next installment.
This is the first installment in our six-part series, “Ghost in the Machine,” which explores the hidden use of artificial intelligence in journalism and the media’s growing trust crisis.
In the summer of 2025, a troubling story emerged from the heart of European fashion journalism, one that had little to do with hemlines or seasonal palettes and everything to do with the eroding foundation of trust between a publication and its readers. The Belgian editions of globally recognized magazines Elle, Marie Claire, and Psychologies were caught in a journalistic scandal of a distinctly 21st-century variety. An investigation by the Flemish public broadcaster VRT NWS revealed that the magazines’ parent company, Ventures Media, had been systematically publishing vast quantities of content generated by artificial intelligence, all under the guise of human authorship.
At the center of the deception was a cast of phantom journalists, the most prolific of whom was a certain “Sophie Vermeulen.” Credited with an astonishing 403 articles on Elle’s website in just the first half of the year, Vermeulen appeared to be a powerhouse of productivity. The only problem was, she didn’t exist. Her profile picture was a computer-generated face, plucked from the website ‘This Person Does Not Exist’. Her email address was a dead end. Sophie Vermeulen was a ghost in the machine, a digital mask created to give a human face to automated content production.
The scale of this operation was staggering. The VRT NWS investigation, detailed in its podcast ‘Het uur van de Waarheid’ (The Hour of Truth), found that over a three-month period, more than half of all articles published on Elle Belgium’s website were partially or fully generated by AI. The practice extended across Ventures Media’s portfolio. At Psychologies magazine, 44 of 46 online articles in June were attributed to a fake persona named “Femke,” who was falsely presented as a psychologist. At Marie Claire, another fictitious author, “Claire De Wilde,” was a frequent contributor. These were not isolated articles; this was an industrial-scale replacement of human journalism with automated text, deliberately concealed from the public.
When confronted with the evidence, Ventures Media’s response followed a now-familiar script in the world of corporate AI missteps. Initially, the company claimed on its site that the articles were “proofread and edited by the editorial team,” a statement that was quietly scrubbed after the investigation began. The publisher then framed the entire affair as a “test” conducted by its technology team, acknowledging it had made a “mistake.” This characterization, however, strains credulity. A “test” implies a limited, controlled study. Publishing hundreds of articles to a mass audience under fake names is not a test; it is a full-scale operational strategy. This “experiment” defense is a public relations tactic, a way to reframe a calculated decision to deceive readers as a harmless, isolated trial. It reveals a profound misunderstanding, or perhaps a willful disregard, of the fact that in journalism, unlike in software development, public trust is not a feature that can be broken and then patched in the next update.
In the aftermath, Ventures Media deleted the fake profiles and retroactively applied a disclaimer to the bottom of the affected articles: “This content was generated with the help of AI.” But the damage was done. The Elle Belgium scandal was not the first of its kind, nor will it be the last. It is, however, the latest and one of the most brazen examples of a growing pattern of deception in the media industry, where the promise of AI-driven efficiency is leading venerable brands down a perilous path, risking the one asset they cannot afford to lose: their credibility.
Next up in Part 2: The story of “Sophie Vermeulen” is shocking, but it isn’t an isolated incident. In our next post, we’ll explore the pattern of deception that connects Sports Illustrated, CNET, and other major brands in the AI trust crisis.
