AI’s eugenics problem in generative models

Summary
– Director Valerie Veatch was drawn to AI communities but was shocked by the racist and sexist outputs from models like OpenAI’s Sora.
– Her documentary, *Ghost in the Machine*, investigates the historical roots of generative AI instead of its promised future benefits.
– The film traces a direct lineage from Victorian-era eugenics, founded by Francis Galton, to the statistical foundations of modern machine learning.
– Veatch found that AI companies, like OpenAI, dismissed concerns about biased outputs as an unfixable problem rather than a critical bug.
– The documentary argues that the entire AI field is deeply shaped by discriminatory scientific history, which explains its present-day issues.
When OpenAI released its Sora text-to-video model in 2024, filmmaker Valerie Veatch was drawn in by the creative communities forming around it. Her initial curiosity, however, quickly turned to shock. She observed that the generative AI models frequently produced outputs laden with racist and sexist imagery, often without any explicit hateful prompting. What unsettled her further was the indifference many enthusiasts showed toward this systemic bias. The experience not only pushed her away from the technology but also inspired her documentary, *Ghost in the Machine*, which investigates the historical roots of modern artificial intelligence.
The film deliberately avoids the futuristic promises of AI accelerationists. Instead, it provides a clear-eyed historical analysis to demystify why these systems function as they do. Veatch argues that understanding this requires cutting through the industry’s purposeful obfuscation. “In order to use the phrase ‘artificial intelligence,’ we have to know what that phrase means,” she states. “It’s a marketing term and always has been. It’s a completely misleading phrase that has taken on its own cultural meaning.”
*Ghost in the Machine* traces a direct lineage from Victorian-era eugenics to contemporary machine learning. The documentary highlights Francis Galton, a founder of eugenics and cousin to Charles Darwin. Galton's work was steeped in white supremacist beliefs, and his development of multidimensional modeling, initially used to quantify the attractiveness of African versus European women, profoundly influenced his protégé, the statistician Karl Pearson. Pearson's subsequent work on statistical tools such as correlation and regression became a cornerstone of modern machine learning algorithms. The film posits that the racist, quantifiable framework for human difference established by these men helped normalize the idea of the brain as a machine, paving the ideological road for "artificial intelligence."
This historical context helped Veatch interpret her own troubling encounter with Sora. In an artists’ Slack group, a woman of color noted that the model consistently whitewashed her image, preserving her braids and style but placing her likeness in art galleries the AI interpreted as “white spaces.” Veatch’s attempts to discuss this glaring flaw were met with silence in a normally vibrant community. “This was a Slack where there are always dozens of reactions on every post,” she recalled. “But this time, there was nothing.”
Veatch contacted OpenAI directly, reporting how the software generated misogynistic and racist outputs, including distorted female bodies. The company’s dismissive response was a catalyst. “The feedback I got was basically, ‘This is very cringe to be bringing up; there’s nothing we can do to change it,'” she said. This refusal to engage with the problem fueled her investigation, revealing how eugenic thinking is deeply embedded in the technology’s foundations.
The documentary features AI researchers, historians, and critical theorists who argue that the entire field is shaped by its origins in discriminatory science. When asked whether she sought interviews with AI company executives, Veatch was unequivocal: securing that access, she believes, would have required compromises that made the film complicit. "I don't want them in the film and they already speak enough to the media," she said. "Am I going to hug Sam Altman on camera? Is that a truthful film about this technology? That's propaganda."
By examining this fraught history, *Ghost in the Machine* moves beyond the simplistic "garbage in, garbage out" explanation for AI bias. It constructs a compelling argument that the companies building these systems are uninterested in solving their present-day harms precisely because those harms are not bugs, but features inherited from a legacy of race science and quantified prejudice.
(Source: The Verge)