Is AI Reinforcing Gender Inequality?

Summary
– Laura Bates founded The Everyday Sexism Project in 2012 to document and combat sexism, misogyny, and gendered violence globally.
– Her new book addresses how AI and emerging technologies are enabling new forms of misogyny, including deepfake pornography.
– AI is making abusive content creation cheap and accessible, allowing anyone to generate realistic pornographic images of women from clothed photos.
– Bates argues that AI reflects and intensifies existing societal biases and misogyny, posing a new frontier for the subjugation of women.
– She warns that without urgent regulation, AI technologies risk perpetuating harassment and patriarchal control, echoing concerns from within tech companies.

The rapid expansion of artificial intelligence brings with it a troubling resurgence of age-old prejudices, particularly those targeting women. AI systems are not created in a vacuum; they absorb and amplify the biases present in society, often reintroducing misogyny in disturbingly innovative forms. What begins as a neutral tool can quickly be weaponized, deepening inequalities rather than resolving them.
During her time as a nanny, Laura Bates observed how young girls internalized societal pressures about their appearance, influenced heavily by targeted marketing. This experience led her to establish The Everyday Sexism Project in 2012, a platform dedicated to exposing and challenging sexism in everyday life. The initiative grew into a published book by 2014, documenting subtle and overt forms of gender-based discrimination.
More recently, Bates has turned her attention to how digital abuse is evolving. After becoming a victim of deepfake pornography herself, she recognized that emerging technologies were opening new avenues for harassment. Her latest work, The New Age of Sexism, argues that artificial intelligence is becoming a powerful tool for perpetuating gender-based violence. She emphasizes that while most abuse still comes from people known to the victim, AI is dramatically lowering the barrier to entry for creating harmful content. With just an internet connection and a clothed photo, anyone can generate hyper-realistic abusive imagery targeting women and girls.
Through interviews with both technology developers and survivors, Bates illustrates how AI can intensify existing patterns of misogyny. She warns that without urgent regulation, these tools risk entrenching patriarchal control under the guise of technological progress. Despite concerns that her critiques might be dismissed as alarmist, Bates points out that even prominent figures within tech companies share her apprehensions. She cites the departure of Jan Leike from OpenAI as evidence that internal voices are also raising alarms about the prioritization of innovation over safety.
Bates further highlights how AI-powered applications like virtual girlfriends and assistants can normalize misogynistic attitudes among young users. She also draws attention to the environmental costs of AI, which often disproportionately affect women. Her central argument remains clear: technology does not operate independently of human bias. Instead, it reflects and magnifies the prejudices of its creators, giving historical forms of discrimination new potency and reach.
When asked whether new technologies inevitably devolve into platforms for misogyny, Bates affirms that this is a well-established pattern. From the early internet to social media and online pornography, each wave of innovation has been exploited to harass, abuse, and control women. What makes AI uniquely dangerous is its ability not only to replicate existing abuses but to intensify them, introducing unprecedented methods of threat and manipulation. This isn't merely a repetition of old problems; it's an escalation, enabled by tools that learn from and reinforce the worst aspects of human behavior.
(Source: Wired)