EU probes xAI over Grok’s deepfake porn scandal

▼ Summary
– The EU has launched a formal investigation into Elon Musk’s xAI over its Grok chatbot spreading non-consensual sexualized deepfakes of women and children.
– The investigation, under the EU’s Digital Services Act, will assess if xAI adequately mitigated risks from deploying Grok on the X platform.
– EU tech chief Henna Virkkunen stated the probe will determine if X met its legal obligations or treated citizens’ rights as collateral damage.
– If found in breach of the rules, xAI could face fines of up to 6% of its global annual turnover.
– An EU official confirmed there will be no interim measures imposed during the course of the investigation.

European Union regulators have initiated a formal investigation into Elon Musk’s artificial intelligence company, xAI, focusing on its Grok chatbot’s role in generating and disseminating non-consensual deepfake pornography. The action follows widespread reports that users exploited Grok to create sexually explicit images of women and children, which were then shared across the X social media platform and the standalone Grok application. The inquiry, launched under the Digital Services Act (DSA), will scrutinize whether xAI adequately assessed and mitigated the risks of its technology and whether it failed to prevent the spread of content that may constitute child sexual abuse material.
The investigation centers on allegations that xAI did not implement sufficient safeguards to prevent the misuse of Grok for creating harmful synthetic media. EU officials will examine the company’s internal policies and technical measures to determine compliance with legal obligations designed to protect users. Henna Virkkunen, the EU’s tech chief, condemned the creation of such deepfakes, calling them a violent and degrading form of abuse that violates fundamental rights. She emphasized that the probe will assess whether the company treated the safety and dignity of European citizens, particularly vulnerable groups, as expendable.
This regulatory move places significant pressure on xAI and its affiliated platforms. Under the DSA’s provisions, companies found in violation face severe financial penalties, including fines that can reach six percent of their global annual revenue. The European Commission has clarified that no temporary restrictions will be imposed on xAI’s services while the investigation is ongoing, allowing the review to proceed without disrupting normal operations. The outcome could set a critical precedent for how AI-generated content and platform accountability are governed within the EU’s digital single market.
The case highlights growing international concern over the rapid proliferation of AI tools capable of producing convincing fake imagery. Lawmakers and advocacy groups argue that without robust, enforceable safeguards, these technologies pose a severe threat to individual privacy and security. The EU’s decision to pursue this investigation signals a firm commitment to enforcing its landmark digital regulations, ensuring that powerful tech firms are held responsible for the societal impacts of their products.
(Source: Ars Technica)
