
Wikipedia Bans Articles Created by AI

Originally published on: March 26, 2026
Summary

– Wikipedia has banned editors from writing or rewriting articles using AI, citing violations of core content policies.
– The policy still permits limited AI use, such as for suggesting basic copyedits that do not add new content.
– Editors may use AI to translate articles, but they must have sufficient language knowledge to verify accuracy.
– The guidelines state that editors should base restrictions on policy compliance and edit history, not just writing style.
– This update followed community discussions and initiatives to combat AI-generated content, passing with overwhelming support.

The online encyclopedia Wikipedia has formally prohibited the use of artificial intelligence to generate or substantially rewrite articles. This significant policy update, enacted for the English version of the site, directly addresses concerns that AI-generated content frequently violates the platform’s fundamental principles. The guidelines now explicitly state that such material often breaks Wikipedia’s core content policies, necessitating a clear ban to protect the integrity of the collaborative project.

While the new rule establishes a firm boundary, it does permit limited AI assistance in specific, controlled contexts. Editors may utilize large language models to propose minor grammatical or stylistic corrections, provided the tool does not invent or add substantive information. Another allowed use is for translating articles from other language editions of Wikipedia, though with a critical safeguard. Editors employing AI translation tools must possess sufficient fluency in the source language to personally verify the accuracy of the output, ensuring the final text meets the site’s rigorous standards.

The policy acknowledges the challenge of detection, noting that some human writers may naturally produce text that resembles AI writing styles. Consequently, editors are instructed not to rely solely on linguistic patterns when investigating potential violations. Instead, the focus should be on evaluating whether the content itself adheres to Wikipedia’s policies and reviewing the contributing editor’s recent history of changes.

This official guideline follows months of escalating community action against so-called "AI slop." Wikipedia's volunteer editors have been grappling with an influx of poorly constructed, machine-generated articles, leading to earlier measures such as a speedy deletion policy for obviously substandard entries. A dedicated group also formed WikiProject AI Cleanup, an initiative focused on identifying and removing such content while educating others on how to spot it.

The latest policy change originated from a proposal by a user known as Chaotic Enby, which ignited extensive debate among the editor community. The measure ultimately received overwhelming support, with the consensus aiming to curb the most egregious problems associated with LLM use on Wikipedia while still allowing for its responsible application in narrow, utility-focused scenarios.

(Source: The Verge)
