
Make AI Writing Sound Human with Wikipedia’s AI-Spotting Guide

Originally published on: January 22, 2026
Summary

– A new tool called Humanizer uses Wikipedia’s AI-detection guide to help AI chatbots produce more human-sounding text.
– The tool works by identifying and removing common AI writing tells, such as vague attributions and promotional language.
– Humanizer is a custom skill for the Claude AI model, designed to make generated text sound more natural and avoid detection.
– The tool automatically updates itself when Wikipedia’s guide for spotting AI-generated content is revised.
– AI companies are expected to adjust their own chatbots against these tells, as OpenAI already did for ChatGPT’s overuse of em dashes.

A new software tool leverages Wikipedia’s own guidelines for spotting AI-generated writing to help machine-generated text appear more natural. The tool, named Humanizer, was developed by Siqi Chen. It works by instructing Anthropic’s Claude chatbot to avoid the specific stylistic patterns that Wikipedia editors have flagged as hallmarks of machine-written content. The project highlights the ongoing, cyclical battle between content creation and detection in the age of generative AI.

Wikipedia’s volunteer editors compiled a detailed guide to identify “poorly written AI-generated content.” This guide catalogs common linguistic giveaways, which include vague attributions like “experts say,” excessive promotional language such as calling something “breathtaking,” and overly collaborative phrases like “I hope this helps.” Chen’s Humanizer, implemented as a custom skill for Claude, is designed to scrub these exact indicators from the AI’s output. The goal is to produce text that evades detection by adhering to more natural, human writing patterns.
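Humanizer itself is a prompt-based Claude skill, not a text filter, but the scrubbing idea it encodes can be illustrated with a small rule-based sketch. The patterns and function names below are hypothetical examples modeled on the kinds of tells the article cites (flowery phrasing, vague attributions), not rules taken from Wikipedia’s guide or from Chen’s tool:

```python
import re

# Hypothetical rewrite rules for common AI tells; illustrative only,
# not the actual Humanizer skill or Wikipedia's published list.
TELL_PATTERNS = [
    (re.compile(r"\bnestled within\b", re.IGNORECASE), "in"),
    (re.compile(r"\bbreathtaking\b", re.IGNORECASE), ""),
    (re.compile(r"\bI hope this helps\b\.?", re.IGNORECASE), ""),
]

# Vague attributions can't be auto-fixed (they need a real source),
# so they are flagged for a human or the model to replace.
VAGUE_ATTRIBUTION = re.compile(
    r"\b(experts|observers|critics) (say|believe|note)\b", re.IGNORECASE
)

def flag_tells(text: str) -> list[str]:
    """Return vague attributions that need a concrete citation."""
    return [m.group(0) for m in VAGUE_ATTRIBUTION.finditer(text)]

def scrub(text: str) -> str:
    """Apply the rewrite rules, then collapse leftover whitespace."""
    for pattern, replacement in TELL_PATTERNS:
        text = pattern.sub(replacement, text)
    return re.sub(r"\s{2,}", " ", text).strip()
```

Running `scrub("nestled within the breathtaking region")` yields `"in the region"`, mirroring the kind of before/after pairs shown on the project’s GitHub page; a real implementation done via prompting can handle far more variation than fixed regexes.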

The project’s GitHub page showcases specific examples of the tool in action. In one instance, a flowery description reading “nestled within the breathtaking region” is transformed into the more factual “a town in the Gonder region.” Another example shows a vague claim being corrected: “Experts believe it plays a crucial role” becomes the more concrete “according to a 2019 survey by…” Chen notes that the tool is built to automatically push updates whenever Wikipedia refines its detection guide, ensuring it stays current with the latest spotting criteria.

This development is part of a broader trend where AI companies are already fine-tuning their models to avoid obvious tells. For example, OpenAI has previously adjusted ChatGPT to curb its notorious overuse of the em-dash, a punctuation habit that had become a clear marker of AI authorship. As detection methods evolve, so too do the methods for circumventing them, creating a continuous feedback loop between human editors and the algorithms they seek to identify.

(Source: The Verge)
