Wikipedia Draws a Hard Line Against AI Slop
The vote wasn't even close. On March 20, Wikipedia's English-language editor community voted 40 to 2 to ban the use of large language models for generating or rewriting encyclopedia articles. For an open, consensus-driven platform famous for its messy internal debates, that level of agreement is practically a landslide — and it signals something important about where the AI content experiment is heading.
The new policy is direct: LLM-generated or LLM-rewritten article content is prohibited. Wikipedia's reasoning cuts right to the credibility problem. As the policy language notes, AI-generated text "often violates several of Wikipedia's core content policies" — not occasionally, not in edge cases, but often enough to be a systemic threat to the encyclopedia's reliability.
The ban does carve out two specific exceptions. Editors can use LLMs for minor copyediting of their own existing writing, as long as a human reviews the suggestions and the model doesn't inject new content. Translation assistance is also permitted, but only when the editor is already fluent enough in both languages to verify accuracy, in keeping with the existing rules on LLM-assisted translation. These aren't loopholes; they're carefully scoped use cases where the human remains firmly in control of substance.
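To make the "no new content" constraint concrete, here is a minimal sketch of the kind of guard a copyediting tool could run before surfacing an LLM suggestion to a human reviewer. The function name, the word-level heuristic, and the max_new_words threshold are all illustrative assumptions, not anything drawn from Wikipedia's policy or tooling.

```python
import difflib

def is_copyedit_only(original: str, suggestion: str, max_new_words: int = 3) -> bool:
    """Heuristic gate (hypothetical): accept an LLM suggestion only if it
    rewords the existing text rather than adding substantive new material.

    Runs a word-level diff; any inserted or replacement run longer than
    max_new_words is treated as new content and rejected.
    """
    matcher = difflib.SequenceMatcher(a=original.split(), b=suggestion.split())
    for tag, _i1, _i2, j1, j2 in matcher.get_opcodes():
        if tag in ("insert", "replace") and (j2 - j1) > max_new_words:
            return False  # long inserted span: new content, not a copyedit
    return True
```

A check like this is only a first filter; it can't tell a faithful rewording from a subtle fabrication, which is exactly why the policy still requires a human to review every suggestion.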
What pushed editors toward a hard ban wasn't a single incident but a slow accumulation of frustration. Ilyas Lebleu, the editor who proposed the new guideline, noted that what once seemed like cautious optimism among holdouts had curdled into "genuine worry." Administrative reports about LLM-related content problems were mounting, and volunteer editors — who already give their time for free — were being buried in cleanup work they hadn't signed up for.
This wasn't a top-down corporate decision. The Wikimedia Foundation, the nonprofit that owns the platform, had actually tried deploying AI-generated article summaries earlier, only to face a community revolt that killed the experiment. The editors aren't against technology in principle; they're against technology that makes their job harder while degrading the product.

The timing matters, too. Wikipedia has spent the last year fighting on multiple fronts against AI encroachment. In January, the foundation struck licensing deals with major AI companies — Amazon, Microsoft, Meta, Perplexity among them — to recoup server costs from years of those companies training their models on Wikipedia's corpus without compensation. Now, having extracted some financial acknowledgment that Wikipedia's data has value, editors are reinforcing the wall: you can license our past work, but you don't get to flood our future work with your outputs.
Jimmy Wales, who co-founded Wikipedia in 2001, has been diplomatically skeptical about AI's role in the encyclopedia. He told the BBC last year that current models are "nowhere near good enough" by Wikipedia's standards, stopping short of endorsing a permanent ban but making his lack of enthusiasm plain. The editor community has now moved faster and more decisively than Wales himself signaled.
The background context here is critical. ChatGPT reportedly surpassed Wikipedia in monthly web visits last year — a remarkable milestone that illustrates just how much AI chat interfaces have disrupted the "look it up" behavior that made Wikipedia a default stop for hundreds of millions of people. AI companies have effectively eaten Wikipedia's traffic while training on its content. The encyclopedia's editor community is now saying, clearly: we are not going to help accelerate that by letting AI write our articles too.
The contrast with Elon Musk's "Grokipedia" — a Grok-powered AI alternative to Wikipedia — couldn't be starker. That platform has already drawn ridicule for outputs including uncritical citations to extremist sources and what critics described as promotional content about Musk's own products. It is, in microcosm, exactly what Wikipedia's editors were afraid of: a knowledge-shaped object with no meaningful editorial accountability.
The ban also puts pressure on the Wikimedia Foundation to fund human editing infrastructure more aggressively, because the policy only holds if there are enough volunteer editors to maintain 7.1 million English-language articles without AI shortcuts. It raises the cost of contribution, intentionally, and the foundation will need to support the community that makes it work.
What This Means
Wikipedia's AI ban is more than a policy memo from a nonprofit encyclopedia. It's a data point in a larger argument about what "reliable information" means in an era when generating plausible-sounding text is trivially cheap.
- For developers: If you're building tools that touch knowledge curation, citation, or factual content, Wikipedia's decision is a case study in where AI assistance ends and AI liability begins. Copyediting tools with strong human-in-the-loop design are in; autonomous content generation is out. A minimal sketch of that pattern follows this list.
- For founders: Any product positioning itself as a knowledge or research layer needs to take Wikipedia's hallucination concerns seriously. "AI-assisted" and "AI-generated" are not interchangeable, and users — and communities — are starting to draw that line explicitly.
- For AI labs: The largest collaboratively edited knowledge base on the internet just said your outputs aren't trustworthy enough to publish under its brand. That's not a PR problem; it's a quality benchmark you haven't met yet. Wikipedia's standards — verifiability, neutrality, source support — are a useful rubric for what "good enough for factual claims" actually requires.
- For the broader web: Wikipedia has long served as a backstop against misinformation precisely because it has human editorial accountability. If that backstop holds, it becomes more valuable as AI-generated content floods the rest of the internet. If it erodes, there are few comparable institutions left to fill the gap.
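For developers weighing that line, here is a rough sketch of what "strong human-in-the-loop design" can look like in practice: the model only proposes, the editor sees a diff, and nothing ships without explicit approval. The function names and the get_llm_suggestion stub are hypothetical placeholders, not a reference to any real product or API.

```python
import difflib

def get_llm_suggestion(text: str) -> str:
    """Hypothetical stub: call whatever model produces copyedit suggestions."""
    raise NotImplementedError

def review_and_apply(original: str) -> str:
    """Human-in-the-loop copyedit: show the editor a diff and require
    explicit approval before any suggestion is accepted."""
    suggestion = get_llm_suggestion(original)
    diff = difflib.unified_diff(
        original.splitlines(), suggestion.splitlines(),
        fromfile="original", tofile="suggested", lineterm="",
    )
    print("\n".join(diff))
    if input("Apply this suggestion? [y/N] ").strip().lower() == "y":
        return suggestion
    return original  # default to the human's own text
```

The design choice that matters is the default: rejecting costs nothing and approval is an affirmative act, which keeps accountability with the human rather than the model.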
The 40-2 vote is a signal worth taking seriously. Wikipedia's editor community has more experience than almost any other group on the internet in managing contested, high-stakes factual content at scale. When they look at what LLMs produce and vote overwhelmingly that it isn't good enough, that judgment deserves weight — not as a final verdict on AI, but as honest feedback from people who know exactly what reliable information requires.