Wikipedia Editors Will Quickly Remove AI-Generated Articles

Wikipedia has adopted new rules to address a growing flood of AI-generated articles. Under the policy, administrators can now swiftly remove such articles—without the usual week-long debate—if they meet specific criteria.

The encyclopedia is maintained by a global community of volunteers who often spend days debating both edits and the broader rules governing them. Deleting an article typically involves a community discussion lasting up to a week, aimed at reaching consensus.

For clear violations of Wikipedia’s rules, an expedited deletion process already exists: one editor flags the article, an administrator verifies the violation, and the article is removed immediately—no discussion required. This fast-track process is generally used for pages filled with gibberish or overtly promotional material. By contrast, subjective judgments—such as whether a topic is “probably not noteworthy”—require full debate.

The problem, according to editors, is that most AI-generated articles flagged today fall into this second, subjective category. Without definitive proof that an article was produced by AI, removal has often stalled.

That’s where the new rules come in.

Why Previous AI Rules Failed

Ilyas Lebleu, co-founder of the WikiProject AI Cleanup initiative and a lead author of the new policy, says earlier attempts to regulate AI content faltered for exactly this reason.

"While it’s easy to spot signs that text may be AI-generated—phrasing choices, overuse of dashes, bullet-point lists with bold headers—it’s rarely conclusive," Lebleu told 404 Media. "We don’t want to delete something just because it looks AI-written. But AI can flood Wikipedia with low-quality text far faster than humans can review it, and our processes weren’t built for that scale. Of course, humans can write bad content too—just not this much of it, this quickly."

The Two Criteria for Expedited AI Article Deletion

Under the new policy, administrators can fast-track deletion if an article's AI origin is obvious and it meets at least one of two criteria:

1. User-Directed Phrases
Phrases such as "Here is your Wikipedia article…" or "As a large language model…" are dead giveaways of LLM-generated text. Such markers have long been used to flag AI content on social media and in academic writing.

Lebleu notes that when such phrases are present, it’s clear the contributor didn’t review or edit the output:

"If a person misses such basic things, it’s safe to assume they just copied ‘white noise’ without checking facts."

2. Obviously Fake Citations
Fabricated references are another hallmark of LLM output: citations to non-existent books or research papers, or to sources entirely irrelevant to the topic. The new policy gives a simple example:

"If a computer science article cites a study about beetle species, that’s grounds for deletion."
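In spirit, the first criterion amounts to scanning submitted text for telltale chatbot boilerplate addressed to the user. A minimal sketch of that idea in Python (the phrase list and function name are illustrative assumptions, not Wikipedia's actual tooling or wording):

```python
# Hypothetical illustration of the "user-directed phrases" criterion.
# The phrase list is an assumption for demonstration, not an official one.
TELLTALE_PHRASES = [
    "here is your wikipedia article",
    "as a large language model",
    "as an ai language model",
    "i hope this helps",
]

def has_llm_boilerplate(text: str) -> bool:
    """Return True if the text contains a known user-directed LLM phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)
```

In practice, an editor who spots such a phrase flags the article and an administrator deletes it without a week-long discussion; a check like this only surfaces the most blatant cases, which is exactly the policy's stated scope.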

A Patch, Not a Permanent Fix

Lebleu describes expedited deletion as a "patch" rather than a complete solution. It addresses the most blatant cases, but AI-generated content will continue slipping through if it doesn’t meet the narrow criteria.

He stresses that AI could eventually become a constructive tool for Wikipedia, but for now, the focus is on immediate threats:

"Speculating about future tech only distracts from today’s problems. One of Wikipedia’s core principles is flexibility, so decisions we make now can—and probably will—be revisited."

A Clearer Stance on AI Content

For Lebleu, the biggest win isn’t just faster deletion—it’s clarity:

"Before, the community was split on how to handle AI-written articles, and earlier policy drafts went nowhere. Now, building on past success in managing AI-generated images and drafts, we have guidelines that send a strong message: unverified AI content runs counter to Wikipedia’s spirit."
