
Artificial intelligence is writing our emails, creating our content, and increasingly becoming part of daily life in ways that would have seemed far-fetched just a few years ago.
But not everyone is rolling out the welcome mat, and Wikipedia is the latest to draw a line in the sand.
Concerns about AI-generated content have been growing steadily, with many people finding it difficult to tell the difference between human-written content and content churned out by a machine.
For a platform built entirely on the idea of reliable, community-driven knowledge, that's a bit of a problem.

Wikipedia has been wrestling with how to handle large language models (LLMs) for some time now. Under its policy, Wikipedia editors are 'prohibited' from using AI tools to 'generate or rewrite article content' - with two specific exceptions.
"Prior proposals for an immediate, all-encompassing community guideline on LLMs have failed due to the standard issues of addressing complex, large-scale issues at once: people, even those who broadly agreed with the goals of said proposals, found specific issues with certain parts of it and critiques that it was too vague/specific," explained Wikipedia administrator Chaotic Enby in the original proposal (via How2Geek).
"Consensus has existed on the idea of change, but not on the implementation of change.”
The first exception allows editors to use LLMs to tidy up their own writing, much like a grammar checker or writing assistant, provided they check the edits for accuracy.
“Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited," the policy states.

The second exception covers translation. Editors can use AI tools to produce an initial draft translation, but they must be sufficiently fluent in both languages to identify and correct any mistakes before the translation is published.
It's worth noting that this policy applies specifically to the English-language version of Wikipedia.
Enforcement is another matter: AI detectors still lag behind the evolution of AI-written content. Wikipedia has published guidance on spotting LLM-generated writing, though the policy itself acknowledges the complication that 'some editors may have similar writing styles to LLMs.' With no reliable way to tell the two apart, it's feeding into a growing trend of AI paranoia, where readers are increasingly second-guessing whether what they're reading was written by a person at all.
After the policy was shared on Reddit, many users agreed that AI should be limited to specific uses on Wikipedia.
"That's very reasonable. If anything those are two usecases where LLMs are actually very effective at and don't hallucinate out of control," one user commented.
"Wikipedia, as humble as it is, truly represents the best of humanity. It is the combined effort of millions of people attempting to explain and catalog the world for each other for no profit other than belief in the power of knowledge," another wrote.
"Honestly, I respect it. Wikipedia only works if humans can verify sources and write clean, neutral summaries," a third user admitted.