arXiv is a preprint server. It’s a place for academics to post papers before they reach the peer-reviewed journals — if ever. It’s where you get news out quickly. Science runs on the arXiv.
A paper on arXiv carries the formal weight of a blog post. Ideally, it's a serious paper announcing a result. Some papers are just marketing, especially in AI. We've covered a few.
So obviously, arXiv gets a ton of spammy nonsense. But the chatbots have kicked the flood of spam into high gear.
The rule isn’t listed on the arXiv site as yet. But Tom Dietterich, who is chair of the Computer Science section on arXiv, posted on Bluesky on Wednesday and at full length on Twitter yesterday: [Bluesky; Twitter, thread]
Attention @arxiv authors: Our Code of Conduct states that by signing your name as an author of a paper, each author takes full responsibility for all its contents, irrespective of how the contents were generated.
If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s).
… The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue.
Every author named as writing a paper bears full responsibility for the paper.
You can still use a chatbot for your text. But if you leave in hallucinated references or chatbot artifacts, you’re out — because it’s smoking-gun evidence you didn’t read the paper your name’s on, and you’re just spamming everyone else’s time.
Dalmeet Singh Chawla, a science journalist, spoke to Dietterich, who says this has been effective policy at the arXiv for a while: [LinkedIn]
Yes, he told me earlier today: “We have been imposing penalties for AI slop (and many other forms of scientific misconduct) as violations of our Code of Conduct for quite some time. We are publicizing this now in an attempt to deter this behavior.”
arXiv isn’t banning authors for minor errors. The banning standard is leaving in blatant slop: [Bluesky]
Examples of incontrovertible evidence: hallucinated references (not just minor errors), meta-comments from the LLM (“here is a 200 word summary; would you like me to make any changes?”; “the data in this table is illustrative, fill it in with the real numbers from your experiments”).
And you can tell those are real examples Dietterich is raising.
There’s an appeal process: [404, archive]
Dietterich told me in an email on Friday morning that this is a one-strike rule — meaning authors caught just once including AI slop in submissions will be banned — but that decisions will be open to appeal. “I want to emphasize that we only apply this to cases of incontrovertible evidence,” he said. “I should also add that our internal process requires first a moderator to document the problem and then for the Section Chair to confirm before imposing the penalty.”
Chatbot spam is a plague on science. Springer Nature has a bad habit of publishing books full of chatbot artifacts.
Machine learning is full of chatbots because the chatbot vendors are where the money comes from. arXiv already had to tighten up rules on machine learning papers in October because of AI slop. [arXiv]
The Association for Computational Linguistics just had to reject a pile of papers it had already accepted for the ACL 2026 conference when their references turned out to be chatbot fakes. [ACL]
The response to the arXiv chatbot penalty has so far been wild cheering from most of academia — and a lot of whining from AI bros who cannot conceive of a world where they aren’t writing scientific papers with the slop machine.
The fun part is the consequences. Science is collaborative, and a lot of the paper writing is delegated. There’s a principal investigator who runs the lab, gets the grants in, and is the last named author on most of the papers. The PI will delegate work to a postdoc or to a graduate student — who sometimes then delegates the work to a chatbot.
Luca Ambrogioni is an assistant professor of machine learning and a principal investigator at the Generative Memory Lab at Radboud University. Ambrogioni describes himself in his Twitter profile as an “AI realist”. He foresees disaster: [Twitter, archive]
I am quite convinced that, under these arxive guidelines, every single major PI in the field will be banned within a few years.
A hit dog hollers. Just how much blatant slop — to the standards Dietterich lists — has Ambrogioni already put his name on?
You’d think Ambrogioni could just not write scientific papers with a chatbot, and he could enforce a rule in the lab he runs against writing papers with a chatbot. But that’s apparently not an option.
The AI bros are, of course, already selling chatbot tools they claim will fix the chatbot breakage. If that trick worked, the AI vendors would have cured chatbot hallucinations with it already.
The only unfortunate part of the arXiv chatbot penalty is how Dietterich and his team are processing candidates — they’re initially screening papers for slop with a chatbot: [Bluesky]
We rely on some standard LLM detectors to focus our attention on papers that need to be checked.
Eww. That said, the criterion for a ban is a human moderator seeing blatant chatbot spew in what's supposed to be a serious scientific paper. So you're not getting banned just by a bot. Yet.
And quite rightly so.