
Watching people outsource their critical thinking, emotions, and sanity to glitchy “AI” chatbots has been one of the most uniquely terrifying aspects of being a human being in recent years.
While wealthy tech evangelists like Sam Altman continue to make wild proclamations about how large language models (LLMs) are destined to do our jobs and raise our children, critics have compared Silicon Valley’s attempts to force dependence on chatbots to a mass-enfeebling event—an attempt to convince people that they are actually better off having machines think, act, and create for them.
Now, there’s a new way to discourage friends, family, and even complete strangers from turning to chatbots like Claude and ChatGPT: a tool called “Slow LLM” that makes them really, reaaaaalllyyy slowwwww. Or at least, makes them look that way.
“Are you concerned that you or your loved ones might be participating in a massive de-skilling event? Experiencing LLM-induced psychosis? Outsourcing cognitive and emotional functions to autocomplete? Install SLOW LLM on your computer, or the computer of a loved one, today!” reads a description on the tool’s website.
Created by artist Sam Lavigne, Slow LLM causes anyone accessing AI chatbots on a computer or network to encounter mysterious, painfully slow response times. It works by overriding JavaScript’s built-in “fetch” function, which web pages use to request data from servers. When a user visits a chatbot domain and enters a query, the modified function drips the response out over an excruciatingly long period of time. The result is that the user perceives the LLM as running slowly, when in reality it’s simply being arbitrarily metered by Lavigne’s code.
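The technique described above can be sketched in a few lines of JavaScript. This is not Lavigne’s actual code — the domain list and 500-millisecond delay here are illustrative assumptions — but it shows the general idea: replace the global fetch with a wrapper that re-streams the response one chunk at a time, pausing before each chunk.

```javascript
// Minimal sketch of the fetch-throttling idea (not Lavigne's implementation).
// Wrap a Response so its body trickles out one chunk at a time.
function throttleResponse(response, delayMs) {
  const reader = response.body.getReader();
  const slowBody = new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) return controller.close();
      // Artificial pause before each chunk reaches the page.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      controller.enqueue(value);
    },
  });
  return new Response(slowBody, {
    status: response.status,
    headers: response.headers,
  });
}

// Monkey-patch the global fetch so only chatbot domains get the slow
// treatment. The domain list and 500 ms delay are illustrative guesses.
const realFetch = globalThis.fetch;
const chatDomains = ["chat.openai.com", "claude.ai"];

globalThis.fetch = async (input, init) => {
  const url = typeof input === "string" ? input : input.url;
  const response = await realFetch(input, init);
  if (response.body && chatDomains.some((d) => url.includes(d))) {
    return throttleResponse(response, 500);
  }
  return response; // everything else passes through untouched
};
```

Because chatbot interfaces stream their answers token by token, metering the stream rather than delaying the whole response makes the model itself appear to be generating text at a crawl.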
Lavigne says that the idea for the project came after seeing how deeply some of his students and acquaintances had come to rely on generative tools to do basic tasks.
“So many people are starting to use these tools to outsource their cognitive and emotional functions, and in the process of doing this they’re forgetting all these basic things that they’ve learned how to do,” Lavigne told 404 Media. “I think that the more people rely on LLMs, the more extreme this de-skilling event will become.”
Slow LLM can be installed as a Chrome browser extension, but it can also be deployed network-wide via an “Enterprise Edition,” a DNS service that causes everyone on a home, school, or corporate network to experience slow chatbot responses. This is done by simply pointing your router’s DNS settings at Lavigne’s custom server—though he warns that using a random person’s DNS is generally not a great idea cybersecurity-wise, and recommends the safer option of hosting your own DNS server to deploy the Slow LLM code, which he has released for free on GitHub. The browser extension currently only affects Claude and ChatGPT, while the DNS version also slows down Grok and Google Gemini.
“The idea was that these things are removing friction, so let’s add some friction back in,” said Lavigne, borrowing the engineering term tech bros frequently use to describe inefficiencies in a system. He argues that LLM chatbots have taken this idea of “friction” to an extreme, presenting any unpleasantness or difficulty we encounter as something that should be outsourced to Silicon Valley’s thinking machines—even if overcoming that difficulty is part of what makes human creativity meaningful and worthwhile. “Anything that removes the friction of something that’s difficult, it makes you not learn, and it removes the learning you’ve already achieved.”
In theory, one could activate Slow LLM without anyone noticing; most people would likely assume that chatbot providers like Google and OpenAI are having technical issues, which does happen from time to time without any outside interference. Lavigne says that so far, he hasn’t heard from anyone who has successfully deployed Slow LLM on a work or school network. But he certainly isn’t discouraging people from trying.
“I have not yet tested it on any unwitting subjects, but I’m thinking about it,” Lavigne said in a mischievous tone, adding that it would be an interesting experiment to see how people react when presented with artificially slow chatbots. “Maybe they’ll just rage-quit LLMs.”
Slow LLM is the latest addition to a series of impish tech provocations that Lavigne has become known for. During the height of the pandemic Zoompocalypse in 2021, he released “Zoom Escaper,” a tool that floods your Zoom audio stream with annoying echoes, distortions, and interruptions until your presence becomes unbearable to others. In 2018, he infamously scraped public LinkedIn profiles to build a massive database of ICE agents, which was subsequently removed from platforms like GitHub and Medium. Lavigne’s frequent collaborator Tega Brain has also released browser tools like “Slop Evader,” which filters out generative AI slop by removing all search results from after November 2022, when ChatGPT was first released to the public.
“I’ve been doing these little experiments in digital sabotage where I’m trying to make these tools that mildly interrupt computational systems,” said Lavigne. “One of the things I’ve been thinking about is how if the means of production is truly in our hands, and it’s also the way we’re communicating with other people and managing our social life, then what does it mean to interrupt productivity?”
Lavigne is not an absolutist, however. Without prompting, he admitted that he used Claude to help write some of the code for Slow LLM—until, of course, Slow LLM started working and forced him to complete the project on his own. Rather than demanding abstinence, Lavigne says he’s trying to make people question the habits they are forming by regularly using chatbots, tools that tempt us to essentially entrust all our knowledge, decision-making, and emotional well-being to massive companies run by tech billionaires like Altman and Elon Musk.
“My hope is to get people to think a little bit more about their usage of these tools,” said Lavigne. “But the broader thing I want people to think about […] is ways of interrupting these flows of data, these flows of power, and putting friction into these computational systems that are mediating so many parts of our lives.”




