A Top Google Search Result for Claude Plugins Was Planted by Hackers

A top result on Google for people searching for Claude plugins sent users to a site that recently contained malicious code in an apparent attempt to steal their credentials. 

The news shows how the explosion of interest in generative AI tools is giving hackers new ways to attack users.

The malicious site was flagged to us by a 404 Media reader who was using Claude. 

“I was googling to troubleshoot how to get my Claude Code CLI to authenticate its github plugin to my Github account and may have stumbled upon a malicious site hosted on Squarespace of all places,” the reader, Dan Foley, told me in an email. 

Foley searched for “github plugin claude code” and the top result was a sponsored ad for a Squarespace site with the title “Install Claude Code - Claude Code Docs.”

When he clicked through, he saw a site that was pretending to be the official site for Anthropic’s Claude with identical design and branding.

The phony Anthropic help site had swapped some of the Claude Code installation instructions for others, Foley pointed out. That included a line users could paste into their terminal to allegedly install the software on a Mac. The command included an obfuscated URL, hiding what its real destination was. When Foley decoded it, he found it downloaded software from another site entirely. 
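404 Media doesn’t say how the URL in the fake install command was encoded; a common trick in malicious “paste this into your terminal” instructions is to base64-encode the real download address so it reads as gibberish rather than a web address. A purely hypothetical sketch of that pattern, and of how to decode such a string before running anything (the domain below is made up):

```javascript
// Hypothetical illustration only: the actual encoding and domain aren't public,
// so this assumes base64 obfuscation and uses a made-up address.
const realDestination = "https://not-anthropic.example/install.sh";

// btoa() base64-encodes the string; dropped into a "curl ... | sh" style
// install command, the result reads as opaque noise rather than a URL.
const obfuscated = btoa(realDestination);
console.log(obfuscated); // "aHR0cHM6Ly9ub3Qt..." -- no hint of the real host

// atob() reverses it. Decoding in a browser console (or recent Node) before
// running anything reveals where the command would actually download from.
console.log(atob(obfuscated)); // "https://not-anthropic.example/install.sh"
```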

ThreatFox, a platform for sharing known instances of malware, has flagged that domain as distributing a “stealer,” a type of malware that steals users’ credentials, as recently as a few days ago.

Google’s ad center listed the advertiser behind the malicious sponsored search result as “Enhancv R&D,” which is based in Bulgaria, according to a screenshot of the advertiser profile Foley shared with 404 Media. The advertiser was also listed as being verified by Google, meaning they had to complete an identity verification process which requires legal documentation of their name and location. 

Foley said he flagged the ad to Google, which removed the site from search results. The URL which pointed to the potential stealer is no longer online. 

“We removed this ad and suspended the account for violating our policies,” a Google spokesperson told me in an email. Google said it has strict policies against ads that aim to phish information or distribute malware, and that it uses a combination of Gemini-powered tools and human review to enforce these policies at scale. Google claims the vast majority of these ads are caught before they ever run. 

Malicious links in paid Google ads that impersonate legitimate websites are not a problem unique to AI. Hackers often try to get users to click malicious links by pretending to be whatever is popular on the internet at any given moment, be it a pirated movie or video game just before release or celebrity sex tapes. The fact that hackers are targeting Claude users reflects the growing popularity of AI tools and the hackers’ hope that users are not careful enough to check what they’re clicking when using them. 

In January, we wrote about how hackers could similarly target users of the AI agent tool OpenClaw by boosting instructions for AI agents that contained a backdoor for hackers.


Judge Allows DOGE Deposition Videos Back Online

On Monday a judge said videos of recent depositions from DOGE members can be published online once again. The ruling is something of an about-face for Judge Colleen McMahon, who originally ordered plaintiffs in the DOGE-related lawsuit to “claw back” the videos they had published to YouTube. The videos were already massively viral at the time of that ruling, in part because they showed DOGE members Justin Fox and Nate Cavanaugh unable or unwilling to define DEI and admitting they used ChatGPT to filter contracts to potentially axe based on words like “Black” and “homosexual” but not “white.” They were also broadly one of the first times the public has directly heard from people inside DOGE.

“This decision validates our position that the publication of the videos, which document a process to destroy knowledge and access to vital public programs, was indeed in the public’s interest,” Joy Connolly, president of the American Council of Learned Societies, said in a statement shared with 404 Media. “We look forward to continuing the pursuit of justice in reclaiming government support for important humanities research, education, and sustainability initiatives.”


Layoffs don’t boost the share price — they drop it

It’s common wisdom that doing a layoff sends your stock price up. But what if that wasn’t true?

Elsie Peng at Goldman Sachs analysed layoffs between July and November 2025 and what the stock price did afterwards. [MarketWatch]

On average, a layoff was followed by the stock price going down 2% over the following two weeks.

But if a company says it’s restructuring, the typical drop isn’t 2% — it’s 7%. So companies have quite the incentive to say it’s AI for the fabulous future. Something positive!

Either way, the markets aren’t buying it. Layoffs really do mean your company is in trouble and your stock should get a price hit:

companies announcing layoffs recently, irrespective of the explanations provided, have experienced higher capex, debt and interest expense growth and lower profit growth than comparable companies within the same industries this year.

Peng’s findings match other sources. Resume.org surveyed 1000 hiring managers: [Resume]

59% admit they emphasize AI when explaining hiring freezes or layoffs because it plays better with stakeholders than citing financial constraints.

And 55% of the hiring managers surveyed expect more layoffs in 2026. Or — as the new euphemism has it — “workforce rebalancing”.

Is it “over-hiring”? No — hiring was way down all through 2025. And before then, people were hired for good reasons at the time. But times are tough. And getting tougher.


This Web Tool Sabotages AI Chatbots By Making Them Really, Really Slow

Watching people outsource their critical thinking, emotions, and sanity to glitchy “AI” chatbots has been one of the most uniquely terrifying aspects of being a human being in recent years. 

While wealthy tech evangelists like Sam Altman continue to make wild proclamations about how large language models (LLMs) are destined to do our jobs and raise our children, critics have compared Silicon Valley’s attempts to force dependence on chatbots to a mass-enfeebling event—an attempt to convince people that they are actually better off having machines think, act, and create for them.

Now, there’s a new way to discourage friends, family, and even complete strangers from turning to chatbots like Claude and ChatGPT: by using a tool called “Slow LLM” to make them really, reaaaaalllyyy slowwwww. Or at least, making them look that way.

“Are you concerned that you or your loved ones might be participating in a massive de-skilling event? Experiencing LLM-induced psychosis? Outsourcing cognitive and emotional functions to autocomplete? Install SLOW LLM on your computer, or the computer of a loved one, today!” reads a description on the tool’s website.

Created by artist Sam Lavigne, Slow LLM causes anyone accessing AI chatbots on a computer or network to encounter mysterious, painfully slow response times. It works by exploiting a quirk of the JavaScript language to rewrite the browser’s built-in fetch function, which web pages use to request data from servers. When a user visits a chatbot domain and enters a query, the modified fetch function stretches the response out over an excruciatingly long period. This results in the user perceiving the LLM to be running slowly, when in reality it’s simply being arbitrarily metered by Lavigne’s code.
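Lavigne’s actual extension code isn’t quoted in the piece, but the general technique it describes (reassigning the browser’s fetch so replies from chatbot domains are re-emitted in tiny, delayed slices) can be sketched roughly as follows; the domain list and timing here are illustrative assumptions, not Slow LLM’s real configuration:

```javascript
// Rough sketch of the general idea, not Lavigne's actual code: wrap
// window.fetch so responses from chatbot domains trickle out slowly.
const realFetch = window.fetch.bind(window);

window.fetch = async (...args) => {
  const response = await realFetch(...args);
  const url = args[0] instanceof Request ? args[0].url : String(args[0]);

  // Only throttle chatbot endpoints (illustrative list); pass everything else through.
  if (!/claude\.ai|chatgpt\.com/.test(url) || !response.body) {
    return response;
  }

  // Re-emit the streamed body in small slices with artificial pauses, so the
  // reply appears to arrive painfully slowly.
  const reader = response.body.getReader();
  const slowBody = new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        controller.close();
        return;
      }
      for (let i = 0; i < value.length; i += 8) {
        controller.enqueue(value.slice(i, i + 8));
        await new Promise((resolve) => setTimeout(resolve, 250)); // the added "friction"
      }
    },
  });

  return new Response(slowBody, {
    status: response.status,
    statusText: response.statusText,
    headers: response.headers,
  });
};
```

Because the page still receives a valid Response object, the chat interface keeps working normally; the text just dribbles in, which is why a user would more likely blame the provider than suspect the extension.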

Lavigne says that the idea for the project came after seeing how deeply some of his students and acquaintances had come to rely on generative tools to do basic tasks.

“So many people are starting to use these tools to outsource their cognitive and emotional functions, and in the process of doing this they’re forgetting all these basic things that they’ve learned how to do,” Lavigne told 404 Media. “I think that the more people rely on LLMs, the more extreme this de-skilling event will become.”

Slow LLM can be installed as a Chrome browser extension, but it can also be deployed network-wide via an “Enterprise Edition,” a DNS service that causes everyone on a home, school, or corporate network to experience slow chatbot responses. This is done by simply changing the DNS server on your router to Lavigne’s custom domain—though he warns that using a random person’s DNS is generally not a great idea cybersecurity-wise, and recommends the safer option of hosting your own DNS server to deploy the Slow LLM code, which he has released for free on GitHub. The browser extension currently only affects Claude and ChatGPT, while the DNS version also slows down Grok and Google Gemini.

“The idea was that these things are removing friction, so let’s add some friction back in,” said Lavigne, invoking the engineering term tech bros frequently use to describe inefficiencies in a system. He argues that LLM chatbots have taken this idea of “friction” to an extreme, presenting any unpleasantness or difficulty we encounter as something that should be outsourced to Silicon Valley’s thinking machines—even if overcoming that difficulty is part of what makes human creativity meaningful and worthwhile. “Anything that removes the friction of something that’s difficult, it makes you not learn, and it removes the learning you’ve already achieved.”

In theory, one could activate Slow LLM without anyone noticing; most people would likely assume that chatbot providers like Google and OpenAI are having technical issues, which does happen without outside interference from time to time. Lavigne says that so far, he hasn’t heard from anyone that has successfully deployed Slow LLM on a work or school network. But he certainly isn’t discouraging people from trying.

“I have not yet tested it on any unwitting subjects, but I’m thinking about it,” Lavigne said in a mischievous tone, adding that it would be an interesting experiment to see how people react when presented with artificially-slow chatbots. “Maybe they’ll just rage-quit LLMs.”

Slow LLM is the latest addition to a series of impish tech provocations that Lavigne has become known for. During the height of the pandemic Zoompocalypse in 2021, he released “Zoom Escaper,” a tool that floods your Zoom audio stream with annoying echoes, distortions, and interruptions until your presence becomes unbearable to others. In 2018, he infamously scraped public LinkedIn profiles to build a massive database of ICE agents, which was subsequently removed from platforms like GitHub and Medium. Lavigne’s frequent collaborator Tega Brain has also released browser tools like “Slop Evader,” which filters out generative AI slop by removing all search results from after November 2022, when ChatGPT was first released to the public.

“I’ve been doing these little experiments in digital sabotage where I’m trying to make these tools that mildly interrupt computational systems,” said Lavigne. “One of the things I’ve been thinking about is how if the means of production is truly in our hands, and it’s also the way we’re communicating with other people and managing our social life, then what does it mean to interrupt productivity?”

Lavigne is not an absolutist, however. Without prompting, he admitted that he used Claude to help write some of the code for Slow LLM—until, of course, Slow LLM started working and forced him to complete the project on his own. Instead, Lavigne says he’s trying to make people question the habits they are forming by regularly using chatbots, tools which tempt us to essentially entrust all our knowledge, decision-making, and emotional well-being to massive companies run by tech billionaires like Altman and Elon Musk.

“My hope is to get people to think a little bit more about their usage of these tools,” said Lavigne. “But the broader thing I want people to think about […] is ways of interrupting these flows of data, these flows of power, and putting friction into these computational systems that are mediating so many parts of our lives.”


The (Gas) Price of Hard Power

Michael Kalus posted a photo:


VPD play pretending they're the military

Michael Kalus posted a photo:
