
This Is How Much Anthropic and Cursor Spend On Amazon Web Services


So, I originally planned for this to be on my premium newsletter, but decided it was better to publish on my free one so that you could all enjoy it. If you liked it, please consider subscribing to support my work. Here’s $10 off the first year of annual.

I’ve also recorded an episode about this on my podcast Better Offline (RSS feed, Apple, Spotify, iHeartRadio). It’s a little different, but both cover the same information. Just subscribe and it’ll pop up.


Over the last two years I have written again and again about the ruinous costs of running generative AI services, and today I’m coming to you with real proof.

Based on discussions with sources with direct knowledge of their AWS billing, I am able to disclose the amounts that AI firms are spending, specifically Anthropic and AI coding company Cursor, its largest customer.

I can exclusively reveal today Anthropic’s spending on Amazon Web Services for the entirety of 2024, and for every month in 2025 through September, and that Anthropic’s spend on compute far exceeds what was previously reported. 

Furthermore, I can confirm that through September, Anthropic has spent more than 100% of its estimated revenue (based on reporting in the last year) on Amazon Web Services, spending $2.66 billion on compute on an estimated $2.55 billion in revenue.

Additionally, Cursor’s Amazon Web Services bills more than doubled from $6.2 million in May 2025 to $12.6 million in June 2025, exacerbating a cash crunch that started when Anthropic introduced Priority Service Tiers, an aggressive rent-seeking measure that began what I call the Subprime AI Crisis, in which model providers start jacking up the prices on their previously subsidized rates.

Although Cursor obtains the majority of its compute from Anthropic — with AWS contributing a relatively small amount, and likely also taking care of other parts of its business — the data I’ve seen reveals an overall direction of travel: the costs of compute only keep going up.

Let’s get to it.

Some Initial Important Details

  • I do not have all the answers! I am going to do my best to go through the information I’ve obtained and give you a thorough review and analysis. This information provides a revealing — though incomplete — insight into the costs of running Anthropic and Cursor, but does not include other costs, like salaries and compute obtained from other providers. I cannot tell you (and do not have insight into) Anthropic’s actual private moves. Any conclusions or speculation I make in this article will be based on my interpretations of the information I’ve received, as well as other publicly-available information.
  • I have used estimates of Anthropic’s revenue based on reporting across the last ten months. Any estimates I make are spelled out where they appear, and they are brief. 
  • These costs are inclusive of every product bought on Amazon Web Services, including EC2, storage and database services (as well as literally everything else they pay for).
  • Anthropic works with both Amazon Web Services and Google Cloud for compute. I do not have any information about its Google Cloud spend.
    • The reason I bring this up is that Anthropic’s revenue is already being eaten up by its AWS spend. It’s likely billions more in the hole from Google Cloud and other operational expenses.
  • I have confirmed with sources that every single number I give around Anthropic and Cursor’s AWS spend is the final cash paid to Amazon after any discounts or credits.
  • While I cannot disclose the identity of my source, I am 100% confident in these numbers, and have verified their veracity with other sources.

Anthropic’s Compute Costs Are Likely Much Higher Than Reported — $1.35 Billion in 2024 on AWS Alone

In February of this year, The Information reported that Anthropic burned $5.6 billion in 2024, and made somewhere between $400 million and $600 million in revenue:

It’s not publicly known how much revenue Anthropic generated in 2024, although its monthly revenue rose to about $80 million by the end of the year, compared to around $8 million at the start. That suggests full-year revenue in the $400 million to $600 million range.

…Anthropic told investors it expects to burn $3 billion this year, substantially less than last year, when it burned $5.6 billion. Last year’s cash burn was nearly $3 billion more than Anthropic had previously projected. That’s likely due to the fact that more than half of the cash burn came from a one-off payment to access the data centers that power its technology, according to one of the people who viewed the pitch.

While I don’t know about prepayment for services, I can confirm from a source with direct knowledge of billing that Anthropic spent $1.35 billion on Amazon Web Services in 2024, and has already spent $2.66 billion on Amazon Web Services through the end of September.

Assuming that Anthropic made $600 million in revenue, its $5.6 billion burn means it spent $6.2 billion in 2024, and subtracting its $1.35 billion AWS bill leaves $4.85 billion in costs unaccounted for. 

The Information’s piece also brings up another point:

The costs to develop AI models accounted for a major portion of Anthropic’s expenses last year. The company spent $1.5 billion on servers for training AI models. OpenAI was on track to spend as much as $3 billion on training costs last year, though that figure includes additional expenses like paying for data.

Before I go any further, I want to be clear that The Information’s reporting is sound, and I trust that their source (I have no idea who they are or what information was provided) was operating in good faith with good data.

However, Anthropic is telling people it spent $1.5 billion on just training when it has an Amazon Web Services bill of $1.35 billion, which heavily suggests that its actual compute costs are significantly higher than we thought, because, to quote SemiAnalysis, “a large share of Anthropic’s spending is going to Google Cloud.” 

I am guessing, because I do not know, but with $4.85 billion of other expenses to account for, it’s reasonable to believe Anthropic spent an amount similar to its AWS spend on Google Cloud. I do not have any information to confirm this, but given the discrepancies mentioned above, this is an explanation that makes sense.

I also will add that there is some sort of undisclosed cut that Amazon gets of Anthropic’s revenue, though it’s unclear how much. According to The Information, “Anthropic previously told some investors it paid a substantially higher percentage to Amazon [than OpenAI’s 20% revenue share with Microsoft] when companies purchase Anthropic models through Amazon.”

I cannot confirm whether a similar revenue share agreement exists between Anthropic and Google.

This also makes me wonder exactly where Anthropic’s money is going.

Where Is Anthropic’s Money Going?

Anthropic has, based on what I can find, raised $32 billion in the last two years. It started with a $4 billion investment from Amazon in September 2023 (bringing the total to $37.5 billion), a deal in which Amazon was named its “primary cloud provider” nearly eight months after Anthropic had announced Google as its “cloud provider.” Google responded a month later, investing another $2 billion on October 27 2023, “involving a $500 million upfront investment and an additional $1.5 billion to be invested over time,” bringing Anthropic’s total funding from 2023 to $6 billion.

In 2024, it would raise several more rounds — one in January for $750 million, another in March for $884.1 million, another in May for $452.3 million, and another $4 billion from Amazon in November 2024, which also saw it name AWS as Anthropic’s “primary cloud and training partner,” bringing its 2024 funding total to $6 billion.

In 2025 so far, it’s raised a $1 billion round from Google, a $3.5 billion venture round in March, opened a $2.5 billion credit facility in May, and completed a $13 billion venture round in September, valuing the company at $183 billion. This brings its total 2025 funding to $20 billion. 

While I do not have Anthropic’s 2023 numbers, its spend on AWS in 2024 — around $1.35 billion — leaves (as I’ve mentioned) $4.85 billion in costs that are unaccounted for. The Information reports that costs for Anthropic’s 521 research and development staff reached $160 million in 2024, leaving 394 other employees unaccounted for (for 915 employees total), and also adding that Anthropic expects its headcount to increase to 1900 people by the end of 2025.

The Information also adds that Anthropic “expects to stop burning cash in 2027.”

This leaves two unanswered questions:

  • Where is the rest of Anthropic’s money going?
  • How will it “stop burning cash” when its operational costs explode as its revenue increases?

An optimist might argue that Anthropic is just growing its pile of cash so it’s got a warchest to burn through in the future, but I have my doubts. In a memo revealed by WIRED, Anthropic CEO Dario Amodei stated that “if [Anthropic wanted] to stay on the frontier, [it would] gain a very large benefit from having access to this capital,” with “this capital” referring to money from the Middle East. 

Anthropic and Amodei’s sudden willingness to take large swaths of capital from the Gulf States suggests that it’s at least a little desperate for capital, especially given that Anthropic has, according to Bloomberg, “recently held early funding talks with Abu Dhabi-based investment firm MGX” a month after raising $13 billion.

In my opinion — and this is just my gut instinct — I believe that it is either significantly more expensive to run Anthropic than we know, or Anthropic’s leaked (and stated) revenue numbers are worse than we believe. I do not know one way or another, and will only report what I know.

How Much Did Anthropic and Cursor Spend On Amazon Web Services In 2025?

So, I’m going to do this a little differently than you’d expect, in that I’m going to lay out how much these companies spent, and draw throughlines from that spend to their reported revenue numbers and the product announcements or events that may have caused their compute costs to increase.

I’ve only got Cursor’s numbers from January through September 2025, but I have Anthropic’s AWS spend for both the entirety of 2024 and through September 2025.

What Does “Annualized” Mean?

So, this term is one of the most abused in the world of software, but in this case, I am sticking to the idea that it means “month times 12.” So, if a company made $10m in January, you would say that its annualized revenue is $120m. Obviously, there are a lot of (when you think about it, really obvious) problems with this kind of reporting — and thus, you only ever see it when it comes to pre-IPO firms — but that’s beside the point.

I give you this explanation because, when contrasting Anthropic’s AWS spend with its revenues, I’ve had to work back from whatever annualized revenues were reported for that month. 
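To make that conversion mechanical, here’s a minimal sketch of the arithmetic in both directions (the sample figures are ones that appear later in this piece):

```python
# A minimal sketch of "annualized" arithmetic: annualized = month x 12,
# and working backwards from a reported annualized figure to a monthly one.
# All figures are in millions of dollars.

def annualized(monthly_revenue: float) -> float:
    """Annualized run rate: one month's revenue times 12."""
    return monthly_revenue * 12

def monthly_from_annualized(annualized_revenue: float) -> float:
    """Work back from a reported annualized figure to a single month."""
    return annualized_revenue / 12

# $10M in January implies a $120M annualized run rate...
print(annualized(10))  # 120
# ...and a reported $1.4B annualized implies roughly $116M for the month.
print(round(monthly_from_annualized(1_400), 1))  # 116.7
```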

Anthropic’s Amazon Web Services Spend In 2024 - $1.359 Billion - Estimated Revenue $400 Million to $600 Million

Anthropic’s 2024 revenues are a little bit of a mystery, but, as mentioned above, The Information says it might be between $400 million and $600 million.

Here’s its monthly AWS spend. 

  • January 2024 - $52.9 million
  • February 2024 - $60.9 million
  • March 2024 - $74.3 million
  • April 2024 - $101.1 million
  • May 2024 - $100.1 million
  • June 2024 - $101.8 million
  • July 2024 - $118.9 million
  • August 2024 - $128.8 million
  • September 2024 - $127.8 million
  • October 2024 - $169.6 million
  • November 2024 - $146.5 million
  • December 2024 - $176.1 million

Analysis: Anthropic Spent At Least 200% of Its 2024 Revenue On Amazon Web Services In 2024

I’m gonna be nice here and say that Anthropic made $600 million in 2024 — the higher end of The Information’s reporting — meaning that it spent around 226% of its revenue ($1.359 billion) on Amazon Web Services.

[Editor's note: this copy originally had incorrect maths on the %. Fixed now.]
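For anyone who wants to check the maths themselves, summing the monthly figures listed above and dividing by the high-end revenue estimate gives:

```python
# Adding up the 2024 monthly AWS figures listed above ($ millions) and
# comparing against The Information's high-end revenue estimate of $600M.
monthly_2024 = [52.9, 60.9, 74.3, 101.1, 100.1, 101.8,
                118.9, 128.8, 127.8, 169.6, 146.5, 176.1]

total = sum(monthly_2024)           # ~$1,358.8M, i.e. ~$1.359B
pct_of_revenue = total / 600 * 100  # AWS spend as a share of revenue

print(round(total, 1))        # 1358.8
print(round(pct_of_revenue))  # 226
```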

Anthropic’s Amazon Web Services Spend In 2025 Through September 2025 - $2.66 Billion - Estimated Revenue Through September $2.55 Billion - 104% Of Revenue Spent on AWS

Thanks to my own analysis and reporting from outlets like The Information and Reuters, we have a pretty good idea of Anthropic’s revenues for much of the year. That said, July, August, and September get a little weirder, because we’re relying on “almosts” and “approachings,” as I’ll explain as we go.

I’m also gonna do an analysis on a month-by-month basis, because it’s necessary to evaluate these numbers in context. 

January 2025 - $188.5 million In AWS Spend, $72.91 Million or $83 Million In Revenue - 227% Of Revenue Spent on AWS

In this month, Anthropic’s reported revenue was somewhere from $875 million to $1 billion annualized, meaning either $72.91 million or $83 million for the month of January.

February 2025 - $181.2 million in AWS Spend, $116 Million In Revenue - 156% Of Revenue Spent On AWS

In February, as reported by The Information, Anthropic hit $1.4 billion annualized revenue, or around $116 million each month.

March 2025 - $240.3 million in AWS Spend - $166 Million In Revenue - 144% Of Revenue Spent On AWS - Launch of Claude 3.7 Sonnet & Claude Code Research Preview (February 24)

In March, as reported by Reuters, Anthropic hit $2 billion in annualized revenue, or $166 million in revenue.

Because February is a short month, and the launch took place on February 24 2025, I’m considering the launches of Claude 3.7 Sonnet and Claude Code’s research preview to be a cost burden in the month of March.

And man, what a burden! Costs increased by $59.1 million, primarily across compute categories, but with a large ($2 million since January) increase in monthly costs for S3 storage.

April 2025 - $221.6 million in AWS Spend - $204 Million In Revenue - 108% Of Revenue Spent On AWS

I estimate, based on a 22.4% compound growth rate, that Anthropic hit around $2.44 billion in annualized revenue in April, or $204 million in revenue.

Interestingly, this was the month where Anthropic launched its $100- and $200-a-month “Max” plans, and it doesn’t seem to have dramatically increased its costs. Then again, Max is also the gateway to things like Claude Code, which I’ll get to shortly.

May 2025 - $286.7 million in AWS Spend - $250 Million In Revenue - 114% Of Revenue Spent On AWS - Sonnet 4, Opus 4, General Availability Of Claude Code (May 22), Service Tiers (May 30)

In May, as reported by CNBC, Anthropic hit $3 billion in annualized revenue, or $250 million in monthly average revenue.

This was a big month for Anthropic, with two huge launches on May 22 2025 — its new, “more powerful” models Claude Sonnet 4 and Claude Opus 4, as well as the general availability of its AI coding environment Claude Code.

Eight days later, on May 30 2025, a page on Anthropic's API documentation appeared for the first time: "Service Tiers":

Different tiers of service allow you to balance availability, performance, and predictable costs based on your application’s needs.

We offer three service tiers:

- Priority Tier: Best for workflows deployed in production where time, availability, and predictable pricing are important

- Standard: Best for bursty traffic, or for when you’re trying a new idea

- Batch: Best for asynchronous workflows which can wait or benefit from being outside your normal capacity

Accessing the priority tier requires you to make an up-front commitment to Anthropic, and said commitment is based on a number of months (1, 3, 6 or 12) and the number of input and output tokens you estimate you will use each minute. 

What’s a Priority Tier? Why Is It Significant?

As I’ll get into in my June analysis, Anthropic’s Service Tiers exist specifically for it to “guarantee” your company won’t face rate limits or any other service interruptions, requiring a minimum spend, minimum token throughput, and for you to pay higher rates when writing to the cache — which is, as I’ll explain, a big part of running an AI coding product like Cursor.

Now, the jump in costs — $65.1 million or so between April and May — likely comes as a result of the final training for Sonnet and Opus 4, as well as, I imagine, some sort of testing to make sure Claude Code was ready to go.

June 2025 - $321.4 million in AWS Spend - $333 Million In Revenue - 96.5% Of Revenue Spent On AWS - Anthropic Cashes In On Service Tier Tolls That Add An Increased Charge For Prompt Caching, Directly Targeting Companies Like Cursor

In June, as reported by The Information, Anthropic hit $4 billion in annualized revenue, or $333 million.

Anthropic’s revenue spiked by $83 million this month, and so did its costs by $34.7 million. 

Anthropic Started The Subprime AI Crisis In June 2025, Increasing Costs On Its Largest Customer And Doubling Cursor’s AWS Spend In A Month

I have, for a while, talked about the Subprime AI Crisis, where big tech and companies like Anthropic, after offering subsidized pricing to entice in customers, raise the rates on their customers to start covering more of their costs, leading to a cascade where businesses are forced to raise their prices to handle their new, exploding costs.

And I was god damn right. Or, at least, it sure looks like I am. I’m hedging, forgive me. I cannot say for certain, but I see a pattern. 

It’s likely the June 2025 spike in revenue came from the introduction of service tiers, which specifically target prompt caching, increasing the amount of tokens you’re charged for as an enterprise customer based on the term of the contract, and your forecast usage.

Per my reporting in July:

You see, Anthropic specifically notes on its "service tiers" page that requests at the priority tier are "prioritized over all other requests to Anthropic," a rent-seeking measure that effectively means a company must either:

- Commit to at least a month, though likely 3-12 months of specific levels of input and output tokens a minute, based on what they believe they will use in the future, regardless of whether they do.

- Accept that access to Anthropic models will be slower at some point, in some way that Anthropic can’t guarantee.

Furthermore, the way that Anthropic is charging almost feels intentionally built to fuck over any coding startup that would use its service. Per the service tier page, Anthropic charges 1.25 tokens every time you write a token to the cache with a 5 minute TTL — or 2 tokens if you have a 1 hour TTL — and a longer cache is effectively essential for any background task where an agent will be working for more than 5 minutes, such as restructuring a particularly complex series of code, you know, the exact things that Cursor is well-known and marketed to do.

Furthermore, the longer something is in the cache, the better autocomplete suggestions for your code will be. It's also important to remember you're, at some point, caching the prompts themselves — so the instructions of what you want Cursor to do, meaning that the more complex the operation, the more expensive it'll now be for Cursor to provide the service with reasonable uptime.
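To put those multipliers in concrete terms, here’s a rough sketch of the cache-write arithmetic. The 1.25x and 2x figures are the ones quoted above; the base per-token price is an illustrative placeholder, not Anthropic’s actual rate.

```python
# Rough sketch of the cache-write billing described above. The TTL
# multipliers (1.25x for 5 minutes, 2x for 1 hour) are from the piece;
# BASE_INPUT_PRICE is a hypothetical rate, purely for illustration.

BASE_INPUT_PRICE = 3.00  # hypothetical $ per million input tokens

CACHE_WRITE_MULTIPLIER = {
    "5m": 1.25,  # billed as 1.25 tokens per token written to the cache
    "1h": 2.00,  # billed as 2 tokens per token written to the cache
}

def cache_write_cost(tokens_written: int, ttl: str) -> float:
    """Dollar cost of writing `tokens_written` tokens to the prompt cache."""
    billed_tokens = tokens_written * CACHE_WRITE_MULTIPLIER[ttl]
    return billed_tokens / 1_000_000 * BASE_INPUT_PRICE

# A long-running agent task needs the 1-hour TTL, so the same 10M cached
# tokens cost 60% more than they would with the 5-minute one.
print(cache_write_cost(10_000_000, "5m"))  # 37.5
print(cache_write_cost(10_000_000, "1h"))  # 60.0
```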

Cursor, as Anthropic’s largest client (the second largest being GitHub Copilot), represents a material part of its revenue, and its surging popularity meant it was sending more and more revenue Anthropic’s way. Anysphere, the company that develops Cursor, hit $500 million annualized revenue ($41.6 million a month) by the end of May, which Anthropic chose to celebrate by increasing its costs.

On June 16 2025, Cursor launched a $200-a-month “Ultra” plan, as well as dramatic changes to its $20-a-month Pro pricing that, instead of offering 500 “fast” responses using models from Anthropic and OpenAI, now effectively provided you with “at least” whatever you paid a month (so $20-a-month got at least $20 of credit), massively increasing the costs for users, with one calling the changes a “rug pull” after spending $71 in a single day.

As I’ll get to later in the piece, Cursor’s costs exploded from $6.19 million in May 2025 to $12.67 million in June 2025, and I believe this is a direct result of Anthropic’s sudden and aggressive cost increases. 

Similarly, Replit, another AI coding startup, moved to “Effort-Based Pricing” on June 18 2025. I don’t have any information about its AWS spend.

I’ll get into this a bit later, but I find this whole situation disgusting.

July 2025 - $323.2 million in AWS Spend - $416 Million In Revenue - 77.7% Of Revenue Spent On AWS

In July, as reported by Bloomberg, Anthropic hit $5 billion in annualized revenue, or $416 million.

While July wasn’t a huge month for announcements, it was allegedly the month that Claude Code was generating “nearly $400 million in annualized revenue,” or $33.3 million a month, according to The Information, which said Anthropic was “approaching” $5 billion in annualized revenue - which likely means LESS than that - but I’m going to go with the full $5 billion annualized for the sake of fairness. 

There’s roughly an $83 million bump in Anthropic’s revenue between June and July 2025, and I think Claude Code and its new rates are a big part of it. What’s fascinating is that cloud costs didn’t increase too much — by only $1.8 million, to be specific.

August 2025 - $383.7 million in AWS Spend - $416 Million In Revenue - 92% Of Revenue Spent On AWS

In August, according to Anthropic, its run-rate “reached over $5 billion,” or around $416 million a month. I am not giving it anything more than $5 billion, especially considering that in July, Bloomberg’s reporting said “about $5 billion.”

Costs grew by $60.5 million this month, potentially due to the launch of Claude Opus 4.1, Anthropic’s more aggressively expensive model, though revenues do not appear to have grown much along the way.

Yet what’s very interesting is that Anthropic — starting August 28 — launched weekly rate limits on its Claude Pro and Max plans. I wonder why?

September 2025 - $518.9 million in AWS Spend - $583 Million In Revenue - 88.9% Of Revenue Spent On AWS

Oh fuck! Look at that massive cost explosion!

Anyway, according to Reuters, Anthropic’s run rate is “approaching $7 billion” in October, and for the sake of fairness, I am going to just say it has $7 billion annualized, though I believe this number to be lower. “Approaching” can mean a lot of different things — $6.1 billion, $6.5 billion — and because I already anticipate a lot of accusations of “FUD,” I’m going to err on the side of generosity.

If we assume a $6.5 billion annualized rate, that would make this month’s revenue $541.6 million, or 95.8% of its AWS spend.  

Nevertheless, Anthropic’s costs exploded in the space of a month by $135.2 million (35%) - likely due to the fact that users, as I reported in mid-July, were costing it thousands or tens of thousands of dollars in compute, a problem it still faces to this day, with VibeRank showing a user currently spending $51,291 in a calendar month on a $200-a-month subscription.

If there were other costs, they likely had something to do with the training runs for the launches of Sonnet 4.5 on September 29 2025 and Haiku 4.5 in October 2025.

Anthropic’s Monthly AWS Costs Have Increased By 175% Since January - And With Its Potential Google Cloud Spend and Massive Staff, Anthropic Is Burning Billions In 2025

While these costs only speak to one part of its cloud stack — Anthropic has an unknowable amount of cloud spend on Google Cloud, and the data I have only covers AWS — it is simply remarkable how much this company spends on AWS, and how rapidly its costs seem to escalate as it grows.

Though things improved slightly over time — in that Anthropic is no longer burning over 200% of its revenue on AWS alone — these costs have still dramatically escalated, and done so in an aggressive and arbitrary manner. 

Anthropic’s AWS Costs Increase Linearly With Revenue, Consuming The Majority Of Each Dollar Anthropic Makes - As A Reminder, It Also Spends Hundreds Of Millions Or Billions On Google Cloud Too

So, I wanted to visualize this part of the story, because I think it’s important to see the various different scenarios.

An Estimate of Anthropic’s Potential Cloud Compute Spend Through September

THE NUMBERS I AM USING ARE ESTIMATES CALCULATED BASED ON 25%, 50% and 100% OF THE AMOUNTS THAT ANTHROPIC HAS SPENT ON AMAZON WEB SERVICES THROUGH SEPTEMBER. 

I apologize for all the noise, I just want it to be crystal clear what you see next.  


As you can see, all it takes is for Anthropic to spend (I am estimating) the equivalent of around 25% of its Amazon Web Services bill on Google Cloud (for a total of around $3.33 billion in compute costs through the end of September) to savage any and all revenue ($2.55 billion) it’s making. 

Assuming Anthropic spends half of its AWS spend on Google Cloud, this number climbs to $3.99 billion, and if you assume - and to be clear, this is an estimate - that it spends around the same on both Google Cloud and AWS, Anthropic has spent $5.3 billion on compute through the end of September.
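These scenarios are simple multiples of the AWS figure. Spelled out, using the reported $2.66 billion AWS spend:

```python
# The three Google Cloud scenarios above, spelled out. The AWS figure is
# the reported $2.66B through September; the ratios are this piece's
# estimates, not known numbers.

aws_spend = 2.66  # $ billions through September 2025

for gcp_ratio in (0.25, 0.50, 1.00):
    total = aws_spend * (1 + gcp_ratio)
    print(f"GCP at {gcp_ratio:.0%} of AWS: ~${total:.2f}B total compute")

# Against an estimated ~$2.55B in revenue over the same period, even the
# most conservative scenario puts compute spend well above revenue.
```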

I can’t tell you which it is, just that we know for certain that Anthropic is spending money on Google Cloud, and because Google owns 14% of the company — rivalling estimates saying Amazon owns around 15-19% — it’s fair to assume that there’s a significant spend.

Anthropic’s Costs Are Out Of Control, Consistently And Aggressively Outpacing Revenue - And Amazon’s Revenue from Anthropic Of $2.66 Billion Is 2.5% Of Its 2025 Capex

I have sat with these numbers for a great deal of time, and I can’t find any evidence that Anthropic has any path to profitability outside of aggressively increasing the prices on its customers, to the point that its services will become untenable for consumers and enterprise customers alike.

As you can see from these estimated and reported revenues, Anthropic’s AWS costs appear to increase in a near-linear fashion with its revenues, meaning that the current pricing — including rent-seeking measures like Priority Service Tiers — isn’t working to meet the burden of its costs.

We do not know its Google Cloud spend, but I’d be shocked if it was anything less than 50% of its AWS bill. If that’s the case, Anthropic is in real trouble - the costs of the services underlying its business increase the more money it makes.

It’s becoming increasingly apparent that Large Language Models are not a profitable business. While I cannot speak to Amazon Web Services’ actual costs, it’s making $2.66 billion from Anthropic, which is the second largest foundation model company in the world. 

Is that really worth $105 billion in capital expenditures? Is that really worth building a giant 1200 acre data center in Indiana with 2.2GW of electricity?

What’s the plan, exactly? Let Anthropic burn money for the foreseeable future until it dies, and then pick up the pieces? Wait until Wall Street gets mad at you and then pull the plug?

Who knows. 

But let’s change gears and talk about Cursor — Anthropic’s largest client and, at this point, a victim of circumstance.

Cursor’s Amazon Web Services Spend In 2025 Through September 2025 - $69.99 Million

An Important Note About Cursor’s Compute Spend

Amazon sells Anthropic’s models through Amazon Bedrock, and I believe that AI startups are compelled to route some of their AI model spend through Amazon Web Services. Cursor also sends money directly to Anthropic and OpenAI, meaning that these costs are only one piece of its overall compute costs. In any case, it’s very clear that Cursor buys some degree of its Anthropic model usage through Amazon.

I’ll also add that Tom Dotan of Newcomer reported a few months ago that an investor told him that “Cursor is spending 100% of its revenue on Anthropic.”

Unlike with Anthropic, we lack thorough reporting of the month-by-month breakdown of Cursor’s revenues. I will, however, mention them in the months I have them.

For the sake of readability — and because we really don’t have much information on Cursor’s revenues beyond a few months — I’m going to stick to a bullet point list. 

Another Note About Cursor’s AWS Spend - It Likely Funnels Some Model Spend Through AWS, But The Majority Goes Directly To Providers Like Anthropic

As discussed above, Cursor announced (along with its price change and $200-a-month plan) several multi-year partnerships with xAI, Anthropic, OpenAI and Google, suggesting that it has direct agreements with Anthropic itself rather than one with AWS to guarantee “this volume of compute at a predictable price.” 

Based on its spend with AWS, I do not see a strong “minimum” spend that would suggest a similar deal with Amazon — likely because Amazon handles more of its infrastructure than just compute, but incentivizes Cursor to spend on Anthropic’s models through AWS by offering discounts, something I’ve confirmed with a source. 

In any case, here’s what Cursor spent on AWS.

  • January 2025 - $1.459 million
  • February 2025 - $2.47 million
  • March 2025 - $4.39 million
  • April 2025 - $4.74 million
  • May 2025 - $6.19 million
  • June 2025 - $12.67 million
    • So, Bloomberg reported that Cursor hit $500 million on June 5 2025, along with raising a $900 million funding round. Great news! Turns out it’d need to start handing a lot of that to Anthropic.
    • This was, as I’ve discussed above, the month when Anthropic forced it to adopt “Service Tiers”. I go into detail about the situation here, but the long and short of it is that Anthropic increased the amount of tokens you burned by writing stuff to the cache (think of it like RAM in a computer), and AI coding startups are very cache heavy, meaning that Cursor immediately took on what I believed would be massive new costs. As I discuss in what I just linked, this led Cursor to aggressively change its product, thereby vastly increasing its customers’ costs if they wanted to use the same service.
    • That same month, Cursor’s AWS costs — which I believe are the minority of its cloud compute costs — exploded by 104% (or by $6.48 million), and never returned to their previous levels.
    • It’s conceivable that this surge is due to the compute-heavy nature of the latest Claude 4 models released that month — or, perhaps, Cursor sending more of its users to other models that it runs on Bedrock. 
  • July 2025 - $15.5 million
    • As you can see, Cursor’s costs continue to balloon in July, and I am guessing it’s because of the Service Tiers situation — which, I believe, indirectly resulted in Cursor pushing more users to models that it runs on Amazon’s infrastructure.
  • August 2025 - $9.67 million
    • So, I can only guess as to why there was a drop here. User churn? It could be the launch of GPT-5 on Cursor, which gave users a week of free access to OpenAI’s new models.
    • What’s also interesting is that this was the month when Cursor announced that its previously free “auto” model (where Cursor would select the best available premium model or its own model) would now bill at “competitive token rates,” by which I mean it went from charging nothing to $1.25 per million input and $6 per million output tokens. This change would take effect on September 15 2025.
    • On August 10 2025, Tom Dotan of Newcomer reported that Cursor was “well above” $500 million in annualized revenue based on commentary from two sources.
  • September 2025 - $12.91 million
    • Per the above, this is the month when Cursor started charging for its “auto” model.
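The month-over-month swings in that list are easier to see computed out (figures as listed above, in $ millions):

```python
# Month-over-month changes in Cursor's AWS bill, using the figures listed
# above ($ millions). The June spike is the jump discussed in the piece.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep"]
spend = [1.459, 2.47, 4.39, 4.74, 6.19, 12.67, 15.5, 9.67, 12.91]

for prev, cur, month in zip(spend, spend[1:], months[1:]):
    change = (cur - prev) / prev * 100
    print(f"{month}: ${cur:.2f}M ({change:+.0f}%)")

# June roughly doubles May (just under +105% on these unrounded figures;
# the rounded $6.2M/$12.6M figures give the ~104% cited in the piece),
# and spend never returns to its pre-June level.
print(f"Total: ${sum(spend):.3f}M")  # ~$70M through September
```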

What Anthropic May Have Done To Cursor Is Disgusting - And Is A Preview Of What’s To Come For AI Startups

When I wrote that Anthropic and OpenAI had begun the Subprime AI Crisis back in July, I assumed that the increase in costs was burdensome, but now that I’ve seen the information from its AWS bills, it seems that Anthropic’s actions directly caused Cursor’s costs to explode by over 100%. 

While I can’t definitively say “this is exactly what did it,” the timelines match up exactly, the costs have never come down, Amazon offers provisioned throughput, and, more than likely, Cursor needs to keep a standard of uptime similar to that of Anthropic’s own direct API access.

If this is what happened, it’s deeply shameful. 

Cursor, Anthropic’s largest customer, in the very same month it hit $500 million in annualized revenue, immediately had its AWS and Anthropic-related costs explode to the point that it had to dramatically reduce the value of its product just as it hit the apex of its revenue growth. 

Anthropic Timed Its Rent-Seeking Service Tier Price Increases on Cursor With The Launch Of A Competitive Product - Which Is What’s Coming To Any AI Startup That Builds On Top Of Its Products

It’s very difficult to see Service Tiers as anything other than an aggressive rent-seeking maneuver.

Yet another undiscussed part of the story is that the launch of Claude 4 Opus and Sonnet — and the subsequent launch of Service Tiers — coincided with the launch of Claude Code, a product that directly competes with Cursor, without the burden of having to pay itself for the cost of models or, indeed, having to deal with its own “Service Tiers.”

Anthropic may have increased the prices on its largest client at the time it was launching a competitor, and I believe that this is what awaits any product built on top of OpenAI or Anthropic’s models. 

The Subprime AI Crisis Is Real, And It Can Hurt You

I realize this has been a long, number-stuffed article, but the long-and-short of it is simple: Anthropic is burning all of its revenue on compute, and Anthropic will willingly increase the prices on its customers if it’ll help it burn less money, even though that doesn’t seem to be working.

What I believe happened to Cursor will likely happen to every AI-native company, because in a very real sense, Anthropic’s products are a wrapper for its own models, except it only has to pay the (unprofitable) costs of running them on Amazon Web Services and Google Cloud.

As a result, both OpenAI and Anthropic can (and may very well!) devour the market of any company that builds on top of their models. 

OpenAI may have given Cursor free access to its GPT-5 models in August, but a month later on September 15 2025 it debuted massive upgrades to its competitive “Codex” platform. 

Any product built on top of an AI model that shows any kind of success can be cloned immediately by OpenAI and Anthropic, and I believe that we’re going to see multiple price increases on AI-native companies in the next few months. After all, OpenAI already has its own priority processing product, which it launched shortly after Anthropic’s in June.

The ultimate problem is that there really are no winners in this situation. If Anthropic kills Cursor through aggressive rent-seeking, that directly eats into its own revenues. If Anthropic lets Cursor succeed, that’s revenue, but it’s also clearly unprofitable revenue. Everybody loses, but nobody loses more than Cursor’s (and other AI companies’) customers. 

Anthropic Is In Real Trouble - And The Current Cost Of Doing Business Is Unsustainable, Meaning Prices Must Increase

I’ve come away from this piece with a feeling of dread.

Anthropic’s costs are out of control, and as things get more desperate, it appears to be lashing out at its customers: both companies like Cursor and the Claude Code subscribers who face weekly rate limits on its more powerful models and are chided for using a product they pay for. Again, I cannot say for certain, but the spike in costs is clear, and it feels like more than a coincidence to me. 

There is no period of time that I can see in the just under two years of data I’ve been party to that suggests that Anthropic has any means of — or any success doing — cost-cutting, and the only thing this company seems capable of doing is increasing the amount of money it burns on a monthly basis. 

Based on what I have been party to, the more successful Anthropic becomes, the more its services cost. The cost of inference is clearly increasing for customers, but based on its escalating monthly costs, the cost of inference appears to be high for Anthropic too, though it’s impossible to tell how much of its compute is based on training versus running inference.

In any case, these costs seem to increase with the amount of money Anthropic makes, meaning that the current pricing of both subscriptions and API access seems unprofitable, and must increase dramatically — from my calculations, a 100% price increase might work, but good luck retaining every single customer and their customers too! — for this company to ever become sustainable. 

I don’t think that people would pay those prices. If anything, I think what we’re seeing in these numbers is a company bleeding out from costs that escalate the more that its user base grows. This is just my opinion, of course. 

I’m tired of watching these companies burn billions of dollars to destroy our environment and steal from everybody. I’m tired that so many people have tried to pretend there’s a justification for burning billions of dollars every year, clinging to empty tropes about how this is just like Uber or Amazon Web Services, when Anthropic has built something far more mediocre. 

Mr. Amodei, I am sure you will read this piece, and I can make time to chat in person on my show Better Offline. Perhaps this Friday? I even have some studio time on the books. 


OpenAI Needs $400 Billion In The Next 12 Months


Hello readers! This premium edition features a generous free intro because I like to try and get some of the info out there, but the real in-depth stuff is below the cut. Nevertheless, I deeply appreciate anyone subscribing.

On Monday I will have my biggest scoop ever, and it'll go out on the free newsletter because of its scale. This is possible because of people supporting me on the premium. Thanks so much for reading.


One of the only consistent critiques of my work is that I’m angry, irate, that I am taking myself too seriously, that I’m swearing too much, and that my arguments would be “better received” if I “calmed down.”

Fuck that.

Look at where being timid or deferential has got us. Broadcom and OpenAI have announced another 10GW of custom chips and supposed capacity which will supposedly get fully deployed by the end of 2029, and still the media neutrally reports these things as not simply doable, but rational.

To be clear, building a gigawatt of data center capacity costs at least $32.5 billion (though Jensen Huang says the computing hardware alone costs $50 billion, which excludes the buildings themselves and the supporting power infrastructure, and Barclays Bank says $50 billion to $60 billion) and takes two and a half years. 

In fact, fuck it — I’m updating my priors. Let’s say it’s a nice, round $50 billion per gigawatt of data center capacity. $32.5 billion is what it cost to build Stargate Abilene, but that estimate was based on Crusoe’s 1.2GW of compute for OpenAI being part of a $15 billion joint venture, which works out to about $12.5 billion per gigawatt of compute. Abilene’s 8 buildings are meant to hold 50,000 NVIDIA GB200 GPUs each, plus their associated networking infrastructure, so a gigawatt is around 333,333 Blackwell GPUs at $60,000 a piece, or about $20 billion a gigawatt.

However, this mathematics assumed that every cost associated would be paid by the Joint Venture. Lancium, the owner of the land that is allegedly building the power infrastructure, has now raised over a billion dollars.

This maths also didn’t include the cost of the associated networking infrastructure around the GB200s. So, guess what? We’re doing $50 billion now. 
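Both estimates above reduce to simple division; here is the arithmetic spelled out, using only the figures already cited:

```python
# Estimate 1: Stargate Abilene joint-venture math.
jv_total_bn = 15          # $15 billion joint venture
jv_capacity_gw = 1.2      # Crusoe's 1.2GW of compute for OpenAI
per_gw_jv = jv_total_bn / jv_capacity_gw
print(f"JV estimate: ${per_gw_jv:.1f}B per GW")  # → $12.5B per GW

# Estimate 2: GPU-count math.
gpus_per_gw = 333_333     # roughly one gigawatt's worth of Blackwell GPUs
price_per_gpu = 60_000    # $60,000 a piece
per_gw_gpus_bn = gpus_per_gw * price_per_gpu / 1e9
print(f"GPU estimate: ${per_gw_gpus_bn:.0f}B per GW")  # → $20B per GW
```

Neither estimate includes power infrastructure, buildings, or networking, which is why the working number in this piece is $50 billion per gigawatt.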

OpenAI has now promised 33GW of capacity across AMD, NVIDIA, Broadcom and the seven data centers built under Stargate, though one of those — in Lordstown, Ohio — is not actually a data center, with my source being “SoftBank,” speaking to WKBN in Lordstown Ohio, which said it will “not be a full-blown data center,” and instead be “at the center of cutting-edge technology that will encompass storage containers that will hold the infrastructure for AI and data storage.”

This wasn’t hard to find, by the way! I googled “SoftBank Lordstown” and up it came, ready for me to read with my eyes.

Putting all of that aside, I think it’s time that everybody started taking this situation far more seriously, by which I mean acknowledging the sheer recklessness and naked market manipulation taking place. 

But let’s make it really simple, and write out what’s meant to happen in the next year:

  • In the second half of 2026, OpenAI and Broadcom will tape out and successfully complete an AI inference chip, then manufacture enough of them to fill a 1GW data center.
    • That data center will be built in an as-yet-unknown location, and will have at least 1GW of power, but more realistically it will need 1.2GW to 1.3GW of power, because for every 1GW of IT load, you need extra power capacity in reserve for the hottest day of the year, when the cooling system works hardest and power transmission losses are highest. 
    • OpenAI does not appear to have a site for this data center, and thus has not broken ground on it.
  • In the second half of 2026, AMD and OpenAI will begin “the first 1 gigawatt deployment of AMD Instinct MI450 GPUs.” 
    • This will take place in an as-yet-unnamed data center location, which to be completed by that time would have needed to start construction and early procurement of power at least a year ago, if not more. 
  • In the second half of 2026, OpenAI and NVIDIA will deploy the first gigawatt of NVIDIA’s Vera Rubin GPU systems as part of their $100 billion deal.
    • These GPUs will be deployed in a data center of some sort, which remains unnamed, but for them to meet this timeline they will need to have started construction at least a year ago.
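The reserve-capacity point in the list above is worth making concrete. A minimal sketch, using the 1.2x to 1.3x overhead range cited there (an approximation for the sake of argument, not an engineering spec):

```python
def gross_power_needed(it_load_gw: float, overhead: float) -> float:
    """Gross power capacity required to serve a given IT load,
    where `overhead` covers cooling peaks and transmission losses."""
    return it_load_gw * overhead

# 1GW of IT load at the 1.2x-1.3x overhead range cited above:
low = gross_power_needed(1.0, 1.2)
high = gross_power_needed(1.0, 1.3)
print(f"{low:.1f}GW to {high:.1f}GW of power for 1GW of IT load")
```

In other words, every "1GW data center" in these announcements quietly implies procuring noticeably more than a gigawatt of actual power.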

In my most conservative estimate, these data centers will cost over $100 billion, and to be clear, a lot of that money needs to already be in OpenAI’s hands to get the data centers built. Or, some other dupe has to a.) have the money, and b.) be willing to front it. 

All of this is a fucking joke. I’m sorry, I know some of you will read this, cowering from your screen like a B-movie vampire that just saw a crucifix, but it is a joke, and it is a fucking stupid joke, the only thing stupider being that any number of respectable media outlets are saying these things like they’ll actually happen.

There is not enough time to build these things. If there was enough time, there wouldn’t be enough money. If there was enough money, there wouldn’t be enough transformers, electrical-grade steel, or specialised talent to run the power to the data centers. Fuck! Piss! Shit! Swearing doesn’t change the fact that I’m right — none of what OpenAI, NVIDIA, Broadcom, and AMD are saying is possible, and it’s fair to ask why they’re saying it.

I mean, we know. Number must go up, deal must go through, and Jensen Huang wouldn’t go on CNBC and say “yeah man if I’m honest I’ve got no fucking clue how Sam Altman is going to pay me, other than with the $10 billion I’m handing him in a month. Anyway, NVIDIA’s accounts receivables keep increasing every quarter for a normal reason, don’t worry about it.” 

But in all seriousness, we now have three publicly-traded tech firms that have all agreed to join Sam Altman’s No IT Loads Refused Cash Dump, all promising to do things on insane timelines that they — as executives of giant hardware manufacturers, or human beings with warm bodies and pulses and sciatica — all must know are impossible to meet. 

What is the media meant to do? What are we, as regular people, meant to do? These stocks keep pumping based on completely nonsensical ideas, and we’re all meant to sit around pretending things are normal and good. They’re not! At some point somebody’s going to start paying people actual, real dollars at a scale that OpenAI has never truly had to reckon with.

In this piece, I’m going to spell out in no uncertain terms exactly what OpenAI has to do in the next year to fulfil its destiny — having a bunch of capacity that cost ungodly amounts of money to serve demand that never arrives.

Yes, yes, I know, you’re going to tell me that OpenAI has 800 million weekly active users, and putting aside the fact that OpenAI’s own research (see page 10, footnote 20) says it double-counts users who are logged out if they use different devices, OpenAI is saying it wants to build 250 gigawatts of capacity by 2033, which will cost it $10 trillion, or one-third of the entire US economy’s output last year.

Who the fuck for? 

One thing that’s important to note: In February, Goldman Sachs estimated that the global data center capacity was around 55GW. In essence, OpenAI says it wants to add five times that capacity — something that has grown organically over the past thirty or so years — by itself, and in eight years. 

And yes, it’ll cost one-third of America’s output in 2024. This is not a sensible proposition. 

Even if you think that OpenAI’s growth is impressive — it went from 700 million to 800 million weekly active users in the last two months — that is not the kind of growth that says “build capacity assuming that literally every single human being on Earth uses this all the time.” 

As an aside: Altman is already lying about his available capacity. According to an internal Slack note seen by Alex Heath of Sources, Altman claims that OpenAI started the year with “around” 230 megawatts of capacity and is “now on track to exit 2025 north of 2GW of operational capacity.” Unless I’m much mistaken OpenAI doesn’t have any capacity of its own — and according to Mr. Altman, it’s somehow built or acquired 1.7GW of capacity from somewhere without disclosing it.

For context, 1.7GW is the equivalent of every data center in the UK that was operational last year.

Where is this coming from? Is this CoreWeave? It only has — at most — 900MW of capacity by the end of 2025. Where’d all the extra capacity come from? Who knows! It isn’t Stargate Abilene, that’s for sure — they’ve only got one operational building and 200MW of power, meaning they can only really support 130MW of IT loads, because of that pesky reserve I mentioned earlier. 

Anyway, what exactly is OpenAI doing? Why does it need all this capacity? Even if it hits its $13 billion revenue projection for this year (it’s only at $5.3 billion or so as of the end of August, and for OpenAI to hit its targets it’ll need to make $1.5bn+ a month very soon), does it really think it’s going to effectively 10x the entire company from here? What possible sign is there of that happening other than a conga-line of different executives willing to stake their reputations on blatant lies peddled by a man best known for needing, at any given moment, another billion dollars?

According to The Information, OpenAI spent $6.7 billion on research and development in the first six months of 2025, and according to Epoch AI, most of the $5 billion it spent on research and development in 2024 was spent on research, experimental, or derisking runs (basically running tests before doing the final testing run) and models it would never release, with only $480 million going to training actual models that people will use. 

I should also add that GPT 4.5 was a dud, and even Altman called it giant, expensive, and said it “wouldn’t crush benchmarks.”

I’m sorry, but what exactly is it that OpenAI has released in the last year-and-a-half that was worth burning $11.7 billion for? GPT 5? That was a huge letdown! Sora 2? The giant plagiarism machine that it’s already had to neuter?

What is it that any of you believe that OpenAI is going to do with these fictional data centers? 

Why Does ChatGPT Need $10 Trillion Of Data Centers?

The problem with ChatGPT isn’t just that it hallucinates — it’s that you can’t really say exactly what it can do, because you can’t really trust that it can do anything. Sure, it’ll get a few things right a lot of the time, but what task is it able to do every time that you actually need? 

Say the answer is “something that took me an hour now takes me five minutes.” Cool! How many of those do you get? Again, OpenAI wants to build 250 gigawatts of data centers, and will need around ten trillion dollars to do it. “It’s going to be really good” is no longer enough.

And no, I’m sorry, they are not building AGI. Altman just told Politico a few weeks ago that if we didn’t have “models that are extraordinarily capable and do things that we ourselves cannot do” by 2030 he would be “very surprised.” 

Wow! What a stunning and confident statement. Let’s give this guy the ten trillion dollars he needs! And he’s gonna need it soon if he wants to build 250 gigawatts of capacity by 2033.

But let’s get a little more specific.

Based on my calculations, in the next six months, OpenAI needs at least $50 billion to build a gigawatt of data centers for Broadcom — and to hit its goal of 10 gigawatts of data centers by end of 2029, at least another $200 billion in the next 12 months, not including at least $50 billion to build a gigawatt of data centers for NVIDIA, $40 billion to pay for its 2026 compute, at least $50 billion to buy chips and build a gigawatt of data centers for AMD, at least $500 million to build its consumer device (and they can’t seem to work out what to build), and at least a billion dollars to hand off to ARM for a CPU to go with the new chips from Broadcom.

That’s $391.5 billion! That’s $23.5 billion more than the $368 billion of global venture capital raised in 2024! That’s nearly 11 times Uber’s total ($35.8 billion) lifetime funding, or 5.7 times the $67.6 billion in capital expenditures that Amazon spent building Amazon Web Services.

On top of all of this are OpenAI’s other costs. According to The Information, OpenAI spent $2 billion alone on Sales and Marketing in the first half of 2025, and likely spends billions of dollars on salaries, meaning that it’ll likely need at least another $10 billion on top. As this is a vague cost, I’m going with a rounded $400 billion number, though I believe it’s actually going to be more.

And to be clear, to complete these deals by the end of 2026, OpenAI needs large swaths of this money by February 2026. 

OpenAI Needs Over $400 Billion In The Next 12 Months To Complete Any Of These Deals — And Sam Altman Doesn’t Have Enough Time To Build Any Of It

I know, I know, you’re going to say that OpenAI will simply “raise debt” and “work it out,” but OpenAI has less than a year to do that, because OpenAI has promised in its own announcements that all of these things would happen by the end of December 2026, and even if they’re going to happen in 2027, data centers require actual money to begin construction, and Broadcom, NVIDIA and AMD are going to actually require cash for those chips before they ship them.

Even if OpenAI finds multiple consortiums of paypigs to take on the tens of billions of dollars of data center funding, there are limits, and based on OpenAI’s aggressive (and insane) timelines, they will need to raise multiple different versions of the largest known data center deals of all time, multiple times a year, every single year. 

Say that happens. OpenAI will still need to pay those compute contracts with Oracle, CoreWeave, Microsoft (I believe its Azure credits have run out) and Google (via CoreWeave) with actual, real cash — $40 billion worth — when it was already burning $9.2 billion on compute in the first half of 2025 against revenues of $4.3 billion. OpenAI will still need to pay its staff, its storage, and the sales and marketing department that cost it $2 billion in the first half of 2025, all while converting its non-profit into a for-profit by the end of the year, or it loses $20 billion in funding from SoftBank.

Also, if it doesn’t convert to a for-profit by October 2026, its $6.6 billion funding round from 2024 converts to debt.

The Global Financial System Cannot Afford OpenAI

The burden that OpenAI is putting on the financial system is remarkable, and actively dangerous. It would absorb, at this rate, the capital expenditures of multiple hyperscalers, requiring multiple $30 billion debt financing operations a year, and for it to hit its goal of 250 gigawatts by the end of 2033, it will likely have to have outpaced the capital expenditures of any other company in the world.

OpenAI is an out-of-control monstrosity that is going to harm every party that depends upon it completing its plans. For it to succeed, it will have to absorb over a trillion dollars a year — and for it to hit its target, it will likely have to eclipse the $1.7 trillion in global private equity deal volume in 2024, and become a significant part of global trade ($33 trillion in 2025).

There isn’t enough money to do this without diverting most of the money that exists to doing it, and even if that were to happen, there isn’t enough time to do any of the stuff that has been promised in anything approaching the timelines promised, because OpenAI is making this up as it goes along and somehow everybody is believing it. 

At some point, OpenAI is going to have to actually do the things it has promised to do, and the global financial system is incapable of supporting them.

And to be clear, OpenAI cannot really do any of the things it’s promised.

Just take a look at the Oracle deal!

None of this bullshit is happening, and it’s time to be honest about what’s actually going on.

OpenAI is not building “the AI industry,” as this is capacity for one company that burns billions of dollars and has absolutely no path to profitability. 

This is a giant, selfish waste of money and time, one that will collapse the second that somebody’s confidence wavers.

I realize that it’s tempting to write “Sam Altman is building a giant data center empire,” but what Sam Altman is actually doing is lying. He is lying to everybody. 

He is saying that he will build 250GW of data centers in the space of eight years, an impossible feat, requiring more money than anybody would ever give him in volumes and intervals that are impossible for anybody to raise. 

Sam Altman’s singular talent is finding people willing to believe his shit or join him in an economy-supporting confidence game, and the recklessness of continuing to do so will only harm retail investors — regular people beguiled by the bullshit machine and bullshit masters making billions promising they’ll make trillions.

To prove it, I’m going to write down everything that will need to take place in the next twelve months for this to happen, and illustrate the timelines of everything involved. 


Pluralistic: The mad king's digital killswitch (20 Oct 2025)



Today's links



The Earth seen from space. Hovering above it is Uncle Sam, with Trump's hair - his legs are stuck out before him, and they terminate in ray-guns that are shooting red rays over the Earth. The starry sky is punctuated by 'code waterfall' effects, as seen in the credit sequences of the Wachowskis' 'Matrix' movies.

The mad king's digital killswitch (permalink)

Remember when we were all worried that Huawei had filled our telecoms infrastructure with listening devices and killswitches? It sure would be dangerous if a corporation beholden to a brutal autocrat became structurally essential to your country's continued operations, huh?

In other, unrelated news, earlier this month, Trump's DoJ ordered Apple and Google to remove apps that allowed users to report ICE's roving gangs of masked thugs, who have kidnapped thousands of our neighbors and sent them to black sites:

https://pluralistic.net/2025/10/06/rogue-capitalism/#orphaned-syrian-refugees-need-not-apply

Apple and Google capitulated. Apple also capitulated to Trump by removing apps that collect hand-verified, double-checked videos of ICE violence. Apple declared ICE's thugs to be a "protected class" that may not be disparaged in apps available to Apple's customers:

https://www.wnycstudios.org/podcasts/otm/articles/big-tech-is-silencing-the-ice-watchers-plus-why-a-scholar-of-antifa-fled-the-country

Of course, iPhones can (technically) run apps that Apple doesn't want you to run. All you have to do is "jailbreak" your phone and install an independent app store. Just one problem: the US Trade Rep bullied every country in the world into banning jailbreaking, meaning that if Trump (a man who never met a grievance that was too petty to pursue) orders Tim Cook (a man who never found a boot he wouldn't lick) to remove apps from your country's app store, you won't be able to get those apps from anyone else:

https://pluralistic.net/2025/10/15/freedom-of-movement/#data-dieselgate

Now, you could get your government to order Apple to open up its platform to third-party app stores, but they will not comply – instead, they'll drown your country in spurious legal threats:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62025TN0354

And they'll threaten to pull out of your country altogether:

https://pluralistic.net/2025/09/26/empty-threats/#500-million-affluent-consumers

Of course, Google's no better. Not only do they capitulate to every demand from Trump, but they're also locking down Android so that you'll no longer be allowed to install apps unless Google approves of them (meaning that Trump now has a de facto veto over your Android apps):

https://pluralistic.net/2025/09/01/fulu/#i-am-altering-the-deal

For decades, China hawks have accused Chinese tech giants of being puppeteered by the Chinese state, vehicles for projecting Chinese state power around the world. Meanwhile, the Chinese state has declared war on its tech companies, treating them as competitors, not instruments:

https://pluralistic.net/2021/04/03/ambulatory-wallets/#sectoral-balances

When it comes to US foreign policy, every accusation is a confession. Snowden showed us how the US tech giants were being used to wiretap virtually every person alive for the US government. More than a decade later, Microsoft has been forced to admit that they will still allow Trump's lackeys to plunder Europeans' data, even if that data is stored on servers in the EU:

https://www.forbes.com/sites/emmawoollacott/2025/07/22/microsoft-cant-keep-eu-data-safe-from-us-authorities/

Microsoft is definitely a means for the US to project its power around the world. When Trump denounced Karim Khan, the Chief Prosecutor of the International Criminal Court, for indicting Netanyahu for genocide, Microsoft obliged by nuking Khan's email, documents, calendar and contacts:

https://apnews.com/article/icc-trump-sanctions-karim-khan-court-a4b4c02751ab84c09718b1b95cbd5db3

This is exactly the kind of thing Trump's toadies warned us would happen if we let Huawei into our countries. Every accusation is a confession.

But it's worse than that. The very worst-case speculative scenario for Huawei-as-Chinese-Trojan-horse is infinitely better than the non-speculative, real ways in which the US has killswitched and bugged the world's devices.

Take CALEA, a Clinton-era law that requires all network switches to be equipped with law-enforcement back-doors that allow anyone who holds the right credential to take over the switch and listen in, block, or spoof its data. Virtually every network switch manufactured is CALEA-compliant, which is how the NSA was able to listen in on the Greek Prime Minister's phone calls to gain competitive advantage for the competing Salt Lake City Olympic bid:

https://en.wikipedia.org/wiki/Greek_wiretapping_case_2004%E2%80%9305

CALEA backdoors are a single point of failure for the world's networking systems. Nominally, CALEA backdoors are under US control, but the reality is that lots of hackers have exploited CALEA to attack governments and corporations, inside the US and abroad. Remember Salt Typhoon, the worst-ever hacking attack on US government agencies and large corporations? The Salt Typhoon hackers used CALEA as their entry point into those networks:

https://pluralistic.net/2024/10/07/foreseeable-outcomes/#calea

US monopolists – within Trump's coercive reach – control so many of the world's critical systems. Take John Deere, the ag-tech monopolist that supplies the majority of the world's tractors. By design, those tractors do not allow the farmers who own them to alter their software. That's so John Deere can force farmers to use Deere's own technicians for repairs, and so that Deere can extract soil data from farmers' tractors to sell into the global futures market.

A tractor is a networked computer in a fancy, expensive case filled with whirling blades, and at any time, Deere can reach into any tractor and permanently immobilize it. Remember when Russian looters stole those Ukrainian tractors and took them to Chechnya, only to have Deere remotely brick their loot, turning the tractors into multi-ton paperweights? A lot of us cheered that high-tech comeuppance, but when you consider that Donald Trump could order Deere to do this to all the tractors, on his whim, this gets a lot more sinister:

https://pluralistic.net/2022/05/08/about-those-kill-switched-ukrainian-tractors/

Any government thinking about the future of geopolitics in an era of Trump's mad king fascism should be thinking about how to flash those tractors – and phones, and games consoles, and medical implants, and ventilators – with free and open software that is under its owner's control. The problem is that every country in the world has signed up to America's ban on jailbreaking.

In the EU, it's Article 6 of the Copyright Directive. In Mexico, it's the IP chapter of the USMCA. In Central America, it's via CAFTA. In Australia, it's the US-Australia Free Trade Agreement. In Canada, it's 2012's Bill C-11, which bans Canadian farmers from fixing their own tractors, Canadian drivers from taking their cars to a mechanic of their choosing, and Canadian iPhone and games console owners from choosing to buy their software from a Canadian store:

https://pluralistic.net/2025/01/15/beauty-eh/#its-the-only-war-the-yankees-lost-except-for-vietnam-and-also-the-alamo-and-the-bay-of-ham

These anti-jailbreaking laws were designed as a tool of economic extraction, a way to protect American tech companies' sky-high fees and rampant privacy invasions by making it illegal, everywhere, for anyone to alter how these devices work without the manufacturer's permission.

But today, these laws have created clusters of deep-seated infrastructural vulnerabilities that reach into all our digital devices and services, including the digital devices that harvest our crops, supply oxygen to our lungs, or tell us when Trump's masked shock-troops are hunting people in our vicinity.

It's well past time for a post-American internet. Every device and every service should be designed so that the people who use them have the final say over how they work. Manufacturers' back doors and digital locks that prevent us from updating our devices with software of our choosing were never a good idea. Today, they're a catastrophe.

The world signed up to these laws because the US threatened them with tariffs if they didn't do as they were told. Well, happy Liberation Day, everyone. The US told the world to pass America's tech laws or face American tariffs.

When someone threatens to burn down your house unless you do as you're told, and then they burn your house down anyway, you don't have to keep doing what they told you.

When Putin invaded Ukraine, he inadvertently pushed the EU to accelerate its solarization efforts, to escape their reliance on Russian gas, and now Europe is a decade ahead of schedule in meeting its zero-emissions goals:

https://electrek.co/2025/09/30/solar-leads-eu-electricity-generation-as-renewables-hit-54-percent/

Today, another mad dictator is threatening the world's infrastructure. For the rest of the world to escape dictators' demands, they will have to accelerate their independence from American tech – not just Russian gas. A post-American internet starts with abandoning the laws that give US companies – and therefore Trump – a veto over how your technology works.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Fox shuts down Buffy Hallowe’en musical despite Whedon’s protests https://web.archive.org/web/20051021235310/http://www.counterpulse.org/calendar.shtml#buffy

#20yrsago Norway’s public broadcaster sells out taxpayers to Microsoft https://memex.craphound.com/2005/10/16/norways-public-broadcaster-sells-out-taxpayers-to-microsoft/

#20yrsago Lifehackers profile in NYT https://www.nytimes.com/2005/10/16/magazine/meet-the-life-hackers.html

#20yrsago Pan-European DRM proposal https://dissected

#20yrsago EFF cracks hidden snitch codes in color laser prints https://w2.eff.org/Privacy/printers/docucolor/

#20yrsago Nielsen’s top-10 blog usability mistakes https://www.nngroup.com/articles/weblog-usability-top-ten-mistakes/

#20yrsago Microsoft employee calls me a communist and a liar and insists that a Microsoft monopoly will be good for Norway https://memex.craphound.com/2005/10/17/msft-employee-cory-is-a-liar-and-a-communist-msft-is-good-for-norway/

#20yrsago Dear ASCAP: May I sing Happy Birthday for my dad’s 75th? https://web.archive.org/web/20051024004347/https://blog.stayfreemagazine.org/2005/09/happy_birthday.html

#20yrsago 100 oldest .COM names in the registry https://web.archive.org/web/20051024020147/http://www.jottings.com/100-oldest-dot-com-domains.htm

#15yrsago Koja’s UNDER THE POPPY: dark, epic and erotic novel of war and intrigue https://memex.craphound.com/2010/10/18/kojas-under-the-poppy-dark-epic-and-erotic-novel-of-war-and-intrigue/

#15yrsago Ray Ozzie leaves Microsoft https://www.salon.com/2010/10/19/microsoft_roy_ozzie/

#15yrsago Google Book Search will never have an effective competitor https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1417722

#15yrsago Prentiss County, Mississippi Jail requires all inmates to have a Bible, regardless of faith https://web.archive.org/web/20061119033010/https://www.prentisscountysheriff.com/jail.aspx

#15yrsago Early distributed computing video, 1959, prefigures the net https://archive.org/details/AllAboutPolymorphics

#15yrsago Furniture made from rusted Soviet naval mines https://web.archive.org/web/20150206045826/https://marinemine.com/

#15yrsago G20 Toronto cop who was afraid of girl blowing soap bubbles sues YouTube for “ridicule” https://web.archive.org/web/20101019001110/https://www.theglobeandmail.com/news/national/toronto/officer-bubbles-launches-suit-against-youtube/article1760214/

#15yrsago Help wanted: anti-terrorism intern for Disney https://web.archive.org/web/20151015182237/http://thewaltdisneycompany.jobs/burbank-ca/global-intelligence-analyst-intern-corporate-spring-2016/408543725E4D48B196C01CAEEE602D36/job/

#15yrsago Rudy Rucker remembers Benoit Mandelbrot https://www.rudyrucker.com/blog/2010/10/16/remembering-benoit-mandelbrot/

#15yrsago Verminous Dickens cake banned from Melbourne cake show https://web.archive.org/web/20101019004804/https://hothamstreetladies.blogspot.com/2010/09/contraband-cake.html

#15yrsago English Heritage claims it owns every single image of Stonehenge, ever https://blog.fotolibra.com/2010/10/19/stonewalling-stonehenge/

#15yrsago HOWTO Make Mummy Meatloaf https://web.archive.org/web/20101022232509/http://gatherandnest.com/?p=2848

#15yrsago HOWTO catch drilling-dust with a folded Post-It https://cheezburger.com/4078311936

#10yrsago White supremacists call for Star Wars boycott because imaginary brown people https://www.themarysue.com/boycott-star-wars-vii-because-why-again/

#10yrsago In upsidedownland, Verizon upheld its fiber broadband promises to 14 cities https://www.techdirt.com/2015/10/19/close-only-counts-horseshoes-hand-grenades-apparently-verizons-fiber-optic-installs/

#10yrsago Survivor-count for the Chicago PD’s black-site/torture camp climbs to 7,000+ https://www.theguardian.com/us-news/2015/oct/19/homan-square-chicago-police-disappeared-thousands

#10yrsago A Swedish doctor’s collection of English anatomical idioms https://news.harvard.edu/gazette/story/2015/10/body-of-work/?utm_source=SilverpopMailing&utm_medium=email&utm_campaign=10.15.2015

#10yrsago Some suggestions for sad, rich people https://whatever.scalzi.com/2015/10/18/the-1-of-problems/

#10yrsago That “CIA veteran” who was always on Fox News? Arrested for lying about being in the CIA https://www.abc.net.au/news/2015-10-16/fox-news-terrorism-expert-arrested-for-pretending-to-be-cia/6859576

#10yrsago Eric Holder: I didn’t prosecute bankers for reasons unrelated to my $3M/year law firm salary https://theintercept.com/2015/10/16/holder-defends-record-of-not-prosecuting-financial-fraud/

#10yrsago Titanic victory for fair use: appeals court says Google’s book-scanning is legal https://memex.craphound.com/2015/10/16/titanic-victory-for-fair-use-appeals-court-says-googles-book-scanning-is-legal/

#10yrsago Snowden for drones: The Intercept’s expose on US drone attacks, revealed by a new leaker https://theintercept.com/drone-papers/

#10yrsago Tweens are smarter than you think: the wonderful, true story of the ERMAHGERD meme https://www.vanityfair.com/culture/2015/10/ermahgerd-girl-true-story

#10yrsago UK MPs learn that GCHQ can spy on them, too, so now we may get a debate on surveillance https://www.theguardian.com/world/2015/oct/14/gchq-monitor-communications-mps-peers-tribunal-wilson-doctrine

#10yrsago Now we know the NSA blew the black budget breaking crypto, how can you defend yourself? https://www.eff.org/deeplinks/2015/10/how-to-protect-yourself-from-nsa-attacks-1024-bit-DH

#10yrsago 23andme & Ancestry.com aggregated the world’s DNA; the police obliged them by asking for it https://web.archive.org/web/20151023033455/https://fusion.net/story/215204/law-enforcement-agencies-are-asking-ancestry-com-and-23andme-for-their-customers-dna/

#10yrsago A chess-set you wear in a ring https://imgur.com/worlds-smallest-chess-set-ring-Hh3Jeip

#10yrsago Exploiting smartphone cables as antennae that receive silent, pwning voice commands https://www.wired.com/2015/10/this-radio-trick-silently-hacks-siri-from-16-feet-away/

#10yrsago NYPD won’t disclose what it does with its secret military-grade X-ray vans https://web.archive.org/web/20151017212024/http://www.nyclu.org/news/nypd-unlawfully-hiding-x-ray-van-use-city-neighborhoods-nyclu-argues

#10yrsago The International Concatenated Order of Hoo-Hoo: greatly improved, but something important has been lost https://back-then.tumblr.com/post/131407456141/the-international-concatenated-order-of-hoo-hoo

#5yrsago Happy World Standards Day or not https://pluralistic.net/2020/10/18/middle-gauge-muddle/#aoc-flex

#5yrsago Amazon returns end up in landfills https://pluralistic.net/2020/10/16/lucky-ducky/#landfillers

#5yrsago UK to tax Amazon's victims https://pluralistic.net/2020/10/16/lucky-ducky/#amazon-tax

#5yrsago Ferris wheel offices https://pluralistic.net/2020/10/16/lucky-ducky/#gondoliers

#5yrsago Kids reason, adults rationalize https://pluralistic.net/2020/10/19/nanotubes-r-us/#kids-r-alright

#1yrago You should be using an RSS reader https://pluralistic.net/2024/10/16/keep-it-really-simple-stupid/#read-receipts-are-you-kidding-me-seriously-fuck-that-noise

#5yrsago Educator sued for criticising "invigilation" tool https://pluralistic.net/2020/10/17/proctorio-v-linkletter/#proctorio

#1yrago Blue states should play "constitutional hardball" https://pluralistic.net/2024/10/18/states-rights/#cold-civil-war

#1yrago Penguin Random House, AI, and writers' rights https://pluralistic.net/2024/10/19/gander-sauce/#just-because-youre-on-their-side-it-doesnt-mean-theyre-on-your-side


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026



Colophon (permalink)

Today's top sources:

Currently writing:

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X


Takashi Murakami Adds His Signature Style to Dom Pérignon


Takashi Murakami, the Japanese contemporary artist known for blending motifs from popular culture and postwar Japanese art into fantastical, vibrant scenes, continues to bridge worlds with his signature characters and joyful flowers. His work has graced everything from hip-hop album covers to major museum exhibitions, seamlessly crossing creative boundaries. Now, Murakami brings his playful vision to the world of fine champagne, collaborating with Dom Pérignon on the Dom Pérignon x Takashi Murakami collection – two limited-edition bottles celebrating the close of the 2025 season: one for Dom Pérignon Vintage 2015 and one for the launch of Dom Pérignon Rosé Vintage 2010.

A person stands in front of a mural featuring colorful, smiling cartoon flowers and butterflies on a gold background.

Murakami’s collaboration with Dom Pérignon extends beyond decoration – it’s a conversation rooted in nature. For Dom Pérignon, nature is both where it begins and the medium itself – the grapes, the unpredictable climate, and the human touch are all encapsulated within the confines of the glass. Murakami, in turn, interprets nature through transformation – his surreal, smiling flowers and dreamlike characters move between the natural and artificial worlds, between nature’s evolution and the artist’s reimagining of it.

Two black Takashi Murakami Dom Perignon boxes feature colorful, stylized flower designs and the label for a 2014 vintage champagne against a dark background.

Two bottles of Takashi Murakami Dom Perignon champagne, featuring decorative floral labels—one with purple foil (2010), the other black foil (2013)—are displayed against a dark background.

In the 2025 collection, this exchange comes alive through vibrant contrasts and symbolic design. The dark, minimalist bottles and coffrets are punctuated by bursts of Murakami’s iconic blooms, each one a cheerful, animated embodiment of vitality. The champagne’s historic crest becomes a portal to a whimsical, flower-filled world, where refinement meets exuberance and timeless craftsmanship meets contemporary imagination. The Vintage 2015 and Rosé Vintage 2010 each have distinctive color palettes and moods, much like the years themselves. When displayed side by side, the limited-edition boxes form a modular floral composition.

A man with glasses and gray hair tied in a bun examines a decorated bottle of Dom Pérignon champagne placed on a white surface.

A bottle of Takashi Murakami Dom Perignon Vintage 2013 Champagne with a floral label, displayed on a blue and purple flower-shaped stand against a dark background.

A Takashi Murakami Dom Perignon Vintage 2015 champagne bottle stands beside its box, both adorned with vibrant Murakami flower artwork on a sleek black background.

A Takashi Murakami Dom Perignon Vintage 2015 champagne bottle label adorned with colorful, cartoon-style smiling flowers.

A Takashi Murakami Dom Perignon Vintage 2015 champagne label is centered against a colorful background of cartoon flowers with smiling faces.

Murakami understands the importance of respecting the processes of the past while also looking towards the future. “Through my collaboration with Dom Pérignon, I wanted to express a form of time travel. My goal is to remain relevant in 100 or 200 years and to transcend time. When the label has aged, and I am gone, and my children are gone, I hope that people of the future, when they see it, will reimagine 2025 in their own minds,” says the artist, grounding the collection in historical perspective.

A bottle of Takashi Murakami Dom Perignon champagne with a floral-themed label is displayed on a pink and green flower-shaped stand against a dark background.

A bottle of Takashi Murakami Dom Perignon Vintage 2010 Champagne stands next to a gift box adorned with colorful, smiling flowers by the renowned artist Murakami.

Label of a Takashi Murakami Dom Perignon Rosé Vintage 2010 champagne bottle, adorned with colorful, cartoon-style smiling flowers set against a sleek black background.

A Dom Pérignon Rosé Vintage 2010 label is featured, surrounded by colorful, cartoon-like flowers with smiling faces on a black background, inspired by Takashi Murakami's whimsical style for Dom Perignon.

As more and more brands seem to homogenize and shy away from bright color, Murakami and Dom Pérignon instead go another direction, embracing exploration, artistry, and the road less traveled. Explosions of happy Murakami flowers, bursting out of the traditional crest, signal a modern take from a classic brand. Dom Pérignon chooses to stay on this side of the millennium – a historic name paired with a contemporary sense of style.

A man sits at a table drawing colorful flowers on paper with a pencil, surrounded by bottles and boxes with floral designs, against a background of cartoon-like flower art.

Takashi Murakami working on the design

A man with gray hair and a beard signs a black box at a table, with a decorated bottle and floral-patterned backdrop behind him.

Takashi Murakami working on the design

A man with a beard, glasses, and a black robe stands in front of a gold background decorated with colorful, smiling cartoon flowers.

Takashi Murakami

To learn more about the Takashi Murakami x Dom Pérignon limited-edition collaboration, visit domperignon.com.

Photography courtesy of Dom Pérignon.


How Artists Are Keeping 'The Lost Art' of Neon Signs Alive


Next to technicolor neon signs featuring Road Runner, an inspirational phrase that says “everything will be fucking amazing,” and a weed leaf, Geovany Alvarado points to a neon sign he’s particularly proud of: “The Lost and Found Art,” it says.

“I had a customer who called me, it was an old guy. He wanted to meet with someone who actually fabricates the neon and he couldn’t find anyone who physically does it,” Alvarado said. “He told me ‘You’re still doing the lost art.’ It came to my head that neon has been dying, there’s less and less people who have been learning. So I made this piece.” 

For 37 years, Alvarado has been practicing “the lost and found art” of neon sign bending, weathering the general ups and downs of business as well as, most threateningly, the rise of cheap LED signs that mimic neon and have become popular over the last few years. 

“When neon crashed and LED and the big letters like McDonald’s, all these big signs—they took neon before. Now it’s LED,” he said. In the last few years, though, he said there has been a resurgent interest in neon from artists and people who are rejecting the cheap feel of LED. “It came back more like, artistic, for art. So I’ve been doing 100 percent neon since then.” 

At his shop, Quality Neon Signs in Mid-City Los Angeles, there are signs in all sorts of states of completion and functionality strewn about Alvarado’s shop: old, mass-produced beer advertisements whose transformers have blown and are waiting for him to repair them, signs in the shapes of soccer and baseball jerseys, signs with inspirational phrases (“Everything is going to be fucking amazing,” “NEED MONEY FOR FAKE ART”), signs for restaurants, demonstration tubes that show the different colors he offers, weed shop signs, projects he made when he was bored. There are projects that are particularly meaningful to him: a silhouette he made of his wife holding their infant daughter, and a sign of the Los Angeles skyline with a wildfire burning in the background, “just to represent Los Angeles,” he said. There are old little bits of tube that have broken off of other pieces. “We save everything,” Alvarado said, “in case we want to fix it or need it for a repair.” His workshop, a few minutes away, features a “Home Sweet Home” sign,” a sign he made years ago for Twitter/Chanel collaboration featuring the old Twitter bird logo, and a sign for the defunct Channing Tatum buddy cop show Comrade Detective

The overwhelming majority of signs Alvarado sells are traditional neon glass. The real thing. But he does offer newer LED faux-neon signs to clients who want it, though he doesn’t make those in-house. Alvarado says he sells LED to keep up with the times and because they can be more practical for one-off events because they are less likely to break in transit, but it’s clear that he and the overwhelming majority of neon sign makers think the LED stuff is simply not the same. Most LED signs look cheaper and do not emit the same warmth of light, but are more energy efficient.

I asked two neon sign creators about the difference while I was shopping for signs. They said they think the environmental debate isn’t quite as straightforward as it seems because a lot of the LED signs they make seem to be for one-off events, meaning many LED signs are manufactured essentially for a single use and then turned into e-waste. Many real neon signs are bought as either artwork or are bought by businesses who are interested in the real aesthetic. And because they are generally more expensive and are handmade, they are used for years and can be repaired indefinitely.

I asked Alvarado to show me the process and make a neon sign for 404 Media, which I’ve wanted for years. It’s a visceral, loud, scientific process, with gas-powered burners that sound like jet engines heating the glass tubes to roughly 1,000 degrees so they can be bent into the desired shapes. When he first started bending neon, Alvarado says, he used an overhead projector and a transparency to project a schematic onto the wall. These days, he mocks up designs in a computer-aided design program and prints them out on a huge printer that uses a Sharpie to draw the schematic. He then painstakingly marks out his planned glass bends on the paper, lining up the tubes with the mockup as he works.

“You burn yourself a lot, your hands get burnt. You’re dealing with fire all the time,” Alvarado said. He burned himself several times while working on my piece. “For me it’s normal. Even if you’re a pro, you still burn yourself.” Every now and then, even for someone who has been doing this for decades, the glass tubes shatter: “You just gotta get another stick and do it again,” he said. 

After bending the glass and connecting the electrodes to one end of the piece, he connects the tubes to a high-powered vacuum that sucks the air out of them. The color of the light in Alvarado’s work is determined by a powdered coating within the tubes or a different colored coating of the tubes themselves; the type of gas and electrical current also changes the type and intensity of the colors. He uses neon for bright oranges and reds, and argon for cooler hues. 

Alvarado, of course, isn’t the only one still practicing the “lost art” of neon bending, but he’s one of just a few commercial businesses in Los Angeles still manufacturing and repairing neon signs for largely commercial customers. Another, called Signmakers, has made several large neon signs that have become iconic for people who live in Los Angeles. The artist Lili Lakich has maintained a well-known studio in Los Angeles’ Arts District for years and has taught “The Neon Workshop” to new students since 1982, and the Museum of Neon Art is in Glendale, just a few miles away. 

A few days after he made my neon sign, I was wandering around Los Angeles and came across an art gallery displaying Tory DiPietro’s neon work, which is largely fine art and pieces where neon is incorporated into other artworks: a neon “FRAGILE” superimposed on a globe, for example. Both DiPietro and Alvarado told me that there are still a handful of people practicing the lost art, and that in recent years there’s been a bit of a resurgent interest in neon, though it’s not that easy to learn.

On the day I picked up my sign, there were two bright green “Meme House” signs for a memecoin investor house in Los Angeles that Alvarado said he had bent and made immediately after working on the 404 Media sign. “I was there working til about 11 p.m.” he said.


Hackers Say They Have Personal Data of Thousands of NSA and Other Government Officials


A hacking group that recently doxed hundreds of government officials, including from the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE), has now built dossiers on tens of thousands of U.S. government officials, including NSA employees, a member of the group told 404 Media. The member said the group did this by digging through its caches of stolen Salesforce customer data. The person provided 404 Media with samples of this information, which 404 Media was able to corroborate.

As well as NSA officials, the person sent 404 Media personal data on officials from the Defense Intelligence Agency (DIA), the Federal Trade Commission (FTC), Federal Aviation Administration (FAA), Centers for Disease Control and Prevention (CDC), the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), members of the Air Force, and several other agencies.

The news comes after the Telegram channel belonging to the group, called Scattered LAPSUS$ Hunters, went down following the mass doxing of DHS officials and the apparent doxing of a specific NSA official. It also provides more clarity on what sort of data may have been stolen from Salesforce’s customers in a series of breaches earlier this year, and which Scattered LAPSUS$ Hunters has attempted to extort Salesforce over.

💡
Do you know anything else about this breach? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“That’s how we’re pulling thousands of gov [government] employee records,” the member told 404 Media. “There were 2000+ more records,” they said, referring to the personal data of NSA officials. In total, they said the group has private data on more than 22,000 government officials. 

Scattered LAPSUS$ Hunters’ name is an amalgamation of other infamous hacking groups—Scattered Spider, LAPSUS$, and ShinyHunters. They all come from the overarching online phenomenon known as the Com. On Discord servers and Telegram channels, thousands of scammers, hackers, fraudsters, gamers, or just people hanging out congregate, hack targets big and small, and beef with one another. The Com has given birth to a number of loose-knit but prolific hacking groups, including those behind massive breaches like MGM Resorts, and normalized extreme physical violence between cybercriminals and their victims.

On Thursday, 404 Media reported Scattered LAPSUS$ Hunters had posted the names and personal information of hundreds of government officials from DHS, ICE, the FBI, and Department of Justice. 404 Media verified portions of that data and found the dox sometimes included peoples’ residential addresses. The group posted the dox along with messages such as “I want my MONEY MEXICO,” a reference to DHS’s unsubstantiated claim that Mexican cartels are offering thousands of dollars for dox on agents. 

Hackers Dox Hundreds of DHS, ICE, FBI, and DOJ Officials
Scattered LAPSUS$ Hunters—one of the latest amalgamations of typically young, reckless, and English-speaking hackers—posted the apparent phone numbers and addresses of hundreds of government officials, including nearly 700 from DHS.

After publication of that article, a member of Scattered LAPSUS$ Hunters reached out to 404 Media. To prove their affiliation with the group, they sent a message signed with the ShinyHunters PGP key with the text “Verification for Joseph Cox” and the date. PGP keys can be used to encrypt or sign messages to prove they’re coming from a specific person, or at least someone who holds that key, which are typically kept private.

They sent 404 Media personal data related to DIA, FTC, FAA, CDC, ATF and Air Force members. They also sent personal information on officials from the Food and Drug Administration (FDA), Health and Human Services (HHS), and the State Department. 404 Media verified parts of the data by comparing them to previously breached data collected by cybersecurity company District 4 Labs. It showed that many parts of the private information did relate to government officials with the same name, agency, and phone number. 

Apart from the earlier DHS and DOJ data, the hackers don’t appear to have posted this more wide-ranging data publicly. Most of those agencies did not immediately respond to a request for comment. The FTC and Air Force declined to comment. DHS has not replied to multiple requests for comment sent since Thursday. Neither has Salesforce.

The member said the personal data of government officials “originates from Salesforce breaches.” This summer Scattered LAPSUS$ Hunters stole a wealth of data from companies that were using Salesforce tech, with the group claiming it obtained more than a billion records. Customers included Disney/Hulu, FedEx, Toyota, UPS, and many more. The hackers did this by social engineering victims and tricking them into connecting to a fraudulent version of a Salesforce app. The hackers tried to extort Salesforce, threatening to release the data on a public website, and Salesforce told clients it won’t pay the ransom, Bloomberg reported.

On Friday the member said the group was done with extorting Salesforce. But they continued to build dossiers on government officials. Before the dump of DHS, ICE, and FBI dox, the group posted the alleged dox of an NSA official to their Telegram group. 

Over the weekend that channel went down and the member claimed the group’s server was taken “offline, presumably seized.”

The doxing of the officials “must’ve really triggered it, I think it’s because of the NSA dox,” the member told 404 Media.

Matthew Gault contributed reporting.
