On Saturday, millions of people across the U.S. attended “No Kings” protests—a slogan born in response to President Donald Trump’s self-aggrandizing social media posts, in which he’s called himself a king (complete with AI-generated images of himself in a crown), and his continual stretching of executive power. While Americans were out in the street, the president was posting.
In an AI-generated video originally posted on X by a genAI shitposter, Trump, wearing a crown, takes off in a fighter jet to the song “Danger Zone” like he’s in Top Gun. Flying over protestors in American cities, Pilot King Trump bombs people with gallons of chunky brown liquid. It’s poop, ok? It’s shit. It’s diarrhea, and in reposting it, it’s clear enough to me that Trump is fantasizing about doing a carpet-bomb dookie on the people he put his hand on a bible and swore to serve nine months ago. The first protestor seen in the video is a real person, Harry Sisson, a liberal social media influencer.
The video Trump reposted to Truth Social
But this was not clear, it seems, to many other journalists. Most national news outlets seem scared to call it how they see it, and how everyone sees it: as Trump dropping turd bombs on America, instead opting for euphemisms. Some of the best have included:
The Hill called it “brown liquid” and “what looked like feces”
The Guardian deemed it “brown sludge” and “bursts of brown matter”
NBC News got close with “what appeared to be feces”
A CNN contributor’s “analysis” said Trump was “appearing to dump raw sewage”
Axios’ helpful context: “suspect brown substances falling from the sky”
ABC News opted to cut the video before the AI poop even started falling
The New York Post, never one to waste a prime alliteration opportunity, didn’t disappoint: “Trump’s fighter jet was shown dropping masses of manure.”
I can understand some of these venerated news establishments might be skittish about using a word like “poop” in their headlines, and I can also concede that I haven’t had an editor tell me I can’t use a bad word in a headline in a long, long time. I can imagine the logic: we can’t “prove” it’s meant to be shit, so we can’t call it shit. But there’s nothing in these outlets’ style guides that has kept them from saying “poop” in the headline in the past: “Women Poop,” the New York Times once proclaimed. Axios writes extensively and frequently about dog poop. CNN’s analysis extends to poop often.
Along with the above concessions, I can also promise I don’t feel that passionately about getting poop on anyone else’s homepages. But we are in an era where the highest office in the country is disseminating imagery that isn’t just fake and stupid, but actively hostile to the people living in this country. When I first saw someone talking about the Trump Poop Bomber video—on Reels, of all places—I thought it must be someone doing satire about what they imagined Trump would post about the protests. I had to search for it to find out if he really did, and what I found was the above: trusted sources of truth and information too scared to call fake poop fake poop. It’s not about poop, it’s about being able to accurately describe what we see, an essential skill when everything online is increasingly built to enrage, trick, or mislead us. AI continues to be the aesthetic of fascism: fast, easy, ugly. When we lose the ability to say what it is, we’re losing a lot more than the chance to pun on poop.
Add to this the fact that no one in Trump’s circle will say what we can all plainly see, either: that the president hates the people. “The president uses social media to make the point. You can argue he’s probably the most effective person who’s ever used social media for that,” Speaker of the House Mike Johnson said at a news conference this morning. “He is using satire to make a point. He is not calling for the murder of his political opponents.” Johnson did not say what that point was, however.
So, I originally planned for this to be on my premium newsletter, but decided it was better to publish on my free one so that you could all enjoy it. If you liked it, please consider subscribing to support my work. Here’s $10 off the first year of an annual subscription.
I’ve also recorded an episode about this on my podcast Better Offline (RSS feed, Apple, Spotify, iHeartRadio). It’s a little different, but both cover the same information; just subscribe and it’ll pop up.
Over the last two years I have written again and again about the ruinous costs of running generative AI services, and today I’m coming to you with real proof.
Based on discussions with sources with direct knowledge of their AWS billing, I am able to disclose the amounts that AI firms are spending, specifically Anthropic and AI coding company Cursor, its largest customer.
I can exclusively reveal today Anthropic’s spending on Amazon Web Services for the entirety of 2024, and for every month in 2025 up until September, and that Anthropic’s spend on compute far exceeds what was previously reported.
Furthermore, I can confirm that through September, Anthropic has spent more than 100% of its estimated revenue (based on reporting in the last year) on Amazon Web Services, spending $2.66 billion on compute on an estimated $2.55 billion in revenue.
Additionally, Cursor’s Amazon Web Services bills more than doubled from $6.2 million in May 2025 to $12.6 million in June 2025, exacerbating a cash crunch that began when Anthropic introduced Priority Service Tiers, an aggressive rent-seeking measure that kicked off what I call the Subprime AI Crisis, where model providers begin jacking up the prices on their previously subsidized rates.
Although Cursor obtains the majority of its compute from Anthropic — with AWS contributing a relatively small amount, and likely also taking care of other parts of its business — the data I’ve seen reveals an overall direction of travel, where the costs of compute only keep on going up.
Let’s get to it.
Some Initial Important Details
I do not have all the answers! I am going to do my best to go through the information I’ve obtained and give you a thorough review and analysis. This information provides a revealing — though incomplete — insight into the costs of running Anthropic and Cursor, but does not include other costs, like salaries and compute obtained from other providers. I cannot tell you (and do not have insight into) Anthropic’s actual private moves. Any conclusions or speculation I make in this article will be based on my interpretations of the information I’ve received, as well as other publicly-available information.
I have used estimates of Anthropic’s revenue based on reporting across the last ten months. Where I make estimates, I explain how I arrived at them, and I keep them brief.
These costs are inclusive of every product bought on Amazon Web Services, including EC2, storage and database services (as well as literally everything else they pay for).
Anthropic works with both Amazon Web Services and Google Cloud for compute. I do not have any information about its Google Cloud spend.
The reason I bring this up is that Anthropic’s revenue is already being eaten up by its AWS spend. It’s likely billions more in the hole from Google Cloud and other operational expenses.
I have confirmed with sources that every single number I give around Anthropic and Cursor’s AWS spend is the final cash paid to Amazon after any discounts or credits.
While I cannot disclose the identity of my source, I am 100% confident in these numbers, and have verified their veracity with other sources.
Anthropic’s Compute Costs Are Likely Much Higher Than Reported — $1.35 Billion in 2024 on AWS Alone
In February of this year, The Information reported that Anthropic burned $5.6 billion in 2024, and made somewhere between $400 million and $600 million in revenue:
It’s not publicly known how much revenue Anthropic generated in 2024, although its monthly revenue rose to about $80 million by the end of the year, compared to around $8 million at the start. That suggests full-year revenue in the $400 million to $600 million range.
…Anthropic told investors it expects to burn $3 billion this year, substantially less than last year, when it burned $5.6 billion. Last year’s cash burn was nearly $3 billion more than Anthropic had previously projected. That’s likely due to the fact that more than half of the cash burn came from a one-off payment to access the data centers that power its technology, according to one of the people who viewed the pitch.
While I don’t know about prepayment for services, I can confirm from a source with direct knowledge of billing that Anthropic spent $1.35 billion on Amazon Web Services in 2024, and has already spent $2.66 billion on Amazon Web Services in 2025 through the end of September.
Assuming that Anthropic made $600 million in revenue, its reported $5.6 billion cash burn implies it spent roughly $6.2 billion in total in 2024, leaving $4.85 billion in costs beyond its AWS bill unaccounted for.
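For readers who want to check that arithmetic, here’s a minimal sketch of the reconciliation, using The Information’s reported burn and revenue figures alongside the AWS number above. The assumption that total spend roughly equals cash burn plus revenue is mine, a simplification that ignores financing and timing effects.

```python
# Back-of-the-envelope reconciliation of Anthropic's 2024 numbers, using The
# Information's reported figures and the AWS billing data discussed above.
revenue_2024 = 0.6      # $bn, the high end of The Information's estimate
cash_burn_2024 = 5.6    # $bn, as reported by The Information
aws_spend_2024 = 1.35   # $bn, per the billing data

# Simplifying assumption: total spend roughly equals cash burn plus revenue.
total_spend = cash_burn_2024 + revenue_2024
unaccounted = total_spend - aws_spend_2024

print(f"Implied total 2024 spend: ~${total_spend:.2f}bn")   # ~$6.20bn
print(f"Costs beyond AWS:         ~${unaccounted:.2f}bn")   # ~$4.85bn
```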
The Information’s piece also brings up another point:
The costs to develop AI models accounted for a major portion of Anthropic’s expenses last year. The company spent $1.5 billion on servers for training AI models. OpenAI was on track to spend as much as $3 billion on training costs last year, though that figure includes additional expenses like paying for data.
Before I go any further, I want to be clear that The Information’s reporting is sound, and I trust that their source (I have no idea who they are or what information was provided) was operating in good faith with good data.
However, Anthropic is telling people it spent $1.5 billion on just training when it has an Amazon Web Services bill of $1.35 billion, which heavily suggests that its actual compute costs are significantly higher than we thought, because, to quote SemiAnalysis, “a large share of Anthropic’s spending is going to Google Cloud.”
I am guessing, because I do not know, but with $4.85 billion of other expenses to account for, it’s reasonable to believe Anthropic spent an amount similar to its AWS spend on Google Cloud. I do not have any information to confirm this, but given the discrepancies mentioned above, this is an explanation that makes sense.
I also will add that there is some sort of undisclosed cut that Amazon gets of Anthropic’s revenue, though it’s unclear how much. According to The Information, “Anthropic previously told some investors it paid a substantially higher percentage to Amazon [than OpenAI’s 20% revenue share with Microsoft] when companies purchase Anthropic models through Amazon.”
I cannot confirm whether a similar revenue share agreement exists between Anthropic and Google.
This also makes me wonder exactly where Anthropic’s money is going.
In 2024, it would raise several more rounds — one in January for $750 million, another in March for $884.1 million, another in May for $452.3 million, and another $4 billion from Amazon in November 2024, which also saw it name AWS as Anthropic’s “primary cloud and training partner,” bringing its 2024 funding total to $6 billion.
While I do not have Anthropic’s 2023 numbers, its spend on AWS in 2024 — around $1.35 billion — leaves (as I’ve mentioned) $4.85 billion in costs that are unaccounted for. The Information reports that costs for Anthropic’s 521 research and development staff reached $160 million in 2024, leaving 394 other employees unaccounted for (for 915 employees total), and adds that Anthropic expects its headcount to increase to 1,900 people by the end of 2025.
The Information also adds that Anthropic “expects to stop burning cash in 2027.”
This leaves two unanswered questions:
Where is the rest of Anthropic’s money going?
How will it “stop burning cash” when its operational costs explode as its revenue increases?
An optimist might argue that Anthropic is just growing its pile of cash so it’s got a warchest to burn through in the future, but I have my doubts. In a memo revealed by WIRED, Anthropic CEO Dario Amodei stated that “if [Anthropic wanted] to stay on the frontier, [it would] gain a very large benefit from having access to this capital,” with “this capital” referring to money from the Middle East.
Anthropic and Amodei’s sudden willingness to take large swaths of capital from the Gulf States suggests that it’s at least a little desperate for capital, especially given Anthropic has, according to Bloomberg, “recently held early funding talks with Abu Dhabi-based investment firm MGX” a month after raising $13 billion.
In my opinion — and this is just my gut instinct — I believe that it is either significantly more expensive to run Anthropic than we know, or Anthropic’s leaked (and stated) revenue numbers are worse than we believe. I do not know one way or another, and will only report what I know.
How Much Did Anthropic and Cursor Spend On Amazon Web Services In 2025?
So, I’m going to do this a little differently than you’d expect, in that I’m going to lay out how much these companies spent, and draw throughlines from that spend to its reported revenue numbers and product announcements or events that may have caused its compute costs to increase.
I’ve only got Cursor’s numbers from January through September 2025, but I have Anthropic’s AWS spend for both the entirety of 2024 and through September 2025.
What Does “Annualized” Mean?
So, this term is one of the most abused terms in the world of software, but in this case, I am sticking to the idea that it means “month times 12.” So, if a company made $10m in January, you would say that its annualized revenue is $120m. Obviously, there are a lot of (when you think about it, really obvious) problems with this kind of reporting — and thus, you only ever see it when it comes to pre-IPO firms — but that’s beside the point.
I give you this explanation because, when contrasting Anthropic’s AWS spend with its revenues, I’ve had to work back from whatever annualized revenues were reported for that month.
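If it helps, here’s the trivial conversion I’m doing throughout this piece, written out. The function names are mine; the only logic is divide-by-twelve and multiply-by-twelve.

```python
def monthly_from_annualized(annualized: float) -> float:
    """Convert a reported 'annualized' figure back into one month's revenue."""
    return annualized / 12

def annualized_from_monthly(monthly: float) -> float:
    """'Annualized' here simply means one month's revenue multiplied by 12."""
    return monthly * 12

# A company with a $10m January has "$120m annualized revenue":
print(f"${annualized_from_monthly(10e6) / 1e6:.0f}m annualized")      # $120m
# Working backwards from a reported $1.4bn annualized figure:
print(f"${monthly_from_annualized(1.4e9) / 1e6:.1f}m for the month")  # ~$116.7m
```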
Anthropic’s Amazon Web Services Spend In 2024 - $1.359 Billion - Estimated Revenue $400 Million to $600 Million
Anthropic’s 2024 revenues are a little bit of a mystery, but, as mentioned above, The Information says it might be between $400 million and $600 million.
Here’s its monthly AWS spend.
January 2024 - $52.9 million
February 2024 - $60.9 million
March 2024 - $74.3 million
April 2024 - $101.1 million
May 2024 - $100.1 million
June 2024 - $101.8 million
July 2024 - $118.9 million
August 2024 - $128.8 million
September 2024 - $127.8 million
October 2024 - $169.6 million
November 2024 - $146.5 million
December 2024 - $176.1 million
Analysis: Anthropic Spent At Least 200% of Its 2024 Revenue On Amazon Web Services In 2024
I’m gonna be nice here and say that Anthropic made $600 million in 2024 — the higher end of The Information’s reporting — meaning that it spent around 226% of its revenue on Amazon Web Services ($1.359 billion in AWS spend against $600 million in revenue).
[Editor's note: this copy originally had incorrect maths on the %. Fixed now.]
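If you want to check my maths, here’s a short sketch that sums the monthly figures above and computes the share of revenue, assuming the generous $600 million revenue figure.

```python
# Anthropic's monthly AWS spend in 2024 ($m), as listed above.
aws_2024 = [52.9, 60.9, 74.3, 101.1, 100.1, 101.8,
            118.9, 128.8, 127.8, 169.6, 146.5, 176.1]

total_spend = sum(aws_2024)        # ~$1,358.8m, i.e. roughly $1.359bn
revenue_high_end = 600.0           # $m, the generous end of The Information's range

print(f"2024 AWS spend: ${total_spend:,.1f}m")
print(f"Share of revenue: {total_spend / revenue_high_end:.0%}")   # ~226%
```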
Anthropic’s Amazon Web Services Spend In 2025 Through September 2025 - $2.66 Billion - Estimated Revenue Through September $2.55 Billion - 104% Of Revenue Spent on AWS
Thanks to my own analysis and reporting from outlets like The Information and Reuters, we have a pretty good idea of Anthropic’s revenues for much of the year. That said, July, August, and September get a little weirder, because we’re relying on “almosts” and “approachings,” as I’ll explain as we go.
I’m also gonna do an analysis on a month-by-month basis, because it’s necessary to evaluate these numbers in context.
January 2025 - $188.5 million In AWS Spend, $72.91 Million or $83 Million In Revenue - 227% Of Revenue Spent on AWS (At The Higher Revenue Figure)
In this month, Anthropic’s reported revenue was somewhere from $875 million to $1 billion annualized, meaning either $72.91 million or $83 million for the month of January.
February 2025 - $181.2 million in AWS Spend, $116 Million In Revenue - 156% Of Revenue Spent On AWS
In February, as reported by The Information, Anthropic hit $1.4 billion annualized revenue, or around $116 million each month.
March 2025 - $240.3 million in AWS Spend - $166 Million In Revenue - 144% Of Revenue Spent On AWS - Launch of Claude 3.7 Sonnet & Claude Code Research Preview (February 24)
In March, as reported by Reuters, Anthropic hit $2 billion in annualized revenue, or $166 million in revenue.
And man, what a burden! Costs increased by $59.1 million, primarily across compute categories, but with a large ($2 million since January) increase in monthly costs for S3 storage.
April 2025 - $221.6 million in AWS Spend - $204 Million In Revenue - 108% Of Revenue Spent On AWS
I estimate, based on a 22.4% compound growth rate, that Anthropic hit around $2.44 billion in annualized revenue in April, or $204 million in revenue.
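To show where that growth rate likely comes from, here’s a sketch that interpolates April between the reported March ($2 billion annualized) and May ($3 billion annualized) figures. The assumption that the 22.4% rate was derived this way is mine; the sketch lands within rounding distance of the numbers above.

```python
# Interpolating April 2025 between the reported March ($2bn annualized) and
# May ($3bn annualized) figures. Assumption: growth was compounding monthly.
march_annualized = 2.0   # $bn
may_annualized = 3.0     # $bn

monthly_growth = (may_annualized / march_annualized) ** 0.5 - 1
april_annualized = march_annualized * (1 + monthly_growth)

print(f"Implied monthly growth: {monthly_growth:.1%}")                   # ~22.5%
print(f"April annualized:       ~${april_annualized:.2f}bn")             # ~$2.45bn
print(f"April monthly revenue:  ~${april_annualized / 12 * 1000:.0f}m")  # ~$204m
```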
May 2025 - $286.7 million in AWS Spend - $250 Million In Revenue - 114% Of Revenue Spent On AWS - Sonnet 4, Opus 4, General Availability Of Claude Code (May 22), Service Tiers (May 30)
In May, as reported by CNBC, Anthropic hit $3 billion in annualized revenue, or $250 million in monthly average revenue.
This was a big month for Anthropic, with two huge launches on May 22 2025 — its new, “more powerful” models Claude Sonnet 4 and Claude Opus 4, as well as the general availability of its AI coding environment Claude Code.
Eight days later, on May 30 2025, a page on Anthropic's API documentation appeared for the first time: "Service Tiers":
Different tiers of service allow you to balance availability, performance, and predictable costs based on your application’s needs.
We offer three service tiers:
- Priority Tier: Best for workflows deployed in production where time, availability, and predictable pricing are important
- Standard: Best for bursty traffic, or for when you’re trying a new idea
- Batch: Best for asynchronous workflows which can wait or benefit from being outside your normal capacity
As I’ll get into in my June analysis, Anthropic’s Service Tiers exist specifically for it to “guarantee” your company won’t face rate limits or any other service interruptions, requiring a minimum spend, minimum token throughput, and for you to pay higher rates when writing to the cache — which is, as I’ll explain, a big part of running an AI coding product like Cursor.
Now, the jump in costs — $65.1 million or so between April and May — likely comes as a result of the final training for Sonnet and Opus 4, as well as, I imagine, some sort of testing to make sure Claude Code was ready to go.
June 2025 - $321.4 million in AWS Spend - $333 Million In Revenue - 96.5% Of Revenue Spent On AWS - Anthropic Cashes In On Service Tier Tolls That Add An Increased Charge For Prompt Caching, Directly Targeting Companies Like Cursor
In June, as reported by The Information, Anthropic hit $4 billion in annualized revenue, or $333 million.
Anthropic’s revenue spiked by $83 million this month, and its costs rose by $34.7 million.
Anthropic Started The Subprime AI Crisis In June 2025, Increasing Costs On Its Largest Customer, Cursor, Whose AWS Spend Doubled In A Month
I have, for a while, talked about the Subprime AI Crisis, where big tech and companies like Anthropic, after offering subsidized pricing to entice customers, raise the rates on those customers to start covering more of their costs, leading to a cascade where businesses are forced to raise their own prices to handle their new, exploding costs.
And I was god damn right. Or, at least, it sure looks like I am. I’m hedging, forgive me. I cannot say for certain, but I see a pattern.
It’s likely the June 2025 spike in revenue came from the introduction of service tiers, which specifically target prompt caching, increasing the number of tokens you’re charged for as an enterprise customer based on the term of your contract and your forecast usage.
You see, Anthropic specifically notes on its "service tiers" page that requests at the priority tier are "prioritized over all other requests to Anthropic," a rent-seeking measure that effectively means a company must either:
- Commit to at least a month (though more likely 3-12 months) of specific levels of input and output tokens per minute, based on what it believes it will use in the future, regardless of whether it actually does.
- Accept that access to Anthropic’s models will be slower at some point, in some way that Anthropic can’t guarantee.

Furthermore, the way that Anthropic charges almost feels intentionally built to fuck over any coding startup that would use its service. Per the service tier page, Anthropic charges 1.25 tokens for every token you write to the cache with a 5-minute TTL — or 2 tokens if you have a 1-hour TTL — and a longer cache is effectively essential for any background task where an agent will be working for more than 5 minutes, such as restructuring a particularly complex piece of code: you know, the exact thing that Cursor is well-known and marketed to do.
Furthermore, the longer something is in the cache, the better the autocomplete suggestions for your code will be. It’s also important to remember that you’re, at some point, caching the prompts themselves — the instructions of what you want Cursor to do — meaning that the more complex the operation, the more expensive it’ll now be for Cursor to provide the service with reasonable uptime.
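To make the mechanics concrete, here’s a minimal sketch of how those cache-write multipliers inflate billable tokens. The 1.25x and 2x multipliers are the ones described above; the prompt size and the helper function are made-up, illustrative assumptions, not Anthropic’s actual billing code.

```python
# Rough sketch of cache-write billing under the multipliers described above.
# The multipliers (1.25x for a 5-minute TTL, 2x for a 1-hour TTL) are from the
# service tiers discussion; the prompt size is a made-up illustrative number.
CACHE_WRITE_MULTIPLIER = {"5min": 1.25, "1hr": 2.0}

def billable_cache_write_tokens(prompt_tokens: int, ttl: str) -> float:
    """Tokens billed when a prompt of `prompt_tokens` is written to the cache."""
    return prompt_tokens * CACHE_WRITE_MULTIPLIER[ttl]

prompt_tokens = 200_000  # hypothetical: a big chunk of codebase plus instructions

short_ttl = billable_cache_write_tokens(prompt_tokens, "5min")  # 250,000 tokens
long_ttl = billable_cache_write_tokens(prompt_tokens, "1hr")    # 400,000 tokens

print(f"5-minute TTL: billed for {short_ttl:,.0f} tokens")
print(f"1-hour TTL:   billed for {long_ttl:,.0f} tokens, double the raw prompt")
```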
Cursor, as Anthropic’s largest client (the second largest being GitHub Copilot), represents a material part of its revenue, and its surging popularity meant it was sending more and more revenue Anthropic’s way. Anysphere, the company that develops Cursor, hit $500 million in annualized revenue ($41.6 million a month) by the end of May, which Anthropic chose to celebrate by increasing its costs.
As I’ll get to later in the piece, Cursor’s costs exploded from $6.19 million in May 2025 to $12.67 million in June 2025, and I believe this is a direct result of Anthropic’s sudden and aggressive cost increases.
I’ll get into this a bit later, but I find this whole situation disgusting.
July 2025 - $323.2 million in AWS Spend - $416 Million In Revenue - 77.7% Of Revenue Spent On AWS
In July, as reported by Bloomberg, Anthropic hit $5 billion in annualized revenue, or $416 million.
While July wasn’t a huge month for announcements, it was allegedly the month that Claude Code was generating “nearly $400 million in annualized revenue,” or $33.3 million a month (according to The Information, which says Anthropic was “approaching” $5 billion in annualized revenue - which likely means LESS than that - but I’m going to go with the full $5 billion annualized for the sake of fairness).
There’s roughly an $83 million bump in Anthropic’s revenue between June and July 2025, and I think Claude Code and its new rates are a big part of it. What’s fascinating is that cloud costs didn’t increase too much — by only $1.8 million, to be specific.
August 2025 - $383.7 million in AWS Spend - $416 Million In Revenue - 92% Of Revenue Spent On AWS
In August, according to Anthropic, its run-rate “reached over $5 billion,” or around $416 million a month. I am not giving it anything more than $5 billion, especially considering that in July Bloomberg’s reporting said “about $5 billion.”
Costs grew by $60.5 million this month, potentially due to the launch of Claude Opus 4.1, Anthropic’s more aggressively expensive model, though revenues do not appear to have grown much along the way.
Yet what’s very interesting is that Anthropic — starting August 28 — launched weekly rate limits on its Claude Pro and Max plans. I wonder why?
September 2025 - $518.9 million in AWS Spend - $583 Million In Revenue - 88.9% Of Revenue Spent On AWS
Oh fuck! Look at that massive cost explosion!
Anyway, according to Reuters, Anthropic’s run rate is “approaching $7 billion” in October, and for the sake of fairness, I am going to just say it has $7 billion annualized, though I believe this number to be lower. “Approaching” can mean a lot of different things — $6.1 billion, $6.5 billion — and because I already anticipate a lot of accusations of “FUD,” I’m going to err on the side of generosity.
If we assume a $6.5 billion annualized rate, that would make this month’s revenue $541.6 million, with AWS spend eating 95.8% of it.
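Because so much hangs on what “approaching” means, here’s a small sketch showing how the September percentage moves with the assumed annualized figure. The $6.1 billion and $6.5 billion cases are the same hypotheticals mentioned above.

```python
# Sensitivity of the September figure to what "approaching $7 billion" means.
sept_aws_spend = 518.9   # $m, from the billing data

for annualized_bn in (7.0, 6.5, 6.1):
    monthly_revenue = annualized_bn * 1000 / 12   # $m
    share = sept_aws_spend / monthly_revenue
    print(f"${annualized_bn}bn annualized -> ${monthly_revenue:.0f}m/month -> "
          f"{share:.1%} of revenue spent on AWS")
# $7.0bn -> ~$583m -> ~89% (the 88.9% figure above)
# $6.5bn -> ~$542m -> ~95.8%
# $6.1bn -> ~$508m -> ~102%, i.e. AWS alone would exceed revenue
```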
Anthropic’s Monthly AWS Costs Have Increased By 174% Since January - And With Its Potential Google Cloud Spend and Massive Staff, Anthropic Is Burning Billions In 2025
While these costs only speak to one part of its cloud stack — Anthropic has an unknowable amount of cloud spend on Google Cloud, and the data I have only covers AWS — it is simply remarkable how much this company spends on AWS, and how rapidly its costs seem to escalate as it grows.
Though things improved slightly over time — in that Anthropic is no longer burning over 200% of its revenue on AWS alone — these costs have still dramatically escalated, and done so in an aggressive and arbitrary manner.
Anthropic’s AWS Costs Increase Linearly With Revenue, Consuming The Majority Of Each Dollar Anthropic Makes - As A Reminder, It Also Spends Hundreds Of Millions Or Billions On Google Cloud Too
So, I wanted to visualize this part of the story, because I think it’s important to see the various different scenarios.
An Estimate of Anthropic’s Potential Cloud Compute Spend Through September
THE NUMBERS I AM USING ARE ESTIMATES CALCULATED BASED ON 25%, 50% and 100% OF THE AMOUNTS THAT ANTHROPIC HAS SPENT ON AMAZON WEB SERVICES THROUGH SEPTEMBER.
I apologize for all the noise, I just want it to be crystal clear what you see next.
As you can see, all it takes is for Anthropic to spend (I am estimating) around 25% of its Amazon Web Services bill on Google Cloud (for a total of around $3.33 billion in compute costs through the end of September) to savage any and all revenue ($2.55 billion) it’s making.
Assuming Anthropic spends half of its AWS spend on Google Cloud, this number climbs to $3.99 billion, and if you assume - and to be clear, this is an estimate - that it spends around the same on both Google Cloud and AWS, Anthropic has spent $5.3 billion on compute through the end of September.
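Here’s the same scenario maths written out, so you can see exactly what I’m estimating and what I’m not. The Google Cloud percentages are assumptions; only the AWS figure comes from the billing data.

```python
# Scenarios for Anthropic's total compute spend through September 2025.
# Only the AWS figure is known; the Google Cloud share is an assumption.
aws_through_sept = 2.66          # $bn, from the billing data
est_revenue_through_sept = 2.55  # $bn, estimated from public reporting

for gcp_share_of_aws in (0.25, 0.50, 1.00):
    total_compute = aws_through_sept * (1 + gcp_share_of_aws)
    print(f"GCP at {gcp_share_of_aws:.0%} of AWS: ~${total_compute:.2f}bn compute "
          f"vs ~${est_revenue_through_sept:.2f}bn revenue")
# 25%  -> ~$3.33bn
# 50%  -> ~$3.99bn
# 100% -> ~$5.32bn
```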
I can’t tell you which it is, just that we know for certain that Anthropic is spending money on Google Cloud, and because Google owns 14% of the company — rivalling estimates saying Amazon owns around 15-19% — it’s fair to assume that there’s a significant spend.
Anthropic’s Costs Are Out Of Control, Consistently And Aggressively Outpacing Revenue - And Amazon’s Revenue from Anthropic Of $2.66 Billion Is 2.5% Of Its 2025 Capex
I have sat with these numbers for a great deal of time, and I can’t find any evidence that Anthropic has any path to profitability outside of aggressively increasing prices on its customers to the point that its services become untenable for consumers and enterprise customers alike.
As you can see from these estimated and reported revenues, Anthropic’s AWS costs appear to increase in a near-linear fashion with its revenues, meaning that the current pricing — including rent-seeking measures like Priority Service Tiers — isn’t working to meet the burden of its costs.
We do not know its Google Cloud spend, but I’d be shocked if it was anything less than 50% of its AWS bill. If that’s the case, Anthropic is in real trouble - the cost of the services underlying its business increases the more money it makes.
It’s becoming increasingly apparent that Large Language Models are not a profitable business. While I cannot speak to Amazon Web Services’ actual costs, it’s making $2.66 billion from Anthropic, which is the second largest foundation model company in the world.
What’s the plan, exactly? Let Anthropic burn money for the foreseeable future until it dies, and then pick up the pieces? Wait until Wall Street gets mad at you and then pull the plug?
Who knows.
But let’s change gears and talk about Cursor — Anthropic’s largest client and, at this point, a victim of circumstance.
Cursor’s Amazon Web Services Spend In 2025 Through September 2025 - $69.99 Million
An Important Note About Cursor’s Compute Spend
Amazon sells Anthropic’s models through Amazon Bedrock, and I believe that AI startups are compelled to spend some of their AI model compute costs through Amazon Web Services. Cursor also sends money directly to Anthropic and OpenAI, meaning that these costs are only one piece of its overall compute costs. In any case, it’s very clear that Cursor buys some degree of its Anthropic model spend through Amazon.
I’ll also add that Tom Dotan of Newcomer reported a few months ago that an investor told him that “Cursor is spending 100% of its revenue on Anthropic.”
Unlike with Anthropic, we lack thorough month-by-month reporting of Cursor’s revenues. I will, however, mention them in the months where I have them.
For the sake of readability — and because we really don’t have much information on Cursor’s revenues beyond a few months — I’m going to stick to a bullet point list.
Another Note About Cursor’s AWS Spend - It Likely Funnels Some Model Spend Through AWS, But The Majority Goes Directly To Providers Like Anthropic
Based on its spend with AWS, I do not see a strong “minimum” spend that would suggest it has a similar deal with Amazon — likely because Amazon handles more of its infrastructure than just compute, but incentivizes it to spend on Anthropic’s models through AWS by offering discounts, something I’ve confirmed with a source.
June 2025 was, as I’ve discussed above, the month when Anthropic forced Cursor to adopt “Service Tiers”. I go into detail about the situation here, but the long and short of it is that Anthropic increased the number of tokens you burn by writing things to the cache (think of it like RAM in a computer), and AI coding startups are very cache-heavy, meaning that Cursor immediately took on what I believed would be massive new costs. As I discuss in what I just linked, this led Cursor to aggressively change its product, thereby vastly increasing its customers’ costs if they wanted to use the same service.
That same month, Cursor’s AWS costs — which I believe are the minority of its cloud compute costs — exploded by 104% (or by $6.48 million), and never returned to their previous levels.
It’s conceivable that this surge is due to the compute-heavy nature of the latest Claude 4 models released that month — or, perhaps, Cursor sending more of its users to other models that it runs on Bedrock.
July 2025 - $15.5 million
As you can see, Cursor’s costs continue to balloon in July, and I am guessing it’s because of the Service Tiers situation — which, I believe, indirectly resulted in Cursor pushing more users to models that it runs on Amazon’s infrastructure.
August 2025 - $9.67 million
So, I can only guess as to why there was a drop here. User churn? It could be the launch of GPT-5 on Cursor, which gave users a week of free access to OpenAI’s new models.
What’s also interesting is that this was the month when Cursor announced that its previously free “auto” model (where Cursor would select the best available premium model or its own model) would now bill at “competitive token rates,” by which I mean it went from charging nothing to $1.25 per million input and $6 per million output tokens. This change would take effect on September 15 2025.
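To put that pricing change in concrete terms, here’s a minimal sketch of what a single “auto” request might now cost under the rates above. The token counts are invented, illustrative numbers, not Cursor’s real traffic.

```python
# What Cursor's "auto" pricing change means per request, using the rates above:
# $1.25 per million input tokens, $6 per million output tokens.
INPUT_RATE = 1.25 / 1_000_000    # $ per input token
OUTPUT_RATE = 6.00 / 1_000_000   # $ per output token

input_tokens = 30_000   # hypothetical: code context plus instructions sent up
output_tokens = 2_000   # hypothetical: the generated code or diff

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"One request: ~${cost:.4f}")   # ~$0.05, versus $0 before the change
```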
On August 10 2025, Tom Dotan of Newcomer reported that Cursor was “well above” $500 million in annualized revenue based on commentary from two sources.
September 2025 - $12.91 million
Per the above, this is the month when Cursor started charging for its “auto” model.
What Anthropic May Have Done To Cursor Is Disgusting - And Is A Preview Of What’s To Come For AI Startups
When I wrote that Anthropic and OpenAI had begun the Subprime AI Crisis back in July, I assumed that the increase in costs was burdensome, but having seen the information from Cursor’s AWS bills, it seems that Anthropic’s actions directly caused Cursor’s costs to explode by over 100%.
While I can’t definitively say “this is exactly what did it,” the timelines match up exactly, the costs have never come down, Amazon offers provisioned throughput, and, more than likely, Cursor needs to keep a standard of uptime similar to that of Anthropic’s own direct API access.
If this is what happened, it’s deeply shameful.
Cursor, Anthropic’s largest customer, in the very same month it hit $500 million in annualized revenue, immediately had its AWS and Anthropic-related costs explode to the point that it had to dramatically reduce the value of its product just as it hit the apex of its revenue growth.
Anthropic Timed Its Rent-Seeking Service Tier Price Increases on Cursor With The Launch Of A Competitive Product - Which Is What’s Coming To Any AI Startup That Builds On Top Of Its Products
It’s very difficult to see Service Tiers as anything other than an aggressive rent-seeking maneuver.
Yet another undiscussed part of the story is that the launch of Claude 4 Opus and Sonnet — and the subsequent launch of Service Tiers — coincided with the launch of Claude Code, a product that directly competes with Cursor, without the burden of having to pay itself for the cost of models or, indeed, having to deal with its own “Service Tiers.”
Anthropic may have increased the prices on its largest client at the time it was launching a competitor, and I believe that this is what awaits any product built on top of OpenAI or Anthropic’s models.
The Subprime AI Crisis Is Real, And It Can Hurt You
I realize this has been a long, number-stuffed article, but the long-and-short of it is simple: Anthropic is burning all of its revenue on compute, and Anthropic will willingly increase the prices on its customers if it’ll help it burn less money, even though that doesn’t seem to be working.
What I believe happened to Cursor will likely happen to every AI-native company, because in a very real sense, Anthropic’s products are a wrapper for its own models, except it only has to pay the (unprofitable) costs of running them on Amazon Web Services and Google Cloud.
As a result, both OpenAI and Anthropic can (and may very well!) devour the market of any company that builds on top of their models.
OpenAI may have given Cursor free access to its GPT-5 models in August, but a month later on September 15 2025 it debuted massive upgrades to its competitive “Codex” platform.
The ultimate problem is that there really are no winners in this situation. If Anthropic kills Cursor through aggressive rent-seeking, that directly eats into its own revenues. If Anthropic lets Cursor succeed, that’s revenue, but it’s also clearly unprofitable revenue. Everybody loses, but nobody loses more than Cursor’s (and other AI companies’) customers.
Anthropic Is In Real Trouble - And The Current Cost Of Doing Business Is Unsustainable, Meaning Prices Must Increase
I’ve come away from this piece with a feeling of dread.
Anthropic’s costs are out of control, and as things get more desperate, it appears to be lashing out at its customers: both companies like Cursor and Claude Code subscribers, who face weekly rate limits on its more powerful models and are chided for using a product they pay for. Again, I cannot say for certain, but the spike in costs is clear, and it feels like more than a coincidence to me.
There is no period in the just-under-two-years of data I’ve been party to that suggests Anthropic has any means of — or any success at — cutting costs; the only thing this company seems capable of doing is increasing the amount of money it burns each month.
Based on what I have been party to, the more successful Anthropic becomes, the more its services cost. The cost of inference is clearly increasing for customers, but based on its escalating monthly costs, the cost of inference appears to be high for Anthropic too, though it’s impossible to tell how much of its compute is based on training versus running inference.
In any case, these costs seem to increase with the amount of money Anthropic makes, meaning that the current pricing of both subscriptions and API access seems unprofitable, and must increase dramatically — from my calculations, a 100% price increase might work, but good luck retaining every single customer and their customers too! — for this company to ever become sustainable.
I don’t think that people would pay those prices. If anything, I think what we’re seeing in these numbers is a company bleeding out from costs that escalate the more that its user base grows. This is just my opinion, of course.
I’m tired of watching these companies burn billions of dollars to destroy our environment and steal from everybody. I’m tired that so many people have tried to pretend there’s a justification for burning billions of dollars every year, clinging to empty tropes about how this is just like Uber or Amazon Web Services, when Anthropic has built something far more mediocre.
Mr. Amodei, I am sure you will read this piece, and I can make time to chat in person on my show Better Offline. Perhaps this Friday? I even have some studio time on the books.
Hello readers! This premium edition features a generous free intro because I like to try and get some of the info out there, but the real in-depth stuff is below the cut. Nevertheless, I deeply appreciate anyone subscribing.
On Monday I will have my biggest scoop ever, and it'll go out on the free newsletter because of its scale. This is possible because of people supporting me on the premium. Thanks so much for reading.
One of the only consistent critiques of my work is that I’m angry, irate, that I am taking myself too seriously, that I’m swearing too much, and that my arguments would be “better received” if I “calmed down.”
In fact, fuck it — I’m updating my priors. Let’s say it’s a nice, round $50 billion per gigawatt of data center capacity. $32.5 billion is what it cost to build Stargate Abilene, but that estimate was based on Crusoe’s 1.2GW of compute for OpenAI being part of a $15 billion joint venture, which meant a gigawatt of compute runs about $12.5 billion. Abilene’s 8 buildings are meant to hold 50,000 NVIDIA GB200 GPUs each (400,000 in total) and their associated networking infrastructure, so let’s say a gigawatt is around 333,333 Blackwell GPUs at $60,000 apiece — about $20 billion a gigawatt.
However, this mathematics assumed that every cost associated would be paid by the Joint Venture. Lancium, the owner of the land that is allegedly building the power infrastructure, has now raised over a billion dollars.
This maths also didn’t include the cost of the associated networking infrastructure around the GB200s. So, guess what? We’re doing $50 billion now.
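For anyone following the arithmetic, here’s the derivation of the two earlier per-gigawatt estimates, written out. These are reconstructions of the reasoning above, not costings from any of the companies involved.

```python
# Estimate 1: Crusoe's 1.2GW of compute for OpenAI as part of a $15bn joint venture.
jv_cost_bn = 15.0
jv_capacity_gw = 1.2
print(f"JV-based estimate:  ~${jv_cost_bn / jv_capacity_gw:.1f}bn per GW")  # ~$12.5bn

# Estimate 2: roughly 333,333 Blackwell GPUs per gigawatt at ~$60,000 apiece.
gpus_per_gw = 333_333
price_per_gpu_usd = 60_000
print(f"GPU-based estimate: ~${gpus_per_gw * price_per_gpu_usd / 1e9:.0f}bn per GW")  # ~$20bn

# Neither figure covers land, power build-out, or the networking around the GB200s,
# which is why the working number gets rounded up to $50bn per gigawatt.
```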
OpenAI has now promised 33GW of capacity across AMD, NVIDIA, Broadcom and the seven data centers built under Stargate, though one of those — in Lordstown, Ohio — is not actually a data center, with my source being “SoftBank,” speaking to WKBN in Lordstown, Ohio, which said it will “not be a full-blown data center,” and instead be “at the center of cutting-edge technology that will encompass storage containers that will hold the infrastructure for AI and data storage.”
This wasn’t hard to find, by the way! I googled “SoftBank Lordstown” and up it came, ready for me to read with my eyes.
Putting all of that aside, I think it’s time that everybody started taking this situation far more seriously, by which I mean acknowledging the sheer recklessness and naked market manipulation taking place.
But let’s make it really simple, and write out what’s meant to happen in the next year:
In the second half of 2026, OpenAI and Broadcom will tape out and successfully complete an AI inference chip, then manufacture enough of them to fill a 1GW data center.
That data center will be built in an as-yet-unknown location, and will have at least 1GW of power, but more realistically it will need 1.2GW to 1.3GW of power, because for every 1GW of IT load, you need extra power capacity in reserve for the hottest day of the year, when the cooling system works hardest and power transmission losses are highest (see the sketch after this list).
OpenAI does not appear to have a site for this data center, and thus has not broken ground on it.
This will take place in an as-yet-unnamed data center location which, to be completed by that time, would have needed to start construction and early procurement of power at least a year ago, if not more; the GPUs destined for it can only meet this timeline if that construction is already underway.
In my most conservative estimate, these data centers will cost over $100 billion, and to be clear, a lot of that money needs to already be in OpenAI’s hands to get the data centers built. Or, some other dupe has to a.) have the money, and b.) be willing to front it.
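On the power-overhead point a couple of items up, here’s the trivial version of that calculation. The 1.2x and 1.3x overhead factors are just the range given in the list, not engineering figures.

```python
# Rough sketch: a 1GW IT load needs more than 1GW of delivered power once
# cooling and transmission losses are included. Overhead factors are the
# 1.2-1.3x range from the list above, not engineering figures.
it_load_gw = 1.0

for overhead in (1.2, 1.3):
    print(f"Overhead {overhead}x: ~{it_load_gw * overhead:.1f}GW of power needed")
```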
All of this is a fucking joke. I’m sorry, I know some of you will read this, cowering from your screen like a B-movie vampire that just saw a crucifix, but it is a joke, and it is a fucking stupid joke, the only thing stupider being that any number of respectable media outlets are saying these things like they’ll actually happen.
There is not enough time to build these things. If there was enough time, there wouldn’t be enough money. If there was enough money, there wouldn’t be enough transformers, electrical-grade steel, or specialised talent to run the power to the data centers. Fuck! Piss! Shit! Swearing doesn’t change the fact that I’m right — none of what OpenAI, NVIDIA, Broadcom, and AMD are saying is possible, and it’s fair to ask why they’re saying it.
I mean, we know. Number must go up, deal must go through, and Jensen Huang wouldn’t go on CNBC and say “yeah man if I’m honest I’ve got no fucking clue how Sam Altman is going to pay me, other than with the $10 billion I’m handing him in a month. Anyway, NVIDIA’s accounts receivable keep increasing every quarter for a normal reason, don’t worry about it.”
But in all seriousness, we now have three publicly-traded tech firms that have all agreed to join Sam Altman’s No IT Loads Refused Cash Dump, all promising to do things on insane timelines that they — as executives of giant hardware manufacturers, or human beings with warm bodies and pulses and sciatica — all must know are impossible to meet.
What is the media meant to do? What are we, as regular people, meant to do? These stocks keep pumping based on completely nonsensical ideas, and we’re all meant to sit around pretending things are normal and good. They’re not! At some point somebody’s going to start paying people actual, real dollars at a scale that OpenAI has never truly had to reckon with.
In this piece, I’m going to spell out in no uncertain terms exactly what OpenAI has to do in the next year to fulfil its destiny — having a bunch of capacity that cost ungodly amounts of money to serve demand that never arrives.
And yes, it’ll cost one-third of America’s output in 2024. This is not a sensible proposition.
Even if you think that OpenAI’s growth is impressive — it went from 700 million to 800 million weekly active users in the last two months — that is not the kind of growth that says “build capacity assuming that literally every single human being on Earth uses this all the time.”
As an aside: Altman is already lying about his available capacity. According to an internal Slack note seen by Alex Heath of Sources, Altman claims that OpenAI started the year with “around” 230 megawatts of capacity and is “now on track to exit 2025 north of 2GW of operational capacity.” Unless I’m much mistaken, OpenAI doesn’t have any capacity of its own — and according to Mr. Altman, it’s somehow built or acquired 1.7GW of capacity from somewhere without disclosing it.
Anyway, what exactly is OpenAI doing? Why does it need all this capacity? Even if it hits its $13 billion revenue projection for this year (it’s only at $5.3 billion or so as of the end of August, and for OpenAI to hit its targets it’ll need to make $1.5bn+ a month very soon), does it really think it’s going to effectively 10x the entire company from here? What possible sign is there of that happening other than a conga-line of different executives willing to stake their reputations on blatant lies peddled by a man best known for needing, at any given moment, another billion dollars?
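A quick sanity check on the revenue math in that parenthetical, under the assumption that the $5.3 billion figure covers January through August and the $13 billion projection is for the calendar year:

```python
# What OpenAI's remaining 2025 months have to look like to hit its projection.
projection_2025 = 13.0    # $bn, reported full-year revenue projection
through_august = 5.3      # $bn, reported revenue through the end of August
months_remaining = 4      # September through December

needed_per_month = (projection_2025 - through_august) / months_remaining
print(f"Required average: ~${needed_per_month:.2f}bn per month")  # ~$1.93bn, i.e. "$1.5bn+"
```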
According to The Information, OpenAI spent $6.7 billion on research and development in the first six months of 2025, and according to Epoch AI, most of the $5 billion it spent on research and development in 2024 was spent on research, experimental, or derisking runs (basically running tests before doing the final training run) and models it would never release, with only $480 million going to training actual models that people will use.
What is it that any of you believe that OpenAI is going to do with these fictional data centers?
Why Does ChatGPT Need $10 Trillion Of Data Centers?
The problem with ChatGPT isn’t just that it hallucinates — it’s that you can’t really say exactly what it can do, because you can’t really trust that it can do anything. Sure, it’ll get a few things right a lot of the time, but what task is it able to do every time that you actually need?
And no, I’m sorry, they are not building AGI. Sam Altman just told Politico a few weeks ago that if we didn’t have “models that are extraordinarily capable and do things that we ourselves cannot do” by 2030 he would be “very surprised.”
Wow! What a stunning and confident statement. Let’s give this guy the ten trillion dollars he needs! And he’s gonna need it soon if he wants to build 250 gigawatts of capacity by 2033.
On top of all of this are OpenAI’s other costs. According to The Information, OpenAI spent $2 billion alone on Sales and Marketing in the first half of 2025, and likely spends billions of dollars on salaries, meaning that it’ll likely need at least another $10 billion on top. As this is a vague cost, I’m going with a rounded $400 billion number, though I believe it’s actually going to be more.
And to be clear, to complete these deals by the end of 2026, OpenAI needs large swaths of this money by February 2026.
OpenAI Needs Over $400 Billion In The Next 12 Months To Complete Any Of These Deals — And Sam Altman Doesn’t Have Enough Time To Build Any Of It
I know, I know, you’re going to say that OpenAI will simply “raise debt” and “work it out,” but OpenAI has less than a year to do that, because OpenAI has promised in its own announcements that all of these things would happen by the end of December 2026, and even if they’re going to happen in 2027, data centers require actual money to begin construction, and Broadcom, NVIDIA and AMD are going to actually require cash for those chips before they ship them.
Even if OpenAI finds multiple consortiums of paypigs to take on the tens of billions of dollars of data center funding, there are limits, and based on OpenAI’s aggressive (and insane) timelines, they will need to raise multiple different versions of the largest known data center deals of all time, multiple times a year, every single year.
The burden that OpenAI is putting on the financial system is remarkable, and actively dangerous. It would absorb, at this rate, the capital expenditures of multiple hyperscalers, requiring multiple $30 billion debt financing operations a year, and for it to hit its goal of 250 gigawatts by the end of 2033, it will likely have to have outpaced the capital expenditures of any other company in the world.
OpenAI is an out-of-control monstrosity that is going to harm every party that depends upon it completing its plans. For it to succeed, it will have to absorb over a trillion dollars a year — and for it to hit its target, it will likely have to eclipse the $1.7 trillion in global private equity deal volume in 2024, and become a significant part of global trade ($33 trillion in 2025).
There isn’t enough money to do this without diverting most of the money that exists to doing it, and even if that were to happen, there isn’t enough time to do any of the stuff that has been promised in anything approaching the timelines promised, because OpenAI is making this up as it goes along and somehow everybody is believing it.
At some point, OpenAI is going to have to actually do the things it has promised to do, and the global financial system is incapable of supporting them.
And to be clear, OpenAI cannot really do any of the things it’s promised.
Even if it could, Oracle needs 4.5GW of capacity. Stargate Abilene is meant to be completed by the end of 2026 (six months behind schedule), but (as I reported last week) only appears to have 200MW of the 1.5+GW of actual, real power it needs right now, and won’t have enough by the end of the year.
None of this bullshit is happening, and it’s time to be honest about what’s actually going on.
OpenAI is not building “the AI industry,” as this is capacity for one company that burns billions of dollars and has absolutely no path to profitability.
This is a giant, selfish waste of money and time, one that will collapse the second that somebody’s confidence wavers.
I realize that it’s tempting to write “Sam Altman is building a giant data center empire,” but what Sam Altman is actually doing is lying. He is lying to everybody.
He is saying that he will build 250GW of data centers in the space of eight years, an impossible feat, requiring more money than anybody would ever give him in volumes and intervals that are impossible for anybody to raise.
Sam Altman’s singular talent is finding people willing to believe his shit or join him in an economy-supporting confidence game, and the recklessness of continuing to do so will only harm retail investors — regular people beguiled by the bullshit machine and bullshit masters making billions promising they’ll make trillions.
To prove it, I’m going to write down everything that will need to take place in the next twelve months for this to happen, and illustrate the timelines of everything involved.
Object permanence: Use RSS; Lifehackers in the NYT; Banned Verminous Dickens cake; Fake CIA Fox guy; Ferris wheel offices; EFF finds printer snitch-dots; Officer Bubbles sues Youtube; Sued for criticizing Proctorio; Can I sing Happy Birthday? "Under the Poppy"; International Concatenated Order of Hoo-Hoo.
Remember when we were all worried that Huawei had filled our telecoms infrastructure with listening devices and killswitches? It sure would be dangerous if a corporation beholden to a brutal autocrat became structurally essential to your country's continued operations, huh?
In other, unrelated news, earlier this month, Trump's DoJ ordered Apple and Google to remove apps that allowed users to report ICE's roving gangs of masked thugs, who have kidnapped thousands of our neighbors and sent them to black sites:
Apple and Google capitulated. Apple also capitulated to Trump by removing apps that collect hand-verified, double-checked videos of ICE violence. Apple declared ICE's thugs to be a "protected class" that may not be disparaged in apps available to Apple's customers:
Of course, iPhones can (technically) run apps that Apple doesn't want you to run. All you have to do is "jailbreak" your phone and install an independent app store. Just one problem: the US Trade Rep bullied every country in the world into banning jailbreaking, meaning that if Trump (a man who never met a grievance that was too petty to pursue) orders Tim Cook (a man who never found a boot he wouldn't lick) to remove apps from your country's app store, you won't be able to get those apps from anyone else:
Now, you could get your government to order Apple to open up its platform to third-party app stores, but they will not comply – instead, they'll drown your country in spurious legal threats:
Of course, Google's no better. Not only do they capitulate to every demand from Trump, but they're also locking down Android so that you'll no longer be allowed to install apps unless Google approves of them (meaning that Trump now has a de facto veto over your Android apps):
For decades, China hawks have accused Chinese tech giants of being puppeteered by the Chinese state, vehicles for projecting Chinese state power around the world. Meanwhile, the Chinese state has declared war on its tech companies, treating them as competitors, not instruments:
When it comes to US foreign policy, every accusation is a confession. Snowden showed us how the US tech giants were being used to wiretap virtually every person alive for the US government. More than a decade later, Microsoft has been forced to admit that they will still allow Trump's lackeys to plunder Europeans' data, even if that data is stored on servers in the EU:
Microsoft is definitely a means for the US to project its power around the world. When Trump denounced Karim Khan, the Chief Prosecutor of the International Criminal Court, for indicting Netanyahu for genocide, Microsoft obliged by nuking Khan's email, documents, calendar and contacts:
This is exactly the kind of thing Trump's toadies warned us would happen if we let Huawei into our countries. Every accusation is a confession.
But it's worse than that. The very worst-case speculative scenario for Huawei-as-Chinese-Trojan-horse is infinitely better than the non-speculative, real ways in which the US has killswitched and bugged the world's devices.
Take CALEA, a Clinton-era law that requires all network switches to be equipped with law-enforcement back-doors that allow anyone who holds the right credential to take over the switch and listen in, block, or spoof its data. Virtually every network switch manufactured is CALEA-compliant, which is how the NSA was able to listen in on the Greek Prime Minister's phone calls to gain competitive advantage for the competing Salt Lake City Olympic bid:
CALEA backdoors are a single point of failure for the world's networking systems. Nominally, CALEA backdoors are under US control, but the reality is that lots of hackers have exploited CALEA to attack governments and corporations, inside the US and abroad. Remember Salt Typhoon, the worst-ever hacking attack on US government agencies and large corporations? The Salt Typhoon hackers used CALEA as their entry point into those networks:
US monopolists – within Trump's coercive reach – control so many of the world's critical systems. Take John Deere, the ag-tech monopolist that supplies the majority of the world's tractors. By design, those tractors do not allow the farmers who own them to alter their software. That's so John Deere can force farmers to use Deere's own technicians for repairs, and so that Deere can extract soil data from farmers' tractors to sell into the global futures market.
A tractor is a networked computer in a fancy, expensive case filled with whirling blades, and at any time, Deere can reach into any tractor and permanently immobilize it. Remember when Russian looters stole those Ukrainian tractors and took them to Chechnya, only to have Deere remotely brick their loot, turning the tractors into multi-ton paperweights? A lot of us cheered that high-tech comeuppance, but when you consider that Donald Trump could order Deere to do this to all the tractors, on his whim, this gets a lot more sinister:
Any government thinking about the future of geopolitics in an era of Trump's mad king fascism should be thinking about how to flash those tractors – and phones, and games consoles, and medical implants, and ventilators – with free and open software that is under its owner's control. The problem is that every country in the world has signed up to America's ban on jailbreaking.
In the EU, it's Article 6 of the Copyright Directive. In Mexico, it's the IP chapter of the USMCA. In Central America, it's via CAFTA. In Australia, it's the US-Australia Free Trade Agreement. In Canada, it's 2012's Bill C-11, which bans Canadian farmers from fixing their own tractors, Canadian drivers from taking their cars to a mechanic of their choosing, and Canadian iPhone and games console owners from choosing to buy their software from a Canadian store:
These anti-jailbreaking laws were designed as a tool of economic extraction, a way to protect American tech companies' sky-high fees and rampant privacy invasions by making it illegal, everywhere, for anyone to alter how these devices work without the manufacturer's permission.
But today, these laws have created clusters of deep-seated infrastructural vulnerabilities that reach into all our digital devices and services, including the digital devices that harvest our crops, supply oxygen to our lungs, or tell us when Trump's masked shock-troops are hunting people in our vicinity.
It's well past time for a post-American internet. Every device and every service should be designed so that the people who use them have the final say over how they work. Manufacturers' back doors and digital locks that prevent us from updating our devices with software of our choosing were never a good idea. Today, they're a catastrophe.
The world signed up to these laws because the US threatened tariffs against any country that refused to pass them. Well, happy Liberation Day, everyone: the countries that did as they were told got American tariffs anyway.
When someone threatens to burn down your house unless you do as you're told, and then they burn your house down anyway, you don't have to keep doing what they told you.
When Putin invaded Ukraine, he inadvertently pushed the EU to accelerate its solarization efforts and escape its reliance on Russian gas, and now Europe is a decade ahead of schedule in meeting its zero-emissions goals:
Today, another mad dictator is threatening the world's infrastructure. To escape dictators' demands, the rest of the world will have to accelerate its independence from American tech, not just Russian gas. A post-American internet starts with abandoning the laws that give US companies, and therefore Trump, a veto over how your technology works.
#20yrsago Microsoft employee calls me a communist and a liar and insists that a Microsoft monopoly will be good for Norway https://memex.craphound.com/2005/10/17/msft-employee-cory-is-a-liar-and-a-communist-msft-is-good-for-norway/
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.
A Little Brother short story about DIY insulin PLANNING
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
Takashi Murakami, the Japanese contemporary artist known for blending motifs from popular culture and postwar Japanese art into fantastical, vibrant scenes, continues to bridge worlds with his signature characters and joyful flowers. His work has graced everything from hip-hop album covers to major museum exhibitions, seamlessly crossing creative boundaries. Now, Murakami brings his playful vision to the world of fine champagne, collaborating with Dom Pérignon on the Dom Pérignon x Takashi Murakami collection: two limited-edition bottles celebrating the close of the 2025 season, one for Dom Pérignon Vintage 2015 and one for the launch of Dom Pérignon Rosé Vintage 2010.
Murakami's collaboration with Dom Pérignon extends beyond decoration; it is a conversation rooted in nature. For Dom Pérignon, nature is both the starting point and the medium itself: the grapes, the unpredictable climate, and the human touch are all captured within the confines of the glass. Murakami, for his part, interprets nature through transformation: his surreal, smiling flowers and dreamlike characters move between the natural and the artificial, between nature's evolution and the artist's reimagining of it.
In the 2025 collection, this exchange comes alive through vibrant contrasts and symbolic design. The dark, minimalist bottles and coffrets are punctuated by bursts of Murakami's iconic blooms, each one a cheerful, animated embodiment of vitality. The champagne's historic crest becomes a portal to a whimsical, flower-filled world, where refinement meets exuberance and timeless craftsmanship meets contemporary imagination. The Vintage 2015 and the Rosé Vintage 2010 each have their own color palette and mood, much like the years themselves. When displayed side by side, the limited-edition boxes form a modular floral composition.
Murakami understands the importance of respecting the processes of the past while also looking toward the future. "Through my collaboration with Dom Pérignon, I wanted to express a form of time travel. My goal is to remain relevant in 100 or 200 years and to transcend time. When the label has aged, and I am gone, and my children are gone, I hope that people of the future, when they see it, will reimagine 2025 in their own minds," says the artist, grounding the collection in historical perspective.
As more and more brands homogenize and shy away from bright color, Murakami and Dom Pérignon instead go in the other direction, embracing exploration, artistry, and the road less traveled. Explosions of happy Murakami flowers bursting out of the traditional crest signal a modern take from a classic brand. Dom Pérignon chooses to stay on this side of the millennium: a historic name paired with a contemporary sense of style.
Takashi Murakami working on the design
To learn more about the Takashi Murakami x Dom Pérignon limited-edition collaboration, visit domperignon.com.
Next to technicolor neon signs featuring Road Runner, an inspirational phrase that says “everything will be fucking amazing,” and a weed leaf, Geovany Alvarado points to a neon sign he’s particularly proud of: “The Lost and Found Art,” it says.
“I had a customer who called me, it was an old guy. He wanted to meet with someone who actually fabricates the neon and he couldn’t find anyone who physically does it,” Alvarado said. “He told me ‘You’re still doing the lost art.’ It came to my head that neon has been dying, there’s less and less people who have been learning. So I made this piece.”
For 37 years, Alvarado has been practicing “the lost and found art” of neon sign bending, weathering the general ups and downs of business as well as, most threateningly, the rise of cheap LED signs that mimic neon and have become popular over the last few years.
“When neon crashed and LED and the big letters like McDonald’s, all these big signs—they took neon before. Now it’s LED,” he said. In the last few years, though, he said there has been a resurgent interest in neon from artists and people who are rejecting the cheap feel of LED. “It came back more like, artistic, for art. So I’ve been doing 100 percent neon since then.”
At Quality Neon Signs, his shop in Mid-City Los Angeles, signs in all sorts of states of completion and functionality are strewn about: old, mass-produced beer advertisements whose transformers have blown and are waiting for him to repair them, signs in the shapes of soccer and baseball jerseys, signs with inspirational phrases ("Everything is going to be fucking amazing," "NEED MONEY FOR FAKE ART"), signs for restaurants, demonstration tubes that show the different colors he offers, weed shop signs, and projects he made when he was bored. Some projects are particularly meaningful to him: a silhouette he made of his wife holding their infant daughter, and a sign of the Los Angeles skyline with a wildfire burning in the background, "just to represent Los Angeles," he said. There are old little bits of tube that have broken off of other pieces. "We save everything," Alvarado said, "in case we want to fix it or need it for a repair." His workshop, a few minutes away, features a "Home Sweet Home" sign, a sign he made years ago for a Twitter/Chanel collaboration featuring the old Twitter bird logo, and a sign for the defunct Channing Tatum buddy cop show Comrade Detective.
The overwhelming majority of signs Alvarado sells are traditional neon glass. The real thing. But he does offer newer LED faux-neon signs to clients who want them, though he doesn't make those in-house. Alvarado says he sells LED to keep up with the times, and because LED signs can be more practical for one-off events, since they're less likely to break in transit. But it's clear that he and most other neon sign makers think the LED stuff is simply not the same. Most LED signs look cheaper and don't emit the same warmth of light, though they are more energy efficient.
I asked two neon sign creators about the difference while I was shopping for signs. They said they think the environmental debate isn’t quite as straightforward as it seems because a lot of the LED signs they make seem to be for one-off events, meaning many LED signs are manufactured essentially for a single use and then turned into e-waste. Many real neon signs are bought as either artwork or are bought by businesses who are interested in the real aesthetic. And because they are generally more expensive and are handmade, they are used for years and can be repaired indefinitely.
I asked Alvarado to show me the process and make a neon sign for 404 Media, which I've wanted for years. It's a visceral, loud, scientific process, with gas-powered burners that sound like jet engines heating the glass tubes to roughly 1,000 degrees so they can be bent into the desired shapes. When he first started bending neon, Alvarado says, he used an overhead projector and a transparency to project a schematic onto the wall. These days, he mocks up designs in a computer-aided design program and prints them out on a huge printer that uses a Sharpie to draw the schematic. He then painstakingly marks out his planned glass bends on the paper, lining up the tubes with the mockup as he works.
“You burn yourself a lot, your hands get burnt. You’re dealing with fire all the time,” Alvarado said. He burned himself several times while working on my piece. “For me it’s normal. Even if you’re a pro, you still burn yourself.” Every now and then, even for someone who has been doing this for decades, the glass tubes shatter: “You just gotta get another stick and do it again,” he said.
After bending the glass and connecting the electrodes to one end of the piece, he connects the tubes to a high-powered vacuum that sucks the air out of them. The color of the light in Alvarado’s work is determined by a powdered coating within the tubes or a different colored coating of the tubes themselves; the type of gas and electrical current also changes the type and intensity of the colors. He uses neon for bright oranges and reds, and argon for cooler hues.
Alvarado, of course, isn’t the only one still practicing the “lost art” of neon bending, but he’s one of just a few commercial businesses in Los Angeles still manufacturing and repairing neon signs for largely commercial customers. Another, called Signmakers, has made several large neon signs that have become iconic for people who live in Los Angeles. The artist Lili Lakich has maintained a well-known studio in Los Angeles’ Arts District for years and has taught “The Neon Workshop” to new students since 1982, and the Museum of Neon Art is in Glendale, just a few miles away.
A few days after he made my neon sign, I was wandering around Los Angeles and came across an art gallery displaying Tory DiPietro's neon work, which is largely fine art and pieces in which neon is incorporated into other artworks: a neon "FRAGILE" superimposed on a globe, for example. Both DiPietro and Alvarado told me that there are still a handful of people practicing the lost art, and that in recent years there has been a bit of a resurgent interest in neon, though it's not easy to learn.
On the day I picked up my sign, there were two bright green "Meme House" signs for a memecoin investor house in Los Angeles that Alvarado said he had bent and made immediately after working on the 404 Media sign. "I was there working 'til about 11 p.m.," he said.