
Is There Any Real Money In Renting Out AI GPUs?


NVIDIA has become a giant, unhealthy rock on which the US markets — and to some extent the US economy — sit, representing 7-to-8% of the market's value and a large share of the roughly $400 billion in AI data center capex expected to be spent this year, spending that has contributed more to GDP growth than all consumer spending combined.

I originally started writing this piece about something else entirely — the ridiculous Oracle deal, what the consequences of me being right might be, and a lot of ideas that I'll get to later, but I couldn't stop looking at what NVIDIA is doing.

To be clear, NVIDIA's numbers are insane: it makes 88% of its massive revenues from selling the GPUs and associated server hardware that underpin the inference and training of Large Language Models, a market it effectively created by acquiring Mellanox — whose hardware enabled the high-speed networking needed to connect massive banks of servers and GPUs — for $6.9 billion in 2019, a deal now under investigation by China's antitrust authorities.

Since 2023, NVIDIA has made an astonishing amount of money from its data center vertical, going from $47 billion across the entirety of its Fiscal Year 2023 to $41.1 billion in its last quarterly earnings alone.

What's even more remarkable is how little money anyone else is making as a result. The combined revenues of the entire generative AI industry are unlikely to cross $40 billion this year, even when you include AI compute company CoreWeave, which expects to make a little over $5 billion or so. Most of CoreWeave's revenue comes from Microsoft, from OpenAI (which is funded by Microsoft, and which Google also pays CoreWeave to supply with compute, despite OpenAI already being a CoreWeave client both under Microsoft and in its own name)...and now from NVIDIA itself, which has agreed to buy $6.3 billion of any unsold cloud compute over, I believe, the next four years.

Hearing about this deal made me curious.

Why is NVIDIA acting as a backstop to CoreWeave? And why is it paying $1.5 billion over four years to rent back thousands of its own GPUs from Lambda, another AI compute company it invested in?

The answer is simple: NVIDIA is effectively incubating its own customers, creating the contracts they need to raise debt to buy GPUs — from NVIDIA, of course — GPUs that can, in turn, be used as collateral for further loans to buy even more. These compute contracts double as proof of revenue, reassuring creditors that the neoclouds are good for the money so that they can keep raising mountains of debt to build more data centers to fill with more GPUs from NVIDIA.

This has also created demand for Dell and Supermicro, which together accounted for 39% of NVIDIA's most recent quarterly revenues. Both buy GPUs from NVIDIA and build the server architecture around them necessary to provide AI compute, reselling the finished systems to companies like CoreWeave and Lambda, which also buy GPUs of their own with preferential access from NVIDIA.

You'll be shocked to hear that NVIDIA also invested in both CoreWeave and Lambda, that Supermicro also invested in Lambda, and that Lambda also gets its server hardware from Supermicro.

While this is the kind of merciless, unstoppable capitalism that has made Jensen Huang such a success, there's an underlying problem: these companies become burdened with massive debt, used to send money to NVIDIA, Supermicro (an AI server/architecture reseller), and Dell (another reseller that works directly with CoreWeave), and there doesn't actually appear to be mass-market demand for AI compute — other than the voracious hunger to build more of it.

In a thorough review of just about everything ever written about them, I found a worrying pattern within the three major neoclouds (CoreWeave, Lambda, and Nebius): a lack of any real revenue outside of Microsoft, OpenAI, Meta, Amazon, and of course NVIDIA itself, and a growing pile of debt raised in expectation of demand that I don't believe will ever arrive.

To make matters worse, I've also found compelling evidence that all three of these companies lack the capacity to actually serve massive contracts like OpenAI's $11.9 billion deal with CoreWeave (and an additional $4 billion added a few months later), or Nebius' $17.4 billion deal with Microsoft, both of which were used to raise debt for each company.

On some level, NVIDIA's Neocloud play was genius: it created massive demand for its own GPUs, both directly and through resellers, and created competition for big tech clouds like Microsoft Azure and Amazon Web Services, suppressing cloud compute prices and forcing them to buy more GPUs to compete with CoreWeave's imaginary scale.

The problem is that there is no real demand outside of big tech's own alleged need for compute. Across the board, CoreWeave, Nebius and Lambda have similar clients, with the majority of CoreWeave's revenue coming from companies offering compute to OpenAI or NVIDIA's own "research" compute.

Neoclouds exist as an outgrowth of NVIDIA, taking on debt using GPUs as collateral, which they use to buy more GPUs, which they then use as collateral along with the compute contracts they sign with either OpenAI, Microsoft, Amazon or Google.

Beneath the surface of the AI "revolution" lies a dirty secret: most of the money is just one of four companies feeding cash to a firm incubated by NVIDIA specifically to buy GPUs and their associated hardware.

I will add that NVIDIA also invested in Crusoe, the company building OpenAI's data center operation out in Abilene, Texas. The reason I haven't included it in this larger piece is that it's currently focused on literally one client — Oracle — which in turn serves one other client: OpenAI.

These Neoclouds are entirely dependent on a continual flow of private credit from firms like Goldman Sachs (Nebius, CoreWeave, Lambda for its IPO), JPMorgan (Lambda, Crusoe, CoreWeave), and Blackstone (Lambda, CoreWeave), who have in a very real sense created an entire debt-based infrastructure to feed billions of dollars directly to NVIDIA, all in the name of an AI revolution that's yet to arrive.

Those billions — an estimated $50 billion a quarter for the last three quarters at least — will eventually have the expectation of some sort of return, yet every Neocloud is a gigantic money loser, with CoreWeave burning $300 million in the last quarter with expectations to spend more than $20 billion in capital expenditures in 2025 alone.

At some point the lack of real money in these companies will make them unable to pay their ruinous debt, and with NVIDIA's growth already slowing, I think we're watching a private credit bubble grow with no way for any of the money to escape.

I'm not sure where it'll end, but it's not going to be pretty.

Let's begin.

Read the whole story
mkalus
5 hours ago
1 public comment
kglitchy
12 hours ago
I wonder if the AI bust will bring a renaissance of lower-cost colocation and cloud services. All these data centers with fully depreciated GPUs in a few years, after all the money has run out, will need some way to pay their power purchase agreements. Or they will go bust and someone else will buy the data center and need something to do with it.

Oracle and OpenAI Are Full Of Crap


This week, something strange happened. Oracle, a company that had just missed on its earnings and revenue estimates, saw a more-than-39% single-day bump in its stock, leading a massive market rally.

Why? Because it said its remaining performance obligations — contracts signed that its customers have yet to pay — had increased by $317 billion from the previous quarter, with CNBC reporting at the time that this was likely part of Oracle and OpenAI's planned additional 4.5 gigawatts of data center capacity being built in the US.

Analysts fawned over Oracle — again, as it missed estimates — with TD Cowen's Derrick Wood saying it was a "momentous quarter" (again, it missed) and that these numbers were "really amazing to see," and Guggenheim Securities' John DiFucci said he was "blown away." Deutsche Bank's Brad Zelnick added that "[analysts] were all kind of in shock, in a very good way."

RPOs, while standard (and required) accounting practice and based on actual signed contracts, are being used by Oracle as a form of marketing. Plans change, contracts can be canceled (usually with a kill fee, but nevertheless), and, especially in this case, clients can simply lack the money to pay — or die for the very same reason they can't pay. Oracle isn't simply promising ridiculous growth; it is effectively saying it'll become the dominant player in all of cloud compute.

A day after Oracle's earnings and a pornographic day of market swings, the Wall Street Journal reported that OpenAI and Oracle had signed a $300 billion deal starting in "2027," though the Journal neglected to say whether that meant the calendar year or Oracle's FY2027 (which starts June 1, 2026).

Oracle claims that it will make $18 billion in cloud infrastructure revenue in FY2026, $32 billion in FY2027, $73 billion in FY2028, $114 billion in FY2029, and $144 billion in FY2030. While all of this isn't necessarily OpenAI (as it adds up to $381 billion), it's fair to assume that the majority of it is.

This means — given that OpenAI's $300 billion makes up the bulk of the $317 billion in new contracts added by Oracle, and assuming OpenAI makes up 78% of its cloud infrastructure revenue ($300 billion out of $381 billion) — that OpenAI intends to spend over $88 billion fucking dollars on compute in FY2029, and $110 billion — nearly as much as Amazon Web Services makes in a year — in FY2030.
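A quick sketch of the arithmetic above, using the FY2026-FY2030 cloud infrastructure projections Oracle has given (all figures in billions):

```python
# Oracle's projected cloud infrastructure revenue by fiscal year, in billions of dollars
oracle_cloud = {"FY2026": 18, "FY2027": 32, "FY2028": 73, "FY2029": 114, "FY2030": 144}

total = sum(oracle_cloud.values())   # $381B across FY2026-FY2030
openai_share = 300 / total           # OpenAI's $300B deal as a share of that: ~78.7%

# Applying that share to the FY2029 and FY2030 projections:
fy2029_spend = openai_share * oracle_cloud["FY2029"]  # ~$89.8B
fy2030_spend = openai_share * oracle_cloud["FY2030"]  # ~$113.4B
print(f"OpenAI share: {openai_share:.1%}")
print(f"Implied OpenAI compute spend: FY2029 ~${fy2029_spend:.0f}B, FY2030 ~${fy2030_spend:.0f}B")
```

Which lands on the "over $88 billion" and roughly "$110 billion" figures quoted above.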

A sidenote on percentages, and how I'm going to talk about this going forward. If I'm honest, there's also a compelling argument that more of it is OpenAI. Who else is using this much compute? Who has agreed, and why? 

In any case, if you trust Oracle and OpenAI, this is what you are believing:

  • That the AI compute industry will grow by, at the very least, 500% by 2030, to over $200 billion in annual revenue, and that almost all of that growth will come from one company: OpenAI.
  • That Oracle can successfully complete the data centers in question, and that said data centers will be operational in time to provide that compute.
  • That OpenAI — a company with no plan for profitability — will be able to afford three hundred billion dollars spread over 2027, 2028, 2029, and 2030.
  • That Oracle will, at that point, become a dominant player in cloud compute, with $144 billion in cloud infrastructure revenue, and that it will get there mostly from one customer.
  • That Oracle's cloud infrastructure revenue will increase by 700% — from $18 billion in FY2026 to $144 billion in FY2030, a growth rate of 68.2% a year — again, from one customer.
  • That Oracle will, by FY2028, be making more in cloud infrastructure ($73 billion, it projects) than all of Google Cloud did in 2024 ($43 billion). And it'll make it from one customer.
  • That Oracle has more incoming contracted revenue than Amazon, Google, and Microsoft, and will be making almost the entirety of that from one god damn customer.
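The 68.2% figure is just the compound annual growth rate implied by Oracle's own two endpoints; a minimal check:

```python
# Oracle's own projections: $18B cloud infrastructure revenue in FY2026,
# $144B in FY2030 -- four annual growth steps apart.
start, end, years = 18.0, 144.0, 4

total_increase = (end - start) / start   # 7.0 -> a 700% increase
cagr = (end / start) ** (1 / years) - 1  # ~0.682 -> ~68.2% per year
print(f"Total increase: {total_increase:.0%}, implied annual growth: {cagr:.1%}")
```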

I want to write something smart here, but I can't get away from saying that this is all phenomenally, astronomically, ridiculously stupid.

OpenAI, at present, has made about $6.26 billion in revenue this year, and a few days ago it was leaked that the company will burn $115 billion "through 2029" — a statement that is obviously, patently false. Let's take a look at this chart from The Information:

[Chart from The Information showing OpenAI's projected free cash flow]

A note on "free cash flow": these numbers may look a little different because OpenAI is now leaking free cash flow instead of losses, likely because it lost $5 billion in 2024 — a figure that included $1 billion in losses from "research compute amortization" (likely referring to spreading the cost of R&D compute out across several years, meaning it already paid it) and another $700 million lost to its revenue share with Microsoft.

In any case, this is how OpenAI is likely getting its "negative $2 billion" number.

Personally, I don't like this as a means of judging this company's financial health, because it's very clear it’s using it to make its losses seem smaller than they are.

The Information also reports that OpenAI will, in totality, spend $350 billion on compute from here until 2030, but claims it'll only spend $100 billion on compute in that year. If I'm honest, I believe it'll be more, based on how much Oracle is projecting. OpenAI represents $300 billion of the $317 billion in new cloud infrastructure contracts Oracle booked for 2027 through 2030, which heavily suggests that OpenAI will be spending more like $140 billion in that year.
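One plausible reading of that "more like $140 billion" estimate (my assumption about the arithmetic, not something spelled out above): take OpenAI's $300 billion as its share of the $317 billion in new contracts, and apply that ratio to Oracle's projected $144 billion of FY2030 cloud infrastructure revenue:

```python
openai_deal, new_rpo = 300.0, 317.0  # billions: OpenAI's deal vs. Oracle's new RPO
fy2030_cloud = 144.0                 # Oracle's projected FY2030 cloud infra revenue

share = openai_deal / new_rpo        # ~94.6% of the new contracts are OpenAI's
implied = share * fy2030_cloud       # ~$136B -- i.e., "more like $140 billion"
print(f"{share:.1%} of ${fy2030_cloud:.0f}B -> ~${implied:.0f}B")
```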

As I'll reveal in this piece, I believe OpenAI's actual burn is over $290 billion through 2029, and that these leaks were intentional, designed to muddy the waters around how much its actual costs would be.

There is no way a $115 billion burn rate from 2025 to 2029 includes these costs, and I am shocked that more people aren't doing the basic maths necessary to evaluate this company. The timing of the leak — September 5, 2025, five days before the Oracle deal was announced — always felt deeply suspicious, as it's unquestionably bad news...unless, of course, you are trying to undersell how bad your burn rate is.

I believe that OpenAI's leaked free cash flow projections intentionally leave out the Oracle contract as a means of avoiding scrutiny.

I refuse to let that happen.


So, even if OpenAI somehow had the money to pay for its compute — it won't, though it projects, according to The Information, that it'll make one hundred billion dollars in 2028 — I'm not confident that Oracle will actually be able to build the capacity to deliver it.

Vantage Data Centers, the partner building the sites, will be taking on $38 billion of debt to build two sites in Texas and Wisconsin, only one of which has actually broken ground from what I can tell, and unless it has found a miracle formula that can grow data centers from nothing, I see no way that it can provide OpenAI with $70 billion or more of compute in FY2027.

Oracle and OpenAI are working together to artificially boost Oracle's stock based on a contract that is, from everything I can see, impossible for either party to fulfill.

The fact that this has led to such an egregious pump of Oracle's stock is an utter disgrace, and a sign that the markets and analysts are no longer representative of any rational understanding of a company's value.

Let me be abundantly clear: Oracle and OpenAI's deal says nothing about demand for GPU compute.

OpenAI is the largest user of compute in the entire generative AI industry. Anthropic expects to burn $3 billion this year (so we can assume its compute costs are in the $3 billion to $5 billion range — Amazon is estimated to make $5 billion in AI revenue this year, so I think that's a fair assumption), and xAI burns through a billion dollars a month. CoreWeave expects about $5.3 billion of revenue in 2025, and, per The Information, Lambda, another AI compute company, made more than $250 million in the first half of 2025. Even if we assume all of these companies were active revenue participants (we shouldn't — xAI mostly handles its own infrastructure), I estimate the global compute market at about $40 billion in totality, at a time when AI adoption is trending downward in large companies, according to Apollo's Torsten Sløk.

And yes, Nebius signed a $17.4 billion, four-year-long deal with Microsoft, but Nebius now has to raise $3 billion to build the capacity to acquire "additional compute power and hardware, [secure] land plots with reliable providers, and [expand] its data center footprint," because Nebius, much like CoreWeave, and, much like Oracle, doesn't have the compute to service these contracts.

All three have seen a 30% bump in their stock in the last week.

In any case, today I'm going to sit down and walk you through the many ways in which the Oracle and OpenAI deal is impossible to fulfill for either party. OpenAI is projecting fantastical growth in an industry that's already begun to contract, and Oracle has yet to even start building the data centers necessary to provide the compute that OpenAI allegedly needs.


Trying to Read


Birds on the Moon


#ricoh #519 #rangefinder #35mm #film

1 Share

ecstaticist - evanleeson.art posted a photo:

#ricoh #519 #rangefinder #35mm #film




OpenAI fights the evil scheming AI! — which doesn’t exist yet


AI vendors sell access to chatbots. You can have conversations with the chatbot!

This convinces far too many people there's an actual person in there. But there isn't — they're text-completing machines with a bit of randomness.

That doesn’t sound very cool. So the AI companies encourage what we call criti-hype — the sort of AI “criticism” that makes the robot sound dangerous in a cool and edgy way. Our chatbot is so capable, it could take over the world! So it can definitely answer your email.

If you can’t get enough criti-hype, make up your own! Ask the AI doomsday crowd. The AI companies are full of these people. They will always tell you the super-autocomplete might take over and kill us all.

Anthropic puts out a lot of reports to stoke the fear of chatbots. Reasoning AI is lying to you! Or maybe it’s just hallucinating again. Anthropic did one report with Apollo Research where they got a chatbot to lie to them — by telling it to lie to them.

Apollo Research is an “AI safety” group — that is, they’re AI doomsday cultists. [Apollo]

After GPT-5 fell flat, OpenAI is getting a bit desperate. So they sponsored Apollo to write a report on the threat of scheming chatbots that will deceive you! [OpenAI; Apollo; paper, PDF]

OpenAI and Apollo “found behaviors consistent with scheming in controlled tests across frontier models.”

No, they didn’t. They’re anthropomorphising the chatbot without a licence. The bots are text completers. They don’t have intelligence, they don’t have plans or goals.

But people project their own intelligence onto the chatbot — and if they’re paranoid AI doomsday cultists, they project AI doom onto the chatbot.

AI scheming – pretending to be aligned while secretly pursuing some other agenda – is a significant risk that we’ve been studying.

No, it is not a “significant risk,” they don’t show that at all, and the claim is as stupid as it sounds.

The paper is 94 pages, but if you read through, they openly admit they’ve got nothing. Section 3.1, “Covert actions as a proxy for scheming”, admits directly:

Current frontier models likely lack the sophisticated awareness and goal-directedness required for competent and concerning scheming.

The researchers just said chatbots don’t scheme — but they really want to study this made-up threat. So they look for supposed “covert actions”. And they just assume — on no evidence — there are goals in there.

Apollo is assuming their conclusion as hard as they can, then they’re presenting the results of this assumption as evidence for their conclusion.

The researchers even set up “anti-scheming environments that create conflicts to elicit covert rule violations” — so whatever the bot output is, it’s evidence of wrongdoing. So’s the opposite.

One of the tasks is an “Impossible Coding Task.” Now, you might have noticed that vibe coding is terrible and chatbots are not actually good at coding. But these guys assume bad code presented as good code is evidence of evil.

“Misciting the spec” is cited as evidence of malign intent. In that case, the Github Copilot bot is a malevolent intelligence that’s out to mess you up. (… Hmm.)

If they can’t get anything else, they just tell the bot to please act evil:

Training the model to covertly pursue the misaligned goal. In order to instill the misaligned, covert goal into o4-mini, we use deliberative alignment.

Yes — if you train the bot to act like an evil robot, it’ll act like an evil robot.

After way too many pages of this guff, the conclusions straight up admit they’ve got nothing:

While current models likely lack the goal-directedness and situational awareness required for dangerous scheming, this failure mode may become critical in future AI systems.

We admit this is useless and dumb, but you can’t prove it won’t be huge in the future!

Scheming represents a significant risk for future AI systems

This is just after they said they’ve no evidence this is even a thing.

The whole paper is full of claims so stupid you think, I must be reading it wrong. But then they just come out and say the stupid version.

I bet these guys are haunted by the malevolent artificial intelligence power of thermostats. It switched itself on!!
