Microsoft OneDrive lets you disable AI face recognition three times in a year!


Slashdot editor David went to upload a photo from his phone to his Microsoft OneDrive. He found one of those wonderful little surprises waiting for him under “Privacy And Permissions”: [Slashdot]

People section: OneDrive uses AI to recognize faces in your photos to help you find photos of friends and family. You can only change this setting 3 times a year.

What happens if you switch it off?:

If I moved the slider for that setting to the left (for “No”), it moved back to the right and said “Something went wrong while updating this setting.” (Apparently it’s not one of those three times of the year.)

This darling little feature has been in the works for at least a couple of years. [Microsoft, 2023 archive]

Now it’s finally on and active — for some lucky users. Microsoft told Slashdot it’s “rolling out to limited users in a preview so we can learn and improve” — though David says he certainly did not sign up for this preview.

Why is it opt-out and not opt-in? Especially since the opt-out doesn’t actually work?

Microsoft OneDrive inherits privacy features and settings from Microsoft 365 and SharePoint, where applicable.

And the one big question from David:

What’s the reason OneDrive tells users this setting can only be turned off 3 times a year?

[Microsoft’s publicist chose not to answer this question.]

You might want to remove all your photos from OneDrive sooner rather than later. Then set OneDrive on fire and throw it in the bin.

Read the whole story
mkalus
2 hours ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

Czech Ambient and Downtempo of the 1980s and 1990s


The NTS guide to… featuring Czech ambient from the 1980s and 1990s, along with some worthwhile background on it. Good topic. More than just music.

Tearooms in post-revolution Czechoslovakia symbolised places through which new spiritualities were flowing, and the influx of largely uncharted ways of life closely intertwined with new age, ambient, folk and minimalism. With their minds altered thanks to smuggled records by Fripp & Eno or Steve Reich, this loose network of musicians had begun composing meditative music, using loops and handmade instruments, with a different sensibility.

Music journalist Pavel Klusák dubbed this 1990s scene a “tearoom alternative”. Experimental folk singer Oldřich Janota, Jaroslav Kořán’s various ensembles like Modrá or Orloj Snivců (The Horologe of Dreamers) or Irena and Vojtěch Havlovi were drawn by the light and composed music that didn’t match the fast pace of newly imported capitalism.


(Direct link)

Tracklist:
Jaroslav Kořán – Jarní Píseň 00:00
Pavel Richter Band – Čínský Potůček – Poloviční Chytání 4:18
Vlastimil Marek – Závěje Hlasů 11:54
Oldřich Janota, Vojtěch A Irena Havlovi – Ozářený Sluncem, Mrakem Zastíněný 17:27
Irena & Vojtěch Havlovi – Mysterious Gamelanland IX. 22:06
Modrá – Morning Azure 25:53
Vlastislav Matoušek – Shapes Of Silence IV 30:46
Orloj Snivců – Jde 40:46
Jiná Rychlost Času – Klášter Na Vodě 44:26


A 1959 Computer Plays Boards of Canada’s “Olson” via Punch Cards


As wonderfully nerdy as the sound of BOC.

Playing the 1998 song “Olson” by Boards of Canada through four control panel light bulbs of a 1959 DEC PDP-1 computer using Peter Samson’s 1962 Harmony Compiler and 603 bytes of music data.


(Direct link, via Zwentner)


What Happened When AI Came for Craft Beer


A prominent beer judging competition introduced an AI-based judging tool without warning in the middle of a competition, surprising and angering judges who thought their evaluation notes for each beer were being used to improve the AI, according to multiple interviews with judges involved. The company behind the competition, called Best Beer, also planned to launch a consumer-facing app that would use AI to match drinkers with beers, the company told 404 Media.

Best Beer also threatened legal action against one judge who wrote an open letter criticizing the use of AI in beer tasting and judging, according to multiple judges and text messages reviewed by 404 Media.

The months-long episode shows what can happen when organizations try to push AI onto a hobby, pursuit, art form, or even industry which has many members who are staunchly pro-human and anti-AI. Over the last several years we’ve seen it with illustrators, voice actors, music, and many more. AI came for beer too. 

“It is attempting to solve a problem that wasn’t a problem before AI showed up, or before big tech showed up,” said Greg Loudon, a certified beer judge and brewery sales manager, and the judge who was threatened with legal action. “I feel like AI doesn’t really have a place in beer, and if it does, it’s not going to be in things that are very human.”

“There’s so much subjectivity to it, and to strip out all of the humanity from it is a disservice to the industry,” he added. Another judge said the introduction of AI was “enshittifying” beer tasting.

💡
Do you know anything else about how AI is impacting beer? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This story started earlier this year at a Canadian Brewing Awards judging event. Best Beer is the company behind the Canadian Brewing Awards, which gives awards in categories such as Experimental Beer, Speciality IPA, and Historic/Regional Beers. To be a judge, you have to be certified by the Beer Judge Certification Program (BJCP), which involves an exam covering the brewing process, different beer styles, judging procedures, and more.

Around the third day of the competition, the judges were asked to enter their tasting notes into a new AI-powered app instead of the platform they already use, one judge told 404 Media. 404 Media granted the judge anonymity to protect them from retaliation.

Using the AI felt like it was “parroting back bad versions of your judge tasting notes,” they said. “There wasn't really an opportunity for us to actually write our evaluation.” Judges would write what they thought of a beer, and the AI would generate several descriptions based on the judges’ notes, from which the judge would then have to select one. It would then pose additional questions for judges to answer that were “total garbage.”

“It was taking real human feedback, spitting out crap, and then making the human respond to more crap that it crafted for you,” the judge said.

“On top of all the misuse of our time and disrespecting us as judges, that really frustrated me—because it's not a good app,” they said.

Multiple judges then met to piece together what was happening, and Loudon published his open letter in April.

“They introduced this AI model to their pool of 40+ judges in the middle of the competition judging, surprising everyone for the sudden shift away from traditional judging methods,” the letter says. “Results are tied back to each judge to increase accountability and ensure a safe, fair and equitable judging environment. Judging for competitions is a very human experience that depends on people filling diverse roles: as judges, stewards, staff, organizers, sorters, and venue maintenance workers,” the letter says.

“Their intentions to gather our training data for their own profit was apparent,” the letter says. It adds that one judge said “I am here to judge beer, not to beta test.” 

The letter concluded with this: “To our fellow beverage judges, beverage industry owners, professionals, workers, and educators: Sign our letter. Spread the word. Raise awareness about the real human harms of AI in your spheres of influence. Have frank discussions with your employers, colleagues, and friends about AI use in our industry and our lives. Demand more transparency about competition organizations.”

33 people signed the letter. They included judges, breweries, and members of homebrewer associations in Canada and the United States.

Loudon told 404 Media in a recent phone call “you need to tell us if you're going to be using our data; you need to tell us if you're going to be profiting off of our data, and you can't be using volunteers that are there to judge beer. You need to tell people up front what you're going to do.”

At least one brewery that entered its beer into the Canadian Brewing Awards publicly called out Best Beer and the awards. XhAle Brew Co., based out of Alberta, wrote in a Facebook post in April that it asked for its entry fees of $565 to be refunded, and for the “destruction of XhAle's data collected during, and post-judging for the Best Beer App.”

“We did not consent to our beer being used by a private equity tech fund at the cost to us (XhAle Brew Co. and Canadian Brewers) for a for-profit AI application. Nor do we condone the use of industry volunteers for the same purpose,” the post said.

Ob Simmonds, head of innovation at the Canadian Brewing Awards, told 404 Media in an email that “Breweries will have amazing insight on previously unavailable useful details about their beer and their performance in our competition. Furthermore, craft beer drinkers will be able to better sift through the noise and find beers perfect for their palate. This in no way is aimed at replacing technical judging with AI.”

With the consumer app, the idea was to “Help end users find beers that match their taste profile and help breweries better understand their results in our competition,” Simmonds said.

Simmonds said that “AI is being used to better match consumers with the best beers for their palate,” but said Best Beer is not training its own model.

Those plans have come to a halt though. At the end of September, the Canadian Brewing Awards said in an Instagram post the team was “stepping away.” It said the goal of Best Beer was to “make medals matter more to consumers, so that breweries could see a stronger return on their entries.” The organization said it “saw strong interest from many breweries, judges and consumers” and that it will donate Best Beer’s assets to a non-profit that shows interest. The post added the organization used third-party models that “were good enough to achieve the results we wanted,” and the privacy policies forbade training on the inputted data.

A screenshot of the Canadian Brewing Awards' Instagram post.

The post included an apology: “We apologize to both judges and breweries for the communication gaps and for the disruptions caused by this year’s logistical challenges.”

In an email sent to 404 Media this month, the Canadian Brewing Awards said “the Best Beer project was never designed to replace or profit from judges.” 

“Despite these intentions, the project came under criticism before it was even officially launched,” it added, saying that the open letter “mischaracterized both our goals and approach.”

“Ultimately, we decided not to proceed with the public launch of Best Beer. Instead, we repurposed parts of the technology we had developed to support a brewery crawl during our gala. We chose to pause the broader project until we could ensure the judging community felt confident that no data would be used for profit and until we had more time to clear up the confusion,” the email added. “If judges wanted their data deleted what assurance can we provide them that it was in fact deleted. Everything was judged blind and they would have no access to our database from the enhanced division. For that reason, we felt it was more responsible to shelve the initiative for now.”

One judge told 404 Media: “I don’t think anyone who is hell bent on using AI is going to stop until it’s no longer worth it for them to do so.” 

“I just hope that they are transparent if they try to do this again to judges who are volunteering their time, then either pay them or give them the chance ahead of time to opt-out,” they added.

Now months after this all started, Loudon said “The best beers on the market are art forms. They are expressionist. They're something that can't be quantified. And the human element to it, if you strip that all away, it just becomes very basic, and very sanitized, and sterilized.” 

“Brewing is an art.”


A Minimalist Paper Sconce That Hangs With Just Push-Pins


Long before Pinterest, the humble bulletin board with push-pins was the tool for quick idea-making: fast, simple, and accessible. That rapid, almost improvisational rhythm can spark unexpected connections and moments of clarity that feel uniquely human. Toronto-based designer Maxwell Sims channels that same spirit into lighting with the Pin Sconce – a fixture that embraces immediacy, ingenuity, and a touch of playfulness while rethinking what a wall light can be.

Crafted from heavy cotton paper, the sconce borrows from pop-up card mechanics to lift the center of what at first glance looks like a delicate, fluted shade. Sims describes it as an homage to the “paperness” of the material itself – a celebration of fragility, tactility, and form in one gesture. In an exclusive interview, he shares the inspiration behind the design, and where he hopes to take the concept next.

Black ink sketches show parts of a sconce assembly: a square with a cord, a curved sheet, and the final sconce.

A piece of white paper with evenly spaced lines and two rectangular extensions lies on a wooden surface near a window.

There’s something that’s so intuitive about the Pin Sconce, and much of your work. How does materiality foster this connection?

I do work a lot with paper, so I’m glad that that was the impression, that there’s something kind of understandable about it. Paper, firstly, offers something very communicable, which I like. The result of a form, or the logic of a form, is easily communicated to someone who may not know anything about design, as opposed to something that’s injection molded or milled, where the form can be very esoteric. Paper and sheet metal generally offer this very constrained starting point, which I think is easily understood by a lot of people who may engage with the work – this is why I ended up working so much with sheet metal and paper.

A piece of gray paper is folded in an arch shape, resting on a wooden surface with two rectangular cutouts beneath each end of the arch.

The practical, generative nature of paper fits so well into the concept of the Pin Sconce. How does this inform your process?

I feel as if a lot of design has moved to 3D modeling, 3D printing, and mostly existing in photographs on the internet. As a result of that, I think first, a lot of formal considerations or constraints have been lost, which makes objects look a certain way, so it’s hard to grasp as a viewer. Secondly, there’s some essence of humanity that’s maybe a bit lost in that sort of very complex industrial production, or just being a render on the internet. And so paper is something I can do from my studio that really does work against that sort of narrative, and I feel like, just like, also selfishly gives me some positionality to make something that hopefully reads as human. And that’s the dilemma, I find, as a younger, emerging designer is like, how do you even develop a positionality when everything feels like it kind of exists already online.

Three wall-mounted lamps with gradient gray shades are displayed on a wooden surface, each with a white cord and screws placed nearby.

A white wall-mounted lamp with a pleated shade and a visible cord hanging down, positioned against a plain light-colored wall.

How does this flexible sustainability in material influence your design philosophy?

There’s a book called Small Is Beautiful – one of the principles was Existenzminimum. It was a German design movement operating on the idea that one way to approach designing things is to make them as little as possible. I don’t necessarily operate exactly on those terms, but the Pin Sconce is a good example. Of course, it could be sheet metal, or it could be acrylic, or anything else, but paper does the exact same thing, and it does it for less. And I think that there’s a satisfying quality to that. It’s operating on a minimum use of material, and I think that sustainability factors into that mindset. It just really needs to feel as if it’s something that’s being engaged with in a meaningful way.

A modern wall-mounted lamp with a white, angular lampshade, attached to a rectangular wall plate, and a visible white power cord hanging down.

Maxwell F. Sims is a designer engaging in the intersection where material, interaction, and function meet. Expressive yet intensely practical, his work readily engages a wider audience outside the design community, forging new connections.

Close-up of a light gray, textured lampshade with a pleated design, with a white cord visible in the background.

To learn more about the Pin Sconce from Maxwell Sims, please visit maxwellsims.com.

Photography courtesy of Maxwell Sims.


The AI Bubble's Impossible Promises


Readers: I’ve done a very generous “free” portion of this newsletter, but I do recommend paying for premium to get the in-depth analysis underpinning the intro. That being said, I want as many people as possible to get the general feel for this piece. Things are insane, and it’s time to be realistic about what the future actually looks like.


We’re in a bubble. Everybody says we’re in a bubble. You can’t say we’re not in a bubble anymore without sounding insane, because everybody is now talking about how OpenAI has promised everybody $1 trillion, something you could have read about two weeks ago in my premium newsletter.

Yet we live in a chaotic, insane world, where we can watch the news and hear hand-wringing over the fact that we’re in a bubble, read article after CEO after article after CEO after analyst after investor saying we’re in a bubble, yet the market continues to rip ever-upward on increasingly more-insane ideas, in part thanks to analysts who continue to ignore the very signs they’re relied upon to read.

AMD and OpenAI signed a very strange deal where AMD will give OpenAI the chance to buy 160 million shares at a cent apiece, in tranches of indeterminate size, for every gigawatt of data centers OpenAI builds using AMD’s chips, adding that OpenAI has agreed to buy “six gigawatts of GPUs.”

This is a peculiar way to measure GPUs, which are traditionally counted per unit and priced per chip, but nevertheless, these chips are going to be a mixture of AMD’s MI450 Instinct GPUs — which we don’t know the specs of! — and its current-generation MI350 GPUs, making the actual scale of these purchases a little difficult to grasp, though the Wall Street Journal says it would “result in tens of billions of dollars in new revenue” for AMD.

This AMD deal is weird, but one that’s rigged in favour of Lisa Su and AMD. OpenAI doesn’t get a dollar at any point — it has to work out how to buy those GPUs and figure out how to build six further gigawatts of data centers on top of the 10GW of data centers it promised to build for NVIDIA and the seven-to-ten gigawatts that are allegedly being built for Stargate, bringing it to a total of somewhere between 23 and 26 gigawatts of data center capacity.

Hell, while we’re on the subject, has anyone thought about how difficult and expensive it is to build a data center? 

Everybody is very casual with how they talk about Sam Altman’s theoretical promises of trillions of dollars of data center infrastructure, and I'm not sure anybody realizes how difficult even the very basics of this plan will be.

Nevertheless, everybody is happily publishing stories about how Stargate Abilene Texas — OpenAI’s massive data center with Oracle — is “open,” by which they mean two buildings, and I’m not even confident both of them are providing compute to OpenAI yet. There are six more of them that need to get built for this thing to start rocking at 1.2GW — even though it’s only 1.1GW according to my sources in Abilene.

But, hey, sorry — one minute — while we’re on that subject, did anybody visiting Abilene in the last week or so ever ask whether they’ll have enough power there? 

Don’t worry, you don’t need to look. I’m sure you were just about to, but I did the hard work for you and read up on it, and it turns out that Stargate Abilene only has 200MW of power — a 200MW substation that, according to my sources, has only been built within the last couple of months, with 350MW of gas turbine generators that connect to a natural gas power plant that might get built by the end of the year.

Said turbines are extremely expensive to run, with volatile fuel pricing (for context, natural gas price volatility fell in Q2 2025…to 69% annualized) and even more volatile environmental consequences, and are, while permitted for it (this will download the PDF of the permit), impractical and expensive to use long-term.

Analyst James van Geelen, founder of Citrini Research recently said on Bloomberg’s Odd Lots podcast that these are “not the really good natural gas turbines” because the really good ones would take seven years to deliver due to a natural gas turbine shortage.

But they’re going to have to do. According to sources in Abilene, developer Lancium has only recently broken ground on the 1GW substation and five transformers OpenAI’s going to need to build out there, and based on my conversations with numerous analysts and researchers, it does not appear that Stargate Abilene will have sufficient power before the year 2027. 

Then there’s the question of whether 1GW of power actually gets you 1GW of compute. This is something you never see addressed in the coverage of OpenAI’s various construction commitments, but it’s an important point to make. Analyst Daniel Bizo, Research Director at the Uptime Institute, explained that 1 gigawatt of power is only sufficient to power (roughly) 700 megawatts of data center capacity. We’ll get into the finer details of that later in this newsletter, but if we assume that ratio is accurate, we’re left with a troubling problem.

That figure represents a 1.43 PUE — Power Usage Effectiveness — and if we apply that to Stargate Abilene, we see that it needs at least 1.7GW of power, and currently only has 200MW.

As an aside, I need to clear something up, because everybody — including myself! — has been getting this wrong.

When you read “1.2GW data center,” they are almost certainly referring to the data center’s IT load — which is the power consumed by all of the computing equipment inside, but not the cooling systems or power lost in the infrastructure bringing the electricity to the gear itself. The amount of non-IT load power required, furthermore, can fluctuate. 

Data centers need far more power than their IT load, and any time you read about a “gigawatt” data center, know that, at the 1.43 PUE above, the facility needs roughly 40% more power than the IT capacity it advertises.

Stargate Abilene does not have sufficient power to run at even half of its supposed IT load of 1.2GW, and at its present capacity — assuming that the gas turbines function at full power — can only hope to run 370MW to 460MW of IT load.
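The arithmetic above can be sketched in a few lines. The function names and the 1.2-to-1.5 PUE spread are my illustrative assumptions; the 1.43 PUE, the 1.2GW IT load, and the 200MW substation plus 350MW of turbines come from the reporting above:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT load power.

def required_facility_power_mw(it_load_mw, pue):
    """Total facility power needed to support a given IT load."""
    return it_load_mw * pue

def supported_it_load_mw(facility_power_mw, pue):
    """IT load that a given amount of facility power can actually run."""
    return facility_power_mw / pue

# Stargate Abilene's advertised 1.2GW IT load at a 1.43 PUE:
print(round(required_facility_power_mw(1200, 1.43)))  # 1716 MW, i.e. ~1.7GW

# Power on site today: the 200MW substation plus 350MW of gas turbines.
on_site_mw = 200 + 350
for pue in (1.5, 1.2):
    print(round(supported_it_load_mw(on_site_mw, pue)))  # 367, then 458
```

That PUE spread is where the 370MW-to-460MW range comes from: even with every turbine running at full output, the site can power roughly a third of its advertised IT load.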

I’ve seen article after article about the gas turbines and their use of fracked gas — a disgusting and wasteful act typical of OpenAI — but nobody appears to have asked “how much power does a 1.2GW data center require?” and then chased it with “how much power does Stargate Abilene have?”

The answer is not enough, and the significance of said “not enough” is remarkable.

Today, I’m going to tell you, at length, how impossible the future of generative AI is. 

What Makes a Gigawatt

Gigawatt data centers are a ridiculous pipe dream, one that runs face-first into the walls of reality.  

The world’s governments and media have been far too cavalier with the term “gigawatt,” casually breezing by the fact that Altman’s plans require 17 or more nuclear reactors’ worth of power, as if building power is quick and easy and cheap and just happens.

I believe that many of you think that this is an issue of permitting — of simply throwing enough money at the problem — when we are in the midst of a shortage of the electrical-grade steel and transformers required to expand America’s (and the world’s) power grid.

I realize it’s easy to get blinded by the constant drumbeat of “gargoyle-like tycoon cabal builds 1GW data center” and feel that they will simply overwhelm the problem with money, but no, I’m afraid that isn’t the case at all, and all of this is so silly, so ridiculous, so cartoonishly bad that it threatens even the seemingly-infinite wealth of Elon Musk, with xAI burning over a billion dollars a month and planning to spend tens of billions of dollars building the Colossus 2 data center, dragging two billion dollars from SpaceX in his desperate quest to burn as much money as possible for no reason. 

This is the age of hubris — a time in which we are going to watch stupid, powerful and rich men fuck up their legacies by finding a technology so vulgar in its costs and mythical outcomes that it drives the avaricious insane and makes fools of them. 

Or perhaps this is what happens when somebody believes they’ve found the ultimate con — the ability to become both the customer and the business, which is exactly what NVIDIA is doing to fund the chips behind Colossus 2.

According to Bloomberg, NVIDIA is creating a company — a “special purpose vehicle” — that it will invest $2 billion in, along with several other backers. Once that’s done, the special purpose vehicle will then use that equity to raise debt from banks, buy GPUs from NVIDIA, and then rent those GPUs to Elon Musk for five years.

Hell, why make it so complex? NVIDIA invested money in a company specifically built to buy chips from it, which then promptly handed the money back to NVIDIA along with a bunch of other money, and then whatever happened next is somebody else’s problem.

Right?

Actually, wait — how long do GPUs last, exactly? Four years for training? Three years? The A100 GPU started shipping in May 2020, and the H100 (and the Hopper GPU generation) entered full production in September 2022, meaning that we’re hurtling at speed toward the time in which we’re going to start seeing a remarkable amount of chips start wearing down, which should be a concern for companies like Microsoft, which bought 150,000 Hopper GPUs in 2023 and 485,000 of them in 2024.

Alright, let me just be blunt: the entire economy of debt around GPUs is insane.

Assuming these things don’t die within five years (their warranties generally end in three), their value absolutely will, as NVIDIA has committed to releasing a new AI chip every single year, likely with significant increases to power and power efficiency. At the end of the five year period, the Special Purpose Vehicle will be the proud owner of five-year-old chips that nobody is going to want to rent at the price that Elon Musk has been paying for the last five years. Don’t believe me? Take a look at the rental prices for H100 GPUs that went from $8-an-hour in 2023 to $2-an-hour in 2024, or the Silicon Data Indexes (aggregated realtime indexes of hourly prices) that show H100 rentals at around $2.14-an-hour and A100 rentals at a dollar-an-hour, with Vast.AI offering them at as little as $0.67 an hour.
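A back-of-envelope sketch makes that depreciation concrete. The hourly rates come from the figures above; the 24/7 utilization assumption and the function name are mine:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_rental_revenue(rate_per_hour, utilization=1.0):
    """Gross yearly rental revenue from one GPU at a given hourly rate."""
    return rate_per_hour * HOURS_PER_YEAR * utilization

print(annual_rental_revenue(8.0))   # 70080.0 -- at the 2023 H100 rate
print(annual_rental_revenue(2.0))   # 17520.0 -- at the 2024 rate
print(annual_rental_revenue(0.67))  # ~5869 -- at Vast.AI's low-end offer
```

At 2023 prices, a roughly $70,000 H100 grossed back its purchase price in about a year; at 2024 prices, that stretches to about four years, and that’s before power, cooling, financing, and the next NVIDIA generation pushing rates down further.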

This is, by the way, a problem that faces literally every data center being built in the world, and I feel insane talking about it. It feels like nobody is talking about how impossible and ridiculous all of this is. It’s one thing that OpenAI has promised one trillion dollars to people — it’s another that large swaths of that will be spent on hardware that will, by the end of these agreements, be half-obsolete and generating less revenue than ever.

Think about it. Let’s assume we live in a fantasy land where OpenAI is somehow able to pay Oracle $300 billion over 5 years — which, although the costs will almost certainly grow over time, and some of the payments are front-loaded, averages out to $5bn each month, which is a truly insane number that’s in excess of what Netflix makes in revenue. 
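The amortization in that scenario is simple division. The figures come from the text; a flat payment schedule is assumed for illustration, even though some payments are front-loaded:

```python
# $300 billion owed to Oracle over five years, as a flat monthly average.
total_usd = 300e9
months = 5 * 12
monthly_usd = total_usd / months
print(monthly_usd)  # 5000000000.0 -- $5 billion a month
```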

Said money is paying for access to Blackwell GPUs, which will, by then, be at least two generations behind, with NVIDIA’s Vera Rubin GPUs due next year. What happens to that GPU infrastructure? Why would OpenAI continue to pay the same rental rate for five-year-old Blackwell GPUs?  

All of these ludicrous investments are going into building data centers full of what will, at that point, be old tech. 

Let me put it in simple terms: imagine you, for some reason, rented an M1 Mac when it was released in 2020, and your rental was done in 2025, when we’re onto the M4 series. Would you expect somebody to rent it at the same price? Or would they say “hey, wait a minute, for that price I could rent one of the newer generation ones.” And you’d be bloody right! 

Now, I realize that $70,000 data center GPUs are a little different to laptops, but that only makes their decline in value more profound, especially considering the billions of dollars of infrastructure built around them. 

And that’s the problem. Private equity firms are sinking $50 billion or more a quarter into theoretical data center projects full of what will be years-old GPU technology, despite the fact that there’s no real demand for generative AI compute, and that’s before you get to the grimmest fact of all: that even if you can build these data centers, it will take years and billions of dollars to deliver the power, if it’s even possible to do so.

Harvard economist Jason Furman estimates that data centers and software accounted for 92% of GDP growth in the first half of this year, in line with my conversation with economist Paul Kedrosky from a few months ago.

All of this money is being sunk into infrastructure for an “AI revolution” that doesn’t exist, as every single AI company is unprofitable, with pathetic revenues ($61 billion or so if you include CoreWeave and Lambda, both of which are being handed money by NVIDIA), impossible-to-control costs that have only ever increased, and no ability to replace labor at scale (and especially not software engineers).  

OpenAI needs more than a trillion dollars to pay its massive cloud compute bills and build 27 gigawatts of data centers, and to get there, it needs to start making incredible amounts of money, a job that’s been mostly handed to Fidji Simo, OpenAI’s new CEO of Applications, who is solely responsible for turning a company that loses billions of dollars into one that makes $200 billion in 2030 with $38 billion in profit. She’s been set up to fail, and I’m going to explain why.

In fact, today I’m going to explain to you how impossible all of this is — not just expensive, not just silly, but actively impossible within any of the timelines set.

Stargate will not have the power it needs before the middle of 2026 — the beginning of Oracle’s fiscal year 2027, when OpenAI has to pay it $30 billion for compute — or, according to The Information, choose to walk away if the capacity isn’t complete. And based on my research, analysis and discussions with power and data center analysts, gigawatt data centers are, by and large, a pipedream, with their associated power infrastructure taking two to four years, and that’s if everything goes smoothly.

OpenAI cannot build a gigawatt of data centers for AMD by the “second half of 2026.” It hasn’t even announced the financing, let alone where the data center might be, and until it does, it’s impossible to plan the power, which in and of itself takes months before you even start building. 

Every promise you’re reading in the news is impossible. Nobody has even built a gigawatt data center, and more than likely nobody ever will. Stargate Abilene isn’t going to be ready in 2026, won’t have sufficient power until at best 2027, and based on the conversations I’ve had it’s very unlikely it will build that gigawatt substation before the year 2028. 

In fact, let me put it a little simpler: all of those data center deals you’ve seen announced are basically bullshit. Even if they get the permits and the money, there are massive physical challenges that cannot be resolved by simply throwing money at them. 

Today I’m going to tell you a story of chaos, hubris and fantastical thinking. I want you to come away from this with a full picture of how ridiculous the promises are, and that’s before you get to the cold hard reality that AI fucking sucks. 
