Resident of the world, traveling the road of life
68337 stories
·
21 followers

The futile future of the gigawatt datacenter — by Nicholas Weaver

1 Share

Guest post by Nicholas C. Weaver

The AI companies and related enterprises are currently spending huge amounts in capital expenditures (CAPEX) to build “Gigawatt” class data centers for various AI-related development.

The scale of the investment is so large it is distorting the economy. These massive expenditures, however, are shortly going to prove to be at least a half-trillion-dollar money pit: these massive data centers are simply not fit for purpose.

But to understand why this is a waste of money, it is critical to discuss two technologies: machine learning (ML) in general and large language models (LLMs) in particular.

What is Machine Learning?

Machine learning is a set of general techniques to develop what are, at heart, pattern matching tools: given a sequence of inputs, what is it? Although these techniques are actually very old, it is only in the past decade and a half, with the introduction of modern Graphics Processing Units (GPUs) that ML became feasible for a wide variety of tasks.

ML systems in general work by taking a large amount of “training data”, feeding it into the ML system to “train” it, and, once complete, the ML system performs “inference”: taking a piece of input and saying what type of output it is.

Of course the ML systems do have some drawbacks: they are opaque, they require a lot of computational power, and they can be wrong! So in general, a good recipe for using ML is:

  1. You need to build a pattern matcher.
  2. You have no clue what to actually look for (because a conventional pattern matcher would be vastly more efficient).
  3. When you are done, you still have no clue what to look for (because you could then rewrite it as a more efficient conventional pattern matcher).
  4. It is OK to be hilariously wrong a non-zero percentage of the time.

Although seemingly limited, the result can be quite spectacular: the classic ML success story in the past decade and change is speech recognition.

Speech recognition is a hard problem even for humans: how many times have you misunderstood what someone said? It proved even harder for computers until ML systems grew to a practical scale. Now we take tools like Siri for granted.

Siri started out in the datacenter: when you asked Siri for something, the phone would package up your request and send it to one of Apple’s servers, where a powerful computer would parse the speech. Now it largely runs on the phone itself: I can put my phone into airplane mode and have Siri do things because the CPU in my phone includes specialized hardware to make ML run fast.

This works because ML inference may require a lot of math, but the math itself is very regular, effectively multiplying a large number of numbers together. This allows for circuit designs that just place a lot of multipliers and adders together, which by computer standards is highly efficient. Just building lots of math is only part of the story, however, as the math doesn’t have to be very good.

So instead of using high precision math that can represent 18,446,744,073,709,551,616 different numbers (the 64-bit math that most computers normally use), ML inference can use low precision math. Apple’s system largely uses 16-bit math (able to represent 65,536 distinct values). Other ML systems might use 8-bit math (able to represent 256 values) or even just 4-bit (able to represent a total of just 16 values)! Such math means a reduction in accuracy — but if wrong is OK, a little bit more wrong but a lot cheaper is probably OK too.

This means that ML inference is, whenever possible, best done on the edge. Edge computing might have a slightly higher error rate, as models are compressed to be able to fit on a phone, laptop, or desktop computer. In return, however, the costs of the ML system are greatly reduced. Instead of needing an expensive GPU system in the cloud that may cost $10-75 an hour to run, the cost per invocation is effectively zero.

What are Large Language Models?

A large language model (LLM) is a text-based ML system initially trained on effectively all the text the LLM company can download or pirate. These systems require a massive amount of resources to initially train and operate, but do produce a highly convincing obsequious chat bot.

Unfortunately, LLMs in particular have two critical problems: the wrongness inherent in all machine learning systems, and a fundamental inability to distinguish between “data” and “program”.

The wrongness problem is simply how they operate. LLM purveyors talk about ‘hallucinations’, when the LLM produces some wrong output, as just an unfortunate side effect that they are trying to control. This is a lie. Rather, all of a LLM’s output is bullshit in the philosophical sense: statements that are divorced from whether or not they are true or false. The point of a LLM is to output text that looks right given the training data and query, not to produce text that is right.

Even so-called “reasoning” models aren’t. Instead they output a separate set of text that also sounds like reasoning, but isn’t, as witnessed whenever one tries a random variation of the “river crossing” problem.

Compounding the problem of wrongness is a LLM-specific security flaw: they cannot distinguish between ‘code’ and ‘data’ — the instructions to a LLM are simply more text that is fed into the LLM. This means that a LLM will always be fundamentally vulnerable to “prompt injection” attacks, with all protections being potentially brittle hacks that can’t fundamentally eliminate the problem. A LLM can never be used in a context where it can both “receive an input provided by a ‘bad guy’” and where “do something ‘bad’” is unsafe.

This means that LLMs are simply not fit for purpose in many applications. A LLM can’t act as an ‘agent’, reading your email and then doing something in response, because some spammer will instruct your agent to send a purchase to a Nigerian prince’s credit card processing facility.

This makes LLMs a niche technology: very few proposed applications for LLMs are both allowed to be wrong and can operate safely on untrusted input.

Enter the gigawatt datacenter

LLMs and the closely related image generators, unique amongst most ML applications, require staggering resources for both training and inference. These under-construction data centers feature tens of billions of dollars worth of Nvidia processors, drawing billions of watts of power, at an aggregate cost of at least $500B spent to-date by the major players. Yet these data centers are going to prove to be white elephants — because the intended applications simply won’t exist.

Even for these massive models, the process of inference is already best done on the edge. Deepseek showed that very large models can be shrunk in size with remarkably little impact on accuracy, with many other (mostly Chinese) companies following suit. There is an entire community on Reddit, LocalLLaMA, dedicated to running such systems in a home environment. Once again, “a little more wrong at much lower cost” shows its power in the ML space.

So if a company does want to use LLM, it is best done using local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for processing ML tasks. It doesn’t take too many users with $200/month subscriptions or too many calls to a $15 per million token API to justify paying for a few $4000 computers instead.

Training may want the large data center, but we’ve long since hit the point of diminishing returns. There is effectively no more text to train on, as even the LLM systems of a few years ago were trained on almost all the coherent text in existence. Similarly, more training, and more training data, will never eliminate the problem of the LLM being occasionally wrong. Instead, the best use of training is probably specialization: taking a large model designed for general input and shrinking it down for a specific task.

The current models were trained, and can be specialized, without the presence of the half-trillion-dollar worth of money pits that major companies like Amazon, Microsoft, Alphabet, and Meta are building.

DeepSeek v3’s claim of $5M for training is probably an exaggeration — but even 10x the cost would be just $50M in computing time for a model that is fully competitive with the best from OpenAI and Anthropic. DeepSeek v3 is a particularly large model, with 600 billion parameters, but can still run on a cluster of just 8 Mac Minis for inference, and could probably be specialized in a couple of hours running on one or two big servers, at a rental cost of a few hundred dollars.

The gigawatt data center is an evolutionary dead end. When the AI hype-bubble bursts, they are going to be multi-billion-dollar white elephants full of chips that are best simply turned off and the money regarded as wasted. There are not going to be customers: the existing resources are already sufficient for the proposed applications that have a hope of actually working where the rubber meets the road.

The future of ML is at the edge.


I did an interview with Nick about his ideas here! You can watch it on video (360p Zoom, baybee) or listen to the podcast! The transcript is on the Patreon for $5-and-up patrons. [YouTube; podcast]

 

 

Read the whole story
mkalus
13 hours ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

Pluralistic: The enshittification of labor (07 Nov 2025)

1 Comment and 2 Shares


Today's links



A Gilded Age editorial cartoon depicting a muscular worker and a corpulent millionaire squaring off for a fight; the millionaire's head has been replaced with the poop emoji from the cover of 'Enshittification,' its mouth covered in a grawlix-scrawled black bar.

The enshittification of labor (permalink)

While I formulated the idea of enshittification to refer to digital platforms and their specific technical characteristics, economics and history, I am very excited to see other theorists extend the idea of enshittification beyond tech and into wider policy realms.

There's an easy, loose way to do this, which is using "enshittification" to refer to "things generally getting worse." To be clear, I am fine with this:

https://pluralistic.net/2024/10/14/pearl-clutching/#this-toilet-has-no-central-nervous-system

But there's a much more exciting way to broaden "enshittification," which starts with the foundation of the theory: that the things we rely on go bad when the system stops punishing this kind of deliberate degradation and starts rewarding it. In other words, the foundation of enshittification is the enshittogenic policy environment:

https://pluralistic.net/2025/09/10/say-their-names/#object-permanence

That's where Pavlina Tcherneva comes in. Tcherneva is an economist whose work focuses on the power of a "job guarantee," which is exactly what it sounds like: a guarantee from the government to employ anyone who wants a job, by either finding or creating a job that a) suits that person's talents and abilities and b) does something useful and good. If this sounds like a crazy pipe-dream to you, let me remind you that America had a job guarantee and it was wildly successful, and created (among other things), the system of national parks, a true jewel in America's crown:

https://pluralistic.net/2020/10/23/foxconned/#ccc

Tcherneva's latest paper is "The Death of the Social Contract and the Enshittification of Jobs," in which she draws a series of careful and compelling parallels between my work on enshittification and today's employment crisis, showing how a job guarantee is the ultimate disenshittifier of work:

https://www.levyinstitute.org/wp-content/uploads/2025/11/wp_1100.pdf

Tcherneva starts by proposing a simplified model of enshittification, mapping my three stages onto three of her own:

  1. Bait: Lure in users with a great, often subsidized, service.

  2. Trap: Use that captive audience to attract businesses (sellers, creators, advertisers).

  3. Switch: Exploit those groups by degrading the experience for everyone to extract maximum profit.

How do these map onto the current labor market and economy? For Tcherneva, the "bait" stage was "welfare state capitalism," which was "shaped by post–Great Depression government reforms and lasted through the 70s." This was the era in which the chaos of the Great Depression gave rise to fiscal and monetary policy that promoted macroeconomic stability. It was the era of economic safety nets and mass-scale federal investment in American businesses, through the Reconstruction Finance Corporation, a federal entity that expanded into directly funding large companies during WWII. After the war, the US Treasury continued to play a direct role in finance, through procurement, infrastructure spending and provision of social services.

As Tcherneva writes, this is widely considered the "Golden Age" of the US economy, a period of sustained growth and rising standard of living (she also points out that these benefits were very unevenly distributed, thanks to compromises made with southern white nationalists that exempted farm labor, and a pervasive climate of misogyny that carved out home work).

The welfare state capitalism stage was celebrated not merely for the benefits that it brought, but also for the system it supplanted. Before welfare state capitalism, we had 19th century "banker capitalism," in which cartels and trusts controlled every aspect of our lives and gave rise to a string of spectacular economic bubbles and busts. Before that, we had the "industrial capitalism" of the Industrial Revolution, where large corporations seized power. Before that, it was "merchant capitalism," and before that, feudalism – where workers were bound to a lord's land, unable to escape the economic and geographic destiny assigned to them at birth.

So welfare state capitalism was a welcome evolution, at least for the workers who got to reap its benefits. But welfare state capitalism was short-lived. To understand what came next, Tcherneva cites Hyman Minsky (whose "theory of capitalist development" provides this epochal nomenclature for the various stages of capitalism over the centuries).

Minsky calls the capitalism that supplanted welfare state capitalism "money manager capitalism," the system that reigned from the Reagan revolution until the Great Financial Crisis of 2008. This was an era of "deregulation, eroding worker power, rapid increase in inequality, and a rise of the money manager class." It's the period of financialization, which favored the creation of gigantic conglomerates that wrapped banking services (loans, credit cards, etc) around their core offerings, from GE to Amazon.

Then came the crash of 2008, which gave us our current era, the era of "international money manager capitalism," which is the system in which gigantic, transnational funds capture our economy pumping and dumping a series of scammy bubbles, like crypto, metaverse, blockchain, and (of course) AI:

https://pluralistic.net/2025/09/27/econopocalypse/#subprime-intelligence

Welfare state capitalism was the "bait" stage of the enshittification of labor. Public subsidies and regulation produced an environment in which (many) workers were able to command a large share of the fruits of their labor, securing both a living wage and old-age surety. This was the era of the "family wage," in which a single earner could supply all the necessities of life to a family: an owner-occupied home, material sufficiency, and enough left over for vacations, Christmas presents and other trappings of "the good life."

During this stage, the "social contract" meant the government training a skilled workforce (through universal education) and public goods like roads and utilities. Companies got big contracts, but only if they accepted collective bargaining from their unions. Governments and corporations collaborated to secure a comfortable requirement for workers.

But this arrangement lacked staying power, thanks to a key omission in the social contract: the guarantee of a good job. Rather than continuing the job guarantee that brought America out of the Depression, all the post-New Deal order could offer the unemployed was unemployment insurance. This wasn't so important while America was booming and employers were begging for workers, but when growth slowed, the lack of a job guarantee suddenly became the most important fact of many workers' lives.

This was foreseen by the architects of the New Deal. FDR's "Second Bill of (Economic) Rights" would have guaranteed every American "national healthcare, paid vacation, and a guaranteed job":

https://en.wikipedia.org/wiki/Second_Bill_of_Rights

These guarantees were never realized, and for Tcherneva, this failure doomed welfare state capitalism. Unions were powerful during an era of tight labor markets and able to wring concessions out of capital, but once demand for workers ebbed (thanks to slowing growth and, later, offshoring), bosses could threaten workers with unemployment, breaking union power.

The social contract was bait, promising "economic security and decent jobs" through cooperation between the government, corporations and unions.

The switch came from Reagan, with mass-scale deregulation, a hack-and-slash approach to social spending, and the enshrining of a permanently unemployed reserve army of workers whose "job" was fighting inflation (by not having a job). Trump has continued this, with massive cuts to the federal workforce. Today, "job insecurity is not an unfortunate consequence of shifting economic winds, it is the objective of public policy."

For money manager capitalism, unemployment is a feature, not a bug – literally. Neoliberal economists invented something called the NAIRU ("non-accelerating inflation rate of unemployment"), which deliberately sets out to keep a certain percentage of workers in a state of unemployment, in order to fight inflation.

Here's how that works: if the economy is at full employment (meaning everyone who wants a job has one), and prices go up (say, because bosses decide to increase their rate of profit), then workers will demand and receive a pay-rise, because bosses can't afford to fire those "greedy" workers – there are no unemployed workers to replace them.

This means that if bosses want to maintain their rate of profit, they will have to raise prices again to pay those higher wages for their workers. But after that, workers' pay no longer goes as far as it used to, so workers demand another raise and then bosses have to hike prices again (if they are determined not to allow the decline of their own profits). This is called "the wage-price spiral" and it's what happens when bosses refuse to accept lower profits and workers have the power to demand that their wages get adjusted to keep up with prices.

Of course, this only makes sense if you think that bosses should be guaranteed their profits, even if that means that workers' real take-home pay (measured by purchasing power) declines. You aren't supposed to notice this, though. That's why neoliberal economists made it a sin to ask about "distributional effects" (that is, asking about how the pie gets divided) – you're only supposed to care about how big the pie gets:

https://pluralistic.net/2023/03/28/imagine-a-horse/#perfectly-spherical-cows-of-uniform-density-on-a-frictionless-plane

With the adoption of NAIRU, joblessness "was now officially sanctioned as necessary for the health of the economy." You could not survive unless you had a job, not everyone could have a job, and the jobs were under control of a financialized, concentrated corporate sector. Companies merged and competition disappeared. If you refused to knuckle under to the boss at your (formerly) good factory job, there wasn't another factory that would put you on the line. The alternative to those decaying industrial jobs were "unemployment and low-wage service sector work."

That's where the final phase of the enshittification of labor comes in: the "trap." For Tcherneva, the trap is "the brutal fact of necessity itself." You cannot survive without a roof over your head, without electricity, without food and without healthcare. As these are not provided by the state, the only way to procure them (apart from inherited wealth) is through work, and access to work is entirely in the hands of the private sector.

Once corporations capture control of housing (through corporate landlords), healthcare (though corporate takeover of hospitals, pharma, etc), and power (through privatization of utilities), they can squeeze the people who depend on these things, because there is no competitor. You can't opt out of shelter, food, electricity and healthcare – at least, not without substantial hardship.

In my own theory of enshittification, platforms hunt relentlessly for sources of lock-in (e.g., the high switching costs of losing your social media community or your platform data) and, having achieved it, squeeze users and businesses, secure in the knowledge that users can't readily leave for a better service. This is compounded by monopolization (which reduces the likelihood that a better service even exists) and regulatory capture (which gives companies a free hand to squeeze with). Once a company can squeeze you, it will.

Here, Tcherneva is translating this to macroeconomic phenomena: control over the labor market and capture of the necessaries of life allows companies to squeeze, and so they do. A company rips you off for the same reason your dog licks its balls: because it can.

Tcherneva describes the era of money manager capitalism as "the slow, grinding enshittification of daily life." It's an era of corporate landlords raising the rent and skimping on maintenance, while hitting tenants with endless junk fees. It's an era of corporate hospitals gouging you on bills, skimping on care, and screwing healthcare workers. It's an era of utilities capturing their public overseers and engaging in endless above-inflation price hikes:

https://pluralistic.net/2025/02/24/surfa/#mark-ellis

This is the "trap" of Tcherneva's labor enshittification, and it kicked off "a decades-long enshittification of working life." Enshittified labor is "low-wage jobs with unpredictable schedules and no benefits." Half of American workers earn less than $25/hour. The federal minimum wage is stuck at $7.25/hour. Half of all renters are rent-burdened and a third of homeowners are mortgage-burdened. A quarter of renters are severely rent-burdened, with more than half their pay going to rent.

Money manager capitalism's answer to this is…more finance. Credit cards, payday loans, home equity loans, student loans. All this credit isn't nearly sufficient to keep up with rising health, housing, and educational prices. This locks workers into "a lifetime of servicing debt, incurred to simulate a standard of living the social contract had once promised but their wages could no longer deliver."

To manage this impossible situation, money manager capitalism spun up huge "securitized" debt markets, the CDOs and ABSes that led to the Great Financial Crisis (today, international money manager capitalism is spinning up even more forms of securitized debts).

In my theory of enshittification, there are four forces that keep tech platforms from going bad: competition, regulation, a strong workforce and interoperability. For Tcherneva, these forces all map onto the rise and fall of the American standard of living.

Competition: Welfare state capitalism was born in a time of tight labor markets. Workers could walk out of a bad job and into a good one, forcing bosses to compete for workers (including by dealing fairly with unions). This was how we got the "good job," one with medical, retirement, training and health care benefits.

Regulation: The New Deal established the 40-hour week, minimum wages, overtime, and the right to unionize. As with tech regulation, this was backstopped by competition – the existence of a tight labor market meant that companies had to concede to regulation. As with tech regulation, the capture of the state meant the end of the benefits of regulation. With the rise of NAIRU, regulation was captured by bosses, with the government now guaranteeing a pool of unemployed workers who could be used to terrorize uppity employees into meek acceptance.

Interoperability: In tech enshittification, the ability to take your data, relationships and devices with you when you switch to a competitor means that the companies you do business with have to treat you well, or risk your departure. In labor enshittification, bosses use noncompetes, arbitration, trade secrecy, and nondisparagement to keep workers from walking across the street and into a better job. Some workers are even encumbered with "training repayment agreement provisions (TRAPs) that force them to pay thousands of dollars if they quit their jobs:

https://pluralistic.net/2022/08/04/its-a-trap/#a-little-on-the-nose

Worker power: In tech enshittification, tech workers – empowered by the historically tight tech labor market – are able to hold the line, refusing to enshittify the products they develop, with the constant threat that they can walk out the door and get a job elsewhere. In labor enshittification, NAIRU, combined with corporate capture of the necessaries of life and the retreat of unionization, means that workers have very little power to demand a better situation, which means their bosses can worsen things to their shriveled hearts' content.

As with my theory of enshittification, the erosion of worker power is an accelerant for labor enshittification. Weaker competition for workers means weaker labor power, which means weaker power to force the government to regulate. This sets the stage for more consolidation, weaker workers, and more state capture. This is the completion of the bait-trap-switch of the postwar social contract.

For Tcherneva, this enshittification arises out of the failure to create a job guarantee as part of the New Deal. And yet, a job guarantee remains very popular today:

https://www.jobguarantee.org/resources/public-support/

How would a job guarantee disenshittify the labor market? The job guarantee means a "permanent, publicly provided employment opportunity to anyone ready and willing to work, it establishes an effective floor for the entire labor market."

Under a job guarantee, any private employer wishing to hire a worker will have to beat the job guarantee's wages and benefits. No warehouse or fast-food chain could offer "poverty wages, unpredictable hours, and a hostile environment." It's an incentive to the private sector to compete for labor by restoring the benefits that characterized America's "golden age."

What's more, a job guarantee is administrable. A job guarantee means that workers can always access a safe, good job, even if the state fails to adequately police private-sector employers and their wages and working conditions. A job guarantee does much of the heavy lifting of enforcing a whole suite of regulations: "minimum wage laws, overtime rules, safety standards—that are constantly subject to political attack, corporate lobbying, and enforcement challenges."

A job guarantee also restores interoperability to the labor market. Rather than getting trapped in a deskilled, low-waged gig job, those at the bottom of the labor market will always have access to a job that comes with training and skills development, without noncompetes and other gotchas that trap workers in shitty jobs. For workers this means "career advancement and mobility." For society, "it delivers a pipeline of trained personnel to tackle our most pressing challenges."

And best of all, a job guarantee restores worker power. The fact that you can always access a decent job at a socially inclusive wage means that you don't have to eat shit when it comes to negotiating for your housing, health care and education. You can tell payday lenders, for-profit scam colleges (like Trump University), and slumlords to go fuck themselves.

Tcherneva concludes by pointing out that, as with tech enshittification, labor enshittification "is a political choice, not an economic inevitability." Labor enshittification is the foreseeable outcome of specific policies undertaken in living memory by named individuals. As with tech enshittification, we are under no obligation to preserve those enshittificatory policies. We can replace them with better ones.

If you want to learn more about the job guarantee, you can read my review of her book on the subject:

https://pluralistic.net/2020/06/22/jobs-guarantee/#job-guarantee

And the interview I did with her about it for the LA Times:

https://www.latimes.com/entertainment-arts/books/story/2020-06-24/forget-ubi-says-an-economist-its-time-for-universal-basic-jobs

Tcherneva and I are appearing onstage together next week in Lisbon at Web Summit to discuss this further:

https://websummit.com/sessions/lis25/2a479f57-a938-485a-acae-713ea9529292/working-it-out-job-security-in-the-ai-era/

And I assume that the video will thereafter be posted to Websummit's Youtube channel:

https://www.youtube.com/@websummit


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago PATRIOT Act secret-superwarrants use is up 10,000 percent https://www.washingtonpost.com/wp-dyn/content/article/2005/11/05/AR2005110501366_pf.html

#10yrsago Protopiper: tape-gun-based 3D printer extrudes full-size furniture prototypes https://www.youtube.com/watch?v=beRA4sIjxa8

#10yrsago EFF on TPP: all our worst fears confirmed https://www.eff.org/deeplinks/2015/11/release-full-tpp-text-after-five-years-secrecy-confirms-threats-users-rights

#10yrsago TPP will ban rules that require source-code disclosure https://www.keionline.org/39045

#10yrsago Publicity Rights could give celebrities a veto over creative works https://www.eff.org/deeplinks/2015/11/eff-asks-supreme-court-apply-first-amendment-speech-about-celebrities-0

#10yrsago How TPP will clobber Canada’s municipal archives and galleries of historical city photos https://www.geekman.ca/single-post/2015/11/the-tpp-vs-municipal-archives.html

#5yrsago HP ends its customers' lives https://pluralistic.net/2020/11/06/horrible-products/#inkwars

#1yrago Every internet fight is a speech fight https://pluralistic.net/2024/11/06/brazilian-blowout/#sovereignty-sure-but-human-rights-even-moreso


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers..

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026



Colophon (permalink)

Today's top sources:

Currently writing:

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

Read the whole story
mkalus
17 hours ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete
1 public comment
cjheinz
23 hours ago
reply
JG, FTW!
Lexington, KY; Naples, FL

Emergency subwoofer

1 Share

To go.


(Direktlink)

Read the whole story
mkalus
18 hours ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

Loacker Galaxy in Austria Takes Cues From Sweet Treats

1 Share

Loacker Galaxy in Austria Takes Cues From Sweet Treats

Since 1925, confectionary brand Loacker has been a purveyor of treats, known for their biscuits filled with hazelnut and chocolate cream. In celebration of the company’s centennial this year, the team at MoDusArchitects was tapped to refresh the company’s Loacker Galaxy flagship, and design a place that would be above and beyond a typical retail environment. “We wanted visitors to feel fully immersed, and to create an exciting, generous space where adults could experience the same awe and wonder as children in a candy store,” says Sandy Attia, co-founder of MoDusArchitects.

People walk by and enter a modern ice cream kiosk with a decorative roof, set against a backdrop of green forested hills and outdoor umbrellas.

Located in Heinfels, Austria, a newly completed pavilion enlivens the grounds outside of the main building. A truncated oval form is clad in stained, vertical wood boards with rhomboid cutouts. This structure screens the area from the adjacent parking lot and also covers the relocated ice cream kiosk beneath a concrete canopy.

A modern red and beige building with geometric columns, large windows, and “100 YEARS” signage, set against a mountainous landscape. Two people walk past in the foreground.

The 8,500-square-foot Loacker Galaxy references the manufacturer’s goods and packaging in three key areas. From the mezzanine, guests can peek into the lab or head into the production space themselves, where they can make their own goodies. The brand’s bold red hue covers ceramic floor tiles, while silver laminate and metal have a futuristic feel.

A small, modern red kiosk with geometric shapes and a decorative circular canopy is set against a backdrop of trees and mountains.

A small modern building with a geometric roof design and red glass walls stands outside, with green chairs nearby and mountains in the background.

Circular wooden structure with diamond-shaped cutouts and red accents, set against a backdrop of green trees and grassy hills.

Wooden facade with a geometric pattern of diamond-shaped cutouts backed by red, next to a modern entrance with a red frame and a door casting a shadow.

Modern café interior with a curved stainless steel counter, large windows, light wood walls and ceiling, gray tile flooring, and tables with chairs near the windows.

A coffered fir ceiling in the cafe echoes the look of favorite biscuits, and brings in a natural warmth. A custom stainless steel counter sits in the center of the casual space, in the shape of a Loacker wafer. Oak paneling on the walls and tabletops complements rich leather upholstery in a combination of light and dark caramel tones.

Modern café interior with a central circular metal counter, orange chairs, wooden paneled walls and ceiling, and geometric patterned design elements.

A close-up view of a modern ceiling featuring a grid pattern of light-colored wooden beams and perforated acoustic panels.

Modern café interior with light wood paneling, orange-brown chairs, small wooden tables, and two arched doorways, one featuring a red accent wall and booth seating.

A modern seating area with brown chairs and wooden tables in front of arched wooden wall alcoves, one with red upholstery and a table, near a staircase.

A modern retail store interior with shelves displaying neatly arranged red and white packaged products, gray floors, and metallic shelving units.

The shop, of course, offers delights for every sweet tooth. Loacker favorites are available to purchase after a day of touring the Galaxy complex. Instead of basic shelves, “totems” are placed on stone flooring. Photos of various products are found at the top of almost a dozen of these 12-foot-tall, three-dimensional grids attached on columns that swivel to make shopping easier. Factory-style vertical bins built into a wall hold an array of candies to choose from.

Modern retail store interior with shelves displaying assorted packaged products, including chocolates and gift boxes; cashier counter visible in the background.

A modern store interior with a wall of organized clear bins containing various dried pasta shapes, a metal counter, and a light wood ceiling.

Shelves stacked with hundreds of individually packaged food items, organized by type, with several rows filling the entire vertical space.

Wide gray stone stairs lead up to a modern ceiling with circular, geometric metal and black panel designs, illuminated by recessed lighting.

Modern, minimalist interior with white walls, black railing, and red exhibition tables arranged on a red floor in the center of a two-story gallery space.

View through a black grid structure onto a red-tiled display area with several exhibited items and informational panels in a modern, well-lit interior space.

Modern interior with red tables, gray countertops, and large windows offering a view of mountains and trees in the background.

A group of people wearing white coats and red hairnets gather around tables in a modern, clean kitchen workspace with bright lighting and glass partitions.

Modern interior with red tables, stainless steel countertops, and a large wall mural of mountains featuring a timeline of key dates and events.

For more information, please visit modusarchitects.com.

Photography by Marco Cappelletti.

Read the whole story
mkalus
1 day ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

OpenAI’s Sora 2 Floods Social Media With Videos of Women Being Strangled

1 Share
OpenAI’s Sora 2 Floods Social Media With Videos of Women Being Strangled

Social media accounts on TikTok and X are posting AI-generated videos of women and girls being strangled, showing yet another example of generative AI companies failing to prevent users from creating media that violates their own policies against violent content. 

One account on X has been posting dozens of AI-generated strangulation videos starting in mid-October. The videos are usually 10 seconds long and mostly feature a “teenage girl” being strangled, crying, and struggling to resist until her eyes close and she falls to the ground. Some titles for the videos include: “A Teenage Girl Cheerleader Was Strangled As She Was Distressed,” “Prep School Girls Were Strangled By The Murderer!” and “man strangled a high school cheerleader with a purse strap which is crazy.”

Many of the videos posted by this X account in October include the watermark for Sora 2, Open AI’s video generator, which was made available to the public on September 30. Other videos, including most videos that were posted by the account in November, do not include a watermark but are clearly AI generated. We don’t know if these videos were generated with Sora 2 and had their watermark removed, which is trivial to do, or created with another AI video generator. 

The X account is small, with only 17 followers and a few hundred views on each post. A TikTok account with a similar username that was posting similar AI-generated choking videos had more than a thousand followers and regularly got thousands of views. Both accounts started posting the AI-generated videos in October. Prior to that, the accounts were posting clips of scenes, mostly from real Korean dramas, in which women are being strangled. I first learned about the X account from a 404 Media reader, who told me X declined to remove the account after they reported it. 

“According to our Community Guidelines, we don't allow hate speech, hateful behavior, or promotion of hateful ideologies,” a TikTok spokesperson told me in an email. The TikTok account was also removed after I reached out for comment. “That includes content that attacks people based on protected attributes like race, religion, gender, or sexual orientation.”

X did not respond to a request for comment. 

OpenAI did not respond to a request for comment, but its policies state that “graphic violence or content promoting violence” may be removed from the Sora Feed, where users can see what other users are generating. In our testing, Sora immediately generated a video for the prompt “man choking woman” which looked similar to the videos posted to TikTok and X. When Sora finished generating those videos it sent us notifications like “Your choke scene just went live, brace for chaos,” and “Yikes, intense choke scene, watch responsibly.” Sora declined to generate a video for the prompt “man choking woman with belt,” saying “This content may violate our content policies.”

Safe and consensual choking is common in adult entertainment, be it various forms of BDSM or more niche fetishes focusing on choking specifically, and that content is easy to find wherever adult entertainment is available. Choking scenes are also common social media and more mainstream horror movies and TV shows. The UK government recently announced that it will soon make it illegal to publish or possess pornographic depictions of strangulation of suffocation

It’s not surprising, then, that when generative AI tools are made available to the public some people generate choking videos and violent content as well. In September, I reported about an AI-generated YouTube channel that exclusively posted videos of women being shot. Those videos were generated with Google’s Veo AI-video generator, despite it being against the company’s policies. Google said it took action against the user who was posting those videos.

Sora 2 had to make several changes to its guardrails since it launched after people used it to make videos of popular cartoon characters depicted as Nazis and other forms of copyright infringement.

Read the whole story
mkalus
1 day ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

AI gets 45% of news wrong — but readers still trust it

2 Shares

The BBC and the European Broadcasting Union have produced a large study of how well AI chatbots handle summarising the news. In short: badly. [BBC; EBU]

The researchers asked ChatGPT, Copilot, Gemini, and Perplexity about current events. 45% of the chatbot answers had at least one major issue. 31% were seriously wrong and 20% had major inaccuracies, from hallucinations or outdated sources. This is across multiple languages and multiple countries. [EBU, PDF]

The AI distortions are “significant and systemic in nature.”

Google Gemini was by far the worst. It would make up an authoritative-sounding summary with completely fake and wrong references — much more than the other chatbots. It also used a satire source as a news source. Pity Gemini’s been forced into every Android phone, hey.

Chatbots fail most with current news stories that are moving fast. They’re also really prone to making up quotes. Anything in quotes probably isn’t the words the person actually said.

7% of news consumers ask a chatbot for their news, and that’s 15% of readers under 25. And just over a third — though they don’t give the actual percentage number — say they trust AI summaries, and about half of those under 35. People pick convenience first. [BBC, PDF]

Peter Archer is the BBC’s Programme Director for Generative AI — what a job title — and is quoted in the EBU press release. Archer put forward these results even though they were quite bad. So full points for that.

Unfortunately, Archer also says in the press release: ‘We’re excited about AI and how it can help us bring even more value to audiences.”

Archer sees his task here as promoting the chatbots: “We want these tools to succeed and are open to working with AI companies to deliver for audiences and wider society.”

Anyone whose title is “Programme Director for Generative AI” is never going to sign off on a result that this stuff is poison to accurate news and the public discourse, and the BBC needs it gone — as this study makes clear. Because the job description is not to assess generative AI — it’s to promote generative AI. [job description]

So what happens next? The broadcasters have no plan to address the chatbot problem. The report doesn’t even offer ways forward. There’s no action points! Except do more studies!

They’re just going to cross their fingers and hope the chatbot vendors can be shamed into giving a hoot — the approach that hasn’t worked so far, and isn’t going to work.

Unless the vendors can cure chatbot hallucinations. And they can’t do that, because that’s how chatbots work. Everything a chatbot outputs is a hallucination, and some of the hallucinations are just closer to accurate.

The actual answer is to stop using chatbots for news, stop creating jobs inside the broadcasters whose purpose is to befoul the information stream with generative AI, and attach actual liability to the chatbot vendors when they output complete lies. Imagine a chatbot vendor having to take responsibility for what the lying chatbot spits out.

Read the whole story
mkalus
1 day ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete
Next Page of Stories