Premium: The Hater's Guide To Microsoft


Have you ever looked at something too long and felt like you were sort of seeing through it? Has anybody actually looked at a company this much in a way that wasn’t some sort of obsequious profile of a person who worked there? I don’t mean this as a way to fish for compliments — this experience is just so peculiar, because when you look at them hard enough, you begin to wonder why everybody isn’t just screaming all the time. 

Yet I really do enjoy it. When you push aside all the marketing and the interviews and all that and stare at what a company actually does and what its users and employees say, you really get a feel for the guts of a company. I’m enjoying it. The Hater’s Guides are a lot of fun, and I’m learning all sorts of things about the ways in which companies try to hide their nasty little accidents and proclivities. 

Today, I focus on one of the largest. 

In the last year I’ve spoken to over a hundred different tech workers, and the ones I hear most consistently from are the current and former victims of Microsoft, a company with a culture in decline, in large part thanks to its obsession with AI. Every single person I talk to about this company has venom on their tongue, whether they’re a regular user of Microsoft Teams or somebody who was unfortunate enough to work at the company at any time in the last decade.

Microsoft exists as a kind of dark presence over business software and digital infrastructure. You inevitably have to interact with one of its products — maybe it’s because somebody you work with uses Teams, maybe it’s because you’re forced to use SharePoint, or perhaps you’re suffering at the hands of PowerBI — because Microsoft is the king of software sales. It exists entirely to seep into the veins of an organization and force every computer to use Microsoft 365, or sit on effectively every PC you use, forcing you to interact with some sort of branded content every time you open your start menu.

This is a direct result of the aggressive monopolies that Microsoft built over effectively every aspect of using a computer, starting by throwing its weight around in the 80s to crowd out potential competitors to MS-DOS and eventually moving into everything including cloud compute, cloud storage, business analytics, video editing, and console gaming, and I’m barely a third of the way through the list of products. 

Microsoft uses its money to move into new markets, uses aggressive sales to build long-term contracts with organizations, and then lets its products fester until it’s forced to make them better before everybody leaves, with the best example being the recent performance-focused move to “rebuild trust in Windows” in response to the upcoming launch of Valve’s competitor to the Xbox (and Windows gaming in general), the Steam Machine.

Microsoft is a company known for two things: scale and mediocrity. It’s everywhere, its products range from “okay” to “annoying,” and virtually every one of its products is a clone of something else. 

And nowhere is that mediocrity more obvious than in its CEO.

Since taking over in 2014, CEO Satya Nadella has steered this company out of the darkness caused by aggressive possible chair-thrower Steve Ballmer, transforming from the evils of stack ranking to encouraging a “growth mindset” where you “believe your most basic abilities can be developed through dedication and hard work.” Workers are encouraged to be “learn-it-alls” rather than “know-it-alls,” all part of a weird cult-like pseudo-psychology that doesn’t really ring true if you actually work at the company.

Nadella sells himself as a calm, thoughtful and peaceful man, yet in reality he’s one of the most merciless layoff hogs in known history. He laid off 18,000 people in 2014, months after becoming CEO; 7,800 people in 2015; 4,700 people in 2016; 3,000 people in 2017; “hundreds” of people in 2018; took a break in 2019; every single worker in its physical stores in 2020, along with everybody who worked at MSN; took a break in 2021; 1,000 people in 2022; 16,000 people in 2023; 15,000 people in 2024; and 15,000 people in 2025.

Despite calling for a “referendum on capitalism” in 2020 and suggesting companies “grade themselves” on the wider economic benefits they bring to society, Nadella has overseen an historic surge in Microsoft’s revenues — from around $83 billion a year when he joined in 2014 to around $300 billion on a trailing 12-month basis — while acting in a way that’s callously indifferent to both employees and customers alike. 

At the same time, Nadella has overseen Microsoft’s transformation from an asset-light software monopolist that most customers barely tolerate to an asset-heavy behemoth that feeds its own margins into GPUs that only lose it money. And it’s that transformation that is starting to concern investors, and raises the question of whether Microsoft is heading towards a painful crash. 

You see, Microsoft is currently trying to pull a fast one on everybody, claiming that its investments in AI are somehow paying off despite the fact that it stopped reporting AI revenue in the first quarter of 2025. In reality, the one segment where it would matter — Microsoft Azure, Microsoft’s cloud platform where the actual AI services are sold — is stagnant, all while Redmond funnels virtually every dollar of revenue directly into more GPUs. 

Intelligent Cloud also represents around 40% of Microsoft’s total revenue, and has done so consistently since FY2022. Azure sits within Microsoft's Intelligent Cloud segment, along with server products and enterprise support.

For the sake of clarity, here’s how Microsoft describes Intelligent Cloud in its latest end-of-year 10-K filing:

Our Intelligent Cloud segment consists of our public, private, and hybrid server products and cloud services that power modern business and developers. This segment primarily comprises:

  • Server products and cloud services, including Azure and other cloud services, comprising cloud and AI consumption-based services, GitHub cloud services, Nuance Healthcare cloud services, virtual desktop offerings, and other cloud services; and Server products, comprising SQL Server, Windows Server, Visual Studio, System Center, related Client Access Licenses (“CALs”), and other on-premises offerings.
  • Enterprise and partner services, including Enterprise Support Services, Industry Solutions, Nuance professional services, Microsoft Partner Network, and Learning Experience.

It’s a big, diverse thing — and Microsoft doesn’t really break things down further from here — but Microsoft makes it clear in several places that Azure is the main revenue driver in this fairly diverse business segment. 

Some bright spark is going to tell me that Microsoft said it has 15 million paid 365 Copilot subscribers (which, I’ll add, sits under its Productivity and Business Processes segment), with reporters specifically saying these were corporate seats, a fact I dispute, because this is the quote from Microsoft’s latest earnings call:

We saw accelerating seat growth quarter-over-quarter and now have 15 million paid Microsoft 365 Copilot seats, and multiples more enterprise Chat users.

At no point does Microsoft say “corporate seat” or “business seat.” “Enterprise Copilot Chat” is a free addition to multiple different Microsoft 365 products, and Microsoft 365 Copilot could also refer to Microsoft’s $18 to $21-a-month addition to Copilot Business, as well as Microsoft’s enterprise $30-a-month plans. And remember: Microsoft regularly does discounts through its resellers to bulk up these numbers.

As an aside: If you had anything to do with the design of Microsoft’s investor relations portal, you are a monster. Your site sucks. Forcing me to use your horrible version of Microsoft Word in a browser made this newsletter take way longer. Every time I want to find something on it I have to click a box and click find and wait for your terrible little web app to sleepily bumble through your 10-Ks.

If this is a deliberate attempt to make the process more arduous, know that no amount of encumbrance will stop me from going through your earnings statements, unless you have Satya Nadella read them. I’d rather drink hemlock than hear another minute of that man speak after his interview from Davos. At one point he gives an answer that’s five and a half minutes long, and listening to it feels like sustaining a concussion. 

Microsoft Is Wasting Its Money On AI — And Using It To Paper Over The Flagging Growth Of Azure

When Nadella took over, Microsoft had around $11.7 billion in PP&E (property, plant, and equipment). A little over a decade later, that number has ballooned to $261 billion, with the vast majority added since 2020 (when Microsoft’s PP&E sat around $41 billion). 

Also, as a reminder: Jensen Huang has made it clear that GPUs are going to be upgraded on a yearly cycle, guaranteeing that Microsoft’s armies of GPUs regularly hurtle toward obsolescence. Microsoft, like every big tech company, has played silly games with how it depreciates assets, extending the “useful life” of all GPUs so that they depreciate over six years, rather than four. 

And while someone less acquainted with corporate accounting might assume that this move is a prudent, fiscally-conscious tactic to reduce spending by using assets for longer, and stretching the intervals between their replacements, in reality it’s a handy tactic to disguise the cost of Microsoft’s profligate spending on the balance sheet. 
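To see why, it helps to sketch the straight-line math: the same purchase cost spread across more years means a smaller expense hitting each year’s income statement. The dollar figure below is purely hypothetical, chosen to show the shape of the trick rather than Microsoft’s actual GPU outlay:

```python
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: equal expense in each year of the asset's life."""
    return cost / useful_life_years

# A hypothetical $12B GPU purchase (illustrative, not Microsoft's figure).
gpu_cost = 12_000_000_000

over_four = annual_depreciation(gpu_cost, 4)  # $3.0B per year
over_six = annual_depreciation(gpu_cost, 6)   # $2.0B per year

# Stretching the useful life from four years to six cuts the annual
# reported expense by a third. The cash already spent doesn't change;
# only the timing of when the cost shows up on paper does.
print(f"4-year life: ${over_four / 1e9:.1f}B/yr")
print(f"6-year life: ${over_six / 1e9:.1f}B/yr")
print(f"Annual expense reduced by {1 - over_six / over_four:.0%}")
```

Same cash out the door, smaller hit to each year’s profits, and the bill comes due later, ideally after the GPUs have stopped mattering.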

You might be forgiven for thinking that all of this investment was necessary to grow Azure, which is clearly the most important part of Microsoft’s Intelligent Cloud segment. In Q2 FY2020, Intelligent Cloud revenue sat at $11.9 billion on PP&E of around $40 billion, and as of Microsoft’s last quarter, Intelligent Cloud revenue sat at around $32.9 billion on PP&E that has increased by over 650%. 

Good, right? Well, not really. Let’s compare Microsoft’s Intelligent Cloud revenue from the last five years:

[Chart: Microsoft Intelligent Cloud revenue versus capital expenditures over the last five years]

In the last five years, Microsoft has gone from spending 38% of its Intelligent Cloud revenue on capex to nearly every penny of it (over 94%) in the last six quarters, over the same two and a half years in which Intelligent Cloud has failed to show any meaningful growth. 

An important note: If you look at Microsoft’s 2025 10-K, you’ll notice that it lists the Intelligent Cloud revenue for 2024 as $87.4bn — not, as the above image shows, $105bn.

If you look at the 2024 10-K, you’ll see that Intelligent Cloud revenues are, in fact, $105bn. So, what gives?

Essentially, before publishing the 2025 10-K, Microsoft decided to rejig which parts of its operations fall into which particular segments, and as a result, it had to recalculate revenues for the previous year. Having read and re-read the 10-K, I’m not fully certain which bits of the company were recast.

It does mention Microsoft 365, although I don’t see how that would fall under Intelligent Cloud — unless we’re talking about things like SharePoint, perhaps. I’m at a loss. It’s incredibly strange.

Things, I’m afraid, get worse. Microsoft announced in July 2025 — the end of its 2025 fiscal year — that Azure made $75 billion in revenue in FY2025. This was, as the previous link notes, the first time that Microsoft actually broke down how much Azure made, having previously simply lumped it in with the rest of the Intelligent Cloud segment. 

I’m not sure what to read from that, but it’s still not good: it means Microsoft spent every single penny of its Azure revenue from that fiscal year, and then some, on capital expenditures of $88 billion, a little under 117% of all Azure revenue to be precise. If we assume Azure regularly represents 71% of Intelligent Cloud revenue, Microsoft has been spending anywhere from half to three-quarters of Azure’s revenue on capex.
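The ratio math here is simple enough to check, using only the figures quoted above (in billions of dollars):

```python
# FY figures quoted in the text above, in billions of dollars.
azure_revenue = 75.0        # Azure's disclosed FY2025 revenue
capex = 88.0                # FY2025 capital expenditures
intelligent_cloud = 105.0   # Intelligent Cloud revenue, pre-recast figure

# Capex exceeds Azure revenue: the ratio comes out above 100%.
capex_to_azure = capex / azure_revenue
azure_share = azure_revenue / intelligent_cloud

print(f"Capex as a share of Azure revenue: {capex_to_azure:.1%}")   # ~117.3%
print(f"Azure as a share of Intelligent Cloud: {azure_share:.1%}")  # ~71.4%
```

That 71% figure is where the “regularly represents” assumption in the paragraph above comes from; it holds only to the extent that ratio stays stable year to year.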

To simplify: Microsoft is spending lots of money to build out capacity on Microsoft Azure (as part of Intelligent Cloud), and growth of capex is massively outpacing the meager growth that it’s meant to be creating. 

You know what’s also been growing? Microsoft’s depreciation charges, which grew from $2.7 billion at the beginning of 2023 to $9.1 billion in Q2 FY2026, though I will add that they dropped from $13 billion in Q1 FY2026, and if I’m honest, I have no idea why! Nevertheless, depreciation continues to erode Microsoft’s on-paper profits, growing (much like capex, as the two are connected!) at a much faster rate than revenue from Azure or Intelligent Cloud.

But worry not, traveler! Microsoft “beat” on earnings last quarter, making a whopping $38.46 billion in net income…with $9.97 billion of that coming from recapitalizing its stake in OpenAI. Similarly, Microsoft has started bulking up its Remaining Performance Obligations. See if you can spot the difference between Q1 and Q2 FY26, emphasis mine:

Q1FY26: 

Revenue allocated to remaining performance obligations, which includes unearned revenue and amounts that will be invoiced and recognized as revenue in future periods, was $398 billion as of September 30, 2025, of which $392 billion is related to the commercial portion of revenue. We expect to recognize approximately 40% of our total company remaining performance obligation revenue over the next 12 months and the remainder thereafter.

Q2FY26:


Revenue allocated to remaining performance obligations related to the commercial portion of revenue was $625 billion as of December 31, 2025, with a weighted average duration of approximately 2.5 years. We expect to recognize approximately 25% of both our total company remaining performance obligation revenue and commercial remaining performance obligation revenue over the next 12 months and the remainder thereafter

So, let’s just lay it out:

  • Q1: $398 billion of RPOs, 40% within 12 months, $159.2 billion in upcoming revenue.
  • Q2: $625 billion of RPOs, 25% within 12 months, $156.25 billion in upcoming revenue.

…Microsoft’s upcoming revenue dropped between quarters even as every single expenditure increased, and despite adding over $200 billion in RPOs from OpenAI. A “weighted average duration” of 2.5 years somehow reduced the revenue Microsoft expects to recognize from those RPOs over the next 12 months.

But let’s be fair and jump back to Q4 FY2025…

Revenue allocated to remaining performance obligations, which includes unearned revenue and amounts that will be invoiced and recognized as revenue in future periods, was $375 billion as of June 30, 2025, of which $368 billion is related to the commercial portion of revenue. We expect to recognize approximately 40% of our total company remaining performance obligation revenue over the next 12 months and the remainder thereafter.

40% of $375 billion is $150 billion. Q3 FY25? 40% on $321 billion, or $128.4 billion. Q2 FY25? $304 billion, 40%, or $121.6 billion. 
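All of those per-quarter figures are one calculation repeated: the disclosed RPO balance multiplied by the share Microsoft expects to recognize within 12 months. Laid out together (figures in billions, from the filings quoted above; the Q2 FY26 balance is the commercial-only figure):

```python
# (RPO balance in $B, share expected to be recognized within 12 months)
quarters = {
    "Q2 FY25": (304, 0.40),
    "Q3 FY25": (321, 0.40),
    "Q4 FY25": (375, 0.40),
    "Q1 FY26": (398, 0.40),
    "Q2 FY26": (625, 0.25),  # commercial-only balance, per the Q2 disclosure
}

for quarter, (balance, near_term_share) in quarters.items():
    near_term = balance * near_term_share
    print(f"{quarter}: ${near_term:.2f}B expected within 12 months")
```

Run that and the near-term number climbs every quarter until Q2 FY26, where a much bigger headline balance paired with a much smaller near-term share actually produces less expected 12-month revenue than the quarter before.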

It appears that Microsoft’s revenue is stagnating, even with the supposed additions of $250 billion in spend from OpenAI and $30 billion from Anthropic, the latter of which was announced in November but doesn’t appear to have manifested in these RPOs at all.

In simpler terms, OpenAI and Anthropic do not appear to be spending more as a result of any recent deals, and if they are, that money isn’t arriving for over a year.

Much like the rest of AI, every deal with these companies appears to be entirely on paper, likely because OpenAI will burn at least $115 billion by 2029, and Anthropic upwards of $30 billion by 2028, when it mysteriously becomes profitable, two years before OpenAI “does so” in 2030.

These numbers are, of course, total bullshit. Neither company can afford even $20 billion of annual cloud spend, let alone multiple tens of billions a year, and that’s before you get to OpenAI’s $300 billion deal with Oracle that everybody has realized (as I did in September) requires Oracle to serve non-existent compute to OpenAI and be paid hundreds of billions of dollars that, helpfully, also don’t exist.

Yet for Microsoft, the problems are a little more existential. 

Microsoft Is A Decaying Empire That Bet The Future On Making In Excess Of $500 Billion In New Revenue Within The Next 4 To 6 Years From AI — And It Hasn’t Made A Dime In Profit Yet

Last year, I calculated that big tech needed $2 trillion in new revenue by 2030 or investments in AI were a loss, and if anything, I think I slightly underestimated the scale of the problem.

As of the end of its most recent fiscal quarter, Microsoft has spent $277 billion or so in capital expenditures since the beginning of FY2022, with the majority of them ($216 billion) happening since the beginning of FY2024. Capex has ballooned to the size of 45.5% of Microsoft’s FY26 revenue so far — and over 109% of its net income. 

[Chart: Microsoft capital expenditures against revenue and net income]

This is a fucking disaster. While net income is continuing to grow, it (much like every other financial metric) is being vastly outpaced by capital expenditures, none of which can be remotely tied to profits, as every sign suggests that generative AI only loses money.

While AI boosters will try and come up with complex explanations as to why this is somehow alright, Microsoft’s problem is fairly simple: it’s now spending 45% of its revenues to build out data centers filled with painfully expensive GPUs that do not appear to be significantly contributing to overall revenue, and appear to have negative margins.

Those same AI boosters will point at the growth of Intelligent Cloud as proof, so let’s do a thought experiment (even though they are wrong): if Intelligent Cloud’s segment growth is a result of AI compute, then the cost of revenue has vastly increased, and the only reason we’re not seeing it is that the increased costs are hitting depreciation first.

You see, Intelligent Cloud is stalling. While it might be up by 8.8% on an annualized basis (if we assume each quarter of the year will be around $30 billion, that makes $120 billion, so about an 8.8% year-over-year increase from $106 billion), that growth has come at the cost of a massive increase in capex ($88 billion across all of FY2025, versus $72 billion in just the first two quarters of FY2026) and gross margins that have deteriorated from 69.89% in Q3 FY2024 to 68.59% in Q2 FY2026. And while operating margins are up, that’s likely due to Microsoft’s increasing use of contract workers and increased recruitment in cheaper labor markets.

And as I’ll reveal later, Microsoft has used OpenAI’s billions in inference spend to cover up the collapse of the growth of the Intelligent Cloud segment. OpenAI’s inference spend now represents around 10% of Azure’s revenue.

Microsoft, as I discussed a few weeks ago, is in a bind. It keeps buying GPUs, all while waiting for the GPUs it already has to start generating revenue, and every time a new GPU comes online, its depreciation balloons. Capex for GPUs began in earnest in Q1 FY2023 following October’s shipments of NVIDIA’s H100 GPUs, with reports saying that Microsoft bought 150,000 H100s in 2023 (around $4 billion at $27,000 each) and 485,000 H100s in 2024 ($13 billion). These GPUs are yet to provide much meaningful revenue, let alone any kind of profit, with reports suggesting (based on Oracle leaks) that the gross margins of H100s are around 26% and A100s (an older generation launched in 2020) are 9%, for which the technical term is “dogshit.” Somewhere within that pile of capex also lie orders for H200 GPUs, and as of 2024, likely NVIDIA’s B100 (and maybe B200) Blackwell GPUs too.
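Those dollar figures are just the reported unit counts multiplied by the roughly $27,000 per-H100 price quoted above; a quick back-of-envelope check:

```python
# Approximate per-unit H100 price quoted in the text above.
h100_unit_price = 27_000

# Reported purchase counts: 150,000 units in 2023, 485,000 in 2024.
spend_2023 = 150_000 * h100_unit_price  # about $4B
spend_2024 = 485_000 * h100_unit_price  # about $13B

print(f"2023: ${spend_2023:,}")
print(f"2024: ${spend_2024:,}")
```

That is roughly $17 billion on one GPU generation alone, before H200s or Blackwell, and before any of the buildings or leases that house them.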

You may also notice that those GPU expenses are only some portion of Microsoft’s capex, and that’s because Microsoft spends billions on finance leases and construction costs. What this means in practical terms is that some of this money is going to GPUs that will be obsolete in six years, some of it is going to paying somebody else to lease physical space, and some of it is going into building a bunch of data centers that are only useful for putting GPUs in.

And none of this bullshit is really helping the bottom line! Microsoft’s More Personal Computing segment — including Windows, Xbox, Microsoft 365 Consumer, and Bing — has become an ever-smaller part of revenue, representing a mere 17.64% of Microsoft’s revenue in FY26 so far, down from 30.25% a mere four years ago.

We are witnessing the consequences of hubris — those of a monopolist that chased out any real value creators from the organization, replacing them with an increasingly-annoying cadre of Business Idiots like career loser Jay Parikh and scummy, abusive timewaster Mustafa Suleyman.

Satya Nadella took over Microsoft with the intention of fixing its culture, only to replace the aggressive, loudmouthed Ballmer brand with a poisonous, passive-aggressive business mantra of “you’ve always got to do more with less.”

Today, I’m going to walk you through the rotting halls of Redmond’s largest son, a bumbling conga line of different businesses that all work exactly as well as Microsoft can get away with. 

Welcome to The Hater’s Guide To Microsoft, or Instilling The Oaf Mindset.


X Will Stop Paying People for Sharing Unlabeled AI-Generated War Footage


X said it will temporarily demonetize accounts that share AI-generated war footage without a label. The decision comes days after the US and Israel launched airstrikes in Iran and AI-slop war footage flooded social media timelines across the internet.

“Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program. During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Nikita Bier, X’s head of product, said in a post on X.

Many of the AI-generated videos currently on X purport to show Iranian ballistic missiles hitting sites in Israel. One video shared thousands of times on X showed missiles slamming into the ground near the Dome of the Rock in Jerusalem while a computer generated voice said “Oh my god, here they come.” X users added a Community Note to the video, but the account that shared it has a Bluecheck and is eligible for a financial payout for engagement as part of X’s content creator program.

Bier said today that X will stop people from making money on unlabeled AI war footage, but won’t stop accounts from sharing it.

“Starting now, users who post AI-generated videos of an armed conflict—without adding a disclosure that it was made with AI—will be suspended from Creator Revenue Sharing for 90 days. Subsequent violations will result in a permanent suspension from the program,” he added. “This will be flagged to us by any post with a Community Note or if the content contains meta data (or other signals) from generative AI tools. We will continue to refine our policies and product to ensure X can be trusted during these critical moments.”

Fake war footage shared on social media isn’t a new problem. For several years every new conflict would be met with a flood of fake videos. Old war footage passed off as coming from the current war was popular, but so were recordings of video games run through filters to make them look low-resolution. The same three clips from milsim video game Arma 3 were shared at the outbreak of every new conflict for a decade. The Government of Pakistan even shared Arma 3 footage once in a post that’s still live on X.

What is new is the proliferation of easy-to-use AI video-generation tools. AI image and video generation has come a long way in the past few years and it’s trivially easy to remove the watermark that’s supposed to distinguish them from the real thing. X’s verification system—which rewards accounts for engagement—has also created incentives for Bluecheck accounts to publish fast, verify later (if ever), and rake in the cash. So in the hours and days after the war with Iran began, fake footage of airstrikes and conflict spread on X. 

The way X is handling the problem gives the game away. According to Bier, the site will rely on the community to police itself, and the punishment is a 90-day suspension not from the site but from the monetization program.


Pluralistic: Supreme Court saves artists from AI (03 Mar 2026)






[Image: The Supreme Court building, tinted sepia. Floating in front of it are a 1920s-era Supreme Court, tinted blue-green, their heads replaced with the glaring red eyes of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey,' and their hands tinted hot pink. They have been distorted with a ripple effect and TV scan lines. The sky is full of dark clouds.]

Supreme Court saves artists from AI (permalink)

The Supreme Court has just turned down a petition to hear an appeal in a case that held that AI works can't be copyrighted. By turning down the appeal, the Supreme Court took a massively consequential step to protect creative workers' interests:

https://www.theverge.com/policy/887678/supreme-court-ai-art-copyright

At the core of the dispute is a bedrock of copyright law: that copyright is for humans, and humans alone. In legal/technical terms, "copyright inheres at the moment of fixation of a work of human creativity." Most people – even people who work with copyright every day – have not heard it put in those terms. Nevertheless, it is the foundation of international copyright law, and copyright in the USA.

Here's what it means, in plain English:

a) When a human being,

b) does something creative; and

c) that creative act results in a physical record; then

d) a new copyright springs into existence.

For d) to happen, a), b) and c) all have to happen first. All three steps for copyright have been hotly contested over the years. Remember the "monkey selfie," in which a photographer argued that he was entitled to the copyright after a monkey pointed a camera at itself and pressed the shutter button? That image was not copyrightable, because the monkey was a monkey, not a human, and copyright is only for humans:

https://en.wikipedia.org/wiki/Monkey_selfie_copyright_dispute

Then there's b), "doing something creative." Copyright only applies to creative work, not work itself. It doesn't matter how hard you labor over a piece of "IP" – if that work isn't creative, there's no copyright. For example, you can spend a fortune creating a phone directory, and you will get no copyright in the resulting work, meaning anyone can copy and sell it:

https://en.wikipedia.org/wiki/Feist_Publications,_Inc._v._Rural_Telephone_Service_Co.

If you mix a little creative labor with the hard work, you can get a little copyright. A directory of "all the phone numbers for cool people" can get a "thin" copyright over the arrangement of facts, but such a copyright still leaves space for competitors to make many uses of that work without your permission:

https://pluralistic.net/2021/08/14/angels-and-demons/#owning-culture

Finally, there's c): copyright is for tangible things, not intangibles. Part of the reason choreographers created a notation system for dance moves is that the moves themselves aren't copyrightable:

https://en.wikipedia.org/wiki/Dance_notation

The non-copyrightability of movement is (partly) why the noted sex-pest and millionaire grifter Bikram Choudhury was blocked from claiming copyright on ancient yoga poses (the other reason is that they are ancient!):

https://en.wikipedia.org/wiki/Copyright_claims_on_Bikram_Yoga

Now, AI-generated works are certainly tangible (any work by an AI must involve magnetic traces on digital storage media). The prompts for an AI output can be creative and thus copyrightable (in the same way that notes to a writers' room or from an art-director are). But the output from the AI cannot be copyrighted, because it is not a work of human authorship.

This has been the position of the US Copyright Office from the start, when AI prompters started sending in AI-generated works and seeking to register copyrights in them. Stephen Thaler, a computer scientist who had prompted an image generator to produce a bitmap, kept appealing the Copyright Office's decision, seemingly without regard to the plain facts of the case and the well-established limits of copyright. By attempting to appeal his case all the way to the Supreme Court, Thaler has done every human artist a huge favor: his weak, ill-conceived case was easy for the Supreme Court to reject, and in so doing, the court has cemented the non-copyrightability of AI works in America.

You may have heard the saying, "Hard cases make bad law." Sometimes, there are edge-cases where following the law would result in a bad outcome (think of a Fourth Amendment challenge to an illegal search that lets a murderer go free). In these cases, judges are tempted to interpret the law in ways that distort its principles, and in so doing, create a bad precedent (the evidence from a bad search is permitted, and so cops stop bothering to get a warrant before searching people).

This is one of the rare instances in which a bad case made good law. Thaler's case wasn't even close – it was an absolute loser from the jump. Normally, plaintiffs give up after being shot down by an agency like the Copyright Office or by a lower court. But not Thaler – he stuck with it all the way to the highest court in the land, bringing clarity to an issue that might have otherwise remained blurry and ill-defined for years.

This is wonderful news for creative workers. It means that our bosses must pay humans to do work if they want to be granted copyright on the things they want to sell. The more that humans are involved in the creation of a work, the stronger the copyright on that work becomes – which means that the less a human contributes to a creative work, the harder it will be to prevent others from simply taking it and selling it or giving it away.

This is so important. Our bosses do not want to pay us. When our bosses sue AI companies, it's not because they want to make sure we get paid.

The many pending lawsuits – from news organizations like the New York Times, wholesalers like Getty Images, and entertainment empires like Disney – all seek to establish that training an AI model is a copyright infringement. This is wrong as a technical matter: copyright clearly permits making transient copies of published works for the purpose of factual analysis (otherwise every search engine would be illegal). Copyright also permits performing mathematical analysis on those transient copies. Finally, copyright permits the publication of literary works (including software programs) that embed facts about copyrighted works – even billions of works:

https://pluralistic.net/2023/09/17/how-to-think-about-scraping/

Sure, you can infringe copyright with an AI model – say, by prompting it to produce infringing images. But the mere fact that a technology can be used to infringe copyright doesn't make the technology itself infringing (otherwise every printing press, camera, and computer would be illegal):

https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Universal_City_Studios,_Inc.

Of course, the fact that copyright currently permits training models doesn't mean that it must. Copyright didn't come down from a mountain on two stone tablets. It's just a law, and laws can be amended. I think that amending copyright to ban training a model would inflict substantial collateral damage on everything from search engines to scholarship, but perhaps you disagree. Maybe you think that you could wordsmith a new copyright law that bans training without whacking a bunch of socially beneficial activities.

Even if that's so, it still wouldn't help artists.

To understand why, consider Universal and Disney's lawsuit against Midjourney. The day that lawsuit dropped, I got a press release from the RIAA, signed by its CEO, Mitch Glazier. Here's how it began:

There is a clear path forward through partnerships that both further AI innovation and foster human artistry. Unfortunately, some bad actors – like Midjourney – see only a zero-sum, winner-take-all game.

The RIAA represents record labels, not film studios, but thanks to vertical integration, the big film studios are also the big record labels. That's why the RIAA alerted the press to its position on this suit.

There are two important things to note about the RIAA press release: how it opened, and how it closed. It opens by stating that the companies involved want "partnerships" with AI companies. In other words, if they establish that they have the right to control training on their archives, they won't use that right to prevent the creation of AI models that compete with creative workers. Rather, they will use that right to get paid when those models are created.

Expanding copyright to cover models isn't about preventing generative AI technologies – it's about ensuring that these technologies are licensed by incumbent media companies. This licensure would ensure that media companies would get paid for training, but it would also let them set the terms on which the resulting models were used. The studios could demand that AI companies put "guardrails" on the resulting models to stop them from being used to output things that might compete with the studios' own products.

That's what the opening of this press-release signifies, but to really understand its true meaning, you have to look at the closing of the release: the signature at the bottom of it, "Mitch Glazier, CEO, RIAA."

Who is Mitch Glazier? Well, he used to be a Congressional staffer. He was the guy responsible for sneaking a clause into an unrelated bill that repealed "termination of transfer" for musicians. "Termination" is a part of copyright law that lets creators take back their rights after 35 years, even if they originally signed a contract for a "perpetual license."

Under termination, all kinds of creative workers who got royally screwed at the start of their careers were able to get their copyrights back and re-sell them. The primary beneficiaries of termination are musicians, who signed notoriously shitty contracts in the 1950s-1980s:

https://pluralistic.net/2021/09/26/take-it-back/

When Mitch Glazier snuck a termination-destroying clause into legislation, he set the stage for the poorest, most abused, most admired musicians in recording history to lose access to money that let them buy a couple bags of groceries and make the rent. He condemned these beloved musicians to poverty.

What happened next is something of a Smurfs Family Christmas miracle. Musicians were so outraged by this ripoff, and their fans were so outraged on their behalf, that Congress convened a special session solely to repeal the clause that Mitch Glazier tricked them into voting for. Shortly thereafter, Glazier was out of Congress:

https://en.wikipedia.org/wiki/Mitch_Glazier

But this story has a happy ending for Glazier, too – he might have been out of his government job, but he had a new gig, as CEO of the Recording Industry Association of America, where he earns more than $1.3 million/year to carry on the work he did in Congress – serving the interests of the record labels:

https://projects.propublica.org/nonprofits/organizations/131669037

Mitch Glazier serves the interests of the labels, not musicians. He can't serve both interests, because every dime a musician takes home is a dime that the labels don't get to realize as profits. Labels and musicians are class enemies. The fact that many musicians are on the labels' side when they sue AI companies does not mean that the labels are on the musicians' side.

What will the media companies do if they win their lawsuits? Glazier gives us the answer in the opening sentence of his press release: they will create "partnerships" with AI companies to train models on the work we produce.

This is the lesson of the past 40 years of copyright expansion. For 40 years, we have expanded copyright in every way: copyright lasts longer, covers more works, prohibits more uses without licenses, establishes higher penalties, and makes it easier to win those penalties.

Today, the media industry is larger and more profitable than at any time in its history, and the share of those profits that artists take home is smaller than ever.

How has the expansion of copyright led to media companies getting richer and artists getting poorer? That's the question that Rebecca Giblin and I answer in our 2022 book Chokepoint Capitalism. In a nutshell: in a world of five publishers, four studios, three labels, two app companies and one company that controls all ebooks and audiobooks, giving a creative worker more copyright is like giving your bullied kid extra lunch money. It doesn't matter how much lunch money you give that kid – the bullies will take it all, and the kid will go hungry:

https://pluralistic.net/2022/08/21/what-is-chokepoint-capitalism/

Indeed, if you keep giving that kid more lunch money, the bullies will eventually have enough dough that they'll hire a fancy ad-agency to blitz the world with a campaign insisting that our schoolkids are all going hungry and need even more lunch money (they'll take that money, too).

When Mitch Glazier – who got a $1m+/year job for the labels after attempting to pauperize musicians – writes on behalf of Disney in support of a copyright suit to establish that copyright prevents training a model without a license, he's not defending creative workers. Disney, after all, is the company that takes the position that if it buys another company, like Lucasfilm or Fox, it only acquires the right to use the works we made for those companies, but not the obligation to pay us when they do:

https://pluralistic.net/2021/04/29/writers-must-be-paid/#pay-the-writer

If a new, unambiguous copyright over model training comes into existence – whether through a court precedent or a new law – then all our contracts will be amended to non-negotiably require us to assign that right to our bosses. And our bosses will enter into "partnerships" to train models on our works. And those models will exist for one purpose: to let them create works without paying us.

The market concentration that lets our bosses dictate terms to us is getting much worse, and it's only speeding up. Getty Images – who sued Stability AI over image generation – is merging with Shutterstock:

https://globalcompetitionreview.com/gcr-usa/article/photographers-alarmed-gettyshutterstock-merger

And Paramount is merging with Warners:

https://pluralistic.net/2026/02/28/golden-mean/#reality-based-community

This is where this new Supreme Court action comes in. A new copyright that covers training is just one more thing these increasingly powerful members of this increasingly incestuous cartel can force us to sign away. That new copyright isn't something for us to bargain with, it's something we'll bargain away.

But the fact that the works that a model produces are automatically in the public domain is something we can't bargain away. It's a legal fact, not a legal right. It means that the more humans there are involved in the creation of a final work, the more copyrightable that work is.

Media bosses love AI because it dangles the tantalizing possibility of running a business without ego-shattering confrontations with creative workers who know how to do things. It's the solipsistic fantasy of a world without workers, in which a media boss conceives of a "product," prompts a sycophantic AI, and receives an item that's ready for sale:

https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism

Many bosses know this isn't within reach. They imagine that they'll get the AI to shit out a script and then pay a writer on the cheap to "polish" it. They think they'll get an AI to shit out a motion sequence, a still, or a 3D model and then pay a human artist pennies to put the "final touches" on it. But the Copyright Office's position is that only those human contributions are eligible for a copyright: a few editorial changes, a few pixels or vectors rearranged. Everything else is in the public domain.

Here's the cool part: the only thing our bosses hate more than paying us is when other people take their stuff without paying for it. To achieve the kind of control they demand, they will have to pay us to make creative works.

What's more, the fact that AI-generated works are in the public domain leaves a lot of uses that don't harm creative workers intact. You can amuse yourself and your friends with all the AI slop you can generate; the fact that it's not copyrightable doesn't matter to that use. I happen to think AI "art" is shit, but you do you:

https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand

This also means that if you're a writer who likes to brainstorm with a chatbot as you develop an idea, that's fine, so long as the AI's words don't end up in the final product. Creative workers already assemble "mood boards" and clippings for inspiration – so long as these aren't incorporated into the final work, that's fine.

That's just what the Hollywood writers bargained for in their historic strike over AI. They retained the right to use AI if they wanted to, but their bosses couldn't force them to:

https://pluralistic.net/2023/10/01/how-the-writers-guild-sunk-ais-ship/

The Writers Guild were able to bargain with the heavily concentrated studios because they are organized in a union. Not just any union, either: the Writers Guild (along with the other Hollywood unions) are able to undertake "sectoral bargaining" – that's when a union can negotiate a contract with all the employers in a sector at once.

Sectoral bargaining was once the standard for labor relations, but it was outlawed in the 1947 Taft-Hartley Act, which clawed back many of the important labor rights established with the New Deal's National Labor Relations Act. To get Taft-Hartley through Congress, its authors had to compromise by grandfathering in the powerful Hollywood unions, who retained their right to sectoral bargaining. More than 75 years later, that sectoral bargaining right is still protecting those workers.

Our bosses tell us that we should side with them in demanding a new law: a copyright law that covers training an AI model. The mere fact that our bosses want this should set off alarm bells. Just because we're on their side, it doesn't mean they're on our side. They are not.

If we're going to use our muscle to fight for a new law, let it be a sectoral bargaining law – one that covers all workers. You can tell that this would be good for us because our bosses would hate it, and every other worker in America would love it. The Writers Guild used sectoral bargaining to achieve something that 40 years of copyright expansion failed at: it made creative workers richer, rather than giving us another way to be angry about how our work is being used.

(Image: Cryteria, CC BY 3.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Cornell University harasses maker of Cornell blog https://web.archive.org/web/20060621110535/http://cornell.elliottback.com/archives/2006/03/02/cornell-university-nastygram/

#15yrsago Explaining creativity to a Martian https://locusmag.com/feature/cory-doctorow-explaining-creativity-to-a-martian/

#15yrsago Scott Walker smuggles ringers into the capital for the legislative session https://www.theawl.com/2011/03/in-madison-scott-walker-packed-his-budget-address-with-ringers/

#15yrsago Measuring radio’s penetration in 1936 https://www.flickr.com/photos/70118259@N00/albums/72157626051208969/with/5490099786

#10yrsago Rube Goldberg musical instrument that runs on 2,000 steel ball-bearings https://www.youtube.com/watch?v=IvUU8joBb1Q

#10yrsago KKK vs D&D: the surprising, high fantasy vocabulary of racism https://en.wikipedia.org/wiki/Ku_Klux_Klan_titles_and_vocabulary

#10yrsago UK minister compares adblocking to piracy, promises action https://www.theguardian.com/media/2016/mar/02/adblocking-protection-racket-john-whittingdale

#10yrsago Some ad-blockers are tracking you, shaking down publishers, and showing you ads https://www.wired.com/2016/03/heres-how-that-adblocker-youre-using-makes-money/

#10yrsago ISIS opsec: jihadi tech bureau recommends non-US crypto tools https://web.archive.org/web/20160303095904/http://www.dailydot.com/politics/isis-apple-fbi-congressional-hearing-crypto-international/

#10yrsago Apple v FBI isn’t about security vs privacy; it’s about America’s security vs FBI surveillance https://www.wired.com/2016/03/feds-let-cyber-world-burn-lets-put-fire/


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers..

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1020 words today, 41284 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X


CBP Tapped Into the Online Advertising Ecosystem To Track Peoples’ Movements


Customs and Border Protection (CBP) bought data from the online advertising ecosystem to track peoples’ precise movements over time, in a process that often involves siphoning data from ordinary apps like video games, dating services, and fitness trackers, according to an internal Department of Homeland Security (DHS) document obtained by 404 Media.

The document shows in stark terms the power, and potential risk, of online advertising data and how it can be leveraged by government agencies for surveillance purposes. The news comes after Immigration and Customs Enforcement (ICE) purchased similar tools that can monitor the movements of phones in entire neighborhoods. ICE also recently said in public procurement documents it was interested in sourcing more “Ad Tech” data for its investigations. Following 404 Media’s revelation of that ICE purchase, on Tuesday a group of around 70 lawmakers urged the DHS oversight body to conduct a new investigation into ICE’s location data buying.

This sort of information is a “goldmine for tracking where every person is and what they read, watch, and listen to,” Johnny Ryan, director of the Irish Council for Civil Liberties (ICCL) Enforce, which has closely followed the sale of advertising data, told 404 Media in an email.

💡
Do you work at CBP, ICE, or a location data company? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The document 404 Media obtained is the first time CBP has acknowledged the location data it bought came from the advertising industry. Specifically, CBP says the data was in part sourced via real-time bidding, or RTB. Whenever an advertisement is displayed inside an app, a near instantaneous bidding process happens with companies vying to have their advert served to a certain demographic. A side effect of this is that surveillance firms, or rogue advertising companies working on their behalf, can observe this process and siphon information about mobile phones, including their location. All of this is essentially invisible to an ordinary phone user, but happens constantly.

This sort of surveillance can happen through all sorts of innocuous seeming apps, such as video games, news apps, weather trackers, and dating apps. 404 Media has previously linked RTB-based surveillance to games like Candy Crush and Subway Surfers; dating apps Tinder and Grindr; the social network Tumblr, and the popular fitness app MyFitnessPal. In many cases, the app developers themselves are likely unaware they are acting as a conduit for government surveillance because the data collection is not based on any code the app creators have included themselves. The end result is tools that can potentially track hundreds of millions of phones, often without a warrant.
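The siphoning described above can be illustrated with a minimal sketch. The payload shape loosely follows the public OpenRTB convention, where `device.ifa` carries the advertising ID and `device.geo` carries coordinates; the observer function and the example app are hypothetical, not taken from the document:

```python
# Sketch of an observer extracting location from a single RTB bid request.
# Field names (device.ifa, device.geo.lat/lon) follow the public OpenRTB
# convention; the siphoning logic itself is a hypothetical illustration.

def siphon_location(bid_request):
    """Return the advertising ID and coordinates from one bid request, if present."""
    device = bid_request.get("device", {})
    geo = device.get("geo", {})
    ad_id = device.get("ifa")               # the device's advertising ID (AdID/IDFA)
    lat, lon = geo.get("lat"), geo.get("lon")
    if ad_id is None or lat is None or lon is None:
        return None
    return {"ad_id": ad_id, "lat": lat, "lon": lon}

# A bid request fired when an ad slot renders inside an ordinary app:
request = {
    "app": {"name": "ExampleWeatherApp"},  # hypothetical app
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",
        "geo": {"lat": 49.2875, "lon": -123.1421},
    },
}

record = siphon_location(request)
```

The point of the sketch is that the observer never needs the app's cooperation: the identifier and coordinates ride along in the bid request itself.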

“RTB-sourced location data is recorded when an advertisement is served,” the DHS document reads. The document is a Privacy Threshold Analysis (PTA), a type of record DHS is required to complete when deploying or exploring a new technology. 404 Media obtained it through a Freedom of Information Act (FOIA) request.

A section of the document. Image: 404 Media.

The document lays out how the online advertising industry gave birth to this sort of surveillance. Traditionally, marketers used cookies to track consumers’ activities. When those grew less effective due to the rise of smartphones and apps in the 2010s, Apple and Google created Advertising IDs, or AdIDs, that are assigned to each device. These are unique identifiers that, although they don’t contain a person’s phone number or name, still provide a way for the online advertising industry to track devices. As the document says, “this allows app developers to still track and report a device’s consumer activity, to include date/time and locational information, without connecting to or using any personally identifiable information (PII) associated with the device.”

In essence, the AdID acts as the digital glue between a person’s device and their location data, allowing marketers—or a surveillance contractor or DHS—to attribute a set of movements to a specific device. From there, investigators can draw geofences to see all phones at a particular area over a period of time. Many smartphone location data tools then let officials see where else those devices went, potentially revealing where their owners live or work, or other sensitive locations.
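The geofence-then-follow workflow described above can be sketched in a few lines. The ping log, device IDs, and coordinates here are invented for illustration; only the technique (filter by distance from a point, then pull the full history of matched IDs) comes from the document:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def devices_in_geofence(pings, center, radius_km):
    """AdIDs of every device seen inside the fence."""
    return {p["ad_id"] for p in pings
            if haversine_km(p["lat"], p["lon"], center[0], center[1]) <= radius_km}

def movement_history(pings, ad_id):
    """Everywhere else a matched device was seen."""
    return [(p["lat"], p["lon"]) for p in pings if p["ad_id"] == ad_id]

# Hypothetical ping log attributed to AdIDs:
pings = [
    {"ad_id": "aaa", "lat": 49.2875, "lon": -123.1421},  # inside the fence
    {"ad_id": "aaa", "lat": 49.2600, "lon": -123.1000},  # same device, elsewhere
    {"ad_id": "bbb", "lat": 40.7128, "lon": -74.0060},   # far outside the fence
]

matched = devices_in_geofence(pings, center=(49.2875, -123.1421), radius_km=1.0)
```

Once a device matches the fence, `movement_history` yields its other recorded locations, which is exactly the step that can reveal a likely home or workplace.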

“It operates behind the scenes on websites and apps, and it puts everyone at risk. RTB is the world’s biggest data breach,” Ryan added.

CBP acknowledged a request for comment, which asked whether CBP is still using this sort of data, but ultimately did not comment.

A section of the document. Image: 404 Media.

The internal document relates to a “pilot” program CBP previously ran from 2019 to 2021, which would “aid in CBP’s targeting, vetting, analysis, and illicit network discovery processes,” it says. The document says the “evaluation will be focused on AdIDs associated with cross border criminal activity and/or activity with an identified terrorist/criminal predicate.”

Although CBP described the move as a pilot, the DHS Office of the Inspector General (OIG) later found both CBP and ICE did not limit themselves to non-operational use. The OIG found that CBP, ICE, and the Secret Service all illegally used the smartphone location data, and found a CBP official used the data to track coworkers with no investigative purpose. CBP and ICE went on to repeatedly purchase access to location data.

In 2020 The Wall Street Journal was first to report on CBP and ICE’s purchase of commercial location data from a vendor called Venntel. The report said ICE used the data to help identify immigrants who were later arrested, and CBP used the data to look for cellphone activity in unusual places, including the stretches of desert on the Mexican border. The FTC later found Venntel illegally sold location data collected without proper consent.

In January, 404 Media reported on material which explained how a similar and more recently ICE-purchased system called Webloc works. It is designed to monitor a city neighborhood or block for mobile phones, then let ICE track the movements of those devices back to their likely homes or other locations. The material did not say how Penlink, the company selling the tool, obtains this location data. But surveillance companies broadly either obtain it through RTB, or small bundles of code called software development kits (SDKs) inserted into ordinary apps.

“By refusing to cut off surveillance companies and sleazy data brokers, Big Tech companies are effectively collaborating with ICE’s lawless campaign of violence and terror. As a result, every internet ad on a website or app could be collecting location data that ICE will use for its next operation,” Senator Ron Wyden told 404 Media in a statement.

“Congress could put a stop to this by passing my bills to ban the government from buying our data and ban tech companies from using surveillance advertising. Until then, the best way for the public to protect themselves is to install ad blockers, disable their phone's advertising ID, and enable the Global Privacy Control in their browser, which 12 states now enforce,” the statement added.
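The Global Privacy Control signal Wyden mentions is, per the GPC proposal, sent by participating browsers as a `Sec-GPC: 1` request header (and exposed to scripts as `navigator.globalPrivacyControl`). A minimal sketch of a server honoring that opt-out; the handler and the headers dict are hypothetical:

```python
# Sketch of a server respecting the Global Privacy Control opt-out signal.
# Per the GPC proposal, browsers with GPC enabled send "Sec-GPC: 1" on requests;
# everything else here (the handler, the ad strings) is illustrative.

def gpc_opt_out(headers):
    """True if the request carries a GPC opt-out signal."""
    return headers.get("Sec-GPC") == "1"

def serve_ad(headers):
    if gpc_opt_out(headers):
        # Honor the opt-out: no sale or sharing of the visitor's data.
        return "contextual ad, no data sale or sharing"
    return "behaviorally targeted ad"

result = serve_ad({"Sec-GPC": "1"})
```

The states that enforce GPC treat that single header as a legally binding do-not-sell request, which is why enabling it is on Wyden's short list of self-defense measures.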

On Tuesday, Wyden, Rep. Adriano Espaillat, and a group of 70 other lawmakers sent a letter to the DHS OIG urging it to once again investigate DHS’s purchase of location data. 

“Public reports indicate that ICE has resumed its location data purchases, even though DHS has yet to adopt all of the recommendations from your prior review,” the letter, shared with 404 Media by Wyden’s office, reads. “Your 2023 report noted that there was no DHS-wide policy governing component use of commercial location data. DHS created a commercial data working group in 2022, but as of April 2023, the DHS commercial data working group had yet to issue a policy. Your office recently confirmed that this recommendation remains open.”

The letter says that ICE is “stonewalling” Wyden’s efforts to investigate the Webloc purchase. Wyden’s office requested a briefing on the purchase after 404 Media first revealed it, with that briefing scheduled for February 10. “One day before that briefing was to take place, ICE cancelled it with no explanation and without any offer to reschedule,” the letter reads.

In January ICE posted a request for information—essentially a callout for capable companies to come forward—looking for more access to advertising technology. ICE “is gathering information to better understand how the industry’s commercial Big Data and Ad Tech providers can directly support investigations activities,” the announcement reads. “The Government is seeking to understand the current state of Ad Tech compliant and location data services available to federal investigative and operational entities, considering regulatory constraints and privacy expectations of support investigations activities,” it added.


Americans! (We need to talk about your Health Secretary)

From: potholer54
Duration: 22:06
Views: 71,048

Are seed oils poison? Is thimerosal in vaccines dangerous? RFK gets a lot of attention in the media, but let's look at the science behind his claims.

CORRECTIONS and CLARIFICATIONS:
1) I have had a couple of comments questioning my description of LDL, the so-called 'bad' cholesterol. One of them cited some research suggesting LDL may not be the culprit when it comes to clogged arteries. This has to be weighed against the huge amount of research over several decades showing that there is a link, and that link is accepted by every major medical institution I checked. See my pinned comment for details. Still, fair to say that the link between LDL and clogged arteries (atherosclerosis) is being questioned.

2) The photo I show of the element sodium is a bit suspect. I got the photo from a website called 'Get 10 Facts About the Element Sodium' (thoughtco.com/sodium-element-facts-606471). Sodium is a dull gray color on contact with air, and this looks more like a crystalline form of a sodium compound.

3) I cannot verify all the ingredients shown in Quaker oats and McDonalds frying oil. The one I mention, dimethylpolysiloxane, is used. And although the exact ingredients may differ, or (as I mentioned) may have changed, there is a huge disparity between what is allowed in the USA and Europe (cspi.org/page/chemical-cuisine-food-additive-safety-ratings)

TO SUPPORT THIS CHANNEL -- PLEASE SUPPORT THIS CHARITY:
I do not ask for contributions. Instead, please support the work of Health in Harmony, which trades forest protections for health care. See my video here: https://www.youtube.com/watch?v=j9-GRugP9pU for an explanation of their work.
Donations can be made here: https://www.healthinharmony.org/donate
The main website is: https://www.healthinharmony.org/
Health in Harmony also has a live website: https://actnow.healthinharmony.org/
Please mention the name Potholer54 when you donate, so it can be counted when I update the total from my subscribers. Thanks :)

SOURCES:

RFK on seed oil:
https://www.youtube.com/shorts/qffLMbzjui8

Joe Rogan on seed oil:
https://www.youtube.com/shorts/n5lfDRxjKpA

RFK speaking on new diet guidelines:
https://www.youtube.com/watch?v=_c28wiWIsQk

Simopoulos, A. P. (2008). The importance of the omega-6/omega-3 fatty acid ratio in cardiovascular disease and other chronic diseases. (Experimental Biology and Medicine, 233(6), 674–688.)

https://www.heart.org/en/news/2024/08/20/theres-no-reason-to-avoid-seed-oils-and-plenty-of-reasons-to-eat-them

Finns:
https://www.youtube.com/shorts/kOiXgrz3Jss
https://www.youtube.com/shorts/yncFW4CZA_g

RFK explaining ban on Thimerosal
https://www.youtube.com/watch?v=XPWh-Mq_BNg

Timeline of Thimerosal
https://www.motherjones.com/politics/2004/03/timeline-thimerosal-controversy/

Egan testimony:
https://www.pharmaceuticalonline.com/doc/fdas-william-egan-testifies-on-vaccine-additi-0001#thi

Map of measles outbreak:
https://www.cdc.gov/measles/data-research/index.html

https://www.wsj.com/video/series/opinion-review-and-outlook/wsj-opinion-could-measles-lose-its-elimination-status-in-the-us/5A75DA4D-E62F-46AA-B929-1A588BCA2D4B?mod=Searchresults&pos=2&page=1

Vitamin A toxicity
https://www.mcgill.ca/oss/article/medical-critical-thinking-health-and-nutrition/measles-vitamin-and-rfk-jrs-about-face

http://factcheck.org/2026/01/the-facts-on-the-vaccines-the-cdc-no-longer-recommends-for-all-kids/

Joe Rogan interview
https://www.youtube.com/watch?v=p6LJXPOv4SM

2:30 “I’m not against vaccines”
https://www.youtube.com/watch?v=LBP6P12oyzM

Chaired Children’s health defence – anti-vax
http://youtube.com/watch?v=HqI_z1OcenQ

1:50 – RFK saying he tells mothers not to vaccinate their babies
6:06 – RFK disinfo on vaccines
https://www.youtube.com/watch?v=DsTfrJVWYqc

congress hearings
https://www.youtube.com/watch?v=i0q_Oj425cU&t=259s

"Do your own research"
https://www.youtube.com/shorts/hmXMHkdSB8U

"Don’t take advice from me":
https://www.youtube.com/watch?v=gZJpvcg3iUY
https://www.youtube.com/watch?v=E40cdMiZ03w

Congressional hearing:
https://www.youtube.com/watch?v=OBpVLYmYLso

Americans die younger (2013)
https://www.ncbi.nlm.nih.gov/books/NBK154469/

Citations to nonexistent studies:
https://www.theguardian.com/us-news/2025/may/29/rfk-jr-maha-health-report-studies

CDC award to Danish researchers
https://www.cidrap.umn.edu/childhood-vaccines/cdc-awards-16-million-hepatitis-b-vaccine-study-controversial-danish-researchers

Stephanie Rist
https://www.youtube.com/watch?v=mcd5MSZK0V8


Get your war on: AI chatbots in the kill chain


The term “artificial intelligence” was invented in 1955 for a marketing pitch to the US Department of Defense. Silicon Valley’s goal has always been the government teat — the final stage of capitalism.

Last week, chatbot vendor Anthropic had some issues with the Department of Defense. The DoD is very into its chatbots. And they love Anthropic’s Claude. They’ve got Claude all through DoD systems, usually embedded in software from Palantir.

Anthropic had a $200 million contract with the DoD. They even made a model called Claude Gov that didn’t have the guardrails and restrictions the commercial version has.

The DoD contract said Claude couldn’t be used for domestic surveillance or in “autonomous lethal operations” — Anthropic did not want the chatbot in the kill chain. The DoD started talking in January about cancelling the whole contract. [WSJ, archive]

The DoD is upset Anthropic won’t change these provisions. So now the DoD wants to designate Anthropic a “supply chain risk” — so no other company with a defense contract will be allowed to use Anthropic.

“Supply chain risk” is a big deal — that’s a designation you use for spies, not as a business negotiation. The threat first came up two weeks ago. A senior Pentagon official told Axios: [Axios]

It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.

That is, wanting the DoD to stick to the contract they signed. Pete Hegseth, the Secretary of Defense, called Dario Amodei of Anthropic into his office last Tuesday morning and made the supply chain risk threat directly to his face. [NYT, archive]

So Anthropic and the DoD were close to hammering out the last details of the new contract. Deadline was 5pm last Friday, 27 February. [NYT, archive]

The sticking point was that the Pentagon really, really wanted a loophole that would allow domestic surveillance: [Atlantic, archive]

the Pentagon still wanted to use the company’s AI to analyze bulk data collected from Americans.

The deadline passed. At 5:14pm, Hegseth posted a statement to Twitter, obviously prepared well ahead of time, saying he was designating Anthropic a supply chain risk: [Twitter, archive]

The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

You might think using a hallucination machine was risking their lives already.

Anthropic is not happy about the supply chain risk designation. They’re going to sue over it. It’s one thing for the DoD to say they don’t want to use Claude any more, it’s quite another to threaten to destroy Anthropic’s business. It’d clearly be illegal, if laws existed. [Anthropic]

Other parts of the government aren’t so happy either:

Officials at U.S. intelligence agencies including the C.I.A., which uses Anthropic’s A.I. technology, have also privately urged both sides to make a deal. Some current and former officials said they continued to hope for a peace agreement.

But OpenAI was whispering in the DoD’s ear. After Hegseth threatened Amodei, Sam Altman called the DoD to make a deal. They hammered out a draft agreement by the next day. [NYT, archive]

OpenAI first opened itself to military sales in January 2024. Sam Altman announced the new DoD deal on Friday, a few hours after Pete Hegseth’s tweet: [Twitter, archive; OpenAI]

In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

The best possible outcome for OpenAI is a bailout when they run out of money. OpenAI needs the DoD to think OpenAI is load-bearing.

Now Anthropic is getting chatbot fans claiming they’re the peace-loving AI company. And the suckers are buying it! Drooling Claude addicts are upgrading to the $200 subscription.

Claude hit number one on the US App Store over the weekend. The Claude service has been falling over today under the load. [Axios; Bloomberg, archive]

But this is nonsense. Anthropic very much wants to keep being a defense contractor. They have no pacifist attitude and never did. Here’s Dario Amodei on CBS just yesterday: [YouTube]

Anthropic actually has been the most lean forward of all the AI companies in working with the US government and working with the US military. We were the first company to put our models on the classified cloud. We were the first company to make custom models for national security purposes. We’re deployed across the intelligence community and the military for applications like cyber, you know, combat support operations, various things like this.

The DoD’s goals in 2026 will make a lot more dead people. Anthropic’s just a bit squeamish about that last bit — they definitely want in on all the stuff that leads up to the dead people. [Anthropic]

So you know what happened next — on Saturday 28 February, the US and Israel launched attacks on Iran. And guess what, the DoD used Claude all through the operation: [WSJ]

Commands around the world, including U.S. Central Command in the Middle East, use Anthropic’s Claude AI tool.

… The command uses the tool for intelligence assessments, target identification and simulating battle scenarios even as tension between the company and Pentagon ratcheted up.

And that’ll be why the Pentagon wanted to nail down a deal — so it would be in place for this operation.

Anthropic’s DoD contract terminates six months from now. The US will surely have finished in Iran by then. Right?

 
