Power Companies Are Using AI To Build Nuclear Power Plants


Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. According to a report from think tank AI Now, this push could lead to disaster. 

“If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,” the report said.

The construction of a nuclear plant involves a long legal and regulatory process called licensing that’s aimed at minimizing the risks of irradiating the public. Licensing is complicated and expensive, but it has also largely worked: nuclear accidents in the US are rare. Now AI is driving up demand for energy, and new players, mostly tech companies like Microsoft, are entering the nuclear field.

“Licensing is the single biggest bottleneck for getting new projects online,” a slide from a Microsoft presentation about using generative AI to fast-track nuclear construction said. “10 years and $100 [million].”

The presentation, which is archived on the website for the US Nuclear Regulatory Commission (the independent government agency that’s charged with setting standards for reactors and keeping the public safe), detailed how the company would use AI to speed up licensing. In the company’s conception, existing nuclear licensing documents and data about nuclear sites would be used to train an LLM that’s then used to generate documents to speed up the process.

But the authors of the report from AI Now told 404 Media that they have major concerns about trusting nuclear safety to an LLM. “Nuclear licensing is a process, it’s not a set of documents,” Heidy Khlaaf, the head AI scientist at the AI Now Institute and a co-author of the report, told 404 Media. “Which I think is the first flag in seeing proposals by Microsoft. They don’t understand what it means to have nuclear licensing.”

“Please draft a full Environmental Review for new project with these details,” Microsoft’s presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for “review and refinement.” At the end of Microsoft’s imagined process, it would have “Licensing documents created with reduced cost and time.”
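
To make the slide’s workflow concrete, here is a minimal sketch of that generate-then-human-review loop. Everything in it — the llm_complete stub, the prompt wording, the toy approval rule — is an illustrative assumption, not Microsoft’s actual system.

    # Minimal sketch of a generate-then-human-review pipeline. The model
    # client, prompt, and approval rule are illustrative assumptions only.
    from dataclasses import dataclass, field

    @dataclass
    class Draft:
        title: str
        body: str
        approved: bool = False
        reviewer_notes: list[str] = field(default_factory=list)

    def llm_complete(prompt: str) -> str:
        """Stand-in for a call to a model trained on licensing documents."""
        raise NotImplementedError("wire up a real model client here")

    def draft_environmental_review(project_details: str) -> Draft:
        # Mirrors the example prompt from Microsoft's presentation.
        prompt = ("Please draft a full Environmental Review for a new project "
                  f"with these details:\n{project_details}")
        return Draft(title="Environmental Review", body=llm_complete(prompt))

    def human_review(draft: Draft, notes: list[str]) -> Draft:
        # The step critics say cannot be compressed: a human must reason
        # about safety, not just polish generated prose.
        draft.reviewer_notes.extend(notes)
        draft.approved = not notes  # toy criterion; real review is a process
        return draft

Note what the sketch makes visible: nothing in the loop forces anyone to re-derive the safety reasoning the documents are supposed to encode.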

The Idaho National Laboratory, a Department of Energy-run nuclear lab, is already using Microsoft’s AI to “streamline” nuclear licensing. “INL will generate the engineering and safety analysis reports that are required to be submitted for construction permits and operating licenses for nuclear power plants,” INL said in a press release. Lloyd’s Register, a UK-based maritime organization, is doing the same. American power company Westinghouse is marketing its own AI, called bertha, that promises to take the licensing process from “months to minutes.”

The authors of the AI Now report worry that using AI to speed up the licensing process will bypass safety checks and lead to disaster. “Producing these highly structured licensing documents is not this box-ticking exercise as implied by these generative AI proposals that we’re seeing,” Khlaaf told 404 Media. “The whole point of the licensing process is to reason about and understand the safety of the plant, and to also use that process to explore the trade-offs between the different approaches, the architectures, the safety designs, and to communicate to a regulator why that plant is safe. So when you use AI, it’s not going to support these objectives, because it is not a set of documents or agreements, which I think, you know, is kind of the myth that is now being put forward by these proposals.”

Sofia Guerra, Khlaaf’s co-author, agreed. Guerra is a career nuclear safety expert who has advised the U.S. Nuclear Regulatory Commission (NRC) and works with the International Atomic Energy Agency (IAEA) on the safe deployment of AI in nuclear applications. “This is really missing the point of licensing,” Guerra said of the push to use AI. “The licensing process is not perfect. It takes a long time and there’s a lot of iterations. Not everything is perfectly useful and targeted…but I think the process of doing that, in a way, is really the objective.”

Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.

Law is another profession where people have attempted to use AI to streamline the writing of complicated and involved technical documents. It hasn’t gone well. Lawyers who’ve used AI to write legal briefs have been caught, over and over again, in court. AI-constructed legal arguments cite precedents that do not exist, hallucinate cases, and generally foul up legal proceedings.

Might something similar happen if AI was used in nuclear licensing? “It could be something as simple as software and hardware version control,” Khlaaf said. “Typically in nuclear equipment, the supply chain is incredibly rigorous. Every component, every part, even when it was manufactured is accounted for. Large language models make these really minute mistakes that are hard to track. If you are off in the software version by a letter or a number, that can lead to a misunderstanding of which software version you have, what it entails, the expectation of the behavior of both the software and the hardware and from there, it can cascade into a much larger accident.”
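
Khlaaf’s example lends itself to a toy illustration. The sketch below checks reported versions against a qualified configuration manifest; the component names and version strings are hypothetical, invented for this example.

    # Hypothetical manifest of qualified software versions. A one-character
    # slip in an LLM-generated document maps a component to a different,
    # unqualified configuration.
    QUALIFIED_MANIFEST = {
        "reactor-protection-controller": "1.4.2b",
        "coolant-pump-firmware": "7.0.11",
    }

    def check_versions(reported: dict[str, str]) -> list[str]:
        """List discrepancies between reported and qualified versions."""
        return [
            f"{part}: reported {reported.get(part)!r}, qualified {expected!r}"
            for part, expected in QUALIFIED_MANIFEST.items()
            if reported.get(part) != expected
        ]

    print(check_versions({
        "reactor-protection-controller": "1.4.2d",  # off by one letter
        "coolant-pump-firmware": "7.0.11",
    }))
    # ["reactor-protection-controller: reported '1.4.2d', qualified '1.4.2b'"]

A deterministic check like this catches the one-letter slip only if every document feeding it is itself accurate — which is precisely the property a generative drafting pipeline does not guarantee.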

Khlaaf pointed to Three Mile Island as an example of an entirely human-made accident that AI may replicate. The accident was a partial nuclear meltdown of a Pennsylvania reactor in 1979. “What happened is that you had some equipment failure and design flaws, and the operators misunderstood what those were due to a combination of a lack of training…that they did not have the correct indicators in their operating room,” Khlaaf said. “So it was an accident that was caused by a number of relatively minor equipment failures that cascaded. So you can imagine, if something this minor cascades quite easily, and you use a large language model and have a very small mistake in your design.”

In addition to the safety concerns, Khlaaf and Guerra told 404 Media that using sensitive nuclear data to train AI models increases the risk of nuclear proliferation. They pointed out that Microsoft is asking not only for historical NRC data but for real-time and project specific data. “This is a signal that AI providers are asking for nuclear secrets,” Khlaaf said. “To build a nuclear plant there is actually a lot of know-how that is not public knowledge…what’s available publicly versus what’s required to build a plant requires a lot of nuclear secrets that are not in the public domain.”


Tech companies maintain cloud servers that comply with federal regulations around secrecy and are sold to the US government. Anthropic and the National Nuclear Security Administration traded information across an Amazon Top Secret cloud server during a recent collaboration, and it’s likely that Microsoft and others would do something similar. Microsoft’s presentation on nuclear licensing references its own Azure Government cloud servers and notes that it’s compliant with Department of Energy regulations. 404 Media reached out to both Westinghouse Nuclear and Microsoft for this story. Microsoft declined to comment and Westinghouse did not respond.

“Where is this data going to end up and who is going to have the knowledge?” Guerra told 404 Media.

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Nuclear is a dual-use technology. You can use the knowledge of nuclear reactors to build a power plant or you can use it to build a nuclear weapon. The line between nukes for peace and nukes for war is porous. “The knowledge is analogous,” Khlaaf said. “This is why we have very strict export controls, not just for the transfer of nuclear material but nuclear data.”

Proliferation concerns around nuclear energy are real. Fear that a nuclear energy program would become a nuclear weapons program was the justification the Trump administration used to bomb Iran earlier this year. And as part of the rush to produce more nuclear reactors and create infrastructure for AI, the White House has said it will begin selling old weapons-grade plutonium to the private sector for use in nuclear reactors.

Trump’s done a lot to make it easier for companies to build new nuclear reactors and use AI for licensing. The AI Now report pointed to a May 23, 2025 executive order that seeks to overhaul the NRC. The EO called for the NRC to reform its culture, reform its structure, and consult with the Pentagon and the Department of Energy as it navigated changing standards. The goal of the EO is to speed up the construction of reactors and get through the licensing process faster.

A different May 23 executive order made it clear why the White House wants to overhaul the NRC. “Advanced computing infrastructure for artificial intelligence (AI) capabilities and other mission capability resources at military and national security installations and national laboratories demands reliable, high-density power sources that cannot be disrupted by external threats or grid failures,” it said.

At the same time, the Department of Government Efficiency (DOGE) has gutted the NRC. In September, members of the NRC told Congress they were worried they’d be fired if they didn’t approve nuclear reactor designs favored by the administration. “I think on any given day, I could be fired by the administration for reasons unknown,” Bradley Crowell, a commissioner at the NRC, said in Congressional testimony. He also warned that DOGE-driven staffing cuts would make it impossible to increase the construction of nuclear reactors while maintaining safety standards.

“The executive orders push the AI message. We’re not just seeing this idea of the rollback of nuclear regulation because we’re suddenly very excited about nuclear energy. We’re seeing it being done in service of AI,” Khlaaf said. “When you’re looking at this rolling back of nuclear regulation and also this monopolization of nuclear energy to explicitly power AI, this raises a lot of serious concerns about whether the risk associated with nuclear facilities, in combination with these initiatives, can be justified if they’re not to the benefit of civil energy consumption.”

Matthew Wald, an independent nuclear energy analyst and former New York Times science journalist, is more bullish on the use of AI in the nuclear energy field. Like Khlaaf, he also referenced the accident at Three Mile Island. “The tragedy of Three Mile Island was there was a badly designed control room, badly trained operators, and there was a control room indication that was very easy to misunderstand, and they misunderstood it, and it turned out that the same event had begun at another reactor. It was almost identical in Ohio, but that information was never shared, and the guys in Pennsylvania didn’t know about it, so they wrecked a reactor,” Wald told 404 Media.

"AI is helpful, but let’s not get messianic about it.”

According to Wald, using AI to consolidate government databases full of nuclear regulatory information could have prevented that. “If you've got AI that can take data from one plant or from a set of plants, and it can arrange and organize that data in a way that's helpful to other plants, that's good news,” he said. “It could be good for safety. It could also just be good for efficiency. And certainly in licensing, it would be more efficient for both the licensee and the regulator if they had a clearer idea of precedent, of relevant other data.”

He also said that the nuclear industry is full of safety-minded engineers who triple-check everything. “One of the virtues of people in this business is they are challenging and inquisitive and they want to check things. Whether or not they use computers as a tool, they’re still challenging and inquisitive and want to check things,” he said. “And I think anybody who uses AI unquestioningly is asking for trouble, and I think the industry knows that…AI is helpful, but let’s not get messianic about it.”

But Khlaaf and Guerra are worried that the framing of nuclear power as a national security concern and the embrace of AI to speed up construction will set back public acceptance of nuclear power. If nuclear isn’t safe, it’s not worth doing. “People seem to have lost sight of why nuclear regulation and safety thresholds exist to begin with. And the reason why nuclear risks, or civilian nuclear risk, were ever justified was due to the capacity for nuclear power to provide flexible civilian energy demands at low-cost emissions in line with climate targets,” Khlaaf said.

“So when you move away from that…and you pull the AI arms race into this cost-benefit justification for risk proportionality, it leads government to sort of over-index on these unproven benefits of AI as a reason to accept nuclear risk, which ultimately discounts the risks of ionizing radiation to the general population, and also the increased risk of nuclear proliferation, which happens if you were to use AI like large language models in the licensing process.”




No, fake AI music bought onto a minor chart is not actually popular


Sometimes a song just comes out of nowhere and does great! But also, there are well-worn paths to buying your way to a small amount of public attention. And these ways have been common as long as there’s been mass media with music in it. Payola is the least of it.

Spotify plays and followers are a commodity. You can just buy them. We wrote up last year how a guy bought himself a ton of Spotify streams of AI tracks. Instagram followers are a commodity you can just buy.

And if someone cracks out Suno and generates yet another song going “the lights are low but the beat is high,” they’re not going to get organic attention.

But if they can get gullible idiots in the music press to report on them, they’re much happier. And there are journalists who will write up anything that says a robot might replace humans.

We’ve seen the fake fashion version of this a couple of times, where a brand promises things that are actually impossible, or some chancers say they’re using AI because they’re trying to sell you a PDF on how to make money like them. These would not have worked without foolish journalists who fail to ask the most basic and obvious questions.

I am blaming the journalists who write up these fakes as if they are not the fakes they really obviously are. And you should blame the journalists too.

It’s the same with fake bands. I am reluctant to even mention this variety of scoundrel, because even Pivot to AI levels of attention would just feed their hype machine. The bozo behind the Velvet Sundown project even sent me a press release about his latest AI fake band project, saying I should expose it. Then he signed his name.

But the fake bands do suck in some gullible press fools who write like marketing never happened. I’m looking at you, Brian Hiatt from Rolling Stone, making Velvet Sundown look serious to the rest of the press. You have no excuse not to have known better. [Rolling Stone]

The key tell is that all the numbers are ones that are really easy to fake. You can buy followers and clicks. You can do fake deals where you claim a huge headline number but no actual money moves. You can’t buy organic interest.

Nobody talks about the bands except the obviously AI-generated comments. Nobody is interested, except they might click on it once because there was a headline in an easily-fooled news outlet. There is no evidence anyone actually likes AI music slop.

Today we have a “band” called Breaking Rust. It topped the Billboard country charts! Wow! Well, it topped the Billboard country digital song sales chart. That’s one of the very tiny also-ran charts.

Breaking Rust’s total sales to top the country downloads chart was … 3,000 downloads. They bought their way onto the chart for $3,000. [Independent]

That was enough for gullible media, including a gullible writer at Billboard itself — Xander Zellner is our lucky loser today, as the guy who had no excuse not to know better: [Billboard]

“at least one AI artist has debuted in each of the past six chart weeks, a streak suggesting this trend is quickly accelerating.”

Maybe the “trend” of buying yourself coverage. I know music journalism is a super tough game right now. But that was just a very dumb thing to write.

When you see one of these AI bands or AI singers or AI actors, check for anything verifiable and real. Check for numbers that can’t be faked. You won’t find them. Because nobody wants AI slop artists.

You will find easily fooled writers who really need to step away from the keyboard and think for a bit. It’s not that hard! C’mon guys, you can do it! Give it a go!


Saturday Morning Breakfast Cereal - Performance



Click here to go see the bonus panel!

Hovertext:
Are 23 eggs not an oeuf for you? (Pun brought to you by Patreon comments)


2 public comments

silberbaer (New Baltimore, MI), 1 day ago:
Pro tip: If you buy one more egg than you light on fire, the remaining egg has had its value increased, thereby lessening the blow. You can't get that kind of deal from the medical industry!

hannahdraper (Washington, DC), 1 day ago:
Accurate

Pluralistic: For-profit healthcare is the problem, not (just) private equity (13 Nov 2025)



[Image: A black and white photo of an old hospital ward. A bright red river of blood courses between the beds. Dancing in the blood is Monopoly's 'Rich Uncle Pennybags.' He has removed his face to reveal a grinning skull.]

For-profit healthcare is the problem, not (just) private equity (permalink)

When you are at the library, you are a patron, not a customer. When you are at school, you're a student, not a customer. When you get health care, you are a patient, not a customer.

Property rights are America's state religion, and so market-oriented language is the holy catechism. But the things we value most highly aren't property, they cannot be bought or sold in markets, and describing them as property grossly devalues them. Think of human beings: murder isn't "theft of life" and kidnapping isn't "theft of children":

https://www.theguardian.com/technology/2008/feb/21/intellectual.property

When we use markets and property relations to organize these non-market matters, horrors abound. Just look at the private equity takeover of American healthcare. PE bosses have spent more than a trillion dollars cornering regional markets on various parts of the health system:

https://pluralistic.net/2024/02/28/5000-bats/#charnel-house

The PE playbook is plunder. After PE buys a business, it borrows heavily against it (with the loan going straight into the PE investors' pockets), and then, to service that debt, the new owners cut, and cut, and cut. PE-owned hospitals are literally filled with bats because the owners stiff the exterminators:

https://prospect.org/health/2024-02-27-scenes-from-bat-cave-steward-health-florida/

Needless to say, a hospital that is full of bats has other problems. All of the high-tech medical devices are broken and no one will fix them because the PE bosses have stiffed all the repair companies and contractors. There are blood shortages, saline shortages, PPE shortages. Doctors and nurses go weeks or months without pay. The elevators don't work. Black mold climbs the walls.

When PE rolls up all the dialysis clinics in your neighborhood, the new owners fire all the skilled staff and hire untrained replacements. They dispense with expensive fripperies like sterilizing their needles:

https://www.thebignewsletter.com/p/the-dirty-business-of-clean-blood

When PE rolls up your regional nursing homes, they turn into slaughterhouses. To date, PE-owned nursing homes have stolen at least 160,000 lost life years:

https://pluralistic.net/2021/02/23/acceptable-losses/#disposable-olds

Then there’s hospices, the last medical care you will ever receive. Once your doctor declares that you have six months or less to live, Medicare will pay a hospice $243-$1,462/day to take care of you as you die. At the top end of that range, hospices have to satisfy a lot of conditions, but if the hospice is willing to take $243/day, they effectively have no duties to you – they don’t even have to continue providing you with your regular medication or painkillers for your final days:

https://prospect.org/health/2023-04-26-born-to-die-hospice-care/

Setting up a hospice is cheap as hell. Pay a $3,000 filing fee, fill in some paperwork (which no one ever checks) and hang out a shingle. Nominally, a doctor has to oversee the operation, but PE-backed hospices save money here by having a single doctor "oversee" dozens of hospices:

https://auditor.ca.gov/reports/2021-123/index.html#pg34A

Once you rope a patient into this system, you can keep billing the government for them up to a total of $32,000, then you have to kick them out. Why would a patient with only six months to live survive to be kicked out? Because PE companies pay bounties to doctors to refer patients who aren't dying to hospices. 51% of patients in the PE-cornered hospices of Van Nuys are "live discharged":

https://pluralistic.net/2023/04/26/death-panels/#what-the-heck-is-going-on-with-CMS
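
Some back-of-envelope arithmetic on the article’s round numbers shows why that $32,000 cap shapes hospice behavior (actual Medicare rates and caps vary by year and region):

    # Days of billing before hitting the aggregate cap, at the quoted
    # per-diem rates. Figures are the article's round numbers.
    CAP = 32_000
    for daily_rate in (243, 1_462):
        days = CAP / daily_rate
        print(f"${daily_rate}/day: cap reached in ~{days:.0f} days "
              f"(~{days / 30:.1f} months)")
    # $243/day: cap reached in ~132 days (~4.4 months)
    # $1462/day: cap reached in ~22 days (~0.7 months)

At the minimal-duty rate, a hospice can bill for well over four months per patient — which is why a referred patient who isn’t actually dying is so lucrative.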

However, once you're admitted to a hospice, Medicare expects you to die – so "live discharged" patients face a thick bureaucratic process to get back into the system so they can start seeing a doctor again.

So all of this is obviously very bad, a stark example of what happens when you mix the most rapacious form of capitalist plunder with the most vulnerable kind of patient. But, as Elle Rothermich writes for LPE Journal, the PE model of hospice is merely a more extreme and visible version of the ghastly outcomes that arise out of all for-profit hospice care:

https://lpeproject.org/blog/hospice-commodification-and-the-limits-of-antitrust/

The problems of PE-owned hospices are not merely a problem of the lack of competition, and applying antitrust to PE rollups of hospices won’t stop the carnage, though it would certainly improve things somewhat. While once American hospices were run by nonprofits and charities, that changed in 1983 with the introduction of Medicare’s hospice benefit. Today, three-quarters of US hospices are run for profit.

It's not just PE-backed hospices; the entire for-profit hospice sector is worse than the nonprofit alternative. For-profit hospices deliver worse care and worse outcomes at higher prices. They are the worst-performing hospices in the country.

This is because (as Rothermich writes) "The actual provision of care—the act of healing or attempting to heal—is broadly understood to be something more than a purely economic transaction." In other words, patients are not customers. In the hierarchy of institutional obligations, "patients" rank higher than customers. To be transformed from a "patient" into a "customer" is to be severely demoted.

Hospice care is a complex, multidisciplinary, highly individualized practice, and pain treatment spans many dimensions: "psychological, social, emotional, and spiritual as well as physical." A cash-for-service model inevitably flattens this into "a standardized list of discrete services that can each be given a monetary value: pain medication, durable medical equipment, skilled nursing visits, access to a chaplain."

As Rothermich writes, while there are benefits to blocking PE rollups and monopolization of hospices, to do so at all tacitly concedes that health care should be treated as a business, that "corporate involvement in care delivery is an inevitable, irreversible development."

Rothermich's point is that health care isn't a commodity, and to treat it as such always worsens care. It dooms patients to choosing between different kinds of horrors, and subjects health care workers to the moral injury of failing their duty to their patients in order to serve them as customers.


Hey look at this (permalink)



[Image: A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.']

Object permanence (permalink)

#20yrsago New Sony lockware prevents selling or loaning of games https://memex.craphound.com/2005/11/12/new-sony-lockware-prevents-selling-or-loaning-of-games/

#20yrsago Dr Seuss meets Star Trek https://web.archive.org/web/20051126025052/http://www.seuss.org/seuss/seuss.sttng.html

#20yrsago Sony’s other malicious audio CD trojan https://memex.craphound.com/2005/11/12/sonys-other-malicious-audio-cd-trojan/

#15yrsago Will TSA genital grope/full frontal nudity “security” make you fly less? https://web.archive.org/web/20101115011017/https://blogs.reuters.com/ask/2010/11/12/are-new-security-screenings-affecting-your-decision-to-fly/

#15yrsago Make inner-tube laces, turn your shoes into slip-ons https://www.instructables.com/Make-normal-shoes-into-slip-ons-with-inner-tubes/

#15yrsago Tractor sale gone bad ends with man eating own beard https://web.archive.org/web/20101113200759/http://www.msnbc.msn.com/id/40136299

#10yrsago San Francisco Airport security screeners charged with complicity in drug-smuggling https://www.justice.gov/usao-ndca/pr/three-san-francisco-international-airport-security-screeners-charged-fraud-and

#10yrsago Female New Zealand MPs ejected from Parliament for talking about their sexual assault https://www.theguardian.com/world/2015/nov/11/new-zealand-female-mps-mass-walkout-pm-rapists-comment

#10yrsago Councillor who voted to close all public toilets gets a ticket for public urination https://uk.news.yahoo.com/councillor-cut-public-toilets-fined-094432429.html#1snIQOG

#10yrsago Edward Snowden’s operational security advice for normal humans https://theintercept.com/2015/11/12/edward-snowden-explains-how-to-reclaim-your-privacy/

#10yrsago Not (just) the War on Drugs: the difficult, complicated truth about American prisons https://jacobin.com/2015/03/mass-incarceration-war-on-drugs/

#10yrsago Britons’ Internet access bills will soar to pay for Snoopers Charter https://www.theguardian.com/technology/2015/nov/11/broadband-bills-increase-snoopers-charter-investigatory-powers-bill-mps-warned

#10yrsago How big offshoring companies pwned the H-1B process, screwing workers and businesses https://www.nytimes.com/2015/11/11/us/large-companies-game-h-1b-visa-program-leaving-smaller-ones-in-the-cold.html?_r=0

#5yrsago Anti-bear robo-wolves https://pluralistic.net/2020/11/12/thats-what-xi-said/#robo-lobo

#5yrsago Xi on interop and lock-in https://pluralistic.net/2020/11/12/thats-what-xi-said/#with-chinese-characteristics

#5yrsago Constantly Wrong https://pluralistic.net/2020/11/12/thats-what-xi-said/#conspiratorialism


Upcoming appearances (permalink)

[Image: A photo of me onstage, giving a speech, pounding the podium.]



[Image: A screenshot of me at my desk, doing a livecast.]

Recent appearances (permalink)



[Image: A grid of my books with Will Staehle covers.]

Latest books (permalink)



[Image: A cardboard book box with the Macmillan logo.]

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026



Colophon (permalink)

Today's top sources:

Currently writing:

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X


Google Has Chosen a Side in Trump's Mass Deportation Effort


Google is hosting a Customs and Border Protection (CBP) app that uses facial recognition to identify immigrants and tells local cops whether to contact ICE about the person, while simultaneously removing apps designed to warn local communities about the presence of ICE officials. ICE-spotting app developers tell 404 Media that the decision to host CBP’s new app, and Google’s description of ICE officials as a vulnerable group in need of protection, shows that Google has made a choice about which side to support in the Trump administration’s violent mass deportation effort.

Google removed certain apps used to report sightings of ICE officials, and “then they immediately turned around and approved an app that helps the government unconstitutionally target an actual vulnerable group. That's inexcusable,” Mark, the creator of Eyes Up, an app that aims to preserve and map evidence of ICE abuses, said. 404 Media only used the creator’s first name to protect them from retaliation. Their app is currently available on the Google Play Store, but Apple removed it from the App Store.

“Google wanted to ‘not be evil’ back in the day. Well, they're evil now,” Mark added.

💡
Do you know anything else about Google's decision? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The CBP app, called Mobile Identify and launched last week, is for local and state law enforcement agencies that are part of an ICE program that grants them certain immigration-related powers. The 287(g) Task Force Model (TFM) program allows those local officers to make immigration arrests during routine police enforcement, and “essentially turns police officers into ICE agents,” according to the New York Civil Liberties Union (NYCLU). At the time of writing, ICE has TFM agreements with 596 agencies in 34 states, according to ICE’s website.

mkalus, 1 day ago: “Don’t be Evil™.”

dlwright, 1 day ago: From the company that sold the data of people with gambling addictions to betting sites comes their newest boon to society.

GEMA wins against OpenAI on copying song lyrics


GEMA is the official music publishing collections organisation in Germany. They’re big, they’re powerful, they throw their weight around, and a lot of people think they’re dicks. Large swathes of YouTube are still blocked in Germany because they might contain music that GEMA hasn’t been paid for.

So GEMA decided it was going to take on the AI guys. In September 2024, GEMA put together a licence scheme for AI vendors — with ongoing licensing fees, not just a one-off payment. [MBW, 2024]

GEMA’s proposed licence fee would be — get this — “a 30% share of all net income generated by the generative AI model or system of the provider.” That’s 30% of all the profit OpenAI makes — if they ever make a profit. It’s not clear why GEMA is going for a share of net rather than a share of revenue. GEMA also want ongoing payments for secondary usage of AI-generated music. [GEMA, 2024]

In November 2024, GEMA sued OpenAI because ChatGPT would happily spit out lyrics by German songwriters on request. [press release, 2024]

Yesterday, GEMA won big in the Munich Regional Court against OpenAI. [Bavarian State Ministry of Justice, in German]

OpenAI had argued that their large language models did not store copies of the lyrics — just parameters learned from the whole training set.

This is the sort of argument that Stability AI successfully used against Getty Images — that the models aren’t literally a copy. Even though they are actually a compressed copy of their training data!

The Munich court did not buy OpenAI’s argument. Language models can absolutely just reproduce their training data, and GPT very obviously did that precise thing.
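
That finding can be demonstrated with a simple verbatim-reproduction test: prompt a model with the opening of a protected text and measure how much of the known continuation it emits word-for-word. The sketch below is illustrative only — generate() stands in for whatever model API is under test, and no real lyrics are included.

    # Sketch of a verbatim-reproduction test. generate() is a stand-in
    # for a real model API call.
    def generate(prompt: str) -> str:
        raise NotImplementedError("call an actual model API here")

    def verbatim_overlap(known: str, emitted: str, n: int = 5) -> float:
        """Fraction of the known continuation's n-grams found in the output."""
        def ngrams(text: str) -> set[tuple[str, ...]]:
            words = text.lower().split()
            return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        known_grams = ngrams(known)
        if not known_grams:
            return 0.0
        return len(known_grams & ngrams(emitted)) / len(known_grams)

    # Usage, once generate() is wired to a real model:
    #   score = verbatim_overlap(true_continuation, generate(opening_lines))
    # A score near 1.0 means the model reproduces its training data rather
    # than paraphrasing — which is what the court found GPT did.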

The court said that the training itself was fine. It was spitting out copies of the lyrics that constituted infringement.

OpenAI argued that if the chatbot just happened to spit out a copy of the lyrics, that was the user’s fault for asking for them. The court didn’t buy that either — it was OpenAI that put the lyrics into the training data and made them available.

The court has prohibited OpenAI from reproducing the lyrics. If you ask ChatGPT for song lyrics, it will now block the output on copyright grounds. It will still produce a translation of the lyrics into another language, though. [Heise, in German]

The Munich judgement can be appealed. OpenAI says “we disagree with the ruling and are considering next steps,” but they would say that. [Reuters]

GEMA also sued the AI music generator Suno in January for allegedly spitting out recognisable copies of licensed German songs. There isn’t a court date as yet. [GEMA; GEMA]

OpenAI has already threatened to leave Europe if it gets regulated too hard. Imagine GEMA being the ones to drive the chatbots out of Germany.
