
Actually, the left is winning the AI debate


Last week, I wrote a bit about how and why the AI discourse has become particularly unhinged lately. Right as I published that piece, another AI discourse generator started making the rounds: an article by Dan Kagan-Kans in the effective altruist AI newsletter Transformer, which argued that “the left is missing out on AI.” I’d already briefly addressed this notion in the post on the AI zeitgeist, and at first I thought that would probably be sufficient. But it kept nagging at me, and I do think it’s worth engaging this notion, and why I think it’s wrong, in full.

In fact, I’m going to argue that not only is the left not “missing out” on AI, but that it would be more accurate to say that it is “currently winning the debate” over AI in American hearts and minds. Polls routinely show that Americans are more concerned than enthusiastic about AI. Coalitions are organizing opposition to data centers across the country, often successfully. Where they have been proposed, in states like New York, Colorado, and California, laws to regulate or rein in AI have found majority support. When actors and screenwriters went on strike in 2023, foregrounding a demand to stop executives from using AI to undermine their jobs, they were widely cheered. And so on.

Much of this is driven by left-liberal critique of AI systems: Of the impact AI would have on labor when administered by management (the union-led screen actor and writer strikes, California’s No Robo Bosses Act), of the resource and energy costs of data centers (articulated by progressive environmental groups), of the practice of nonconsensually exploiting works in training models (shouted from the rooftops by leftist artists like Molly Crabapple from the first days of Midjourney), and so on. Left-liberals can’t claim full credit for all this concern—the AI CEOs themselves have been doing their best to ensure everyone knows they intend to automate the world’s jobs and think that there’s a chance they might create Skynet in the process.

In fact, the left appears to be so successfully engaged in matters related to AI that one can’t help but wonder if allegations about its supposed ignorance of the technology are motivated by a desire to change the very terms of the debate.


To wit: The Kagan-Kans piece articulates a position that I think is pretty widely shared, especially among AI industry folks, centrist pundits, and anyone who might command a speaking fee for having tough but sensible opinions about AI. I’m not going to spend a lot of time rebutting it line by line—Gita Jackson handled much of that task over at Aftermath—though I did find its argument underwhelming and somewhat confused. (Kagan-Kans never really seems certain as to what he wants to describe as “the left”, for one thing. Most of the critique is dedicated to the linguist Emily Bender, who is not a leftist, and the only person on the left Kagan-Kans interviews is Matt Bruenig, who argues… the left should use more AI.)

Briefly reconstructed, that argument basically goes like this: Bender and her coauthors of the famous stochastic parrots paper posited that AI is not really intelligent; it’s a next-token prediction machine. “The left” has metabolized this conception of AI, and uses it as an excuse to write off AI’s import, which is growing by the day. (In one puzzling section, the author compares those who discount the true power of AI to climate deniers.) This in turn means “the left” will miss out on a political opportunity to be “first in line” and thus the chance “to set all the rules for discussion and debate about it,” according to the source Kagan-Kans quotes to advance that claim: the chief futurist of OpenAI.

Now, I personally would not look to the “chief futurist” of OpenAI for my understanding of political science, for starters. Being “first” to a debate does not to my knowledge necessarily translate to policy influence. Power does, though. Since Kagan-Kans likes climate change metaphors, here’s an example: In the 1990s, climate scientists presented the case that greenhouse gases were warming the atmosphere to Congress, and for a time it seemed like federal legislative action was probable. Then, during the Bush administration, when action that would injure the interests of the oil industry seemed imminent, Republicans adopted a concerted political strategy of climate denial, as dictated by the infamous Frank Luntz memo—and sure enough, the party was able to reset the terms of the debate, years later.

Furthermore, I think it’s simply counter to the facts to argue, as Kagan-Kans does, that the left is missing out on a chance to play a role in policymaking over AI guardrails. If anything, left-liberals are trying to pass laws that do exactly that, especially at the state level, and the right is trying to crush them. Dozens of state bills, backed by unions and progressive organizations like TechEquity, have been proposed and passed already, covering everything from labor impacts to surveillance to identity protection and child safety. Meanwhile, the Trump right is working to put a moratorium on state-level AI lawmaking altogether—a move that’s profoundly unpopular with the electorate, and that many on the left have called out as dangerous and wrong. It’s yet another point where left-liberals appear to be winning public opinion in the AI debate.

But I know that’s not what most pundits mean when they say the left is missing out on AI. What they are saying instead is that the left doesn’t see AI the way they and their cohort do: as a transformational force that will remake the world and its institutions.

That’s where Bender comes in. I think many in the AI industry have been particularly angered by Bender’s depiction of AI as a stochastic parrot—more than they have by other criticisms—because it seems to demean their project on a structural level. If someone believes they’re building a powerful superintelligence, I’m sure it feels insulting to have someone call it fancy autocomplete. I also know there are critical tech writers who chafe at Bender’s formulation and its persistence, who argue it has had the effect of limiting folks’ understanding of what AI models are capable of. I see the primary thrust of her work as grounding claims and hype about AI in the fact that AI models are statistical language-modeling machines. To me, that doesn’t diminish the notion that those models can be complex or powerful or capable of impressive things; it just underscores their materiality as programmed systems, and perhaps helps limit the purchase of industry-benefitting visions of artificial sentience.

Regardless, a core part of the ‘left doesn’t get AI’ line comes from an assumption that Bender’s parrot formulation has become its default position. And where does that assumption come from? Kagan-Kans cites a handful of articles out of the thousands written about AI, but I think I know where the true root lies. It is at this point that I stand up for one of the most denigrated populations on the entire internet: Bluesky users. A lot of the tech world’s conception of what constitutes “the left” seems to be drawn from scanning Bluesky. AI evangelists on X see users there calling AI “the plagiarism machine” or “nothing but autocomplete” and assume their knowledge of the technology tapers off where the state of the art was in 2023. (To be fair, Bluesky users do the reverse to AI boosters on X.) But just because someone calls AI a “plagiarism machine” doesn’t mean they don’t have any further understanding of the technology. One may think it a corny, reductive way to describe AI, or to articulate a rejection of it, but much in the way you wouldn’t assume a user who posts “orange man bad” hasn’t been following the latest Trump news, it doesn’t follow that the slogan marks the limit of someone’s understanding of the topic.

I would wager, in fact, that the most common left-liberal position is not that generative AI is just a stochastic parrot, but that it is a product built by powerful profit-seeking firms that’s capable of serious harms. It’s not a semantic or philosophical critique the AI industry should be worried about, in other words, but a material one. AI is viewed, correctly, as a threat to jobs, education, mental wellbeing, the arts, child safety, and the information ecosystem, and as possessing little upside for anyone other than corporate managers and AI companies.

The left’s project, then, is much larger than the right’s—which is content to cut red tape, shrug, shovel money into the engine, and laugh at liberal tears—and must both resist AI’s current iteration and envision where it should go instead. The left knows it must oppose these firms as they seek to facilitate a massive transfer of wealth from the working class to the rich—and again, it knows this because AI executives are constantly talking about how they aim to do exactly this. It must confront the energy costs of the data centers, which are causing electricity bills to spike around the nation and fossil-fueled power plants to be brought back online. It must confront systems that are built on pirated and stolen intellectual property, and work both to mitigate the damage done by that theft and to combat the new norms such systems seek to impose.

In other words:

“The left” must confront the entire political economy of AI at once, not just consider the core technology, which at this point is nearly impossible to assess apart from its owners and developers. AI is, after all, being developed by a cohort of executives and oligopolist tech firms that are open about their project of orchestrating a mass deskilling of labor, that are using as much energy, resources, and capital as they see fit in the pursuit of that goal, and that are actively working with an authoritarian state to shut down democratic oversight of the technology. It feels blinkered, to say the least, to claim “the left is missing out” by not productively engaging with the product built to serve this socioeconomic formation.

Rejecting or resisting a commercial technology designed to attempt a mass wealth transfer and to erode public institutions is a valid political position. This rejection can manifest as a kneejerk “plagiarism machine sucks” tweet or a slogan on a poster board at a picket line or a policy paper. And the fact that someone rejects a technology and the broader project it is part of does not mean they do not understand it. Oftentimes that rejection is entirely informed, warranted, and rational.

“The left” should be thinking about what it does want AI to do, and what good management of AI systems would look like. This, I think, is the big point centrist pundits and AI folks are trying to make, but I have good news for them: The left is thinking about these things! The biggest indicator that Kagan-Kans’s piece was either not particularly carefully researched or not written in good faith is that it failed to mention a debate that unfolded over the last several months, read by much of the left, between Aaron Benanav, Evgeny Morozov, and Leif Weatherby, addressing this very question. Benanav articulates a nuanced and carefully detailed plan for organizing a society that can sustainably manage both a drawdown from fossil fuels and a sector capable of robust innovation. (I will add he first did so last year, in an article for the New Left Review, the left’s most august journal of ideas.) Morozov argues for more fluidity and experimentation with AI. Weatherby makes a case for automating the c-suite. Meanwhile, the left’s most popular interview podcast, the Dig, just hosted Nick Srnicek, whose latest book takes the import of AI quite seriously, and whose subtitle is, quite literally, “The Fight for the Future of AI.” The left/left-liberal scholars and policymakers Ruha Benjamin, Alondra Nelson, and Amba Kak were appointed to Mamdani’s tech transition committee—where they are thinking seriously about how to manage the future of AI. Bernie Sanders, the single most famous individual on the American left, has proposed a data center moratorium, as, somewhat confusingly, Kagan-Kans himself points out.

“The left” must grapple with broad questions about what should be automated and what should not, and who gets to make those decisions—questions that would remain even if AI were not being developed by anti-humanist CEOs bent on mass extraction. My sense is that most people on “the left” understand that AI can efficiently automate large amounts of text, code, and image production—certainly most I speak with understand this—and that the tools are improving, if not at the rate the tech executives insist, and are still plenty flawed. But embedded in everything from Marxist programs to advance a communal AI to left-liberal critiques of harms to reactionary Bluesky posts is a broader grappling with what matters to us in society: What do we want AI to do, and how? Do we want AI in the classroom at all, even if job degradation and deskilling somehow weren’t concerns? Do we want the writing of journalists’ copy to be automated? Art to be mass produced by machines? What is worth the trade-offs, the energy consumption? What’s not? The Benanav-Morozov debate was in large part over that question, and so, I would argue, are conversations happening online and off, among the broader left-liberal axis. (The MAGA right has fewer such qualms—if AI can make them money, confer them power, and help them demean opponents with slop, bring it on.)

That these messy and vitriolic and sometimes inarticulate debates are not about what kind of Claudeswarm one should be wielding to maximize productivity does not signify ignorance, or a missed opportunity. It signifies resistance to the general AI project as currently constituted. It asks “Why do we want this?” “Who does it serve?”

This is eminently reasonable. It’s also popular. Data center moratoria are being passed in cities across the nation. Silicon Valley elites are immensely unpopular. There is widespread support for regulation and AI governance, and for the AI laws that have passed thus far. I’m sure AI advocates would prefer we move past this tendency to challenge the foundational elements of the industry’s products and hegemony. Of course they would! Because “the left” is winning the AI debate.

Do I think it’s getting everything right about AI? Hardly. I think it could lean more into a program of rejecting generative AI in extractive and exploitative circumstances, of protecting labor from deskilling, wage degradation, and surveillance, and of refusing AI’s intrusion into the spheres of public life that Silicon Valley seeks to colonize and profit from. The left, as always, does need to get better organized, and to better understand that it has actually accrued significant political capital around AI—again, majorities fear AI, dislike it, want to keep it away from their kids, and don’t want it to take their jobs, all for good reason—and use it. Because as of right now, it’s only winning the debate in the court of public opinion. In practice, AI companies are doing whatever they want, with the blessing of Trumpworld.

The left could even, as Benanav and Weatherby intimate, make a significantly stronger case that AI should be entirely publicly held and administered and—why not?—used to replace the executive class altogether. Position this on the terms AI firms and their founders themselves have laid out. If AI is truly the revolutionary force they claim, and it stands to remake the world from the ground up, if it promises to eliminate skill difference and advantage, then forget pittances like a basic income. Forget leaving Sam Altman in charge. Why should any reasonable person settle for anything less than full equality, and full co-ownership of this AI-run state? Control over our AI should be placed entirely in public hands—it was built to “benefit all of humanity”, after all—and granted to the humans it stands to impact. Why not automate Altman and the Anthropic c-suite and Elon Musk, and redistribute any gains to the people, however meager?




I did mean to respond to the whole ‘the left is missing out on AI’ line earlier in the discourse cycle, and one reason I couldn’t was that I was putting together the last edition of BITM, about the nationwide phenomenon of people smashing Flock surveillance cameras, which has absolutely blown up. I suppose that was a pun.

It was picked up by Gizmodo, TechCrunch, Yahoo!, VICE, and was featured on the front page of Reddit, Hacker News, and Slashdot. The response was nearly universal: Godspeed to the Flock smashers. Jeff Sovern, the man who’s standing trial for dismantling 13 Flock cameras in Virginia, said he’d received thousands of dollars in donations to his legal fund since the story went up. And I’ve been receiving more tips about more acts of sabotage since. Keep them coming.

If you’ve heard stories of smashed Flock cameras or dismantled surveillance equipment in your neighborhood, please share—drop a link in the comments, or contact me on Signal or at briancmerchant@proton.me.

Thanks for sharing that story around, everyone.

I linked this in the story above, but thought I’d share it here too; a friend of BITM was featured on 60 Minutes, talking about AI’s impact on the art world and her livelihood. Turns out she thinks AI is pretty cool:

Oh you thought I was being hyperbolic when I labeled AI CEOs as anti-humanist up there?

Paris Marx has a nice writeup about Altman and OpenAI’s moral rot, tying the above posture with regard to humans as null energy consumers to the AI company’s failure to alert authorities after ChatGPT flagged one of its users as a risk for committing violence—and that user went on to perpetrate one of Canada’s deadliest mass shootings.

I missed this when it came out, but just stumbled on it, and it’s too good not to share. Diana Enriquez studied how 50 middle managers use AI, and wrote about the results for Tech Policy Press:

…middle managers often appease leadership requests to implement AI automation even when there is limited value in it, pretending that their error-free draft was written by an AI tool that actually failed to deliver them a single coherent copy. They push harder to do more work with less time because the job market now demands they demonstrate familiarity with AI to align with the company’s evolving brand and complete work at the expected level of quality. The result is increased employee anxiety, burnout, and limits on productivity rather than gains.

A fun BBC story about how incredibly easy it is to hack AI search results:

I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs”. Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously….

Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

I went to check the results in ChatGPT a couple days later to see if OpenAI had fixed the answer to the query the BBC built its stunt around, and it had, though now that response appeared weirdly fixated on Verge editor in chief Nilay Patel.

Honestly why is Nilay on there twice lol

Finally, some good Luddite goings on:

In London, this Saturday, February 28th, you can join the March Against the Machines, organized by Pull the Plug.

For more details, RSVP here. As organizer Harry Atkinson tells me:

The rally is part of a wider effort to demand democratic oversight of how AI is developed and deployed in the UK. We’re calling for binding Citizens’ Assemblies on AI, so ordinary people—whose jobs, industries and livelihoods are being reshaped by these systems—can have a meaningful say in how they’re used.

Alongside the London rally, Global Action Plan will be coordinating decentralised actions outside data centres across the country, so people can participate locally if they can’t make it to London.

If I were anywhere near London, you can bet I’d be there, proverbial hammer in tow.

Finally, in New York City, the makers of the Luddite Club documentary are holding a salon with a screening and some special guests. This film is going to be great, and I’m not just saying that because I’m in it. RSVP and more info here.


OK OK OK that’s it for today. Thanks for reading everyone. And as always, a special thanks to paid subscribers, who make all of this work possible. Consider upgrading if you find value in it, too. OK! Until next time. Hammers—and bolt cutters—up.




Five takeaways from an unhinged AI discourse


The AI discourse has been particularly, let’s say, “heated” lately. It’s hitting a lot of the beats we’ve heard before—people are not ready for what’s coming, critics are too dismissive, and at everyone’s peril, “the left” is getting AI all wrong, etc—but delivered at a fever pitch.

A viral, AI-generated blog post on X called “Something Big Is Happening,” by Matt Shumer, the CEO of an AI company, was one catalyst, though it builds off sentiments articulated in Anthropic CEO Dario Amodei’s much longer essay, “The Adolescence of Technology,” which makes a similar if more indulgent and nuanced case. Add to that all the AI Super Bowl ads, and the hype drummed up by Moltbook, the ‘reddit for AI agents’ created by yet another AI CEO, which was the talk of the town until it was revealed that it exposed the user data of everyone involved and that many of the most interesting threads were actually written by humans. Underneath it all was more organic buzz produced by Anthropic’s coding tools, which users, journalists, and commentators are blogging and podcasting about. But the Something Big blog, with 83 million views and counting, burst the dam.

The gist should be plenty familiar to BITM readers and AI watchers at this point: Tremendous social change, driven by AI, is about to unfold, and people simply aren’t prepared. (Per the post’s central conceit, we are in the early pandemic days, when things are about to change forever.) Yet the blog did that special thing that a blog published at just the right time and place can do: it inspired people to react particularly strongly on social media, in a way that inspired other people to react strongly to the reaction, and then all bets were off and everyone was sharing what they think about AI, and more specifically their frustrations with what everyone else thinks about AI. That many had been preoccupied with what was happening with ICE and Minneapolis and the release of more of the Epstein files also probably meant lots of those AI thoughts had been pent up for a month or two, and contributed to the unusual force with which they were released.

It has been a rich and sprawling text, to say the least. To help make sense of it, here are the five major takeaways from the most heated AI discourse in a minute, as far as I’m concerned:

  1. There is a distinct material basis for all this discourse. We’re in the midst of another concerted, industry-led hype cycle, this time driven more visibly by Anthropic, which just landed a $30 billion investment round.

  2. This time the hype must transcend multibillion dollar investment deals: It must also raise the stock of AI companies ahead of scheduled IPOs later this year and help lay the groundwork for federal funding and/or bailout backing.

  3. Much of the discourse centered on lambasting critics who accuse AI of being “fake”—but this is a straw man argument that serves the industry.

Read more




Jazz Lass: 1947

New York, May 1947. "Teddy Kaye, Vivien Garry (last seen here) and Arvin "Arv" Charles Garrison at Dixon's." Photo by Down Beat contributor William Gottlieb. View full size.

The Funny Place: 1912

Atlantic City, New Jersey, circa 1912. "The Boardwalk and Steeplechase Pier." George Tilyou's "amusement pier" lasted the better part of a century, hurricanes and fires notwithstanding. 5x7 inch dry plate glass negative, Detroit Publishing Company. View full size.

Midtowner Motel: 1964

July 1, 1964. Here we are at the Midtowner Motel, in a Kodachrome slide donated by a fan of Shorpy. But where is the Midtowner Motel? Let us know in the comments below. View full size.

The Factories: 1899

Niagara Falls, New York, circa 1899. "The Factories -- Niagara Gorge. (Roof of first plant by water: Power Station No. 2, Niagara Falls Hydraulic Power & Mfg. Co.; second plant: Cliff Paper Co.)" 8x10 inch glass transparency, Detroit Photographic Company. View full size.