
Swedish Prime Minister Pulls AI Campaign Tool After It Was Used to Ask Hitler for Support


The Moderate Party of Sweden has removed an AI tool from its website after people used it to generate videos of Prime Minister Ulf Kristersson asking Adolf Hitler for support. The tool allowed users to generate videos of Kristersson holding an AI-generated message in an attempt to promote the candidate ahead of the general election in Sweden next year.

Swedish television station TV4 used the tool to generate a video of Kristersson on a newspaper above the headline “Sweden needs Adolf Hitler” after it noticed that it had no guardrails or filters.

In the video TV4 generated using the website, Kristersson makes his pitch over stock footage of old people embracing. A woman runs through a field, the camera focusing on flowers while the sun twinkles in the background. Cut to Kristersson. He turns a blue board around. “We need you, Adolf Hitler,” it says.

The Moderates removed the AI system from their website, but the videos of Kristersson asking Hitler to join the party remain on social media and on TV4’s website.

In an attempt to bolster its ranks, the Moderate Party launched a website that allowed users to generate a custom video of Kristersson asking someone to join the party. The idea was probably to have party members plug in the names of friends and family members and share what appeared to be a personalized message from the PM asking for their support.

In the video, Kristersson stands in front of stairs, makes his pitch, and turns around a blue tablet that bears a personalized message to the viewer. The system apparently had no guardrails or filters and Swedish television station TV4 was able to plug in the names Adolf Hitler, Ugandan dictator Idi Amin, and Norwegian mass murderer Anders Breivik.

The Moderate Party did not return 404 Media’s request for a comment about the situation, but told TV4 it shut down the site as soon as it learned people were using it to generate messages with inappropriate names.

The Moderate Party’s AI-generated video was simple. It filmed the PM holding a blue board that could easily be overlaid with input from a user, then used AI to generate the fake newspaper and a few other slides. Preventing people from typing in “Hitler” or “Anders Breivik” would have been as simple as maintaining a list of prohibited names, words, and phrases, something that every video game and online service does. Users are good at bypassing guardrails, but the Moderates’ AI tool appeared to have none.
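As a rough illustration, such a blocklist check can be a few lines of code. Below is a minimal Python sketch; the names and the substring matching are purely illustrative, not how the party's tool worked, and a real deployment would need a far longer list plus fuzzy matching to catch misspellings.

# Hypothetical blocklist filter of the kind the Moderates' tool lacked.
# A production list would be far longer and would need fuzzy matching
# to catch variants and misspellings (e.g. "Brevik" for "Breivik").
BLOCKED_NAMES = {"adolf hitler", "hitler", "idi amin", "anders breivik"}

def is_allowed(name: str) -> bool:
    # Lowercase and collapse whitespace, then reject on any blocked term.
    normalized = " ".join(name.lower().split())
    return not any(blocked in normalized for blocked in BLOCKED_NAMES)

print(is_allowed("Astrid Lindgren"))  # True: passes the filter
print(is_allowed("Adolf  Hitler"))    # False: caught despite extra spaces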

Users making content you don’t want to be associated with is one of the oldest and best-known problems in AI. If you release a chatbot, generative photo system, or automated political greeting generator, someone will use it to reference the Nazis or make nonconsensual porn.

When Microsoft launched Tay in 2016, users turned it into a Hitler-loving white nationalist in a few hours. Eight years later, another Microsoft AI product had a loophole that let people make AI-generated nudes of Taylor Swift. Earlier this year, Instagram’s AI chatbots lied about being licensed therapists.


Jordan Peterson: "Capable of assessing data", or gullibly misled?

From: potholer54
Duration: 10:39
Views: 46,029

This is the first in a series about Jordan Peterson's disastrous foray into the world of geophysics.
Whether you love him or loathe him doesn't matter. The question is: Does he get his facts right?
In this video, I look at the claim that it's costing us $14 trillion to transition from fossil fuels to clean energy, and at whether carbon dioxide emissions from cars, power plants, buildings and other sources are heating the Earth's atmosphere. Peterson has very firm beliefs on these issues, and uses his platform as a psychologist and self-help guru to spread those beliefs. But what evidence does he have to support them? Let's see...

SOURCES:

https://www.iea.org/reports/world-energy-investment-2024/overview-and-key-findings

https://assets.bbhub.io/professional/sites/24/951623_BNEF-Energy-Transition-Trends-2025-Abridged.pdf

https://about.bnef.com/blog/global-cost-of-renewables-to-continue-falling-in-2025-as-china-extends-manufacturing-lead-bloombergnef/

https://www.ox.ac.uk/news/2022-09-14-decarbonising-energy-system-2050-could-save-trillions-oxford-study

My video on Bjorn Lomborg and his claims about electric cars is here:
https://youtu.be/hwMPFDqyfrA

My video on Clintel, the political lobby group that sent a 'declaration' to the EU parliament, is here:
https://www.youtube.com/watch?v=cpUe41EbHvQ
I anticipate that a lot of people will say that the signatories to this Declaration are not amateurs when it comes to climate science. If that's the case, please cite the signatories you think are involved in climate research and have published their results in respected, peer-reviewed journals (you know, the way science is done). To help you narrow it down, blogs, websites, newspapers, speeches, petitions and TV shows are not respected, peer-reviewed journals. Making a claim is easy. Backing it up with actual names and titles is much harder, as you will discover.

DONATIONS TO CHARITY:
I do not ask for contributions. Instead, please support the work of Health in Harmony, which trades forest protections for health care. It's an innovative scheme that has seen thousands of acres of tropical rain forest protected and also restored, and the health of nearby villagers greatly improved.
See my video here: https://www.youtube.com/watch?v=j9-GRugP9pU for an explanation of their work.
Please donate here: https://www.healthinharmony.org/donate
Health in Harmony also has a live website: https://www.healthinharmony.org


Pivot to AI is unwell


I got Saturday’s 20-minute video of Veo 3 fails produced in a tour de force, ignoring my increasing headache and difficulty thinking clearly. But it’s done, it’s up, and it’s fantastic. Tell all your friends! [YouTube]

By the time I went to bed I had 1°C of fever. This has continued today and I’ve spent the day in bed dazed.

So the blog post and podcast versions will be a while. Probably not tomorrow unless I’m magically well.

In the meantime, here’s the video. It’s awesome. I’m going to rest as long as I need to.


neeks @ Dubstation, Fusion 2025 – als ob


This set made my evening yesterday, and as I listened I kept thinking what a fantastic sound it would be for a cozy Sunday. As luck would have it, today is Sunday, and I can't think of a more fitting way to wrap it in sound than what neeks did at this year's Dubstation. A really great mixture – a bit of everything.


The Media's Pivot to AI Is Not Real and Not Going to Work


On May 23, we got a very interesting email from Ghost, the service we use to make 404 Media. “Paid subscription started,” the email said, which is the subject line of all of the automated emails we get when someone subscribes to 404 Media. The interesting thing about this email was that the new subscriber had been referred to 404 Media directly from chatgpt.com, meaning the person clicked a link to 404 Media from within a ChatGPT window. It is the first and only time that ChatGPT has ever sent us a paid subscriber.

From what I can tell, ChatGPT.com has sent us 1,600 pageviews since we founded 404 Media nearly two years ago. To give you a sense of where this slots in, this is slightly fewer than the Czech news aggregator novinky.cz, the Hungarian news portal Telex.hu, the Polish news aggregator Wykop.pl, and barely more than the Russian news aggregator Dzen.ru, the paywall jumping website removepaywall.com, and a computer graphics job board called 80.lv. In that same time, Google has sent roughly 3 million visitors, or 187,400 percent more than ChatGPT. 

This is really neither here nor there because we have tried to set our website up to block ChatGPT from scraping us, though it is clear this is not always working. But even for sites that don’t block ChatGPT, new research from the internet infrastructure company Cloudflare suggests that OpenAI is crawling 1,500 individual webpages for every one visitor that it is sending to a website. Google traffic has begun to dry up as both Google’s own AI snippets and AI-powered SEO spam have obliterated the business models of many media websites.
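For reference, the standard opt-out mechanism is a robots.txt rule naming OpenAI's documented crawler user agents (GPTBot for training crawls, ChatGPT-User for requests made on behalf of ChatGPT users). A minimal sketch follows, assuming those two agents are the ones you want to block; compliance is voluntary on the crawler's side, which is one reason blocking does not always work.

User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /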


This general dynamic—plummeting traffic because of AI snippets, ChatGPT, AI slop, Twitter no workie so good no more—has been called the “traffic apocalypse” and has all but killed some smaller websites and has been blamed by executives for hundreds of layoffs at larger ones. 

Despite the fact that generative AI has been a destructive force against their businesses, their industry, and the truth more broadly, media executives still see AI as a business opportunity and a shiny object that they can tell investors and their staffs that they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked, and pretending that they are forward thinking or have any clue what they are doing will perhaps allow a specific type of media executive to squeeze out a few more months of salary.

But pivoting to AI is not a business strategy. Telling journalists they must use AI is not a business strategy. Partnering with AI companies is a business move, but becoming reliant on revenue from tech giants who are creating a machine that duplicates the work you’ve already created is not a smart or sustainable business move, and therefore it is not a smart business strategy. It is true that AI is changing the internet and is threatening journalists and media outlets. But the only AI-related business strategy that makes any sense whatsoever is one where media companies and journalists go to great pains to show their audiences that they are human beings, and that the work they are doing is worth supporting because it is human work that is vital to their audiences. This is something GQ’s editorial director Will Welch recently told New York magazine: “The good news for any digital publisher is that the new game we all have to play is also a sustainable one: You have to build a direct relationship with your core readers,” he said.

Becoming an “AI-first” media company has become a buzzword that execs can point at to explain that their businesses can use AI to become more ‘efficient’ and thus have a chance to become more profitable. Often, but not always, this message comes from executives who are laying off large swaths of their human staff.

In May, Business Insider laid off 21 percent of its workforce. In her layoff letter, Business Insider’s CEO Barbara Peng said “there’s a huge opportunity for companies who harness AI first.” She told the remaining employees there that they are “fully embracing AI,” “we are going all-in on AI,” and said “over 70 percent of Business Insider employees are already using Enterprise ChatGPT regularly (our goal is 100%), and we’re building prompt libraries and sharing everyday use cases that help us work faster, smarter, and better.” She added they are “exploring how AI can boost operations across shared services, helping us scale and operate more efficiently.” 

Last year, Hearst Newspapers executives, who operate 78 newspapers nationwide, told staff in an all-hands meeting, in audio obtained by 404 Media, that they are “leaning into [AI] as Hearst overall, the entire corporation.” Examples given in the meeting included using AI for slide decks, a “quiz generation tool” for readers, translations, a tool called Dispatch, which is an email summarization tool, and a tool called “Assembly,” which is “basically a public meeting monitor, transcriber, summarizer, all in one. What it does is it goes into publicly posted meeting videos online, transcribes them automatically, [and] automatically alerts journalists through Slack about what’s going on and links to the transcript.”

The Washington Post and the Los Angeles Times are doing all sorts of fucked up shit that definitely no one wants but are being imposed upon their newsrooms because they are owned by tech billionaires who are tired of losing money. The Washington Post has an AI chatbot and plans to create a Forbes contributor-esque opinion section with an AI writing tool that will assist outside writers. The Los Angeles Times introduced an AI bot that argues with its own writers and has written that the KKK was not so bad, actually. Both outlets have had massive layoffs in recent months.

The New York Times, which is actually doing well, says it is using AI to “create initial drafts of headlines, summaries of Times articles and other text that helps us produce and distribute the news.” Wirecutter is hiring a product director for AI and recently instructed its employees to consider how they can use AI to make their journalism better, New York magazine reported. Kevin Roose, an, uhh, complicated figure in the AI space, said “AI has essentially replaced Google for me for basic questions,” and said that he uses it for “brainstorming.” His Hard Fork colleague Casey Newton said he uses it for “research” and “fact-checking.” 

Over at Columbia Journalism Review, a host of journalists and news execs, myself included, wrote about how AI is used in their newsrooms. The responses were all over the place and were occasionally horrifying, and ranged from people saying they were using AI as personal assistants to brainstorming partners to article drafters.

In his largely incoherent screed that shows how terrible he was at managing G/O Media, which took over Deadspin, Kotaku, Jezebel, Gizmodo, and other beloved websites and ran them into the ground at varying speeds, Jim Spanfeller nods at the “both good and perhaps bad” impacts of AI on news. In a truly astounding passage of a notably poorly written letter that manages to say less than nothing, he wrote: “AI is a prime example. It is here to a degree but there are so many more shoes to drop [...] Clearly this technology is already having a profound impact. But so much more is yet to come, both good and perhaps bad depending on where you sit and how well monitored and controlled it is. But one thing to keep in mind, consumers seek out content for many reasons. Certainly, for specific knowledge, which search and search like models satisfy in very effective ways. But also, for insights, enjoyment, entertainment and inspiration.” 

At the MediaPost Publishing Insider Conference, a media industry business conference I just went to in New Orleans, there was much chatter about AI. Alice Ting, an executive for the Daily Mail, gave a pretty interesting talk about how the Daily Mail is protecting its journalism from AI scrapers in order to eventually strike deals with AI companies to license their content.

“What many of you have seen is a surge in scraping of our content, a decline in traffic referrals, and an increase in hallucinated outputs that often misrepresent our brands,” Ting said. “Publishers can provide decades of vetted and timestamped content, verified, fact checked, semantically organized, editorially curated. And in addition offer fresh content on an almost daily basis.” 

Ting is correct in that several publishers have struck lucrative deals with AI companies, but she also suggested that AI licensing would be a recurring revenue stream for publishers, which would require a series of competing LLMs to want to come in and license the same content over and over again. Many LLMs have already scraped almost everything there is to scrape; it’s not clear that there will consistently be new LLMs from companies wanting to pay to train on data that other LLMs have already trained on; and it’s not clear how much money the Daily Mail’s blogs of the day are going to be worth to an AI company on an ongoing basis. Betting that hinging the future of the industry on massive, monopolistic tech giants will work out this time is the most Lucy-with-the-football thing I can imagine.

There is not much evidence that selling access to LLMs will work out in a recurring way for any publisher, outside of the very largest publishers like, perhaps, the New York Times. Even at the conference, panel moderator Upneet Grover, founder of LH2 Holdings, which owns several smaller blogs, suggested that “a lot of these licensing revenues are not moving the needle, at least from the deals we’ve seen, but there’s this larger threat of more referral traffic being taken away from news publishers [by AI].”

In my own panel at the conference I made the general argument that I am making in this article, which is that none of this is going to work.

“We’re not just competing against large-scale publications and AI slop, we are competing against the entire rest of the internet. We were publishing articles and AI was scraping and republishing them within five minutes of us publishing them,” I said. “So many publications are leaning into ‘how can we use AI to be more efficient to publish more,’ and it’s not going to work. It’s not going to work because you’re competing against a child in Romania, a child in Bangladesh who is publishing 9,000 articles a day and they don’t care about facts, they don’t care about accuracy, but in an SEO algorithm it’s going to perform and that’s what you’re competing against. You have to compete on quality at this point and you have to find a real human being audience and you need to speak to them directly and treat them as though they are intelligent and not as though you are trying to feed them as much slop as possible.”

It makes sense that journalists and media execs are talking about AI because everyone is talking about AI, and because AI presents a particularly grave threat to the business models of so many media companies. It’s fine to continue to talk about AI. But the point of this article is that “we’re going to lean into AI” is not a business model, and it’s not even a business strategy, any more than pivoting to “video” was a strategy or chasing Facebook Live views was a strategy. 

In a harrowing discussion with Axios, in which he excoriates many of the deals publishers have signed with OpenAI and other AI companies, Matthew Prince, the CEO of Cloudflare, said that the AI-driven traffic apocalypse is a nightmare for people who make content online: “If we don’t figure out how to fix this, the internet is going to die,” he said.

So AI is destroying traffic, ripping off our work, creating slop that destroys discoverability and further undermines trust, and allows random people to create news-shaped objects that social media and search algorithms either can’t or don’t care to distinguish from real news. And yet media executives have decided that the only way to compete with this is to make their workers use AI to make content in a slightly more efficient way than they were already doing journalism. 

This is not going to work, because “using AI” is not a reporting strategy or a writing strategy, and it’s definitely not a business strategy.

AI is a tool (sorry!) that people who are bad at their jobs will use badly and that people who are good at their jobs will maybe, possibly find some uses for. People who are terrible at their jobs (many executives), will tell their employees that they “need” to use AI, that their jobs depend on it, that they must become more productive, and that becoming an AI-first company is the strategy that will save them from the old failed strategy, which itself was the new strategy after other failed business models.

The only journalism business strategy that works, and that will ever work in a sustainable way, is if you create something of value that people (human beings, not bots) want to read or watch or listen to, and that they cannot find anywhere else. This can mean you’re breaking news, or it can mean that you have a particularly notable voice or personality. It can mean that you’re funny or irreverent or deeply serious or useful. It can mean that you confirm people’s priors in a way that makes them feel good. And you have to be trustworthy, to your audience at least. But basically, to make money doing journalism, you have to publish “content,” relatively often, that people want to consume. 

This is not rocket science, and I am of course not the only person to point this out. There have been many, many features about the success of Feed Me, Emily Sundberg’s newsletter about New York, culture, and a bunch of other stuff. As she has pointed out in many interviews, she has been successful because she writes about interesting things and treats her audience like human beings. The places that are succeeding right now are individual writers who have a perspective, news outlets like WIRED that are fearless, publications that have invested in good reporters like The Atlantic, publications that tell you something that AI can’t, and worker owned, journalist-run outlets like us, Defector, Aftermath, Hellgate, Remap, Hearing Things, etc. There are also a host of personality-forward, journalism-adjacent YouTubers, TikTok influencers, and podcasters who have massive, loyal audiences, yet most of the traditional media is utterly allergic to learning anything from them.

There was a short period of time where it was possible to make money by paying human writers—some of them journalists, perhaps—to spam blog posts onto the internet that hit specific keywords, trending topics, or things that would perform well on social media. These were the early days of Gawker, Buzzfeed, VICE, and Vox. But the days of media companies tricking people into reading their articles using SEO or hitting a trending algorithm are over.

They are over because other people are doing it better than them now, and by “better,” I mean, more shamelessly and with reckless abandon. As we have written many times, news outlets are no longer just competing with each other, but with everyone on social media, and Netflix, and YouTube, and TikTok, and all the other people who post things on the internet. They are not just up against the total fracturing of social media, the degrading and enshittification of the discovery mechanisms on the internet, algorithms that artificially ding links to articles, AI snippets and summaries, etc. They are also competing with sophisticated AI slop and spam factories often being run by people on the other side of the world publishing things that look like “news” that is being created on a scale that even the most “efficient” journalist leveraging AI to save some perhaps negligible amount of time cannot ever hope to measure up to. 

Every day, I get emails from AI spam influencers who are selling tools that allow slop peddlers to clone any website with one click, automatically generate newsletters about any topic, or generate plausible-seeming articles that are engineered to perform well in a search algorithm. Examples: “Clone any website in 9 seconds with Clonely AI,” “The future of video creation is here—and it’s faceless, seamless & limitless,” “just a straightforward path to earning 6-figures with an AI-powered newsletter that’s working right now.” These people do not care at all about truth or accuracy or our information ecosystem or anything else that a media company or a journalist would theoretically care about. If you want an example of what this looks like, consider the series of “Good Day” newsletters, which are AI-generated and published in 355 small towns across America, many of which no longer have newspapers. These businesses are economically viable because they are being run by one person (or a very small team of people) who disproportionately live in low-cost-of-living areas and have essentially zero overhead.

And so becoming more “efficient” with AI is the wrong thing to do, and it’s the wrong thing to ask any journalist to do. The only thing that media companies can do in order to survive is to lean into their humanity, to teach their journalists how to do stories that cannot be done by AI, and to help young journalists learn the skills needed to do articles that weave together complicated concepts and, again, that focus on our shared human experience, in a way that AI cannot and will never be able to.

AI as buzzword and shiny object has been here for a long time. And I actually do not think AI is fake and sucks (I also don’t really believe that anyone thinks AI is “fake,” because we can see the internet collapsing around us). We report every day on the ways that AI is changing the web, in part because it is being shoved down our throats by big tech companies, spammers, etc. But I think that Princeton’s Arvind Narayanan and Sayash Kapoor are basically correct when they say that AI is “normal technology” that will not change everything but that over time will lead to modest improvements in people’s workflows as it gets integrated into existing products or helps around the edges. We—yes, even you—are already using some version of AI, or some tools that have LLMs or machine learning in them in some way, shape, or form, even if you hate such tools.

In early 2023, when I was the editor-in-chief of Motherboard, I was asked to put together a presentation for VICE executives about AI, and how I thought it would change both our journalism and the business of journalism. The reason I was asked to do this was because our team was writing a lot about AI, and there was a sense that the company could do something with AI to make money, or do better journalism, or some combination of those things. There was no sense or thought at the time, at least from what I was told, that VICE was planning to use AI as a pretext for replacing human journalists or cutting costs—it had already entered a cycle where it was constantly laying off journalists—but there was a sense that this was going to be the big new opportunity/threat, a new potential savior for a company that had already created a “virtual office” in Decentraland, a crypto-powered metaverse that last year had 42 daily active users.

I never got to give the presentation, because the executive who asked me to put it together left the company, and the new people either didn’t care or didn’t have time for me to give it. The company went bankrupt almost immediately after this change, and I left VICE soon after to make 404 Media with my co-founders, who also left VICE. 

But my message at the time, and my message now two years later, is that AI has already changed our world, and that we have the opportunity to report on the technology as it already exists and is already being used—to justify layoffs, to dehumanize people, to spam the internet, etc. At the time, we had already written 840 articles that were tagged “AI,” which included articles about biased sentencing algorithms, predictive policing, facial recognition, deepfakes, AI romantic relationships, AI-powered spam and scams, etc. 

The business opportunity then, as now, was to be an indispensable, very human guide to a technology that people—human beings—are making tons of money off of, using as an excuse to lay off workers, and are doing wild shit with. There was no magic strategy in which we could use AI to quadruple our output, replace workers, rise to the top of Google rankings, etc. There was, however, great risk in attempting to do this: “PR NIGHTMARE,” one of my slides about the risks of using AI I wrote said: “CNET plagiarism scandal. Big backlash from artists and writers to generative AI. Copyright issues. Race to the bottom.”

My other thought was that any efficiencies that could be squeezed out of AI, in our day-to-day jobs, were already being done so by good reporters and video producers at the company. There could be no top-down forced pivot to AI, because research and time-saving uses of AI were already being naturally integrated into our work by people who were smart in ways that were totally reasonable and mostly helpful, if not groundbreaking. The AI-as-force-multiplier was already happening, and while, yes, this probably helped the business in some way, it helped in ways that were not then and were never going to be actually perceptible to a company’s bottom line. AI was not a savior then, and it is not a savior now. For journalists and for media companies, there is no real “pivot to AI” that is possible unless that pivot means firing all of the employees and putting out a shittier product (which some companies have called a strategy). This is because the pivot has already occurred and the business prospects for media companies have gotten worse, not better. If Kevin Roose is using AI so much, in such a new and groundbreaking way, why aren’t his articles noticeably different than they ever were before, or why aren’t there way more of them than there were before? Where are the journalists who were formerly middling who are now pumping out incredible articles thanks to efficiencies granted by AI?

To be concrete: Many journalists, including me, at least sometimes use some sort of AI transcription tool for some of their less sensitive interviews. This saves me many hours; the tools have gotten better (but are still not perfect, absolutely require double checking, and should not be used for sensitive sources or sensitive stories). YouTube’s transcript feature is an incredible reporting tool that has allowed me to do stories that would never have been possible even a few years ago. YouTube’s built-in translations, subtitles, and transcript tool are some of the only reasons I was able to do this investigation into Indian AI slop creators; they allowed me to get the gist of what was happening in a given video before we handed the videos to human translators for exact translations. Most podcasts I know of now use Descript, Riverside, or a similar tool to record and edit; these have built-in AI transcription, AI camera switching, and text-to-video editing tools. Most media outlets use the captioning built into Adobe Premiere or CapCut for their vertical videos and YouTube videos (and then double check it). If you want to get extremely annoying about it, various machine learning algorithms are in Pro Tools, Audition, CapCut, Premiere, Canva, etc. for things like photo editing, sound leveling, and noise reduction.
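For a concrete sense of what that transcription step looks like, here is a minimal Python sketch using the open-source openai-whisper package (pip install openai-whisper; it also requires ffmpeg). The file name is hypothetical, and as noted above the output still needs a human double check before anything is quoted.

# Minimal sketch: transcribe an interview locally with openai-whisper.
import whisper

model = whisper.load_model("base")          # small model; larger ones are slower but more accurate
result = model.transcribe("interview.mp3")  # hypothetical file; returns text plus timed segments
print(result["text"])                       # always verify against the recording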

There are other journalists who feel very comfortable coding and doing data analysis and analyzing huge sets of documents. There are journalists out there who are already using AI to do some of these tasks and some of the resulting articles are surely good and could not have been done without AI. 

But the people doing this well are doing so in a way where they are catching and fixing AI hallucinations, because the stakes for fucking up are so incredibly high. If you are one of the people who is doing this, then, great. I have little interest in policing other people’s writing processes so long as they are not publishing AI fever dreams or plagiarizing, and there are writers I respect who say they have their little chats with ChatGPT to help them organize their thoughts before they do a draft or who have vibecoded their own productivity tools or data analysis tools. But again, that’s not a business model. It’s a tool that has enabled some reporters to do their jobs, and, using their expertise, they have produced good and valuable work. This does not mean that every news outlet or every reporter needs to learn to shove the JFK documents into ChatGPT and have it shit out an investigation.

I also know that our credibility and the trust of our audience are the only things that separate us from anyone else. They are the only “business model” we have and that I am certain works: We trade good, accurate, interesting, human articles for money and attention. Offloading that trust to an AI in a careless way is the biggest risk we could take as a business. Having an article go out where someone goes “Actually, a robot wrote this” is one of the worst possible things that could ever happen to us, and so we have made the brave decision to not do that.

This is part of what is so baffling about the Chicago Sun-Times’ response to its somewhat complicated summer-guide AI-generated reading list fiasco. Under its new owner, Chicago Public Media, the Sun-Times has in recent years spent an incredible amount of time and effort rebuilding the image and goodwill that its previous private equity owners destroyed. And yet in its apology note, Melissa Bell, the CEO of Chicago Public Media, said that more AI is coming: “Chicago Public Media will not back away from experimenting and learning how to properly use AI,” she wrote, adding that the team was working with a fellow paid for by the Lenfest Institute, a nonprofit funded by OpenAI and Microsoft.

Bell does realize what makes the paper stand apart, though: “We must own our humanity,” Bell wrote. “Our humanity makes our work valuable.”

This is something that the New York Times’s Roose recently brought up that I thought was quite smart and yet is not something that he seems to have internalized when talking about how AI is going to change everything and that its widespread adoption is inevitable and the only path forward: “I wonder if [AI is] going to catalyze some counterreaction,” he said. “I’ve been thinking a lot recently about the slow-food movement and the farm-to-table movement, both of which came up in reaction to fast food. Fast food had a lot going for it—it was cheap, it was plentiful, you could get it in a hurry. But it also opened up a market for a healthier, more artisanal way of doing things. And I wonder if something similar will happen in creative industries—a kind of creative renaissance for things that feel real and human and aren’t just outputs from some A.I. company’s slop machine.”

This has ALREAAAAADDDDYYYYYY HAPPPENEEEEEDDDDDD, and it is quite literally the only path forward for all but perhaps the most gigantic of media companies. There is no reason for an individual journalist or an individual media company to make the fast food of the internet. It’s already being made, by spammers and the AI companies themselves. It is impossible to make it cheaper or better than them, because it is what they exist to do. The actual pivot that is needed is one to humanity. Media companies need to let their journalists be human. And they need to prove why they’re worth reading with every article they do.


Superflux


Michael Kalus posted a photo:
