
CSIRO posts AI-fake climate article with made-up quotes


Cosmos is — or was — a popular science magazine published by the CSIRO, Australia’s national science agency.

David Ho is an oceanographer at the University of Hawai’i. Cosmos posted an article about removing carbon from the oceans. Professor Ho was quite surprised the article had a quote from him and a pile of stuff he didn’t actually think — particularly when he hadn’t spoken to Cosmos at all. The article read just like AI slop. [Bluesky]

The quote was lifted from a June article in the Guardian, who did speak to Ho — in a completely different context. [Guardian]

A different Cosmos story by the same author, Melissa Cristina Márquez, on the shellfish industry’s problems with oceans getting more acidic, had several quotes, all lifted from other sources.

Ho finally heard back from Cosmos’ “Engagement Manager” that Cosmos had taken both articles down and were “investigating.” [Bluesky]

Cosmos went broke in early 2024. The CSIRO took over the magazine in June 2024. [press release, 2024]

In August 2024, the new-look Cosmos used a grant from the Walkley Foundation to build a “custom AI service” for the Cosmos website. It pumped out mangled articles based on work by Cosmos contributors — who were freelancers who had retained their copyright in their original articles, and were not happy. [ABC, 2024]

The CSIRO announced in May that Cosmos Magazine was just not sustainable and it was ceasing publication. The website would continue for the rest of this year — apparently running AI slop. [CSIRO]


UK Users Need to Post Selfie or Photo ID to View Reddit's r/IsraelCrimes, r/UkraineWarFootage


Several Reddit communities dedicated to sharing news and media from conflicts around the world now require users in the UK to submit a photo ID or selfie in order to prove they are old enough to view “mature” content. The new age verification system is a result of the recently enacted Online Safety Act in the UK, which aims to protect children from certain types of content and hold platforms like Reddit accountable if they don’t. 

Some of the Reddit communities that now include this age verification check include:

  • r/IsraelCrimes, which aims to “Spread awareness of what is happening in occupied Palestine.” The subreddit regularly features videos of Israeli bombs killing Palestinians, clashes between protesters, settlers, and the Israeli Defense Force in the West Bank, and images of dead Palestinians, but also links to articles from news publications and discussion of the subject without graphic images. 
  • r/UkraineWarFootage, which bills itself as “a politically neutral subreddit for posting combat footage of the Ukraine-Russian war.” The subreddit regularly features graphic war footage from the frontlines of the war, sometimes from soldiers wearing GoPro-type cameras, but also other footage that’s available in mainstream news sources. Two of the top posts at the time of writing were a video of bombs falling in Kyiv that was published by The Guardian and video of the notoriously heated meeting between Donald Trump, JD Vance, and Volodymyr Zelenskyy in the Oval Office in February. 
  • r/CombatFootage, “A forum for combat footage, and photos, from historical to ongoing wars.” The top post on that subreddit at the time of writing is a video of a Russian fuel train targeted by a long range Ukrainian drone. Other top videos at the time of writing show consumer-grade drones dropping explosives on soldiers in Ukraine and Burma, and Israeli airstrikes in Syria. 

“We’re not at all surprised the UK government is deploying this tactic now when the world is paying attention to the horrors unfolding, they’d rather silence the discussion than confront their own complicity,” the moderators of r/IsraelCrimes told me. “By labeling brutally documented atrocities as ‘mature content’ and gate‑keeping access, they reveal themselves as hypocrites: preaching democracy and human rights abroad while trampling them at home.”

The Online Safety Act is fundamentally changing how people in the UK access the internet and adds a layer of verification that is incompatible with the concept of a free and open internet as we know it. It doesn’t guarantee children can’t view mature content. VPN usage in the UK is already skyrocketing and kids are easily bypassing the kind of age verification tech Reddit is using, but that friction makes some information inherently less accessible.

“Reddit was built on the principle that you shouldn’t need to share personal information to participate in meaningful discussions,” Reddit said in a post explaining how it’s going to verify users’ age in the UK when they want to view “mature” content in order to comply with the Online Safety Act. “Unlike platforms that are identity-based and cater to the famous (or those that want to become famous), Reddit has always favored upvoting great posts and comments by people who use whimsical usernames and not their real name. These conversations are often more candid and real than those that force you to share your real-world identity.”

Reddit explained that it will verify users' age by partnering with Persona, an identity verification company that raised $200 million in April in a series D round led by Peter Thiel’s Founders Fund. Persona asks users to upload a selfie or photo ID in order to verify their age. Reddit says it does not have access to these images, and that Persona does not retain those photos for more than seven days. 

Reddit told me in an email that it restricts certain mature content in the UK, and that it defines “mature content” for these purposes per the UK’s Online Safety Act, which includes violent, graphic content, and porn.

As free speech advocates have argued, and as the new age verification on certain Reddit communities now shows, the result of this policy in practice is also to limit access to important information across the board, regardless of the user’s age. Reddit declined to say whether it saw a drop in traffic to these communities, but the age verification system is likely to make some users turn away because they are worried about their privacy and don’t want to upload a selfie or picture of their ID, or because they just don’t want to jump through hoops. In the past few days we reported that Tea, a women’s dating safety app which required users to upload selfies in order to prove they were women, exposed all of those images, which are now being posted across the internet.

“If visibility of r/IsraelCrimes is being restricted under the Online Safety Act, it’s only because the state fears accountability,” the moderators said. “We stand by the need for unfettered access to information, and we’ll continue working to ensure these stories reach every corner of the internet.”

Even if they work exactly as intended, age verification laws are going to further skew the internet’s ability to reflect reality with a bias that diminishes how cruel, bloody, and inhumane that reality can be. I don’t think it’s good or necessary for children to have easy access to ISIS beheading videos, but I also don’t think it’s good that a piece of legislation that aims to protect children is making it much more difficult for internet users, even if they are younger than 18, to view the emaciated bodies of children in Gaza who are dying of starvation, and to discuss that fact with other people on Reddit. 

Reddit users can get access to the basic facts of what is happening in Gaza or Ukraine on other news sites and Reddit communities that discuss the news, like r/worldnews. But the decision by lawmakers to make it harder to see the brutal reality behind the news isn’t neutral. The Pentagon for years banned the media from covering flag-draped coffins of war victims coming back from Iraq, which made it easier to forget how many Americans died there, not to speak of Iraqi civilians. 

When should kids be allowed to see the world in its full horror, and who is responsible for that, are extremely complicated questions that I’m not sure anyone has a good answer for. I certainly don’t. But in the UK, that decision has already been made, and now it will take us time to see the consequences. The same restrictions are also increasingly present in the United States, with age verification laws expanding state by state, the passing of the Take It Down Act, which forces platforms to actively monitor speech, and lawmakers advancing the even more aggressive Kids Online Safety Act.




Gun Nerds Dismantle Infamous Pistol to Research If It Fires at Random


A U.S. airman in Wyoming died last week after an incident involving an M18 pistol, the military version of the P320 handgun, a weapon long infamous among gun nerds. The incident, and other incidents where the M18 and the civilian version of it, the P320, have fired unexpectedly, have sent gun hobbyists into investigation mode, with guntubers dismantling the gun at the center of the controversy, running it through various stress tests and firing exercises in an attempt to discover the flaw that’s given the P320 a reputation for firing on its own.

Online gun nerd drama doesn’t typically bubble up into the mainstream, but what’s happening with the P320, which is made by Sig Sauer, is extreme. In the aftermath of the death of the airman at F.E. Warren Air Force Base last week, the Air Force’s Global Strike Command ordered an indefinite pause on the use of the pistol. On July 9, well before the airman died, ICE told its agents to stop using the handgun. Police departments across the country have banned officers from carrying the weapon. An FBI report published in 2024 and leaked online recently found that it’s possible for the weapon to discharge at random.

The most digestible breakdown for a non-gun-aficionado audience is this 40-minute epic from Wyoming Gun Project. In the video, host Matt Rittman shows that the slide (the top portion of the pistol) can wobble up and down. That’s not typical in these kinds of handguns, and he speculates that the instability of the slide does something to the striker (the internal firing pin that hits the back of a bullet and launches it from the gun) that can make it discharge under certain conditions.

A gun shouldn’t fire without a full pull of its trigger, but Rittman demonstrates that the combination of a tiny amount of pressure on the trigger coupled with jostling of the slide can make the gun fire at random. The amount of pressure on the trigger in the video is light enough that it could be done by a rock, a piece of grit, or some other piece of debris.

Other videos in the genre are more technical in nature. Four Peaks Tactical dismantled several firearms to show the difference between safety mechanisms and firing pins, explaining in detail how it all works and suggesting that a flaw with the safety may lead to the P320 firing on its own. LFD Research took the slide off the tops of several versions of the P320 and discussed how, exactly, the gun works and why the safety doesn’t work as it should. Mongoose Guns got granular, dismantling a P320 completely and showing each individual screw, spring, and bolt moving on its own.

But it was the Wyoming Gun Project that captured the imagination of firearm enthusiasts and the wider public. In his video, Rittman makes the gun discharge in his garage just by touching it and makes a soyface over the top of the pistol, creating the perfect YouTube thumbnail that others attached to their own reaction videos of his tests. Even MoistCr1TiKaL, a gaming streamer and gun enthusiast with 17 million subscribers, made a reaction video to Wyoming’s tests.

For the past few years, one of the biggest online feuds in this world has been between fans of the Sig Sauer P320 and everyone else. Guntubers and others had attempted to suss out the exact problem with the guns for years but had come to no satisfying conclusion.

In March, Sig Sauer made a long Instagram post about how safe the gun is. “It ends today,” the Instagram post said. “The P320 cannot, under any circumstances, discharge without a trigger pull—that is a fact. The allegations against the P320 are nothing more than individuals seeking to profit or avoid personal responsibility.”

“We can no longer stay silent while lawsuits run their course, and clickbait farming, engagement hacking grifters continue their campaign to hijack the truth for profit,” the post said. “What’s happening today to Sig Sauer with the anti-gun mob and their lawfare tactics will happen tomorrow at another firearms manufacturer, and then another.” The situation is so bad that Sig has a website, P320truth.com, dedicated to debunking claims about the gun’s safety and providing the “truth” about the handgun.

The statement was mocked by people in the firearms community. Many of the lawsuits Sig is facing were filed on behalf of police officers, U.S. military veterans, and gun enthusiasts who claimed the gun had a design flaw that made it fire when it’s not supposed to. The overwhelming majority of the P320 video content is from guntubers trying to replicate unintentional discharges and dismantling the gun to figure out what’s going on. The P320 fires by mistake so often that there are supercuts of it happening online pulled from body cam footage and CCTV. And, of course, now the Air Force’s Global Strike Command and ICE have told their people to stop using the weapon.

People discussing guns online are like any other fandom or subculture. You can track what the community cares about through memes and shitposts, a collective received forum wisdom creates heroes and villains, and fans battle over their favorites with fierce tenacity. The P320 and its manufacturer Sig are, increasingly, a villain in the community.

There are a lot of anti-Sig memes. One of the most popular is a tourniquet kit with a Sig Sauer logo on it. “Free with the purchase of any new Sig Sauer P320,” it says. In another, Power from Chainsaw Man explains how the P320 never fires unless someone is pulling the trigger, before accidentally shooting herself in the head. There’s even a popular post on 4chan right now from a guy who claims he shot himself in the leg with his P320 and is considering switching to a Glock, complete with graphic photos.

Image via 4chan, blurred by 404 Media.

In the aftermath of the airman’s death, Sig posted condolences to his family, but its response to government agencies banning the weapon has been to fight. In March, a police academy in Washington State banned the handgun. In response, Sig Sauer filed a lawsuit against the academy to get the ban lifted, saying that the ban had hurt the manufacturer’s reputation.

Sig Sauer did not respond to 404 Media’s request for comment.


Living Next To Tesla Diner Is 'Absolute Hell,' Neighbors Say


One of the big unanswered questions at last week’s grand opening of Hollywood’s Tesla Diner was how its neighbors were feeling about the new, four-story tall movie screen placed directly outside their apartment building. 

Turns out, many of them are not liking it, or the general chaos that the diner has brought.

First, there was the construction. “Last night they have installed a flashing security light up against our fence,” Kristin Rose, a former resident of the apartment building next to the Tesla Diner, said in an email to the building management and to Tesla in February 2024, during building works. “This light is flashing BRIGHT into our apartments, including bedrooms, all night. Even with the blinds closed it feels like we're at the world's worst rave. Video is attached."


Strobing light outside apartment. Video: Kristin Rose

Rose moved out in January of this year, which she says is “absolutely, 100 percent” because of the diner. “We were living through active construction six days a week from 7 a.m. to like 8 or 9 p.m.,” she said. “Often that construction was starting at like 4 a.m. illegally.” She showed me several email chains where she discussed these issues with Tesla, the city of Los Angeles, her building management, and the construction company.


Spotify Is Forcing Users to Undergo Face Scanning to Access Explicit Content


Spotify is requiring users in the UK to verify they’re over 18 to view "certain age restricted content." Users are reporting seeing a popup on Spotify asking them to verify their age following the enactment of the UK's Online Safety Act last week, which forced platforms to verify the ages of everyone who tries to access certain kinds of content deemed harmful to children.

“You may be presented with an age check when you try to access certain age restricted content, like music videos tagged 18+,” Spotify says on an informational page about the checks. The check involves getting your face scanned through your device’s camera, or uploading your license or passport if that doesn’t work. If you fail the checks, or if the age verification system can’t accurately determine your age, your Spotify account will be deleted.

“You cannot use Spotify if you don’t meet the minimum age requirements for the market you’re in. If you cannot confirm you’re old enough to use Spotify, your account will be deactivated and eventually deleted,” Spotify says.

Spotify is using a third-party system for age verification called Yoti. In 2023, when Utah started requiring age verification to access porn sites, porn site xHamster implemented Yoti, which involved a multi-step process including facial analysis or uploading a photo of a government-issued ID. 

The Online Safety Act went into effect last week. Much like the many laws in U.S. states that keep users from accessing porn unless they upload an ID or pass biometric face scanning, the law requires sites operating in the UK to implement age verification or face fines in the millions of dollars, or of up to 10 percent of global revenues, whichever is higher, as well as possible jail time.

After publication of this story on Wednesday morning, a Spotify spokesperson emailed me to claim that the headline is not accurate.

"We are not forcing users to go through our age assurance checks, these are voluntary. There are multiple ways that users can go through our age assurance checks (e.g. ID verification) - not just ‘face scanning.' These checks are not in order to access explicit content, they are to access music videos that are labelled 18+," they wrote. All of this was already in the article as it first appeared when published Wednesday morning, cited directly from Spotify's own site.

"Will you please update your headline to reflect this?," the spokesperson said. "So to actually be accurate, your headline could read: 'Spotify is Offering Users Age Assurance Technology to Access Music Videos Labelled 18+'"

So far, the UK law has resulted in people having to verify their ages to visit subreddits that post news about war, certain Discord communities, certain Bluesky content, and more. The UK’s Reform party is already vowing to repeal it, calling it “borderline dystopian.”

Also last week, 404 Media broke the news that in the process of collecting selfies to attempt to check users’ gender, women’s dating safety app Tea exposed the personal information, including private messages and IDs, of thousands of users. Critics of age verification laws say they only create more censorship for adults, while children and everyone else get around the checks by using VPNs or visiting less safe, noncompliant sites. 

Updated 7/20/2025 at 12:32 p.m. EST with comment from Spotify, and to clarify in the first sentence that the verification is for "certain age restricted content."




Journalist Discovers Google Vulnerability That Allowed People to Disappear Specific Pages From Search


By accident, journalist Jack Poulson discovered Google had completely de-listed two of his articles from its search results. “We only found it by complete coincidence,” Poulson told 404 Media. “I happened to be Googling for one of the articles, and even when I typed in the exact title in quotes it wouldn’t show up in search results anymore.”

Poulson had stumbled on a vulnerability in Google’s search engine that allowed people to maliciously delete links off of Google, which is a reputation management company’s dream and which could easily be used to suppress information. The SEO trick had allowed someone to de-list specific web pages from the search engine using Google’s Refresh Outdated Content tool, a tool that lets users submit URLs to be recrawled and re-listed after an update. The vulnerability had to do with capitalizing different letters in the URL submitted to this tool, which ultimately caused the delisting.

In 2023, Poulson published an article about tech CEO Delwin Maurice Blackman’s 2021 arrest on a felony domestic violence charge.

Since Poulson published Blackman’s arrest records in 2023, the CEO has attempted to suppress the story in various ways, including lawsuits and DMCA takedown requests. Eventually, the stories disappeared from Google search thanks to this vulnerability. As far as Poulson could tell, the only two articles on his newsletter that had been de-listed by Google using the trick were related to the CEO.

Google confirmed the problem in an email to 404 Media. “This tool helps ensure our search results are up to date. We’re vigilant in monitoring abuse, and we worked quickly to roll out a fix for this specific issue, which was only impacting a tiny fraction of web pages.”

Poulson’s work has appeared at the Center for Investigative Journalism, Drop Site News, and his personal newsletter, All-Source Intelligence.

The Freedom of the Press Foundation—a nonprofit dedicated to protecting the rights of journalists—has chronicled Poulson’s fight against censorship and Ahmed Zidan, its Deputy Director of Audience, told 404 Media that an article about that fight had also been de-listed from Google.

When Poulson noticed this, he alerted Zidan, who did some digging and figured out the problem. Website owners can use Google Search Console (GSC) to tweak and optimize their site’s place in the search engine. Digging around in GSC, Zidan discovered someone had made repeated requests, starting in May and ending in June, to recrawl the foundation’s article about Poulson and Blackman.

In every instance, the capitalization of letters in the URL had been changed. “So the first request that comes in, the ‘a’ of anatomy is a capital ‘A’ and the rest of the slug is the same. And apparently after this request would expire, the attackers would make another request, this time capitalizing the ‘n’ in anatomy,” Zidan said. When Google tries to index the URLs with tweaked capitalization, it gets a 404. “Then, Google, instead of indexing only the 404 page, would de-index all the variations including the live, valid, legitimate article,” Zidan said.
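
To make the pattern Zidan describes concrete, here is a minimal, hypothetical Python sketch of how someone could enumerate single-letter capitalization variants of a URL slug to feed into the Refresh Outdated Content tool. The URL and function name below are illustrative assumptions, not the actual requests or any Google code, and this is a sketch of the described attack pattern rather than a tested exploit.

    # Hypothetical sketch: enumerate single-letter capitalization variants of a
    # URL slug, mirroring the requests Zidan describes ("Anatomy", then "aNatomy",
    # and so on). On case-sensitive servers each variant returns a 404; per the
    # bug described above, Google reportedly de-indexed the legitimate lowercase
    # URL along with those variants.
    from urllib.parse import urlsplit, urlunsplit

    def slug_case_variants(url: str):
        """Yield copies of `url` with exactly one letter of the final path
        segment (the slug) switched from lowercase to uppercase."""
        parts = urlsplit(url)
        segments = parts.path.rstrip("/").split("/")
        slug = segments[-1]
        for i, ch in enumerate(slug):
            if ch.isalpha() and ch.islower():
                variant_slug = slug[:i] + ch.upper() + slug[i + 1:]
                new_path = "/".join(segments[:-1] + [variant_slug])
                yield urlunsplit(
                    (parts.scheme, parts.netloc, new_path, parts.query, parts.fragment)
                )

    # Example with a made-up slug (not the real article URL):
    for variant in slug_case_variants("https://example.org/posts/anatomy-of-a-takedown"):
        print(variant)

Each generated URL differs from the real one only in the case of a single letter, which, per Zidan’s account, was enough for the tool to treat the page as outdated and drop the legitimate article along with the 404ing variants.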

Image via Ahmed Zidan.

Zidan contacted Google who confirmed the bug to him. The company wouldn’t tell him or 404 Media how many pages had been affected or give further details about the incident. “We would really love Google and other social platforms to be more transparent with advocacy and press freedom organizations,” Zidan said.

It’s hard to know who, exactly, made the re-indexing requests. Anyone can use the Refresh Outdated Content tool, and it doesn’t tag the person who made a request in GSC. But the only articles on Poulson’s newsletter affected by the issue were the two related to Blackman, and the one on the Freedom of the Press Foundation site was about Blackman’s fight with Poulson.

It is easy to imagine a celebrity, high-profile politician, or even a government using this bug to suppress negative information about themselves in a targeted way. Reputation management companies exist to help the rich and powerful do just that.

Discoverability is vital to a journalist and getting de-listed from Google search can crush a story’s impact. “It’s basically just silent censorship and who knows if there’s some other variant of this that exists…any child could do this. And it’s just shocking to me that a company as technical as Google would have such a simple bug,” Poulson said. “If your article doesn’t appear in Google search results, in many ways it just doesn’t exist.”


