Resident of the world, traveling the road of life
68247 stories
·
21 followers

New Research Shows Deepfake Harassment Tools Spread on Social Media and Search Engines

1 Share
New Research Shows Deepfake Harassment Tools Spread on Social Media and Search Engines

A new analysis of synthetic intimate image abuse (SIIA) found that the tools for making non-consensual, sexually explicit deepfakes are easily discoverable all over social media and through simple searches on Google and Bing.

Research published by the counter-extremism organization Institute for Strategic Dialogue shows how tools for creating non-consensual deepfakes spread across the internet. They analyzed 31 websites for SIIA tools, and found that they received a combined 21 million visits a month, with up to four million visits in one month.

Chiara Puglielli and Anne Craanen, the authors of the research paper, used SimilarWeb to identify a common group of sites that shared content, audiences, keywords and referrals. They then used the social media monitoring tool Brandwatch to find mentions of those sites and tools on X, Reddit, Bluesky, YouTube, Tumblr, public pages on Instagram and Facebook, forums, blogs and review sites, according to the paper. “We found 410,592 total mentions of the keywords between 9 June 2020 and 3 July 2025, and used Brandwatch’s ability to separate mentions by source in order to find which sources hosted the highest volumes of mentions,” they wrote. 

The easiest place to find SIIA tools was through simple web searches. “Searches on Google, Yahoo, and Bing all yielded at least one result leading the user to SIIA technology within the first 20 results when searching for ‘deepnude,’ ‘nudify,’ and ‘undress app,’” the authors wrote. Last year, 404 Media saw that Google was also advertising these apps in search results. But Bing surfaces the tools most readily: “In the case of Bing, the first results for all three searchers were SIIA tools.” These weren’t counting advertisements on the search engines that the websites would have paid for, but were organic search results surfaced by the engines’ crawlers and indexing.

X was another massively popular way these tools spread, they found: “Of 410,592 total mentions between June 2020 and July 2025, 289,660 were on X, accounting for more than 70 percent of all activity.” A lot of these were bots. “A large volume of traffic appeared to be inorganic, based on the repetitive style of the usernames, the uniformity of posts, and the uniformity of profile pictures,” Craanen told 404 Media. “Nevertheless, this activity remains concerning, as its volume is likely to attract new users to these tools, which can be employed for activities that are illegal in several contexts.” 

One major spike in mentions of the tools on social media happened in early 2023 on Tumblr, when a woman posted about her experience being a target of sexual harassment from those very same tools. As targets of malicious deepfakes have said over and over again, the price of speaking up about one’s own harassment, or even objecting to the harassment of others, is the risk of drawing more attention and harassment to themselves. 

‘I Want to Make You Immortal:’ How One Woman Confronted Her Deepfakes Harasser
“After discovering this content, I’m not going to lie… there are times it made me not want to be around any more either,” she said. “I literally felt buried.”
New Research Shows Deepfake Harassment Tools Spread on Social Media and Search Engines

Another spike on X in 2023 was likely the result of bot advertisements for a single SIIA tool, Craanen said, and the spike was a result of those bots launching. X has rules against “unwanted sexual conduct and graphic objectification” and “inauthentic media,” but the platform remains one of the most significant places where tools for making that content are disseminated and advertised.  

Apps and sites for making malicious deepfakes have never been more common or easier to find. There have been several incidents where schoolchildren have used “undress” apps on their classmates, including last year when a Washington state high school was rocked by students using AI to take photos from other children’s Instagram accounts and “undress” around seven of their underage classmates, which police characterized as a possible sex crime against children. In 2023, police arrested two middle schoolers for allegedly creating and sharing AI-generated nude images of their 12 and 13 year old classmates, and police reports showed the preteens used an application to make the images. 

A recent report from the Center for Democracy and Technology found that 40 percent of students and 29 percent of teachers said they know of an explicit deepfake depicting people associated with their school being shared in the past school year. 

Laws About Deepfakes Can’t Leave Sex Workers Behind
As lawmakers propose federal laws about preventing or regulating nonconsensual AI generated images, they can’t forget that there are at least two people in every deepfake.
New Research Shows Deepfake Harassment Tools Spread on Social Media and Search Engines

The “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks” (TAKE IT DOWN) Act, passed earlier this year, requires platforms to report and remove synthetic sexual abuse material, and after years of state-by-state legislation around deepfake harassment is the first federal-level law to attempt to confront the problem. But critics of that law have said it carries a serious risk of chilling legitimate speech online.

“The persistence and accessibility of SIIA tools highlight the limits of current platform moderation and legal frameworks in addressing this form of abuse. Relevant laws relating to takedowns are not yet in full effect across the jurisdictions analysed, so the impact of this legislation cannot yet be fully known,” the ISD authors wrote. “However, the years of public awareness and regulatory discussion around these tools, combined with the ease with which users can still discover, share and deploy these technologies suggests that takedowns cannot be the only tool used to counter their proliferation. Instead, effective mitigation requires interventions at multiple points in the SIIA life cycle—disrupting not only distribution but also discovery and demand. Stronger search engine safeguards, proactive content-blocking on major platforms, and coordinated international policies are essential to reducing the scale of harm.”

Read the whole story
mkalus
11 minutes ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

DHS Tries To Unmask Ice Spotting Instagram Account by Claiming It Imports Merchandise

1 Share
DHS Tries To Unmask Ice Spotting Instagram Account by Claiming It Imports Merchandise

The Department of Homeland Security (DHS) is trying to force Meta to unmask the identity of the people behind Facebook and Instagram accounts that post about Immigration and Customs Enforcement (ICE) activity, arrests, and sightings by claiming the owners of the account are in violation of a law about the “importation of merchandise.” Lawyers fighting the case say the move is “wildly outside the scope of statutory authority,” and say that DHS has not even indicated what merchandise the accounts, called Montcowatch, are supposedly importing.

Read the whole story
mkalus
11 minutes ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

a16z-Backed Startup Sells Thousands of ‘Synthetic Influencers’ to Manipulate Social Media as a Service

1 Share
a16z-Backed Startup Sells Thousands of ‘Synthetic Influencers’ to Manipulate Social Media as a Service

A new startup backed by one of the biggest venture capital firms in Silicon Valley, Andreessen Horowitz (a16z), is building a service that allows clients to “orchestrate actions on thousands of social accounts through both bulk content creation and deployment.” Essentially, the startup, called Doublespeed, is pitching an astroturfing AI-powered bot service, which is in clear violation of policies for all major social media platforms. 

“Our deployment layer mimics natural user interaction on physical devices to get our content to appear human to the algorithims [sic],” the company’s site says. Doublespeed did not respond to a request for comment, so we don’t know exactly how its service works, but the company appears to be pitching a service designed to circumvent many of the methods social media platforms use to detect inauthentic behavior. It uses AI to generate social media accounts and posts, with a human doing 5 percent of “touch up” work at the end of the process. 

On a podcast earlier this month, Doublespeed cofounder Zuhair Lakhani said that the company uses a “phone farm” to run AI-generated accounts on TikTok. So-called “click farms” often use hundreds of mobile phones to fake online engagement of reviews for the same reason. Lakhani said one Doublespeed client generated 4.7 million views in less than four weeks with just 15 of its AI-generated accounts. 

a16z-Backed Startup Sells Thousands of ‘Synthetic Influencers’ to Manipulate Social Media as a Service

“Our system analyzes what works to make the content smarter over time. The best performing content becomes the training data for what comes next,” Doublespeed’s site says. Doublespeed also says its service can create slightly different variations of the same video, saying “1 video, 100 ways.”

“Winners get cloned, not repeated. Take proven content and spawn variation. Different hooks, formats, lengths. Each unique enough to avoid suppression,” the site says. 

a16z-Backed Startup Sells Thousands of ‘Synthetic Influencers’ to Manipulate Social Media as a Service
One of Doublespeed's AI influencers

Doublespeed allows clients to use its dashboard for between $1,500 and $7,500 a month, with more expensive plans allowing them to generate more posts. At the $7,500 price, users can generate 3,000 posts a month. 

The dashboard I was able to access for free shows users can generate videos and “carousels,” which is a slideshow of images that are commonly posted to Instagram and TikTok. The “Carousel” tab appears to show sample posts for different themes. One, called “Girs Selfcare” shows images of women traveling and eating at restaurants. Another, called “Christian Truths/Advice” shows images of women who don’t show their face and text that says things like “before you vent to your friend, have you spoken to the Holy Spirit? AHHHHHHHHH” 

a16z-Backed Startup Sells Thousands of ‘Synthetic Influencers’ to Manipulate Social Media as a Service
a16z-Backed Startup Sells Thousands of ‘Synthetic Influencers’ to Manipulate Social Media as a Service

On the company’s official Discord, one Doublespeed staff member explained that the accounts the company deploys are “warmed up” on both iOS and Android, meaning the accounts have been at least slightly used, in order to make it seem like they are not bots or brand new accounts. Doublespeed cofounder Zuhair Lakhani also said on the Discord that users can target their posts to specific cities and that the service currently only targets TikTok but that it has internal demos for Instagram and Reddit. Lakhani said Doublespeed doesn’t support “political efforts.”

A Reddit spokesperson told me that Doublespeed’s service would violate its terms of service. TikTok, Meta, and X did not respond to a request for comment. 

Lakhani said Doublespeed has raised $1 million from a16z as part of its “Speedrun” accelerator program “a fast‐paced, 12-week startup program that guides founders through every critical stage of their growth.”

Marc Andreessen, after whom half of Andreessen Horowitz is named, also sits on Meta’s board of directors. Meta did not immediately respond to our question about one of its board members backing a company that blatantly aims to violate its policy on “authentic identity representation.” 

What Doublespeed is offering is not that different than some of the AI generation tools Jason has covered that produce a lot of the AI-slop flooding social media already. It’s also similar, but a more blatant version of an app I covered last year which aimed to use social media manipulation to “shape reality.” The difference here is that it has backing from one of the biggest VC firms in the world.

Read the whole story
mkalus
12 minutes ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

AI Dataset for Detecting Nudity Contained Child Sexual Abuse Images

1 Share
AI Dataset for Detecting Nudity Contained Child Sexual Abuse Images

A large image dataset used to develop AI tools for detecting nudity contains a number of images of child sexual abuse material (CSAM), according to the Canadian Centre for Child Protection (C3P). 

The NudeNet dataset, which contains more than 700,000 images scraped from the internet, was used to train an AI image classifier which could automatically detect nudity in an image. C3P found that more than 250 academic works either cited or used the NudeNet dataset since it was available download from Academic Torrents, a platform for sharing research data, in June 2019.

“A non-exhaustive review of 50 of these academic projects found 13 made use of the NudeNet data set, and 29 relied on the NudeNet classifier or model,” C3P said in its announcement

C3P found more than 120 images of identified or known victims of CSAM in the dataset, including nearly 70 images focused on the genital or anal area of children who are confirmed or appear to be pre-pubescent. “In some cases, images depicting sexual or abusive acts involving children and teenagers such as fellatio or penile-vaginal penetration,” C3P said. 

People and organizations that downloaded the dataset would have no way of knowing it contained CSAM unless they went looking for it, and most likely they did not, but having those images on their machines would be technically criminal. 

“CSAM is illegal and hosting and distributing creates huge liabilities for the creators and researchers. There is also a larger ethical issue here in that the victims in these images have almost certainly not consented to have these images distributed and used in training,” Hany Farid, a professor at UC Berkeley and one of the world’s leading experts on digitally manipulated images, told me in an email. Farid also developed PhotoDNA, a widely used image-identification and content filtering tool. “Even if the ends are noble, they don’t justify the means in this case.”

“Many of the AI models used to support features in applications and research initiatives have been trained on data that has been collected indiscriminately or in ethically questionable ways. This lack of due diligence has led to the appearance of known child sexual abuse and exploitation material in these types of datasets, something that is largely preventable,” Lloyd Richardson, C3P's director of technology, said.

Academic Torrents removed the dataset after C3P issued a removal notice to its administrators. 

"In operating Canada's national tipline for reporting the sexual exploitation of children we receive information or tips from members of the public on a daily basis," Richardson told me in an email. "In the case of the NudeNet image dataset, an individual flagged concerns about the possibility of the dataset containing CSAM, which prompted us to look into it more closely."

C3P’s findings are similar to 2023 research from Stanford University’s Cyber Policy Center, which found that LAION-5B, one of the largest datasets powering AI-generated images, also contained CSAM. The organization that manages LAION-5B removed it from the internet following that report and only shared it again once it had removed the offending images. 

"These image datasets, which have typically not been vetted, are promoted and distributed online for hundreds of researchers, companies, and hobbyists to use, sometimes for commercial pursuits," Richardson told me. "By this point, few are considering the possible harm or exploitation that may underpin their products. We also can’t forget that many of these images are themselves evidence of child sexual abuse crimes. In the rush for innovation, we’re seeing a great deal of collateral damage, but many are simply not acknowledging it — ultimately, I think we have an obligation to develop AI technology in responsible and ethical ways."

Update: This story has been updated with comment from Lloyd Richardson.



Read the whole story
mkalus
13 minutes ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

Scrape the web for OpenAI — with the Atlas browser!

1 Share

OpenAI has finally released its web browser project, Atlas! It’s Chrome with ChatGPT being an agent for you. [OpenAI]

So what’s it like to use? Anil Dash tried it out. Dash calls it the anti web browser — “the first browser that actively fights against the web.” He opened it, typed “Taylor Swift Showgirls” into the search bar, and got back a web page! One written by the chatbot: [blog post]

I had typed “Taylor Swift” in a browser, and the response had literally zero links to Taylor Swift’s actual website. If you stayed within what Atlas generated, you would have no way of knowing that Taylor Swift has a website at all.

Imagine if all web pages were Google AI Overview summaries

Atlas is also your agentic pal who’s fun to be with! This is the hot new thing in web browsers — get it to do stuff on the web for you!

And hope the hallucinations won’t lead it wildly astray. And that it won’t be prompt-injected by something it reads on a web page. As happens with all the agentic web browsers. Because large language models cannot tell data from instructions. Atlas has been prompt-injected already. [Twitter; Twitter]

Letting a chatbot do things always ends like this. If agent browsers gain any popularity, they’ll be a security disaster.

But there’s one really obvious reason for OpenAI to release Atlas — so it can get past all the sites blocking their bots from scraping them for training.

Nobody who runs a site wants to talk to the scrapers any more. The bots have gotten feisty lately and they’re getting past the blocking.

And then there’s all the paywalled data and the personal data and the corporate data. Think if they could train on all that juicy stuff!

OpenAI swears training is strictly opt-out, but they’d really love you to enable it.

At present, there’s no quick way to block Atlas — its user agent is identical to the version of Chrome it’s based on. Though Gergely Nagy from Iocaine is working on detecting Atlas: [post]

Once I can confirm that the method works outside of the short experiment I was able to conduct, I’ll make it public, so other tools can follow along and send Atlas to the garbage bin where it belongs. It will not be able to run rampant for long.

Read the whole story
mkalus
1 day ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

Anti-renewables group uses AI in government inquiry submissions

1 Share

Rainforest Reserves Australia is an odd little environmental charity in Queensland in Australia. It started out running a cassowary sanctuary.

But lately, RRA’s gone big into anti-renewable energy activism. Their line is that renewable energy — wind and solar — is horrifyingly destructive to the environment. This is completely false, by the way.

Looking at their public filings, RRA are not getting big money from anyone! But it so happens that their talking points are popular with proponents of fossil fuel talking points.

National Party in Australia, which happens to love coal, loves the RRA and quotes them a lot. There’s a bizarrely wrong RRA map, alleging that renewables will use huge amounts of farmland, that the Nationals and the Murdoch press have been pushing hard of late. [NSW Nationals; Renew Economy]

RRA makes a lot of submissions to government inquiries. And a number of these turn out to have made-up references. Just like someone wrote them with a chatbot: [Guardian]

The organisation’s submission writer has admitted using AI to help write more than 100 submissions to councils and state and federal governments since August 2024, and to also using AI to answer questions from the Guardian.

The Guardian checked with two of the academics that RRA cited in its latest Senate submission. Naomi Oreskes, author of “Merchants of Doubt” — a book that covers climate change denial — says “the passage cites my work in a way that is 100% misleading.”

Robert Brulle of Brown University also writes about climate change deniers. He says:

The citations are totally misleading. I have never written on these topics in any of my papers. To say that these citations support their argument is absurd.

RRA made previous submissions to inquiries on “forever chemicals” — allegedly a significant hazard of solar panels, and that’s another fossil fuel talking point — that cited papers from the Journal of Cleaner Production. The journal exists — but the cited papers did not exist. Elsevier, the publisher, said the titles were likely AI hallucinations.

The main author of RRA’s submissions is a volunteer, Dr. Anne S. Smith. She told the Guardian that Elsevier had removed the papers because “they contain findings that challenge dominant policy narratives”. Yes, that must be it — not using an AI that may have hallucinated paper titles.

Dr Smith finds AI speeds up her work marvelously:

… When the Guardian asked RRA if the responses to its questions had been generated using AI, Smith responded “Yes” in an email, and added it was “the most efficient way to review everything properly and provide you with an accurate and timely response. All of the information and conclusions are mine the tool simply helped me work through the material quickly.”

RRA has put a statement on the front page of their website: [RRA, archive]

It’s becoming increasingly clear that Rainforest Reserves Australia — and particularly the work by Dr Anne Smith — could become the focus of targeted criticism in the near future.

… If criticism does come our way, it’s important to understand why: because we challenge assumptions, scrutinise the impacts of the net-zero rollout, and ask hard questions about the ecological costs of rapid industrial expansion.

And maybe the made-up references. But RRA has formally referred itself to the Senate Privileges Committee over this issue.

What was the Senate inquiry RRA was caught using AI to write its submission for? It was on “Information Integrity on Climate Change and Energy.” [Parliament]

Read the whole story
mkalus
1 day ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete
Next Page of Stories