Meta’s director of AI alignment falls for OpenClaw


Summer Yue is the Director of AI Alignment at Meta. She came over when Meta bought 49% of Scale AI and brought over anyone at Scale worth hiring.

“AI alignment” is a great term to put in a title. It was invented by Eliezer Yudkowsky’s AI doomsday cranks. It means an actually-intelligent robot that’s sufficiently controlled that we can use it as our slave.

The term has been softened a bit to mean “AI that doesn’t screw up totally,” but the appeal of robot slaves is what “aligned AI” really means. We don’t have intelligent AI, but this is apparently job number one if we do get it. Anyway, building the robot slave is Yue’s job.

Yue has a years-long track record as a machine learning researcher. She knows her stuff — or she should know it. Specifically, she should know enough not to do what she claims she did Sunday night: [Twitter, thread, archive]

Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.

Yue posted screenshots too. The bot is deleting all her email before February 15th that isn’t in a “keep” list. She tells it to stop and it keeps going! “STOP, OPENCLAW!” Oh no!

What happened? The bot had an instruction not to do anything unless told to. But the chatbot’s context window got too big, so OpenClaw summarised the context window! And chatbots don’t actually summarise text — they shorten it. So that instruction got … shortened.
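That failure mode is easy to illustrate. The sketch below is a hypothetical toy, not OpenClaw's actual code: it stands in for the LLM summarisation step with a crude keep-the-last-N-messages rule, but the outcome is the same, because anything the summary fails to restate is simply gone from the context.

```python
def compact(messages, max_messages=3):
    """Naively 'summarise' a chat history by keeping only recent messages.

    Real agents use an LLM to write the summary, but the failure mode is
    identical: an instruction that isn't restated in the summary no longer
    exists as far as the model is concerned.
    """
    if len(messages) <= max_messages:
        return messages
    dropped = messages[:-max_messages]
    summary = f"[summary of {len(dropped)} earlier messages]"
    return [summary] + messages[-max_messages:]


history = [
    "user: confirm before acting",        # the safety instruction
    "user: organise my inbox",
    "agent: scanning 12,000 emails...",
    "agent: building a keep-list...",
    "agent: ready to clean up",
]

compacted = compact(history)
# The safety instruction no longer appears anywhere in the context:
assert not any("confirm before acting" in m for m in compacted)
```

Nothing here is malicious or even buggy in the usual sense; the agent faithfully acts on the context it has, which no longer contains the constraint.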

What really happened was that someone who is fully equipped to know better was surprised when her AI agent — a class of software that does not work reliably and cannot work reliably — messed up.

To be clear — this is all assuming this story is what it’s presented as. The total substance of this story is six tweets and three screenshots. Neither Yue nor Meta has answered any of the many press queries.

The story also matches a common pattern of AI promotion — where AI boosters talk about their bot going Sorcerer’s Apprentice and really screwing something up badly as if that’s an achievement. It’s how they say: my bot is so powerful, that next model bro, it’ll be awesome. This shows how much we need AI alignment!!

Yue doesn’t tweet much. She tweets every two or three months, and they’re very corporate sorts of tweets. Her last tweet was in October. Suddenly there are six tweets on this single alleged personal incident.

It’s worth asking if this … happened. Or, if something like it did happen, how involved Meta’s marketing department was in this public tweet and its followups.

This is not a misfortune befalling some random person — this is the director of AI alignment at Meta.

I’m not the only one to wonder about this. PCGamer also suggests: “Of course, there’s always the possibility none of this is real at all.” [PCGamer]

But against that, we have an extensive list of previously smart people who use the chatbot once and it blows their tiny minds, and they start saying it’s good, AI is fine, you can uh run it locally, all you AI haters are purity culture shills for Big Not-Dumbass. Some of them start talking about their coding agent like it’s their girlfriend. Who they completely control.

So it’s not clear that Summer Yue’s inbox was in fact eaten by a vibe-coded pile of trash. But it’s stupid enough to be entirely plausible. Because the chatbot keeps rotting brains, and particularly brains that work in AI.


It’s pledge week at Pivot to AI! If you enjoyed this post, and our other posts, please do put $5 into the Patreon. It helps us keep Pivot coming out daily. Thank you all.


Saturday Morning Breakfast Cereal - Cow




Hovertext:
This is when she calls the cops.



Amazon Change Means Wishlists Might Expose Your Address


Amazon is telling people who use its wishlists feature to switch to post office boxes or non-residential delivery addresses if they want to ensure their home addresses remain private, as part of a change in how it processes gifts bought from third-party sellers. The change is especially concerning to many sex workers, influencers and public figures who use Amazon wishlists to receive gifts from fans and clients. 

First spotted by adult content creators raising the alarm on social media, the changes open anyone who uses wishlists publicly to increased privacy risk unless they change how they receive packages.

In an email sent to list holders, Amazon said beginning March 25, it will reveal users’ shipping addresses to third-party sellers. The platform added that gift purchasers might end up seeing your address as part of this process, too. 

Before this change, the only information visible to sellers and gift purchasers was the recipient’s city and state.

“We're writing to inform you about an upcoming change to Amazon Lists. Starting March 25, 2026, we will remove the option to restrict purchases from third-party sellers for list items. When this change takes effect, gift purchasers will be able to purchase items sold by third-party sellers from your lists and your delivery address will be shared with the seller for fulfillment. This change will provide gift purchasers with access to a wider selection of items when shopping from your lists,” Amazon said in the email. “Important note: When gifts are purchased from your shared or public lists, Amazon needs to provide your shipping address to sellers and delivery partners to fulfill these orders. During the delivery process, your address may become visible to gift purchasers through delivery updates and tracking information. To help protect your privacy, we recommend using a PO Box or non-residential address for any list you share with public audiences.”

If you have public wishlists, you can manage individual list settings here and select "manage list." From there you can change your list privacy settings to private or shared to limit who has access, or remove your shipping address entirely by selecting "none" from the dropdown menu.

Most of the popular shipping services in the US, including UPS, FedEx, and the USPS, don’t show full addresses as part of package tracking. But if a third-party seller shares a gift recipient’s home address with a buyer as part of the tracking process, Amazon is saying that’s out of the platform’s control. And some of those delivery services send photos as part of the tracking process for proof of delivery, which could include more information about one’s home or location than they would want a gift sender to see.

“Those who do a range of work where privacy concerns are top of mind would be left to wonder what problem Amazon is solving with this change,” Krystal Davis, an adult content creator who posted about receiving the email from Amazon, told 404 Media. “Those who use these lists as an opportunity to allow fans to show support and offset expenses will lose that option. The alternatives to Amazon wishlist are significantly lacking.”

Many online sex workers use Amazon wishlists to receive gifts from subscribers and fans. It’s a practice that’s gone on for years. Revealing one’s full address to buyers — especially if they don’t realize this change has gone into effect, or missed the email sent by Amazon with the warning to switch to a P.O. box — puts their safety at serious risk. And like so many privacy and security issues that affect sex workers first, anyone could potentially be affected; lots of people use public wishlists who might want to keep their location private, and should consider checking their settings or switching to a non-residential address if they want to maintain that privacy.

[Screenshot via Amazon showing the "Manage List" page, with the option to share shipping address with sellers grayed out and a notice: "This setting will no longer be supported starting February 25, 2026. After this date, third-party sellers will receive your shipping address to fulfill orders. You can review or update your lists' shipping address on this page."]

Amazon provides conflicting information on when and how this change will go into effect. The email sent to wishlist holders says it will start on March 25, 2026, but as of writing, a notice on the “Manage List” settings page said starting February 25, third party sellers will see users’ shipping addresses. Amazon confirmed to 404 Media that the option to restrict purchases from third-party sellers for list items is being removed on March 25, one month from today.


Epstein Files: X Users Are Asking Grok to ‘Unblur’ Photos of Children


In the days after the US Department of Justice (DOJ) published 3.5 million pages of documents related to the late sex offender Jeffrey Epstein, multiple users on X have asked Grok to “unblur” or remove the black boxes covering the faces of children and women in images that were meant to protect their privacy. 

While some survivors of Epstein’s abuse have chosen to identify themselves, many more have never come forward. In a joint statement, 18 of the survivors condemned the release of the files, which they said exposed the names and identifying information of survivors “while the men who abused us remain hidden and protected”. 

After the latest release of documents on Jan. 30 under the Epstein Files Transparency Act, thousands of documents had to be taken down because of flawed redactions that lawyers for the victims said compromised the names and faces of nearly 100 survivors. 

But X users are trying to undo the redactions on even the images of people whose faces were correctly redacted. By searching for terms such as “unblur” and “epstein” with the “@grok” handle, Bellingcat found more than 20 different photos and one video that multiple users were trying to unredact using Grok. These included photos showing the visible bodies of children or young women, with their faces covered by black boxes. There may be other such requests on the platform that were not picked up in our searches.

Requests by X users for Grok to unblur and identify the images of children from the Epstein files, overlaid on an image of Epstein next to a young child in a pool. Source: X; collage by Bellingcat

The images appeared to show several children and women with Jeffrey Epstein as well as other high-profile figures implicated in the files, including the UK’s Prince Andrew, former US President Bill Clinton, Microsoft co-founder Bill Gates and director Brett Ratner, in various locations such as inside a plane and at a swimming pool.

From Jan. 30 to Feb. 5, we reviewed 31 separate requests from users for Grok to “unblur” or identify the women and children from these images. Grok noted in responses to questions or requests by some users that the faces of minors in the files were blurred to protect their privacy “as per standard practices in sensitive images from the Epstein files”, and said it could not unblur or identify them. However, it still generated images in response to 27 of the requests that we reviewed. 

We are not linking to these posts to prevent amplification.

The generations created by Grok ranged in quality from believable to comically bad, such as a baby’s face on a young girl’s body. Some of these posts have garnered millions of views on X, where users are monetarily incentivised to create high-engagement content.

Examples of posts by X users asking Grok to unredact images from the latest Epstein release, some with millions of views. Source: X

Of the four requests we found during this period that Grok did not generate images in response to, it did not respond to one request at all. In response to another request, Grok said deblurring or editing images was outside its abilities, and noted that photos from recent Epstein file releases were redacted for privacy. 

The other two requests appeared to have been made by non-premium users, with the chatbot responding: “Image generation and editing are currently limited to verified Premium subscribers”. X has limited some of Grok’s image generation capabilities to paid subscribers since January amid an ongoing controversy over users using the AI chatbot to digitally “undress” women and children. 

X did not respond to multiple requests for comment. 

However, shortly after we first reached out to X on Feb. 6, we noticed that more guardrails appeared to have been put in place. Out of 16 requests from users between Feb. 7 and Feb. 9, which we found using search terms similar to those used before, Grok did not attempt to unredact any of the images.

In most cases, Grok did not respond at all (14), while in two cases, Grok generated AI images that were completely different from the images uploaded in the user’s original request. 

When a user commented on one of these requests that Grok was no longer working, Grok responded: “I’m still operational! Regarding the request to unblur the face in that Epstein photo: It’s from recently released DOJ files where identities of minors are redacted for privacy. I can’t unblur or identify them, as it’s ethically and legally protected. For more, check official sources like the DOJ releases.”

As of publication, X had not responded to Bellingcat’s subsequent query about whether new guardrails had been put in place over the weekend.

Fabricated Images

This is not the first time AI has been used to fabricate images related to Epstein file releases. Some images that were shared on X, which appeared to show Epstein alongside famous figures such as US President Donald Trump and New York City mayor Zohran Mamdani as a child with his mother, were reportedly AI-generated. Some of the individuals shown in the false images, such as Trump, do appear in authentic photos, which can be viewed on the DOJ website.

Far left: AI-generated photo of Trump and Epstein with several children. Middle and far right: AI-generated photos of a young Mamdani and his mother, alongside Epstein, former US president Bill Clinton, Amazon CEO Jeff Bezos, Microsoft co-founder Bill Gates and Epstein associate Ghislaine Maxwell. Source: X. Annotations by Bellingcat

X users also previously used Grok to generate images in relation to recent killings in Minnesota by federal agents. 

For example, some users asked Grok to try to “unmask” the federal agent who killed Renee Good, resulting in a completely fabricated face of a man that did not look like the actual agent, Jonathan Ross, and a false accusation of a man who had nothing to do with the shooting.

Bellingcat’s Director of Research and Training @giancarlofiorella.bsky.social appeared on CTV yesterday to discuss the misleading AI-generated images that were used to falsely identify ICE agents and weapons at the centre of the two fatal shootings in Minneapolis youtu.be/mL7Fbp3UrSo?…


— Bellingcat (@bellingcat.com) 5 February 2026 at 09:36

After Alex Pretti was shot and killed by federal agents in Minneapolis, people used AI to edit video stills, resulting in AI images that showed a completely different gun than the one actually owned by Pretti. In another instance, an AI-edited image of Pretti’s shooting falsely depicted the intensive care unit nurse holding a gun instead of his sunglasses. 

Grok has also been at the centre of a controversy for generating sexually explicit content.

On Twitter/X, users have figured out prompts to get Grok (their built in AI) to generate images of women in bikinis, lingerie, and the like. What an absolute oversight, yet totally expected from a platform like Twitter/X. I’ve tried to blur a few examples of it below.


— Kolina Koltai (@koltai.bsky.social) 6 May 2025 at 03:20

Multiple countries including the UK and France have launched investigations into Elon Musk’s chatbot over reports of people using it to generate deepfake non-consensual sexual images, including child sexual abuse imagery. Malaysia and Indonesia have also blocked Grok over concerns about deepfake pornographic content. 

One analysis by the Center for Countering Digital Hate found that Grok had publicly generated around three million sexualised images, including 23,000 of children, in 11 days from Dec. 29, 2025 to Jan. 8 this year. X’s initial response, in January, was to limit some image generation and editing features to only paid subscribers. However, this has been widely criticised as inadequate, including by UK Prime Minister Keir Starmer, who said it “simply turns an AI feature that allows the creation of unlawful images into a premium service”. The social media platform has since announced new measures to block all users, including paid subscribers, from using Grok via X to edit images of real people in revealing clothing such as bikinis.


Bellingcat is a non-profit and the ability to carry out our work is dependent on the kind support of individual donors. If you would like to support our work, you can do so here. You can also subscribe to our Patreon channel here. Subscribe to our Newsletter and follow us on Bluesky here and Mastodon here.

The post Epstein Files: X Users Are Asking Grok to ‘Unblur’ Photos of Children appeared first on bellingcat.


FBI Got Grok to Hand Over Prompts Used to Create Nonconsensual Porn


This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.

The FBI got a search warrant for X to provide details on the Grok prompts a man allegedly used to create more than 200 nonconsensual sexual videos of a woman he knew in real life, according to court records.

The details of the investigation are contained in an FBI affidavit about the alleged actions of Simon Tuck, who is accused of extensively harassing and threatening the woman’s husband. Tuck regularly worked out with and texted with the woman and, according to the affidavit, secretly filmed her while she was working out in his garage. Over the course of the last several months, Tuck swatted their home, made a series of anonymous reports to the man’s employer claiming that he was a child abuser and a drug addict, posed as the man and made a series of mass shooting and suicide threats. Tuck also made a series of other threats and bizarre actions, which included reaching out to a funeral home to say that the man would be dead soon and sending threats to the man while posing as a member of Sector 16, a Russian hacking crew.

The affidavit notes that, in January, the FBI got a search warrant for the man’s conversations with Grok. The FBI says that it received “prompts provided to GrokAI that generated approximately 200 pornographic videos of a woman who closely resembled VICTIM’s wife’s physical appearance.”

“For example, in one prompt, TUCK queried: ‘In a sensual sports style, a confident blonde woman playfully undresses on a tennis court, starting with her white crop top pulled up to expose her bare breasts. She has long wavy hair, a toned athletic body, and a flirtatious smile, wearing a short navy pleated skirt and holding a racket. She slowly lowers her top, revealing full nudity, tosses her hair, and swings the racket teasingly, with a surprising clumsy spin like a comedic twirl,’” the affidavit says. 


The FBI says that Tuck also allegedly used Grok to create a complaint about the woman’s husband that was then filed to the company he works for. 

The actions described in the affidavit are extreme and horrifying, but are not terribly out of the ordinary for harassment cases that we have reported on before. What’s notable here is that this case shows that law enforcement is looking at chats with AI bots as potential sources of evidence and that X is complying with these requests.

Most importantly, it highlights X’s role in allowing Grok to create nonconsensual sexual material in a criminal case that involves extreme cyberstalking and real life harm. According to the affidavit, Tuck used Grok to create this nonconsensual sexual material at the same time that Grok was being heavily criticized for creating child sexual abuse material. This all happened during the “undress her” phenomenon, which showed just how terribly Grok’s content moderation is. Last week, we also reported that Grok was used to reveal the real name of an adult performer.

Correction: This piece originally said the FBI issued Grok with a subpoena. It was a search warrant.


What’s the Point of School When AI Can Do Your Homework?


There’s a new agentic AI called Einstein that will, according to its developers, live the life of a student for them. Einstein’s website claims that the AI will attend lectures for you, write your papers, and even log into EdTech platforms like Canvas to take tests and participate in discussions. 

Educators told me that Einstein is just one of many AI tools that can do homework for students, but that it should be seen as a warning to schools, which students increasingly treat as a place to gain a diploma and status rather than as valuable for the education itself.

If an AI can go to school for you what’s the point of going to school? For Advait Paliwal, Brown dropout and co-creator of Einstein, there isn’t one. “I think about horses,” he said. “They used to pull carriages, but when cars came around, I'd argue horses became a lot more free,” he said. “They can do whatever they want now. It would be weird if horses revolted and said ‘no, I want to pull carriages, this is my purpose in life.’”

But humans aren’t horses. “This is much bigger than Einstein,” Matthew Kirschenbaum told 404 Media. “Einstein is symptomatic. I doubt we’ll be talking about Einstein, as such, in a year. But it’s symptomatic of what’s about to descend on higher ed and secondary ed as well.”

Kirschenbaum teaches English at the University of Virginia and has written at length about artificial intelligence. He’s also a member of the Modern Language Association (MLA), where he serves as a member of its Task Force on AI Research and Teaching. Einstein isn’t the first agentic AI to do a student’s work for them; it’s just one that got attention online recently. Kirschenbaum and his fellow committee members flagged their concerns about these AIs in October 2025.

“Agentic browsers are becoming widely available to the public. These offer AI ‘agents’ that can navigate [learning management systems] and complete assignments without any student involvement,” the MLA’s statement from October said. “The recent and hasty integration of generative AI features into those systems is already redefining student and instructor relationships, evaluative standards, and instructional outcomes—with no compelling evidence that any of it is for the better.”

The statement called on educators, lawmakers, and learning management system providers like Canvas to cooperate in order to give academic institutions the ability to block AI agents like Einstein.

Canvas did not respond to a request for comment. 

Einstein is explicit in its pitch: it will log into Canvas (one of the most popular and ubiquitous pieces of education software) and do your classwork for you, just like Kirschenbaum and his fellows warned about last year.

The attractiveness of agentic AIs is a symptom of a decades-long trend in higher education.  “Universities…by and large adopted a transactive model of education,” Kirschenbaum said. “Students see their diploma as a credential. They pay tuition and at the end of four years, sometimes five years, they receive the credential and, in theory at least, that is then the springboard to economic stability and prosperity.”

Paliwal seems to agree. He told 404 Media that he attempted to change the university from the inside while working as a TA, but felt stymied by politics. “The only way to force these institutions to evolve is to bring reality to their face. And usually the loudest critics are the ones who can't do their own job well and live in fear of automation,” he said.

For Paliwal, agentic AIs are a method of freeing people from the labor of education. “I think we really need to question what learning even is and whether traditional educational institutions are actually helping or harming us,” he said. “We're seeing a rise in unemployment across degree holders because of AI, and that makes me question whether this is really what humans are born to do. We've been brainwashed as a society into valuing ourselves by the output of our productive work, and I think humanity is a lot more beautiful than that. Is it really education if we're just memorizing things to perform a task well?”

Kirschenbaum said that programs like Einstein are the inevitable conclusion of viewing higher education as a certification and transactive process. “What we’re finding is that if forms of education can be transacted then we’ve just about arrived at the point where autonomous software AI agents are capable of performing the transaction on your behalf,” he said. “And so the whole educational paradigm has come back to essentially bite itself in the ass.”

He said that one solution he’s seen work is to retreat from devices entirely in the classroom. “Colleagues who have done it report that students are almost universally grateful. They understand the reasoning. They understand the logic,” he said. “And they appreciate the opportunity to be freed from the phones and the screens and to focus and engage with other people in a meaningful dialogue.”

But the abandonment of EdTech platforms and screens won’t work for every student. Anna Mills, an English professor at the College of Marin and a colleague of Kirschenbaum’s on the MLA AI task force, compared the fight against agentic AI in education to cybersecurity. “We could decide that bots need to be labeled as bots and that we need to be able to distinguish human activity from AI activity online in some circumstances and that we want to build infrastructure for that,” she said. “That would be an ongoing project, as cybersecurity is.”

Mills is not a luddite. She’s an expert in artificial intelligence systems as well as English, frequently uses Claude, and has been documenting the rise of agentic AIs in EdTech on her YouTube channel for months. She said that using agentic AI like Einstein was cheating, full stop, and academic fraud. “This is in direct violation of these foundational agreements that we make in order to use technology for human communication, human exchange, and human work online,” she said. “And yet that’s not obvious to us. It seems like it’s just another tool, right? But it’s not.”

Mills said she understands Paliwal’s frustrations with education. “But what you need to understand is that online learning spaces are critical for students to access any kind of education,” she said. For her, the proliferation of tools like Einstein does more than help a student bypass the labor of the classroom. They poison the educational well. Online learning has been a boon to many kinds of non-traditional students, and the rise of agentic AI threatens that not just because it trivializes traditional forms of education, but because it hurts the credibility of EdTech itself and other online platforms.

The vast majority of college students aren’t attending Ivy League schools, they’re grinding away at night classes in community colleges across the country. Distance and online learning has been an enormous boon for those students. “If there’s no credibility to that, then you’ve just ruined the investment and the learning goals and the access to meaningful learning that they can then also use for employment of students who are underprivileged, who can’t come to the classroom, who are working full time and raising families and trying to get an education,” Mills said.

Students aren’t horses and there is no greater freedom they can buy themselves by using AI tools to cheat in the classroom. And worse, the more these tools proliferate, the more suspect the entire enterprise becomes. It’s one thing to cheat yourself out of an education, it’s quite another to muddy the waters of EdTech platforms and online learning for everyone else.
