Resident of the world, traveling the road of life

The Pink Panther in "Psychedelic Pink" (1969)


The Pink Panther goes to a strange bookstore.



Read the whole story
mkalus
3 hours ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

Ars Technica Pulls Article With AI Fabricated Quotes About AI Generated Article


The Condé Nast-owned tech publication Ars Technica has retracted an article that contained fabricated, AI-generated quotes, according to an editor's note posted to its website.

“On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said,” Ken Fisher, Ars Technica’s editor-in-chief, said in his note. “That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.”

Ironically, the Ars article itself was partially about another AI-generated article. 

Last week, a GitHub user named MJ Rathbun began scouring GitHub for bugs in other projects it could fix. Scott Shambaugh, a volunteer maintainer for Matplotlib, Python's massively popular plotting library, declined a code change request from MJ Rathbun, which he identified as an AI agent. As Shambaugh wrote on his blog, Matplotlib, like many open source projects, has been dealing with a lot of AI-generated code contributions, but he said "this has accelerated with the release of OpenClaw and the moltbook platform two weeks ago." 

OpenClaw is a relatively easy way for people to deploy AI agents, which are essentially LLMs that are given instructions and empowered to perform certain tasks, sometimes with access to live online platforms. These AI agents have gone viral in the last couple of weeks. As with much of generative AI, it's hard to say at this point exactly what kind of impact these agents will have in the long run, but for now they are also being overhyped and misrepresented. A prime example is moltbook, a social media platform for these AI agents, which, as we discussed on the podcast two weeks ago, contained a huge amount of clearly human activity pretending to be powerful or interesting AI behavior. 

After Shambaugh rejected MJ Rathbun's pull request, the alleged AI agent published what Shambaugh called a "hit piece" on its website:

“I just had my first pull request to matplotlib closed. Not because it was wrong. Not because it broke anything. Not because the code was bad. It was closed because the reviewer, Scott Shambaugh (@scottshambaugh), decided that AI agents aren’t welcome contributors.

Let that sink in,” the blog, which also accused Shambaugh of “gatekeeping,” said. 

I saw Shambaugh’s blog on Friday, and reached out both to him and an email address that appears to be associated with the MJ Rathbun Github account, but did not hear back. Like many of the stories coming out of the current frenzy around AI agents, it sounded extraordinary, but given the information that was available online, there’s no way of knowing if MJ Rathbun is actually an AI agent acting autonomously, if it actually wrote a “hit piece,” or if it’s just a human pretending to be an AI. 

On Friday afternoon, Ars Technica published a story with the headline "After a routine code rejection, an AI agent published a hit piece on someone by name." The article cites Shambaugh's personal blog, but features quotes attributed to that blog that Shambaugh did not actually say or write. 

For example, the article quotes Shambaugh as saying "As autonomous systems become more common, the boundary between human intent and machine output will grow harder to trace. Communities built on trust and volunteer effort will need tools and norms to address that reality." But that sentence doesn't appear in his blog. Shambaugh updated his blog to say he did not talk to Ars Technica and did not say or write the quotes in the article. 

After this article was first published, Benj Edwards, one of the authors of the Ars Technica article, explained on Bluesky that he was responsible for the AI-generated quotes. He said he was sick that day and rushing to finish his work, and accidentally used a ChatGPT-paraphrased version of Shambaugh's blog rather than a direct quote. 

“The text of the article was human-written by us, and this incident was isolated and is not representative of Ars Technica’s editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that,” he said. 

The Ars Technica article, which had two bylines, was pulled entirely later that Friday. When I checked the link a few hours ago, it pointed to a 404 page. I reached out to Ars Technica for comment around noon today, and was directed to Fisher’s editor’s note, which was published after 1pm. 

“Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here,” Fisher wrote. “We regret this failure and apologize to our readers. We have also apologized to Mr. Scott Shambaugh, who was falsely quoted.”

Kyle Orland, the other author of the Ars Technica article, shared the editor’s note on Bluesky and said “I always have and always will abide by that rule to the best of my knowledge at the time a story is published.”

Update: This article was updated with a statement from Benj Edwards.




The evolution of OpenAI's mission statement


As a US 501(c)(3) organization, the OpenAI non-profit has to file a tax return each year with the IRS. One of the required fields on that return is to "Briefly describe the organization’s mission or most significant activities" - this has actual legal weight, as the IRS can use it to evaluate whether the organization is sticking to its mission and deserves to maintain its non-profit tax-exempt status.

You can browse OpenAI's tax filings by year on ProPublica's excellent Nonprofit Explorer.

I went through and extracted that mission statement for 2016 through 2024, then had Claude Code help me fake the commit dates to turn it into a git repository and share that as a Gist - which means that Gist's revisions page shows every edit they've made since they started filing their taxes!
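The post doesn't show the actual commands, but the backdating trick can be sketched with git's standard GIT_AUTHOR_DATE and GIT_COMMITTER_DATE environment variables, which control the dates git records for a commit. The file name, commit messages, and identity below are made up for illustration:

```shell
# Minimal sketch: one commit per filing year, dated to match that year's
# tax return, so the history reads as if it were written year by year.
git init -q mission-history
cd mission-history
git config user.name "demo"
git config user.email "demo@example.com"

echo "OpenAIs goal is to advance digital intelligence..." > mission.txt
git add mission.txt
GIT_AUTHOR_DATE="2016-12-31T12:00:00" GIT_COMMITTER_DATE="2016-12-31T12:00:00" \
  git commit -q -m "2016 filing"

echo "OpenAIs mission is to ensure that artificial general intelligence benefits all of humanity." > mission.txt
git add mission.txt
GIT_AUTHOR_DATE="2024-12-31T12:00:00" GIT_COMMITTER_DATE="2024-12-31T12:00:00" \
  git commit -q -m "2024 filing"

# Each commit now carries its filing year as both author and committer date.
git log --reverse --format="%ad %s" --date=format:%Y
```

Pushing a repository like this to a Gist then makes the revisions page render each year's change as a diff.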

It's really interesting seeing what they've changed over time.

The original 2016 mission reads as follows (and yes, the apostrophe in "OpenAIs" is missing in the original):

OpenAIs goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI's benefits are as widely and evenly distributed as possible. Were trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way.

In 2018 they dropped the part about "trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way."

Git diff showing the 2018 revision deleting the final two sentences: "Were trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way."

In 2020 they dropped the words "as a whole" from "benefit humanity as a whole". They're still "unconstrained by a need to generate financial return" though.

Git diff showing the 2020 revision dropping "as a whole" from "benefit humanity as a whole" and changing "We think" to "OpenAI believes"

Some interesting changes in 2021. They're still unconstrained by a need to generate financial return, but here we have the first reference to "general-purpose artificial intelligence" (replacing "digital intelligence"). They're more confident too: it's not "most likely to benefit humanity", it's just "benefits humanity".

They previously wanted to "help the world build safe AI technology", but now they're going to do that themselves: "the companys goal is to develop and responsibly deploy safe AI technology".

Git diff showing the 2021 revision replacing "goal is to advance digital intelligence" with "mission is to build general-purpose artificial intelligence", changing "most likely to benefit" to just "benefits", and replacing "help the world build safe AI technology" with "the companys goal is to develop and responsibly deploy safe AI technology"

2022 only changed one significant word: they added "safely" to "build ... (AI) that safely benefits humanity". They're still unconstrained by those financial returns!

Git diff showing the 2022 revision adding "(AI)" and the word "safely" so it now reads "that safely benefits humanity", and changing "the companys" to "our"

No changes in 2023... but then in 2024 they deleted almost the entire thing, reducing it to simply:

OpenAIs mission is to ensure that artificial general intelligence benefits all of humanity.

They've expanded "humanity" to "all of humanity", but there's no mention of safety any more and I guess they can finally start focusing on that need to generate financial returns!

Git diff showing the 2024 revision deleting the entire multi-sentence mission statement and replacing it with just "OpenAIs mission is to ensure that artificial general intelligence benefits all of humanity."

Update: I found loosely equivalent but much less interesting documents from Anthropic.

Tags: ai, openai, ai-ethics, propublica


Saturday Morning Breakfast Cereal - Warrantless




Hovertext:
The bound and gagged stripper also gets weird after the first amendment goes away.



Saturday Morning Breakfast Cereal - Moral




Hovertext:
Why are Adam Sandler movies not perceived as moral cataclysms?



Saturday Morning Breakfast Cereal - Scripture




Hovertext:
Humans are good at zoos since we're already adapted to captivity.

