CatGPT Goes Wrong

Simon’s perpetually hungry cat finds a few ways to destroy his human’s laptop when Simon has to leave the room during a conference call.

If you’ve ever tried to work with a cat in the room, this one’s for you. Working from home isn’t so peaceful with this keyboard warrior around!


(Direct link, via Laughing Squid)

AI works can’t be copyrighted or patented in the US

On Monday, the US Supreme Court declined an appeal against a decision that AI-produced art could not be copyrighted. The earlier decision stands. [Reuters]

This should be no surprise at all. This was a very weird and dumb copyright case which was always going to fail. The plaintiff even brought a similar AI patent case previously.

This dates back well before the current AI bubble. Dr Stephen Thaler has been trying to get copyright registrations and patents for his machine DABUS — Device for the Autonomous Bootstrapping of Unified Sentience. Thaler is convinced that years ago, he invented a machine that is actually a person. A creative one. Huge if true. [Imagination Engines]

DABUS has apparently produced inventions. Thaler isn’t content to file these in his name — he wants the machine to get the credit. So he filed patent applications with DABUS as the inventor in July 2019. [Complaint, 2020, PDF; case docket]

The US Patent and Trademark Office rejected the application in April 2020 on the basis that only a natural person could be named on a patent as the inventor. Thaler appealed — on behalf of DABUS — in June 2020.

The Patent Office’s response to the appeal includes many repetitions of the sentence “The allegations contained within this paragraph constitute conclusions of law, to which no response is required.” The court ruled against Thaler in February 2021. [Answer, 2020, PDF; ruling, 2021, PDF]

Thaler appealed the patent decision, and the appeal was denied in May 2022. Costs were assessed against Thaler. He appealed to the Supreme Court, which declined his patent appeal in April 2023. [Ruling, 2022, PDF; case docket; Reuters, 2023]

Here’s Thaler being interviewed on NewsNation in August 2023: [YouTube]

Natasha Zouves: Stephen, you say that you’ve invented a sentient AI, that it has feelings. What do you mean by this?

Thaler: Well, you’re also hearing news that a machine has invented whole new concepts that are being patented right now. And that’s resulting in a lot of conflict around the world as we battle in court cases to give credit to the machine. But what is driving the machine to invent, to motivate it are its emotions, its sentience, its subjective feelings.

That was four months after Thaler had lost his patent case in the US. The remaining case he’s talking about there was his final appeal in the UK, which the UK Supreme Court rejected in December 2023. [BBC; UK Supreme Court]

So, robots can’t get patents. Thaler brought the copyright case, which we mentioned on Pivot to AI in late 2024. In this case, Thaler’s Creativity Machine had generated an image, and Thaler went to register the copyright in November 2018. The US Copyright Office rejected the application in August 2019 — “because it lacks the human authorship necessary to support a copyright claim.” [Complaint, 2022, PDF; case docket]

Thaler appealed the copyright decision in June 2022 and that was thrown out in August 2023. He further appealed to the DC Circuit and that was thrown out in September 2024. He appealed to the Supreme Court, and that’s what was declined on Monday. AI can’t create a new copyright. [Opinion, 2023, PDF; appeal, 2024, PDF; appeal docket]

Something very like this has come up before: the monkey selfie case. A monkey grabbed a camera in 2011 and took a picture of itself, the camera’s owner tried to register a copyright, and in December 2014 the Copyright Office ruled that, yeah, a monkey can’t own a copyright.

Thaler’s machines sound like very interesting AI demos. That’s different from his machine being alive with feelings and intent. Thaler hasn’t got anyone to agree with him on that yet.

So what all this means is: if you generate some AI slop, it’s not yours, it’s uncopyrightable and in the public domain. Even if you own the AI that generated it.

That doesn’t mean you can copyright-wash someone else’s work by running it through the AI — your AI-twiddled version might still be a copyright violation and you could be sued for it.

If you edit an AI work, the human-edited parts might create a new copyright, but only for the new elements.

I’m not your lawyer, go talk to your lawyer. But robots can’t create a new copyright.

Saturday Morning Breakfast Cereal - Serve

Hovertext:
I need to do an upbeat comic week one of these days. They all end with hooray.


OpenAI’s ‘$110b’ funding round is $25b and some promises

OpenAI’s announced its latest funding round! The big headline number is $110 billion! That’s $30 billion from SoftBank, $30 billion from Nvidia, and $50 billion from Amazon. [OpenAI]

Zero dollars have moved yet. And the dollars are not real until they move — look at that $100 billion Nvidia not-a-deal that evaporated in early February.

I love SEC filings — you’re not allowed to lie in them. Amazon’s putting in $15 billion to start with, and the other $35 billion depends on conditions. From Amazon’s SEC 8-K filing on the matter: [SEC]

(i) OpenAI meeting specified milestones, and (ii) OpenAI directly or indirectly consummating an initial public offering or direct listing of equity securities in the United States

The “specified milestones” aren’t listed. The Information spoke to some guys who are pretty sure one condition is achieving Artificial General Intelligence. Quite a condition. [Information, archive]

SoftBank’s $30 billion is in three tranches — $10 billion in each of April, July, and October this year. This won’t be SoftBank’s own money — they’re borrowing it. They had a hard time finding lenders for the previous $40 billion round, so they’ll need some salesmanship. [SoftBank]

Nvidia hasn’t put in an actual dollar as yet or signed anything binding. There’s been nothing new in their SEC filings since their Form 10-K annual report, which says: [SEC]

We are finalizing an investment and partnership agreement with OpenAI. There is no assurance that we will enter into an investment and partnership agreement with OpenAI or that a transaction will be completed.

So that’s $25 billion of actual money to OpenAI as yet: Amazon’s initial $15 billion, plus SoftBank’s $10 billion if we count the April tranche.

OpenAI also expects other investors: [Bloomberg, archive]

roughly another $10 billion from venture capital firms and sovereign wealth funds as the round progresses.

Microsoft isn’t in this funding round so far. From the Information:

Microsoft had been expected to invest low billions of dollars, The Information previously reported, but it could invest a smaller amount or none at all, according to two of the people.

There are also the requisite circular deals. OpenAI will use 2 gigawatts of Amazon “Trainium” chips. This will cost an unspecified number of billions of dollars. OpenAI will do AI models for Amazon.

OpenAI will use 5 gigawatts of Nvidia’s Vera Rubin chips. Again, they don’t list a price tag.

OpenAI is likely to try for an initial public offering in the fourth quarter of 2026. This present deal gives OpenAI an imaginary valuation of $730 billion. I’m not sure there’s enough money in the market to sell all of that as stock. Maybe they can make an offering of some of it. [WSJ, archive]

OpenAI is still utterly unsustainable as a business. It burns three to five dollars for every dollar it takes in. It’s scrambling for revenue lately — but when you lose money on every transaction, you won’t make it up in volume.

There’s rumours Anthropic wants to go public as well. Nothing definite as yet. I suspect whoever goes first will do better — the same institutions who’d be the big backers for an IPO are the investors both companies have been hitting up for cash already.

Perhaps OpenAI or Anthropic can look sufficiently essential to the US government. Bailouts are peak capitalism.

I predict neither OpenAI nor Anthropic can make it out of this alive as sustainable businesses. But they might be able to soak the public investors first.

Polymarket Pulls Bet on Nuclear Detonation in 2026

For a few hours on Tuesday, Polymarket hosted a bet on the possibility of nuclear war in 2026. The market asked the question “Nuclear weapon detonation by …?” and racked up close to a million dollars in trading volume before Polymarket took the unusual step of removing the market from its website. It did not simply close down the bet; the market was “archived,” meaning no record of it remains on the site. That’s strange, as many older, paid-out bets are still visible.

Pulling a bet like this is unusual, and the company did not respond to 404 Media’s request for an explanation. Word of the nuke bet drew wide attention online from critics already upset with Polymarket for its place in the depravity economy.

“I have not seen anything like this before,” Jon Wolfsthal, a former special assistant to President Barack Obama and a member of the Bulletin of the Atomic Scientists, told 404 Media. “As a citizen, it seems dangerous to enable people in power to place bets anonymously on things that might happen, creating an incentive to act on a basis of personal gain and not the national interest.”

Polymarket doesn’t often balk at bets on violence and war. There are multiple markets covering the wars in Ukraine and Iran, and also many other bets about nuclear detonations. “Will a US ally get a nuke before 2027?” and “Russia nuclear test by …?” are both still actively trading. An older version of the “nuclear weapon detonation” market is still on the site and did almost $3 million in trading before closing and paying out at the end of 2025. Polymarket has hosted a bet on the same question every year for the past few years.

The gambling market has been under fire this week after gaining a lot of attention for its various bets on the war in Iran. Gamblers spent more than $5 million betting on the question “Will the Iranian regime fall by June 30?” People have been caught manipulating war maps to cash in on frontline advances in Ukraine. And someone made $400,000 using inside knowledge to place bets about the capture of Maduro.

“How ghoulish. Especially given how much insider trading apparently goes on with current events bets,” Alex Wellerstein, a nuclear historian and creator of the NUKEMAP, told 404 Media.

Wellerstein said that betting on nuclear war isn’t unprecedented, but that it’s usually tongue-in-cheek and conducted by insiders. “The thing that immediately comes to mind is Fermi's ‘side bet’ that the Trinity test would destroy the atmosphere in 1945—which was a joke, as nobody would be able to collect if it had happened,” he said.

“A flip of this is in Daniel Ellsberg's The Doomsday Machine, in which he eschewed paying into a pension in the early 1960s because he thought the odds of a future nuclear war were so high that it was better to spend the money sooner rather than later. So another kind of bet, but a private one,” Wellerstein added. “And whenever experts give ‘odds’ on nuclear use (which the intelligence community does, apparently), they are to some degree indulging in this kind of impulse. But not for the hope of personal profit—usually it is because they want to avoid such an outcome.”

Polymarket CEO Shayne Coplan has repeatedly called the site “the future of news,” and has suggested that prediction markets give the public a clearer picture of events because money is on the line. In practice, the financial incentives pervert reality. Nuclear war, it seems, was a bit too dramatic for Polymarket to host a wager on. But Polymarket has few moral qualms and has not told anyone why it “archived” the bet; it’s possible it did so for some arcane technical reason and not because it got squeamish. Polymarket did not respond to 404 Media’s request for comment.

AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles

Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages, after they discovered these AI translations added AI “hallucinations,” or errors, to the resulting articles.

The new restrictions show how Wikipedia editors continue to fight to keep the flood of generative AI across the internet from diminishing the reliability of the world’s largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how those errors are remedied by Wikipedia’s open governance model.

The issue in this case starts with an organization called the Open Knowledge Association (OKA), a non-profit dedicated to improving Wikipedia and other open platforms.

“We do so by providing monthly stipends to full-time contributors and translators,” OKA’s site says. “We leverage AI (Large Language Models) to automate most of the work.”

The problem is that editors started to notice that some of these translations introduced errors to articles. For example, a draft translation for a Wikipedia article about the French royal La Bourdonnaye family cites a book and specific page number when discussing the origin of the family. A Wikipedia editor, Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia, checked that source and found that the specific page of that book “doesn't talk about the La Bourdonnaye family at all.”

“To measure the rate of error, I actually decided to do a spot-check, during the discussion, of the first few translations that were listed, and already spotted a few errors there, so it isn't just a matter of cherry-picked cases,” Lebleu told me. “Some of the articles had swapped sources or added unsourced sentences with no explanation, while 1879 French Senate election added paragraphs sourced from material completely unrelated to what was written!”

As Wikipedia editors looked at more OKA-translated articles, they found more issues. 

“Many of the results are very problematic, with a large number of [...] editors who clearly have very poor English, don't read through their work (or are incapable of seeing problems) and don't add links and so on,” a Wikipedia page discussing the OKA translation said. The same Wikipedia page also notes that in some cases the copy/paste nature of OKA translators’ work breaks the formatting on some articles. 

Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy/paste articles to popular LLMs to produce translations. 

For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”

Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.

“The use of Grok proved controversial, notably given the reasons for which Grok has been in the news recently, and a recent in-house study showed ChatGPT and Claude perform more accurately, leading them to switch a few days ago, although they still recommend Grok as ‘valuable for experienced editors handling complex, template-heavy articles,’” Lebleu told me. 

Ultimately the editors decided to implement restrictions against OKA translators who make multiple errors, but not block OKA translation as a rule. 

“OKA translators who have received, within six months, four (correctly applied) warnings about content that fails verification will be blocked without further warning if another example is found,” the Wikipedia editors wrote. “Content added by an OKA translator who is subsequently blocked for failing verification may be presumptively deleted [...] unless an editor in good standing is willing to take responsibility for it.”

A job posting for a “Wikipedia Translator” from OKA offers $397 a month for working up to 40 hours per week. The job listing says translators are expected to publish “5-20 articles per week (depending on size).”

“They leverage machine translation to accelerate the process. We have published over 1500 articles and the number grows every day,” the job posting says. 

“Given this precarious status, I am worried that more uncertainty in the translator duties may lead to an overloading of responsibilities, which is worrying as independent contractors do not necessarily have the same protections as paid employees,” Lebleu wrote in the public Wikipedia discussion about OKA.

Jonathan Zimmerman, the founder and president of OKA, who goes by 7804j on Wikipedia, told me that translators are paid hourly, not per article, and that there is no fixed article quota.

“We emphasize quality over speed,” Zimmerman told me in an email. “In fact, some of the problematic cases involved unusually high output relative to time spent — which in retrospect was a warning sign. Those cases were driven by individual enthusiasm and speed rather than institutional pressure.”

Zimmerman told me that “errors absolutely do occur,” but that OKA’s process includes human review, requires translators to check their content against cited sources, and that “senior editors periodically review samples, especially from newer translators.”

“Following the recent discussion, we have strengthened our safeguards,” Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”

Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.
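
To make that concrete, here is a minimal sketch of what such a second-model comparison step could look like, in Python. This is not OKA’s actual code: call_llm is a hypothetical stand-in for whatever LLM client they use, and the comparison prompt wording is invented for illustration.

    # Hypothetical sketch of a second-model cross-check, as described above.
    # Not OKA's code: call_llm() is a stand-in for a real LLM client, and the
    # comparison prompt wording is invented.

    COMPARISON_PROMPT = """\
    Compare the TRANSLATION against the SOURCE article.
    List any discrepancies, omissions, or claims not supported by the source.
    If the translation is faithful, reply with exactly: NO ISSUES FOUND

    SOURCE:
    {source}

    TRANSLATION:
    {translation}
    """

    def call_llm(prompt: str) -> str:
        """Stand-in for a real LLM API call (e.g. an OpenAI or Gemini SDK)."""
        raise NotImplementedError("wire up an actual LLM client here")

    def cross_check(source: str, translation: str) -> list[str]:
        """Ask an independent model to flag discrepancies; return them as a list.

        An empty list means the second model flagged nothing, which is a
        signal, not a guarantee of correctness.
        """
        report = call_llm(
            COMPARISON_PROMPT.format(source=source, translation=translation)
        )
        if report.strip() == "NO ISSUES FOUND":
            return []
        return [line.strip() for line in report.splitlines() if line.strip()]

The design point, as Zimmerman describes it, is that the second model only flags candidate discrepancies; a human reviewer still verifies the draft against the cited sources.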

Using AI to check the output of AI is a method that is itself historically error-prone. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students. Internal testing found it had at least a 10 percent failure rate.

“I agree that using AI to check AI can absolutely fail — and in some contexts it can fail at very high rates. We’re not assuming the secondary model is reliable in isolation,” Zimmerman said. “The key point is that we’re not replacing human verification with automated verification. The second model is a complement to manual review, not a substitute for it.”

“When a coordinated project uses AI tools and operates at scale, it’s going to attract attention. I understand why editors would examine that closely. Ultimately, the outcome of the discussion formalized expectations that are largely aligned with our existing internal policies,” Zimmerman added. “However, these restrictions apply specifically to OKA translators. I would prefer that standards apply equally to everyone, but I also recognize that organized, funded efforts are often held to a higher bar.”
