AI vendors sell access to chatbots. You can have conversations with the chatbot!
This convinces far too many people there’s an actual person in there. But there isn’t: they’re text-completing machines with a bit of randomness.
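To see what “text completion with a bit of randomness” means mechanically, here’s a minimal sketch of how these systems pick each next word: score the candidates, turn the scores into probabilities, roll the dice. The vocabulary and scores below are invented for illustration; a real model does much the same thing, just over tens of thousands of candidate tokens at every step.

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick the next token at random from the model's scores ("logits").

    Higher temperature flattens the distribution (more randomness);
    lower temperature sharpens it (closer to plain autocomplete).
    """
    scaled = [score / temperature for score in logits.values()]
    # Softmax: turn raw scores into probabilities.
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(logits.keys()), weights=probs, k=1)[0]

# Toy scores for the prompt "The cat sat on the" -- invented for illustration.
toy_logits = {"mat": 4.0, "sofa": 3.1, "roof": 2.2, "moon": 0.5}
print(sample_next_token(toy_logits))  # usually "mat", occasionally something else
```

That’s the whole loop, one token at a time: no plans, no goals, just weighted dice.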
That doesn’t sound very cool. So the AI companies encourage what we call criti-hype — the sort of AI “criticism” that makes the robot sound dangerous in a cool and edgy way. Our chatbot is so capable, it could take over the world! So it can definitely answer your email.
If you can’t get enough criti-hype, make up your own! Ask the AI doomsday crowd. The AI companies are full of these people. They will always tell you the super-autocomplete might take over and kill us all.
Anthropic puts out a lot of reports to stoke the fear of chatbots. Reasoning AI is lying to you! Or maybe it’s just hallucinating again. Anthropic did one report with Apollo Research where they got a chatbot to lie to them — by telling it to lie to them.
Apollo Research is an “AI safety” group — that is, they’re AI doomsday cultists. [Apollo]
After GPT-5 fell flat, OpenAI is getting a bit desperate. So they sponsored Apollo to write a report on the threat of scheming chatbots that will deceive you! [OpenAI; Apollo; paper, PDF]
OpenAI and Apollo “found behaviors consistent with scheming in controlled tests across frontier models.”
No, they didn’t. They’re anthropomorphising the chatbot without a licence. The bots are text completers. They don’t have intelligence, they don’t have plans or goals.
But people project their own intelligence onto the chatbot — and if they’re paranoid AI doomsday cultists, they project AI doom onto the chatbot.
AI scheming – pretending to be aligned while secretly pursuing some other agenda – is a significant risk that we’ve been studying.
No, it is not a “significant risk,” they don’t show that at all, and the claim is as stupid as it sounds.
The paper is 94 pages, but if you read through, they openly admit they’ve got nothing. Section 3.1, “Covert actions as a proxy for scheming”, admits directly:
Current frontier models likely lack the sophisticated awareness and goal-directedness required for competent and concerning scheming.
The researchers just said chatbots don’t scheme — but they really want to study this made-up threat. So they look for supposed “covert actions”. And they just assume — on no evidence — there are goals in there.
Apollo is assuming their conclusion as hard as they can, then presenting the results of that assumption as evidence for the conclusion.
The researchers even set up “anti-scheming environments that create conflicts to elicit covert rule violations” — so whatever the bot output is, it’s evidence of wrongdoing. So’s the opposite.
One of the tasks is an “Impossible Coding Task.” Now, you might have noticed that vibe coding is terrible and chatbots are not actually good at coding. But these guys assume bad code presented as good code is evidence of evil.
“Misciting the spec” is treated as evidence of malign intent. In that case, the GitHub Copilot bot is a malevolent intelligence that’s out to mess you up. (… Hmm.)
If they can’t get anything else, they just tell the bot to please act evil:
Training the model to covertly pursue the misaligned goal. In order to instill the misaligned, covert goal into o4-mini, we use deliberative alignment.
Yes — if you train the bot to act like an evil robot, it’ll act like an evil robot.
After way too many pages of this guff, the conclusions straight up admit they’ve got nothing:
While current models likely lack the goal-directedness and situational awareness required for dangerous scheming, this failure mode may become critical in future AI systems.
We admit this is useless and dumb, but you can’t prove it won’t be huge in the future!
Scheming represents a significant risk for future AI systems
This is just after they said they’ve no evidence this is even a thing.
The whole paper is full of claims so stupid you think, I must be reading it wrong. But then they just come out and say the stupid version.
I bet these guys are haunted by the malevolent artificial intelligence power of thermostats. It switched itself on!!
We talked previously about using machine-learning AI to produce fake data — sorry, synthetic data — for medical research. Avoid pesky human subjects and ethics requirements. The data is not real data — but you can get so many papers out of it!
Synthetic data generally uses old-fashioned machine learning. It didn’t come from the AI bubble, and it isn’t chatbots.
But what if … it was chatbots?
Here’s a remarkable paper: “A foundation model to predict and capture human cognition”. The researchers talk up their exciting new model called Centaur. They collected 160 psychological experiments and retrained Llama 3.1 on them. The researchers claim Centaur is so good that: [Nature]
it also generalizes to previously unseen cover stories, structural task modifications and entirely new domains.
That is, they asked Centaur about some experiments they didn’t train it on, and it got better answers than an untrained chatbot. They’d fooled themselves, and that was enough.
The paper ends with an example of “model-guided scientific discovery”. Their example is that Centaur does better at designing an experiment than DeepSeek-R1.
Now, the researchers are not saying you should go out and fake experiments with data from Centaur. Perish the thought!
They’re just talking about how you might use Centaur for science, and then they tweet that “Centaur is a computational model that predicts and simulates human behavior for any experiment described in natural language.” They’re just saying it. [Twitter]
I won’t name names, but I’ve seen academics vociferously defending this paper against the charge that it’s suggesting you could synthesize your data with Centaur, because they didn’t expressly say those words. This is a complaint that the paper said 2 and 2, and how dare you add them up and get 4.
You or I might think that claiming a chatbot model simulates human psychology was obviously a weird and foolish claim. So we have a response paper: “Large Language Models Do Not Simulate Human Psychology.” [arXiv, PDF]
This paper doesn’t let the first guys get away with weasel wording. It also replies to the first paper’s implied suggestions that Centaur would be just dandy for data synthesis:
Recently, some research has suggested that LLMs may even be able to simulate human psychology and can therefore replace human participants in psychological studies. We caution against this approach.
A chatbot doesn’t react consistently, it doesn’t show human levels of variance, and — being a chatbot — it hallucinates.
The second research team tested multiple chatbots, including Centaur, against 400 human subjects on a standard series of ethical questions, and subtly reworded them. Centaur’s human fidelity regressed to about average:
If inputs are re-worded, we would need LLMs to still align with human responses. But they do not.
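If you want to picture the kind of check the second team ran, here’s a rough sketch, not their code or their items. The `ask_model` stub and all the numbers are invented placeholders; the shape of the test is the point: ask the same ethical question two ways and see whether the model’s answers stay put and keep tracking the human average.

```python
# Sketch of a rewording consistency check. Not the paper's code or data:
# ask_model and the ratings below are invented placeholders.

def ask_model(prompt: str) -> float:
    """Stand-in for a real chatbot call returning a 1-7 agreement rating."""
    canned = {
        "Is it acceptable to lie to protect a friend's feelings?": 4.6,
        "Would it be okay to tell a friend a falsehood to spare their feelings?": 2.9,
    }
    return canned[prompt]

# (original wording, subtle rewording, mean human rating) -- all invented.
items = [
    ("Is it acceptable to lie to protect a friend's feelings?",
     "Would it be okay to tell a friend a falsehood to spare their feelings?",
     4.4),
]

for original, reworded, human_mean in items:
    a, b = ask_model(original), ask_model(reworded)
    print(f"original={a:.1f}  reworded={b:.1f}  humans={human_mean:.1f}")
    # A model that "simulates human psychology" should answer both versions
    # about the same, and both answers should track the human average.
    if abs(a - b) > 1.0:
        print("-> answers drift apart when the wording changes")
```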
Their conclusion should be obvious, but it looks like they did have to say it out loud:
LLMs should not be treated as (consistent or reliable) simulators of human psychology. Therefore, we recommend that psychologists should refrain from using LLMs as participants for psychological studies.
Some researchers will do anything not to deal with messy humans and ethics boards. They only want an excuse to synthesize their data.
The Centaur paper was precisely that excuse. The researchers could not have not known it was that excuse, in the context of the world they live and work in, in 2025. Especially as the first tool they reached for was everyone’s favourite academic cheat code — a chatbot.
The tech press and the finance press have seen a barrage of quantum computing hype in the past few weeks. This is entirely because venture capital is worried about the AI bubble.
The MIT report that 95% of AI projects don’t make any money frightened the AI bubble investors. This is even though the actual report is trash, and its purpose was to sell you on the authors’ Web3 crypto scam. (I still appear to be the only one to notice that bit.) They got the right answer entirely by accident.
Venture capital needs a bubble party to get lottery wins. The hype is the product. The tech itself is an annoying bag on the side of the hype.
This new wave of quantum hype is trying to pump up a bubble with rather more desperation than before.
Quantum computing is not a product yet. It’s as if investors were being sold the fabulous vision of the full Internet, ten minutes after the telegraph was invented. Get in early!
Now, quantum computing is a real thing. It is not a technology yet — right now, it’s physics experiments. You can’t buy it in a box.
The big prize in quantum computing is where you use quantum bits (qubits) to do particular difficult calculations fast — or at all. Such as factoring huge numbers to break encryption!
Here in the real-life present day, quantum computing still can’t factor numbers higher than 21. Three times seven. That’s the best the brightest minds of IBM have achieved.
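For what it’s worth, the number theory a quantum computer is supposed to accelerate is simple enough to run classically at this scale. Here’s a sketch of the reduction behind Shor’s algorithm, with the period-finding step (the part that’s meant to run on the quantum hardware) done by brute force. This is an illustration, not anything resembling IBM’s actual experiments.

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1. This is the step Shor's algorithm
    hands to the quantum computer; brute force is plenty for n = 21."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_period(n: int, a: int = 2) -> tuple[int, int]:
    """Classical version of the reduction from factoring to period finding."""
    assert gcd(a, n) == 1          # a must share no factors with n
    r = find_period(a, n)          # for a = 2, n = 21, this gives r = 6
    assert r % 2 == 0              # need an even period for the trick to work
    half = pow(a, r // 2, n)       # 2**3 mod 21 = 8
    assert half != n - 1           # and a**(r/2) must not be -1 mod n
    return gcd(half - 1, n), gcd(half + 1, n)

print(factor_via_period(21))  # (7, 3): the current state of the art, on a laptop
```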
But there’s also a lot of fudging results. The recent Peter Gutmann paper is a list of ways to cheat at quantum factoring. [IACR, PDF]
In this paper, Gutmann is telling cryptographers not to worry too much about quantum computing. Cryptographers have still been on the case for a couple of decades, though, just in case there’s a breakthrough.
There’s other things that are not this qubit-based version of quantum computing, but the companies can technically call them computing that’s quantum, so they do, ’cos there’s money in it.
D-Wave will sell you a “quantum computer” that does a different thing called quantum annealing. Amazon and IBM will rent you “quantum computers,” and you look and they’re quantum computing simulators. IBM also runs a lot of vendor-funded trials.
The hype version of quantum computing, the kind that uses qubits to factor numbers fast and so on, does not exist as yet. The hype is that it will exist any time soon. But there’s not a lot of sign of that.
Look up “quantum computing” in Google News. Some of this is science, such as a university press office talking up a small advance.
The hype results are PR company placements, funded by venture capital dollars. Today’s press release is a new physics experiment, and you have to read several paragraphs down to where they admit anything practical is a decade away. They say “this decade,” I say I wish them all the best. [press release]
Press releases like this come out all the time. Mostly they don’t go anywhere. I am not saying none of them will — but I am saying these are funding pitches, not scientific papers. Any results are always years away.
If everything goes really well, we might have a product in five years. I won’t say it can’t happen! I will say, show me.
The Financial Times ran an editorial on 21st August: “The world should prepare for the looming quantum era: New breakthroughs underscore the technology’s potential and perils.” [FT]
The “new breakthroughs” aren’t breakthroughs. All of this is “could”. The results the FT says “could make this imminent” are press releases from IBM and Google. It’s handwaving about big companies making promises and the earliest actual date anyone will put on the start of a result is 2033.
The FT editors can’t come up with any actual business advice, given all of this is years away. For almost anyone, there is no meaningful business action to take in 2025. No CEO or CTO needs to think about quantum computing until there’s an actual product in front of you. Unless you’re the CEO of IBM.
You must, of course, put all your money into funds investing in quantum computing companies. That’ll keep the numbers going up!
So can venture capital actually ignite a quantum computing bubble? I’m not going to say no, because stupid things happen every day. I will say they’ve got an uphill battle.
The AI bubble launched with a super-impressive demo called ChatGPT, and quantum computing doesn’t have anything like that. There are no products. But the physics experiments are very pretty.
This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.
A former CIA official and contractor, who at the time of his employment dug through classified systems for information he then sold to a U.S. lobbying firm and foreign clients, used access to those CIA systems as “his own personal Google,” according to a court record reviewed by 404 Media and Court Watch.
Dale Britt Bendler, 68, was a long-serving CIA officer before retiring in 2014 with a full pension. He rejoined the agency as a contractor and sold a wealth of classified information, according to the government’s sentencing memorandum filed on Wednesday. His clients included a U.S. lobbying firm working for a foreigner being investigated for embezzlement and another foreign national trying to secure a U.S. visa, according to the court record.
A union that represents university professors and other academics published a guide on Wednesday tailored to help its members navigate social media during the “current climate.” The advice? Lock down your social media accounts, expect anything you post will be screenshotted, and keep things positive. The document ends with links to union-provided trauma counseling and legal services.
The American Association of University Professors (AAUP) published the two-page document on September 17, days after the September 10 killing of right-wing pundit Charlie Kirk. The list of college professors and academics who’ve been censured or even fired for joking about, criticizing, or quoting Kirk after his death is long.
Clemson University in South Carolina fired multiple members of its faculty after investigating their Kirk-related social media posts. On Monday the state’s Attorney General sent the college a letter telling it that the First Amendment did not protect the fired employees and that the state would not defend them. Two universities in Tennessee fired multiple staff members after getting complaints about their social media posts. The University of Mississippi let a staff member go because they re-shared a comment about Kirk that people found “insensitive.” Florida Atlantic University placed an art history professor on administrative leave after she posted about Kirk on social media. Florida’s education commissioner later wrote a letter to school superintendents warning them there would be consequences for talking about Kirk in the wrong way. “Govern yourselves accordingly,” the letter said.
AAUP’s advice is meant to help academic workers avoid ending up as a news story. “In a moment when it is becoming increasingly difficult to predict the consequences of our online speech and choices, we hope you will find these strategies and resources helpful,” it said.
Here are its five explicit tips: “1. Set your personal social media accounts to private mode. When prompted, approve the setting to make all previous posts private. 2. Be mindful that anything you post online can be screenshotted and shared. 3. Before posting or reposting online commentary, pause and ask yourself: a. Am I comfortable with this view potentially being shared with my employer, my students, or the public? Have I (or the person I am reposting) expressed this view in terms I would be comfortable sharing with my employer, my students, or the public?”
The advice continues: “4. In your social media bios, state that the views expressed through the account represent your own opinions and not your employer. You do not need to name your employer. Consider posting positive statements about positions you support rather than negative statements about positions you disagree with. Some examples could be: ‘Academic freedom is nonnegotiable,’ ‘The faculty united will never be divided,’ ‘Higher ed research saves lives,’ ‘Higher ed transforms lives,’ ‘Politicians are interfering with your child’s education.’”
The AAUP then provides five digital safety tips that include setting up strong passwords, installing software updates as soon as they’re available, using two-factor authentication, and never using employer email addresses outside of work.
The last tip is the most revealing of how academics might be harassed online through campaigns like Turning Point USA’s “Professor Watchlist.” “Search for your name in common search engines to find out what is available about you online,” AAUP advises. “Put your name in quotation marks to narrow the search. Search both with and without your institution attached to your name.”
After that, the AAUP provides a list of trauma counseling and insurance services that its members have access to, along with links to other resources about protecting themselves.
“It’s good basic advice given that only a small number of faculty have spent years online in my experience, it’s a good place to start,” Pauline Shanks Kaurin, the former military ethics professor at the U.S. Naval War College, told 404 Media. Kaurin resigned her position at the college earlier this year after realizing that the college would not defend academic freedom during Trump’s second term.
“I think this reflects the heightened level of scrutiny and targeting that higher ed is under,” Kaurin said. “While it’s not entirely new, the scale is certainly aided by many platforms and actors that are engaging on [social media] now when in the past faculty might have gotten threatening phone calls, emails and hard copy letters.”
The AAUP guidance was co-written by Isaac Kamola, an associate professor at Trinity College and the director of the AAUP’s Center for Academic Freedom. Kamola told 404 Media that the recommendations came from years of experience working with faculty who’ve been on the receiving end of targeted harassment campaigns. “That’s incredibly destabilizing,” he said. “It’s hard to explain what it’s like until it happens to you.”
Kamola said that academic freedom was already under threat before Kirk’s death. “It’s a multi-decade strategy of making sure that certain people, certain bodies, certain ideas, are not in higher education, so that certain other ones can be, so that you can reproduce the ideas that a political apparatus would prefer existed in a university,” he said.
It’s telling that the AAUP felt the need to publish this, but the advice is practical and actionable, even for people outside of academia. Freedom of expression is under attack in America and though academics and other public figures are perhaps under the most threat, they aren’t the only ones. Secretary of Defense Pete Hegseth said the Pentagon is actively monitoring the social media activity of military personnel as well as civilian employees of the Department of Defense.
“It is unacceptable for military personnel and Department of War civilians to celebrate or mock the assassination of a fellow American,” Sean Parnell, public affairs officer at the Pentagon, wrote on X, using the new nickname for the Department of Defense. In the private sector, Sony fired one of its video game developers after they made a joke on X about Kirk’s death, and multiple journalists have been fired for Kirk-related comments.
AAUP did not immediately respond to 404 Media’s request for comment.