Early on Friday morning, someone threw a Molotov cocktail at OpenAI founder Sam Altman’s house in San Francisco! It bounced off the house and set a front gate on fire. Police would like to talk to 100 million suspects.
No, they caught the alleged culprit straight away. He went from Altman’s house to the OpenAI offices and yelled he was going to set them on fire too. The police arrested him there. [Twitter, archive]
Sam Altman was shaken by this — obviously — and he blogged about it. Altman tried to imply the attack was prompted by last week's New Yorker article about him by Ronan Farrow and Andrew Marantz — in which everyone they spoke to said what a serial liar Altman was. [blog post; New Yorker, archive]
The article was not the reason for the attack. The alleged attacker was Daniel Moreno-Gama, age 20 — a devoted believer in the AI doomsday and a huge fan of Eliezer Yudkowsky, founder of the rationalist subculture.
Moreno had an Instagram called “butlerian_jihadist_”. The Butlerian Jihad is from Frank Herbert's Dune: a crusade in which humanity spends a hundred years wiping out any computer that could think.
Moreno's Instagram had a pile of stuff about the forthcoming AI doomsday — wherein an artificial intelligence gets so smart it can improve itself. At that point the AI takes off and escalates to superintelligence! (Somehow.) The AI then treats humans as mere raw materials for its own uses, and we all die.
Now, you might think that’s a sci-fi movie scenario, and frankly a bit nuts to actually worry about as a real problem.
But Moreno posted to his Instagram heartily endorsing Yudkowsky’s book about AI doomsday: If Anyone Builds It, Everyone Dies. This book puts itself forward as non-fiction.
The book came out late last year. I got a review copy and instantly regretted asking for one.
Yudkowsky previously wrote a million words of blog posts, from 2007 to 2009, detailing his philosophy. These are called the Sequences, the core documents of the rationalist subculture. (Some have even read them!)
The book is the same stuff Yudkowsky’s been saying since 2007. Slightly cleaned up by Nate Soares, the president of Yudkowsky’s anti-AI charity.
Is this cult stuff? You betcha! Are they sincere or are they charlatans? 100% sincere. Yudkowsky believes this with all his heart.
Yudkowsky used to push for a “Friendly AI,” aligned with human values. It would love us and take care of us. He’s now pretty sure friendly AI can’t be done — ’cos the people running the AI bubble companies are his own cultists!
Yudkowsky said “don’t build the torment nexus, you idiots” — and the AI doomers all got billions in venture capital funding to build precisely that torment nexus.
The book is not a great argument against AI. It handwaves so fast it’ll take off. We’re talking about superintelligence — what is intelligence? Intelligence is (handwave) being able to plan and do general … things (handwave). “It seems to us.”
The whole book is a chain of reasoning by analogy, frequently to things Yudkowsky doesn’t quite understand — like large language models. It’s completely vibes-based. How will the superintelligence beat humanity? It just will, okay. It’ll play Calvinball and just win.
Yudkowsky skids by on the reader assuming he must know what he’s talking about. If you go “wait, hold on a tick” the illusion breaks.
They saved the crazy stuff for the end. Here is the Yudkowsky/Soares prescription for stopping the creation of dangerous AI: have the international AI monitoring authority threaten a nuclear strike if you have more than eight (8) high-end graphics cards as of 2024! That’d be Nvidia H100 Hopper equivalent. BOOM, you get bombed. Seriously, it’s on page 213:
Unfortunately, there isn’t anything magical about the number 100,000. We don’t know that 99,999 GPUs is okay. Nobody knows how to calculate the fatal number. So the safest bet would be to set the threshold low — say, at the level of eight of the most advanced GPUs from 2024 — and say that it is illegal to have nine GPUs that powerful in your garage, unmonitored by the international authority.
That would solve the AI bubble. Even if for the dumbest possible reason.
I don’t recommend this book.
But Daniel Moreno sure did recommend it. He loved it. He was inspired by it. Moreno is a fully committed AI doomer.
I posted to LessWrong.com, the home of the Sequences and the epicentre of rationalism, from 2010 to 2014. I thought they were an interesting bunch, and surely we could work past the culty bits with sweet reason. I did eventually realise the culty bits were the point.
The rationalist subculture keeps churning out radicalised people obsessed with AI doomsday.
You might have heard of the Zizians, the cult formed by Ziz LaSota, who is currently on trial for murdering her landlord, and whose cultists allegedly murdered a Border Patrol officer and possibly four others. The Zizians are AI doomers as well. They’re a schism from rationalism. [AP]
Is rationalism itself a dangerous cult? Well, mostly they’re really bad at things and anyone who gets good at something leaves.
But I was very proud in November when Oliver Habryka, who runs the LessWrong site these days, posted an Enemies List of the “rationality community” — and I was number one! Awesome! [LessWrong, archive]
Number two was someone who ran a downvote bot in 2013. Number three was Émile P. Torres, who writes a lot about the rationalists these days in academic and popular press. Émile works way harder against the rationalists than I do.
Number four on the enemies list was the cult of Ziz. Émile and I, and some guy who ran a downvote bot, are apparently worse than the literally murderous nut cultists.
I think my main offence was writing the RationalWiki article about Roko’s basilisk, the super-AI that will torture a copy of you forever if you don’t donate money to build it. Now, you might think that idea is obviously stupid. I’m also told rationalists are sure the reason people think they’re a cult is me. And not because they keep acting like a cult.
You’ll be comforted to know I feel about zero percent in danger from these bozos. If you’re a rationalist and this post upsets you, I suggest you read more Émile Torres.
All of that rationalist guff is the stew of crazy brewing in Daniel Moreno’s brain. So why did Moreno allegedly attack Sam Altman?
Everyone’s heard all about the AI doomsday, where the chatbot is so dangerous it’ll take over — because the AI bubble companies use this idea as marketing! They never shut up about it!
We have built a nothingburger so tasty it could destroy civilisation! If the AI can destroy humanity … it’s definitely powerful enough to write your emails.
And Altman is one of the loudest. OpenAI will fight the scheming evil AI, which doesn’t exist yet! Sam says the super-AI is coming in just “a few thousand days”! OpenAI’s Strawberry model could destroy humanity!
Altman’s been resorting to this trick for a long time. In 2019, OpenAI were hyping GPT-2 — their first text generator to be barely coherent — as “too dangerous to release”. [TechCrunch, 2019]
There was a second attack at Altman’s house on Sunday morning. Someone shot at the house. Two people were arrested. Motive is as yet unknown. [SF Police]
Altman is slowly realising there are a lot of people who’ve literally been driven mad by the ideas he’s been using as marketing. Throwing Molotovs at Sam Altman’s house is probably bad — but it’s like Altman’s done a stochastic terrorism to … himself. Hope he calms down a bit.