The Doom Thesis: Why 'If Anyone Builds It, Everyone Dies'
Details
Join us for our first "Book Club"-style event.
If you've been around AI Safety recently, you'll have noticed a new book on the scene:
"If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares (see a high-level summary of the chapters here). The book moves beyond traditional "AI safety" to make the urgent case for a total global cessation of general AI development.
Hazel Shanks will explain and deconstruct the core arguments of the "Doom Thesis" as it has developed over the last few years, exploring why current training methods, which the authors describe as "growing" rather than "building" artificial minds, may lead to a literal, biological extinction event.
Afterwards, over dinner at the pub, we will discuss the validity of the authors' claims and debate their radical call to action: an international ban on large-scale GPU clusters and the enforcement of "no-build" zones.
***
### Pre-Reading and Resources
- AI 2027 – graph; a prediction of AI capabilities
- AI Catastrophe – Compendium, 2024 blog post, 10 minutes; a walk-through of the arguments
- The Problem – Machine Intelligence Research Institute, 10 minutes; a summary of the problem
- Pausing AI Developments Isn't Enough. We Need to Shut it All Down – Eliezer Yudkowsky, 2023 op-ed in TIME
- Statement on AI Risk – 2023 open letter: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
- If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (2025) – Yudkowsky, E. & Soares, N. (chapter summary)
