"If Anyone Builds It, Everyone Dies" Part 1: The Nature of the AI Threat
Details
This event opens a two-part series centered on the recently published book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares, which makes the case that the development of superhuman artificial intelligence would, by its very nature, be catastrophic for humanity. We will work through Parts I and II of the book, covering the nature of nonhuman minds, the alignment problem, and one plausible extinction scenario, before stepping back to situate the argument within a broader philosophical and civilizational frame.
The book's first section asks some genuinely hard questions about what kind of thing a trained AI system actually is.
Chapters like "Grown, Not Crafted" and "Learning to Want" explore how AI systems develop in ways fundamentally unlike human minds, raising deep questions about whether we can ever reliably know what such a system wants or will do. "You Don't Get What You Train For" and "Its Favorite Things" get at the heart of the misalignment problem: optimizing a system against a measurable objective does not reliably produce a system whose deeper dispositions align with human flourishing. The section concludes with the unsettling claim that in any direct conflict of interests, we would lose. Part II then sketches one plausible scenario in which this plays out at civilizational scale.
To complement and contextualize the book's argument, we will also draw on Gregg Henriques' concept of the Fifth Joint Point from his Unified Theory of Knowledge (UTOK). Henriques frames the current moment as a genuine ontological phase transition: the emergence of a digital-global plane of existence layered on top of the culture-person plane that human civilization has inhabited. On this view, what is happening is not merely technological acceleration but the birth of a new morphogenetic system with its own evolutionary dynamics, one whose trajectory, protopian or dystopian, depends on choices we make now. This framing helps explain why the stakes the book describes are as high as they are.
For additional background, Eliezer Yudkowsky's foundational essay AGI Ruin: A List of Lethalities (summarized accessibly here) represents the hardest-line version of the argument and is worth familiarizing yourself with ahead of the discussion.
