An open discussion on Brian Christian's The Alignment Problem: AI, values, and the hard questions. No lectures. Just curious people.
Can we build AI that actually does what we want, and should we trust ourselves to define what that is?
Brian Christian's The Alignment Problem is one of the most important books written about artificial intelligence, not because it's about robots taking over, but because it's about something far more unsettling: what happens when we build systems that do exactly what we told them to, and things still go wrong.
Join us for an informal afternoon discussion of the book's core ideas: reward hacking, value loading, corrigibility, inner alignment, and what it means to build machines that behave according to human intentions. No computer science background required. Genuine curiosity is the only prerequisite.
What to expect:
- A focused, 1-hour conversation organized around the book's major themes
- No lectures: just facilitated dialogue, with prepared discussion questions to keep things moving
- A venue that makes you want to stay for one more round of conversation
Book: The Alignment Problem by Brian Christian
Capacity: Limited to 6 attendees (kept intentionally small so everyone gets to speak)
Format: Facilitated group discussion