🧠 AI, Optimization, and Agency: Terminator, Tinker Bell… or Something Else Entirely?
What happens when we build systems that don’t just think, but act?
This event explores one of the most pressing and under-examined questions in AI: what if the real risk isn’t intelligence, but optimization?
We’ll look at how AI systems—especially those with increasing autonomy—may begin to behave in agent-like ways, even when they aren’t explicitly designed to be agents. Drawing on work from LessWrong (like “Optimality is the tiger, and agents are its teeth”), we’ll explore how optimization processes themselves, when left unchecked, can drift toward dangerous outcomes—even in systems that fall short of AGI.
We’ll discuss:
- What AI is, and how generative models are trained
- What it means for a system to act as an agent—or optimize for a goal
- Why optimization may be more dangerous than intelligence alone
- Biological vs. artificial agency: why our assumptions may not apply
- How systems could bootstrap agent-like behavior from optimization loops
This talk is aimed at people curious about where AI is heading—philosophically, technically, and ethically. No specialized background required. Expect a mix of insight, debate, and real-world relevance.