Options for the future of global AI governance
Details
In what ways (if at all) should people around the world try to constrain and guide the development and deployment of new generations of AI platforms and applications?
Recent events raise significant new issues and opportunities for the coordinated global governance of advanced AI. These include:
- The Singapore Consensus
- The US AI Action Plan
- Discussions at the World AI Conference in Shanghai
- Conversations in New York around the UN High-Level Week
- The Global Call for AI Red Lines
- Rapid new releases of AI models
- AI models passing new thresholds of capability
This London Futurists webinar features a number of close observers of these trends and events, each offering their suggestions for what can (and should) happen next:
- Seán Ó hÉigeartaigh, Director, AI: Futures and Responsibility Programme, University of Cambridge
- Kayla Blomquist, Director, Oxford China Policy Lab
- Dan Faggella, CEO and Head of Research, Emerj Artificial Intelligence Research
- Duncan Cass-Beggs, Executive Director, Global AI Risks Initiative
- Robert Whitfield, Convenor, GAIGANow
- Nora Ammann, Technical Specialist, Advanced Research and Invention Agency (ARIA)
The webinar will include time for audience questions and feedback.
AI and Society
New Technology
Political Activism
Risk Governance and Compliance
Geopolitics
