Many leading scientists have expressed concerns that AI could pose an existential risk to humanity. In response, a series of global AI Safety Summits was initiated at Bletchley Park in the UK in November 2023. The series continues in Seoul, South Korea, on 21-22 May 2024.

These AI Safety Summits take place behind closed doors, meaning that citizens cannot directly verify how much progress is being made in the understanding and reduction of the AI risks that threaten catastrophe.

However, the Existential Risk Observatory, with support from London Futurists, has organised a parallel series of AI Safety Summit Talks that are open to the general public, policymakers, and journalists.

At these events, participants discuss what the largest risks of future AI are and how to reduce them.

The next in this series is taking place on Tuesday 21st May, from 12 noon UK time (which will be 8pm in Korea).

========

To register for this event, visit Luma: lu.ma/1ex04fuw

The event will be livestreamed at https://www.youtube.com/watch?v=bDLfV4MU1Ns.

To ask questions during the Q&A portions of this event, and to upvote questions raised by other audience members, [please visit Slido.com with the event code #XRO-YB](https://app.sli.do/event/ioUSE81vkgPWpq69qAnkGX).

========

The speakers for this edition are:

Keynote: AI X-Risk and its Mitigation with Safe-by-Design AI

Yoshua Bengio is a professor at the Université de Montréal and founder of the Mila institute. He is a recipient of the Turing Award and is widely regarded as one of the fathers of deep learning. He is the world's most cited computer scientist.

Panel:

  • Max Tegmark is a physics professor at MIT, whose current research focuses on the intersection of physics and AI. He is also president and cofounder of the Future of Life Institute (FLI).
  • Jaan Tallinn is a cofounder of Skype, CSER, and FLI, an investor in DeepMind and Anthropic, and a leading voice in AI Safety.
  • Holly Elmore is an AI activist and Executive Director of PauseAI US. She holds a PhD in Organismic & Evolutionary Biology from Harvard University.
  • Stijn Bronzwaer is an AI and technology journalist at the leading Dutch newspaper NRC Handelsblad. He co-authored a best-selling book on booking.com and is the recipient of the investigative journalism award De Loep.
  • Will Henshall is an editorial fellow at TIME Magazine. He covers tech, with a focus on AI. One recent piece of his details big tech lobbying on AI in Washington DC.
  • Arjun Ramani writes for The Economist about economics and technology. His writings on AI include a piece on what humans might do in a world of superintelligence.

The event will be moderated by David Wood, chair of the London Futurists.

There will be opportunities for audience members to raise questions.

========

Note: Unfortunately, the UK Secretary of State Michelle Donelan has cancelled her intended participation, due to a demanding diary in the lead-up to the Summit.

========
