Details

Many leading scientists have expressed concerns that AI could pose an existential risk to humanity. In response, a series of global AI Safety Summits was initiated in the UK in November 2023. The series continues in Seoul, South Korea, on 21-22 May.

These AI Safety Summits take place behind closed doors, so citizens cannot directly verify how much progress is being made in understanding and reducing the AI risks that threaten catastrophe.

In response, the Existential Risk Observatory, with support from London Futurists, has organised a parallel series of AI Safety Summit Talks that are open to the general public, policymakers, and journalists.

At these events, participants discuss the largest risks posed by future AI and how to reduce them.

The next event in this series takes place on Tuesday 21st May, from 12 noon UK time (8pm in Korea).
