Assessing the risks of AI catastrophe
Details
How should we respond to claims that forthcoming new versions of AI pose unacceptable risks of human catastrophe?
In this webinar, David Wood, chair of London Futurists, gives an updated version of a presentation he delivered at the recent BGI unconference in Panama City. Reactions to that talk were mixed: some participants described it as "the best of the entire event", while others said it was "a mistake that so much time was given to this subject".
The talk has been significantly revised in light of feedback received at the BGI unconference and subsequently.
The talk aims to improve understanding of which risks are the most credible and serious, as opposed to fanciful or unfounded. It also reviews a variety of options for responding to these risks, including varieties of so-called "accelerationism" and "singularity activism".
The webinar will include time for audience questions and feedback.
AI and Society
Risk Management
Futurology
Transhumanism
Singularity
