Assessing AI Risk Skepticism
How should we respond to the idea that advances in AI pose catastrophic risks for the wellbeing of humanity?
Two sets of arguments have circulated online for many years but, in light of recent events, are each mutating into new forms and attracting far more public attention. The first set argues that AI risks are indeed serious. The second set is skeptical: it argues that the risks are exaggerated or easily managed, and that they distract from more important issues and opportunities.
In this London Futurists webinar, we'll assess the skeptical views. To guide us, we'll be joined by the two authors of a recently published article, "AI Risk Skepticism: A Comprehensive Survey", namely Vemir Ambartsoumean and Roman Yampolskiy. We'll also be joined by Mariana Todorova, a member of the Millennium Project's AGI scenarios study team.
Questions likely to be addressed include:
- What are the main arguments against worrying about catastrophic outcomes from developing and deploying new forms of AI?
- Which of these arguments are the strongest?
- How should the overall set of these "AI Risk Skepticism" arguments be assessed?
- What, in practice, leads people to become AI Risk Skeptics and/or AI Risk Deniers?
- How can the discussion of these risks be improved?
