What are the real risks of super A.I.?
What are the potential consequences when artificial intelligence increasingly exceeds human comprehension? Which possible side-effects of "superAI" deserve the most attention? Just as malfunctioning satnav systems occasionally lead travellers off the beaten track, where might malfunctioning superAI lead humanity?
For example, should we worry most about the impact of superAI on military conflict? Or on battles between various systems of "fake news" that seek to manipulate people into actions contrary to their best interests?
Alternatively, are such worries misplaced? Should we instead encourage faster progress towards a near-future world in which pervasive AI provides us all with an unprecedented abundance of goods and services?
In this talk, hosted by Funzing at the George IV in Chiswick, David Wood, chair of London Futurists, will highlight:
*) The meaning of the terms "singularity" and "intelligence explosion"
*) Five factors that are accelerating progress in AI
*) Scenarios in which superAI might arise within as little as ten years' time
*) Common fallacies and misunderstandings in discussions about superAI
*) Five ways in which superAI could knock humanity off its trajectory
*) Positive steps that are now being taken to ensure a beneficial superAI
== Timing ==
Doors open, 7pm
Talk starts, 7.30pm
Short break for drinks, c. 8.30pm
Talk continues, with Q&A, till c. 9.30pm
== Credits ==
The image above incorporates elements from work by Gerd Altmann on Pixabay, https://pixabay.com/illustrations/evolution-artificial-intelligence-3778196/
== To attend this talk ==
IMPORTANT: To attend this talk, you must purchase a ticket (£12) via the Funzing page at https://uk.funzing.com/funz/what-are-the-real-risks-of-super-a-i-24176
That page also includes audience reviews from earlier presentations of this talk.
