How might we control AI, before AI controls us?
Details
The late physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have all expressed serious concerns about the possibility that AI might develop to the point that humans could no longer control it, with Hawking theorizing that this could "spell the end of the human race".
Other AI researchers have highlighted the existential risk that AI poses. For example, professors Allan Dafoe and Stuart Russell, both eminent AI scientists, note that, contrary to misrepresentations in the media, this risk does not have to arise from spontaneous malevolent intelligence. Rather, "the risk arises from the unpredictability and irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives. This problem was stated clearly by Norbert Wiener in 1960, and we still have not solved it."
This webinar, hosted by Rohit Talwar of Fast Future and David Wood of London Futurists, with support from the UK node of the Millennium Project, continues these organisations' joint project to explore and understand the rise and implications of AGI (Artificial General Intelligence). The speaker will be Tony Czarnecki, Managing Partner of Sustensis, who has some distinctive and thought-provoking views about both the dangers and the rich positive potential of AGI.
Tony believes that the issues arising from the rise of AGI are more urgent than is generally assumed, and that we need to be truly open-minded to perceive the best solutions. He says, "There is no perfect method of controlling AI. However, there may be one approach which gives us sufficient control… but at a price."
To find out more about Tony's analysis, and for the opportunity to offer your own ideas in response as part of the group discussion, please register to attend this webinar.
Artificial Intelligence
Collaboration
Culture
Education
Futurology
