
Details

This month we have an expert academic guest speaker, Simon, who lays out the case that AI is a catastrophic risk to humanity. Simon will analyze this risk in terms of four questions:

First: will technological advances allow AIs to become very powerful?

Second: will governments allow these powerful AIs to be developed?

Third: will we fail to align very powerful AIs, so that they try to destroy humanity?

Fourth: will we fail to disable misaligned powerful AIs?

Simon will introduce some of the factors required to answer each question, and argue that the answer to all four may be 'yes'.

It should be a very topical and interesting talk, and we are very lucky to have Simon present for us.

As usual there will be time before and after to grab some food and drinks and continue the discussion. Hope to see you there!

Related topics

Events in Hong Kong, CN
Humanism
Skeptics
Critical Thinking
Intellectual Discussions
Science
