Details

Risks from Artificial General Intelligence have been a top concern of the EA movement since its beginning. Recently, AI safety has drawn widespread public attention with the launch of GPT-4 and a group of AI leaders releasing a joint public statement:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

At this meetup, we'll discuss "AGI Safety from First Principles" by Richard Ngo, available here:
https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ

Please read the article before attending (~15,000 words, ~1 hr).

This is a good piece if AI safety is new to you, or if you want to refresh your understanding. It addresses some of the most common initial reactions to hearing about AI risk.

Related topics

Humanism
Nonprofit
Philosophy
Self-Help & Self-Improvement
Activism
