
What we’re about
We're a group interested in rationality. We talk about science, reasoning, economics, artificial intelligence, and anything else we can apply logic and evidence to. We like to be curious, read widely, and think about the big questions.
Many people have found this community through the blogs Astral Codex Ten / Slate Star Codex and LessWrong. If you like those blogs, great! If not but you're still interested, that's great too!
If you are an aspiring rationalist, a nerd, a geek, a scientist, or just a thinker, we can't wait to meet you and hear what you're thinking about. We love to share ideas, discuss, debate, learn, and grow together. Don't worry about being the "right" person for this group. If you like to think about ideas, you're the right person.
Upcoming events
Book Club: If Anyone Builds It, Everyone Dies
Wisdom Park, San Diego, CA, US (https://maps.app.goo.gl/AvKszcgfFuWhgmS49)
Join us as we discuss If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares.
Optional extra reading
Quintin Pope's objections: https://www.alignmentforum.org/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky
Alex Turner's objections: https://www.lesswrong.com/posts/yQSmcfN4kA7rATHGK/many-arguments-for-ai-x-risk-are-wrong
Alex Turner’s dissertation on avoiding power-seeking AI: https://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/0r967b839?locale=en
“AGI is impossible” book from a philosopher: https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032309938
(My critical review of that book: https://thegreymatter.substack.com/p/book-review-why-machines-will-never)
Ben Goertzel's objections: https://bengoertzel.substack.com/p/why-everyone-dies-gets-agi-all-wrong
ACX review (generally positive about the book): https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone
Comments from Scott Aaronson (generally positive about the book): https://scottaaronson.blog/?p=8901
Normie criticism: https://www.newscientist.com/article/2495333-no-ai-isnt-going-to-kill-us-all-despite-what-this-new-book-says/
Book Blurb:
The scramble to create superhuman AI has put us on the path to extinction—but it’s not too late to change course, as two of the field’s earliest researchers explain in this clarion call for humanity.
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close.
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.
9 attendees
Past events: 105