
What we’re about
Form to join: https://forms.gle/tb3utjeDFoUfbQkg6
This group is for people interested in reading and learning about AI safety, and in connecting with others who share that interest. We welcome people from all backgrounds - industry, academia, government, or anyone curious about the field (even if you don't work in it directly).
The papers will range from technical AI safety work, to AI governance (e.g. policy approaches or governance within companies), to big-picture topics (e.g. broader societal implications of AI). A full list of planned papers can be found here: https://docs.google.com/document/d/1L4Z18I-os5PwN78aBPHUMIK3snIS-oZFqxBz-oSAWmQ/edit?usp=sharing; and a paper request form can be found here: https://forms.gle/Pk3qxbKnFjdq5JZw6.
How it works:
- To participate, fill out this interest form: https://forms.gle/tb3utjeDFoUfbQkg6
- Papers are announced at least 1-2 weeks before the scheduled meetup; meetings take place every other week (online).
- Everyone reads the paper independently before the meeting.
- During the meeting (hosted on Zoom), we first give a short overview of the paper and its key results, then discuss what we each found interesting and the potential implications.
We will do our best to respond within 1-2 weeks of your Google Form submission. The main limitation on accepting new members is group size: how many people we can feasibly include while still facilitating a meaningful discussion. The group is always evolving, so people are often accepted off the waitlist as our capacity to facilitate grows.
Later, we may also add semi-regular in-person meetings in Munich (pending interest and the availability of a venue to host).