
What we’re about
We're a group focused on raising public awareness of AI safety. This includes:
- Hands-on education and practice in technical AI safety research for people who want to contribute technically
- Keeping up with the latest in AI safety governance and highlighting promising new avenues for getting involved
- General public education on the latest developments in AI and what's coming down the pipeline
Upcoming events (1)
Latest News in AI Safety - Discussion Group
Join our "Latest News in AI Safety" discussion group to catch up on recent research and policy news. We'll look at reports from Palisade Research on o3 and Claude 4—such as their attempts to avoid shutdown and blackmail engineers—and Redwood Research's work on models that pretend to be aligned. We'll also talk through New York's new RAISE Act, the end of the proposed state-level moratorium in the latest federal bill, and other legislation in the pipeline.
Everyone's welcome, whether you follow these topics closely or are just curious. Bring your questions and thoughts! Please RSVP on Luma to help us estimate attendance: https://lu.ma/9qaxhk5i