AI Existential Risk Action Group - Inaugural Meeting


Details
We're launching a group dedicated to what may be humanity's most critical challenge: preventing human extinction from advanced artificial intelligence.
If this sounds far-fetched, consider the warnings from the very people developing this technology:
"...and the bad case, and this is important to say, is lights out for all of us..." - Sam Altman, CEO of OpenAI
"The chance that something goes quite catastrophically wrong, on the scale of human civilisation, might be somewhere between 10-25%." - Dario Amodei, CEO of Anthropic
These aren't sci-fi authors or doomsayers - these are the leaders of major AI labs acknowledging the existential risks of their work!
Meeting Agenda:
- Introductions & Perspectives
  - Share your journey to AI concern
  - Current assessment of AI risks
  - Background and relevant expertise
- Defining Our Mission
  - Core focus: Preventing AI-driven human extinction
  - Secondary concerns: Job displacement, social disruption
  - Success criteria and measurable goals
- Strategic Planning
  - Alignment with existing movements (Pause AI, Stop AI)
  - Local vs. national focus
  - Effective advocacy strategies
  - Resource requirements and fundraising approaches
- Action Planning
  - First public event
  - Community outreach strategies
  - Membership growth
  - Public awareness campaigns
  - Individual commitment levels
- Next Steps
  - Event calendar
  - Task assignments
  - Communication channels
  - Follow-up meeting schedule
This is an opportunity to join others who recognise the urgency of this issue and want to take meaningful action. Whether you're deeply versed in AI safety or just beginning to grasp the implications, your perspective and contribution matter.
Please come prepared to engage in serious discussion and planning. This isn't just another tech meetup—it's about organising to address the most significant threat humanity has ever faced.
Recommended Pre-Reading and Watching:
- "Lights out for all of us" - Sam Altman https://www.youtube.com/watch?v=dXhoTrU1Kkw
- "10-25%" - Dario Amodei https://www.youtube.com/watch?v=GLv62w2G6os
- Probability of doom, p(doom), estimates from prominent AI researchers https://pauseai.info/pdoom
- TED Talk (Eliezer Yudkowsky) - Will Superintelligent AI End the World? https://www.ted.com/talks/eliezer_yudkowsky_will_superintelligent_ai_end_the_world?subtitle=en
- Time Magazine - "Pausing AI Developments Isn't Enough. We Need to Shut It All Down" (Eliezer Yudkowsky) https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
- What Really Made Nobel Prize Winner Geoffrey Hinton Into an AI Doomer https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dangers/
- The '3.5% rule': How a small minority can change the world https://www.bbc.com/future/article/20190513-it-only-takes-35-of-people-to-change-the-world
- Pause AI https://pauseai.info/
- Stop AI https://www.stopai.info/
Join us in taking action while we still can!
See you there,
Olaf
