About us
Have an idea? Want to Speak? 👉 Submit your talk here 👈
Something happened at our events? 👉 Share anonymous feedback or concerns here 👈
The rapid, global investment in AI has triggered a 'geopolitical AI race', with nations competing for technological supremacy. This accelerated development is creating novel risks at an alarming rate.
Valuable resources like the AI Incident Database and the MIT AI Risk Repository are now cataloguing over 3000 real-world examples where AI systems have failed or caused harm. These incidents range from biased hiring algorithms and autonomous vehicle accidents to privacy breaches and the spread of misinformation. These databases highlight the urgent need for careful, responsible, and ethical development as we deploy increasingly powerful AI systems.
Our Group's Mission
This group provides a vital local platform for navigating this complex landscape. Our core mission is to elevate the voices of AI researchers and interested speakers right here in Christchurch.
By focusing on local expertise, we aim to:
- Educate our community on the latest developments, opportunities, and risks associated with AI.
- Connect researchers, developers, students, and curious individuals with like-minded people.
- Foster a thoughtful and informed conversation about how Christchurch can responsibly engage with the AI revolution.
We believe that a well-informed local community is essential for harnessing the benefits of AI while mitigating its potential harms. Join us to be part of the conversation.
### 📚 Resources
### 🐣 For Newcomers and Beginners
If you're just getting started, check out the free two-hour course on the Future of AI from BlueDot Impact, with guided reading and curated resources.
Check out [AISafety.com](https://www.aisafety.com/), an open-source community hub with job advertisements, maps and diagrams, open projects, newsletters, self-study guides, and tools to help you engage with and navigate this rapidly emerging field.
### ⚖️ Legislation and Policy
Petition the NZ Government to regulate AI
### 📊 AI Capabilities Reports
- AI Safety Landscape Report (Jan 2025): Overseen by Yoshua Bengio following the first AI Safety Summit. While it is lengthy (approx. 250 pages), it features high-quality citations and concise chapter summaries that make it very skimmable.
- International AI Safety Report Summaries (Oct & Nov 2025): These monthly updates provide a focused review of new capabilities and developments in 20 pages or less.
### 🛠️ Key Monitoring Tools
- MIT AI Risk Repository: An excellent resource for identifying current gaps in AI safety development.
- OECD AIM (AI Incidents and Hazards Monitor): This allows you to filter by country (e.g., New Zealand and Australia) to see which specific risks are gaining public and regulatory attention in the news.
### 🎓 Technical Reading
- "The AI Safety Dance" by Nicky Case: A digital book spanning domains and sectors across the AI safety space, and a refreshing change of pace from "dry" reports. Case, a respected researcher from MIT, uses interactive web pages, animations, and flashcards to make highly technical concepts accessible to everyone from students to public sector professionals.
Upcoming events

AI's Impacts on Finding Work and the Future of Work
EPIC Innovation, 76/106 Manchester Street, Christchurch, NZ
Join us for talks on AI's impacts on the world of work. Afterwards we'll have a Q&A, then head to a nearby pub for drinks and further discussion.
Talk: Struggling to find a job? AI Harms in Applicant Tracking Systems with Emma Humphrey
If you can’t get past the algorithm, you can’t get the job. We explore the "black box" of modern recruitment: the Applicant Tracking System (ATS).
Emma will share her research into CV bias programmed into the very tools designed to find talent. This talk investigates the invisible harms of automated hiring and how algorithmic filtering results in 72% of CVs never being seen by human eyes.
Talk: What is 'Work'? Possible Futures in an AI-Enabled World
Daniel Guppy shares some of the core concepts from his thesis, Wellbeing, Work and AI: The Impacts of Artificial Intelligence on Work. He challenges the standard definition of labor by proposing a "pluralistic account" of work. He argues that we must recognize at least eight distinct characterizations—ranging from "income work" and "economic work" to "goal work" and "volunteer work"—to understand how automation impacts our wellbeing.
Rather than offering a single prediction, Daniel outlines eight possible future worlds determined by the pace of AI progress and economic inequality. He will discuss which policy tools fit which future—exploring why Universal Adjustment Assistance might work for today’s world, while a Universal Basic Income (UBI) becomes essential in a future where machines handle the bulk of economic labor.
Recommended Background Readings
The Doom Thesis: Why 'If Anyone Builds It, Everyone Dies'
EPIC Innovation, 76/106 Manchester Street, Christchurch, NZ
Join us for our first "Book Club"-style event.
If you've been following AI safety, you'll know there is a new book on the scene: "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares. See a high-level summary of its chapters here. This book moves beyond traditional "AI safety" to examine the urgent case for a total global cessation of general AI development.
Hazel Shanks will explain and deconstruct the core arguments of the "Doom Thesis" that have emerged over the last few years, exploring why current training methods—described by the authors as "growing" rather than "building" artificial minds—may lead to a literal, biological extinction event.
Afterwards at dinner/pub, we will discuss the validity of the authors' claims and debate their radical call to action: an international ban on large-scale GPU clusters and the enforcement of "no-build" zones.
***
### Pre-Reading and Resources
- AI 2027, graph. A prediction of capabilities.
- AI Catastrophe, 2024 Blog Post, Compendium. 10 minutes. A walk through the arguments.
- The Problem, Machine Intelligence Research Institute, 10 minutes. A summary of the problem.
- Pausing AI Developments Isn't Enough. We Need to Shut it All Down. 2023 op-ed by Eliezer Yudkowsky in TIME.
- Statement on AI Risk, 2023 Open Letter. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
- (Chapter Summary) If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (2025) – Yudkowsky, E. & Soares, N.
Past events

![AI Mythbusting: A Practical Guide for Beginners [Virtual Webinar]](https://secure.meetupstatic.com/photos/event/a/d/f/highres_530462783.jpeg)
![AI Self Defence: Risks and Solutions [Virtual Webinar]](https://secure.meetupstatic.com/photos/event/b/0/c/8/highres_531165256.jpeg)

