About us
Have an idea? Want to Speak? 👉 Submit your talk here 👈
Something happened at our events? 👉 Share anonymous feedback or concerns here 👈
The rapid, global investment in AI has triggered a 'geopolitical AI race', with nations competing for technological supremacy. This accelerated development is creating novel risks at an alarming rate.
Valuable resources like the AI Incident Database and the MIT AI Risk Repository now catalogue over 3,000 real-world examples of AI systems failing or causing harm. These incidents range from biased hiring algorithms and autonomous vehicle accidents to privacy breaches and the spread of misinformation, and they highlight the urgent need for careful, responsible, and ethical development as we deploy increasingly powerful AI systems.
### Our Group's Mission
This group provides a vital local platform for navigating this complex landscape. Our core mission is to elevate the voices of AI researchers and interested speakers right here in Christchurch.
By focusing on local expertise, we aim to:
- Educate our community on the latest developments, opportunities, and risks associated with AI.
- Connect researchers, developers, students, and curious individuals with like-minded people.
- Foster a thoughtful and informed conversation about how Christchurch can responsibly engage with the AI revolution.
We believe that a well-informed local community is essential for harnessing the benefits of AI while mitigating its potential harms. Join us to be part of the conversation.
### 📚 Resources
### 🐣 For Newcomers and beginners
If you're just getting started, check out the free two-hour course on the Future of AI from BlueDot Impact, which offers guided reading and curated resources.
Check out [AISafety.com](https://www.aisafety.com/), an open-source community hub with job advertisements, maps and diagrams, open projects, newsletters, self-study guides, and tools to help you engage with and navigate this rapidly emerging field.
### ⚖️ Legislation and Policy
Petition the NZ Government to regulate AI
### 📊 AI Capabilities Reports
- AI Safety Landscape Report (Jan 2025): Overseen by Yoshua Bengio following the first AI Safety Summit. Though lengthy (roughly 250 pages), it features high-quality citations and concise chapter summaries that make it easy to skim.
- International AI Safety Report Summaries (Oct & Nov 2025): These monthly updates provide a focused review of new capabilities and developments in 20 pages or fewer.
### 🛠️ Key Monitoring Tools
- MIT AI Risk Repository: An excellent resource for identifying current gaps in AI safety development.
- OECD AIM (AI Incidents and Hazards Monitor): Lets you filter incidents by country (e.g., New Zealand and Australia) to see which specific risks are gaining public and regulatory attention in the news.
### 🎓 Technical Reading
- "The AI Safety Dance" by Nicky Case: A digital book that spans domains and sectors across the AI safety space, and a refreshing change of pace from dry reports. Case, a respected researcher from MIT, uses interactive web pages, animations, and flashcards to make highly technical concepts accessible to everyone from students to public-sector professionals.
Upcoming events: 2
Past events: 20

![AI Mythbusting: A Practical Guide for Beginners [Virtual Webinar]](https://secure.meetupstatic.com/photos/event/a/d/f/highres_530462783.jpeg)
![AI Self Defence: Risks and Solutions [Virtual Webinar]](https://secure.meetupstatic.com/photos/event/b/0/c/8/highres_531165256.jpeg)
