
What we’re about
We’re a Montréal-based community working to steer AI toward lower risk and more beneficial futures. We convene people across disciplines through talks, workshops, discussions, and coworking sessions. We prioritize risks that are large in scale, neglected, and tractable.
Topics overview (examples)
- Safer AI design & testing: making systems reliable and understandable through capability evaluations and red-teaming, interpretability, human oversight and shutdown interlocks, and transparency and documentation.
- Misuse, security & resilience: preventing harmful uses (cyber, bio, …), protecting critical infrastructure, ensuring content authenticity, and maintaining preparedness.
- Ethics, rights & social impacts: privacy and dignity, fairness, and tangible societal effects.
- Rules, standards & accountability: audits, policy paths, international standards, compute reporting, licensing, liability, treaties.
- Responsible deployment in organizations: safety cases, monitoring and incident response, post-deployment evaluation, assurance models.
Norms
- Open by default, but sensitive sessions may use the Chatham House Rule.
- Constructive and respectful discussions.
- Bilingual participation welcome (EN/FR).
Get involved
This group is collaborative: propose a talk or activity by emailing team@horizonomega.org.
Main calendar: luma.com/montreal-ai-safety-ethics-governance (mirrored here on Meetup).
Notes
- This group is run in partnership with the AI Governance & Safety network of Canada.
- This group previously hosted events by the Montreal AI Ethics Institute.