
What we’re about
Join us for a variety of events on technical AI safety, governance in a world of advanced AI, and more.
Hosted by Trajectory Labs, a nonprofit coworking and events space catalyzing Toronto's role in steering AI progress toward a future of human flourishing.
Is there a topic you'd love to see us cover at a future event? Submit your suggestion here.
Upcoming events
AI Policy Tuesday: The Case for Regulating AI Companies, Not AI Models
30 Adelaide East, Industrious Office 12th Floor Common Area, Toronto, ON
Registration Instructions
This is a paid event ($5 general admission, free for students & job seekers) with limited tickets - you must RSVP on Luma to secure your spot.
If you can't make it in person, feel free to join the live stream starting at 6:30 pm via this link.
Description
Wim Howson Creutzberg will discuss the case for treating business entities, rather than models or use cases, as the main focal unit for preemptive risk regulation of advanced AI systems. The talk will give an overview of "Entity-Based Regulation in Frontier AI Governance" by Ketan Ramakrishnan and Dean Ball, followed by commentary on the article and the surrounding discussion.
Event Schedule
6:00 to 6:30 - Food and introductions
6:30 to 7:30 - Presentation and Q&A
7:30 to 9:00 - Open Discussions

AI Safety Thursday: Attempts and Successes of LLMs Persuading on Harmful Topics
30 Adelaide East, Industrious Office 12th Floor Common Area, Toronto, ON
Registration Instructions
This is a paid event ($5 general admission, free for students & job seekers) with limited tickets - you must RSVP on Luma to secure your spot.
If you can't make it in person, feel free to join the live stream starting at 6:30 pm via this link.
Description
Large Language Models can persuade people at unprecedented scale, but how effectively, and are they willing to try persuading us toward harmful ideas? In this talk, Matthew Kowal and Jasper Timm will present findings showing that LLMs can shift beliefs toward conspiracy theories as effectively as they debunk them, and that many models are willing to attempt harmful persuasion on dangerous topics.
Event Schedule
6:00 to 6:30 - Food and introductions
6:30 to 7:30 - Presentation and Q&A
7:30 to 9:00 - Open Discussions - "If Anyone Builds It, Everyone Dies" Reading Group30 Adelaide St E, Toronto, ON
This is the first meeting in a recurring series. If you are interested in attending, please register on this page so we can ensure we have copies of the book for everyone.
If Anyone Builds It, Everyone Dies is a new book by Eliezer Yudkowsky and Nate Soares that makes the case that AI poses an existential threat to human survival.
Trajectory Labs is hosting a reading group to discuss the book on Mondays from 7-9 PM, starting October 6th.
No prior experience with AI safety topics is expected; we'll provide the book and welcome participants from all backgrounds.

AI Policy Tuesday: The Concept of Political Space and AI Safety
30 Adelaide East, Industrious Office 12th Floor Common Area, Toronto, ON
Registration Instructions
This is a paid event ($5 general admission, free for students & job seekers) with limited tickets - you must RSVP on Luma to secure your spot.
Description
International cooperation often becomes possible only after shocks, crises, or dramatic shifts in perception. ChatGPT's release in 2022 created space for x-risk discussions at the UK AI Summit, while just two years later, the Paris Summit cast these same concerns as "science fiction".
In this talk, Jason Yung will examine the concept of political space: how it opens and closes, what factors enable or constrain it, and how it can inform the way AI safety advocacy is advanced.
Event Schedule
6:00 to 6:30 - Food and introductions
6:30 to 7:30 - Presentation and Q&A
7:30 to 9:00 - Open Discussions
If you can't make it in person, feel free to join the live stream starting at 6:30 pm via this link.