
What we’re about
Join us for a variety of events on technical AI safety, governance in a world of advanced AI, and more.
Hosted by Trajectory Labs, a nonprofit coworking and events space catalyzing Toronto's role in steering AI progress toward a future of human flourishing.
Is there a topic you'd love to see us cover at a future event? Submit your suggestion here.
Upcoming events (3)
Hackathon: AI Forecasting
30 Adelaide East, Industrious Office 12th Floor Common Area, Toronto, ON, CA

Important registration information: To participate in this event, please sign up through Apart Research's event page before registering.

The trajectory of AI development represents one of the most consequential questions for humanity's future. Understanding when and how transformative AI capabilities will emerge is critical for policy, safety research, and societal preparedness. Yet current forecasting methods struggle with unprecedented technological shifts, compounding uncertainties, and the challenge of predicting emergent capabilities.

In this hackathon, you can:
- Build forecasting models and evaluation pipelines to anticipate AI capabilities and timelines
- Create tools for scenario exploration, uncertainty quantification, and model benchmarking
- Develop monitoring systems for key indicators of AI progress across research, industry, and policy
- Write policy briefs and governance proposals grounded in forecasting insights
- Explore new methodologies inspired by projects like AI 2027 and Epoch AI's empirical forecasting work
- Pursue other projects that advance the field of AI forecasting!

You will work in teams over one weekend and submit open-source forecasting models, benchmark suites, scenario analyses, policy briefs, or empirical studies that advance our understanding of AI development timelines and trajectories.

Trajectory Labs, the jamsite, provides a comfortable and spacious coworking space along with coffee, tea, and other refreshments (meals are not provided, but there are many nearby options). Other locations will also be taking part!

6 attendees
AI Policy Tuesday: Open Global Investment as a Governance Model for AGI
30 Adelaide East, Industrious Office 12th Floor Common Area, Toronto, ON, CA

Registration Instructions
This is a paid event ($5 general admission, free for students & job seekers) with limited tickets - you must RSVP on Luma to secure your spot.

Event Description
Kathrin Gardhouse will walk us through Nick Bostrom's recent paper, Open Global Investment as a Governance Model for AGI. We'll discuss its merits compared to alternatives such as a Manhattan Project or a CERN for AI, assuming short AGI timelines.

Event Schedule
6:00 to 6:30 - Food & introductions
6:30 to 7:30 - Presentation & Q&A
7:30 to 9:00 - Open discussion

If you can't attend in person, join our live stream starting at 6:30 pm via this link.

This is part of our weekly AI Policy Tuesdays series. Join us in examining questions like:
- How should AI development be regulated?
- What are the economic and social implications of widespread automation?
- How do we balance innovation with safety considerations?
- What governance structures are needed for safer AI?

5 attendees
AI Safety Thursday: Monitoring LLMs for deceptive behaviour using probes
30 Adelaide East, Industrious Office 12th Floor Common Area, Toronto, ON, CA

How can we detect when an AI intends to deceive us?

Registration Instructions
This is a paid event ($5 general admission, free for students & job seekers) with limited tickets - you must RSVP on Luma to secure your spot.

LLMs show deceptive behaviour when they have an incentive to do so, whether that's alignment faking or lying about their capabilities. Work earlier this year at Apollo Research proposed detecting such behaviour with linear probes trained on a model's internal activations.

In this talk, Shivam Arora will explain how these probes work and share his experience from follow-up research to improve them, conducted as part of a fellowship at LASR Labs.

Event Schedule
6:00 to 6:30 - Food & Introductions
6:30 to 7:30 - Main Presentation & Questions
7:30 to 9:00 - Open Discussion

If you can't attend in person, join our live stream starting at 6:30 pm via this link.

This is part of our weekly AI Safety Thursdays series. Join us in examining questions like:
- How do we ensure AI systems are aligned with human interests?
- How do we measure and mitigate potential risks from advanced AI systems?
- What does safer AI development look like?

5 attendees
Past events (185)
Group links
Organizers