PyTorch Afters: The future of AI infra for RL + large-scale inference
Details
Join us in San Francisco as we kick off the PyTorch Conference, whether you’re attending the conference or simply based in the SF Bay Area. Together, we’ll explore the cutting edge of AI infrastructure and its role in shaping the future of RL for post-training and large-scale inference of media and world models.
At this session, the DataCrunch team and frontier AI labs (TBA) will share lessons learned from building and scaling systems that push the state of the art. You’ll get a first look at B300 and GB300 NVL-72 systems, and hear what the future holds for AI infra.
Learn from practitioners, connect with like-minded engineers, and unwind over food, drinks, and sharp discussions.
Speakers
- Training world models using B200s | Paul Chang - ML engineer at DataCrunch
- Training quantized LLMs efficiently on consumer GPUs | Erik Schultheis - Postdoctoral researcher at IST Austria
Agenda
- 5:30pm – Arrival
- 6:00pm – Talks + Q&A
- 7:00pm – Networking, food, & drinks
- 9:00pm – Wrap-up
Who Should Join?
- AI researchers
- ML engineers
- Technical founders
- AI product managers
This event is for those staying ahead of the curve on AI infra, optimization techniques, and production-grade systems at scale.
Registration
For real-time updates, sign up on Luma: https://luma.com/hioq18dz?utm_source=dc-meetup
Don’t wait until the doors open — join our Discord and Dev Community today to talk about the event, swap ideas, and meet others who’ll be there.
***
About DataCrunch
DataCrunch is a provider of cloud infrastructure for AI builders, trusted by frontier AI labs (1X, PrimeIntellect) and enterprises alike. DataCrunch offers production-grade GPU clusters and inference services, and was among the first to deploy the B200, B300, and GB300 platforms.
Other hosts and speakers (TBA)
