
About us
MEETUPS ARE HELD ON THE LAST WEDNESDAY OF EVERY MONTH
Turbine's AI meetups are meant for all researchers, engineers, scientists and students working on hard machine learning problems. They were created so we can dissect and understand new developments in ML together and share our experience from real-life projects.
Presenters cover the latest impactful AI models, aiming to dive much deeper into each topic than what standard science communication formats allow. Thus, they'll expect you to have a working knowledge of machine learning.
In some sessions, we go deep into recently published models and architectures, with working code whenever possible and an intro to the math background whenever needed. In others, presenters share lessons learned from building models for real-life applications. Our goal is to give the audience thorough knowledge that you can use in day-to-day model design.
Turbine is a computational biology company focusing on cancer, so expect lots of topics infused with biology. Yet, we are also a curious community, and you are welcome to join even if your personal interest centers on other domains. We also host completely biology-free events about computer vision, NLP and general AI topics to cover the latest scientific advancements.
Select past presentations can be found here:
https://www.turbine.ai/ai-meetup-presentations
Upcoming events

Titans: Long-Context Sequence Models with Test-Time Memorization
Turbine Kft., Szigony utca 26-32, Budapest, HU
This month's meetup is about the Titans neural network architecture published by Google Research.
The goal of the architecture is to:
- scale transformer-based sequence models for tasks requiring very long context
- enable test-time memorization
This allows models to maintain long-term memory effectively while running—without the need for dedicated offline retraining.
We'll discuss the Titans architecture, with a focus on its implementation of the long-term memory module:
- how neural networks can act as memory vs. classical fixed-size recurrent states
- how this type of long-term memory compares to traditional RNNs, and why it enables models to learn new context on the fly (a toy sketch of this idea follows below)
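To make the contrast concrete, here is a minimal numpy sketch of the idea (our own toy illustration, not the Titans reference implementation): a memory whose parameters store key-value associations and are updated at test time by a gradient step on the prediction error of each incoming token. The single linear layer, the delta-rule-style full step, and the dimensionality are simplifications chosen for the demo.

```python
import numpy as np

# Toy illustration (our own sketch, not the Titans reference code): a "neural
# memory" is a parametric map whose *weights* store key -> value associations.
# Writing is a gradient step on the prediction error ("surprise"), taken at
# test time, instead of appending to a growing KV cache or squeezing
# everything into one fixed recurrent state vector.

rng = np.random.default_rng(0)
d = 8                         # key/value dimensionality (arbitrary for the demo)
W = np.zeros((d, d))          # memory parameters: read(key) = W @ key

def read(W, key):
    return W @ key

def write(W, key, value, lr=1.0):
    # One gradient step on 0.5 * ||W @ key - value||^2 with respect to W.
    surprise = read(W, key) - value          # prediction error on this token
    return W - lr * np.outer(surprise, key)  # delta-rule style update

# "Test time": memorize a stream of associations on the fly, no offline retraining.
for _ in range(100):
    k = rng.normal(size=d)
    k /= np.linalg.norm(k)                   # unit-norm key simplifies the math
    v = rng.normal(size=d)
    W = write(W, k, v)

# With lr=1.0 and a unit-norm key, the newest association is recovered exactly.
print(np.allclose(read(W, k), v))            # True
```

The actual Titans memory module is richer than this, but the core idea of writing by taking a gradient step on the surprise is the same.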
We will also take a look at the MIRAS framework, which provides a higher-level abstraction for designing sequence models.
The key idea of MIRAS is that sequence model architectures—from transformers to linear RNNs—can be viewed as complex associative memories under the hood.
We'll see how Titans fits into this framework of neural network design, and why the authors state that Titans and MIRAS can "combine the speed of RNNs with the accuracy of transformers."
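As a warm-up for that discussion, the toy numpy snippet below (our own illustration, not code from the MIRAS paper) shows the simplest instance of this view: unnormalized linear attention over a sequence is exactly a read from a matrix-shaped associative memory built by accumulating key-value outer products, which is the fixed-size state a linear RNN would carry between steps.

```python
import numpy as np

# Toy illustration of the associative-memory viewpoint (our own sketch, not the
# MIRAS paper's code): unnormalized *linear* attention computes the same thing
# as (1) writing every key -> value pair into a single d x d matrix memory and
# (2) reading that memory with the query.

rng = np.random.default_rng(1)
T, d = 16, 4                                  # sequence length and head width (arbitrary)
K = rng.normal(size=(T, d))                   # keys, one per time step
V = rng.normal(size=(T, d))                   # values, one per time step
q = rng.normal(size=d)                        # a single query

# Attention view: weight each value by its key's similarity to the query.
attn_out = (K @ q) @ V                        # sum_t (k_t . q) * v_t

# Memory view: accumulate outer products into one fixed-size state, then read it.
M = np.zeros((d, d))
for k_t, v_t in zip(K, V):
    M += np.outer(v_t, k_t)                   # write: associate k_t with v_t
mem_out = M @ q                               # read with the query

print(np.allclose(attn_out, mem_out))         # True: two views of one computation
```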
***
Follow-up from the last event
HOPE, the reference architecture for Nested Learning, is built on a modified version of the Titans architecture. At our last event, we didn’t have time to discuss many important details about how HOPE is built.
So, the March meetup will also act as a follow-up for anyone who missed some of the underlying architectural details last time.
=== ENTRY DETAILS ===
- A QR code with entry information will be available soon in the "Photos" section of this event page.
- Gate closes at 18:15 - no late entries.
Past events