
About us

MEETUPS ARE TO BE HELD ON THE LAST WEDNESDAY OF EVERY MONTH

Turbine's AI meetups are meant for all researchers, engineers, scientists and students working on hard machine learning problems. The series was created to dissect and understand new developments in ML together, and to share experience from real-life projects.

Presenters cover the latest impactful AI models and aim to dive much deeper into each topic than standard science communication formats allow. Thus, they'll expect you to have a working knowledge of machine learning.

In some sessions, we go deep into recently published models and architectures - with working code whenever possible and an introduction to the mathematical background whenever needed. In others, presenters share lessons learned from building models for real-life applications. Our goal is to give the audience thorough knowledge that you can use in day-to-day model design.

Turbine is a computational biology company focusing on cancer, so expect plenty of topics infused with biology. Still, we are a curious community and invite you to join even if your personal interests center on other domains. We also host completely biology-free events on computer vision, NLP and general AI topics to cover the latest scientific advancements.

Select past presentations can be found here:
https://www.turbine.ai/ai-meetup-presentations

Upcoming events

  • Nested Learning - a new paradigm in machine learning

    Turbine Kft., Szigony utca 26-32, Budapest, HU

    Google Research published a paper at NeurIPS '25: "Nested Learning: The Illusion of Deep Learning Architectures"

    It proposes a new way of looking at machine learning problems: treating architecture and optimization algorithms together.

    In NL, training a model is not treated as one opaque, atomic process, but as a system of interconnected, multi-level learning problems that are optimized in parallel.

    The goal is to solve one of the core challenges of modern LLMs: continual learning (the ability of a model to acquire new skills over time without forgetting old ones).

    The authors argue that the model's architecture and the optimization algorithm are fundamentally the same concept: they are just different "levels" of optimization, each with its own internal flow of information and its own update rate.
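
    As a toy illustration of that claim (a sketch of our own, not code from the paper), the snippet below gives two parameter groups the role of two "levels", each with its own learning rate and update frequency; the names w_fast/w_slow and all constants are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        w_fast = np.zeros(4)   # inner level: updated every step
        w_slow = np.zeros(4)   # outer level: updated on a slower timescale
        LR_FAST, LR_SLOW, SLOW_EVERY = 0.1, 0.01, 10
        w_true = np.array([1.0, -2.0, 0.5, 3.0])  # toy regression target

        for step in range(1000):
            x = rng.normal(size=4)
            err = x @ (w_fast + w_slow) - x @ w_true  # prediction error
            grad = err * x                  # same gradient for both levels
            w_fast -= LR_FAST * grad        # fast level: every step
            if step % SLOW_EVERY == 0:
                w_slow -= LR_SLOW * grad    # slow level: every 10th step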

    We'll walk through:
    - how the paper unifies model architecture and optimizers
    - architectural decomposition of a classical learning problem solved in the framework of Nested Learning
    - how optimizer algorithms fit into the NL framework (backprop as associative memory; see the sketch after this list)
    - the concept of continuum, multi-timescale memory
    - Hope - an architectural backbone of NL
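
    For a taste of the "backprop as associative memory" bullet above, here is a minimal sketch of our own (the names key/value and the squared-error loss are assumptions for illustration; the paper develops this view far more generally): one SGD step on a linear layer is exactly an outer-product write of the prediction error into a key-value memory.

        import numpy as np

        rng = np.random.default_rng(1)
        W = np.zeros((3, 4))        # weight matrix, read as key-value memory
        key = rng.normal(size=4)    # the input acts as a retrieval key
        value = rng.normal(size=3)  # the target acts as the value to store
        lr = 0.5

        # One SGD step on the squared error 0.5 * ||W @ key - value||^2 ...
        W_sgd = W - lr * np.outer(W @ key - value, key)

        # ... equals writing the "surprise" (value minus current prediction)
        # into memory under this key, via an outer product.
        W_mem = W + lr * np.outer(value - W @ key, key)

        assert np.allclose(W_sgd, W_mem)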

    === ENTRY DETAILS ===

    - QR code with entry information will be available soon, in the "Photos" section of this event page.
    - Gate closes at 18:15 - no late entries.

    33 attendees
