
Deep Learning & AI Talks

Hosted By
Armina S. and Lea

Details

This year we're excited to host another in-person edition of Deep Learning & AI Talks, a series of events in which we discuss state-of-the-art developments in deep learning, machine learning, and computer vision. The event will take place at the Startup Village in Science Park, and there will be four exciting presentations on a broad range of topics (details below).
Each presentation will last 20 minutes, so that we have enough time for Q&A sessions.
Room limit: 70. First come, first served.

Schedule:
18:30 - Doors open + drinks and food on us
19:05 - 19:10 - Intro to the evening
19:10 - 19:35 - Dieuwke Hupkes: State-of-the-art generalisation research in NLP
19:35 - 20:00 - Haitam Ben Yahia: Efficient Video Perception
20:00 - 20:10 - Break
20:10 - 20:35 - Pengwan Yang: Less than Few: Self-Shot Video Instance Segmentation
20:35 - 21:00 - Ties van Rozendaal: Neural data compression
21:00 - 22:00 - Wrap-up + drinks

State-of-the-art generalisation research in NLP
Good generalisation is of utmost importance for any artificial intelligence model. Traditionally, the generalisation capabilities of machine learning models are evaluated using random train/test splits. However, numerous recent studies have exposed substantial generalisation failures in models that perform well on such random within-distribution splits. For instance, a model classifying toxic language might work well for posts written by white male users, but fail drastically on comments from black female users. So, if random splitting is not a good way to measure how robustly models generalise to different scenarios, how should we evaluate that? In this talk, I present a taxonomy for characterising and understanding generalisation in NLP, and use it to analyse over 400 papers from the ACL Anthology.
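As a toy illustration of the kind of failure the abstract describes (not an example from the talk itself), the gap between a random within-distribution split and a distribution-shifted split can be sketched with a hypothetical threshold "model" and two synthetic subpopulations, one of which is harder than the data the model was tuned on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature x, true label y = 1 if x was drawn from the positive mode.
# Group A is well separated (modes at -2 and +2); group B is a different,
# harder subpopulation (modes at -0.3 and +0.3).
def make_group(centre, n):
    x = np.concatenate([rng.normal(-centre, 1.0, n), rng.normal(centre, 1.0, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

xa, ya = make_group(2.0, 500)  # group A: resembles the "training" population
xb, yb = make_group(0.3, 500)  # group B: shifted subpopulation

# Hypothetical "model": a fixed threshold at 0, implicitly tuned for group A.
def accuracy(x, y):
    return float(np.mean((x > 0).astype(float) == y))

iid_acc = accuracy(xa, ya)    # random within-distribution evaluation
shift_acc = accuracy(xb, yb)  # evaluation on the shifted subpopulation
print(iid_acc, shift_acc)
```

The same model looks strong under the within-distribution split and much weaker on the shifted one, which is exactly why random splits alone can be misleading.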

Efficient video perception
Despite the great progress in the development of efficient deep neural networks, highly accurate models are still too expensive to process video frames in real-time, especially on low-power devices such as smartphones.
Video tensors, while being huge, are highly redundant. This talk explores several ideas to speed up deep neural networks by leveraging the inherent redundancies in the video. Instead of processing the redundant information over and over, we identify and process a minimal set of pixels, regions, and frames that bring in novel information about the video. We also explore how the temporal redundancies can be leveraged to further compress the model either by dynamically selecting the backbone or by knowledge distillation.
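A minimal sketch of the frame-skipping idea from the paragraph above, using a hypothetical `heavy_backbone` as a stand-in for an expensive network and a synthetic video in which only every fourth frame carries new information (the talk's actual methods operate at pixel, region, and frame level and are more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": 16 frames of 8x8 pixels where only every 4th frame changes.
frames = np.repeat(rng.random((4, 8, 8)), 4, axis=0)

def heavy_backbone(frame):
    # Stand-in for an expensive network; here just a mean feature.
    return float(frame.mean())

def run_with_frame_skipping(frames, threshold=1e-3):
    # Re-run the backbone only when the frame differs noticeably from the
    # last processed one; otherwise reuse the cached feature.
    features, processed = [], 0
    last_frame, last_feat = None, None
    for f in frames:
        if last_frame is None or np.abs(f - last_frame).mean() > threshold:
            last_feat = heavy_backbone(f)
            last_frame = f
            processed += 1
        features.append(last_feat)
    return features, processed

feats, n_processed = run_with_frame_skipping(frames)
print(n_processed, "of", len(frames), "frames processed")
```

On this synthetic clip the backbone runs on only 4 of 16 frames while still producing a feature per frame, which is the basic saving redundancy-aware methods exploit.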

Less than Few: Self-Shot Video Instance Segmentation
We explore a new setting of few-shot learning, which we call self-shot learning. Rather than performing few-shot learning with a human oracle to provide a few densely labelled support videos, we propose to automatically learn to find appropriate support videos given a query. We outline a simple self-supervised learning method to generate an embedding space well-suited for unsupervised retrieval of relevant samples.
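A hedged sketch of what retrieving support videos in a learned embedding space could look like, using random vectors as stand-ins for a self-supervised encoder's outputs (the actual retrieval method presented in the talk may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings for an unlabelled video pool, as if produced by a
# self-supervised encoder (the encoder itself is not modelled here).
pool = rng.normal(size=(100, 64))
# A query clip that happens to be similar to pool item 7.
query = pool[7] + 0.05 * rng.normal(size=64)

def top_k_supports(query, pool, k=5):
    # Cosine similarity in the embedding space; the k most similar pool
    # items act as automatically selected support videos.
    q = query / np.linalg.norm(query)
    p = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    sims = p @ q
    return np.argsort(sims)[::-1][:k]

supports = top_k_supports(query, pool)
```

The point of the self-supervised training is to shape this embedding space so that nearest neighbours of a query really are relevant supports, replacing the human oracle of standard few-shot learning.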

Neural data compression
Neural data compression has been shown to outperform classical methods in terms of rate-distortion performance, with results still improving rapidly. These models are fitted to a training dataset and cannot be expected to optimally compress test data in general, due to limitations on model capacity, distribution shifts, and imperfect optimisation. If the test-time data distribution is known and has relatively low entropy, the model can easily be finetuned or adapted to this distribution. Instance-adaptive methods take this approach to the extreme, adapting the model to a single test instance and signalling the model update in the bitstream. In this talk, we will show the potential of different types of instance-adaptive methods and discuss the trade-offs that these methods pose.
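A toy illustration of the instance-adaptive trade-off, not the speaker's actual method: a one-parameter Gaussian entropy model is adapted to a single shifted instance, and compression wins whenever the codelength saving exceeds the bits spent signalling the update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy entropy model: code x under N(mu, 1); the ideal codelength in bits is
# the negative log2-likelihood.
def codelength_bits(x, mu):
    nll = 0.5 * (x - mu) ** 2 / np.log(2) + 0.5 * np.log2(2 * np.pi)
    return float(nll.sum())

x = rng.normal(3.0, 1.0, size=1000)  # test instance, shifted vs. training data
mu_global = 0.0                      # parameter fitted on the training set
mu_adapted = float(x.mean())         # instance-adaptive parameter update

model_update_bits = 32.0  # assume the new mu is signalled as one float32

baseline = codelength_bits(x, mu_global)
adapted = codelength_bits(x, mu_adapted) + model_update_bits
print(baseline, adapted)
```

Here the adapted model pays 32 extra bits for the update but saves far more on the data itself; the interesting regime, and the trade-off the talk discusses, is when the update is large (e.g. many network weights) relative to the instance being compressed.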

AI Netherlands
Startup Village
Science Park 608 · Amsterdam, NH