
Kubernetes & Cloud Native Berlin Meetup February Edition

Hosted By
Benazir K.

Details

Kubernetes & Cloud Native Berlin Meetup is happy to host Ayesha Kaleem, Software Engineer - OpenShift Core Engineering at Red Hat, and Jérôme Petazzoni, Container OG at Enix SAS, on February 8, 2023!

---------------------------------------------

SCHEDULE:

[17:00 onwards] Doors open - get a juice/soft drink, relax

[18:00 - 18:10] Chris Kühl, "Opening Remarks: Why we organise meetups, a brief history of the meetup groups, and a reflection on meetups pre-pandemic and now."

[18:15 - 18:45] Ayesha Kaleem, "Introduction to eBPF"
[18:45 - 18:55] Q&A round

[18:55 - 19:30] Break - time for networking with pizza and other refreshments

[19:30 - 20:00] Jérôme Petazzoni, "Running machine learning apps with GPU acceleration in containers"
[20:00 - 20:10] Q&A round

[20:15 - 20:25] Benazir Khan, "Closing Remarks: a little about upcoming meetups, the topics we are exploring, and how we are constantly looking to best represent the broad spectrum of our local tech community."

[20:25 - 22:00] More time to network with peers and colleagues from the industry at a relaxed pace

---------------------------------------------

TALK DETAILS:

"Introduction to eBPF," Ayesha Kaleem

Abstract: * A short overview of what eBPF is
* Loading eBPF programs from user space into kernel space
* Security considerations for eBPF programs running in kernel space
* The tooling involved in wrapping eBPF with CO-RE and libbpf
* Some of the domains eBPF has revolutionized from the heart of the kernel, such as networking, observability, and security

Description: eBPF (extended Berkeley Packet Filter) is one of the hottest technologies in the Linux ecosystem today: it makes extending the behaviour of the Linux kernel far easier, without actually changing the kernel itself. With eBPF it is now possible to add kernel-level functionality for security, networking, and observability.

When an eBPF program is loaded into the kernel, a verifier ensures that it is safe to run, and rejects it if not. Once loaded, the eBPF program needs to be attached to an event, so that whenever the event happens, the program is triggered.

eBPF allows us to collect customized information about how an app is behaving without having to change the app in any way, by observing it from within the kernel. We can build on this observability to create eBPF security tools that detect or even prevent malicious activity within the kernel. And we can develop powerful, high-performance networking capabilities with eBPF, handling network packets within the kernel and avoiding costly transitions to and from user space.
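To make the load/verify/attach cycle concrete, here is what a tiny eBPF-powered trace can look like in practice. This is an illustrative sketch, not from the talk: it assumes bpftrace is installed and is run as root. bpftrace compiles the one-liner to eBPF bytecode, the kernel verifier checks it, and the program is attached to the execve tracepoint so it fires whenever any process starts a new program:

```shell
# Attach a one-line eBPF program to the sys_enter_execve tracepoint.
# comm is the name of the calling process; args->filename is the
# program being executed. Requires bpftrace and root privileges.
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_execve { printf("%s ran %s\n", comm, str(args->filename)); }'
```

Running this while opening a new shell, for example, prints a line for every command launched anywhere on the system, without modifying or restarting any of the observed applications.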

---------------------------------------------

"Running machine learning apps with GPU acceleration in containers," Jérôme Petazzoni

Abstract: Many folks have been running machine learning models with GPUs for a while now. However, in the last few months, some models like Whisper (for speech recognition) or Stable Diffusion (for prompt-based image generation) gained a lot of popularity because (1) their code and weights have been made public and (2) they can run on affordable "consumer" GPUs (instead of expensive datacenter GPUs). This means that we can run them at home, and we'll show you how - but for extra fun (for some definition of "fun") we'll do it in containers!

Description: This presentation will focus on two popular models: OpenAI Whisper (released in September 2022) and Stable Diffusion (released in August 2022). Anyone can download the code and the weights for these models. They can run on CPUs, but running them on a GPU can achieve 50x speed improvements, even with a GPU costing less than 500€ these days. This makes them a great example for GPU acceleration!

Since there are already many presentations (at specialized conferences and meetups) about Deep Learning, Machine Learning, and associated topics, this talk will focus in particular on the "containerization" aspects.

First, how do we expose our GPU(s) to our containers? We'll talk about the NVIDIA ecosystem, its infamous drivers, the "nvidia-docker" runtime, and how to get started with all that.
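As a taste of the kind of setup the talk covers: once the NVIDIA driver and the NVIDIA Container Toolkit are installed on the host (both assumptions here, and exactly the pieces the talk walks through), Docker can expose GPUs to a container with the `--gpus` flag. A common sanity check is to run `nvidia-smi` inside a CUDA base image:

```shell
# Assumes the NVIDIA driver and NVIDIA Container Toolkit are installed.
# --gpus all makes every host GPU visible inside the container;
# nvidia-smi should then list them, confirming passthrough works.
# The image tag is an example; pick one matching your driver version.
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```

If `nvidia-smi` prints the same GPU table inside the container as on the host, the plumbing is in place and a framework like PyTorch will be able to see the GPU.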

Next, how do we containerize these applications? Is there anything special to do if we want to leverage GPUs? What about the models - these can be quite big (multiple GB for these "modern" models), how should we handle them? What about image size optimization?
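One common answer to the model-size question above is to keep the multi-GB weights out of the image entirely and mount them at run time. The Dockerfile below is a minimal, hypothetical sketch of that pattern (the base image tag, package list, and `app.py` entry point are illustrative assumptions, not from the talk):

```dockerfile
# Start from a CUDA runtime image so the container can use the GPU.
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

# Install Python and the ML dependencies (packages are illustrative).
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install --no-cache-dir torch diffusers transformers

# Copy only the application code. The multi-GB model weights are NOT
# baked into the image; they get bind-mounted at run time, keeping
# the image small and the weights easy to swap or update.
COPY app.py /app/app.py
WORKDIR /app
CMD ["python3", "app.py"]
```

The weights would then be supplied when starting the container, e.g. `docker run --gpus all -v /path/to/weights:/models myimage`, so rebuilding the image is never needed just to change models.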

---------------------------------------------

SPEAKER BIOS:

You can learn more about Ayesha here: ayesha54.github.io

Jérôme was part of the team that created Docker. He plays a dozen musical instruments. These days, he teaches containers and Kubernetes.

Adalbertstraße 6a · Berlin, BE