Cloud & AI: GCP Bare Metal, MCP Deployment, and LLM Guard
Details
Please RSVP on this page
------------------------------------
NOTE: Because the event is held at a bank-related facility, every attendee must provide their real name and an identification document in order to enter the building. A separate form will be sent to registered attendees on the evening of 2025.09.28.
------------------------------------
Join us for an exciting evening at Deutsche Bank, where we will be talking about the cloud, some really interesting things you can do with it, and how to squeeze every last drop out of it. And of course, in 2025 we will also talk about AI and some more cool things you can do with it, from MCP deployment to LLM security.
Don't miss out on this chance to expand your knowledge and connect with like-minded individuals. Bring your questions, ideas, and passion for technology. Register now and be part of the cloud innovation conversation!
----------------------------------------
1800: Doors open
1830: Welcome
1840: A Deeper Dive into CPU Limits, Requests, and Scheduling in Kubernetes and GDC Software Only on Bare Metal, by Pavel Malyarevsky
1910: Break
1920: Agents with MCPs - deployment by Sonam
2000: Break
2010: LLM Guard: Simulating AI-Driven Security Administration by Fam Shihata
2050: Networking
----------------------------------------
A Deeper Dive into CPU Limits, Requests, and Scheduling in Kubernetes and GDC Software Only on Bare Metal, by Pavel Malyarevsky
As enterprises adopt bare-metal cloud solutions, mastering resource management is critical to achieving performance and efficiency. Google Distributed Cloud (GDC) on Bare Metal offers powerful capabilities, but its nuances in CPU handling present unique challenges. How can teams ensure their applications get the resources they need without overprovisioning? This talk will provide a deep dive into the mechanics of CPU limits and requests within GDC. We will explore the current scheduling limitations, their real-world impact on application performance, and the key design decisions that can mitigate these challenges. Attendees will walk away with concrete implementation patterns, best practices for resource configuration, and a clear understanding of how to optimize their GDC deployments for maximum efficiency and reliability.
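For illustration only (not material from the talk), here is a minimal sketch of how CPU requests and limits are declared with the Kubernetes Python client; the container name, image, and values are placeholder assumptions. The scheduler places the pod based on its request, while the limit is enforced at runtime as a CPU quota.

from kubernetes import client

# Hypothetical pod spec: the scheduler bin-packs on the 250m request,
# while the 500m limit caps CPU usage through the cgroup quota.
container = client.V1Container(
    name="worker",                      # placeholder name
    image="example.org/worker:latest",  # placeholder image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
)
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cpu-demo"),
    spec=client.V1PodSpec(containers=[container]),
)
# client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)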
Pavel is an IT Infrastructure Manager and Architect specializing in building private cloud solutions for Deutsche Bank's Investment Bank division. His experience spans application production management, development, and the design of large-scale public and private cloud infrastructure. He is focused on delivering business value through vendor-agnostic, enterprise-ready solutions, bringing cutting-edge technologies to development teams to enhance their productivity and capabilities.
----------------------------------------
Agents with MCPs - deployment by Sonam
The Model Context Protocol (MCP) is an easy way to provide your agent with tools and data through external servers. But how can you deploy it easily? This talk covers what MCP is, the benefits of using it in your agent, and how you can deploy it with minimal effort.
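To give a flavour of what deploying an MCP server can look like, here is a minimal sketch using the FastMCP helper from the official MCP Python SDK; the server name and the add tool are placeholder assumptions, not material from the talk.

from mcp.server.fastmcp import FastMCP

# Hypothetical tool server an agent could call over MCP.
mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    # stdio is the simplest transport for a locally launched server;
    # a remote deployment would typically use an HTTP-based transport.
    mcp.run(transport="stdio")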
Sonam is a Generative AI Evangelist at Articul8, an Intel-backed company that provides secure generative AI infrastructure for enterprises. She is also the creator of the open-source library Embed-Anything, which creates local and multimodal embeddings and streams them to vector databases; it is built in Rust, which makes it greener and more efficient. She previously worked at Qdrant Engine, and before that at Rasa. Earlier, she was an AI researcher at Saama, where she worked extensively on clinical trial analytics with Pfizer. She is passionate about topics such as metric learning and biases in language models, and has published a paper at COLING, one of the most reputable venues in computational linguistics, in the ACL Anthology.
----------------------------------------
LLM Guard: Simulating AI-Driven Security Administration by Fam Shihata
LLM Guard is a security simulator designed to explore the capabilities and limitations of Large Language Models as network defenders. By placing an LLM in the role of a security administrator within a defined topology, the simulator tests how effectively it can detect threats, configure defenses, and respond to incidents. The project provides a sandbox for evaluating AI-driven security operations, benchmarking performance against human administrators, and uncovering both strengths and vulnerabilities of LLM-based defense strategies.
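To make the setup concrete, below is a purely conceptual sketch of such a simulation loop; it is not LLM Guard's actual code, and the simulator's step()/apply() interface and the ask_llm helper are assumptions for illustration only.

# Conceptual sketch of an LLM acting as a security administrator.
# The simulator object, its step()/apply() methods, and ask_llm()
# are hypothetical stand-ins, not LLM Guard's real interfaces.
def run_episode(sim, ask_llm, max_steps=20):
    score = 0
    for _ in range(max_steps):
        events = sim.step()          # new alerts and log lines for this tick
        prompt = (
            "You are the network security administrator.\n"
            f"Topology: {sim.topology}\n"
            f"Events: {events}\n"
            "Reply with one action, e.g. BLOCK <ip>, ISOLATE <host>, or WAIT."
        )
        action = ask_llm(prompt)     # the model proposes a defensive action
        outcome = sim.apply(action)  # the simulator scores that action
        score += outcome.reward
        if outcome.compromised:      # the attacker got through; end the episode
            break
    return score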
----------------------------------------
Please RSVP on this page