
About us
This group was looking for a new owner, so we at DataTalks.Club decided to take it over.
We host weekly virtual meetups on AI, machine learning, system design, recsys, etc. Each meetup is a ~30-40 minute talk, followed by a Q&A.
Note: Messages sent on Meetup are not monitored.
Upcoming events (4)

Stream Processing with PyFlink
Online · Building Resilient, Real-Time Data Pipelines with Python and Apache Flink - Alexey Grigorev
In this hands-on session, you’ll learn how to bridge the gap between batch processing and real-time engineering. While many data engineers are comfortable with SQL and Python, streaming introduces complex challenges like "lateness," out-of-order events, and state management.
We’ll show you how PyFlink handles these effortlessly.
This session walks you through a complete end-to-end flow: producing mock event data into Redpanda, performing real-time windowed aggregations in Flink, and sinking the results into a PostgreSQL database for immediate analysis.
What You'll Learn
- The Streaming Mindset: When to use continuous processing vs. micro-batches.
- Architecture 2026: Setting up a modernized stack with Redpanda, Flink, and Postgres.
- Watermarks & Windows: How to handle data from users "in a tunnel" using 2026 best practices.
- Resiliency & Recovery: Configuring Flink Checkpointing to ensure you never lose your place during a failure.
- The Table API: Using Python to write "SQL-like" transformations on live data streams.
The session will be a live demo in a fully working 2026 environment, with practical troubleshooting tips for common Flink "stumbling blocks" and a chance to ask your questions. This workshop gives you a real feel for how stream processing is implemented in high-scale, real-world environments.
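To give a rough feel for the pieces involved, here is a minimal PyFlink Table API sketch of the same shape of pipeline: a Kafka-protocol source (Redpanda speaks the Kafka protocol), an event-time window with a watermark, checkpointing, and a Postgres sink. This is not the workshop's actual code; the topic, schema, and credentials are hypothetical placeholders, and it assumes the Kafka and JDBC connector JARs are on the classpath.

```python
# Minimal PyFlink Table API sketch (hypothetical names throughout):
# Kafka-protocol source (Redpanda), event-time tumbling window with a
# watermark, checkpointing for recovery, and a JDBC sink into Postgres.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
# Checkpoint every 30s so the job can resume from a consistent state after a failure.
t_env.get_config().set("execution.checkpointing.interval", "30s")

t_env.execute_sql("""
    CREATE TABLE events (
        user_id STRING,
        amount  DOUBLE,
        ts      TIMESTAMP(3),
        -- Tolerate events arriving up to 10 seconds late
        WATERMARK FOR ts AS ts - INTERVAL '10' SECOND
    ) WITH (
        'connector' = 'kafka',  -- Redpanda is Kafka-API compatible
        'topic' = 'events',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

t_env.execute_sql("""
    CREATE TABLE windowed_totals (
        window_start TIMESTAMP(3),
        window_end   TIMESTAMP(3),
        user_id      STRING,
        total        DOUBLE
    ) WITH (
        'connector' = 'jdbc',
        'url' = 'jdbc:postgresql://localhost:5432/analytics',
        'table-name' = 'windowed_totals',
        'username' = 'postgres',
        'password' = 'postgres'
    )
""")

# One-minute tumbling windows, aggregated continuously and written to Postgres.
t_env.execute_sql("""
    INSERT INTO windowed_totals
    SELECT window_start, window_end, user_id, SUM(amount) AS total
    FROM TABLE(TUMBLE(TABLE events, DESCRIPTOR(ts), INTERVAL '1' MINUTE))
    GROUP BY window_start, window_end, user_id
""").wait()
```

Declaring the watermark on the source table is what lets Flink close windows correctly even when events arrive late or out of order, which is exactly the "users in a tunnel" problem the talk covers.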
About the Speaker
Alexey Grigorev is the Founder of DataTalks.Club and creator of the Zoomcamp series.
Alexey is a seasoned software and ML engineer with over 10 years of engineering experience and 6+ years in machine learning. He has deployed large-scale ML systems at companies like OLX Group and Simplaex, authored several technical books, including Machine Learning Bookcamp, and is a Kaggle Master with a 1st place finish in the NIPS'17 Criteo Challenge.
**Join our Slack: https://datatalks.club/slack.html**
11 attendees
Context Engineering for Agentic Hybrid Applications
Online · Research survey and upcoming trends discussion - Ivan Potapov and Tobias Lindenbauer
As an agent keeps running, its context window balloons with tool logs, stale diffs, and repeated data dumps. The model starts drowning in irrelevant details and falls victim to "lost-in-the-middle" effects, missing critical facts buried deep in oversized prompts.
We'll walk through research on keeping only high-signal observations: masking vs. summarization trade-offs, compressing bulky tool output (drawing on ideas like LLMLingua-2), and pruning dead branches from the agent's trajectory so it stops dragging noise forward. We'll also share insights on cutting LLM call costs along the way.
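As a minimal sketch of the masking idea (not the speakers' code; the Step structure is hypothetical, and a real system would count tokens rather than characters):

```python
# Illustrative sketch of observation masking: replace older tool observations
# with short placeholders so the prompt keeps the agent's decisions and its
# recent observations instead of stale data dumps.
from dataclasses import dataclass

@dataclass
class Step:
    role: str      # "assistant" (an action) or "tool" (an observation)
    content: str

def mask_old_observations(trajectory: list[Step], keep_last: int = 3) -> list[Step]:
    """Keep the last `keep_last` tool observations verbatim; mask the rest."""
    tool_idx = [i for i, s in enumerate(trajectory) if s.role == "tool"]
    to_mask = set(tool_idx[:-keep_last]) if keep_last else set(tool_idx)
    return [
        Step(s.role, f"[observation masked: {len(s.content)} chars]")
        if i in to_mask else s
        for i, s in enumerate(trajectory)
    ]
```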
Then we'll connect those techniques to bigger-picture design: memory hierarchies (session → working set → notes → cross-session) and standardized tool interfaces like MCP that reduce "context debt" and keep the agent's working set clean.
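A toy data-structure sketch of that hierarchy might look like the following; the tier names mirror the talk, and everything else is purely illustrative:

```python
# Hypothetical sketch of the session → working set → notes → cross-session
# memory hierarchy mentioned above.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    session: list[str] = field(default_factory=list)       # full transcript of the current run
    working_set: list[str] = field(default_factory=list)   # the slice that actually enters the prompt
    notes: dict[str, str] = field(default_factory=dict)    # distilled facts the agent writes down
    cross_session: dict[str, str] = field(default_factory=dict)  # persists across runs

    def promote(self, key: str, fact: str) -> None:
        """Distill a fact upward so it survives pruning of the raw transcript."""
        self.notes[key] = fact
        self.cross_session[key] = fact
```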
Finally, we'll look at where the field is heading — toward a world where Context Engineering becomes something you train, not just script.
About the Speakers:
Tobias Lindenbauer is an AI researcher at JetBrains Research, where he advances efficient and effective code agents that robustly solve long-horizon software engineering tasks. Currently, he is most interested in efficiency topics, context management, interpretability and data synthesis. He recently presented “The Complexity Trap: Simple Observation Masking Is as Efficient as LLM Summarization for Agent Context Management” at the Deep Learning for Code workshop at NeurIPS 2025, highlighting practical pitfalls of LLM summarization-based context strategies and evidence for more computationally efficient alternatives.
Ivan Potapov is a Research Engineer in Discovery Search & Ranking at Zalando, where he builds search retrieval and ranking systems. He teaches data engineering, AI agents, and LLM alignment, with a focus on bridging software engineering and applied ML. His recent work centers on long-running agents and context engineering—memory, state, and retrieval—exploring why many code-first agent designs fall short. His key thesis: context management is becoming something we train and iterate on, not just script. https://blog.ivan.digital/context-engineering-for-agentic-hybrid-applications-why-code-agents-fail-and-how-to-fix-them-076cab699262
**Join our Slack: https://datatalks.club/slack.html**
8 attendees
How to Evaluate MCP-powered AI Agents Beyond Accuracy using Agent GPA
Online · This hands-on workshop introduces the Agent Goal-Plan-Action (Agent GPA) framework, a practical and advanced method for evaluating and improving AI agents.
Moving beyond simple final-answer scoring, Agent GPA focuses on the agent's entire reasoning process: evaluating goal achievement efficiency, plan logic, appropriate tool usage, and execution follow-through.
Agent GPA has achieved state-of-the-art benchmark results on TRAIL/GAIA, with 95% error coverage and 86% error localization, demonstrating the power of process-level evaluation over simple final-answer scoring.
We'll move beyond simple accuracy and analyze the agent's behavior holistically through the Agent GPA lens, which provides a deeper view of the agent's working process. Using Agent GPA, you will diagnose and iteratively improve the agent's performance, specifically addressing frequent issues like planning failures, tool selection errors, and execution gaps.
You’ll discover how seemingly minor changes, particularly in tool definitions, can lead to measurable improvements in tool selection and tool calling.
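For a concrete sense of why wording matters, here is a hedged sketch using the MCP Python SDK's FastMCP helper. The server name, tool, and docstring are hypothetical, but the mechanism is real: the docstring becomes the tool description the agent reads when deciding which tool to call, so its wording directly shapes tool selection.

```python
# Hedged sketch of an MCP tool definition; the docstring is the description
# an MCP-connected agent sees, so it should state inputs, outputs, and when
# to use the tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("clinical-research")  # hypothetical server name

@mcp.tool()
def search_trials(condition: str, max_results: int = 10) -> list[str]:
    """Search clinical trial records for a medical condition.

    Use this tool, not general web search, when the user asks about ongoing
    or completed clinical trials. `condition` is a plain-English disease
    name, e.g. "type 2 diabetes"; results are trial identifiers.
    """
    return []  # the data-source query is omitted in this sketch

if __name__ == "__main__":
    mcp.run()  # serve over stdio so an MCP client can discover the tool
```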
What you’ll learn:
- How to build an AI agent powered by Snowflake MCP
- How agents discover and choose tools through MCP
- How to design tool descriptions that influence agent behavior
- How to evaluate agent quality using structured metrics
- How to compare agent versions using observability and traces
- Why data grounding matters for reliable agents
What we’ll do:
- Build an initial agent version connected to Snowflake MCP
- Evaluate its performance using TruLens metrics
- Identify failure modes in tool selection and tool calling
- Improve MCP tool definitions using a coding agent
- Rebuild and re-evaluate a second agent version
- Compare both versions side by side using their traces and evaluation data
The workshop uses a concrete example: a health research agent, grounded on clinical trials and PubMed data available from the Snowflake Marketplace.
By the end of the session, you’ll understand how to evaluate AI agents using the Agent GPA framework and move beyond simple accuracy or final-answer scoring. You’ll learn how to analyze an agent’s goals, plans, tool usage, and execution, diagnose failures, and iteratively improve agent performance using structured evaluation and observability.
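To illustrate what process-level scoring looks like compared with final-answer grading, here is a simplified sketch; the trace fields and report below are hypothetical stand-ins, while the workshop itself uses the actual Agent GPA metrics in TruLens.

```python
# Hypothetical sketch of process-level evaluation: score the agent's plan
# and tool execution across the whole trace, and localize errors, rather
# than grading only the final answer.
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    kind: str                # "plan", "tool_call", or "answer"
    tool: str | None = None  # tool name for tool_call steps
    ok: bool = True          # did the step execute as intended?

@dataclass
class GPAReport:
    goal_achieved: bool
    plan_steps: int
    tool_errors: list[int] = field(default_factory=list)  # step indices, for localization

def evaluate_trace(trace: list[TraceStep], goal_achieved: bool) -> GPAReport:
    """Score the whole trajectory instead of only its final answer."""
    report = GPAReport(
        goal_achieved=goal_achieved,
        plan_steps=sum(1 for s in trace if s.kind == "plan"),
    )
    for i, step in enumerate(trace):
        if step.kind == "tool_call" and not step.ok:
            report.tool_errors.append(i)  # record *where* it failed, not just whether
    return report
```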
Please come prepared with a fresh Python environment (such as Jupyter) to run the lab.
About the Speaker
Josh is a developer advocate for AI and Open Source at Snowflake, previously at TruEra (acquired by Snowflake). He is also a maintainer of TruLens, an open-source library for systematically tracking and evaluating LLM-based applications.
Josh regularly delivers tech talks and workshops at events including PyData, Devoxx, AI_Dev, AI DevWorld, AI Camp meetups, and more. He also developed courses and taught students on a variety of platforms, including Coursera, DeepLearning.ai, Udemy, and DataCamp, and served as an advisor for Trustworthy Machine Learning at Stanford.
**Join our Slack: https://datatalks.club/slack.html**
This post is sponsored by Snowflake.
8 attendees
Past events (30)


