
About us
🖖 This virtual group is for data scientists, machine learning engineers, and open source enthusiasts.
Every month we’ll bring you diverse speakers working at the cutting edge of AI, machine learning, and computer vision.
- Are you interested in speaking at a future Meetup?
- Is your company interested in sponsoring a Meetup?
This Meetup is sponsored by Voxel51, the lead maintainers of the open source FiftyOne computer vision toolset. To learn more, visit the FiftyOne project page on GitHub.
Upcoming events (10)
Feb 5 - AI, ML and Computer Vision Meetup
Online. Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.
Feb 5, 2026
9 - 11 AM Pacific
Online. Register for the Zoom!
Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models
Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.
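For readers new to the CLIP-based methods mentioned above, here is a minimal sketch of image-level zero-shot anomaly scoring with an off-the-shelf CLIP model from Hugging Face transformers. The prompts, image path, and score definition are illustrative assumptions, not the speaker's method.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical prompt pair describing the normal and anomalous states
prompts = ["a photo of a flawless metal part", "a photo of a damaged metal part"]
image = Image.open("part.jpg")  # placeholder path to an inspection image

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, 2)

# Probability assigned to the "damaged" prompt serves as a crude anomaly score
anomaly_score = logits.softmax(dim=-1)[0, 1].item()
print(f"anomaly score: {anomaly_score:.3f}")
```

Patch-level variants of this idea add localization, while the MLLM-driven approaches contrasted in the talk return an explanation alongside a threshold-free decision.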
About the Speaker
Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.
Data-Centric Lessons To Improve Speech-Language Pretraining
Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.
We focus on three research questions fundamental to speech-language pretraining data:
- How to process raw web-crawled audio content for speech-text pretraining;
- How to construct synthetic pretraining datasets to augment web-crawled data;
- How to interleave (text, audio) segments into training sequences.
We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models up to 3x larger by 10.2% absolute. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.
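As a purely hypothetical illustration of the third question above, the sketch below packs text tokens and discrete audio tokens into one training sequence with boundary markers; the special tokens and audio-codec tokens are invented and do not come from the SpeLangy recipe.

```python
# Hypothetical interleaving of (text, audio) segments into a single training
# sequence; boundary tokens and audio tokens are invented for illustration.
BOA, EOA = "<|audio_start|>", "<|audio_end|>"

def interleave(segments):
    """segments: list of ("text", tokens) or ("audio", tokens) tuples."""
    seq = []
    for modality, tokens in segments:
        if modality == "audio":
            seq += [BOA, *tokens, EOA]   # wrap discrete audio tokens in markers
        else:
            seq += list(tokens)          # text tokens pass through unchanged
    return seq

print(interleave([
    ("text", ["The", "caller", "asks", ":"]),
    ("audio", ["<a_412>", "<a_97>", "<a_3051>"]),
    ("text", ["and", "the", "agent", "answers", "."]),
]))
```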
About the Speaker
Vishaal Udandarao is a third-year ELLIS PhD student, jointly working with Matthias Bethge at the University of Tuebingen and Samuel Albanie at the University of Cambridge/Google DeepMind. He is also part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.
A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne
Most computer-vision failures come from the rare cases: the dark corners, odd combinations, and edge conditions we never capture enough of in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.
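As a rough sketch of the last step of that workflow, the snippet below loads already-generated synthetic images into FiftyOne with per-sample generation metadata; the dataset name, file paths, and field names are illustrative assumptions, not a fixed schema from the talk.

```python
import fiftyone as fo

# Assumed: synthetic images were already generated to local paths, each paired
# with the prompt that produced it
generated = [
    ("/data/synth/night_rain_0001.png", "pedestrian crossing at night in heavy rain"),
    ("/data/synth/sun_glare_0002.png", "cyclist partially occluded by sun glare"),
]

dataset = fo.Dataset("synthetic-edge-cases", overwrite=True)

samples = []
for filepath, prompt in generated:
    sample = fo.Sample(filepath=filepath, tags=["synthetic"])
    sample["generation_prompt"] = prompt  # metadata used later for filtering
    samples.append(sample)

dataset.add_samples(samples)

# Inspect, filter, and validate the new samples in the FiftyOne App
session = fo.launch_app(dataset)
```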
About the Speaker
Adonai Vera is a Machine Learning Engineer & DevRel at Voxel51, with over 7 years of experience building computer vision and machine learning models using TensorFlow, Docker, and OpenCV. He started as a software developer, moved into AI, led teams, and served as CTO. Today, he connects code and community to build open, production-ready AI, making technology simple, accessible, and reliable.
Making Computer Vision Models Faster: An Introduction to TensorRT Optimization
Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.
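For orientation, here is a minimal sketch of one common path: building an FP16 TensorRT engine from an ONNX export with the Python builder API (written against the TensorRT 8.x/9.x interface; file names are placeholders). The trtexec CLI (`trtexec --onnx=model.onnx --fp16`) is an equivalent one-liner.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse an ONNX export of the vision model (placeholder path)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # reduced precision; INT8 requires calibration

# The builder applies layer fusion and memory optimizations during this call
engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```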
About the Speaker
Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. His experience spans academic research as a PhD holder and industry work, where he has contributed to multiple patents.
Feb 11 - Visual AI for Video Use Cases
Online. Join our virtual Meetup to hear talks from experts on cutting-edge topics at the intersection of Visual AI and video use cases.
Time and Location
Feb 11, 2026
9 - 11 AM Pacific
Online. Register for the Zoom!
VideoP2R: Video Understanding from Perception to Reasoning
Reinforcement fine-tuning (RFT), a two-stage framework consisting of supervised fine-tuning (SFT) and reinforcement learning (RL), has shown promising results in improving the reasoning ability of large language models (LLMs). Yet extending RFT to large video language models (LVLMs) remains challenging. We propose VideoP2R, a novel process-aware video RFT framework that enhances video reasoning by modeling perception and reasoning as distinct processes. In the SFT stage, we develop a three-step pipeline to generate VideoP2R-CoT-162K, a high-quality, process-aware chain-of-thought (CoT) dataset for perception and reasoning.
In the RL stage, we introduce a novel process-aware group relative policy optimization (PA-GRPO) algorithm that supplies separate rewards for perception and reasoning. Extensive experiments show that VideoP2R achieves state-of-the-art (SotA) performance on six out of seven video reasoning and understanding benchmarks. Ablation studies further confirm the effectiveness of our process-aware modeling and PA-GRPO, and demonstrate that the model's perception output is information-sufficient for downstream reasoning.
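The snippet below is only a rough, hypothetical illustration of the "separate rewards for perception and reasoning" idea using a GRPO-style group-normalized advantage; it is not the authors' PA-GRPO implementation, and the reward values are made up.

```python
import numpy as np

def group_relative_advantage(rewards):
    """GRPO-style normalization of rewards within one group of sampled responses."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# One video/question, four sampled responses, scored on two separate channels
perception_rewards = [1.0, 0.0, 1.0, 1.0]  # did the response describe the clip correctly?
reasoning_rewards = [1.0, 0.0, 0.0, 1.0]   # did the final answer match the ground truth?

advantages = (
    group_relative_advantage(perception_rewards)
    + group_relative_advantage(reasoning_rewards)
)
print(advantages)  # per-response weights for the policy-gradient update
```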
About the Speaker
Yifan Jiang is a third-year Ph.D. student at the Information Sciences Institute at the University of Southern California (USC-ISI), advised by Dr. Jay Pujara, focusing on natural language processing, commonsense reasoning, and multimodal large language models.
Layer-Aware Video Composition via Split-then-Merge
Split-then-Merge (StM) is a novel generative framework that overcomes data scarcity in video composition by splitting unlabeled videos into separate foreground and background layers for self-supervised learning. By utilizing a transformation-aware training pipeline with multi-layer fusion, the model learns to realistically compose dynamic subjects into diverse scenes without relying on expensive annotated datasets. This presentation will cover the problem of video composition and the details of StM, an approach that tackles this problem from a generative AI perspective. We will conclude by demonstrating how StM works and how it outperforms state-of-the-art methods in both quantitative benchmarks and qualitative evaluations.
About the Speaker
Ozgur Kara is a 4th year Computer Science PhD student at the University of Illinois Urbana-Champaign (UIUC), advised by Founder Professor James M. Rehg. His research builds the next generation of video AI by tackling three core challenges: efficiency, controllability, and safety.
Video-native VLMs and control
We show how image-native vision–language models can be extended to support native video understanding, structured reasoning, tool use, and robotics. Our approach focuses on designing data, modeling, and training recipes to optimize for multimodal input and interaction patterns, treating vision and perception as first-class citizens. We discuss lessons learned from scaling these methods in an open-source model family and their implications for building flexible multimodal systems.
About the Speaker
Akshat Shrivastava is the CTO and co-founder of Perceptron, previously leading AR On-Device at Meta and conducting research at UW.
Video Intelligence Is Going Agentic
Video content has become ubiquitous in our digital world, yet the tools for working with video have remained largely unchanged for decades. This talk explores how the convergence of foundation models and agent architectures is fundamentally transforming video interaction and creation. We'll examine how video-native foundation models, multimodal interfaces, and agent transparency are reshaping enterprise media workflows through a deep dive into Jockey, a pioneering video agent system.
About the Speaker
James Le currently leads the developer experience function at TwelveLabs - a startup building foundation models for video understanding. He previously operated in the MLOps space and ran a blog/podcast on the Data & AI infrastructure ecosystem.
Feb 12 - Seattle AI, ML and Computer Vision Meetup
Join us to hear talks from experts on cutting-edge topics across AI, ML, and computer vision!
Pre-registration is mandatory.
Time and Location
Feb 12, 2026
5:30 - 8:30 PM
Union AI Offices
400 112th Ave NE #115
Bellevue, WA 98004
ALARM: Automated MLLM-Based Anomaly Detection in Complex-EnviRonment Monitoring with Uncertainty Quantification
In complex environments, anomalies are often highly contextual and ambiguous, so uncertainty quantification (UQ) is a crucial capability for a multi-modal LLM (MLLM)-based video anomaly detection (VAD) system to succeed. In this talk, I will introduce our UQ-supported MLLM-based VAD framework, ALARM. ALARM integrates UQ with quality-assurance techniques such as reasoning chains, self-reflection, and MLLM ensembling for robust and accurate performance, and is designed around a rigorous probabilistic inference pipeline and computational process.
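As a toy illustration of one ingredient, ensemble disagreement, the sketch below treats the spread of anomaly scores across MLLMs as an uncertainty signal; ALARM's actual probabilistic inference pipeline is considerably richer, and the numbers and threshold here are invented.

```python
import statistics

def ensemble_uncertainty(scores):
    """scores: anomaly probabilities in [0, 1] from each MLLM for the same clip."""
    mean = statistics.fmean(scores)
    spread = statistics.pstdev(scores)  # high spread -> low confidence
    return mean, spread

mean, spread = ensemble_uncertainty([0.82, 0.35, 0.74])
if spread > 0.2:  # illustrative threshold
    print("uncertain: trigger self-reflection or route to a human reviewer")
print(f"anomaly={mean:.2f}, uncertainty={spread:.2f}")
```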
About the Speaker
Congjing Zhang is a third-year Ph.D. student in the Department of Industrial and Systems Engineering at the University of Washington, advised by Prof. Shuai Huang. She is a recipient of the 2025-2027 Amazon AI Ph.D. Fellowship. Her research interests center on large language models (LLMs) and machine learning, with a focus on uncertainty quantification, anomaly detection and synthetic data generation.
The World of World Models: How the New Generation of AI Is Reshaping Robotics and Autonomous Vehicles
World Models are emerging as the defining paradigm for the next decade of robotics and autonomous systems. Instead of depending on handcrafted perception stacks or rigid planning pipelines, modern world models learn a unified representation of an environment—geometry, dynamics, semantics, and agent behavior—and use that understanding to predict, plan, and act. This talk will break down why the field is shifting toward these holistic models, what new capabilities they unlock, and how they are already transforming AV and robotics research.
We then connect these advances to the Physical AI Workbench, a practical foundation for teams who want to build, validate, and iterate on world-model-driven pipelines. The Workbench standardizes data quality, reconstruction, and enrichment workflows so that teams can trust their sensor data, generate high-fidelity world representations, and feed consistent inputs into next-generation predictive and generative models. Together, world models and the Physical AI Workbench represent a new, more scalable path forward—one where robots and AVs can learn, simulate, and reason about the world through shared, high-quality physical context.
About the Speaker
Daniel Gural leads technical partnerships at Voxel51, where he’s building the Physical AI Workbench, a platform that connects real-world sensor data with realistic simulation to help engineers better understand, validate, and improve their perception systems.
Modern Orchestration for Durable AI Pipelines and Agents - Flyte 2.0
In this talk we’ll discuss how the orchestration space is evolving with the current AI landscape, and provide a peek at Flyte 2.0, which makes truly dynamic, compute-aware, and durable AI orchestration easy for any type of AI application, from computer vision to agents and more!
Flyte, the open source orchestration platform, is already being used by thousands of teams to build their AI pipelines. In fact, it’s extremely likely you’ve interacted with AI models trained on Flyte while browsing social media, listening to music, or using self-driving technologies.
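For context, this is roughly what a tiny pipeline looks like in today's Flyte Python SDK (flytekit); Flyte 2.0 may change these APIs, and the task bodies below are placeholders rather than real training code.

```python
from flytekit import task, workflow

@task
def curate(dataset_uri: str) -> str:
    # placeholder for data curation logic
    return f"{dataset_uri}/curated"

@task
def train(curated_uri: str) -> float:
    # placeholder for a training job; returns a validation metric
    return 0.91

@workflow
def vision_pipeline(dataset_uri: str = "s3://bucket/raw") -> float:
    return train(curated_uri=curate(dataset_uri=dataset_uri))
```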
About the Speaker
Sage Elliott is an AI Engineer at Union.ai (core maintainers of Flyte).
Context Engineering for Video Intelligence: Beyond Model Scale to Real-World Impact
Video streams combine vision, audio, time-series and semantics at a scale and complexity unlike text alone. At TwelveLabs, we’ve found that tackling this challenge doesn’t start with ever-bigger models — it starts with engineering the right context. In this session, we’ll walk engineers and infrastructure leads through how to build production-grade video AI by systematically designing what information the model receives, how it's selected, compressed, and isolated. You’ll learn our four pillars of video context engineering (Write, Select, Compress, Isolate), see how our foundation models (Marengo & Pegasus) and agent product (Jockey) use them, and review real-world outcomes in media, public-safety and advertising pipelines.
We’ll also dive into how you measure context effectiveness — tokens per minute, retrieval hit rates, versioned context pipelines — and how this insight drives cost, latency and trust improvements. If you’re deploying AI video solutions in the wild, you’ll leave with a blueprint for turning raw video into deployable insight — not by model size alone, but by targeted context engineering.
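The helpers below are hypothetical but show how simple two of those metrics can be to track; the function names and inputs are illustrative and not part of any TwelveLabs API.

```python
def tokens_per_minute(context_tokens: int, video_seconds: float) -> float:
    """How much context the pipeline spends per minute of source video."""
    return context_tokens / (video_seconds / 60.0)

def retrieval_hit_rate(retrieved_ids: list[str], relevant_ids: set[str]) -> float:
    """Fraction of retrieved clips that were actually relevant to the query."""
    hits = sum(1 for clip_id in retrieved_ids if clip_id in relevant_ids)
    return hits / max(len(retrieved_ids), 1)

print(tokens_per_minute(12_000, video_seconds=300))          # 2400.0 tokens per minute
print(retrieval_hit_rate(["c1", "c7", "c9"], {"c7", "c9"}))  # ~0.67
```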
About the Speaker
James Le currently leads the developer experience function at TwelveLabs - a startup building foundation models for video understanding. He previously operated in the MLOps space and ran a blog/podcast on the Data & AI infrastructure ecosystem.
Build Reliable AI apps with Observability, Validations and Evaluations
As generative AI moves from experimentation to enterprise deployment, reliability becomes critical. This session outlines a strategic approach to building robust AI apps using Monocle for observability and the VS Code Extension for diagnostics and bug fixing. Discover how to create AI systems that are not only innovative but also predictable and trustworthy.
About the Speaker
Hoc Phan has 20+ years of experience driving innovation at Microsoft, Amazon, Dell, and startups. In 2025, he joined Okahu to lead product and pre-sales, focusing on AI observability and LLM performance. Previously, he helped shape Microsoft Purview via the BlueTalon acquisition and led R&D in cybersecurity and data governance. Hoc is a frequent speaker and author of three books on mobile development and IoT.
Feb 18 - Feedback-Driven Annotation Pipelines for End-to-End ML Workflows
Online. In this technical workshop, we’ll show how to build a feedback-driven annotation pipeline for perception models using FiftyOne. We’ll explore real model failures and data gaps and turn them into focused annotation tasks that then route through a repeatable workflow for labeling and QA. The result is an end-to-end pipeline that keeps annotators, tools, and models aligned and closes the loop from annotation and curation back to model training and evaluation.
Time and Location
Feb 18, 2026
10 - 11 AM PST
Online. Register for the Zoom!
What you’ll learn
- Techniques for labeling the data that matters most to save annotation time and cost
- Structure human-in-the-loop workflows that find and fix model errors and data gaps through targeted relabeling instead of bulk labeling (see the sketch after this list)
- Combine auto-labeling and human review in a single, feedback-driven pipeline for perception models
- Use label schemas and metadata as “data contracts” to enforce consistency between annotators, models, and tools, especially for multimodal data
- Detect and manage schema drift and tie schema versions to dataset and model versions for reproducibility
- QA and review steps that surface label issues early and tie changes back to model behavior
- An annotation architecture that can accommodate new perception tasks and feedback signals without rebuilding your entire data stack
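As a taste of that loop, the sketch below uses FiftyOne's evaluation API to route false positives and false negatives into a relabeling queue and runs a simple schema "data contract" check. The dataset name, field names, tag, and allowed classes are illustrative assumptions, not the workshop's exact setup.

```python
import fiftyone as fo
from fiftyone import ViewField as F

dataset = fo.load_dataset("perception-dev")  # assumed existing dataset

# Evaluate predicted detections against ground truth (COCO-style matching)
dataset.evaluate_detections("predictions", gt_field="ground_truth", eval_key="eval")

# Route samples with false positives or false negatives into a relabeling queue
failures = dataset.match((F("eval_fp") > 0) | (F("eval_fn") > 0))
failures.tag_samples("relabel")

# A minimal "data contract": every predicted label must come from the agreed schema
ALLOWED_CLASSES = {"car", "pedestrian", "cyclist"}
observed = set(dataset.distinct("predictions.detections.label"))
drifted = observed - ALLOWED_CLASSES
if drifted:
    print(f"schema drift detected: unexpected classes {sorted(drifted)}")
```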
Past events (205)