Details

In this technical workshop, we’ll show how to build a feedback-driven annotation pipeline for perception models using FiftyOne. We’ll explore real model failures and data gaps and turn them into focused annotation tasks, then route those tasks through a repeatable workflow for labeling and QA. The result is an end-to-end pipeline that keeps annotators, tools, and models aligned and closes the loop from annotation and curation back to model training and evaluation.
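For a concrete flavor of the failure-surfacing step, here is a minimal sketch in FiftyOne. It assumes a dataset that already contains detection predictions; the dataset and field names (driving-perception, predictions, ground_truth) and the needs_review tag are placeholders, not names from the workshop itself.

```python
import fiftyone as fo
from fiftyone import ViewField as F

# Load an existing dataset with model predictions (name is a placeholder)
dataset = fo.load_dataset("driving-perception")

# Evaluate detections against ground truth; each prediction is marked
# "tp" or "fp" (and ground truth "fn") under the "eval" key
dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
)

# Isolate samples containing false positives and tag them for relabeling
fp_view = dataset.filter_labels("predictions", F("eval") == "fp")
fp_view.tag_samples("needs_review")

print(f"{len(fp_view)} samples flagged for annotation review")
```

Filtering to just the failing samples, rather than re-annotating the whole dataset, is what makes the downstream annotation tasks focused and cheap.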

Time and Location

Feb 18, 2026
10–11 AM PST
Online. Register for the Zoom!

What you'll learn

  • Techniques for labeling the data that matters most, to save annotation time and cost
  • Human-in-the-loop workflows for finding and fixing model errors and data gaps through targeted relabeling instead of bulk labeling
  • A single, feedback-driven pipeline that combines auto-labeling and human review for perception models (see the sketch after this list)
  • Label schemas and metadata as “data contracts” that enforce consistency between annotators, models, and tools, especially for multimodal data
  • Ways to detect and manage schema drift, and to tie schema versions to dataset and model versions for reproducibility
  • QA and review steps that surface label issues early and tie changes back to model behavior
  • An annotation architecture that accommodates new perception tasks and feedback signals without rebuilding your entire data stack
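As a sketch of the routing step referenced above, the flagged samples can be sent to an annotation backend with an explicit label schema and the corrected labels merged back in. This assumes a configured backend (CVAT here, one of several FiftyOne supports) and reuses the hypothetical dataset, tag, and class names from the earlier sketch.

```python
import fiftyone as fo

dataset = fo.load_dataset("driving-perception")

# Pull only the samples flagged during failure analysis
review_view = dataset.match_tags("needs_review")

# Send them to an annotation backend with an explicit label schema,
# which acts as the "data contract" between annotators and models
# (class list is hypothetical; requires a configured CVAT server)
review_view.annotate(
    "fix_fp_round1",
    backend="cvat",
    label_field="ground_truth",
    label_type="detections",
    classes=["car", "pedestrian", "cyclist"],
)

# ...after annotators finish, merge the corrected labels back in,
# closing the loop before retraining and re-evaluating the model
dataset.load_annotations("fix_fp_round1")
```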

Related topics

Artificial Intelligence
Computer Vision
Machine Learning
Data Science
Open Source
