Oct 16 - Visual AI in Agriculture (Day 2)


Details
Join us for day two of a series of virtual events featuring expert talks on the latest developments at the intersection of visual AI and agriculture.
Date and Time
Oct 16 at 9 AM Pacific
Location
Virtual. Register for the Zoom.
Field-Ready Vision: Building the Agricultural Image Repository (AgIR) for Sustainable Farming
Data—not models—is the bottleneck in agricultural computer vision. This talk shares how Precision Sustainable Agriculture (PSA) is tackling that gap with the Agricultural Image Repository (AgIR): a cloud bank of high-resolution, labeled images spanning weeds (40+ species), cover crops, and cash crops across regions, seasons, and sensors.
We’ll show how AgIR blends two complementary streams:
(1) semi-field, high-throughput data captured by BenchBot, our open-source, modular gantry that autonomously images plants and feeds a semi-automated annotation pipeline;
(2) true field images that capture real environmental variability. Together, they cut labeling cost, accelerate pretraining, and improve robustness in production.
On top of AgIR, we’ve built a data-centric training stack: hierarchical augmentation groups, batch mixers, a stand-alone visualizer for rapid iteration, and a reproducible PyTorch Lightning pipeline. We’ll cover practical lessons from segmentation (crop/weed/residue/water/soil), handling domain shift between semi-field and field scenes, and designing metadata schemas that actually pay off at model time.
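To make the training-stack description concrete, below is a minimal sketch of what such a setup might look like in PyTorch Lightning. The class list comes from the abstract; the augmentation grouping, model choice, and hyperparameters are illustrative assumptions, not AgIR's actual code.

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torchvision import tv_tensors
from torchvision.models.segmentation import deeplabv3_resnet50
from torchvision.transforms import v2

# Classes from the talk abstract; the ordering is an assumption.
CLASSES = ["soil", "crop", "weed", "residue", "water"]

# Hypothetical "augmentation groups": geometric ops are applied jointly to the
# image and its mask, photometric ops to the image only.
geometric = v2.Compose([v2.RandomHorizontalFlip(p=0.5), v2.RandomRotation(15)])
photometric = v2.Compose([v2.ColorJitter(brightness=0.3, contrast=0.3)])

def augment(image, mask):
    image, mask = geometric(image, tv_tensors.Mask(mask))
    return photometric(image), mask

class AgSegmentationModule(pl.LightningModule):
    """Minimal reproducible training module for crop/weed/residue/water/soil."""

    def __init__(self, lr: float = 1e-4):
        super().__init__()
        self.save_hyperparameters()  # recorded in checkpoints for reproducibility
        self.model = deeplabv3_resnet50(num_classes=len(CLASSES))
        self.loss = nn.CrossEntropyLoss()

    def training_step(self, batch, batch_idx):
        images, masks = batch               # masks: (B, H, W) class indices
        logits = self.model(images)["out"]  # (B, num_classes, H, W)
        loss = self.loss(logits, masks)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.hparams.lr)
```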
About the Speaker
Sina Baghbanijam is a Ph.D. candidate in Electrical and Computer Engineering at North Carolina State University, where his research centers on generative AI, computer vision, and machine learning. His work bridges advanced AI methods with real-world applications across agriculture, medicine, and the social sciences, with a focus on large-scale image segmentation, bias-aware modeling, and data-driven analysis. In addition to his academic research, Sina is currently serving as an Agricultural Image Repository Software Engineering Intern with Precision Sustainable Agriculture, where he develops scalable pipelines and metadata systems to support AI-driven analysis of crop, soil, and field imagery.
Beyond Manual Measurements: How AI is Accelerating Plant Breeding
Traditional plant breeding relies on manual phenotypic measurements that are time-intensive, subjective, and create bottlenecks in variety development. This presentation demonstrates how computer vision and artificial intelligence are revolutionizing plant selection processes by automating trait extraction from simple photographs. Our cloud-based platform transforms images captured with smartphones, drones, or laboratory cameras into instant, quantitative phenotypic data, including fruit counts, size measurements, and weight estimates.
The system integrates phenotypic data with genotypic, pedigree, and environmental information in a unified database, enabling real-time analytics and decision support through intuitive dashboards. Unlike expensive hardware-dependent solutions, our software-focused approach works with existing camera equipment and standard breeding workflows, making advanced phenotyping accessible to organizations of all sizes.
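As an illustration of the kind of trait extraction described above, here is a hypothetical sketch that turns detector output into counts and size estimates. The `Detection` structure, label names, and pixel-to-centimeter calibration are assumptions for illustration; the platform's actual methods are not described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical detector output for one object."""
    label: str
    box: tuple  # (x1, y1, x2, y2) in pixels
    confidence: float

def extract_traits(detections, px_per_cm: float, min_conf: float = 0.5):
    """Turn raw fruit detections into simple phenotypic measurements."""
    fruit = [d for d in detections if d.label == "fruit" and d.confidence >= min_conf]
    sizes_cm = []
    for d in fruit:
        x1, y1, x2, y2 = d.box
        # Mean of box width/height as a rough diameter estimate.
        sizes_cm.append(((x2 - x1) + (y2 - y1)) / 2 / px_per_cm)
    return {
        "fruit_count": len(fruit),
        "mean_diameter_cm": sum(sizes_cm) / len(sizes_cm) if sizes_cm else None,
    }
```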
About the Speaker
Dr. Sharon Inch is a botanist with a PhD in Plant Pathology and over 20 years of experience in horticulture and agricultural research. Throughout her career, she has witnessed firsthand the inefficiencies of traditional breeding methods, inspiring her to found AgriVision Analytics. As CEO, she leads the development of cloud-based computer vision platforms that transform plant breeding workflows through AI-powered phenotyping. Her work focuses on accelerating variety development and improving breeding decision-making through automated trait extraction and data integration. She is passionate about bridging the gap between advanced technology and practical agricultural applications to address global food security challenges.
AI-assisted sweetpotato yield estimation pipelines using optical sensor data
In this presentation, we will introduce the sensor systems and AI-powered analysis algorithms used in high-throughput sweetpotato post-harvest packing pipelines, developed by the Optical Sensing Lab at NC State University. By collecting image data from both sweetpotato fields and packing lines, we aim to quantitatively optimize grading and yield estimation, as well as storage planning and inventory-order matching.
We built two customized sensor devices: one that collects data from the top bins where sweetpotatoes are received from farmers, and one at the eliminator table before grading and packing. We also developed a compact instance segmentation pipeline that runs on smartphones for rapid in-field yield estimation under resource constraints. To minimize data privacy concerns and Internet connectivity issues, we keep the analysis pipelines on the edge, which introduces a design tradeoff between resource availability and environmental constraints; we will discuss how these considerations shaped the sensor builds. The analysis results and real-time production information are integrated into an interactive online dashboard that stakeholders can use for inventory-order management and operational decision-making.
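To give a flavor of what an on-edge counting-and-sizing pass can look like, here is a minimal sketch using an off-the-shelf compact instance segmentation model (Ultralytics YOLOv8-nano as a stand-in; the OSL pipeline is custom-built). The model file, calibration constant, and image path are illustrative assumptions.

```python
from ultralytics import YOLO  # pip install ultralytics

# Nano-sized segmentation model; small enough for mobile/edge deployment.
model = YOLO("yolov8n-seg.pt")

def estimate_yield(image_path: str, cm2_per_px: float):
    """Count tubers in one frame and estimate per-tuber area from mask pixels."""
    result = model(image_path)[0]  # single-image inference
    if result.masks is None:
        return {"count": 0, "areas_cm2": []}
    # Each mask is an HxW tensor; the pixel sum approximates tuber area.
    areas_px = [float(m.sum()) for m in result.masks.data]
    areas_cm2 = [a * cm2_per_px for a in areas_px]
    return {"count": len(areas_cm2), "areas_cm2": areas_cm2}

# Hypothetical usage with a calibration of 0.01 cm^2 per pixel:
print(estimate_yield("packing_line_frame.jpg", cm2_per_px=0.01))
```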
About the Speaker
Yifan Wu is a Ph.D. candidate at NC State University working in the Optical Sensing Lab (OSL) under the supervision of Dr. Michael Kudenov. His research focuses on developing sensor systems and machine learning platforms for business intelligence applications.
An End-to-End AgTech Use Case in FiftyOne
The agricultural sector is increasingly turning to computer vision to tackle challenges in crop monitoring, pest detection, and yield optimization. Yet, developing robust models in this space often requires careful data exploration, curation, and evaluation—steps that are just as critical as model training itself.
In this talk, we will walk through an end-to-end AgTech use case using FiftyOne, an open-source tool for dataset visualization, curation, and model evaluation. Starting with a pest detection dataset, we will explore the samples and annotations to understand dataset quality and potential pitfalls. From there, we will curate the dataset by filtering, tagging, and identifying edge cases that could impact downstream performance. Next, we’ll train a computer vision model to detect different pest species and demonstrate how FiftyOne can be used to rigorously evaluate the results. Along the way, we’ll highlight how dataset-centric workflows can accelerate experimentation, improve model reliability, and surface actionable insights specific to agricultural applications.
By the end of the session, attendees will gain a practical understanding of how to:
- Explore and diagnose real-world agricultural datasets
- Curate training data for improved performance
- Train and evaluate pest detection models
- Use FiftyOne to close the loop between data and models
This talk will be valuable for anyone working at the intersection of agriculture and computer vision, whether you’re building production models or just beginning to explore AgTech use cases.
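For readers who want a head start, the sketch below shows what that data-model loop looks like with FiftyOne's public API. A zoo dataset stands in for the pest detection dataset from the talk; the label fields, confidence threshold, and tag name are assumptions.

```python
import fiftyone as fo
import fiftyone.zoo as foz
from fiftyone import ViewField as F

# Load a dataset; the "quickstart" zoo dataset stands in for pest detection data.
dataset = foz.load_zoo_dataset("quickstart")

# Explore: launch the App to visually audit samples and annotations.
session = fo.launch_app(dataset)

# Curate: isolate low-confidence predictions and tag them for review.
low_conf = dataset.filter_labels("predictions", F("confidence") < 0.4)
low_conf.tag_samples("needs_review")

# Evaluate: COCO-style detection evaluation against ground truth.
results = dataset.evaluate_detections(
    "predictions", gt_field="ground_truth", eval_key="eval"
)
results.print_report()

# Close the loop: surface the samples with the most false positives first.
session.view = dataset.sort_by("eval_fp", reverse=True)
```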
About the Speaker
Prerna Dhareshwar is a Machine Learning Engineer at Voxel51, where she helps customers leverage FiftyOne to accelerate dataset curation, model development, and evaluation in real-world AI workflows. She brings extensive experience building and deploying computer vision and machine learning systems across industries. Prior to Voxel51, Prerna was a Senior Machine Learning Engineer at Instrumental Inc., where she developed models for defect detection in manufacturing, and a Machine Learning Software Engineer at Pure Storage, focusing on predictive analytics and automation.
