Oct 30 - Raleigh AI, ML, Computer Vision Meetup


Details
Join us on Oct 30 to hear talks from experts on cutting-edge AI topics at the Raleigh AI, ML, and Computer Vision Meetup.
Date and Time
Oct 30, 2025 at 5:30-8:30 PM
Location
Raleigh Founded
509 W North St Suite 224
Raleigh, NC 27603
Applied AI/ML for urban planning applications
While most AI/ML applications are discussed in enterprise and healthcare contexts, there is potential to use them for the public good. The inherently data-intensive nature of urban planning and its abundance of public-benefit problems lend themselves well to applied AI/ML. The real-world challenges it presents also necessitate innovations in architecture and process. In this talk, I will summarize and share the background, experience, and findings from some of my past and present applied AI/ML work in urban planning.
The projects include identifying mobile home parks in high-resolution aerial imagery using computer vision, identifying occupations that can be retrained into clean-energy occupations using natural language processing combined with unsupervised clustering, comparing large language model and computer vision approaches for identifying small-scale solar photovoltaics, and combining graph neural networks with flow matching for energy network optimization and simulation. These projects offer insights into real-world complexity, the potential to contribute to the public good, and how to respond to challenges and limitations with innovative approaches.
About the Speaker
Kshitiz Khanal is a research staff member at the Institute for Transportation Research and Education at NC State University. He works mainly on applied AI/ML for transportation, land-use, and energy planning. Previously, he conducted doctoral and postdoctoral research at the Department of City and Regional Planning at the University of North Carolina at Chapel Hill, where he applied computer vision, large language models, and legacy machine learning approaches to address issues in urban planning.
Super-Resolution Imagery for Precision Agriculture
Satellite imagery is a rich source of Earth-observation data across spatial, temporal, and spectral dimensions, but its application in precision agriculture is limited by its low spatial resolution, which ranges from 3 to 40 m per pixel. Drone imagery offers high resolution, but mapping a field with it takes substantial effort, and the cost can be high if bands beyond RGB are required.
Our work proposes a super-resolution model that produces high-resolution multispectral images from the RGB bands of drone imagery combined with commercially available multispectral satellite imagery such as Sentinel-2 or PlanetScope SuperDove. The model is trained on multispectral data simulated from a hyperspectral camera, achieving a PSNR of up to 39 dB. This approach reduces the cost of obtaining a multispectral map of a field and provides the data needed to calculate vegetation indices, estimate biomass and crop health, and build further models on top.
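For readers unfamiliar with the 39 dB figure: PSNR (peak signal-to-noise ratio) measures how closely a reconstructed image matches a reference, in decibels. A minimal sketch of the standard formula, with a toy multispectral patch (the data here is synthetic, not from the talk):

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a 4-band "multispectral" patch and a slightly noisy reconstruction.
rng = np.random.default_rng(0)
truth = rng.random((32, 32, 4))                    # values in [0, 1]
recon = truth + rng.normal(0, 0.01, truth.shape)   # small reconstruction error
print(round(psnr(truth, recon), 1))                # roughly 40 dB for sigma = 0.01
```

Higher is better: ~40 dB corresponds to a mean squared error of about 1e-4 on a [0, 1] scale, which is why values near 39 dB indicate a faithful reconstruction.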
About the Speaker
Camilo Zuluaga is a Master’s candidate in Electrical Engineering at North Carolina State University and a research assistant with the Plant Sciences Initiative’s Optical Sensing Lab, working on potato research, remote sensing, polarization imaging, and satellite imagery.
Field-Ready Vision: Building the Agricultural Image Repository (AgIR) for Sustainable Farming
Data—not models—is the bottleneck in agricultural computer vision. This talk shares how Precision Sustainable Agriculture (PSA) is tackling that gap with the Agricultural Image Repository (AgIR): a cloud bank of high-resolution, labeled images spanning weeds (40+ species), cover crops, and cash crops across regions, seasons, and sensors. We’ll show how AgIR blends two complementary streams: (1) semi-field, high-throughput data captured by BenchBot, our open-source, modular gantry that autonomously images plants and feeds a semi-automated annotation pipeline; and (2) true field images that capture real environmental variability. Together, they cut labeling cost, accelerate pretraining, and improve robustness in production.
On top of AgIR, we’ve built a data-centric training stack: hierarchical augmentation groups, batch mixers, a stand-alone visualizer for rapid iteration, and a reproducible PyTorch Lightning pipeline. We’ll cover practical lessons from segmentation (crop/weed/residue/water/soil), handling domain shift between semi-field and field scenes, and designing metadata schemas that actually pay off at model time.
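To make the idea of hierarchical augmentation groups concrete, here is a hedged sketch (all names and transforms hypothetical, not PSA's actual stack): augmentations are organized into named groups and one transform is sampled per group, so every image receives at most one geometric and one photometric change:

```python
import random

# Placeholder transforms on a nested-list "image" of 0-255 pixel values.
def identity(img): return img
def hflip(img):    return [row[::-1] for row in img]                  # geometric: mirror
def brighten(img): return [[min(255, p + 25) for p in row] for row in img]
def darken(img):   return [[max(0, p - 25) for p in row] for row in img]

# Hypothetical groups: one transform is drawn from each, in a fixed order.
AUG_GROUPS = {
    "geometric":   [identity, hflip],
    "photometric": [identity, brighten, darken],
}

def augment(img, rng):
    for group in ("geometric", "photometric"):
        img = rng.choice(AUG_GROUPS[group])(img)
    return img

img = [[10, 200], [30, 40]]
print(augment(img, random.Random(0)))  # one geometric + one photometric change
```

Grouping augmentations this way keeps incompatible transforms from stacking and makes the sampling policy easy to vary per experiment, which is the kind of rapid iteration the talk's visualizer and batch mixers support.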
About the Speaker
Sina Baghbanijam is a Ph.D. candidate in Electrical and Computer Engineering at North Carolina State University, where his research centers on generative AI, computer vision, and machine learning. His work bridges advanced AI methods with real-world applications across agriculture, medicine, and the social sciences, with a focus on large-scale image segmentation, bias-aware modeling, and data-driven analysis. In addition to his academic research, Sina is currently serving as an Agricultural Image Repository Software Engineering Intern with Precision Sustainable Agriculture, where he develops scalable pipelines and metadata systems to support AI-driven analysis of crop, soil, and field imagery. His interdisciplinary background combines technical depth with practical problem-solving, aiming to accelerate the adoption of AI in sustainable agriculture and beyond.
Beyond the Lab: Real-World Anomaly Detection for Agricultural Computer Vision
Anomaly detection is transforming manufacturing and surveillance, but what about agriculture? Can AI actually detect plant diseases and pest damage early enough to make a difference?
This talk demonstrates how anomaly detection identifies and localizes crop problems using coffee leaf health as our primary example. We'll start with the foundational theory, then examine how these models detect rust and miner damage in leaf imagery.
The session includes a comprehensive hands-on workflow using the open-source FiftyOne computer vision toolkit, covering dataset curation, patch extraction, model training, and result visualization. You'll gain both theoretical understanding of anomaly detection in computer vision and practical experience applying these techniques to agricultural challenges and other domains.
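As a rough illustration of the patch-extraction-and-scoring idea (a minimal numpy sketch under simplifying assumptions, not the workshop's FiftyOne code): fit per-pixel statistics on patches from healthy imagery, then flag test patches whose deviation is large, which localizes damage to a region of the leaf:

```python
import numpy as np

def extract_patches(img, size=8):
    """Tile a 2D image into non-overlapping size x size patches, flattened."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

def fit_normal(patches):
    """Per-pixel mean/std over healthy patches (the 'normal' model)."""
    return patches.mean(axis=0), patches.std(axis=0) + 1e-6

def anomaly_scores(patches, mean, std):
    """Mean absolute z-score per patch; high values suggest anomalies."""
    return np.abs((patches - mean) / std).mean(axis=1)

rng = np.random.default_rng(0)
healthy = rng.normal(0.5, 0.05, (64, 64))   # synthetic uniform "healthy" texture
test_img = rng.normal(0.5, 0.05, (64, 64))
test_img[16:24, 16:24] += 0.4               # synthetic bright "lesion" patch

mean, std = fit_normal(extract_patches(healthy))
scores = anomaly_scores(extract_patches(test_img), mean, std)
print(int(scores.argmax()))                 # grid index of the most anomalous patch
```

Production systems replace the raw-pixel statistics with learned features and use richer scoring, but the pipeline shape (curate normal data, extract patches, fit a model of normality, score and visualize) matches the workflow the session walks through.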
About the Speaker
Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in computer vision, robotics, and machine learning applied to agriculture, since the early 2000s in Colombia. During her PhD and postdoctoral research, she deployed multiple low-cost, smart edge and IoT computing technologies that users such as farmers can operate without expertise in computer vision systems. The central objective of Paula's research has been to develop intelligent systems and machines that can understand and recreate the visual world around us to solve real-world needs, such as those in the agricultural industry.