
Details

From Static Pipelines to Co-Evolving AI: Building Vision Systems That Adapt in Real Time, by Gowdhaman Sadhasivam

Virtual event on Zoom and YouTube

If you want to join the discussion remotely, you can submit questions via Zoom Q&A. The Zoom link:
https://acm-org.zoom.us/j
Join via YouTube:
https://youtube.com/live/

AGENDA
6:30 pre-sign-in to test and chat
7:00 SFBayACM upcoming events, introduce the speaker
7:15 speaker presentation starts
8:15 - 8:30 finish, depending on Q&A

After the presentation, join the SF Bay ACM Chapter for an insightful in-person discussion at VRP on a potential owner meeting for a distributed cloud (limited to 15 seats, 8:30-9:30 pm; to inquire, email Lianaye2 at gmail.com with the subject line "PeaceNames").

Talk Description:
For decades, computer vision has been built on rigid, sequential pipelines: data is collected, labeled, modeled, deployed, and optimized in isolation. This paradigm worked in controlled environments but falters in today’s world, where complexity, edge cases, and shifting domains evolve faster than traditional workflows can adapt. In this talk, I’ll share a vision for co-evolving AI systems where models, data, and feedback loops are not static stages but living components that grow together. In a co-evolving workflow, data engineering, model training, and product integration happen in parallel, continuously informed by real-world signals. The result: faster iteration cycles, more resilient models, and dramatically lower costs.

We’ll explore how to harness foundational models, LLMs, and methods like active learning, self-supervised learning, few-shot learning, and zero-shot learning to minimize dependence on ground-truth data, design pipelines that capture edge cases before they become failures, and treat production feedback as a real-time engine of evolution rather than a post-deployment patch. This is not just a technical shift; it’s a blueprint for the future of AI infrastructure: systems that adapt as quickly as the world they seek to understand.
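To make one of these ideas concrete, here is a minimal sketch (illustrative only, not taken from the talk) of the active-learning piece of such a feedback loop: production predictions are scored for uncertainty, the least confident samples are routed for labeling, and the model is retrained on the growing labeled pool. The model, data shapes, batch size, and labels are hypothetical placeholders built on synthetic data.

```python
# Minimal active-learning sketch: uncertainty sampling over a stream of
# unlabeled "production" data, with periodic retraining on newly labeled samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for a small labeled seed set and a stream of unlabeled production data.
X_seed, y_seed = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)
X_stream = rng.normal(size=(5000, 16))

model = LogisticRegression(max_iter=1000).fit(X_seed, y_seed)

for _ in range(3):
    # Score the unlabeled stream; low-confidence samples are candidate edge cases.
    probs = model.predict_proba(X_stream)
    uncertainty = 1.0 - probs.max(axis=1)
    query_idx = np.argsort(uncertainty)[-50:]  # the 50 most uncertain samples

    # In a real system these would go to annotators or a foundation-model labeler;
    # here the labels are faked so the sketch stays runnable.
    y_new = rng.integers(0, 2, query_idx.size)

    # Fold the new labels back in and retrain: feedback drives the next iteration.
    X_seed = np.vstack([X_seed, X_stream[query_idx]])
    y_seed = np.concatenate([y_seed, y_new])
    X_stream = np.delete(X_stream, query_idx, axis=0)
    model = LogisticRegression(max_iter=1000).fit(X_seed, y_seed)
```

In a deployed pipeline, the retrained model would be pushed back into serving and the uncertainty scores themselves become the "real-world signals" the talk describes, so labeling effort concentrates on the edge cases the model currently handles worst.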

What the audience will learn: How to move beyond static pipelines and build AI systems that continuously adapt to real-world signals. Discover practical ways to minimize labeled-data needs, capture edge cases early, and turn production feedback into a real-time driver of system evolution.

Speaker bio:
Gowdhaman Sadhasivam is an award-winning AI leader and Co-Founder & CTO of Labelbees AI, where he builds intelligent infrastructure for real-world, multimodal AI systems. With over 12 years of experience in defense, insurance, and geospatial intelligence, he has led teams in delivering production-grade machine learning systems for Fortune 500 enterprises and U.S. national agencies.

At Orbital Insight, he led AI engineering efforts on multimodal SAR, EO, and AIS fusion systems, contributing to the company’s eventual acquisition by Privateer Space. At EMC Insurance, he transformed legacy operations into AI-driven systems, earning national innovation awards from NAMIC, Plug and Play, and Wolfram Research. He also received the USGIF Golden Ticket Award for his impact on geospatial AI.
A frequent global speaker, mentor, and advisor, Gowdhaman brings field-tested insight into MLOps, GenAI, and enterprise AI adoption, helping organizations move from prototypes to scalable, trusted AI in production.
https://www.linkedin.com/in/gsadhas/

---

Valley Research Park is a coworking research campus of 104,000 square feet hosting 60+ life science and technology companies. VRP has over 100 dry labs, wet labs, and high-power labs sized from 125 to 15,000 square feet. VRP manages all of the traditional office elements: break rooms, conference rooms, outdoor dining spaces, and recreational spaces.

As a plug-and-play lab space, once companies have secured their next milestone and are ready to grow, VRP has 100+ labs for them to expand into.
https://www.valleyresearchpark.com/

Machine Vision
High Scalability Computing
Happy Hour
System Administration
IT Infrastructure
