What we’re about

🖖 This group is for data scientists, machine learning engineers, and open source enthusiasts.

Every month we’ll bring you diverse speakers working at the cutting edge of AI, machine learning, and computer vision.

  • Are you interested in speaking at a future Meetup?
  • Is your company interested in sponsoring a Meetup?

Send me a DM on LinkedIn.

This Meetup is sponsored by Voxel51, the lead maintainers of the open source FiftyOne computer vision toolset. To learn more, visit the FiftyOne project page on GitHub.

Upcoming events

  • Network event
    Oct 30 - AI, ML and Computer Vision Meetup
    Online
    447 attendees from 44 groups

    Join the virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

    Date, Time and Location

    Oct 30, 2025
    9 AM Pacific
    Online.
    Register for the Zoom!

    The Agent Factory: Building a Platform for Enterprise-Wide AI Automation

    In this talk we will explore what it takes to build an enterprise-ready AI automation platform at scale. The topics covered will include:

    • The Scale Challenge: E-commerce environments expose the limitations of single-point AI solutions, which create fragmented ecosystems lacking cohesion and efficient resource sharing across complex, knowledge-based work.
    • Root Cause Analysis Success: Flipkart’s initial AI agent transformed business analysis from days-long investigations to near-instantaneous insights, proving the concept while revealing broader platform opportunities.
    • Platform Strategy Evolution: Success across Engineering (SDLC, SRE), Operations, and Commerce teams necessitated a unified, multi-tenant platform serving diverse use cases with consistency and operational efficiency.
    • Architectural Foundation: Leveraging framework-agnostic design principles we were able to emphasize modularity, which enabled teams to leverage different AI models while maintaining consistent interfaces and scalable infrastructure.
    • The “Agent Garden” Vision: Flipkart’s roadmap envisions an internal ecosystem where teams discover, deploy, and contribute AI agents, providing a practical blueprint for scalable AI agent infrastructure development.

    About the Speaker

    Virender Bhargav is a seasoned engineering leader at Flipkart whose expertise spans business technology integration, enterprise applications, system design and architecture, and building highly scalable systems. With a deep understanding of technology, he has spearheaded teams, modernized technology landscapes, and managed core platform layers and strategic products. With extensive experience driving innovation at companies like Paytm and Flipkart, his contributions have left a lasting impact on the industry.

    Scaling Generative Models at Scale with Ray and PyTorch

    Generative image models like Stable Diffusion have opened up exciting possibilities for personalization, creativity, and scalable deployment. However, fine-tuning them in production-grade settings poses challenges: managing compute, hyperparameters, model size, data, and distributed coordination is nontrivial.

    In this talk, we’ll dive deep into how to fine-tune Stable Diffusion models using Ray Train (with Hugging Face Diffusers), including approaches like DreamBooth and LoRA. We’ll cover what works (and what doesn’t) in scaling out training jobs, handling large data, optimizing for GPU memory and speed, and validating outputs. Attendees will come away with practical insights and patterns they can use to fine-tune generative models in their own work.
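    The low-rank update at the heart of LoRA can be sketched numerically. Below is a toy NumPy illustration of the idea only (not the Diffusers or Ray Train API); the layer shapes, rank, and `alpha` value are arbitrary:

```python
import numpy as np

# Toy LoRA: instead of updating a frozen weight matrix W (d_out x d_in),
# train a low-rank update B @ A with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r = 768, 768, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (init 0)
alpha = 16                                  # LoRA scaling hyperparameter

def adapted_forward(x):
    # Same output shape as the full layer, but only A and B are trained.
    # With B initialized to zero, this starts out identical to W @ x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = adapted_forward(x)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs full fine-tune: {full_params} "
      f"({100 * lora_params / full_params:.1f}%)")
```

    The parameter count is the point: the adapter trains about 2% of the weights of a full fine-tune at this (made-up) size.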

    About the Speaker

    Suman Debnath is a Technical Lead (ML) at Anyscale, where he focuses on distributed training, fine-tuning, and inference optimization at scale on the cloud. His work centers around building and optimizing end-to-end machine learning workflows powered by distributed computing frameworks like Ray, enabling scalable and efficient ML systems.
    Suman’s expertise spans Natural Language Processing (NLP), Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG).
    Earlier in his career, he developed performance benchmarking and monitoring tools for distributed storage systems. Beyond engineering, Suman is an active community contributor, having spoken at over 100 global conferences and events, including PyCon, PyData, ODSC, AIE and numerous meetups worldwide.

    Privacy-Preserving Computer Vision through Optics Learning

    Cameras are now ubiquitous, powering computer vision systems that assist us in everyday tasks and critical settings such as operating rooms. Yet, their widespread use raises serious privacy concerns: traditional cameras are designed to capture high-resolution images, making it easy to identify sensitive attributes such as faces, nudity, or personal objects. Once acquired, such data can be misused if accessed by adversaries. Existing software-based privacy mechanisms, such as blurring or pixelation, often degrade task performance and leave vulnerabilities in the processing pipeline.

    In this talk, we explore an alternative question: how can we preserve privacy before or during image acquisition? By revisiting the image formation model, we show how camera optics themselves can be learned and optimized to acquire images that are unintelligible to humans yet remain useful for downstream vision tasks like action recognition. We will discuss recent approaches to learning camera lenses that intentionally produce privacy-preserving images, blurry and unrecognizable to the human eye, but still effective for machine perception. This paradigm shift opens the door to a new generation of cameras that embed privacy directly into their hardware design.

    About the Speaker

    Carlos Hinojosa is a Postdoctoral researcher at King Abdullah University of Science and Technology (KAUST) working with Prof. Bernard Ghanem. His research interests span Computer Vision, Machine Learning, AI Safety, and AI for Science. He focuses on developing safe, accurate, and efficient vision systems and machine-learning models that can reliably perceive, understand, and act on information, while ensuring robustness, protecting privacy, and aligning with societal values.

    It's a (Blind) Match! Towards Vision-Language Correspondence without Parallel Data

    Can we match vision and language embeddings without any supervision? According to the platonic representation hypothesis, as model and dataset scales increase, the pairwise distances between corresponding representations become increasingly similar across embedding spaces. Our study demonstrates that pairwise distances are often sufficient to enable unsupervised matching, allowing vision-language correspondences to be discovered without any parallel data.
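    The core claim, that pairwise distances alone can pin down the correspondence, can be demonstrated with a toy brute-force matcher (illustrative only; the talk's actual method is about real embedding spaces and scales far beyond exhaustive search over permutations):

```python
import itertools
import math
import random

# Toy "blind matching": recover which "language" embedding corresponds to
# which "vision" embedding using only pairwise distances within each space.
random.seed(0)
n, d = 6, 4
vision = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]

# Pretend the language space has the same geometry but the items arrive
# shuffled (a shared rotation would also leave pairwise distances unchanged).
true_perm = list(range(n))
random.shuffle(true_perm)
language = [vision[i] for i in true_perm]

def dist_matrix(pts):
    return [[math.dist(p, q) for q in pts] for p in pts]

DV, DL = dist_matrix(vision), dist_matrix(language)

# Brute-force the permutation whose relabeled language distances best match
# the vision distances (feasible only for tiny n; the point is the principle).
def cost(perm):
    return sum((DV[i][j] - DL[perm[i]][perm[j]]) ** 2
               for i in range(n) for j in range(n))

best = min(itertools.permutations(range(n)), key=cost)
# best[i] is the index of the language embedding matched to vision item i;
# no parallel (paired) data was used, only within-space distances.
```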

    About the Speaker

    Dominik Schnaus is a third-year Ph.D. student in the Computer Vision Group at the Technical University of Munich (TUM), supervised by Daniel Cremers. His research centers on multimodal and self-supervised learning with a special emphasis on understanding similarities across embedding spaces of different modalities.

    56 attendees from this group
  • Network event
    Physical AI Data Pipelines with NVIDIA Omniverse NuRec, Cosmos and FiftyOne
    Online
    275 attendees from 47 groups

    Join Voxel51 and NVIDIA as they unveil a breakthrough that’s changing how Physical AI systems are built. In this first-ever demo featuring NVIDIA Omniverse NuRec and NVIDIA Cosmos with FiftyOne, you’ll learn how to create validated, simulation-ready data pipelines—cutting testing costs, eliminating manual data audits, and accelerating development from months to days.

    Date and Location

    Nov 5, 2025
    9:00-10:30 AM Pacific
    Online. Register for the Zoom

    Developing autonomous vehicles and humanoid robots requires rigorous simulations that capture real-world complexity. The critical barrier that keeps teams from achieving success isn’t the simulation engine itself, but the data that powers it.

    As Physical AI systems ingest petabytes of multisensor data, converting this raw input into validated, simulation-ready data pipelines remains a hidden bottleneck. A camera-to-LiDAR projection off by a few pixels, timestamps misaligned by a few milliseconds, or inaccurate coordinate systems will cascade into flawed neural reconstructions and synthetic data.
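    The projection arithmetic behind that failure mode can be sketched in a few lines. This is a toy pinhole model with made-up intrinsics and an assumed 0.2-degree extrinsic error, not NVIDIA's or Voxel51's pipeline:

```python
import math

# Toy pinhole projection of a LiDAR point into image coordinates, showing how
# a tiny calibration error becomes a multi-pixel offset. All numbers are
# illustrative, not from any real sensor rig.
fx, fy = 1000.0, 1000.0   # focal lengths in pixels
cx, cy = 640.0, 360.0     # principal point

def project(X, Y, Z):
    # (X, Y, Z) is a point in the camera frame, Z pointing forward (meters).
    return fx * X / Z + cx, fy * Y / Z + cy

u_good, v_good = project(2.0, 0.5, 10.0)   # a LiDAR return 10 m ahead

# A 0.2-degree yaw error in the camera-to-LiDAR extrinsics shifts the point
# laterally by roughly Z * tan(0.2 deg) before projection.
x_err = 10.0 * math.tan(math.radians(0.2))
u_bad, _ = project(2.0 + x_err, 0.5, 10.0)
print(f"pixel offset from a 0.2-degree extrinsic error: {u_bad - u_good:.1f} px")
```

    Even this small error lands the projected point several pixels off, which is exactly the kind of silent misalignment that then propagates into neural reconstructions.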

    Without a well-orchestrated data pipeline, even the most advanced simulation platforms end up consuming imperfect data, wasting weeks of effort and thousands of dollars in testing and compute costs.

    In a first-ever demo featuring NVIDIA Omniverse NuRec and NVIDIA Cosmos with FiftyOne, you’ll discover how to:

    • Eliminate manual data audits with an automated workflow that calibrates, aligns, and ensures data integrity across cameras, LiDAR, radar, and other sensors
    • Curate and enrich the data for neural reconstructions and synthetic data generation
    • Reduce Physical AI testing and QA costs by up to 80%
    • Accelerate Physical AI development from months to days

    Who should attend:

    • Data Engineers, MLOps & ML Engineers working with Physical AI data
    • Technical leaders and Managers driving Physical AI projects from prototype to production
    • AV/Robotics Researchers building safety-critical apps with cutting-edge tech
    • Product & Strategy leaders seeking to accelerate development while reducing infra costs and risks

    About the Speakers

    Itai H Zadok is a Senior Product Manager, Autonomous Vehicles Simulation, at NVIDIA

    Daniel Gural is a Machine Learning Engineer and Evangelist at Voxel51

    25 attendees from this group
  • Network event
    Nov 6 - Visual Document AI: Because a Pixel is Worth a Thousand Tokens
    Online
    131 attendees from 47 groups

    Join us for a virtual event to hear talks from experts on the latest developments in Visual Document AI.

    Date and Location

    Nov 6, 2025
    9-11 AM Pacific
    Online.
    Register for the Zoom!

    Document AI: A Review of the Latest Models, Tasks and Tools

    In this talk, we'll go through everything document AI: trends, models, tasks, and tools. By the end, you'll be ready to start building apps based on document models.

    About the Speaker

    Merve Noyan works on multimodal AI and computer vision at Hugging Face, and she's the author of the O'Reilly book Vision Language Models.

    Run Document VLMs in Voxel51 with the VLM Run Plugin — PDF to JSON in Seconds

    The new VLM Run Plugin for Voxel51 enables seamless execution of document vision-language models directly within the Voxel51 environment. This integration transforms complex document workflows — from PDFs and scanned forms to reports — into structured JSON outputs in seconds. By treating documents as images, our approach remains general, scalable, and compatible with any visual model architecture. The plugin connects visual data curation with model inference, empowering teams to run, visualize, and evaluate document understanding models effortlessly. Document AI is now faster, reproducible, and natively integrated into your Voxel51 workflows.

    About the Speaker

    Dinesh Reddy is a founding team member of VLM Run, where he is helping nurture the platform from a sapling into a robust ecosystem for running and evaluating vision-language models across modalities. Previously, he was a scientist at Amazon AWS AI, working on large-scale machine learning systems for intelligent document understanding and visual AI. He completed his Ph.D. at the Robotics Institute, Carnegie Mellon University, focusing on combining learning-based methods with 3D computer vision for in-the-wild data. His research has been recognized with the Best Paper Award at IEEE IVS 2021 and fellowships from Amazon Go and Qualcomm.

    CommonForms: Automatically Making PDFs Fillable

    Converting static PDFs into fillable forms remains a surprisingly difficult task, even with the best commercial tools available today. We show that with careful dataset curation and model tuning, it is possible to train high-quality form field detectors for under $500. As part of this effort, we introduce CommonForms, a large-scale dataset of nearly half a million curated form images. We also release a family of highly accurate form field detectors, FFDNet-S and FFDNet-L.

    About the Speaker

    Joe Barrow is a researcher at Pattern Data, specializing in document AI and information extraction. He previously worked at the Adobe Document Intelligence Lab after receiving his PhD from the University of Maryland in 2022.

    Visual Document Retrieval: How to Cluster, Search and Uncover Biases in Document Image Datasets Using Embeddings

    In this talk you'll learn about the task of visual document retrieval and the models widely used by the community, and see them in action in the open source FiftyOne App. You'll learn how to use these models to identify groups and clusters of documents, find unique documents, uncover biases in your visual document dataset, and search over your document corpus using natural language.
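    Under the hood, embedding-based search of this kind reduces to ranking documents by similarity to a query embedding. A minimal sketch of that idea, with made-up 3-d embeddings and filenames (not the FiftyOne API; real pipelines get these vectors from a visual document retrieval model):

```python
import math

# Minimal embedding search: rank "document" vectors by cosine similarity to a
# "query" vector. The toy embeddings and filenames below are invented.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

doc_embeddings = {
    "invoice_001.png":  [0.9, 0.1, 0.0],
    "contract_017.png": [0.1, 0.8, 0.2],
    "receipt_042.png":  [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]  # e.g. the embedding of the text query "invoices"

ranked = sorted(doc_embeddings,
                key=lambda k: cosine(query, doc_embeddings[k]),
                reverse=True)
print(ranked)
```

    The same ranking primitive also powers clustering and duplicate detection: nearby embeddings form groups, and isolated embeddings flag unique or out-of-distribution documents.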

    About the Speaker

    Harpreet Sahota is a hacker-in-residence and machine learning engineer with a passion for deep learning and generative AI. He’s got a deep interest in VLMs, Visual Agents, Document AI, and Physical AI.

    9 attendees from this group
  • Network event
    Nov 13 - Women in AI
    Online
    144 attendees from 47 groups

    Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13.

    Date and Location

    Nov 13, 2025
    9 AM Pacific
    Online.
    Register for the Zoom!

    Copy, Paste, Customize! The Template Approach to AI Engineering

    Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability.

    Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations.
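    A standardized, fillable prompt template with validation can be sketched with Python's standard library. The field names and wording below are hypothetical, not the speaker's framework:

```python
from string import Template

# Minimal "fillable" prompt template with validation: every required field
# must be supplied before the prompt is sent to a model, so a missing field
# fails loudly at render time instead of silently producing a bad prompt.
SUMMARY_TEMPLATE = Template(
    "You are a support analyst. Summarize the ticket below in $max_words "
    "words, in a $tone tone.\n\nTicket:\n$ticket_text"
)
REQUIRED_FIELDS = {"max_words", "tone", "ticket_text"}

def render_prompt(template, fields):
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing template fields: {sorted(missing)}")
    return template.substitute(fields)

prompt = render_prompt(SUMMARY_TEMPLATE, {
    "max_words": 50,
    "tone": "neutral",
    "ticket_text": "App crashes on login since v2.3.1.",
})
```

    Treating the template as a versioned artifact (checked in, tested, reviewed) is what turns prompt engineering into the systematic practice the talk describes.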

    About the Speaker

    Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science.

    Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne

    Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety.

    About the Speaker

    Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

    The Heart of Innovation: Women, AI, and the Future of Healthcare

    This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world.

    About the Speaker

    Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology.

    Language Diffusion Models

    Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). This talk challenges that notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens.

    Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue.
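    The forward masking process described above can be sketched in a few lines (a toy illustration of masked diffusion noising, not LLaDA's implementation):

```python
import random

MASK = "<M>"

def forward_mask(tokens, t, rng):
    # Forward process of a masked diffusion model: each token is independently
    # replaced by the mask symbol with probability t, so t=0 keeps the
    # sequence intact and t=1 masks everything. The reverse model is trained
    # to predict the original tokens in the masked slots.
    return [MASK if rng.random() < t else tok for tok in tokens]

rng = random.Random(0)
tokens = ["the", "cat", "sat", "on", "the", "mat"]
noisy = forward_mask(tokens, 0.5, rng)
print(noisy)
```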

    About the Speaker

    Jayita Bhattacharyya is an AI/ML nerd with a blend of technical speaking and hackathon wizardry, applying tech to solve real-world problems. Her focus these days is on generative AI, helping software teams incorporate AI to transform software engineering.

    15 attendees from this group

Members: 3,934