About us

🖖 This virtual group is for data scientists, machine learning engineers, and open source enthusiasts.

Every month we’ll bring you diverse speakers working at the cutting edge of AI, machine learning, and computer vision.

  • Are you interested in speaking at a future Meetup?
  • Is your company interested in sponsoring a Meetup?

Send me a DM on LinkedIn

This Meetup is sponsored by Voxel51, the lead maintainers of the open source FiftyOne computer vision toolset. To learn more, visit the FiftyOne project page on GitHub.

Upcoming events

  • Network event
    May 20 - Getting Started with FiftyOne

    Online
    101 attendees from 48 groups

    This workshop provides a technical foundation for managing large scale computer vision datasets. You will learn to curate, visualize, and evaluate models using the open source FiftyOne app.

    Date, Time and Location

    May 20, 2026
    10 AM - 11 AM Pacific
    Online. Register for the Zoom!

    The session covers data ingestion, embedding visualization, and model failure analysis. You will build workflows to identify dataset bias, find annotation errors, and select informative samples for training. Attendees leave with a framework for data centric AI for research and production pipelines, prioritizing data quality over pure model iteration.
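
    As a rough illustration of the ingestion step (not the workshop code itself), a minimal sketch might look like this; the directory path and dataset name are placeholders:

    ```python
    import fiftyone as fo

    # Load a directory of images into a FiftyOne dataset
    # ("/path/to/images" is a placeholder for your own data)
    dataset = fo.Dataset.from_dir(
        dataset_dir="/path/to/images",
        dataset_type=fo.types.ImageDirectory,
        name="getting-started-demo",
    )

    # Launch the FiftyOne App to browse and curate the samples
    session = fo.launch_app(dataset)
    ```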

    What you'll learn

    • Structure unstructured data. Map data and metadata into a queryable schema for images, videos, and point clouds.
    • Query datasets with the FiftyOne SDK. Create complex views based on model predictions, labels, and custom tags. Use the SDK to filter data based on logical conditions and confidence scores (see the sketch after this list).
    • Visualize high dimensional embeddings. Project features into lower dimensions to find clusters of similar samples. Identify data gaps and outliers using FiftyOne Brain.
    • Automate data curation. Implement algorithmic measures to select diverse subsets for training. Reduce labeling costs by prioritizing high entropy samples.
    • Debug model performance. Run evaluation routines to generate confusion matrices and precision recall curves. Visualize false positives and false negatives directly in the App to understand model failures.
    • Customize FiftyOne. Build custom dashboards and interactive panels. Create specialized views for domain specific tasks.
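
    To make the querying, embedding, and evaluation bullets above concrete, here is a minimal sketch against FiftyOne's quickstart zoo dataset as a stand-in for your own data; it is an illustration, not the workshop notebook:

    ```python
    import fiftyone as fo
    import fiftyone.brain as fob
    import fiftyone.zoo as foz
    from fiftyone import ViewField as F

    # Small sample dataset with ground truth and predictions as a stand-in
    dataset = foz.load_zoo_dataset("quickstart")

    # Query: keep only predictions above a confidence threshold
    high_conf = dataset.filter_labels("predictions", F("confidence") > 0.75)

    # Embeddings: project samples into 2D to spot clusters, gaps, and outliers
    fob.compute_visualization(dataset, brain_key="img_viz")

    # Evaluation: compare predictions to ground truth and inspect failure modes
    results = dataset.evaluate_detections(
        "predictions", gt_field="ground_truth", eval_key="eval"
    )
    results.print_report()

    # Browse the filtered view (e.g., false positives/negatives) in the App
    session = fo.launch_app(high_conf)
    ```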

    Prerequisites:

    • Working knowledge of Python and machine learning and/or computer vision fundamentals.

    All attendees will get access to the tutorials and code examples used in the workshop.
    1 attendee from this group
  • Network event
    May 21 - Women in AI Meetup

    Online
    227 attendees from 48 groups

    Hear talks from experts on the latest topics in AI, ML, and computer vision on May 21.

    Date, Time and Location

    May 21, 2026
    9 AM - 11 AM Pacific
    Online.
    Register for the Zoom!

    Beyond Models: LLM-Guided Reinforcement Learning for Real-World Wireless Systems

    Reinforcement learning agents often perform well in simulation but break down when deployed in real, non-stationary, constraint-driven environments such as wireless systems. This work explores using large language models not as annotators or reward hacks, but as a reasoning layer that guides RL decision-making with domain logic, scenario interpretation, and adaptive constraints.

    We present an architecture where the LLM provides structured, high-level advisory signals while the RL policy remains the final action authority to avoid hallucination-driven failures. Early experiments show that this hybrid setup improves robustness under distribution shifts and complex constraint scenarios where standard RL collapses. The goal is not to replace RL with LLMs, but to combine learning and reasoning into a more deployable control-intelligence framework.
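
    The abstract does not include code, but the control loop it describes might look roughly like the following illustrative sketch, where `env`, `policy`, and `get_llm_advice` are hypothetical placeholders rather than the speakers' implementation:

    ```python
    # Illustrative sketch only (not the speakers' implementation):
    # the LLM supplies advisory signals; the RL policy keeps final authority.
    # `env`, `policy`, and `get_llm_advice` are hypothetical placeholders.

    def run_episode(env, policy, get_llm_advice, max_steps=1000):
        obs = env.reset()
        for _ in range(max_steps):
            # The LLM interprets the scenario and returns structured hints
            # (active constraints, suggested action ranges), never raw actions
            advice = get_llm_advice(obs)

            # The RL policy consumes the observation plus the advisory signal
            # but remains the final action authority
            action = policy.act(obs, advice)

            # Guardrail: clip to hard constraints so hallucinated guidance
            # cannot push the system into invalid operating points
            action = env.clip_to_constraints(action)

            obs, reward, done, info = env.step(action)
            if done:
                break
    ```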

    About the Speaker

    Fatemeh Lotfi is a Ph.D. researcher focused on integrating large language models and reinforcement learning for adaptive wireless control systems. Her work targets the limitations of classical RL under real-world uncertainty by introducing reasoning-driven guidance mechanisms using LLMs. She has contributed to multiple AI-for-infrastructure projects, including advanced O-RAN automation.

    Responsible and Ethical AI in Healthcare: Building Trustworthy and Inclusive Intelligent Systems

    In this session, I will discuss how Responsible AI principles, including fairness, transparency, accountability, and reliability, can be practically embedded into healthcare AI systems. Key discussion points will include:

    • Addressing bias and equity challenges in healthcare datasets and model training.
    • Building explainable and interpretable AI to strengthen clinician trust and adoption.
    • Ensuring ethical deployment of generative AI models within regulated healthcare environments.
    • Establishing governance frameworks for data privacy, model monitoring, and regulatory compliance.

    About the Speaker

    Jahnavi Kachhia is the Global Product Owner, AI & ML at Abbott, leading large-scale AI initiatives for the FreeStyle Libre platform to enhance clinical decision-making and patient outcomes. Previously at Meta’s Reality Labs, she advanced AR/VR innovation and LLM-based intelligent systems. An active contributor to the AI research community, she serves on the IJCAI 2025 Program Committee and reviews for AAAI, IJCNN, and IEEE conferences.

    AI Applications in Drug Repurposing

    Drug repurposing is increasingly important because it offers a faster, lower-cost path to therapeutic discovery compared to de novo drug development, especially in oncology where many cancers still lack effective targeted options. In under-studied cancers such as endometrial cancer, the challenge is often a lack of large, high-quality clinical or response datasets, making purely data-dependent approaches difficult to scale reliably. This motivates combining data-independent strategies (e.g., pathway- and mechanism-driven modeling) with data-dependent learning when interaction evidence is available. A practical and scalable direction is drug–target interaction (DTI) prediction, where AI models can leverage molecular and protein representations to prioritize mechanistically plausible drug candidates for repurposing.
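
    As a rough illustration of the DTI-prediction idea (not the speaker's method), a common baseline pairs a molecular fingerprint of the drug with a simple protein representation and trains a classifier on known interaction pairs; the feature choices below are assumptions:

    ```python
    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import AllChem
    from sklearn.linear_model import LogisticRegression

    def drug_features(smiles, n_bits=2048):
        """Morgan fingerprint of the drug molecule."""
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
        return np.array(fp)

    def protein_features(sequence):
        """Toy protein representation: amino-acid composition.
        A real pipeline would use a learned protein embedding."""
        alphabet = "ACDEFGHIKLMNPQRSTVWY"
        counts = np.array([sequence.count(a) for a in alphabet], dtype=float)
        return counts / max(len(sequence), 1)

    def train_dti_model(pairs):
        """pairs: list of (smiles, protein_sequence, interacts) tuples."""
        X = np.array([
            np.concatenate([drug_features(s), protein_features(p)])
            for s, p, _ in pairs
        ])
        y = np.array([label for _, _, label in pairs])
        return LogisticRegression(max_iter=1000).fit(X, y)
    ```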

    About the Speaker

    Madhurima Mondal's academic journey has been shaped by strong foundations in mathematical and scientific problem-solving, including multiple national-level achievements such as the Regional Mathematics Olympiad (RMO), NTSE, and the KVPY fellowship. She completed her B.Tech and M.Tech in Electronics & Electrical Communication Engineering at IIT Kharagpur and is currently a PhD candidate in Electrical & Computer Engineering at Texas A&M University.

    Mapping to Belonging: How Ethically Governed AI Can Make Real Places More Accessible, Legible, and Human

    Can AI help people belong in the places where they live, work, travel, and get together?

    This talk explores that question through real-world work at the intersection of accessibility, computer vision mapping, civic data, and ethically governed AI. I will show how AI can support the collection and interpretation of pedestrian accessibility data, reduce the burden of documenting barriers, and help transform lived experience into structured information that can be used across routing tools, planning systems, and public decision-making. I will also argue that public-interest AI only works when it is governed well. In accessibility work, the risks are clear: over-averaging, hidden bias, false completeness, and systems that optimize for efficiency while overlooking the people most affected by missing or poor-quality data. Ethically governed AI must therefore be designed to preserve local context, support transparency, include community participation, and make room for experiences that conventional systems often ignore.

    About the Speaker

    Anat Caspi is Director of the Taskar Center for Accessible Technology at the University of Washington, where she leads research and public-interest technology efforts focused on accessibility, mobility, and inclusive transportation data.

    3 attendees from this group
  • Network event
    May 27 - Perceptron AI and FiftyOne for Video Understanding Workshop

    Online
    58 attendees from 48 groups

    Join us for a hands-on virtual session on May 27 exploring video-native multimodal AI and how to integrate cutting-edge video understanding models into your computer vision workflows.

    Date, Time and Location

    May 27, 2026
    9:00 AM - 11:00 AM Pacific
    Online. Register for Zoom!

    Video-Native Multimodal Models for Video and Image Understanding

    In this 20-minute talk, Akshat will introduce Perceptron’s latest release, a video-native multimodal model that matches or exceeds frontier models from Google and Alibaba on video and image understanding at a fraction of their inference cost. He’ll walk through the capabilities that move the needle for real video workloads: temporal grounding to clip precise events from long streams, egocentric reasoning for first-person and wearable contexts, and structured “thinking traces” that reason over motion and physical space. He’ll also cover the image-side advances production perception teams care about: reliable pointing, point-by-example one-shot visual search, dense counting, dial/gauge/clock reading, and structured document extraction.

    About the Speaker

    Akshat Shrivastava is the CTO and co-founder of Perceptron; he previously led AR On-Device at Meta and conducted research at UW.

    Getting Started with Perceptron AI in FiftyOne

    In the second half of the session, Harpreet Sahota will walk through how to get started using Perceptron’s video-native multimodal model within FiftyOne for real-world video understanding workflows. He’ll demonstrate how to connect to the API, explore multimodal outputs inside FiftyOne, and build practical workflows for tasks like temporal event analysis, visual search, and video dataset inspection. Attendees will leave with a hands-on understanding of how to integrate state-of-the-art video perception models into their existing computer vision pipelines.
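
    The exact Perceptron client is covered in the session; the general pattern of attaching an external model's outputs to a FiftyOne video dataset looks roughly like this sketch, in which `query_perceptron` and the video directory are hypothetical placeholders:

    ```python
    import fiftyone as fo

    def query_perceptron(video_path, prompt):
        """Hypothetical placeholder for the real Perceptron client call."""
        raise NotImplementedError("swap in the actual Perceptron API client")

    # Load a directory of videos ("/path/to/videos" is a placeholder)
    dataset = fo.Dataset.from_videos_dir("/path/to/videos", name="perceptron-demo")

    for sample in dataset:
        # Store the model's raw answer on the sample so it can be explored
        # alongside the video in the FiftyOne App
        sample["perceptron_summary"] = query_perceptron(
            sample.filepath, prompt="Describe the key events in this clip"
        )
        sample.save()

    session = fo.launch_app(dataset)
    ```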

    About the Speaker

    Harpreet Sahota is a hacker-in-residence and machine learning engineer with a passion for deep learning and generative AI. He has a deep interest in VLMs, Visual Agents, Document AI, and Physical AI.

  • Network event
    June 9 - Visual AI in Healthcare: Ground Truth in the Foundation-Model Era

    Online
    92 attendees from 48 groups

    Learn how to handle expert label disagreement and build high-performing, fine-tuned medical foundation models for clinical imaging tasks.

    Date, Time and Location

    Jun 09, 2026
    9:00 AM - 10:30 AM Pacific
    Online. Register for the Zoom!

    Medical imaging teams are increasingly fine-tuning foundation models like UNI, MedSAM2, and BiomedCLIP on small in-house datasets. At that scale, label disagreement is a dominant cause of model failures, and the disputed ground truth is what regulators will ask you to defend. We'll build a medical imaging dataset in FiftyOne, surfacing and analyzing the cases where reviewers disagree. From there, we'll fine-tune a foundation model on cleaned data and use FiftyOne to evaluate where our model succeeds and fails, and which data is needed to move the model’s performance forward.
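
    As a rough sketch of treating multiple expert annotations as first-class fields and surfacing disagreement (field names and labels are illustrative, not the workshop schema):

    ```python
    import fiftyone as fo
    from fiftyone import ViewField as F

    dataset = fo.Dataset("medical-review-demo")

    # Each reviewer's read lives in its own first-class label field
    sample = fo.Sample(filepath="/path/to/slide_001.png")
    sample["reviewer_a"] = fo.Classification(label="tumor", confidence=0.9)
    sample["reviewer_b"] = fo.Classification(label="benign", confidence=0.6)
    dataset.add_sample(sample)

    # Surface the cases where reviewers split, for adjudication before fine-tuning
    disagreements = dataset.match(F("reviewer_a.label") != F("reviewer_b.label"))
    print(f"{len(disagreements)} samples need adjudication")

    session = fo.launch_app(disagreements)
    ```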

    You’ll learn how to:

    • Build a medical imaging dataset that preserves multiple expert annotations as first-class fields
    • Use FiftyOne views, embedding similarity, and confidence-disagreement signals to find the samples where reviewers split
    • Run label-quality screens, near-duplicate detection, and active-learning sample selection using foundation model embeddings
    • Fine-tune a medical foundation model on a defensible dataset, with auditable and versioned experiment tracking
    • Filter and slice evaluation for regulatory and clinical readiness
    • Drive the pipeline with natural-language agents using the FiftyOne MCP Server and Skills to run the same curation, evaluation, and review workflows from your favorite AI tool

    Who This Is For

    • ML and computer-vision engineers in the medical imaging space
    • Data and annotation operations teams
    • Clinical AI and digital pathology leads
    • Regulatory and quality leads
    1 attendee from this group

Members

693