
About us

🖖 This virtual group is for data scientists, machine learning engineers, and open source enthusiasts.

Every month we’ll bring you diverse speakers working at the cutting edge of AI, machine learning, and computer vision.

  • Are you interested in speaking at a future Meetup?
  • Is your company interested in sponsoring a Meetup?

Send me a DM on LinkedIn

This Meetup is sponsored by Voxel51, the lead maintainers of the open source FiftyOne computer vision toolset. To learn more, visit the FiftyOne project page on GitHub.

Upcoming events

  • Network event
    March 12 - Agents, MCP and Skills Virtual Meetup

    Online
    876 attendees from 48 groups

    Join us for a special edition of the AI, ML and Computer Vision Meetup where we will focus on Agents, MCP and Skills!

    Date, Time, Location

    Mar 12, 2026
    9 - 11 AM Pacific
    Online.
    Register for the Zoom!

    Agents Building Agents on the Hugging Face Hub

    Discover how coding agents can run or support your fine-tuning experiments, from quick dataset validation and preprocessing to optimal GPU hardware selection, automated job submission based on metric tracking, and evaluation. Ben will demonstrate how Hugging Face skills can be used to define best practices for agents to support machine learning experiments. Bring Claude, Codex, or Mistral Vibes, and we'll show you how to get it training models with GRPO, SFT, and DPO.
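    A "skill" in this sense is a self-contained recipe file the agent reads before acting. A hypothetical sketch of what a fine-tuning skill might contain (the filename, fields, and steps below are illustrative assumptions, not Hugging Face's published format):

```markdown
# SKILL: supervised-fine-tuning
When asked to fine-tune a model, follow these steps in order:
1. Validate the dataset first: every row needs "prompt" and "completion" fields.
2. Choose hardware by model size (e.g. a single GPU for <=7B parameters).
3. Submit the training job and track loss and eval metrics as it runs.
4. Run the evaluation suite before reporting any results back to the user.
```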

    About the Speaker

    Ben Burtenshaw is a Machine Learning Engineer at Hugging Face, focusing on building agents with fine-tuning and reinforcement learning. He led educational projects like the Agents Course, the MCP Course, and the LLM course, which bridge the gap between complex Reinforcement Learning (RL) techniques and practical application. Ben focuses on democratizing access to efficient AI, empowering the community to align, evaluate, and deploy transparent agentic systems.

    Claude Code Templates

    This talk explores how to configure and align Claude Code agents using templates and custom components. I'll demonstrate practical configuration patterns that ensure your CLI agent executes exactly what you intend, covering Skills setup, hooks implementation, and template customization. Drawing from real-world examples building Claude Code Templates, attendees will learn how to structure their agent configurations for consistent, reliable behavior and create reusable components that maintain alignment across different use cases.
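    As a concrete flavor of the configuration patterns covered: Claude Code supports hooks defined in `.claude/settings.json` that run shell commands around tool calls. A minimal sketch (the lint command is a placeholder, and the exact schema should be checked against the official hooks docs):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "npm run lint --silent" }]
      }
    ]
  }
}
```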

    About the Speaker

    Daniel Avila is an AI Engineer at Hedgineer building agentic systems and creator of Claude Code Templates.

    Move Faster in Computer Vision by Teaching Agents to See Your Data

    Computer vision teams spend too much time writing scripts just to find bad labels, blurry images, and edge cases. In this talk, I’ll show how to move that work to agents by using FiftyOne as a visual operating system. With Skills and MCP, agents can see inside your datasets, explore them visually, and handle common data cleanup tasks, so you can spend less time on data and more time shipping models.
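    The kind of cleanup task being delegated to agents here, such as flagging blurry images, often reduces to a simple heuristic like variance of the Laplacian. A dependency-free sketch of that score (FiftyOne's own API is not shown; the images are toy 2D lists):

```python
def laplacian_variance(img):
    """Blur score for a 2D grayscale image (list of lists of ints).

    Applies a 4-neighbour Laplacian at each interior pixel and returns
    the variance of the response: sharp images score high, blurry low.
    """
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# A checkerboard (sharp edges everywhere) vs. a flat image (no detail)
sharp = [[255 * ((x + y) % 2) for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
assert laplacian_variance(sharp) > laplacian_variance(flat)
```

    An agent-side check would apply this per sample and tag anything below a chosen threshold for review.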

    About the Speaker

    Adonai Vera is a Machine Learning Engineer & DevRel at Voxel51, with over 7 years of experience building computer vision and machine learning models using TensorFlow, Docker, and OpenCV. He started as a software developer, moved into AI, led teams, and served as CTO. Today, he connects code and community to build open, production-ready AI, making technology simple, accessible, and reliable.

    Skills As Documentation

    Skills are self-contained recipes - each one a piece of a larger puzzle. Instead of trying to modify human-centric documentation to better fit agents, skills let us build capabilities into our agents directly. This talk will showcase how to think about leveraging skills to enhance how users interact with your software!

    About the Speaker

    Chris Alexiuk is a deep learning developer advocate at NVIDIA, working on creating technical assets that help developers use the incredible suite of AI tools available at NVIDIA. Chris comes from a machine learning and data science background, and he is obsessed with everything and anything about large language models.

  • Network event
    March 18 - Vibe Coding Production-Ready Computer Vision Pipelines Workshop

    Online
    371 attendees from 48 groups

    Join us for an interactive workshop where we'll build production-ready computer vision pipelines using vibe-coded FiftyOne plugins.

    Register for the Zoom

    Plugins enable you to customize the open-source FiftyOne computer vision app to match your exact workflow by easily incorporating data annotation, curation, model evaluation, and inference.

    We'll demonstrate how FiftyOne Skills and the MCP Server can streamline the journey from prototype to production-ready pipelines, keeping your development flow intact.

    Perfect for open-source contributors, researchers, and enterprise teams seeking hands-on experience. All participants receive slides, notebooks, and access to GitHub repositories and videos from the workshop.

  • Network event
    March 19 - Women in AI Meetup

    Online
    241 attendees from 47 groups

    Hear talks from experts on the latest topics in AI, ML, and computer vision on March 19th.

    Date and Location

    Mar 19, 2026
    9 - 11 AM Pacific
    Online.
    Register for Zoom!

    Towards Reliable Clinical AI: Evaluating Factuality, Robustness, and Real-World Performance of Large Language Models

    Large language models are increasingly deployed in clinical settings, but their reliability remains uncertain—they hallucinate facts, behave inconsistently across instruction phrasings, and struggle with evolving medical terminology. In my talk, I address methods to systematically evaluate clinical LLM reliability across four dimensions aligned with how healthcare professionals actually work: verifying concrete facts (FactEHR), ensuring stable guidance across instruction variations (instruction sensitivity study showing up to 0.6 AUROC variation), integrating up-to-date knowledge (BEACON improving biomedical NER by 15%), and assessing real patient conversations (PATIENT-EVAL revealing models abandon safety warnings when patients seek reassurance). These contributions establish evaluation standards spanning factuality, robustness, knowledge integration, and patient-centered communication, charting a path toward clinical AI that is safer, more equitable, and more trustworthy.
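    The instruction-sensitivity finding (up to 0.6 AUROC variation across phrasings) is easy to reproduce in miniature: score the same labeled examples under two prompt phrasings and compare AUROCs. A toy sketch with made-up scores (not data from the study):

```python
def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formulation:
    the probability that a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores_phrasing_a = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]  # same model, phrasing A
scores_phrasing_b = [0.2, 0.9, 0.1, 0.8, 0.3, 0.7]  # same model, phrasing B

print(auroc(labels, scores_phrasing_a))  # 1.0: perfect ranking
print(auroc(labels, scores_phrasing_b))  # ~0.33: worse than chance
```

    The same examples and the same model, yet the reported AUROC swings by two thirds, which is exactly why phrasing robustness needs to be measured rather than assumed.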

    About the Speaker

    Monica Munnangi is a doctoral student at the Khoury College of Computer Sciences at Northeastern University, advised by Saiph Savage. Her doctoral research, which she began in 2021 and expects to complete in 2026, focuses on multi-modal machine learning for healthcare. After being introduced to artificial intelligence and machine learning during her undergraduate studies, Munnangi earned her master’s degree from UMass Amherst.

    Neural BRDFs: Learning Compact Representations for Material Appearance

    Accurately modeling how light interacts with real-world materials remains a central challenge in rendering. Bidirectional Reflectance Distribution Functions (BRDFs) describe how materials reflect light as a function of viewing and lighting directions. Creating realistic digital materials has traditionally required choosing between fast parametric models that can't capture real-world complexity, or massive measured BRDFs that are expensive to acquire and store. Neural BRDFs address this challenge by learning continuous reflectance functions from data, exploiting directional correlations and symmetry to achieve significant compression while maintaining rendering quality. In this talk, we examine how neural networks can encode complex material behavior compactly, why this matters for rendering and material capture, and how neural BRDFs fit into the broader evolution toward data-driven graphics.
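    For context, the BRDF $f_r$ is the ratio of outgoing radiance to incident irradiance, and a neural BRDF replaces the analytic function with a learned one, $f_\theta$ (the notation below is the standard textbook definition, not specific to this talk):

```latex
f_r(\omega_i, \omega_o)
  = \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\cos\theta_i\,\mathrm{d}\omega_i},
\qquad
f_r(\omega_i, \omega_o) \approx f_\theta(\omega_i, \omega_o).
```

    Reciprocity, $f_r(\omega_i, \omega_o) = f_r(\omega_o, \omega_i)$, is one of the symmetries such models can exploit for compression.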

    About the Speaker

    Manushree Gangwar is a Machine Learning Engineer at Voxel51 working on data-centric visual AI. She holds an MS in Computer Science from Columbia University and has previously worked in robotics, autonomous driving, and AR/VR, with a focus on scene understanding and 3D reconstruction.

    Supercharging AI agents with evaluations

    Reliable deployment of AI agents depends on rigorous evaluation, which must shift from a nice-to-have QA step to a core engineering discipline. Robust evaluation is essential for safety, predictability, misuse resistance, and sustained user trust. To meet this bar, Evals must be deeply integrated into the agent development lifecycle. This talk will outline how simulation-based testing—using high-fidelity, controllable environments—provides the next generation of evaluation infrastructure for production-ready AI agents.
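    Simulation-based testing, in miniature, means scripting a deterministic environment and asserting on the agent's behavior rather than eyeballing transcripts. A toy sketch (the "agent" here is a stand-in keyword policy, not a real system):

```python
def refund_agent(message):
    """Stand-in 'agent': a naive keyword policy under test."""
    if "refund" in message.lower():
        return "escalate" if "over $100" in message else "approve"
    return "reply"

# Scripted simulation: (user message, expected action)
scenarios = [
    ("I want a refund", "approve"),
    ("I want a refund, it was over $100", "escalate"),
    ("What are your hours?", "reply"),
]

# Run every scenario and fail loudly on any behavioral regression
results = [(msg, refund_agent(msg) == want) for msg, want in scenarios]
assert all(ok for _, ok in results), results
```

    Production-grade versions swap the stand-in for a live agent and the scenario list for high-fidelity simulated environments, but the contract is the same: controlled inputs, asserted outcomes, run on every change.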

    About the Speaker

    Priya Venkat, PhD, is a Senior AI Manager at Intuit, where she leads teams that build and scale ML and Agentic AI systems for finance. Her work integrates cutting-edge agentic workflows and robust evaluation systems to drive business impact while ensuring AI safety and reliability. Priya is a strong advocate of responsible AI, and actively mentors the next generation of AI scientists and engineers.

    Language Diffusion Models

    Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). This talk challenges that notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue.
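    The forward process the abstract describes is simple to state: sample a masking ratio t, independently replace each token with a [MASK] symbol with probability t, and train the model to recover the masked tokens. A dependency-free sketch (variable names are illustrative, not LLaDA's code):

```python
import random

MASK = "[MASK]"

def forward_mask(tokens, t, rng):
    """LLaDA-style forward process: mask each token independently w.p. t."""
    return [MASK if rng.random() < t else tok for tok in tokens]

rng = random.Random(0)
tokens = ["the", "cat", "sat", "on", "the", "mat"]
t = rng.random()                    # masking ratio t ~ Uniform(0, 1)
noisy = forward_mask(tokens, t, rng)
targets = [i for i, tok in enumerate(noisy) if tok == MASK]
# The reverse model is trained to predict tokens[i] at each masked index i,
# and optimizing that prediction loss bounds the data likelihood.
```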

    About the Speaker

    Jayita Bhattacharyya is an AI/ML nerd with a blend of technical speaking and hackathon wizardry, applying tech to solve real-world problems. Her current focus is generative AI, helping software teams incorporate AI to transform software engineering.

  • Network event
    March 26 - Advances in AI at Northeastern University

    Online
    169 attendees from 48 groups

    Join us to hear about the latest advances in AI at Northeastern University!

    Date, Time and Location

    March 26, 2026
    9 - 11 AM Pacific
    Online.
    Register for the Zoom!

    Scalable and Efficient Deep Learning: From Understanding to Generation

    In an era where model complexity and deployment constraints increasingly collide, achieving both scalability and efficiency in deep learning has become essential. Scalable and efficient deep learning ensures that powerful models can be trained, deployed, and adapted under limited computational and data resources, enabling broader accessibility and practical application. From understanding to generation, this talk unifies methods that cut costs while preserving capability.

    About the Speaker

    Yitian Zhang is a fifth-year PhD student at Northeastern University, advised by Prof. Yun Raymond Fu. His research interests center around Efficient and Scalable AI, spanning Generative Models, Multimodal Large Language Models, and Foundation Models.

    Grounding Visual AI Models in Real-World Physics

    Generative video models have made rapid progress in visual realism, yet they frequently violate basic physical laws, producing implausible motion and incorrect cause-effect relationships. This talk presents MoReGen, a physics-grounded, agentic text-to-video generation framework that integrates Newtonian physics directly into the generation process via executable physics-engine code.

    By coupling vision–language models with trajectory-based physical evaluation and iterative feedback, MoReGen produces videos that are both visually coherent and physically consistent. We further introduce MoRe Metrics and MoReSet, a benchmark and dataset designed to evaluate physics fidelity beyond appearance-based metrics such as FID and FVD. Together, this work demonstrates a path toward visual AI systems that reason about motion, interaction, and causality in the real world rather than hallucinating them.
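    Trajectory-based physical evaluation of this sort can be as simple as comparing positions extracted from the video against the Newtonian prediction. A toy sketch for free fall (function names, thresholds, and the tracked heights are illustrative, not MoRe Metrics itself):

```python
def ballistic_error(ys, y0, v0, g=9.81, dt=1 / 30):
    """Mean absolute error between observed heights and the Newtonian
    prediction y(t) = y0 + v0*t - g*t^2/2, sampled every dt seconds."""
    errs = []
    for k, y in enumerate(ys):
        t = k * dt
        pred = y0 + v0 * t - 0.5 * g * t ** 2
        errs.append(abs(y - pred))
    return sum(errs) / len(errs)

# "Tracked" heights from two hypothetical generated videos at 30 fps
good = [10.0 - 0.5 * 9.81 * (k / 30) ** 2 for k in range(10)]  # parabolic fall
bad = [10.0 - 0.5 * k for k in range(10)]                      # linear fall

assert ballistic_error(good, y0=10.0, v0=0.0) < 1e-9  # physically consistent
assert ballistic_error(bad, y0=10.0, v0=0.0) > 0.1    # physically implausible
```

    The agentic part of the framework closes the loop: a low score feeds back into regeneration instead of being reported once and forgotten.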

    About the Speakers

    Professor Sarah Ostadabbas is an Associate Professor of Electrical and Computer Engineering at Northeastern University, where she directs the Augmented Cognition Lab (ACLab) and serves as Director of Women in Engineering. Her research focuses on computer vision and machine learning, with an emphasis on motion-centric representation learning, small-data AI, and applications in healthcare, defense, and behavior understanding under privacy and data constraints. She has authored over 130 peer-reviewed publications and received numerous honors, including the NSF CAREER Award, Sony Faculty Innovation Award, and the Cade Prize for Inventivity, along with multiple industry and federal research awards.

    Xiangyu Bai is a third-year PhD student in the ACLab and leads the lab's work on physics-aware visual intelligence, with several publications in top-tier computer vision and robotics conferences.

    WorldFormer: Diffusion Transformer World Models with Mixture-of-Experts for Embodied Physical Intelligence

    World models have emerged as a foundational paradigm for enabling agents to simulate, predict, and reason about complex environments. Recent advances driven by diffusion transformer (DiT) architectures have dramatically expanded the fidelity, scalability, and physical plausibility of learned world models. In this work, we present a world model framework built upon the diffusion transformer paradigm, following the design philosophy of state-of-the-art systems such as NVIDIA Cosmos. Our approach comprises three core components: (1) a spatiotemporal variational autoencoder (VAE) that compresses high-resolution video into a compact continuous latent space with strong temporal causality, enabling efficient encoding and decoding of long-horizon video sequences; (2) a transformer-based diffusion backbone that operates on 3D-patchified latent tokens, leveraging self-attention and cross-attention with text embeddings to iteratively denoise Gaussian noise into physically coherent future video states using a flow matching objective; and (3) a scalable pre-training and post-training pipeline that first trains a generalist world foundation model on large-scale, diverse video data and then specializes it to target physical AI domains — such as robotic manipulation, autonomous driving, or embodied navigation — through task-specific fine-tuning.

    Our model supports both text-to-world and video-to-world generation, enabling action-conditioned future state prediction for downstream planning and policy learning. We discuss implications for synthetic data generation, sim-to-real transfer, and the integration of world models into vision-language-action (VLA) pipelines for physical AI.
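    The flow matching objective in component (2) trains the network to predict the velocity of the straight-line path between noise and data: x_t = (1 - t) x_0 + t x_1, whose true velocity is x_1 - x_0. A scalar sketch of that loss (real systems apply it over 3D-patchified latent video tokens, not scalars):

```python
import random

def flow_matching_loss(predict_v, x0, x1, rng, n=1000):
    """Monte Carlo flow-matching loss over the linear path
    x_t = (1 - t) * x0 + t * x1, whose target velocity is x1 - x0."""
    total = 0.0
    for _ in range(n):
        t = rng.random()                  # t ~ Uniform(0, 1)
        x_t = (1 - t) * x0 + t * x1       # point on the noise-to-data path
        v_target = x1 - x0                # constant true velocity
        total += (predict_v(x_t, t) - v_target) ** 2
    return total / n

rng = random.Random(0)
x0, x1 = 0.0, 2.0                         # noise sample -> data sample
perfect = lambda x_t, t: x1 - x0          # oracle velocity field
assert flow_matching_loss(perfect, x0, x1, rng) == 0.0
```

    At inference, integrating the learned velocity field from t = 0 to t = 1 carries Gaussian noise to a sample, which is the "iterative denoising" the abstract refers to.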

    About the Speaker

    Yanzhi Wang joined the Electrical & Computer Engineering department in August 2018 as an Assistant Professor. He earned his PhD at University of Southern California. His research interests include energy-efficient and high-performance implementations of deep learning and artificial intelligence systems; neuromorphic computing and non-von Neumann computing paradigms; cyber-security in deep learning systems; emerging deep learning algorithms/systems such as Bayesian neural networks, generative adversarial networks (GANs) and deep reinforcement learning.

    Physical AI Research (PAIR) Center: Foundational Pairing of Digital Intelligence & Physical World Deployment at Northeastern University and Beyond

    The Physical AI Research (PAIR) initiative advances the next frontier of artificial intelligence: enabling systems that can perceive, reason, and act reliably in the physical world. By uniting expertise across engineering, computer science, health sciences, and the social sciences, PAIR develops safe, transparent, and human-aligned AI that bridges digital models with real-world dynamics. The initiative is organized around three intellectual pillars: Learning and Modeling the World, through physics-informed multimodal learning, realistic simulations, and digital twins; Reasoning in the World, by integrating multimodal evidence to support grounded decision-making under uncertainty; and Acting in the World, by ensuring AI systems are verifiable, explainable, energy-efficient, and trustworthy. Together, these efforts position Physical AI as a foundational science driving innovation in health, sustainability, and security.

    About the Speaker

    Edmund Yeh is the Department Chair of Electrical and Computer Engineering at Northeastern University.


Members: 473