
What we’re about
🖖 This group is for data scientists, machine learning engineers, and open source enthusiasts.
Every month we’ll bring you diverse speakers working at the cutting edge of AI, machine learning, and computer vision.
- Are you interested in speaking at a future Meetup?
- Is your company interested in sponsoring a Meetup?
This Meetup is sponsored by Voxel51, the lead maintainers of the open source FiftyOne computer vision toolset. To learn more, visit the FiftyOne project page on GitHub.
Upcoming events (4+)
- Network event (164 attendees from 37 groups hosting) | June 20 - AI, ML and Computer Vision Meetup en Español | Link visible for attendees
When and Where
June 20, 2025 | 9:00 – 11:00 AM Pacific
Generative AI with Agents: Transforming Software Development
This talk explores how to extend the capabilities of LLMs with external tools through intelligent agents. We'll see how this combination transforms software development by automating tasks and enabling collaboration with AI.
----------
Antonio Martinez is an AI Software Engineer at Intel and holds a master's degree in Computer Science from Texas State University. He has more than 10 years of experience in technical leadership, artificial intelligence, computer vision, and software development.
Digital Workers: The Future of Agent-Augmented Work
In this talk I'll describe how my wife and I built an agent platform based on LangGraph and LangChain that has scaled our customer support, increased customer satisfaction, and improved sales conversion.
I'll walk through my agentic architecture built on the ReAct (Reasoning-Action) pattern combined with Reflection (self-validation), and show how this agent not only sells but also handles the entire process of pricing delivery, validating payments, and more (a minimal illustrative sketch follows below).
I'll share real examples of companies that already use low-code/no-code micro-automations to focus their effort on the core of their business.
Together we'll reflect on a hyper-automated working world in which each of us is empowered by multiple digital agents.
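To make the ReAct (Reasoning-Action) + Reflection pattern mentioned above concrete, here is a minimal, hypothetical Python sketch of the loop. The llm() helper, the tool registry, and the prompt formats are illustrative assumptions, not the speaker's LangGraph/LangChain implementation.

```python
# Minimal, hypothetical sketch of a ReAct loop with a Reflection (self-validation)
# step. llm() and TOOLS are placeholders, not the speaker's production code.

def llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion model."""
    raise NotImplementedError

TOOLS = {
    "quote_delivery": lambda city: f"Delivery to {city}: $5.00",      # stand-in tool
    "validate_payment": lambda ref: f"Payment {ref} confirmed",       # stand-in tool
}

def react_with_reflection(task: str, max_steps: int = 5) -> str:
    scratchpad = f"Task: {task}"
    for _ in range(max_steps):
        # Reasoning: ask the model for the next thought and action.
        step = llm(f"{scratchpad}\nGive a Thought, then an Action as "
                   f"'tool_name: input', or 'FINISH: <answer>'.")
        if step.startswith("FINISH:"):
            answer = step.partition("FINISH:")[2].strip()
            # Reflection: have the model self-validate before returning.
            verdict = llm(f"Task: {task}\nProposed answer: {answer}\n"
                          f"Reply OK if correct and complete, else explain the problem.")
            if verdict.strip().startswith("OK"):
                return answer
            scratchpad += f"\nReflection: {verdict}"  # retry with the critique
            continue
        # Action: route to the named tool and append the observation.
        tool_name, _, tool_input = step.partition(":")
        observation = TOOLS[tool_name.strip()](tool_input.strip())
        scratchpad += f"\n{step}\nObservation: {observation}"
    return "Gave up after max_steps"
```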
-----------
I'm Jamilton Quintero, Head of Artificial Intelligence at Apiux Tecnología, and I'm passionate about designing agentic architectures that transform real processes. I'm a technology enthusiast and a firm believer that information should flow freely, which is why I love contributing to open source and to communities.
Using Computer Vision for Decisions and Artistic Expression in Creative Environments
Using a gesture-control system to create animations and creative visual effects. Exploring how machine learning and artificial intelligence can make it easier to build immersive experiences for creative work environments. Building an end-to-end system that connects Python with Unreal Engine 5 to control 3D environments.
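One common way to bridge Python and a 3D engine like Unreal Engine 5 is to stream gesture or pose data over a local socket and have the engine consume it. The port number and JSON schema below are assumptions for illustration only, not the speaker's actual setup.

```python
# Hypothetical sketch: stream gesture events from Python to Unreal Engine 5 over UDP.
# Inside UE5 you would read the datagrams (e.g., via a networking plugin) and drive
# the 3D scene from the decoded values.
import json
import socket
import time

UE5_HOST, UE5_PORT = "127.0.0.1", 7000  # assumed local endpoint the engine listens on

def send_gesture(sock: socket.socket, gesture: str, strength: float) -> None:
    """Serialize one gesture event and send it to the engine."""
    payload = json.dumps({"gesture": gesture, "strength": strength, "t": time.time()})
    sock.sendto(payload.encode("utf-8"), (UE5_HOST, UE5_PORT))

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # In a real system the label would come from a hand-tracking or
        # gesture-classification model running on a webcam feed.
        send_gesture(sock, "swipe_left", 0.8)
```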
----------
Creative Technologist specializing in the use of emerging technologies within 2D/3D creative environments for visual design.
Work experience in Virtual Reality and Visual Effects for film and TV.
Your Data Is Lying to You: Semantic Search to Find the Truth
High-performing models start with high-quality data, but finding noisy, mislabeled, or edge-case samples within massive datasets remains a major obstacle. In this session, we'll explore a scalable approach to curating and refining large-scale visual datasets using semantic search powered by transformer-based embeddings.
By leveraging similarity search and multimodal representation learning, you'll learn how to uncover hidden patterns, detect inconsistencies, and find edge cases. We'll also discuss how these techniques can be integrated into data lakes and large-scale pipelines to support model debugging, dataset optimization, and the development of more robust foundation models for computer vision. Join us to discover how semantic search is transforming the way we build and refine AI systems.
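As a rough illustration of embedding-based semantic search for dataset curation, here is a short sketch using FiftyOne (the open source toolset maintained by the Meetup's sponsor). The sample dataset, model choice, and query text are placeholders; the talk's exact pipeline may differ.

```python
# Index a dataset with CLIP embeddings, then query it in natural language to
# surface likely label noise and edge cases for review.
import fiftyone as fo
import fiftyone.brain as fob
import fiftyone.zoo as foz

# Load a small sample dataset (swap in your own images in practice)
dataset = foz.load_zoo_dataset("quickstart")

# Compute a similarity index; a CLIP model lets text and images share one space
fob.compute_similarity(
    dataset,
    model="clip-vit-base32-torch",
    brain_key="img_sim",
)

# Retrieve the samples most similar to a free-text description of a problem case
suspects = dataset.sort_by_similarity(
    "blurry or occluded object", k=25, brain_key="img_sim"
)
session = fo.launch_app(suspects)  # inspect and tag the returned samples
```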
------------
Paula Ramos holds a PhD in Computer Vision and Machine Learning and has more than 20 years of experience in the technology field. Since the early 2000s in Colombia, she has been developing innovative integrated engineering technologies, mainly in computer vision, robotics, and machine learning applied to agriculture.
During her doctoral and postdoctoral research, she deployed multiple low-cost, smart edge-computing and IoT technologies designed for farmers that can be operated without expertise in computer vision systems. The central goal of Paula's research has been to develop intelligent systems and machines capable of understanding and recreating the visual world around us to solve real-world needs, such as those found in the agricultural industry.
- Network event (319 attendees from 37 groups hosting) | June 25 - Visual AI in Healthcare | Link visible for attendees
Join us for the first of several virtual events focused on the latest research, datasets and models at the intersection of visual AI and healthcare.
June 25 at 9 AM Pacific
Vision-Driven Behavior Analysis in Autism: Challenges and Opportunities
Understanding and classifying human behaviors is a long-standing goal at the intersection of computer science and behavioral science. Video-based monitoring provides a non-intrusive and scalable framework for analyzing complex behavioral patterns in real-world environments. This talk explores key challenges and emerging opportunities in AI-driven behavior analysis for individuals with autism spectrum disorder (ASD), with an emphasis on the role of computer vision in building clinically meaningful and interpretable tools.
About the Speaker
Somaieh Amraee is a postdoctoral research fellow at Northeastern University’s Institute for Experiential AI. She earned her Ph.D. in Computer Engineering and her research focuses on advancing computer vision techniques to support health and medical applications, particularly in children’s health and development.
PRISM: High-Resolution & Precise Counterfactual Medical Image Generation using Language-guided Stable Diffusion
PRISM is an explainability framework that leverages language-guided Stable Diffusion to generate high-resolution (512×512) counterfactual medical images with unprecedented precision, answering the question: “What would this patient image look like if a specific attribute were changed?” PRISM enables fine-grained control over image edits, allowing us to selectively add or remove disease-related image features as well as complex medical support devices (such as pacemakers) while preserving the rest of the image. Beyond generating high-quality images, we demonstrate that PRISM’s class counterfactuals can enhance downstream model performance by isolating disease-specific features from spurious ones — a significant advancement toward robust and trustworthy AI in healthcare.
About the Speaker
Amar Kumar is a PhD Candidate at McGill University | MILA Quebec AI Institute in the Probabilistic Vision Group (PVG). His research primarily focuses on generative AI and medical imaging, with the main objective of tackling real-world challenges like bias mitigation in deep learning models.
Building Your Medical Digital Twin — How Accurate Are LLMs Today?
We all hear about the dream of a digital twin: AI systems combining your blood tests, MRI scans, smartwatch data, and genetics to track health and plan care. But how accurate are today’s top tools like GPT-4o, Gemini, MedLLaMA, or OpenBioLLM — and what can you realistically feed them?
In this talk, we’ll explore where these models deliver, where they fall short, and what I learned testing them on my own health records.
About the Speaker
Ekaterina Kondrateva is a senior computer vision engineer with 8 years of experience in AI for healthcare, author of 20+ scientific papers, and a finalist in three international MRI analysis competitions. She was formerly head of AI research for medical imaging at the HealthTech startup LightBC.
Deep Dive: Google’s MedGemma, NVIDIA’s VISTA-3D and MedSAM-2 Medical Imaging Models
In this talk, we’ll explore three medical imaging models. First, we’ll look at Google’s MedGemma open models for medical text and image comprehension, built on Gemma 3. Next, we’ll dive into NVIDIA’s Versatile Imaging SegmenTation and Annotation (VISTA) model, which combines semantic segmentation with interactivity, offering high accuracy and adaptability across diverse anatomical areas for medical imaging. Finally, we’ll explore MedSAM-2, an advanced segmentation model that utilizes Meta’s SAM 2 framework to address both 2D and 3D medical image segmentation tasks.
About the Speaker
Daniel Gural is a seasoned Machine Learning Engineer at Voxel51 with a strong passion for empowering Data Scientists and ML Engineers to unlock the full potential of their data.
- Network event (245 attendees from 37 groups hosting) | June 26 - Visual AI in Healthcare | Link visible for attendees
Join us for the second of several virtual events focused on the latest research, datasets and models at the intersection of visual AI and healthcare.
When
June 26 at 9 AM Pacific
Where
Online. Register for the Zoom!
Multimodal AI for Efficient Medical Imaging Dataset Curation
We present a multimodal AI pipeline to streamline patient selection and quality assessment for radiology AI development. Our system evaluates patient clinical histories, imaging protocols, and data quality, embedding the results into the imaging metadata. Using FiftyOne, researchers can rapidly filter and assemble high-quality cohorts in minutes instead of weeks, freeing radiologists for clinical work and accelerating AI tool development.
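As a rough sketch of how quality and protocol metadata embedded on samples could drive cohort assembly in FiftyOne, consider the filter below. The dataset name and field names are illustrative assumptions, not the presenters' actual schema.

```python
# Hypothetical cohort filter over metadata fields attached to imaging samples.
import fiftyone as fo
from fiftyone import ViewField as F

dataset = fo.load_dataset("radiology-studies")  # assumed pre-built dataset

cohort = dataset.match(
    (F("protocol") == "CT chest w/o contrast")
    & (F("quality_score") >= 0.8)
    & (F("clinical_history_match") == True)
)
print(f"Cohort size: {len(cohort)} studies")
cohort.tag_samples("cohort_v1")  # hand off to radiologists / training pipelines
```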
About the Speaker
Brandon Konkel is a Senior Machine Learning Engineer at Booz Allen Hamilton with over a decade of experience developing AI solutions for medical imaging.
AI-Powered Heart Ultrasound: From Model Training to Real-Time App Deployment
We have built AI-driven tools to automate the assessment of key heart parameters from point-of-care ultrasound, including Right Atrial Pressure (RAP) and Ejection Fraction (EF). In collaboration with UCSF, we trained deep learning models on a proprietary dataset of over 15,000 labeled ultrasound studies and deployed the full pipeline in a real-time iOS app integrated with the Butterfly probe. A UCSF-led clinical trial has validated the RAP workflow, and we are actively expanding the system to support EF prediction using both A4C and PLAX views.
This talk will present our end-to-end pipeline, from dataset development and model training to mobile deployment—demonstrating how AI can enable real-time heart assessments directly at the point of care.
About the Speaker
Jeffrey Gao is a PhD candidate at Caltech, working at the intersection of machine learning and medical imaging. His research focuses on developing clinically deployable AI systems for ultrasound-based heart assessments, with an emphasis on real-time, edge-based inference and system integration.
Let’s Look Deep at Continuous Patient Monitoring
In hospitals, direct patient observation is limited: nurses spend only 37% of their shift engaged in patient care, and physicians average just 10 visits per hospital stay. LookDeep Health’s AI-driven platform enables continuous, passive monitoring of individual patients and has been deployed “in the wild” for nearly 3 years. They recently published a study validating this system, titled “Continuous Patient Monitoring with AI.” This talk is a technical dive into that paper, focusing on the intersection of AI and real-world application.
About the Speaker
Paolo Gabriel, PhD, is a senior AI engineer at LookDeep Health, where they continue to use computer vision and signal processing to augment patient care in the hospital.
AI in Healthcare: Lessons from Oncology Innovation
Artificial intelligence is rapidly transforming how we diagnose, treat, and manage health.
About the Speaker
Dr. Asba (AT) Tasneem is a healthcare data and innovation leader with over 20 years of experience at the intersection of clinical research, AI, and digital health. She has led large-scale programs in oncology and data strategy, partnering with organizations like the FDA, Duke University, and top pharma companies to drive AI-enabled healthcare solutions.
- Network event (212 attendees from 38 groups hosting) | June 27 - Visual AI in Healthcare | Link visible for attendees
Join us for the third of several virtual events focused on the latest research, datasets and models at the intersection of visual AI and healthcare.
When
June 27 at 9 AM Pacific
Where
Online. Register for the Zoom!
MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders
We present MedVAE, a family of six generalizable 2D and 3D variational autoencoders trained on over one million images from 19 open-source medical imaging datasets using a novel two-stage training strategy. MedVAE downsizes high-dimensional medical images into compact latent representations, reducing storage by up to 512× and accelerating downstream tasks by up to 70× while preserving clinically relevant features. We demonstrate across 20 evaluation tasks that these latent representations can replace high-resolution images in computer-aided diagnosis pipelines without compromising performance. MedVAE is open-source with a streamlined finetuning pipeline and inference engine, enabling scalable model development in resource-constrained medical imaging settings.
About the Speakers
Ashwin Kumar is a PhD Candidate in Biomedical Physics at Stanford University, advised by Akshay Chaudhari and Greg Zaharchuk. He focuses on developing deep learning methodologies to advance medical image acquisition and analysis.
Maya Varma is a PhD student in computer science at Stanford University. Her research focuses on the development of artificial intelligence methods for addressing healthcare challenges, with a particular focus on medical imaging applications.
Leveraging Foundation Models for Pathology: Progress and Pitfalls
How do you train ML models on pathology slides that are thousands of times larger than standard images? Foundation models offer a breakthrough approach to these gigapixel-scale challenges. This talk explores how self-supervised foundation models trained on broad histopathology datasets are transforming computational pathology. We’ll examine their progress in handling weakly-supervised learning, managing tissue preparation variations, and enabling rapid prototyping with minimal labeled examples. However, significant challenges remain: increasing computational demands, the potential for bias, and questions about generalizability across diverse populations. This talk will offer a balanced perspective to help separate foundation model hype from genuine clinical value.
About the Speaker
Heather D. Couture is a consultant and founder of Pixel Scientia Labs, where she partners with mission-driven founders and R&D teams to support applications of computer vision for people and planetary health. She has a PhD in Computer Science and has published in top-tier computer vision and medical imaging venues. She hosts the Impact AI Podcast and writes regularly on LinkedIn, for her newsletter Computer Vision Insights, and for a variety of other publications.
LesionLocator: Zero-Shot Universal Tumor Segmentation and Tracking in 3D Whole-Body Imaging
Recent advances in promptable segmentation have transformed medical imaging workflows, yet most existing models are constrained to static 2D or 3D applications. This talk presents LesionLocator, the first end-to-end framework for universal 4D lesion segmentation and tracking using dense spatial prompts. The system enables zero-shot tumor analysis across whole-body 3D scans and multiple timepoints, propagating a single user prompt through longitudinal follow-ups to segment and track lesion progression. Trained on over 23,000 annotated scans and supplemented with a synthetic time-series dataset, LesionLocator achieves human-level performance in segmentation and outperforms state-of-the-art baselines in longitudinal tracking tasks. The presentation also highlights advances in 3D interactive segmentation, including our open-set tool nnInteractive, showing how spatial prompting can scale from user-guided interaction to clinical-grade automation.
About the Speaker
Maximilian Rokussis is a PhD scholar at the German Cancer Research Center (DKFZ), working in the Division of Medical Image Computing under Klaus Maier-Hein. He focuses on 3D multimodal and multi-timepoint segmentation with spatial and text prompts. With several MICCAI challenge wins and first-author publications at CVPR and MICCAI, he co-leads the Helmholtz Medical Foundation Model initiative and develops AI solutions at the interface of research and clinical radiology.
LLMs for Smarter Diagnosis: Unlocking the Future of AI in Healthcare
Large Language Models are rapidly transforming the healthcare landscape. In this talk, I will explore how LLMs like GPT-4 and DeepSeek-R1 are being used to support disease diagnosis, predict chronic conditions, and assist medical professionals without relying on sensitive patient data. Drawing from my published research and real-world applications, I’ll discuss the technical challenges, ethical considerations, and the future potential of integrating LLMs in clinical settings. The talk will offer valuable insights for developers, researchers, and healthcare innovators interested in applying AI responsibly and effectively.
About the Speaker
Gaurav K Gupta graduated from Youngstown State University with a Bachelor’s degree in Computer Science and Mathematics.
Past events (25)
- Network event (338 attendees from 39 groups hosting) | June 19 - AI, ML and Computer Vision Meetup | This event has passed