
What we’re about
This group is for data scientists, machine learning engineers, and open source enthusiasts.
Every month we’ll bring you diverse speakers working at the cutting edge of AI, machine learning, and computer vision.
Upcoming events (4+)
Sept 10 - Visual AI in Manufacturing and Robotics (Day 1) (network event, 382 attendees from 44 groups)
Join us for the first in a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI, Manufacturing and Robotics.
Date and Time
Sept 10 at 9 AM Pacific
Location
Virtual. Register for the Zoom!
Detecting the Unexpected: Practical Approaches to Anomaly Detection in Visual Data
Anomaly detection is one of computer vision's most exciting and essential challenges today, from spotting subtle defects in manufacturing to identifying edge cases in model behavior. In this session, we’ll do a hands-on walkthrough using the MVTec AD dataset, showcasing real-world workflows for data curation, exploration, and model evaluation. We’ll also explore the power of embedding visualizations and similarity searches to uncover hidden patterns and surface anomalies that often go unnoticed.
This session is packed with actionable strategies to help you make sense of your data and build more robust, reliable models. Join us as we connect the dots between data, models, and real-world deployment—alongside other experts driving innovation in anomaly detection.
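The embedding-based approach the session describes can be sketched in a few lines: images are mapped to feature vectors, and a test image that sits far from its nearest "normal" neighbors in embedding space gets a high anomaly score. This is a minimal illustration with toy vectors, not the session's actual workflow; real embeddings would come from a pretrained backbone.

```python
# Minimal sketch of embedding-based anomaly scoring: score each test
# embedding by its mean distance to the k nearest normal embeddings.
import numpy as np

def knn_anomaly_scores(train_emb: np.ndarray, test_emb: np.ndarray, k: int = 3) -> np.ndarray:
    """Higher score = farther from the normal training distribution."""
    # Pairwise Euclidean distances, shape (n_test, n_train)
    dists = np.linalg.norm(test_emb[:, None, :] - train_emb[None, :, :], axis=-1)
    # Mean of the k smallest distances per test sample
    return np.sort(dists, axis=1)[:, :k].mean(axis=1)

# Toy example: normal data clusters near the origin; one outlier far away.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(100, 8))
tests = np.vstack([rng.normal(0.0, 0.1, size=(1, 8)), np.full((1, 8), 5.0)])
scores = knn_anomaly_scores(normal, tests)
assert scores[1] > scores[0]  # the outlier scores higher
```

The same scoring idea underlies many modern industrial anomaly detectors, which differ mainly in how the embeddings are produced and aggregated.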
About the Speaker
Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.
Scaling Synthetic Data for Industrial AI: From CAD to Model in Hours
This talk explores how we generate high-performance computer vision datasets from CAD—without real-world images or manual labeling. We’ll walk through our synthetic data pipeline, including CPU-optimized defect simulation, material variation, and lighting workflows that scale to thousands of renders per part. While Blender plays a role, our focus is on how industrial data (like STEP files) and procedural generation unlock fast, flexible training sets for manufacturing QA, even on modest hardware. If you're working at the edge of 3D, automation, and vision AI—this is for you!
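The procedural-variation idea above can be sketched as sampling a render configuration per image, so one CAD model fans out into thousands of distinct training renders. The parameter names and ranges below are illustrative assumptions, not the speaker's pipeline.

```python
# Hedged sketch: each render samples lighting, material, and defect
# parameters from ranges; a deterministic per-render seed keeps the
# dataset reproducible.
import random

def sample_render_config(part_id: str, seed: int) -> dict:
    rng = random.Random(seed)  # reproducible per-render randomness
    return {
        "part": part_id,
        "light_intensity": rng.uniform(0.3, 1.5),
        "light_angle_deg": rng.uniform(0.0, 360.0),
        "material_roughness": rng.uniform(0.05, 0.9),
        "defect": rng.choice([None, "scratch", "dent", "discoloration"]),
        "defect_size_mm": rng.uniform(0.1, 2.0),
    }

# One CAD part expands into 1,000 distinct render configurations.
configs = [sample_render_config("bracket_01", seed=i) for i in range(1000)]
```

Each config would then be handed to the renderer (e.g. Blender) along with the STEP geometry; the randomization itself is cheap and CPU-friendly.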
About the Speaker
Matt Puchalski is the founder and CEO of Bucket Robotics, a Y Combinator-backed startup building self-serve computer vision systems for manufacturing. Previously, he led robotics reliability at Argo AI and helped build and deploy autonomous vehicles at Stack AV and Uber ATG.
Swarm Intelligence: Solving Complex Industrial Optimization in Seconds
Manufacturing and logistics companies face increasingly complex operational challenges that traditional AI and human planning struggle to solve effectively. Collide Technology harnesses Swarm Intelligence algorithms to transform intractable problems—like scheduling hundreds or thousands of maintenance employees while simultaneously optimizing production capacity, inventory levels, and cross-sector resource allocation—into solutions delivered in seconds rather than weeks.
Unlike rigid Operations Research approaches that require specialized expertise and expensive implementations, our platform democratizes industrial optimization by making sophisticated decision-making accessible to any factory or logistics operation. We deliver holistic, data-driven solutions that optimize across multiple business entities and sectors simultaneously, adapting to real-world constraints and evolving operational needs.
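For readers unfamiliar with the technique, the classic swarm intelligence algorithm is particle swarm optimization (PSO): candidate solutions move through the search space pulled toward their own best-seen position and the swarm's. This toy loop minimizes a simple function and is a generic sketch under standard PSO coefficients, not Collide Technology's implementation; real scheduling problems would encode assignments in the particle positions.

```python
# Generic particle swarm optimization sketch minimizing a toy objective.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                   # per-particle best position
    pbest_val = np.apply_along_axis(objective, 1, pos)   # per-particle best value
    gbest = pbest[pbest_val.argmin()].copy()             # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + pull toward personal best + pull toward global best
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, best_val = pso(lambda x: np.sum(x**2), dim=4)
assert best_val < 1e-3  # the swarm converges near the global minimum at 0
```

The appeal for industrial use is that the objective is treated as a black box: constraints and costs can be arbitrarily messy, with no need for the algebraic structure classical Operations Research solvers require.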
About the Speaker
Frederick Gertz, PhD has worked in AI for the manufacturing space for over a decade delivering data science insights for the medical and pharmaceutical manufacturing space. Prior to that he worked in nanotechnology with a focus on bio-physics and nanomagnetics with his dissertation research on Magnonic Holographic Devices being named as a runner-up for 2014 Physics Breakthrough of the Year by Physics World.
Sept 11 - Visual AI in Manufacturing and Robotics (Day 2) (network event, 266 attendees from 44 groups)
Join us for day two in a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI, Manufacturing and Robotics.
Date and Time
Sept 11 at 9 AM Pacific
Location
Virtual. Register for the Zoom!
Bringing Specialist Agents to the Physical World to Improve Manufacturing Output
U.S. manufacturing productivity (output per labor hour) has been stagnant since 2008, driven by stalled technology integration and a shrinking pool of available workers. RIOS Agents are collaborative AI perception and control systems that act as plant managers' eyes on the ground. Our Agents become specialists in a process: observing process steps, reporting on them, and ultimately controlling them by integrating into new or existing equipment. This enables factory production to be optimized in a way that was previously not possible.
About the Speaker
Clinton Smith is the co-founder and CEO of RIOS, whose AI agents watch, optimize and control production in various industrial facilities, including deep penetration into wood products and lumber. Clinton previously was a Senior Member of the Research Staff at Xerox PARC, leading multiple Dept. of Energy & Dept. of Defense projects, and holds a PhD in Electrical Engineering from Princeton University and a BS in Computer Engineering from Georgia Tech.
Accelerating Robotics with Simulation
In this session, Steve Xie, CEO of Lightwheel, shares how simulation-first workflows and high-quality SimReady assets are transforming the development of visual AI in manufacturing. From warehouse anomaly detection to worker safety and object identification, Steve will explore how physics-accurate simulation and synthetic datasets can drive scalable AI training with minimal real-world data. Drawing from Lightwheel’s deployment of robot models like GR00T N1 in factory environments, the talk highlights how unifying vision, language, and action in simulation accelerates real-world deployment while improving safety, generalization, and efficiency.
About the Speaker
Dr. Steve Xie is founder and CEO of Lightwheel, a company leading simulation infrastructure for embodied AI. Steve is a pioneer in generative-AI-powered simulation for robotics. He holds a B.S. from Peking University and a Ph.D. from Columbia University. Steve has led simulation efforts at NVIDIA and Cruise, where he built end-to-end synthetic data pipelines that set industry benchmarks for realism, scalability, and sim2real transfer.
Anomalib 2.0: Edge Inference and Model Deployment
When deploying models for inference, simply exporting them and calling them via the inferencers is not enough: there are challenges around pre-processing and post-processing, and any deviation in these steps at inference time degrades performance. This talk covers how we re-designed components of Anomalib to integrate pre- and post-processing steps into the model graph.
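The underlying idea can be illustrated generically: bundle pre- and post-processing with the model into a single callable, so the exported artifact cannot drift from the transforms used in training. This is a hedged sketch of that pattern, not Anomalib's actual implementation; the normalization constants, threshold, and stand-in model are placeholders.

```python
# Sketch: package pre-processing, the model, and post-processing into one
# object so every deployment path applies identical transforms.
import numpy as np

class PackagedModel:
    def __init__(self, model, mean: float, std: float, threshold: float):
        self.model, self.mean, self.std, self.threshold = model, mean, std, threshold

    def __call__(self, image: np.ndarray) -> dict:
        # Pre-processing baked into the callable: scale to [0, 1], then normalize
        x = (image.astype(np.float32) / 255.0 - self.mean) / self.std
        score = float(self.model(x))  # raw anomaly score from the model
        # Post-processing baked in too: thresholding into a decision
        return {"score": score, "anomalous": score > self.threshold}

# Stand-in "model": mean normalized intensity as an anomaly score.
packaged = PackagedModel(model=lambda x: x.mean(), mean=0.5, std=0.25, threshold=0.0)
result = packaged(np.full((8, 8), 255, dtype=np.uint8))
assert result["anomalous"]  # a uniformly bright image scores above threshold
```

In a real exported graph, the same composition would live inside the ONNX/OpenVINO model itself rather than in Python, which is what removes the chance of train/inference mismatch.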
About the Speaker
Samet Akcay is an AI Research Engineer at Intel who leads ML research and development efforts across multiple Open Edge Platform libraries, including Intel Geti, Datumaro, Anomalib, Training Extensions, and Vision Inference libraries. His research specializes in semi/self-supervised learning, zero/few-shot learning, and multi-modal object and anomaly detection. He is the creator of Anomalib, a major open-source anomaly detection library.
Exploring Robotic Manipulation Datasets using FiftyOne: DROID and Amazon Armbench
About the Speaker
Allen Lee is currently a Machine Learning Engineer at Voxel51. Before that, Allen was the Co-Founder and Consulting Engineer at Leap Scientific LLC, where they provided scientific software consultancy services related to computation, machine learning, and computer vision.
Sept 12 - Visual AI in Manufacturing and Robotics (Day 3) (network event, 171 attendees from 44 groups)
Join us for day three in a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI, Manufacturing and Robotics.
Date and Time
Sept 12 at 9 AM Pacific
Location
Virtual. Register for the Zoom!
Towards Robotics Foundation Models that Can Reason
In recent years, we have witnessed remarkable progress in generative AI, particularly in language and visual understanding and generation. This leap has been fueled by unprecedentedly large image–text datasets and the scaling of large language and vision models trained on them. Increasingly, these advances are being leveraged to equip and empower robots with open-world visual understanding and reasoning capabilities.
Yet, despite these advances, scaling such models for robotics remains challenging due to the scarcity of large-scale, high-quality robot interaction data, limiting their ability to generalize and truly reason about actions in the real world. Nonetheless, promising results are emerging from using multimodal large language models (MLLMs) as the backbone of robotic systems, especially in enabling the acquisition of low-level skills required for robust deployment in everyday household settings.
In this talk, I will present three recent works that aim to bridge the gap between rich semantic world knowledge in MLLMs and actionable robot control. I will begin with AHA, a vision-language model that reasons about failures in robotic manipulation and improves the robustness of existing systems. Building on this, I will introduce SAM2Act, a 3D generalist robotic model with a memory-centric architecture capable of performing high-precision manipulation tasks while retaining and reasoning over past observations. Finally, I will present MolmoAct, AI2’s flagship robotic foundation model for action reasoning, designed as a generalist system that can be post-trained for a wide range of downstream manipulation tasks.
About the Speaker
Jiafei Duan is a Ph.D. candidate in Computer Science & Engineering at the University of Washington, advised by Professors Dieter Fox and Ranjay Krishna. His research focuses on foundation models for robotics, with an emphasis on developing scalable data collection and generation methods, grounding vision-language models in robotic reasoning, and advancing robust generalization in robot learning. His work has been featured in MIT Technology Review, GeekWire, VentureBeat, and Business Wire.
Beyond Academic Benchmarks: Critical Analysis and Best Practices for Visual Industrial Anomaly Detection
In this talk, I will share our recent research efforts in visual industrial anomaly detection. It will present a comprehensive empirical analysis with a focus on real-world applications, demonstrating that recent SOTA methods perform worse than methods from 2021 when evaluated on a variety of datasets. We will also investigate how different practical aspects, such as input size, distribution shift, data contamination, and having a validation set, affect the results.
About the Speaker
Aimira Baitieva is a Research Engineer at Valeo, where she works primarily on computer vision problems. Her recent work has been focused on deep learning anomaly detection for automating visual inspection, incorporating both research and practical applications in the manufacturing sector.
The Digital Reasoning Thread in Manufacturing: Orchestrating Vision, Simulation, and Robotics
Manufacturing is entering a new phase where AI is no longer confined to isolated tasks like defect detection or predictive maintenance. Advances in reasoning AI, simulation, and robotics are converging to create end-to-end systems that can perceive, decide, and act – in both digital and physical environments.
This talk introduces the Digital Reasoning Thread – a consistent layer of AI reasoning that runs through every stage of manufacturing, connecting visual intelligence, digital twins, simulation environments, and robotic execution. By linking perception with advanced reasoning and action, this approach enables faster, higher-quality decisions across the entire value chain.
We will explore real-world examples of applying reasoning AI in industrial settings, combining simulation-driven analysis, orchestration frameworks, and the foundations needed for robotic execution in the physical world. Along the way, we will examine the key technical building blocks – from data pipelines and interoperability standards to agentic AI architectures – that make this level of integration possible.
Attendees will gain a clear understanding of how to bridge AI-driven perception with simulation and robotics, and what it takes to move from isolated pilots to orchestrated, autonomous manufacturing systems.
About the Speaker
Vlad Larichev is an Industrial AI Lead at Accenture Industry X, specializing in applying AI, generative AI, and agentic AI to engineering, manufacturing, and large-scale industrial operations. With a background as an engineer, solution architect, and software developer, he has led AI initiatives across sectors including automotive, energy, and consumer goods, integrating advanced analytics, computer vision, and simulation into complex industrial environments.
Vlad is the creator of the Digital Reasoning Thread – a framework for connecting AI reasoning across visual intelligence, simulation, and physical execution. He is an active public speaker, podcast host, and community builder, sharing practical insights on scaling AI from pilot projects to enterprise-wide adoption.
The Road to Useful Robots
This talk explores the current state of AI-enabled robots and the issues with deploying more advanced models on constrained hardware, including limited compute and power budgets. It then moves on to what's next for developing useful, intelligent robots.
About the Speaker
Michael Hart, also known as Mike Likes Robots, is a robotics software engineer and content creator. His mission is to share knowledge to accelerate robotics. @mikelikesrobots
Sept 25 - Valencia AI, ML and Computer Vision Meetup
Universidad de Valencia - Salon de Grados, Valencia
Join us to hear talks from experts in AI, ML, and Computer Vision.
Sep 25, 2025, 5:00 - 8:30 PM
Universidad de Valencia
Salon de Grados
Avinguda de l'Universitat, s/n 46100
Burjassot, Valencia
Responsible and Governed Generative AI: MLOps/GenAIOps for Ethical and Secure On-Premise Environments
The rapid adoption of generative models demands controlled environments that guarantee end-to-end traceability, privacy, and governance. In this talk we will explore how to apply the MLOps/GenAIOps paradigm to deploy ethical generative AI solutions on on-premise infrastructure, in compliance with regulations such as the AI Act and the ENS.
We will review architecture patterns, open-source tools, and strategies for guaranteeing observability, reproducibility, and data control at every stage of the model lifecycle. We will also share real cases of international collaboration on co-developing responsible AI solutions with startups, the public sector, and technical communities.
About the Speaker
Luis San Martín is an AI Engineer at Exceltic, with more than 4 years of experience in Artificial Intelligence projects and 8 in software development. He currently leads MLOps/LLMOps initiatives in European projects, driving generative AI hubs and collaborating with communities, startups, and public entities to develop ethical, traceable, and secure solutions.
AI that Smells Like Coffee: Exploring Agricultural Data with FiftyOne
Artificial intelligence in agriculture is only as good as the data behind it, but messy datasets, poor annotations, and hidden biases hold back progress. Join Paula for a dynamic session on semantic segmentation, where she will show how FiftyOne can transform data curation, annotation analysis, and model evaluation in agricultural AI projects.
Using real coffee datasets from Colombia, we will explore the segmentation of coffee fruits at different ripening stages, leveraging FiftyOne's powerful tools: from detecting unique samples to similarity search and embedding visualization.
Whether you work in agricultural robotics, remote sensing, or plant phenotyping, this talk will give you practical techniques to refine your datasets and power up your AI workflows.
About the Speaker
Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. Since the early 2000s in Colombia, she has developed novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture.
Agentic AI and Visual RAG: Intelligent Systems that See, Retrieve, and Act
The rise of LLMs has transformed how we interact with language, but their capabilities are limited without visual perception. Computer vision systems, on the other hand, lack contextual reasoning. This talk explores how to combine computer vision, retrieval-augmented generation (RAG), and autonomous agents to build systems that not only see, but understand and act.
We will present a practical architecture based on open source tools such as CLIP, LangChain, and FiftyOne that enables visual agents capable of labeling images, answering questions, and making informed decisions. A talk for those who want to go beyond visual analysis and build intelligent workflows based on perception, reasoning, and action.
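The retrieval step in such a pipeline can be sketched compactly: with images and text queries embedded in a shared space (as CLIP provides), cosine similarity ranks the images most relevant to a query. The vectors below are toy stand-ins for real CLIP embeddings.

```python
# Sketch of embedding-based retrieval for a visual RAG pipeline.
import numpy as np

def retrieve(query_emb: np.ndarray, image_embs: np.ndarray, top_k: int = 2) -> np.ndarray:
    # Normalize so the dot product equals cosine similarity
    q = query_emb / np.linalg.norm(query_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ q
    return np.argsort(-sims)[:top_k]  # indices of the most similar images

# Toy 2-D "embeddings": images 0 and 2 point roughly where the query does.
image_embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
query = np.array([1.0, 0.05])
top = retrieve(query, image_embs)
assert set(top) == {0, 2}
```

The retrieved images (plus their metadata) would then be passed to the agent's language model as context, which is what turns plain similarity search into visual RAG.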
About the Speaker
Sandra Lancheros is a software engineer and technical lead specializing in applied artificial intelligence. With a solid track record in machine learning and agent systems, she has led innovation projects at international companies and now drives solutions through her company IntelligentSystem.es.
From Field to Data: Opportunities, Obstacles, and the Road Ahead for AI in Agriculture
Artificial intelligence (AI) has established itself as a key tool for tackling some of the biggest challenges in agriculture today, from optimizing the use of inputs to precise crop monitoring and yield prediction. This session will present several real-world cases in which AI has demonstrated its potential to improve efficiency, sustainability, and profitability at different stages of agricultural production.
Despite these benefits, however, adoption of these technologies in the sector remains limited. Factors such as the lack of generational turnover, the aging of the farming population, low digitalization in rural areas, and the perceived complexity of, or distrust toward, digital tools pose significant barriers. This talk addresses current successes and obstacles, highlighting the importance of designing accessible solutions, backed by training and technical support, that fit the reality of a traditional sector undergoing transformation.
About the Speaker
José Blasco holds a doctorate in Computer Science from the Universitat Politècnica de València (2001) and has been a researcher at the Instituto Valenciano de Investigaciones Agrarias (IVIA) since 1996. He has served as head of the Computer Vision and Spectroscopy Area, coordinator of the Agricultural Engineering Center, and director of IVIA. His research focuses on the development, adaptation, and application of electronic and computing technologies for automatic inspection in the field and post-harvest, precision agriculture, and the management of agronomic data.
The (Recent) Past, (Changing) Present, and (Uncertain) Future of AI
This talk will cover the recent past of AI (since 2010); a present that changes daily given the number of large companies working in the field; and a future that any disruptive innovation (like transformers in 2017) could reshape.
About the Speaker
Emilio Soria-Olivas is a Full Professor, holds a degree in Physics (with extraordinary award) and a doctorate in Electronic Engineering. He directs the Official Master's in Data Science and the Master's in Artificial Intelligence, both at the Universidad de Valencia, and is director of the AI business accelerator IATECH-UV and of the Baleària University-Industry Chair on Artificial Intelligence and Neuroscience.
Past events (9)
Aug 29 - Visual Agents Workshop Part 3: Teaching Machines to See and Click (network event, 356 attendees from 43 groups; this event has passed)