
What we’re about
BayNode is a community-focused node.js meetup in Mountain View. We meet for a talk night (food & drinks) and a Beer.node (informal socializing).
Each Node Night features 2-3 talks relevant to the node.js ecosystem. When possible, we prioritize speakers and topics from our members over any specific subject area or expertise level.
If you want to help, we are always looking for contributors.
Sponsors
Upcoming events (4)
Nov 13 - Women in AI
Online network event · 264 attendees from 47 groups
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13.
Date and Location
Nov 13, 2025
9 AM Pacific
Online. Register for the Zoom!

Copy, Paste, Customize! The Template Approach to AI Engineering
Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability.
Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with an understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations.
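As a rough illustration of the "fillable template" idea, a standardized prompt template in Python might look like the sketch below. The template text, field names, and word limit are placeholders for illustration, not the speaker's actual templates.

```python
from string import Template

# Illustrative "fillable" prompt template: named fields are filled in code,
# so every LLM call uses the same vetted structure instead of ad-hoc strings.
SUMMARY_PROMPT = Template(
    "You are a support analyst.\n"
    "Summarize the ticket below in at most $max_words words.\n"
    "Ticket:\n$ticket_text\n"
    "Respond as JSON with keys: summary, severity."
)

def build_prompt(ticket_text: str, max_words: int = 50) -> str:
    """Render the template; substitute() raises KeyError if a field is missing."""
    return SUMMARY_PROMPT.substitute(
        ticket_text=ticket_text.strip(), max_words=max_words
    )

print(build_prompt("Checkout page returns HTTP 500 for EU users."))
```

Because the template is an ordinary code artifact, it can be versioned, reviewed, and tested like any other piece of infrastructure.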
About the Speaker
Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science.
Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne
Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety.
About the Speaker
Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.
The Heart of Innovation: Women, AI, and the Future of Healthcare
This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world.
About the Speaker
Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology.
Language Diffusion Models
Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). This talk challenges that notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data-masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens.
Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue.
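For context, the training objective of masked diffusion language models of this kind is roughly a weighted masked-prediction loss; in schematic form (notation ours, see the LLaDA paper for the exact statement):

```latex
\mathcal{L}(\theta) = -\,\mathbb{E}_{t \sim U(0,1],\; x_0,\; x_t}\!\left[
  \frac{1}{t} \sum_{i=1}^{L} \mathbf{1}\!\left[x_t^{\,i} = \mathrm{MASK}\right]
  \log p_\theta\!\left(x_0^{\,i} \mid x_t\right) \right]
```

where x_t is obtained from x_0 by masking each token independently with probability t. This quantity upper-bounds the negative log-likelihood, which is the bound referred to above.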
About the Speaker
Jayita Bhattacharyya is an AI/ML nerd with a blend of technical speaking and hackathon wizardry, applying tech to solve real-world problems. Her current focus is generative AI, helping software teams incorporate AI into transforming software engineering.
Nov 14 - Workshop: Document Visual AI with FiftyOne
Online network event · 112 attendees from 47 groups
This hands-on workshop introduces you to document visual AI workflows using FiftyOne, the leading open-source toolkit for computer vision datasets.
Date and Location
Nov 14, 2025
9:00-10:30 AM Pacific
Online. Register for the Zoom!

In document understanding, a pixel is worth a thousand tokens. While traditional text-extraction pipelines tokenize and process documents sequentially, modern visual AI approaches can understand document structure, layout, and content directly from images—making them more efficient, accurate, and robust to diverse document formats.
In this workshop you'll learn how to:
- Load and organize document datasets in FiftyOne for visual exploration and analysis
- Compute visual embeddings using state-of-the-art document retrieval models to enable semantic search and similarity analysis
- Leverage FiftyOne workflows including similarity search, clustering, and quality assessment to gain insights from your document collections
- Deploy modern vision-language models for OCR and document understanding tasks that go beyond simple text extraction
- Evaluate and compare different OCR models to select the best approach for your specific use case
Whether you're working with invoices, receipts, forms, scientific papers, or mixed document types, this workshop will equip you with practical skills to build robust document AI pipelines that harness the power of visual understanding. Walk away with reproducible notebooks and best practices for tackling real-world document intelligence challenges.
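As a taste of the workflow outlined in the list above, a minimal FiftyOne sketch might look like the following. The dataset path and the embedding model are placeholders (CLIP stands in for a dedicated document-retrieval model); the actual workshop notebooks may differ.

```python
import fiftyone as fo
import fiftyone.brain as fob

# Load a folder of scanned documents as a FiftyOne image dataset
dataset = fo.Dataset.from_dir(
    dataset_dir="/path/to/document/scans",   # placeholder path
    dataset_type=fo.types.ImageDirectory,
    name="documents",
)

# Index the images by visual similarity to enable semantic search,
# near-duplicate detection, and clustering
fob.compute_similarity(
    dataset,
    model="clip-vit-base32-torch",
    brain_key="doc_sim",
)

# Explore the collection interactively in the FiftyOne App
session = fo.launch_app(dataset)
session.wait()
```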
Dec 4 - AI, ML and Computer Vision Meetup
Online network event · 170 attendees from 47 groups
Join the virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.
Date and Time
Dec 4, 2025
9:00 - 11:00 AM Pacific

Benchmarking Vision-Language Models for Autonomous Driving Safety
This workshop introduces a unified framework for evaluating how vision-language models handle driving safety. Using an enhanced BDDOIA dataset with scene, weather, and action labels, we benchmark models like Gemini, FastVLM, and Qwen within FiftyOne. Our results show consistent blind spots where models misjudge unsafe situations, highlighting the need for safer and more interpretable AI systems for autonomous driving.
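A hedged sketch of how such a benchmark can be scored inside FiftyOne is shown below. The dataset name and prediction field names are illustrative, not the actual enhanced-BDDOIA setup.

```python
import fiftyone as fo
from fiftyone import ViewField as F

# Assume each sample carries a ground-truth action label plus one
# prediction field per VLM (hypothetical field names)
dataset = fo.load_dataset("bddoia-enhanced")  # hypothetical dataset name

for pred_field in ["gemini_pred", "fastvlm_pred", "qwen_pred"]:
    results = dataset.evaluate_classifications(
        pred_field,
        gt_field="ground_truth",
        eval_key=f"eval_{pred_field}",
    )
    print(pred_field)
    results.print_report()

# Drill into the blind spots: unsafe scenes a model judged safe
view = dataset.match(
    (F("ground_truth.label") == "unsafe") & (F("gemini_pred.label") == "safe")
)
session = fo.launch_app(view)
```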
About the Speaker
Adonai Vera is a Machine Learning Engineer & DevRel at Voxel51 with over 7 years of experience building computer vision and machine learning models using TensorFlow, Docker, and OpenCV. He started as a software developer, moved into AI, led teams, and served as CTO. Today, he connects code and community to build open, production-ready AI, making technology simple, accessible, and reliable.
TrueRice: AI-Powered Visual Quality Control for Rice Grains and Beyond at Scale
Agriculture remains one of the most under-digitized industries, yet grain quality control defines pricing, trust, and livelihoods for millions. TrueRice is an AI-powered analyzer that turns a flatbed scanner into a high-precision, 30-second QC engine, replacing the 2+ hours and subjectivity of manual quality inspection.
Built on a state-of-the-art 8K image processing pipeline with SAHI (Slicing Aided Hyper Inference), it detects fine-grained kernel defects at scale with high accuracy across grain size, shape, breakage, discoloration, and chalkiness. Now being extended to maize and coffee, TrueRice showcases how cross-crop transfer learning and frugal AI engineering can scale precision QC for farmers, millers, and exporters. This talk will cover the design principles, model architecture choices, and a live demonstration, while addressing challenges in data variability, regulatory standards, and cross-crop adaptation.
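For readers unfamiliar with SAHI, sliced inference over a very large image looks roughly like the sketch below. The detector type, weights file, and tile sizes are placeholders, not the TrueRice pipeline itself.

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Any SAHI-supported detector works here; model_type may vary by SAHI version
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="rice_defects.pt",       # placeholder weights
    confidence_threshold=0.4,
    device="cuda:0",
)

# Slice the 8K scanner capture into overlapping tiles, run the detector on
# each tile, then merge detections back into full-image coordinates
result = get_sliced_prediction(
    "scanner_capture_8k.png",
    detection_model,
    slice_height=1024,
    slice_width=1024,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

print(len(result.object_prediction_list), "kernel detections")
```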
About the Speaker
Sai Jeevan Puchakayala is an Interdisciplinary AI/ML Consultant, Researcher, and Tech Lead at Sustainable Living Lab (SL2) India, where he drives development of applied AI solutions for agriculture, climate resilience, and sustainability. He led the engineering of TrueRice, an award-winning grain quality analyzer that won India’s first International Agri Hackathon 2025.
WeedNet: A Foundation Model Based Global-to-Local AI Approach for Real-Time Weed Species Identification and Classification
Early and accurate weed identification is critical for effective management, yet current AI-based approaches face challenges due to limited expert-verified datasets and the high variability in weed morphology across species and growth stages. We present WeedNet, a global-scale weed identification model designed to recognize a wide range of species, including noxious and invasive plants. WeedNet is an end-to-end real-time pipeline that integrates self-supervised pretraining, fine-tuning, and trustworthiness strategies to improve both accuracy and reliability.
Building on this foundation, we introduce a Global-to-Local strategy: while the Global WeedNet model provides broad generalization, we fine-tune local variants such as Iowa WeedNet to target region-specific weed communities in the U.S. Midwest. Our evaluation addresses both intra-species diversity (different growth stages) and inter-species similarity (look-alike species), ensuring robust performance under real-world variability. We further validate WeedNet on images captured by drones and ground rovers, demonstrating its potential for deployment in robotic platforms. Beyond field applications, we integrate a conversational AI to enable practical decision-support tools for farmers, agronomists, researchers, and land managers worldwide. These advances position WeedNet as a foundational model for intelligent, scalable, and regionally adaptable weed management and ecological conservation.
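The global-to-local idea can be sketched generically: start from a broadly pretrained backbone and fine-tune a region-specific head. The backbone, class count, and freezing strategy below are illustrative stand-ins, not the actual WeedNet code.

```python
import timm
import torch
from torch import nn, optim

# Stand-in for a "global" pretrained backbone (not the actual WeedNet weights)
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
head = nn.Linear(model.num_features, 65)   # e.g. a regional weed-class head (illustrative)

# Freeze the global backbone; train only the local (regional) head
for p in model.parameters():
    p.requires_grad = False

optimizer = optim.AdamW(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    with torch.no_grad():
        features = model(images)          # global representation
    logits = head(features)               # regional classifier
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```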
About the Speaker
Timilehin Ayanlade is a Ph.D. candidate in the Self-aware Complex Systems Laboratory at Iowa State University, where his research focuses on developing machine learning and computer vision methods for agricultural applications. His work integrates multimodal data across ground-based sensing, UAV, and satellite with advanced AI models to tackle challenges in weed identification, crop monitoring, and crop yield prediction.
Memory Matters: Early Alzheimer’s Detection with AI-Powered Mobile Tools
Advancements in artificial intelligence and mobile technology are transforming the landscape of neurodegenerative disease detection, offering new hope for early intervention in Alzheimer’s.
By integrating machine learning algorithms with everyday mobile devices, we are entering a new era of accessible, scalable, and non-invasive tools for early Alzheimer's detection.
In this talk, we'll cover the potential of AI in health care systems, ethical considerations, and a deep dive into the architecture, model, datasets, and framework.
About the Speaker
Reetam Biswas has more than 18 years of experience in the IT industry as a software architect, currently working on AI.
1 attendee from this group

Dec 11 - Visual AI for Physical AI Use Cases
Online network event · 49 attendees from 47 groups
Join our virtual meetup to hear talks from experts on cutting-edge topics across Visual AI for Physical AI use cases.
Date, Time and Location
Dec 11, 2025
9:00-11:00 AM Pacific
Online. Register for the Zoom!

From Data to Open-World Autonomous Driving
Data is key for advances in machine learning, including mobile applications like robots and autonomous cars. To ensure reliable operation, the scenarios encountered in the field must be reflected in the underlying dataset. Since open-world environments can contain unknown scenarios and novel objects, active learning from online data collection and handling of unknowns are required. In this talk we discuss different approaches to address these real-world requirements.
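As a generic illustration of how online data collection can prioritize what to label (not necessarily the speaker's method), an entropy-based acquisition step might look like:

```python
import numpy as np

def select_for_labeling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the frames whose predictions the model is least sure about.

    probs: (num_frames, num_classes) predicted class probabilities
    Returns indices of the `budget` highest-entropy frames.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-budget:]

# Toy usage: 5 frames, 3 classes; the most uncertain frames go to annotation
probs = np.array([
    [0.98, 0.01, 0.01],
    [0.40, 0.35, 0.25],   # uncertain
    [0.90, 0.05, 0.05],
    [0.34, 0.33, 0.33],   # uncertain
    [0.85, 0.10, 0.05],
])
print(select_for_labeling(probs, budget=2))
```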
About the Speaker
Sebastian Schmidt is a PhD student at the Data Analytics and Machine Learning group at TU Munich and part of an Industrial PhD Program with the BMW research group. His work is mainly focused on Open-world active learning and perception for autonomous vehicles.
From Raw Sensor Data to Reliable Datasets: Physical AI in Practice
Modern mobility systems rely on massive, high-quality multimodal datasets — yet real-world data is messy. Misaligned sensors, inconsistent metadata, and uneven scenario coverage can slow development and lead to costly model failures. The Physical AI Workbench, built in collaboration between Voxel51 and NVIDIA, provides an automated and scalable pipeline for auditing, reconstructing, and enriching autonomous driving datasets.
In this talk, we’ll show how FiftyOne serves as the central interface for inspecting and validating sensor alignment, scene structure, and scenario diversity, while NVIDIA Neural Reconstruction (NuRec) enables physics-aware reconstruction directly from real-world captures. We’ll highlight how these capabilities support automated dataset quality checks, reduce manual review overhead, and streamline the creation of richer datasets for model training and evaluation.
Attendees will gain insight into how Physical AI workflows help mobility teams scale, improve dataset reliability, and accelerate iteration from data capture to model deployment — without rewriting their infrastructure.
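As a generic FiftyOne sketch of the kind of multi-sensor inspection described, a grouped dataset can hold the synchronized captures from each timestep. Filepaths and slice names are placeholders, and the NuRec integration is not shown here.

```python
import fiftyone as fo

# Minimal grouped dataset: one group per timestep, one slice per sensor
dataset = fo.Dataset("drive_audit")
dataset.add_group_field("group", default="front_cam")

group = fo.Group()
samples = [
    fo.Sample(filepath="/data/t0001/front.jpg", group=group.element("front_cam")),
    fo.Sample(filepath="/data/t0001/rear.jpg", group=group.element("rear_cam")),
    fo.Sample(filepath="/data/t0001/lidar.pcd", group=group.element("lidar")),
]
dataset.add_samples(samples)

# Inspect alignment and coverage across sensor slices in the App
print(dataset.group_slices)
session = fo.launch_app(dataset)
```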
About the Speaker
Daniel Gural leads technical partnerships at Voxel51, where he's building the Physical AI Workbench, a platform that connects real-world sensor data with realistic simulation to help engineers better understand, validate, and improve their perception systems. He has a background in developer relations and computer vision engineering.
Relevance of Classical Algorithms in Modern Autonomous Driving Architectures
While modern autonomous driving systems increasingly rely on machine learning and deep neural networks, classical algorithms continue to play a foundational role in ensuring reliability, interpretability, and real-time performance. Techniques such as Kalman filtering, A* path planning, PID control, and SLAM remain integral to perception, localization, and decision-making modules. Their deterministic nature and lower computational overhead make them especially valuable in safety-critical scenarios and resource-constrained environments. This talk explores the enduring relevance of classical algorithms, their integration with learning-based methods, and their evolving scope in the context of next-generation autonomous vehicle architectures.
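As a reminder of how lightweight these classical building blocks are, here is a minimal 1-D constant-velocity Kalman filter, purely illustrative and not tied to any production stack:

```python
import numpy as np

# 1-D constant-velocity Kalman filter: a classical estimator still used to
# smooth noisy object-track positions in perception stacks
F = np.array([[1.0, 1.0], [0.0, 1.0]])    # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                # we only measure position
Q = np.eye(2) * 1e-3                      # process noise covariance
R = np.array([[0.5]])                     # measurement noise covariance

x = np.zeros((2, 1))                      # initial state estimate
P = np.eye(2)                             # initial state covariance

def kalman_step(z: float) -> float:
    """One predict/update cycle for a scalar position measurement z."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x[0, 0]                        # filtered position

for z in [0.9, 2.1, 2.9, 4.2, 5.1]:
    print(round(kalman_step(z), 2))
```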
About the Speaker
Prajwal Chinthoju is an Autonomous Driving Feature Development Engineer with a strong foundation in systems engineering, optimization, and intelligent mobility. He specializes in integrating classical algorithms with modern AI techniques to enhance perception, planning, and control in autonomous vehicle platforms.
Past events (163)


