June 25 - AI, ML and Computer Vision Meetup
139 attendees from 48 hosting groups
Details
Join our virtual meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.
Date, Time and Location
Jun 25, 2026
9 AM PDT
Online. Register for the Zoom!
Large-Scale Scene Reconstruction via Local View Transformers
Transformer-based models have advanced 3D scene reconstruction, but their quadratic attention cost limits scalability to large scenes. We introduce the Local View Transformer (LVT), which replaces global attention with locality-aware attention over neighboring views, conditioned on relative camera geometry. LVT decodes directly into 3D Gaussian splats with view-dependent color and opacity for high-fidelity rendering. Our approach enables scalable, single-pass reconstruction of large, high-resolution scenes.
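As a rough illustration of the scaling argument, the sketch below contrasts global attention, where every view attends to every other view (O(V²) pairs), with a locality-aware mask that restricts each view to its k nearest neighbors (O(V·k) pairs). The mask construction and the index-distance neighbor rule are illustrative assumptions, not LVT's actual mechanism, which conditions on relative camera geometry.

```python
import numpy as np

def local_attention_mask(num_views, k):
    """Boolean mask allowing each view to attend only to views within k hops.

    Index distance stands in for camera proximity here; a real system
    would rank neighbors by relative camera pose.
    """
    idx = np.arange(num_views)
    dist = np.abs(idx[:, None] - idx[None, :])
    return dist <= k

mask = local_attention_mask(num_views=8, k=1)

# Global attention scores grow as O(V^2); the local mask keeps O(V*k) pairs.
global_pairs = 8 * 8            # 64 view pairs
local_pairs = int(mask.sum())   # 22 view pairs for V=8, k=1
```

The same masking idea is what lets locality-aware attention scale to scenes with hundreds of views, since the per-view cost no longer grows with the total view count.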
About the Speaker
Tooba Imtiaz is a PhD candidate in Electrical and Computer Engineering at Northeastern University, working in the Machine Learning Lab. Her research focuses on 3D computer vision, novel view synthesis, and robust machine learning. She has published in top venues including SIGGRAPH Asia, CVPR, and ICLR, and has industry experience at Google.
Lessons learned from running AI workloads in production
Dave Hughes will share his “tales from the engine room”: practical insights from operating AI systems at scale, including the challenges of abstraction layers, the realities of data movement and hardware constraints, and why systems thinking is essential for building high-performance, secure, and responsible AI infrastructure.
About the Speaker
Dave Hughes is CTO at Stelia. He was formerly CTO at Genesis Cloud, which pioneered what are now commonly known as 'neoclouds', and Principal Engineer/Interim Director of Engineering at Adjust GmbH, where he built large-scale data warehousing and processing systems. Dave has a strong background in software engineering, data engineering, systems administration, and network engineering. He has worked in traditional HPC, early GPU-accelerated computing (ML), and now AI.
Enhancing Low-Field MRI with Deep Super-Resolution for Improved Nipah Virus Neuroimaging
Advances in deep learning make very-low-field (VLF) MRI systems a viable alternative for in vivo neuroimaging. Zero-shot super-resolution, self-supervised learning, and generative AI were explored to improve the quality of low-field MRI images. We present a framework for the first deployment of a VLF scanner for imaging Nipah virus-inoculated nonhuman primates (NHPs) using a 0.05 T MRI system.
First, a retrospective simulation study assessed the feasibility of imaging NiV infection at low field, followed by a prospective deployment (0.05 T) that enabled longitudinal imaging. The VLF-NiV images spanned multiple contrasts but suffered from low image quality. A deep learning-based unpaired domain adaptation model (CycleGAN) conditioned on acquisition parameters was used to harmonize contrast, and a simulation-based ResUNet model was used to suppress noise while preserving T2-weighted structural fidelity. We also highlight zero-shot super-resolution and denoising experiments that support accessible neuroimaging.
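For intuition on the simulation-based side of such pipelines: training pairs for a low-field denoiser are commonly built by corrupting cleaner images with MRI-like noise. MRI magnitude images exhibit Rician noise, the magnitude of a complex signal whose real and imaginary channels carry independent Gaussian noise. The sketch below is a minimal, hypothetical simulator in that spirit; it is not the speakers' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_low_field(img, noise_sigma=0.1):
    """Corrupt a clean magnitude image with Rician-like noise.

    The noisy magnitude is |(img + n_real) + i * n_imag| with independent
    Gaussian n_real, n_imag, the standard model for MRI magnitude noise.
    """
    real = img + rng.normal(0.0, noise_sigma, img.shape)
    imag = rng.normal(0.0, noise_sigma, img.shape)
    return np.sqrt(real**2 + imag**2)

# One (clean, noisy) training pair for a denoising network such as a ResUNet.
clean = np.ones((8, 8))
noisy = simulate_low_field(clean)
```

Because the corruption is synthetic, the clean target is known exactly, which is what makes supervised training possible even when no paired high-field/low-field acquisitions exist.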
About the Speaker
Ajay Sharma is a deep learning engineer with a broad background in biomedical image analysis. His research focuses on developing advanced deep learning methods for computer-aided disease detection and diagnosis. Currently, his work centers on improving image analysis in magnetic resonance imaging (MRI), with emphasis on low-field MRI (LF-MRI), image acquisition, image enhancement, brain tracking, segmentation, and reporting. Previously, he developed explainable AI (XAI) approaches for chest and pediatric brain imaging that increase clinicians’ confidence in AI-assisted diagnostic systems.
And Now for Something Completely Different with FiftyOne
Often the best way to understand what a tool is truly capable of is to use it in ways it was never intended to be used. This session pushes FiftyOne past its computer vision roots through a series of boundary-stretching demos: a few practical, some playful, all built with open source code. You'll see how FiftyOne's core building blocks generalize far beyond labeled datasets, and you'll leave with patterns and ideas you can take in your own direction.
About the Speaker
Burhan Qaddoumi is an ML DevRel Engineer at Voxel51 and, as a lifelong learner, a perpetual "new guy." He is active in communities across the web, eager to help, learn, and share with others who demonstrate initiative, interest, and drive.
Sponsors

PubNub
Event Host

Amazon Web Services (AWS)
Hosting

O'Reilly
Media Sponsor

Structure
Media Partner
