
About us
The University of San Francisco Data Science & Artificial Intelligence Speaker Series is produced by the Data Institute. This group brings researchers and practitioners together with students in the MS in Data Science and Artificial Intelligence graduate program, faculty, and interested members of the public to discuss topics of interest in analytics and data science.
Talks take place in person on Fridays from 12:30–2:00 pm at the USF Downtown San Francisco campus, located at 101 Howard Street in the East Cut neighborhood, at the heart of San Francisco’s downtown innovation corridor. We encourage attendees to bring their lunch and join us for these mid-day conversations.
Talk recordings are made available subject to speaker permission. You can find the recorded talks at https://www.youtube.com/channel/UCN0kf0sI01-FXPZdWAA-uMA
Upcoming events (3)
On-Device AI: Privacy, Performance, and Real-World Edge Systems
101 Howard St, University of San Francisco - Downtown Campus, San Francisco, CA 94105, US
We are excited to welcome Vineeth Gupta, USF Master of Science in Data Science and Artificial Intelligence alum, for an upcoming Data Science Speaker Series talk.
Vineeth is a Machine Learning Engineer at HP Inc., where he builds privacy-preserving AI systems for on-device and edge deployments. He brings over six years of experience bridging machine learning, systems engineering, and real-world AI production.
As artificial intelligence becomes more embedded in everyday products, running models directly on edge devices is increasingly important for privacy, latency, and user trust. In this session, Vineeth will take a systems-first look at how on-device AI products are built. He will cover why privacy-first AI often favors local inference, how performance is achieved under strict hardware constraints, and how modern edge AI systems are designed in practice.
Attendees will leave with a clear understanding of the trade-offs between cloud and edge AI, as well as practical insights into building reliable and privacy-focused AI products.
We look forward to seeing you there.
#DataScience #ArtificialIntelligence #EdgeAI #MachineLearning #USFCA #USFMSDSAI #DataInstitute #TechTalk #AIEngineering
26 attendees
How to survive building AI systems when things change every week
101 Howard St, University of San Francisco - Downtown Campus, San Francisco, CA 94105, US
We are excited to welcome back Sean McCurdy, AI and Machine Learning Engineering Leader at Lattice and startup founder at hyprbm, for an upcoming Data Science Speaker Series talk.
Sean brings over fifteen years of experience building AI and machine learning systems that drive real impact. His work spans scientific research, startup innovation, and large-scale production systems, including leading high-impact teams at Pinterest and building agentic and embedded AI experiences at Lattice.
In a world where new AI models seem to appear every week, this talk offers a grounded perspective on what it actually takes to build AI systems that last. Sean will share hard-won lessons on choosing the right metrics, designing architectures that adapt quickly, and avoiding the trap of constantly chasing the latest techniques at the cost of shipping reliable products.
This session is a reality check from the trenches, with real examples of what scales, what fails, and how teams can build AI systems that continue to deliver value over time.
We look forward to seeing you there.
#USFCA #USFMSDSAI #DataInstitute #DataScience #ArtificialIntelligence #MachineLearning #AIEngineering #TechTalk #StartupLife #AIInProduction
10 attendees
Foundations of Distributed Training: How Modern AI Systems Are Built
101 Howard St, University of San Francisco - Downtown Campus, San Francisco, CA 94105, US
We are excited to welcome Suman Debnath, Technical Lead in Machine Learning at Anyscale, for a practical and intuitive introduction to distributed training.
Talk Description:
As modern AI models continue to grow, single-GPU training is no longer enough. Distributed training has become essential, but scaling models introduces challenges that require understanding communication patterns, system bottlenecks, and key trade-offs.
In this session, we will break down distributed training from first principles. We will explore why single-GPU training hits limits, how transformer models manage memory, and what techniques like gradient accumulation, checkpointing, and data parallelism actually do.
We will also demystify communication primitives, walk through ZeRO-1, ZeRO-2, ZeRO-3, and FSDP, and show how compute and communication can be overlapped for better efficiency. Finally, we will connect these concepts to real-world tooling used in frameworks like Ray and PyTorch. Attendees will gain a clear, grounded understanding of how distributed training works and when to apply different strategies.
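For readers who want a concrete preview, here is a minimal sketch of one of the techniques the talk covers, gradient accumulation, in plain PyTorch. The tiny linear model and toy batches are hypothetical stand-ins for illustration, not material from the speaker.

# Minimal sketch of gradient accumulation in PyTorch (illustrative only).
# The tiny model and toy batches below are hypothetical stand-ins.
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                          # stand-in for a large model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
accum_steps = 4                                     # simulate a 4x larger batch

data_loader = [(torch.randn(8, 128), torch.randint(0, 10, (8,)))
               for _ in range(8)]                   # toy mini-batches

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(data_loader):
    # Scale the loss so accumulated gradients average over the effective batch.
    loss = loss_fn(model(inputs), targets) / accum_steps
    loss.backward()                                 # gradients add up across calls
    if (step + 1) % accum_steps == 0:
        optimizer.step()                            # one update per effective batch
        optimizer.zero_grad()

Data parallelism then splits batches across GPUs, and approaches like ZeRO and FSDP additionally shard optimizer state, gradients, and parameters to save memory.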
Bio:
Suman Debnath is a Technical Lead in Machine Learning at Anyscale, where he works on large-scale distributed training, fine-tuning, and inference optimization in the cloud. His expertise spans Natural Language Processing, Large Language Models, and Retrieval-Augmented Generation.
He has also spoken at more than one hundred global conferences and events, including PyCon, PyData, and ODSC, and has previously built performance benchmarking tools for distributed storage systems.
We look forward to seeing you!
#DataScience #MachineLearning #DistributedTraining #Ray #PyTorch #LLM #RAG #DeepLearning #USFCA #USFMSDSAI #DataInstitute #AIEngineering #TechTalk
41 attendees
Past events (309)