AI Inference Meetup with NVIDIA


# AI Inference Meetup with NVIDIA: Unlock the Future of AI
Date: 5th April, 2025
Time: 9:00 AM - 4:00 PM (IST)
Location: Nokia L5 Building, Manyata Tech Park Rd, DadaMastan Layout, Manayata Tech Park, Nagavara, Bengaluru, Karnataka 56004
Register now: https://forms.gle/VsTp1DWY7GGp4DcP9
Join us for an exclusive AI Inference Meetup with NVIDIA, a full-day developer event in Bengaluru, where AI meets cloud-native technologies. This event brings together top experts from NVIDIA, Docker, Nokia, Cloudera, and more to explore cutting-edge advancements in AI inference and deployment.
## What's in Store?
**Inspiring Keynotes & Sessions**
Gain insider insights from AI and cloud-native pioneers who are revolutionizing AI inference, deployment, and model optimization.
**Connect with Industry Experts**
Network with developers, AI practitioners, and tech leaders who are shaping the future of AI in the cloud.
**Hands-On Demos & Learning**
Experience real-world applications through interactive demos, technical deep dives, and cloud-native AI deployment strategies.
***
## Event Agenda
- 9:00 AM - 10:00 AM - Welcome and Registration
- 10:00 AM - 10:30 AM - NVIDIA Inference Stack Demystified by Amit Kumar, NVIDIA
- 10:30 AM - 11:45 AM - LLM Inference Optimization and Serving using TensorRT-LLM and Triton Inference Server
  - Advanced techniques for LLM inference optimization - FP4, FP8, KV-cache reuse, etc. - by Utkarsh Uppal, NVIDIA
- 11:45 AM - 12:00 PM - Tea Break
- 12:00 PM - 12:30 PM - AI Inference for enhanced performance and accessibility with TensorRT-LLM by Sarvam.AI Team
- 12:30 PM - 1:15 PM - Bridging Data & AI: How Cloudera and NVIDIA Drive the Future of Intelligent Enterprises by Manick Mehra, Anukrati Saxena & Navin Agrawal, Cloudera Team
- 1:15 PM - 1:45 PM - GPU-Accelerated AI Inference for Local LLM Development with Docker Model Runner by Ajeet Singh Raina, DevRel, Docker
- 1:45 PM - 2:30 PM - Lunch
- 2:30 PM - 3:00 PM - Securing LLM Apps with NVIDIA NeMo Guardrails by Jayita Bhattacharya, Deloitte
- 3:00 PM - 3:30 PM - Fine-Tuning LLMs Locally with NVIDIA NIM Microservices: A Live Demo by Manjunath Janardhan, GE Healthcare
- 3:30 PM - 4:00 PM - Networking
***
Don't miss this opportunity to learn from AI pioneers and connect with fellow developers and industry experts!
Seats are limited - RSVP now!
