About us
vLLM is a high-throughput, memory-efficient open-source library for fast LLM inference and serving, widely used for deploying large language models in production.
This meetup group is for anyone curious about how LLM inference works—and eager to learn the state-of-the-art techniques that power it.
Whether you're a developer, researcher, or just getting started with AI, you're welcome to join us in Hong Kong to explore vLLM, share insights, and connect with others.
Featured event

The First vLLM Meetup @ Hong Kong!
*** IMPORTANT ***
Please use the following link to register officially, and ignore the attendee limit shown here on meetup.com:
https://www.vantagemind.com/events/vLLM/260307/vLLM-HK-Meetup_vLLM.html
*** IMPORTANT ***
The global vLLM Meetup is coming to Hong Kong! We’re bringing together vLLM core contributors and users locally from Hong Kong, Greater China, and around the world to share what’s next for LLM inference with vLLM—an open‑source LLM inference and serving engine with over 60,000 stars on GitHub!
Join us to dive into the fundamentals of vLLM, get hands-on experience, discover proven techniques to optimize LLM performance, deployment cost and reliability, and connect in person with a vibrant community of vLLM contributors, developers and users.
- Discover vLLM and the current landscape of LLM inference
- Hear directly from vLLM core contributors to learn the latest vLLM developments and updates
- Learn how vLLM integrates with AI hardware accelerators and state-of-the-art AI models
*** IMPORTANT ***
Please use the following link to register officially, and ignore the attendee limit shown here on meetup.com:
https://www.vantagemind.com/events/vLLM/260307/vLLM-HK-Meetup_vLLM.html
*** IMPORTANT ***
Upcoming events

The First vLLM Meetup @ Hong Kong!
Hong Kong Polytechnic University, 11 Yuk Choi Road, Hung Hom, Hong Kong
9 attendees
