
Details

The raffle for a Lime Green Game Boy is still open! 🍋‍🟩🎮
The rules: ✅ 1 entry per invite sent ✅ 2 entries per invitee who shows up in person!

vLLM is used to deploy production-scale inference systems in Cloud Native environments. In this session, we will explore how vLLM operates in a Kubernetes environment, the integration points for the vLLM engine, and the orchestration and operational lifecycles required to scale inference effectively.
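As a rough sketch of the kind of setup the session covers, a minimal Kubernetes Deployment serving vLLM's OpenAI-compatible API might look like the following. The model name, replica count, and resource limits here are illustrative assumptions, not details from the event description:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-server
spec:
  replicas: 1            # scale out by increasing replicas behind a Service
  selector:
    matchLabels:
      app: vllm-server
  template:
    metadata:
      labels:
        app: vllm-server
    spec:
      containers:
      - name: vllm
        image: vllm/vllm-openai:latest   # official vLLM serving image
        args: ["--model", "facebook/opt-125m"]  # example model; swap for your own
        ports:
        - containerPort: 8000            # vLLM's default HTTP port
        resources:
          limits:
            nvidia.com/gpu: 1            # one GPU per replica (assumes the NVIDIA device plugin)
```

In practice you would pair this with a Service for load balancing and probes for lifecycle management, which is exactly the orchestration territory the session explores.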

