[In-Person - April] Expert Talk - JAVA Meetup
Details
## 🚀 Bangalore JUG Special Talk.
## Learn from globally recognized tech experts and industry leaders shaping modern software!
## Event Details
Location:
Infosys Campus,
Electronic City Phase 1,
Bengaluru, Karnataka 560100
-- RSVP ONLY IF YOU HAVE DECIDED TO JOIN, LIMITED SEATS --
-- RSVP before 24th April 2026 --
-- GOV ID CARD IS MANDATORY TO ENTER --
-- Lunch included --
-- ATTENDEES MUST BRING FULLY CHARGED LAPTOPS IF REQUIRED --
Featured Talks -
Talk 1 : Full-stack observability for Java on Kubernetes: Combining OpenTelemetry with eBPF.
OpenTelemetry (OTel) is the industry standard for collecting distributed traces, metrics, and logs. Extended Berkeley Packet Filter (eBPF) provides high-frequency, low-overhead insights directly from the Linux kernel. Together they build a comprehensive observability solution for Java applications running on Kubernetes. Using tools such as the OpenTelemetry Collector, Pixie, and Parca, we demonstrate how developers and SRE teams can correlate distributed traces with continuous profiling data, kernel-level metrics, and flamegraphs to reduce mean time to detection (MTTD) and accelerate root-cause analysis.
Speaker(s) :
Prabal Rakshit
-------------------------------------------------------------------
Talk 2 : Open-Source GPU-Powered Image Generation with LangChain4j
Modern Java can now deliver real GPU-accelerated AI. This session walks through a complete end-to-end solution combining CUDA, ONNX Runtime, and Oracle’s Stable Diffusion for Java (SD4J) to generate high-quality images directly from text prompts. The demo layers together Java, Spring, and LangChain4j to build an intelligent, cloud-ready service that performs inference with ONNX, produces embeddings for semantic search, and exposes a real-time web UI with health checks and monitoring. Attendees see each component—from CLIP tokenizer and U-Net to VAE decoder, scheduler, and safety checker—running efficiently on GPUs inside a Dockerized app that runs locally and on the cloud. The result: enterprise-grade Java that generates images at GPU speed with no rate limits.
Speaker(s) :
Brian Benz
-------------------------------------------------------------------
Talk 3 : Demystifying Java Virtual Threads: Internals, Challenges, and Pitfalls
In this talk, we will quickly trace the evolution of threading in Java, examining the limitations of traditional thread-per-request models and the motivations behind virtual threads.
We will then dive into the internals of virtual threads and discuss key engineering challenges faced during their development. Special attention will be given to practical concerns such as thread pinning, blocking operations, and how they impact scalability in real-world applications.
By the end of this session, you will gain a deep understanding of how virtual threads work under the hood, the trade-offs involved in their design, and how to reason about their behaviour when building high-performance Java systems.
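To give a flavour of the model this talk examines, here is a minimal Java 21 sketch (the class and method names are illustrative, not from the talk). It launches many virtual threads that all block concurrently; while a virtual thread sleeps it unmounts from its carrier platform thread, which is what makes the thread-per-request model cheap again:

```java
import java.util.ArrayList;
import java.util.List;

public class VirtualThreadsDemo {
    // Start `count` virtual threads that each block briefly, then wait for all.
    // While a virtual thread is parked in sleep(), it releases its carrier
    // thread, so thousands can block on a small pool of platform threads.
    static int runAll(int count) {
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            threads.add(Thread.ofVirtual().start(() -> {
                try {
                    Thread.sleep(10); // parks the virtual thread, frees the carrier
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }));
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return threads.size();
    }

    public static void main(String[] args) {
        System.out.println(runAll(10_000) + " virtual threads completed");
    }
}
```

Note that blocking inside a `synchronized` block can pin a virtual thread to its carrier, one of the pitfalls the session covers.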
Speaker(s) :
Ramkumar Sunderbabu
-------------------------------------------------------------------
Lunch break:
Time : 1:00pm - 2:00pm
-------------------------------------------------------------------
Talk 4 : Building AI Agents with Spring and MCP
Integrating AI into enterprise systems has traditionally been complex and specialized, but new tools are changing that. Spring AI and the Model Context Protocol (MCP) simplify AI integration by providing high-level abstractions and seamless interoperability, no deep AI expertise required.
This session walks through the end-to-end process of building an AI agent that can act as a natural language interface to enterprise data and back-end services. Attendees will see how Spring AI and MCP work together to create robust, production-ready AI applications that extend the power of existing Spring ecosystems.
Speaker(s) :
Varsha Das
-------------------------------------------------------------------
Talk 5 : LLMs and model serving
We will start with the architecture of large language models, covering the encoder-decoder design and the attention mechanism. We will then examine the computational complexity within LLMs and how the KV cache helps reduce it. Because the KV cache introduces new challenges of its own, we will look at some of the strategies used to overcome them. Finally, we will see how DJL (Deep Java Library) containers can be used for model serving, along with their configuration options.
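As a back-of-the-envelope illustration of the complexity argument behind the KV cache, the hypothetical Java snippet below counts key/value projection operations during autoregressive decoding, with and without a cache (class and method names are illustrative):

```java
public class KvCacheDemo {
    // Without a cache: every decoding step recomputes key/value projections
    // for all tokens generated so far, so the total is 1 + 2 + ... + n,
    // i.e. quadratic in sequence length.
    static long projectionsWithoutCache(int seqLen) {
        long ops = 0;
        for (int step = 1; step <= seqLen; step++) {
            ops += step; // recompute K/V for all `step` tokens at this step
        }
        return ops; // e.g. projectionsWithoutCache(4) == 10 (1+2+3+4)
    }

    // With a KV cache: each step computes K/V only for the newest token and
    // reuses the cached entries for earlier tokens, so the total is linear.
    static long projectionsWithCache(int seqLen) {
        return seqLen; // one projection per generated token
    }

    public static void main(String[] args) {
        int n = 1024;
        System.out.println("no cache: " + projectionsWithoutCache(n));
        System.out.println("cache:    " + projectionsWithCache(n));
    }
}
```

The cache trades memory for compute, which is exactly why cache-management strategies become the next problem the talk addresses.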
Speaker(s) :
Jayakrishnan Ramakumar
-------------------------------------------------------------------
Networking
Time : 4:00pm - 5:00pm



