Wed, Jun 17 · 5:30 PM EDT
Agenda
5:30 - 6:30pm Networking
6:30 - 7:30pm Presentation
7:30 - 7:45pm Closing
Abstract
So you push a Kubernetes Helm chart from your Visual Studio Code IDE to a Git branch and kick off a Harness pipeline to deploy the changes to a container. You'd then have to go to OCP (or your cloud provider of choice) and open stateful sets and pod logs to see how the deployment is going. What if you could see all of that right where you pushed your code? You could keep coding in your IDE while watching the status of your pods.
This talk presents a practical architecture for exposing operational capabilities through Model Context Protocol (MCP) servers so developers can interact with OpenShift resources directly from their local development environment. Instead of treating the IDE as only a coding surface, we turn it into a controlled entry point for runtime diagnostics such as pod log retrieval, namespace-aware querying, rollout inspection, and incident triage.
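To make the pattern concrete, here is a minimal sketch of the server side of such a setup. Everything in it is an illustrative assumption rather than the talk's actual implementation: the tool names, the command shapes, and the reliance on the `oc` CLI being installed and authenticated are all placeholders for whatever the real MCP server exposes.

```python
# Sketch: an MCP-style tool registry for OpenShift diagnostics.
# Hypothetical tool names and command shapes; requires the `oc` CLI
# and cluster access only when a tool is actually invoked.
import subprocess
from typing import Callable, Dict, List


def build_oc_logs_command(namespace: str, pod: str, tail: int = 100) -> List[str]:
    """Build the `oc logs` invocation for a pod, scoped to a namespace."""
    return ["oc", "logs", pod, "-n", namespace, "--tail", str(tail)]


def build_oc_rollout_command(namespace: str, deployment: str) -> List[str]:
    """Build the `oc rollout status` invocation for a deployment."""
    return ["oc", "rollout", "status", f"deployment/{deployment}", "-n", namespace]


def run(cmd: List[str]) -> str:
    """Execute a CLI command and return its stdout (needs a live cluster)."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout


# Tool registry: an MCP server would expose each entry as a named tool
# that the IDE-side client invokes over the protocol, so log retrieval
# and rollout inspection happen without leaving the editor.
TOOLS: Dict[str, Callable[..., str]] = {
    "pod_logs": lambda ns, pod, tail=100: run(build_oc_logs_command(ns, pod, tail)),
    "rollout_status": lambda ns, dep: run(build_oc_rollout_command(ns, dep)),
}
```

Keeping command construction separate from execution makes the diagnostic surface easy to audit and extend: adding namespace-aware querying or incident-triage helpers is just another entry in the registry.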
I will also post a link to my MCP server, which anybody can use and customize to their needs!
Bio
Vineel Arekapudi works at the intersection of large-scale data engineering, cloud platforms, and applied AI. He currently builds and leads modern data platforms at a major U.S. bank, where he designs lakehouse and streaming architectures that operate at multi-billion-record scale across cloud environments.
His background spans the full evolution of enterprise data systems—from mainframe and Teradata warehouses to cloud-native lakehouses built on Spark, Iceberg, Kafka, and Kubernetes. Over the past decade, his work has focused on building production-grade data platforms, including high-throughput ingestion pipelines, real-time analytics systems, and ML-ready data infrastructure used by data scientists, analysts, and AI teams.
In addition to data engineering, Vineel has deep experience in full-stack platform development using Java, Spring Boot, REST APIs, and modern front-end frameworks. This enables him to design data systems not just as pipelines, but as complete products—with APIs, services, governance layers, and developer tooling.
His current interests include open table formats (Apache Iceberg), lakehouse architecture, metadata-driven governance, and building scalable AI-ready data platforms. He enjoys sharing practical lessons from real production systems—what works, what breaks, and how to design data infrastructure that lasts.