
Optimizing Kafka for Cost-Efficiency: Best Practices and Strategies

Hosted By
Samriddhi B. and Mohamed A.

Details

Event Details:
Date: July 18th 2024
Time: 4:00 PM IST (Virtual Event)

Join us for an insightful and practical session on Optimizing Kafka for Cost-Efficiency: Best Practices and Strategies. This event is a must-attend for anyone looking to optimize their Kafka infrastructure, reduce costs, and improve performance.

Speakers:

  1. Yaniv Ben Hemo, Co-Founder & CEO at Superstream
    Yaniv is an accomplished developer and data engineer with a diverse background in developing high-velocity software-defined storage systems and data platforms. His expertise extends to constructing real-time pipelines capable of handling petabytes of data for some of the world's largest enterprises.
  2. Viktor Somogyi-Vass, Staff Software Engineer (CDF) at Cloudera
    Viktor is a Kafka Committer and staff engineer currently working at Cloudera. He focuses on Kafka core and Cruise Control, and also develops Cloudera's proprietary solutions, CDP Private and Public Cloud, to provide better integration with the Kafka ecosystem.

Agenda

1. Yaniv Ben Hemo – Superstream 4:00 – 4:30 PM

Title: Cutting Costs with Kafka: Strategies for Budget-Friendly Streaming
Abstract:
In this session, we'll explore a variety of techniques and best practices to reduce costs associated with running Apache Kafka. From optimizing resource allocation and scaling efficiently to leveraging cost-effective storage solutions and tuning configurations, attendees will gain actionable insights to make their Kafka deployments more economical without sacrificing performance.
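To give a flavor of the kind of configuration tuning this session covers, here is a minimal sketch of cost-oriented topic-level overrides. The specific values are illustrative examples, not recommendations from the talk:

```properties
# Illustrative topic-level overrides for cost reduction (example values only):
compression.type=zstd        # compress batches on the broker side, cutting disk and network usage
retention.ms=259200000       # retain data for 3 days instead of the 7-day default
segment.bytes=536870912      # 512 MiB segments so retention can delete old data promptly
```

Settings like these can be applied per topic, letting high-volume, low-value topics run with tighter retention than business-critical ones.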

2. Viktor Somogyi-Vass – Cloudera 4:30 – 5:00 PM

Title: Optimizing Streaming Data Management: Integrating Tiered Storage in Apache Kafka with Apache Iceberg
Abstract:
In this session, we will focus on the integration of tiered storage in Apache Kafka to enhance data storage and query capabilities. Building on cost-cutting strategies for Kafka deployments, we will dig into how tiered storage can address scalability and cost concerns by offloading older data to more economical storage solutions, ensuring high performance for active data.
We will demonstrate practical steps and configurations for setting up Kafka with tiered storage and explore how to marry operational and analytical workloads by integrating Apache Kafka with Apache Iceberg. This includes showcasing current solutions for streaming and offloading data into Iceberg using Kafka Connect and Apache Flink, as well as discussing future plans and enhancements by the open-source community to further improve data pipeline integration.
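For context on what such a setup involves, below is a minimal sketch of the tiered-storage settings introduced by KIP-405 (generally available since Apache Kafka 3.6). The RemoteStorageManager plugin is deployment-specific and is shown as a placeholder, not a real class name:

```properties
# Broker side (server.properties): enable the tiered storage subsystem
remote.log.storage.system.enable=true
# The RemoteStorageManager implementation is pluggable; the class name depends on your deployment
remote.log.storage.manager.class.name=<your RemoteStorageManager implementation>

# Topic-level configuration: offload closed segments to the remote tier
remote.storage.enable=true
local.retention.ms=86400000   # keep only 1 day on local broker disks
retention.ms=2592000000       # total retention (30 days) lives mostly in cheaper remote storage
```

The key idea is the split between `local.retention.ms` and `retention.ms`: active data stays on fast local disks, while older segments are served from economical object storage.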

Attendee Benefits

  • Efficient Scaling Strategies: Discover how to scale Kafka deployments efficiently, ensuring optimal performance without unnecessary expenses.
  • Cost Optimization Techniques: Learn practical methods to optimize resource allocation and reduce operational costs.
  • Actionable Best Practices: Take away strategies you can implement in your own Kafka environments to drive budget-friendly streaming solutions.
  • Tiered Storage and Iceberg Integration: Gain insights into best practices for deploying and managing tiered storage in Apache Kafka together with Apache Iceberg, ensuring scalable, cost-effective, and queryable data management.

This session is ideal for data engineers, architects, and anyone looking to optimize their data storage strategies with cutting-edge technologies. Join us to discover how combining tiered storage in Apache Kafka with Apache Iceberg can revolutionize your data management and analytics capabilities, both now and in the future.
Future of Data: Bangalore