How to Simplify Your Streaming Data Architecture with Kafka and VoltDB

The story of Fast Data and how to get what you want
Writing mission-critical applications on top of streaming data requires high throughput, scalability, and event processing without compromising "non-negotiables" such as transactional consistency and resiliency in a distributed computing environment. Kafka is becoming the default mechanism for moving data between layers, as seen in its adoption by Teradata, HPE, and MapR for their respective ingestion layers.
A common challenge is managing the ingestion and processing of data while ensuring transactional consistency and meeting stringent latency SLAs at demanding throughput levels. Several disparate approaches exist for accomplishing this using combinations of open source and proprietary technologies. In this talk, we'll show how a simplified architecture can deliver performance and reliability without the guesswork.
You will learn:
- How to make Kafka imports more actionable
- How to ensure scalable, fully consistent data with synchronous command logging
- How to meet low-latency SLA requirements
- How to guarantee the aforementioned non-negotiables
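As context for the first point: VoltDB can consume Kafka topics directly through its built-in importer, configured in the database's deployment file, so that each arriving message invokes a stored procedure transactionally. A minimal sketch is below; the broker address, topic name, and procedure name are placeholders for illustration, and exact property names may vary by VoltDB version.

```xml
<!-- Sketch of a VoltDB deployment-file import section (placeholder values) -->
<import>
  <configuration type="kafka" enabled="true" format="csv">
    <!-- Kafka broker(s) to connect to -->
    <property name="brokers">kafka-host:9092</property>
    <!-- Topic(s) to subscribe to -->
    <property name="topics">events</property>
    <!-- Stored procedure invoked per message, e.g. a default table insert -->
    <property name="procedure">EVENTS.insert</property>
  </configuration>
</import>
```

Because each message is applied via a stored procedure, ingestion participates in VoltDB's transactional guarantees rather than requiring a separate consumer application.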
Meet your speaker:
Christopher M. Wolff is a software engineer on the SQL team at VoltDB, working on problems related to implementing SQL on a distributed, in-memory database. Prior to this, Chris worked on the SQL implementation of the OpenEdge platform at Progress Software, in addition to working on compiler internals at MathWorks.
--------------
Agenda:
6:30-7PM: Check-in/light dinner
7-8PM: Talk
8-8:30PM: Q&A/networking
