One of the problems companies face as they store and ingest ever more data is figuring out where the data is, where it needs to be, and how to move it from A to B quickly and efficiently. Apache Kafka helps solve these problems by serving as a centralized data pipeline, leveraging Kafka Connect as well as stream processing for data enrichment. My goal is to show the current state of ETL and what it could be with Kafka, and to walk through some examples of how it works.
Our speaker will be Allen Underwood. Allen is a co-host of the Coding Blocks podcast (https://www.codingblocks.net) and a Software Architect by trade. He loves all things data and the challenges that come along with it. He's particularly interested in big data and the difficulty of getting information as close to real time as possible. Those challenges are ultimately what steered him in Kafka's direction, and he'd love to share some of that knowledge with you.