
Understanding Apache Flink

Hosted by Marco Villalobos

Details

Note that this event will be held online, since we currently do not have a meeting location. Afterwards, if anybody is up for meeting for a drink, let me know in advance so that we can plan.

This presentation is the first in a series of three Apache Flink presentations that I (Marco Villalobos) am writing:

  1. Understanding Apache Flink.
  2. Apache Flink Patterns.
  3. Understanding Apache Flink Stateful Functions.

This first presentation introduces Apache Flink and its core concepts. The target audience is software engineers who need an introduction to Apache Flink. Additionally, this presentation offers an opportunity to learn and integrate many different technologies. It offers the following:

  • A complete Apache Flink job that uses the DataStream API and SQL API, writes all incoming data to S3 in Parquet format, and writes aggregated time-series data to InfluxDB.
  • A data generator deployed into a Kubernetes cluster.
  • An Apache Kafka Cluster deployed into a Kubernetes cluster with the Strimzi Kafka Operator.
  • An Apache Flink job deployed into a Kubernetes cluster with the Apache Flink Kubernetes Operator.
  • LocalStack deployed into a Kubernetes cluster to simulate the Amazon Web Services S3 API.
  • InfluxDB time-series database and Telegraf ingestion component deployed into a Kubernetes cluster.
  • Kafka UI deployed into a Kubernetes cluster.
  • Containerized system components built with Minikube.
  • A fully containerized distributed application deployed locally into Kubernetes cluster with Minikube.
  • A real-world example of how to build a containerized Apache Flink job with Gradle.
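To give a flavor of the aggregation at the heart of the job above, here is a minimal plain-Java sketch of a tumbling-window average over time-series readings, the kind of aggregation the Flink job computes before writing to InfluxDB. This is an illustration only, not code from the presentation; the `Reading` record and window size are hypothetical, and Flink's own windowing API is not shown.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class WindowAverage {

    // A single time-series reading: an epoch-millis timestamp and a value.
    record Reading(long timestampMillis, double value) {}

    // Group readings into fixed-size (tumbling) windows and average the
    // values in each window -- conceptually what a Flink tumbling
    // event-time window with an average aggregate does.
    static Map<Long, Double> tumblingAverages(List<Reading> readings, long windowMillis) {
        return readings.stream().collect(Collectors.groupingBy(
                // Window start = timestamp truncated to the window boundary.
                r -> (r.timestampMillis() / windowMillis) * windowMillis,
                TreeMap::new,
                Collectors.averagingDouble(Reading::value)));
    }

    public static void main(String[] args) {
        List<Reading> readings = List.of(
                new Reading(0L, 10.0),
                new Reading(30_000L, 20.0),   // same 60-second window as above
                new Reading(65_000L, 30.0));  // falls into the next window
        System.out.println(tumblingAverages(readings, 60_000L));
        // {0=15.0, 60000=30.0}
    }
}
```

In the actual job, Flink performs this grouping continuously over an unbounded Kafka stream, using watermarks to decide when a window is complete.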
Los Angeles Java User Group