#1 - Integration of Apache Kafka with Apache Spark


Details
This event is all about data streaming and the integration of Apache Kafka with Apache Spark. It also marks the first Toronto Apache Kafka meetup (https://www.meetup.com/Logger/events/228807188/)! It is organized jointly by the Toronto Apache Kafka (https://www.meetup.com/Logger/), Toronto Apache Spark (https://www.meetup.com/Toronto-Apache-Spark/), and Scala Toronto (https://www.meetup.com/scalator/) meetups.
Agenda:
6:30PM to 7:00PM - Opening and networking (refreshments provided)
7:00PM to 7:30PM - Realtime AIR MILES transactions using Kafka and Spark
7:35PM to 8:20PM - Spark and Kafka - Practical Applications
8:30PM to 9:00PM - Networking
------------------------------------------------------------
Title: Realtime AIR MILES transactions using Kafka and Spark
Target audience: Data Scientist, Data Engineer, Data Analyst, Product Owners, Business Owners
Level: Beginner/Intermediate
Speaker: Sansom Lee (https://ca.linkedin.com/in/sansom-lee-65728492)
Sansom is a data enthusiast. Together with a dynamic team at LoyaltyOne, he is on a mission to explore new Big Data technologies and revamp the current stack. Lately, he has become a big fan of the simple yet powerful paradigms of Spark, Akka, and Kafka.
This talk is all about an ongoing journey of moving away from batch processes and entering the new era of streaming for the AIR MILES Rewards Program.
----------------------------------------
Title: Spark and Kafka - Practical Applications
Target audience: Data Engineer, Dev Ops
Level: Intermediate
Speaker: Adam Bellemare (https://ca.linkedin.com/in/adambellemare)
Adam is a Software Developer at Flipp, where he works with Big Data and their related technologies.
I will assume the audience is familiar with Spark Streaming, RDDs, SparkContext, and StreamingContext; I won't dive into how those work, but they will be used throughout the presentation. I will give a basic overview of how Kafka works, its benefits and drawbacks, and how to integrate Kafka consumers and producers with Spark. I will show some sample code, demo a few use cases in real time, and also touch on monitoring and data loss prevention.
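For attendees who want a concrete starting point before the talk, here is a minimal sketch (not taken from the presentation) of wiring a Kafka topic into Spark Streaming with the spark-streaming-kafka 0.8 direct API; the broker address, topic name, and batch interval are illustrative placeholders.

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaSparkSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaSparkSketch")
    val ssc  = new StreamingContext(conf, Seconds(5)) // 5-second micro-batches (assumed)

    // Placeholder broker and topic; replace with your own cluster settings.
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    val topics      = Set("transactions")

    // Direct stream: one RDD partition per Kafka partition; Spark tracks offsets itself.
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // Count distinct message values in each micro-batch and print a sample to the driver log.
    messages.map { case (_, value) => value }
      .countByValue()
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```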
------------------------------------------------------------
Sponsors:
LoyaltyOne (https://www.loyalty.com/) is sharing their office space with us for the event and providing food and beverages.
Organized by: Rodrigo Abreu, Motasem Salem, Sansom Lee, and Sean Glover.
