
Real-time Stream Analytics and Scoring Using Apache Flink, Druid & Cassandra

Hosted By
Konrad S. and Tomasz G.

Details

One of the hardest challenges we are trying to solve is how to deliver customizable insights based on billions of data points in real time, scaling from the perspective of a single individual up to millions of users.
At Deep.BI we track user habits, engagement, and product and content performance, processing terabytes of data (billions of events) daily. Our goal is to provide real-time insights based on custom metrics across a variety of self-defined dimensions. The platform allows us to perform tasks from various domains, such as adjusting websites using real-time analytics, running AI-optimized marketing campaigns, providing a dynamic paywall based on user engagement and AI scoring, and detecting fraud based on data anomalies and adaptive patterns.
To accomplish this, our system collects every user interaction. We use Apache Flink for event enrichment, custom transformations, aggregations, and serving machine learning models. The processed data is then indexed by Apache Druid for real-time analytics and by Apache Cassandra for delivery of the scores. Historical data is also stored on Apache Hadoop for machine learning model building. Using the low-level DataStream API, custom Process Functions, and Broadcast State, we have built an abstract feature engineering framework that provides reusable templates for data transformations. This allowed us to easily define domain-specific features for analytics and machine learning, and to migrate our batch data preprocessing pipeline from Python jobs deployed on Apache Spark to Flink, resulting in a significant performance boost.
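
To make the template idea concrete, here is a minimal sketch of how such a reusable feature template could look with Flink's DataStream API. The Event and FeatureValue types, the field names, and the extract/fold parameters are illustrative assumptions, not Deep.BI's actual framework:

import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

// Illustrative event and feature types (assumptions, not the real schema).
case class Event(userId: String, name: String, value: Double, ts: Long)
case class FeatureValue(userId: String, feature: String, value: Double)

// One template, many features: each feature plugs in its own extractor and fold.
class FeatureTemplate(featureName: String,
                      extract: Event => Double,
                      fold: (Double, Double) => Double)
  extends KeyedProcessFunction[String, Event, FeatureValue] {

  @transient private var acc: ValueState[java.lang.Double] = _

  override def open(parameters: Configuration): Unit =
    acc = getRuntimeContext.getState(
      new ValueStateDescriptor[java.lang.Double](
        s"acc-$featureName", classOf[java.lang.Double]))

  override def processElement(
      event: Event,
      ctx: KeyedProcessFunction[String, Event, FeatureValue]#Context,
      out: Collector[FeatureValue]): Unit = {
    val prev = Option(acc.value()).map(_.doubleValue()).getOrElse(0.0)
    val updated = fold(prev, extract(event))  // accumulate per user (the key)
    acc.update(updated)
    out.collect(FeatureValue(ctx.getCurrentKey, featureName, updated))
  }
}

// Usage: a per-user event counter becomes a one-liner.
// events.keyBy(_.userId).process(new FeatureTemplate("event_count", _ => 1.0, _ + _))
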
This talk covers our challenges with building and maintaining our platform and lessons learned along the way, namely how to:
Evolve a continuous application processing an unbounded data stream,
Provide an API for defining, updating and reusing features for machine learning,
Handle late events and state TTL (see the first sketch after this list),
Serve machine learning models with the lowest latency possible (second sketch below),
Dynamically update the business logic at runtime without a redeploy (third sketch below), and
Automate the data pipeline deployment.
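
On late events and state TTL, Flink supports both natively. A minimal sketch, assuming per-user state that should expire after 30 idle days (the timeout and state name are illustrative):

import org.apache.flink.api.common.state.{StateTtlConfig, ValueStateDescriptor}
import org.apache.flink.api.common.time.Time

val ttlConfig = StateTtlConfig
  .newBuilder(Time.days(30))                                 // drop per-user state after 30 idle days
  .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite) // each write resets the clock
  .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
  .build()

val descriptor = new ValueStateDescriptor[java.lang.Double]("acc-feature", classOf[java.lang.Double])
descriptor.enableTimeToLive(ttlConfig)

// Late events beyond the allowed lateness can be routed to a side output
// instead of being silently dropped:
// val lateTag = OutputTag[Event]("late-events")
// windowed.sideOutputLateData(lateTag)
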
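For low-latency scoring, one common pattern (a sketch under our assumptions, not necessarily the exact Deep.BI setup) is to load the model once per parallel task in open() and score synchronously per event, avoiding a network hop to an external model server. ScoringModel, FeatureVector, and the placeholder weights are hypothetical:

import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration

// Illustrative types; a real job would use its actual feature vector and model format.
case class FeatureVector(userId: String, features: Map[String, Double])
case class Score(userId: String, value: Double)

trait ScoringModel extends Serializable {
  def score(features: Map[String, Double]): Double
}

class ScoreEvents(modelPath: String) extends RichMapFunction[FeatureVector, Score] {
  @transient private var model: ScoringModel = _

  override def open(parameters: Configuration): Unit =
    model = loadModel(modelPath) // deserialize once per task, not once per event

  override def map(vector: FeatureVector): Score =
    Score(vector.userId, model.score(vector.features))

  // Placeholder loader: a real implementation would read serialized weights from modelPath.
  private def loadModel(path: String): ScoringModel = new ScoringModel {
    private val weights = Map("engagement" -> 0.7, "recency" -> 0.3)
    def score(features: Map[String, Double]): Double =
      weights.map { case (k, w) => w * features.getOrElse(k, 0.0) }.sum
  }
}
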
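And for updating business logic at runtime, Flink's Broadcast State mentioned above is the mechanism: rule updates arrive on a second stream and are fanned out to every parallel instance, so no redeploy is needed. A sketch reusing the illustrative Event and FeatureValue types from the first example (RuleUpdate and the threshold rule are hypothetical):

import org.apache.flink.api.common.state.MapStateDescriptor
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction
import org.apache.flink.util.Collector
import scala.collection.JavaConverters._

case class RuleUpdate(ruleId: String, threshold: Double)

object DynamicRules {
  val descriptor = new MapStateDescriptor[String, RuleUpdate](
    "rules",
    TypeInformation.of(classOf[String]),
    TypeInformation.of(classOf[RuleUpdate]))
}

class DynamicScoring
  extends KeyedBroadcastProcessFunction[String, Event, RuleUpdate, FeatureValue] {

  override def processElement(
      event: Event,
      ctx: KeyedBroadcastProcessFunction[String, Event, RuleUpdate, FeatureValue]#ReadOnlyContext,
      out: Collector[FeatureValue]): Unit =
    // Score the event against whatever rules are currently broadcast.
    for (entry <- ctx.getBroadcastState(DynamicRules.descriptor).immutableEntries().asScala)
      if (event.value > entry.getValue.threshold)
        out.collect(FeatureValue(event.userId, entry.getKey, event.value))

  override def processBroadcastElement(
      update: RuleUpdate,
      ctx: KeyedBroadcastProcessFunction[String, Event, RuleUpdate, FeatureValue]#Context,
      out: Collector[FeatureValue]): Unit =
    // A new rule version replaces the old one on every parallel instance.
    ctx.getBroadcastState(DynamicRules.descriptor).put(update.ruleId, update)
}

// Wiring: events.keyBy(_.userId)
//   .connect(rules.broadcast(DynamicRules.descriptor))
//   .process(new DynamicScoring)
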

Speaker Information
Michał Ciesielczyk
BIO: Michał Ciesielczyk is the Head of AI Engineering at Deep.BI. He is responsible for researching, building, and integrating machine learning tools with a variety of technologies, including Scala, Python, Flink, Kafka, Spark, and Cassandra. Previously, he worked as an assistant professor at the Poznan University of Technology, where he received a Ph.D. in computer science and was a member of a research team working on numerous scientific and R&D projects. He has published more than 15 peer-reviewed journal and conference papers in the areas of recommender systems and machine learning.
Company: Deep.BI
Job Position: Head of AI Engineering
Company Website: www.deep.bi
LinkedIn: https://www.linkedin.com/in/michal-c/
Email: michal.ciesielczyk@deep.bi

Sebastian Zontek
BIO: Sebastian Zontek is the CEO, CTO, and co-founder of Deep.BI, a predictive customer data platform with real-time user scoring. He is an experienced IT systems architect with particular emphasis on the production use of open-source big data systems such as Flink, Cassandra, Hadoop, Spark, Kafka, and Druid in BDaaS (Big Data as a Service), SaaS (Software as a Service), and PaaS (Platform as a Service) solutions. Previously, he was CEO and main platform architect at Advertine, an ad network that matched product ads with user preferences, predicting purchasing intent using ML and NLP techniques.
Company: Deep.BI
Job Position: CEO, CTO
Company Website: www.deep.bi
Email: seb@deep.bi
Twitter: @sebastianzontek
Blog: www.deep.bi/blog
LinkedIn: www.linkedin.com/in/sebastianzontek

DataOps Poland
Online event