
Building Realtime AI Applications with Apache Flink

Hosted By
Future of Data and Timothy Spann

Details

Building Real-Time Applications for Credit Card Spending Analysis with APIs
By Matthias Broecheler

Looking to build realtime, data-driven applications at scale? Join us as we explore how to harness Apache Flink's power to process large data volumes efficiently, and expose the results through responsive APIs. In a live demo, we'll construct a credit card transaction analytics microservice, enabling users to monitor their spending and review transaction history in realtime.
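
As a rough sketch of the kind of Flink job behind such a microservice, the PyFlink snippet below reads credit card transactions from a Kafka topic and maintains per-card spending totals over one-hour tumbling windows, the sort of aggregate a spending-monitoring API could serve. The table names, fields, and connector settings are illustrative assumptions, not the actual demo code.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming Table API environment (assumes the Kafka SQL connector jar is on the classpath)
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Hypothetical source: a Kafka topic of credit card transactions
t_env.execute_sql("""
    CREATE TABLE transactions (
        card_id   STRING,
        merchant  STRING,
        amount    DECIMAL(10, 2),
        tx_time   TIMESTAMP(3),
        WATERMARK FOR tx_time AS tx_time - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'transactions',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'json',
        'scan.startup.mode' = 'latest-offset'
    )
""")

# Per-card spending totals over one-hour tumbling windows
spending = t_env.sql_query("""
    SELECT
        card_id,
        TUMBLE_START(tx_time, INTERVAL '1' HOUR) AS window_start,
        SUM(amount) AS total_spent,
        COUNT(*)    AS tx_count
    FROM transactions
    GROUP BY card_id, TUMBLE(tx_time, INTERVAL '1' HOUR)
""")

# Print to stdout here; a real service would write to a sink that an API layer can query
spending.execute().print()
```

The talk goes further and exposes results like these through a responsive API and a ChatBot via DataSQRL; this sketch stops at the aggregation step.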

To enhance customer accessibility, we'll also craft a realtime ChatBot using large language models (LLMs) that interacts with the microservice. We'll introduce you to DataSQRL, a tool that simplifies the creation of realtime data applications with Flink by managing the tedious data integration work. With DataSQRL, ingesting your streaming data, processing it in realtime, and exposing the results through a responsive API or a customer-facing ChatBot becomes a breeze.

Come and discover how the combination of Apache Flink and LLMs can effortlessly transform your data into realtime data products.

Unlocking Financial Data with Real-Time Pipelines
(Flink Analytics on Stocks with SQL)
By Timothy Spann

Financial institutions thrive on accurate and timely data to drive critical decision-making processes, risk assessments, and regulatory compliance. However, managing and processing vast amounts of financial data in real-time can be a daunting task. To overcome this challenge, modern data engineering solutions have emerged, combining powerful technologies like Apache Flink, Apache NiFi, Apache Kafka, and Iceberg to create efficient and reliable real-time data pipelines. In this talk, we will explore how this technology stack can unlock the full potential of financial data, enabling organizations to make data-driven decisions swiftly and with confidence.
Introduction: Financial institutions operate in a fast-paced environment where real-time access to accurate and reliable data is crucial. Traditional batch processing falls short when it comes to handling rapidly changing financial markets and responding to customer demands promptly. In this talk, we will delve into the power of real-time data pipelines, utilizing the strengths of Apache Flink, Apache NiFi, Apache Kafka, and Iceberg, to unlock the potential of financial data. I will be utilizing NiFi 2.0 with Python and Vector Databases.
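
To make the stack concrete, here is a minimal, hypothetical PyFlink sketch of the Flink SQL portion: it reads stock quotes from a Kafka topic and computes per-symbol moving averages over five-minute windows that slide every minute. The topic, schema, and connector options are assumptions for illustration; the full pipeline in the talk also covers NiFi ingestion and Iceberg storage, which are not shown here.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming Table API environment (assumes the Kafka SQL connector jar is available)
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Hypothetical source: a Kafka topic of real-time stock quotes
t_env.execute_sql("""
    CREATE TABLE stock_quotes (
        symbol     STRING,
        price      DOUBLE,
        quote_time TIMESTAMP(3),
        WATERMARK FOR quote_time AS quote_time - INTERVAL '2' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'stocks',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'json',
        'scan.startup.mode' = 'latest-offset'
    )
""")

# Per-symbol moving average, high, and low over 5-minute windows sliding every minute
t_env.sql_query("""
    SELECT
        symbol,
        HOP_END(quote_time, INTERVAL '1' MINUTE, INTERVAL '5' MINUTE) AS window_end,
        AVG(price) AS avg_price,
        MAX(price) AS high,
        MIN(price) AS low
    FROM stock_quotes
    GROUP BY symbol, HOP(quote_time, INTERVAL '1' MINUTE, INTERVAL '5' MINUTE)
""").execute().print()
```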

Speakers

Timothy Spann
Principal Developer Advocate, Cloudera
Tim Spann is a Principal Developer Advocate in Data In Motion for Cloudera. He works with Apache NiFi, Apache Kafka, Apache Pulsar, Apache Flink, Flink SQL, Apache Pinot, Trino, Apache Iceberg, Delta Lake, Apache Spark, big data, IoT, cloud, AI, machine learning, and deep learning. Tim has over ten years of experience with IoT, big data, distributed computing, messaging, streaming technologies, and Java programming. Previously, he was a Developer Advocate at StreamNative, a Principal DataFlow Field Engineer at Cloudera, a Senior Solutions Engineer at Hortonworks, a Senior Solutions Architect at AirisData, a Senior Field Engineer at Pivotal, and a Team Leader at HPE. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton & NYC on big data, cloud, IoT, deep learning, streaming, NiFi, blockchain, and Spark. Tim is a frequent speaker at conferences such as ApacheCon, DeveloperWeek, Pulsar Summit, and many more. He holds a BS and MS in computer science.
https://twitter.com/PaaSDev
https://www.linkedin.com/in/timothyspann/
https://medium.com/@tspann
https://github.com/tspannhw/FLiPStackWeekly/

Matthias Broecheler
Founder
DataSQRL
Matthias is the founder of DataSQRL, which enables organizations to build data products at scale. He invented the JanusGraph database (formerly TitanDB) and is an author of O’Reilly’s "Practitioner’s Guide to Graph Data." He is a founder of the database company Aurelius, which was acquired by DataStax in 2015. Matthias holds a PhD in database systems and machine learning from the University of Maryland, where he developed the PSL machine learning framework.
https://twitter.com/MBroecheler
https://www.linkedin.com/in/matthiasbroecheler/
https://matthiasb.com/
https://www.datasqrl.com/

Future of Data: New York