
Stream Analytics with SQL on Apache Flink

Hosted By
David Sabater D. and 2 others

Details

Hello all squirrels,

I hope you liked our last meetup! This will be a busy month: we have another meetup coming up on the 23rd. This time, our speaker needs no introduction. We are excited to have with us the co-founder of Data Artisans, Dr Fabian Hueske.
Here is the abstract for his presentation:

Abstract: SQL is undoubtedly the most widely used language for data analytics, for many good reasons: it is declarative, many database systems and query processors feature advanced query optimizers and highly efficient execution engines, and, last but not least, it is the standard that everybody knows and uses. With stream processing technology becoming mainstream, a question arises: “Why isn’t SQL widely supported by open source stream processors?” One answer is that SQL’s semantics and syntax were not designed with the characteristics of streaming data in mind. Consequently, systems that want to support SQL on data streams have to overcome a conceptual gap.
Apache Flink is a distributed stream processing system with superior support for streaming analytics. Flink features two relational APIs: the Table API, a language-integrated relational API for Java and Scala, and SQL, whose implementation follows the SQL standard. Both APIs are compatible and share the same optimization and execution path, based on Apache Calcite. Moreover, they feature unified semantics for stream and batch processing, i.e., a SQL or Table API query produces the same result regardless of whether its input is read from a batch dataset or from a data stream.

In this talk we present the future of Apache Flink’s relational APIs, discuss their conceptual model, and showcase their usage. The central concept of these APIs is the dynamic table. Dynamic tables can be defined on data streams and queried with regular SQL queries, producing new dynamic tables. We discuss the semantics of querying dynamic tables and explain how the unified behavior for batch and stream inputs is achieved. Dynamic tables can be converted back into streams or written as materialized views to external systems, such as Apache Kafka or Apache Cassandra, to serve low-latency applications or dashboards. We conclude by outlining how common stream analytics use cases can be realized, highlighting the power and expressiveness of Flink’s relational APIs.
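To give a feel for the dynamic-table idea described above, here is a minimal sketch in plain Python (not Flink code; the function and record names are hypothetical). It models a continuous query equivalent to `SELECT user, COUNT(*) ... GROUP BY user`: each record arriving on the stream updates a materialized aggregate, and every update yields a new version of the result table.

```python
# Illustrative sketch of dynamic-table semantics (not actual Flink APIs).
# A continuous grouped count: the result "table" is re-emitted after
# every input record, mirroring how a dynamic table evolves over time.
from collections import defaultdict

def continuous_count(stream):
    """Apply a hypothetical 'SELECT user, COUNT(*) GROUP BY user'
    query to a stream of (user, event) records, yielding the updated
    result table after each record."""
    counts = defaultdict(int)
    for user, _event in stream:
        counts[user] += 1       # the dynamic table changes with each record
        yield dict(counts)      # emit the current materialized view

stream = [("alice", "click"), ("bob", "click"), ("alice", "view")]
results = list(continuous_count(stream))
print(results[-1])  # final view: {'alice': 2, 'bob': 1}
```

In a real Flink job, the same intent is expressed as a SQL or Table API query, and the stream of view updates is what gets converted back into a stream or written out as a materialized view.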

Note: Our current venue only holds about 20-25 people, so RSVPs will be limited. We are looking for a bigger venue; if we find one, we will increase the RSVP limit. If anyone in the community knows of a place that can host 30-50 people for this meetup, please let us know.

Apache Flink London Meetup