What we're about

New York Apache Flink Meetup is for developers interested in and using the open source framework Apache Flink for distributed stream and batch data processing. This meetup group is a place for you to learn about Flink, its capabilities, and its use cases, talk to other developers, and share your experiences!

What type of events do we host?

We organize community events featuring use case talks & demos by Flink users and sessions with Flink contributors, committers, and PMC members.

We’re always on the lookout for interesting talk ideas, so please feel free to submit yours:

📝 Check out the latest documentation on how to get started with Flink:

📢 Follow @ApacheFlink on Twitter for news & updates:

🐿️ Explore Flink's ecosystem of connectors, APIs, extensions, tools, and integrations:

💻 Learn how to contribute to Flink:


We're committed to providing a friendly, safe, and welcoming environment for all attendees, regardless of gender, sexual orientation, ability, ethnicity, socioeconomic status, and religion (or lack thereof). We ask all attendees to help create a safe and positive experience for everyone. Please contact the meetup organizers if you witness or experience unacceptable behaviour.

* Apache Flink, Flink®, Apache®, the squirrel logo, and the Apache feather logo are either registered trademarks or trademarks of The Apache Software Foundation.

Upcoming events (1)

Machine Learning Inference in Flink with ONNX

Online event

Join us for a meetup on how to run inference on a machine learning model in Apache Flink with ONNX and get the chance to connect with the Flink community online!

12 pm - 12:40 pm "Machine Learning Inference in Flink with ONNX" by Colin Jermain
12:40 pm - 1 pm Q&A

Speaker: Colin Jermain is a Data Science Team Lead at Vectra AI in Boston, where he manages a team of data scientists building advanced algorithms and machine learning models for cyberattack detection. Colin is an avid open-source developer and has contributed to keras2onnx and PyTorch to improve the ONNX ecosystem. He holds a PhD in Physics from Cornell University and has been an active Python developer for over 14 years. He enjoys working across technical disciplines, from DevOps in the cloud to exploring new AI techniques and running them in production settings.

Abstract: What is the best way to run inference on a Machine Learning (ML) model in your streaming application? We will unpack this question and explore ways to leverage ML models in Flink. Starting from a PyTorch model in a Python training environment, we will use ONNX to serve the model through a Scala Flink application. By using the Java version of the onnxruntime library, we will show that the computation can be done in the same TaskManager and that the model can be packaged in the same JAR as a resource. This was inspired by a previous Flink Forward talk on ONNX that leveraged Py4J to run inference from Python. Our technique simplifies deployment and management for production inference by using Scala directly and packaging the model with the code. We will dive into the pros and cons of this technique and examine other methods for harnessing the power of machine learning in Flink.
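As a rough illustration of the approach the abstract describes (loading an ONNX model packaged as a JAR resource and scoring it with the Java onnxruntime library inside a Flink operator), here is a minimal Scala sketch. The class name, the resource path `/model.onnx`, and the model input name `"input"` are illustrative assumptions, not the speaker's actual code.

```scala
import ai.onnxruntime.{OnnxTensor, OrtEnvironment, OrtSession}
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration

// Sketch: run ONNX inference inside a Flink TaskManager. The model bytes are
// read from the job JAR itself (e.g. src/main/resources/model.onnx), so the
// model ships with the code and no external model server is needed.
class OnnxScoringFunction extends RichMapFunction[Array[Float], Float] {

  @transient private var ortEnv: OrtEnvironment = _
  @transient private var session: OrtSession = _

  override def open(parameters: Configuration): Unit = {
    // Load the model packaged as a classpath resource (path is an assumption).
    val stream = getClass.getResourceAsStream("/model.onnx")
    val modelBytes =
      try stream.readAllBytes()
      finally stream.close()

    ortEnv = OrtEnvironment.getEnvironment
    session = ortEnv.createSession(modelBytes, new OrtSession.SessionOptions())
  }

  override def map(features: Array[Float]): Float = {
    // Wrap the features as a 1 x N tensor; "input" must match the input name
    // declared in the exported ONNX graph (assumed here).
    val tensor = OnnxTensor.createTensor(ortEnv, Array(features))
    val result =
      session.run(java.util.Collections.singletonMap("input", tensor))
    try result.get(0).getValue.asInstanceOf[Array[Array[Float]]](0)(0)
    finally {
      result.close()
      tensor.close()
    }
  }

  override def close(): Unit =
    if (session != null) session.close()
}
```

In a job, this function would be applied with `stream.map(new OnnxScoringFunction)`, so each TaskManager slot holds its own session and inference runs in-process, which is the deployment simplification the talk highlights.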

Past events (4)

Photos (10)

Related topics