
Back-to-School Meetup @ Finaxys

Hosted By
Hervé R. and Florian H.

Details

Fill up on Kafka this September: a talk on designing disaster recovery plans with Kafka, plus a second talk on machine and deep learning with the Kafka Streams API.

A big thank you to Finaxys (https://www.finaxys.com/), who are hosting this meetup and sponsoring the buffet!

Agenda:

6:30pm: Doors open
6:30pm - 6:45pm: Networking, pizza, and refreshments
6:45pm - 7:15pm: Presentation #1: How to design a Disaster Recovery Plan for Big Data Architectures, Mehdi BEN AISSA, Finaxys
7:15pm - 7:45pm: Presentation #2: Highly Scalable Machine Learning and Deep Learning in Real Time with Apache Kafka’s Streams API, Kai Waehner, Confluent
7:45pm - 8:15pm: Q&A and networking

First talk:

How to design a Disaster Recovery Plan for Big Data Architectures? - Mehdi BEN AISSA, Finaxys

Across the main Big Data architectures (batch processing, real-time processing, Lambda, Kappa ..), we first propose different Disaster Recovery Plan solutions depending on the SLA (Service-Level Agreement): RPO (Recovery Point Objective), RTO (Recovery Time Objective)..
We then focus on stream processing and the existing Kafka solutions for Disaster Recovery (MirrorMaker, Kafka Connect Replicator, GeoCluster ..): their advantages, their drawbacks, and the impact of each choice on the overall architecture.
Finally, we explain in detail how to configure and deploy each Disaster Recovery solution (rack awareness, replication, replication factor, min.insync.replicas …) and how to integrate each layer (storage layer, processing layer ..) into the chosen architecture.
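As a taste of the configuration topics the talk covers, here is a minimal sketch of the broker-side settings involved; the values (rack names, factors) are illustrative assumptions, not recommendations from the speaker:

```properties
# server.properties (per broker): tag the broker with its rack / availability
# zone so Kafka spreads a partition's replicas across racks (rack awareness).
broker.rack=eu-west-1a

# Cluster-wide durability defaults (example values only).
default.replication.factor=3
min.insync.replicas=2
```

Producers that need the durability these settings promise would typically pair them with `acks=all` on the client side.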

Bio: Mehdi is a Big Data Solutions Architect at Finaxys.

Second talk:

Highly Scalable Machine Learning and Deep Learning in Real Time with Apache Kafka’s Streams API - Kai Waehner, Confluent

Intelligent real-time applications are a game changer in any industry. This session explains how companies from different industries build intelligent real-time applications. The first part explains how to build analytic models with R, Python, or Scala, leveraging open-source machine learning / deep learning frameworks like TensorFlow or H2O. The second part discusses deploying these models to your own applications or microservices by leveraging the Apache Kafka cluster and Kafka’s Streams API instead of setting up a new, complex stream-processing cluster. The session focuses on live demos and shares lessons learned for executing analytic models in a highly scalable, mission-critical, and performant way.

Bio: Kai Waehner works as Technology Evangelist at Confluent. Kai’s main areas of expertise are Big Data Analytics, Machine Learning, Integration, Microservices, Internet of Things, Stream Processing, and Blockchain. He is a regular speaker at international conferences such as JavaOne, O’Reilly Software Architecture, and ApacheCon, writes articles for professional journals, and shares his experiences with new technologies on his blog (www.kai-waehner.de/blog). Contact and references: kontakt@kai-waehner.de / @KaiWaehner / http://www.kai-waehner.de

------

Don't forget to join the Confluent Community Slack (https://slackpass.io/confluentcommunity)!

Would you like to host the group or propose a talk for an upcoming meetup? Feel free to contact us via Meetup messaging or by email at community@confluent.io

Paris Apache Kafka® Meetup