Join us for our next Elastic meetup, co-hosted with the Kafka Bay Area User Group. Thanks to Lyft for providing the venue!
The agenda for the evening is:
6:00pm: Doors open
6:00pm - 6:30pm: Pizza, Drinks and Networking
6:30pm - 7:15pm: Streaming DynamoDB changelogs to Elasticsearch using Apache Kafka and Flink by Ying Xu and Dan Fan
7:15pm - 7:45pm: Integrating Kafka into your Elasticsearch use case by Andrew Selden
7:45pm - 8:15pm: Microservices Integration Patterns with Kafka by Kasun Indrasiri
8:15pm: Additional Q&A and Networking
Microservices Integration Patterns with Kafka
Microservice composition, or integration, is probably the hardest part of a microservices architecture. Unlike conventional centralized ESB-based integration, we need to apply the "smart endpoints and dumb pipes" principle when integrating microservices.
There are two main microservices integration patterns: service orchestration (active integration) and service choreography (reactive integration).
In this talk, we will explore microservice orchestration, microservice choreography, event sourcing, CQRS, and how Kafka can be leveraged to implement microservice composition.
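As a taste of two of the patterns the talk covers, here is a minimal, in-memory Python sketch of event sourcing with a CQRS-style read model. It is illustrative only: the class names are invented for this example, and in the talk's setting the append-only event log would live in a Kafka topic rather than a Python list.

```python
from collections import defaultdict

class EventLog:
    """Append-only event store: the write side's source of truth."""
    def __init__(self):
        self.events = []
        self.subscribers = []

    def append(self, event):
        self.events.append(event)
        for handler in self.subscribers:  # fan out to read models
            handler(event)

class AccountBalances:
    """A read model (the query side), derived purely from events."""
    def __init__(self, log):
        self.balances = defaultdict(int)
        log.subscribers.append(self.apply)

    def apply(self, event):
        if event["type"] == "deposited":
            self.balances[event["account"]] += event["amount"]
        elif event["type"] == "withdrawn":
            self.balances[event["account"]] -= event["amount"]

log = EventLog()
view = AccountBalances(log)
log.append({"type": "deposited", "account": "a1", "amount": 100})
log.append({"type": "withdrawn", "account": "a1", "amount": 30})
print(view.balances["a1"])  # 70
```

The key property this sketch shows: the read model never mutates shared state directly; it can always be rebuilt by replaying the event log, which is exactly the role a Kafka topic plays in the reactive integrations discussed in the talk.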
Kasun Indrasiri is the Director of Integration Architecture at WSO2 and an architect with over nine years of experience in enterprise integration and microservices. He is an author and an evangelist on microservices architecture, having written the books ‘Microservices for Enterprise’ (Apress, 2018 Q4) and ‘Beginning WSO2 ESB’ (Apress, 2017).
He was also an architect and the product lead of WSO2 ESB, and is an Apache committer.
Integrating Kafka into your Elasticsearch use case
Andrew will provide an introduction to how users are integrating Kafka into their Elasticsearch use cases, including best practices, architecture overviews, and more.
Andrew is a Senior Solutions Architect at Elastic.
Streaming DynamoDB changelogs to Elasticsearch using Apache Kafka and Flink
In this talk, we will present the architecture of Lyft’s changelog data ingestion pipeline, which allows for real-time ingestion of DynamoDB changelogs into Elasticsearch. Our system uses Apache Kafka as the core pub-sub component, storing all the changelog data. Apache Flink jobs are employed as connectors linking data source(s) and destination(s). By virtue of state-of-the-art streaming technology, the whole data pipeline achieves low latency, strong message durability, and ordering guarantees, with scalability and extensibility built into the design.
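One idea behind the pipeline's ordering guarantee can be sketched without any Kafka or Flink dependency: Kafka orders messages only within a partition, so routing every change for a given item to the same partition (by keying on the primary key) preserves per-item ordering even when partitions are consumed independently. The Python below is a hypothetical illustration of that keying scheme, not Lyft's implementation; `partition_for` is an invented stand-in for Kafka's key-to-partition hashing.

```python
NUM_PARTITIONS = 4

def partition_for(key, num_partitions=NUM_PARTITIONS):
    # Stand-in for Kafka's key -> partition hashing.
    return hash(key) % num_partitions

# Changelog records as (primary_key, version) in commit order.
changelog = [
    ("user:1", "v1"), ("user:2", "v1"),
    ("user:1", "v2"), ("user:2", "v2"),
]

# Route each record to a partition based on its primary key.
partitions = {p: [] for p in range(NUM_PARTITIONS)}
for key, value in changelog:
    partitions[partition_for(key)].append((key, value))

# Reassemble each key's history partition by partition: because all of a
# key's records land in one partition, its versions stay in order.
history = {}
for records in partitions.values():
    for key, value in records:
        history.setdefault(key, []).append(value)

print(history["user:1"])  # ['v1', 'v2']
```

Cross-key ordering is not preserved across partitions; the pipeline only needs per-item ordering for each DynamoDB key, which is what this scheme provides.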