War of the Distributed Loggers and Graph Analytics with Neo4j

This is a past event

28 people went

Details

Join us this evening to hear about Distributed Loggers and graph analytics with Apache Kafka and Neo4j. Following is a brief agenda for the evening:

6:30 PM - 7:00 PM - Doors open and pizza
7:00 PM - 7:45 PM - Talk 1: War of the Distributed Loggers
7:50 PM - 8:30 PM - Talk 2: Event-driven Graph Analytics using Neo4j and Apache Kafka
8:30 PM - Event Ends

-----------------------------
Co-Hosted By:
-----------------------------

https://www.meetup.com/Apache-Kafka-London

----------------------------
Talk 1: "War of the Distributed Loggers" with Sasha Gerrand from Funding Circle (https://www.fundingcircle.com)
----------------------------

The world of open source distributed log implementations has exploded recently, with Apache Kafka leading the fray. However, a number of other implementations have seen marked uptake, such as LogDevice and Apache Pulsar. This talk will compare the competing technologies, identifying their similarities and key differences and suggesting the best use cases for each.

Sasha is an experienced technologist, software developer, author, public speaker and senior technology leader. He is the Director of Engineering Effectiveness at Funding Circle and an individual contributor.

----------------------------
Talk 2: "Event-driven Graph Analytics using Neo4j and Apache Kafka" with Ljubica Lazarevic from Neo4j (https://neo4j.com)
----------------------------

Commonly we want to derive insight from analytical processing of our operational data. For example, we may want to leverage the connectedness of customers, products and their networks to identify recommendation opportunities. However, running analytical workloads on an operational database is seldom a good idea, so there will usually be separate databases for each task. We may also want to stream insights as and when they become available.

This in itself brings new challenges: how do we keep the data on both database instances in sync? How do we stream results back to our transactional database as and when our analysis generates them?

In this talk we will describe a scenario where graph databases, deployed as a cluster with read replicas, serve both operational needs and analytical work. We will show how this architectural pattern can be combined with Kafka to stream analytical results back to the operational databases as soon as they are available, whilst ensuring all of the databases stay up to date with the same data. The example uses the newly released Apache Kafka plugin for Neo4j.
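As a rough sketch of the sink side of this pattern, the Neo4j Kafka plugin (neo4j-streams) can be configured in neo4j.conf to consume messages from a Kafka topic and apply them to the operational graph with a Cypher template. The topic name and event fields below are illustrative assumptions, not details from the talk:

```properties
# neo4j.conf -- sink configuration for the neo4j-streams plugin
# (topic "recommendations" and the event fields are hypothetical examples)
kafka.bootstrap.servers=localhost:9092
streams.sink.enabled=true

# Each message arriving on the "recommendations" topic is bound to `event`
# and merged into the operational graph via this Cypher template
streams.sink.topic.cypher.recommendations=MERGE (c:Customer {id: event.customerId}) MERGE (p:Product {id: event.productId}) MERGE (c)-[:RECOMMENDED]->(p)
```

With a configuration along these lines, analytical results published to the topic flow into the transactional graph as soon as they are produced, which is the streaming feedback loop the talk describes.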

Ljubica is part of Neo4j's field team, based in London. She has a varied background covering development, project management and architecture, across a diverse range of industries from ecology to finance. Ljubica is a data geek with a particular interest in data lineage and associated areas.