It gives us great pleasure to announce our July Meetup at Bank of Ireland @boistartups, where we have an awesome line-up: John Gorman on Apache Flink and "Fast Data!", followed in the second slot by Vincent De Stoecklin on Modern Industrialised Data Science Workflows. As you can see, this Meetup digs deeper into data processing and into use cases for good data science architectures.
Please note there will be a post-event party at the Marker Hotel, for the launch of the Travel meets Big Data conference in November.
Our full agenda is as follows:
Apache Flink... Don’t Cross The Streams! by John Gorman, Senior Data Consultant with Eberon.
Along with the arrival of Big Data, a parallel, less well known but significant change to the way we process data has occurred: data is getting faster! Business models are changing radically based on the ability to be first to know an insight and act on it, whether to keep the customer, prevent the breakdown or save the patient. In essence, knowing something now is overriding knowing everything later. Stream processing engines allow us to blend event streams from different internal and external sources to gain insights in real time. This talk will discuss the need for streaming, the business models it can change, the new applications it allows and why Apache Flink enables these applications. Apache Flink is a top-level Apache project for real-time stream processing at scale. It is a high-throughput, low-latency, fault-tolerant, distributed, state-based stream processing engine. Flink offers polyglot APIs (Scala, Python, Java) for manipulating streams, a Complex Event Processor for monitoring and alerting on those streams, and integration points with other big data ecosystem tooling.
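To give a flavour of the kind of computation a stream processing engine performs, here is a tiny conceptual sketch in plain Python of a tumbling-window count over a stream of keyed events. This is an illustration of the idea only, not Flink code; in Flink itself you would express the same thing with the DataStream API's keyBy/window/aggregate operators, and the engine would handle distribution, state and fault tolerance for you.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Group (timestamp, key) events into fixed-size time windows
    and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for timestamp, key in events:
        # Each event falls into exactly one non-overlapping window
        window_start = (timestamp // window_size) * window_size
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Illustrative events arriving over 10 seconds, counted in 5-second windows
events = [(0, "sensor-a"), (1, "sensor-b"), (4, "sensor-a"),
          (6, "sensor-b"), (9, "sensor-b")]
print(tumbling_window_counts(events, 5))
# {0: {'sensor-a': 2, 'sensor-b': 1}, 5: {'sensor-b': 2}}
```

The point of an engine like Flink is that this per-key, per-window state is maintained continuously over unbounded streams, across many machines, with exactly-once guarantees, rather than over a finished list in memory.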
Modern Industrialised Data Science Workflows by Vincent De Stoecklin of Dataiku
The talk will focus on standard production architectures for data products, and give insight into best practices for efficiently articulating a design environment (a data lab for prototyping use cases) with a production environment (where workflows are run and monitored). We will take examples from Dataiku clients to show the different types of architectures and how they allow companies to address different types of use cases:
Using a real-time API to deploy machine learning models for real-time prediction - an example of dynamic pricing with AramisAuto
Deploying and monitoring a (batch) industrialised data product to identify monthly churners - an example from Coyote
Hybrid batch + real-time scoring architectures - an example of fraud detection in healthcare
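The batch and real-time paths above can share one scoring function, which is the core idea behind a hybrid architecture. The sketch below is purely illustrative (the model, field names and thresholds are invented for this example and have nothing to do with Dataiku's or its clients' actual systems): the same toy churn model scores a whole table in batch mode and a single incoming record in real time.

```python
def churn_score(record):
    """Toy model: risk rises with support tickets and days of inactivity.
    (Illustrative only; a real deployment would load a trained model.)"""
    return min(1.0, 0.1 * record["tickets"] + 0.02 * record["days_inactive"])

def batch_score(records, threshold=0.5):
    """Batch path: score every record, e.g. a monthly churn run."""
    return [r["id"] for r in records if churn_score(r) >= threshold]

def realtime_score(record, threshold=0.5):
    """Real-time path: score one record on request, e.g. a fraud check."""
    score = churn_score(record)
    return {"id": record["id"], "score": score, "flagged": score >= threshold}

customers = [
    {"id": "c1", "tickets": 6, "days_inactive": 10},  # high risk
    {"id": "c2", "tickets": 0, "days_inactive": 5},   # low risk
]
print(batch_score(customers))  # ['c1']
```

Keeping one scoring function behind both paths is what makes the hybrid approach attractive: the design environment produces the model once, and the production environment exposes it both as a scheduled batch job and as a low-latency API.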
As always, the event hashtag is #HUGIreland, so do include us if you are tweeting about it! Please note that after July 11th we will be taking a break for our collective vacations in August, but we plan to return with a special event sponsored by Oracle, so stay tuned for developments! Looking forward to seeing you on July 11th, so do RSVP today... chat to ya then!!