Meetup #12: Detect problematic customers | LSTM cells


Details

Another meetup, another chance to learn and connect: Sören will talk about a project to predict bad customers. And Thomas' talk is all about long short-term memory networks.

Let's be optimistic and call this an early spring meetup.

The host this time is Westphalia DataLab. Ladies and Gentlemen, saddle your bikes and aim north to meet for a drink or two plus some exquisite data science input.

----------------------
"How to detect problematic web customers"
Sören Erdweg

Sören studied Physics at RWTH Aachen and did his PhD on the CMS experiment at CERN. He founded the data science team at flaschenpost SE and is now the Head of Data Analytics there.

Abstract: Delivering goods to customers who are unable or unwilling to pay can be very costly. It is therefore important to detect, or even predict, such cases as early in the ordering process as possible. For known problematic customers this is quite an easy task, but what about new customers about whom we have no previous knowledge? This is where machine learning techniques come into play: they predict a probability of default for each new customer and reduce the manual vetting effort. We will show the different approaches we tried and the results we got when implementing them.
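To give a flavour of what "predicting a probability of default" means in practice, here is a minimal, purely illustrative sketch of a logistic scoring model in plain Python. The feature names, weights, and threshold are invented for the example and are not flaschenpost's actual model:

```python
import math

def default_probability(features, weights, bias):
    """Logistic model: map a weighted sum of customer features
    to a probability of default in (0, 1)."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features for a brand-new customer with no order history.
weights = {"basket_value": 0.002, "night_order": 0.8, "address_risk": 1.5}
customer = {"basket_value": 120.0, "night_order": 1.0, "address_risk": 0.3}

p = default_probability(customer, weights, bias=-3.0)
if p > 0.5:  # illustrative threshold for routing to manual vetting
    print("flag for manual vetting")
```

In a real system the weights would be learned from historical orders rather than set by hand, but the shape of the decision (score every new customer, vet only the risky ones) is the same.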

----------------------
"LSTM Cells - Deep Learning for Sequential Data"
Thomas Klein

Thomas studies Cognitive Science at the University of Osnabrück and completed his Bachelor's degree this year with a thesis on memory in Recurrent Neural Networks.

Abstract: Long Short-Term Memory Cells (LSTMs) have proven to be one of the most successful neural models for dealing with sequential data, powering inventions such as Siri, Alexa and Google Translate. Thomas will outline the scenario in which LSTMs can be applied, briefly cover the underlying principles of artificial neural networks in general and take a deep dive into the inner workings of LSTMs. These cells can seem very complex at first, but if deconstructed properly, their anatomy can feel sensible and even intuitive. This talk assumes some familiarity with the concepts of ANNs, but avoids unnecessary formalisms and focusses on intuitions and visualisations instead. If you know what a vector is, you can understand this talk :)
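For anyone who wants a concrete preview of those inner workings, the standard LSTM gate equations can be written out in a few lines. This is a single time step with scalar state and made-up weights, purely to show the anatomy the talk will deconstruct (forget, input, and output gates plus the cell state):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM time step with scalar state, following the standard
    gate equations. The weights w are illustrative, not trained."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])          # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])          # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])          # output gate
    c_tilde = math.tanh(w["wc"] * x + w["uc"] * h_prev + w["bc"])  # candidate
    c = f * c_prev + i * c_tilde   # new cell state: keep some memory, write some
    h = o * math.tanh(c)           # new hidden state, gated by the output gate
    return h, c

# Feed a short sequence through the cell with arbitrary fixed weights.
w = {k: 0.5 for k in ["wf", "uf", "bf", "wi", "ui", "bi",
                      "wo", "uo", "bo", "wc", "uc", "bc"]}
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.2]:
    h, c = lstm_step(x, h, c, w)
```

In real networks x, h, and c are vectors and the products become matrix multiplications, but the gating logic is exactly this.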

See you soon,
Tobias²