We were happy to see you at the last event, and we hope you enjoyed it ;) Here we go again, and this time we have:
18:30 Doors open / socialising
19:00 Talks start
# snacks & beer reserved
SPOTLIGHT TALK: MAKING OF A REAL-WORLD MONEYBALL APPLICATION
This evening we have as a guest our friend Jo from H2O.ai, with a short story and a GIFT for us (he didn't tell us what it is, we cannot wait to see :). Jo is a Data Science Evangelist and Community Manager at H2O.ai.
(UPDATE) TALK#1: HOW IT REALLY WORKS! ADVANCED STREAMING AND EDGE ANALYTICS METHODS IN INDUSTRY 4.0
This evening we will also have as our guest Nicole Tschauder, who works as a Manufacturing Analytics Expert at SAS. She focuses on the use of ML in production, logistics and other IoT scenarios. By background, Nicole is a mathematician.
In order to transform gigantic IoT data flows into usable insights, two things are key: the right technology to obtain sensor data, and working analytical methods to analyze the data either at the edge or in-stream. During this presentation, Nicole will give an introduction to new methods that are specifically tailored to the analysis of sensor data. You'll learn how these differ from classical analytical methods and how to apply them in areas like predictive maintenance, anomaly detection and signal processing.
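To give a flavour of what "in-stream" means here: unlike classical batch analytics, a streaming method sees each reading once and keeps only constant memory. A minimal sketch (our own illustration, not from the talk) flags anomalous sensor readings with a rolling z-score; all names and parameters are illustrative:

```python
from collections import deque
import math

def stream_anomalies(readings, window=50, threshold=3.0):
    """Flag readings that deviate strongly from a rolling window.

    In-stream constraints: one pass over the data, fixed memory
    (a single window), no access to the full series.
    """
    buf = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(readings):
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > threshold:
                anomalies.append((i, x))
        buf.append(x)
    return anomalies

# Simulated steady sensor stream with one injected fault.
stream = [20.0 + 0.1 * ((i * 7) % 5) for i in range(200)]
stream[120] = 35.0  # fault
print(stream_anomalies(stream))  # -> [(120, 35.0)]
```

Real edge deployments would use incremental statistics (e.g. Welford's algorithm) instead of recomputing the window mean, but the constant-memory, single-pass shape is the point.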
TALK#2: DECODING THE BLACK BOX
And finally we will have with us Dr. Shirin Glander, whom we have been trying to invite for a long time. Shirin lives in Münster and works as a Data Scientist at codecentric, and she has lots of practical experience. Besides crunching data, she trains her creativity by sketching information. Visit her blog and you will find lots of interesting stuff there, like experiments with Keras, TensorFlow, LIME and caret, lots of R, and also her beautiful sketches. We recommend: www.shirin-glander.de Besides all that, she is an organiser of the MünsteR R User Group: www.meetup.com/Munster-R-Users-Group
Traditional ML workflows focus heavily on model training and optimization; the best model is usually chosen via performance measures like accuracy or error, and we tend to assume a model is good enough for deployment once it passes certain thresholds on these criteria. Why a model makes the predictions it makes, however, is generally neglected. Yet being able to understand and interpret such models can be immensely important for improving model quality, increasing trust and transparency, and reducing bias. Because complex ML models are essentially black boxes, too complicated to understand directly, we need local approximations, such as LIME.
SUPPORTERS: Thanks to our supporters SAS and STATWORX and our kind hosts Frankfurt School of Finance & Management. BIG THANKS!
As always LIVESTREAM:
To stay in touch, join our LinkedIn Group
Yours / FFMDataScience Team