Hello Pythonist friends!
It's been a long time since we've seen each other! Are you ready for the Autumn edition of budapest.py? This time our event is sponsored by EPAM.
18:45 - Welcome
18:55 - Ritu Saiwal: Data Journey at SSP
19:20 - Lóránt Zsarnowszky: Would you survive the Titanic? (machine learning from disaster)
20:00 - Break
20:20 - Bertalan Rónai: Combine Python with Power BI!
20:45 - Manuel Schipper: Import Kafka! A Pythonic Introduction To Apache Kafka
• • • Ritu Saiwal: Data Journey at SSP • • •
Ritu Saiwal is a data enthusiast by day and a home chef by night. She is currently working at SSP, after two years at Tata Consultancy Services, an IT services company in India.
"Walk with me on a tour of how data travels at a purely data-driven start-up, where data is the bread and butter.
See how we clean, twist, rinse, filter, store and abuse data so that it's useful for prediction."
• • • Lóránt Zsarnowszky: Would you survive the Titanic? (machine learning from disaster) • • •
Lóránt Zsarnowszky is a data scientist who moved into the field from risk management, with a strong banking and insurance background.
A brief, basic introduction to the data science pipeline and machine learning modelling, using Python tools and libraries applied to the popular Titanic dataset.
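To give a flavour of the pipeline steps the talk covers (load, clean, predict, evaluate), here is a toy, standard-library-only sketch. The column names, the tiny inline dataset, and the "women and children first" baseline rule are illustrative assumptions, not the speaker's actual code or model:

```python
# Toy sketch of a Titanic-style pipeline: load -> clean -> predict -> score.
# Data and the baseline rule are illustrative, not from the talk.
import csv
from io import StringIO

RAW = """PassengerId,Sex,Age,Survived
1,male,22,0
2,female,38,1
3,female,26,1
4,male,,0
"""

def load(text):
    """Parse CSV, coerce Age to float, and drop rows with missing Age."""
    rows = []
    for row in csv.DictReader(StringIO(text)):
        if row["Age"]:
            row["Age"] = float(row["Age"])
            rows.append(row)
    return rows

def predict(row):
    """Classic baseline: predict survival for women and children under 16."""
    return 1 if row["Sex"] == "female" or row["Age"] < 16 else 0

rows = load(RAW)
accuracy = sum(predict(r) == int(r["Survived"]) for r in rows) / len(rows)
```

In practice the same steps are usually done with pandas and scikit-learn, which the talk's "Python tools and libraries" likely refers to.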
• • • Bertalan Rónai: Combine Python with Power BI! • • •
Bertalan Rónai is a Business Intelligence Developer at Grape Solutions.
His presentation is about using Python in the Microsoft Power BI reporting tool, either as a data source or to create visuals.
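As a taste of the data-source side: Power BI's "Python script" connector runs a script and offers every pandas DataFrame it defines as a loadable table. A minimal sketch (the table and column names are made up for illustration):

```python
# In Power BI: Get Data -> Python script. Any pandas DataFrame defined
# here (e.g. 'sales') shows up as an importable table in the Navigator.
# The data below is illustrative only.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", "East"],
    "revenue": [1200, 950, 1430],
})
```

For Python visuals the flow is reversed: Power BI hands the script a DataFrame of the selected fields, and the script renders a matplotlib figure.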
• • • Manuel Schipper: Import Kafka! A Pythonic Introduction To Apache Kafka • • •
Manuel is working at Cloudera as a Customer Operations Engineer, delivering operational assistance and mission-critical response for production Kafka and Hadoop clusters. He learned the ropes of the distributed world by tinkering with broken software and getting wrecked systems back on their feet. In the past, he was an active community member and organized tech events such as the Python meetup, the Women in IT meetup, and Django Girls Budapest.
Stream processing is becoming a key requirement in many organizations. Applications that compute valuable data, such as customer transactions or machine learning models, want to continuously update their results when new data is available. Kafka provides the capacity to bring real-time feeds to such applications in a fault-tolerant, durable way. It is a distributed, high-performance message hub capable of handling billions of messages per day and scaling to the requirements of any large-scale data application. In this talk, we'll introduce Apache Kafka basics. We'll go through its architecture, capabilities, and possible use cases. In addition, we'll do a short demo on how to get started with Kafka using Python. Whether you are evaluating Kafka for your organization or are a curious techie looking to expand your interests, let's discuss how you can leverage the power of Apache Kafka!
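To hint at what the Python demo might look like, here is a minimal producer sketch using the third-party kafka-python package. The broker address, topic name, and event fields are illustrative assumptions, and a running Kafka broker is required for the send itself:

```python
# Minimal Kafka producer sketch (kafka-python package; pip install kafka-python).
# Broker address, topic name, and payload are illustrative assumptions.
import json

def to_kafka_bytes(event):
    """Serialize an event dict to UTF-8 JSON bytes for a Kafka message value."""
    return json.dumps(event, sort_keys=True).encode("utf-8")

if __name__ == "__main__":
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("transactions", to_kafka_bytes({"user": 42, "amount": 9.99}))
    producer.flush()  # block until buffered messages are delivered
```

A matching consumer would iterate over a `KafkaConsumer` subscribed to the same topic and decode each message value back from JSON.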
Code of Conduct
Every attendee must follow the Code of Conduct: https://goo.gl/aMS14H