Apache Druid: Sub-second Slice and Dice your Data!
Agenda & Timeline:
3:30 - 3:45: Assemble, Introductions
3:45 - 4:45: First Speaker (Priyam Gupta)
4:45 - 5:45: Second Speaker + Demo (Tijo Thomas)
5:45 - 6:00: Q&A
Talk 1: Druid – A Massively Parallel Processing System Making Your Warehouses Available for Fast and Real-time Analytics
Abstract: Druid's architecture enables petabyte-scale data crunching with sub-second latency. You'll hear how zapr is leveraging Druid to analyse TV viewership data and to power analytical dashboards that report real-time user behaviour. Priyam will talk about some of the key problems that can be solved by deploying Druid, and its easy integration with open-source technologies such as Kafka. Learn how Druid can integrate seamlessly with existing ETL pipelines, provide a better and faster way to slice & dice existing warehouses, and power real-time dashboards.
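To give a flavour of the Kafka integration the talk covers, here is a minimal sketch of a Druid Kafka supervisor spec (submitted to the Overlord to start streaming ingestion). The datasource name, topic, broker address, and dimension columns are hypothetical placeholders, not details from the talk:

```json
{
  "type": "kafka",
  "spec": {
    "dataSchema": {
      "dataSource": "viewership-events",
      "timestampSpec": { "column": "timestamp", "format": "iso" },
      "dimensionsSpec": { "dimensions": ["channel", "region", "device"] },
      "granularitySpec": { "segmentGranularity": "hour", "queryGranularity": "minute" }
    },
    "ioConfig": {
      "topic": "viewership-events",
      "consumerProperties": { "bootstrap.servers": "kafka-broker:9092" },
      "inputFormat": { "type": "json" }
    },
    "tuningConfig": { "type": "kafka" }
  }
}
```

Once the supervisor is running, events published to the topic become queryable in Druid within seconds, which is what makes the real-time dashboards described above possible.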
Speaker bio: Priyam Gupta is a Technical Architect at zapr. He has over 8 years of experience building large-scale distributed data platforms and data products that provide powerful insights to users, customers, and researchers. He has rich experience in building modern cloud-native ETL pipelines.
Talk 2: Setting the Stage for Fast Analytics with Druid
Druid is an emerging standard in the data infrastructure world, designed for high-performance slice-and-dice analytics (“OLAP”) on large data sets. This talk is for you if you’re interested in learning more about pushing Druid’s analytical performance to the limit. Perhaps you’re already running Druid and are looking to speed up your deployment, or perhaps you aren’t familiar with Druid and are interested in learning the basics. Some of the tips in this talk are Druid-specific, but many of them will apply to any operational analytics technology stack.
Speaker bio: Tijo Thomas is a Solutions Architect at Imply. He holds a postgraduate degree in Information Technology from IIT Bombay and has more than 16 years of experience in the software industry. Before joining Imply he worked at Cloudera, and he has ~7 years of experience with various big data technologies.