Past Meetup

Hortonworks, the future of Hadoop, and Agile Big Data

120 people went


We are very excited to have Jamie Engesser from Hortonworks come to present. He will cover the work being done by Hortonworks on the Hadoop platform, including Tez, Stinger, and Knox. Come and learn about the future of Hadoop!

6:00 – 6:30 - Socialize over food and drink
6:30 – 6:45 - Announcements, Upcoming Events
6:45 – 8:30 - Hortonworks - Tez, Stinger, Knox and the future of Hadoop
8:30 – ??? - Continued socializing

About the Speaker

Jamie Engesser
Vice President, Solutions Engineering
For almost 20 years, Jamie has been an instrumental player in the growth of many small-to-medium-sized high-tech companies. His experience with Big Data on Hadoop, Fast Data with in-memory data stores, and Java middleware makes him a critical asset to the Hortonworks team, where he runs the Solutions Engineering organization. Before Hortonworks, Jamie took the SpringSource field team from a small agile team to a leader in the Java middleware/Platform as a Service (PaaS) space, and ultimately led the global buildout of VMware's vFabric division. He has been integral to multiple startups, including SpringSource, Savvion, Vitria, and Documentum. He holds a B.S. in Industrial and Management Engineering from Montana State University.

About the presentations

Hortonworks and the future of Hadoop

Apache Hive and its HiveQL interface have become the de facto SQL interface for Hadoop. Apache Hive was originally built for large-scale batch processing, and it is very effective for reporting, data mining, and data preparation use cases. These usage patterns remain very important, but with the widespread adoption of Hadoop, the enterprise requirement for Hadoop to become more real-time and interactive has grown as well. Enabling Hive to answer human-time use cases (i.e., queries in the 5–30 second range) such as big data exploration, visualization, and parameterized reports, without resorting to yet another tool to install, maintain, and learn, can deliver a lot of value to the large community of users with existing Hive skills and investments. To this end, we have launched the Stinger Initiative, with input and participation from the broader community, to enhance Hive with richer SQL support and better performance for these human-time use cases. We believe the performance changes we are making today, along with the work being done in Tez, will transform Hive into a single tool that Hadoop users can use for report generation, ad hoc queries, and large batch jobs spanning tens or hundreds of terabytes.