Big Data Use Case - Logging with Apache Flume and Hadoop/HDFS


Details
Please join us on Wednesday, December 18th, 2013, for a meetup on logging and the use of Big Data technologies like HDFS. The ability to perform large-scale, real-time logging is an important use case in the cloud-centric ops world.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As organizations move more applications to the cloud, there is an increased need for logging and monitoring across a heterogeneous software and infrastructure stack. In this meetup we want to explore some of the tools and technologies that enable "BigOps": collecting and processing large amounts of data with robust open source tools such as Apache Flume and its integration with Hadoop/HDFS.
Overview of Apache Flume
- What is Flume? (a walk through its components)
- Common Flume architectures, including the use of Hadoop/HDFS as a sink (see the example sketch after this list)
- Performance tuning tips with Flume
- Architecting for different levels of guarantees
- Working through different types of sinks and what they can offer
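To give a flavor of the material, here is a minimal sketch of a Flume agent configuration that tails an application log into an HDFS sink. All names and paths (agent1, src1, the log file, the HDFS URL) are hypothetical placeholders, not settings from the talk:

    # Name this (hypothetical) agent's components
    agent1.sources = src1
    agent1.channels = ch1
    agent1.sinks = sink1

    # Source: tail an application log
    agent1.sources.src1.type = exec
    agent1.sources.src1.command = tail -F /var/log/app/app.log
    agent1.sources.src1.channels = ch1

    # Channel: a file channel persists events to disk for stronger
    # delivery guarantees (a memory channel is faster but loses
    # in-flight events if the agent dies)
    agent1.channels.ch1.type = file
    agent1.channels.ch1.checkpointDir = /var/flume/checkpoint
    agent1.channels.ch1.dataDirs = /var/flume/data

    # Sink: write events into HDFS, bucketed by day
    agent1.sinks.sink1.type = hdfs
    agent1.sinks.sink1.channel = ch1
    agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/flume/events/%Y-%m-%d
    agent1.sinks.sink1.hdfs.fileType = DataStream
    # Use the agent's local clock for the %Y-%m-%d escapes, so no
    # timestamp interceptor is needed for this sketch
    agent1.sinks.sink1.hdfs.useLocalTimeStamp = true
    agent1.sinks.sink1.hdfs.rollInterval = 300

An agent like this would typically be started with something along the lines of flume-ng agent --conf-file agent1.properties --name agent1. The choice of channel here (memory vs. file) is one example of the guarantee-level trade-offs on the agenda above.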
Agenda
6:30pm - Networking & Food/Drinks
7:00pm - Technical Presentations
8:00pm - Q&A
8:30pm - Networking
Speaker Bio
Ted Malaska, Sr Solution Architect at Cloudera
Ted has worked on close to 60 clusters across two to three dozen clients, covering hundreds of use cases. He has 18 years of professional experience working for start-ups, the US government, a number of the world's largest banks, commercial firms, bio firms, retail firms, hardware appliance firms, and the US's largest non-profit financial regulator. He has architecture experience across topics such as Hadoop, Web 2.0, Mobile, SOA (ESB, BPM), and Big Data. Ted is a regular committer to Flume, Avro, Pig, and YARN.
Food and drinks will be provided.
We look forward to seeing you there!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++