Apache Mesos, Apache Hadoop, Apache Spark + Custom Enterprise Applications

Hosted By
Adam M.
Details

There is a wealth of both processing frameworks (Spark, Tez, etc.) and resource management frameworks (YARN, Mesos, etc.) out there now. Our guest speaker, Jim Scott, will help us sort out the options for balancing these frameworks in a modern enterprise.

Outline:
Apache Mesos, Apache Hadoop, Apache Spark + Custom Enterprise Applications: Combined, this stack is greater than the sum of its parts. Couple it with custom enterprise applications, and the data center turns into a well-oiled machine, with flexibility across the entire data center.

Abstract:
Apache Mesos delivers resource management across the entire data center. This allows a company's operations team to tune the performance of the entire software stack by shifting resources between applications without having to re-engineer software. Apache Hadoop and Apache Spark together deliver the processing power for handling big data. Custom enterprise applications can leverage Hadoop and Spark to deliver enterprise functionality, while Mesos balances resources across the data center. This presentation will focus on an end-to-end use case for the architecture and the benefits this software stack can deliver, including operational efficiencies, better CPU utilization, and simplified software architectures.
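As a concrete illustration of Spark deferring resource management to Mesos rather than YARN, Spark can be pointed at a Mesos master via its `--master` option (the hostname, class name, and jar below are placeholders, not from the talk):

```shell
# Submit a Spark application to a Mesos master instead of YARN.
# "mesos-master.example.com:5050" and "com.example.MyApp" are hypothetical.
spark-submit \
  --master mesos://mesos-master.example.com:5050 \
  --class com.example.MyApp \
  --total-executor-cores 8 \
  myapp.jar
```

With this setup, Mesos decides where the executors run, so operations can shift cores between Spark jobs and other data center applications without changing the application code.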

Our Guest Speaker: Jim Scott - Director of Enterprise Strategy and Architecture, MapR

Jim has held positions running Operations, Engineering, Architecture, and QA teams. He is the co-founder of the Chicago Hadoop Users Group (CHUG), where he has coordinated the Chicago Hadoop community for the past 4 years. Jim has worked in the Consumer Packaged Goods, Digital Advertising, Digital Mapping, Chemical, and Pharmaceutical industries, and has built systems that handle more than 50 billion transactions per day. His work with high-throughput computing at Dow Chemical was a precursor to more standardized big data concepts like Hadoop.

AI Performance Engineering Meetup (Toronto)
Paytm Labs
220 Adelaide Street West · Toronto, ON