Join us for an Apache Kafka® meetup on August 27th from 6pm, hosted at Ticketmaster in Scottsdale, AZ!
6:00: Doors open
6:00-6:30: Pizza, Drinks and Networking
[DOORS CLOSE AT 7PM]
6:30-7:15: Erik Tank, Ticketmaster
7:15-8:00: Dani Traphagen, Confluent
8:00-8:30: Additional Q&A & Networking
Erik Tank is currently a software engineer at Ticketmaster on the Core Client Services team and leads the Phoenix Perl Mongers group. In his twenty-plus-year career he has worked on e-commerce backend APIs, full-stack web projects, and POS-style systems. He is currently working under a mandate to deprecate or update 'legacy' products so that Ticketmaster can quickly move forward with the best technologies, delivering a smooth user/client experience as we 'Get You In'. This unique role exposes Erik to a wide-ranging set of technologies and languages, letting him find the solutions needed to move products forward. In his free time, he enjoys volleyball, canyoneering, camping, reading, and a good Call of Duty Blackout session.
Title: Adding Avro to your Kafka streams to meet your messaging needs
Abstract: Now that you have Kafka up and running, how are you going to use it? Kafka doesn't care what it transfers, only that it gets delivered.
You can blaze your own trail and define everything yourself, but then everything is your and your users' responsibility. You can create the equivalent of a HATEOAS message, which bloats your message and requires your clients to build a smart parser. Or you can create a minimalist message, which requires your clients to know everything about your message, and they'll have to make updates every time you change it.
Enter Avro. Avro schemas let you define and share your message schema, with all the information needed to validate your messages ... without bloating the messages themselves.
In this talk we will explore what Avro is, what it isn't, and whether it's worth your time. As a hardened Perl* programmer who lives by TIMTOWTDI and abhors systems that get in the way by imposing their ideology, I'll share why I've adopted Avro when dealing with Kafka.
* Code examples may include a Perl example, but will be primarily in Java.
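To make the schema idea above concrete, here is a toy sketch of what schema-based validation buys you. This is not the Avro library (real Avro also handles compact binary serialization and schema evolution, not just validation), and the TicketOrder schema and its field names are invented for illustration; an actual Avro schema is a JSON document much like the one below.

```python
import json

# A minimal Avro-style record schema (illustrative; real Avro schemas
# are JSON documents shaped like this).
SCHEMA = json.loads("""
{
  "type": "record",
  "name": "TicketOrder",
  "fields": [
    {"name": "orderId", "type": "string"},
    {"name": "quantity", "type": "int"}
  ]
}
""")

# Map Avro primitive type names to Python types (a subset, for the sketch).
AVRO_TYPES = {"string": str, "int": int}

def validate(record: dict, schema: dict) -> bool:
    """Toy check: every schema field must be present with the declared type."""
    for field in schema["fields"]:
        value = record.get(field["name"])
        if not isinstance(value, AVRO_TYPES[field["type"]]):
            return False
    return True

print(validate({"orderId": "A-100", "quantity": 2}, SCHEMA))      # True
print(validate({"orderId": "A-100", "quantity": "two"}, SCHEMA))  # False
```

The point of sharing the schema (e.g. via a schema registry) is that producers and consumers both validate against the same definition, so the messages themselves stay small.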
Like many, I love and hate Distributed Systems: they are rewarding but highly complex. I have a penchant for making enterprises successful with Open Source technologies, targeting transitions toward real-time and event-based architectures. I'm currently at Confluent (the event streaming platform built around Apache Kafka); my history includes Apache Ignite and Apache Cassandra at GridGain and DataStax, respectively. I was an IT grunt from a young age and continue to love this field dearly. My interests are Event Streaming, Big Data, Data Science, Bioinformatics, Machine Learning, Distributed Databases, Data Modeling, Search, and data processing/analytics. I also love public speaking and travel!
Title: Kubernetes and Kafka - all the cool kids are doing it and why you *maybe* should!
In this talk, we will look at the Confluent Operator and how it simplifies running the Confluent Platform on Kubernetes, on-premises or in the cloud. Operator is an enterprise-ready implementation of the Kubernetes Operator API that automates deployment and key lifecycle operations. Leverage Helm for upgrades, or scale out your Kafka brokers: scaling is as simple as a kubectl command. We will walk through the very basics of Kubernetes and Kafka, then show how they can be used together in a quick demo. We will also address where things get tricky with Persistent Volume Claims (PVCs) and how to manage state. Hope to see you there!
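As a rough illustration of the "scaling is as simple as a kubectl command" point, the commands below show the general shape. The resource name, release name, and chart path are placeholders, not Confluent Operator specifics; the exact resources depend on how Operator deploys your cluster.

```shell
# Scale the Kafka broker StatefulSet to five replicas.
# "kafka" is an illustrative resource name; yours may differ.
kubectl scale statefulset kafka --replicas=5

# Or roll out configuration changes and upgrades via Helm
# ("my-confluent" and the chart path are placeholders).
helm upgrade my-confluent ./confluent-operator
```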
KAFKA SUMMIT SF 2019: 30 Sep - 1 Oct (25% off)
Go to bit.ly/KSummitMeetupInvite, click ‘register’, and enter the community promo code “KS19Meetup”.
Community Slack: https://launchpass.com/confluentcommunity
Contact [masked] to speak or host!