  • A journey from a legacy way of Video Ad Tracking to Apache Kafka®-based Video Ad Tracking

    Join us for an Apache Kafka® meetup on October 15th from 6:30pm, hosted at Makers Tribe in Chennai! The address, agenda and speaker information can be found below. See you there!

    -----

    Agenda:
    6.30pm - Registration
    6.40pm - A journey from a legacy way of Video Ad Tracking to Kafka based Video Ad Tracking - Ganeshkumar Ramachandran (Gramcha), Pando Corp
    7.40pm - Networking & Dinner
    8.00pm - Group Photo

    -----

    Speaker: Ganeshkumar Ramachandran (Gramcha)

    Bio: Gramcha is a Principal Software Engineer at Pando Corp, doing Truck Science. Previously he worked as an engineer at Ooyala and ADF Data Science, where he gained extensive experience in Java, Docker and Apache Kafka. He is a frequent visitor to JUG and gives talks about microservices and data pipelines at various events. Most recently he has dedicated his time to digitizing logistics at scale and to improving the logistics supply chain using Kafka. In his spare time, he enjoys spending time with his family and friends.
    linkedin.com/in/iamganeshkumar/
    github.com/gramcha
    @gramcha

    Title: A journey from a legacy way of Video Ad Tracking to Kafka based Video Ad Tracking

    Abstract: Video Ad tracking events are not much different from non-video Ad tracking events. Non-video Ads typically have trackers such as delivery, impressions, clicks and installs (in the case of mobile app Ads). Video Ads have trackers such as delivery, impression and 10%, 20%, 30%, ... 100% completed. In our case, the existing video Ad delivery system captures the trackers, but that is not a scalable solution when millions of video Ads are watched simultaneously. This is a very unique case where millions of concurrent users watch video Ads at the same time, similar to YouTube. Our client is an OTT media service that streams cricket matches and shows video Ads between each over of the match. Generally millions of users watch a match concurrently, and some high-profile matches have over 10 million concurrent users! Each break delivers 3 or 4 short Ads, and each Ad triggers 10+ tracker events to the Ad server. These trackers were handled the old-fashioned way, with batch processing. We converted that into real-time data processing using Kafka. We will look in detail at what the old solution was and how we converted it into a Kafka-based solution without disturbing the existing legacy system.

    -----

    KAFKA SUMMIT SF 2019: September 30th to October 1st - We are able to offer you a 25% discount on the standard-priced ticket for Kafka Summit San Francisco (September 30th & October 1st). To redeem it, please go to bit.ly/KSummitMeetupInvite, click ‘register’, select ‘Conference Pass’ and enter the community promo code “KS19Meetup”.

    Don't forget to join our Community Slack Team! https://launchpass.com/confluentcommunity

    If you would like to speak or host our next event please let us know! [masked]

    NOTE: We're unable to cater for attendees under the age of 18. Please do not sign up for this event if you're under 18.
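
    To make the real-time path from the abstract concrete, here is a minimal Java sketch of publishing a single tracker event to Kafka as it happens, instead of accumulating it for batch processing. This is an illustrative assumption rather than the speaker's actual code: the topic name "video-ad-trackers", the JSON payload fields and the localhost broker address are all placeholders.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    import java.util.Properties;

    // Minimal sketch: one Kafka record per tracker event (delivery, impression,
    // 10%..100% completed). Topic name and payload shape are assumptions, not the
    // speaker's implementation.
    public class AdTrackerProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Keying by viewer id keeps one viewer's tracker events in order on a
                // single partition while spreading millions of viewers across partitions.
                String userId = "u-123";
                String event = "{\"adId\":\"ad-42\",\"tracker\":\"20_PERCENT_COMPLETED\",\"userId\":\"u-123\"}";
                producer.send(new ProducerRecord<>("video-ad-trackers", userId, event));
            }
        }
    }

    Partitioning by viewer id is only one possible design choice here; the partitioning strategy used in the talk may differ.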

  • An Introduction to Apache Kafka®

    Makers Tribe

    Join us for an Apache Kafka® meetup on June 24th from 6:30pm, hosted at Makers Tribe in Chennai! The address, agenda and speaker information can be found below. See you there!

    -----

    Agenda:
    6.30pm - Registration
    6.40pm - What is Apache Kafka®? - Magesh Nandakumar, Confluent
    7.40pm - Networking & Dinner
    8.00pm - Group Photo

    -----

    Speaker: Magesh Nandakumar

    Bio: Magesh Nandakumar is a Software Engineer at Confluent working on Kafka Connect and Schema Registry. Prior to Confluent, he worked at various FinTech companies. During his tenure at Visa, he started using Kafka for fraud detection and has been fascinated by it ever since. In his spare time, he loves to play cricket and to spend time with his son and daughter.

    Title: What is Apache Kafka®?

    Abstract: Streaming platforms have emerged as a popular new trend, but what exactly is a streaming platform? Part messaging system, part Hadoop made fast, part fast ETL and scalable data integration, with Apache Kafka at the core, streaming platforms offer an entirely new perspective on managing the flow of data. This talk will explain what a streaming platform such as Apache Kafka is and cover some of the use cases and design patterns around its use, including several examples of where it is solving real business problems. New developments in this area such as KSQL will also be discussed.

    -----

    KAFKA SUMMIT SF 2019: September 30th to October 1st - We are able to offer you a 25% discount on the standard-priced ticket for Kafka Summit San Francisco (September 30th & October 1st). To redeem it, please go to bit.ly/KSummitMeetupInvite, click ‘register’, select ‘Conference Pass’ and enter the community promo code “KS19Meetup”.

    Don't forget to join our Community Slack Team! https://launchpass.com/confluentcommunity

    If you would like to speak or host our next event please let us know! [masked]

    NOTE: We're unable to cater for attendees under the age of 18. Please do not sign up for this event if you're under 18.
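
    As a small taste of the "messaging system" side of a streaming platform mentioned in the abstract, here is a minimal Java consumer sketch that reads an unbounded stream of records from a topic. The topic name "events", the consumer group id and the localhost broker address are illustrative assumptions, not anything specific to the talk.

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    // Minimal sketch: a consumer that continuously polls a topic, treating the data
    // as an unbounded stream rather than a finite batch. All names are placeholders.
    public class StreamingIntroConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("group.id", "intro-demo");                 // placeholder group id
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("events"));
                while (true) {
                    // New records keep arriving as long as producers keep writing.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("%s -> %s%n", record.key(), record.value());
                    }
                }
            }
        }
    }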
