Ido Nadler
I am a big data team lead at Nielsen.
My team focuses on building massive data pipelines (~250 billion events/day) and infrastructure for running machine learning algorithms. Our projects run on AWS using technologies such as Kafka, Spark, Airflow, Kubernetes, and more.
I like to continuously experiment with new technologies, tackle challenging problems, and find better, more elegant, and more cost-effective solutions.
Session
Kafka data pipeline maintenance can be painful.
It usually comes with complicated and lengthy recovery processes, scaling difficulties, traffic ‘moodiness’, and latency issues after downtimes and outages.
It doesn’t have to be that way!
We’ll examine one of our multi-petabyte-scale Kafka pipelines and go over some of the pitfalls we’ve encountered. We’ll offer solutions that alleviate those problems and compare the before and after. We’ll then explain why some common-sense solutions do not work well, and offer an improved, scalable, and resilient way of processing your stream.
We’ll cover:
- Costs of processing in stream compared to in batch (see the sketch after this list)
- Scaling out for bursts and reprocessing
- Making the tradeoff between wait times and costs
- Recovering from outages
- And much more…
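As a small taste of the first bullet: one place the stream-vs-batch cost tradeoff shows up is in how many events you accumulate before each write. Below is a minimal, illustrative micro-batching sketch (not the pipeline from the talk), assuming the kafka-python client, a topic named "events", a local broker, and a hypothetical write_batch_to_object_store sink; BATCH_SIZE and MAX_WAIT_MS are the two knobs trading wait time against cost.

```python
# Minimal micro-batching consumer sketch (kafka-python; pip install kafka-python).
from kafka import KafkaConsumer

BATCH_SIZE = 10_000    # bigger batches amortize per-write overhead (cheaper per event)
MAX_WAIT_MS = 30_000   # cap on how long events may sit in the buffer (latency bound)

def write_batch_to_object_store(records):
    # Hypothetical sink: a real pipeline might write a Parquet file to S3 here.
    print(f"flushed {len(records)} events")

consumer = KafkaConsumer(
    "events",                            # assumed topic name
    bootstrap_servers="localhost:9092",  # assumed broker address
    group_id="batch-sink",
    enable_auto_commit=False,            # commit only after a durable write
    auto_offset_reset="earliest",
)

buffer = []
while True:
    # poll() returns {TopicPartition: [records]}; it blocks for up to MAX_WAIT_MS.
    polled = consumer.poll(timeout_ms=MAX_WAIT_MS,
                           max_records=BATCH_SIZE - len(buffer))
    for records in polled.values():
        buffer.extend(records)
    # Flush on a full batch, or on a timeout with data already buffered.
    if len(buffer) >= BATCH_SIZE or (buffer and not polled):
        write_batch_to_object_store(buffer)
        consumer.commit()
        buffer.clear()
```

Raising BATCH_SIZE lowers the per-event cost of each write but lets events wait longer in the buffer; MAX_WAIT_MS bounds that wait, which is exactly the tradeoff the session discusses.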