Main Stream
Language: English
Join us for an insightful talk on ensemble learning in machine learning, where we’ll explore how combining multiple models can significantly boost predictive performance. We’ll cover the fundamentals, including the difference between weak and strong learners. Then we’ll dive into the key techniques: Bagging, which trains models independently in parallel; Boosting, which sequentially strengthens weak models; and Stacking, which merges models to create a more generalized and robust model. Whether you’re a data scientist or just curious, this talk will deepen your understanding of these powerful techniques.
In this talk, we’ll delve into the fascinating world of ensemble learning techniques in machine learning: a powerful paradigm that combines the strengths of multiple models to enhance overall predictive performance. Here’s what you can expect:
1. Foundations: understanding weak and strong learners.
2. Bagging: an ensemble technique that trains multiple base models independently and in parallel, then aggregates their predictions.
3. Boosting: a technique that builds a strong model by sequentially combining multiple weak models, each correcting the errors of its predecessors.
4. Stacking: model fusion that combines the predictions of multiple models to build a more generalized and robust model.
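As a quick preview of the techniques on the agenda, here is a minimal sketch of bagging, boosting, and stacking using scikit-learn. It is not material from the talk itself; the synthetic dataset, choice of base estimators, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of bagging, boosting, and stacking with scikit-learn.
# The dataset, estimators, and hyperparameters below are illustrative
# assumptions, not the talk's actual examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    AdaBoostClassifier,
    BaggingClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Bagging: deep trees trained independently, in parallel, on bootstrap
# samples of the data; their votes are aggregated to reduce variance.
bagging = BaggingClassifier(
    DecisionTreeClassifier(), n_estimators=50, random_state=42
)

# Boosting: weak learners (decision stumps by default) trained
# sequentially, each reweighting the examples its predecessors missed.
boosting = AdaBoostClassifier(n_estimators=50, random_state=42)

# Stacking: a logistic-regression meta-learner is fit on the
# cross-validated predictions of the base ensembles.
stacking = StackingClassifier(
    estimators=[("bagging", bagging), ("boosting", boosting)],
    final_estimator=LogisticRegression(),
)

for name, model in [("bagging", bagging), ("boosting", boosting),
                    ("stacking", stacking)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

Swapping in different base estimators or a different meta-learner is a one-line change, which is part of what makes these wrappers convenient for experimenting with ensembles.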
Join us in this talk, where we’ll demystify these techniques, understand their inner workings, and discover how they elevate predictive performance. Whether you’re a data scientist, machine learning enthusiast, or curious learner, this talk promises insights that go beyond single models!
Yashasvi Misra is a Data Engineer at ABInBev and a GHC Scholar, with solid expertise in data modeling, data architecture, Python, and data science. A recipient of the Excellence Award from Samsung Research India, Yashasvi brings a robust background in research projects and a fervent enthusiasm for exploring and implementing cutting-edge technologies. Passionate about engaging with open source communities, Yashasvi is also a dedicated advocate for diversity and inclusion in the tech industry.