Language: English
11-16, 16:40–17:10 (Asia/Hong_Kong), LT7
The talk will explore privacy in machine learning (ML) and how Federated Learning (FL) can be leveraged to address privacy concerns. FL trains machine learning models across multiple decentralized devices or servers, each holding its own local data samples, without ever exchanging those samples.
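As a rough illustration of the core idea (a minimal NumPy sketch with made-up client data, not code from the talk itself), one round of federated averaging looks like this: each client refines a copy of the model on its own private data, and only the resulting weights are sent back to be averaged.

```python
import numpy as np

# Hypothetical local data: each client holds its own (X, y) and never shares it.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

def local_update(weights, X, y, lr=0.01, epochs=5):
    """Run a few gradient-descent steps on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # linear-regression gradient
        w -= lr * grad
    return w

global_w = np.zeros(3)
for round_idx in range(10):
    # Each client trains locally; only the updated weights leave the device.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The server aggregates by simple averaging (FedAvg).
    global_w = np.mean(local_ws, axis=0)
```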
We will demonstrate a real-time Federated Learning setup in Python (using MPI for communication), showcasing how message-passing primitives coordinate real-time interaction and parallel processing among the nodes.
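The actual demo belongs to the talk; purely to illustrate the communication pattern, an mpi4py sketch along the following lines (the structure, data, and round count are illustrative assumptions, not the demo code) lets rank 0 act as the server and every other rank as a simulated client:

```python
# Run with e.g.: mpiexec -n 5 python fl_mpi_demo.py  (1 server + 4 simulated clients)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

global_w = np.zeros(3) if rank == 0 else None

for round_idx in range(10):
    # Server broadcasts the current global model to every simulated client.
    global_w = comm.bcast(global_w, root=0)

    if rank == 0:
        local_w = None                      # the server holds no private data
    else:
        rng = np.random.default_rng(rank)   # hypothetical private data per client
        X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
        local_w = global_w - 0.01 * 2 * X.T @ (X @ global_w - y) / len(y)

    # Clients send back only model weights; the server averages them (FedAvg).
    gathered = comm.gather(local_w, root=0)
    if rank == 0:
        global_w = np.mean([w for w in gathered if w is not None], axis=0)
        print(f"round {round_idx}: |w| = {np.linalg.norm(global_w):.3f}")
```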
We will also discuss how model compression techniques (such as pruning, quantization, and binarization) can complement Federated Learning, enabling efficient deployment of privacy-preserving models on resource-constrained edge devices.
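As a taste of what these techniques do, the toy NumPy snippet below (illustrative numbers only, not material from the talk) applies magnitude pruning followed by 8-bit quantization to a small weight matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(4, 4)).astype(np.float32)   # a toy weight matrix

# Magnitude pruning: zero out the smallest 50% of weights so they can be stored sparsely.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# 8-bit quantization: map floats to int8 with a single scale factor.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)    # what gets shipped to the edge device
dequantized = quantized.astype(np.float32) * scale      # approximate reconstruction at inference

print("nonzero weights:", np.count_nonzero(quantized), "of", weights.size)
print("max quantization error:", np.abs(dequantized - pruned).max())
```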
Sections:
- Introduction: The Privacy vs. Power Struggle in ML (5 minutes)
  - The challenges of data privacy in traditional centralized learning
  - The tension between powerful models and user privacy
  - FL and model compression as potential solutions
- Federated Learning: Training Together, Protecting Privacy (5 minutes)
  - Core concept of FL and its decentralized training paradigm
  - Privacy benefits of FL compared to traditional methods
- Real-Time Visualization with MPI: See It to Believe It (7 minutes)
  - MPI as a tool for parallel communication between simulated user devices
  - Using MPI to create virtual users for FL training
  - A live visualization of the FL training process driven by MPI
- Making the Cut: Model Compression for Edge Devices (5 minutes)
  - Limitations of deploying complex models on resource-constrained edge devices
  - Model compression techniques (quantization, pruning, etc.) for reducing model size without compromising accuracy
  - Integrating model compression with FL for efficient deployment on edge devices
- Conclusion (3 minutes)
  - The potential of FL and model compression for secure, real-time AI deployments at the edge
Gautam Jajoo, a fourth-year Computer Science undergraduate at BITS Pilani, is a seasoned Python developer and AI enthusiast with extensive experience in machine learning, data privacy, and distributed computing. Over the past two years, he has focused on Federated Learning and on model compression techniques that enhance privacy and efficiency. He has worked, and continues to work, with several research institutions in related domains, including the MIT Media Lab, Nantes Université, and the ADAPT Lab at BITS Pilani. His involvement with the CloudCV organization reflects his commitment to making AI research more reproducible. Gautam has also served as a Google Summer of Code mentor and organization administrator, leading teams that improved EvalAI's infrastructure and user interface.