Federated Learning is a technology for training machine learning models and running data analytics on decentralized data. This tutorial will demonstrate step by step how to train large-scale TensorFlow models and run custom computations in federated environments.
In this tutorial, the authors of TensorFlow Federated will introduce the key concepts behind federated learning, an approach to machine learning in which a shared global model is trained across many participating clients that keep their training data local. By eliminating the need to collect data at a central location, while still enabling each participant to benefit from the collective knowledge of everyone in the network, federated learning lets you build intelligent applications that leverage insights from data that might be too costly, sensitive, or impractical to collect.
We’ll demonstrate how you can develop hands-on familiarity with federated learning using TensorFlow Federated (TFF), a new open-source framework in the TensorFlow ecosystem. We will introduce the key concepts behind TensorFlow and TFF, demonstrate by example how to set up a federated learning experiment and run it in a simulator, show what the code looks like under the hood and how to extend it, and briefly discuss options for future deployment to real devices.
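To make this concrete, here is a minimal sketch of what a simulated federated learning experiment can look like with the Federated Learning API. The dataset, model, and hyperparameters are illustrative placeholders, and the function names follow the TFF tutorials at the time of writing, so exact signatures may differ between releases.

```python
# A minimal Federated Averaging simulation with the Federated Learning API.
# Illustrative sketch only: dataset, model size, and hyperparameters are
# placeholders, and API names may differ between TFF versions.
import tensorflow as tf
import tensorflow_federated as tff

# One of the sample federated datasets shipped with TFF (EMNIST,
# partitioned by the original writer of each digit).
emnist_train, _ = tff.simulation.datasets.emnist.load_data()

def preprocess(dataset):
  # Flatten the 28x28 images and batch each client's examples.
  def to_example(element):
    return (tf.reshape(element['pixels'], [784]), element['label'])
  return dataset.map(to_example).shuffle(100).batch(20)

# Sample a handful of simulated clients to participate in each round.
sample_clients = emnist_train.client_ids[:10]
federated_train_data = [
    preprocess(emnist_train.create_tf_dataset_for_client(cid))
    for cid in sample_clients
]

def model_fn():
  # An ordinary Keras model, wrapped so that TFF can serialize it.
  keras_model = tf.keras.Sequential([
      tf.keras.layers.InputLayer(input_shape=(784,)),
      tf.keras.layers.Dense(10, kernel_initializer='zeros'),
      tf.keras.layers.Softmax(),
  ])
  return tff.learning.from_keras_model(
      keras_model,
      input_spec=federated_train_data[0].element_spec,
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

# Build the Federated Averaging process and run a few simulated rounds.
iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))

state = iterative_process.initialize()
for round_num in range(5):
  state, metrics = iterative_process.next(state, federated_train_data)
  print('round {:2d}, metrics={}'.format(round_num, metrics))
```

The experiment code itself is runtime-agnostic: TFF is designed so that the same code that runs against the simulation runtime can later migrate toward a deployment on real devices.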
The talk caters to audiences with different backgrounds:
Machine learning developers and practitioners, who would like to experiment with running their existing machine learning models and data in a federated setting, will learn how to do so using the Federated Learning API, the included simulation runtime, and the sample federated datasets.
Researchers, who would like to experiment with new types of federated learning algorithms, extend those that ship with the framework, or develop custom types of federated computations such as statistical analysis over sensitive data, will learn how to do so using the Federated Core API, a strongly typed functional programming environment that allows TensorFlow code to be mixed easily with federated communication abstractions (a brief sketch appears below).
Systems engineers and researchers, who would like to adapt TensorFlow Federated to target new types of environments, will learn how they can benefit from the abstract, platform-independent representation of all computations expressed in TFF; at its core, TFF is designed to facilitate a smooth migration path for all TFF code from a simulation environment to a possible future deployment on real devices in production.
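To illustrate the Federated Core API, here is a minimal sketch of a custom federated computation: each client holds a local sensor reading, and the server learns only the federated average. The scenario is a placeholder, and the operator names follow the TFF tutorials, so decorator and type-constructor details may vary between TFF versions.

```python
# A custom federated computation with the Federated Core API.
# Illustrative sketch: the temperature scenario is a placeholder, and
# exact constructor names may differ between TFF versions.
import tensorflow as tf
import tensorflow_federated as tff

# Ordinary TensorFlow code, wrapped so it can be mapped onto clients.
@tff.tf_computation(tf.float32)
def fahrenheit_to_celsius(reading):
  return (reading - 32.0) * 5.0 / 9.0

# A federated computation: the type signature says the input is a float
# placed at CLIENTS and the result is a single float placed at SERVER.
@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))
def average_temperature_in_celsius(readings):
  celsius_readings = tff.federated_map(fahrenheit_to_celsius, readings)
  return tff.federated_mean(celsius_readings)

# In simulation, a plain Python list stands in for the clients' values.
print(average_temperature_in_celsius([68.0, 70.0, 73.0]))
```

The strongly typed signature, along the lines of ({float32}@CLIENTS -> float32@SERVER), is what makes such a computation portable: the same abstract representation can be executed by the simulator today or by a different runtime later.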
Artificial Intelligence, Deep Learning, Data Science, Machine Learning, Data Engineering
Domain Expertise: some
Python Skill Level: basic
Link to talk slides:
Abstract as a tweet: Meet TensorFlow Federated: an open-source framework for machine learning and other computations on decentralized data.