Holdout SGD: Byzantine Tolerant Federated Learning

About the Project

Joint work with Tel Aviv University professors Amir Globerson and Tomer Koren, and students Shahar Azulay and Lior Raz.
We develop HoldOut SGD, a new distributed Byzantine-tolerant federated learning algorithm for Stochastic Gradient Descent (SGD) optimization. It applies the well-known machine learning technique of holdout estimation, in a distributed fashion, to select parameter updates that are likely to lead to models with low loss. This makes it more effective at discarding inputs from Byzantine workers than existing methods, which eliminate outliers in the parameter space of the learned model. HoldOut SGD first randomly selects a set of workers that use their private data to propose gradient updates. Next, a voting committee of workers is randomly selected, and each voter uses its private data as holdout data to select the best proposals via a voting scheme. Empirical evaluation shows that HoldOut SGD is Byzantine-resilient and converges efficiently to an effective model on deep learning tasks, as long as the total number of participating workers is large and the fraction of Byzantine workers is less than half (less than one third for the fully distributed variant).
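To make the propose-then-vote structure concrete, below is a minimal illustrative sketch in Python/NumPy. It is not the paper's implementation: the function names (holdout_sgd_round, loss_fn, grad_fn), the specific top_k voting rule, and the averaging of the winning proposals are simplifying assumptions for exposition.

import numpy as np

rng = np.random.default_rng(0)

def holdout_sgd_round(params, workers_data, loss_fn, grad_fn, lr=0.1,
                      n_proposers=10, n_voters=10, top_k=5):
    # One illustrative round. workers_data is a list of per-worker private
    # datasets; grad_fn(params, data) returns a gradient estimate and
    # loss_fn(params, data) a scalar loss on that data.
    n = len(workers_data)

    # 1) Randomly split workers into disjoint proposer and voter (committee) sets.
    perm = rng.permutation(n)
    proposers = perm[:n_proposers]
    voters = perm[n_proposers:n_proposers + n_voters]

    # 2) Each proposer computes a candidate parameter update from its private data.
    proposals = [params - lr * grad_fn(params, workers_data[i]) for i in proposers]

    # 3) Each committee member treats its own private data as holdout data and
    #    votes for the top_k proposals with the lowest holdout loss.
    votes = np.zeros(len(proposals), dtype=int)
    for v in voters:
        holdout_losses = np.array([loss_fn(p, workers_data[v]) for p in proposals])
        votes[np.argsort(holdout_losses)[:top_k]] += 1

    # 4) Aggregate (here: average) the proposals that received the most votes.
    winners = np.argsort(-votes)[:top_k]
    return np.mean([proposals[j] for j in winners], axis=0)

The point the sketch tries to convey is the design choice described above: proposals are judged by their loss on held-out data rather than by their distance to other proposals in parameter space, which is what lets honest voters screen out Byzantine updates even when those updates are not geometric outliers.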
 
Shahar Azulay, Lior Raz, Amir Globerson, Tomer Koren, Yehuda Afek:
Holdout SGD: Byzantine Tolerant Federated Learning. CoRR abs/2008.04612 (2020)
