Simulation-based performance analysis of server-less Blockchain-enabled Federated Learning

Overview

Blockchain-enabled Server-less Federated Learning

Repository containing the files used to reproduce the results of the publication "Blockchain-enabled Server-less Federated Learning".

BibTeX citation:

@article{wilhelmi2021blockchain,
  title={Blockchain-enabled Server-less Federated Learning},
  author={Wilhelmi, Francesc and Giupponi, Lorenza and Dini, Paolo},
  journal={arXiv preprint arXiv:2112.07938},
  year={2021}
}

Table of Contents

  • Authors
  • Abstract
  • Repository description
  • Usage
  • Performance Evaluation
  • References
  • Contribute

Authors

  • Francesc Wilhelmi
  • Lorenza Giupponi
  • Paolo Dini

Abstract

Motivated by the heterogeneous nature of devices participating in large-scale Federated Learning (FL) optimization, we focus on an asynchronous server-less FL solution empowered by Blockchain (BC) technology. In contrast to mostly adopted FL approaches, which assume synchronous operation, we advocate an asynchronous method whereby model aggregation is done as clients submit their local updates. The asynchronous setting fits well with the federated optimization idea in practical large-scale settings with heterogeneous clients. Thus, it potentially leads to higher efficiency in terms of communication overhead and idle periods. To evaluate the learning completion delay of BC-enabled FL, we provide an analytical model based on batch service queue theory. Furthermore, we provide simulation results to assess the performance of both synchronous and asynchronous mechanisms. Important aspects involved in the BC-enabled FL optimization, such as the network size, link capacity, or user requirements, are put together and analyzed. As our results show, the synchronous setting leads to higher prediction accuracy than the asynchronous case. Nevertheless, asynchronous federated optimization provides much lower latency in many cases, thus becoming an appealing FL solution when dealing with large data sets, tough timing constraints (e.g., near-real-time applications), or highly varying training data.

Repository description

This repository contains the resources used to generate the results included in the paper entitled "Blockchain-enabled Server-less Federated Learning". The files included in this repository are:

  1. LaTeX files: source files used to generate the manuscript.
  2. Code & Results: scripts and code used to generate the results included in the paper.
  • Queue code: scripts used to execute the Blockchain queuing delay simulations through the batch-service queue simulator.
  • TensorFlow code: Python scripts used to execute the FL mechanisms through TensorFlow Federated (TFF).
  • Matlab code: MATLAB scripts used to process the results and plot the figures included in the manuscript.
  • Outputs: files containing the outputs from the different resources (queue simulator, TFF).
  • Figures: figures included in the manuscript and others with preliminary results.

Usage

Part 1: Batch service queue analysis

To generate the results related to the analysis of the queuing delay in the Blockchain, we used our batch-service queue simulator (commit: f846b66). Please refer to that repository's documentation for installation and execution guidelines. The corresponding theoretical background is detailed in [1].

The results obtained in this part can be found at "Matlab code/output_queue_simulator". To reproduce them, execute the scripts from the "Batch service queue" folder in the batch-service queue simulator.
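To give an intuition of what the simulator evaluates without installing it, the following self-contained sketch mimics a batch-service queue in the Blockchain context: transactions arrive at random, wait in a finite queue, and are served in batches of up to a maximum block size whenever a block is mined. The Poisson-arrival/exponential-mining assumptions, the function name, and the example rates are illustrative only and do not reproduce the exact model of the batch-service queue simulator or of [1].

import random
from collections import deque

def simulate_batch_service_queue(lam=1.0, mu=0.2, max_batch=10,
                                 queue_capacity=1000, n_arrivals=200000,
                                 seed=1):
    # Toy discrete-event simulation of a batch-service (block-mining) queue.
    # Transactions arrive as a Poisson process of rate `lam`; blocks are mined
    # after exponential times of rate `mu` and serve up to `max_batch` queued
    # transactions at once (blocks may also be empty in this simplification).
    rng = random.Random(seed)
    t = 0.0                              # simulation clock
    queue = deque()                      # arrival times of waiting transactions
    next_arrival = rng.expovariate(lam)
    next_block = rng.expovariate(mu)
    delays, dropped, generated = [], 0, 0

    while generated < n_arrivals:
        if next_arrival < next_block:
            # Transaction arrival
            t = next_arrival
            generated += 1
            if len(queue) < queue_capacity:
                queue.append(t)
            else:
                dropped += 1             # queue full: transaction is lost
            next_arrival = t + rng.expovariate(lam)
        else:
            # Block mined: serve up to `max_batch` of the oldest transactions
            t = next_block
            for _ in range(min(max_batch, len(queue))):
                delays.append(t - queue.popleft())
            next_block = t + rng.expovariate(mu)

    mean_delay = sum(delays) / len(delays) if delays else float("nan")
    return mean_delay, dropped / generated

if __name__ == "__main__":
    delay, drop_ratio = simulate_batch_service_queue()
    print(f"Mean queuing delay: {delay:.2f} s, drop ratio: {drop_ratio:.3f}")

Sweeping lam (traffic intensity), mu (mining rate), or max_batch (block size) in such a toy model shows the same qualitative trade-offs that the actual simulator quantifies rigorously.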

Part 2: FLchain analysis

TensorFlow Federated (TFF) has been used to evaluate the s-FLchain and a-FLchain mechanisms proposed in the manuscript. To get started with TensorFlow (and TFF), we strongly recommend the tutorials at https://www.tensorflow.org/federated/tutorials/tutorials_overview.

Once the TFF environment has been set up, our results can be reproduced by using the scripts in "TensorFlow code":

  1. centalized_baseline.py: trains a centralized ML model to obtain baseline results (upper/lower bounds).
  2. sFLchain_vs_aFLchain.py: generates the output for the comparison of the synchronous and asynchronous models.

The output results from this part can be found at "Matlab code/output_tensorflow".
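The scripts above define the exact models and data pipelines; purely as an illustration of the kind of synchronous TFF training loop they rely on, a minimal FedAvg sketch is shown below. The EMNIST tutorial dataset, the layer sizes, the number of clients and rounds, and the classic tff.learning.build_federated_averaging_process API (from TFF releases contemporary with the paper) are assumptions made here for illustration, not necessarily what the repository scripts use.

import tensorflow as tf
import tensorflow_federated as tff

# Tutorial dataset used purely as a stand-in for the paper's training data.
emnist_train, _ = tff.simulation.datasets.emnist.load_data()

def preprocess(dataset):
    def flatten(element):
        return (tf.reshape(element['pixels'], [-1]), element['label'])
    return dataset.map(flatten).shuffle(500).batch(20)   # batch size 20, as in the simulation parameters

example_dataset = preprocess(
    emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[0]))

def model_fn():
    # Two hidden layers with ReLU, SGD and categorical cross-entropy,
    # matching the ML parameters listed in the manuscript.
    keras_model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(784,)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=example_dataset.element_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

# Local/global learning rates of 0.01/1, as in the simulation parameters.
fed_avg = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.01),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))

state = fed_avg.initialize()
client_data = [preprocess(emnist_train.create_tf_dataset_for_client(c))
               for c in emnist_train.client_ids[:10]]
for round_num in range(10):
    state, metrics = fed_avg.next(state, client_data)
    print(f'round {round_num}: {metrics}')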

Part 3: End-to-end analysis framework

Finally, to bring all the resources together, we have used the end-to-end latency framework contained in this repository ("Matlab code/simulation_scripts"). Those files contain the communication and computation models used to calculate the total latency experienced by each considered Blockchain-enabled FL mechanism. Moreover, to obtain the end-to-end latency and accuracy results, the above-mentioned scripts gather and process the outputs obtained from both the batch-service queue simulator and TFF (a simplified sketch of this composition is provided after the content list below).

Content:

  1. 0_preliminary_results: evaluation of several FL parameters via TFF (outside the scope of this publication).
  2. 1_blockchain_analysis: evaluation of the Blockchain queuing delay (refer to Part 1: Batch service queue analysis).
  3. 2_flchain: evaluation of the FL accuracy (refer to Part 2: FLchain analysis) and end-to-end latency analysis. Includes models to compute communication and computation-related delays.
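The MATLAB scripts above are the authoritative implementation of these models. The snippet below is only a rough, self-contained sketch of how such an end-to-end estimate can be assembled per FL round (local computation, transaction upload, Blockchain queuing, block generation, and block propagation); the decomposition, default values, and example inputs are chosen here for illustration and are not taken from the paper's communication/computation models.

def local_training_delay(n_samples, epochs=5, cycles_per_sample=1e5,
                         clock_speed_hz=1e9):
    # Time a client spends on local training (illustrative computation model).
    return epochs * n_samples * cycles_per_sample / clock_speed_hz

def upload_delay(transaction_bits=5e3, uplink_bps=1e6):
    # Time to push a local-model transaction to a miner (illustrative link rate).
    return transaction_bits / uplink_bps

def block_propagation_delay(block_bits, p2p_capacity_bps=5e6, n_hops=3):
    # Time to spread a mined block across the P2P miner network.
    return n_hops * block_bits / p2p_capacity_bps

def end_to_end_latency(n_samples, queuing_delay, block_generation_delay,
                       transactions_per_block=10, header_bits=20e3,
                       transaction_bits=5e3):
    # Sum of the per-round delay components for one FL round.
    block_bits = header_bits + transactions_per_block * transaction_bits
    return (local_training_delay(n_samples)
            + upload_delay(transaction_bits)
            + queuing_delay              # e.g., taken from the batch-service queue simulator
            + block_generation_delay     # mining time of the block
            + block_propagation_delay(block_bits))

if __name__ == "__main__":
    # Hypothetical inputs: 600 local samples, 2 s queuing delay, 1 s mining time.
    print(f"{end_to_end_latency(600, queuing_delay=2.0, block_generation_delay=1.0):.3f} s")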

Performance Evaluation

Simulation parameters

The simulation parameters used in the publication are as follows:

Parameter                                     Value

Blockchain (BC)
  Number of miners                            19
  Transaction size                            5 kbits
  Block header size                           20 kbits
  Max. waiting time                           1000 seconds
  Queue length                                1000 packets

Communication
  Min/max distance Client-BS                  0/4.15 meters
  Bandwidth                                   180 kHz
  Carrier frequency                           2 GHz
  Antenna gain                                0 dBi
  Loss at the reference distance (P_L0)       5 dB
  Path-loss exponent (α)                      4.4
  Shadowing factor (σ)                        9.5
  Obstacles factor (γ)                        30
  Ground noise                                -95 dBm
  Capacity of P2P links                       5 Mbps

Machine Learning (ML)
  Learning algorithm                          Neural Network
  Number of hidden layers                     2
  Activation function                         ReLU
  Optimizer                                   SGD
  Loss function                               Categorical cross-entropy
  Learning rate (local/global)                0.01/1
  Number of epochs                            5
  Batch size                                  20
  CPU cycles to process a data point          10^-5
  Clients' clock speed                        1 GHz
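When scripting experiments around the simulators, it can be convenient to keep these values in a single configuration structure. The grouping below simply mirrors the table; the dictionary layout and key names are an illustrative choice, not something defined by the repository.

# Simulation parameters from the table above, grouped for scripting convenience.
BLOCKCHAIN = {
    "num_miners": 19,
    "transaction_size_kbits": 5,
    "block_header_size_kbits": 20,
    "max_waiting_time_s": 1000,
    "queue_length_packets": 1000,
}

COMMUNICATION = {
    "min_max_distance_client_bs_m": (0, 4.15),
    "bandwidth_khz": 180,
    "carrier_frequency_ghz": 2,
    "antenna_gain_dbi": 0,
    "loss_at_reference_distance_db": 5,   # P_L0
    "path_loss_exponent": 4.4,            # alpha
    "shadowing_factor": 9.5,              # sigma
    "obstacles_factor": 30,               # gamma
    "ground_noise_dbm": -95,
    "p2p_link_capacity_mbps": 5,
}

ML = {
    "learning_algorithm": "neural_network",
    "num_hidden_layers": 2,
    "activation": "relu",
    "optimizer": "sgd",
    "loss": "categorical_crossentropy",
    "learning_rate_local": 0.01,
    "learning_rate_global": 1,
    "epochs": 5,
    "batch_size": 20,
    "cpu_cycles_per_data_point": 1e-5,    # as listed in the table
    "client_clock_speed_ghz": 1,
}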

Simulation Results

In what follows, we summarize the results included in the manuscript. First, we refer to the Blockchain queuing delay analysis, where we assess the sensitivity of the Blockchain queuing delay to various parameters, including the block size, the mining rate, the traffic intensity, and the miners' communication capacity.

Next, we provide a broader view of the Blockchain transaction confirmation latency by including delays other than the queuing delay, such as transaction upload, block generation, and block propagation.

Finally, we present the results obtained for the evaluation of s-FLchain and a-FLchain in terms of learning accuracy and learning completion time.

References

[1] Wilhelmi, F., & Giupponi, L. (2021). Discrete-Time Analysis of Wireless Blockchain Networks. arXiv preprint arXiv:2104.05586.

Contribute

If you want to contribute, please contact [email protected].

Owner
Francesc Wilhelmi
PhD Student at the Wireless Networking Research Group (Universitat Pompeu Fabra)