Blockchain-Based Federated Learning: Comparison

Federated Learning (FL) is a distributed Deep Learning (DL) technique that creates a global model through the local training of multiple edge devices. It uses a central server for model communication and for the aggregation of post-trained models. The central server orchestrates the training process by sending each participating device an initial or pre-trained model for training. To achieve the learning objective, focused updates from the edge devices are sent back to the central server for aggregation. While such an architecture and its information flows can help preserve the privacy of participating devices' data, the strong dependence on the central server is a significant drawback of this framework. A central server is a potential single point of failure, and a malicious server may be able to reconstruct the original data, which could undermine trust, transparency, fairness, privacy, and security. Decentralizing the FL process addresses these issues: integrating a decentralized protocol such as Blockchain technology into FL techniques removes the dependence on a central coordinating server and ensures secure aggregation.

  • artificial intelligence
  • deep learning
  • federated learning
  • blockchain
  • secure aggregation

1. Introduction

The introduction of the Internet of Things (IoT) has resulted in massive growth in the number of intelligent devices. With strong hardware and dedicated sensors, these devices can collect and process data at high speed. Artificial Intelligence (AI) and Machine Learning (ML) flourish on data, and these data are generated by billions of IoT devices and smartphones. By generating such large amounts of data, the IoT has effectively enhanced the training of Deep Learning (DL) models. However, IoT devices cannot independently execute DL algorithms because of their resource-constrained nature. Traditionally, a DL approach entails collecting data from various sources and storing them in a centralized location; these stored data are then used to train the DL model. However, privacy legislation such as the European Commission's General Data Protection Regulation (GDPR) and the U.S. Consumer Privacy Bill of Rights means that, in certain cases, such centralized data collection is not feasible. To address this issue, Federated Learning (FL) was introduced. FL is a distributed DL technique that creates a global model through the local training of multiple decentralized edge devices. It enables distributed ML to be effectively accomplished between various edge devices or participants. It also promotes the exchange of big data and enhances the privacy preservation of users' data within the confines of the law [2][3].
The FL algorithm permits decentralized training on local data, but the central server handles model aggregation and process planning. In traditional FL, the central server sends each participating device/client an initial or pre-trained model for training. Using its own local dataset, each participating device trains the model locally and sends it back to the central server for aggregation. The server aggregates the returned trained models to produce an updated global model, which is sent back to the participating devices for another round of local training [4]. This client–server interaction [5] continues until model convergence is achieved or a specific number of iterations (rounds) is reached. However, this centralized approach to model aggregation and process planning in traditional FL makes the central server a single point of failure (SPOF) [6]. The SPOF threat on the server could stem from unforeseen external attacks, purposeful unfair aggregation, unexpected network failures, etc. The strong dependence on the central server is a significant drawback of this technique: if the server develops a problem or fails, the training process stops, and, as mentioned earlier, the resource-constrained end devices cannot independently execute the aggregation process [7]. Several risks and issues arise in such a centralized model: (1) Communication failure: To collect model updates and distribute the updated model, the central server depends on communication with end devices. A communication failure can interrupt the training process and delay model updates. (2) Scalability and overload issues: The central server might face scalability issues in a large network with many end devices. If it cannot effectively handle the model updates and requests from end devices, it may become overloaded and slow down or crash, disrupting training.
(3) Security breach: A security breach on the central server could allow malicious actors to gain unauthorized access to sensitive data or model updates, leading to privacy issues or tampering with the model updates. (4) Server downtime: The central server may experience hardware failures or software issues that result in downtime, making it unavailable to end devices. During this period, model updates cannot be aggregated, and the FL process stops. (5) Aggregation bias: To form an updated global model, the central server aggregates updates from various end devices. If the aggregation is biased, it could favor certain end devices over others, leading to a skewed model.
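The client–server training loop described above can be sketched in a few lines. The following illustrative Python example (the toy linear model, learning rate, and all names are invented for the sketch) simulates several clients and a server performing FedAvg-style weighted averaging:

```python
# Illustrative sketch of the traditional FL loop: the server broadcasts a
# global model, each client trains locally, and the server averages the
# returned models weighted by local dataset size.
import numpy as np

def local_train(global_w, data, lr=0.1, epochs=5):
    """Client step: fit a toy linear least-squares model on local data."""
    w = global_w.copy()
    X, y = data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fedavg(client_models, sizes):
    """Server step: average client models weighted by local dataset size."""
    total = sum(sizes)
    return sum(n / total * w for w, n in zip(client_models, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                             # three simulated clients
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

global_w = np.zeros(2)
for _round in range(20):                       # communication rounds
    local_models = [local_train(global_w, d) for d in clients]
    global_w = fedavg(local_models, [len(d[1]) for d in clients])

print(np.round(global_w, 2))                   # close to true_w
```

Note that the aggregation step (`fedavg`) is exactly the part that depends on the central server, and hence the part exposed to the SPOF and bias risks listed above.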
Furthermore, privacy leakage in FL could put updates from the end devices at risk due to fairness and trust issues around the central server, for the following reasons: (1) Central server integrity: The central server orchestrates the training and the aggregation of model updates from end devices. If the server is compromised, it could alter the model updates, resulting in corrupted or poisoned models being distributed to end devices. (2) Model poisoning: Without thorough validation, the central server may aggregate model updates from a malicious participant in the training process. The malicious participant may attempt to poison the global model by intentionally sending updates that degrade model performance. (3) Data bias: Data may not be evenly distributed across end devices, resulting in bias or data imbalance. This imbalanced distribution could produce less accurate models and be unfair to a subset of the end devices. (4) Data privacy and security: Although FL aims to preserve the privacy of user data by not sharing raw data with the central server, there is still a risk of data exposure during model updates. The gradients sent to the server may accidentally reveal sensitive information about the local data. Also, a malicious central server might compromise the updates from the end devices or extract sensitive insights from them, because its unscrutinized, constant, and direct communication with the end devices can enable it to reconstruct the original data. Recent works have shown that a malicious server can use gradient information to infer sensitive content about the clients' training data. Through a Generative Adversarial Network (GAN), the distribution of the training data can be recovered by the malicious server [8]. Also, attacks on the server can alter the global model [9].
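The gradient-leakage risk in point (4) can be made concrete with a toy example (an illustrative sketch, not the GAN-based attack of Ref. [8]): for a linear layer trained on a single sample, the private input can be read directly off the gradients a client would share, since dL/dW = (dL/dy)·xᵀ and dL/db = dL/dy.

```python
# Toy demonstration that shared gradients can leak private inputs:
# for one sample through a linear layer, x_j = (dL/dW)[i, j] / (dL/db)[i].
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)            # client's private input
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
target = rng.normal(size=3)

# Forward pass and MSE loss gradients (what the client would share).
y = W @ x + b
delta = 2 * (y - target)          # dL/dy
grad_W = np.outer(delta, x)       # dL/dW = delta x^T
grad_b = delta                    # dL/db = delta

# A curious server recovers x from the shared gradients alone.
recovered = grad_W[0] / grad_b[0]
print(np.allclose(recovered, x))  # True
```

Batched training and deeper models make recovery harder but, as the cited works show, not impossible.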
Furthermore, attacks on the end devices could manipulate local models, which can result in errors in the global model generated from such altered local models. Likewise, the integrity of the generated global model should be verified by the edge devices before use. FL has been integrated with Blockchain technology to ensure transparency and enhance its privacy preservation, security, and performance [10][11].
To address the SPOF threat and the concerns around privacy, trust, fairness, transparency, and security, Blockchain is integrated into the FL methodology to mitigate the vulnerabilities of FL's centralized approach to model aggregation and process planning. Blockchain serves as a reliable orchestrating memory that eliminates the need for a central coordinating unit and provides a secured, certified, and validated exchange of information. The three fundamental security considerations identified in Ref. [12] are confidentiality, integrity, and availability. As identified in Refs. [13][14], FL suffers from insufficient incentives, poisoning attacks, inadequate privacy preservation, etc.
In Blockchain, transactions are immutable and timestamped. As a distributed ledger, Blockchain can act as an append-only database that offers data integrity. It can also be deployed as a hybrid Blockchain that guarantees data confidentiality to only authenticated and permitted users. Blockchain allows the storage and exchange of data in a decentralized manner using digital blocks, increasing the fault tolerance of FL [15]. These digital blocks are chained together using cryptographic hashes to form a distributed ledger that is shared among all devices in a federated network. This ensures that data are immutable, visible, traceable, transparent, and non-repudiable. These unique characteristics of Blockchain make it an ideal technology to combine with FL to safeguard the privacy and security of aggregated data.
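A minimal sketch of such a hash-chained, append-only ledger, using Python's hashlib (real blockchains add consensus, digital signatures, and peer-to-peer replication on top of this structure):

```python
# Each block stores a payload plus the hash of the previous block; altering
# any block breaks the chain of hashes, which is what makes the ledger
# tamper-evident.
import hashlib
import json

def block_hash(payload, prev_hash):
    body = json.dumps({"payload": payload, "prev_hash": prev_hash},
                      sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def make_block(payload, prev_hash):
    return {"payload": payload, "prev_hash": prev_hash,
            "hash": block_hash(payload, prev_hash)}

def verify_chain(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["payload"], block["prev_hash"]):
            return False                     # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                     # link to predecessor is broken
    return True

chain = [make_block("genesis", "0" * 64)]
for update in ["model-update-1", "model-update-2"]:
    chain.append(make_block(update, chain[-1]["hash"]))

print(verify_chain(chain))            # True
chain[1]["payload"] = "tampered"      # any edit breaks the chain
print(verify_chain(chain))            # False
```

In a Blockchain-based FL setting, the payloads would be (hashes of) model updates, so any post hoc tampering with recorded updates is detectable by every participant.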

2. Secure Aggregation in FL

To guarantee privacy and security in FL, the following secure aggregation mechanisms [16][17] have been proposed. Fereidooni et al. [16] proposed a secure aggregation scheme for private Federated Learning. This approach aims to impede inference attacks on FL by prohibiting access to, and tampering with, trained model updates. They utilized a Secure Multi-Party Computation (SMC) encryption technique to prevent the aggregator from accessing the model updates used for training the Machine Learning model. Similarly, Wu et al. [17] proposed a secure aggregation mechanism for model updates in FL to prevent inference and inversion attacks that can obtain sensitive information from local model updates. Their approach utilized matrix transformation to protect each client's model updates, preventing the attacker from gaining sensitive information by encrypting only a small part of each model update, thereby avoiding heavy encryption that could result in low accuracy. Their aggregation mechanism functions with an acceptable overhead. However, both approaches suffer from the SPOF threat of the central server that orchestrates the training process [6].
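The intuition behind masking-based secure aggregation can be illustrated with pairwise additive masks that cancel in the sum, so the server only ever sees the aggregate (a simplified sketch, not the exact protocols of Refs. [16][17], which additionally handle key agreement and encryption):

```python
# Pairwise additive masking: client i adds mask m_ij and client j subtracts
# it, so each masked update looks random to the server, yet the masks cancel
# exactly when all masked updates are summed.
import numpy as np

rng = np.random.default_rng(2)
n_clients, dim = 4, 5
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# One shared random mask per unordered client pair (i < j).
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    m = updates[i].copy()
    for j in range(n_clients):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    masked.append(m)               # what the server actually receives

aggregate = sum(masked)            # pairwise masks cancel in the sum
print(np.allclose(aggregate, sum(updates)))  # True
```

Production protocols must additionally handle clients that drop out mid-round, since a missing client leaves its pairwise masks uncancelled.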
Huang et al. [8] proposed a secure aggregation mechanism for Federated Learning that utilizes random masking codes to ensure the confidentiality of local gradients. Their mechanism ensures both the confidentiality of local gradients and the verifiability of aggregated gradients. However, it is not communication- and bandwidth-efficient when many clients are involved in the training process, and it suffers from the SPOF threat in the aggregator and verification servers. To protect against Byzantine adversaries that could compromise the performance and convergence of the global model, Zhao et al. [18] proposed a secure aggregation mechanism for FL. This mechanism uses Intel SGX primitives to ensure privacy preservation of the local models by providing a recovery key for the encrypted models, ensuring that sensitive information is not revealed to the aggregation server. However, it still suffers from the SPOF threat of the aggregation server, whose failure could halt the training process.
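As a simple illustration of Byzantine-robust aggregation (not the SGX-based scheme of Ref. [18], but a standard robust statistic used in this literature), a coordinate-wise median tolerates a minority of arbitrarily corrupted updates where plain averaging does not:

```python
# Coordinate-wise median vs. mean under a single Byzantine (poisoned) update:
# the mean is dragged arbitrarily far, while the median stays with the
# honest majority.
import numpy as np

honest = [np.array([1.0, 2.0]) + 0.1 * k for k in range(4)]
byzantine = [np.array([1e6, -1e6])]           # arbitrarily corrupted update
updates = honest + byzantine

mean_agg = np.mean(updates, axis=0)           # dominated by the 1e6 entries
median_agg = np.median(updates, axis=0)       # stays near the honest values

print(mean_agg)
print(median_agg)                             # [1.2 2.1]
```

Robust statistics of this kind are often combined with the secure aggregation and Blockchain mechanisms discussed here, since robustness and confidentiality address different threats.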

3. Blockchain-Based Federated Learning

Traditional FL mechanisms depend on the central server for coordination and orchestration. This dependence may result in an SPOF, trust issues, and unwanted server behavior. Blockchain technology has emerged to ensure effective decentralization, trust, transparency, and reliability, and many researchers have implemented it to eliminate the threat of an SPOF in traditional FL [19][20].
To guarantee data authenticity and privacy protection, the authors of Ref. [19] implemented an FL framework using Blockchain in self-driving cars. The authors of Ref. [21] implemented a private Blockchain FL using an interstellar file system to minimize the high storage costs of Blockchain as well as inference and poisoning attacks in FL. In Ref. [22], a private Blockchain was implemented for secure model aggregation in FL using a consensus process for traffic prediction. In Ref. [20], the author proposed a Blockchain-enabled FL where the security and privacy of users' information were protected by encrypting and encoding it in the cloud. All the research works mentioned above make use of Blockchain technology for the aggregation of trained models, which incurs huge bandwidth and computational complexity. Most of the contributions are based on a private Blockchain, where the entire process is not decentralized, which could result in trust issues.
For the local evaluation and global aggregation of parameters, Sun et al. [23] proposed the use of Blockchain in FL to lessen the effect of end-point adversarial training data. In this work, the method of selecting a committee member is not feasible and was not fully analyzed. Furthermore, as more users participate in the network, the method may experience a decrease in classification accuracy. To facilitate model updates and guarantee secure aggregation of the global model, Mallah et al. [24] proposed a Blockchain-enabled Federated Learning scheme that selects only reliable IoT devices for global aggregation. Their approach ensures the aggregation of the global model through optimized behavior monitoring of the devices, improving the convergence time of the FL process while preserving network performance. However, there is a trade-off in time and bandwidth efficiency, and the scalability of this technique under variable network topologies is not guaranteed. To guarantee a secure aggregation mechanism that ensures trust, security, and integrity of the global model, the following approaches [25][26] have been proposed.
Kalapaaking et al. [25] proposed a Blockchain-based FL secure aggregation mechanism to guarantee the security and integrity of the global model. Their technique ensures trusted aggregation of the local models into a global model. However, they failed to consider how to handle stragglers and dropouts in the Industrial IoT (IIoT): they assumed that all IIoT devices will successfully return their trained models, which is unrealistic in practice. Chen et al. [26] proposed a Blockchain-based FL scheme for the secure aggregation and efficient sharing of medical data. Their technique enhances the sharing of medical data in a privacy-preserving manner. However, the contribution-weighted aggregation mechanism used in Ref. [26] incurs huge bandwidth and computational complexity, which makes the technique infeasible in a resource-constrained setting. To minimize the impact of attacks from malicious clients or a poisonous server and to preserve privacy in FL, the approaches in Refs. [27][1] have been proposed.
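Contribution-weighted aggregation of the kind used in Ref. [26] can be sketched as a weighted average, with weights proportional to a per-client contribution score (the scores below are invented for illustration; in practice they might come from, e.g., validation accuracy):

```python
# Contribution-weighted aggregation sketch: client updates are averaged with
# weights proportional to each client's contribution score, so higher-scoring
# clients influence the global update more.
import numpy as np

updates = [np.array([1.0, 1.0]), np.array([2.0, 0.0]), np.array([0.0, 2.0])]
scores = np.array([0.9, 0.6, 0.3])         # illustrative contribution scores

weights = scores / scores.sum()            # normalize to sum to 1
global_update = sum(w * u for w, u in zip(weights, updates))
print(np.round(global_update, 2))          # [1.17 0.83]
```

The bandwidth and computation cost noted above comes from evaluating the contribution scores themselves, not from this final weighted sum, which is cheap.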
Li et al. [27] proposed a Blockchain-based decentralized FL scheme with committee consensus to solve the issues of SPOF, privacy, and security. Their technique eliminates the SPOF threat, prevents malicious attacks, prevents models from being exposed to poisoning or unauthorized devices, and reduces the burden of consensus computing. However, validation consumption increases, and the consensus-committee selection could introduce security issues if not performed properly. Miao et al. [1] proposed an FL privacy-preserving scheme based on a Blockchain network. Their approach mitigates poisoning attacks from malicious clients and ensures a transparent process using the Blockchain network. However, they did not provide mechanisms for dealing with stragglers and dropouts among the devices.
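Committee-based validation of the kind described for these schemes can be sketched as follows: each committee member checks a candidate update against its own held-out data, and only majority-approved updates enter the aggregate. The toy scoring rule and acceptance threshold below are invented for illustration:

```python
# Committee-consensus validation sketch: an update is accepted only if a
# majority of committee members find it performs acceptably on their own
# held-out validation data (a toy linear model and loss threshold are used).
import numpy as np

def validate(update, val_data, threshold=1.0):
    """One committee member's vote: is the loss on its held-out data small?"""
    X, y = val_data
    return np.mean((X @ update - y) ** 2) < threshold

rng = np.random.default_rng(3)
true_w = np.array([1.0, -1.0])
committee = []
for _ in range(3):                         # three committee members
    X = rng.normal(size=(20, 2))
    committee.append((X, X @ true_w))      # each holds its own val data

def accepted(update):
    votes = [validate(update, d) for d in committee]
    return sum(votes) > len(votes) / 2     # simple majority rule

good_update = true_w + 0.05 * rng.normal(size=2)
bad_update = np.array([10.0, 10.0])        # poisoned candidate update

print(accepted(good_update))  # True
print(accepted(bad_update))   # False
```

As noted above, the security of such schemes hinges on how committee members are selected: a committee dominated by colluding malicious members could approve poisoned updates.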
 

References

  1. Hussain, G.K.J.; Manoj, G. Federated Learning: A Survey of a New Approach to Machine Learning. In Proceedings of the 2022 1st International Conference on Electrical, Electronics, Information and Communication Technologies, ICEEICT 2022, Trichy, India, 16–18 February 2022; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2022.
  2. Abdulrahman, S.; Tout, H.; Ould-Slimane, H.; Mourad, A.; Talhi, C.; Guizani, M. A survey on federated learning: The journey from centralized to distributed on-site learning and beyond. IEEE Internet Things J. 2021, 8, 5476–5497.
  3. Wang, S.; Sahay, R.; Brinton, C.G. How Potent Are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers? 2023. Available online: http://arxiv.org/abs/2301.08866 (accessed on 22 July 2023).
  4. Rahman, K.M.J.; Ahmed, F.; Akhter, N.; Hasan, M.; Amin, R.; Aziz, K.E.; Islam, A.K.M.M.; Mukta, S.H. Challenges, Applications and Design Aspects of Federated Learning: A Survey. IEEE Access 2021, 9, 124682–124700.
  5. Chen, H.; Asif, S.A.; Park, J.; Shen, C.-C.; Bennis, M. Robust Blockchained Federated Learning with Model Validation and Proof-of-Stake Inspired Consensus. 2021. Available online: www.aaai.org (accessed on 22 July 2023).
  6. Bhatia, L.; Samet, S. Decentralized Federated Learning: A Comprehensive Survey and a New Blockchain-based Data Evaluation Scheme. In Proceedings of the 2022 4th International Conference on Blockchain Computing and Applications, BCCA 2022, San Antonio, TX, USA, 5–7 September 2022; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2022; pp. 289–296.
  7. Huang, C.; Yao, Y.; Zhang, X.; Teng, D.; Wang, Y.; Zhou, L. Robust Secure Aggregation with Lightweight Verification for Federated Learning. In Proceedings of the 2022 IEEE 21st International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2022, Wuhan, China, 9–11 December 2022; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2022; pp. 582–589.
  8. Liu, P.; Xu, X.; Wang, W. Threats, attacks and defenses to federated learning: Issues, taxonomy and perspectives. Cybersecurity 2022, 5, 4.
  9. Li, D.; Han, D.; Weng, T.-H.; Zheng, Z.; Li, H.; Liu, H.; Castiglione, A.; Li, K.-C. Blockchain for federated learning toward secure distributed machine learning systems: A systemic survey. Soft Comput. 2022, 26, 4423–4440.
  10. Salim, S.; Turnbull, B.; Moustafa, N. A Blockchain-Enabled Explainable Federated Learning for Securing Internet-of-Things-Based Social Media 3.0 Networks. IEEE Trans. Comput. Soc. Syst. 2021, 1–17.
  11. Manvith, V.S.; Saraswathi, R.V.; Vasavi, R. A performance comparison of machine learning approaches on intrusion detection dataset. In Proceedings of the 3rd International Conference on Intelligent Communication Technologies and Virtual Mobile Networks, ICICV 2021, Tirunelveli, India, 4–6 February 2021; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2021; pp. 782–788.
  12. Qu, Y.; Pokhrel, S.R.; Garg, S.; Gao, L.; Xiang, Y. A Blockchained Federated Learning Framework for Cognitive Computing in Industry 4.0 Networks. IEEE Trans. Ind. Inform. 2021, 17, 2964–2973.
  13. Passerat-Palmbach, J.; Farnan, T.; McCoy, M.; Harris, J.D.; Manion, S.T.; Flannery, H.L.; Gleim, B. Blockchain-orchestrated machine learning for privacy preserving federated learning in electronic health data. In Proceedings of the 2020 IEEE International Conference on Blockchain, Blockchain 2020, Rhodes, Greece, 2–6 November 2020; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2020; pp. 550–555.
  14. Ullah, I.; Deng, X.; Pei, X.; Jiang, P.; Mushtaq, H. A verifiable and privacy-preserving blockchain-based federated learning approach. Peer Peer Netw. Appl. 2023, 16, 2256–2270.
  15. Fereidooni, H.; Marchal, S.; Miettinen, M.; Mirhoseini, A.; Mollering, H.; Nguyen, T.D.; Rieger, P.; Sadeghi, A.-R.; Schneider, T.; Yalame, H.; et al. SAFELearn: Secure Aggregation for private FEderated Learning. In Proceedings of the 2021 IEEE Symposium on Security and Privacy Workshops, SPW 2021, San Francisco, CA, USA, 27 May 2021; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2021; pp. 56–62.
  16. Wu, D.; Pan, M.; Xu, Z.; Zhang, Y.; Han, Z. Towards Efficient Secure Aggregation for Model Update in Federated Learning. In Proceedings of the 2020 IEEE Global Communications Conference, GLOBECOM 2020—Proceedings, Taipei, Taiwan, 7–11 December 2020; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2020.
  17. Zhao, L.; Jiang, J.; Feng, B.; Wang, Q.; Shen, C.; Li, Q. SEAR: Secure and Efficient Aggregation for Byzantine-Robust Federated Learning. IEEE Trans. Dependable Secur. Comput. 2022, 19, 3329–3342.
  18. Pokhrel, S.R.; Choi, J. Federated Learning with Blockchain for Autonomous Vehicles: Analysis and Design Challenges. IEEE Trans. Commun. 2020, 68, 4734–4746.
  19. Guo, X. Implementation of a Blockchain-enabled Federated Learning Model that Supports Security and Privacy Comparisons. In Proceedings of the 2022 IEEE 5th International Conference on Information Systems and Computer Aided Education, ICISCAE 2022, Dalian, China, 23–25 September 2022; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2022; pp. 243–247.
  20. Zhang, P.; Liu, G.; Chen, Z.; Guo, J.; Liu, P. A study of a federated learning framework based on the interstellar file system and blockchain: Private Blockchain Federated Learning. In Proceedings of the 2022 3rd International Conference on Computer Vision, Image and Deep Learning and International Conference on Computer Engineering and Applications, CVIDL and ICCEA 2022, Changchun, China, 20–22 May 2022; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2022; pp. 267–273.
  21. Zhang, Q.; Palacharla, P.; Sekiya, M.; Suga, J.; Katagiri, T. Blockchain-based Secure Aggregation for Federated Learning with a Traffic Prediction Use Case. In Proceedings of the 2021 IEEE Conference on Network Softwarization: Accelerating Network Softwarization in the Cognitive Age, NetSoft 2021, Tokyo, Japan, 28 June–2 July 2021; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2021; pp. 372–374.
  22. Sun, Y.; Esaki, H.; Ochiai, H. Blockchain-Based Federated Learning against End-Point Adversarial Data Corruption. In Proceedings of the 19th IEEE International Conference on Machine Learning and Applications, ICMLA 2020, Miami, FL, USA, 14–17 December 2020; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2020; pp. 729–734.
  23. Al Mallah, R.; López, D.; Halabi, T. Blockchain-enabled Efficient and Secure Federated Learning in IoT and Edge Computing Networks. In Proceedings of the 2023 International Conference on Computing, Networking and Communications, ICNC 2023, Honolulu, HI, USA, 20–22 February 2023; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2023; pp. 511–515.
  24. Kalapaaking, A.P.; Khalil, I.; Rahman, M.S.; Atiquzzaman, M.; Yi, X.; Almashor, M. Blockchain-based Federated Learning with Secure Aggregation in Trusted Execution Environment for Internet-of-Things. IEEE Trans. Ind. Inform. 2023, 19, 1703–1714.
  25. Chen, Y.; Lin, F.; Chen, Z.; Tang, C.; Jia, R.; Li, M. Blockchain-based Federated Learning with Contribution-Weighted Aggregation for Medical Data Modeling. In Proceedings of the 2022 IEEE 19th International Conference on Mobile Ad Hoc and Smart Systems, MASS 2022, Denver, CO, USA, 19–23 October 2022; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2022; pp. 606–612.
  26. Li, Y.; Chen, C.; Liu, N.; Huang, H.; Zheng, Z.; Yan, Q. A Blockchain-Based Decentralized Federated Learning Framework with Committee Consensus. IEEE Netw. 2021, 35, 234–241.
  27. Miao, Y.; Liu, Z.; Li, H.; Choo, K.K.R.; Deng, R.H. Privacy-Preserving Byzantine-Robust Federated Learning via Blockchain Systems. IEEE Trans. Inf. Forensics Secur. 2022, 17, 2848–2861.