TY - GEN
T1 - Exploration and Exploitation in Federated Learning to Exclude Clients with Poisoned Data
AU - Tabatabai, Shadha
AU - Mohammed, Ihab
AU - Qolomany, Basheer
AU - Albaseer, Abdullatif
AU - Ahmad, Kashif
AU - Abdallah, Mohamed
AU - Al-Fuqaha, Ala
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Federated Learning (FL) is an active research topic that applies Machine Learning (ML) in a distributed manner without directly accessing clients' private data. However, FL faces many challenges, including the difficulty of achieving high accuracy, high communication cost between clients and the server, and security attacks related to adversarial ML. To tackle these three challenges, we propose an FL algorithm inspired by evolutionary techniques. The proposed algorithm randomly groups clients into many clusters, each with a randomly selected model, to explore the performance of different models. The clusters are then trained in an iterative process in which the worst-performing cluster is removed in each iteration until only one cluster remains. In each iteration, some clients are expelled from their clusters, either for using poisoned data or for low performance. The surviving clients are exploited in the next iteration. The remaining cluster with its surviving clients is then used to train the best FL model (i.e., the remaining FL model). Communication cost is reduced because fewer clients are used in the final training of the FL model. To evaluate the performance of the proposed algorithm, we conduct a number of experiments using the FEMNIST dataset and compare the results against a random FL algorithm. The experimental results show that the proposed algorithm outperforms the baseline in terms of accuracy, communication cost, and security.
AB - Federated Learning (FL) is an active research topic that applies Machine Learning (ML) in a distributed manner without directly accessing clients' private data. However, FL faces many challenges, including the difficulty of achieving high accuracy, high communication cost between clients and the server, and security attacks related to adversarial ML. To tackle these three challenges, we propose an FL algorithm inspired by evolutionary techniques. The proposed algorithm randomly groups clients into many clusters, each with a randomly selected model, to explore the performance of different models. The clusters are then trained in an iterative process in which the worst-performing cluster is removed in each iteration until only one cluster remains. In each iteration, some clients are expelled from their clusters, either for using poisoned data or for low performance. The surviving clients are exploited in the next iteration. The remaining cluster with its surviving clients is then used to train the best FL model (i.e., the remaining FL model). Communication cost is reduced because fewer clients are used in the final training of the FL model. To evaluate the performance of the proposed algorithm, we conduct a number of experiments using the FEMNIST dataset and compare the results against a random FL algorithm. The experimental results show that the proposed algorithm outperforms the baseline in terms of accuracy, communication cost, and security.
KW - CNNs
KW - Deep Learning
KW - Distributed ML
KW - Edge Computing
KW - Federated Learning
KW - Internet of Things
KW - Security
UR - http://www.scopus.com/inward/record.url?scp=85135285962&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85135285962&partnerID=8YFLogxK
U2 - 10.1109/IWCMC55113.2022.9825004
DO - 10.1109/IWCMC55113.2022.9825004
M3 - Conference contribution
AN - SCOPUS:85135285962
T3 - 2022 International Wireless Communications and Mobile Computing, IWCMC 2022
SP - 407
EP - 412
BT - 2022 International Wireless Communications and Mobile Computing, IWCMC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE International Wireless Communications and Mobile Computing, IWCMC 2022
Y2 - 30 May 2022 through 3 June 2022
ER -
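
Note: The sketch below is a minimal illustration, based only on the procedure described in the abstract above, of the exploration/exploitation idea: clients are randomly grouped into clusters (one candidate model per cluster), the worst-performing cluster is dropped each iteration, and clients suspected of poisoned or low-quality data are expelled along the way. It is not the authors' implementation; all function names, data structures, and the expulsion threshold are assumptions, and the helpers are placeholder stubs.

import random

def federated_round(model, clients):
    """Placeholder (assumed helper) for one round of federated training on `clients`."""
    pass  # real aggregation logic omitted in this sketch

def client_accuracy(model, client):
    """Placeholder (assumed helper) for a client's local validation accuracy."""
    return random.random()

def cluster_accuracy(model):
    """Placeholder (assumed helper) for a cluster model's held-out validation accuracy."""
    return random.random()

def evolutionary_fl(clients, candidate_models, rounds=5, expel_threshold=0.5):
    # Exploration: randomly partition clients, one cluster per candidate model.
    random.shuffle(clients)
    clusters = [{"model": m, "clients": clients[i::len(candidate_models)]}
                for i, m in enumerate(candidate_models)]

    # Exploitation: train each cluster, expel suspect clients, and eliminate the
    # worst-performing cluster in each iteration until one cluster remains.
    while len(clusters) > 1:
        for cl in clusters:
            for _ in range(rounds):
                federated_round(cl["model"], cl["clients"])
            # Expel clients whose local accuracy suggests poisoned or low-quality data
            # (threshold value is a placeholder assumption).
            cl["clients"] = [c for c in cl["clients"]
                             if client_accuracy(cl["model"], c) >= expel_threshold]
        clusters.sort(key=lambda cl: cluster_accuracy(cl["model"]))
        clusters.pop(0)  # remove the worst-performing cluster

    # The surviving cluster and its surviving clients train the final FL model,
    # which keeps the communication cost of the final training low.
    best = clusters[0]
    for _ in range(rounds):
        federated_round(best["model"], best["clients"])
    return best["model"], best["clients"]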