Knowledge Agora



Scientific Article details

Title Federated Adversarial Training Strategies for Achieving Privacy and Security in Sustainable Smart City Applications
ID_Doc 38638
Authors Utomo, S; Rouniyar, A; Hsu, HC; Hsiung, PA
Year 2023
Published Future Internet, 15(11), 371
DOI 10.3390/fi15110371
Abstract Smart city applications that request sensitive user information necessitate a comprehensive data privacy solution. Federated learning (FL), also known as privacy by design, is a new paradigm in machine learning (ML). However, FL models are susceptible to adversarial attacks, similar to other AI models. In this paper, we propose federated adversarial training (FAT) strategies to generate robust global models that are resistant to adversarial attacks. We apply two adversarial attack methods, projected gradient descent (PGD) and the fast gradient sign method (FGSM), to our air pollution dataset to generate adversarial samples. We then evaluate the effectiveness of our FAT strategies in defending against these attacks. Our experiments show that FGSM-based adversarial attacks have a negligible impact on the accuracy of global models, while PGD-based attacks are more effective. However, we also show that our FAT strategies can make global models robust enough to withstand even PGD-based attacks. For example, the accuracy of our FAT-PGD and FL-mixed-PGD models is 81.13% and 82.60%, respectively, compared to 91.34% for the baseline FL model. This represents a reduction in accuracy of roughly 10 percentage points, which could potentially be mitigated by using a more complex and larger model. Our results demonstrate that FAT can enhance the security and privacy of sustainable smart city applications. We also show that it is possible to train robust global models from modest datasets per client, which challenges the conventional wisdom that adversarial training requires massive datasets.
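
The abstract describes crafting adversarial samples with FGSM and PGD and using them for adversarial training on each client before federated aggregation. The following is a minimal, illustrative PyTorch sketch of those two attacks and a client-side adversarial-training pass; the model interface, the epsilon and step-size values, and the [0, 1] input range are assumptions made for illustration, not the paper's actual configuration or code.

# Illustrative sketch (not the authors' implementation) of the FGSM and PGD
# attacks named in the abstract and a local adversarial-training pass.
# Epsilon, alpha, step count, and the [0, 1] input clamp are assumed values.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step fast gradient sign method perturbation."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the loss gradient's sign, clamp to valid range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Iterative projected gradient descent attack within an L-infinity ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back onto the epsilon ball around the clean input.
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()

def local_adversarial_update(model, loader, optimizer, attack=pgd_attack):
    """One client's adversarial-training pass before sharing weights."""
    model.train()
    for x, y in loader:
        x_adv = attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
    return model.state_dict()  # sent to the server for aggregation

In a federated adversarial training round of this kind, the server would aggregate the returned client weights (for example with federated averaging) to form the robust global model; the exact aggregation and mixing strategies evaluated in the paper are described in the article itself.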
Author Keywords sustainable smart cities; federated learning; adversarial attack; privacy protection; robust model
Document Type Other
Open Access Open Access
Source Emerging Sources Citation Index (ESCI)
EID WOS:001119994200001
WoS Category Computer Science, Information Systems
Research Area Computer Science
PDF https://www.mdpi.com/1999-5903/15/11/371/pdf?version=1700474587