Thesis defences

PhD Oral Exam - M A Moyeen, Electrical and Computer Engineering

Securing Federated Learning: A Comprehensive Defence Against Privacy Attacks


Date & time: Wednesday, August 27, 2025, 9:30 a.m. – 12:30 p.m.

Cost: This event is free

Organization: School of Graduate Studies

Contact: Dolly Grewal

Accessible location: Yes

When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge of the thesis subject as well as the student’s own contributions to the subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.

Once accepted, the candidate presents the thesis orally. This oral exam is open to the public.

Abstract

In this information age, machine learning (ML) applications drive smart living through innovations such as personalized healthcare, intelligent transportation, and smart homes. Despite these advancements, however, businesses and industries continue to face significant challenges in safeguarding data privacy, as ML systems increasingly rely on vast amounts of user data. Federated Learning (FL) has emerged as a promising solution, enabling collaborative model training while keeping user data on local premises, without the need to share raw data. However, FL also faces significant challenges, including high communication costs, system heterogeneity, and vulnerability to various attacks. In particular, it is susceptible to poisoning attacks, in which malicious participants corrupt models or data, and to inference attacks, which exploit shared gradients to reveal sensitive information through membership inference or model inversion. Because such attacks undermine the fundamental privacy guarantees of federated systems, effective defence mechanisms are essential for fully leveraging the advantages of FL. Numerous defences, such as FoolsGold, Flod, Flad, and MADDPG, have been proposed to secure FL systems.

However, the majority of these defence mechanisms suffer from accuracy degradation, computational overhead, or inadequate attack prevention. Most client selection methods cannot reliably separate malicious clients from stragglers, and even cutting-edge approaches struggle with herding and cold-start issues. Furthermore, recent state-of-the-art techniques frequently fail to defend adequately against inference attacks. These methods typically employ Secure Multiparty Computation (SMPC), Homomorphic Encryption (HE), or Differential Privacy (DP) as defensive measures; however, SMPC and HE incur high computational complexity, while DP often degrades model accuracy.

Thus, this research addresses these limitations by proposing five defence mechanisms that ensure robust FL with protected gradients: FedChallenger, Fed-Reputed, SignDefence, Ada-Sign, and SignMPC. FedChallenger introduces a dual-layer defence that combines zero-trust challenge-response authentication at the first layer with a variant of Trimmed-Mean aggregation at the second layer, leveraging pairwise cosine similarity and the Median Absolute Deviation (MAD). Extensive evaluation on the MNIST, FMNIST, EMNIST, and CIFAR-10 datasets demonstrates a 3-10% accuracy improvement over state-of-the-art approaches, 1.1-2.2 times faster convergence, and 2-3% higher F1-scores.
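To illustrate the second layer, the following is a minimal sketch, not the thesis implementation: it scores each client update by its mean pairwise cosine similarity to the other updates, discards MAD-flagged outliers, and averages the survivors in trimmed-mean fashion. The function name robust_aggregate and the cutoff mad_k are illustrative assumptions.

```python
import numpy as np

def robust_aggregate(updates, mad_k=2.5):
    """Hypothetical sketch of a Trimmed-Mean variant using pairwise
    cosine similarity and MAD-based outlier rejection."""
    updates = np.asarray(updates, dtype=float)          # (n_clients, n_params)
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    unit = updates / np.clip(norms, 1e-12, None)        # unit vectors for cosine
    sim = unit @ unit.T                                 # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)
    scores = sim.sum(axis=1) / (len(updates) - 1)       # mean similarity to peers

    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12       # Median Absolute Deviation
    keep = np.abs(scores - med) <= mad_k * mad          # keep clients near the median
    return updates[keep].mean(axis=0)                   # aggregate the trusted subset
```

Poisoned updates tend to point away from the honest consensus, so their similarity scores fall outside the MAD band and they are excluded before averaging.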

Subsequently, the reputation-based client selection approach Fed-Reputed leverages device capability information and a modified Bellman equation within a hierarchical framework, integrated into a Deep Q-Learning Network (DQN)-based Imbalanced Classification Markov Decision Process (ICMDP) classifier for enhanced client selection. Testing on the MNIST and FMNIST datasets demonstrates 9-50% accuracy gains and 1.3-1.7 times faster convergence while effectively detecting both malicious and straggler clients.
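As a rough illustration of the Bellman-style reputation idea only (the full method's DQN-based ICMDP classifier is not reproduced here), the sketch below applies a temporal-difference update to per-client reputations and selects clients greedily, with occasional exploration to mitigate herding and cold-start effects. All names and constants are hypothetical.

```python
import random

def update_reputation(rep, reward, alpha=0.1, gamma=0.9, future=1.0):
    # Q-learning style temporal-difference step: reputation moves toward
    # (observed round reward + discounted future value) at rate alpha.
    return rep + alpha * (reward + gamma * future - rep)

def select_clients(reputations, k, explore=0.1):
    # Exploit: take the k highest-reputation clients; occasionally swap one
    # for a random low-ranked client so newcomers are not locked out.
    ranked = sorted(reputations, key=reputations.get, reverse=True)
    chosen = ranked[:k]
    if len(ranked) > k and random.random() < explore:
        chosen[-1] = random.choice(ranked[k:])
    return chosen

reps = {"c1": 0.8, "c2": 0.2, "c3": 0.5}
reps["c1"] = update_reputation(reps["c1"], reward=0.9)
print(select_clients(reps, k=2))
```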

Moreover, existing methods often suffer from the dying ReLU problem, in which neurons permanently deactivate during training. To counter this, SignDefence implements a sophisticated aggregation scheme that combines sign-direction information with LeakyReLU-based aggregation, incorporating Jaccard similarity computed over binary-encoded model weights. This technique demonstrates consistent accuracy and F1-score improvements across different attack conditions.
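A minimal sketch of this sign-based idea, assuming agreement is measured between each client's sign vector and a reference direction: Jaccard similarity over the binary sign encodings yields a score, and a LeakyReLU applied to the threshold-centred score damps dissenting clients rather than zeroing them out. Names such as sign_aggregate and the threshold tau are illustrative.

```python
import numpy as np

def jaccard_sign_similarity(u, v):
    # Jaccard similarity of the binary sign encodings
    # (1 where a coordinate is positive, 0 otherwise).
    a, b = (u > 0), (v > 0)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def leaky_relu(x, slope=0.01):
    # Keeps a small slope for negative inputs, so no client's
    # weight is hard-zeroed (the "dying ReLU" failure mode).
    return np.where(x > 0, x, slope * x)

def sign_aggregate(updates, global_dir, tau=0.5, slope=0.01):
    updates = np.asarray(updates, dtype=float)
    scores = np.array([jaccard_sign_similarity(u, global_dir) for u in updates])
    weights = leaky_relu(scores - tau, slope)     # damp, don't kill, dissenters
    weights = weights - weights.min() + 1e-12     # shift to non-negative weights
    weights /= weights.sum()
    return np.average(updates, axis=0, weights=weights)
```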

Despite its benefits, SignDefence remains vulnerable to inference attacks, and its fixed threshold limits generalization across diverse benchmark datasets. To address these limitations, a lightweight strategy, Ada-Sign, computes the threshold adaptively and incorporates DP mechanisms. This approach maintains accuracy comparable to SignDefence while providing enhanced gradient protection through adaptive DP. Extensive evaluation on the MNIST and HAR datasets reveals a 3-20% accuracy improvement for Ada-Sign over the majority of state-of-the-art techniques.
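The sketch below shows the two ingredients in combination, under two assumptions not stated in the abstract: the threshold is taken as the per-round median of sign-agreement scores, and privacy is added with the standard Gaussian mechanism (clip the aggregate, then add calibrated noise). Function and parameter names are illustrative.

```python
import numpy as np

def ada_sign_step(updates, clip=1.0, sigma=0.8, rng=None):
    """Hypothetical Ada-Sign-style round: adaptive threshold + Gaussian DP."""
    rng = rng or np.random.default_rng()
    updates = np.asarray(updates, dtype=float)
    global_dir = np.sign(updates.sum(axis=0))
    # Fraction of coordinates where each client agrees with the majority sign.
    scores = np.array([(np.sign(u) == global_dir).mean() for u in updates])
    tau = np.median(scores)                       # recomputed every round, not fixed
    kept = updates[scores >= tau]                 # at least half the clients survive

    agg = kept.mean(axis=0)
    agg *= min(1.0, clip / (np.linalg.norm(agg) + 1e-12))   # bound L2 sensitivity
    return agg + rng.normal(0.0, sigma * clip, size=agg.shape)  # Gaussian DP noise
```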

Finally, to enhance the protection of both SignDefence and Ada-Sign against inference attacks, SignMPC integrates highly configurable SMPC, HE, and DP algorithms. This combined approach ensures comprehensive communication security and gradient privacy while avoiding significant performance bottlenecks. Comprehensive evaluation on MNIST and HAR datasets demonstrates 4-17% accuracy gains for SignMPC over established approaches while maintaining computational efficiency and robust privacy guarantees.
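As a self-contained illustration of the SMPC ingredient only (the HE and DP components are omitted), the toy example below uses textbook additive secret sharing over a prime field: each client splits its fixed-point-encoded update into shares, and the server can reconstruct only the sum of all clients' values, never an individual update. The constants and values are illustrative.

```python
import secrets

PRIME = 2**61 - 1  # arithmetic field for additive secret sharing

def share(value, n):
    # Split an integer-encoded value into n additive shares; any subset
    # of fewer than n shares is uniformly random and reveals nothing.
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Secure aggregation: the server sums shares slot-by-slot, so it only
# ever learns the aggregate, not any client's individual gradient.
clients = [1234, 5678, 4321]                  # toy fixed-point gradient values
all_shares = [share(v, n=3) for v in clients]
slot_sums = [sum(col) % PRIME for col in zip(*all_shares)]
assert reconstruct(slot_sums) == sum(clients) % PRIME
```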

