LDP-Fed+: A robust and privacy-preserving federated learning based classification framework enabled by local differential privacy

Yufeng Wang*, Xu Zhang, Jianhua Ma, Qun Jin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


As a distributed learning framework, Federated Learning (FL) allows different local learners/participants to collaboratively train a joint model without exposing their own local data, offering a feasible way to legally break down data islands. However, data privacy and model security remain two key challenges. The former means that, even though only model parameters are shared, various methods can be used to deduce the original data samples from a trained FL model, thereby leaking the data. The latter means that unreliable or malicious participants may degrade or destroy the joint FL model by uploading wrong local model parameters. Therefore, this paper proposes a novel distributed FL training framework, named LDP-Fed+, which accounts for both differential privacy protection and model security defense. Specifically, first, a local perturbation module is added on the local learner side, which perturbs each learner's original data through feature extraction, binary encoding and decoding, and randomized response. The local neural network model is then trained on the perturbed data, so that the resulting network parameters satisfy local differential privacy and effectively resist model inversion attacks. Second, a security defense module is added on the server side, which uses an auxiliary model and the exponential mechanism of differential privacy to select an appropriate number of perturbed local parameters for aggregation, strengthening the model's defense and countering membership inference attacks. The experimental results show that, compared with other differential-privacy-based federated learning models, LDP-Fed+ achieves stronger robustness in terms of model security and higher model-training accuracy while ensuring strict privacy protection.
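The two mechanisms named in the abstract can be illustrated with a minimal sketch. The function names, bit width, and scoring inputs below are illustrative assumptions, not the paper's implementation: per-bit randomized response over binary-encoded features (the local perturbation step) and a generic exponential-mechanism selector (the server-side choice among candidate local updates, scored here by an assumed auxiliary-model quality score).

```python
import math
import random

def encode(value, num_bits=8):
    """Binary-encode a non-negative integer feature into a fixed-width bit list."""
    return [(value >> i) & 1 for i in reversed(range(num_bits))]

def decode(bits):
    """Decode a bit list back to an integer."""
    out = 0
    for b in bits:
        out = (out << 1) | b
    return out

def randomized_response(bits, epsilon):
    """Keep each bit with probability e^eps / (e^eps + 1), flip it otherwise.
    Each bit then satisfies eps-local differential privacy."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return [b if random.random() < p_keep else 1 - b for b in bits]

def exponential_mechanism(scores, epsilon, sensitivity=1.0):
    """Sample an index with probability proportional to
    exp(eps * score / (2 * sensitivity)) -- the standard exponential
    mechanism; here `scores` would come from the auxiliary model."""
    weights = [math.exp(epsilon * s / (2.0 * sensitivity)) for s in scores]
    r = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1

# Local side: encode a feature, perturb it, decode the noisy value.
noisy = decode(randomized_response(encode(42), epsilon=1.0))

# Server side: pick one of three candidate updates by auxiliary score.
chosen = exponential_mechanism(scores=[0.2, 0.9, 0.4], epsilon=1.0)
```

With a small ε the flip probability approaches 1/2 (strong privacy, noisy data); with a large ε the encoding is nearly preserved, which is the usual privacy/utility trade-off the paper's experiments measure.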

Original language: English
Journal: Concurrency and Computation: Practice and Experience
Publication status: Accepted/In press - 2022


Keywords

  • differential privacy
  • federated learning
  • model security
  • privacy protection

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Computational Theory and Mathematics


