Deep Neural Backdoor in Semi-Supervised Learning: Threats and Countermeasures

Zhicong Yan, Jun Wu*, Gaolei Li, Shenghong Li, Mohsen Guizani

*Corresponding author of this work

Research output: peer-reviewed

Abstract

Semi-Supervised Learning (SSL) is a powerful tool for discovering hidden knowledge in data, and a promising substitute for manual data labeling. Although the abundance of unlabeled data has spurred great enthusiasm for SSL, the untrustworthiness of that data introduces many unexplored security risks. In this paper, we first identify an insidious backdoor threat to SSL in which the unlabeled training data are poisoned by backdoor methods migrated from supervised settings. Then, to further exploit this threat, we propose a Deep Neural Backdoor (DeNeB) scheme that requires a smaller data-poisoning budget while producing a stronger backdoor effect. By poisoning only a fraction of the unlabeled training data, DeNeB achieves illicit manipulation of the trained model without modifying the training process. Finally, we propose an efficient detection-and-purification defense (DePuD) framework to thwart this scheme. In DePuD, we construct a deep detector that locates trigger patterns in the unlabeled training data, and perform secured SSL training on purified unlabeled data in which the detected trigger patterns are obfuscated. Extensive experiments on benchmark datasets demonstrate the severity of the threat posed by DeNeB and the effectiveness of DePuD. To the best of our knowledge, this is the first work to achieve a backdoor attack and its defense in semi-supervised learning.
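The abstract does not specify DeNeB's actual trigger design or poisoning procedure, but the core idea it describes, stamping a trigger pattern onto a small fraction of the unlabeled training set, can be sketched as follows. This is a minimal illustrative example, not the paper's method; all function names, the patch-style trigger, and the 5% budget are assumptions for illustration only.

```python
import numpy as np

def stamp_trigger(image, trigger, corner=(0, 0)):
    """Overwrite a small patch of `image` with `trigger` (both HxWxC uint8 arrays).
    A fixed patch in a corner is one common backdoor trigger style; the actual
    trigger used by DeNeB may differ."""
    poisoned = image.copy()
    h, w = trigger.shape[:2]
    y, x = corner
    poisoned[y:y + h, x:x + w] = trigger
    return poisoned

def poison_unlabeled_set(images, trigger, budget=0.05, seed=0):
    """Stamp the trigger onto a `budget` fraction of the unlabeled images.
    No labels are touched: in SSL the attacker only controls unlabeled data."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * budget)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned = images.copy()
    for i in idx:
        poisoned[i] = stamp_trigger(images[i], trigger)
    return poisoned, idx

# Toy example: 100 "unlabeled" 32x32 RGB images, 3x3 white trigger patch.
unlabeled = np.zeros((100, 32, 32, 3), dtype=np.uint8)
trigger = np.full((3, 3, 3), 255, dtype=np.uint8)
poisoned_set, poisoned_idx = poison_unlabeled_set(unlabeled, trigger, budget=0.05)
```

The key point the sketch captures is that the attack needs no access to labels or to the training loop: the victim trains normally on `poisoned_set`, and the trigger association is learned through the SSL algorithm's use of unlabeled data.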

Original language: English
Pages (from-to): 4827-4842
Number of pages: 16
Journal: IEEE Transactions on Information Forensics and Security
Volume: 16
DOI
Publication status: Published - 2021

ASJC Scopus subject areas

  • Safety, Risk, Reliability and Quality
  • Computer Networks and Communications

