TY - JOUR
T1 - Protecting Intellectual Property with Reliable Availability of Learning Models in AI-based Cybersecurity Services
AU - Ren, Ge
AU - Wu, Jun
AU - Li, Gaolei
AU - Li, Shenghong
AU - Guizani, Mohsen
N1 - Publisher Copyright:
IEEE
PY - 2022
Y1 - 2022
N2 - Artificial intelligence (AI)-based cybersecurity services show significant promise in many scenarios, including malware detection and content supervision. Meanwhile, many commercial and government applications have raised the need for intellectual property protection of deep neural networks (DNNs). Existing studies on intellectual property protection (e.g., watermarking techniques) aim only at inserting secret information into DNNs, allowing producers to detect whether a given DNN infringes on their copyrights. However, because the availability of learning models is rarely protected, a pirated model can still work with high accuracy. In this paper, a novel model locking (M-LOCK) scheme for DNNs is proposed to enhance availability protection: the DNN produces poor accuracy when a specific token is absent, and maps only tokenized inputs to correct predictions. The proposed scheme performs verification during DNN inference, actively protecting the model's intellectual property at each query. Specifically, to train the token-sensitive decision boundaries of DNNs, a data poisoning-based model manipulation (DPMM) method is also proposed, which minimizes the correlation between dummy outputs and correct predictions. Extensive experiments demonstrate that the proposed scheme achieves high reliability and effectiveness across various benchmark datasets and typical model protection methods.
KW - AI-based Cybersecurity Services
KW - Biological system modeling
KW - Computational modeling
KW - Computer security
KW - Data models
KW - Data Poisoning-based Model Manipulation
KW - Intellectual Property
KW - Model Locking
KW - Predictive models
KW - Reliable Availability
KW - Watermarking
UR - http://www.scopus.com/inward/record.url?scp=85142791117&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85142791117&partnerID=8YFLogxK
U2 - 10.1109/TDSC.2022.3222972
DO - 10.1109/TDSC.2022.3222972
M3 - Article
AN - SCOPUS:85142791117
SN - 1545-5971
SP - 1
EP - 18
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
ER -