Differentially private Bayesian learning on distributed data

Mikko Heikkilä, Eemil Lagerspetz, Samuel Kaski, Kana Shimizu, Sasu Tarkoma, Antti Honkela

Research output: Conference article, peer-reviewed

11 Citations (Scopus)

Abstract

Many applications of machine learning, for example in health care, would benefit from methods that can guarantee privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results. Standard DP algorithms either require a single trusted party to have access to the entire data, which is a clear weakness, or add prohibitive amounts of noise. We consider DP Bayesian learning in a distributed setting, where each party only holds a single sample or a few samples of the data. We propose a learning strategy based on a secure multi-party sum function for aggregating summaries from data holders and the Gaussian mechanism for DP. Our method builds on an asymptotically optimal and practically efficient DP Bayesian inference with rapidly diminishing extra cost.
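The abstract combines two ingredients: a secure multi-party sum, so no single party sees individual contributions, and the Gaussian mechanism, which adds calibrated noise for DP. The following is a toy sketch of that combination, not the paper's actual protocol; the additive-masking scheme, function names, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def secure_sum(local_values):
    """Toy additive-masking secure sum: every pair of parties shares a
    random mask that one party adds and the other subtracts, so the masks
    cancel in the aggregate while each masked contribution reveals nothing
    on its own. (Illustrative only; the paper uses a secure multi-party
    sum function, whose exact construction is not shown here.)"""
    n = len(local_values)
    masked = np.array(local_values, dtype=float)
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(scale=100.0)  # pairwise shared mask
            masked[i] += mask
            masked[j] -= mask
    return masked.sum()  # masks cancel, leaving the true sum

def gaussian_mechanism(value, sensitivity, epsilon, delta):
    """Classic Gaussian mechanism: noise scale calibrated to give
    (epsilon, delta)-DP for epsilon < 1."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(scale=sigma)

# Each party holds one clipped sample summary (sensitivity 1 assumed).
data = [0.2, 0.5, -0.1, 0.4]
exact = secure_sum(data)  # equals sum(data) up to float rounding
private = gaussian_mechanism(exact, sensitivity=1.0,
                             epsilon=0.5, delta=1e-5)
```

The point of the combination is that DP noise is added once to the aggregate rather than by each party separately, which is why the extra cost over trusted-aggregator DP diminishes as the number of parties grows.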

Original language: English
Pages (from-to): 3227-3236
Number of pages: 10
Journal: Advances in Neural Information Processing Systems
Volume: 2017-December
Publication status: Published - 2017
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: 4 Dec 2017 - 9 Dec 2017

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
