Paper 2021/1074

UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning

Ege Erdogan, Alptekin Kupcu, and A. Ercument Cicek

Abstract

Training deep neural networks requires large-scale data, which often forces users to work in distributed or outsourced settings, accompanied by privacy concerns. The split learning framework aims to address these concerns by splitting the model between the client and the server: since the server does not have access to the client's part of the model, the scheme supposedly provides privacy. We show that this is not the case via two novel attacks. (1) An honest-but-curious split learning server, equipped only with knowledge of the client's neural network architecture, can recover the input samples and obtain a model functionally similar to the client's, without the client being able to detect the attack. (2) Furthermore, if split learning is used naively to protect the training labels, the honest-but-curious server can infer the labels with perfect accuracy. We test our attacks on three benchmark datasets and investigate various properties of the overall system that affect the attacks' effectiveness. Our results show that the plaintext split learning paradigm can pose serious security risks and provides no more than a false sense of security.
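At a high level, the model inversion described in the abstract can be viewed as the server jointly optimizing a dummy input and a clone of the client model so that the clone's output matches the cut-layer activation it receives from the client. The sketch below is a deliberately minimal illustration under strong assumptions (the "client model" is a single linear layer, optimized by plain gradient descent; all variable names are hypothetical), not the paper's actual procedure, which targets deep networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy client model: one linear layer (hypothetical simplification; the
# server knows the client's architecture but not its weights or inputs).
d_in, d_out = 8, 4
W_true = rng.normal(size=(d_out, d_in))   # client's secret weights
x_true = rng.normal(size=d_in)            # client's private input
z = W_true @ x_true                       # cut-layer activation the server observes

# Server's attack state: a randomly initialized clone model W_hat and a
# dummy input x_hat, refined by gradient descent on the squared error
# between the clone's output and the observed activation.
W_hat = rng.normal(size=(d_out, d_in))
x_hat = rng.normal(size=d_in)
lr = 0.01
for _ in range(5000):
    err = W_hat @ x_hat - z                    # residual at the cut layer
    x_hat = x_hat - lr * (W_hat.T @ err)       # grad of 0.5*||err||^2 w.r.t. x_hat
    W_hat = W_hat - lr * np.outer(err, x_hat)  # grad of 0.5*||err||^2 w.r.t. W_hat

final_loss = float(np.sum((W_hat @ x_hat - z) ** 2))
print(f"cut-layer reconstruction loss: {final_loss:.2e}")
```

Note that with a single linear layer the factorization is not unique, so `x_hat` need not equal `x_true`; the sketch only illustrates that the server can drive the cut-layer mismatch toward zero without ever seeing the client's input or weights.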

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Preprint. MINOR revision.
Keywords
data privacy, machine learning, split learning, model inversion, model stealing
Contact author(s)
eerdogan17 @ ku edu tr
History
2021-08-23: received
Short URL
https://ia.cr/2021/1074
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2021/1074,
      author = {Ege Erdogan and Alptekin Kupcu and A. Ercument Cicek},
      title = {UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning},
      howpublished = {Cryptology ePrint Archive, Paper 2021/1074},
      year = {2021},
      note = {\url{https://eprint.iacr.org/2021/1074}},
      url = {https://eprint.iacr.org/2021/1074}
}