Paper 2020/165

Subsampling and Knowledge Distillation On Adversarial Examples: New Techniques for Deep Learning Based Side Channel Evaluations

Aron Gohr, Sven Jacob, and Werner Schindler

Abstract

This paper has four main goals. First, we show how we solved the CHES 2018 AES challenge during the contest using essentially a linear classifier combined with a SAT solver and a custom error correction method. This part of the paper has previously appeared in a preprint by the current authors (ePrint report 2019/094) and later as a contribution to a preprint write-up of the solutions by the three winning teams (ePrint report 2019/860). Second, we develop a novel deep neural network architecture for side-channel analysis that completely breaks the AES challenge, allowing for fairly reliable key recovery with just a single trace on the unknown-device part of the CHES challenge (with an expected success rate of roughly 70 percent if about 100 CPU hours are allowed for the equation-solving stage of the attack). This solution significantly improves upon all previously published solutions to the AES challenge, including our baseline linear solution. Third, we consider the question of leakage attribution, both for the classifier we used in the challenge and for our deep neural network. For our linear classifier, direct inspection of the weight vector yields a lot of information on the implementation. For the deep neural network, we test three other strategies (occlusion of traces; inspection of adversarial changes; knowledge distillation) and find that these can yield information on the leakage essentially equivalent to that gained by inspecting the weights of the simpler model. Fourth, we study the properties of adversarially generated side-channel traces for our model. Partly reproducing recent work on useful features in adversarial examples in our application domain, we find that a linear classifier that generalizes to an unseen device much better than our linear baseline can be trained using only adversarial examples (fresh random keys, adversarially perturbed traces) for our deep neural network. This gives a new way of extracting human-usable knowledge from a deep side-channel model while also yielding insights into adversarial examples in an application domain where relatively few sources of spurious correlations between data and labels exist. The experiments described in this paper can be reproduced using code available at https://github.com/agohr/ches2018.
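To illustrate one of the leakage-attribution strategies named in the abstract, the sketch below shows occlusion of traces in its generic form: blank out a sliding window of each trace, and record how much the trained model's score for the true label drops. This is a minimal, hedged sketch assuming a Keras-style classifier with a `predict` method; the names `model`, `traces`, and `labels` are placeholders, not the authors' code, which is available at the repository linked above.

import numpy as np

def occlusion_attribution(model, traces, labels, window=50, fill=0.0):
    """Return, per time offset, the mean score drop caused by occluding
    a window of the traces starting at that offset (larger drop suggests
    that the occluded samples carry more of the exploited leakage)."""
    n, length = traces.shape
    base = model.predict(traces)                      # shape (n, num_classes)
    base_scores = base[np.arange(n), labels]          # scores of the true labels
    drops = np.zeros(length - window + 1)
    for start in range(length - window + 1):
        occluded = traces.copy()
        occluded[:, start:start + window] = fill      # blank out one window
        scores = model.predict(occluded)[np.arange(n), labels]
        drops[start] = np.mean(base_scores - scores)
    return drops

Plotting the returned array against the time axis gives a coarse leakage map; the paper compares this kind of attribution against inspection of adversarial changes and knowledge distillation.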

Metadata
Available format(s)
PDF
Category
Secret-key cryptography
Publication info
Published elsewhere. SAC 2020
Keywords
Side channel attacks, Machine learning, Deep neural networks, AES
Contact author(s)
aron gohr @ gmail com
History
2020-10-19: revised
2020-02-13: received
Short URL
https://ia.cr/2020/165
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2020/165,
      author = {Aron Gohr and Sven Jacob and Werner Schindler},
      title = {Subsampling and Knowledge Distillation On Adversarial Examples: New Techniques for Deep Learning Based Side Channel Evaluations},
      howpublished = {Cryptology ePrint Archive, Paper 2020/165},
      year = {2020},
      note = {\url{https://eprint.iacr.org/2020/165}},
      url = {https://eprint.iacr.org/2020/165}
}