Paper 2018/350

The Interpose PUF: Secure PUF Design against State-of-the-art Machine Learning Attacks

Phuong Ha Nguyen, Durga Prasad Sahoo, Chenglu Jin, Kaleel Mahmood, Ulrich Rührmair, and Marten van Dijk

Abstract

The design of a silicon Strong Physical Unclonable Function (PUF) that is lightweight, stable, and equipped with a rigorous security argument has been a fundamental problem in PUF research since its beginnings in 2002. Effective PUF modeling attacks, for example at CCS 2010 and CHES 2015, have shown that no existing silicon PUF design currently meets these requirements. In this paper, we introduce the novel Interpose PUF (iPUF) design and rigorously prove its security against all known machine learning (ML) attacks, including currently known reliability-based strategies that exploit the stability of single CRPs (we are the first to give a detailed analysis of when the reliability-based CMA-ES attack succeeds and when it is not applicable). Furthermore, we provide simulations and confirm them in experiments with FPGA implementations of the iPUF, demonstrating its practicality. Our new iPUF architecture thus solves the currently open problem of constructing practical silicon Strong PUFs that are secure against state-of-the-art ML attacks.
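The abstract's construction can be illustrated with the standard additive-delay model of an Arbiter PUF (APUF), in which a response is the sign of a weight vector dotted with a parity-transformed challenge, and an x-XOR APUF XORs x such responses. In an (x,y)-iPUF, the x-XOR APUF's response bit is interposed into the challenge of a y-XOR APUF. The sketch below is a minimal simulation under these modeling assumptions; the weight shapes, the middle interpose position, and all function names are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def transform(challenges):
    # Parity feature vector Phi of the additive delay model:
    # Phi_i = prod_{j >= i} (1 - 2*c_j), plus a trailing constant 1.
    phi = np.cumprod((1 - 2 * challenges)[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

def xor_apuf_response(weights, challenges):
    # weights: (k, n+1); each row models one APUF's stage delay differences.
    phi = transform(challenges)                    # (m, n+1)
    bits = (phi @ weights.T) > 0                   # (m, k) individual responses
    return np.bitwise_xor.reduce(bits.astype(int), axis=1)

def ipuf_response(w_up, w_down, challenges, pos=None):
    # (x,y)-iPUF: the upper x-XOR APUF's response bit is interposed at
    # position `pos` (here: middle, a common choice) of the (n+1)-bit
    # challenge fed to the lower y-XOR APUF.
    n = challenges.shape[1]
    pos = n // 2 if pos is None else pos
    up = xor_apuf_response(w_up, challenges)       # interposed bit, (m,)
    extended = np.insert(challenges, pos, up, axis=1)
    return xor_apuf_response(w_down, extended)
```

Note the shapes: for n-bit challenges the upper weights are (x, n+1), while the lower layer sees (n+1)-bit challenges and therefore needs (y, n+2) weights. This noise-free model is what classical logistic-regression attacks target; it does not capture the reliability side channel the paper analyzes.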

Metadata
Available format(s)
PDF
Category
Implementation
Publication info
Published by the IACR in TCHES 2019
Keywords
Arbiter Physical Unclonable Function (APUF), Majority Voting, Modeling Attack, Strict Avalanche Criterion, Reliability-based Modeling, XOR APUF, CMA-ES, Logistic Regression, Deep Neural Network
Contact author(s)
chenglu jin @ uconn edu
History
2019-07-09: last of 5 revisions
2018-04-18: received
Short URL
https://ia.cr/2018/350
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2018/350,
      author = {Phuong Ha Nguyen and Durga Prasad Sahoo and Chenglu Jin and Kaleel Mahmood and Ulrich Rührmair and Marten van Dijk},
      title = {The Interpose PUF: Secure PUF Design against State-of-the-art Machine Learning Attacks},
      howpublished = {Cryptology ePrint Archive, Paper 2018/350},
      year = {2018},
      note = {\url{https://eprint.iacr.org/2018/350}},
      url = {https://eprint.iacr.org/2018/350}
}