International Association for Cryptologic Research

CryptoDB

Recent videos of IACR talks

Year
Venue
Title
2022
EUROCRYPT
Batch-OT with Optimal Rate 📺
We show that it is possible to perform $n$ independent copies of $1$-out-of-$2$ oblivious transfer in two messages, where the communication complexity of the receiver and sender (each) is $n(1+o(1))$ for sufficiently large $n$. Note that this matches the information-theoretic lower bound. Prior to this work, this was only achievable by using the heavy machinery of rate-$1$ fully homomorphic encryption (Rate-$1$ FHE, Brakerski et al., TCC 2019). To achieve rate-$1$ both on the receiver's and sender's end, we use the LPN assumption, with slightly sub-constant noise rate $1/m^{\epsilon}$ for any $\epsilon>0$, together with either the DDH, QR or LWE assumptions. In terms of efficiency, our protocols only rely on linear homomorphism, as opposed to the FHE-based solution which inherently requires an expensive ``bootstrapping'' operation. We believe that in terms of efficiency we compare favorably to existing batch-OT protocols, while achieving superior communication complexity. We show similar results for Oblivious Linear Evaluation (OLE). For our DDH-based solution we develop a new technique that may be of independent interest. We show that it is possible to ``emulate'' the binary group $\mathbb{Z}_2$ (or any other small-order group) inside a prime-order group $\mathbb{Z}_p$ \emph{in a function-private manner}. That is, $\mathbb{Z}_2$ operations are mapped to $\mathbb{Z}_p$ operations such that the outcome of the latter does not reveal additional information beyond the $\mathbb{Z}_2$ outcome. Our encoding technique uses the discrete Gaussian distribution, which to our knowledge was not done before in the context of DDH.
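To fix ideas, here is a minimal Python sketch of the batch 1-out-of-2 OT functionality that such a two-message protocol realizes; it describes only what is computed, not how the protocol computes it securely, and all names are illustrative.

    def batch_ot(sender_pairs, receiver_choices):
        """Ideal batch 1-out-of-2 OT: the receiver learns m_{i,b_i} for every i,
        while the sender learns nothing about the choice bits b_i."""
        assert len(sender_pairs) == len(receiver_choices)
        return [pair[b] for pair, b in zip(sender_pairs, receiver_choices)]

    # Example with n = 3 independent OT instances.
    out = batch_ot([(b"m00", b"m01"), (b"m10", b"m11"), (b"m20", b"m21")], [1, 0, 1])
    assert out == [b"m01", b"m10", b"m21"]

Since the receiver must convey one choice bit per instance and the sender's answer must carry one selected message per instance, a communication of $n(1+o(1))$ bits in each direction is what the information-theoretic lower bound refers to.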
2022
EUROCRYPT
A Correlation Attack on Full SNOW-V and SNOW-Vi 📺
In this paper, a method for searching correlations between the binary stream of the Linear Feedback Shift Register (LFSR) and the keystream of SNOW-V and SNOW-Vi is presented, based on the technique of approximation to composite functions. With the aid of the linear relationship between the four taps of the LFSR input into the Finite State Machine (FSM) at three consecutive clocks, we present an automatic search model based on the SAT/SMT technique and search out a series of linear approximation trails with high correlation. By exhausting the intermediate masks, we find a binary linear approximation with correlation $-2^{-47.76}$. Using this approximation, we propose a correlation attack on SNOW-V with an expected time complexity $2^{246.53}$, a memory complexity $2^{238.77}$ and $2^{237.5}$ keystream words generated by the same key and Initial Vector (IV). For SNOW-Vi, we provide a binary linear approximation with the same correlation and mount a correlation attack with the same complexity as that of SNOW-V. To the best of our knowledge, this is the first known efficient attack on full SNOW-V and SNOW-Vi that is better than exhaustive key search. The results indicate that neither SNOW-V nor SNOW-Vi can guarantee the 256-bit security level if we ignore the design constraint that the maximum length of keystream for a single pair of key and IV is less than $2^{64}$.
2022
EUROCRYPT
On the security of ECDSA with additive key derivation and presignatures 📺
Two common variations of ECDSA signatures are {\em additive key derivation} and presignatures. Additive key derivation is a simple mechanism for deriving many subkeys from a single master key, and is already widely used in cryptocurrency applications with the Hierarchical Deterministic Wallet mechanism standardized in Bitcoin Improvement Proposal 32 (BIP32). Because of its linear nature, additive key derivation is also amenable to efficient implementation in the threshold setting. With presignatures, the secret and public nonces used in the ECDSA signing algorithm are precomputed. In the threshold setting, using presignatures along with other precomputed data allows for an extremely efficient "online phase" of the protocol. Recent works have advocated for both of these variations, sometimes in combination. However, somewhat surprisingly, we are aware of no prior security proof for additive key derivation, let alone for additive key derivation in combination with presignatures. In this paper, we provide a thorough analysis of these variations, both in isolation and in combination. Our analysis is in the generic group model (GGM). Importantly, we do not modify ECDSA or weaken the standard notion of security in any way. Of independent interest, we also present a version of the GGM that is specific to elliptic curves. This EC-GGM better models some of the idiosyncrasies (such as the conversion function and malleability) of ECDSA. In addition to this analysis, we report security weaknesses in these variations that apparently have not been previously reported. For example, we show that when both variations are combined, there is a cube-root attack on ECDSA, which is much faster than the best known square-root attack on plain ECDSA. We also present two mitigations against these weaknesses: re-randomized presignatures and homogeneous key derivation. Each of these mitigations is very lightweight, and when used in combination, the security is essentially the same as that of plain ECDSA (in the EC-GGM).
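As a quick illustration of the additive structure under analysis, the toy Python sketch below derives subkeys in a prime-order group; modular exponentiation stands in for elliptic-curve scalar multiplication, and the tweak t stands in for the value that a BIP32-style derivation would compute from a chain code and index (parameters and names are illustrative only).

    # Toy additive key derivation in the subgroup of order q = 233 inside Z_467^*.
    p, q, g = 467, 233, 4          # 4 generates the subgroup of order 233 mod 467
    sk = 57                        # master secret key
    pk = pow(g, sk, p)             # master public key

    def derive(sk, pk, t):
        """Additive derivation: secret and public key shift by the same tweak t."""
        sk_t = (sk + t) % q
        pk_t = (pk * pow(g, t, p)) % p
        assert pow(g, sk_t, p) == pk_t   # derived keys remain a consistent pair
        return sk_t, pk_t

    child_sk, child_pk = derive(sk, pk, t=101)

The same linearity that makes derivation cheap (anyone holding the public key and the tweak can derive the child public key) is what interacts subtly with presignatures in the analysis above.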
2022
EUROCRYPT
Key Guessing Strategies for Linear Key-Schedule Algorithms in Rectangle Attacks 📺
When generating quartets for a rectangle attack on ciphers with linear key schedules, we find that the right quartets, which may suggest key candidates, have to satisfy certain nonlinear relationships. However, some of the generated quartets always violate these relationships, so that they will never suggest any key candidates. Inspired by previous rectangle frameworks, we find that guessing certain key cells before generating quartets may reduce the number of those invalid quartets. However, guessing a lot of key cells at once may lose the benefit of the early abort technique, which may lead to a higher overall complexity. To get a better tradeoff, we build a new rectangle attack framework on ciphers with linear key schedules, with the purpose of reducing the overall complexity or attacking more rounds. In the tradeoff model, there are many parameters affecting the overall complexity, especially the choices of the number and positions of the key cells guessed before generating quartets. To identify optimal parameters, we build a uniform automatic tool on SKINNY as an example, which covers the optimal rectangle distinguishers for the key-recovery phase, the number and positions of the key cells guessed before generating quartets, the size of the key counters to build (which affects the exhaustive search step), etc. Based on the automatic tool, we identify a 32-round key-recovery attack on SKINNY-128-384 in the related-key setting, which extends the best previous attack by 2 rounds. For other versions with n-2n or n-3n, we also achieve one more round than before. In addition, using the previous rectangle distinguishers, we achieve better attacks on round-reduced ForkSkinny, Deoxys-BC-384 and GIFT-64. Finally, we discuss the conversion of our rectangle framework from the related-key setting into the single-key setting and give a new single-key rectangle attack on 10-round Serpent.
2022
EUROCRYPT
Group Signature and More from Isogenies and Lattices: Generic, Simple, and Efficient 📺
We construct an efficient dynamic group signature (or more generally an accountable ring signature) from isogeny and lattice assumptions. Our group signature is based on a simple generic construction that can be instantiated by cryptographically hard group actions such as the CSIDH group action or an MLWE-based group action. The signature is of size $O(\log N)$, where $N$ is the number of users in the group. Our idea builds on the recent efficient OR-proof by Beullens, Katsumata, and Pintore (Asiacrypt'20), where we efficiently add a proof of valid ciphertext to their OR-proof and further show that the resulting non-interactive zero-knowledge proof system is \emph{online extractable}. Our group signatures satisfy more ideal security properties compared to previously known constructions, while simultaneously having an attractive signature size. The signature size of our isogeny-based construction is an order of magnitude smaller than all previously known post-quantum group signatures (e.g., 6.6 KB for 64 members). In comparison, our lattice-based construction has a larger signature size (e.g., either 126 KB or 89 KB for 64 members depending on the satisfied security property). However, since the $O(\cdot)$-notation hides a very small constant factor, it remains small even for very large group sizes, say $2^{20}$.
2022
EUROCRYPT
A Greater GIFT: Strengthening GIFT against Statistical Cryptanalysis 📺
GIFT-64 is a 64-bit block cipher with a 128-bit key that is more lightweight than PRESENT. This paper provides a detailed analysis of GIFT-64 against differential and linear attacks. Our work complements automatic search methods for the best differential and linear characteristics with a careful manual analysis. This hybrid approach leads to new insights. In the differential setting, we theoretically explain the existence of differential characteristics with two active S-boxes per round and derive some novel properties of these characteristics. Furthermore, we prove that all optimal differential characteristics of GIFT-64 covering more than seven rounds must activate two S-boxes per round. We can construct all optimal characteristics by hand. In parallel to the work in the differential setting, we conduct a similar analysis in the linear setting. However, unlike the clear view in the differential setting, the optimal linear characteristics of GIFT-64 must have at least one round activating only one S-box. Moreover, with the assistance of automatic searching methods, we identify 24 GIFT-64 variants achieving better resistance against differential attacks while maintaining a similar security level against linear attacks. Since the new variants strengthen GIFT-64 against statistical cryptanalysis, we claim that the number of rounds could be reduced from 28 to 26 for the variants. This observation enables us to create a cipher with lower energy consumption than GIFT-64. Similarly to the case of GIFT-64, we do not claim any related-key security for the round-reduced variant as this is not relevant for most applications.
2022
EUROCRYPT
Private Circuits with Quasilinear Randomness 📺
A {\em $t$-private} circuit for a function $f$ is a randomized Boolean circuit $C$ that maps a randomized encoding of an input $x$ to an encoding of the output $f(x)$, such that probing $t$ wires anywhere in $C$ reveals nothing about $x$. Private circuits can be used to protect embedded devices against side-channel attacks. Motivated by the high cost of generating fresh randomness in such devices, several works have studied the question of minimizing the randomness complexity of private circuits. The best known upper bound, due to Coron et al. (Eurocrypt 2020), is $O(t^2\cdot\log s)$ random bits, where $s$ is the circuit size of $f$. We improve this to $O(t\cdot \log s)$, including the randomness used by the input encoder, and extend this bound to the stateful variant of private circuits. Our constructions are semi-explicit in the sense that there is an efficient randomized algorithm that generates the private circuit $C$ from a circuit for $f$ with negligible failure probability.
2022
EUROCRYPT
Orientations and the supersingular endomorphism ring problem 📺
We study two important families of problems in isogeny-based cryptography and how they relate to each other: computing the endomorphism ring of supersingular elliptic curves, and inverting the action of class groups on oriented supersingular curves. We prove that these two families of problems are closely related through polynomial-time reductions, assuming the generalized Riemann hypothesis. We identify two classes of essentially equivalent problems. The first class corresponds to the problem of computing the endomorphism ring of oriented curves. The security of a large family of cryptosystems (such as CSIDH) reduces to (and sometimes from) this class, for which there are heuristic quantum algorithms running in subexponential time. The second class corresponds to computing the endomorphism ring of orientable curves. The security of essentially all isogeny-based cryptosystems reduces to (and sometimes from) this second class, for which the best known algorithms are still exponential. Some of our reductions not only generalise, but also strengthen previously known results. For instance, it was known that in the particular case of curves defined over $\mathbb F_p$, the security of CSIDH reduces to the endomorphism ring problem in subexponential time. Our reductions imply that the security of CSIDH is actually equivalent to the endomorphism ring problem, under polynomial time reductions (circumventing arguments that proved such reductions unlikely).
2022
EUROCRYPT
Property-Preserving Hash Functions for Hamming Distance from Standard Assumptions 📺
Property-preserving hash functions allow for compressing long inputs $x_0$ and $x_1$ into short hashes $h(x_0)$ and $h(x_1)$ in a manner that allows for computing a predicate $P(x_0, x_1)$ given only the two hash values without having access to the original data. Such hash functions are said to be adversarially robust if an adversary that gets to pick $x_0$ and $x_1$ after the hash function has been sampled, cannot find inputs for which the predicate evaluated on the hash values outputs the incorrect result. In this work we construct robust property-preserving hash functions for the Hamming-distance predicate, which distinguishes inputs with Hamming distance at least some threshold $t$ from those with distance less than $t$. The security of the construction is based on standard lattice hardness assumptions. Our construction has several advantages over the best known previous construction by Fleischhacker and Simkin (Eurocrypt 2021). Our construction relies on a single well-studied hardness assumption from lattice cryptography whereas the previous work relied on a newly introduced family of computational hardness assumptions. In terms of computational effort, our construction only requires a small number of modular additions per input bit, whereas the work of Fleischhacker and Simkin required several exponentiations per bit as well as the interpolation and evaluation of high-degree polynomials over large fields. An additional benefit of our construction is that the description of the hash function can be compressed to $\lambda$ bits assuming a random oracle. Previous work has descriptions of length $O(\ell \lambda)$ bits for input bit-length $\ell$. We prove a lower bound on the output size of any property-preserving hash function for the Hamming distance predicate. The bound shows that the size of our hash value is not far from optimal.
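For readers unfamiliar with the primitive, the Python sketch below shows only the interface and the correctness condition; the identity "hash" is a placeholder so the example runs and has nothing to do with the lattice-based construction itself.

    def hamming_distance(x0, x1):
        return sum(a != b for a, b in zip(x0, x1))

    def predicate(x0, x1, t):                 # P(x0, x1): is the distance at least t?
        return hamming_distance(x0, x1) >= t

    def hash_input(h, x):                     # h is the sampled public hash description
        return list(x)                        # placeholder: a real PPH compresses x

    def eval_from_hashes(h, y0, y1):          # decide the predicate from hashes alone
        return sum(a != b for a, b in zip(y0, y1)) >= h["t"]

    h = {"t": 3}
    x0, x1 = [0, 1, 1, 0, 1], [1, 1, 0, 0, 0]
    assert eval_from_hashes(h, hash_input(h, x0), hash_input(h, x1)) == predicate(x0, x1, 3)

Adversarial robustness then asks that even inputs chosen after seeing h cannot make eval_from_hashes disagree with the true predicate, except with negligible probability.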
2022
EUROCRYPT
Embedding the UC Model into the IITM Model 📺
Universal Composability is a widely used concept for the design and analysis of protocols. Since Canetti's original UC model and the model by Pfitzmann and Waidner, several different models for universal composability have been proposed, including, for example, the IITM model, GNUC, CC, but also extensions and restrictions of the UC model, such as JUC, GUC, and SUC. These were motivated by the lack of expressivity of existing models, ease of use, or flaws in previous models. Cryptographers choose between these models based on their needs at hand (e.g., support for joint state and global state) or simply their familiarity with a specific model. While all models follow the same basic idea, there are huge conceptual differences, which raises fundamental and practical questions: (How) do the concepts and results proven in one model relate to those in another model? Do the different models and the security notions formulated therein capture the same classes of attacks? Most importantly, can cryptographers re-use results proven in one model in another model, and if so, how? In this paper, we initiate a line of research with the aim to address this lack of understanding, consolidate the space of models, and enable cryptographers to re-use results proven in other models. As a start, here we focus on Canetti's prominent UC model and the IITM model proposed by Küsters et al. The latter is an interesting candidate for comparison with the UC model since it has been used to analyze a wide variety of protocols, supports a very general protocol class and provides, among others, seamless treatment of protocols with shared state, including joint and global state. Our main technical contribution is an embedding of the UC model into the IITM model showing that all UC protocols, security and composition results carry over to the IITM model. Hence, protocol designers can profit from the features of the IITM model while being able to use all their results proven in the UC model. We also show that, in general, one cannot embed the full IITM model into the UC model.
2022
EUROCRYPT
Beyond quadratic speedups in quantum attacks on symmetric schemes 📺
In this paper, we report the first quantum key-recovery attack on a symmetric block cipher design, using classical queries only, with a more than quadratic time speedup compared to the best classical attack. We study the 2XOR-Cascade construction of Gaži and Tessaro (EUROCRYPT 2012). It is a key length extension technique which provides an n-bit block cipher with 5n/2 bits of security out of an n-bit block cipher with 2n bits of key, with a security proof in the ideal model. We show that the offline-Simon algorithm of Bonnetain et al. (ASIACRYPT 2019) can be extended to, in particular, attack this construction in quantum time $\widetilde{\mathcal{O}}(2^n)$, providing a speedup by a factor 2.5 in the exponent over the best classical attack. Regarding post-quantum security of symmetric ciphers, it is commonly assumed that doubling the key sizes is a sufficient precaution. This is because Grover's quantum search algorithm, and its derivatives, can only reach a quadratic speedup at most. Our attack shows that the structure of some symmetric constructions can be exploited to overcome this limit. In particular, the 2XOR-Cascade cannot be used to generically strengthen block ciphers against quantum adversaries, as it would offer only the same security as the block cipher itself.
2022
EUROCRYPT
Universally Composable Subversion-Resilient Cryptography 📺
Subversion attacks undermine security of cryptographic protocols by replacing a legitimate honest party's implementation with one that leaks information in an undetectable manner. An important limitation of all currently known techniques for designing cryptographic protocols with security against subversion attacks is that they do not automatically guarantee security in the realistic setting where a protocol session may run concurrently with other protocols. We remedy this situation by providing a foundation of reverse firewalls (Mironov and Stephens-Davidowitz, EUROCRYPT'15) in the universal composability (UC) framework (Canetti, FOCS'01 and J. ACM'20). In more detail, our contributions are threefold: - We generalize the UC framework to the setting where each party consists of a core (which has secret inputs and is in charge of generating protocol messages) and a firewall (which has no secrets and sanitizes the outgoing/incoming communication from/to the core). Both the core and the firewall can be subject to different flavors of corruption, modeling different kinds of subversion attacks. For instance, we capture the setting where a subverted core looks like the honest core to any efficient test, yet it may leak secret information via covert channels (which we call specious subversion). - We show how to sanitize UC commitments and UC coin tossing against specious subversion, under the DDH assumption. - We show how to sanitize the classical GMW compiler (Goldreich, Micali and Wigderson, STOC 1987) for turning MPC with security in the presence of semi-honest adversaries into MPC with security in the presence of malicious adversaries. This yields a completeness theorem for maliciously secure MPC in the presence of specious subversion. Additionally, all our sanitized protocols are transparent, in the sense that communicating with a sanitized core looks indistinguishable from communicating with an honest core. Thanks to the composition theorem, our methodology makes it possible, for the first time, to design subversion-resilient protocols by sanitizing different sub-components in a modular way.
2022
EUROCRYPT
Sine Series Approximation of the Mod Function for Bootstrapping of Approximate HE 📺
While it is well known that the sawtooth function has a point-wise convergent Fourier series, the rate of convergence is not the best possible for the application of approximating the mod function in small intervals around multiples of the modulus. We show a different sine series, such that the sine series of order $n$ has error $O(\epsilon^{2n+1})$ for approximating the mod function in $\epsilon$-sized intervals around multiples of the modulus. Moreover, the resulting polynomial, after Taylor series approximation of the sine function, has small coefficients, and the whole polynomial can be computed at a precision that is only slightly larger than $-(2n+1)\log \epsilon$, the precision of approximation being sought. This polynomial can then be used to approximate the mod function to almost arbitrary precision, and hence allows practical CKKS-HE bootstrapping with arbitrary precision. We validate our approach by an implementation and obtain $100$ bit precision bootstrapping as well as improvements over prior work even at lower precision.
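To make the convergence claim concrete, the short numerical sketch below (Python, illustrative parameters) compares the classical first-order approximation $\frac{q}{2\pi}\sin(2\pi x/q)$, as used in standard CKKS bootstrapping, against the exact centered mod function near a multiple of the modulus; its error shrinks roughly like $\epsilon^3$, whereas the sine series proposed here reaches error $O(\epsilon^{2n+1})$ at order $n$.

    import math

    q = 1 << 20                              # toy modulus

    def mod_centered(x, q):                  # the function being approximated
        return x - q * round(x / q)

    def sine_approx(x, q):                   # first-order sine approximation
        return (q / (2 * math.pi)) * math.sin(2 * math.pi * x / q)

    for eps in (0.1, 0.01, 0.001):           # x lies within eps*q of the multiple 3q
        x = 3 * q + eps * q
        err = abs(sine_approx(x, q) - mod_centered(x, q))
        print(f"eps={eps}: error ~ {err:.3e}")   # decreases roughly like eps**3 * q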
2022
EUROCRYPT
A PCP Theorem for Interactive Proofs and Applications 📺
The celebrated PCP Theorem states that any language in NP can be decided via a verifier that reads O(1) bits from a polynomially long proof. Interactive oracle proofs (IOP), a generalization of PCPs, allow the verifier to interact with the prover for multiple rounds while reading a small number of bits from each prover message. While PCPs are relatively well understood, the power captured by IOPs (beyond NP) has yet to be fully explored. We present a generalization of the PCP theorem for interactive languages. We show that any language decidable by a k(n)-round IP has a k(n)-round public-coin IOP, where the verifier makes its decision by reading only O(1) bits from each (polynomially long) prover message and O(1) bits from each of its own (random) messages to the prover. Our result and the underlying techniques have several applications. We get a new hardness of approximation result for a stochastic satisfiability problem, we show IOP-to-IOP transformations that previously were known to hold only for IPs, and we formulate a new notion of PCPs (index-decodable PCPs) that enables us to obtain a commit-and-prove SNARK in the random oracle model for nondeterministic computations.
2022
EUROCRYPT
Quantum Algorithms for Variants of Average-Case Lattice Problems via Filtering 📺
We show polynomial-time quantum algorithms for the following problems: (*) Short integer solution (SIS) problem under the infinity norm, where the public matrix is very wide, the modulus is a polynomially large prime, and the bound of infinity norm is set to be half of the modulus minus a constant. (*) Learning with errors (LWE) problem given LWE-like quantum states with polynomially large moduli and certain error distributions, including bounded uniform distributions and Laplace distributions. (*) Extrapolated dihedral coset problem (EDCP) with certain parameters. The SIS, LWE, and EDCP problems in their standard forms are as hard as solving lattice problems in the worst case. However, the variants that we can solve are not in the parameter regimes known to be as hard as solving worst-case lattice problems. Still, no classical or quantum polynomial-time algorithms were known for the variants of SIS and LWE we consider. For EDCP, our quantum algorithm slightly extends the result of Ivanyos et al. (2018). Our algorithms for variants of SIS and EDCP use the existing quantum reductions from those problems to LWE, or more precisely, to the problem of solving LWE given LWE-like quantum states. Our main contribution is solving LWE given LWE-like quantum states with interesting parameters using a filtering technique.
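For concreteness, the infinity-norm SIS variant referred to above can be written as follows (a LaTeX restatement of the parameter regime described in the abstract, with $\beta$ denoting the norm bound):

    \mathrm{SIS}^{\infty}_{n,m,q,\beta}:\quad \text{given } A \in \mathbb{Z}_q^{n \times m}, \text{ find } x \in \mathbb{Z}^m \setminus \{0\} \text{ such that } Ax \equiv 0 \pmod{q} \text{ and } \|x\|_\infty \le \beta,

solved here in the regime where $m$ is very wide, $q$ is a polynomially large prime, and $\beta = \lfloor q/2 \rfloor - c$ for a constant $c$.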
2022
EUROCRYPT
Highly Efficient OT-Based Multiplication Protocols 📺
We present a new OT-based two-party multiplication protocol that is almost as efficient as Gilboa's semi-honest protocol (Crypto '99), but has a high level of security without further compilation. The achieved security suffices for many applications, and, assuming DDH, can be cheaply compiled into full security.
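As background for the comparison with Gilboa's protocol, here is a toy Python rendering of OT-based multiplication over $\mathbb{Z}_{2^{\ell}}$ on top of an ideal OT oracle; it conveys the structure (one OT per bit of one input) and none of the security guarantees discussed above.

    from random import randrange

    ELL = 16
    MOD = 1 << ELL

    def ideal_ot(pair, choice_bit):
        """Ideal 1-out-of-2 OT: the receiver gets pair[choice_bit], the sender learns nothing."""
        return pair[choice_bit]

    def ot_multiply(a, b):
        """Additive shares of a*b mod 2^ELL via ELL oblivious transfers (toy, semi-honest)."""
        share_a, share_b = 0, 0
        for i in range(ELL):
            r = randrange(MOD)                       # sender's fresh one-time pad
            pair = (r, (r + a * (1 << i)) % MOD)     # offer r or r + a*2^i
            received = ideal_ot(pair, (b >> i) & 1)  # receiver selects with the i-th bit of b
            share_a = (share_a - r) % MOD
            share_b = (share_b + received) % MOD
        return share_a, share_b

    sa, sb = ot_multiply(1234, 5678)
    assert (sa + sb) % MOD == (1234 * 5678) % MOD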
2022
EUROCRYPT
Practical Non-interactive Publicly Verifiable Secret Sharing with Thousands of Parties 📺
Non-interactive publicly verifiable secret sharing (PVSS) schemes enable (re-)sharing of secrets in a decentralized setting in the presence of malicious parties. A recently proposed application of PVSS schemes is to enable permissionless proof-of-stake blockchains to ``keep a secret'' via a sequence of committees that share that secret. These committees can use the secret to produce signatures on the blockchain's behalf, or to disclose hidden data conditioned on consensus that some event has occurred. That application needs very large committees with thousands of parties, so the PVSS scheme in use must be efficient enough to support such large committees, in terms of both computation and communication. Yet, previous PVSS schemes have large proofs and/or require many exponentiations over large groups. We present a non-interactive PVSS scheme in which the underlying encryption scheme is based on the learning with errors (LWE) problem. While lattice-based encryption schemes are very fast, they often have long ciphertexts and public keys. We use the following two techniques to conserve bandwidth: First, we adapt the Peikert-Vaikuntanathan-Waters (PVW) encryption scheme to the multi-receiver setting, so that the bulk of the parties' keys is a common random string. The resulting scheme yields $\Omega(1)$ amortized plaintext/ciphertext rate, where concretely the rate is $\approx 1/60$ for 100 parties, $\approx 1/8$ for 1000 parties, and approaching 1/2 as the number of parties grows. Second, we use bulletproofs over a DL-group of order about 256 bits to get compact proofs of correct encryption/decryption of shares. Alternating between the lattice and DL settings is relatively painless, as we equate the LWE modulus with the order of the group. We also show how to reduce the number of exponentiations in the bulletproofs by applying Johnson-Lindenstrauss-like compression to reduce the dimension of the vectors whose properties must be verified. An implementation of our PVSS with 1000 parties showed that it is feasible even at that size, and should remain so even with a one or two orders of magnitude increase in the committee size.
2022
EUROCRYPT
Garbled Circuits With Sublinear Evaluator 📺
A recent line of work, Stacked Garbled Circuit (SGC), showed that Garbled Circuit (GC) can be improved for functions that include conditional behavior. SGC relieves the communication bottleneck of 2PC by only sending enough garbled material for a single branch out of the $b$ total branches. Hence, communication is sublinear in the circuit size. However, both the evaluator and the generator pay in computation and perform at least factor $\log b$ extra work as compared to standard GC evaluation. We extend the sublinearity of SGC to also include the work performed by the GC Evaluator E; thus we achieve a fully sublinear E, which is essential when optimizing for the online phase. We formalize our approach as a garbling scheme called GCWise: GC WIth Sublinear Evaluator. We show one attractive and immediate application, Garbled PIR, a primitive that marries GC with Private Information Retrieval. Garbled PIR allows the GC to non-interactively and sublinearly access a privately indexed element from a publicly known database, and then use this element in continued GC evaluation.
2022
EUROCRYPT
Practical Post-Quantum Signature Schemes from Isomorphism Problems of Trilinear Forms 📺
In this paper, we propose a practical signature scheme based on the alternating trilinear form equivalence problem. Our scheme is inspired by the Goldreich-Micali-Wigderson zero-knowledge protocol for graph isomorphism, and can serve as an alternative candidate for NIST's post-quantum digital signatures. First, we present theoretical evidence to support its security, especially in the post-quantum cryptography context. The evidence is drawn from several research lines, including hidden subgroup problems, multivariate cryptography, cryptography based on group actions, the quantum random oracle model, and recent advances on isomorphism problems for algebraic structures in algorithms and complexity. Second, we demonstrate its potential for practical uses. Based on algorithm studies, we propose concrete parameter choices, and then implement a prototype. One concrete scheme achieves 128 bit security with public key size ~4100 bytes, signature size ~6800 bytes, and running times (key generation, sign, verify) ~0.8ms on a common laptop computer.
2022
EUROCRYPT
Round-Optimal Black-Box Protocol Compilers 📺
We give black-box, round-optimal protocol compilers from semi-honest security to malicious security in the Random Oracle Model (ROM) and in the 1-out-of-2 OT correlations model. We use our compilers to obtain the following results: - A two-round, two-party protocol secure against malicious adversaries in the random oracle model making black-box use of a two-round semi-honest secure protocol. Prior to our work, such a result was not known even considering special functionalities such as a two-round oblivious transfer. This result also implies the first constructions of two-round malicious (batch) OT/OLE in the random oracle model based on the black-box use of two-round semi-honest (batch) OT/OLE. - A three-round multiparty secure computation protocol in the random oracle model secure against malicious adversaries that is based on the black-box use of two-round semi-honest OT. This protocol matches a known round complexity lower bound due to Applebaum et al. (ITCS'20) and is based on a minimal cryptographic hardness assumption. - A two-round, multiparty secure computation protocol in the $1$-out-of-$2$ OT correlations model that is secure against malicious adversaries and makes black-box use of cryptography. This gives new round-optimal protocols for computing arithmetic branching programs that are statistically secure and makes black-box use of the underlying field. As a contribution of independent interest, we provide a new variant of the IPS compiler (Ishai, Prabhakaran and Sahai, Crypto 2008) in the two-round setting, where we relax requirements on the IPS ``inner protocol'' by strengthening the ``outer protocol''.
2022
EUROCRYPT
Secure Non-Interactive Reduction and Spectral Analysis of Correlations 📺
Correlated pairs of random variables are a central concept in information-theoretically secure cryptography. Secure reductions between different correlations have been studied, and completeness results are known. Further, the complexity of such reductions is intimately connected with circuit complexity and efficiency of locally decodable codes. As such, making progress on these complexity questions faces strong barriers. Motivated by this, in this work, we study a restricted form of secure reductions --- namely, Secure Non-Interactive Reductions (SNIR) --- which is still closely related to the original problem, and establish several fundamental results and relevant techniques for it. We uncover striking connections between SNIR and linear algebraic properties of correlations. Specifically, we define the spectrum of a correlation, and show that a target correlation has a SNIR to a source correlation only if the spectrum of the latter contains the entire spectrum of the former. We also establish a `mirroring lemma' that shows an unexpected symmetry between the two parties in a SNIR, when viewed through the lens of spectral analysis. We also use cryptographic insights and elementary linear algebraic analysis to fully characterize the role of common randomness as well as local randomness in SNIRs. We employ these results to resolve several fundamental questions about SNIRs, and to define future directions.
2022
EUROCRYPT
Approximate Divisor Multiples - Factoring with Only a Third of the Secret CRT-Exponents 📺
We address Partial Key Exposure attacks on CRT-RSA on secret exponents $d_p, d_q$ with small public exponent $e$. For constant $e$ it is known that the knowledge of half of the bits of one of $d_p, d_q$ suffices to factor the RSA modulus $N$ by Coppersmith's famous {\em factoring with a hint} result. We extend this setting to non-constant $e$. Somewhat surprisingly, our attack shows that RSA with $e$ of size $N^{\frac 1 {12}}$ is most vulnerable to Partial Key Exposure, since in this case only a third of the bits of both $d_p, d_q$ suffices to factor $N$ in polynomial time, knowing either most significant bits (MSB) or least significant bits (LSB). Let $ed_p = 1 + k(p-1)$ and $ed_q = 1 + \ell(q-1)$. On the technical side, we find the factorization of $N$ in a novel two-step approach. In a first step we recover $k$ and $\ell$ in polynomial time, in the MSB case completely elementary and in the LSB case using Coppersmith's lattice-based method. We then obtain the prime factorization of $N$ by computing the root of a univariate polynomial modulo $kp$ for our known $k$. This can be seen as an extension of Howgrave-Graham's {\em approximate divisor} algorithm to the case of {\em approximate divisor multiples} for some known multiple $k$ of an unknown divisor $p$ of $N$. The point of {\em approximate divisor multiples} is that the unknown that is recoverable in polynomial time grows linearly with the size of the multiple $k$. Our resulting Partial Key Exposure attack with known MSBs is completely rigorous, whereas in the LSB case we rely on a standard Coppersmith-type heuristic. We experimentally verify our heuristic, thereby showing that in practice we reach our asymptotic bounds already using small lattice dimensions. Thus, our attack is highly practical.
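The elementary identity behind the first recovery step is worth recording. From the two key equations stated above,

    e d_p = 1 + k(p-1) \;\Longrightarrow\; e d_p - 1 + k = kp, \qquad e d_q = 1 + \ell(q-1) \;\Longrightarrow\; e d_q - 1 + \ell = \ell q,

and multiplying the two right-hand identities gives

    (e d_p - 1 + k)(e d_q - 1 + \ell) = k \ell \, pq = k \ell N,

a relation in which, once enough bits of $d_p, d_q$ are known, only the comparatively small $k$ and $\ell$ remain unknown; relations of this shape are the natural starting point for the elementary MSB step and the Coppersmith-based LSB step described above.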
2022
EUROCRYPT
One-Shot Fiat-Shamir-based NIZK Arguments of Composite Residuosity and Logarithmic-Size Ring Signatures in the Standard Model 📺
The standard model security of the Fiat-Shamir transform has been an active research area for many years. In breakthrough results, Canetti {\it et al.} (STOC'19) and Peikert-Shiehian (Crypto'19) showed that, under the Learning-With-Errors (LWE) assumption, it provides soundness by applying correlation-intractable (CI) hash functions to so-called {\it trapdoor} $\Sigma$-protocols. In order to be compatible with CI hash functions based on standard LWE assumptions with polynomial approximation factors, all known such protocols have been obtained via parallel repetitions of a basic protocol with binary challenges. In this paper, we consider languages related to Paillier's composite residuosity assumption (DCR) for which we give the first trapdoor $\Sigma$-protocols providing soundness in one shot, via exponentially large challenge spaces. This improvement is analogous to the one enabled by Schnorr over the original Fiat-Shamir protocol in the random oracle model. Using the correlation-intractable hash function paradigm, we then obtain simulation-sound NIZK arguments showing that an element of $\mathbb{Z}_{N^2}^\ast$ is a composite residue, which opens the door to space-efficient applications in the standard model. As a concrete example, we build logarithmic-size ring signatures (assuming a common reference string) with the shortest signature length among schemes based on standard assumptions in the standard model. We prove security under the DCR and LWE assumptions, while keeping the signature size comparable with that of random-oracle-based schemes.
2022
EUROCRYPT
On Building Fine-Grained One-Way Functions from Strong Average-Case Hardness 📺
Constructing one-way functions from average-case hardness is a long-standing open problem. A positive result would exclude Pessiland (Impagliazzo '95) and establish a highly desirable win-win situation: either (symmetric) cryptography exists unconditionally, or all NP problems can be solved efficiently on the average. Motivated by the lack of progress on this seemingly very hard question, we initiate the investigation of weaker yet meaningful candidate win-win results of the following type: either there are *fine-grained* one-way functions (FGOWF), or nontrivial speedups can be obtained for all NP problems on the average. FGOWFs only require a fixed polynomial gap (as opposed to superpolynomial) between the running time of the function and the running time of an inverter. We obtain three main results: Construction. We show that if there is an NP language having a very strong form of average-case hardness, which we call *block finding hardness*, then FGOWF exist. We provide heuristic support for this very strong average-case hardness notion by showing that it holds for a random language. Then, we study whether weaker (and more natural) forms of average-case hardness could already suffice to obtain FGOWF, and obtain two negative results: Separation I. We provide a strong oracle separation for the implication (exponentially average-case hard languages exist => FGOWF exist). Separation II. We provide a second strong negative result for an even weaker candidate win-win result. Namely, we rule out a black-box proof for the implication (exponentially average-case hard language *whose hardness amplifies optimally through parallel repetitions* exist => FGOWF exist). This separation forms the core technical contribution of our work.
2022
EUROCRYPT
Cryptanalysis of Candidate Obfuscators for Affine Determinant Programs 📺
At ITCS 2020, Bartusek et al. proposed a candidate indistinguishability obfuscator (iO) for affine determinant programs (ADPs). The candidate is special since it is the only unbroken candidate iO to date that does not rely on the hardness of traditional cryptographic assumptions like discrete-log or learning with errors. Instead, it directly applies specific randomization techniques to the underlying ADP. It is relatively efficient compared to the rest of the iO candidates. However, the obfuscation scheme requires further cryptanalysis since it was not known to be based on any well-formed mathematical assumptions. In this paper, we show cryptanalytic attacks on the iO candidate provided by Bartusek et al. Our attack exploits the weakness of one of the randomization steps in the candidate. The attack applies to a fairly general class of programs. At the end of the paper we discuss plausible countermeasures to defend against our attacks.
2022
EUROCRYPT
A Fast and Simple Partially Oblivious PRF, with Applications 📺
We build the first construction of a partially oblivious pseudorandom function (POPRF) that does not rely on bilinear pairings. Our construction can be viewed as combining elements of the 2HashDH OPRF of Jarecki, Kiayias, and Krawczyk with the Dodis-Yampolskiy PRF. We analyze our POPRF’s security in the random oracle model via reduction to a new one-more gap strong Diffie-Hellman inversion assumption. The most significant technical challenge is establishing confidence in the new assumption, which requires new proof techniques that enable us to show that its hardness is implied by the q-DL assumption in the algebraic group model. Our new construction is as fast as the current, standards-track OPRF 2HashDH protocol, yet provides a new degree of flexibility useful in a variety of applications. We show how POPRFs can be used to prevent token hoarding attacks against Privacy Pass, reduce key management complexity in the OPAQUE password authenticated key exchange protocol, and ensure stronger security for password breach alerting services.
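For context, the two building blocks named above have the following well-known forms; the paper's POPRF combines elements of both, and the exact composition (in particular how the public input enters) is as specified there.

    \text{2HashDH OPRF:}\quad F_k(x) = H_2\bigl(x,\, H_1(x)^{k}\bigr), \qquad \text{Dodis--Yampolskiy PRF:}\quad f_k(x) = g^{1/(k+x)},

where $H_1$ hashes into a prime-order group generated by $g$ and $H_2$ is an output hash. Roughly, the Dodis-Yampolskiy-style exponent offers a natural slot for a public (non-oblivious) input, which is what "partially oblivious" refers to.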
2022
EUROCRYPT
Rubato: Noisy Ciphers for Approximate Homomorphic Encryption 📺
A transciphering framework converts a symmetric ciphertext into a homomorphic ciphertext on the server-side, reducing computational and communication overload on the client-side. In Asiacrypt 2021, Cho et al. proposed the RtF framework that supports approximate computation. In this paper, we propose a family of noisy ciphers, dubbed Rubato, with a novel design strategy of introducing noise to a symmetric cipher of a low algebraic degree. With this strategy, the multiplicative complexity of the cipher is significantly reduced, compared to existing HE-friendly ciphers, without degrading the overall security. More precisely, given a moderate block size (16 to 64), Rubato enjoys a low multiplicative depth (2 to 5) and a small number of multiplications per encrypted word (2.1 to 6.25) at the cost of slightly larger ciphertext expansion (1.26 to 1.31). The security of Rubato is supported by comprehensive analysis including symmetric and LWE cryptanalysis. Compared to HERA within the RtF framework, client-side and server-side throughput is improved by 22.9% and 32.2%, respectively, at the cost of only 1.6% larger ciphertext expansion.
2022
EUROCRYPT
A Complete Characterization of Game-Theoretically Fair, Multi-Party Coin Toss 📺
Cleve's celebrated lower bound (STOC'86) showed that a de facto strong fairness notion is impossible in 2-party coin toss, i.e., the corrupt party always has a strategy of biasing the honest party's outcome by a noticeable amount. Nonetheless, Blum's famous coin-tossing protocol (CRYPTO'81) achieves a strictly weaker "game-theoretic" notion of fairness: specifically, it is a 2-party coin toss protocol in which neither party can bias the outcome towards its own preference, and thus the honest protocol forms a Nash equilibrium in which neither party would want to deviate. Surprisingly, an n-party analog of Blum's famous coin toss protocol was not studied until recently. The work by Chung et al. (TCC'18) was the first to explore the feasibility of game-theoretically fair n-party coin toss in the presence of corrupt majority. We may assume that each party has a publicly stated preference for either the bit 0 or 1, and if the outcome agrees with the party's preference, it obtains utility 1; else it obtains nothing. A natural game-theoretic formulation is to require that the honest protocol form a coalition-resistant Nash equilibrium, i.e., no coalition should have incentive to deviate from the honest behavior. Chung et al. phrased this game-theoretic notion as "cooperative-strategy-proofness" or "CSP-fairness" for short. Unfortunately, Chung et al. showed that under (n-1)-sized coalitions, it is impossible to design such a CSP-fair coin toss protocol, unless all parties except one prefer the same bit. In this paper, we show that the impossibility of Chung et al. is in fact not as broad as it may seem. When coalitions are of majority size but smaller than $n-1$, we can indeed get feasibility results in some meaningful parameter regimes. We give a complete characterization of the regime in which CSP-fair coin toss is possible, by providing matching upper and lower bounds. Our complete characterization theorem also shows that the mathematical structure of game-theoretic fairness is starkly different from the de facto strong fairness notion in the multi-party computation literature.
2022
EUROCRYPT
Single-Server Private Information Retrieval with Sublinear Amortized Time 📺
We construct new private-information-retrieval protocols in the single-server setting. Our schemes allow a client to privately fetch a sequence of database records from a server, while the server answers each query in average time sublinear in the database size. Specifically, we introduce the first single-server private-information-retrieval schemes that have sublinear amortized server time, require sublinear additional storage, and allow the client to make her queries adaptively. Our protocols rely only on standard cryptographic assumptions (decision Diffie-Hellman, quadratic residuosity, learning with errors, etc.). They work by having the client first fetch a small "hint" about the database contents from the server. Generating this hint requires server time linear in the database size. Thereafter, the client can use the hint to make a bounded number of adaptive queries to the server, which the server answers in sublinear time--yielding sublinear amortized cost. Finally, we give lower bounds proving that our most efficient scheme is optimal with respect to the trade-off it achieves between server online time and client storage.
2022
EUROCRYPT
McEliece needs a Break -- Solving McEliece-1284 and Quasi-Cyclic-2918 with Modern ISD 📺
With the recent shift to post-quantum algorithms it becomes increasingly important to provide precise bit-security estimates for code-based cryptography such as McEliece and quasi-cyclic schemes like BIKE and HQC. While there has been significant progress on information set decoding (ISD) algorithms within the last decade, it is still unclear to which extent this affects current cryptographic security estimates. We provide the first concrete implementations for representation-based ISD, such as May-Meurer-Thomae (MMT) or Becker-Joux-May-Meurer (BJMM), that are parameter-optimized for the McEliece and quasi-cyclic setting. Although MMT and BJMM consume more memory than naive ISD algorithms like Prange, we demonstrate that these algorithms lead to significant speedups for practical cryptanalysis already for cryptographic instances of medium security level (around 60 bit). More concretely, we provide data for the record computations of McEliece-1223 and McEliece-1284 (old record: 1161), and for the quasi-cyclic setting up to dimension 2918 (before: 1938). Based on our record computations we extrapolate to the bit-security level of the proposed BIKE, HQC and McEliece parameters in NIST's standardization process. For BIKE/HQC, we also show how to transfer the Decoding-One-Out-of-Many (DOOM) technique to MMT/BJMM. Although we achieve significant DOOM speedups, our estimates confirm the bit-security levels of BIKE and HQC. For the proposed McEliece round-3 parameter sets of 192 and 256 bit, however, our extrapolation indicates a security level overestimate by roughly 20 and 10 bits, respectively, i.e., the high-security McEliece instantiations may be a bit less secure than desired.
2022
EUROCRYPT
Field Instruction Multiple Data 📺
Fully homomorphic encryption (FHE) has flourished since it was first constructed by Gentry (STOC 2009). Single instruction multiple data (SIMD) gave rise to efficient homomorphic operations on vectors in \((\mathbb{F}_{t^d})^\ell\), for prime \(t\). RLWE instantiated with cyclotomic polynomials of the form \(X^{2^N}+1\) dominate implementations of FHE due to highly efficient fast Fourier transforms. However, this choice yields very short SIMD plaintext vectors and high degree extension fields, e.g. \(\ell < 100, d > 100\) for small primes (\(t = 3, 5, \dots\)). In this work, we describe a method to encode more data on top of SIMD, \emph{Field Instruction Multiple Data}, applying reverse multiplication friendly embedding (RMFE) to FHE. With RMFE, length-\(k\) \(\mathbb{F}_{t}\) vectors can be encoded into \(\mathbb{F}_{t^d}\) and multiplied once. The results have to be recoded (decoded and then re-encoded) before further multiplications can be done. We introduce an FHE-specific technique to additionally evaluate arbitrary linear transformations on encoded vectors for free during the FHE recode operation. On top of that, we present two optimizations to unlock high degree extension fields with small \(t\) for homomorphic computation: \(r\)-fold RMFE, which allows products of up to \(2^r\) encoded vectors before recoding, and a three-stage recode process for RMFEs obtained by composing two smaller RMFEs. Experiments were performed to evaluate the effectiveness of FIMD from various RMFEs compared to standard SIMD operations. Overall, we found that FIMD generally had \(>2\times\) better (amortized) multiplication times compared to FHE for the same amount of data, while requiring almost \(k/2\times\) fewer ciphertexts.
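For reference, a reverse multiplication friendly embedding of length-\(k\) vectors over \(\mathbb{F}_t\) into \(\mathbb{F}_{t^d}\) is a pair of \(\mathbb{F}_t\)-linear maps \((\phi, \psi)\) with the standard property

    \phi: \mathbb{F}_t^{k} \to \mathbb{F}_{t^d}, \qquad \psi: \mathbb{F}_{t^d} \to \mathbb{F}_t^{k}, \qquad \psi\bigl(\phi(\mathbf{x}) \cdot \phi(\mathbf{y})\bigr) = \mathbf{x} \odot \mathbf{y} \quad \text{for all } \mathbf{x}, \mathbf{y} \in \mathbb{F}_t^{k},

where \(\odot\) is the component-wise product. This is why a product of two encoded vectors must be recoded (apply \(\psi\), then encode again with \(\phi\)) before entering another multiplication, and why linear transformations that come for free during recoding are valuable.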
2022
EUROCRYPT
Mitaka: A Simpler, Parallelizable, Maskable Variant of Falcon 📺
This work describes the Mitaka signature scheme: a new hash-and-sign signature scheme over NTRU lattices which can be seen as a variant of NIST finalist Falcon. It achieves comparable efficiency but is considerably simpler, online/offline, and easier to parallelize and protect against side-channels, thus offering significant advantages from an implementation standpoint. It is also much more versatile in terms of parameter selection. We obtain this signature scheme by replacing the FFO lattice Gaussian sampler in Falcon by the “hybrid” sampler of Ducas and Prest, for which we carry out a detailed and corrected security analysis. In principle, such a change can result in a substantial security loss, but we show that this loss can be largely mitigated using new techniques in key generation that allow us to construct much higher quality lattice trapdoors for the hybrid sampler relatively cheaply. This new approach can also be instantiated on a wide variety of base fields, in contrast with Falcon's restriction to power-of-two cyclotomics. We also introduce a new lattice Gaussian sampler with the same quality and efficiency, but which is moreover compatible with the integral matrix Gram root technique of Ducas et al., allowing us to avoid floating point arithmetic. This makes it possible to realize the same signature scheme as Mitaka efficiently on platforms with poor support for floating point numbers. Finally, we describe a provably secure masking of Mitaka. More precisely, we introduce novel gadgets that allow provable masking at any order at much lower cost than previous masking techniques for Gaussian sampling-based signature schemes, for cheap and dependable side-channel protection.
2022
EUROCRYPT
On IND-qCCA security in the ROM and its applications: CPA security is sufficient for TLS 1.3 📺
Bounded IND-CCA security (IND-qCCA) is a notion similar to the traditional IND-CCA security, except the adversary is restricted to a constant number q of decryption/decapsulation queries. We show in this work that IND-qCCA is easily obtained from any passively secure PKE in the (Q)ROM. That is, simply adding a confirmation hash or computing the key as the hash of the plaintext and ciphertext yields an IND-qCCA KEM. In particular, there is no need for derandomization or re-encryption as in the Fujisaki-Okamoto (FO) transform (JoC 2013). This makes the decapsulation process of such an IND-qCCA KEM much more efficient than its FO-derived counterpart. In addition, IND-qCCA KEMs could be used in the recently proposed KEMTLS protocol (ACM CCS 2020) that requires IND-1CCA ephemeral key-exchange mechanisms, or in TLS 1.3. Then, using similar proof techniques, we show that CPA-secure KEMs are sufficient for the TLS 1.3 handshake to be secure, solving an open problem in the ROM. In turn, this implies that the PRF-ODH assumption used to prove the security of TLS 1.3 is not necessary in the ROM. We also highlight and briefly discuss several use cases of IND-1CCA KEMs in protocols and ratcheting primitives.
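A minimal sketch of the "key as hash of plaintext and ciphertext" variant mentioned above, written in Python with the hash modeled by SHA-256 and the passively secure PKE supplied as callbacks (all names are illustrative and not the paper's notation):

    import os, hashlib

    def encaps(pke_encrypt, pk):
        m = os.urandom(32)                    # random plaintext/seed
        c = pke_encrypt(pk, m)                # one call to the passively secure PKE
        K = hashlib.sha256(m + c).digest()    # session key binds plaintext and ciphertext
        return c, K

    def decaps(pke_decrypt, sk, c):
        m = pke_decrypt(sk, c)                # no derandomized re-encryption check, unlike FO
        return hashlib.sha256(m + c).digest()

Decapsulation is a single decryption plus one hash, which is where the efficiency gain over FO-derived KEMs comes from; in exchange, security only holds for a bounded number q of decapsulation queries, which suffices for KEMTLS-style and TLS 1.3 handshakes.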
2022
EUROCRYPT
Online-Extractability in the Quantum Random-Oracle Model 📺
We show the following generic result: Whenever a quantum query algorithm in the quantum random-oracle model outputs a classical value t that is promised to be in some tight relation with H(x) for some x, then x can be efficiently extracted with almost certainty. The extraction is by means of a suitable simulation of the random oracle and works online, meaning that it is straightline, i.e., without rewinding, and on-the-fly, i.e., during the protocol execution and without disturbing it. The technical core of our result is a new commutator bound that bounds the operator norm of the commutator of the unitary operator that describes the evolution of the compressed oracle (which is used to simulate the random oracle above) and of the measurement that extracts x. We show two applications of our generic online extractability result. We show tight online extractability of commit-and-open Σ-protocols in the quantum setting, and we offer the first complete post-quantum security proof of the textbook Fujisaki-Okamoto transformation, i.e., without adjustments to facilitate the proof, including concrete security bounds.
2022
EUROCRYPT
A Novel Completeness Test for Leakage Models and its Application to Side Channel Attacks and Responsibly Engineered Simulators 📺
Today's side channel attack targets are often complex devices in which instructions are processed in parallel and work on 32-bit data words. Consequently, the state that is involved in producing leakage in these modern devices is large, and basing evaluations (i.e. worst case attacks) and simulators on a potentially incomplete state can lead to wrong conclusions. We put forward a novel notion for the “completeness” of an assumed state, together with an efficient statistical test that is based on “collapsed models”. Our novel test can be used to recover a state that contains multiple 32-bit variables in a grey box setting. We illustrate how our novel test can help to guide side channel attacks and we reveal new attack vectors for existing implementations. We then demonstrate the application of this test in the context of leakage modelling for leakage simulators and confirm that even the most recent leakage simulators do not capture all available leakage of their respective target devices. Our new test enables finding nominal models that capture all available leakage but do not give a helping hand to adversaries. Thereby we make a first step towards leakage simulators that are responsibly engineered.
2022
EUROCRYPT
Towards Micro-Architectural Leakage Simulators: Reverse Engineering Micro-Architectural Leakage Features is Practical 📺
Leakage simulators offer the tantalising promise of easy and quick testing of software with respect to the presence of side channel leakage. The quality of their built-in leakage models is therefore crucial; this includes the faithful inclusion of micro-architectural leakage. Micro-architectural leakage is a reality even on low- to mid-range commercial processors, such as the ARM Cortex M series. Dealing with it seems initially infeasible in a grey box setting: how should we describe it if micro-architectural elements are not publicly known? We demonstrate, for the first time, that it is feasible, using a recent leakage modelling technique, to reverse engineer significant elements of the micro-architectural leakage of a commercial processor. Our approach first recovers the micro-architectural leakage of each stage in the pipeline, and the leakage of elements that are known to produce glitches. Using the reverse engineered leakage features we build an enhanced version of the popular leakage simulator ELMO.
2022
EUROCRYPT
Fiat-Shamir Bulletproofs are Non-Malleable (in the Algebraic Group Model) 📺
Bulletproofs (B{\"u}nz et al.~IEEE S\&P 2018) are a celebrated ZK proof system that allows for short and efficient proofs, and have been implemented and deployed in several real-world systems. In practice, they are most often implemented in their \emph{non-interactive} version obtained using the Fiat-Shamir transform, despite the lack of a formal proof of security for this setting. Prior to this work, there was no evidence that \emph{malleability attacks} were not possible against Fiat-Shamir Bulletproofs. Malleability attacks can lead to very severe vulnerabilities, as they allow an adversary to forge proofs re-using or modifying parts of the proofs provided by the honest parties. In this paper, we show for the first time that Bulletproofs (or any other similar multi-round proof system satisfying some form of \emph{weak unique response} property) achieve \emph{simulation-extractability} in the \emph{algebraic group model}. This implies that Fiat-Shamir Bulletproofs are \emph{non-malleable}.
2022
EUROCRYPT
On the Lattice Isomorphism Problem, Quadratic Forms, Remarkable Lattices, and Cryptography 📺
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds. This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance. In this work, we provide generic realizations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the lattice isomorphism problem (LIP). More specifically, we provide: - a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014). - a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism. This implies an identification scheme based on search-LIP. - a key encapsulation mechanism (KEM) scheme and a hash-then-sign signature scheme, both based on distinguish-LIP. The purpose of this approach is for remarkable lattices to improve the security and performance of lattice-based cryptography. For example, decoding within poly-logarithmic factor from Minkowski's bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski's bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloan), but these do not perfectly fit our need as their duals have poor minimal distance.
2022
EUROCRYPT
Limits of Polynomial Packings for $\mathbb{Z}_{p^k}$ and $\mathbb{F}_{p^k}$ 📺
We formally define polynomial packing methods and initiate a unified study of related concepts in various contexts of cryptography. This includes homomorphic encryption (HE) packing and reverse multiplication-friendly embedding (RMFE) in information-theoretically secure multi-party computation (MPC). We prove several upper bounds and impossibility results on packing methods for $\mathbb{Z}_{p^k}$ or $\mathbb{F}_{p^k}$-messages into $\mathbb{Z}_{p^t}[x]/f(x)$ in terms of (i) packing density, (ii) level-consistency, and (iii) surjectivity. These results have implications on recent development of HE-based MPC over $\mathbb{Z}_{2^k}$ secure against actively corrupted majority and provide new proofs for upper bounds on RMFE.
2022
EUROCRYPT
High-Precision Bootstrapping for Approximate Homomorphic Encryption by Error Variance Minimization 📺
The Cheon-Kim-Kim-Song (CKKS) scheme (Asiacrypt'17) is one of the most promising homomorphic encryption (HE) schemes as it enables privacy-preserving computing over real (or complex) numbers. It is known that bootstrapping is the most challenging part of the CKKS scheme. Further, homomorphic evaluation of modular reduction is the core of the CKKS bootstrapping. As modular reduction is not represented by the addition and multiplication of complex numbers, approximate polynomials for modular reduction should be used. The best-known techniques (Eurocrypt'21) use a polynomial approximation for trigonometric functions and their composition. However, all the previous methods are based on an indirect approximation, and thus it requires lots of multiplicative depth to achieve high accuracy. This paper proposes a direct polynomial approximation of modular reduction for CKKS bootstrapping, which is optimal in error variance and depth. Further, we propose an efficient algorithm, namely the lazy baby-step giant-step (BSGS) algorithm, to homomorphically evaluate the approximate polynomial, utilizing the lazy relinearization/rescaling technique. The lazy-BSGS reduces the computational complexity by half compared to the ordinary BSGS algorithm. The performance improvement for the CKKS scheme by the proposed algorithm is verified by implementation using HE libraries. The implementation results show that the proposed method has a multiplicative depth of 10 for modular reduction to achieve the state-of-the-art accuracy, while the previous methods have depths of 11 to 12. Moreover, we achieve higher accuracy within a small multiplicative depth, for example, 93-bit within multiplicative depth 11.
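As a rough illustration of the evaluation strategy mentioned above, here is a plaintext baby-step giant-step (BSGS) polynomial evaluation in Python. It only shows the precompute-baby-powers / combine-with-giant-powers structure that the paper's lazy-BSGS algorithm applies homomorphically; it does not model CKKS ciphertexts, relinearization, or rescaling, and the concrete code is an assumption of ours, not the authors' algorithm.

# Plaintext BSGS polynomial evaluation: precompute x^0..x^{b-1} and the giant
# powers x^{b*j}, then combine chunk by chunk (~2*sqrt(deg) multiplications by x).
import math

def bsgs_eval(coeffs, x):
    n = len(coeffs)
    b = max(1, math.isqrt(n))                 # baby-step size
    g = -(-n // b)                            # number of giant steps (ceiling)
    baby = [1]
    for _ in range(b - 1):
        baby.append(baby[-1] * x)             # x^0, ..., x^{b-1}
    xb = baby[-1] * x                         # x^b
    result, giant = 0, 1                      # giant = x^{b*j}
    for j in range(g):
        chunk = sum(coeffs[j*b + i] * baby[i] for i in range(b) if j*b + i < n)
        result += giant * chunk
        giant *= xb
    return result

assert bsgs_eval([1, 2, 3, 4, 5], 7) == 1 + 2*7 + 3*7**2 + 4*7**3 + 5*7**4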
2022
EUROCRYPT
Dynamic Collusion Bounded Functional Encryption from Identity-Based Encryption 📺
Functional Encryption is a powerful notion of encryption in which each decryption key is associated with a function f such that decryption recovers the function evaluation f(m). Informally, security states that a user with access to function keys sk_f1, sk_f2, ... (and so on) can only learn f1(m), f2(m), ... (and so on) but nothing more about the message. The system is said to be q-bounded collusion resistant if the security holds as long as an adversary gets access to at most q = q(λ) function keys. A major drawback of such statically bounded collusion systems is that the collusion bound q must be declared at setup time and is fixed for the entire lifetime of the system. We initiate the study of dynamically bounded collusion resistant functional encryption systems which provide more flexibility in terms of selecting the collusion bound, while reaping the benefits of statically bounded collusion FE systems (such as quantum resistance, simulation security, and general assumptions). Briefly, the virtues of a dynamically bounded scheme can be summarized as: -Fine-grained individualized selection: It lets each encryptor select the collusion bound by weighing the trade-off between performance overhead and the amount of collusion resilience. -Evolving encryption strategies: Since the system is no longer tied to a single collusion bound, it allows one to dynamically adjust the desired collusion resilience based on any number of evolving factors, such as the age of the system or the number of active users. -Ease and simplicity of updatability: None of the system parameters have to be updated when adjusting the collusion bound. That is, the same key sk_f can be used to decrypt ciphertexts for collusion bound q = 2 as well as q = 2^λ. We construct such a dynamically bounded functional encryption scheme for the class of all polynomial-size circuits under the general assumption of Identity-Based Encryption.
2022
EUROCRYPT
Round-Optimal Byzantine Agreement 📺
Byzantine agreement is a fundamental primitive in cryptography and distributed computing, and minimizing its round complexity is of paramount importance. It has long been known that any randomized $r$-round protocol must fail with probability at least $(c\cdot r)^{-r}$, for some constant $c$, when the number of corruptions is linear in the number of parties, $t = \Theta(n)$. On the other hand, current protocols fail with probability at least $2^{-r}$. Whether this lower bound on the failure probability can be matched has remained unknown. In this work, we resolve this long-standing open question. We present a protocol that matches the lower bound up to constant factors. Our results hold under a (strongly rushing) adaptive adversary that can corrupt up to $t = (1-\epsilon)n/2$ parties, and our protocols use a public-key infrastructure and a trusted setup for unique threshold signatures. This is the first protocol that decreases the failure probability (overall) by a \emph{super-constant} factor per round.
2022
EUROCRYPT
Post-Quantum Security of the Even-Mansour Cipher 📺
The Even-Mansour cipher is a simple method for constructing a (keyed) pseudorandom permutation $E$ from a public random permutation~$P:\bool^n \rightarrow \bool^n$. It is a core ingredient in a wide array of symmetric-key constructions, including several lightweight cryptosystems presently under consideration for standardization by NIST. It is secure against classical attacks, with optimal attacks requiring $q_E$ queries to $E$ and $q_P$ queries to $P$ such that $q_P \cdot q_E \approx 2^n$. If the attacker is given \emph{quantum} access to both $E$ and $P$, however, the cipher is completely insecure, with attacks using $q_P = q_E = O(n)$ queries known. In any plausible real-world setting, however, a quantum attacker would have only \emph{classical} access to the keyed permutation $E$ implemented by honest parties, while retaining quantum access to $P$. Attacks in this setting with $q_P^2 \cdot q_E \approx 2^n$ are known, showing that security degrades as compared to the purely classical case, but leaving open the question as to whether the Even-Mansour cipher can still be proven secure in this natural ``post-quantum'' setting. We resolve this open question, showing that any attack in this post-quantum setting requires $q^2_P \cdot q_E + q_P \cdot q_E^2 \approx 2^n$. Our results apply to both the two-key and single-key variants of Even-Mansour. Along the way, we establish several generalizations of results from prior work on quantum-query lower bounds that may be of independent interest.
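For reference, a toy sketch of the two-key Even-Mansour construction $E_{k_1,k_2}(x) = P(x \oplus k_1) \oplus k_2$ in Python, with P instantiated as a fixed public random permutation of a 16-bit domain. The tiny parameters are purely illustrative and offer no security; the sketch is only meant to fix the syntax of the cipher analysed in the abstract.

# Toy two-key Even-Mansour over a 16-bit domain with a fixed public permutation P.
import random

N = 16                                   # toy block size in bits
perm = list(range(2**N))
random.Random(0).shuffle(perm)           # P: a fixed, public random permutation
inv_perm = [0] * (2**N)
for x, y in enumerate(perm):
    inv_perm[y] = x

def em_encrypt(k1, k2, x):
    return perm[x ^ k1] ^ k2             # E(x) = P(x XOR k1) XOR k2

def em_decrypt(k1, k2, y):
    return inv_perm[y ^ k2] ^ k1

k1, k2 = 0x1234, 0xBEEF
assert em_decrypt(k1, k2, em_encrypt(k1, k2, 0x0F0F)) == 0x0F0F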
2022
EUROCRYPT
Round-Optimal and Communication-Efficient Multiparty Computation 📺
Typical approaches for minimizing the round complexity of multi-party computation (MPC) come at the cost of increased communication complexity (CC) or the reliance on setup assumptions. A notable exception is the recent work of Ananth et al. [TCC 2019], which used Functional Encryption (FE) combiners to obtain a round optimal (two-round) semi-honest MPC in the plain model with CC proportional to the depth and input-output length of the circuit being computed---we refer to such protocols as circuit scalable. This leaves open the question of obtaining communication efficient protocols that are secure against malicious adversaries in the plain model, which our work solves. Concretely, our two main contributions are: 1) We provide a round-preserving black-box compiler that compiles a wide class of MPC protocols into circuit-scalable maliciously secure MPC protocols in the plain model, assuming (succinct) FE combiners. 2) We provide a round-preserving black-box compiler that compiles a wide class of MPC protocols into circuit-independent---i.e., with CC that depends only on the input-output length of the circuit---maliciously secure MPC protocols in the plain model, assuming Multi-Key Fully-Homomorphic Encryption (MFHE). Our constructions are based on a new compiler that turns a wide class of MPC protocols into k-delayed-input function MPC protocols (a notion we introduce), where the function to be computed is specified only in the k-th round of the protocol. As immediate corollaries of our two compilers, we derive (1) the first round-optimal and circuit-scalable maliciously secure MPC, and (2) the first round-optimal and circuit-independent maliciously secure MPC in the plain model. The latter MPC achieves the best to-date CC for a round-optimal malicious MPC protocol. In fact, it is even communication-optimal when the output size of the function being evaluated is smaller than its input size (e.g., for boolean functions). All of our results are based on standard polynomial time assumptions.
2022
EUROCRYPT
Incompressible Cryptography 📺
Incompressible encryption allows us to make the ciphertext size flexibly large and ensures that an adversary learns nothing about the encrypted data, even if the decryption key later leaks, unless she stores essentially the entire ciphertext. Incompressible signatures can be made arbitrarily large and ensure that an adversary cannot produce a signature on any message, even one she has seen signed before, unless she stores one of the signatures essentially in its entirety. In this work, we give simple constructions of both incompressible public-key encryption and signatures under minimal assumptions. Furthermore, large incompressible ciphertexts (resp. signatures) can be decrypted (resp. verified) in a streaming manner with low storage. In particular, these notions strengthen the related concepts of disappearing encryption and signatures, recently introduced by Guan and Zhandry (TCC 2021), whose previous constructions relied on sophisticated techniques and strong, non-standard assumptions. We extend our constructions to achieve an optimal "rate", meaning the large ciphertexts (resp. signatures) can contain almost equally large messages, at the cost of stronger assumptions.
2022
EUROCRYPT
Multi-Designated Receiver Signed Public Key Encryption 📺
This paper introduces a new type of public-key encryption scheme, called Multi-Designated Receiver Signed Public Key Encryption (MDRS-PKE), which allows a sender to select a set of designated receivers and both encrypt and sign a message that only these receivers will be able to read and authenticate (confidentiality and authenticity). An MDRS-PKE scheme provides several additional security properties which allow for a fundamentally new type of communication not considered before. Namely, it satisfies consistency---a dishonest sender cannot make different receivers receive different messages---off-the-record---a dishonest receiver cannot convince a third party of what message was sent (e.g., by selling their secret key), because dishonest receivers have the ability to forge signatures---and anonymity---parties that are not in the set of designated receivers cannot identify who the sender and designated receivers are. We give a construction of an MDRS-PKE scheme from standard assumptions. At the core of our construction lies yet another new type of public-key encryption scheme, which is of independent interest: Public Key Encryption for Broadcast (PKEBC) which provides all the security guarantees of MDRS-PKE schemes, except authenticity. We note that MDRS-PKE schemes give strictly more guarantees than Multi-Designated Verifier Signature (MDVS) schemes with privacy of identities. This in particular means that our MDRS-PKE construction yields the first MDVS scheme with privacy of identities from standard assumptions. The only prior construction of such schemes was based on Verifiable Functional Encryption for general circuits (Damgard et al., TCC '20).
2022
EUROCRYPT
On Succinct Non-Interactive Arguments in Relativized Worlds 📺
Succinct non-interactive arguments of knowledge (SNARKs) are cryptographic proofs with strong efficiency properties. Applications of SNARKs often involve proving computations that include the SNARK verifier, a technique called recursive composition. Unfortunately, SNARKs with desirable features such as a transparent (public-coin) setup are known only in the random oracle model (ROM). In applications this oracle must be heuristically instantiated and used in a non-black-box way. In this paper we identify a natural oracle model, the low-degree random oracle model, in which there exist transparent SNARKs for all NP computations *relative to this oracle*. Informally, letting $O$ be a low-degree encoding of a random oracle, and assuming the existence of (standard-model) collision-resistant hash functions, there exist SNARKs relative to $O$ for all languages in $NP^{O}$. Such a SNARK can directly prove a computation about its own verifier. To analyze this model, we introduce a more general framework, the *linear code random oracle model* (LCROM). We show how to obtain SNARKs in the LCROM for computations that query the oracle, given an *accumulation scheme* for oracle queries. Then we construct such an accumulation scheme for the special case of a low degree random oracle.
2022
EUROCRYPT
Refined Cryptanalysis of the GPRS Ciphers GEA-1 and GEA-2 📺
At EUROCRYPT~2021, Beierle et al. presented the first public analysis of the GPRS ciphers GEA-1 and GEA-2. They showed that although GEA-1 uses a 64-bit session key, it can be recovered with the knowledge of only 65 bits of keystream in time $2^{40}$ using $44$ GiB of memory. The attack exploits a weakness in the initialization process of the cipher that was presumably hidden intentionally by the designers to reduce its security. While no such weakness was found for GEA-2, the authors presented an attack on this cipher with time complexity of about $2^{45}$. The main practical obstacle is the required knowledge of 12800 bits of keystream used to encrypt a full GPRS frame. Variants of the attack are applicable (but more expensive) when given less consecutive keystream bits, or when the available keystream is fragmented (it contains no long consecutive block). In this paper, we improve and complement the previous analysis of GEA-1 and GEA-2. For GEA-1, we devise an attack in which the memory complexity is reduced by a factor of about $2^{13} = 8192$ from $44$ GiB to about 4 MiB, while the time complexity remains $2^{40}$. Our implementation recovers the GEA-1 session key in average time of 2.5~hours on a modern laptop. For GEA-2, we describe two attacks that complement the analysis of Beierle et al. The first attack obtains a linear tradeoff between the number of consecutive keystream bits available to the attacker (denoted by $\ell$) and the time complexity. It improves upon the previous attack in the range of (roughly) $\ell \leq 7000$. Specifically, for $\ell = 1100$ the complexity of our attack is about $2^{54}$, while the previous one is not faster than the $2^{64}$ brute force complexity. In case the available keystream is fragmented, our second attack reduces the memory complexity of the previous attack by a factor of $512$ from 32 GiB to 64 MiB with no time complexity penalty. Our attacks are based on new combinations of stream cipher cryptanalytic techniques and algorithmic techniques used in other contexts (such as solving the $k$-XOR problem).
2022
EUROCRYPT
Families of SNARK-friendly 2-chains of elliptic curves 📺
At CANS'20, El Housni and Guillevic introduced a new 2-chain of pairing-friendly elliptic curves for recursive zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs) made of the former BLS12-377 curve (a Barreto--Lynn--Scott curve over a 377-bit prime field) and the new BW6-761 curve (a Brezing--Weng curve of embedding degree 6 over a 761-bit prime field). First we generalise the curve construction, the pairing formulas ($e \colon \G_1 \times \G_2 \to \G_T$) and the group operations to any BW6 curve defined on top of any BLS12 curve, forming a family of 2-chain pairing-friendly curves. Second, we investigate other possible 2-chain families made on top of the BLS12 and BLS24 curves. We compare BW6 to Cocks--Pinch curves of higher embedding degrees 8 and 12 (CP8, CP12) at the 128-bit security level. We derive formulas for efficient optimal ate and optimal Tate pairings on our new CP curves. We show that for both BLS12 and BLS24, the BW6 construction always gives the fastest pairing and curve arithmetic compared to Cocks-Pinch curves. Finally, we suggest a short list of curves suitable for Groth16 and KZG-based universal SNARKs and present an optimized implementation of these curves. Based on Groth16 and PlonK (a KZG-based SNARK) implementations in the \texttt{gnark} ecosystem, we obtain that the BLS12-377/BW6-761 pair is optimized for the former while the BLS24-315/BW6-672 pair is optimized for the latter.
2022
EUROCRYPT
Zero-Knowledge IOPs with Linear-Time Prover and Polylogarithmic-Time Verifier 📺
Interactive oracle proofs (IOPs) are a multi-round generalization of probabilistically checkable proofs that play a fundamental role in the construction of efficient cryptographic proofs. We present an IOP that simultaneously achieves the properties of zero knowledge, linear-time proving, and polylogarithmic-time verification. We construct a zero-knowledge IOP where, for the satisfiability of an $N$-gate arithmetic circuit over any field of size $\Omega(N)$, the prover uses $O(N)$ field operations and the verifier uses $\polylog(N)$ field operations (with proof length $O(N)$ and query complexity $\polylog(N)$). Polylogarithmic verification is achieved in the holographic setting for every circuit (the verifier has oracle access to a linear-time-computable encoding of the circuit whose satisfiability is being proved). Our result implies progress on a basic goal in the area of efficient zero knowledge. Via a known transformation, we obtain a zero knowledge argument system where the prover runs in linear time and the verifier runs in polylogarithmic time; the construction is plausibly post-quantum and only makes a black-box use of lightweight cryptography (collision-resistant hash functions).
2022
EUROCRYPT
Anamorphic Encryption: Private Communication against a Dictator 📺
Cryptosystems have been developed over the years under the typical prevalent setting which assumes that the receiver’s key is kept secure from the adversary, and that the choice of the message to be sent is freely performed by the sender and is kept secure from the adversary as well. Under these fundamental and basic operational assumptions, modern Cryptography has flourished over the last half a century or so, with amazing achievements: New systems (including public-key Cryptography), beautiful and useful models (including security definitions such as semantic security), and new primitives (such as zero-knowledge proofs) have been developed. Furthermore, these fundamental achievements have been translated into actual working systems, and span many of the daily human activities over the Internet. However, in recent years, there is an overgrowing pressure from many governments to allow the government itself access to keys and messages of encryption systems (under various names: escrow encryption, emergency access, communication decency acts, etc.). Numerous non-direct arguments against such policies have been raised, such as “the bad guys can utilize other encryption system” so all other cryptosystems have to be declared illegal, or that “allowing the government access is an ill-advised policy since it creates a natural weak systems security point, which may attract others (to masquerade as the government).” It remains a fundamental open issue to show directly that the above mentioned efforts by a government (called here “a dictator” for brevity) which mandate breaking of the basic operational assumption (and disallowing other cryptosystems), is, in fact, a futile exercise. This is a direct technical point which needs to be made and has not been made to date. In this work, as a technical demonstration of the futility of the dictator’s demands, we invent the notion of “Anamorphic Encryption” which shows that even if the dictator gets the keys and the messages used in the system (before anything is sent) and no other system is allowed, there is a covert way within the context of well established public-key cryptosystems for an entity to send secure messages which are, in spite of the stringent dictator conditions, hidden from the dictator itself! We feel that this may be an important direct technical argument against the nature of governments attempts to police the use of strong cryptographic systems, and we hope to stimulate further works in this direction.
2022
EUROCRYPT
Watermarking PRFs against Quantum Adversaries 📺
We initiate the study of software watermarking against quantum adversaries. A quantum adversary generates a quantum state as a pirate software that potentially removes an embedded message from a classical marked software. Extracting an embedded message from quantum pirate software is difficult since measurement could irreversibly alter the quantum state. In software watermarking against classical adversaries, a message extraction algorithm crucially uses the (input-output) behavior of a classical pirate software to extract an embedded message. Even if we instantiate existing watermarking PRFs with quantum-safe building blocks, it is not clear whether they are secure against quantum adversaries due to the quantum-specific property above. Thus, we need entirely new techniques to achieve software watermarking against quantum adversaries. In this work, we define secure watermarking PRFs for quantum adversaries (unremovability against quantum adversaries). We also present two watermarking PRFs as follows. - We construct a privately extractable watermarking PRF against quantum adversaries from the quantum hardness of the learning with errors (LWE) problem. The marking and extraction algorithms use a public parameter and a private extraction key, respectively. The watermarking PRF is unremovable even if adversaries have (the public parameter and) access to the extraction oracle, which returns a result of extraction for a queried quantum circuit. - We construct a publicly extractable watermarking PRF against quantum adversaries from indistinguishability obfuscation (IO) and the quantum hardness of the LWE problem. The marking and extraction algorithms use a public parameter and a public extraction key, respectively. The watermarking PRF is unremovable even if adversaries have the extraction key (and the public parameter). We develop a quantum extraction technique to extract information (a classical string) from a quantum state without destroying the state too much. We also introduce the notion of extraction-less watermarking PRFs as a crucial building block to achieve the results above by combining the tool with our quantum extraction technique.
2022
EUROCRYPT
Non-Interactive Zero-Knowledge Proofs with Fine-Grained Security 📺
We construct the first non-interactive zero-knowledge (NIZK) proof systems in the fine-grained setting where adversaries' resources are bounded and honest users have no more resources than an adversary. More concretely, our setting is the NC1-fine-grained setting, namely, all parties (including adversaries and honest participants) are in NC1. Our NIZK systems are for circuit satisfiability (SAT) under the worst-case assumption, $NC1 \subsetneq L/poly$. As technical contributions, we propose two approaches to construct NIZK in the NC1-fine-grained setting. In stark contrast to the classical Fiat-Shamir transformation, both our approaches start with a simple Sigma-protocol and transform it into NIZKs for circuit SAT without random oracles. Additionally, our second approach first constructs a fully homomorphic encryption (FHE) scheme in the fine-grained setting, which was not known before, as a building block. Compared with the first approach, the resulting NIZK only supports circuits with constant multiplicative depth, while its proof size is independent of the statement circuit size. Extending our approaches, we obtain two NIZK systems in the uniform reference string model and two non-interactive zaps (namely, non-interactive witness-indistinguishability proof systems in the plain model). While the previous constructions from Ball, Dachman-Soled, and Kulkarni (CRYPTO 2020) require provers to run in polynomial time, our constructions are the first with provers in NC1.
2022
EUROCRYPT
Anonymity of NIST PQC Round 3 KEMs 📺
This paper investigates \emph{anonymity} of all NIST PQC Round~3 KEMs: Classic McEliece, Kyber, NTRU, Saber, BIKE, FrodoKEM, HQC, NTRU Prime (Streamlined NTRU Prime and NTRU LPRime), and SIKE. We show the following results: * NTRU is anonymous in the quantum random oracle model (QROM) if the underlying deterministic PKE is strongly disjoint-simulatable. NTRU is collision-free in the QROM. A hybrid PKE scheme constructed from NTRU as KEM and appropriate DEM is anonymous and robust. (Similar results for BIKE, FrodoKEM, HQC, NTRU LPRime, and SIKE hold except one of three parameter sets of HQC.) * Classic McEliece is anonymous in the QROM if the underlying PKE is strongly disjoint-simulatable and a hybrid PKE scheme constructed from it as KEM and appropriate DEM is anonymous. * Grubbs, Maram, and Paterson pointed out that Kyber and Saber have a gap in the current IND-CCA security proof in the QROM (EUROCRYPT 2022). We found that Streamlined NTRU Prime has another technical obstacle for the IND-CCA security proof in the QROM. Those answer the open problem to investigate the anonymity and robustness of NIST PQC Round~3 KEMs posed by Grubbs, Maram, and Paterson (EUROCRYPT 2022). We use strong disjoint-simulatability of the underlying PKE of KEM and strong pseudorandomness and smoothness/sparseness of KEM as the main tools, which will be of independent interest.
2022
EUROCRYPT
EpiGRAM: Practical Garbled RAM 📺
Best Paper Award
Garbled RAM (GRAM) is a powerful technique introduced by Lu and Ostrovsky that equips Garbled Circuit (GC) with a sublinear cost RAM without adding rounds of interaction. While multiple GRAM constructions are known, none are suitable for practice, due to costs that have high constants and poor scaling. We present the first GRAM suitable for practice. For computational security parameter $\kappa$ and for a size-$n$ RAM that stores blocks of size $w = \Omega(\log^2 n)$ bits, our GRAM incurs only amortized $O(w \cdot \log^2 n \cdot \kappa)$ communication and computation per access. We evaluate the concrete cost of our GRAM; our approach outperforms trivial linear-scan-based RAM for as few as $512$ $128$-bit elements.
2022
EUROCRYPT
Efficient Schemes for Committing Authenticated Encryption 📺
This paper provides efficient authenticated-encryption (AE) schemes in which a ciphertext is a commitment to the key. These are extended, at minimal additional cost, to schemes where the ciphertext is a commitment to all encryption inputs, meaning key, nonce, associated data and message. Our primary schemes are modifications of GCM (for basic, unique-nonce AE security) and AES-GCM-SIV (for misuse-resistant AE security) and add both forms of commitment without any increase in ciphertext size. We also give more generic, but somewhat more costly, solutions.
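To make the goal concrete, here is a folklore-style sketch (our own illustration, not the paper's constructions) of key-committing AE: a commitment to the key and nonce travels with the ciphertext and is checked on decryption. The toy HMAC-based encryption and the explicit appended commitment are hypothetical; the paper's point is precisely that GCM and AES-GCM-SIV can be modified to achieve commitment without this ciphertext expansion.

# Toy committing AE: ciphertext = commitment || tag || encrypted body.
import hashlib, hmac, os

def _keystream(key, nonce, length):
    out, counter = b"", 0
    while len(out) < length:                       # HMAC-based toy keystream
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def committing_encrypt(key, nonce, msg):
    body = bytes(a ^ b for a, b in zip(msg, _keystream(key, nonce, len(msg))))
    tag = hmac.new(key, b"tag" + nonce + body, hashlib.sha256).digest()
    com = hashlib.sha256(b"commit" + key + nonce).digest()   # commitment to (key, nonce)
    return com + tag + body

def committing_decrypt(key, nonce, ct):
    com, tag, body = ct[:32], ct[32:64], ct[64:]
    if not hmac.compare_digest(com, hashlib.sha256(b"commit" + key + nonce).digest()):
        raise ValueError("key-commitment check failed")
    if not hmac.compare_digest(tag, hmac.new(key, b"tag" + nonce + body, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(body, _keystream(key, nonce, len(body))))

key, nonce = os.urandom(32), os.urandom(12)
assert committing_decrypt(key, nonce, committing_encrypt(key, nonce, b"hello")) == b"hello"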
2022
EUROCRYPT
Authentication in the Bounded Storage Model 📺
We consider the streaming variant of the Bounded Storage Model (BSM), where the honest parties can stream large amounts of data to each other, while only maintaining a small memory of size $n$. The adversary also operates as a streaming algorithm, but has a much larger memory size $m \gg n$. The goal is to construct unconditionally secure cryptographic schemes in the BSM, and prior works did so for symmetric-key encryption, key agreement, oblivious transfer and multiparty computation. In this work, we construct message authentication and signatures in the BSM. First, we consider the symmetric-key setting, where Alice and Bob share a small secret key. Alice can authenticate arbitrarily many messages to Bob by streaming long authentication tags of size $k \gg m$, while ensuring that the tags can be generated and verified using only $n$ bits of memory. We show a solution using local extractors (Vadhan; JoC '04), which allows for up to exponentially large adversarial memory $m = 2^{O(n)}$, and has tags of size $k= O(m)$. Second, we consider the same setting as above, but now additionally require each individual tag to be small, of size $k \leq n$. We show a solution is still possible when the adversary's memory is $m = O(n^2)$, which is optimal. Our solution relies on a space lower bound for leaning parities (Raz; FOCS '16). Third, we consider the public-key signature setting. A signer Alice initially streams a long verification key over an authentic channel, while only keeping a short signing key in her memory. A verifier Bob receives the streamed verification key and generates some short verification digest that he keeps in his memory. Later, Alice can sign arbitrarily many messages using her signing key by streaming the signatures to Bob, who can verify them using his verification digest. We show a solution for $m= O(n^2)$, which we show to be optimal. Our solution relies on a novel entropy lemma, of independent interest. We show that, if a sequence of blocks has sufficiently high min-entropy, then a large fraction of individual blocks must have high min-entropy. Naive versions of this lemma are false, but we show how to patch it to make it hold.
2022
EUROCRYPT
Short Pairing-Free Blind Signatures with Exponential Security 📺
This paper proposes the first practical pairing-free three-move blind signature schemes that (1) are concurrently secure, (2) produce short signatures (i.e., {\em three} or {\em four} group elements/scalars), and (3) are provably secure either in the generic group model (GGM) or the algebraic group model (AGM) under the (plain or one-more) discrete logarithm assumption (beyond additionally assuming random oracles). We also propose a partially blind version of one of our schemes. Our schemes do not rely on the hardness of the ROS problem (which can be broken in polynomial time) or of the mROS problem (which admits sub-exponential attacks). The only prior work with these properties is Abe's signature scheme (EUROCRYPT '02), which was recently proved to be secure in the AGM by Kastner et al. (PKC '22), but which also produces signatures twice as long as those from our scheme. The core of our proofs of security is a new problem, called {\em weighted} {\em fractional} ROS (WFROS), for which we prove (unconditional) exponential lower bounds.
2022
EUROCRYPT
Secure Non-interactive Simulation: Feasibility \& Rate 📺
A natural way to increase the efficiency of secure computation is to non-interactively and securely transform diverse, inexpensive-to-generate correlated randomness, such as joint samples from noise sources, into correlations useful for secure computation protocols. Motivated by this general application for secure computation, our work introduces the notion of {\em secure non-interactive simulation} (\snis). Parties receive samples of correlated randomness, and they, without any interaction, securely convert them into samples from another correlated randomness. Our work presents a simulation-based security definition for \snis and initiates the study of the feasibility and efficiency of \snis. We also study \snis among fundamental correlated randomnesses, such as random samples from the binary symmetric and binary erasure channels, represented by \BSC and \BEC, respectively. We show the impossibility of interconversion between \BSC and \BEC samples. Next, we prove that a \snis of a $\BEC(\eps')$ sample (a \BEC with noise characteristic $\eps'$) from $\BEC(\eps)$ is feasible if and only if $(1-\eps') = (1-\eps)^k$, for some $k\in\NN$. In this context, we prove that all \snis constructions must be linear. Furthermore, if $(1-\eps') = (1-\eps)^k$, then the rate of simulating multiple independent $\BEC(\eps')$ samples is at most $1/k$, which is also achievable using (block) linear constructions. Finally, we show that a \snis of a $\BSC(\eps')$ sample from $\BSC(\eps)$ samples is feasible if and only if $(1-2\eps')=(1-2\eps)^k$, for some $k\in\NN$. Interestingly, there are linear as well as non-linear \snis constructions. When $(1-2\eps')=(1-2\eps)^k$, we prove that the rate of a {\em perfectly secure} \snis is at most $1/k$, which is achievable using linear and non-linear constructions. Our technical approach algebraizes the definition of \snis and proceeds via Fourier analysis. Our work develops general analysis methodologies for Boolean functions, explicitly incorporating cryptographic security constraints. Our work also proves strong forms of {\em statistical-to-perfect security} transformations: one can error-correct a statistically secure \snis to make it perfectly secure. We show a connection between our research and {\em homogeneous Boolean functions} and {\em distance-invariant codes}, which may be of independent interest.
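One simple construction consistent with the stated feasibility conditions (our illustration, not the paper's argument): let both parties output the XOR of $k$ independent samples. For erasure correlations, the XOR is available to the receiver only if none of the $k$ bits is erased, which happens with probability $(1-\epsilon)^k$, matching $1-\epsilon' = (1-\epsilon)^k$. For binary symmetric correlations, the piling-up lemma gives the XOR an error bias of $(1-2\epsilon)^k$, matching $1-2\epsilon' = (1-2\epsilon)^k$. The paper's contribution is to show that these conditions are also necessary and to pin down the achievable rate $1/k$.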
2022
EUROCRYPT
SNARGs for P from Sub-exponential DDH and QR 📺
We obtain publicly verifiable Succinct Non-Interactive Arguments (SNARGs) for arbitrary deterministic computations and bounded space non-deterministic computation from well-studied group-based assumptions. In particular, assuming the sub-exponential hardness of the Decisional Diffie-Hellman (DDH) and Quadratic Residuosity (QR) assumptions, we obtain the following results, where n denotes the length of the instance: 1. A SNARG for any language that can be decided in non-deterministic time T and space S with communication complexity and verifier runtime (n+S)·T^{o(1)}. 2. A SNARG for any language that can be decided in deterministic time T with communication complexity n·T^{o(1)} and verifier runtime n·T^{o(1)}.
2022
EUROCRYPT
Hiding in Plain Sight: Memory-tight Proofs via Randomness Programming 📺
This paper continues the study of {\em memory-tight reductions} (Auerbach et al, CRYPTO '17). These are reductions that only incur minimal memory costs over those of the original adversary, allowing precise security statements for memory-bounded adversaries (under appropriate assumptions expressed in terms of adversary time and memory usage). Despite its importance, only a few techniques to achieve memory-tightness are known and impossibility results in prior works show that even basic, textbook reductions cannot be made memory-tight. This paper introduces a new class of memory-tight reductions which leverage random strings in the interaction with the adversary to hide state information, thus shifting the memory costs to the adversary. We exhibit this technique with several examples. We give memory-tight proofs for digital signatures allowing many forgery attempts when considering randomized message distributions or probabilistic RSA-FDH signatures specifically. We prove security of the authenticated encryption scheme Encrypt-then-PRF with a memory-tight reduction to the underlying encryption scheme. By considering specific schemes or restricted definitions we avoid generic impossibility results of Auerbach et al.~(CRYPTO '17) and Ghoshal et al.~(CRYPTO '20). As a further case study, we consider the textbook equivalence of CCA-security for public-key encryption for one or multiple encryption queries. We show two qualitatively different memory-tight versions of this result, depending on the considered notion of CCA security.
2022
EUROCRYPT
On the Multi-User Security of Short Schnorr Signatures with Preprocessing 📺
The Schnorr signature scheme is an efficient digital signature scheme with short signature lengths, i.e., $4k$-bit signatures for $k$ bits of security. A Schnorr signature $\sigma$ over a group of size $p\approx 2^{2k}$ consists of a tuple $(s,e)$, where $e \in \{0,1\}^{2k}$ is a hash output and $s\in \mathbb{Z}_p$ must be computed using the secret key. While the hash output $e$ requires $2k$ bits to encode, Schnorr proposed that it might be possible to truncate the hash value without adversely impacting security. In this paper, we prove that \emph{short} Schnorr signatures of length $3k$ bits provide $k$ bits of multi-user security in the (Shoup's) generic group model and the programmable random oracle model. We further analyze the multi-user security of key-prefixed short Schnorr signatures against preprocessing attacks, showing that it is possible to obtain secure signatures of length $3k + \log S + \log N$ bits. Here, $N$ denotes the number of users and $S$ denotes the size of the hint generated by our preprocessing attacker, e.g., if $S=2^{k/2}$, then we would obtain secure $3.75k$-bit signatures for groups of up to $N \leq 2^{k/4}$ users. Our techniques easily generalize to several other Fiat-Shamir-based signature schemes, allowing us to establish analogous results for Chaum-Pedersen signatures and Katz-Wang signatures. As a building block, we also analyze the $1$-out-of-$N$ discrete-log problem in the generic group model, with and without preprocessing.
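A toy Python sketch of Schnorr signatures in the $(s,e)$ form, with the hash output truncated to fewer bits than the group size as in the abstract, only to make the object under analysis concrete. The order-$11$ subgroup of $\mathbb{Z}_{23}^*$ and the $8$-bit truncation are hypothetical toy parameters of ours and provide no security; the paper's results concern proper groups of size $p \approx 2^{2k}$.

# Toy "short" Schnorr: sign/verify with a truncated hash challenge e.
import hashlib, secrets

q, p, g = 23, 11, 4          # modulus, prime subgroup order, generator of order p
TRUNC_BITS = 8               # truncated hash length (the "short" part)

def h(R, msg):
    digest = hashlib.sha256(str(R).encode() + msg).digest()
    return int.from_bytes(digest, "big") % (2 ** TRUNC_BITS)

def keygen():
    x = secrets.randbelow(p - 1) + 1
    return x, pow(g, x, q)                     # secret key x, public key X = g^x

def sign(x, msg):
    r = secrets.randbelow(p - 1) + 1
    e = h(pow(g, r, q), msg)
    s = (r + e * x) % p
    return s, e                                # signature is the pair (s, e)

def verify(X, msg, sig):
    s, e = sig
    R = (pow(g, s, q) * pow(X, -e, q)) % q     # R = g^s * X^{-e} = g^r
    return h(R, msg) == e

x, X = keygen()
assert verify(X, b"hello", sign(x, b"hello"))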
2022
EUROCRYPT
Stacking Sigmas: A Framework to Compose Sigma-Protocols for Disjunctions 📺
Zero-Knowledge (ZK) Proofs for disjunctive statements have been a focus of a long line of research. Classical results such as Cramer {\em et al.} [CRYPTO'94] and Abe {\em et al.} [AC'02] design generic compilers that transform certain classes of ZK proofs into ZK proofs for disjunctive statements. However, the communication complexity of the resulting protocols in these results ends up being proportional to the total size of all the proofs in the disjunction. More recently, a series of works (e.g. Heath {\em et al.} [EC'20]) has exploited special properties of garbled circuits to construct efficient ZK proofs for disjunctions, where the proof size is only proportional to the length of the largest clause in the disjunction. However, these techniques do not appear to generalize beyond garbled circuits. In this work, we focus on achieving the best of both worlds. We design a \textit{general framework} that compiles a large class of {unmodified} $\Sigma$-protocols, each for an individual statement, into a new $\Sigma$-protocol that proves a disjunction of these statements. Our framework can be used both when each clause is proved with the same $\Sigma$-protocol and when different $\Sigma$-protocols are used for different clauses. The resulting $\Sigma$-protocol is concretely efficient and has communication complexity proportional to the communication required by the largest clause, with additive terms that are only logarithmic in the number of clauses. We show that our compiler can be applied to many well-known $\Sigma$-protocols, including classical protocols (\emph{e.g.} Schnorr and Guillou-Quisquater) and modern MPC-in-the-head protocols such as the recent work of Katz, Kolesnikov and Wang [CCS'18] and the Ligero protocol of Ames {\em et al.} [CCS'17]. Finally, since all of the protocols in our class can be made non-interactive in the random oracle model using the Fiat-Shamir transform, our result yields the first generic non-interactive zero-knowledge protocol for disjunctions where the communication only depends on the size of the largest clause.
2022
EUROCRYPT
Gemini: elastic SNARKs for diverse environments 📺
We introduce a new class of succinct arguments that we call elastic. Elastic SNARKs allow the prover to allocate different resources (such as memory and time) depending on the execution environment and the statement to prove. The resulting output is independent of the prover’s configuration. To study elastic SNARKs, we extend the streaming paradigm of [Block et al., TCC’20]. We provide a definitional framework for elastic polynomial interactive oracle proofs for R1CS instances and design a compiler which transforms an elastic PIOP into a preprocessing argument system that supports streaming or random access to its inputs. Depending on the configuration, the prover will choose different trade-offs for time (either linear or quasilinear) and memory (either linear or logarithmic). We prove the existence of elastic SNARKs by presenting Gemini, a novel FFT-free preprocessing argument. We prove its security and develop a proof-of-concept implementation in Rust based on the arkworks framework. We provide benchmarks for large R1CS instances of tens of billions of gates on a single machine.
2022
EUROCRYPT
Lightweight, Maliciously Secure Verifiable Function Secret Sharing 📺
In this work, we present a lightweight construction of verifiable two-party function secret sharing (FSS) for point functions and multi-point functions. Our verifiability method is lightweight in two ways. Firstly, it is concretely efficient, making use of only symmetric key operations and no public key or MPC techniques are involved. Our performance is comparable with the state-of-the-art non-verifiable DPF constructions, and we outperform all prior DPF verification techniques in both computation and communication complexity, which we demonstrate with an implementation of our scheme. Secondly, our verification procedure is essentially unconstrained. It will verify that distributed point function (DPF) shares correspond to some point function irrespective of the output group size, the structure of the DPF output, or the set of points on which the DPF must be evaluated. This is in stark contrast with prior works, which depended on at least one and often all three of these constraints. In addition, our construction is the first DPF verification protocol that can verify general DPFs while remaining secure even if one server is malicious. Prior work on maliciously secure DPF verification could only verify DPFs where the non-zero output is binary and the output space is a large field. As an additional feature, our verification procedure can be batched so that verifying a polynomial number of DPF shares requires the exact same amount of communication as verifying one pair of DPF shares. We combine this packed DPF verification with a novel method for packing DPFs into shares of a multi-point function where the evaluation time, verification time, and verification communication are independent of the number of non-zero points in the function. An immediate corollary of our results are two-server protocols for PIR and PSI that remain secure when any one of the three parties is malicious (either the client or one of the servers).
2022
EUROCRYPT
Indistinguishability Obfuscation from LPN over F_p, DLIN, and PRGs in NC^0 📺
In this work, we study what minimal sets of assumptions suffice for constructing indistinguishability obfuscation ($\iO$). We prove: {\bf Theorem}(Informal): {\em Assume sub-exponential security of the following assumptions: - the Learning Parity with Noise ($\mathsf{LPN}$) assumption over general prime fields $\mathbb{F}_p$ with polynomially many $\mathsf{LPN}$ samples and error rate $1/k^\delta$, where $k$ is the dimension of the $\mathsf{LPN}$ secret, and $\delta>0$ is any constant; - the existence of a Boolean Pseudo-Random Generator ($\mathsf{PRG}$) in $\mathsf{NC}^0$ with stretch $n^{1+\tau}$, where $n$ is the length of the $\mathsf{PRG}$ seed, and $\tau>0$ is any constant; - the Decision Linear ($\mathsf{DLIN}$) assumption on symmetric bilinear groups of prime order. Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists. Further, assuming only polynomial security of the aforementioned assumptions, there exists collusion resistant public-key functional encryption for all polynomial-size circuits. This removes the reliance on the Learning With Errors (LWE) assumption from the recent work of [Jain, Lin, Sahai STOC'21]. As a consequence, we obtain the first fully homomorphic encryption scheme that does not rely on any lattice-based hardness assumption. Our techniques feature a new notion of randomized encoding called Preprocessing Randomized Encoding (PRE), that essentially can be computed in the exponent of pairing groups. When combined with other new techniques, PRE gives a much more streamlined construction of $\iO$ while still maintaining reliance only on well-studied assumptions.
2022
EUROCRYPT
Asymmetric PAKE with low computation and communication 📺
In Crypto'21, Gu, Jarecki, and Krawczyk [20] showed an asymmetric password authenticated key exchange protocol (aPAKE) whose computational cost matches (symmetric) password authenticated key exchange (PAKE) and plain (i.e. unauthenticated) key exchange (KE). However, this minimal-cost aPAKE did not match prior aPAKEs in round complexity, using 4 rounds assuming the client initiates, compared to 2 rounds in an aPAKE of Bradley et al. In this paper we show two aPAKE protocols that achieve optimal computational cost and optimal round complexity. Our protocols can be seen as applications of the Encrypted Key Exchange (EKE) compiler of Bellovin and Merritt [6], which creates password-authenticated key exchange by password-encrypting messages in a key exchange protocol. Whereas Bellovin and Merritt used this method to construct a PAKE by applying password-encryption to KE messages, we construct an aPAKE by applying password-encryption to messages of a unilaterally authenticated Key Exchange (ua-KE). We present two versions of this compiler. The first uses a salted password hash and takes 3 rounds if the client initiates. The second uses an unsalted password hash and takes a single simultaneous flow (it is the first aPAKE to do so), thus simultaneously matching the minimal computational cost and the minimal round complexity of PAKE and KE. We analyze our aPAKE protocols assuming an Ideal Cipher (IC) on a group as modular constructions from ua-KE realized via a (universally composable) Authenticated Key Exchange where the server uses one-time keys (otk-AKE). We then show that one-pass variants of 3DH and HMQV securely realize otk-AKE in the ROM. Interestingly, the two resulting concrete aPAKEs use the exact same protocol messages as two natural variants of EKE, and the only difference between the symmetric PAKE (EKE) and asymmetric PAKE (our protocols) is in the key derivation equation used to derive the final session key output.
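A bare-bones sketch of the EKE idea referenced above: run a key exchange and "encrypt" each flow under the password. Everything below is a toy of our own, a plain symmetric PAKE-style flow over a 61-bit prime with an additive password-derived mask standing in for the Ideal Cipher on the group; the paper instead password-encrypts a unilaterally authenticated KE to obtain an aPAKE, which this sketch does not capture.

# Toy EKE-style flow: password-masked Diffie-Hellman over an insecure toy group.
import hashlib, secrets

q = 2**61 - 1                 # toy prime modulus (hypothetical, far too small)
g = 3

def pad(password: bytes, label: bytes) -> int:
    return int.from_bytes(hashlib.sha256(password + label).digest(), "big") % q

def mask(value: int, password: bytes, label: bytes) -> int:
    return (value + pad(password, label)) % q      # additive password mask

def unmask(value: int, password: bytes, label: bytes) -> int:
    return (value - pad(password, label)) % q

pw = b"correct horse"
a, b = secrets.randbelow(q - 2) + 1, secrets.randbelow(q - 2) + 1
msg_A = mask(pow(g, a, q), pw, b"A")               # password-encrypted g^a
msg_B = mask(pow(g, b, q), pw, b"B")               # password-encrypted g^b
key_A = pow(unmask(msg_B, pw, b"B"), a, q)
key_B = pow(unmask(msg_A, pw, b"A"), b, q)
assert key_A == key_B                              # both derive g^{ab}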
2022
EUROCRYPT
Anonymous, Robust Post-Quantum Public Key Encryption 📺
A core goal of the NIST PQC competition is to produce PKE schemes which, even if attacked with a large-scale quantum computer, maintain the security guarantees needed by applications. The main security focus in the NIST PQC context has been IND-CCA security, but other applications demand that PKE schemes provide 'anonymity' (Bellare et al., ASIACRYPT 2001), and 'robustness' (Abdalla et al., TCC 2010). Examples of such applications include anonymous cryptocurrencies, searchable encryption, and auction protocols. However, almost nothing is known about how to build post-quantum PKE schemes offering these security properties. In particular, the status of the NIST PQC candidates with respect to anonymity and robustness is unknown. This paper initiates a systematic study of anonymity and robustness for post-quantum PKE schemes. Firstly, we identify implicit rejection as a crucial design choice shared by most post-quantum KEMs, show that implicit rejection renders prior results on anonymity and robustness for KEM-DEM PKEs inapplicable, and transfer prior results to the implicit-rejection setting where possible. Secondly, since they are widely used to build post-quantum PKEs, we examine how the Fujisaki-Okamoto (FO) transforms (Fujisaki and Okamoto, Journal of Cryptology 2013) confer robustness and enhance weak anonymity of a base PKE. We then leverage our theoretical results to study the anonymity and robustness of three NIST KEM finalists---Saber, Kyber, and Classic McEliece---and one alternate, FrodoKEM. Overall, our findings for robustness are definitive: we provide positive robustness results for Saber, Kyber, and FrodoKEM, and a negative result for Classic McEliece. Our negative result stems from a striking property of KEM-DEM PKE schemes built with the Classic McEliece KEM: for any message 'm', we can construct a single hybrid ciphertext 'c' which decrypts to the chosen 'm' under any Classic McEliece private key. Our findings for anonymity are more mixed: we identify barriers to proving anonymity for Saber, Kyber, and Classic McEliece. We also found that in the case of Saber and Kyber, these barriers lead to issues with their IND-CCA security claims. We have worked with the Saber and Kyber teams to fix these issues, but they remain unresolved. On the positive side, we were able to prove anonymity for FrodoKEM and a variant of Saber introduced by D'Anvers et al. (AFRICACRYPT 2018). Our analyses of these two schemes also identified technical gaps in their IND-CCA security claims, but we were able to fix them.
2022
EUROCRYPT
Secure Multiparty Computation with Free Branching 📺
We study secure multi-party computation (MPC) protocols for branching circuits that contain multiple sub-circuits (i.e., branches), where the output of the circuit is that of a single ``active'' branch. Crucially, the identity of the active branch must remain hidden from the protocol participants. While such circuits can be securely computed by evaluating each branch and then multiplexing the output, such an approach incurs a communication cost linear in the size of the entire circuit. To alleviate this, a series of recent works have investigated the problem of reducing the communication cost of branching executions inside MPC (without relying on fully homomorphic encryption). Most notably, the stacked garbling paradigm [Heath and Kolesnikov, CRYPTO'20] yields garbled circuits for branching circuits whose size only depends on the size of the largest branch. Presently, however, it is not known how to obtain similar communication improvements for secure computation involving {\em more than two parties}. In this work, we provide a generic framework for branching multi-party computation that supports {\em any number of parties}. The communication complexity of our scheme is proportional to the size of the largest branch and the computation is linear in the size of the entire circuit. We provide an implementation and benchmarks to demonstrate the practicality of our approach.
2022
EUROCRYPT
Adaptively Secure Computation for RAM Programs 📺
We obtain the first two-round two-party computation protocol in the plain model that is secure against passive adversaries who can adaptively corrupt all parties. The communication complexity is proportional to the square of the RAM complexity of the function, up to polylogarithmic factors, assuming the existence of non-committing encryption.
2022
EUROCRYPT
Round-Optimal Multi-Party Computation with Identifiable Abort 📺
Secure multi-party computation (MPC) protocols that are resilient to a dishonest majority allow the adversary to get the output of the computation while, at the same time, forcing the honest parties to abort. Aumann and Lindell introduced the enhanced notion of security with identifiable abort, which still allows the adversary to trigger an abort but, at the same time, it enables the honest parties to agree on the identity of the party that led to the abort. More recently, in Eurocrypt 2016, Garg et al. showed that, assuming access to a simultaneous message exchange channel for all the parties, at least four rounds of communication are required to securely realize non-trivial functionalities in the plain model. Following Garg et al., a sequence of works has matched this lower bound, but none of them achieved security with identifiable abort. In this work, we close this gap and show that four rounds of communication are also sufficient to securely realize any functionality with identifiable abort using standard and generic polynomial-time assumptions. To achieve this result we introduce the new notion of bounded-rewind secure MPC that guarantees security even against an adversary that performs a mild form of reset attacks. We show how to instantiate this primitive starting from any MPC protocol and by assuming trapdoor-permutations. The notion of bounded-rewind secure MPC allows for easier parallel composition of MPC protocols with other (interactive) cryptographic primitives. Therefore, we believe that this primitive can be useful in other contexts in which it is crucial to combine multiple primitives with MPC protocols while keeping the round complexity of the final protocol low.
2022
EUROCRYPT
Non-malleable Commitments Against Quantum Attacks 📺
We construct, under standard hardness assumptions, the first non-malleable commitments secure against quantum attacks. Our commitments are statistically binding and satisfy the standard notion of {\em non-malleability with respect to commitment}. We obtain a $\log^\star(\lambda)$-round classical protocol, assuming the existence of post-quantum one-way functions. Previously, non-malleable commitments with quantum security were only known against a restricted class of adversaries known as {\em synchronizing adversaries.} At the heart of our results is a new general technique that allows to modularly obtain non-malleable commitments from any extractable commitment protocol, obliviously of the underlying extraction strategy (black-box or non-black-box) or round complexity. The transformation may also be of interest in the classical setting.
2022
EUROCRYPT
Secure Multiparty Computation with Sublinear Preprocessing 📺
A common technique for enhancing the efficiency of secure multiparty computation (MPC) with dishonest majority is via {\em preprocessing}: In an offline phase, parties engage in an input-independent protocol to securely generate correlated randomness. Once inputs are known, the correlated randomness is consumed by a ``non-cryptographic'' and highly efficient online protocol. The correlated randomness in such protocols traditionally comes in two flavors: multiplication triples (Beaver, Crypto '91), which suffice for security against semi-honest parties, and {\em authenticated} multiplication triples (Bendlin et al., Eurocrypt '11, Damg{\aa}rd et al., Crypto '12) that yield efficient protocols against malicious parties. Recent constructions of pseudorandom correlation generators (Boyle et al., Crypto '19, '20) enable concretely efficient secure generation of multiplication triples with {\em sublinear communication complexity}. However, these techniques do not efficiently apply to authenticated triples, except in the case of secure two-party computation of arithmetic circuits over large fields. In this work, we propose the first {\em concretely efficient} approach for (malicious) MPC with preprocessing in which the offline communication is {\em sublinear} in the circuit size. More specifically, the offline communication scales with the {\em square root} of the circuit size. From a feasibility point of view, our protocols can make use of any secure protocol for generating (unauthenticated) multiplication triples together with any {\em additive} homomorphic encryption. We propose concretely efficient instantiations (based on strong but plausible ``linear-only'' assumptions) from existing homomorphic encryption schemes and pseudorandom correlation generators. Our technique is based on a variant of a recent protocol of Boyle et al. (Crypto '21) for MPC with preprocessing. As a result, our protocols inherit the succinct correlated randomness feature of the latter protocol.
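For readers unfamiliar with how multiplication triples are consumed in the online phase, here is a minimal two-party Python sketch of Beaver's technique over a toy prime field; the field size is an arbitrary illustrative choice, and the triple is generated in the clear here rather than by a preprocessing protocol.

import random

P = 2**61 - 1  # toy prime field (illustrative choice)

def share(x):
    """Additively share x between two parties."""
    r = random.randrange(P)
    return (r, (x - r) % P)

def open_(sh):
    return sum(sh) % P

# Preprocessing (done here in the clear for illustration): a triple c = a*b.
a, b = random.randrange(P), random.randrange(P)
c = a * b % P
a_sh, b_sh, c_sh = share(a), share(b), share(c)

# Online phase: multiply secret-shared x and y using the triple.
x, y = 12345, 67890
x_sh, y_sh = share(x), share(y)

# Each party locally computes, then the parties open, d = x - a and e = y - b.
d = open_([(x_sh[i] - a_sh[i]) % P for i in range(2)])
e = open_([(y_sh[i] - b_sh[i]) % P for i in range(2)])

# z_i = d*e (added by party 0 only) + d*b_i + e*a_i + c_i
z_sh = [((d * e if i == 0 else 0) + d * b_sh[i] + e * a_sh[i] + c_sh[i]) % P
        for i in range(2)]

assert open_(z_sh) == x * y % P
print("Beaver multiplication OK:", open_(z_sh) == x * y % P)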
2022
EUROCRYPT
Information-Combining Differential Fault Attacks on DEFAULT 📺
Differential fault analysis (DFA) is a very powerful attack vector on implementations of symmetric cryptography. Most countermeasures are applied at the implementation level. At ASIACRYPT 2021, Baksi et al. proposed a design strategy that aims to provide inherent cipher-level resistance against DFA by using S-boxes with linear structures. They argue that in their instantiation, the block cipher DEFAULT, a DFA adversary can learn at most 64 of the 128 key bits, so the remaining brute-force complexity of $2^{64}$ is impractical. In this paper, we show that a DFA adversary can combine information across rounds to recover the full key, invalidating their security claim. In particular, we observe that such ciphers exhibit large classes of equivalent keys that can be represented efficiently in normalized form using linear equations. We exploit this in combination with the specifics of DEFAULT's strong key schedule to recover the key using fewer than 100 faulty computations and negligible time complexity. Moreover, we show that even an idealized version of DEFAULT with independent round keys is vulnerable to our information-combining attacks based on normalized keys.
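To make the notion of a linear structure concrete, the following Python sketch searches a toy S-box (not DEFAULT's actual S-box) for pairs (a, b) with S(x ^ a) = S(x) ^ b for all x; any such pair makes the key guesses k and k ^ a indistinguishable through the S-box, which is the kind of key-equivalence class the attack normalizes and combines across rounds.

# Toy illustration (not DEFAULT's actual S-box): an S-box with a linear
# structure (a, b), i.e. S[x ^ a] == S[x] ^ b for every x. Any key guess k
# observed only through S[x ^ k] is then equivalent to k ^ a.
S = [0, 6, 14, 4, 10, 8, 2, 12,      # values on 0..7 (even outputs)
     1, 7, 15, 5, 11, 9, 3, 13]      # S[x ^ 8] = S[x] ^ 1 by construction

def linear_structures(sbox):
    """Return all (a, b) with a != 0 such that S[x ^ a] ^ S[x] == b for all x."""
    n = len(sbox)
    found = []
    for a in range(1, n):
        diffs = {sbox[x ^ a] ^ sbox[x] for x in range(n)}
        if len(diffs) == 1:
            found.append((a, diffs.pop()))
    return found

print("linear structures (a, b):", linear_structures(S))
# The output includes (8, 1): keys k and k ^ 8 are equivalent through S.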
2022
EUROCRYPT
Revamped Differential-Linear Cryptanalysis on Reduced Round ChaCha 📺
In this paper, we provide several improvements over the existing differential-linear attacks on ChaCha. ChaCha is a stream cipher which has $20$ rounds. At CRYPTO $2020$, Beierle et al. observed a differential in the $3.5$-th round when the right pairs are chosen. They produced an improved attack using this, but showed that achieving a right pair requires $2^5$ iterations on average. In this direction, we provide a technique to find the right pairs with the help of a listing technique. Also, we provide a strategic improvement in the PNB construction, a modification of the complexity calculation, and an alternative attack method using two input-output pairs. Using these, we improve the time complexity, reducing it to $2^{221.95}$ from the $2^{230.86}$ reported by Beierle et al. for the $256$-bit version of ChaCha. Also, after a decade, we improve the existing complexity (Shi et al., ICISC 2012) for the $6$-round version of $128$-bit ChaCha by a factor of more than 11 million, and produce the first-ever attack on $6.5$-round ChaCha$128$ with time complexity $2^{123.04}$.
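For background on the primitive under attack (the cryptanalysis itself is not reproduced here), the following Python snippet implements the standard ChaCha quarter-round on 32-bit words and checks it against the RFC 8439 test vector.

MASK32 = 0xFFFFFFFF

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarter_round(a, b, c, d):
    """One ChaCha quarter-round on four 32-bit words."""
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d

# RFC 8439 (Section 2.1.1) test vector for the quarter-round.
a, b, c, d = quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)
assert (a, b, c, d) == (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb)
print("quarter-round matches the RFC 8439 test vector")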
2022
EUROCRYPT
CoCoA: Concurrent Continuous Group Key Agreement 📺
Messaging platforms like Signal are widely deployed and provide strong security in an asynchronous setting. It is a challenging problem to construct a protocol with similar security guarantees that can \emph{efficiently} scale to large groups. A major bottleneck is the frequent key rotations users need to perform to achieve post-compromise forward security. In current proposals -- most notably in TreeKEM (which is part of the IETF's Messaging Layer Security (MLS) protocol draft) -- for users in a group of size $n$ to rotate their keys, they must each craft a message of size $\log(n)$ to be broadcast to the group using an (untrusted) delivery server. In larger groups, having users sequentially rotate their keys requires too much bandwidth (or takes too long), so variants allowing any $T \leq n$ users to simultaneously rotate their keys in just $2$ communication rounds have been suggested (e.g.\ ``Propose and Commit'' by MLS). Unfortunately, $2$-round concurrent updates are either damaging or expensive (or both); i.e.\ they either result in future operations being more costly (e.g.\ via ``blanking'' or ``tainting'') or are costly themselves, requiring $\Omega(T)$ communication for each user [Bienstock et al., TCC'20]. In this paper we propose CoCoA, a new scheme that allows for $T$ concurrent updates that are neither damaging nor costly. That is, they add no cost to future operations, yet they only require $\Omega(\log^2(n))$ communication per user. To circumvent the [Bienstock et al.] lower bound, CoCoA increases the number of rounds needed to complete all updates from $2$ up to (at most) $\log(n)$, though typically fewer rounds are needed. The key insight of our protocol is the following: in the (non-concurrent version of) TreeKEM, a delivery server which gets $T$ concurrent update requests will approve one and reject the remaining $T-1$. In contrast, our server attempts to apply all of them. If more than one user requests to rotate the same key during a round, the server arbitrarily picks a winner. Surprisingly, we prove that regardless of how the server chooses the winners, all previously compromised users will recover after at most $\log(n)$ such update rounds. To keep the communication complexity low, CoCoA is a server-aided CGKA. That is, the delivery server no longer blindly forwards packets, but instead actively computes individualized packets tailored to each user. As the server is untrusted, this change requires us to develop new mechanisms ensuring robustness of the protocol.
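As a minimal illustration of why a single key rotation in TreeKEM-style protocols costs about $\log(n)$ ciphertexts (this is background only, not the CoCoA protocol), the following Python sketch lists the ancestors a leaf must refresh in a complete binary tree; the power-of-two group sizes are illustrative.

def direct_path(leaf, n_leaves):
    """Ancestors of a leaf (excluding the leaf itself) in a complete binary
    tree with n_leaves leaves, using 1-based heap indexing; these are the
    nodes whose keys one rotation must refresh."""
    assert n_leaves & (n_leaves - 1) == 0, "power-of-two group size for simplicity"
    node = n_leaves + leaf              # heap index of the leaf
    path = []
    while node > 1:
        node //= 2                      # move to the parent
        path.append(node)
    return path

for n in (8, 1024, 1 << 20):
    print(f"group size {n:>7}: one key rotation refreshes {len(direct_path(0, n))} "
          f"internal nodes, i.e. ~log2(n) ciphertexts to broadcast")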
2022
EUROCRYPT
On the Concrete Security of TLS 1.3 PSK Mode 📺
The pre-shared key (PSK) handshake modes of TLS 1.3 allow for the performant, low-latency resumption of previous connections and are widely used on the Web and by resource-constrained devices, e.g., in the Internet of Things. Taking advantage of these performance benefits with optimal and theoretically-sound parameters requires tight security proofs. We give the first tight security proofs for the TLS 1.3 PSK handshake modes. Our main technical contribution is to address a gap in prior tight security proofs of TLS 1.3 which modeled either the entire key schedule or components thereof as independent random oracles to enable tight proof techniques. These approaches ignore existing interdependencies in TLS 1.3's key schedule, arising from the fact that the same cryptographic hash function is used in several components of the key schedule and the handshake more generally. We overcome this gap by proposing a new abstraction for the key schedule and carefully arguing its soundness via the indifferentiability framework. Interestingly, we observe that for one specific configuration, PSK-only mode with hash function SHA-384, it seems difficult to argue indifferentiability due to a lack of domain separation between the various hash function usages. We view this as an interesting insight for the design of protocols, such as future TLS versions. For all other configurations however, our proofs significantly tighten the security of the TLS 1.3 PSK modes, confirming standardized parameters (for which prior bounds provided subpar or even void guarantees) and enabling a theoretically-sound deployment.
2022
EUROCRYPT
Distributed (Correlation) Samplers: How to Remove a Trusted Dealer in One Round 📺
Structured random strings (SRSs) and correlated randomness are important for many cryptographic protocols. In settings where interaction is expensive, it is desirable to obtain such randomness in as few rounds of communication as possible; ideally, simply by exchanging one reusable round of messages which can be considered public keys. In this paper, we describe how to generate any SRS or correlated randomness in such a single round of communication, using, among other things, indistinguishability obfuscation. We introduce what we call a distributed sampler, which enables n parties to sample a single public value (SRS) from any distribution. We construct a semi-malicious distributed sampler in the plain model, and use it to build a semi-malicious public-key PCF (Boyle et al., FOCS 2020) in the plain model. A public-key PCF can be thought of as a distributed correlation sampler; instead of producing a public SRS, it gives each party a private random value (where the values satisfy some correlation). We introduce a general technique called an anti-rusher which compiles any one-round protocol with semi-malicious security without inputs to a similar one-round protocol with active security by making use of a programmable random oracle. This gets us actively secure distributed samplers and public-key PCFs in the random oracle model. Finally, we explore some tradeoffs. Our first PCF construction is limited to reverse-sampleable correlations (where the random outputs of honest parties must be simulatable given the random outputs of corrupt parties); we additionally show a different construction without this limitation, but which does not allow parties to hold secret parameters of the correlation. We also describe how to avoid the use of a random oracle at the cost of relying on sub-exponentially secure indistinguishability obfuscation.
2022
EUROCRYPT
COA-Secure Obfuscation and Applications 📺
We put forth a new paradigm for program obfuscation, where obfuscated programs are endowed with proofs of ``well formedness.'' In addition to asserting existence of an underlying plaintext program with an attested structure, these proofs also prevent mauling attacks, whereby an adversary surreptitiously creates an obfuscated program based on secrets which are embedded in other obfuscated programs. We call this new guarantee Chosen Obfuscation Attacks (COA) security. We show how to enhance a large class of obfuscation mechanisms to be COA-secure, assuming subexponentially secure IO for circuits and subexponentially secure one-way functions. To demonstrate the power of the new notion, we also use it to realize: - A new form of software watermarking, which provides significantly broader protection than current schemes against counterfeits that pass a keyless, public verification process. - Completely CCA encryption, which is a strengthening of completely non-malleable encryption.
2022
EUROCRYPT
Unclonable Polymers and Their Cryptographic Applications 📺
We propose a mechanism for generating and manipulating protein polymers to obtain a new type of *consumable storage* that exhibits intriguing cryptographic "self-destruct" properties, assuming the hardness of certain polymer-sequencing problems. To demonstrate the cryptographic potential of this technology, we first develop a formalism that captures (in a minimalistic way) the functionality and security properties provided by the technology. Next, using this technology, we construct and prove security of two cryptographic applications that are currently obtainable only via trusted hardware that implements logical circuitry (either classical or quantum). The first application is a password-controlled *secure vault* where the stored data is irrecoverably erased once a threshold of unsuccessful access attempts is reached. The second is (a somewhat relaxed version of) *one time programs*, namely a device that allows evaluating a secret function only a limited number of times before self-destructing, where each evaluation is made on a fresh user-chosen input. Finally, while our constructions, modeling, and analysis are designed to capture the proposed polymer-based technology, they are sufficiently general to be of potential independent interest.
2022
EUROCRYPT
Asymptotically Quasi-Optimal Cryptography 📺
The question of minimizing the {\em computational overhead} of cryptography was put forward by the work of Ishai, Kushilevitz, Ostrovsky and Sahai (STOC 2008). The main conclusion was that, under plausible assumptions, most cryptographic primitives can be realized with {\em constant} computational overhead. However, this ignores an additive term that may depend polynomially on the (concrete) computational security parameter $\lambda$. In this work, we study the question of obtaining optimal efficiency, up to polylogarithmic factors, for {\em all} choices of $n$ and $\lambda$, where $n$ is the size of the given task. In particular, when $n=\lambda$, we would like the computational cost to be only $\tilde O(\lambda)$. We refer to this goal as {\em asymptotically quasi-optimal} (AQO) cryptography. We start by realizing the first AQO semi-honest batch oblivious linear evaluation (BOLE) protocol. Our protocol applies to OLE over small fields and relies on the near-exponential security of the ring learning with errors (RLWE) assumption. Building on the above and on known constructions of AQO PCPs, we design the first AQO zero-knowledge (ZK) argument system for Boolean circuit satisfiability. Our construction combines a new AQO ZK-PCP construction that respects the AQO property of the underlying PCP along with a technique for converting statistical secrecy into soundness via OLE reversal. Finally, combining the above results, we get AQO secure computation protocols for Boolean circuits with security against malicious parties under RLWE.
2022
EUROCRYPT
Guaranteed Output in O(sqrt(n)) Rounds for Round-Robin Sampling Protocols 📺
We introduce a notion of round-robin secure sampling that captures several protocols in the literature, such as the "powers-of-tau" setup protocol for pairing-based polynomial commitments and zk-SNARKs, and certain verifiable mixnets. Due to their round-robin structure, protocols of this class inherently require n sequential broadcast rounds, where n is the number of participants. We describe how to compile them generically into protocols that require only O(sqrt(n)) broadcast rounds. Our compiled protocols guarantee output delivery against any dishonest majority. This stands in contrast to prior techniques, which require Omega(n) sequential broadcasts in most cases (and sometimes many more). Our compiled protocols permit a certain amount of adversarial bias in the output, as all sampling protocols with guaranteed output must, due to Cleve's impossibility result (STOC'86). We show that in the context of the aforementioned applications, this bias is harmless.
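To make the round-robin structure concrete, the following Python sketch simulates a toy ``powers-of-tau'' ceremony in which each participant, one after another, raises the accumulated powers by powers of its own secret; the tiny prime-order subgroup and the number of parties are illustrative only, and the sketch shows the n sequential rounds that the paper's compiler reduces to O(sqrt(n)) broadcast rounds.

import random

# Toy group (illustrative, insecure sizes): order-q subgroup of Z_p^*, p = 2q + 1.
q = 1019
p = 2 * q + 1          # 2039, prime
g = 4                  # generator of the order-q subgroup of squares

D = 5                  # number of SRS entries g^(tau^0), ..., g^(tau^(D-1))
powers = [g] * D       # initially tau = 1, so every entry is g

secrets = []
n_parties = 6
for _ in range(n_parties):   # inherently sequential: party i needs party (i-1)'s output
    s = random.randrange(1, q)
    secrets.append(s)
    # Multiplying tau by s means raising the i-th entry to s^i (exponents mod q).
    powers = [pow(powers[i], pow(s, i, q), p) for i in range(D)]

# Check: the final SRS equals g^(tau^i) for tau = product of all contributed secrets.
tau = 1
for s in secrets:
    tau = tau * s % q
assert powers == [pow(g, pow(tau, i, q), p) for i in range(D)]
print("round-robin powers-of-tau accumulated over", n_parties, "sequential rounds")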
2022
EUROCRYPT
Constant-round Blind Classical Verification of Quantum Sampling 📺
In a recent breakthrough, Mahadev constructed a classical verification of quantum computation (CVQC) protocol for a classical client to delegate decision problems in BQP to an untrusted quantum prover under computational assumptions. In this work, we explore further the feasibility of CVQC with the more general sampling problems in BQP and with the desirable blindness property. We contribute affirmative solutions to both as follows. * Motivated by the sampling nature of many quantum applications (e.g., quantum algorithms for machine learning and quantum supremacy tasks), we initiate the study of CVQC for quantum sampling problems (denoted by SampBQP). More precisely, in a CVQC protocol for a SampBQP problem, the prover and the verifier are given an input $x \in \{0,1\}^n$ and a quantum circuit $C$, and the goal of the classical client is to learn a sample from the output $z \leftarrow C(x)$ up to a small error, from its interaction with an untrusted prover. We demonstrate its feasibility by constructing a four-message CVQC protocol for SampBQP based on the quantum Learning With Errors assumption. * The blindness of CVQC protocols refers to a property of the protocol where the prover learns nothing, and hence is blind, about the client's input. It is a highly desirable property that has been intensively studied for the delegation of quantum computation. We provide a simple yet powerful generic compiler that transforms any CVQC protocol to a blind one while preserving its completeness and soundness errors as well as the number of rounds. Applying our compiler to (a parallel repetition of) Mahadev's CVQC protocol for BQP and our CVQC protocol for SampBQP yields the first constant-round blind CVQC protocol for BQP and SampBQP respectively, with negligible and inverse polynomial soundness errors respectively, and negligible completeness errors.
2022
EUROCRYPT
Optimal Broadcast Encryption and CP-ABE from Evasive Lattice Assumptions 📺
We present a new, simple candidate broadcast encryption scheme for N users with parameter size poly(logN). We prove security of our scheme under a non-standard variant of the LWE assumption where the distinguisher additionally receives short Gaussian pre-images while avoiding zeroizing attacks. This yields the first candidate optimal broadcast encryption that is plausibly post-quantum secure, and enjoys a security reduction to a simple assumption. As a secondary contribution, we present a candidate ciphertext-policy attribute-based encryption (CP-ABE) scheme for circuits of a-priori bounded polynomial depth where the parameter size is independent of the circuit size, and prove security under an additional non-standard assumption.
2022
PKC
Polynomial IOPs for Linear Algebra Relations 📺
This paper proposes new Polynomial IOPs for arithmetic circuits. They rely on the monomial coefficient basis to represent the matrices and vectors arising from the arithmetic constraint satisfaction system, and build on new protocols for establishing the correct computation of linear algebra relations such as matrix-vector products and Hadamard products. Our protocols give rise to concrete proof systems with succinct verification when compiled down with a cryptographic compiler whose role is abstracted away in this paper. Depending only on the compiler, the resulting SNARKs are either transparent or rely on a trusted setup.
2022
PKC
Efficient Lattice-Based Inner-Product Functional Encryption 📺
In recent years, many research lines on Functional Encryption (FE) have been suggested and studied, regarding functionality, security, or efficiency. Nevertheless, an open problem on a basic functionality, single-input inner-product functional encryption (IPFE), remains: can IPFE be instantiated based on the Ring Learning With Errors (RLWE) assumption? The RLWE assumption provides quantum-resistant security and, in comparison with the LWE assumption, gives significant performance and compactness gains. In this paper we present the first RLWE-based IPFE scheme. We carefully choose strategies in the security proofs to optimize the size of parameters. More precisely, we develop two new results on ideal lattices. The first result is a variant of Ring-LWE, that we call multi-hint extended Ring-LWE, where some hints on the secret and the noise are given. We present a reduction from the RLWE problem to this variant. The second tool is a special form of the Leftover Hash Lemma (LHL) over rings, known as Ring-LHL. To demonstrate the efficiency of our scheme we provide an optimized implementation of the RLWE-based IPFE scheme and show its performance on a practical use case. We further present new compilers that, combined with some existing ones, can transform a single-input FE into its (identity-based, decentralized) multi-client variant with ciphertexts of linear size (w.r.t.\ the number of clients).
2022
PKC
A Note on the Post-Quantum Security of (Ring) Signatures 📺
This work revisits the security of classical signatures and ring signatures in a quantum world. For (ordinary) signatures, we focus on the arguably preferable security notion of {\em blind-unforgeability} recently proposed by Alagic et al.\ (Eurocrypt'20). We present two {\em short} signature schemes achieving this notion: one is in the quantum random oracle model, assuming quantum hardness of SIS; and the other is in the plain model, assuming quantum hardness of LWE with super-polynomial modulus. Prior to this work, the only known blind-unforgeable schemes were Lamport's one-time signature and the Winternitz one-time signature, and both of them are in the quantum random oracle model. For ring signatures, the recent work by Chatterjee et al.\ (Crypto'21) proposes a definition trying to capture adversaries with quantum access to the signer. However, it is unclear if their definition, when restricted to the classical world, is as strong as the standard security notion for ring signatures. They also present a construction that only {\em partially} achieves (even) this seemingly weak definition, in the sense that the adversary can only conduct superposition attacks over the messages, but not the rings. We propose a new definition that does not suffer from the above issue. Our definition is an analog of blind-unforgeability in the ring signature setting. Moreover, assuming the quantum hardness of LWE, we construct a compiler converting any blind-unforgeable (ordinary) signature into a ring signature satisfying our definition.
2022
PKC
Multitarget decryption failure attacks and their application to Saber and Kyber 📺
Many lattice-based encryption schemes are subject to a very small probability of decryption failures. It has been shown that an adversary can efficiently recover the secret key using a number of ciphertexts that cause such a decryption failure. In PKC 2019, D'Anvers et al. introduced `failure boosting', a technique to speed up the search for decryption failures. In this work we first improve the state-of-the-art multitarget failure boosting attacks. We then improve the cost calculation of failure boosting and extend the applicability of these calculations to permit cost calculations of real-world schemes. Using our newly developed methodologies we determine the multitarget decryption failure attack cost for all parameter sets of Saber and Kyber, showing among others that the quantum security of Saber can theoretically be reduced from 172 bits to 145 bits in specific circumstances. We then discuss the applicability of decryption failure attacks in real-world scenarios, showing that an attack might not be practical to execute.
2022
PKC
On the Isogeny Problem with Torsion Point Information 📺
It has recently been rigorously proven (and was previously known under certain heuristics) that the general supersingular isogeny problem reduces to the supersingular endomorphism ring computation problem. However, in order to attack SIDH-type schemes, one requires a particular isogeny which is usually not returned by the general reduction. At Asiacrypt 2016, Galbraith, Petit, Shani and Ti presented a polynomial-time reduction of the problem of finding the secret isogeny in SIDH to the problem of computing the endomorphism ring of a supersingular elliptic curve. Their method exploits the fact that secret isogenies in SIDH are of degree approximately $p^{1/2}$. The method does not extend to other SIDH-type schemes, where secret isogenies of larger degree are used and this condition is not fulfilled. We present a more general reduction algorithm that generalises to all SIDH-type schemes. The main idea of our algorithm is to exploit available torsion point images together with the KLPT algorithm to obtain a linear system of equations over a certain residue class ring. We show that this system will have a unique solution that can be lifted to the integers if some mild conditions on the parameters are satisfied. This lift then yields the secret isogeny. One consequence of this work is that the choice of the prime $p$ in \mbox{B-SIDH} is tight.
2022
PKC
Financially Backed Covert Security 📺
The security notion of covert security introduced by Aumann and Lindell (TCC'07) allows the adversary to successfully cheat and break security with a fixed probability $1-\epsilon$, while with probability $\epsilon$, honest parties detect the cheating attempt. Asharov and Orlandi (ASIACRYPT'12) extend covert security to enable parties to create publicly verifiable evidence about misbehavior that can be transferred to any third party. This notion is called publicly verifiable covert security (PVC) and has been investigated by multiple works. While these two notions work well in settings with known identities in which parties care about their reputation, they fall short in Internet-like settings where there are only digital identities that can provide some form of anonymity. In this work, we propose the notion of financially backed covert security (FBC), which ensures that the adversary is financially punished if cheating is detected. Next, we present three transformations that turn PVC protocols into FBC protocols. Our protocols provide highly efficient judging, thereby enabling practical judge implementations via smart contracts deployed on a blockchain. In particular, the judge only needs to non-interactively validate a single protocol message while previous PVC protocols required the judge to emulate the whole protocol. Furthermore, by allowing an interactive punishment procedure, we can reduce the amount of validation to a single program instruction, e.g., a gate in a circuit. An interactive punishment, additionally, enables us to create financially backed covert secure protocols without any form of common public transcript, a property that has not been achieved by prior PVC protocols.
2022
PKC
Efficient Lattice-Based Blind Signatures via Gaussian One-Time Signatures 📺
Lattice-based blind signature schemes have been receiving renewed attention lately. Earlier efficient 3-round schemes (Asiacrypt 2010, Financial Cryptography 2020) were recently shown to have mistakes in their proofs, and fixing them turned out to be extremely inefficient and limited the number of signatures that a signer could send to less than a dozen (Crypto 2020). In this work we propose a round-optimal, 2-round lattice-based blind signature scheme which produces signatures of length 150KB. The running time of the signing protocol is linear in the maximum number of signatures that can be given out, and this limits the number of signatures that can be signed per public key. Nevertheless, the scheme is still quite efficient when the number of signatures is limited to a few tens of thousands, and appears to currently be the most efficient lattice-based candidate.
2022
PKC
Post-quantum Asynchronous Deniable Key Exchange and the Signal Handshake 📺
The key exchange protocol that establishes initial shared secrets in the handshake of the Signal end-to-end encrypted messaging protocol has several important characteristics: (1) it runs asynchronously (without both parties needing to be simultaneously online), (2) it provides implicit mutual authentication while retaining deniability (transcripts cannot be used to prove either party participated in the protocol), and (3) it retains security even if some keys are compromised (forward secrecy and beyond). All of these properties emerge from clever use of the highly flexible Diffie--Hellman protocol. While quantum-resistant key encapsulation mechanisms (KEMs) can replace Diffie--Hellman key exchange in some settings, there is no KEM-based replacement for the Signal handshake that achieves all three aforementioned properties, in part due to the inherent asymmetry of KEM operations. In this paper, we show how to construct asynchronous deniable key exchange by combining KEMs and designated verifier signature (DVS) schemes. There are several candidates for post-quantum DVS schemes, either direct constructions or via ring signatures. This yields a template for an efficient post-quantum realization of the Signal handshake with the same asynchronicity and security properties as the original Signal protocol.
2022
PKC
Towards a Simpler Lattice Gadget Toolkit 📺
As building blocks, gadgets and associated algorithms are widely used in advanced lattice cryptosystems. The gadget algorithms for power-of-base moduli are very efficient and simple; however, despite several efforts, the current algorithms for arbitrary moduli are still complicated and practically more costly. Considering the necessity of arbitrary moduli, developing simpler and more practical gadget algorithms for arbitrary moduli is crucial to improving the practical performance of lattice-based applications. In this work, we propose two new gadget sampling algorithms for arbitrary moduli. Our first algorithm is for gadget Gaussian sampling. It is simple and efficient. One distinguishing feature of our Gaussian sampler is that it does not need floating-point arithmetic, which makes it better suited to constrained environments. Our second algorithm is for gadget subgaussian sampling. Compared with the existing algorithm, it is simpler, faster, and requires asymptotically less randomness. In addition, our subgaussian sampler achieves an almost equal quality for different practical parameters. Overall, these two algorithms provide simpler options for gadget algorithms and enhance the practicality of the gadget toolkit.
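For context, the following Python sketch shows the standard gadget vector $g = (1, b, b^2, \ldots, b^{k-1})$ and the plain deterministic base-$b$ digit decomposition for an arbitrary modulus $q$; the randomized Gaussian and subgaussian samplers proposed in the paper are not reproduced here, and the parameters below are illustrative.

def gadget_decompose(u, b, k, q):
    """Digits (d_0, ..., d_{k-1}) in base b with sum(d_i * b^i) == u (mod q).
    This is the plain deterministic decomposition; the paper's contribution
    is randomized (Gaussian / subgaussian) sampling of such preimages."""
    u %= q
    digits = []
    for _ in range(k):
        digits.append(u % b)
        u //= b
    return digits

q = 12289            # an arbitrary, non-power-of-b modulus (illustrative choice)
b, k = 4, 7          # base and digit count, with b**k = 16384 >= q
u = 9876
d = gadget_decompose(u, b, k, q)
gadget = [b**i for i in range(k)]
assert sum(di * gi for di, gi in zip(d, gadget)) % q == u % q
print("base-4 gadget decomposition of", u, ":", d)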
2022
PKC
The Direction of Updatable Encryption Does Matter 📺
We introduce a new definition for key updates, called backward-leak uni-directional key updates, in updatable encryption (UE). This notion is a variant of uni-directional key updates for UE. We show that existing secure UE schemes in the bi-directional key updates setting are not secure in the backward-leak uni-directional key updates setting. Thus, security in the backward-leak uni-directional key updates setting is strictly stronger than security in the bi-directional key updates setting. This result is in sharp contrast to the equivalence theorem by Jiang (Asiacrypt 2020), which says security in the bi-directional key updates setting is equivalent to security in the existing uni-directional key updates setting. We call the existing uni-directional key updates ``forward-leak uni-directional'' key updates to distinguish two types of uni-directional key updates in this paper. We also present two UE schemes with the following features. - The first scheme is post-quantum secure in the backward-leak uni-directional key updates setting under the learning with errors assumption. - The second scheme is secure in the no-directional key updates setting and based on indistinguishability obfuscation and one-way functions. This result solves the open problem left by Jiang (Asiacrypt 2020).
2022
PKC
Leakage-Resilient IBE/ABE with Optimal Leakage Rates from Lattices 📺
We derive the first adaptively secure IBE and ABE for t-CNF, and selectively secure ABE for general circuits from lattices, with $1-o(1)$ leakage rates, in both the relative leakage model and the bounded retrieval model (BRM). To achieve this, we first identify a new fine-grained security notion for ABE -- partially adaptive/selective security -- and instantiate this notion from LWE. Then, using this notion, we design a new key-compressing mechanism for identity-based/attribute-based weak hash proof systems (IB/AB-wHPS) for various policy classes, achieving (1) succinct secret keys and (2) adaptive/selective security matching the existing non-leakage-resilient lattice-based designs. Using the existing connection between weak hash proof systems and leakage-resilient encryption, the succinct-key IB/AB-wHPS yields the desired leakage-resilient IBE/ABE schemes with optimal leakage rates in the relative leakage model. Finally, by further improving the prior analysis of the compatible locally computable extractors, we achieve optimal leakage rates in the BRM.
2022
PKC
Syndrome Decoding Estimator 📺
The selection of secure parameter sets requires an estimation of the attack cost to break the respective cryptographic scheme instantiated under these parameters. The current NIST standardization process for post-quantum schemes makes this an urgent task, especially considering the announcement to select final candidates by the end of 2021. For code-based schemes, recent estimates seemed to contradict the claimed security of most proposals, leading to a certain doubt about the correctness of those estimates. Furthermore, none of the available estimates includes the most recent algorithmic improvements on decoding linear codes, which are based on information set decoding (ISD) in combination with nearest neighbor search. In this work we observe that \emph{all} major ISD improvements are built on nearest neighbor search, explicitly or implicitly. This allows us to derive a framework from which we obtain \emph{practical} variants of all relevant ISD algorithms, including the most recent improvements. We derive formulas for the practical attack costs and make these available online in an easy-to-use estimator tool written in Python and C. Finally, we provide classical and quantum estimates for the bit security of all parameter sets of current code-based NIST proposals.
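The estimator itself is not reproduced here, but as a baseline the following Python snippet computes the classical Prange information-set-decoding estimate, i.e. expected iterations $\binom{n}{w}/\binom{n-k}{w}$ times a rough Gaussian-elimination cost; the parameters are hypothetical toy values, and the paper's tool refines this with nearest-neighbor-based algorithms.

import math

def prange_bit_complexity(n, k, w):
    """Rough log2 cost of Prange's information-set decoding: the expected number
    of iterations C(n, w) / C(n - k, w), each costing about (n - k)^3 field
    operations for Gaussian elimination. This is only the classical baseline,
    not the refined estimates of the paper's tool."""
    p_success = math.comb(n - k, w) / math.comb(n, w)  # all w errors avoid the information set
    iterations = 1 / p_success
    gauss_cost = (n - k) ** 3
    return math.log2(iterations * gauss_cost)

# Hypothetical toy parameters, chosen only for illustration.
n, k, w = 1024, 524, 50
print(f"Prange estimate for (n={n}, k={k}, w={w}): "
      f"~2^{prange_bit_complexity(n, k, w):.1f} bit operations")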
2022
PKC
On the Bottleneck Complexity of MPC with Correlated Randomness 📺
At ICALP 2018, Boyle et al. introduced the notion of the \emph{bottleneck complexity} of a secure multi-party computation (MPC) protocol. This measures the maximum communication complexity of any one party in the protocol, aiming to improve load-balancing among the parties. In this work, we study the bottleneck complexity of MPC in the preprocessing model, where parties are given correlated randomness ahead of time. We present two constructions of \emph{bottleneck-efficient} MPC protocols, whose bottleneck complexity is independent of the number of parties: 1. A protocol for computing abelian programs, based only on one-way functions. 2. A protocol for selection functions, based on any linearly homomorphic encryption scheme. Compared with previous bottleneck-efficient constructions, our protocols can be based on a wider range of assumptions, and avoid the use of fully homomorphic encryption.
2022
PKC
Traceable PRFs: Full Collusion Resistance and Active Security 📺
The main goal of traceable cryptography is to protect against unauthorized redistribution of cryptographic functionalities. Such schemes provide a way to embed identities (i.e., a "mark") within cryptographic objects (e.g., decryption keys in an encryption scheme, signing keys in a signature scheme). In turn, the tracing guarantee ensures that any "pirate device" that successfully replicates the underlying functionality can be successfully traced to the set of identities used to build the device. In this work, we study traceable pseudorandom functions (PRFs). As PRFs are the workhorses of symmetric cryptography, traceable PRFs are useful for augmenting symmetric cryptographic primitives with strong traceable security guarantees. However, existing constructions of traceable PRFs either rely on strong notions like indistinguishability obfuscation or satisfy weak security guarantees like single-key security (i.e., tracing only works against adversaries that possess a single marked key). In this work, we show how to use fingerprinting codes to upgrade a single-key traceable PRF into a fully collusion resistant traceable PRF, where security holds regardless of how many keys the adversary possesses. We additionally introduce a stronger notion of security where tracing security holds even against active adversaries that have oracle access to the tracing algorithm. In conjunction with known constructions of single-key traceable PRFs, we obtain the first fully collusion resistant traceable PRF from standard lattice assumptions. Our traceable PRFs directly imply new lattice-based secret-key traitor tracing schemes that are CCA-secure and where tracing security holds against active adversaries that have access to the tracing oracle.
2022
PKC
Post-Quantum Anonymous One-Sided Authenticated Key Exchange without Random Oracles 📺
Authenticated Key Exchange (AKE) is a cryptographic protocol to share a common session key among multiple parties. Usually, PKI-based AKE schemes are designed to guarantee secrecy of the session key and mutual authentication. However, in practice, there are many cases where mutual authentication is undesirable such as in anonymous networks like Tor and Riffle, or difficult to achieve due to the certificate management at the user level such as the Internet. Goldberg et al. formulated a model of anonymous one-sided AKE which guarantees the anonymity of the client by allowing only the client to authenticate the server, and proposed a concrete scheme. However, existing anonymous one-sided AKE schemes are only known to be secure in the random oracle model. In this paper, we propose generic constructions of anonymous one-sided AKE in the random oracle model and in the standard model, respectively. Our constructions allow us to construct the first post-quantum anonymous one-sided AKE scheme from isogenies in the standard model.
2022
PKC
Efficient Verifiable Partially-Decryptable Commitments from Lattices and Applications 📺
We introduce verifiable partially-decryptable commitments (VPDC), as a building block for constructing efficient privacy-preserving protocols supporting auditability by a trusted party. A VPDC is an extension of a commitment along with an accompanying proof, convincing a verifier that (i) the given commitment is well-formed and (ii) a certain part of the committed message can be decrypted using a (secret) trapdoor known to a trusted party. We first formalize VPDCs and then introduce a general decryption feasibility result that overcomes the challenges in relaxed proofs arising in the lattice setting. Our general result can be applied to a wide class of Fiat-Shamir based protocols and may be of independent interest. Next, we show how to extend the commonly used lattice-based `Hashed-Message Commitment' (HMC) scheme into a succinct and efficient VPDC. In particular, we devise a novel `gadget'-based Regev-style (partial) decryption method, compatible with efficient relaxed lattice-based zero-knowledge proofs. We prove the soundness of our VPDC in the setting of adversarial proofs, where a prover tries to create a valid VPDC output that fails in decryption. To demonstrate the effectiveness of our results, we extend a private blockchain payment protocol, MatRiCT, by Esgin et al. (ACM CCS '19) into a formally auditable construction, which we call MatRiCT-Au, with very low communication and computation overheads over MatRiCT.
2022
PKC
On Pairing-Free Blind Signature Schemes in the Algebraic Group Model 📺
Studying the security and efficiency of blind signatures is an important goal for privacy-sensitive applications. In particular, for large-scale settings (e.g., cryptocurrency tumblers), it is important for schemes to scale well with the number of users in the system. Unfortunately, all practical schemes either 1) rely on (very strong) number-theoretic hardness assumptions and/or computationally expensive pairing operations over bilinear groups, or 2) support only a polylogarithmic number of concurrent (i.e., arbitrarily interleaved) signing sessions per public key. In this work, we revisit the security of two pairing-free blind signature schemes in the Algebraic Group Model (AGM) + Random Oracle Model (ROM). Concretely, 1. We consider the security of Abe's scheme (EUROCRYPT '01), which is known to have a flawed proof in the plain ROM. We adapt the scheme to allow a partially blind variant and give a proof of the new scheme under the discrete logarithm assumption in the AGM+ROM, even for (polynomially many) concurrent signing sessions. 2. We then prove that the popular blind Schnorr scheme is secure under the one-more discrete logarithm assumption if the signatures are issued sequentially. While the work of Fuchsbauer et al. (EUROCRYPT '20) proves the security of the blind Schnorr scheme for concurrent signing sessions in the AGM+ROM, its underlying assumption, ROS, is proven false by Benhamouda et al. (EUROCRYPT '21) when more than polylogarithmically many signatures are issued. Given the recent progress, we present the first security analysis of the blind Schnorr scheme in the slightly weaker sequential setting. We also show that our security proof reduces from the weakest possible assumption, with respect to known reduction techniques.
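To make the analyzed protocol flow concrete, here is a toy Python walkthrough of a single, sequential blind Schnorr signing session; the group parameters and the hash-to-scalar are insecure illustrative choices, and none of the AGM/ROM analysis is modeled.

import hashlib, random

# Toy Schnorr group (illustrative, insecure sizes): order-q subgroup of Z_p^*.
q = 1019
p = 2 * q + 1                     # 2039, prime
g = 4                             # generator of the order-q subgroup

def H(R, m):                      # illustrative hash-to-scalar in Z_q
    return int.from_bytes(hashlib.sha256(f"{R}|{m}".encode()).digest(), "big") % q

x = random.randrange(1, q)        # signer's secret key
y = pow(g, x, p)                  # public key

# Signer: commit.
r = random.randrange(1, q)
R = pow(g, r, p)

# User: blind the commitment and derive the blinded challenge.
m = "example message"
alpha, beta = random.randrange(q), random.randrange(q)
R_blind = R * pow(g, alpha, p) * pow(y, beta, p) % p
c_blind = H(R_blind, m)
c = (c_blind + beta) % q          # challenge sent back to the signer

# Signer: respond.
s = (r + c * x) % q

# User: unblind; (R_blind, s_blind) is a plain Schnorr signature on m.
s_blind = (s + alpha) % q
assert pow(g, s_blind, p) == R_blind * pow(y, c_blind, p) % p
print("blind Schnorr signature verifies")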
2022
PKC
A New Security Notion for PKC in the Standard Model: Weaker, Simpler, and Still Realizing Secure Channels 📺
Encryption satisfying CCA2 security is commonly known to be unnecessarily strong for realizing secure channels. Moreover, CCA2 constructions in the standard model are far from being competitive practical alternatives to constructions in the random oracle model. A promising research direction for alleviating this problem is weaker security notions—like IND-RCCA secure encryption or IND-atag-wCCA secure tag-based encryption—which are still able to facilitate secure message transfer (SMT) via authenticated channels. In this paper we introduce the concept of sender-binding encryption (SBE), unifying prior approaches of SMT construction in the universal composability (UC) model. We furthermore develop the corresponding non-trivial security notion of IND-SB-CPA and formally prove that it suffices for realizing SMT in conjunction with authenticated channels. Our notion is the weakest so far in the sense that it can be generically constructed from the weakest prior notions—RCCA and atag-wCCA—without additional assumptions, while the reverse is not true. A direct consequence is that IND-stag-wCCA, which is strictly weaker than IND-atag-wCCA but stronger than our IND-SB-CPA, can be used to construct a secure channel. Finally, we give an efficient IND-SB-CPA secure construction in the standard model from IND-CPA secure double receiver encryption (DRE) based on McEliece. This shows that IND-SB-CPA security yields simpler and more efficient constructions in the standard model than the weakest prior notions, i.e., IND-atag-wCCA and IND-stag-wCCA.
2022
PKC
Post-quantum Security of Plain OAEP Transform 📺
In this paper, we show that the OAEP transform is indistinguishable under chosen-ciphertext attack in the quantum random oracle model if the underlying trapdoor permutation is quantum partial-domain one-way. The existing post-quantum security proof for OAEP (TCC 2016-B) requires a modification to the OAEP transform using an extra hash function. We prove the security of the OAEP transform without any modification, which affirmatively answers an open question raised in the NTRU submission, one of the finalists of the NIST competition.
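For readers who want to see the transform being analyzed, the following is a minimal Python sketch of the plain (two-round Feistel) OAEP padding with SHA-256-based mask functions and toy lengths; the trapdoor permutation and the quantum security argument are not modeled, and all length choices are illustrative.

import hashlib, os

K0 = 16    # bytes of randomness r           (illustrative lengths)
K1 = 16    # bytes of zero padding
MLEN = 32  # message length in bytes; the padded block has MLEN + K1 + K0 bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def G(r):   # expands r to MLEN + K1 bytes (toy mask generation via SHA-256)
    out, i = b"", 0
    while len(out) < MLEN + K1:
        out += hashlib.sha256(r + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:MLEN + K1]

def H(X):   # compresses X to K0 bytes
    return hashlib.sha256(X).digest()[:K0]

def oaep_encode(m):
    assert len(m) == MLEN
    r = os.urandom(K0)
    X = xor(m + b"\x00" * K1, G(r))   # first Feistel round
    Y = xor(r, H(X))                  # second Feistel round
    return X + Y                      # this block would then go through the trapdoor permutation

def oaep_decode(block):
    X, Y = block[:MLEN + K1], block[MLEN + K1:]
    r = xor(Y, H(X))
    padded = xor(X, G(r))
    m, pad = padded[:MLEN], padded[MLEN:]
    assert pad == b"\x00" * K1, "invalid padding"
    return m

msg = os.urandom(MLEN)
assert oaep_decode(oaep_encode(msg)) == msg
print("plain OAEP encode/decode round-trips")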
2022
PKC
Rational Modular Encoding in the DCR Setting: Non-Interactive Range Proofs and Paillier-Based Naor-Yung in the Standard Model 📺
Range proofs allow a sender to convince a verifier that committed integers belong to an interval without revealing anything else. So far, all known non-interactive range proofs in the standard model rely on groups endowed with a bilinear map. Moreover, they either require the group order to be larger than the range of any proven statement or they suffer from a wasteful rate. Recently (Eurocrypt'21), Couteau et al. introduced a new approach to efficiently prove range membership by encoding integers as a modular ratio between small integers. We show that their technique can be transposed in the standard model under the Composite Residuosity (DCR) assumption. Interestingly, with this modification, the size of ranges is not a priori restricted by the common reference string. It also gives a constant ratio between the size of ranges and proofs. Moreover, we show that their technique of encoding messages as bounded rationals provides a secure standard model instantiation of the Naor-Yung CCA2 encryption paradigm under the DCR assumption. Keywords: Range proofs, NIZK, standard model, Naor-Yung.
2022
PKC
Lockable Obfuscation from Circularly Insecure Fully Homomorphic Encryption 📺
In a lockable obfuscation scheme, a party called the obfuscator takes as input a circuit C, a lock value y, and a message m, and outputs an obfuscated circuit. Given the obfuscated circuit, an evaluator can run it on an input x and learn the message if C(x) = y. For security, we require that the obfuscation reveals no information on the circuit as long as the lock y has high entropy, even given the circuit C. The only known constructions of lockable obfuscation schemes require indistinguishability obfuscation (iO) or the learning with errors (LWE) assumption. Furthermore, in terms of technique, all known constructions, excluding the iO-based ones, are built from provably secure variations of graph-induced multilinear maps. We show a generic construction of a lockable obfuscation scheme built from a (leveled) fully homomorphic encryption scheme that is circularly insecure. Specifically, we need a fully homomorphic encryption scheme that is secure under chosen-plaintext attack (IND-CPA) but for which there is an efficient cycle tester that can detect encrypted key cycles. Our finding sheds new light on how to construct lockable obfuscation schemes and shows why cycle tester constructions were helpful in the design of lockable obfuscation schemes. One of the many use cases for lockable obfuscation schemes is the construction of IND-CPA secure but circularly insecure encryption schemes. Our work shows that there is a two-way connection between circularly insecure encryption and lockable obfuscation.
2022
PKC
Improved Constructions of Anonymous Credentials From Structure-Preserving Signatures on Equivalence Classes 📺
Anonymous attribute-based credentials (ABCs) are a powerful tool allowing users to authenticate while maintaining privacy. When instantiated from structure-preserving signatures on equivalence classes (SPS-EQ) we obtain a controlled form of malleability, and hence increased functionality and privacy for the user. Existing constructions consider equivalence classes on the message space, allowing the joint randomization of credentials and the corresponding signatures on them. In this work, we additionally consider equivalence classes on the signing-key space. In this regard, we obtain a \emph{signer hiding} notion, where the issuing organization is not revealed when a user shows a credential. To achieve this, we instantiate the ABC framework of Fuchsbauer, Hanser, and Slamanig (FHS, Journal of Cryptology '19) with a recent SPS-EQ scheme (ASIACRYPT '19) modified to support a fully adaptive NIZK from the framework of Couteau and Hartmann (CRYPTO '20). We also show how to obtain Mercurial Signatures (CT-RSA, 2019), extending the application of our construction to anonymous delegatable credentials. To further increase functionality and efficiency, we augment the set-commitment scheme of FHS19 to support openings on attribute sets disjoint from those possessed by the user, while integrating a proof of exponentiation to allow for a more efficient verifier. Instantiating in the CRS model, we obtain an efficient credential system, anonymous under malicious organization keys, with increased expressiveness and privacy, proven secure in the standard model.
2022
PKC
Reusable Two-Round MPC from LPN 📺
We present a new construction of maliciously-secure, two-round multiparty computation (MPC) in the CRS model, where the first message is reusable an unbounded number of times. The security of the protocol relies on the Learning Parity with Noise (LPN) assumption with inverse polynomial noise rate $1/n^{1-\epsilon}$ for small enough constant $\epsilon$, where $n$ is the LPN dimension. Prior works on reusable two-round MPC required assumptions such as DDH or LWE that imply some flavor of homomorphic computation. We obtain our result in two steps: - In the first step, we construct a two-round MPC protocol in the {\it silent pre-processing model} (Boyle et al., Crypto 2019). Specifically, the parties engage in a computationally inexpensive setup procedure that generates some correlated random strings. Then, the parties commit to their inputs. Finally, each party sends a message depending on the function to be computed, and these messages can be decoded to obtain the output. Crucially, the complexity of the pre-processing phase and the input commitment phase do not grow with the size of the circuit to be computed. We call this {\it multiparty silent NISC} (msNISC), generalizing the notion of two-party silent NISC of Boyle et al. (CCS 2019). We provide a construction of msNISC from LPN in the random oracle model. - In the second step, we give a transformation that removes the pre-processing phase and use of random oracle from the previous protocol. This transformation additionally adds (unbounded) reusability of the first round message, giving the first construction of reusable two-round MPC from the LPN assumption. This step makes novel use of randomized encoding of circuits (Applebaum et al., FOCS 2004) and a variant of the ``tree of MPC messages" technique of Ananth et al. and Bartusek et al. (TCC 2020).
2022
PKC
Radical Isogenies on Montgomery Curves 📺
We work on some open problems in radical isogenies. Radical isogenies are formulas to compute chains of N-isogenies for small N, proposed by Castryck, Decru, and Vercauteren at Asiacrypt 2020. These formulas do not require generating a point of order N that generates the kernel, and they accelerate some isogeny-based cryptosystems such as CSIDH. On the other hand, since these formulas use Tate normal forms, one needs to transform Tate normal forms into curves with efficient arithmetic, e.g., Montgomery curves. In this paper, we propose radical-isogeny formulas of degrees 3 and 4 on Montgomery curves. Our formulas compute some values determining Montgomery curves, from which one can efficiently recover Montgomery coefficients. Moreover, our formulas are more efficient for some cryptosystems than the original radical isogenies. In addition, we prove a conjecture left open by Castryck et al. that relates to radical isogenies of degree 4.
2022
PKC
Lattice-based Signatures with Tight Adaptive Corruptions and More 📺
We construct the first tightly secure signature schemes in the multi-user setting with adaptive corruptions from lattices. In stark contrast to the previous tight constructions whose security is solely based on number-theoretic assumptions, our schemes are based on the Learning with Errors (LWE) assumption which is supposed to be post-quantum secure. The security of our scheme is independent of the numbers of users and signing queries, and it is in the non-programmable random oracle model. Our LWE-based scheme is compact, namely, its signatures contain only a constant number of lattice vectors. At the core of our construction are a new abstraction of the existing lossy identification (ID) schemes using dual-mode commitment schemes and a refinement of the framework by Diemert et al. (PKC 2021) which transforms a lossy ID scheme to a signature using sequential OR proofs. In combination, we obtain a tight generic construction of signatures from dual-mode commitments in the multi-user setting. Improving the work of Diemert et al., our new approach can be instantiated using not only the LWE assumption, but also an isogeny-based assumption. We stress that our LWE-based lossy ID scheme in the intermediate step uses a conceptually different idea than the previous lattice-based ones. Of independent interest, we formally rule out the possibility that the aforementioned ``ID-to-Signature'' methodology can work tightly using parallel OR proofs. In addition to the results of Fischlin et al. (EUROCRYPT 2020), our impossibility result shows a qualitative difference between both forms of OR proofs in terms of tightness.
2022
PKC
Logarithmic-Size (Linkable) Threshold Ring Signatures in the Plain Model 📺
A $1$-out-of-$N$ ring signature scheme, introduced by Rivest, Shamir, and Tauman-Kalai (ASIACRYPT '01), allows a signer to sign a message as part of a set of size $N$ (the so-called ``ring'') which are anonymous to any verifier, including other members of the ring. Threshold ring (or ``thring'') signatures generalize ring signatures to $t$-out-of-$N$ parties, with $t \geq 1$, who anonymously sign messages and show that they are distinct signers (Bresson et al., CRYPTO'02). Until recently, there was no construction of ring signatures that both $(i)$ had logarithmic signature size in $N$, and $(ii)$ was secure in the plain model. The work of Backes et al. (EUROCRYPT'19) resolved both these issues. However, threshold ring signatures have their own particular problem: with a threshold $t \geq 1$, signers must often reveal their identities to the other signers as part of the signing process. This is an issue in situations where a ring member has something controversial to sign; he may feel uncomfortable requesting that other members join the threshold, as this reveals his identity. Building on the Backes et al. template, in this work we present the first construction of a thring signature that is logarithmic-sized in $N$, in the plain model, and does not require signers to interact with each other to produce the thring signature. We also present a linkable counterpart to our construction, which supports a fine-grained control of linkability. Moreover, our thring signatures can easily be adapted to achieve the recent notions of claimability and repudiability (Park and Sealfon, CRYPTO'19).
2022
PKC
Low-Communication Multiparty Triple Generation for SPDZ from Ring-LPN 📺
The SPDZ protocol for multi-party computation relies on a correlated randomness setup consisting of authenticated, multiplication triples. A recent line of work by Boyle et al. (Crypto 2019, Crypto 2020) has investigated the possibility of producing this correlated randomness in a \emph{silent preprocessing} phase, which involves a ``small'' setup protocol with less communication than the total size of the triples being produced. These works do this using a tool called a \emph{pseudorandom correlation generator} (PCG), which allows a large batch of correlated randomness to be compressed into a set of smaller, correlated seeds. However, existing methods for compressing SPDZ triples only apply to the 2-party setting. In this work, we construct a PCG for producing SPDZ triples over large prime fields in the multi-party setting. The security of our PCG is based on the ring-LPN assumption over fields, similar to the work of Boyle et al. (Crypto 2020) in the 2-party setting. We also present a corresponding, actively secure setup protocol, which can be used to generate the PCG seeds and instantiate SPDZ with a silent preprocessing phase. As a building block, which may be of independent interest, we construct a new type of 3-party distributed point function supporting outputs over arbitrary groups (including large prime order), as well as an efficient protocol for setting up our DPF keys with active security.
2022
PKC
CNF-FSS and its Applications 📺
Function Secret Sharing (FSS), introduced by Boyle, Gilboa and Ishai~\cite{BGI15}, extends the classical notion of secret-sharing a \textit{value} to secret sharing a \textit{function}. Namely, for a secret function $f$ (from a class $\cal F$), FSS provides a sharing of $f$ whereby {\em succinct} shares (``keys'') are distributed to a set of parties, so that later the parties can non-interactively compute an additive sharing of $f(x)$, for any input $x$ in the domain of $f$. Previous work on FSS concentrated mostly on the two-party case, where highly efficient schemes are obtained for some simple, yet extremely useful, classes $\cal F$ (in particular, FSS for the class of point functions, a task referred to as DPF~--~Distributed Point Functions~\cite{GI14,BGI15}). In this paper, we concentrate on the multi-party case, with $p\ge 3$ parties and $t$-security ($1\le t<p$). First, we introduce the notion of \textsf{CNF-DPF} (or, more generally, \textsf{CNF-FSS}), where the scheme uses the CNF version of secret sharing (rather than additive sharing) to share each value $f(x)$. We then demonstrate the utility of CNF-DPF by providing several applications. Our main result shows how CNF-DPF can be used to achieve substantial asymptotic improvement in communication complexity when using it as a building block for constructing {\em standard} $(t,p)$-DPF protocols that tolerate $t>1$ (semi-honest) corruptions (of the $p$ parties). For example, we build a 2-out-of-5 secure (standard) DPF scheme of communication complexity $O(N^{1/4})$, where $N$ is the domain size of $f$ (compared with the current best-known of $O(N^{1/2})$ for $(2,5)$-DPF). More generally, with $p>dt$ parties, we give a $(t,p)$-DPF whose communication grows as $O(N^{1/2d})$ (rather than $O(\sqrt{N})$ that follows from the $(p-1,p)$-DPF scheme of \cite{BGI15}). We also present a 1-out-of-3 secure CNF-DPF scheme, in which each party holds two of the three keys, with poly-logarithmic communication complexity. These results have immediate implications to scenarios where (multi-server) DPF was shown to be applicable. For example, we show how to use such a scheme to obtain asymptotic improvement ($O(\log^2N)$ versus $O(\sqrt{N})$) in communication complexity over the 3-party protocol of~\cite{BKKO20}.
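To illustrate the sharing format behind the CNF-DPF notion, the following Python sketch implements 1-private CNF (replicated) secret sharing of a single value among three parties; this is only the underlying sharing of one value, not an FSS scheme, and the modulus is an arbitrary illustrative choice.

import random

P_MOD = 2**31 - 1   # toy modulus (illustrative)

def cnf_share_3(x):
    """1-private CNF (replicated) sharing of x among 3 parties: x = x0 + x1 + x2,
    and party i holds every additive share except x_i."""
    x0 = random.randrange(P_MOD)
    x1 = random.randrange(P_MOD)
    x2 = (x - x0 - x1) % P_MOD
    shares = [x0, x1, x2]
    return [
        {j: shares[j] for j in range(3) if j != i}   # party i misses share x_i
        for i in range(3)
    ]

def reconstruct(party_a, party_b):
    """Any two parties jointly hold all three additive shares."""
    merged = {**party_a, **party_b}
    return sum(merged.values()) % P_MOD

secret = 424242
p0, p1, p2 = cnf_share_3(secret)
assert reconstruct(p0, p1) == secret
assert reconstruct(p1, p2) == secret
print("CNF sharing reconstructs correctly from any two parties")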
2022
PKC
Count Me In! Extendability for Threshold Ring Signatures 📺
Ring signatures enable a signer to sign a message on behalf of a group anonymously, without revealing her identity. Similarly, threshold ring signatures allow several signers to sign the same message on behalf of a group; while the combined signature reveals that some threshold t of group members signed the message, it does not leak anything else about the signers’ identities. Anonymity is a central feature in threshold ring signature applications, such as whistleblowing, e-voting and privacy-preserving cryptocurrencies: it is often crucial for signers to remain anonymous even from their fellow signers. When the generation of a signature requires interaction, this is difficult to achieve. There exist threshold ring signatures with non-interactive signing — where signers locally produce partial signatures which can then be aggregated — but a limitation of existing threshold ring signature constructions is that all of the signers must agree on the group on whose behalf they are signing, which implicitly assumes some coordination amongst them. The need to agree on a group before generating a signature also prevents others — from outside that group — from endorsing a message by adding their signature to the statement post-factum. We overcome this limitation by introducing extendability for ring signatures, same-message linkable ring signatures, and threshold ring signatures. Extendability allows an untrusted third party to take a signature and extend it by enlarging the anonymity set. In the extendable threshold ring signature, two signatures on the same message which have been extended to the same anonymity set can then be combined into one signature with a higher threshold. This enhances signers’ anonymity, and enables new signers to anonymously support a statement already made by others. For each of those primitives, we formalize the syntax and provide a meaningful security model which includes different flavors of anonymous extendability. In addition, we present concrete realizations of each primitive and formally prove their security relying on signatures of knowledge and the hardness of the discrete logarithm problem. We also describe a generic transformation to obtain extendable threshold ring signatures from same-message-linkable extendable ring signatures. Finally, we implement and benchmark our constructions.
2022
PKC
Encapsulated Search Index : Public-Key, Sub-linear, Distributed, and Delegatable 📺
We build the first *sub-linear* (in fact, potentially constant-time) *public-key* searchable encryption system: - server can publish a public key $PK$. - anybody can build an encrypted index for document $D$ under $PK$. - client holding the index can obtain a token $z_w$ from the server to check if a keyword $w$ belongs to $D$. - search using $z_w$ is almost as fast (e.g., sub-linear) as the non-private search. - server granting the token does not learn anything about the document $D$, beyond the keyword $w$. - yet, the token $z_w$ is specific to the pair $(D,w)$: the client does not learn if other keywords $w'\neq w$ belong to $D$, or if $w$ belongs to other, freshly indexed documents $D'$. - server cannot fool the client by giving a wrong token $z_w$. We call such a primitive *encapsulated search index* (ESI). Our ESI scheme can be made $(t,n)$-distributed among $n$ servers in the best possible way: *non-interactive*, verifiable, and resilient to any coalition of up to $(t-1)$ malicious servers. We also introduce the notion of *delegatable* ESI and show how to extend our construction to this setting. Our solution --- including public indexing, sub-linear search, delegation, and distributed token generation --- is deployed as a commercial application by a real-world company.
2022
PKC
KDM Security for the Fujisaki-Okamoto Transformations in the QROM 📺
Key dependent message (KDM) security is a security notion that guarantees confidentiality of communication even if secret keys are encrypted. KDM security has found a number of applications in practical situations such as hard-disk encryption systems, anonymous credentials, and bootstrapping of fully homomorphic encryptions. Recently, it also found an application in quantum delegation protocols as shown by Zhang (TCC 2019). In this work, we investigate the KDM security of existing practical public-key encryption (PKE) schemes proposed in the quantum random oracle model (QROM). Concretely, we study a PKE scheme whose KEM is constructed by using Fujisaki-Okamoto (FO) transformations in the QROM. FO transformations are applied to an IND-CPA secure PKE scheme and yield an IND-CCA secure key encapsulation mechanism (KEM). Then, we show the following results. - We can reduce the KDM-CPA security in the QROM of a PKE scheme whose KEM is derived from any of the FO transformations proposed by Hofheinz et al. (TCC 2017) to the IND-CPA security of the underlying PKE scheme, without square root security loss. For this result, we use the one-time pad (OTP) as the DEM to convert the KEM into a PKE scheme. - We can reduce the KDM-CCA security in the QROM of a PKE scheme whose KEM is derived from a single variant of the FO transformation proposed by Hofheinz et al. (TCC 2017) to the IND-CPA security of the underlying PKE scheme, without square root security loss. For this result, we use the OTP-then-MAC construction as the DEM to convert the KEM into a PKE scheme. Also, we require a mild injectivity assumption for the underlying IND-CPA secure PKE scheme. In order to avoid square root security loss, we use a double-sided one-way to hiding (O2H) lemma proposed by Kuchta et al. (EUROCRYPT 2020). In the context of KDM security, there is a technical hurdle for using the double-sided O2H lemma due to the circularity issue. Our main technical contribution is to overcome the hurdle.
2022
PKC
Making Private Function Evaluation Safer, Faster, and Simpler 📺
In the problem of two-party \emph{private function evaluation} (PFE), one party $P_A$ holds a \emph{private function} $f$ and (optionally) a private input $x_A$, while the other party $P_B$ possesses a private input $x_B$. Their goal is to evaluate $f$ on $x_A$ and $x_B$, and one or both parties may obtain the evaluation result $f(x_A, x_B)$ while no other information beyond $f(x_A, x_B)$ is revealed. In this paper, we revisit the two-party PFE problem and provide several enhancements. We propose the \emph{first} constant-round actively secure PFE protocol with linear complexity. Based on this result, we further provide the \emph{first} constant-round publicly verifiable covertly (PVC) secure PFE protocol with linear complexity to gain better efficiency. For instance, when the deterrence factor is $\epsilon = 1/2$, its communication cost is very close to that of the passively secure protocol and its computation cost is around $2.6\times$ as high. In our constructions, as a by-product, we design a specific protocol for proving that a list of ElGamal ciphertexts is derived from an \emph{extended permutation} performed on a given list of elements. It should be noted that this protocol greatly improves the previous result and may be of independent interest. In addition, a reusability property is added to our two PFE protocols. Namely, if the same function $f$ is involved in multiple executions of the protocol between $P_A$ and $P_B$, then the protocol can be executed more efficiently from the second execution onward. Moreover, we further extend this property to be \emph{global}, such that it supports multiple executions for the same $f$ in a reusable fashion between $P_A$ and \emph{arbitrary} parties playing the role of $P_B$.
2022
PKC
Lifting Standard Model Reductions to Common Setup Assumptions 📺
In this paper we show that standard model black-box reductions naturally lift to various setup assumptions, such as the random oracle (ROM) or ideal cipher model. Concretely, we prove that a black-box reduction from a security notion $P$ to security notion $Q$ in the standard model can be turned into a non-programmable black-box reduction from $P_{\mathcal{O}}$ to $Q_{\mathcal{O}}$ in a model with a setup assumption $\mathcal{O}$, where $P_{\mathcal{O}}$ and $Q_{\mathcal{O}}$ are the natural extensions of $P$ and $Q$ to a model with a setup assumption $\mathcal{O}$. Our results rely on a generalization of the recent framework by Hofheinz and Nguyen (PKC 2019) to support primitives which make use of a trusted setup. Our framework encompasses standard idealized settings like the random oracle and the ideal cipher model. At the core of our main result lie novel properties of negligible functions that can be of independent interest.
2022
PKC
ECLIPSE: Enhanced Compiling method for Pedersen-committed zkSNARK Engines 📺
We advance the state-of-the art for zero-knowledge commit-and-prove SNARKs (CP-SNARKs). CP-SNARKs are an important class of SNARKs which, using commitments as ``glue'', allow to efficiently combine proof systems---e.g., general-purpose SNARKs (an efficient way to prove statements about circuits) and $\Sigma$-protocols (an efficient way to prove statements about group operations). Thus, CP-SNARKs allow to efficiently provide zero-knowledge proofs for composite statements such as $h=H(g^{x})$ for some hash-function $H$. Our main contribution is providing the first construction of CP-SNARKs where the proof size is succinct in the number of commitments. We achieve our result by providing a general technique to compile Algebraic Holographic Proofs (AHP) (an underlying abstraction used in many modern SNARKs) with special ``decomposition'' properties into an efficient CP-SNARK. We then show that some of the most efficient AHP constructions---Marlin, PLONK, and Sonic---satisfy our compilation requirements. Our resulting SNARKs achieve universal and updatable reference strings, which are highly desirable features as they greatly reduce the trust needed in the SNARK setup phase.
2022
PKC
Time-Memory tradeoffs for large-weight syndrome decoding in ternary codes 📺
We propose new algorithms for solving a class of large-weight syndrome decoding problems in random ternary codes. This is the main generic problem underlying the security of the recent Wave signature scheme (Debris-Alazard et al., 2019), and it has so far received limited attention. At SAC 2019 Bricout et al. proposed a reduction to a binary subset sum problem requiring many solutions, and used it to obtain the fastest known algorithm. However —as is often the case in the coding theory literature— its memory cost is proportional to its time cost, which makes it unattractive in most applications. In this work we propose a range of memory-efficient algorithms for this problem, which describe a near-continuous time-memory tradeoff curve. Those are obtained by using the same reduction as Bricout et al. and carefully instantiating the derived subset sum problem with exhaustive-search algorithms from the literature, in particular dissection (Dinur et al., 2012) and dissection in tree (Dinur, 2019). We also spend significant effort adapting those algorithms to decrease their granularity, thereby allowing them to be smoothly used in a syndrome decoding context when not all the solutions to the subset sum problem are required. For a proposed parameter set for Wave, one of our best instantiations is estimated to cost 2^177 bit operations and to require 2^88.5 bits of storage, while we estimate this to be 2^152 and 2^144 for the best algorithm from Bricout et al.
2022
PKC
Two-Round Oblivious Linear Evaluation from Learning with Errors 📺
Oblivious Linear Evaluation (OLE) is the arithmetic analogue of the well-known oblivious transfer primitive. It allows a sender, holding an affine function $f(x)=a+bx$ over a finite field or ring, to let a receiver learn $f(w)$ for a $w$ of the receiver's choice. In terms of security, the sender remains oblivious of the receiver's input $w$, whereas the receiver learns nothing beyond $f(w)$ about $f$. In recent years, OLE has emerged as an essential building block to construct efficient, reusable and maliciously-secure two-party computation. In this work, we present efficient two-round protocols for OLE over large fields based on the Learning with Errors (LWE) assumption, providing a full arithmetic generalization of the oblivious transfer protocol of Peikert, Vaikuntanathan and Waters (CRYPTO 2008). At the technical core of our work is a novel extraction technique which allows us to determine if a non-trivial multiple of some vector is close to a $q$-ary lattice.
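As a quick illustration of the functionality just defined, here is a minimal Python sketch of the ideal input/output behaviour only; the toy modulus and names are illustrative choices and this is not the paper's LWE-based protocol.

    # Ideal OLE behaviour: the sender holds an affine function f(x) = a + b*x over
    # a field, the receiver holds w and learns only f(w); the sender learns nothing
    # about w. This sketches the functionality, not a secure protocol.
    Q = 2**61 - 1  # toy prime modulus (illustrative)

    def ole_ideal(a: int, b: int, w: int) -> int:
        """Receiver's output f(w) = a + b*w mod Q."""
        return (a + b * w) % Q

    # Example: sender's f(x) = 5 + 7x, receiver's input w = 11.
    assert ole_ideal(5, 7, 11) == (5 + 7 * 11) % Q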
2022
PKC
On the security of OSIDH 📺
The Oriented Supersingular Isogeny Diffie-Hellman is a post-quantum key exchange scheme recently introduced by Colò and Kohel. It is based on the group action of an ideal class group of a quadratic imaginary order on a subset of supersingular elliptic curves, and in this sense it can be viewed as a generalization of the popular isogeny based key exchange CSIDH. From an algorithmic standpoint, however, OSIDH is quite different from CSIDH. In a sense, OSIDH uses class groups which are more structured than in CSIDH, creating a potential weakness that was already recognized by Colò and Kohel. To circumvent the weakness, they proposed an ingenious way to realize a key exchange by exchanging partial information on how the class group acts in the neighborhood of the public curves, and conjectured that this additional information would not impact security. In this work we revisit the security of OSIDH by presenting a new attack, building upon previous work of Onuki. Our attack has exponential complexity, but it practically breaks Colò and Kohel's parameters unlike Onuki's attack. We also discuss countermeasures to our attack, and analyze their impact on OSIDH, both from an efficiency and a functionality point of view.
2022
PKC
Storing and Retrieving Secrets on a Blockchain 📺
A secret sharing scheme enables one party to distribute shares of a secret to n parties and ensures that an adversary in control of t out of n parties will learn no information about the secret. However, traditional secret sharing schemes are often insufficient, especially for applications in which the set of parties who hold the secret shares might change over time. To achieve security in this setting, dynamic proactive secret sharing (DPSS) is used. DPSS schemes proactively update the secret shares held by the parties and allow changes to the set of parties holding the secrets. We propose FaB-DPSS (FAst Batched DPSS) -- a new and highly optimized batched DPSS scheme. While previous work on batched DPSS focuses on a single client submitting a batch of secrets and does not allow storing and releasing secrets independently, we allow multiple different clients to dynamically share and release secrets. FaB-DPSS is the most efficient robust DPSS scheme that supports the highest possible adversarial threshold of 1/2. We prove FaB-DPSS secure and implement it. All operations complete in seconds, and we outperform a prior state-of-the-art DPSS scheme by over 6 times. Additionally, we propose new applications of DPSS in the context of blockchains. Specifically, we propose a protocol that uses blockchains and FaB-DPSS to provide conditional secret storage. The protocol allows parties to store secrets along with a release condition, and once a (possibly different) party satisfies this release condition, the secret is privately released to that party. This functionality is similar to extractable witness encryption. While there are numerous compelling applications (e.g., time-lock encryption, one-time programs, and fair multi-party computation) which rely on extractable witness encryption, there are no known efficient constructions (or even constructions based on any well-studied assumptions) of extractable witness encryption. However, by utilizing blockchains and FaB-DPSS, we can easily build all those applications. We provide an implementation of our conditional secret storage protocol as well as several applications building on top of it.
2022
TOSC
The Legendre Symbol and the Modulo-2 Operator in Symmetric Schemes over $\mathbb{F}_p^n$: Preimage Attack on Full Grendel 📺
Motivated by modern cryptographic use cases such as multi-party computation (MPC), homomorphic encryption (HE), and zero-knowledge (ZK) protocols, several symmetric schemes that are efficient in these scenarios have recently been proposed in the literature. Some of these schemes are instantiated with low-degree nonlinear functions, for example low-degree power maps (e.g., MiMC, HadesMiMC, Poseidon) or the Toffoli gate (e.g., Ciminion). Others (e.g., Rescue, Vision, Grendel) are instead instantiated via high-degree functions which are easy to evaluate in the target application. A recent example for the latter case is the hash function Grendel, whose nonlinear layer is constructed using the Legendre symbol. In this paper, we analyze high-degree functions such as the Legendre symbol or the modulo-2 operation as building blocks for the nonlinear layer of a cryptographic scheme over $\mathbb{F}_p^n$. Our focus is on the security analysis rather than the efficiency in the mentioned use cases. For this purpose, we present several new invertible functions that make use of the Legendre symbol or of the modulo-2 operation. Even though these functions often provide strong statistical properties and ensure a high degree after a few rounds, the main problem is their small number of possible outputs, that is, only three for the Legendre symbol and only two for the modulo-2 operation. By fixing these outputs, it is possible to reduce the overall degree of the function significantly. We exploit this behavior by describing the first preimage attack on full Grendel, and we verify it in practice.
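A small Python illustration of the two building blocks discussed above and of why their output range is so small; the prime is a toy choice and nothing here is taken from the paper's construction.

    # Legendre symbol via Euler's criterion, and the modulo-2 operator on a field
    # element. Their tiny output ranges ({-1,0,1} and {0,1}) are the structural
    # feature exploited by the preimage attack on Grendel described above.
    p = 2**31 - 1  # toy prime (illustrative choice)

    def legendre(x: int) -> int:
        """Legendre symbol of x mod p: returns -1, 0 or 1."""
        r = pow(x % p, (p - 1) // 2, p)
        return -1 if r == p - 1 else r

    def mod2(x: int) -> int:
        """The modulo-2 operator on a field element, returning only 0 or 1."""
        return (x % p) & 1

    assert {legendre(x) for x in range(1, 50)} <= {-1, 0, 1}
    assert {mod2(x) for x in range(50)} == {0, 1}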
2022
TOSC
Weak Tweak-Keys for the CRAFT Block Cipher 📺
CRAFT is a lightweight tweakable Substitution-Permutation-Network (SPN) block cipher optimized for efficient protection of its implementations against Differential Fault Analysis (DFA) attacks. In this paper, we present an equivalent description of CRAFT up to a simple mapping on the plaintext, ciphertext and round tweakeys. We show that the new representation, for a sub-class of keys, leads to a new structure which is a Feistel network, with non-linear operation and key addition only on half the state. Consequently, it reveals a class of weak keys for which CRAFT is less resistant against differential and linear cryptanalyses. As a result, we present one weak-key single-tweak differential attack on 23 rounds (with time complexity of 2^94 encryptions and data complexity of 2^74 chosen plaintext/tweak/ciphertext tuples, working for 2^112 weak keys) and one weak-key related-tweak attack on 26 rounds of the cipher (with time complexity of 2^105 encryptions and data complexity of 2^73 chosen plaintext/tweak/ciphertext tuples, working for 2^108 weak keys). Note that these attacks do not break the security claim of the CRAFT block cipher.
2022
TOSC
Bounds for the Security of Ascon against Differential and Linear Cryptanalysis 📺
The NIST Lightweight Cryptography project aims to standardize symmetric cryptographic designs, including authenticated encryption and hashing, suitable for constrained devices. One essential criterion for the evaluation of the 10 finalists is the evidence for their security against attacks like linear and differential cryptanalysis. For Ascon, one of the finalists and previous winner of the CAESAR competition in the ‘lightweight’ category, there is a large gap between the proven bounds and the best known characteristics found with heuristic tools: The bounds only cover up to 3 rounds with 15 differentially and 13 linearly active S-boxes, insufficient for proving a level of security for the full constructions. In this paper, we propose a new modeling strategy for SAT solvers and derive strong bounds for the round-reduced Ascon permutation. We prove that 4 rounds already ensure that any single characteristic has a differential probability or squared correlation of at most 2^-72, and 6 rounds at most 2^-108. This is significantly below the bound that could be exploited within the query limit for keyed Ascon modes. These bounds are probably not tight. To achieve this result, we propose a new search strategy of dividing the search space into a large number of subproblems based on ‘girdle patterns’, and show how to exploit the rotational symmetry of Ascon using necklace theory. Additionally, we evaluate and optimize several aspects of the pure SAT model, including the counter implementation and parallelizability, which we expect to be useful for future applications to other models.
2022
TOSC
A Formal Analysis of Boomerang Probabilities 📺
In the past 20 years since their conception, boomerang attacks have become an important tool in the cryptanalysis of block ciphers. In the classical estimate of their success probability, assumptions are made about the independence of the underlying differential trails that are not well-founded. We underline the problems inherent in these independence assumptions by using them to prove that for any boomerang there exists a differential trail over the entire cipher with a higher probability than the boomerang. While cryptanalysts today have a clear understanding that the trails can be dependent, the focus of previous research has mostly gone into using these dependencies to improve attacks but little effort has been put into giving boomerangs and their success probabilities a stronger theoretical underpinning. With this publication, we provide such a formalization. We provide a framework which allows us to formulate and prove rigorous statements about the probabilities involved in boomerang attacks without relying on independence assumptions of the trails. Among these statements is a proof that two-round boomerangs on SPNs with differentially 4-uniform S-boxes always deviate from the classical probability estimate to the largest degree possible. We applied the results of this formalization to analyze the validity of some of the first boomerang attacks. We show that the boomerang constructed in the amplified boomerang attack on Serpent by Kelsey, Kohno, and Schneier has probability zero. For the rectangle attack on Serpent by Dunkelman, Biham, and Keller, we demonstrate that a minuscule fraction of only 2^-43.4 of all differential trail combinations used in the original attack have a non-zero probability. In spite of this, the probability of the boomerang is in fact a little higher than the original estimate suggests as the non-zero trails have a vastly higher probability than the classical estimate predicts.
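For context, the "classical estimate" referred to above is the standard independence-based heuristic (textbook background, not a claim of this paper): writing the cipher as $E = E_1 \circ E_0$ with differential probabilities $\hat{p}$ over $E_0$ and $\hat{q}$ over $E_1$,
\[
\Pr[\text{boomerang returns}] \;\approx\; \hat{p}^{\,2}\,\hat{q}^{\,2},
\]
and it is exactly the independence assumptions behind this estimate that the paper formalizes and challenges.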
2022
TOSC
Influence of the Linear Layer on the Algebraic Degree in SP-Networks 📺
We consider SPN schemes, i.e., schemes whose non-linear layer is defined as the parallel application of $t \geq 1$ independent S-Boxes over $\mathbb{F}_{2^n}$ and whose linear layer is defined by the multiplication with an $(n \cdot t) \times (n \cdot t)$ matrix over $\mathbb{F}_2$. Even if the algebraic representation of a scheme depends on all its components, upper bounds on the growth of the algebraic degree in the literature usually only consider the details of the non-linear layer. Hence a natural question arises: (how) do the details of the linear layer influence the growth of the algebraic degree? We show that the linear layer plays a crucial role in the growth of the algebraic degree and present a new upper bound on the algebraic degree in SP-networks. As main results, we prove that in the case of low-degree round functions with large S-Boxes: (a) an initial exponential growth of the algebraic degree can be followed by a linear growth until the maximum algebraic degree is reached; (b) the rate of the linear growth is proportional to the degree of the linear layer over $\mathbb{F}_{2^n}^t$. Besides providing a theoretical insight, our analysis is particularly relevant for assessing the security of cryptographic permutations designed to be competitive in applications like MPC, FHE, SNARKs, and STARKs, including permutations based on the Hades design strategy. We have verified our findings on small-scale instances and we have compared them against the currently best results in the literature, showing a substantial improvement of upper bounds on the algebraic degree in case of low-degree round functions with large S-Boxes.
2022
TOSC
Security of COFB against Chosen Ciphertext Attacks 📺
COFB is a lightweight Authenticated Encryption with Associated Data (AEAD) mode based on block ciphers. It was proposed in CHES 2017 and is the basis for GIFT-COFB, a finalist in the NIST lightweight standardization project. It comes with provable security results that guarantee its security up to the birthday bound in the nonce-respecting model. However, the designers offer multiple versions of the analysis with different details and the implications of attacks against the scheme are not discussed deeply. In this article, we look at a group of possible forgery and privacy attacks against COFB. We show that the security for both forgery and privacy is bounded by the number of forgery attempts. We show the existence of forgery and privacy attacks with success probability $q_d/2^{n/2}$, given $q_d$ forgery attempts. In particular, we show an attack with $2^{n/2}$ attempts using only a single known-plaintext encryption query against COFB. While these attacks do not contradict the claims made by the designers of GIFT-COFB, they show its limitations in terms of the number of forgery attempts. They also show that, while COFB generates a 128-bit tag, it behaves in a very similar manner to an AEAD scheme with 64-bit tag. As a result of independent interest, our analysis provides a contradiction to the main theorem of Journal of Cryptology volume 33, pages 703–741 (2020), which includes an improved security proof of COFB compared to the CHES 2017 version. Finally, we discuss the term $nq_d/2^{n/2}$ that appears in the security proof of GIFT-COFB and CHES 2017, showing why there is a security gap between the provable results and the attacks. We emphasize that the results in this article do not threaten the security of GIFT-COFB in the scope of the NIST lightweight cryptography requirements or the claims made by the designers in the specification document of the design.
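To make the bound concrete for the GIFT-COFB instantiation (block size $n = 128$, a standard parameter, not restated in the abstract):
\[
\frac{q_d}{2^{n/2}} \;=\; \frac{q_d}{2^{64}},
\]
so on the order of $2^{64}$ forgery attempts suffice, which is why the mode behaves like an AEAD scheme with a 64-bit tag even though the tag itself is 128 bits long.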
2022
TOSC
Towards Low-Latency Implementation of Linear Layers 📺
Lightweight cryptography features a small footprint and/or low computational complexity. Low-cost implementations of linear layers usually play an important role in lightweight cryptography. Although it has been shown by Boyar et al. that finding the optimal implementation of a linear layer is a Shortest Linear Program (SLP) problem and NP-hard, there exist a variety of heuristic methods to search for near-optimal solutions. This paper considers the low-latency criterion and focuses on the heuristic search of lightweight implementations of linear layers. Most prior approaches iteratively combine the inputs (of linear layers) to reach the outputs, which can be regarded as a forward search. To better meet the low-latency criterion, we propose a new framework of backward search that attempts to iteratively split every output (into an XORing of two bits) until all inputs appear. By bounding the time of splitting, the new framework can find a sub-optimal solution with a minimized depth of circuits. We apply our new search algorithm to linear layers of block ciphers and find many low-latency candidates for implementations. Notably, for the AES MixColumns matrix, we provide an implementation with 103 XOR gates and a depth of 3, which is among the best hardware implementations of the AES linear layer. Besides, we obtain better implementations in XOR gates for 54.3% of the 4256 Maximum Distance Separable (MDS) matrices proposed by Li et al. at FSE 2019. We also achieve an involutory MDS matrix (in $M_4(\mathrm{GL}(8, \mathbb{F}_2))$) whose implementation uses the lowest number (i.e., 86, saving 2 from the state-of-the-art result) of XORs with the minimum depth.
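The following Python sketch only illustrates the depth metric that motivates the backward search (it is not the paper's algorithm): an output bit that is the XOR of k input bits needs k-1 two-input XOR gates and at least ceil(log2 k) gate levels, which a balanced XOR tree attains; the matrix and names are toy choices.

    # Per-output depth lower bounds for a linear layer given as 0/1 rows.
    from math import ceil, log2

    def min_depths(matrix):
        """ceil(log2(row weight)) for each output, the depth of a balanced XOR tree."""
        return [ceil(log2(sum(row))) if sum(row) > 1 else 0 for row in matrix]

    # Toy 4x4 example: each output XORs three inputs, so the bound is depth 2.
    toy = [[1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1]]
    print(min_depths(toy))  # [2, 2, 2, 2]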
2022
TOSC
Quantum Period Finding is Compression Robust 📺
We study quantum period finding algorithms such as Simon and Shor (and its variant Ekerå-Håstad). For a periodic function f these algorithms produce – via some quantum embedding of f – a quantum superposition $\sum_x |x\rangle|f(x)\rangle$, which requires a certain amount of output qubits that represent $|f(x)\rangle$. We show that one can lower this amount to a single output qubit by hashing f down to a single bit in an oracle setting. Namely, we replace the embedding of f in quantum period finding circuits by oracle access to several embeddings of hashed versions of f. We show that on expectation this modification only doubles the required amount of quantum measurements, while significantly reducing the total number of qubits. For example, for Simon’s algorithm that finds periods in $f: \mathbb{F}_2^n \to \mathbb{F}_2^n$ our hashing technique reduces the required output qubits from n down to 1, and therefore the total amount of qubits from 2n to n + 1. We also show that Simon’s algorithm admits real world applications with only n + 1 qubits by giving a concrete realization of a hashed version of the cryptographic Even-Mansour construction. Moreover, for a variant of Simon’s algorithm on Even-Mansour that requires only classical queries to Even-Mansour we save a factor of (roughly) 4 in the qubits. Our oracle-based hashed version of the Ekerå-Håstad algorithm for factoring n-bit RSA reduces the required qubits from (3/2 + o(1))n down to (1/2 + o(1))n.
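Restating the abstract's qubit counts as equations (no new claims): for Simon's algorithm on $f:\mathbb{F}_2^n\to\mathbb{F}_2^n$ the hashing technique reduces
\[
\underbrace{n}_{\text{input}}+\underbrace{n}_{\text{output}} \;=\; 2n \;\;\longrightarrow\;\; n+1 \qquad\text{qubits},
\]
and for the Ekerå-Håstad factoring of $n$-bit RSA from $(3/2+o(1))\,n$ down to $(1/2+o(1))\,n$ qubits.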
2021
ASIACRYPT
Generic Framework for Key-Guessing Improvements 📺
We propose a general technique to improve the key-guessing step of several attacks on block ciphers. This is achieved by defining and studying some new properties of the associated S-boxes and by representing them as a special type of decision tree that is crucial for finding fine-grained guessing strategies for various attack vectors. We have proposed and implemented the algorithm that efficiently finds such trees, and use it to provide several applications of this approach, which include the best known attacks on NOEKEON, GIFT, and RECTANGLE.
2021
ASIACRYPT
A Systematic Approach and Analysis of Key Mismatch Attacks on Lattice-Based NIST Candidate KEMs 📺
Research on key mismatch attacks against lattice-based KEMs is an important part of the cryptographic assessment of the ongoing NIST standardization of post-quantum cryptography. There have been a number of these attacks to date. However, a unified method to evaluate these KEMs' resilience under key mismatch attacks is still missing. Since the key measure of efficiency is the number of queries needed to successfully mount such an attack, in this paper, we propose and develop a systematic approach to find lower bounds on the minimum average number of queries needed for such attacks. Our basic idea is to transform the problem of finding the lower bound of queries into finding an optimal binary recovery tree (BRT), where the computations of the lower bounds become essentially the computations of a certain Shannon entropy. The optimal BRT approach also enables us to understand why, for some lattice-based NIST candidate KEMs, there is a big gap between the theoretical bounds and bounds observed in practical attacks, in terms of the number of queries needed. This further leads us to propose a generic improvement method for these existing attacks, which is confirmed by our experiments. Moreover, our proposed method could be directly used to improve the side-channel attacks against CCA-secure NIST candidate KEMs.
2021
ASIACRYPT
DEFAULT: Cipher Level Resistance Against Differential Fault Attack 📺
Differential Fault Analysis (DFA) is a well known cryptanalytic technique that exploits faulty outputs of an encryption device. Despite its popularity and similarity with the classical Differential Analysis (DA), a thorough analysis explaining DFA from a designer's point of view is missing in the literature. To the best of our knowledge, no DFA immune cipher at an algorithmic level has been proposed so far. Furthermore, all known DFA countermeasures somehow depend on the device/protocol or on the implementation such as duplication/comparison. As all of these are outside the scope of the cipher designer, we focus on designing a primitive which can protect from DFA on its own. We present the first concept of cipher level DFA resistance which does not rely on any device/protocol related assumption, nor does it depend on any form of duplication. Our construction is simple, software/hardware friendly and DFA security scales up with the state size. It can be plugged before and/or after (almost) any symmetric key cipher and will ensure a non-trivial search complexity against DFA. One key component in our DFA protection layer is an SBox with linear structures. Such SBoxes have never been used in cipher design as they generally perform poorly against differential attacks. We argue that they in fact represent an interesting trade-off between good cryptographic properties and DFA resistance. As a proof of concept, we construct a DFA protecting layer, named DEFAULT-LAYER, as well as a full-fledged block cipher DEFAULT. Our solutions compare favourably to the state-of-the-art, offering advantages over the sophisticated duplication based solutions like impeccable circuits/CRAFT or infective countermeasures.
2021
ASIACRYPT
Massive Superpoly Recovery with Nested Monomial Predictions 📺
Determining the exact algebraic structure or some partial information of the superpoly for a given cube is a necessary step in the cube attack -- a generic cryptanalytic technique for symmetric-key primitives with some secret and public tweakable inputs. Currently, the division property based approach is the most powerful tool for exact superpoly recovery. However, as the algebraic normal form (ANF) of the targeted output bit gets increasingly complicated as the number of rounds grows, existing methods for superpoly recovery quickly hit their bottlenecks. For example, previous methods were stuck at rounds 842, 190, and 892 for Trivium, Grain-128AEAD, and Kreyvium, respectively. In this paper, we propose a new framework for recovering the exact ANFs of massive superpolies based on the monomial prediction technique (ASIACRYPT 2020, an alternative language for the division property). In this framework, the targeted output bit is first expressed as a polynomial of the bits of some intermediate states. For each term appearing in the polynomial, the monomial prediction technique is applied to determine its superpoly if the corresponding MILP model can be solved within a preset time limit. Terms unresolved within the time limit are further expanded as polynomials of the bits of some deeper intermediate states with symbolic computation, whose terms are again processed with monomial predictions. The above procedure is iterated until all terms are resolved. Finally, all the sub-superpolies are collected and assembled into the superpoly of the targeted bit. We apply the new framework to Trivium, Grain-128AEAD, and Kreyvium. As a result, the exact ANFs of the superpolies for 843-, 844- and 845-round Trivium, 191-round Grain-128AEAD and 894-round Kreyvium are recovered. Moreover, with the help of the Möbius transform, we present a novel key-recovery technique based on superpolies involving \textit{all} key bits by exploiting the sparse structures, which leads to the best key-recovery attacks on the targets considered.
2021
ASIACRYPT
Gentry-Wichs Is Tight: A Falsifiable Non-Adaptively Sound SNARG 📺
By the impossibility result of Gentry and Wichs, non-falsifiable assumptions are needed to construct (even non-zero-knowledge) adaptively sound succinct non-interactive arguments (SNARGs) for hard languages. It is important to understand whether this impossibility result is tight. While it is known how to construct adaptively sound non-succinct non-interactive arguments for $\mathsf{NP}$ from falsifiable assumptions, adaptively sound SNARGs for $\mathsf{NP}$ from non-falsifiable assumptions, and adaptively sound SNARGs for $\mathsf{P}$ from falsifiable assumptions, there are no known non-adaptively sound SNARGs for $\mathsf{NP}$ from falsifiable assumptions. We show that Gentry-Wichs is tight by constructing the latter. In addition, we prove it is non-adaptively knowledge-sound in the algebraic group model and Sub-ZK (i.e., zero-knowledge even if the CRS is subverted) under a non-falsifiable assumption.
2021
ASIACRYPT
Fine-tuning the ISO/IEC Standard LightMAC 📺
LightMAC, by Luykx et al., is a block cipher based message authentication code (MAC). The simplicity of design and low overhead allows it to have very compact implementations. As a result, it has been recently chosen as an ISO/IEC standard MAC for lightweight applications. LightMAC has been shown to achieve query-length independent security bound of $O(q^2/2^n)$ when instantiated with two independently keyed $n$-bit block ciphers, where $q$ denotes the number of MAC queries and the query-length is upper bounded by $(n-s)2^s$ bits for a fixed counter size $s$. In this paper, we aim to minimize the number of block cipher keys in LightMAC. First, we show that the original LightMAC instantiated with a single block cipher key, referred as 1k-LightMAC, achieves security bound of $O(q^2/2^n)$ while the query-length is at least $(n-s)$ bits and at most $(n-s)\min\{2^{n/4},2^s\}$ bits. Second, we show that a minor variant of 1k-LightMAC, dubbed as LightMAC-ds, achieves security bound of $O(q^2/2^n)$ while query-length is upper bounded by $(n-s)2^{s-1}$ bits. Of independent interest, our security proof of 1k-LightMAC employs a novel sampling approach, called the reset-sampling, as a subroutine within the H-coefficient proof setup.
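For a concrete, purely illustrative parameter choice $n=128$, $s=64$ (these values are not fixed in the abstract), the query-length limits stated above become
\[
(n-s)\,2^{s} = 64\cdot 2^{64}, \qquad (n-s)\,\min\{2^{n/4},2^{s}\} = 64\cdot 2^{32}, \qquad (n-s)\,2^{s-1} = 64\cdot 2^{63} \ \text{bits}
\]
for LightMAC, 1k-LightMAC and LightMAC-ds, respectively; here $2^{n/4}=2^{32}$ is the binding term in the 1k-LightMAC bound.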
2021
ASIACRYPT
Private Join and Compute from PIR with Default 📺
The private join and compute (PJC) functionality enables secure computation over data distributed across different databases, and is applicable to a wide range of applications, many of which address settings where the input databases are of significantly different sizes. We introduce the notion of private information retrieval (PIR) with default, which enables two-party PJC functionalities in a way that hides the size of the intersection of the two databases and incurs sublinear communication cost in the size of the bigger database. We provide two constructions for this functionality, one of which requires offline linear communication, which can be amortized across queries, and one that provides sublinear cost for each query but relies on more computationally expensive tools. We construct inner-product PJC, which has applications to ads conversion measurement and contact tracing, relying on an extension of PIR with default. We evaluate the efficiency of our constructions, which can enable $\mathbf{2^{8}}$ PIR with default lookups on a database of size $\mathbf{2^{25}}$ (or inner-product PJC on databases with such sizes) with the communication of $\mathbf{44}$MB, which costs less than $\mathbf{0.17}$c. for the client and $\mathbf{26.48}$c. for the server.
2021
ASIACRYPT
A Geometric Approach to Linear Cryptanalysis 📺
A new interpretation of linear cryptanalysis is proposed. This 'geometric approach' unifies all common variants of linear cryptanalysis, reveals links between various properties, and suggests additional generalizations. For example, new insights into invariants corresponding to non-real eigenvalues of correlation matrices and a generalization of the link between zero-correlation and integral attacks are obtained. Geometric intuition leads to a fixed-key motivation for the piling-up principle, which is illustrated by explaining and generalizing previous results relating invariants and linear approximations. Rank-one approximations are proposed to analyze cell-oriented ciphers, and used to resolve an open problem posed by Beierle, Canteaut and Leander at FSE 2019. In particular, it is shown how such approximations can be analyzed automatically using Riemannian optimization.
2021
ASIACRYPT
Gladius: LWR based efficient hybrid public key encryption with distributed decryption 📺
Standard hybrid encryption schemes based on the KEM-DEM framework are hard to implement efficiently in a distributed manner whilst maintaining the CCA security property of the scheme. This is because the DEM needs to be decrypted under the key encapsulated by the KEM, before the whole ciphertext is declared valid. In this paper we present a new variant of the KEM-DEM framework, closely related to Tag-KEMs, which sidesteps this issue. We then present a post-quantum KEM for this framework based on Learning-with-Rounding, which is designed specifically to have fast distributed decryption. Our combined construction of a hybrid encryption scheme with Learning-with-Rounding based KEM, called Gladius, is closely related to the NIST Round 3 candidate called Saber. Finally, we give a prototype distributed implementation that achieves a decapsulation time of 4.99 seconds for three parties.
2021
ASIACRYPT
Public Key Encryption with Flexible Pattern Matching 📺
Many interesting applications of pattern matching (e.g. deep-packet inspection or medical data analysis) target very sensitive data. In particular, spotting illegal behaviour in internet traffic conflicts with legitimate privacy requirements, which usually forces users (e.g. children, employees) to blindly trust an entity that fully decrypts their traffic in the name of security. The compromise between traffic analysis and privacy can be achieved through searchable encryption. However, as the traffic data is a stream and as the patterns to search are bound to evolve over time (e.g. new virus signatures), these applications require a kind of searchable encryption that provides more flexibility than the classical schemes. We indeed need to be able to search for patterns of variable sizes in an arbitrarily long stream that has potentially been encrypted prior to pattern identification. To stress these specificities, we call such a scheme a stream encryption supporting pattern matching. Recent papers use bilinear groups to provide public key constructions supporting these features. These solutions are lighter than more generic ones (e.g. fully homomorphic encryption) while retaining the adequate expressivity to support pattern matching without harming privacy more than needed. However, all existing solutions in this family have weaknesses with respect to efficiency and security that need to be addressed. Regarding efficiency, their public key has a size linear in the size of the alphabet, which can be quite large, in particular for applications that naturally process data as bytestrings. Regarding security, they all rely on a very strong computational assumption that is both interactive and specially tailored for this kind of scheme. In this paper, we tackle these problems by providing two new constructions using bilinear groups to support pattern matching on encrypted streams. Our first construction shares the same strong assumption but dramatically reduces the size of the public key by removing the dependency on the size of the alphabet, while nearly halving the size of the ciphertext. On a typical application with large patterns, our public key is two orders of magnitude smaller than that of previous schemes, which demonstrates the practicality of our approach. Our second construction manages to retain most of the good features of the first one while exclusively relying on a simple (static) variant of DDH, which solves the security problem of previous works.
2021
ASIACRYPT
Algebraic Attacks on Rasta and Dasta Using Low-Degree Equations 📺
Rasta and Dasta are two fully homomorphic encryption friendly symmetric-key primitives proposed at CRYPTO 2018 and ToSC 2020, respectively. We point out that the designers of Rasta and Dasta neglected an important property of the $\chi$ operation. Combined with the special structure of Rasta and Dasta, this property directly leads to significantly improved algebraic cryptanalysis. Especially, it enables us to theoretically break 2 out of 3 instances of full Agrasta, which is the aggressive version of Rasta with the block size only slightly larger than the security level in bits. We further reveal that Dasta is more vulnerable against our attacks than Rasta for its usage of a linear layer composed of an ever-changing bit permutation and a deterministic linear transform. Based on our cryptanalysis, the security margins of Dasta and Rasta parameterized with $(n,\kappa,r)\in\{(327,80,4),(1877,128,4),(3545,256,5)\}$ are reduced to only 1 round, where $n$, $\kappa$ and $r$ denote the block size, the claimed security level and the number of rounds, respectively. These parameters are of particular interest as the corresponding ANDdepth is the lowest among those that can be implemented in reasonable time and target the same claimed security level.
2021
ASIACRYPT
Generalized Channels from Limited Blockchain Scripts and Adaptor Signatures 📺
Decentralized and permissionless ledgers offer an inherently low transaction rate, as a result of their consensus protocol demanding the storage of each transaction on-chain. A prominent proposal to tackle this scalability issue is to utilize off-chain protocols, where parties only need to post a limited number of transactions on-chain. Existing solutions can roughly be categorized into: (i) application-specific channels (e.g., payment channels), offering strictly weaker functionality than the underlying blockchain; and (ii) state channels, supporting arbitrary smart contracts at the cost of being compatible only with the few blockchains having Turing-complete scripting languages (e.g., Ethereum). In this work, we introduce and formalize the notion of generalized channels allowing users to perform any operation supported by the underlying blockchain in an off-chain manner. Generalized channels thus extend the functionality of payment channels and relax the definition of state channels. We present a concrete construction compatible with any blockchain supporting transaction authorization, time-locks and a constant number of Boolean AND and OR operations -- requirements fulfilled by many (non-Turing-complete) blockchains including the popular Bitcoin. To this end, we leverage adaptor signatures -- a cryptographic primitive already used in the cryptocurrency literature but formalized as a standalone primitive in this work for the first time. We formally prove the security of our generalized channel construction in the Universal Composability framework. As an important practical contribution, our generalized channel construction outperforms the state-of-the-art payment channel construction, the Lightning Network, in efficiency. Concretely, it halves the off-chain communication complexity and reduces the on-chain footprint in case of disputes from linear to constant in the number of off-chain applications funded by the channel. Finally, we evaluate the practicality of our construction via a prototype implementation and discuss various applications including financially secured fair two-party computation.
2021
ASIACRYPT
Key Encapsulation Mechanism with Tight Enhanced Security in the Multi-User Setting: Impossibility Result and Optimal Tightness 📺
For a Key Encapsulation Mechanism (KEM) deployed in a multi-user setting, an adversary may corrupt some users to learn their secret keys, and obtain some encapsulated keys due to careless key management by users. To resist such attacks, we formalize Enhanced security against Chosen Plaintext/Ciphertext Attack (ECPA/ECCA), which requires pseudorandomness of the unrevealed encapsulated keys under uncorrupted users. This enhanced security for KEM serves well for the security of a class of Authenticated Key Exchange protocols built from KEM. In this paper, we study the achievability of tight ECPA and ECCA security for KEM in the multi-user setting, and present an impossibility result and an optimal security loss factor that can be obtained. The existing meta-reduction technique due to Bader et al. (EUROCRYPT 2016) rules out tight security for some KEMs, but many well-known KEMs, e.g., the Cramer-Shoup KEM (SIAM J. Comput. 2003) and the Kurosawa-Desmedt KEM (CRYPTO 2004), fall outside its reach. To solve this problem, we develop a new technical tool named the rank of a KEM and a new secret-key partitioning strategy for meta-reductions. With this new tool and new strategy, we prove that KEM schemes with polynomially-bounded ranks have no tight ECPA and ECCA security from non-interactive complexity assumptions, and the security loss is at least linear in the number n of users. This impossibility result covers many well-known KEMs, including the Cramer-Shoup KEM, the Kurosawa-Desmedt KEM and many others. Moreover, we show that the linear security loss is optimal by presenting concrete KEMs with security loss Θ(n). This is justified by a non-trivial security reduction with linear loss factor from ECPA/ECCA security to the traditional multi-challenge CPA/CCA security.
2021
ASIACRYPT
Balanced Non-Adjacent Forms 📺
Integers can be decomposed in multiple ways. The choice of a recoding technique is generally dictated by performance considerations. The usual metric for optimizing the decomposition is the Hamming weight. In this work, we consider a different metric and propose new modified forms (i.e., integer representations using signed digits) that satisfy minimality requirements under the new metric. Specifically, we introduce what we call balanced non-adjacent forms and prove that they feature a minimal Euclidean weight. We also present efficient algorithms to produce these new minimal forms. We analyze their asymptotic and exact distributions. We extend the definition to modular integers and show similar optimality results. Balanced non-adjacent forms find natural applications in fully homomorphic encryption as they optimally reduce the noise variance in LWE-type ciphertexts.
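For background only, here is a minimal Python sketch of the classical non-adjacent form, the standard signed-digit recoding that minimizes Hamming weight; the balanced NAF introduced in the paper is a different recoding that instead minimizes the Euclidean weight of the digit vector, and is not reproduced here.

    # Classical NAF: signed-digit recoding with minimal Hamming weight.
    def naf(n: int):
        """Return the NAF digits of n >= 0, least significant digit first."""
        digits = []
        while n > 0:
            if n & 1:
                d = 2 - (n % 4)   # d in {-1, +1}, chosen so that (n - d) is divisible by 4
                n -= d
            else:
                d = 0
            digits.append(d)
            n //= 2
        return digits

    # Usage: 7 = 8 - 1 has NAF digits [-1, 0, 0, 1]; the value is sum(d * 2^i).
    assert sum(d << i for i, d in enumerate(naf(7))) == 7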
2021
ASIACRYPT
Garbling, Stacked and Staggered: Faster k-out-of-n Garbled Function Evaluation 📺
Stacked Garbling (SGC) is a Garbled Circuit (GC) improvement that efficiently and securely evaluates programs with conditional branching. SGC reduces bandwidth consumption such that communication is proportional to the size of the single longest program execution path, rather than to the size of the entire program. Crucially, the parties expend increased computational effort compared to classic GC. Motivated by procuring a subset in a menu of computational services or tasks, we consider GC evaluation of k-out-of-n branches, whose indices are known (or eventually revealed) to the GC evaluator E. Our stack-and-stagger technique amortizes GC computation in this setting. We retain the communication advantage of SGC, while significantly improving computation and wall-clock time. Namely, each GC party garbles (or evaluates) the total of n branches, a significant improvement over the O(nk) garblings/evaluations needed by standard SGC. We present our construction as a garbling scheme. Our technique brings significant overall performance improvement in various settings, including those typically considered in the literature: e.g. on a 1Gbps LAN we evaluate 16-out-of-128 functions ~7.68x faster than standard stacked garbling.
2021
ASIACRYPT
PrORAM: Fast O(log n) Authenticated Shares ZK ORAM 📺
We construct a concretely efficient Zero Knowledge (ZK) Oblivious RAM (ORAM) for ZK Proof (ZKP) systems based on authenticated sharings of arithmetic values. It consumes $2\log n$ oblivious transfers (OTs) of length-$2\sigma$ secrets per access of an arithmetic value, for statistical security parameter $\sigma$ and array size $n$. This is an asymptotic and concrete improvement over previous best (concretely efficient) ZK ORAM BubbleRAM of Heath and Kolesnikov ([HK20a], CCS 2020), whose access cost is $\frac{1}{2}\log^2 n$ OTs of length-$2\sigma$ secrets. ZK ORAM is essential for proving statements that are best expressed as RAM programs, rather than Boolean or arithmetic circuits. Our construction is private-coin ZK. We integrate it with [HK20a]’s ZKP protocol and prove the resulting ZKP system secure. We implemented PrORAM in C++. Compared to the state-of-the-art BubbleRAM, our PrORAM is ~10x faster for arrays of size $2^{20}$ of 40-bit values.
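Plugging the array size used in the experiments into the two stated per-access costs (the measured ~10x speedup also reflects constant factors beyond the OT count): for $n = 2^{20}$,
\[
2\log n \;=\; 40 \qquad\text{versus}\qquad \tfrac{1}{2}\log^2 n \;=\; 200
\]
OTs of length-$2\sigma$ secrets per access for PrORAM and BubbleRAM, respectively.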
2021
ASIACRYPT
Redeeming Reset Indifferentiability and Applications to Post-Quantum Security 📺
Indifferentiability is used to analyze the security of constructions of idealized objects, such as random oracles or ideal ciphers. Reset indifferentiability is a strengthening of plain indifferentiability which is applicable in far more scenarios, but has largely been abandoned due to significant impossibility results and a lack of positive results. Our main results are: - Under \emph{weak} reset indifferentiability, ideal ciphers imply (fixed size) random oracles, and domain shrinkage is possible. We thus show reset indifferentiability is more useful than previously thought. - We lift our analysis to the quantum setting, showing that ideal ciphers imply random oracles under quantum indifferentiability. - Despite Shor's algorithm, we observe that generic groups are still meaningful quantumly, showing that they are quantumly (reset) indifferentiable from ideal ciphers; combined with the above, cryptographic groups yield post-quantum \emph{symmetric} key cryptography. In particular, we obtain a plausible post-quantum random oracle that is a subset-product followed by two modular reductions.
2021
ASIACRYPT
Bounded Collusion ABE for TMs from IBE 📺
We give an attribute-based encryption system for Turing Machines that is provably secure assuming only the existence of identity-based encryption (IBE) for large identity spaces. Currently, IBE is known to be realizable from most mainstream number theoretic assumptions that imply public key cryptography including factoring, the search Diffie-Hellman assumption, and the Learning with Errors assumption. Our core construction provides security against an attacker that makes a single key query for a machine $T$ before declaring a challenge string $w^*$ that is associated with the challenge ciphertext. We build our construction by leveraging a Garbled RAM construction of Gentry, Halevi, Raykova and Wichs; however, to prove security we need to introduce a new notion of security called iterated simulation security. We then show how to transform our core construction into one that is secure for an a-priori bounded number $q = q(\lambda)$ of key queries that can occur either before or after the challenge ciphertext. We do this by first showing how one can use a special type of non-committing encryption to transform a system that is secure only if a single key is chosen before the challenge ciphertext is declared into one where the single key can be requested either before or after the challenge ciphertext. We give a simple construction of this non-committing encryption from public key encryption in the Random Oracle Model. Next, one can apply standard combinatorial techniques to lift from single-key adaptive security to $q$-key adaptive security.
2021
ASIACRYPT
Adaptive Security via Deletion in Attribute-Based Encryption: Solutions from Search Assumptions in Bilinear Groups 📺
One of the primary research challenges in Attribute-Based Encryption (ABE) is constructing and proving cryptosystems that are adaptively secure. To date the main paradigm for achieving adaptive security in ABE is dual system encryption. However, almost all such solutions in bilinear groups rely on (variants of) either the subgroup decision problem over composite order groups or the decision linear assumption. Both of these assumptions are decisional rather than search assumptions and the target of the assumption is a source or bilinear group element. This is in contrast to earlier selectively secure ABE systems which can be proven secure from either the decisional or search Bilinear Diffie-Hellman assumption. In this work we make progress on closing this gap by giving a new ABE construction for the subset functionality and proving security under the Search Bilinear Diffie-Hellman assumption. We first provide a framework for proving adaptive security in Attribute-Based Encryption systems. We introduce a concept of ABE with deletable attributes where any party can take a ciphertext encrypted under the attribute string $x \in \{0, 1\}^n$ and modify it into a ciphertext encrypted under any string $x' \in \{0, 1, \bot\}^n$ where $x'$ is derived by replacing any bits of $x$ with $\bot$ symbols (i.e., ``deleting'' attributes of $x$). The semantics of the system are that any private key for a circuit $C$ can be used to decrypt a ciphertext associated with $x'$ if none of the input bits read by circuit $C$ are $\bot$ symbols and $C(x') = 1$. We show a pathway for combining ABE with deletable attributes with constrained pseudorandom functions to obtain adaptively secure ABE building upon the recent work of [Tsabary19]. Our new ABE system will be adaptively secure and be a ciphertext-policy ABE that supports the same functionality as the underlying constrained PRF as long as the PRF is ``deletion conforming''. Here we also provide a simple constrained PRF construction that gives subset functionality. Our approach enables us to access a broader array of Attribute-Based Encryption schemes that support deletion of attributes. For example, we show that both the [GPSW06] and [Boyen13] ABE schemes can trivially handle a deletion operation. And, by using a hardcore-bit variant of the GPSW scheme we obtain an adaptively secure ABE scheme under the Search Bilinear Diffie-Hellman assumption in addition to pseudorandom functions in $\mathsf{NC}^1$. This gives the first adaptively secure ABE from a search assumption as all prior work relied on decision assumptions over source group elements.
2021
ASIACRYPT
Homomorphic Secret Sharing for Multipartite and General Adversary Structures Supporting Parallel Evaluation of Low-Degree Polynomials 📺
Homomorphic secret sharing (HSS) for a function $f$ allows input parties to distribute shares for their private inputs and then locally compute output shares from which the value of $f$ is recovered. HSS can be directly used to obtain a two-round multiparty computation (MPC) protocol for possibly non-threshold adversary structures whose communication complexity is independent of the size of $f$. In this paper, we propose two constructions of HSS schemes supporting parallel evaluation of a single low-degree polynomial and tolerating multipartite and general adversary structures. Our multipartite scheme tolerates a wider class of adversary structures than the previous multipartite one in the particular case of a single evaluation and has exponentially smaller share size than the general construction. While restricting the range of tolerable adversary structures (but still applicable to non-threshold ones), our schemes perform $\ell$ parallel evaluations with communication complexity approximately $\ell/\log\ell$ times smaller than simply using $\ell$ independent instances. We also formalize two classes of adversary structures taking into account real-world situations to which the previous threshold schemes are inapplicable. Our schemes then perform $O(m)$ parallel evaluations with almost the same communication cost as a single evaluation, where $m$ is the number of parties.
2021
ASIACRYPT
Onion Routing with Replies 📺
Onion routing (OR) protocols are a crucial tool for providing anonymous internet communication. An OR protocol enables a user to anonymously send requests to a server. A fundamental problem of OR protocols is how to deal with replies: ideally, we would want the server to be able to send a reply back to the anonymous user without knowing or disclosing the user's identity. Existing OR protocols do allow for such replies, but do not provably protect the payload (i.e., message) of replies against manipulation. Kuhn et al. (IEEE S&P 2020) show that such manipulations can in fact be leveraged to break anonymity of the whole protocol. In this work, we close this gap and provide the first framework and protocols for OR with protected replies. We define security in the sense of an ideal functionality in the universal composability model, and provide corresponding (less complex) game-based security notions for the individual properties. We also provide two secure instantiations of our framework: one based on updatable encryption, and one based on succinct non-interactive arguments (SNARGs) to authenticate payloads both in requests and replies. In both cases, our central technical handle is an implicit authentication of the transmitted payload data, as opposed to an explicit, but insufficient authentication (with MACs) in previous solutions. Our results exhibit a new and surprising application of updatable encryption outside of long-term data storage.
2021
ASIACRYPT
Partial Key Exposure Attack on Short Secret Exponent CRT-RSA 📺
Let $(N,e)$ be an RSA public key, where $N=pq$ is the product of equal bitsize primes $p,q$. Let $d_p, d_q$ be the corresponding secret CRT-RSA exponents. Using a Coppersmith-type attack, Takayasu, Lu and Peng (TLP) recently showed that one obtains the factorization of $N$ in polynomial time, provided that $d_p, d_q \leq N^{0.122}$. Building on the TLP attack, we show the first {\em Partial Key Exposure} attack on short secret exponent CRT-RSA. Namely, let $N^{0.122} \leq d_p, d_q \leq N^{0.5}$. Then we show that a constant known fraction of the least significant bits (LSBs) of both $d_p, d_q$ suffices to factor $N$ in polynomial time. Naturally, the larger $d_p,d_q$, the more LSBs are required. E.g. if $d_p, d_q$ are of size $N^{0.13}$, then we have to know roughly a $\frac 1 5$-fraction of their LSBs, whereas for $d_p, d_q$ of size $N^{0.2}$ we require already knowledge of a $\frac 2 3$-LSB fraction. Eventually, if $d_p, d_q$ are of full size $N^{0.5}$, we have to know all of their bits. Notice that as a side-product of our result we obtain a heuristic deterministic polynomial time factorization algorithm on input $(N,e,d_p,d_q)$.
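For orientation, the CRT-RSA key relations that Coppersmith-type attacks of this kind exploit are the standard ones (textbook background, not a result of this paper): $e\,d_p \equiv 1 \pmod{p-1}$ and $e\,d_q \equiv 1 \pmod{q-1}$, i.e. $e\,d_p = 1 + k_p(p-1)$ and $e\,d_q = 1 + k_q(q-1)$ for some positive integers $k_p, k_q$. Known least significant bits of $d_p$ and $d_q$ constrain these equations modulo a power of two, which is what a lattice-based (Coppersmith-type) analysis can then leverage.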
2021
ASIACRYPT
Beyond Software Watermarking: Traitor-Tracing for Pseudorandom Functions 📺
Software watermarking schemes allow a user to embed an identifier into a piece of code such that the resulting program is nearly functionally-equivalent to the original program, and yet, it is difficult to remove the identifier without destroying the functionality of the program. Such schemes are often considered for proving software ownership or for digital rights management. Existing constructions of watermarking have focused primarily on watermarking pseudorandom functions (PRFs). In this work, we revisit the definitional foundations of watermarking, and begin by highlighting a major flaw in existing security notions. Existing security notions for watermarking only require that the identifier be successfully extracted from programs that preserve the exact input/output behavior of the original program. In the context of PRFs, this means that an adversary that constructs a program which computes a quarter of the output bits of the PRF or that is able to distinguish the outputs of the PRF from random is considered to be outside the threat model. However, in any application (e.g., watermarking a decryption device or an authentication token) that relies on PRF security, an adversary that manages to predict a quarter of the bits or to distinguish the PRF outputs from random would be considered to have defeated the scheme. Thus, existing watermarking schemes provide very little security guarantee against realistic adversaries. None of the existing constructions of watermarkable PRFs would be able to extract the identifier from a program that only outputs a quarter of the bits of the PRF or one that perfectly distinguishes. To address the shortcomings in existing watermarkable PRF definitions, we introduce a new primitive called a traceable PRF. Our definitions are inspired by similar definitions from public-key traitor tracing, and aim to capture a very robust set of adversaries: namely, any adversary that produces a useful distinguisher (i.e., a program that can break PRF security) can be traced to a specific identifier. We provide a general framework for constructing traceable PRFs via an intermediate primitive called private linear constrained PRFs. Finally, we show how to construct traceable PRFs from a similar set of assumptions previously used to realize software watermarking. Namely, we obtain a single-key traceable PRF from standard lattice assumptions and a fully collusion-resistant traceable PRF from indistinguishability obfuscation (together with injective one-way functions).
2021
ASIACRYPT
Chain Reductions for Multi-Signatures and the HBMS Scheme 📺
Existing proofs for Discrete Log (DL) based multi-signature schemes give only weak guarantees if the schemes are implemented, as they are in practice, in 256-bit groups. This is because the underlying reductions, which are mostly in the standard model and from DL, are loose. We show that relaxing either the model or the assumption suffices to obtain tight reductions. Namely we give (1) tight proofs from DL in the Algebraic Group Model, and (2) tight, standard-model proofs from well-founded assumptions other than DL. We first do this for the classical 3-round schemes, namely BN and MuSig. Then we give a new 2-round multi-signature scheme, HBMS, as efficient as prior ones, for which we do the same. These multiple paths to security for a single scheme are made possible by a framework of chain reductions, in which a reduction is broken into a chain of sub-reductions involving intermediate problems. Overall our results improve the security guarantees for DL-based multi-signature schemes in the groups in which they are implemented in practice.
2021
ASIACRYPT
Transciphering Framework for Approximate Homomorphic Encryption 📺
Homomorphic encryption (HE) is a promising cryptographic primitive that enables computation over encrypted data, with a variety of applications including medical, genomic, and financial tasks. In Asiacrypt 2017, Cheon et al. proposed the CKKS scheme to efficiently support approximate computation over encrypted data of real numbers. HE schemes, including CKKS, nevertheless still suffer from slow encryption speed and large ciphertext expansion compared to symmetric cryptography. In this paper, we propose a novel hybrid framework, dubbed the RtF (Real-to-Finite-field) framework, that supports CKKS. The main idea behind this construction is to combine the CKKS and the FV homomorphic encryption schemes, and use a stream cipher using modular arithmetic in between. As a result, real numbers can be encrypted without significant ciphertext expansion or computational overhead on the client side. As an instantiation of the stream cipher in our framework, we propose a new HE-friendly cipher, dubbed HERA, and extensively analyze its security and efficiency. The main feature of HERA is that it uses a simple randomized key schedule. Compared to recent HE-friendly ciphers such as FLIP and Rasta using randomized linear layers, HERA requires a smaller number of random bits. For this reason, HERA significantly outperforms existing HE-friendly ciphers on both the client and the server sides. With the RtF transciphering framework combined with HERA at the 128-bit security level, we achieve a small ciphertext expansion ratio in the range of 1.23 to 1.54, which is at least 23 times smaller than using (symmetric) CKKS-only, assuming the same precision bits and the same level of ciphertexts at the end of the framework. We also achieve 1.6 $\mu$s and 21.7 MB/s for latency and throughput on the client side, which are 9085 times and 17.8 times faster than the CKKS-only environment, respectively.
2021
ASIACRYPT
A New Variant of Unbalanced Oil and Vinegar Using Quotient Ring: QR-UOV 📺
The unbalanced oil and vinegar signature scheme (UOV) is a multivariate signature scheme that has essentially not been broken for over 20 years. However, it requires the use of a large public key; thus, various methods have been proposed to reduce its size. In this paper, we propose a new variant of UOV with a public key represented by block matrices whose components correspond to an element of a quotient ring. We discuss how whether or not the quotient ring is a field affects the security of our proposed scheme. Furthermore, we discuss its security against currently known and newly possible attacks and propose parameters for our scheme. We demonstrate that our proposed scheme can achieve a small public key size without significantly increasing the signature size compared with other UOV variants. For example, the public key size of our proposed scheme is 85.8 KB for NIST's Post-Quantum Cryptography Project (security level 3), whereas that of compressed Rainbow is 252.3 KB, where Rainbow is a variant of UOV and is one of the third-round finalists of the NIST PQC project.
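The block-matrix representation above presumably relies on the standard correspondence between an element of a quotient ring $\mathbb{F}_q[x]/(f)$ and the matrix of multiplication by that element; the following Python sketch computes such a multiplication matrix for a toy example (illustrative background only, not the QR-UOV key-generation algorithm):

# Multiplication matrix of a(x) in F_q[x]/(f(x)): column j holds the
# coefficient vector of a(x) * x^j mod f(x). Polynomials are coefficient
# lists from lowest to highest degree; f must be monic.
def mul_matrix(a, f, q):
    n = len(f) - 1  # deg f
    def mul_mod(u, v):
        prod = [0] * (len(u) + len(v) - 1)
        for i, ui in enumerate(u):
            for j, vj in enumerate(v):
                prod[i + j] = (prod[i + j] + ui * vj) % q
        for i in range(len(prod) - 1, n - 1, -1):  # reduce modulo the monic f
            c = prod[i]
            if c:
                for j in range(n + 1):
                    prod[i - n + j] = (prod[i - n + j] - c * f[j]) % q
        return [prod[i] if i < len(prod) else 0 for i in range(n)]
    cols = [mul_mod(a, [0] * j + [1]) for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# Example over F_7 with f(x) = x^3 + x + 1 and a(x) = 2 + 3x.
print(mul_matrix([2, 3, 0], [1, 1, 0, 1], 7))

Storing only the ring element (here three coefficients) determines the whole $3 \times 3$ block, which is the kind of compression a quotient-ring representation makes possible.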
2021
ASIACRYPT
FAST: Secure and High Performance Format-Preserving Encryption and Tokenization 📺
We propose a new construction for format-preserving encryption. Our design provides the flexibility for use in format-preserving encryption (FPE) and for static table-driven tokenization. Our algorithm is a substitution-permutation network based on random Sboxes. Using pseudorandom generators and pseudorandom functions, we prove strong adaptive security based on the assumption that our core design is a super-pseudorandom permutation. We obtain empirical parameters to support this assumption. We suggest parameters for quantum security. Our design accommodates very small domains, with a radix $a$ from 4 to the Unicode alphabet size and a block length $l$ starting at 2. The number of Sbox evaluations per encryption is asymptotically $l^{\frac32}$, which is also the number of bytes we need to generate using AES in CTR mode for each tweak setup. For instance, we tokenize 10 decimal digits using 29 (parallel) AES computations to be done only once, when the tweak changes.
2021
ASIACRYPT
Bit Security as Computational Cost for Winning Games with High Probability 📺
We introduce a novel framework for quantifying the bit security of security games. Our notion is defined with an operational meaning that a $\lambda$-bit secure game requires a total computational cost of $2^\lambda$ for winning the game with high probability, e.g., 0.99. We define the bit security both for search-type and decision-type games. Since we identify that these two types of games should be structurally different, we treat them differently but define the bit security using the unified framework to guarantee the same operational interpretation. The key novelty of our notion of bit security is to employ two types of adversaries: inner adversary and outer adversary. While the inner adversary plays a ``usual'' security game, the outer adversary invokes the inner adversary many times to amplify the winning probability for the security game. We find from our framework that the bit security for decision games can be characterized by the information measure called the \emph{R\'enyi divergence} of order $1/2$ of the inner adversary. The conventional ``advantage,'' defined as the probability of winning the game, characterizes our bit security for search-type games. We present several security reductions in our framework for justifying our notion of bit security. Many of our results quantitatively match the results for the bit security notion proposed by Micciancio and Walter in 2018. In this sense, our bit security strengthens the previous notion of bit security by adding an operational meaning. A difference from their work is that, in our framework, the Goldreich-Levin theorem gives an optimal reduction only for ``balanced'' adversaries who output binary values in a balanced manner.
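For reference, the R\'enyi divergence of order $1/2$ used to characterize decision games above is the standard quantity (recalled here for convenience, not defined by this paper)
\[
  D_{1/2}(P \,\|\, Q) \;=\; -2 \log \sum_{x} \sqrt{P(x)\,Q(x)},
\]
for distributions $P$ and $Q$ over a common discrete support.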
2021
ASIACRYPT
Fault-Injection Attacks against NIST’s Post-Quantum Cryptography Round 3 KEM Candidates 📺
We investigate __all__ NIST PQC Round 3 KEM candidates from the viewpoint of fault-injection attacks: Classic McEliece, Kyber, NTRU, Saber, BIKE, FrodoKEM, HQC, NTRU Prime, and SIKE. All KEM schemes use variants of the Fujisaki-Okamoto transformation, so the equality test with re-encryption in decapsulation is critical. We survey effective key-recovery attacks when we can skip the equality test. We find existing key-recovery attacks against Kyber, NTRU, Saber, FrodoKEM, HQC, one of the two KEM schemes in NTRU Prime, and SIKE. We propose a new key-recovery attack against the other KEM scheme in NTRU Prime. We also report an attack against BIKE that leads to leakage of information on secret keys. The open-source pqm4 library contains all KEM schemes except Classic McEliece and HQC. We show that injecting a single instruction-skipping fault in the decapsulation processes leads to skipping the equality test __virtually__ for Kyber, NTRU, Saber, BIKE, and SIKE. We also report experimental attacks against them. We also report that the pqm4 implementation of NTRU Prime freely allows chosen-ciphertext attacks and that the timing side-channel of FrodoKEM reported by Guo, Johansson, and Nilsson (CRYPTO 2020) remains, while there are no such bugs in the corresponding NIST PQC Round 3 submissions.
2021
ASIACRYPT
Toward a Fully Secure Authenticated Encryption Scheme From a Pseudorandom Permutation 📺
In this paper, we propose a new block cipher-based authenticated encryption scheme, dubbed the Synthetic Counter with Masking (SCM) mode. SCM follows the NSIV paradigm proposed by Peyrin and Seurin (CRYPTO 2016), where a keyed hash function accepts a nonce N with associated data and a message, yielding an authentication tag T, and then the message is encrypted by a counter-like mode using both T and N. Here we move one step further by encrypting nonces; in the encryption part, the inputs to the block cipher are determined by T, counters, and an encrypted nonce, and all its outputs are also masked by an (additional) encrypted nonce, yielding keystream blocks. As a result, we obtain, for the first time, a block cipher-based authenticated encryption scheme of rate 1/2 that provides n-bit security with respect to the query complexity (ignoring the influence of message length) in the nonce-respecting setting, and at the same time guarantees graceful security degradation in the faulty nonce model, when the underlying n-bit block cipher is modeled as a secure pseudorandom permutation. Seen as a slight variant of GCM-SIV, SCM is also parallelizable and inverse-free, and its performance is still comparable to GCM-SIV.
2021
ASIACRYPT
Strong and Tight Security Guarantees against Integral Distinguishers 📺
Integral attacks belong to the classical attack vectors against block ciphers. However, providing arguments that a given cipher is resistant against those attacks is notoriously difficult. In this paper, based solely on the assumption of independent round keys, we develop significantly stronger arguments than what was possible before: our main result is that we show how to argue that the sum of ciphertexts over any possible subset of plaintexts is key-dependent, i.e., the non-existence of integral distinguishers.
2021
ASIACRYPT
Secure and Efficient Software Masking on Superscalar Pipelined Processors 📺
Physical side-channel attacks like power analysis pose a serious threat to cryptographic devices in real-world applications. Consequently, devices implement algorithmic countermeasures like masking. In the past, works on the design and verification of masked software implementations have mostly focused on simple microprocessors that find usage on smart cards. However, many other applications such as in the automotive industry require side-channel protected cryptographic computations on much more powerful CPUs. In such situations, the security loss due to complex architectural side-effects, the corresponding performance degradation, as well as discussions of suitable probing models and verification techniques are still vastly unexplored research questions. We answer these questions and perform a comprehensive analysis of more complex processor architectures in the context of masking-related side effects. First, we analyze the RISC-V SweRV core, featuring a 9-stage pipeline, two execution units, and load/store buffers, and point out a significant gap between security in a simple software probing model and practical security on such CPUs. More concretely, we show that architectural side effects of complex CPU architectures can significantly reduce the protection order of masked software, both via formal analysis in the hardware probing model, as well as empirically via gate-level timing simulations. We then discuss the options of fixing these problems in hardware or leaving them as constraints to software. Based on these software constraints, we formulate general rules for the design of masked software on more complex CPUs. Finally, we compare several implementation strategies for masking schemes and show in a case study that designing secure masked software for complex CPUs is still possible with overhead as low as 13%.
2021
ASIACRYPT
NTRU Fatigue: How Stretched is Overstretched? 📺
Until recently, lattice reduction attacks on NTRU lattices were thought to behave similarly to those on (ring-)LWE lattices with the same parameters. However, several works (Albrecht-Bai-Ducas 2016, Kirchner-Fouque 2017) showed a significant gap for large moduli $q$, the so-called overstretched regime of NTRU. With the NTRU scheme being a finalist in the NIST PQC competition, it is important to understand ---both asymptotically and concretely--- where the fatigue point lies exactly, i.e. at which $q$ the overstretched regime begins. Unfortunately the analysis by Kirchner and Fouque is based on an impossibility argument, which only results in an asymptotic upper bound on the fatigue point. It also does not really {\em explain} how lattice reduction actually recovers secret-key information. We propose a new analysis that asymptotically improves on that of Kirchner and Fouque, narrowing down the fatigue point for ternary NTRU from $q \leq n^{2.783+o(1)}$ to $q=n^{2.484+o(1)}$, and finally explaining the mechanism behind this phenomenon. We push this analysis further to a concrete one, settling the fatigue point at $q \approx 0.004 \cdot n^{2.484}$, and allowing precise hardness predictions in the overstretched regime. These predictions are backed by extensive experiments.
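A quick back-of-the-envelope evaluation of the concrete estimate $q \approx 0.004 \cdot n^{2.484}$ quoted above (purely illustrative; the paper's own predictions and experiments are the authoritative source):

# Estimated fatigue point for ternary NTRU, per the concrete formula in the abstract.
def fatigue_point(n):
    return 0.004 * n ** 2.484

for n in (256, 509, 1024):
    print(n, round(fatigue_point(n)))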
2021
ASIACRYPT
On Time-Lock Cryptographic Assumptions in Abelian Hidden-Order Groups 📺
In this paper we study cryptographic finite abelian groups of unknown order and hardness assumptions in these groups. Abelian groups necessitate multiple group generators, which may be chosen at random. We formalize this setting and hardness assumptions therein. Furthermore, we generalize the algebraic group model and strong algebraic group model from cyclic groups to arbitrary finite abelian groups of unknown order. Building on these formalizations, we present techniques to deal with this new setting, and prove new reductions. These results are relevant for class groups of imaginary quadratic number fields and time-lock cryptography built upon them.
2021
ASIACRYPT
Quantum Linearization Attacks 📺
Recent works have shown that quantum period-finding can be used to break many popular constructions (some block ciphers such as Even-Mansour, multiple MACs and AEs...) in the superposition query model. So far, all the constructions broken exhibited a strong algebraic structure, which makes it possible to craft a periodic function of a single input block. Recovering the secret period allows one to recover a key, distinguish, or break the confidentiality or authenticity of these modes. In this paper, we introduce the \emph{quantum linearization attack}, a new way of using Simon's algorithm to target MACs in the superposition query model. Specifically, we use inputs of multiple blocks as an interface to a function hiding a linear structure. Recovering this structure allows one to perform forgeries. We also present some variants of this attack that use other quantum algorithms, which are much less common in quantum symmetric cryptanalysis: Deutsch's, Bernstein-Vazirani's, and Shor's. To the best of our knowledge, this is the first time these algorithms have been used in quantum forgery or key-recovery attacks. Our attack breaks many parallelizable MACs such as {\sf LightMac}, {\sf PMAC}, and numerous variants with (classical) beyond-birthday-bound security ({\sf LightMAC+}, {\sf PMAC+}) or using tweakable block ciphers ({\sf ZMAC}). More generally, it shows that constructing parallelizable quantum-secure PRFs might be a challenging task.
2021
ASIACRYPT
Giving an Adversary Guarantees (Or: How to Model Designated Verifier Signatures in a Composable Framework) 📺
When defining a security notion, one typically specifies what dishonest parties cannot achieve. For example, communication is confidential if a third party cannot learn anything about the messages being transmitted, and it is authentic if a third party cannot impersonate the real (honest) sender. For certain applications, however, security crucially relies on giving dishonest parties certain capabilities. As an example, in Designated Verifier Signature (DVS) schemes, one captures that only the designated verifier can be convinced of the authenticity of a message by guaranteeing that any dishonest party can forge signatures which look indistinguishable (to a third party) from original ones created by the sender. However, composable frameworks cannot typically model such guarantees as they are only designed to bound what a dishonest party can do. In this paper we show how to model such guarantees---that dishonest parties must have some capability---in the Constructive Cryptography (CC) framework (Maurer and Renner, ICS 2011). More concretely, we give the first composable security definitions for Multi-Designated Verifier Signature (MDVS) schemes---a generalization of DVS schemes. The ideal world is defined as the intersection of two worlds. The first captures authenticity in the usual way. The second provides the guarantee that a dishonest party can forge signatures. By taking the intersection we have an ideal world with the desired properties. We also compare our composable definitions to existing security notions for MDVS schemes from the literature. We find that only recently, 23 years after the introduction of MDVS schemes, sufficiently strong security notions were introduced capturing the security of MDVS schemes (Damg{\r a}rd et al., TCC 2020). As we prove, however, these notions are still strictly stronger than necessary.
2021
ASIACRYPT
Tardigrade: An Atomic Broadcast Protocol for Arbitrary Network Conditions 📺
We study the problem of \emph{atomic broadcast}---the underlying problem addressed by blockchain protocols---in the presence of a malicious adversary who corrupts some fraction of the $n$ parties running the protocol. Existing protocols are either robust for any number of corruptions in a \emph{synchronous} network (where messages are delivered within some known time~$\Delta$) but fail if the synchrony assumption is violated, or tolerate fewer than $n/3$ corrupted parties in an \emph{asynchronous} network (where messages can be delayed arbitrarily) and cannot tolerate more corruptions even if the network happens to be well behaved. We design an atomic broadcast protocol (TARDIGRADE) that, for any $t_s \geq t_a$ with $2t_s + t_a < n$, provides security against $t_s$ corrupted parties if the network is synchronous, while remaining secure when $t_a$ parties are corrupted even in an asynchronous network. We show that TARDIGRADE achieves optimal tradeoffs between $t_s$ and~$t_a$. Finally, we show a second protocol (UPGRADE) with similar (but slightly weaker) guarantees that achieves per-transaction communication complexity linear in~$n$.
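A tiny sketch of the threshold condition $2t_s + t_a < n$ with $t_s \geq t_a$ stated above; the helper below is hypothetical and only enumerates admissible corruption pairs for intuition:

def admissible_thresholds(n):
    # All (t_s, t_a) with t_s >= t_a and 2*t_s + t_a < n.
    return [(ts, ta) for ts in range(n) for ta in range(ts + 1) if 2 * ts + ta < n]

# For n = 10: (4, 1) is admissible, while (4, 2) is not since 2*4 + 2 = 10.
print(admissible_thresholds(10))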
2021
ASIACRYPT
Revisiting Homomorphic Encryption Schemes for Finite Fields 📺
The Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV) schemes are the two main homomorphic encryption (HE) schemes to perform exact computations over finite fields and integers. Although the schemes work with the same plaintext space, there are significant differences in their noise management, algorithms for the core homomorphic multiplication operation, message encoding, and practical usability. The main goal of our work is to revisit both schemes, focusing on closing the gap between the schemes by improving their noise growth, computational complexity of the core algorithms, and usability. The other goal of our work is to provide both theoretical and experimental performance comparison of BGV and BFV. More precisely, we propose an improved variant of BFV where the encryption operation is modified to significantly reduce the noise growth, which makes the BFV noise growth somewhat better than for BGV (in contrast to prior results showing that BGV has smaller noise growth for larger plaintext moduli). We also modify the homomorphic multiplication procedure, which is the main bottleneck in BFV, to reduce its algorithmic complexity. Our work introduces several other novel optimizations, including lazy scaling in BFV homomorphic multiplication and an improved BFV decryption procedure in the Residue Number System (RNS) representation. We also develop a usable variant of BGV as a more efficient alternative to BFV for common practical scenarios. We implement our improved variants of BFV and BGV in PALISADE and evaluate their experimental performance for several benchmark computations. The experimental results suggest that our BGV implementation is faster for intermediate and large plaintext moduli, which are often used in practical scenarios with ciphertext packing, while our BFV implementation is faster for small plaintext moduli.
2021
ASIACRYPT
Quantum Computationally Predicate-Binding Commitments with Application in Quantum Zero-Knowledge Arguments for NP 📺
A quantum bit commitment scheme is to realize bit (rather than qubit) commitment by exploiting quantum communication and quantum computation. In this work, we study the binding property of the quantum string commitment scheme obtained by composing a generic quantum perfectly(resp. statistically)-hiding computationally-binding bit commitment scheme (which can be realized based on quantum-secure one-way permutations(resp. functions)) in parallel. We show that the resulting scheme satisfies a stronger quantum computational binding property, which we will call predicate-binding, than the trivial honest-binding. Intuitively and very roughly, the predicate-binding property guarantees that given any inconsistent predicate pair over a set of strings (i.e. no strings in this set can satisfy both predicates), if a (claimed) quantum commitment can be opened so that the revealed string satisfies one predicate with certainty, then the same commitment cannot be opened so that the revealed string satisfies the other predicate (except for a negligible probability). As an application, we plug a generic quantum perfectly(resp. statistically)-hiding computationally-binding bit commitment scheme in Blum's zero-knowledge protocol for the NP-complete language Hamiltonian Cycle. This will give rise to the first quantum perfect(resp. statistical) zero-knowledge argument system (with soundness error 1/2) for all NP languages based solely on quantum-secure one-way permutations(resp. functions). The quantum computational soundness of this system will follow immediately from the quantum computational predicate-binding property of commitments.
2021
ASIACRYPT
(Compact) Adaptively Secure FE for Attribute-Weighted Sums from k-Lin 📺
This paper presents the first adaptively simulation secure functional encryption (FE) schemes for attribute-weighted sums. In such an FE scheme, encryption takes as input N pairs of attributes {(x_i, z_i )}_{i \in [N]} for some N \in \mathbb{N} where the attributes {x_i}_{i \in [N]} are public while the attributes {z_i}_{i \in [N]} are private. The indices i \in [N] are referred to as the slots. A secret key corresponds to some weight function f, and decryption recovers the weighted sum \sum_{i \in [N]} f(x_i)z_i. This is an important functionality with a wide range of potential real-life applications. In the proposed FE schemes attributes are viewed as vectors and weight functions are arithmetic branching programs (ABP). We present two schemes with varying parameters and levels of adaptive security. (a) We first present a one-slot scheme that achieves adaptive security in the simulation-based security model against a bounded number of ciphertext queries and an arbitrary polynomial number of secret key queries both before and after the ciphertext queries. This is the best possible level of security one can achieve in the adaptive simulation-based framework. From the relations between the simulation-based and indistinguishability-based security frameworks for FE, it follows that the proposed FE scheme also achieves indistinguishability-based adaptive security against an a-priori unbounded number of ciphertext queries and an arbitrary polynomial number of secret key queries both before and after the ciphertext queries. Moreover, the scheme enjoys compact ciphertexts that do not grow with the number of appearances of the attributes within the weight functions. (b) Next, bootstrapping from the one-slot scheme, we present an unbounded-slot scheme that achieves simulation-based adaptive security against a bounded number of ciphertext and pre-ciphertext secret key queries while supporting an a-priori unbounded number of post-ciphertext secret key queries. The scheme achieves public parameters and secret key sizes independent of the number of slots N and a secret key can decrypt a ciphertext for any a-priori unbounded N. Further, just like the one-slot scheme, this scheme also has the ciphertext size independent of the number of appearances of the attributes within the weight functions. However, all the parameters of the scheme, namely, the master public key, ciphertexts, and secret keys scale linearly with the bound on the number of pre-ciphertext secret key queries. Our schemes are built upon asymmetric bilinear groups of prime order and the security is derived under the standard (bilateral) k-Linear (k-Lin) assumption. Our work resolves an open problem posed by Abdalla, Gong, and Wee in CRYPTO 2020, where they presented an unbounded-slot FE scheme for attribute-weighted sum achieving only semi-adaptive simulation security. At a technical level, our work extends the recent adaptive security framework of Lin and Luo [EUROCRYPT 2020], devised to achieve compact ciphertexts in the context of indistinguishability-based payload-hiding security, into the setting of simulation-based adaptive attribute-hiding security.
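The functionality itself is easy to state in code: decryption of a ciphertext for {(x_i, z_i)} under a key for f recovers exactly the value computed below (here in the clear, with a simple linear function standing in for an arithmetic branching program; this sketch illustrates only what is recovered, not the FE scheme):

# Attribute-weighted sum: sum_i f(x_i) * z_i, with x_i public and z_i private.
def attribute_weighted_sum(pairs, f):
    return sum(f(x) * z for x, z in pairs)

pairs = [((1, 0), 5), ((0, 1), 7), ((1, 1), 2)]
f = lambda x: 3 * x[0] + 2 * x[1]        # toy stand-in for an ABP weight function
print(attribute_weighted_sum(pairs, f))   # 3*5 + 2*7 + 5*2 = 39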
2021
ASIACRYPT
Compressed Sigma-Protocols for Bilinear Group Arithmetic Circuits and Application to Logarithmic Transparent Threshold Signatures 📺
Lai et al. (CCS 2019) have shown how Bulletproof’s arithmetic circuit zero-knowledge protocol (Bootle et al., EUROCRYPT 2016 and B{\"u}nz et al., S\&P 2018) can be generalized to work for bilinear group arithmetic circuits directly, i.e., without requiring these circuits to be translated into arithmetic circuits. In a nutshell, a bilinear group arithmetic circuit is a standard arithmetic circuit augmented with special gates capturing group exponentiations or pairings. Such circuits are highly relevant, e.g., in the context of zero-knowledge statements over pairing-based languages. As expressing these special gates in terms of a standard arithmetic circuit results in a significant overhead in circuit size, an approach to zero-knowledge via standard arithmetic circuits may incur substantial additional costs. The approach due to Lai et al. shows how to avoid this by integrating additional zero-knowledge techniques into the Bulletproof framework so as to handle the special gates very efficiently. We take a different approach by generalizing {\em Compressed $\Sigma$-Protocol Theory} (CRYPTO 2020) from arithmetic circuit relations to bilinear group arithmetic circuit relations. Besides its conceptual simplicity, our approach has the practical advantage of reducing the communication costs of Lai et al.'s protocol by roughly a multiplicative factor $3$. Finally, we show an application of our results which may be of independent interest. We construct the first $k$-out-of-$n$ threshold signature scheme (TSS) that allows for transparent setup {\em and} that yields threshold signatures of size logarithmic in $n$. The threshold signature hides the identities of the $k$ signers and the threshold $k$ can be dynamically chosen at aggregation time.
2021
ASIACRYPT
Luby-Rackoff Backwards with More Users and More Security 📺
It is known, from the work of Dai \textit{et al.} (in CRYPTO'17), that the PRF advantage of $\xorp$ (bitwise-xor of two outputs of $n$-bit random permutations with domain separated inputs), against an adversary making $q$ queries, is about $q/2^n$ for $q \leq 2^{n-5}$. The same bound can be easily shown to hold for $\xorp[k]$ (bitwise-xor of $k$ outputs of $n$-bit pseudorandom permutations with domain separated inputs), for $k \geq 3$. In this work, we first consider multi-user security of $\xorp[3]$. We show that the multi-user PRF advantage of $\xorp[3]$ is about $\sqrt{uq_{\max}}/2^n$ for all {$q_{\max} \leq 2^{n}/12$}, where $u$ is the number of users and $q_{\max}$ is the maximum number of queries the adversary can make to each user. In the multi-user setup, this implies that $\xorp[3]$ gives security for $O(2^n)$ users even allowing almost $O(2^n)$ queries to each user. This also indicates significant improvement in the single-user setup ({\em i.e.,} when $u =1$), where the distinguishing advantage of the adversary even after making $O(2^n)$ queries is $O({1 \over \sqrt{2^n}})$, {\em i.e.,} negligible. Subsequently, we consider a simple efficient variant of $\xorp[3]$ in which we use five calls to produce $2n$-bit output (instead of six calls in the case of $\xorp[3]$). This variant also achieves a similar level of security. As an immediate application, we can construct a variant of block cipher based counter mode which provides much higher security (both in the single-user and the multi-user setup) compared to the security of the encryption part of GCM at the cost of efficiency.
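A toy sketch of the $\xorp[k]$ construction on a small domain, using a single random permutation with domain-separated inputs (the exact encoding of the domain separation and the parameters are illustrative assumptions, not the paper's formalization):

import random

def make_xorp(k, n_bits=8, seed=0):
    # Toy XORP[k]: xor of k outputs of an n-bit "random" permutation,
    # applied to the domain-separated inputs (x, i) for i = 0..k-1.
    rng = random.Random(seed)
    perm = list(range(1 << n_bits))
    rng.shuffle(perm)                  # stands in for an n-bit random permutation
    d = k.bit_length()                 # bits reserved for domain separation
    def F(x):                          # x is an (n_bits - d)-bit input
        assert 0 <= x < (1 << (n_bits - d))
        out = 0
        for i in range(k):
            out ^= perm[(x << d) | i]
        return out
    return F

F = make_xorp(3)
print([F(x) for x in range(4)])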
2021
ASIACRYPT
Lattice-Based Group Encryption with Full Dynamicity and Message Filtering Policy 📺
Group encryption (GE) is a fundamental privacy-preserving primitive, analogous to group signatures, which allows users to decrypt specific ciphertexts while hiding themselves within a crowd. Since its introduction, numerous constructions have been proposed, among which the schemes separately constructed by Libert et al. (Asiacrypt 2016) over lattices and by Nguyen et al. (PKC 2021) over coding theory are post-quantum secure. Though the latter scheme achieved, for the first time, full dynamicity (allowing group users to join or leave the group at their ease) and a message filtering policy, which greatly improved the state of affairs of GE systems, its practical applications are still limited due to its rather complicated design, inefficiency and weaker security (it is only secure in the random oracle model). In contrast, the scheme of Libert et al. enjoys solid security (in the standard model), but it lacks the aforementioned functionalities and still suffers from inefficiency because of its extensive use of lattice trapdoors. In this work, we re-formalize the model and security definitions of fully dynamic group encryption (FDGE) so that they are essentially equivalent to, but more succinct than, Nguyen et al.'s. Then, we provide a generic and efficient zero-knowledge proof method for proving that a binary vector is non-zero over lattices, from which a proof for the Prohibitive message filtering policy in the lattice setting is achieved for the first time (and in a simple manner). Finally, by combining appropriate cryptographic materials and our presented zero-knowledge proofs, we achieve the first lattice-based FDGE schemes in a simpler manner; they need no lattice trapdoors and are proved secure in the standard model (assuming interaction during the proof phase), outperforming the existing post-quantum secure GE systems in terms of functionality, efficiency and security.
2021
ASIACRYPT
On the hardness of the NTRU problem 📺
The 25 year-old NTRU problem is an important computational assumption in public-key cryptography. However, from a reduction perspective, its relative hardness compared to other problems on Euclidean lattices is not well-understood. Its decision version reduces to the search Ring-LWE problem, but this only provides a hardness upper bound. We provide two answers to the long-standing open problem of providing reduction-based evidence of the hardness of the NTRU problem. First, we reduce the worst-case approximate Shortest Vector Problem over ideal lattices to an average-case search variant of the NTRU problem. Second, we reduce another average-case search variant of the NTRU problem to the decision NTRU problem.
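For context, the (search) NTRU problem referred to above is usually stated as follows (standard formulation, paraphrased): given $h = g \cdot f^{-1} \bmod q$ in a polynomial ring $R_q$ (for instance $\mathbb{Z}_q[x]/(x^n+1)$), where $f, g \in R_q$ have small coefficients and $f$ is invertible modulo $q$, find a small pair $(f', g')$ with $f' \cdot h = g' \bmod q$; the decision version asks to distinguish such an $h$ from a uniformly random element of $R_q$.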
2021
ASIACRYPT
Snarky Ceremonies 📺
Succinct non-interactive arguments of knowledge (SNARKs) have found numerous applications in the blockchain setting and elsewhere. The most efficient SNARKs require a distributed ceremony protocol to generate public parameters, also known as a structured reference string (SRS). Our contributions are two-fold: (1) We give a security framework for non-interactive zero-knowledge arguments with a ceremony protocol. (2) We revisit the ceremony protocol of Groth's SNARK [Bowe et al., 2017]. We show that the original construction can be simplified and optimized, and then prove its security in our new framework. Importantly, our construction avoids the random beacon model used in the original work.
2021
ASIACRYPT
Reverse Firewalls for Adaptively Secure MPC without Setup 📺
We study Multi-party computation (MPC) in the setting of subversion, where the adversary tampers with the machines of honest parties. Our goal is to construct actively secure MPC protocols where parties are corrupted adaptively by an adversary (as in the standard adaptive security setting), and in addition, honest parties' machines are compromised. The idea of reverse firewalls (RF) was introduced at EUROCRYPT'15 by Mironov and Stephens-Davidowitz as an approach to protecting protocols against corruption of honest parties' devices. Intuitively, an RF for a party $\mathcal{P}$ is an external entity that sits between $\mathcal{P}$ and the outside world and whose scope is to sanitize $\mathcal{P}$’s incoming and outgoing messages in the face of subversion of their computer. Mironov and Stephens-Davidowitz constructed a protocol for passively-secure two-party computation. At CRYPTO'20, Chakraborty, Dziembowski and Nielsen constructed a protocol for secure computation with firewalls that improved on this result, both by extending it to \textit{multi}-party computation protocol, and considering \textit{active} security in the presence of \textit{static} corruptions. In this paper, we initiate the study of RF for MPC in the \textit{adaptive} setting. We put forward a definition for adaptively secure MPC in the reverse firewall setting, explore relationships among the security notions, and then construct reverse firewalls for MPC in this stronger setting of adaptive security. We also resolve the open question of Chakraborty, Dziembowski and Nielsen by removing the need for a trusted setup in constructing RF for MPC. Towards this end, we construct reverse firewalls for adaptively secure augmented coin tossing and adaptively secure zero-knowledge protocols and obtain a constant round adaptively secure MPC protocol in the reverse firewall setting without setup. Along the way, we propose a new multi-party adaptively secure coin tossing protocol in the plain model, that is of independent interest.
2021
ASIACRYPT
Algebraic Adversaries in the Universal Composability Framework 📺
The algebraic-group model (AGM), which lies between the generic group model and the standard model of computation, provides a means by which to analyze the security of cryptosystems against so-called algebraic adversaries. We formalize the AGM within the framework of universal composability, providing formal definitions for this setting and proving an appropriate composition theorem. This extends the applicability of the AGM to more-complex protocols, and lays the foundations for analyzing algebraic adversaries in a composable fashion. Our results also clarify the meaning of composing proofs in the AGM with other proofs and they highlight a natural form of independence between idealized groups that seems inherent to the AGM and has not been made formal before---these insights also apply to the composition of game-based proofs in the AGM. We show the utility of our model by proving several important protocols universally composable for algebraic adversaries, specifically: (1) the Chou-Orlandi protocol for oblivious transfer, and (2) the SPAKE2 and CPace protocols for password-based authenticated key exchange.
2021
ASIACRYPT
Categorization of Faulty Nonce Misuse Resistant Message Authentication 📺
A growing number of lightweight block ciphers are proposed for environments such as the Internet of Things. An important contribution to the reduced implementation cost is a block length n of 64 or 96 bits rather than 128 bits. As a consequence, encryption modes and message authentication code (MAC) algorithms require security beyond the 2^{n/2} birthday bound. This paper provides an extensive treatment of MAC algorithms that offer beyond-birthday-bound PRF security for both nonce-respecting and nonce-misusing adversaries. We study constructions that use two block cipher calls, one universal hash function call and an arbitrary number of XOR operations. We start with the separate problem of generically identifying all possible secure n-to-n-bit pseudorandom functions (PRFs) based on two block cipher calls. The analysis shows that the existing constructions EDM, SoP, and EDMD are the only constructions of this kind that achieve beyond-birthday-bound security. Subsequently we deliver an exhaustive treatment of MAC algorithms, where the outcome of a universal hash function evaluation on the message may be entered at any point in the computation of the PRF. We conclude that there are nine schemes in total that achieve beyond-birthday-bound security, and a tenth construction that cannot be proven using currently known proof techniques. For these nine MAC algorithms, three constructions achieve optimal n-bit security in the nonce-respecting setting, but are completely insecure if the nonce is reused. The remaining six constructions have 3n/4-bit security in the nonce-respecting setting, and only four out of these six constructions still achieve beyond-birthday-bound security in the case of nonce misuse.
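As a reference point for the constructions named above, the three beyond-birthday-bound n-to-n-bit PRFs from two block cipher calls can be written down concisely; the sketch below uses toy keyed random permutations in place of real block ciphers (the definitions of SoP, EDM and EDMD are the standard ones, everything else is illustrative):

import random

def toy_blockcipher(key, n_bits=8):
    # Stand-in for an n-bit block cipher: a keyed "random" permutation.
    rng = random.Random(key)
    perm = list(range(1 << n_bits))
    rng.shuffle(perm)
    return lambda x: perm[x]

E1, E2 = toy_blockcipher("k1"), toy_blockcipher("k2")

def sop(x):     # Sum of Permutations
    return E1(x) ^ E2(x)

def edm(x):     # Encrypted Davies-Meyer
    return E2(E1(x) ^ x)

def edmd(x):    # EDM Dual
    return E2(E1(x)) ^ E1(x)

print(sop(5), edm(5), edmd(5))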
2021
ASIACRYPT
Better Security-Efficiency Trade-Offs in Permutation-Based Two-Party Computation 📺
We improve upon the security of (tweakable) correlation-robust hash functions, which are essential components of garbling schemes and oblivious-transfer extension schemes. We in particular focus on constructions from permutations, and improve upon the work by Guo et al. (IEEE S\&P '20) in terms of security and efficiency. We present a tweakable one-call construction which matches the security of the most secure two-call construction -- the resulting security bound takes the form O((p+q)q/2^n), where q is the number of construction evaluations and p is the number of direct adversarial queries to the underlying n-bit permutation, which is modeled as random. Moreover, we present a new two-call construction with much better security degradation -- in particular, for applications of interest, where only a constant number of evaluations per tweak are made, the security degrades as O((\sqrt{q} p+q^2)/2^n). Our security proof relies on the sum-capture theorems (Babai ’02; Steinberger ’12; Cogliati and Seurin ’18), as well as on new balls-into-bins combinatorial lemmas for limited independence ball-throws. Of independent interest, we also provide a self-contained concrete security treatment of oblivious transfer extension.
2021
ASIACRYPT
A Practical Key-Recovery Attack on 805-Round Trivium 📺
The cube attack is one of the most important cryptanalytic techniques against Trivium. Many key-recovery attacks based on cube attacks have been established. However, few attacks can practically recover the full 80-bit key. In particular, the previous best practical key-recovery attack was on 784-round Trivium, proposed by Fouque and Vannet at FSE 2013. Mounting practical key-recovery attacks requires a sufficient number of low-degree superpolies, which is difficult to obtain with randomly selected cubes, both for experimental cube attacks and for division-property-based cube attacks, due to their lack of efficiency. In this paper, we give a new algorithm to construct candidate cubes targeting linear superpolies. Our experiments show that the success probability is 100% for finding linear superpolies using the constructed cubes. We obtain over 1000 linear superpolies for 805-round Trivium. With 42 independent linear superpolies, we mount a practical key-recovery attack on 805-round Trivium, which increases the number of attacked rounds by 21. The complexity of our attack is $ 2^{41.40} $; the attack can be carried out on a PC with a GTX-1080 GPU in several hours.
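The basic operation behind cube attacks in general, summing an output bit over all assignments of a chosen cube of IV bits so that the sum equals the cube's superpoly in the key, can be illustrated on a toy Boolean function (this shows the generic technique only, not the paper's cube-construction algorithm):

from itertools import product

def cube_sum(f, cube_vars, n_iv, key):
    # XOR f(key, iv) over all assignments of the cube variables,
    # with the remaining IV bits fixed to 0; the result is the
    # superpoly of the cube evaluated at `key`.
    acc = 0
    for bits in product((0, 1), repeat=len(cube_vars)):
        iv = [0] * n_iv
        for v, b in zip(cube_vars, bits):
            iv[v] = b
        acc ^= f(key, iv)
    return acc

# Toy "keystream bit": iv0*iv1*(k0 ^ k1) ^ iv0*k2 ^ iv2.
def toy_output(key, iv):
    k0, k1, k2 = key
    return (iv[0] & iv[1] & (k0 ^ k1)) ^ (iv[0] & k2) ^ iv[2]

# Summing over the cube {iv0, iv1} isolates the linear superpoly k0 ^ k1.
for key in [(0, 0, 1), (1, 0, 0), (1, 1, 1)]:
    print(key, cube_sum(toy_output, [0, 1], 3, key))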
2021
ASIACRYPT
Automatic Classical and Quantum Rebound Attacks on AES-like Hashing by Exploiting Related-key Differentials 📺
Collision attacks on AES-like hashing (hash functions constructed by plugging AES-like ciphers or permutations into the famous PGV modes or their variants) can be reduced to the problem of finding a pair of inputs respecting a differential of the underlying AES-like primitive whose input and output differences are the same. The rebound attack due to Mendel et al. is a powerful tool for achieving this goal, whose quantum version was first considered by Hosoyamada and Sasaki at EUROCRYPT 2020. In this work, we automate the process of searching for the configurations of rebound attacks by taking related-key differentials of the underlying block cipher into account with the MILP-based approach. In the quantum setting, our model guides the search towards characteristics that minimize the resources (e.g., QRAM) and complexities of the resulting rebound attacks. We apply our method to Saturnin-hash, Skinny, and Whirlpool and obtain improved results.
2021
ASIACRYPT
On the non-tightness of measurement-based reductions for key encapsulation mechanism in the quantum random oracle model 📺
Key encapsulation mechanism (KEM) variants of the Fujisaki-Okamoto (FO) transformation (TCC 2017) that turn a weakly-secure public-key encryption (PKE) into an IND-CCA-secure KEM were widely used among the KEM submissions to the NIST Post-Quantum Cryptography Standardization Project. Under the standard CPA security assumptions, i.e., OW-CPA and IND-CPA, the security of these variants in the quantum random oracle model (QROM) has been proved by black-box reductions, e.g., Jiang et al. (CRYPTO 2018), and by non-black-box reductions (EUROCRYPT 2020). The non-black-box reductions (EUROCRYPT 2020) have a linear security loss, but can only apply to specific \emph{reversible} adversaries with strict \emph{reversible} implementations. In contrast, the existing black-box reductions in the literature can apply to an arbitrary adversary with an arbitrary implementation, but suffer a quadratic security loss. In this paper, for KEM variants of the FO transformation, we first show the tightness limits of the black-box reductions, and prove that a \emph{measurement-based} reduction in the QROM from breaking the standard OW-CPA (or IND-CPA) security of the underlying PKE to breaking the IND-CCA security of the resulting KEM will \emph{inevitably} incur a quadratic loss of security, where ``measurement-based'' means the reduction measures a hash query from the adversary and uses the measurement outcome to break the underlying security of PKE. In particular, most black-box reductions for these FO-like KEM variants are of this type, and our results suggest an explanation for the lack of progress in improving this reduction tightness in terms of the degree of security loss. Then, we further show that the quadratic loss is also unavoidable when one turns a search problem into a decision problem using the one-way to hiding technique in a black-box manner, which has been recognized as an essential technique to prove the security of cryptosystems involving quantum random oracles.
2021
ASIACRYPT
Simulation-Based Bi-Selective Opening Security for Public Key Encryption 📺
Selective opening attacks (SOA) (for public-key encryption, PKE) concern such a multi-user scenario, where an adversary adaptively corrupts some fraction of the users to break into a subset of honestly created ciphertexts, and tries to learn the information on the messages of some unopened (but potentially related) ciphertexts. Until now, the notion of selective opening attacks is only considered in two settings: sender selective opening (SSO), where part of senders are corrupted and messages together with randomness for encryption are revealed; and receiver selective opening (RSO), where part of receivers are corrupted and messages together with secret keys for decryption are revealed. In this paper, we consider a more natural and general setting for selective opening security. In the setting, the adversary may adaptively corrupt part of senders and receivers \emph{simultaneously}, and get the plaintext messages together with internal randomness for encryption and secret keys for decryption, while it is hoped that messages of uncorrupted parties remain protected. We denote it as Bi-SO security since it is reminiscent of Bi-Deniability for PKE. We first formalize the requirement of Bi-SO security by the simulation-based (SIM) style, and prove that some practical PKE schemes achieve SIM-Bi-$\text{SO}$-CCA security in the random oracle model. Then, we suggest a weak model of Bi-SO security, denoted as SIM-wBi-$\text{SO}$-CCA security, and argue that it is still meaningful and useful. We propose a generic construction of PKE schemes that achieve SIM-wBi-$\text{SO}$-CCA security in the standard model and instantiate them from various standard assumptions. Our generic construction is built on a newly presented primitive, namely, universal$_{\kappa}$ hash proof system with key equivocability, which may be of independent interest.
2021
ASIACRYPT
Modular Design of Role-Symmetric Authenticated Key Exchange Protocols 📺
Authenticated Key Exchange (AKE) is an important primitive in applied cryptography. Previously several strong models of AKE were introduced, e.g., CK, CK+, eCK and their extended versions considering perfect forward secrecy (PFS), (denoted by a “-PFS” suffix). These models provide different security guarantees and they are incomparable. Hence, one still lacks systematic understanding of the prerequisites for secure AKEs and a modular design of AKE protocols. In this paper, we investigate this issue in the context of One-Round Authenticated Key Exchange (ORKE), which is role-symmetric for players and only needs a single round to establish a session key. Our treatments are as follows: First, we reformat the CK, CK-PFS, CK+, CK+-PFS, eCK and eCK-PFS models in the context of ORKE, some of which are formulated for the first time in the literature. Next, we introduce a new tool, Key-wise Recoverable Function (KRF). With merely black-box calls to KRFs, we build modular constructions for ORKEs. As an immediate application, many previous protocols can be explained naturally by the construction. We further build a protocol with CK, CK+, eCK, CK-PFS, CK+-PFS and eCK-PFS security simultaneously, by properly instantiating the underlying KRF. As a by-product, we have simplified proofs for a few known protocols, with non-standard assumptions avoidable.
2021
ASIACRYPT
A formula for disaster: a unified approach to elliptic curve special-point-based attacks 📺
The Refined Power Analysis, Zero-Value Point, and Exceptional Procedure attacks introduced side-channel techniques against specific cases of elliptic curve cryptography. The three attacks recover bits of a static ECDH key adaptively, collecting information on whether a certain multiple of the input point was computed. We unify and generalize these attacks in a common framework, and solve the corresponding problem for a broader class of inputs. We also introduce a version of the attack against windowed scalar multiplication methods, recovering the full scalar instead of just a part of it. Finally, we systematically analyze elliptic curve point addition formulas from the Explicit-Formulas Database, classify all non-trivial exceptional points, and find them in new formulas. These results indicate the usefulness of our tooling, which we released publicly, for unrolling formulas and finding special points, and potentially for independent future work.
2021
ASIACRYPT
Security Analysis of CPace 📺
In response to standardization requests regarding password-authenticated key exchange (PAKE) protocols, the IRTF working group CFRG set up a PAKE selection process in 2019, which led to the selection of the CPace protocol in the balanced setting, in which parties share a common password. In subsequent standardization efforts, the CPace protocol was further developed, yielding a protocol family whose actual security guarantees in practical settings are not well understood. In this paper, we provide a comprehensive security analysis of CPace in the universal composability framework. Our analysis is realistic in the sense that it captures adaptive corruptions and refrains from modeling CPace's MapToPoint function that maps field elements to curve points as an idealized function. In order to extend our proofs to different CPace variants optimized for specific elliptic-curve ecosystems, we employ a new approach which represents the assumptions required by the proof as libraries accessed by a simulator. By allowing for the modular replacement of assumptions used in the proof, this new approach avoids a repeated analysis of unchanged protocol parts and lets us efficiently analyze the security guarantees of all the different CPace variants. As a result of our analysis, all of the investigated practical CPace variants enjoy adaptive UC security.
2021
ASIACRYPT
Cryptanalysis of an oblivious PRF from supersingular isogenies 📺
We cryptanalyse the SIDH-based oblivious pseudorandom function from supersingular isogenies proposed at Asiacrypt'20 by Boneh, Kogan and Woo. To this end, we give an attack on an assumption, the auxiliary one-more assumption, that was introduced by Boneh et al. and we show that this leads to an attack on the oblivious PRF itself. The attack breaks the pseudorandomness as it allows adversaries to evaluate the OPRF without further interactions with the server after some initial OPRF evaluations and some offline computations. More specifically, we first propose a polynomial-time attack. Then, we argue it is easy to change the OPRF protocol to include some countermeasures, and present a second subexponential attack that succeeds in the presence of said countermeasures. Both attacks break the security parameters suggested by Boneh et al. Furthermore, we provide a proof of concept implementation as well as some timings of our attack. Finally, we examine the generation of one of the OPRF parameters and argue that a trusted third party is needed to guarantee provable security.
2021
ASIACRYPT
Boosting the Security of Blind Signature Schemes 📺
Existing blind signature schemes that are secure for polynomially many concurrent executions of the signing protocol are either inefficient or rely on non-standard assumptions (even in the random-oracle model). We show the first efficient blind signature schemes achieving this level of security based on the RSA, quadratic residuosity, and discrete logarithm assumptions (in the random-oracle model). Our core technique involves an extension and generalization of a transform due to Pointcheval (Eurocrypt '98) that allows us to convert certain blind signature schemes that are secure for (concurrently) issuing logarithmically many signatures into ones secure for (concurrently) issuing polynomially many signatures.
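For readers unfamiliar with the object being boosted, here is Chaum's classic RSA blind signature as a background sketch. The parameters are toy-sized and the message is made up; this is not the paper's transform and by itself offers no concurrent-security guarantees.

```python
# Chaum-style blind RSA signature with textbook-sized toy parameters.
import hashlib, random
from math import gcd

p, q, e = 61, 53, 17                      # toy primes; real RSA needs ~1024-bit primes
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def H(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

msg = b"hello"

# User blinds the hashed message with a random r coprime to N
while True:
    r = random.randrange(2, N)
    if gcd(r, N) == 1:
        break
blinded = (H(msg) * pow(r, e, N)) % N

blinded_sig = pow(blinded, d, N)          # signer signs without seeing msg

sig = (blinded_sig * pow(r, -1, N)) % N   # user unblinds
assert pow(sig, e, N) == H(msg)           # standard RSA verification
```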
2021
ASIACRYPT
The One-More Discrete Logarithm Assumption in the Generic Group Model 📺
The one-more discrete logarithm assumption (OMDL) underlies the security analysis of identification protocols, blind signature and multi-signature schemes, such as blind Schnorr signatures and the recent MuSig2 multi-signatures. As these schemes produce standard Schnorr signatures, they are compatible with existing systems, e.g. in the context of blockchains. OMDL is moreover assumed for many results on the impossibility of certain security reductions. Despite its wide use, surprisingly, OMDL lacks any rigorous analysis; there is not even a proof that it holds in the generic group model (GGM). (We show that a claimed proof is flawed.) In this work we give a formal proof of OMDL in the GGM. We also prove a related assumption, the one-more computational Diffie-Hellman assumption, in the GGM. Our proofs deviate from prior GGM proofs and replace the use of the Schwartz-Zippel Lemma by a new argument.
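To make the assumption concrete, the toy game below sketches the OMDL experiment in a tiny subgroup. The parameters are chosen only for readability; discrete logs are trivially brute-forceable at this size, so nothing here reflects the paper's generic-group proof.

```python
# Toy sketch of the one-more discrete logarithm (OMDL) game.
import random

p, q, g = 23, 11, 2          # g generates the order-11 subgroup of Z_23^*

def omdl_game(adversary, n):
    """Challenger: n+1 challenges, at most n discrete-log oracle queries."""
    secrets = [random.randrange(q) for _ in range(n + 1)]
    challenges = [pow(g, x, p) for x in secrets]
    queries = 0

    def dlog_oracle(h):
        nonlocal queries
        assert queries < n, "at most n oracle queries allowed"
        queries += 1
        return next(x for x in range(q) if pow(g, x, p) == h)

    answers = adversary(challenges, dlog_oracle)
    return answers == secrets          # adversary must return all n+1 dlogs

def brute_force_adversary(challenges, dlog_oracle):
    # In this toy group we can ignore the oracle and brute-force everything;
    # OMDL asserts exactly this is infeasible for cryptographically sized groups.
    return [next(x for x in range(q) if pow(g, x, p) == h) for h in challenges]

print(omdl_game(brute_force_adversary, n=3))   # True, only because the group is tiny
```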
2021
ASIACRYPT
New Attacks on LowMC instances with a Single Plaintext/Ciphertext pair 📺
Cryptanalysis of the LowMC block cipher when the attacker has access to a single known plaintext/ciphertext pair is a mathematically challenging problem. This is because the attacker is unable to employ most of the standard techniques in symmetric cryptography like linear and differential cryptanalysis. This scenario is particularly relevant while arguing the security of the Picnic digital signature scheme in which the plaintext/ciphertext pair generated by the LowMC block cipher serves as the public (verification) key and the corresponding LowMC encryption key also serves as the secret (signing) key of the signature scheme. In the paper by Banik et al. (IACR ToSC 2020:4), the authors used a linearization technique of the LowMC S-box to mount attacks on some instances of the block cipher. In this paper, we first make a more precise complexity analysis of the linearization attack. Then, we show how to perform a 2-stage MITM attack on LowMC. The first stage reduces the key candidates corresponding to a fraction of key bits of the master key. The second MITM stage between this reduced candidate set and the remaining fraction of key bits successfully recovers the master key. We show that the combined computational complexity of both these stages is significantly lower than those reported in the ToSC paper by Banik et al.
2021
ASIACRYPT
Tight Security for Key-Alternating Ciphers with Correlated Sub-Keys 📺
A substantial effort has been devoted to proving optimal bounds for the security of key-alternating ciphers with independent sub-keys in the random permutation model (e.g., Chen and Steinberger, EUROCRYPT '14; Hoang and Tessaro, CRYPTO '16). While common in the study of multi-round constructions, the assumption that sub-keys are truly independent is not realistic, as these are generally highly correlated and generated from shorter keys. In this paper, we show the existence of non-trivial distributions of limited independence for which a t-round key-alternating cipher achieves optimal security. Our work is a natural continuation of the work of Chen et al. (CRYPTO '14) which considered the case of t = 2 when all sub-keys are identical. Here, we show that key-alternating ciphers remain secure for a large class of (t-1)-wise and (t-2)-wise independent distributions of sub-keys. Our proofs proceed by generalizations of the so-called Sum-Capture Theorem, which we prove using Fourier-analytic techniques.
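The construction under study can be pinned down in a few lines: a toy key-alternating cipher over 8-bit blocks with public random permutations and sub-keys derived from a short master key. The rotation-based derivation below is purely illustrative and is not one of the distributions analysed in the paper.

```python
# Toy key-alternating cipher: E(x) = k_t XOR P_t( ... P_1(x XOR k_0) ... ).
import random

n, t = 8, 3
rng = random.Random(0)
perms = []
for _ in range(t):
    perm = list(range(1 << n))
    rng.shuffle(perm)                    # public "random" permutations P_1..P_t
    perms.append(perm)

def subkeys(master):
    # Correlated sub-keys from one short master key (illustrative derivation only).
    return [((master << i) | (master >> (n - i))) & ((1 << n) - 1) for i in range(t + 1)]

def encrypt(x, master):
    ks = subkeys(master)
    x ^= ks[0]
    for P, k in zip(perms, ks[1:]):
        x = P[x] ^ k
    return x

def decrypt(y, master):
    ks = subkeys(master)
    for P, k in zip(reversed(perms), reversed(ks[1:])):
        y = P.index(y ^ k)
    return y ^ ks[0]

assert decrypt(encrypt(0xAB, master=0x3C), master=0x3C) == 0xAB
```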
2021
ASIACRYPT
Astrolabous: A Universally Composable Time Lock Encryption Scheme 📺
In this work, we study the cryptographic primitive called time-lock encryption (TLE). The concept of TLE involves a party initiating the encryption of a message that one can only decrypt after a certain amount of time has elapsed. Following the universal composability (UC) paradigm introduced by Canetti [IEEE FOCS 2001], we formally abstract the concept of TLE into an ideal functionality in a flexible way. In addition, we provide a standalone definition for secure TLE schemes in a game-based style and we devise a hybrid protocol that relies on such a secure TLE scheme. We show that if the underlying TLE scheme satisfies the standalone game-based security definition, then our hybrid protocol UC realises the TLE functionality in the random oracle model. Finally, we present \emph{Astrolabous}, a TLE construction that satisfies our security definition, leading to the first UC realization of the TLE functionality. Interestingly, it is hard to prove any of the TLE constructions proposed in the literature UC secure. The reason behind this difficulty relates to the UC framework itself. Intuitively, to capture semantic security, no information should be leaked regarding the plaintext in the ideal world, thus the ciphertext should not contain any information relating to the message. On the other hand, all ciphertexts will eventually open, resulting in a trivial distinction of the real from the ideal world in the standard model. We overcome this limitation by extending any secure TLE construction adopting the techniques of Nielsen [CRYPTO 2002] in the random oracle model. Specifically, the description of the extended TLE algorithms includes calls to the random oracle, allowing our simulator to equivocate. This extension can be applied to any TLE algorithm that satisfies our standalone game-based security definition, and in particular to Astrolabous.
2021
ASIACRYPT
Two-Round Adaptively Secure MPC from Isogenies, LPN, or CDH 📺
We present a new framework for building round-optimal (two-round) adaptively secure MPC. We show that a relatively weak notion of OT that we call indistinguishability OT with receiver oblivious sampleability (r-iOT) is enough to build two-round, adaptively secure MPC against malicious adversaries in the CRS model. We then show how to construct r-iOT from CDH, LPN, or isogeny-based assumptions that can be viewed as group actions (such as CSIDH and CSI-FiSh). This yields the first concrete constructions of two-round adaptively secure MPC against malicious adversaries from CDH, LPN, or isogeny-based assumptions. We further extend our non-isogeny results to the plain model, achieving (to the best of our knowledge) the first construction of two-round adaptively secure MPC against semi-honest adversaries in the plain model from LPN. Our results allow us to build two-round adaptively secure MPC against malicious adversaries from essentially all of the well-studied assumptions in cryptography. In addition, our constructions from isogenies or LPN provide the first post-quantum alternatives to LWE-based constructions for round-optimal adaptively secure MPC. Along the way, we show that r-iOT also implies non-committing encryption (NCE), thereby yielding the first constructions of NCE from isogenies or LPN.
2021
ASIACRYPT
Improved single-round secure multiplication using regenerating codes 📺
In 2016, Guruswami and Wootters showed Shamir's secret-sharing scheme defined over an extension field has a regenerating property. Namely, we can compress each share to an element of the base field by applying a linear form, such that the secret is determined by a linear combination of the compressed shares. Immediately it seemed like an application to improve the complexity of unconditionally secure multiparty computation must be imminent; however, thus far, no result has been published. We present the first application of regenerating codes to MPC, and show that its utility lies in reducing the number of rounds. Concretely, we present a protocol that obliviously evaluates a depth-$d$ arithmetic circuit in $d + O(1)$ rounds, in the amortized setting of parallel evaluations, with $o(n^2)$ ring elements communicated per multiplication. Our protocol makes use of function-dependent preprocessing, and is secure against the maximal adversary corrupting $t < n/2$ parties. All existing approaches in this setting have complexity $\Omega(n^2)$. Moreover, we extend some of the theory on regenerating codes to Galois rings. It was already known that the repair property of MDS codes over fields can be fully characterized in terms of its dual code. We show this characterization extends to linear codes over Galois rings, and use it to show the result of Guruswami and Wootters also holds true for Shamir's scheme over Galois rings.
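As background for the regenerating-codes angle, here is plain Shamir secret sharing with Lagrange reconstruction over a prime field. This is a minimal sketch; the Guruswami-Wootters share-compression step, which requires an extension field and trace maps, is not reproduced.

```python
# Shamir secret sharing over a prime field, with Lagrange reconstruction at x = 0.
import random

P = 2**61 - 1                              # Mersenne prime used as the share field

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=42, t=2, n=7)
assert reconstruct(shares[:3]) == 42       # any 3 = t+1 shares suffice
```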
2021
ASIACRYPT
Identity-Based Encryption for Fair Anonymity Applications: Defining, Implementing, and Applying Rerandomizable RCCA-secure IBE 📺
Our context is anonymous encryption schemes hiding their receiver, but in a setting which allows authorities to reveal the receiver when needed. While anonymous Identity-Based Encryption (IBE) is a natural candidate for such fair anonymity (it gives trusted authority access by design), the {\it de facto} security standard (a.k.a. IND-ID-CCA) is incompatible with the ciphertext rerandomizability which is crucial to anonymous communication. Thus, we seek to extend IND-ID-CCA security for IBE to a notion that can be meaningfully relaxed for rerandomizability while it still protects against active adversaries. To this end, inspired by the notion of replayable adaptive chosen-ciphertext attack (RCCA) security (Canetti {\it et al.}, Crypto'03), we formalize a new security notion called Anonymous Identity-Based RCCA (ANON-ID-RCCA) security for rerandomizable IBE and propose the first construction with rigorous security analysis. The core of our scheme is a novel extension of the double-strand paradigm, which was originally proposed by Golle {\it et al.} (CT-RSA'04) and later extended by Prabhakaran and Rosulek (Crypto'07), to the well-known Gentry-IBE (Eurocrypt'06). Notably, our scheme is the first IBE that simultaneously satisfies adaptive security, rerandomizability, and recipient-anonymity to date. As the application of our new notion, we design a new universal mixnet in the identity-based setting that does not require public key distribution (with fair anonymity). More generally, our new notion is also applicable to most existing rerandomizable RCCA-secure applications to eliminate the need for public key distribution infrastructure while allowing fairness.
2021
ASIACRYPT
Double-Block-Length Hash Function for Minimum Memory Size 📺
Sharing a common primitive for multiple functionalities is essential for lightweight cryptography, and NIST's lightweight cryptography competition (LWC) considers the integration of hashing to AEAD. While permutations are natural primitive choices in such a goal, for design diversity, it is interesting to investigate how small block-cipher (BC) based and tweakable block-cipher (TBC) based schemes can be. Double-block-length (DBL) hash function modes are suitable for ensuring the same security level for AEAD and hashing, but it is hard to achieve a small memory size with them. Romulus, a TBC-based finalist in NIST LWC, introduced the DBL hashing scheme Romulus-H, but it requires $3n+k$ bits of memory using an underlying primitive with an $n$-bit block and a $k$-bit (twea)key. Even the smallest DBL modes in the literature require $2n+k$ bits of memory. Addressing this issue, we present new DBL modes EXEX-NI and EXEX-I achieving $(n+k)$-bit state size, i.e., no extra memory in addition to $n+k$ bits needed within the primitive. EXEX-NI is indifferentiable from a random oracle up to $n - \log n$ bits. By instantiating it with SKINNY, we can provide hashing to Romulus with zero memory overhead. EXEX-I is an optimized mode with collision resistance. We finally compare the hardware performances of EXEX-NI and EXEX-I, and Romulus-H with SKINNY-128-384. EXEX-NI and EXEX-I achieve a circuit-area reduction of 2,000+ GE, yielding total areas smaller than 70% of that of Romulus-H.
2021
ASIACRYPT
Proofs for Inner Pairing Products and Applications 📺
We present a generalized inner product argument and demonstrate its applications to pairing-based languages. We apply our generalized argument to prove that an inner pairing product is correctly evaluated with respect to committed vectors of $n$ source group elements. With a structured reference string (SRS), we achieve a logarithmic-time verifier whose work is dominated by $6 \log n$ target group exponentiations. Proofs are of size $6 \log n$ target group elements, computed using $6n$ pairings and $4n$ exponentiations in each source group. We apply our inner product arguments to build the first polynomial commitment scheme with succinct (logarithmic) verification, $O(\sqrt{d})$ prover complexity for degree $d$ polynomials (not including the cost to evaluate the polynomial), and a SRS of size $O(\sqrt{d})$. Concretely, this means that for $d=2^{28}$, producing an evaluation proof in our protocol is $76\times$ faster than doing so in the KZG commitment scheme, and the CRS in our protocol is $1000\times$ smaller: $13$MB vs $13$GB for KZG. As a second application, we introduce an argument for aggregating $n$ Groth16 zkSNARKs into an $O(\log n)$ sized proof. Our protocol is significantly faster ($>1000\times$) than aggregating SNARKs via recursive composition: we aggregate $\sim 130,000$ proofs in $25$ minutes, versus $90$ proofs via recursive composition. Finally, we further apply our aggregation protocol to construct a low-memory SNARK for machine computations that does not rely on recursive composition. For a computation that requires time $T$ and space $S$, our SNARK produces proofs in space $\tilde{\mathcal{O}}(S+T)$, which is significantly more space efficient than a monolithic SNARK, which requires space $\tilde{\mathcal{O}}(S \cdot T)$.
2021
ASIACRYPT
Batching Base Oblivious Transfers 📺
Protocols that make use of oblivious transfer (OT) rarely require just one instance. Usually a batch of OTs is required — notably, when generating base OTs for OT extension. There is a natural way to optimize 2-round OT protocols when generating a batch, by reusing certain protocol messages across all instances. In this work we show that this batch optimization is error-prone. We catalog many implementations and papers that have an incorrect treatment of this batch optimization, some of them leading to catastrophic leakage in OT extension protocols. We provide a full treatment of how to properly optimize recent 2-round OT protocols for the batch setting. Along the way we show several performance improvements to the OT protocol of McQuoid, Rosulek, and Roy (ACM CCS 2020). In particular, we show an extremely simple OT construction that may be of pedagogical interest.
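To fix ideas about what is being batched, the following is a simplified 2-round OT flow in the style of the protocols the paper revisits (a Chou-Orlandi-like toy over a tiny schoolbook group; the parameters and key-derivation step are illustrative and insecure). The batch optimization discussed above corresponds to reusing the sender's first message A across many such instances, which is exactly where the catalogued pitfalls arise.

```python
# Toy 2-round oblivious transfer sketch (illustrative only, not secure parameters).
import hashlib, secrets

p, q, g = 2039, 1019, 4                      # tiny prime-order subgroup of Z_p^*

def kdf(elem: int) -> bytes:
    return hashlib.sha256(str(elem).encode()).digest()

def xor(key: bytes, msg: bytes) -> bytes:
    return bytes(k ^ m for k, m in zip(key, msg))

# Sender's first message (reused across a whole batch in the optimized variant)
a = secrets.randbelow(q)
A = pow(g, a, p)

# Receiver with choice bit c
c = 1
b = secrets.randbelow(q)
B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p

# Sender derives both keys and sends two ciphertexts
k0 = kdf(pow(B, a, p))
k1 = kdf(pow(B * pow(A, -1, p) % p, a, p))
m0, m1 = b"secret zero", b"secret one!"
e0, e1 = xor(k0, m0), xor(k1, m1)

# Receiver recovers only m_c
kc = kdf(pow(A, b, p))
assert xor(kc, (e0, e1)[c]) == (m0, m1)[c]
```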
2021
ASIACRYPT
Lunar: a Toolbox for More Efficient Universal and Updatable zkSNARKs and Commit-and-Prove Extensions 📺
We study how to construct zkSNARKs whose SRS is universal and updatable, i.e., valid for all relations within a size-bound and to which a dynamic set of participants can indefinitely add secret randomness. Our focus is: efficient universal updatable zkSNARKs with linear-size SRS and their commit-and-prove variants. We both introduce new formal frameworks and techniques, as well as systematize existing ones. We achieve a collection of zkSNARKs with different tradeoffs. One of our schemes achieves the smallest proof size and proving time compared to the state of art for proofs for arithmetic circuits. The language supported by this scheme is a variant of R1CS that we introduce, called R1CS-lite. Another of our constructions directly supports standard R1CS and achieves the fastest proving time for this type of constraints. These results stem from different contributions: (1) a new algebraically-flavored variant of IOPs that we call Polynomial Holographic IOPs (PHPs); (2) a new compiler that combines our PHPs with commit-and-prove zk-SNARKs (CP-SNARKs) for committed polynomials; (3) pairing-based realizations of these CP-SNARKs for polynomials; (4) constructions of PHPs for R1CS and R1CS-lite. Finally, we extend the compiler in item (2) to yield commit-and-prove universal zkSNARKs.
2021
ASIACRYPT
Clustering Effect in Simon and Simeck 📺
Simon and Simeck are two lightweight block ciphers with a simple round function using only word rotations and a bit-wise AND operation. Previous work has shown a strong clustering effect for differential and linear cryptanalysis, due to the existence of many trails with the same inputs and outputs. In this paper, we explore this clustering effect by exhibiting a class of high probability differential and linear trails where the active bits stay in a fixed window of w bits. Instead of enumerating a set of good trails contributing to a differential or a linear approximation, we compute the probability distribution over this space, including all trails in the class. This results in stronger distinguishers than previously proposed, and we describe key recovery attacks against Simon and Simeck improving the previous results by up to 7 rounds. In particular, we obtain an attack against 42-round Simeck-64, leaving only two rounds of security margin, and an attack against 45-round Simon-96/144, reducing the security margin from 16 rounds to 9 rounds.
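For reference, the round functions in question are tiny; the sketch below follows the rotation amounts from the Simon and Simeck specifications, with key schedules and round counts omitted and a placeholder input and round key.

```python
# Simon and Simeck round functions (word-level sketch).

def rol(x, r, w):
    return ((x << r) | (x >> (w - r))) & ((1 << w) - 1)

def simon_round(left, right, k, w):
    f = (rol(left, 1, w) & rol(left, 8, w)) ^ rol(left, 2, w)
    return right ^ f ^ k, left

def simeck_round(left, right, k, w):
    f = (left & rol(left, 5, w)) ^ rol(left, 1, w)
    return right ^ f ^ k, left

# e.g. one round on 32-bit words with a dummy round key
l, r = simon_round(0x656B696C, 0x20646E75, k=0x1A2B3C4D, w=32)
```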
2021
ASIACRYPT
Quantum Encryption with Certified Deletion, Revisited: Public Key, Attribute-Based, and Classical Communication 📺
Broadbent and Islam (TCC '20) proposed a quantum cryptographic primitive called quantum encryption with certified deletion. In this primitive, a receiver in possession of a quantum ciphertext can generate a classical certificate that the encrypted message is deleted. Although their construction is information-theoretically secure, it is limited to the setting of one-time symmetric key encryption (SKE), where a sender and receiver have to share a common key in advance and the key can be used only once. Moreover, the sender has to generate a quantum state and send it to the receiver over a quantum channel in their construction. Deletion certificates are privately verifiable, which means a verification key for a certificate must be kept secret, in the definition by Broadbent and Islam. However, we can also consider public verifiability. In this work, we present various constructions of encryption with certified deletion. - Quantum communication case: We achieve (reusable-key) public key encryption (PKE) and attribute-based encryption (ABE) with certified deletion. Our PKE scheme with certified deletion is constructed assuming the existence of IND-CPA secure PKE, and our ABE scheme with certified deletion is constructed assuming the existence of indistinguishability obfuscation and one-way function. These two schemes are privately verifiable. - Classical communication case: We also achieve interactive encryption with certified deletion that uses only classical communication. We give two schemes, a privately verifiable one and a publicly verifiable one. The former is constructed assuming the LWE assumption in the quantum random oracle model. The latter is constructed assuming the existence of one-shot signatures and extractable witness encryption.
2021
ASIACRYPT
Faster Dual Lattice Attacks for Solving LWE -- with applications to CRYSTALS 📺
Cryptosystems based on the learning with errors (LWE) problem are assigned a security level that relates to the cost of generic algorithms for solving the LWE problem. This includes at least the so- called primal and dual lattice attacks. In this paper, we present an improvement of the dual lattice attack using an idea that can be traced back to work by Bleichenbacher. We present an improved distinguisher that in combination with a guessing step shows a reduction in the overall complexity for the dual attack on all schemes. Our second contribution is a new two-step lattice reduction strategy that allows the new dual lattice attack to exploit two recent techniques in lattice reduction algorithms, i.e., the "dimensions for free" trick and the trick of producing many short vectors in one sieving. Since the incompatibility of these two tricks was believed to be the main reason that dual attacks are less interesting, our new reduction strategy allows more efficient dual approaches than primal attacks, for important cryptographic parameter sets. We apply the proposed attacks on CRYSTALS-Kyber and CRYSTALS-Dilithium, two of the finalists in the NIST post-quantum cryptography project and present new lower complexity numbers, both classically and quantumly in the core-SVP model. Most importantly, for the proposed security parameters, our new dual attack with refined lattice reduction strategy greatly improves the state-of-the-art primal attack in the classical gate-count metric, i.e., the classical Random Access Machine (RAM) model, indicating that some parameters are really on the edge for their claimed security level. Specifically, the improvement factor can be as large as 15 bits for Kyber1024 with an extrapolation model (Albrecht et al. at Eurocrypt 2019). Also, we show that Kyber768 could be solved with classical gate complexity below its claimed security level. Last, we apply the new attack to the proposed parameters in a draft version of Homomorphic Encryption Standard (see https://homomorphicencryption.org) and obtain significant gains. For instance, we could solve a parameter set aiming for 192-bit security in $2^{187.0}$ operations in the classical RAM model. Note that these parameters are deployed in well-known Fully Homomorphic Encryption libraries.
2021
ASIACRYPT
Symmetric Key Exchange with Full Forward Security and Robust Synchronization 📺
We construct lightweight authenticated key exchange protocols based on pre-shared keys, which achieve full forward security and rely only on simple and efficient symmetric-key primitives. All of our protocols have rigorous security proofs in a strong security model, all have low communication complexity, and are particularly suitable for resource-constrained devices. We describe three protocols that apply linear key evolution to provide different performance and security properties. Correctness in parallel and concurrent protocol sessions is difficult to achieve for linearly key-evolving protocols, emphasizing the need for assurance of availability alongside the usual confidentiality and authentication security goals. We introduce synchronization robustness as a new formal security goal, which essentially guarantees that parties can re-synchronize efficiently. All of our new protocols achieve this property. Since protocols based on linear key evolution cannot guarantee that all concurrently initiated sessions successfully derive a key, we also propose two constructions with non-linear key evolution based on puncturable PRFs. These are instantiable from standard hash functions and require O(C log(|CTR|)) memory, where C is the number of concurrent sessions and |CTR| is an upper bound on the total number of sessions per party. These are the first protocols to simultaneously achieve full forward security, synchronization robustness, and concurrent correctness.
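The idea of linear key evolution can be sketched in a few lines: hash the shared state forward after every session and discard the old state, so that compromising the current state does not expose earlier session keys. This is only a conceptual sketch with hypothetical labels, not one of the paper's three protocols (which additionally handle authentication and synchronization).

```python
# Conceptual sketch of linear key evolution for forward security.
import hashlib

def h(tag: bytes, data: bytes) -> bytes:
    return hashlib.sha256(tag + data).digest()

class EvolvingKey:
    def __init__(self, psk: bytes):
        self.state = psk
        self.epoch = 0

    def derive_session_key(self) -> bytes:
        session = h(b"session", self.state)
        self.state = h(b"evolve", self.state)   # one-way step: the old state is gone
        self.epoch += 1
        return session

alice, bob = EvolvingKey(b"pre-shared"), EvolvingKey(b"pre-shared")
assert alice.derive_session_key() == bob.derive_session_key()
assert alice.derive_session_key() == bob.derive_session_key()
```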
2021
ASIACRYPT
Séta: Supersingular Encryption from Torsion Attacks 📺
We present Séta, a new family of public-key encryption schemes with post-quantum security based on isogenies of supersingular elliptic curves. It is constructed from a new family of trapdoor one-way functions, where the inversion algorithm uses Petit's so-called \emph{torsion attacks} on SIDH to compute an isogeny between supersingular elliptic curves given an endomorphism of the starting curve and images of torsion points. We prove the OW-CPA security of Séta and present an IND-CCA variant using the post-quantum OAEP transformation. Several variants for key generation are explored together with their impact on the selection of parameters, such as the base prime of the scheme. We furthermore formalise an ``uber'' isogeny assumption framework which aims to generalize computational isogeny problems encountered in schemes including SIDH, CSIDH, OSIDH and ours. Finally, we carefully select parameters to achieve a balance between security and run-times and present experimental results from our implementation.
2021
ASIACRYPT
How to Build a Trapdoor Function from an Encryption Scheme 📺
In this work we ask the following question: Can we transform any encryption scheme into a trapdoor function (TDF)? Alternatively stated, can we make any encryption scheme randomness recoverable? We propose a generic compiler that takes as input any encryption scheme with pseudorandom ciphertexts and adds a trapdoor to invert the encryption, recovering also the random coins. This universal TDFier only assumes in addition the existence of a hinting pseudorandom generator (PRG). Despite the simplicity, our transformation is quite general and we establish a series of new feasibility results: - The first identity-based TDF [Bellare et al. EUROCRYPT 2012] from the CDH assumption in pairing-free groups (or from factoring), thus matching the state of the art for identity-based encryption schemes. Prior works required pairings or LWE. - The first collusion-resistant attribute-based TDF (AB-TDF) for all ($NC^1$, resp.) circuits from LWE (bilinear maps, resp.). Moreover, the first single-key AB-TDF from CDH. To the best of our knowledge, no AB-TDF was known in the literature (not even for a single key) from any assumption. We obtain the same results for predicate encryption. As an additional contribution, we define and construct a trapdoor garbling scheme: A simulation secure garbling scheme with a hidden ``trigger'' that allows the evaluator to fully recover the randomness used by the garbling algorithm. We show how to construct trapdoor garbling from the DDH or LWE assumption with an interplay of key-dependent message (KDM) and randomness-dependent message (RDM) techniques. Trapdoor garbling allows us to obtain alternative constructions of (single-key) AB-TDFs with additional desirable properties, such as adaptive security (in the choice of the attribute) and projective keys. We expect trapdoor garbling to be useful in other contexts, e.g. in case where, upon successful execution, the evaluator needs to immediately verify that the garbled circuit was well-formed.
2021
ASIACRYPT
Lattice sieving via quantum random walks 📺
Lattice-based cryptography is one of the leading proposals for post-quantum cryptography. The Shortest Vector Problem (SVP) is arguably the most important problem for the cryptanalysis of lattice-based cryptography, and many lattice-based schemes have security claims based on its hardness. The best quantum algorithm for the SVP is due to Laarhoven [Laa16 PhD] and runs in (heuristic) time $2^{0.2653d + o(d)}$. In this article, we present an improvement over Laarhoven's result and present an algorithm that has a (heuristic) running time of $2^{0.2570 d + o(d)}$ where $d$ is the lattice dimension. We also present time-memory trade-offs where we quantify the amount of quantum memory and quantum random access memory of our algorithm. The core idea is to replace Grover's algorithm used in [Laa16 PhD] in a key part of the sieving algorithm by a quantum random walk in which we add a layer of locality-sensitive filtering.
2021
ASIACRYPT
Shorter Lattice-Based Group Signatures via ``Almost Free'' Encryption and Other Optimizations 📺
We present an improved lattice-based group signature scheme whose parameter sizes and running times are independent of the group size. The signature length in our scheme is around $200$KB, which is approximately a $3$X reduction over the previously most compact such scheme, based on any quantum-safe assumption, of del Pino et al. (CCS 2018). The improvement comes via several optimizations of some basic cryptographic components that make up group signature schemes, and we think that they will find other applications in privacy-based lattice cryptography.
2021
ASIACRYPT
Divided We Stand, United We Fall: Security Analysis of Some SCA+SIFA Countermeasures Against SCA-Enhanced Fault Template Attacks 📺
Protection against side-channel attacks (SCA) and fault attacks (FA) requires two classes of countermeasures to be simultaneously embedded in a cryptographic implementation. It has already been shown that a straightforward combination of SCA and FA countermeasures is vulnerable to FAs, such as Statistical Ineffective Fault Analysis (SIFA) and Fault Template Attacks (FTA). Consequently, new classes of countermeasures have been proposed which protect against SIFA and also include masking for SCA protection. While they are secure against SIFA and SCA individually, one important question is whether the security claim still holds in the presence of a combined SCA and FA adversary. Security against combined attacks is, however, desired, as countermeasures for both threats are included in such implementations. In this paper, we show that some of the recently proposed combined SIFA and SCA countermeasures fall prey to combined attacks. To this end, we enhance the FTA attacks by considering side-channel information during fault injection. The success of the proposed attacks stems from some non-trivial fault propagation properties of S-Boxes, which remain unexplored in the original FTA proposal. The proposed attacks are validated on an open-source software implementation of Keccak with a SIFA-protected χ5 S-Box with laser fault injection and power measurement, and a hardware implementation of a SIFA-protected χ3 S-Box through gate-level power trace simulation. Finally, we discuss some mitigation strategies to strengthen existing countermeasures.
2021
ASIACRYPT
Improved Programmable Bootstrapping with Larger Precision and Efficient Arithmetic Circuits for TFHE 📺
Fully Homomorphic Encryption (FHE) schemes enable computation over encrypted data. Among them, TFHE [CGGI17] has the great advantage of offering an efficient method for bootstrapping noisy ciphertexts, i.e., reducing the noise. Indeed, homomorphic computation increases the noise in ciphertexts and might compromise the encrypted message. TFHE bootstrapping, in addition to reducing the noise, also evaluates (for free) univariate functions expressed as look-up tables. It however requires the most significant bit of the plaintext to be known a priori, resulting in the loss of one bit of space to store messages. Furthermore it represents a non-negligible overhead in terms of computation in many use cases. In this paper, we propose a solution to overcome this limitation, that we call Programmable Bootstrapping Without Padding (WoP-PBS). This approach relies on two building blocks. The first one is the multiplication à la BFV [FV12] that we incorporate into TFHE. This is possible thanks to a thorough noise analysis showing that correct multiplications can be computed using practical TFHE parameters. The second building block is the generalization of TFHE bootstrapping introduced in this paper. It offers the flexibility to select any chunk of bits in an encrypted plaintext during a bootstrap. It also enables evaluating many LUTs at the same time when working with small enough precision. All these improvements are particularly helpful in some applications such as the evaluation of Boolean circuits (where a bootstrap is no longer required in each evaluated gate) and, more generally, in the efficient evaluation of arithmetic circuits even with large integers. Those results improve TFHE circuit bootstrapping as well. Moreover, we show that bootstrapping large precision integers is now possible using much smaller parameters than those obtained by scaling TFHE ones.
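The "padding bit" limitation mentioned above stems from the negacyclic ring used in the bootstrap. The unencrypted sketch below (toy ring size, arbitrary example function) shows how a look-up table packed into a test polynomial is selected by rotation, and why inputs falling in the upper half come back negated; it is an intuition aid only, not TFHE itself.

```python
# Plaintext-level sketch of LUT selection via negacyclic rotation.
N = 8                                    # ring Z[X]/(X^N + 1), toy size

def negacyclic_rotate(coeffs, k):
    """Multiply the polynomial by X^{-k} modulo X^N + 1."""
    out = [0] * N
    for i, c in enumerate(coeffs):
        j = (i - k) % (2 * N)
        if j < N:
            out[j] = c
        else:
            out[j - N] = -c              # wrap-around picks up a sign flip
    return out

lut = [i * i for i in range(N)]          # the function x -> x^2, packed as a test polynomial

for x in range(2 * N):
    selected = negacyclic_rotate(lut, x)[0]
    # For x < N we read lut[x]; for x >= N the negacyclic wrap returns -lut[x - N],
    # which is why the most significant (padding) bit of the input must be known.
    expected = lut[x] if x < N else -lut[x - N]
    assert selected == expected
```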
2021
ASIACRYPT
SHealS and HealS: isogeny-based PKEs from a key validation method for SIDH 📺
In 2016, Galbraith et al. presented an adaptive attack on the SIDH key exchange protocol. In SIKE, one applies a variant of the Fujisaki-Okamoto transform to force Bob to reveal his encryption key to Alice, which Alice then uses to re-encrypt Bob's ciphertext and verify its validity. Therefore, Bob cannot reuse his encryption keys. There have been two other proposed countermeasures enabling static-static private keys: k-SIDH and its variant by Jao and Urbanik. These countermeasures are relatively expensive since they consist in running multiple parallel instances of SIDH. In this paper, firstly, we propose a new countermeasure to the GPST adaptive attack on SIDH. Our countermeasure does not require key disclosure as in SIKE, nor multiple parallel instances as in k-SIDH. We translate our countermeasure into a key validation method for SIDH-type schemes. Secondly, we use our key validation to design HealSIDH, an efficient SIDH-type static-static interactive key exchange protocol. Thirdly, we derive a PKE scheme SHealS using HealSIDH. SHealS uses larger primes compared to SIKE, has larger keys and ciphertexts, but only $4$ isogenies are computed in a full execution of the scheme, as opposed to $5$ isogenies in SIKE. We prove that SHealS is IND-CPA secure relying on a new assumption we introduce and we conjecture its IND-CCA security. We suggest HealS, a variant of SHealS using a smaller prime, providing smaller keys and ciphertexts. As a result, HealSIDH is a practically efficient SIDH-based (interactive) key exchange incorporating a "direct" countermeasure to the GPST adaptive attack.
2021
ASIACRYPT
Digital Signatures with Memory-Tight Security in the Multi-Challenge Setting 📺
The standard security notion for digital signatures is "single-challenge" (SC) EUF-CMA security, where the adversary outputs a single message-signature pair and "wins" if it is a forgery. Auerbach et al. (CRYPTO 2017) introduced memory-tightness of reductions and argued that the right security goal in this setting is actually a stronger "multi-challenge" (MC) definition, where an adversary may output many message-signature pairs and "wins" if at least one is a forgery. Currently, no construction from simple standard assumptions is known to achieve full tightness with respect to time, success probability, and memory simultaneously. Previous works showed that memory-tight signatures cannot be achieved via certain natural classes of reductions (Auerbach et al., CRYPTO 2017; Wang et al., EUROCRYPT 2018). These impossibility results may give the impression that the construction of memory-tight signatures is difficult or even impossible. We show that this impression is false, by giving the first constructions of signature schemes with full tightness in all dimensions in the MC setting. To circumvent the known impossibility results, we first introduce the notion of canonical reductions in the SC setting. We prove a general theorem establishing that every signature scheme with a canonical reduction is already memory-tightly secure in the MC setting, provided that it is strongly unforgeable, the adversary receives only one signature per message, and assuming the existence of a tightly-secure pseudorandom function. We then achieve memory-tight many-signatures-per-message security in the MC setting by a simple additional generic transformation. This yields the first memory-tightly, strongly EUF-CMA-secure signature schemes in the MC setting. Finally, we show that standard security proofs often already can be viewed as canonical reductions. Concretely, we show this for signatures from lossy identification schemes (Abdalla et al., EUROCRYPT 2012), two variants of RSA Full-Domain Hash (Bellare and Rogaway, EUROCRYPT 1996), and two variants of BLS signatures (Boneh et al., ASIACRYPT 2001).
2021
ASIACRYPT
Dynamic Random Probing Expansion with Quasi Linear Asymptotic Complexity 📺
The masking countermeasure is widely used to protect cryptographic implementations against side-channel attacks. While many masking schemes are shown to be secure in the widely deployed probing model, the latter raised a number of concerns regarding its relevance in practice. Offering the adversary the knowledge of a fixed number of intermediate variables, it does not capture the so-called horizontal attacks which exploit the repeated manipulation of sensitive variables. Therefore, recent works have focused on the random probing model in which each computed variable leaks with some given probability p. This model benefits from better fitting the reality of embedded devices. In particular, Belaïd, Coron, Prouff, Rivain, and Taleb (CRYPTO 2020) introduced a framework to generate random probing circuits. Their compiler somehow extends base gadgets as soon as they satisfy a notion called random probing expandability (RPE). A subsequent work from Belaïd, Rivain, and Taleb (EUROCRYPT 2021) went a step forward with tighter properties and improved complexities. In particular, their construction reaches a complexity of O(κ^{3.9}), for a κ-bit security, while tolerating a leakage probability of p = 2^{−7.5}. In this paper, we generalize the random probing expansion approach by considering a dynamic choice of the base gadgets at each step in the expansion. This approach makes it possible to use gadgets with a high number of shares –which enjoy better asymptotic complexity in the expansion framework– while still tolerating the best leakage rate usually obtained for small gadgets. We investigate strategies for the choice of the sequence of compilers and show that it can reduce the complexity of an AES implementation by a factor 10. We also significantly improve the asymptotic complexity of the expanding compiler by exhibiting new asymptotic gadget constructions. Specifically, we introduce RPE gadgets for linear operations featuring a quasi-linear complexity, as well as an RPE multiplication gadget with a linear number of multiplications. These new gadgets drop the complexity of the expanding compiler from quadratic to quasi-linear.
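As background on the kind of gadgets being expanded, here is plain Boolean masking together with the classic ISW multiplication gadget. This is a minimal sketch; the paper's RPE gadgets add the expandability and complexity properties discussed above, which this snippet does not capture.

```python
# Boolean masking and the Ishai-Sahai-Wagner (ISW) multiplication gadget.
import secrets
from functools import reduce

def share(bit, d):
    """Split a bit into d random shares whose XOR equals the bit."""
    s = [secrets.randbelow(2) for _ in range(d - 1)]
    s.append(bit ^ reduce(lambda a, b: a ^ b, s, 0))
    return s

def unshare(shares):
    return reduce(lambda a, b: a ^ b, shares, 0)

def isw_and(a, b):
    """Shares of x & y from shares of x and shares of y."""
    d = len(a)
    r = [[0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1, d):
            r[i][j] = secrets.randbelow(2)
            r[j][i] = (r[i][j] ^ (a[i] & b[j])) ^ (a[j] & b[i])
    out = []
    for i in range(d):
        ci = a[i] & b[i]
        for j in range(d):
            if j != i:
                ci ^= r[i][j]
        out.append(ci)
    return out

x, y, d = 1, 1, 4
assert unshare(isw_and(share(x, d), share(y, d))) == (x & y)
```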
2021
ASIACRYPT
Verifiably-Extractable OWFs and Their Applications to Subversion Zero-Knowledge 📺
An extractable one-way function (EOWF), introduced by Canetti and Dakdouk (ICALP 2008) and generalized by Bitansky et al. (SIAM Journal on Computing vol. 45), is an OWF that allows for efficient extraction of a preimage for the function. We study (generalized) EOWFs that have a public image verification algorithm. We call such OWFs verifiably-extractable and show that several previously known constructions satisfy this notion. We study how such OWFs relate to subversion zero-knowledge (Sub-ZK) NIZKs by using them to generically construct a Sub-ZK NIZK from a NIZK satisfying certain additional properties, and conversely show how to obtain them from any Sub-ZK NIZK. Prior to our work, the Sub-ZK property of NIZKs was achieved using concrete knowledge assumptions.
2021
ASIACRYPT
QCB: Efficient Quantum-secure Authenticated Encryption 📺
It was long thought that symmetric cryptography was only mildly affected by quantum attacks, and that doubling the key length was sufficient to restore security. However, recent works have shown that Simon's quantum period finding algorithm breaks a large number of MAC and authenticated encryption algorithms when the adversary can query the MAC/encryption oracle with a quantum superposition of messages. In particular, the OCB authenticated encryption mode is broken in this setting, and no quantum-secure mode is known with the same efficiency (rate-one and parallelizable). In this paper we generalize the previous attacks, show that a large class of OCB-like schemes is unsafe against superposition queries, and discuss the quantum security notions for authenticated encryption modes. We propose a new rate-one parallelizable mode named QCB inspired by TAE and OCB and prove its security against quantum superposition queries.
2021
ASIACRYPT
Efficient NIZKs for Algebraic Sets 📺
Significantly extending the framework of (Couteau and Hartmann, Crypto 2020), we propose a general methodology to construct NIZKs for showing that an encrypted vector $\vec{\chi}$ belongs to an algebraic set, i.e., is in the zero locus of an ideal $\mathscr{I}$ of a polynomial ring. In the case where $\mathscr{I}$ is principal, i.e., generated by a single polynomial $F$, we first construct a matrix that is a ``quasideterminantal representation'' of $F$ and then a NIZK argument to show that $F (\vec{\chi}) = 0$. This leads to compact NIZKs for general computational structures, such as polynomial-size algebraic branching programs. We extend the framework to the case where $\mathscr{I}$ is non-principal, obtaining efficient NIZKs for R1CS, arithmetic constraint satisfaction systems, and thus for $\mathsf{NP}$. As an independent result, we explicitly describe the corresponding language of ciphertexts as an algebraic language, with smaller parameters than in previous constructions that were based on the disjunction of algebraic languages. This results in an efficient GL-SPHF for algebraic branching programs.
2021
ASIACRYPT
ConTra Corona: Contact Tracing against the Coronavirus by Bridging the Centralized–Decentralized Divide for Stronger Privacy 📺
Contact tracing is among the most important interventions to mitigate the spread of any pandemic usually in the form of manual contact tracing. Smartphone-facilitated digital contact tracing may help to increase tracing capabilities and extend the coverage to those contacts one does not know in person. Most implemented protocols use local Bluetooth Low Energy (BLE) communication to detect contagion-relevant proximity, together with cryptographic protections, as necessary to improve the privacy of the users of such a system. However, current decentralized protocols, including DP3T, do not sufficiently protect infected users from having their status revealed to their contacts, which raises fear of stigmatization. We alleviate this by proposing a new and practical solution with stronger privacy guarantees against active adversaries. It is based on the upload-what-you-observed paradigm, includes a separation of duties on the server side, and a mechanism to ensure that users cannot deduce which encounter caused a warning with high time resolution. Finally, we present a simulation-based security notion of digital contact tracing in the real–ideal setting, and prove the security of our protocol in this framework.
2021
ASIACRYPT
Franchised Quantum Money 📺
The construction of public key quantum money based on standard cryptographic assumptions is a longstanding open question. Here we introduce franchised quantum money, an alternative form of quantum money that is easier to construct. Franchised quantum money retains the features of a useful quantum money scheme, namely unforgeability and local verification: anyone can verify banknotes without communicating with the bank. In franchised quantum money, every user gets a unique secret verification key, and the scheme is secure against counterfeiting and sabotage, a new security notion that appears in the franchised model. Finally, we construct franchised quantum money and prove security assuming one-way functions.
2021
ASIACRYPT
Cryptographic Analysis of the Bluetooth Secure Connection Protocol Suite 📺
We give a cryptographic analysis of the Bluetooth Secure Connection Protocol Suite. Bluetooth supports several subprotocols such as numeric comparison, passkey entrance, and just works, in order to match the devices' different input/output capabilities. Previous analyses (e.g., Lindell, CT-RSA'09, or Troncoso and Hale, NDSS'21) often considered (and confirmed) the security of single subprotocols only. Recent practically verified attacks, however, such as the Method Confusion Attack (von Tschirschnitz et al., S&P 21) against Bluetooth's authentication and key secrecy property often exploit the bad interplay of different subprotocols. Even worse, some of these attacks show that one cannot show the Bluetooth protocol suite to be a secure authenticated key exchange protocol. We therefore aim at the best we can hope for, and show that the protocol still matches the common key secrecy requirements of a key-exchange protocol if one assumes a trust-on-first-use relationship. This means that the adversary needs to mount an active attack during the first connection, otherwise the subsequent reconnections remain secure. Investigating the cryptographic strength of the Bluetooth protocol we also look into the privacy mechanism of address randomization in Bluetooth (which is only available in the Low Energy version). We show that the cryptography indeed provides a decent level of address privacy, although this does not rule out identification of devices via other means, such as physical characteristics.
2021
ASIACRYPT
Efficient Boolean Search over Encrypted Data with Reduced Leakage 📺
Encrypted multi-maps enable outsourcing the storage of a multi-map to an untrusted server while maintaining the ability to query privately. We focus on encrypted Boolean multi-maps that support arbitrary Boolean queries over the multi-map. Kamara and Moataz [Eurocrypt’17] presented the first encrypted multi-map, BIEX, that supports CNF queries with optimal communication, worst-case sublinear search time and non-trivial leakage. We improve on previous work by presenting a new construction CNFFilter for CNF queries with significantly less leakage than BIEX, while maintaining both optimal communication and worst-case sublinear search time. As a direct consequence our construction shows additional resistance to leakage-abuse attacks in comparison to prior works. For most CNF queries, CNFFilter avoids leaking the result sets for any singleton queries for labels appearing in the CNF query. As an example, for the CNF query of the form (l1 ∨ l2) ∧ l3, our scheme does not leak the result sizes of queries to l1, l2 or l3 individually. On the other hand, BIEX does leak some of this information. This is just an example of the reduced leakage obtained by CNFFilter. The core of CNFFilter is a new filtering algorithm that performs set intersections with significantly less leakage compared to prior works. We implement CNFFilter and show that CNFFilter achieves faster search times and similar communication overhead compared to BIEX at the cost of a small increase in server storage.
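To make the query class concrete, here is the abstract's own example CNF query evaluated over a small plaintext multi-map. There is no encryption or leakage modelling here, just the Boolean semantics; the label names and values are made up.

```python
# CNF query over a plaintext multi-map: AND over clauses of OR over labels.
multimap = {
    "l1": {1, 2, 3},
    "l2": {3, 4},
    "l3": {2, 3, 5},
}

def cnf_query(mm, clauses):
    """clauses: list of lists of labels, interpreted as a CNF formula."""
    result = None
    for clause in clauses:
        union = set().union(*(mm.get(label, set()) for label in clause))
        result = union if result is None else result & union
    return result

# (l1 OR l2) AND l3, as in the example above
assert cnf_query(multimap, [["l1", "l2"], ["l3"]]) == {2, 3}
```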
2021
ASIACRYPT
Efficient Leakage-Resilient MACs without Idealized Assumptions 📺
The security proofs of leakage-resilient MACs based on symmetric building blocks currently rely on idealized assumptions that hardly translate into interpretable guidelines for the cryptographic engineers implementing these schemes. In this paper, we first present a leakage-resilient MAC that is both efficient and secure under standard and easily interpretable black box and physical assumptions. It only requires a collision resistant hash function and a single call per message authentication to a Tweakable Block Cipher (TBC) that is unpredictable with leakage. This construction leverages two design twists: large tweaks for the TBC and a verification process that checks the inverse TBC against a constant. It enjoys beyond birthday security bounds. We then discuss the cost of getting rid of these design twists. We show that security can be proven without them as well. Yet, a construction without large tweaks requires stronger (non idealized) assumptions and inevitably incurs performance overheads if specialized TBCs can be exploited, and a construction without twisted verification requires even stronger assumptions (still non idealized) and leads to more involved bounds. The combination of these results makes a case for our first pragmatic construction and suggests the design of TBCs with large tweaks and good properties for side-channel countermeasures as an interesting challenge.
2021
ASIACRYPT
Tight adaptive reprogramming in the QROM 📺
The random oracle model (ROM) enjoys widespread popularity, mostly because it tends to allow for tight and conceptually simple proofs where provable security in the standard model is elusive or costly. While being the adequate replacement of the ROM in the post-quantum security setting, the quantum-accessible random oracle model (QROM) has thus far failed to provide these advantages in many settings. In this work, we focus on adaptive reprogrammability, a feature of the ROM enabling tight and simple proofs in many settings. We show that the straightforward quantum-accessible generalization of adaptive reprogramming is feasible by proving a bound on the adversarial advantage in distinguishing whether a random oracle has been reprogrammed or not. We show that our bound is tight by providing a matching attack. We go on to demonstrate that our technique recovers the mentioned advantages of the ROM in three QROM applications: 1) We give a tighter proof of security of the message compression routine as used by XMSS. 2) We show that the standard ROM proof of chosen-message security for Fiat-Shamir signatures can be lifted to the QROM, straightforwardly, achieving a tighter reduction than previously known. 3) We give the first QROM proof of security against fault injection and nonce attacks for the hedged Fiat-Shamir transform.
2021
ASIACRYPT
Convexity of division property transitions: theory, algorithms and compact models 📺
Integral cryptanalysis is a powerful tool for attacking symmetric primitives, and division property is a state-of-the-art framework for finding integral distinguishers. This work describes new theoretical and practical insights into traditional bit-based division property. We focus on analyzing and exploiting monotonicity/convexity of division property and its relation to the graph indicator. In particular, our investigation leads to a new compact representation of propagation, which allows CNF/MILP modeling for larger S-Boxes, such as 16-bit Super-Sboxes of lightweight block ciphers or even 32-bit random S-boxes. This solves the challenge posed by Derbez and Fouque (ToSC 2020), who questioned the possibility of SAT/SMT/MILP modeling of 16-bit Super-Sboxes. As a proof-of-concept, we model the Super-Sboxes of the 8-round LED by CNF formulas, which was not feasible by any previous approach. Our analysis is further supported by an elegant algorithmic framework. We describe simple algorithms for computing division property of a set of $n$-bit vectors in time $O(n2^n)$, reducing such sets to minimal/maximal elements in time $O(n2^n)$, computing division property propagation table of an $n\times m$-bit S-box and its compact representation in time $O((n+m)2^{n+m})$. In addition, we develop an advanced algorithm tailored to "heavy" bijections, allowing to model, for example, a randomly generated 32-bit S-box.
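One way to see where an O(n·2^n) bound like the one stated above can come from: the bit-product parities that drive bit-based division property are superset XOR sums of the multiset's indicator, and such sums can be computed with a standard zeta-style transform. The sketch below is offered as a plausible illustration of that complexity, not as the paper's actual algorithm.

```python
# Parity of each monomial x^u over a multiset X of n-bit vectors, in O(n * 2^n).
import random

def division_parities(X, n):
    f = [0] * (1 << n)
    for x in X:                      # indicator of X modulo 2 (multiplicities cancel)
        f[x] ^= 1
    for i in range(n):               # XOR sum over supersets, bit by bit
        for u in range(1 << n):
            if not (u >> i) & 1:
                f[u] ^= f[u | (1 << i)]
    return f                         # f[u] = parity of prod_{i in u} x_i summed over X

# Naive cross-check on a random multiset
n = 6
X = [random.randrange(1 << n) for _ in range(40)]
fast = division_parities(X, n)
for u in range(1 << n):
    naive = 0
    for x in X:
        naive ^= 1 if (x & u) == u else 0
    assert naive == fast[u]
```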
2021
ASIACRYPT
Hierarchical Integrated Signature and Encryption 📺
In this work, we introduce the notion of hierarchical integrated signature and encryption (HISE), wherein a single public key is used for both signature and encryption, and one can derive a secret key used only for decryption from the signing key, which enables secure delegation of decryption capability. HISE enjoys the benefit of key reuse, and admits individual key escrow. We present two generic constructions of HISE. One is from (constrained) identity-based encryption. The other is from uniform one-way function, public-key encryption, and general-purpose public-coin zero-knowledge proof of knowledge. To further attain global key escrow, we take a little detour to revisit global escrow PKE, an object both of independent interest and with many applications. We formalize the syntax and security model of global escrow PKE, and provide two generic constructions. The first embodies a generic approach to compile any PKE into one with global escrow property. The second establishes a connection between three-party non-interactive key exchange and global escrow PKE. Combining the results developed above, we obtain HISE schemes that support both individual and global key escrow. We instantiate our generic constructions of (global escrow) HISE and implement all the resulting concrete schemes for 128-bit security. Our schemes have performance that is comparable to the best Cartesian product combined public-key scheme, and exhibit advantages in terms of richer functionality and public key reuse. As a byproduct, we obtain a new global escrow PKE scheme that outperforms the best prior work in speed by several orders of magnitude, which might be of independent interest.
2021
ASIACRYPT
Lattice Enumeration for Tower NFS: a 521-bit Discrete Logarithm Computation 📺
The Tower variant of the Number Field Sieve (TNFS) is known to be asymptotically the most efficient algorithm to solve the discrete logarithm problem in finite fields of medium characteristics, when the extension degree is composite. A major obstacle to an efficient implementation of TNFS is the collection of algebraic relations, as it happens in dimension greater than 2. This requires the construction of new sieving algorithms which remain efficient as the dimension grows. In this article, we overcome this difficulty by considering a lattice enumeration algorithm which we adapt to this specific context. We also consider a new sieving area, a high-dimensional sphere, whereas previous sieving algorithms for the classical NFS considered an orthotope. Our new sieving technique leads to a much smaller running time, despite the larger dimension of the search space, and even when considering a larger target, as demonstrated by a record computation we performed in a 521-bit finite field GF(p^6). The target finite field is of the same form as the finite fields used in recent zero-knowledge proofs in some blockchains. This is the first reported implementation of TNFS.
2021
ASIACRYPT
Promise $\Sigma$-protocol: How to Construct Efficient Threshold ECDSA from Encryptions Based on Class Groups 📺
Threshold Signatures allow $n$ parties to share the ability of issuing digital signatures so that any coalition of size at least $t+1$ can sign, whereas groups of $t$ or fewer players cannot. The currently known class-group-based threshold ECDSA constructions are either inefficient (requiring parallel repetition of the underlying zero-knowledge proof with small challenge space) or require rather non-standard assumptions. In this paper, under \emph{standard assumptions} we present efficient threshold ECDSA protocols from encryption schemes based on class groups \emph{without parallel repetition of the underlying zero-knowledge proof}, yielding a significant efficiency improvement in the key generation over previous constructions (even those based on non-standard assumptions). Along the way we introduce a new notion of \emph{promise} $\Sigma$-protocol that satisfies only a weaker soundness called \emph{promise extractability}. An accepting \emph{promise} $\Sigma$-proof for statements related to class-group-based encryptions does not establish the truth of the statement but provides security guarantees (promise extractability) that are sufficient for our applications. We also show how to simulate homomorphic operations on a (possibly invalid) class-group-based encryption whose correctness has been proven via our \emph{promise} $\Sigma$-protocol. We believe that these techniques are of independent interest and applicable to other scenarios where efficient zero-knowledge proofs for statements related to class groups are required.
2021
CRYPTO
The $t$-wise Independence of Substitution-Permutation Networks 📺
Block ciphers such as the Advanced Encryption Standard (Rijndael) are used extensively in practice, yet our understanding of their security continues to be highly incomplete. This paper promotes and continues a research program aimed at {\em proving} the security of block ciphers against important and well-studied classes of attacks. In particular, we initiate the study of (almost) $t$-wise independence of concrete block-cipher construction paradigms such as substitution-permutation networks and key-alternating ciphers. Sufficiently strong (almost) pairwise independence already suffices to resist (truncated) differential attacks and linear cryptanalysis, and hence this is a relevant and meaningful target. Our results are two-fold. Our first result concerns substitution-permutation networks (SPNs) that model ciphers such as AES. We prove the almost pairwise-independence of an SPN instantiated with concrete S-boxes together with an appropriate linear mixing layer, given sufficiently many rounds and independent sub-keys. Our proof relies on a {\em characterization} of S-box computation on input differences in terms of sampling output differences from certain subspaces, and a new randomness extraction lemma (which we prove with Fourier-analytic techniques) that establishes when such sampling yields uniformity. We use our techniques in particular to prove almost pairwise-independence for sufficiently many rounds of both the AES block cipher (which uses a variant of the patched inverse function $x \mapsto x^{-1}$ as the $S$-box) and the MiMC block cipher (which uses the cubing function $x \mapsto x^3$ as the $S$-box), assuming independent sub-keys. Secondly, we show that instantiating a key-alternating cipher (which can be thought of as a degenerate case of SPNs) with most permutations gives us (almost) $t$-wise independence in $t + o(t)$ rounds. In order to do this, we use the probabilistic method to develop two new lemmas, an {\em independence-amplification lemma} and a {\em distance amplification lemma}, that allow us to reason about the evolution of key-alternating ciphers.
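As a concrete reference for the two S-boxes named in this abstract, the sketch below (illustrative only; the field modulus is the usual AES polynomial and the prime is a toy choice, neither taken from the paper) evaluates the patched inverse map x -> x^{-1} over GF(2^8) with 0 -> 0, and the cubing map x -> x^3 over a prime field as in MiMC.

    # Sketch of the two S-boxes mentioned above (illustrative parameters only).

    def gf256_mul(a, b, mod=0x11B):
        """Carry-less multiplication in GF(2^8), reduced by the AES polynomial x^8+x^4+x^3+x+1."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= mod
            b >>= 1
        return r

    def patched_inverse(x):
        """x^{-1} in GF(2^8), patched so that 0 maps to 0 (x^254 = x^{-1} for x != 0)."""
        y, e = 1, x
        for bit in format(254, "b")[::-1]:   # square-and-multiply for x^254
            if bit == "1":
                y = gf256_mul(y, e)
            e = gf256_mul(e, e)
        return y if x else 0

    def cubing_sbox(x, p=101):
        """x -> x^3 over a toy prime field; cubing permutes the field since gcd(3, p-1) = 1."""
        return pow(x, 3, p)

    assert gf256_mul(0x53, patched_inverse(0x53)) == 1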
2021
CRYPTO
Time- and Space-Efficient Arguments from Groups of Unknown Order 📺
We construct public-coin time- and space-efficient zero-knowledge arguments for NP. For every time T and space S non-deterministic RAM computation, the prover runs in time T * polylog(T) and space S * polylog(T), and the verifier runs in time n * polylog(T), where n is the input length. Our protocol relies on hidden-order groups, which can be instantiated with a trusted setup from the hardness of factoring (products of safe primes), or without a trusted setup using class groups. The argument system can heuristically be made non-interactive using the Fiat-Shamir transform. Our proof builds on DARK (Bunz et al., Eurocrypt 2020), a recent succinct and efficiently verifiable polynomial commitment scheme. We show how to implement a variant of DARK in a time- and space-efficient way. Along the way we: 1. Identify a significant gap in the proof of security of DARK. 2. Give a non-trivial modification of the DARK scheme that overcomes the aforementioned gap. The modified version also relies on significantly weaker cryptographic assumptions than those in the original DARK scheme. Our proof utilizes ideas from the theory of integer lattices in a novel way. 3. Generalize Pietrzak's (ITCS 2019) proof of exponentiation (PoE) protocol to work with general groups of unknown order (without relying on any cryptographic assumption). In proving these results, we develop general-purpose techniques for working with (hidden order) groups, which may be of independent interest.
2021
CRYPTO
Three Halves Make a Whole? Beating the Half-Gates Lower Bound for Garbled Circuits 📺
We describe a garbling scheme for boolean circuits, in which XOR gates are free and AND gates require communication of $1.5\kappa + 5$ bits. This improves over the state-of-the-art ``half-gates'' scheme of Zahur, Rosulek, and Evans (Eurocrypt 2015), in which XOR gates are free and AND gates cost $2\kappa$ bits. The half-gates paper proved a lower bound of $2\kappa$ bits per AND gate, in a model that captured all known garbling techniques at the time. We bypass this lower bound with a novel technique that we call \textbf{slicing and dicing}, which involves slicing wire labels in half and operating separately on those halves. Ours is the first to bypass the lower bound while being fully compatible with free-XOR, making it a drop-in replacement for half-gates. Our construction is proven secure from a similar assumption to prior free-XOR garbling (circular correlation-robust hash), and uses only slightly more computation than half-gates.
2021
CRYPTO
On the Round Complexity of Black-Box Secure MPC 📺
We consider the question of minimizing the round complexity of secure multiparty computation (MPC) protocols that make a black-box use of simple cryptographic primitives in the setting of security against any number of malicious parties. In the plain model, previous black-box protocols required a high constant number of rounds (>15). This is far from the known lower bound of 4 rounds for protocols with black-box simulators. When allowing a random oblivious transfer (OT) correlation setup, 2-round protocols making a black-box use of a pseudorandom generator were previously known. However, such protocols were obtained via a round-collapsing ``protocol garbling'' technique that has poor concrete efficiency and makes a non-black-box use of an underlying malicious-secure protocol. We improve this state of affairs by presenting the following types of black-box protocols. a. 4-round ``pairwise MPC'' in the plain model. This round-optimal protocol enables each ordered pair of parties to compute a function of both inputs whose output is delivered to the second party. The protocol makes black-box use of any public-key encryption (PKE) with pseudorandom public keys. As a special case, we get a black-box round-optimal realization of secure (copies of) OT between every ordered pair of parties. b. 2-round MPC from OT correlations. This round-optimal protocol makes a black-box use of any general 2-round MPC protocol satisfying an augmented notion of semi-honest security. In the two-party case, this yields new kinds of 2-round black-box protocols. c. 5-round MPC in the plain model. This protocol makes a black-box use of PKE with pseudorandom public keys, and 2-round oblivious transfer with ``semi-malicious'' security. A key technical tool for the first result is a novel combination of split-state non-malleable codes (Dziembowski, Pietrzak, and Wichs, JACM '18) with standalone secure {\em two-party} protocols. The second result is based on a new round-optimized variant of the ``IPS compiler'' (Ishai, Prabhakaran and Sahai, Crypto '08). The third result is obtained via a specialized combination of these two techniques.
2021
CRYPTO
Fine-grained Secure Attribute-based Encryption 📺
Fine-grained cryptography constructs cryptosystems in a setting where an adversary's resources are a priori bounded and an honest party has fewer resources than the adversary. Currently, only simple forms of encryption, such as secret-key and public-key encryption, have been constructed in this setting. In this paper, we enrich the available tools in fine-grained cryptography by proposing the first fine-grained secure attribute-based encryption (ABE) scheme. Our construction is adaptively secure under the widely accepted worst-case assumption, $NC1 \subsetneq \oplus L/poly$, and it is presented in a generic manner using the notion of predicate encodings (Wee, TCC'14). By properly instantiating the underlying encoding, we can obtain different types of ABE schemes, including identity-based encryption. Previously, all of these schemes were unknown in fine-grained cryptography. Our main technical contribution is constructing ABE schemes without using pairings or the Diffie-Hellman assumption. Hence, our results show that, even if one-way functions do not exist, we still have ABE schemes with meaningful security. As a further application of our techniques, we construct an efficient (quasi-adaptive) non-interactive zero-knowledge (QA-NIZK) proof system.
2021
CRYPTO
Computational Hardness of Optimal Fair Computation: Beyond Minicrypt 📺
Secure multi-party computation allows mutually distrusting parties to compute securely over their private data. However, guaranteeing output delivery to honest parties when the adversarial parties may abort the protocol has been a challenging objective. As a representative task, this work considers two-party coin-tossing protocols with guaranteed output delivery, a.k.a., fair coin-tossing. In the information-theoretic plain model, as in two-party zero-sum games, one of the parties can force an output with certainty. In the commitment-hybrid, any $r$-message coin-tossing protocol is ${1/\sqrt r}$-unfair, i.e., the adversary can change the honest party's output distribution by $1/\sqrt r$ in the statistical distance. Moran, Naor, and Segev (TCC 2009) constructed the first $1/r$-unfair protocol in the oblivious transfer-hybrid. No further security improvement is possible because Cleve (STOC 1986) proved that $1/r$-unfairness is unavoidable. Therefore, Moran, Naor, and Segev's coin-tossing protocol is optimal. However, is oblivious transfer necessary for optimal fair coin-tossing? Maji and Wang (CRYPTO 2020) proved that any coin-tossing protocol using one-way functions in a black-box manner is at least $1/\sqrt r$-unfair. That is, optimal fair coin-tossing is impossible in Minicrypt. Our work focuses on tightly characterizing the computational hardness assumption necessary and sufficient for optimal fair coin-tossing within Cryptomania, outside Minicrypt. Haitner, Makriyannis, Nissim, Omri, Shaltiel, and Silbak (FOCS 2018 and TCC 2018) proved that better than $1/\sqrt r$-unfairness, for any constant $r$, implies the existence of a key-agreement protocol. We prove that any coin-tossing protocol using public-key encryption (or, multi-round key agreement protocols) in a black-box manner must be $1/\sqrt r$-unfair. Next, our work entirely characterizes the additional power of secure function evaluation functionalities for optimal fair coin-tossing. We augment the model with an idealized secure function evaluation of $f$, a.k.a., the $f$-hybrid. If $f$ is complete, that is, oblivious transfer is possible in the $f$-hybrid, then optimal fair coin-tossing is also possible in the $f$-hybrid. On the other hand, if $f$ is not complete, then a coin-tossing protocol using public-key encryption in a black-box manner in the $f$-hybrid is at least $1/\sqrt r$-unfair.
2021
CRYPTO
Constructing Locally Leakage-resilient Linear Secret-sharing Schemes 📺
Innovative side-channel attacks have repeatedly falsified the assumption that cryptographic implementations are opaque black-boxes. Therefore, it is essential to ensure cryptographic constructions' security even when information leaks via unforeseen avenues. One such fundamental cryptographic primitive is secret sharing, which underlies nearly all of threshold cryptography. Our understanding of the leakage-resilience of secret-sharing schemes is still in its preliminary stage. This work studies locally leakage-resilient linear secret-sharing schemes. An adversary can leak $m$ bits of arbitrary local leakage from each of the $n$ secret shares. However, in a locally leakage-resilient secret-sharing scheme, the leakage's joint distribution reveals no additional information about the secret. For every constant $m$, we prove that the Massey secret-sharing scheme corresponding to a random linear code of dimension $k$ (over sufficiently large prime fields) is locally leakage-resilient, where $k/n > 1/2$ is a constant. The previous best construction by Benhamouda, Degwekar, Ishai, Rabin (CRYPTO 2018) needed $k/n > 0.907$. A technical challenge arises because the number of all possible $m$-bit local leakage functions is exponentially larger than the number of random linear codes. Our technical innovation begins with identifying an appropriate pseudorandomness-inspired family of tests; passing them suffices to ensure leakage-resilience. We show that most linear codes pass all tests in this family. This Monte Carlo construction of locally leakage-resilient linear secret-sharing schemes has applications to leakage-resilient secure computation. Furthermore, we highlight a crucial bottleneck for all the analytical approaches in this line of work. Benhamouda et al. introduced an analytical proxy to study the leakage-resilience of secret-sharing schemes; if the proxy is small, then the scheme is leakage-resilient. However, we present a one-bit local leakage function demonstrating that the converse is false, motivating the need for new analytically well-behaved functions that capture leakage-resilience more accurately. Technically, the analysis involves probabilistic and combinatorial techniques and (discrete) Fourier analysis. The family of new ``tests'' capturing local leakage functions, we believe, is of independent and broader interest.
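To make the object of study concrete, here is a minimal sketch (toy parameters; not the paper's construction or analysis) of the Massey secret-sharing scheme associated with a linear code: the secret is placed in the first coordinate of a random codeword of an [n+1, k] code, and the remaining n coordinates are the shares.

    import random

    P = 2**31 - 1       # toy prime field; the result above needs sufficiently large prime fields
    n, k = 8, 5         # n shares, code dimension k with k/n > 1/2, as in the regime studied above

    random.seed(1)
    # random generator matrix of an [n+1, k] code; coordinate 0 will carry the secret
    G = [[random.randrange(P) for _ in range(n + 1)] for _ in range(k)]
    if G[0][0] == 0:    # negligible-probability corner case
        G[0][0] = 1

    def encode(m):
        """Encode a length-k message into a length-(n+1) codeword."""
        return [sum(m[i] * G[i][j] for i in range(k)) % P for j in range(n + 1)]

    def share(secret):
        """Massey sharing: coordinates 1..n of a random codeword whose coordinate 0 is the secret."""
        m = [0] + [random.randrange(P) for _ in range(k - 1)]
        rest = sum(m[i] * G[i][0] for i in range(1, k)) % P
        m[0] = (secret - rest) * pow(G[0][0], -1, P) % P
        return encode(m)[1:]

    shares = share(42)   # an m-bit local leakage function would act on each share separately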
2021
CRYPTO
No Time to Hash: On Super-Efficient Entropy Accumulation 📺
Real-world random number generators (RNGs) cannot afford to use (slow) cryptographic hashing every time they refresh their state R with a new entropic input X. Instead, they use ``super-efficient'' simple entropy-accumulation procedures, such as R <- rot_{alpha, n}(R) XOR X where rot_{alpha,n} rotates an n-bit state R by some fixed number alpha. For example, Microsoft's RNG uses alpha=5 for n=32 and alpha=19 for n=64. Where do these numbers come from? Are they good choices? Should rotation be replaced by a better permutation pi of the input bits? In this work we initiate a rigorous study of these pragmatic questions, by modeling the sequence of successive entropic inputs X_1,X_2, ... as independent (but otherwise adversarial) samples from some natural distribution family D. We show a simple but surprisingly powerful connection between entropy accumulation and understanding the Fourier spectrum of distributions in D. Our contribution is as follows. - We define 2-monotone distributions as a rich family D that includes relevant real-world distributions (Gaussian, exponential, etc.), but avoids trivial impossibility results. - For any alpha with gcd(alpha,n)=1, we show that rotation accumulates Omega(n) bits of entropy from n independent samples X_1,...,X_n from any (unknown) 2-monotone distribution with entropy k > 1. - However, we also show some choices of alpha perform much better than others for a given n. E.g., we show alpha=19 is one of the best choices for n=64; in contrast, alpha=5 is good, but generally worse than alpha=7, for n=32. - More generally, given a permutation pi and k > 1, we define a simple parameter, the covering number C_{pi,k}, and show that it characterizes the number of steps before the rule (R_1,...,R_n) <- (R_{pi(1)},..., R_{pi(n)}) XOR X accumulates nearly n bits of entropy from independent, 2-monotone samples of min-entropy k each. - We build a simple permutation pi^*, which achieves nearly optimal C_{pi^*,k} \approx n/k for all values of k simultaneously, and experimentally validate that it compares favorably with all rotations rot_{alpha,n}.
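For reference, a minimal sketch of the accumulation rule discussed above, R <- rot_{alpha,n}(R) XOR X, using the rotation constants quoted in the abstract; the sample inputs below are placeholders rather than real entropy sources.

    import os

    def rotl(r, alpha, n):
        """Rotate an n-bit word r left by alpha positions."""
        mask = (1 << n) - 1
        return (((r & mask) << alpha) | ((r & mask) >> (n - alpha))) & mask

    def accumulate(state, inputs, alpha=19, n=64):
        """Fold n-bit entropic inputs into the state via R <- rot_{alpha,n}(R) XOR X."""
        for x in inputs:
            state = rotl(state, alpha, n) ^ (x & ((1 << n) - 1))
        return state

    # placeholder inputs standing in for timing jitter, interrupts, etc.
    samples = [int.from_bytes(os.urandom(8), "little") for _ in range(16)]
    pool = accumulate(0, samples)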
2021
CRYPTO
White Box Traitor Tracing 📺
Traitor tracing aims to identify the source of leaked decryption keys. Since the ``traitor'' can try to hide their key within obfuscated code in order to evade tracing, the tracing algorithm should work for general, potentially obfuscated, decoder \emph{programs}. In the setting of such general decoder programs, prior work uses \emph{black box} tracing: the tracing algorithm ignores the implementation of the decoder, and instead traces just by making queries to the decoder and observing the outputs. We observe that, in some settings, such black box tracing leads to consistency and user privacy issues. On the other hand, these issues do not appear inherent to \emph{white box} tracing, where the tracing algorithm actually inspects the decoder implementation. We therefore develop new white box traitor tracing schemes providing consistency and/or privacy. Our schemes can be instantiated under various assumptions ranging from public key encryption to indistinguishability obfuscation, with different trade-offs. To the best of our knowledge, ours is the first work to consider white box tracing in the general decoder setting.
2021
CRYPTO
Oblivious Key-Value Stores and Amplification for Private Set Intersection 📺
Many recent private set intersection (PSI) protocols encode input sets as polynomials. We consider the more general notion of an oblivious key-value store (OKVS), which is a data structure that compactly represents a desired mapping from keys $k_i$ to values $v_i$. When the $v_i$ values are random, the OKVS data structure hides the $k_i$ values that were used to generate it. The simplest (and size-optimal) OKVS is a polynomial $p$ that is chosen using interpolation such that $p(k_i)=v_i$. We initiate the formal study of oblivious key-value stores, and show new constructions resulting in the fastest OKVS to date. Similarly to cuckoo hashing, current analysis techniques are insufficient for finding *concrete* parameters to guarantee a small failure probability for our OKVS constructions. Moreover, it would cost too much to run experiments to validate a small upper bound on the failure probability. We therefore show novel techniques to amplify an OKVS construction which has a failure probability $p$, to an OKVS with a similar overhead and failure probability $p^c$. Setting $p$ to be moderately small makes it possible to validate it by running a relatively small number, $O(1/p)$, of experiments. This validates a $p^c$ failure probability for the amplified OKVS. Finally, we describe how OKVS can significantly improve the state of the art of essentially all variants of PSI. This leads to the fastest two-party PSI protocols to date, for both the semi-honest and the malicious settings. Specifically, in networks with moderate bandwidth (e.g., 30 - 300 Mbps) our malicious two-party PSI protocol has 40% less communication and is 20-40% faster than the previous state of the art protocol, even though the latter only has heuristic confidence.
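As a concrete illustration of the simplest, size-optimal OKVS mentioned above, the sketch below (toy prime field, distinct keys assumed; not one of the paper's new constructions) encodes key-value pairs as the coefficients of the interpolated polynomial p with p(k_i) = v_i, and decodes by evaluating p.

    P = 2**61 - 1   # toy prime field

    def okvs_encode(pairs):
        """Lagrange-interpolate the polynomial of degree < len(pairs) with p(k_i) = v_i;
        the coefficient vector is the OKVS encoding. Assumes distinct keys."""
        ks = [k % P for k, _ in pairs]
        coeffs = [0] * len(pairs)
        for i, (_, v) in enumerate(pairs):
            # basis polynomial L_i(x) = prod_{j != i} (x - k_j) / (k_i - k_j)
            basis, denom = [1], 1
            for j, kj in enumerate(ks):
                if j == i:
                    continue
                # multiply basis by (x - k_j)
                basis = ([(-kj * basis[0]) % P]
                         + [(basis[t - 1] - kj * basis[t]) % P for t in range(1, len(basis))]
                         + [basis[-1]])
                denom = denom * (ks[i] - kj) % P
            scale = v % P * pow(denom, -1, P) % P
            for t, b in enumerate(basis):
                coeffs[t] = (coeffs[t] + scale * b) % P
        return coeffs

    def okvs_decode(coeffs, key):
        """Evaluate the stored polynomial at `key` (Horner's rule)."""
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * key + c) % P
        return acc

    table = okvs_encode([(12345, 7), (67890, 11), (13579, 13)])
    assert okvs_decode(table, 67890) == 11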
2021
CRYPTO
MPC-Friendly Symmetric Cryptography from Alternating Moduli: Candidates, Protocols, and Applications 📺
We study new candidates for symmetric cryptographic primitives that leverage alternation between linear functions over $\mathbb{Z}_2$ and $\mathbb{Z}_3$ to support fast protocols for secure multiparty computation (MPC). This continues the study of weak pseudorandom functions of this kind initiated by Boneh et al. (TCC 2018) and Cheon et al. (PKC 2021). We make the following contributions. (Candidates). We propose new designs of symmetric primitives based on alternating moduli. These include candidate one-way functions, pseudorandom generators, and weak pseudorandom functions. We propose concrete parameters based on cryptanalysis. (Protocols). We provide a unified approach for securely evaluating modulus-alternating primitives in different MPC models. For the original candidate of Boneh et al., our protocols obtain at least 2x improvement in all performance measures. We report efficiency benchmarks of an optimized implementation. (Applications). We showcase the usefulness of our candidates for a variety of applications. This includes short ``Picnic-style'' signature schemes, as well as protocols for oblivious pseudorandom functions, hierarchical key derivation, and distributed key generation for function secret sharing.
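The following rough sketch (toy sizes; the matrices, parameters, and output map are illustrative rather than the paper's candidates) shows the flavor of the alternating-moduli idea: a secret linear map over Z_2 applied to the input, followed by a fixed linear map over Z_3 applied to the resulting bits.

    import secrets

    n, m = 256, 256   # toy input length and number of Z_2-linear rows

    # secret key: a random m x n binary matrix, stored as m row bitmasks
    K = [secrets.randbits(n) for _ in range(m)]

    def wprf(x_bits):
        """x_bits: n-bit integer input. Returns an element of Z_3."""
        # linear layer over Z_2: y_i = <K_i, x> mod 2
        y = [bin(K[i] & x_bits).count("1") & 1 for i in range(m)]
        # linear layer over Z_3: sum the resulting bits modulo 3
        return sum(y) % 3

    out = wprf(secrets.randbits(n))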
2021
CRYPTO
Counterexamples to New Circular Security Assumptions Underlying iO 📺
We study several strengthenings of classical circular security assumptions which were recently introduced in four new lattice-based constructions of indistinguishability obfuscation: Brakerski-D\"ottling-Garg-Malavolta (Eurocrypt 2020), Gay-Pass (STOC 2021), Brakerski-D\"ottling-Garg-Malavolta (Eprint 2020) and Wee-Wichs (Eprint 2020). We provide explicit counterexamples to the {\em $2$-circular shielded randomness leakage} assumption w.r.t. the Gentry-Sahai-Waters fully homomorphic encryption scheme proposed by Gay-Pass, and the {\em homomorphic pseudorandom LWE samples} conjecture proposed by Wee-Wichs. Our work suggests a separation between classical circular security of the kind underlying un-levelled fully-homomorphic encryption and the strengthened versions underlying recent iO constructions, showing that they are not (yet) on the same footing. Our counterexamples exploit the flexibility to choose specific implementations of circuits, which is explicitly allowed in the Gay-Pass assumption and unspecified in the Wee-Wichs assumption. Their indistinguishability obfuscation schemes are still unbroken. Our work shows that the assumptions, at least, need refinement. In particular, generic leakage-resilient circular security assumptions are delicate, and their security is sensitive to the specific structure of the leakages involved.
2021
CRYPTO
Halo Infinite: Proof-Carrying Data from Additive Polynomial Commitments 📺
Polynomial commitment schemes (PCS) have recently been in the spotlight for their key role in building SNARKs. A PCS provides the ability to commit to a polynomial over a finite field and prove its evaluation at points. A *succinct* PCS has commitment size and evaluation proof size sublinear in the degree of the polynomial. An *efficient* PCS has sublinear proof verification. Any efficient and succinct PCS can be used to construct a SNARK with similar security and efficiency characteristics (in the random oracle model). Proof-carrying data (PCD) enables a set of parties to carry out an indefinitely long distributed computation where every step along the way is accompanied by a proof of correctness. It generalizes *incrementally verifiable computation* and can even be used to construct SNARKs. Until recently, however, the only known method for constructing PCD required expensive SNARK recursion. A system called *Halo* first demonstrated a new methodology for building PCD without SNARKs, exploiting an aggregation property of the *Bulletproofs* inner-product argument. The construction was *heuristic* because it makes non-black-box use of a concrete instantiation of the Fiat-Shamir transform. We expand upon this methodology to show that PCD can be (heuristically) built from any homomorphic polynomial commitment scheme (PCS), even if the PCS evaluation proofs are neither succinct nor efficient. In fact, the Halo methodology extends to any PCS that has an even more general property, namely the ability to aggregate linear combinations of commitments into a new succinct commitment that can later be opened to this linear combination. Our results thus imply new constructions of SNARKs and PCD that were not previously described in the literature and serve as a blueprint for future constructions as well.
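A minimal sketch (toy group and sizes, not the paper's instantiation) of the additive homomorphism that this kind of aggregation exploits: a Pedersen-style commitment to a polynomial's coefficient vector for which com(f) * com(g)^c = com(f + c*g), so a linear combination of commitments can later be opened to that same linear combination of polynomials.

    import secrets

    p = 1019                  # toy safe prime, p = 2q + 1
    q = 509                   # prime order of the quadratic-residue subgroup
    d = 4                     # commit to polynomials of degree < d
    # commitment key: random subgroup elements (squares mod p)
    gens = [pow(secrets.randbelow(p - 2) + 2, 2, p) for _ in range(d)]

    def commit(coeffs):
        """Pedersen-style commitment to a coefficient vector."""
        c = 1
        for g_i, f_i in zip(gens, coeffs):
            c = c * pow(g_i, f_i % q, p) % p
        return c

    f, g, c_scalar = [3, 1, 4, 1], [2, 7, 1, 8], 5
    lhs = commit(f) * pow(commit(g), c_scalar, p) % p
    rhs = commit([(a + c_scalar * b) % q for a, b in zip(f, g)])
    assert lhs == rhs    # commitments aggregate linearly, matching the combined polynomial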
2021
CRYPTO
Large Message Homomorphic Secret Sharing from DCR and Applications 📺
We present the first homomorphic secret sharing (HSS) construction that simultaneously (1) has negligible correctness error, (2) supports integers from an exponentially large range, and (3) relies on an assumption not known to imply FHE --- specifically, the Decisional Composite Residuosity (DCR) assumption. This resolves an open question posed by Boyle, Gilboa, and Ishai (Crypto 2016). Homomorphic secret sharing is analogous to fully-homomorphic encryption, except the ciphertexts are shared across two non-colluding evaluators. Previous constructions of HSS either had non-negligible correctness error and polynomial-size plaintext space or were based on the stronger LWE assumption. We also present two applications of our technique: a multi-server ORAM with constant bandwidth overhead, and a rate-$1$ trapdoor hash function with negligible error rate.
2021
CRYPTO
Towards faster polynomial-time lattice reduction 📺
The LLL algorithm is a polynomial-time algorithm for reducing d-dimensional lattices with an exponential approximation factor. Currently, the most efficient variant of LLL, by Neumaier and Stehl\'e, has a theoretical running time of $d^4\cdot B^{1+o(1)}$ where $B$ is the bitlength of the entries, but has never been implemented. This work introduces new asymptotically fast, parallel, yet heuristic, reduction algorithms with their optimized implementations. Our algorithms are recursive and fully exploit fast block matrix multiplication. We experimentally demonstrate that by carefully controlling the floating-point precision during the recursion steps, we can reduce Euclidean lattices of rank d in time $\tilde{O}(d^\omega\cdot C)$, i.e., almost a constant number of matrix multiplications, where $\omega$ is the exponent of matrix multiplication and C is the log of the condition number of the matrix. For cryptographic applications, C is close to B, while it can be up to d times larger in the worst case. This improves the running time of the state-of-the-art implementation fplll by a multiplicative factor of order $d^2\cdot B$. Further, we show that we can reduce structured lattices, the so-called knapsack lattices, in time $\tilde{O}(d^{\omega-1}\cdot C)$ with a progressive reduction strategy. Besides allowing the reduction of huge lattices, our implementation can break several instances of Fully Homomorphic Encryption schemes based on large integers in dimension 2,230 with 4 million bits.
2021
CRYPTO
Composition with Knowledge Assumptions 📺
Zero-knowledge succinct non-interactive arguments (zk-SNARKs) rely on knowledge assumptions for their security. Meanwhile, as the complexity and scale of cryptographic systems continue to grow, the composition of secure protocols is of vital importance. The current gold standards of composable security, the Universal Composability and Constructive Cryptography frameworks, cannot capture knowledge assumptions, as their core proofs of composition prohibit white-box extraction. In this paper, we present a formal model allowing the composition of knowledge assumptions. Despite showing impossibility for the general case, we demonstrate the model's usefulness when limiting knowledge assumptions to a few instances of protocols at a time. We finish by providing the first instance of a simultaneously succinct and composable zk-SNARK, by using existing results within our framework.
2021
CRYPTO
Low-Complexity Weak Pseudorandom Functions in AC0[MOD2] 📺
A *weak pseudorandom function* (WPRF) is a keyed function $f_k:\{0,1\}^n\to\{0,1\}$ such that, for a random key $k$, a collection of samples $(x, f_k(x))$, for {\em uniformly random} inputs $x$, cannot be efficiently distinguished from totally random input-output pairs $(x,y)$. We study WPRFs in AC0[MOD2], the class of functions computable by AC0 circuits with parity gates, making the following contributions. - *Between Lapland and Cryptomania.* We show that WPRFs in AC0[MOD2] imply a variant of the Learning Parity with Noise (LPN) assumption. This gives an unconditional version of an earlier conditional result of Akavia et al. (ITCS 2014). We further show that WPRFs in a subclass of AC0[MOD2] that includes a recent WPRF candidate by Boyle et al. (FOCS 2020) imply, under a seemingly weak additional conjecture, public-key encryption. - *WPRF by sparse polynomials.* We propose the first WPRF candidate that can be computed by sparse multivariate polynomials over $\mathbb{F}_2$. We prove that it has subexponential security against linear and algebraic attacks. - *WPRF in AC0 ◦ MOD2.* We study the existence of WPRFs computed by AC0 circuits \emph{over} parity gates. We propose a modified version of a previous WPRF candidate of Akavia et al., and prove that it resists the algebraic attacks that were used by Bogdanov and Rosen (ECCC 2017) to break the original candidate in quasipolynomial time. We give evidence against the possibility of using {\em public} parity gates and relate this question to other conjectures.
2021
CRYPTO
Asymptotically-Good Arithmetic Secret Sharing over Z/p^{\ell}Z with Strong Multiplication and Its Applications to Efficient MPC 📺
The current paper studies information-theoretically secure multiparty computation (MPC) over rings $\mathbb{Z}/p^{\ell}\mathbb{Z}$. This is a follow-up to recent work on MPC over rings $\mathbb{Z}/p^{\ell}\mathbb{Z}$. In a TCC 2019 work, a protocol based on Shamir secret sharing over $\mathbb{Z}/p^{\ell}\mathbb{Z}$ was presented. As in the field case, its limitation is that the share size has to grow as the number of players increases. Several MPC protocols were then developed in an Asiacrypt 2020 work to overcome this limitation. However, the MPC protocols of that work suffer from several drawbacks: (i) the offline multiplication gate has super-linear communication complexity; (ii) the share size is doubled for the most important case, namely over $\mathbb{Z}/2^{\ell}\mathbb{Z}$, due to infeasible lifting of self-orthogonal codes from fields to rings; (iii) most importantly, the BGW model could not be applied via the secret sharing given in that work due to the lack of strong multiplication. Our contribution in this paper is threefold. Firstly, we overcome all the drawbacks mentioned above. Secondly, we establish an arithmetic secret sharing with strong multiplication, which is the most important primitive in the BGW model. Thirdly, we lift Reverse Multiplication Friendly Embeddings (RMFE) from fields to rings, with the same (linear) complexity. Note that RMFE has become a standard technique for amortized communication complexity in MPC, as in CRYPTO 2018 and CRYPTO 2019 works. To obtain our theoretical results, we use the existence of lifts of curves over rings, then use the known results stating that Riemann-Roch spaces are free modules. To make our scheme practical, we start from good algebraic geometry codes over finite fields obtained from existing computational techniques. Then we present, and implement, an efficient algorithm to Hensel-lift the generating matrix of the code, such that the multiplicative conditions are preserved over rings. Existence of this specific lift is guaranteed by the previous theory. On the other hand, a random lifting of codes from fields to Galois rings does not preserve multiplicativity in general. (Notice that our indirect method is motivated by the fact that following the theory directly would require ``preprocessing'' the curve into a form with ``smooth'' equations, in particular with many variables, before lifting it. But computing on these objects over rings is out of the scope of existing research.) Finally we provide efficient elementary methods for sharing and (robust) reconstruction of secrets over rings. As a result, arithmetic secret sharing over $\mathbb{Z}/p^{\ell}\mathbb{Z}$ with strong multiplication can be efficiently constructed and practically applied.
2021
CRYPTO
Separating Adaptive Streaming from Oblivious Streaming using the Bounded Storage Model 📺
Streaming algorithms are algorithms for processing large data streams, using only a limited amount of memory. Classical streaming algorithms typically work under the assumption that the input stream is chosen independently from the internal state of the algorithm. Algorithms that utilize this assumption are called oblivious algorithms. Recently, there has been growing interest in studying streaming algorithms that maintain utility also when the input stream is chosen by an adaptive adversary, possibly as a function of previous estimates given by the streaming algorithm. Such streaming algorithms are said to be adversarially-robust. By combining techniques from learning theory with cryptographic tools from the bounded storage model, we separate the oblivious streaming model from the adversarially-robust streaming model. Specifically, we present a streaming problem for which every adversarially-robust streaming algorithm must use polynomial space, while there exists a classical (oblivious) streaming algorithm that uses only polylogarithmic space. This is the first general separation between the capabilities of these two models, resolving one of the central open questions in adversarially robust streaming.
2021
CRYPTO
Sublinear GMW-Style Compiler for MPC with Preprocessing 📺
We consider the efficiency of protocols for secure multiparty computation (MPC) with a dishonest majority. A popular approach for the design of such protocols is to employ {\em preprocessing}. Before the inputs are known, the parties generate correlated secret randomness, which is consumed by a fast and ``information-theoretic'' online protocol. A powerful technique for securing such protocols against malicious parties uses {\em homomorphic MACs} to authenticate the values produced by the online protocol. Compared to a baseline protocol, which is only secure against semi-honest parties, this involves a significant increase in the size of the correlated randomness, by a factor of up to a statistical security parameter. Different approaches for partially mitigating this extra storage cost come at the expense of increasing the online communication. In this work we propose a new technique for protecting MPC with preprocessing against malicious parties. We show that for circuit evaluation protocols that satisfy mild security and structural requirements, that are met by almost all standard protocols with semi-honest security, the extra {\em additive} storage and online communication costs are both {\em logarithmic} in the circuit size. This applies to Boolean circuits and to arithmetic circuits over fields or rings, and to both information-theoretic and computationally secure protocols. Our protocol can be viewed as a sublinear information-theoretic variant of the celebrated ``GMW compiler'' that applies to MPC with preprocessing. Our compiler makes a novel use of the techniques of Boneh et al. (Crypto 2019) for sublinear distributed zero knowledge, which were previously only used in the setting of {\em honest-majority} MPC.
2021
CRYPTO
You Only Speak Once: Secure MPC with Stateless Ephemeral Roles 📺
The inherent difficulty of maintaining stateful environments over long periods of time gave rise to the paradigm of serverless computing, where mostly-stateless components are deployed on demand to handle computation tasks, and are torn down once their task is complete. Serverless architecture could offer the added benefit of improved resistance to targeted denial-of-service attacks, by hiding from the attacker the physical machines involved in the protocol until after they complete their work. Realizing such protection, however, requires that the protocol only uses stateless parties, where each party sends only one message and never needs to speak again. Perhaps the most famous example of this style of protocols is the Nakamoto consensus protocol used in Bitcoin: A peer can win the right to produce the next block by running a local lottery (mining), all while staying covert. Once the right has been won, it is executed by sending a single message. After that, the physical entity never needs to send more messages. We refer to this as the You-Only-Speak-Once (YOSO) property, and initiate the formal study of it within a new model that we call the YOSO model. Our model is centered around the notion of roles, which are stateless parties that can only send a single message. Crucially, our modelling separates the protocol design, which only uses roles, from the role-assignment mechanism, which assigns roles to actual physical entities. This separation enables studying these two aspects separately, and our YOSO model in this work only deals with the protocol-design aspect. We describe several techniques for achieving YOSO MPC, both computational and information-theoretic. Our protocols are synchronous and provide guaranteed output delivery (which is important for application domains such as blockchains), assuming an honest majority of roles in every time step. We describe a practically efficient computationally-secure protocol, as well as a proof-of-concept information-theoretically secure protocol.
2021
CRYPTO
Two-Round Trip Schnorr Multi-Signatures via Delinearized Witnesses 📺
We construct a two-round Schnorr-based multi-signature scheme (DWMS) by delinearizing two pre-witnesses supplied by each signer. DWMS is secure in the algebraic group model (AGM) and the random oracle model (ROM) under the hardness of the one-more discrete logarithm problem and of the 2-entwined sum problem that we introduce in this paper. Our new m-entwined sum problem tweaks the k-sum problem in a scalar field using the associated group. We prove the hardness of our new problem in the AGM assuming the hardness of the discrete logarithm problem in the associated group. We believe that our new problem simplifies the security proofs of multi-signature schemes that use the delinearization of witnesses.
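A rough sketch of the delinearization idea (toy group; the hash inputs and coefficient derivation are illustrative placeholders, not the DWMS specification): each signer supplies two pre-witnesses g^{r_1}, g^{r_2}, and the aggregated nonce mixes them with hash-derived coefficients before the usual Schnorr challenge is computed.

    import hashlib, secrets

    p = 2 * 509 + 1        # toy safe prime
    q, g = 509, 16         # subgroup order and a generator of the order-q subgroup (16 = 4^2 mod p)

    def H(*args):
        data = b"|".join(str(a).encode() for a in args)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    # each of the two signers samples two pre-witnesses
    pre = []
    for _ in range(2):
        r1, r2 = secrets.randbelow(q), secrets.randbelow(q)
        pre.append(((r1, r2), (pow(g, r1, p), pow(g, r2, p))))

    msg = "hello"
    all_A = [A for _, A in pre]
    # delinearization: per-signer coefficients derived from the message and all pre-witnesses
    R = 1
    for i, (_, (A1, A2)) in enumerate(pre):
        a1, a2 = H(msg, all_A, i, 1), H(msg, all_A, i, 2)
        R = R * pow(A1, a1, p) * pow(A2, a2, p) % p
    # R is the aggregated nonce that then enters the usual Schnorr challenge computation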
2021
CRYPTO
Compact Ring Signatures from Learning With Errors 📺
Ring signatures allow a user to sign a message on behalf of a ``ring'' of signers, while hiding the true identity of the signer. As the degree of anonymity guaranteed by a ring signature is directly proportional to the size of the ring, an important goal in cryptography is to study constructions that minimize the size of the signature as a function of the number of ring members. In this work, we present the first compact ring signature scheme (i.e., where the size of the signature grows logarithmically with the size of the ring) from the (plain) learning with errors (LWE) problem. The construction is in the standard model and it does not rely on a trusted setup or on the random oracle heuristic. In contrast with the prior work of Backes et al. (EUROCRYPT 2019), our scheme does not rely on bilinear pairings, which allows us to show that the scheme is post-quantum secure assuming the quantum hardness of LWE. At the heart of our scheme is a new construction of compact and statistically witness-indistinguishable ZAP arguments for NP $\cap$ coNP, that we show to be sound based on the plain LWE assumption. Prior to our work, statistical ZAPs (for all of NP) were known to exist only assuming \emph{sub-exponential} LWE. We believe that this scheme might find further applications in the future.
2021
CRYPTO
Threshold Schnorr with Stateless Deterministic Signing from Standard Assumptions 📺
Schnorr's signature scheme permits an elegant threshold signing protocol due to its linear signing equation. However each new signature consumes fresh randomness, which can be a major attack vector in practice. Sources of randomness in deployments are frequently either unreliable, or require state continuity, i.e. reliable fresh state resilient to rollbacks. State continuity is a notoriously difficult guarantee to achieve in practice, due to system crashes caused by software errors, malicious actors, or power supply interruptions (Parno et al., S&P '11). This is a non-issue for Schnorr variants such as EdDSA, which is specified to derive nonces deterministically as a function of the message and the secret key. However, it is challenging to translate these benefits to the threshold setting, specifically to construct a threshold Schnorr scheme where signing neither requires parties to consume fresh randomness nor update long-term secret state. In this work, we construct a dishonest majority threshold Schnorr protocol that enables such stateless deterministic nonce derivation using standardized block ciphers. Our core technical ingredients are new tools for the zero-knowledge from garbled circuits (ZKGC) paradigm to aid in verifying correct nonce derivation: - A mechanism based on UC Commitments that allows a prover to commit once to a witness, and prove an unbounded number of statements online with only cheap symmetric key operations. - A garbling gadget to translate intermediate garbled circuit wire labels to arithmetic encodings. A proof per our scheme requires only a small constant number of exponentiations.
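A minimal illustration (single-signer, not the paper's threshold protocol) of the stateless deterministic nonce derivation discussed above, in the style of EdDSA: the nonce is a hash of a secret nonce-key and the message, so no fresh randomness or updated long-term state is needed per signature. The helper name and key format are illustrative.

    import hashlib

    q = 2**252 + 27742317777372353535851937790883648493   # Ed25519 group order

    def derive_nonce(nonce_key: bytes, message: bytes) -> int:
        """Deterministic nonce r = H(nonce_key || message) mod q."""
        digest = hashlib.sha512(nonce_key + message).digest()
        return int.from_bytes(digest, "little") % q

    r1 = derive_nonce(b"\x01" * 32, b"message")
    r2 = derive_nonce(b"\x01" * 32, b"message")
    assert r1 == r2    # the same inputs always yield the same nonce, without any stored state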
2021
CRYPTO
Cryptanalysis of Full LowMC and LowMC-M with Algebraic Techniques 📺
In this paper, we revisit the difference enumeration techniques for LowMC and develop new algebraic techniques to achieve efficient key-recovery attacks with negligible memory complexity. Benefiting from our technique to reduce the memory complexity, we can significantly improve the attacks on LowMC when the block size is much larger than the key size, and even break LowMC instances with such parameters. On the other hand, with our new key-recovery technique, we can significantly improve the time to retrieve the full key if given only a single pair of input and output messages together with the difference trail that they take, which was stated as an interesting question by Rechberger et al. in ToSC 2018. Combining both techniques, with only 2 chosen plaintexts, we can break 4 rounds of LowMC adopting a full S-box layer with block sizes of 129, 192 and 255 bits, respectively, which are the 3 recommended parameters for Picnic3, an alternative third-round candidate in NIST's Post-Quantum Cryptography competition. We have to emphasize that our attacks do not indicate that Picnic3 is broken, as the Picnic use-case is very different and an attacker cannot even freely choose 2 plaintexts to encrypt for a concrete LowMC instance. However, such parameters are deemed secure in the latest LowMC. Moreover, many more rounds of seven instances of the backdoor cipher LowMC-M as proposed by Peyrin and Wang in CRYPTO 2020 can be broken without finding the backdoor, by making full use of the allowed $2^{64}$ data. The above-mentioned attacks are all achieved with negligible memory.
2021
CRYPTO
Mac'n'Cheese: Zero-Knowledge Proofs for Boolean and Arithmetic Circuits with Nested Disjunctions 📺
Zero knowledge proofs are an important building block in many cryptographic applications. Unfortunately, when the proof statements become very large, existing zero-knowledge proof systems easily reach their limits: either the computational overhead, the memory footprint, or the required bandwidth exceed levels that would be tolerable in practice. We present an interactive zero-knowledge proof system for boolean and arithmetic circuits, called Mac'n'Cheese, with a focus on supporting large circuits. Our work follows the commit-and-prove paradigm instantiated using information-theoretic MACs based on vector oblivious linear evaluation to achieve high efficiency. We additionally show how to optimize disjunctions, with a general OR transformation for proving the disjunction of $m$ statements that has communication complexity proportional to the longest statement (plus an additive term logarithmic in $m$). These disjunctions can further be \emph{nested}, allowing efficient proofs about complex statements with many levels of disjunctions. We also show how to make Mac'n'Cheese non-interactive (after a preprocessing phase) using the Fiat-Shamir transform, and with only a small degradation in soundness. We have implemented the online phase of Mac'n'Cheese and achieve a runtime of 144 ns per AND gate and 1.5 $\mu$s per multiplication gate in $\mathbb{F}_{2^{61}-1}$ when run over a network with a 95 ms latency and a bandwidth of 31.5 Mbps. In addition, we show that the disjunction optimization improves communication as expected: when proving a boolean circuit with eight branches and each branch containing roughly 1 billion multiplications, Mac'n'Cheese requires only 75 more bytes to communicate than in the single branch case.
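A minimal sketch (toy field; not the Mac'n'Cheese protocol itself) of the information-theoretic MACs obtained from vector oblivious linear evaluation that the commit-and-prove paradigm above relies on: the verifier holds a global key Delta and per-value local keys, the prover holds each value x together with the tag m = k + Delta*x, and linear relations over committed values can be checked by comparing the corresponding linear combinations of tags and keys.

    import secrets

    P = 2**61 - 1                    # toy prime field, matching F_{2^61 - 1} used above

    Delta = secrets.randbelow(P)     # verifier's global MAC key

    def commit(x):
        """VOLE-style commitment: prover gets (x, tag m = k + Delta*x), verifier keeps k."""
        k = secrets.randbelow(P)
        m = (k + Delta * x) % P
        return (x, m), k

    # commit to x and y, then check the linear relation z = 3*x + y without revealing x, y
    (x, mx), kx = commit(5)
    (y, my), ky = commit(7)
    (z, mz), kz = commit((3 * 5 + 7) % P)

    prover_tag   = (3 * mx + my - mz) % P
    verifier_key = (3 * kx + ky - kz) % P
    # for a correct relation, tag - key = Delta * (3x + y - z) = 0
    assert prover_tag == verifier_key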
2021
CRYPTO
Thinking Outside the Superbox 📺
Designing a block cipher or cryptographic permutation can be approached in many different ways. One such approach, popularized by AES, consists in grouping the bits along the S-box boundaries, e.g., in bytes, and in consistently processing them in these groups. This aligned approach leads to hierarchical structures like superboxes that make it possible to reason about the differential and linear propagation properties using combinatorial arguments. In contrast, an unaligned approach avoids any such grouping in the design of transformations. However, without hierarchical structure, sophisticated computer programs are required to investigate the differential and linear propagation properties of the primitive. In this paper, we formalize this notion of alignment and study four primitives that are exponents of different design strategies. We propose a way to analyze the interactions between the linear and the nonlinear layers w.r.t. the differential and linear propagation, and we use it to systematically compare the four primitives using non-trivial computer experiments. We show that alignment naturally leads to different forms of clustering, e.g., of active bits in boxes, of two-round trails in activity patterns, and of trails in differentials and linear approximations.
2021
CRYPTO
Silver: Silent VOLE and Oblivious Transfer from Hardness of Decoding Structured LDPC Codes 📺
We put forth new protocols for oblivious transfer extension and vector OLE, called \emph{Silver}, for SILent Vole and oblivious transfER. Silver offers extremely high performance: generating 10 million random OTs on one core of a standard laptop requires only 300ms of computation and 122KB of communication. This represents 37% less computation and ~1300x less communication than the standard IKNP protocol, as well as ~4x less computation and ~4x less communication than the recent protocol of Yang et al. (CCS 2020). Silver is \emph{silent}: after a one-time cheap interaction, two parties can store small seeds, from which they can later \emph{locally} generate a large number of OTs \emph{while remaining offline}. Neither IKNP nor Yang et al. enjoys this feature; compared to the best known silent OT extension protocol of Boyle et al. (CCS 2019), upon which we build up, Silver has 19x less computation, and the same communication. Due to its attractive efficiency features, Silver yields major efficiency improvements in numerous MPC protocols. Our approach is a radical departure from the standard paradigm for building MPC protocols, in that we do \emph{not} attempt to base our constructions on a well-studied assumption. Rather, we follow an approach closer in spirit to the standard paradigm in the design of symmetric primitives: we identify a set of fundamental structural properties that allow us to withstand all known attacks, and put forth a candidate design, guided by our analysis. We also rely on extensive experimentation to analyze our candidate and experimentally validate its properties. In essence, our approach boils down to constructing new families of linear codes with (plausibly) high minimum distance and extremely low encoding time. While further analysis is of course warranted to confidently assess the security of Silver, we hope and believe that initiating this approach to the design of MPC primitives will pave the way to new secure primitives with extremely attractive efficiency features.
2021
CRYPTO
A World of SNARKS 📺
Invited talk
2021
CRYPTO
Subtractive Sets over Cyclotomic Rings: Limits of Schnorr-like Arguments over Lattices 📺
We study when (dual) Vandermonde systems of the form `V_T ⋅ z = s⋅w` admit a solution `z` over a ring `R`, where `V_T` is the Vandermonde matrix defined by a set `T` and where the “slack” `s` is a measure of the quality of solutions. To this end, we propose the notion of `(s,t)`-subtractive sets over a ring `R`, with the property that if `S` is `(s,t)`-subtractive then the above (dual) Vandermonde systems defined by any `t`-subset `T ⊆ S` are solvable over `R`. The challenge is then to find large sets `S` while minimising (the norm of) `s` when given a ring `R`. By constructing families of `(s,t)`-subtractive sets `S` of size `n = poly(λ)` over cyclotomic rings `R = ZZ[ζ_{p^ℓ}]` for prime `p`, we construct Schnorr-like lattice-based proofs of knowledge for the SIS relation `A ⋅ x = s ⋅ y mod q` with `O(1/n)` knowledge error, and `s=1` in case `p = poly(λ)`. Our technique slots naturally into the lattice Bulletproof framework from Crypto’20, producing lattice-based succinct arguments for NP with better parameters. We then give matching impossibility results constraining `n` relative to `s`, which suggest that our Bulletproof-compatible protocols are optimal unless fundamentally new techniques are discovered. Noting that the knowledge error of lattice Bulletproofs is `Ω(log k/n)` for witnesses in `R^k` and subtractive set size `n`, our result represents a barrier to practically efficient lattice-based succinct arguments in the Bulletproof framework. Beyond these main results, the concept of `(s,t)`-subtractive sets bridges group-based threshold cryptography to the lattice settings, which we demonstrate by relating it to distributed pseudorandom functions.
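For reference, with `T = {t_1, ..., t_t}` the Vandermonde systems referred to above have the following shape (the orientation and indexing convention here is the standard one, assumed for illustration rather than quoted from the paper):

    V_T \cdot z = s \cdot w,
    \qquad
    V_T =
    \begin{pmatrix}
    1 & t_1 & t_1^2 & \cdots & t_1^{t-1} \\
    1 & t_2 & t_2^2 & \cdots & t_2^{t-1} \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    1 & t_t & t_t^2 & \cdots & t_t^{t-1}
    \end{pmatrix},

and the subtractive-set property guarantees that such a system is solvable for `z` over `R` for every `t`-subset `T ⊆ S`, with the slack `s` measuring the quality of the solution.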
2021
CRYPTO
Lower bounds on lattice sieving and information set decoding 📺
In two of the main areas of post-quantum cryptography, based on lattices and codes, nearest neighbor techniques have been used to speed up state-of-the-art cryptanalytic algorithms, and to obtain the lowest asymptotic cost estimates to date [May--Ozerov, Eurocrypt'15; Becker--Ducas--Gama--Laarhoven, SODA'16]. These upper bounds are useful for assessing the security of cryptosystems against known attacks, but to guarantee long-term security one would like to have closely matching lower bounds, showing that improvements on the algorithmic side will not drastically reduce the security in the future. As existing lower bounds from the nearest neighbor literature do not apply to the nearest neighbor problems appearing in this context, one might wonder whether further speedups to these cryptanalytic algorithms can still be found by only improving the nearest neighbor subroutines. We derive new lower bounds on the costs of solving the nearest neighbor search problems appearing in these cryptanalytic settings. For the Euclidean metric we show that for random data sets on the sphere, the locality-sensitive filtering approach of [Becker--Ducas--Gama--Laarhoven, SODA 2016] using spherical caps is optimal, and hence within a broad class of lattice sieving algorithms covering almost all approaches to date, their asymptotic time complexity of $2^{0.292d + o(d)}$ is optimal. Similar conditional optimality results apply to lattice sieving variants, such as the $2^{0.265d + o(d)}$ complexity for quantum sieving [Laarhoven, PhD thesis 2016] and previously derived complexity estimates for tuple sieving [Herold--Kirshanova--Laarhoven, PKC 2018]. For the Hamming metric we derive new lower bounds for nearest neighbor searching which almost match the best upper bounds from the literature [May--Ozerov, Eurocrypt 2015]. As a consequence we derive conditional lower bounds on decoding attacks, showing that also here one should search for improvements elsewhere to significantly undermine security estimates from the literature.
2021
CRYPTO
Non-Interactive Secure Multiparty Computation for Symmetric Functions, Revisited: More Efficient Constructions and Extensions 📺
Non-interactive secure multiparty computation (NIMPC) is a variant of secure computation which allows each of $n$ players to send only a single message depending on his input and correlated randomness. Abelian programs, which can realize any symmetric function, are defined as functions on the sum of the players' inputs over an abelian group and provide useful functionalities for real-world applications. We improve and extend the previous results in the following ways. (i) We present NIMPC protocols for abelian programs that improve the best known communication complexity. If inputs take any value of an abelian group $\mathbb{G}$, our protocol achieves communication complexity $O(|\mathbb{G}|(\log|\mathbb{G}|)^2)$, improving on the $O(|\mathbb{G}|^2n^2)$ of Beimel et al. (Crypto 2014). If players are limited to inputs from subsets of size at most $d$, our protocol achieves $|\mathbb{G}|(\log|\mathbb{G}|)^2(\max\{n,d\})^{(1+o(1))t}$, where $t$ is a corruption threshold. This result improves on the $|\mathbb{G}|^3(nd)^{(1+o(1))t}$ of Beimel et al. (Crypto 2014), and even on the $|\mathbb{G}|^{\log n+O(1)}n$ of Benhamouda et al. (Crypto 2017) if $t=o(\log n)$ and $|\mathbb{G}|=n^{\Theta(1)}$. (ii) We propose, for the first time, NIMPC protocols for linear classifiers that are more efficient than those obtained from the generic construction. (iii) We revisit a known transformation of Benhamouda et al. (Crypto 2017) from Private Simultaneous Messages (PSM) to NIMPC, which we use repeatedly in the above results. We reveal that a sub-protocol used in the transformation does not satisfy the specified security. We also fix their protocol with only constant overhead in the communication complexity. As a byproduct, we obtain an NIMPC protocol for indicator functions with asymptotically optimal communication complexity with respect to the input length.
2021
CRYPTO
Linear Cryptanalysis of FF3-1 and FEA 📺
Improved attacks on generic small-domain Feistel ciphers with alternating round tweaks are obtained using linear cryptanalysis. This results in practical distinguishing and message-recovery attacks on the United States format-preserving encryption standard FF3-1 and the South-Korean standards FEA-1 and FEA-2. The data-complexity of the proposed attacks on FF3-1 and FEA-1 is $O(N^{r/2 - 1.5})$, where $N^2$ is the domain size and $r$ is the number of rounds. For example, FF3-1 with $N = 10^3$ can be distinguished from an ideal tweakable block cipher with advantage $\ge 1/10$ using $2^{23}$ encryption queries. Recovering the left half of a message with similar advantage requires $2^{24}$ data. The analysis of FF3-1 serves as an interesting real-world application of (generalized) linear cryptanalysis over the group $\mathbb{Z}/N\mathbb{Z}$.
2021
CRYPTO
Multi-Input Quadratic Functional Encryption from Pairings 📺
We construct the first multi-input functional encryption (MIFE) scheme for quadratic functions from pairings. Our construction supports a polynomial number of users, where user $i$, for $i \in [n]$, encrypts input $\mathbf{x}_i \in \mathbb{Z}^m$ to obtain ciphertext $\mathsf{ct}_i$, the key generator provides a key $\mathsf{sk}_{\mathbf{c}}$ for vector $\mathbf{c} \in \mathbb{Z}^{(mn)^2}$, and decryption, given $\mathsf{ct}_1,\ldots,\mathsf{ct}_n$ and $\mathsf{sk}_{\mathbf{c}}$, recovers $\langle \mathbf{c}, \mathbf{x} \otimes \mathbf{x} \rangle$ and nothing else. We achieve indistinguishability-based (selective) security against unbounded collusions under the standard bilateral matrix Diffie-Hellman assumption. All previous MIFE schemes either support only inner products (linear functions) or rely on strong cryptographic assumptions such as indistinguishability obfuscation or multi-linear maps.
2021
CRYPTO
Deniable Fully Homomorphic Encryption from Learning With Errors 📺
We define and construct \emph{Deniable Fully Homomorphic Encryption} based on the Learning With Errors (LWE) polynomial hardness assumption. Deniable FHE enables storing encrypted data in the cloud to be processed securely without decryption, maintaining deniability of the encrypted data, as well as the prevention of vote-buying in electronic voting schemes where encrypted votes can be tallied without decryption. Our constructions achieve \emph{compactness} independently of the level of deniability: both the size of the public key and the size of the ciphertexts are bounded by a fixed polynomial, independent of the faking probability achieved by the scheme. This is in contrast to all previous constructions of deniable encryption schemes (even without requiring homomorphisms) which are based on polynomial hardness assumptions, originating with the seminal work of Canetti, Dwork, Naor and Ostrovsky (CRYPTO 1997) in which the ciphertext size grows with the inverse of the faking probability. Canetti \emph{et al.} argued that this dependence ``seems inherent'', but our constructions illustrate this is not the case. We note that the Sahai-Waters (STOC 2013) construction of deniable encryption from indistinguishability obfuscation achieves compactness and can be easily modified to achieve deniable FHE as well, but it requires multiple, stronger sub-exponential hardness assumptions, which are furthermore not post-quantum secure. In contrast, our constructions rely only on the LWE polynomial hardness assumption, as currently required for FHE even without deniability. The running time of our encryption algorithm depends on the inverse of the faking probability, thus the scheme falls short of achieving simultaneously compactness, negligible deniability probability \emph{and} polynomial encryption time. Yet, we believe that achieving compactness is a fundamental step on the way to achieving all properties simultaneously, as has been the historical journey for other primitives such as functional encryption. Interestingly, we note that our constructions support large message spaces, whereas previous constructions were bit-by-bit, and can be run in an online-offline model of encryption, where the bulk of computation is independent of the message and may be performed in an offline pre-processing phase. The running time of the online phase is independent of the faking probability, whereas the offline encryption run-time grows with the inverse of the faking probability. At the heart of our constructions is a new way to use bootstrapping to obliviously generate FHE ciphertexts so that it supports faking under coercion.
2021
CRYPTO
Revisiting the Security of DbHtS MACs: Beyond-Birthday-Bound in the Multi-User Setting 📺
Double-block Hash-then-Sum (\textsf{DbHtS}) MACs are a class of MACs that aim for achieving beyond-birthday-bound security, including \textsf{SUM-ECBC}, \textsf{PMAC\_Plus}, \textsf{3kf9} and \textsf{LightMAC\_Plus}. Recently, Datta et al. (FSE'19) and then Kim et al. (Eurocrypt'20) proved that \textsf{DbHtS} constructions are secure beyond the birthday bound in the single-user setting. However, by a generic reduction, their results degrade to (or even become worse than) the birthday bound in the multi-user setting. In this work, we revisit the security of \textsf{DbHtS} MACs in the multi-user setting. We propose a generic framework to prove beyond-birthday-bound security for \textsf{DbHtS} constructions. We demonstrate the usability of this framework with applications to key-reduced variants of \textsf{DbHtS} MACs, including \textsf{2k-SUM-ECBC}, \textsf{2k-PMAC\_Plus} and \textsf{2k-LightMAC\_Plus}. Our results show that the security of these constructions does not degrade as the number of users grows. On the other hand, our results also indicate that these constructions are secure beyond the birthday bound in both the single-user and multi-user settings without the additional domain separation that is used in prior work to simplify the analysis. Moreover, we find a critical flaw in \textsf{2kf9}, which was proved to be secure beyond the birthday bound by Datta et al. (FSE'19). We can successfully forge a tag with probability 1 without making any queries. We go further to show attacks with birthday-bound complexity on several variants of \textsf{2kf9}.
2021
CRYPTO
A New Simple Technique to Bootstrap Various Lattice Zero-Knowledge Proofs to QROM Secure NIZKs 📺
Many of the recent advanced lattice-based Sigma-/public-coin honest-verifier zero-knowledge (HVZK) interactive protocols based on the techniques developed by Lyubashevsky (Asiacrypt'09, Eurocrypt'12) can be transformed into a non-interactive zero-knowledge (NIZK) proof in the random oracle model (ROM) using the Fiat-Shamir transform. Unfortunately, although they are known to be secure in the \emph{classical} ROM, existing proof techniques are incapable of proving them secure in the \emph{quantum} ROM (QROM). Alternatively, while we could instead rely on the Unruh transform (Eurocrypt'15), the resulting QROM-secure NIZK incurs a large overhead compared to the underlying interactive protocol. In this paper, we present a new simple semi-generic transform that compiles many existing lattice-based Sigma-/public-coin HVZK interactive protocols into QROM-secure NIZKs. Our transform builds on a new primitive called an \emph{extractable linear homomorphic commitment} protocol. The resulting NIZK has several appealing features: it is not only a proof of knowledge but also straight-line extractable; the proof overhead is smaller compared to the Unruh transform; it enjoys a relatively small reduction loss; and it requires minimal background on quantum computation. To illustrate the generality of our technique, we show how to transform the recent 5-round protocol with exact soundness by Bootle et al. (Crypto'19) into a QROM-secure NIZK while increasing the proof size by a factor of only 2.6. This compares favorably to the Unruh transform, which requires a factor of more than 50.
2021
CRYPTO
Subquadratic SNARGs in the Random Oracle Model 📺
In a seminal work, Micali (FOCS 1994) gave the first succinct non-interactive argument (SNARG) in the random oracle model (ROM). The construction combines a PCP and a cryptographic commitment, and has several attractive features: it is plausibly post-quantum; it can be heuristically instantiated via lightweight cryptography; and it has a transparent (public-coin) parameter setup. However, it also has a significant drawback: a large argument size. In this work, we provide a new construction that achieves a smaller argument size. This is the first progress on the Micali construction since it was introduced over 25 years ago. A SNARG in the ROM is (t,ε)-secure if every t-query malicious prover can convince the verifier of a false statement with probability at most ε. For (t,ε)-security, the argument size of all known SNARGs in the ROM (including Micali's) is Õ((log (t/ε))^2) bits, *even* if one were to rely on conjectured probabilistic proofs well beyond current techniques. In practice, these costs lead to SNARGs that are much larger than constructions based on other (pre-quantum and costly) tools. This has led many to believe that SNARGs in the ROM are inherently quadratic. We show that this is not the case. We present a SNARG in the ROM with a sub-quadratic argument size: Õ(log (t/ε) * log t). Our construction relies on a strong soundness notion for PCPs and a weak binding notion for commitments. We hope that our work paves the way for understanding if a linear argument size, that is O(log (t/ε)), is achievable in the ROM.
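For a rough sense of the gap (an illustrative calculation only, ignoring the polylogarithmic factors hidden by the Õ notation, with hypothetical parameters t = 2^64 and ε = 2^-128 chosen here for concreteness): log(t/ε) = 64 + 128 = 192, so the previously known constructions give on the order of 192^2 = 36864 bits, while the new bound log(t/ε) * log t gives on the order of 192 * 64 = 12288 bits, a factor-3 saving at these parameters.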
2021
CRYPTO
Lattice Reduction with Approximate Enumeration Oracles: Practical Algorithms and Concrete Performance 📺
This work provides a systematic investigation of the use of approximate enumeration oracles in BKZ, building on recent technical progress on speeding up lattice enumeration: relaxing (the search radius of) enumeration, and extended preprocessing, which preprocesses in a larger rank than the enumeration rank. First, we heuristically justify that relaxing enumeration with certain extreme pruning asymptotically achieves an exponential speed-up for reaching the same root Hermite factor (RHF). Second, we perform simulations/experiments to validate this and the performance of relaxed enumeration with numerically optimised pruning for both regular and extended preprocessing. Upgrading BKZ with such approximate enumeration oracles gives rise to our main result, namely a practical and faster (compared to previous work) polynomial-space lattice reduction algorithm for reaching the same RHF in practical and cryptographic parameter ranges. We assess its concrete time/quality performance with extensive simulations and experiments. As a consequence, we update the extrapolation of the crossover rank between a square-root cost estimate for quantum enumeration using our algorithm and the Core-SVP cost estimate for quantum sieving to 547.
2021
CRYPTO
Secure Wire Shuffling in the Probing Model 📺
In this paper we describe the first improvement of the wire shuffling countermeasure against side-channel attacks described by Ishai, Sahai and Wagner at Crypto 2003. More precisely, we show how to get worst-case statistical security against t probes with running time O(t) instead of O(t log t); our construction is also much simpler. Recall that the classical masking countermeasure achieves perfect security but with running time O(t^2). We also describe a practical implementation for AES that outperforms the masking countermeasure for t ≥ 6000.
2021
CRYPTO
SSE and SSD: Page-Efficient Searchable Symmetric Encryption 📺
Searchable Symmetric Encryption (SSE) enables a client to outsource a database to an untrusted server, while retaining the ability to securely search the data. The performance bottleneck of classic SSE schemes typically does not come from their fast, symmetric cryptographic operations, but rather from the cost of memory accesses. To address this issue, many works in the literature have considered the notion of locality, a simple design criterion that helps capture the cost of memory accesses in traditional storage media, such as Hard Disk Drives. A common thread among many SSE schemes aiming to improve locality is that they are built on top of new memory allocation schemes, which form the technical core of the constructions. The starting observation of this work is that for newer storage media such as Solid State Drives (SSDs), which have become increasingly common, locality is not a good predictor of practical performance. Instead, SSD performance mainly depends on page efficiency, that is, reading as few pages as possible. We define this notion, and identify a simple allocation problem, Data-Independent Packing, that captures the main technical challenge required to build page-efficient SSE. As our main result, we build a page-efficient and storage-efficient data-independent packing scheme, and deduce an SSE scheme with the same properties. The technical core of the result is a new generalization of cuckoo hashing to items of variable size. Practical experiments show that this approach achieves excellent performance.
2021
CRYPTO
ATLAS: Efficient and Scalable MPC in the Honest Majority Setting 📺
In this work, we address communication, computation, and round efficiency of unconditionally secure multi-party computation for arithmetic circuits in the honest majority setting. We achieve both algorithmic and practical improvements: - The best known result in the semi-honest setting has been due to Damgard and Nielsen (CRYPTO 2007). Over the last decade, their construction has played an important role in the progress of efficient secure computation. However, despite a number of follow-up works, significant improvements to the basic semi-honest protocol have been hard to come by. We show a 33% improvement in the communication complexity of this protocol. We show how to generalize this result to the malicious setting, leading to the best known unconditional honest majority MPC with malicious security. - We focus on the round complexity of the Damgard and Nielsen protocol and improve it by a factor of 2. Our improvement relies on a novel observation about the interplay between Damgard-Nielsen multiplication and Beaver triple multiplication. An implementation of our constructions shows an execution run-time improvement over the state of the art ranging from 30% to 50%.
2021
CRYPTO
Unconditional Communication-Efficient MPC via Hall's Marriage Theorem 📺
The best known n-party unconditional multiparty computation protocols with an optimal corruption threshold communicate O(n) field elements per gate. This has been the case even in the semi-honest setting, despite over a decade of research on communication complexity in this setting. Moving to the slightly sub-optimal corruption setting, the work of Damgard, Ishai, and Kroigaard (EUROCRYPT 2010) provided the first protocol for a single circuit achieving communication complexity of O(log |C|) elements per gate. While a number of works have improved upon this result, obtaining a protocol with O(1) field elements per gate has remained an open problem. In this work, we construct the first unconditional multi-party computation protocol evaluating a single arithmetic circuit with amortized communication complexity of O(1) elements per gate.
2021
CRYPTO
Puncturable Pseudorandom Sets and Private Information Retrieval with Near-Optimal Online Bandwidth and Time 📺
Imagine one or more non-colluding servers each holding a large public database, e.g., the repository of DNS entries. Clients would like to access entries in this database without disclosing their queries to the servers. Classical private information retrieval (PIR) schemes achieve polylogarithmic bandwidth per query, but require the server to perform linear computation per query, which is a significant barrier towards deployment. Several recent works showed, however, that by introducing a one-time, per-client, off-line preprocessing phase, an \emph{unbounded} number of client queries can be subsequently served with sublinear online computation time per query (and the cost of the preprocessing can be amortized over the unboundedly many queries). Existing preprocessing PIR schemes (supporting unbounded queries), unfortunately, make undesirable tradeoffs to achieve sublinear online computation: they are either significantly non-optimal in online time or bandwidth, or require the servers to store a linear amount of state per client or even per query, or require polylogarithmically many non-colluding servers. We propose a novel 2-server preprocessing PIR scheme that achieves $\widetilde{O}(\sqrt{n})$ online computation per query and $\widetilde{O}(\sqrt{n})$ client storage, while preserving the polylogarithmic online bandwidth of classical PIR schemes. Both the online bandwidth and computation are optimal up to a poly-logarithmic factor. In our construction, each server stores only the original database and nothing extra, and each online query is served within a single round trip. Our construction relies on the standard LWE assumption. As an important stepping stone, we propose new, more generalized definitions for a cryptographic object called a Privately Puncturable Pseudorandom Set, and give novel constructions that depart significantly from prior approaches.
2021
CRYPTO
On the Concurrent Composition of Quantum Zero-Knowledge 📺
We study the notion of zero-knowledge secure against quantum polynomial-time verifiers (referred to as quantum zero-knowledge) in the concurrent composition setting. Despite being extensively studied in the classical setting, concurrent composition in the quantum setting has hardly been studied. We initiate a formal study of concurrent quantum zero-knowledge. Our results are as follows: - Bounded Concurrent QZK for NP and QMA: Assuming post-quantum one-way functions, there exists a quantum zero-knowledge proof system for NP in the bounded concurrent setting. In this setting, we fix a priori the number of verifiers that can simultaneously interact with the prover. Under the same assumption, we also show that there exists a quantum zero-knowledge proof system for QMA in the bounded concurrency setting. - Quantum Proofs of Knowledge: Assuming quantum hardness of learning with errors (QLWE), there exists a bounded concurrent zero-knowledge proof system for NP satisfying the quantum proof of knowledge property. Our extraction mechanism simultaneously allows the extraction probability to be negligibly close to the acceptance probability (extractability) and also ensures that the prover's state after extraction is statistically close to the prover's state after interacting with the verifier (simulatability). Even in the standalone setting, the seminal work of [Unruh EUROCRYPT'12], and all its follow-ups, satisfied a weaker version of the extractability property and, moreover, did not achieve simulatability. Our result yields a proof of {\em quantum knowledge} system for QMA with better parameters than prior works.
2021
CRYPTO
SMILE: Set Membership from Ideal Lattices with Applications to Ring Signatures and Confidential Transactions 📺
In a set membership proof, the public information consists of a set of elements and a commitment. The prover then produces a zero-knowledge proof showing that the commitment is indeed to some element from the set. This primitive is closely related to concepts like ring signatures and ``one-out-of-many'' proofs that underlie many anonymity and privacy protocols. The main result of this work is a new succinct lattice-based set membership proof whose size is logarithmic in the size of the set. We also give transformations of our set membership proof to a ring signature scheme and to a confidential transaction payment system. The ring signature size is also logarithmic in the size of the public key set and has size $16$~KB for a set of $2^5$ elements, and $22$~KB for a set of size $2^{25}$. At an approximately $128$-bit security level, these outputs are between 1.5X and 7X smaller than the current state of the art succinct ring signatures of Beullens et al. (Asiacrypt 2020) and Esgin et al. (CCS 2019). We then show that our ring signature, combined with a few other techniques and optimizations, can be turned into a fairly efficient Monero-like confidential transaction system based on the MatRiCT framework of Esgin et al. (CCS 2019). With our new techniques, we are able to reduce the transaction proof size by factors of about 4X - 10X over the aforementioned work. For example, a transaction with two inputs and two outputs, where each input is hidden among $2^{15}$ other accounts, requires approximately $30$KB in our protocol.
2021
CRYPTO
Functional Encryption for Turing Machines with Dynamic Bounded Collusion from LWE 📺
The classic work of Gorbunov, Vaikuntanathan and Wee (CRYPTO 2012) and follow-ups provided constructions of bounded collusion Functional Encryption (FE) for circuits from mild assumptions. In this work, we improve the state of affairs for bounded collusion FE in several ways: 1. {\it New Security Notion.} We introduce the notion of {\it dynamic} bounded collusion FE, where the declaration of collusion bound is delayed to the time of encryption. This enables the encryptor to dynamically choose the collusion bound for different ciphertexts depending on their individual level of sensitivity. Hence, the ciphertext size grows linearly with its own collusion bound and the public key size is independent of collusion bound. In contrast, all prior constructions have public key and ciphertext size that grow at least linearly with a fixed bound $Q$. 2. {\it CPFE for circuits with Dynamic Bounded Collusion.} We provide the first CPFE schemes for circuits enjoying dynamic bounded collusion security. By assuming identity based encryption (IBE), we construct CPFE for circuits of {\it unbounded} size satisfying {\it non-adaptive} simulation based security. By strengthening the underlying assumption to IBE with receiver selective opening security, we obtain CPFE for circuits of {\it bounded} size enjoying {\it adaptive} simulation based security. Moreover, we show that IBE is a necessary assumption for these primitives. Furthermore, by relying on the Learning With Errors (LWE) assumption, we obtain the first {\it succinct} CPFE for circuits, i.e. supporting circuits with unbounded size, but fixed output length and depth. This scheme achieves {\it adaptive} simulation based security. 3. {\it KPFE for circuits with dynamic bounded collusion.} We provide the first KPFE for circuits of unbounded size, but bounded depth and output length satisfying dynamic bounded collusion security from LWE. Our construction achieves {\it adaptive} simulation security improving security of \cite{GKPVZ13a}. 4. {\it KP and CP FE for TM/NL with dynamic bounded collusion.} We provide the first KPFE and CPFE constructions of bounded collusion functional encryption for Turing machines in the public key setting from LWE. Our constructions achieve non-adaptive simulation based security. Both the input and the machine in our construction can be of {\it unbounded} polynomial length. We provide a variant of the above scheme that satisfies {\it adaptive} security, but at the cost of supporting a smaller class of computation, namely Nondeterministic Logarithmic-space (NL). Since NL contains Nondeterministic Finite Automata (NFA), this result subsumes {\it all} prior work of bounded collusion FE for uniform models from standard assumptions \cite{AMY19,AS17}.
2021
CRYPTO
Game-Theoretic Fairness Meets Multi-Party Protocols: The Case of Leader Election 📺
Suppose that $n$ players want to elect a random leader and they communicate by posting messages to a common broadcast channel. This problem is called leader election, and it is fundamental to the distributed systems and cryptography literature. Recently, it has attracted renewed interest due to its promised applications in decentralized environments. In a game-theoretically fair leader election protocol, roughly speaking, we want that even a majority coalition cannot increase its own chance of getting elected, nor hurt the chance of any honest individual. The folklore tournament-tree protocol, which completes in logarithmically many rounds, can easily be shown to satisfy game-theoretic security. To the best of our knowledge, no sub-logarithmic-round protocol was known in the setting that we consider. We show that by adopting an appropriate notion of approximate game-theoretic fairness, and under standard cryptographic assumptions, we can achieve $(1-1/2^{\Theta(r)})$-fairness in $r$ rounds for $\Theta(\log \log n) \leq r \leq \Theta(\log n)$, where $n$ denotes the number of players. In particular, this means that we can approximately match the fairness of the tournament-tree protocol using as few as $O(\log \log n)$ rounds. We also prove a lower bound showing that logarithmically many rounds are necessary if we restrict ourselves to ``perfect'' game-theoretic fairness and protocols that are ``very similar in structure'' to the tournament-tree protocol. Although leader election is a well-studied problem in other contexts in distributed computing, our work is the first exploration of the round complexity of {\it game-theoretically fair} leader election in the presence of a possibly majority coalition. As a by-product of our exploration, we suggest a new approximate game-theoretic fairness notion, called ``approximate sequential fairness'', which provides a more desirable solution concept than some previously studied approximate fairness notions.
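The following is a minimal, honest-execution sketch of the folklore tournament-tree protocol referred to above: players are paired up, each pair jointly flips a fair coin via commit-then-reveal, and winners advance for logarithmically many rounds. It only illustrates the protocol structure with honest players (the helper names are ours); it does not model the broadcast channel, aborting or deviating players, or the game-theoretic fairness analysis that is the subject of the paper.

```python
# Honest-execution sketch of the folklore tournament-tree leader election:
# players are paired, each pair flips a fair coin via commit-then-reveal,
# winners advance; after ~log2(n) rounds one leader remains.
import hashlib
import secrets

def commit(bit: int, nonce: bytes) -> bytes:
    return hashlib.sha256(nonce + bytes([bit])).digest()

def duel(p, q):
    """One match: both players commit to a random bit, then reveal; the XOR decides."""
    bits = {x: secrets.randbits(1) for x in (p, q)}
    nonces = {x: secrets.token_bytes(16) for x in (p, q)}
    commitments = {x: commit(bits[x], nonces[x]) for x in (p, q)}
    # Reveal phase: check that each opening matches the earlier commitment.
    for x in (p, q):
        assert commit(bits[x], nonces[x]) == commitments[x]
    return p if (bits[p] ^ bits[q]) == 0 else q

def tournament_tree_election(players):
    while len(players) > 1:
        nxt = []
        for i in range(0, len(players) - 1, 2):
            nxt.append(duel(players[i], players[i + 1]))
        if len(players) % 2 == 1:          # odd player out gets a bye this round
            nxt.append(players[-1])
        players = nxt
    return players[0]

print(tournament_tree_election(list(range(16))))  # elects one of players 0..15
```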
2021
CRYPTO
The Cost to Break SIKE: A Comparative Hardware-Based Analysis with AES and SHA-3 📺
This work presents a detailed study of the classical security of the post-quantum supersingular isogeny key encapsulation (SIKE) protocol using a realistic budget-based cost model that considers the actual computing and memory costs that are needed for cryptanalysis. In this effort, we design especially-tailored hardware accelerators for the time-critical multiplication and isogeny computations that we use to model an ASIC-powered instance of the van Oorschot-Wiener (vOW) parallel collision search algorithm. We then extend the analysis to AES and SHA-3 in the context of the NIST post-quantum cryptography standardization process to carry out a parameter analysis based on our cost model. This analysis, together with the state-of-the-art quantum security analysis of SIKE, indicates that the current SIKE parameters offer higher practical security than currently believed, closing an open issue on the suitability of the parameters to match NIST's security levels. In addition, we explore the possibility of using significantly smaller primes to enable more efficient and compact implementations with reduced bandwidth. Our improved cost model and analysis can be applied to other cryptographic settings and primitives, and can have implications for other post-quantum candidates in the NIST process.
2021
CRYPTO
Upslices, Downslices, and Secret-Sharing with Complexity of $1.5^n$ 📺
A secret-sharing scheme allows one to distribute a secret $s$ among $n$ parties such that only some predefined ``authorized'' sets of parties can reconstruct the secret, and all other ``unauthorized'' sets learn nothing about $s$. The collection of authorized/unauthorized sets can be captured by a monotone function $f:\{0,1\}^n\rightarrow \{0,1\}$. In this paper, we focus on monotone functions all of whose min-terms are sets of size $a$, and on their duals -- monotone functions whose max-terms are of size $b$. We refer to these classes as $(a,n)$-\emph{upslices} and $(b,n)$-\emph{downslices}, and note that these natural families correspond to monotone $a$-regular DNFs and monotone $(n-b)$-regular CNFs. We derive the following results. (1) (General downslices) Every downslice can be realized with total share size of $1.5^{n+o(n)}<2^{0.585 n}$. Since every monotone function can be cheaply decomposed into $n$ downslices, we obtain a similar result for general access structures, improving the previously known $2^{0.637n+o(n)}$ complexity of Applebaum, Beimel, Nir and Peter (STOC 2020). We also achieve a minor improvement in the exponent of linear secret-sharing schemes. (2) (Random mixture of upslices) Following Beimel and Farr{\`{a}}s (TCC 2020), who studied the complexity of random DNFs with constant-size terms, we consider the following general distribution $F$ over monotone DNFs: for each width value $a\in [n]$, uniformly sample $k_a$ monotone terms of size $a$, where $\vec{k}=(k_1,\ldots,k_n)$ is an arbitrary vector of non-negative integers. We show that, except with exponentially small probability, $F$ can be realized with share size of $2^{0.5 n+o(n)}$ and can be linearly realized with an exponent strictly smaller than $2/3$. Our proof also provides a candidate distribution for the ``exponentially-hard'' access structure. We use our results to explore connections between several seemingly unrelated questions about the complexity of secret-sharing schemes, such as worst-case vs. average-case, linear vs. non-linear, and primal vs. dual access structures. We prove that, in at least one of these settings, there is a significant gap in secret-sharing complexity.
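A quick check on how the quoted exponents relate, using only the constants stated above: $1.5^{n+o(n)} = 2^{(\log_2 1.5)\,n+o(n)} \approx 2^{0.585\,n+o(n)} < 2^{0.637\,n+o(n)}$, so for general access structures the new bound improves the exponent of the previous best upper bound by roughly $0.05\,n$.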
2021
CRYPTO
Impossibility of Quantum Virtual Black-Box Obfuscation of Classical Circuits 📺
Virtual black-box obfuscation is a strong cryptographic primitive: it encrypts a circuit while maintaining its full input/output functionality. A remarkable result by Barak et al. (Crypto 2001) shows that a general obfuscator that obfuscates classical circuits into classical circuits cannot exist. A promising direction that circumvents this impossibility result is to obfuscate classical circuits into quantum states, which would potentially be better capable of hiding information about the obfuscated circuit. We show that, under the assumption that Learning With Errors (LWE) is hard for quantum computers, this quantum variant of virtual black-box obfuscation of classical circuits is generally impossible. On the way, we show that under the presence of dependent classical auxiliary input, even the small class of classical point functions cannot be quantum virtual black-box obfuscated.
2021
CRYPTO
Towards a Unified Approach to Black-Box Constructions of Zero-Knowledge Proofs 📺
General-purpose zero-knowledge proofs for all NP languages greatly simplify secure protocol design. However, they inherently require the code of the underlying relation. If the relation contains black-box calls to a cryptographic function, the code of that function must be known to use the ZK proof, even if both the relation and the proof require only black-box access to the function. Rosulek (Crypto'12) shows that non-trivial proofs for even simple statements, such as membership in the range of a one-way function, require non-black-box access. We propose an alternative approach to bypass Rosulek's impossibility result. Instead of asking for a ZK proof directly for the given one-way function $f$, we seek to construct a {\em new} one-way function $F$ given only black-box access to $f$, {\em and} an associated ZK protocol for proving non-trivial statements, such as range membership, over its output. We say that $F$, along with its proof system, is a {\em proof-based} one-way function. We similarly define proof-based versions of other primitives, specifically pseudo-random generators and collision-resistant hash functions. We show how to construct proof-based versions of each of the primitives mentioned above from their ordinary counterparts under mild but necessary restrictions over the input. More specifically: - We first show that if the prover entirely chooses the input, then proof-based pseudo-random generators cannot be constructed from ordinary ones in a black-box manner, thus establishing that some restrictions over the input are necessary. - We next present black-box constructions handling inputs of the form $(x,r)$ where $r$ is chosen uniformly by the verifier. This is similar to the restrictions in the widely used Goldreich-Levin theorem. The associated ZK proofs support range membership over the output as well as arbitrary predicates over prefixes of the input. Our results open up the possibility that general-purpose ZK proofs for relations that require black-box access to the primitives above may be possible in the future without violating their black-box nature, by instantiating them using proof-based primitives instead of ordinary ones.
2021
CRYPTO
Does Fiat-Shamir Require a Cryptographic Hash Function? 📺
The Fiat-Shamir transform is a general method for reducing interaction in public-coin protocols by replacing the random verifier messages with deterministic hashes of the protocol transcript. The soundness of this transformation is usually heuristic and lacks a formal security proof. Instead, to argue security, one can rely on the random oracle methodology, which informally states that whenever a random oracle soundly instantiates Fiat-Shamir, a hash function that is ``sufficiently unstructured'' (such as fixed-length SHA-2) should suffice. Finally, for some special interactive protocols, it is known how to (1) isolate a concrete security property of a hash function that suffices to instantiate Fiat-Shamir and (2) build a hash function satisfying this property under a cryptographic assumption such as Learning with Errors. In this work, we abandon this methodology and ask whether Fiat-Shamir truly requires a cryptographic hash function. Perhaps surprisingly, we show that in two of its most common applications --- building signature schemes as well as (general-purpose) non-interactive zero-knowledge arguments --- there are sound Fiat-Shamir instantiations using extremely simple and non-cryptographic hash functions such as sum-mod-$p$ or bit decomposition. In some cases, we make idealized assumptions (i.e., we invoke the generic group model), while in others, we prove soundness in the plain model. On the negative side, we also identify important cases in which a cryptographic hash function is provably necessary to instantiate Fiat-Shamir. We hope this work leads to an improved understanding of the precise role of the hash function in the Fiat-Shamir transformation.
2021
CRYPTO
A Logarithmic Lower Bound for Oblivious RAM (for all parameters) 📺
An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input the observed locations accessed are similarly distributed. In recent years there has been great progress both in terms of upper bounds as well as in terms of lower bounds, essentially pinning down the smallest overhead possible in various settings of parameters. We observe that there is a very natural setting of parameters in which \emph{no} non-trivial lower bound is known, not even in restricted models of computation (like the so-called balls-and-bins model). Let $N$ and $w$ be the number of cells and bit-size of cells, respectively, in the RAM that we wish to simulate obliviously. Denote by $b$ the cell bit-size of the ORAM. \emph{All} previous ORAM lower bounds have a multiplicative $w/b$ factor, which makes them trivial in many settings of parameters of interest. In this work, we prove a new ORAM lower bound that captures this setting (and in all other settings it is at least as good as previous ones, quantitatively). We show that any ORAM must make (amortized) $$ \Omega\left(\log \left(\frac{Nw}{m}\right)/\log\left(\frac{b}{w}\right)\right) $$ memory probes for every logical operation. Here, $m$ denotes the bit-size of the local storage of the ORAM. Our lower bound implies that logarithmic overhead in accesses is necessary, even if $b \gg w$. Our lower bound is tight for \emph{all} settings of parameters, up to the $\log(b/w)$ factor. Our bound also extends to the non-colluding multi-server setting. As an application, we derive the first (unconditional) separation between the overhead needed for ORAMs in the \emph{online} vs. \emph{offline} models. Specifically, we show that when $w=\log N$ and $b,m \in \mathrm{poly}\log N$, there exists an offline ORAM that makes (on average) $o(1)$ memory probes per logical operation while every online one must make $\Omega(\log N/\log\log N)$ memory probes per logical operation. No such separation was previously known for any setting of parameters, not even in the balls-and-bins model.
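To see how the stated bound specializes to the parameter regime of the separation above (a loose, illustrative reading that treats polylogarithmic factors coarsely): with $w=\log N$ and $b,m \in \mathrm{poly}\log N$, we get $$\Omega\left(\log\left(\frac{Nw}{m}\right)/\log\left(\frac{b}{w}\right)\right) = \Omega\left(\frac{\log N - O(\log\log N)}{O(\log\log N)}\right) = \Omega\left(\frac{\log N}{\log\log N}\right),$$ which matches the online lower bound quoted in the online vs. offline separation.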
2021
CRYPTO
Meet-in-the-Middle Attacks Revisited: Key-recovery, Collision, and Preimage Attacks 📺
At EUROCRYPT 2021, Bao et al. proposed an automatic method for systematically exploring the configuration space of meet-in-the-middle (MITM) preimage attacks. We further extend it into a constraint-based framework for finding exploitable MITM characteristics in the context of key-recovery and collision attacks by taking the subtle peculiarities of both scenarios into account. Moreover, to perform attacks based on MITM characteristics with nonlinearly constrained neutral words, which have not been seen before, we present a procedure for deriving the solution spaces of neutral words without solving the corresponding nonlinear equations or increasing the overall time complexities of the attack. We apply our method to concrete symmetric-key primitives, including SKINNY, ForkSkinny, Romulus-H, Saturnin, Grostl, Whirlpool, and hashing modes with AES-256. As a result, we identify the first 23-round key-recovery attack on SKINNY-$n$-$3n$ and the first 24-round key-recovery attack on ForkSkinny-$n$-$3n$ in the single-key model. Moreover, improved (pseudo) preimage or collision attacks on round-reduced Whirlpool, Grostl, and hashing modes with AES-256 are obtained. In particular, employing the new representation of the AES key schedule due to Leurent and Pernot (EUROCRYPT 2021), we identify the first preimage attack on 10-round AES-256 hashing.
2021
CRYPTO
Tighter Security for Schnorr Identification and Signatures: A High-Moment Forking Lemma for ${\Sigma}$-Protocols 📺
The Schnorr identification and signature schemes have been amongst the most influential cryptographic protocols of the past three decades. Unfortunately, although the best-known attacks on these two schemes are via discrete-logarithm computation, the known approaches for basing their security on the hardness of the discrete logarithm problem encounter the ``square-root barrier''. In particular, in any group of order $p$ where Shoup's generic hardness result for the discrete logarithm problem is believed to hold (and is thus used for setting concrete security parameters), the best-known $t$-time attacks on the Schnorr identification and signature schemes have success probability $t^2/p$, whereas existing proofs of security only rule out attacks with success probabilities $(t^2/p)^{1/2}$ and $(q_{\mathrm{hash}} \cdot t^2/p)^{1/2}$, respectively, where $q_{\mathrm{hash}}$ denotes the number of random-oracle queries issued by the attacker. We establish tighter security guarantees for identification and signature schemes which result from $\Sigma$-protocols with special soundness based on the hardness of their underlying relation, and in particular for Schnorr's schemes based on the hardness of the discrete logarithm problem. We circumvent the square-root barrier by introducing a high-moment generalization of the classic forking lemma, relying on the assumption that the underlying relation is ``$d$-moment hard'': The success probability of any algorithm in the task of producing a witness for a random instance is dominated by the $d$-th moment of the algorithm's running time. In the concrete context of the discrete logarithm problem, already Shoup's original proof shows that the discrete logarithm problem is $2$-moment hard in the generic-group model, and thus our assumption can be viewed as a highly-plausible strengthening of the discrete logarithm assumption in any group where no better-than-generic algorithms are currently known. Applying our high-moment forking lemma in this context shows that, assuming the $2$-moment hardness of the discrete logarithm problem, any $t$-time attacker breaks the security of the Schnorr identification and signature schemes with probabilities at most $(t^2/p)^{2/3}$ and $(q_{\mathrm{hash}} \cdot t^2/p)^{2/3}$, respectively.
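As a concrete, illustrative reading of these bounds (the parameters below are chosen for orientation, not taken from the paper): for a group of order $p \approx 2^{256}$ and a $t = 2^{64}$-time attacker, ignoring the $q_{\mathrm{hash}}$ factor, the best-known attack success probability is $t^2/p \approx 2^{-128}$; the previous proofs only rule out success above $(t^2/p)^{1/2} \approx 2^{-64}$, whereas the new bound rules out success above $(t^2/p)^{2/3} \approx 2^{-85.3}$, closing much of the gap.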
2021
CRYPTO
Three-Round Secure Multiparty Computation from Black-Box Two-Round Oblivious Transfer 📺
We give constructions of three-round secure multiparty computation (MPC) protocols for general functions that make {\it black-box} use of a two-round oblivious transfer (OT). For the case of semi-honest adversaries, we make use of a two-round, semi-honest secure OT in the plain model. This resolves the round-complexity of black-box (semi-honest) MPC protocols from minimal assumptions and answers an open question of Applebaum et al. (ITCS 2020). For the case of malicious adversaries, we make use of a two-round maliciously-secure OT in the common random/reference string model that satisfies a (mild) variant of adaptive security for the receiver.
2021
CRYPTO
On Tight Quantum Security of HMAC and NMAC in the Quantum Random Oracle Model 📺
HMAC and NMAC are the most basic and important constructions to convert Merkle-Damg{\aa}rd hash functions into message authentication codes (MACs) or pseudorandom functions (PRFs). In the quantum setting, at CRYPTO~2017, Song and Yun showed that HMAC and NMAC are quantum pseudorandom functions (qPRFs) under the standard assumption that the underlying compression function is a qPRF. Their proof guarantees security up to $O(2^{n/5})$ or $O(2^{n/8})$ quantum queries when the output length of HMAC and NMAC is $n$ bits. However, there is a gap between the provable security bound and a simple distinguishing attack that uses $O(2^{n/3})$ quantum queries. This paper settles the problem of closing the gap. We show that the tight bound of the number of quantum queries to distinguish HMAC or NMAC from a random function is $\Theta(2^{n/3})$ in the quantum random oracle model, where compression functions are modeled as quantum random oracles. To give the tight quantum bound, based on an alternative formalization of Zhandry's compressed oracle technique, we introduce a new proof technique focusing on the symmetry of quantum query records.
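For orientation, with output length $n = 256$ the bounds quoted above translate to (illustrative arithmetic only): the earlier proofs guarantee security up to roughly $2^{n/5} = 2^{51.2}$ or $2^{n/8} = 2^{32}$ quantum queries, while both the simple distinguishing attack and the new tight bound sit at about $2^{n/3} \approx 2^{85.3}$ queries.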
2021
CRYPTO
Quantum Collision Attacks on Reduced SHA-256 and SHA-512 📺
In this paper, we study dedicated quantum collision attacks on SHA-256 and SHA-512 for the first time. The attacks reach 38 and 39 steps, respectively, which significantly improves on the best classical attacks, which reach 31 and 27 steps. Both attacks adopt the framework of the previous work that converts many semi-free-start collisions into a 2-block collision, and are faster than the generic attack in the cost metric of time-space tradeoff. We observe that the number of required semi-free-start collisions can be reduced in the quantum setting, which allows us to convert the previous classical 38- and 39-step semi-free-start collisions into a collision. The idea behind our attacks is simple and will also be applicable to other cryptographic hash functions.
2021
CRYPTO
Efficient Information-Theoretic Multi-Party Computation over Non-Commutative Rings 📺
We construct the first efficient MPC protocol that only requires black-box access to a non-commutative ring $R$. Previous results in the same setting were efficient only either for a constant number of corruptions or when computing branching programs and formulas. Our techniques are based on a generalization of Shamir's secret sharing to non-commutative rings, which we derive from the work on Reed-Solomon codes by Quintin, Barbier and Chabot (\textit{IEEE Transactions on Information Theory, 2013}). When the center of the ring contains a set $A = \{\alpha_0, \ldots, \alpha_n\}$ such that $\forall i \neq j, \alpha_i - \alpha_j \in R^*$, the resulting secret sharing scheme is strongly multiplicative and we can generalize existing constructions over finite fields without much trouble. Most of our work is devoted to the case where the elements of $A$ do not commute with all of $R$, but they just commute with each other. For such rings, the secret sharing scheme cannot be linear ``on both sides'' and furthermore it is not multiplicative. Nevertheless, we are still able to build MPC protocols with a concretely efficient online phase and black-box access to $R$. As an example we consider the ring $\mathcal{M}_{m\times m}(\mathbb{Z}/2^k\mathbb{Z})$, for which, when $m > \log(n+1)$, we obtain protocols that require around a factor of $\lceil\log(n+1)\rceil/2$ less communication and a factor of $2\lceil\log(n+1)\rceil$ less computation than the state-of-the-art protocol based on Circuit Amortization Friendly Encodings (Dalskov, Lee and Soria-Vazquez, \textit{ASIACRYPT 2020}). In this setting with a ``less commutative'' $A$, our black-box preprocessing phase has a less practical complexity of $\mathrm{poly}(n)$. Due to this, we additionally provide specialized, concretely efficient preprocessing protocols for $R = \mathcal{M}_{m\times m}(\mathbb{Z}/2^k\mathbb{Z})$ that exploit the structure of the matrix ring.
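For reference, here is a minimal sketch of the classical Shamir scheme over a prime field, i.e., the baseline that the paper generalizes to non-commutative rings; the non-commutative construction itself (with evaluation points taken from a commuting set $A$ in $R$) is substantially more involved and is not reproduced here.

```python
# Minimal Shamir t-out-of-n secret sharing over a prime field Z_p.
# This is only the classical baseline that the paper generalizes to
# non-commutative rings such as M_{m x m}(Z/2^k Z); it is NOT the
# construction from the paper.
import secrets

P = 2**61 - 1  # a Mersenne prime, used here as the field modulus

def share(secret, t, n):
    """Split `secret` into n shares so that any t of them reconstruct it."""
    coeffs = [secret % P] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 shares recover 123456789
```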
2021
CRYPTO
Adaptive Extractors and their Application to Leakage Resilient Secret Sharing 📺
We introduce Adaptive Extractors, which, unlike traditional randomness extractors, guarantee security even when an adversary obtains leakage on the source \textit{after} observing the extractor output. We make a compelling case for the study of such extractors by demonstrating their use in obtaining adaptive leakage resilience in secret sharing schemes. Specifically, at FOCS 2020, Chattopadhyay, Goodman, Goyal, Kumar, Li, Meka, and Zuckerman built an adaptively secure leakage resilient secret sharing scheme (LRSS) with both rate and leakage rate being $\mathcal{O}(1/n)$, where $n$ is the number of parties. In this work, we build an adaptively secure LRSS that offers an interesting trade-off between rate, leakage rate, and the total number of shares from which an adversary can obtain leakage. As a special case, when considering $t$-out-of-$n$ secret sharing schemes for threshold $t = \alpha n$ (constant $0<\alpha<1$), we build a scheme with constant rate, constant leakage rate, and allow the adversary leakage from all but $t-1$ of the shares, while giving her the remaining $t-1$ shares completely in the clear. (Prior to this, a constant-rate LRSS scheme tolerating adaptive leakage was unknown for \textit{any} threshold.) Finally, we show applications of our techniques to both non-malleable secret sharing and secure message transmission.
2021
CRYPTO
Pushing the Limits of Valiant's Universal Circuits: Simpler, Tighter and More Compact 📺
A universal circuit (UC) is a general-purpose circuit that can simulate arbitrary circuits (up to a certain size $n$). Valiant provides a $k$-way recursive construction of UCs (STOC 1976), where $k$ tunes the complexity of the recursion. More concretely, Valiant gives theoretical constructions of 2-way and 4-way UCs of asymptotic (multiplicative) sizes $5n\log n$ and $4.75 n\log n$ respectively, which matches the asymptotic lower bound $\Omega(n\log n)$ up to some constant factor. Motivated by various privacy-preserving cryptographic applications, Kiss et al. (Eurocrypt 2016) validated the practicality of $2$-way universal circuits by giving example implementations for private function evaluation. G{\"{u}}nther et al. (Asiacrypt 2017) and Alhassan et al. (J. Cryptology 2020) implemented the 2-way/4-way hybrid UCs with various optimizations in place towards making universal circuits more practical. Zhao et al. (Asiacrypt 2019) optimized Valiant's 4-way UC to asymptotic size $4.5 n\log n$ and proved a lower bound of $3.64 n\log n$ for UCs under the Valiant framework. As the scale of computation goes beyond the 10-million-gate ($n=10^7$) or even billion-gate level ($n=10^9$), the constant factor in the UC size plays an increasingly important role in application performance. In this work, we investigate Valiant's universal circuits and present an improved framework for constructing universal circuits with the following advantages. [Simplicity.] Parameterization is no longer needed. In contrast to previous implementations, which resorted to a hybrid construction combining $k=2$ and $k=4$ to trade off fine granularity against asymptotic size-efficiency, our construction gets the best of both worlds when configured at the lowest complexity (i.e., $k=2$). [Compactness.] Our universal circuits have asymptotic size $3n\log n$, improving upon the best previously known $4.5n\log n$ by 33% and beating the $3.64 n\log n$ lower bound for UCs constructed under Valiant's framework (Zhao et al., Asiacrypt 2019). [Tightness.] We show that under our new framework the UC size is lower bounded by $2.95 n\log n$, which almost matches the $3n\log n$ circuit size of our $2$-way construction. We implement the 2-way universal circuit and compare its performance with other implementations, which confirms our theoretical analysis.
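A back-of-the-envelope comparison of the multiplicative constants quoted above, at the 10-million-gate scale mentioned in the abstract (illustrative arithmetic only; the asymptotic sizes omit lower-order terms):

```python
# Compare the leading terms c * n * log2(n) of the UC constructions quoted
# above at n = 10^7 simulated gates (lower-order terms are ignored).
import math

n = 10**7
constructions = {
    "Valiant 2-way (5 n log n)": 5,
    "Valiant 4-way (4.75 n log n)": 4.75,
    "Zhao et al. (4.5 n log n)": 4.5,
    "this work (3 n log n)": 3,
}
for name, c in constructions.items():
    print(f"{name}: ~{c * n * math.log2(n):.3e} gates")
```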
2021
CRYPTO
Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN 📺
Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early 90's. It further inspired the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic primitives. Unlike LWE, whose hardness can be reduced from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result that the worst-case hardness of the nearest codeword problem (NCP) (on balanced linear codes) at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2-1/\mathrm{poly}(n)$. It remained open whether a worst-case to average-case reduction can be established for standard (constant-noise) LPN, ideally with sub-exponential hardness. We start with a simple observation that the hardness of high-noise LPN over large fields is implied by that of LWE with the same modulus, and is thus reducible from the worst-case hardness of lattice problems. We then revisit [BLVW19], which is the main focus of this work. We first expand the underlying binary linear codes (of the NCP) to not only the balanced code considered in [BLVW19] but also to another code (in some sense dual to the balanced code). At the core of our reduction is a new variant of the smoothing lemma (for both binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices. In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0<c<1$ the constant-noise LPN problem is ($T=2^{\Omega(n^{1-c})}$, $\epsilon=2^{-\Omega(n^{\min(c,1-c)})}$, $q=2^{\Omega(n^{\min(c,1-c)})}$)-hard assuming that the NCP at the low noise rate $\tau=n^{-c}$ is ($T'={2^{\Omega(\tau n)}}$, $\epsilon'={2^{-\Omega(\tau n)}}$, $m={2^{\Omega(\tau n)}}$)-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ are time complexity, success rate, sample complexity, and codeword length respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision-resistant hash functions need constant-noise LPN with ($T={2^{\omega(\sqrt{n})}}$, $\epsilon'={2^{-\omega(\sqrt{n})}}$, $q={2^{\sqrt{n}}}$)-hardness (Yu et al., CRYPTO 2016 \& ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c = 0.5$. We leave it as an open problem whether the gap can be closed or whether there is a separation in place.
2021
CRYPTO
Improved Computational Extractors and their Applications 📺
Recent exciting breakthroughs have achieved the first two-source extractors that operate in the low min-entropy regime. Unfortunately, these constructions suffer from non-negligible error, and reducing the error to negligible remains an important open problem. In recent work, Garg, Kalai, and Khurana (GKK, Eurocrypt 2020) investigated a meaningful relaxation of this problem to the computational setting, in the presence of a common random string (CRS). In this relaxed model, their work built explicit two-source extractors for a restricted class of unbalanced sources with min-entropy $n^{\gamma}$ (for some constant $\gamma$) and negligible error, under the sub-exponential DDH assumption. In this work, we investigate whether computational extractors in the CRS model can be applied to more challenging environments. Specifically, we study network extractor protocols (Kalai et al., FOCS 2008) and extractors for adversarial sources (Chattopadhyay et al., STOC 2020) in the CRS model. We observe that these settings require extractors that work well for balanced sources, making the GKK results inapplicable. We remedy this situation by obtaining the following results, all of which are in the CRS model and assume the sub-exponential hardness of DDH. - We obtain ``optimal'' computational two-source and non-malleable extractors for balanced sources: requiring both sources to have only poly-logarithmic min-entropy, and achieving negligible error. To obtain this result, we perform a tighter and arguably simpler analysis of the GKK extractor. - We obtain a single-round network extractor protocol for poly-logarithmic min-entropy sources that tolerates an optimal number of adversarial corruptions. Prior work in the information-theoretic setting required sources with high min-entropy rates, and in the computational setting had round complexity that grew with the number of parties, required sources with linear min-entropy, and relied on exponential hardness (albeit without a CRS). - We obtain an ``optimal'' adversarial source extractor for poly-logarithmic min-entropy sources, where the number of honest sources is only 2 and each corrupted source can depend on either one of the honest sources. Prior work in the information-theoretic setting had to assume a large number of honest sources.
2021
CRYPTO
Broadcast-Optimal Two Round MPC with an Honest Majority 📺
This paper closes the question of the possibility of two-round MPC protocols achieving different security guarantees with and without the availability of broadcast in any given round. Cohen et al. (Eurocrypt 2020) study this question in the dishonest majority setting; we complete the picture by studying the honest majority setting. In the honest majority setting, given broadcast in both rounds, it is known that the strongest guarantee — guaranteed output delivery — is achievable (Gordon et al. Crypto 2015). We show that, given broadcast in the first round only, guaranteed output delivery is still achievable. Given broadcast in the second round only, we give a new construction that achieves identifiable abort, and we show that fairness — and thus guaranteed output delivery — are not achievable in this setting. Finally, if only peer-to-peer channels are available, we show that the weakest guarantee — selective abort — is the only one achievable for corruption thresholds t > 1 and for t = 1 and n = 3. On the other hand, it is already known that selective abort can be achieved in these cases. In the remaining cases, i.e., t = 1 and n > 3, it is known (from the work of Ishai et al. at Crypto 2010, and Ishai et al. at Crypto 2015) that guaranteed output delivery (and thus all weaker guarantees) are possible.
2021
CRYPTO
Compressing Proofs of k-Out-Of-n Partial Knowledge 📺
In a proof of partial knowledge, introduced by Cramer, Damg{\aa}rd and Schoenmakers (CRYPTO 1994), a prover knowing witnesses for some $k$-subset of $n$ given public statements can convince the verifier of this claim without revealing which $k$-subset. Their solution combines $\Sigma$-protocol theory and linear secret sharing, and achieves linear communication complexity for general $k,n$. Especially the ``one-out-of-$n$'' case $k=1$ has seen myriad applications during the last decades, e.g., in electronic voting, ring signatures, and confidential transaction systems. In this paper we focus on the discrete logarithm (DL) setting, where the prover claims knowledge of DLs of $k$-out-of-$n$ given elements. Groth and Kohlweiss (EUROCRYPT 2015) have shown how to solve the special case $k=1$, yet arbitrary $n$, with {\em logarithmic} (in $n$) communication, instead of linear as in prior work. However, their method takes explicit advantage of $k=1$ and does not generalize to $k>1$. Alternatively, an {\em indirect} approach for solving the considered problem is by translating the $k$-out-of-$n$ relation into a circuit and then applying communication-efficient circuit ZK. Indeed, for the $k=1$ case this approach has been highly optimized, e.g., in ZCash. Our main contribution is a new, simple honest-verifier zero-knowledge proof protocol for proving knowledge of $k$ out of $n$ DLs with {\em logarithmic} communication and {\em for general $k$ and $n$}, without requiring any generic circuit ZK machinery. Our solution puts forward a novel extension of the {\em compressed} $\Sigma$-protocol theory (CRYPTO 2020), which we then utilize to compress a new $\Sigma$-protocol for proving knowledge of $k$-out-of-$n$ DLs down to logarithmic size. The latter $\Sigma$-protocol is inspired by the CRYPTO 1994 approach, but a careful re-design of the original protocol is necessary for the compression technique to apply. Interestingly, {\em even for $k=1$ and general $n$} our approach improves prior {\em direct} approaches as it reduces prover complexity without increasing the communication complexity. Besides the conceptual simplicity, we also identify regimes of practical relevance where our approach achieves asymptotic and concrete improvements, e.g., in proof size and prover complexity, over the generic approach based on circuit-ZK. Finally, we show various extensions and generalizations of our core result. For instance, we extend our protocol to proofs of partial knowledge of Pedersen (vector) commitment openings, and/or to include a proof that the witness satisfies some additional constraint, and we show how to extend our results to non-threshold access structures.
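For readers unfamiliar with the underlying building block, below is a minimal interactive Schnorr $\Sigma$-protocol for knowledge of a single discrete logarithm, over a toy group chosen purely for illustration; the paper's contribution is compressing a proof of knowledge of $k$ out of $n$ such DLs to logarithmic size, which this sketch does not attempt.

```python
# Toy Schnorr Sigma-protocol: prove knowledge of x with y = g^x mod p,
# in a subgroup of prime order q. Parameters are tiny and for illustration
# only; this is the basic single-DL Sigma-protocol, not the compressed
# k-out-of-n protocol constructed in the paper.
import secrets

p, q, g = 23, 11, 4          # g = 4 generates the order-11 subgroup of Z_23^*
x = 7                        # prover's witness
y = pow(g, x, p)             # public statement

r = secrets.randbelow(q)     # prover's commitment randomness
a = pow(g, r, p)             # first message (commitment)
e = secrets.randbelow(q)     # verifier's challenge
z = (r + e * x) % q          # prover's response

# Verification: g^z == a * y^e (mod p)
assert pow(g, z, p) == (a * pow(y, e, p)) % p
print("verified")
```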
2021
CRYPTO
A Compressed Sigma-Protocol Theory for Lattices 📺
We show a \emph{lattice-based} solution for commit-and-prove transparent circuit zero-knowledge (ZK) with \emph{polylog-communication}, the \emph{first} not depending on PCPs. We start from \emph{compressed $\Sigma$-protocol theory} (CRYPTO 2020), which is built around basic $\Sigma$-protocols for opening an arbitrary linear form on a long secret vector that is compactly committed to. These protocols are first compressed using a recursive ``folding-technique'' adapted from Bulletproofs, at the expense of logarithmic rounds. Proving in ZK that the secret vector satisfies a given constraint -- captured by a circuit -- is then by (blackbox) reduction to the linear case, via arithmetic secret-sharing techniques adapted from MPC. Commit-and-prove is also facilitated, i.e., when commitment(s) to the secret vector are created ahead of any circuit-ZK proof. On several platforms (incl.\ DL) this leads to logarithmic communication. Non-interactive versions follow from Fiat-Shamir. This abstract modular theory strongly suggests that it should somehow be supported by a lattice-platform \emph{as well}. However, when going through the motions and trying to establish low communication (on a SIS-platform), a certain significant lack in current understanding of multi-round protocols is exposed. Namely, as opposed to the DL-case, the basic $\Sigma$-protocol in question typically has \emph{poly-small challenge} space. Taking into account the compression-step -- which yields \emph{non-constant} rounds -- and the necessity for parallelization to reduce error, there is no known tight result that the compound protocol admits an efficient knowledge extractor. We resolve the state of affairs here by a combination of two novel results which are fully general and of independent interest. The first gives a tight analysis of efficient knowledge extraction in case of non-constant rounds combined with poly-small challenge space, whereas the second shows that parallel repetition indeed forces rapid decrease of knowledge error. Moreover, in our present context, arithmetic secret sharing is not defined over a large finite field but over a quotient of a number ring and this forces our careful adaptation of how the linearization techniques are deployed. We develop our protocols in an abstract framework that is conceptually simple and can be flexibly instantiated. In particular, the framework applies to arbitrary rings and norms.
2021
CRYPTO
Authenticated Key Exchange and Signatures with Tight Security in the Standard Model 📺
We construct the first authenticated key exchange protocols that achieve tight security in the standard model. Previous works either relied on techniques that seem to inherently require a random oracle, or achieved only “Multi-Bit-Guess” security, which is not known to compose tightly, for instance, to build a secure channel. Our constructions are generic, based on digital signatures and key encapsulation mechanisms (KEMs). The main technical challenge we resolve is to determine suitable KEM security notions which, on the one hand, are strong enough to yield tight security but, at the same time, are weak enough to be efficiently instantiable in the standard model, based on standard techniques such as universal hash proof systems. Digital signature schemes with tight multi-user security in the presence of adaptive corruptions are a central building block, which is used in all known constructions of tightly-secure AKE with full forward security. We identify a subtle gap in the security proof of the only previously known efficient standard model scheme by Bader et al. (TCC 2015). We develop a new variant, which yields the currently most efficient signature scheme that achieves this strong security notion without random oracles and based on standard hardness assumptions.
2021
CRYPTO
Witness Authenticating NIZKs and Applications 📺
We initiate the study of witness authenticating NIZK proof systems (waNIZKs), in which one can use a witness $w$ of a statement $x$ to identify whether a valid proof for $x$ is indeed generated using $w$. Such a new identification functionality enables more diverse applications, and it also puts new requirements on soundness: (1) no adversary can generate a valid proof that will not be identified by any witness; and (2) no adversary can forge a proof using her valid witness to frame others. To work around the obvious obstacle towards conventional zero-knowledgeness, we define entropic zero-knowledgeness, which requires the proof to leak no partial information if the witness has sufficient computational entropy. We give a formal treatment of this new primitive. The modeling turns out to be quite involved; multiple subtle points arise and particular care is required. We present general constructions from standard assumptions. We also demonstrate three applications: non-malleable (perfect one-way) hash, group signatures with verifier-local revocation, and plaintext-checkable public-key encryption. Our waNIZK provides a new tool to advance the state of the art in all these applications.
2021
CRYPTO
Receiver-Anonymity in Rerandomizable RCCA-Secure Cryptosystems Resolved 📺
In this work, we resolve the open problem raised by Prabhakaran and Rosulek at CRYPTO 2007, and present the first anonymous, rerandomizable, Replayable-CCA (RCCA) secure public key encryption scheme. This solution opens the door to numerous privacy-oriented applications with a highly desired RCCA security level. At the core of our construction is a non-trivial extension of smooth projective hash functions (Cramer and Shoup, EUROCRYPT 2002), and a modular generic framework developed for constructing Rand-RCCA-secure encryption schemes with receiver-anonymity. The framework gives an enhanced abstraction of the original Prabhakaran and Rosulek’s scheme (which was the first construction of Rand-RCCA-secure encryption in the standard model), where the most crucial enhancement is the first realization of the desirable property of receiver-anonymity, essential to privacy settings. It also serves as a conceptually more intuitive and generic understanding of RCCA security, which leads, for example, to new implementations of the notion. Finally, note that (since CCA security is not applicable to the privacy applications motivating our work) the concrete results and the conceptual advancement presented here, seem to substantially expand the power and relevance of the notion of Rand-RCCA-secure encryption.
2021
CRYPTO
Traceable Secret Sharing and Applications 📺
Consider a scenario where Alice stores some secret data $s$ on $n$ servers using a $t$-out-of-$n$ secret sharing scheme. Trudy (the collector) is interested in the secret data of Alice and is willing to pay for it. Trudy publishes an advertisement on the internet which describes an elaborate cryptographic scheme to collect the shares from the $n$ servers. Each server who decides to submit its share is paid a hefty monetary reward and is guaranteed ``immunity" from being caught or prosecuted in a court for violating its service agreement with Alice. Bob is one of the servers and sees this advertisement. On examining the collection scheme closely, Bob concludes that there is no way for Alice to prove in a court that he submitted his share. Indeed, if Bob is rational, he might use the cryptographic scheme in the advertisement and submit his share since there are no penalties and no fear of being caught and prosecuted. Can we design a secret sharing scheme which Alice can use to avoid such a scenario? We introduce a new primitive called \textit{Traceable Secret Sharing} to tackle this problem. In particular, a traceable secret sharing scheme guarantees that a cheating server always runs the risk of getting traced and prosecuted by providing valid evidence (which can be examined in a court of law) implicating its dishonest behavior. We explore various definitional aspects and show that such schemes are highly non-trivial to construct (even ignoring efficiency aspects). We then give an efficient construction of traceable secret sharing assuming the existence of a secure two-party computation protocol. We also show an application of this primitive in constructing traceable protocols for multi-server delegation of computation.
2021
CRYPTO
On the Possibility of Basing Cryptography on $\EXP \neq \BPP$ 📺
Liu and Pass (FOCS'20) recently demonstrated an equivalence between the existence of one-way functions and mild average-case hardness of the time-bounded Kolmogorov complexity problem. In this work, we establish a similar equivalence but to a different form of time-bounded Kolmogorov Complexity---namely, Levin's notion of Kolmogorov Complexity---whose hardness is closely related to the problem of whether $\EXP \neq \BPP$. In more detail, let $Kt(x)$ denote the Levin-Kolmogorov Complexity of the string $x$; that is, $Kt(x) = \min_{\desc \in \bitset^*, t \in \N}\{|\desc| + \lceil \log t \rceil: U(\desc, 1^t) = x\}$, where $U$ is a universal Turing machine, and let $\mktp$ denote the language of pairs $(x,k)$ having the property that $Kt(x) \leq k$. We demonstrate that: - $\mktp$ is \emph{two-sided error} mildly average-case hard (i.e., $\mktp \notin \HeurpBPP$) iff infinitely-often one-way functions exist. - $\mktp$ is \emph{errorless} mildly average-case hard (i.e., $\mktp \notin \AvgpBPP$) iff $\EXP \neq \BPP$. Thus, the only ``gap'' towards getting (infinitely-often) one-way functions from the assumption that $\EXP \neq \BPP$ is the seemingly ``minor'' technical gap between two-sided error and errorless average-case hardness of the $\mktp$ problem. As a corollary of this result, we additionally demonstrate that any reduction from errorless to two-sided error average-case hardness for $\mktp$ implies (unconditionally) that $\NP \neq \P$. We finally consider other alternative notions of Kolmogorov complexity---including space-bounded Kolmogorov complexity and conditional Kolmogorov complexity---and show how average-case hardness of problems related to them characterizes log-space computable one-way functions, or one-way functions in $\NC^0$.
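As a small worked example of the $Kt$ measure (illustration only, not from the paper): for $x = 0^n$, a constant-size program that reads $n$ in binary and prints $n$ zeros gives a description of length $O(\log n)$ running in time $t = O(n)$, so $Kt(0^n) \le O(\log n) + \lceil \log O(n) \rceil = O(\log n)$, whereas a simple counting argument shows that all but an $o(1)$ fraction of the strings $x \in \{0,1\}^n$ satisfy $Kt(x) \ge n - O(\log n)$.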
2021
CRYPTO
Oblivious RAM with Worst-Case Logarithmic Overhead 📺
We present the first Oblivious RAM (ORAM) construction that for $N$ memory blocks supports accesses with \emph{worst-case} $O(\log N)$ overhead for any block size $\Omega(\log N)$ while requiring a client memory of only a constant number of memory blocks. We rely on the existence of one-way functions and guarantee computational security. Our result closes a long line of research on fundamental feasibility results for ORAM constructions as logarithmic overhead is necessary. The previous best logarithmic overhead construction only guarantees it in an \emph{amortized} sense, i.e., logarithmic overhead is achieved only for long enough access sequences, where some of the individual accesses incur $\Theta(N)$ overhead. The previously best ORAM in terms of \emph{worst-case} overhead achieves $O(\log^2 N/\log\log N)$ overhead. Technically, we design a novel de-amortization framework for modern ORAM constructions that use the ``shuffled inputs'' assumption. Our framework significantly departs from all previous de-amortization frameworks, originating from Ostrovsky and Shoup (STOC~'97), that seem to be fundamentally too weak to be applied on modern ORAM constructions.
2021
CRYPTO
Provable Security Analysis of FIDO2 📺
We carry out the first provable security analysis of the new FIDO2 protocols, the promising FIDO Alliance’s proposal for a standard for passwordless user authentication. Our analysis covers the core components of FIDO2: the W3C’s Web Authentication (WebAuthn) specification and the new Client-to-Authenticator Protocol (CTAP2). Our analysis is modular. For WebAuthn and CTAP2, in turn, we propose appropriate security models that aim to capture their intended security goals and use the models to analyze their security. First, our proof confirms the authentication security of WebAuthn. Then, we show CTAP2 can only be proved secure in a weak sense; meanwhile, we identify a series of its design flaws and provide suggestions for improvement. To withstand stronger yet realistic adversaries, we propose a generic protocol called sPACA and prove its strong security; with proper instantiations, sPACA is also more efficient than CTAP2. Finally, we analyze the overall security guarantees provided by FIDO2 and WebAuthn+sPACA based on the security of their components. We expect that our models and provable security results will help clarify the security guarantees of the FIDO2 protocols. In addition, we advocate the adoption of our sPACA protocol as a substitute for CTAP2 for both stronger security and better performance.
2021
CRYPTO
MoSS: Modular Security Specifications Framework 📺
Applied cryptographic protocols have to meet a rich set of security requirements under diverse environments and against diverse adversaries. However, currently used security specifications, based on either simulation (e.g., `ideal functionality' in UC) or games, are monolithic, combining together different aspects of protocol requirements, environment and assumptions. Such security specifications are complex, error-prone, and foil reusability, modular analysis and incremental design. We present the Modular Security Specifications (MoSS) framework, which cleanly separates the security requirements (goals) which a protocol should achieve, from the models (assumptions) under which each requirement should be ensured. This modularity allows us to reuse individual models and requirements across different protocols and tasks, and to compare protocols for the same task, either under different assumptions or satisfying different sets of requirements. MoSS is flexible and extendable, e.g., it can support both asymptotic and concrete definitions for security. So far, we confirmed the applicability of MoSS to two applications: secure broadcast protocols and PKI schemes.
2021
CRYPTO
MHz2k: MPC from HE over $\mathbb{Z}_{2^k}$ with New Packing, Simpler Reshare, and Better ZKP 📺
We propose a multi-party computation (MPC) protocol over $\mathbb{Z}_{2^k}$ secure against an actively corrupted majority from somewhat homomorphic encryption. The main technical contributions are: (i) a new efficient packing method for $\mathbb{Z}_{2^k}$-messages in lattice-based somewhat homomorphic encryption schemes, (ii) a simpler reshare protocol for level-dependent packings, and (iii) a more efficient zero-knowledge proof of plaintext knowledge on cyclotomic rings $\Z[X]/\Phi_M(X)$ with $M$ being a prime. Integrating them, our protocol shows from 2.2x up to 4.8x improvements in amortized communication costs compared to the previous best results. Our techniques not only improve the efficiency of MPC over $\mathbb{Z}_{2^k}$ considerably, but also provide a toolkit that can be leveraged when designing other cryptographic primitives over $\mathbb{Z}_{2^k}$.
2021
CRYPTO
DualRing: Generic Construction of Ring Signatures with Efficient Instantiations 📺
We introduce a novel generic ring signature construction, called DualRing, which can be built from several canonical identification schemes (such as Schnorr identification). DualRing differs from the classical ring signatures by its formation of two rings: a ring of commitments and a ring of challenges. It has a structural difference from the common ring signature approaches based on accumulators or zero-knowledge proofs of the signer index. Comparatively, DualRing has a number of unique advantages. Considering the DL-based setting by using Schnorr identification scheme, our DualRing structure allows the signature size to be compressed into logarithmic size via an argument of knowledge system such as Bulletproofs. We further improve on the Bulletproofs argument system to eliminate about half of the computation while maintaining the same proof size. We call this Sum Argument and it can be of independent interest. This DL-based construction, named DualRing-EC, using Schnorr identification with Sum Argument has the shortest ring signature size in the literature without using trusted setup. Considering the lattice-based setting, we instantiate DualRing by a canonical identification based on M-LWE and M-SIS. In practice, we achieve the shortest lattice-based ring signature, named DualRing-LB, when the ring size is between 4 and 2000. DualRing-LB is also 5x faster in signing and verification than the fastest lattice-based scheme by Esgin et al. (CRYPTO'19).
2021
CRYPTO
A Black-Box Approach to Post-Quantum Zero-Knowledge in Constant Rounds 📺
In a recent seminal work, Bitansky and Shmueli (STOC '20) gave the first construction of a constant round zero-knowledge argument for NP secure against quantum attacks. However, their construction has several drawbacks compared to the classical counterparts. Specifically, their construction only achieves computational soundness, requires strong assumptions of quantum hardness of learning with errors (QLWE assumption) and the existence of quantum fully homomorphic encryption (QFHE), and relies on non-black-box simulation. In this paper, we resolve these issues at the cost of weakening the notion of zero-knowledge to what is called ϵ-zero-knowledge. Concretely, we construct the following protocols: - We construct a constant round interactive proof for NP that satisfies statistical soundness and black-box ϵ-zero-knowledge against quantum attacks assuming the existence of collapsing hash functions, which is a quantum counterpart of collision-resistant hash functions. Interestingly, this construction is just an adapted version of the classical protocol by Goldreich and Kahan (JoC '96) though the proof of ϵ-zero-knowledge property against quantum adversaries requires novel ideas. - We construct a constant round interactive argument for NP that satisfies computational soundness and black-box ϵ-zero-knowledge against quantum attacks only assuming the existence of post-quantum one-way functions. At the heart of our results is a new quantum rewinding technique that enables a simulator to extract a committed message of a malicious verifier while simulating verifier's internal state in an appropriate sense.
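For readers unfamiliar with the relaxation, ϵ-zero-knowledge can be summarized informally (in the spirit of the definition used in this line of work) as requiring, for every noticeable ϵ, a simulator running in time polynomial in $\lambda$ and $1/\epsilon$ such that $\big|\Pr[D(\mathrm{View}_{V^*}) = 1] - \Pr[D(\mathrm{Sim}_{\epsilon}) = 1]\big| \le \epsilon + \mathrm{negl}(\lambda)$ for every distinguisher $D$, in contrast to standard zero-knowledge, where a single simulator must achieve negligible distinguishing advantage.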
2021
CRYPTO
A Rational Protocol Treatment of 51% Attacks 📺
Game-theoretic analysis of cryptocurrencies and, more generally, blockchain-based decentralized ledgers offers insight on their economic robustness, and their behavior when even the cryptographic assumptions that underpin their security fail. In this work we utilize the recently proposed blockchain adaptation of the rational protocol design (RPD) framework [EUROCRYPT~'18] to analyze 51\% double-spending attacks against Nakamoto-style cryptocurrencies. We observe a property of the originally proposed utility class that yields an unnatural behavior against such attacks, and show how to devise a utility that avoids this pitfall and makes predictions that match the observable behavior---i.e., that renders attacking a dominant strategy in settings where an attack was indeed observed. We then propose a generic modification to the underlying protocol which deters attacks on consistency by adversaries controlling a majority of the system's resources, including the 51\% double-spending attack. This can be used as guidance to patch systems that have suffered such attacks, e.g., Ethereum Classic and Bitcoin Cash, and serves as a demonstration of the power of game-theoretic analyses.
2021
CRYPTO
Differential-Linear Cryptanalysis from an Algebraic Perspective 📺
Differential-linear cryptanalysis is an important cryptanalytic tool and has been extensively researched since its introduction by Langford and Hellman in 1994. There are nevertheless very few methods for studying the middle part where the differential and the linear trail connect, besides the Differential-Linear Connectivity Table (Bar-On et al., EUROCRYPT 2019) and the experimental approach. In this paper, we study differential-linear cryptanalysis from an algebraic perspective. We first introduce a technique called Differential Algebraic Transitional Form (DATF) for differential-linear cryptanalysis, then develop a new theory of estimation of the differential-linear bias and techniques for key recovery in differential-linear cryptanalysis. The techniques are applied to the CAESAR finalist ASCON, the AES finalist SERPENT, and the eSTREAM finalist Grain v1. The bias of the differential-linear approximation is estimated for ASCON and SERPENT. The theoretical estimates of the bias are more accurate than those obtained by the DLCT, and the techniques can be applied with more rounds. Our general techniques can also be used to estimate the bias of Grain v1 in differential cryptanalysis, and have a markedly better performance than the Differential Engine tool tailor-made for the cipher. Improved key recovery attacks on round-reduced variants of these ciphers are then proposed. To the best of our knowledge, these are thus far the best known cryptanalytic results on SERPENT, as well as the best differential-linear cryptanalysis of ASCON and the best initialization analysis of Grain v1. The results have been fully verified by experiments. Notably, security analysis of SERPENT is one of the most important applications of differential-linear cryptanalysis in the last two decades. The results in this paper update the differential-linear cryptanalysis of SERPENT-128 and SERPENT-256 with one more round after the work of Biham, Dunkelman and Keller in 2003.
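For context, the classical heuristic that such refinements improve upon can be stated as follows (a commonly quoted estimate under the usual independence and uniformity assumptions, not the formula developed in this paper): if $E = E_1 \circ E_0$ admits a differential over $E_0$ of probability $p$ and a linear approximation over $E_1$ of bias $q$, then the bias of the resulting differential-linear approximation is estimated as $\varepsilon \approx 2\,p\,q^{2}$, and it is precisely the inaccuracy of this estimate in the rounds where the two parts connect that motivates tools such as the DLCT and the DATF introduced here.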
2021
CRYPTO
Round Efficient Secure Multiparty Quantum Computation with Identifiable Abort 📺
A recent result by Dulek et al. (EUROCRYPT 2020) showed a secure protocol for computing any quantum circuit even without the presence of an honest majority. Their protocol, however, is susceptible to a ``denial of service'' attack and allows even a single corrupted party to force an abort. We propose the first quantum protocol that admits security-with-identifiable-abort, which allows the honest parties to agree on the identity of a corrupted party in case of an abort. Additionally, our protocol is the first to have the property that the number of rounds where quantum communication is required is independent of the circuit complexity. Furthermore, if there exists a post-quantum secure classical protocol whose round complexity is independent of the circuit complexity, then our protocol has this property as well. Our protocol is secure under the assumption that classical quantum-resistant fully homomorphic encryption schemes with decryption circuit of logarithmic depth exist. Interestingly, our construction also admits a reduction from quantum fair secure computation to classical fair secure computation.
2021
CRYPTO
How to Meet Ternary LWE Keys 📺
The LWE problem with its ring variants is today the most prominent candidate for building efficient public key cryptosystems resistant to quantum computers. NTRU-type cryptosystems use an LWE-type variant with small max-norm secrets, usually with ternary coefficients from the set $\{-1,0,1\}$. The presumably best attack on these schemes is a hybrid attack that combines lattice reduction techniques with Odlyzko's Meet-in-the-Middle approach. Odlyzko's algorithm is a classical combinatorial attack that for key space size $\S$ runs in time $\S^{0.5}$. We substantially improve on this Meet-in-the-Middle approach, using the representation technique developed for subset sum algorithms. Asymptotically, our heuristic Meet-in-the-Middle attack runs in time roughly $\S^{0.25}$, which also beats the $\S^{\frac 1 3}$ complexity of the best known quantum algorithm. For the round-3 NIST post-quantum encryptions NTRU and NTRU Prime we obtain non-asymptotic instantiations of our attack with complexity roughly $\S^{0.3}$. As opposed to other combinatorial attacks, our attack benefits from larger LWE field sizes $q$, as they are often used in modern lattice-based signatures. For example, for BLISS and GLP signatures we obtain non-asymptotic combinatorial attacks around $\S^{0.28}$. Our attacks do not invalidate the security claims of the aforementioned schemes. However, they establish improved combinatorial upper bounds for their security. We leave it as an open question whether our new Meet-in-the-Middle attack in combination with lattice reduction can be used to speed up the hybrid attack.
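To recall the baseline being improved (a textbook-style sketch, not the new algorithm): writing the LWE-type relation as $A\mathbf{s} + \mathbf{e} = \mathbf{b} \bmod q$ with ternary $\mathbf{s}$, Odlyzko's Meet-in-the-Middle splits $\mathbf{s} = \mathbf{s}_1 + \mathbf{s}_2$ over disjoint halves of the coordinates and looks for an approximate collision $A\mathbf{s}_1 \approx \mathbf{b} - A\mathbf{s}_2 \pmod q$ between two lists of size about $S^{1/2}$, where $S$ denotes the key-space size; the representation technique instead allows exponentially many overlapping splittings of the same $\mathbf{s}$, which is what drives the heuristic exponent down towards $S^{1/4}$.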
2021
CRYPTO
Quadratic Secret Sharing and Conditional Disclosure of Secrets 📺
There is a huge gap between the upper and lower bounds on the share size of secret-sharing schemes for arbitrary $n$-party access structures, and consistent with our current knowledge the optimal share size can be anywhere between polynomial in $n$ and exponential in $n$. For linear secret-sharing schemes, we know that the share size for almost all $n$-party access structures must be exponential in $n$. Furthermore, most constructions of efficient secret-sharing schemes are linear. We would like to study larger classes of secret-sharing schemes with two goals. On one hand, we want to prove lower bounds for larger classes of secret-sharing schemes, possibly shedding some light on the share size of general secret-sharing schemes. On the other hand, we want to construct efficient secret-sharing schemes for access structures that do not have efficient linear secret-sharing schemes. Given this motivation, Paskin-Cherniavsky and Radune (ITC'20) defined and studied a new class of secret-sharing schemes in which the shares are generated by applying degree-$d$ polynomials to the secret and some random field elements. The special case $d=1$ corresponds to linear and multi-linear secret-sharing schemes. We define and study two additional classes of polynomial secret-sharing schemes: (1) schemes in which for every authorized set the reconstruction of the secret is done using polynomials and (2) schemes in which both sharing and reconstruction are done by polynomials. For linear secret-sharing schemes, schemes with linear sharing and schemes with linear reconstruction are equivalent. We give evidence that for polynomial secret-sharing schemes, schemes with polynomial sharing are probably stronger than schemes with polynomial reconstruction. We also prove lower bounds on the share size for schemes with polynomial reconstruction. On the positive side, we provide constructions of secret-sharing schemes and conditional disclosure of secrets (CDS) protocols with quadratic sharing and reconstruction. We extend a construction of Liu et al. (CRYPTO'17) and construct optimal quadratic $k$-server CDS protocols for functions $f:[N]^k\rightarrow \set{0,1}$ with message size $O(N^{(k-1)/3})$. We show how to transform our quadratic $k$-server CDS protocol to a robust CDS protocol, and use the robust CDS protocol to construct quadratic secret-sharing schemes for arbitrary access structures with share size $O(2^{0.705n})$; this is better than the best known share size of $O(2^{0.7576n})$ for linear secret-sharing schemes and worse than the best known share size of $O(2^{0.585n})$ for general secret-sharing schemes.
2021
CRYPTO
Non-Malleable Codes for Bounded Parallel-Time Tampering 📺
Non-malleable codes allow one to encode data in such a way that once a codeword is being tampered with, the modified codeword is either an encoding of the original message, or a completely unrelated one. Since the introduction of this notion by Dziembowski, Pietrzak, and Wichs (ICS '10 and J. ACM '18), there has been a large body of works realizing such coding schemes secure against various classes of tampering functions. It is well known that there is no efficient non-malleable code secure against all polynomial size tampering functions. Nevertheless, no code which is non-malleable for \emph{bounded} polynomial size attackers is known and obtaining such a code has been a major open problem. We present the first construction of a non-malleable code secure against all polynomial size tampering functions that have {bounded} parallel time. This is an even larger class than all bounded polynomial size functions. In particular, this class includes all functions in non-uniform $\mathbf{NC}$ (and much more). Our construction is in the plain model (i.e., no trusted setup) and relies on several cryptographic assumptions such as keyless hash functions, time-lock puzzles, as well as other standard assumptions. Additionally, our construction has several appealing properties: the complexity of encoding is independent of the class of tampering functions and we can obtain (sub-)exponentially small error.
2021
CRYPTO
Sumcheck Arguments and their Applications 📺
We introduce a class of interactive protocols, which we call *sumcheck arguments*, that establishes a novel connection between the sumcheck protocol (Lund et al. JACM 1992) and folding techniques for Pedersen commitments (Bootle et al. EUROCRYPT 2016). Informally, we consider a general notion of bilinear commitment over modules, and show that the sumcheck protocol applied to a certain polynomial associated with the commitment scheme yields a succinct argument of knowledge for openings of the commitment. Building on this, we additionally obtain succinct arguments for the NP-complete language R1CS over certain rings. Sumcheck arguments enable us to recover as a special case numerous prior works in disparate cryptographic settings (such as discrete logarithms, pairings, RSA groups, lattices), providing one abstract framework to understand them all. Further, we answer open questions raised in prior works, such as obtaining a lattice-based succinct argument from the SIS assumption for satisfiability problems over rings.
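To fix notation (the standard sumcheck protocol of Lund et al., recalled here for reference): the prover claims that an $\ell$-variate low-degree polynomial $p$ satisfies $\sum_{\omega \in \{0,1\}^{\ell}} p(\omega) = \tau$. In round $i$ it sends the univariate polynomial $q_i(X) = \sum_{\omega \in \{0,1\}^{\ell-i}} p(r_1,\dots,r_{i-1},X,\omega)$; the verifier checks $q_1(0)+q_1(1)=\tau$ (respectively $q_i(0)+q_i(1)=q_{i-1}(r_{i-1})$ for $i>1$), samples a random challenge $r_i$, and after $\ell$ rounds checks $q_\ell(r_\ell) = p(r_1,\dots,r_\ell)$ with a single evaluation of $p$. The observation of this paper is, roughly, that running this protocol on a polynomial derived from a bilinear commitment yields an argument of knowledge of the committed opening.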
2021
CRYPTO
Secure Computation from One-Way Noisy Communication, or: Anti-Correlation via Anti-Concentration 📺
Can a sender encode a pair of messages (m_0,m_1) jointly, and send their encoding over (say) a binary erasure channel, so that the receiver can decode exactly one of the two messages and the sender does not know which one? Garg et al. (Crypto 2015) showed that this is information-theoretically impossible. We show how to circumvent this impossibility by assuming that the receiver is computationally bounded, settling for an inverse-polynomial security error (which is provably necessary), and relying on ideal obfuscation. Our solution creates a ``computational anti-correlation'' between the events of receiving m_0 and receiving m_1 by exploiting the anti-concentration of the binomial distribution. The ideal obfuscation primitive in our construction can either be directly realized using (stateless) tamper-proof hardware, yielding an unconditional result, or heuristically instantiated using existing indistinguishability obfuscation schemes. We put forward a new notion of obfuscation that suffices to securely instantiate our construction. As a corollary, we get similar feasibility results for general secure computation of sender-receiver functionalities by leveraging the completeness of the above ``random oblivious transfer'' functionality.
2021
CRYPTO
Proof-Carrying Data without Succinct Arguments 📺
Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Known approaches to construct PCD are based on succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme. In this paper we show how to obtain PCD without relying on SNARKs. We construct a PCD scheme given any non-interactive argument of knowledge (e.g., with linear-size arguments) that has a *split accumulation scheme*, which is a weak form of accumulation that we introduce. Moreover, we construct a transparent non-interactive argument of knowledge for R1CS whose split accumulation is verifiable via a (small) *constant number of group and field operations*. Our construction is proved secure in the random oracle model based on the hardness of discrete logarithms, and it leads, via the random oracle heuristic and our result above, to concrete efficiency improvements for PCD. Along the way, we construct a split accumulation scheme for Hadamard products under Pedersen commitments and for a simple polynomial commitment scheme based on Pedersen commitments. Our results are supported by a modular and efficient implementation.
2021
CRYPTO
Efficient Key Recovery for all HFE Signature Variants 📺
The HFE cryptosystem is one of the best known multivariate schemes. Especially in the area of digital signatures, the HFEv- variant offers short signatures and high performance. Recently, an instance of the HFEv- signature scheme called GeMSS was elected as one of the alternative candidates for signature schemes in the third round of the NIST Post Quantum Crypto (PQC) Standardization Project. In this paper, we propose a new key recovery attack on the HFEv- signature scheme. Our attack shows that both the Minus and the Vinegar modification do not enhance the security of the basic HFE scheme significantly. This shows that it is very difficult to build a secure and efficient signature scheme on the basis of HFE. In particular, we use our attack to show that the proposed parameters of the GeMSS scheme are not as secure as claimed.
2021
CRYPTO
New Approaches for Quantum Copy-Protection 📺
Quantum copy protection uses the unclonability of quantum states to construct quantum software that provably cannot be pirated. Copy protection would be immensely useful, but unfortunately little is known about how to achieve it in general. In this work, we make progress on this goal, by giving the following results: * We show how to copy protect any program that cannot be learned from its input-output behavior, relative to a classical oracle. This improves on Aaronson (CCC 2009), which achieves the same relative to a quantum oracle. By instantiating the oracle with post-quantum candidate obfuscation schemes, we obtain a heuristic construction of copy protection. * We show, roughly, that any program which can be watermarked can be copy detected, a weaker version of copy protection that does not prevent copying, but guarantees that any copying can be detected. Our scheme relies on the security of the assumed watermarking, plus the assumed existence of public key quantum money. Our construction is general, applicable to many recent watermarking schemes.
2021
CRYPTO
Multi-theorem Designated-Verifier NIZK for QMA 📺
We present a designated-verifier non-interactive zero-knowledge argument system for QMA with multi-theorem security under the Learning with Errors Assumption. All previous such protocols for QMA are only single-theorem secure. We also relax the setup assumption required in previous works. We prove security in the malicious designated-verifier (MDV-NIZK) model (Quach, Rothblum, and Wichs, EUROCRYPT 2019), where the setup consists of a mutually trusted random string and an untrusted verifier public key. Our main technical contribution is a general compiler that given a NIZK for NP and a quantum sigma protocol for QMA generates an MDV-NIZK protocol for QMA.
2021
CRYPTO
Hidden Cosets and Applications to Unclonable Cryptography 📺
In 2012, Aaronson and Christiano introduced the idea of hidden subspace states to build public-key quantum money [STOC '12]. Since then, this idea has been applied to realize several other cryptographic primitives which enjoy some form of unclonability. In this work, we propose a generalization of hidden subspace states to hidden coset states. We study different unclonable properties of coset states and several applications: * We show that, assuming indistinguishability obfuscation (iO), hidden coset states possess a certain direct product hardness property, which immediately implies a tokenized signature scheme in the plain model. Previously, a tokenized signature scheme was known only relative to an oracle, from a work of Ben-David and Sattath [QCrypt '17]. * Combining a tokenized signature scheme with extractable witness encryption, we give a construction of an unclonable decryption scheme in the plain model. The latter primitive was recently proposed by Georgiou and Zhandry [ePrint '20], who gave a construction relative to a classical oracle. * We conjecture that coset states satisfy a certain natural monogamy-of-entanglement property. Assuming this conjecture is true, we remove the requirement for extractable witness encryption in our unclonable decryption construction. As potential evidence in support of the conjecture, we prove a weaker version of this monogamy property, which we believe will still be of independent interest. * Finally, we give the first construction of a copy-protection scheme for pseudorandom functions (PRFs) in the plain model. Our scheme is secure either assuming iO and extractable witness encryption, or iO, LWE and the conjectured monogamy property mentioned above. This is the first example of a copy-protection scheme with provable security in the plain model for a class of functions that is not evasive.
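For concreteness (using our own notation for the standard object): a hidden coset state for a subspace $A \subseteq \mathbb{F}_2^n$ and shift vectors $s, s'$ is $|A_{s,s'}\rangle = \frac{1}{\sqrt{|A|}} \sum_{a \in A} (-1)^{\langle a, s'\rangle}\, |a + s\rangle$, which generalizes the subspace states of Aaronson and Christiano (the case $s = s' = 0$); measuring in the computational basis yields an element of $A+s$, while measuring in the Hadamard basis yields an element of $A^{\perp}+s'$.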
2021
CRYPTO
KHAPE: Asymmetric PAKE from Key-Hiding Key Exchange 📺
OPAQUE [Jarecki et al., Eurocrypt 2018] is an asymmetric password authenticated key exchange (aPAKE) protocol that is being developed as an Internet standard and for use within TLS 1.3. OPAQUE combines an Oblivious PRF (OPRF) with an authenticated key exchange to provide strong security properties, including security against pre-computation attacks (called saPAKE security). However, the security of OPAQUE relies crucially on the integrity of the OPRF. If the latter breaks (by cryptanalysis, quantum attacks or security compromise), the user's password is immediately exposed to an offline dictionary attack. To address this weakness, we present KHAPE, a variant of OPAQUE that does not require the use of an OPRF to achieve aPAKE security, resulting in improved resilience and performance. An OPRF can be optionally added to KHAPE, for enhanced saPAKE security, but without opening the password to an offline dictionary attack upon OPRF compromise. In addition to resilience to OPRF compromise, a DH-based implementation of KHAPE (using HMQV) offers the best performance among aPAKE protocols in terms of exponentiations with less than the cost of an exponentiation on top of an unauthenticated Diffie-Hellman exchange. KHAPE uses three messages with explicit client authentication and four with explicit server authentication (one more than OPAQUE in the latter case). All results in the paper are proven within the UC framework in the ideal cipher model. Of independent interest is our treatment of "key-hiding AKE" which KHAPE uses as a main component, and our UC proofs of AKE security for protocols 3DH (a basis of Signal) and HMQV that we use as efficient instantiations of KHAPE.
2021
CRYPTO
An Algebraic Framework for Universal and Updatable SNARKs 📺
We introduce Checkable Subspace Sampling Arguments, a new information theoretic interactive proof system in which the prover shows that a vector has been sampled in a subspace according to the verifier's coins. We show that this primitive provides a unifying view that explains the technical core of most of the constructions of universal and updatable pairing-based (zk)SNARKs. This characterization is extended to a fully algebraic framework for designing such SNARKs in a modular way. We propose new constructions of CSS arguments that lead to SNARKs with different performance trade-offs.
2021
CRYPTO
Improved torsion-point attacks on SIDH variants 📺
SIDH is a post-quantum key exchange algorithm based on the presumed difficulty of finding isogenies between supersingular elliptic curves. However, SIDH and related cryptosystems also reveal additional information: the restriction of a secret isogeny to a subgroup of the curve (torsion-point information). Petit [31] was the first to demonstrate that torsion-point information could noticeably lower the difficulty of finding secret isogenies. In particular, Petit showed that "overstretched'' parameterizations of SIDH could be broken in polynomial time. However, this did not impact the security of any cryptosystems proposed in the literature. The contribution of this paper is twofold: First, we strengthen the techniques of [31] by exploiting additional information coming from a dual and a Frobenius isogeny. This extends the impact of torsion-point attacks considerably. In particular, our techniques yield a classical attack that completely breaks the $n$-party group key exchange of [2], first introduced as GSIDH in [17], for 6 parties or more, and a quantum attack for 3 parties or more that improves on the best known asymptotic complexity. We also provide a Magma implementation of our attack for 6 parties. We give the full range of parameters for which our attacks apply. Second, we construct SIDH variants designed to be weak against our attacks; this includes backdoor choices of starting curve, as well as backdoor choices of base-field prime. We stress that our results do not degrade the security of, or reveal any weakness in, the NIST submission SIKE [20].
2021
CRYPTO
MuSig2: Simple Two-Round Schnorr Multi-Signatures 📺
Multi-signatures enable a group of signers to produce a joint signature on a joint message. Recently, Drijvers et al. (S\&P'19) showed that all thus far proposed two-round multi-signature schemes in the pure DL setting (without pairings) are insecure under concurrent signing sessions. While Drijvers et al. proposed a secure two-round scheme, this efficiency in terms of rounds comes with the price of having signatures that are more than twice as large as Schnorr signatures, which are becoming popular in cryptographic systems due to their practicality (e.g., they will likely be adopted in Bitcoin). If one needs a multi-signature scheme that can be used as a drop-in replacement for Schnorr signatures, then one is forced to resort either to a three-round scheme or to sequential signing sessions, both of which are undesirable options in practice. In this work, we propose MuSig2, a simple and highly practical two-round multi-signature scheme. This is the first scheme that simultaneously i) is secure under concurrent signing sessions, ii) supports key aggregation, iii) outputs ordinary Schnorr signatures, iv) needs only two communication rounds, and v) has similar signer complexity as ordinary Schnorr signatures. Furthermore, it is the first multi-signature scheme in the pure DL setting that supports preprocessing of all but one rounds, effectively enabling a non-interactive signing process without forgoing security under concurrent sessions. We prove the security of MuSig2 in the random oracle model, and the security of a more efficient variant in the combination of the random oracle and the algebraic group model. Both our proofs rely on a weaker variant of the OMDL assumption.
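To make the two-round structure concrete (a simplified sketch that suppresses several details of the scheme, such as the number of nonces required by the different security proofs): with key list $L$, aggregation coefficients $a_i = \mathrm{H}_{\mathrm{agg}}(L, X_i)$ and aggregate key $\widetilde{X} = \prod_i X_i^{a_i}$, each signer first broadcasts two nonces $R_{i,1}, R_{i,2}$; setting $R_j = \prod_i R_{i,j}$, $b = \mathrm{H}_{\mathrm{non}}(\widetilde{X}, (R_1, R_2), m)$, $R = R_1 R_2^{\,b}$ and $c = \mathrm{H}_{\mathrm{sig}}(\widetilde{X}, R, m)$, it then sends $s_i = r_{i,1} + b\, r_{i,2} + c\, a_i x_i$, and the pair $(R, s = \sum_i s_i)$ verifies as an ordinary Schnorr signature under $\widetilde{X}$, i.e., $g^{s} = R\, \widetilde{X}^{\,c}$.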
2021
CRYPTO
Limits on the Adaptive Security of Yao’s Garbling 📺
Yao’s garbling scheme is one of the most fundamental cryptographic constructions. Lindell and Pinkas (Journal of Cryptology 2009) gave a formal proof of security in the selective setting assuming secure symmetric-key encryption (and hence one-way functions). This was followed by results, both positive and negative, concerning its security in the stronger, adaptive setting. Applebaum et al. (Crypto 2013) showed that it cannot satisfy adaptive security as is, due to a simple incompressibility argument. Jafargholi and Wichs (TCC 2017) considered a natural adaptation of Yao’s scheme that circumvents this negative result, and proved that it is adaptively secure, at least for shallow circuits. In particular, they showed that for the class of circuits of depth d, the loss in security is at most exponential in d. The above results all concern the simulation-based notion of security. In this work, we show that the upper bound of Jafargholi and Wichs is more or less optimal in a strong sense. As our main result, we show that there exists a family of Boolean circuits, one for each depth d ∈ N, such that any black-box reduction proving the adaptive indistinguishability-security of the natural adaptation of Yao’s scheme from any symmetric-key encryption has to lose a factor that is sub-exponential in d. Since indistinguishability is a weaker notion than simulation, our bound also applies to adaptive simulation. To establish our results, we build on the recent approach of Kamath et al. (ePrint 2021), which uses pebbling lower bounds in conjunction with oracle separations to prove fine-grained lower bounds on loss in cryptographic security.
2021
CRYPTO
Fluid MPC: Secure Multiparty Computation with Dynamic Participants 📺
Existing approaches to secure multiparty computation (MPC) require all participants to commit to the entire duration of the protocol. As interest in MPC continues to grow, it is inevitable that there will be a desire to use it to evaluate increasingly complex functionalities, resulting in computations spanning several hours or days. Such scenarios call for a *dynamic* participation model for MPC where participants have the flexibility to go offline as needed and (re)join when they have available computational resources. Such a model would also democratize access to privacy-preserving computation by facilitating an ``MPC-as-a-service'' paradigm --- the deployment of MPC in volunteer-operated networks (such as blockchains, where dynamism is inherent) that perform computation on behalf of clients. In this work, we initiate the study of *fluid MPC*, where parties can dynamically join and leave the computation. The minimum commitment required from each participant is referred to as *fluidity*, measured in the number of rounds of communication that it must stay online. Our contributions are threefold: - We provide a formal treatment of fluid MPC, exploring various possible modeling choices. - We construct information-theoretic fluid MPC protocols in the honest-majority setting. Our protocols achieve *maximal fluidity*, meaning that a party can exit the computation after receiving and sending messages in one round. - We implement our protocol and test it in multiple network settings.
2021
CRYPTO
Targeted Lossy Functions and Applications 📺
Lossy trapdoor functions, introduced by Peikert and Waters (STOC '08), can be initialized in one of two indistinguishable modes: in injective mode, the function preserves all information about its input, and can be efficiently inverted given a trapdoor, while in lossy mode, the function loses some information about its input. Such functions have found countless applications in cryptography, and can be constructed from a variety of Cryptomania assumptions. In this work, we introduce \emph{targeted lossy functions (TLFs)}, which relax lossy trapdoor functions along two orthogonal dimensions. Firstly, they do not require an inversion trapdoor in injective mode. Secondly, the lossy mode of the function is initialized with some target input, and the function is only required to lose information about this particular target. The injective and lossy modes should be indistinguishable even given the target. We construct TLFs from Minicrypt assumptions, namely, injective pseudorandom generators, or even one-way functions under a natural relaxation of injectivity. We then generalize TLFs to incorporate \emph{branches}, and construct \emph{all-injective-but-one} and \emph{all-lossy-but-one} variants. We show a wide variety of applications of targeted lossy functions. In several cases, we get the first Minicrypt constructions of primitives that were previously only known under Cryptomania assumptions. Our applications include: - \emph{Pseudo-entropy functions} from one-way functions. - Deterministic leakage-resilient message-authentication codes and improved leakage-resilient symmetric-key encryption from one-way functions. - Extractors for \emph{extractor-dependent sources} from one-way functions. - Selective-opening secure symmetric-key encryption from one-way functions. - A new construction of CCA PKE from (exponentially secure) trapdoor functions and injective pseudorandom generators. We also discuss a fascinating connection to distributed point functions.
2021
CRYPTO
On the Round Complexity of Secure Quantum Computation 📺
We construct the first constant-round protocols for secure quantum computation in the two-party (2PQC) and multi-party (MPQC) settings with security against malicious adversaries. Our protocols are in the common random string (CRS) model. - Assuming two-message oblivious transfer (OT), we obtain (i) three-message 2PQC, and (ii) five-round MPQC with only three rounds of online (input-dependent) communication; such OT is known from quantum-hard Learning with Errors (QLWE). - Assuming sub-exponential hardness of QLWE, we obtain (i) three-round 2PQC with two online rounds and (ii) four-round MPQC with two online rounds. - When only one (out of two) parties receives output, we achieve minimal interaction (two messages) from two-message OT; classically, such protocols are known as non-interactive secure computation (NISC), and our result constitutes the first maliciously-secure quantum NISC. Additionally assuming reusable malicious designated-verifier NIZK arguments for NP (MDV-NIZKs), we give the first MDV-NIZK for QMA that only requires one copy of the quantum witness. Finally, we perform a preliminary investigation into two-round secure quantum computation where each party must obtain output. On the negative side, we identify a broad class of simulation strategies that suffice for classical two-round secure computation that are unlikely to work in the quantum setting. Next, as a proof-of-concept, we show that two-round secure quantum computation exists with respect to a quantum oracle.
2021
CRYPTO
Towards Tight Random Probing Security 📺
Proving the security of masked implementations in theoretical models that are relevant to practice and match the best known attacks of the side-channel literature is a notoriously hard problem. The random probing model is a good candidate to contribute to this challenge, due to its ability to capture the continuous nature of physical leakage (contrary to the threshold probing model), while also being convenient to manipulate in proofs and to automate with verification tools. Yet, despite recent progress in the design of masked circuits with good asymptotic security guarantees in this model, existing results still fall short when it comes to analyzing the security of concretely useful circuits under realistic noise levels and with a low number of shares. In this paper, we contribute to this issue by introducing a new composability notion, the Probe Distribution Table (PDT), and a new tool (called STRAPS, for the Sampled Testing of the RAndom Probing Security). Their combination allows us to significantly improve the tightness of existing analyses in the most practical (low noise, low number of shares) region of the design space. We illustrate these improvements by quantifying the random probing security of an AES S-box circuit, masked with the popular multiplication gadget of Ishai, Sahai and Wagner from Crypto 2003, with up to six shares.
2021
CRYPTO
One-Way Functions Imply Secure Computation in a Quantum World 📺
We prove that quantum-hard one-way functions imply simulation-secure quantum oblivious transfer (QOT), which is known to suffice for secure computation of arbitrary quantum functionalities. Furthermore, our construction only makes black-box use of the quantum-hard one-way function. Our primary technical contribution is a construction of extractable and equivocal quantum bit commitments based on the black-box use of quantum-hard one-way functions in the standard model. Instantiating the Crépeau-Kilian (FOCS 1988) framework with these commitments yields simulation-secure quantum oblivious transfer.
2021
CRYPTO
Non-Interactive Batch Arguments for NP from Standard Assumptions 📺
We study the problem of designing *non-interactive batch arguments* for NP. Such an argument system allows an efficient prover to prove multiple $\npol$ statements, with size much smaller than the combined witness length. We provide the first construction of such an argument system for NP in the common reference string model based on standard cryptographic assumptions. Prior works either require non-falsifiable assumptions (or the random oracle model) or can only support private verification. At the heart of our result is a new *dual mode* interactive batch argument system for NP. We show how to apply the correlation-intractability framework for Fiat-Shamir -- that has primarily been applied to proof systems -- to such interactive arguments.
2021
CRYPTO
Tight State-Restoration Soundness in the Algebraic Group Model 📺
Most efficient zero-knowledge arguments lack a concrete security analysis, making parameter choices and efficiency comparisons challenging. This is even more true for non-interactive versions of these systems obtained via the Fiat-Shamir transform, for which the security guarantees generically derived from the interactive protocol are often too weak, even when assuming a random oracle. This paper initiates the study of {\em state-restoration soundness} in the algebraic group model (AGM) of Fuchsbauer, Kiltz, and Loss (CRYPTO '18). This is a stronger notion of soundness for an interactive proof or argument which allows the prover to rewind the verifier, and which is tightly connected with the concrete soundness of the non-interactive argument obtained via the Fiat-Shamir transform. We propose a general methodology to prove tight bounds on state-restoration soundness, and apply it to variants of Bulletproofs (Bootle et al, S\&P '18) and Sonic (Maller et al., CCS '19). To the best of our knowledge, our analysis of Bulletproofs gives the {\em first} non-trivial concrete security analysis for a non-constant round argument combined with the Fiat-Shamir transform.
2021
EUROCRYPT
Classical proofs of quantum knowledge 📺
We define the notion of a proof of knowledge in the setting where the verifier is classical, but the prover is quantum, and where the witness that the prover holds is in general a quantum state. We establish simple properties of our definition, including that, if a nondestructive classical proof of quantum knowledge exists for some state, then that state can be cloned by an unbounded adversary, and that, under certain conditions on the parameters in our definition, a proof of knowledge protocol for a hard-to-clone state can be used as a (destructive) quantum money verification protocol. In addition, we provide two examples of protocols (both inspired by private-key classical verification protocols for quantum money schemes) which we can show to be proofs of quantum knowledge under our definition. In so doing, we introduce techniques for the analysis of such protocols which build on results from the literature on nonlocal games. Finally, we show that, under our definition, the verification protocol introduced by Mahadev (FOCS 2018) is a classical argument of quantum knowledge for QMA relations. In all cases, we construct an explicit quantum extractor that is able to produce a quantum witness given black-box quantum (rewinding) access to the prover, the latter of which includes the ability to coherently execute the prover's black-box circuit controlled on a superposition of messages from the verifier.
2021
EUROCRYPT
Large Scale, Actively Secure Computation from LPN and Free-XOR Garbled Circuits 📺
Whilst secure multiparty computation (MPC) based on garbled circuits is concretely efficient for a small number of parties $n$, the gap between the complexity of practical protocols, which is $O(n^2)$ per party, and the theoretical complexity, which is $O(n)$ per party, is prohibitive for large values of $n$. In order to bridge this gap, Ben-Efraim, Lindell and Omri (ASIACRYPT 2017) introduced a garbled-circuit-based MPC protocol with an almost-practical pre-processing, yielding $O(n)$ complexity per party. However, this protocol is only passively secure and does not support the free-XOR technique by Kolesnikov and Schneider (ICALP 2008), on which almost all practical garbled-circuit-based protocols rely for their efficiency. In this work, to further bridge the gap between theory and practice, we present a new $n$-party garbling technique based on a new variant of standard LPN-based encryption. Using this approach we can describe two new garbled-circuit-based protocols, which have practical evaluation phases. Both protocols are in the preprocessing model, have $O(n)$ complexity per party, are actively secure and support the free-XOR technique. The first protocol tolerates full threshold corruption and ensures the garbled circuit contains no adversarially introduced errors, using a rather expensive garbling phase. The second protocol assumes that at least $n/c$ of the parties are honest (for an arbitrary fixed value $c$) and allows a significantly lighter preprocessing, at the cost of a small sacrifice in online efficiency. We demonstrate the practicality of our approach with an implementation of the evaluation phase using different circuits. We show that like the passively-secure protocol of Ben-Efraim, Lindell and Omri, our approach starts to improve upon other practical protocols with $O(n^2)$ complexity when the number of parties is around $100$.
2021
EUROCRYPT
Pre-Computation Scheme of Window $\tau$NAF for Koblitz Curves Revisited 📺
Let $E_a/ \mathbb{F}_{2}: y^2+xy=x^3+ax^2+1$ be a Koblitz curve. The window $\tau$-adic non-adjacent form (window $\tau$NAF) is currently the standard representation system to perform scalar multiplications on $E_a/ \mathbb{F}_{2^m}$ utilizing the Frobenius map $\tau$. This work focuses on the pre-computation part of scalar multiplication. We first introduce $\mu\bar{\tau}$-operations where $\mu=(-1)^{1-a}$ and $\bar{\tau}$ is the complex conjugate of $\tau$. Efficient formulas of $\mu\bar{\tau}$-operations are then derived and used in a novel pre-computation scheme. Our pre-computation scheme requires $6${\bf M}$+6${\bf S}, $18${\bf M}$+17${\bf S}, $44${\bf M}$+32${\bf S}, and $88${\bf M}$+62${\bf S} ($a=0$) and $6${\bf M}$+6${\bf S}, $19${\bf M}$+17${\bf S}, $46${\bf M}$+32${\bf S}, and $90${\bf M}$+62${\bf S} ($a=1$) for window $\tau$NAF with widths from $4$ to $7$ respectively. It is about two times faster, compared to the state-of-the-art technique of pre-computation in the literature. The impact of our new efficient pre-computation is also reflected by the significant improvement of scalar multiplication. Traditionally, window $\tau$NAF with width at most $6$ is used to achieve the best scalar multiplication. Because of the dramatic cost reduction of the proposed pre-computation, we are able to increase the width for window $\tau$NAF to $7$ for a better scalar multiplication. This indicates that the pre-computation part becomes more important in performing scalar multiplication. With our efficient pre-computation and the new window width, our scalar multiplication runs in at least 85.2\% the time of Kohel's work (Eurocrypt'2017) combining the best previous pre-computation. Our results push the scalar multiplication of Koblitz curves, a very well-studied and long-standing research area, to a significant new stage.
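As background for readers outside this area (standard facts about Koblitz curves, not contributions of the paper): the Frobenius map $\tau(x,y) = (x^2, y^2)$ is an efficiently computable endomorphism of $E_a$ satisfying $\tau^2 - \mu\tau + 2 = 0$ with $\mu = (-1)^{1-a}$, so $\tau$ may be identified with the complex number $(\mu + \sqrt{-7})/2$, and a width-$w$ $\tau$NAF rewrites a scalar as $k = \sum_i u_i \tau^i$ with each digit $u_i$ either zero or taken from a small set of odd representatives; the pre-computation optimized in this work is precisely the table of points $u_i P$ for those digit values.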
2021
EUROCRYPT
A 2^{n/2}-Time Algorithm for \sqrt{n}-SVP and \sqrt{n}-Hermite SVP, and an Improved Time-Approximation Tradeoff for (H)SVP 📺
We show a 2^{n/2+o(n)}-time algorithm that, given as input a basis of a lattice $\lat \subset \R^n$, finds a (non-zero) vector in $\lat$ whose length is at most $\widetilde{O}(\sqrt{n})\cdot \min\{\lambda_1(\lat), \det(\lat)^{1/n}\}$, where $\lambda_1(\lat)$ is the length of a shortest non-zero lattice vector and $\det(\lat)$ is the lattice determinant. Minkowski showed that $\lambda_1(\lat) \leq \sqrt{n} \det(\lat)^{1/n}$ and that there exist lattices with $\lambda_1(\lat) \geq \Omega(\sqrt{n}) \cdot \det(\lat)^{1/n}$, so that our algorithm finds vectors that are as short as possible relative to the determinant (up to a polylogarithmic factor). The main technical contribution behind this result is a new analysis of (a simpler variant of) a 2^{n/2 + o(n)}-time algorithm from [ADRS15], which was only previously known to solve less useful problems. To achieve this, we rely crucially on the ``reverse Minkowski theorem'' (conjectured by Dadush [DR16] and proven by [RS17]), which can be thought of as a partial converse to the fact that $\lambda_1(\lat) \leq \sqrt{n} \det(\lat)^{1/n}$. Previously, the fastest known algorithm for finding such a vector was the 2^{0.802n + o(n)}-time algorithm due to [LWXZ11], which actually found a non-zero lattice vector with length $O(1) \cdot \lambda_1(\lat)$. Though we do not show how to find lattice vectors with this length in time $2^{n/2+o(n)}$, we do show that our algorithm suffices for the most important application of such algorithms: basis reduction. In particular, we show a modified version of Gama and Nguyen's slide-reduction algorithm [GN08], which can be combined with the algorithm above to improve the time-length tradeoff for shortest-vector algorithms in nearly all regimes---including the regimes relevant to cryptography.
2021
EUROCRYPT
Rotational Cryptanalysis From a Differential-Linear Perspective - Practical Distinguishers for Round-reduced FRIET, Xoodoo, and Alzette 📺
The differential-linear attack, combining the power of the two most effective techniques for symmetric-key cryptanalysis, was proposed by Langford and Hellman at CRYPTO 1994. From the exact formula for evaluating the bias of a differential-linear distinguisher (JoC 2017), to the differential-linear connectivity table (DLCT) technique for dealing with the dependencies in the switch between the differential and linear parts (EUROCRYPT 2019), and to the improvements in the context of cryptanalysis of ARX primitives (CRYPTO 2020), we have seen significant development of the differential-linear attack during the last four years. In this work, we further extend this framework by replacing the differential part of the attack by rotational-xor differentials. Along the way, we establish the theoretical link between rotational-xor differentials and linear approximations, revealing that it is nontrivial to directly apply the closed formula for the bias of an ordinary differential-linear attack to rotational differential-linear cryptanalysis. We then revisit rotational cryptanalysis from the perspective of differential-linear cryptanalysis and generalize Morawiecki et al.’s technique for analyzing Keccak, which leads to a practical method for estimating the bias of a (rotational) differential-linear distinguisher in the special case where the output linear mask is a unit vector. Finally, we apply the rotational differential-linear technique to the permutations involved in FRIET, Xoodoo, Alzette, and SipHash. This gives significant improvements over existing cryptanalytic results, or offers explanations for previous experimental distinguishers without a theoretical foundation. To confirm the validity of our analysis, all distinguishers with practical complexities are verified experimentally.
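The following Python sketch illustrates the experimental-verification side of such results: it empirically estimates the correlation of a rotational differential-linear distinguisher with a unit output mask on a toy 32-bit ARX round. The round function, rotation offset, difference, and mask below are illustrative choices of mine, not the FRIET/Xoodoo/Alzette/SipHash trails from the paper.

```python
import random

MASK = (1 << 32) - 1
rotl = lambda x, r: ((x << r) | (x >> (32 - r))) & MASK

def toy_arx_round(x, y):
    """A toy 32-bit ARX round (add, rotate, xor) -- a stand-in for a real permutation."""
    x = (x + y) & MASK
    y = rotl(y, 7) ^ x
    x = rotl(x, 16)
    return x, y

def rx_dl_correlation(rot, dx, dy, out_word, out_bit, samples=1 << 18):
    """Estimate the correlation of a rotational differential-linear distinguisher with a
    unit output mask (a single output bit), i.e.
        corr = 2*Pr[ chosen bit of (rotl(F(x), rot) XOR F(rotl(x, rot) ^ delta)) = 0 ] - 1.
    """
    zeros = 0
    for _ in range(samples):
        x, y = random.getrandbits(32), random.getrandbits(32)
        z = toy_arx_round(x, y)
        zr = toy_arx_round(rotl(x, rot) ^ dx, rotl(y, rot) ^ dy)
        diff_bit = ((rotl(z[out_word], rot) ^ zr[out_word]) >> out_bit) & 1
        zeros += 1 - diff_bit
    return 2 * zeros / samples - 1

if __name__ == "__main__":
    # Pure rotational pair (delta = 0), unit mask on bit 0 of the first output word.
    print("estimated correlation:",
          rx_dl_correlation(rot=1, dx=0, dy=0, out_word=0, out_bit=0))
```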
2021
EUROCRYPT
On the Security of Homomorphic Encryption on Approximate Numbers 📺
We present passive attacks against CKKS, the homomorphic encryption scheme for arithmetic on approximate numbers presented at Asiacrypt 2017. The attack is both theoretically efficient (running in expected polynomial time) and very practical, leading to complete key recovery with high probability and very modest running times. We implemented and tested the attack against major open source homomorphic encryption libraries, including HEAAN, SEAL, HElib and PALISADE, and when computing several functions that often arise in applications of the CKKS scheme to machine learning on encrypted data, like mean and variance computations, and approximation of logistic and exponential functions using their Maclaurin series. The attack shows that the traditional formulation of IND-CPA security (or indistinguishability against chosen plaintext attacks) achieved by CKKS does not adequately capture security against passive adversaries when applied to approximate encryption schemes, and that a different, stronger definition is required to evaluate the security of such schemes. We provide a solid theoretical basis for the security evaluation of homomorphic encryption on approximate numbers (against passive attacks) by proposing new definitions that naturally extend the traditional notion of IND-CPA security to the approximate computation setting. We propose both indistinguishability-based and simulation-based variants, as well as restricted versions of the definitions that limit the order and number of adversarial queries (as may be enforced by some applications). We prove implications and separations among different definitional variants, and discuss possible modifications to CKKS that may serve as a countermeasure to our attacks.
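A one-dimensional toy of the key-recovery principle, assuming nothing beyond standard Python: real CKKS works over a polynomial ring $Z_q[X]/(X^N+1)$, but the algebra of the attack, namely that an exact decryption result together with the ciphertext determines the secret key by solving a linear equation, is already visible with a single coefficient.

```python
import random

# Degenerate one-coefficient toy of the key recovery; real CKKS uses polynomial rings.
q = 2**40
s = random.randrange(q)                    # secret key
m = 12345                                  # "plaintext"
e = random.randrange(-50, 51)              # small approximation noise

# Encryption: c = (c0, c1) with c0 = -c1*s + m + e (mod q).
c1 = random.randrange(1, q) | 1            # odd, hence invertible mod 2^40
c0 = (-c1 * s + m + e) % q

# CKKS decryption returns the *approximate* value m' = c0 + c1*s = m + e (mod q).
m_prime = (c0 + c1 * s) % q

# A passive attacker who sees the ciphertext and the decrypted approximate result m'
# solves the linear equation c1*s = m' - c0 for the secret key.
s_recovered = ((m_prime - c0) * pow(c1, -1, q)) % q
assert s_recovered == s
print("secret recovered from one decryption:", s_recovered == s)
```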
2021
EUROCRYPT
The Mother of All Leakages: How to Simulate Noisy Leakages via Bounded Leakage (Almost) for Free 📺
We show that noisy leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to a small statistical simulation error and a slight loss in the leakage parameter. The latter holds true in particular for one of the most used noisy-leakage models, where the noisiness is measured using the conditional average min-entropy (Naor and Segev, CRYPTO'09 and SICOMP'12). Our reductions between noisy and bounded leakage are achieved in two steps. First, we put forward a new leakage model (dubbed the dense leakage model) and prove that dense leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to small statistical distance. Second, we show that the most common noisy-leakage models fall within the class of dense leakage, with good parameters. We also provide a complete picture of the relationships between different noisy-leakage models, and prove lower bounds showing that our reductions are nearly optimal. Our result finds applications to leakage-resilient cryptography, where we are often able to lift security in the presence of bounded leakage to security in the presence of noisy leakage, both in the information-theoretic and in the computational setting. Additionally, we show how to use lower bounds in communication complexity to prove that bounded-collusion protocols (Kumar, Meka, and Sahai, FOCS'19) for certain functions do not only require long transcripts, but also necessarily need to reveal enough information about the inputs.
2021
EUROCRYPT
Message-recovery Laser Fault Injection Attack on the Classic McEliece Cryptosystem 📺
Code-based public-key cryptosystems are promising candidates for standardization as quantum-resistant public-key cryptographic algorithms. Their security is based on the hardness of the syndrome decoding problem. Computing the syndrome in a finite field, usually $\F_{2}$, guarantees the security of the constructions. We show in this article that the problem becomes considerably easier to solve if the syndrome is computed in $\mathbb{N}$ instead. By means of laser fault injection, we illustrate how to force the matrix-vector product to be computed in $\mathbb{N}$ by corrupting specific instructions, and validate it experimentally. To solve the syndrome decoding problem in $\mathbb{N}$, we propose a reduction to an integer linear programming problem. We leverage the computational efficiency of linear programming solvers to obtain real-time message recovery attacks against all the code-based proposals to the NIST Post-Quantum Cryptography standardization challenge. We perform our attacks on worst-case scenarios, i.e. random binary codes, and retrieve the initial message within minutes on a desktop computer. Our practical evaluation of the attack targets the reference implementation of the Niederreiter cryptosystem in the NIST finalist \textit{Classic McEliece} and is feasible for all proposed parameter sets of this submission. For example, for the 256-bit security parameter sets, we successfully recover the plaintext in a couple of seconds on a desktop computer. Finally, we highlight the fact that the attack is still possible if only a fraction of the syndrome entries are faulty. This makes the attack feasible even though the fault injection does not have perfect repeatability, and reduces the computational complexity of the attack, making it even more practical overall.
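A toy of the ILP reduction, assuming SciPy >= 1.9 for scipy.optimize.milp and using toy dimensions far below Classic McEliece parameters: once the syndrome is computed over the integers rather than over $\F_{2}$, a generic solver can recover a binary solution directly. For real code parameters the binary solution is unique with overwhelming probability; in this toy the script simply reports whether the planted vector was found.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds   # SciPy >= 1.9

rng = np.random.default_rng(0)
n, k, t = 40, 24, 4                          # toy sizes (hypothetical, not McEliece parameters)

H = rng.integers(0, 2, size=(k, n))          # toy binary parity-check-like matrix
e = np.zeros(n, dtype=int)
e[rng.choice(n, size=t, replace=False)] = 1  # secret message/error vector of weight t

s_N = H @ e                                  # faulted device: syndrome computed over N, not F_2

# Recover a binary x with H x = s_N over the integers, posed as an integer linear program.
res = milp(c=np.zeros(n),
           constraints=LinearConstraint(H, s_N, s_N),
           integrality=np.ones(n),
           bounds=Bounds(0, 1))
x = np.round(res.x).astype(int)
print("H x == s over N:", np.array_equal(H @ x, s_N))
print("recovered the planted vector:", np.array_equal(x, e))
```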
2021
EUROCRYPT
Improved Linear Approximations to ARX Ciphers and Attacks Against ChaCha 📺
In this paper, we present a new technique which can be used to find better linear approximations in ARX ciphers. Using this technique, we present the first explicitly derived linear approximations for 3 and 4 rounds of ChaCha and, as a consequence, it enables us to improve the recent attacks against ChaCha. Additionally, we present new differentials for 3 and 3.5 rounds of ChaCha that, when combined with the proposed technique, lead to further improvement in the complexity of the Differential-Linear attacks against ChaCha.
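The sketch below estimates the correlation of the most basic linear approximation that any linear trail through an ARX primitive such as ChaCha has to pay for: approximating a bit of a modular addition by the XOR of the corresponding input bits. It is a generic illustration of how such correlations are measured, not one of the paper's 3- or 4-round approximations.

```python
import random

def correlation_of_addition_bit(i, samples=1 << 18):
    """Estimate the correlation of the linear approximation
           bit_i(x + y mod 2^32)  ~  bit_i(x) XOR bit_i(y),
    the elementary approximation of the modular addition inside ARX rounds."""
    agree = 0
    for _ in range(samples):
        x, y = random.getrandbits(32), random.getrandbits(32)
        lhs = ((x + y) >> i) & 1
        rhs = ((x >> i) & 1) ^ ((y >> i) & 1)
        agree += lhs == rhs
    return 2 * agree / samples - 1

if __name__ == "__main__":
    # Bit 0 is carry-free, so the approximation is exact (correlation 1); bit 1 only
    # sees the single carry x_0*y_0, so its correlation should come out near 1/2.
    print("bit 0:", correlation_of_addition_bit(0))
    print("bit 1:", correlation_of_addition_bit(1))
```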
2021
EUROCRYPT
New Lattice Two-Stage Sampling Technique and its Applications to Functional Encryption – Stronger Security and Smaller Ciphertexts 📺
This work proposes a new lattice two-stage sampling technique, generalizing the prior two-stage sampling method of Gentry, Peikert, and Vaikuntanathan (STOC '08). By using our new technique as a key building block, we can significantly improve the security and efficiency of the current state of the art of simulation-based functional encryption. Particularly, our functional encryption achieves $(Q,\poly)$ simulation-based semi-adaptive security that allows arbitrary pre- and post-challenge key queries, and has succinct ciphertexts with only an additive $O(Q)$ overhead. Additionally, our two-stage sampling technique can derive new feasibilities of indistinguishability-based adaptively-secure $\IB$-$\FE$ for inner products and semi-adaptively-secure $\AB$-$\FE$ for inner products, breaking several technical limitations of the recent work by Abdalla, Catalano, Gay, and Ursu (Asiacrypt '20).
2021
EUROCRYPT
On the ideal shortest vector problem over random rational primes 📺
Any non-zero ideal in a number field can be factored into a product of prime ideals. In this paper we report a surprising connection between the complexity of the shortest vector problem (SVP) of prime ideals in number fields and their decomposition groups. When applying the result to number fields popular in lattice based cryptosystems, such as power-of-two cyclotomic fields, we show that a majority of rational primes lie under prime ideals admitting a polynomial time algorithm for SVP. Although the shortest vector problem of ideal lattices underpins the security of the Ring-LWE cryptosystem, this work does not break Ring-LWE, since the security reduction is from the worst case ideal SVP to the average case Ring-LWE, and it is one-way.
2021
EUROCRYPT
Delay Encryption 📺
We introduce a new primitive named Delay Encryption, and give an efficient instantiation based on isogenies of supersingular curves and pairings. Delay Encryption is related to Time-lock Puzzles and Verifiable Delay Functions, and can be roughly described as ``time-lock identity based encryption''. It has several applications in distributed protocols, such as sealed bid Vickrey auctions and electronic voting. We give an instantiation of Delay Encryption by modifying Boneh and Franklin's IBE scheme, where we replace the master secret key by a long chain of isogenies, as in the isogeny VDF of De Feo, Masson, Petit and Sanso. Similarly to the isogeny-based VDF, our Delay Encryption requires a trusted setup before parameters can be safely used; our trusted setup is identical to that of the VDF, thus the same parameters can be generated once and shared for many executions of both protocols, with possibly different delay parameters. We also discuss several topics around delay protocols based on isogenies that were left untreated by De Feo et al., namely: distributed trusted setup, watermarking, and implementation issues.
2021
EUROCRYPT
Improved cryptanalysis of UOV and Rainbow 📺
The contributions of this paper are twofold. First, we simplify the description of the Unbalanced Oil and Vinegar scheme (UOV) and its Rainbow variant, which makes it easier to understand the scheme and the existing attacks. We hope that this will make UOV and Rainbow more approachable for cryptanalysts. Secondly, we give two new attacks against the UOV and Rainbow signature schemes; the intersection attack that applies to both UOV and Rainbow and the rectangular MinRank attack that applies only to Rainbow. Our attacks are more powerful than existing attacks. In particular, we estimate that compared to previously known attacks, our new attacks reduce the cost of a key recovery by a factor of 2^17, 2^53, and 2^73 for the parameter sets submitted to the second round of the NIST PQC standardization project targeting the security levels I, III, and V respectively. For the third round parameters, the cost is reduced by a factor of 2^20, 2^40, and 2^55 respectively. This means all these parameter sets fall short of the security requirements set out by NIST.
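To make the simplified UOV description concrete, here is a Python/NumPy toy of the trapdoor step only: with no oil-times-oil monomials in the central map, fixing the vinegar variables turns the system into a linear one in the oil variables, which is how signing inverts the map. The secret change of variables, the public key, and Rainbow's layering are omitted, and the parameters are illustrative, far below any real security level.

```python
import numpy as np

q, v, o = 31, 8, 5                    # toy parameters (hypothetical, not a secure instance)
n = v + o
rng = np.random.default_rng(7)

def solve_mod(A, b, p):
    """Solve A x = b over F_p by Gaussian elimination; return None if A is singular."""
    A = (A % p).astype(np.int64).copy(); b = (b % p).astype(np.int64).copy()
    m = A.shape[0]
    for col in range(m):
        piv = next((r for r in range(col, m) if A[r, col]), None)
        if piv is None:
            return None
        A[[col, piv]], b[[col, piv]] = A[[piv, col]], b[[piv, col]]
        inv = pow(int(A[col, col]), -1, p)
        A[col], b[col] = (A[col] * inv) % p, (b[col] * inv) % p
        for r in range(m):
            if r != col and A[r, col]:
                f = A[r, col]
                A[r], b[r] = (A[r] - f * A[col]) % p, (b[r] - f * b[col]) % p
    return b

# Central map F: o quadratic forms in n variables with *no* oil-times-oil monomials
# (the oil variables are the last o coordinates).  This is the UOV trapdoor structure.
F = rng.integers(0, q, size=(o, n, n))
F[:, v:, v:] = 0

def eval_central(x):
    return np.array([int(x @ F[k] @ x) % q for k in range(o)])

# "Signing" with the central map: fix random vinegar values, then every equation is
# linear in the oil variables, so a target t can be hit by solving a linear system.
t = rng.integers(0, q, size=o)        # target (e.g., a message hash)
while True:
    xv = rng.integers(0, q, size=v)   # vinegar values
    M = np.array([(xv @ (F[k][:v, v:] + F[k][v:, :v].T)) % q for k in range(o)])
    c = np.array([int(xv @ F[k][:v, :v] @ xv) % q for k in range(o)])
    xo = solve_mod(M, (t - c) % q, q) # linear system in the oil variables
    if xo is not None:                # retry with fresh vinegar if M happens to be singular
        break
x = np.concatenate([xv, xo])
assert np.array_equal(eval_central(x), t)
print("central map inverted on target:", t)
```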
2021
EUROCRYPT
Dummy Shuffling against Algebraic Attacks in White-box Implementations 📺
At CHES 2016, Bos et al. showed that most existing white-box implementations are easily broken by standard side-channel attacks. A natural idea is to apply the well-developed side-channel countermeasure of linear masking schemes, but this leaves implementations vulnerable to linear algebraic attacks, which exploit the absence of noise in the white-box setting and are applicable for any order of linear masking. At ASIACRYPT 2018, Biryukov and Udovenko proposed a security model (BU-model for short) for protection against linear algebraic attacks and a new quadratic masking scheme which is provably secure in this model. However, countermeasures against higher-degree attacks were left as an open problem. In this work, we study the effectiveness of another well-known side-channel countermeasure, shuffling, against linear and higher-degree algebraic attacks in the white-box setting. First, we extend classic shuffling to include dummy computation slots and show that this is a crucial component for protecting against the algebraic attacks. We quantify and prove the security of dummy shuffling against the linear algebraic attack in the BU-model. We introduce a refreshing technique for dummy shuffling and show that it achieves close to optimal protection in the model for arbitrary degrees of the attack, thus solving the open problem of protection against the algebraic attack in the BU-model. Furthermore, we describe an interesting proof-of-concept construction that makes the slot function public (while keeping the shuffling indexes private).
2021
EUROCRYPT
Non-Interactive Anonymous Router 📺
Anonymous routing is one of the most fundamental online privacy problems and has been studied extensively for decades. Almost all known approaches that achieve anonymous routing (e.g., mix-nets, DC-nets, and numerous other systems) rely on multiple servers or routers to engage in some interactive protocol; and anonymity is guaranteed in the threshold model, i.e., if one or more of the servers/routers behave honestly. Departing from all prior approaches, we propose a novel non-interactive abstraction called a Non-Interactive Anonymous Router (NIAR), that works even with a single untrusted router. In a NIAR scheme, suppose that n senders each want to talk to a distinct receiver. A one-time trusted setup is performed such that each sender obtains a sending key, each receiver obtains a receiving key, and the router receives a token that “encrypts” the permutation mapping the senders to receivers. In every time step, each sender can encrypt its message using its sender key, and the router can use its token to convert the n ciphertexts received from the senders to n transformed ciphertexts. Each transformed ciphertext is delivered to the corresponding receiver, and the receiver can decrypt the message using its receiver key. Imprecisely speaking, security requires that the untrusted router, even when colluding with a subset of corrupt senders and/or receivers, should not be able to break the privacy of honest parties, including who is talking to whom, and the messages they exchange. We show how to construct a communication-efficient NIAR scheme with provable security guarantees based on the SXDH assumption in suitable bilinear groups and assuming Random Oracles (RO); further, the RO assumption can be removed if we allow a public key that is as large as the number of time steps supported. We also define a paranoid notion of security that achieves full insider protection, and show that if we additionally assume sub-exponentially secure Indistinguishability Obfuscation and sub-exponentially secure one-way functions, one can construct a NIAR scheme with paranoid security. We show that a compelling application of NIAR is to realize a Non-Interactive Anonymous Shuffler (NIAS), where an untrusted server or data analyst can only decrypt a shuffled version of the messages coming from n senders where the permutation is hidden. NIAS can be adopted to construct privacy-preserving surveys, differentially private protocols in the shuffle model, and pseudonymous bulletin boards.
2021
EUROCRYPT
Fast verification of masking schemes in characteristic two 📺
We revisit the matrix model for non-interference (NI) probing security of masking gadgets introduced by Belaïd et al. at CRYPTO 2017. This leads to two main results. 1) We generalise the theorems on which this model is based, so as to be able to apply them to masking schemes over any finite field --- in particular GF(2)--- and to be able to analyse the *strong* non-interference (SNI) security notion. We also follow Faust et al. (TCHES 2018) to additionally consider a *robust* probing model that takes hardware defects such as glitches into account. 2) We exploit this improved model to implement a very efficient verification algorithm that improves the performance of state-of-the-art software by three orders of magnitude. We show applications to variants of NI and SNI multiplication gadgets from Barthe et al. (EUROCRYPT~2017) which we verify to be secure up to order 11 after a significant parallel computation effort, whereas the previous largest proven order was 7; SNI refreshing gadgets (ibid.); and NI multiplication gadgets from Gross et al. (TIS@CCS 2016) secure in presence of glitches. We also reduce the randomness cost of some existing gadgets, notably for the implementation-friendly case of 8 shares, improving here the previous best results by 17% (resp. 19%) for SNI multiplication (resp. refreshing).
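As a much smaller cousin of such verification tools, the following brute-force Python check confirms first-order probing security of the classical two-share ISW multiplication gadget over GF(2). The paper's algorithm handles higher orders, SNI, and glitch-robust probing with a far more efficient matrix model; this toy only illustrates what "every single wire has a secret-independent distribution" means operationally.

```python
from itertools import product

def gadget_wires(a0, a1, b0, b1, r):
    """All intermediate wires of the 2-share ISW multiplication gadget over GF(2):
       c0 = a0*b0 ^ r,   c1 = a1*b1 ^ ((r ^ a0*b1) ^ a1*b0),   with c0 ^ c1 = a*b."""
    w = {"a0": a0, "a1": a1, "b0": b0, "b1": b1, "r": r,
         "a0b0": a0 & b0, "a1b1": a1 & b1, "a0b1": a0 & b1, "a1b0": a1 & b0}
    w["t1"] = r ^ w["a0b1"]
    w["t2"] = w["t1"] ^ w["a1b0"]
    w["c0"] = w["a0b0"] ^ r
    w["c1"] = w["a1b1"] ^ w["t2"]
    return w

def first_order_probing_secure():
    """Brute-force check: for every single wire, its distribution over the uniformly
    random share values a0, b0 and mask r must be identical for all secrets (a, b)."""
    names = gadget_wires(0, 0, 0, 0, 0).keys()
    for name in names:
        dists = []
        for a, b in product((0, 1), repeat=2):            # the secrets
            dist = [0, 0]
            for a0, b0, r in product((0, 1), repeat=3):   # the uniform randomness
                dist[gadget_wires(a0, a ^ a0, b0, b ^ b0, r)[name]] += 1
            dists.append(tuple(dist))
        if len(set(dists)) != 1:
            return False, name
    return True, None

print(first_order_probing_secure())   # expected: (True, None)
```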
2021
EUROCRYPT
Function Secret Sharing for Mixed-Mode and Fixed-Point Secure Computation 📺
Recently Boyle et al. (TCC 2019) proposed a new approach for secure computation in the {\em preprocessing model} building on {\em function secret sharing} (FSS). This approach can be used to realize any circuit containing gates that admit efficient FSS schemes. In this work, we make the following three technical contributions: {\bf Improved Key Size.} The complexity of the preprocessing phase directly depends on the FSS key size. We improve the size of FSS keys for several existing FSS constructions through two important steps. First, we present a roughly $4\times$ reduction in FSS key size for the Distributed Comparison Function (DCF), i.e. ($f_\alpha(x) = \beta$ for all $x < \alpha$ and $0$, otherwise). Second, prior FSS schemes for many important function classes are obtained via reductions to multiple instances of DCF; for example, 2 instances for interval containment and $2m$ for splines with $m$ pieces. We significantly improve these reductions for public intervals and obtain {\em optimal} FSS schemes, i.e., through a {\em single instance of DCF}, thereby reducing the key sizes by up to $6-22\times$ for commonly used functions in mixed-mode secure computation such as ReLU and sigmoid. {\bf FSS for New Function Families.} We present the first constructions of FSS schemes for arithmetic and logical right shift, as well as for bit-decomposition, where the output bits must be secret shared in a larger ring. These functions are crucial for many applications such as fixed-point arithmetic and machine learning. {\bf FSS for Fixed-Point Arithmetic and Barrier.} One of the important functions in the realization of secure fixed-point arithmetic is that of multiply-then-truncate. While our work shows how to obtain a construction for this function in 2 rounds using sequential calls to FSS schemes for multiply and shift, we demonstrate a barrier towards improving this via FSS beyond what we achieve. Specifically, we show that a 1-round solution would require settling a major open problem in the area of FSS: namely, building an FSS for the class of bit-conjunction functions based on only symmetric-key cryptographic assumptions.
2021
EUROCRYPT
Secure Software Leasing 📺
Formulating cryptographic definitions to protect against software piracy is an important research direction that has not received much attention. Since natural definitions using classical cryptography are impossible to achieve (as classical programs can always be copied), this directs us towards using techniques from quantum computing. The seminal work of Aaronson [CCC'09] introduced the notion of quantum copy-protection precisely to address the problem of software anti-piracy. However, despite being one of the most important problems in quantum cryptography, there are no provably secure solutions of quantum copy-protection known for {\em any} class of functions. We formulate an alternative definition for tackling software piracy, called quantum secure software leasing (QSSL). While weaker than quantum copy-protection, QSSL is still meaningful and has interesting applications in software anti-piracy. We present a construction of QSSL for a subclass of evasive circuits (that includes natural implementations of point functions, conjunctions with wild cards, and affine testers) based on concrete cryptographic assumptions. Our construction is the first provably secure solution, based on concrete cryptographic assumptions, for software anti-piracy. To complement our positive result, we show, based on cryptographic assumptions, that there is a class of quantum unlearnable functions for which QSSL does not exist. In particular, our impossibility result also rules out quantum copy-protection [Aaronson CCC'09] for an arbitrary class of quantum unlearnable functions; resolving an important open problem on the possibility of constructing copy-protection for arbitrary quantum unlearnable circuits.
2021
EUROCRYPT
Public-Coin Statistical Zero-Knowledge Batch Verification against Malicious Verifiers 📺
Suppose that a problem $\Pi$ has a statistical zero-knowledge (SZK) proof with communication complexity $m$. The question of batch verification for SZK asks whether one can prove that $k$ instances $x_1,\dots,x_k$ all belong to $\Pi$ with a statistical zero-knowledge proof whose communication complexity is better than $k \cdot m$ (which is the complexity of the trivial solution of executing the original protocol independently on each input). In a recent work, Kaslasi et al. (TCC, 2020) constructed such a batch verification protocol for any problem having a non-interactive SZK (NISZK) proof-system. Two drawbacks of their result are that their protocol is private-coin and is only zero-knowledge with respect to the honest verifier. In this work, we eliminate these two drawbacks by constructing a public-coin malicious-verifier SZK protocol for batch verification of NISZK. Similarly to the aforementioned prior work, the communication complexity of our protocol is $(k+poly(m)) \cdot polylog(k,m)$.
2021
EUROCRYPT
Decentralized Multi-Authority ABE for DNFs from LWE 📺
We construct the first decentralized multi-authority attribute-based encryption (MA-ABE) scheme for a non-trivial class of access policies whose security is based (in the random oracle model) solely on the Learning With Errors (LWE) assumption. The supported access policies are ones described by DNF formulas. All previous constructions of MA-ABE schemes supporting any non-trivial class of access policies were proven secure (in the random oracle model) assuming various assumptions on bilinear maps. In our system, any party can become an authority and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. A party can simply act as a standard ABE authority by creating a public key and issuing private keys to different users that reflect their attributes. A user can encrypt data in terms of any DNF formulas over attributes issued from any chosen set of authorities. Finally, our system does not require any central authority. In terms of efficiency, when instantiating the scheme with a global bound $s$ on the size of access policies, the sizes of public keys, secret keys, and ciphertexts all grow with $s$. Technically, we develop new tools for building ciphertext-policy ABE (CP-ABE) schemes using LWE. Along the way, we construct the first provably secure CP-ABE scheme supporting access policies in $NC^1$ under the LWE assumption that avoids the generic universal-circuit-based key-policy to ciphertext-policy transformation. In particular, our construction relies on linear secret sharing schemes with new properties and in some sense is more similar to CP-ABE schemes that rely on bilinear maps. While our CP-ABE construction is not more efficient than existing ones, it is conceptually intriguing and further we show how to extend it to get the MA-ABE scheme described above.
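To make the supported policy class concrete, here is the textbook linear secret sharing for DNF formulas (additively re-share the secret inside every clause). This illustrates why DNF policies are friendly to linear secret sharing and why share sizes grow with the formula size; it is not the paper's LWE-based construction.

```python
import random

p = 2**61 - 1   # a prime modulus for the toy

def share_dnf(secret, dnf):
    """Linear secret sharing for a DNF policy (an OR of AND-clauses over attribute names):
    the secret is additively re-shared inside every clause, one share per attribute."""
    shares = []   # one dict {attribute: share} per clause
    for clause in dnf:
        parts = [random.randrange(p) for _ in clause[:-1]]
        parts.append((secret - sum(parts)) % p)
        shares.append(dict(zip(clause, parts)))
    return shares

def reconstruct(shares, dnf, user_attrs):
    """A user satisfying some clause (holding all its attributes) sums that clause's shares."""
    for clause, clause_shares in zip(dnf, shares):
        if set(clause) <= set(user_attrs):
            return sum(clause_shares[a] for a in clause) % p
    return None   # policy not satisfied

if __name__ == "__main__":
    policy = [["doctor", "hospitalA"], ["admin"]]       # (doctor AND hospitalA) OR admin
    z = random.randrange(p)
    sh = share_dnf(z, policy)
    assert reconstruct(sh, policy, ["doctor", "hospitalA"]) == z
    assert reconstruct(sh, policy, ["admin", "nurse"]) == z
    assert reconstruct(sh, policy, ["doctor"]) is None
    print("DNF policy sharing/reconstruction OK")
```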
2021
EUROCRYPT
Compact, Efficient and UC-Secure Isogeny-Based Oblivious Transfer 📺
Oblivious transfer (OT) is an essential cryptographic tool that can serve as a building block for almost all secure multiparty functionalities. The strongest security notion against malicious adversaries is universal composability (UC-secure). An important goal is to have post-quantum OT protocols. One area of interest for post-quantum cryptography is isogeny-based crypto. Isogeny-based cryptography has some similarities to Diffie-Hellman, but lacks some algebraic properties that are needed for discrete-log-based OT protocols. Hence it is not always possible to directly adapt existing protocols to the isogeny setting. We propose the first practical isogeny-based UC-secure oblivious transfer protocol in the presence of malicious adversaries. Our scheme uses the CSIDH framework and does not have an analogue in the Diffie-Hellman setting. The scheme consists of a constant number of isogeny computations. The underlying computational assumption is a problem that we call the computational reciprocal CSIDH problem, and that we prove polynomial-time equivalent to the computational CSIDH problem.
2021
EUROCRYPT
Tightly-Secure Authenticated Key Exchange, Revisited 📺
We introduce new tightly-secure authenticated key exchange (AKE) protocols that are extremely efficient, yet have only a constant security loss and can be instantiated in the random oracle model both from the standard DDH assumption and a subgroup assumption over RSA groups. These protocols can be deployed with optimal parameters, independent of the number of users or sessions, without the need to compensate a security loss with increased parameters and thus decreased computational efficiency. We use the standard “Single-Bit-Guess” AKE security (with forward secrecy and state corruption) requiring all challenge keys to be simultaneously pseudo-random. In contrast, most previous papers on tightly secure AKE protocols (Bader et al., TCC 2015; Gjøsteen and Jager, CRYPTO 2018; Liu et al., ASIACRYPT 2020) concentrated on a non-standard “Multi-Bit-Guess” AKE security which is known not to compose tightly with symmetric primitives to build a secure communication channel. Our key technical contribution is a new generic approach to construct tightly-secure AKE protocols based on non-committing key encapsulation mechanisms. The resulting DDH-based protocols are considerably more efficient than all previous constructions.
2021
EUROCRYPT
Leakage Resilient Value Comparison With Application to Message Authentication 📺
Side-channel attacks are a threat to secrets stored on a device, especially if an adversary has physical access to the device. As a result, countermeasures against such attacks for cryptographic algorithms are a well-researched topic. In this work, we deviate from the study of cryptographic algorithms and instead focus on the side-channel protection of a much more basic operation, the comparison of a known attacker-controlled value with a secret one. Comparisons sensitive to side-channel leakage occur in tag comparisons during the verification of message authentication codes (MACs) or authenticated encryption, but are typically omitted in security analyses. Comparisons performed as part of fault countermeasures might also be sensitive to side-channel attacks. In this work, we present a formal analysis on comparing values in a leakage resilient manner by utilizing cryptographic building blocks that are typically part of an implementation anyway. Our results indicate that there is no need to invest additional resources into implementing a protected comparison operation itself if a sufficiently protected implementation of a public cryptographic permutation, or a (tweakable) block cipher, is already available. We complement our contribution by applying our findings to the SuKS message authentication code used by the lightweight authenticated encryption scheme ISAP, and to the classical Hash-then-PRF construction.
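A generic sketch in the spirit of the result, not the paper's SuKS or Hash-then-PRF analysis: instead of comparing a secret tag with an attacker-chosen value directly, both values are first pushed through a public cryptographic function together with fresh randomness, so the values actually handled by the comparison are re-randomized on every call. The choice of SHA3-256 as the public function is mine, for illustration only.

```python
import hashlib, hmac, os

def direct_compare(a: bytes, b: bytes) -> bool:
    """Constant-time, but the comparison still handles the raw secret tag, so repeated
    side-channel leakage of the operands can accumulate information about it."""
    return hmac.compare_digest(a, b)

def compare_via_public_function(a: bytes, b: bytes) -> bool:
    """Generic re-randomized comparison: draw fresh randomness r and compare
    H(r || a) with H(r || b) instead of a with b (illustrative, not the paper's scheme)."""
    r = os.urandom(16)
    ha = hashlib.sha3_256(r + a).digest()
    hb = hashlib.sha3_256(r + b).digest()
    return hmac.compare_digest(ha, hb)

if __name__ == "__main__":
    tag = os.urandom(16)
    print(compare_via_public_function(tag, tag),
          compare_via_public_function(tag, os.urandom(16)))   # True False
```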
2021
EUROCRYPT
Sieving for twin smooth integers with solutions to the Prouhet-Tarry-Escott problem 📺
We give a sieving algorithm for finding pairs of consecutive smooth numbers that utilizes solutions to the Prouhet-Tarry-Escott (PTE) problem. Any such solution induces two degree-n polynomials, a(x) and b(x), that differ by a constant integer C and completely split into linear factors in Z[x]. It follows that for any l in Z such that a(l) = b(l) = 0 mod C , the two integers a(l)/C and b(l)/C differ by 1 and necessarily contain n factors of roughly the same size. For a fixed smoothness bound B, restricting the search to pairs of integers that are parameterized in this way increases the probability that they are B-smooth. Our algorithm combines a simple sieve with parametrizations given by a collection of solutions to the PTE problem. The motivation for finding large twin smooth integers lies in their application to compact isogeny-based post-quantum protocols. The recent key exchange scheme B-SIDH and the recent digital signature scheme SQISign both require large primes that lie between two smooth integers; finding such a prime can be seen as a special case of finding twin smooth integers under the additional stipulation that their sum is a prime p. When searching for cryptographic parameters with 2^240 <= p < 2^256, an implementation of our sieve found primes p where p+1 and p-1 are 2^15-smooth; the smoothest prior parameters had a similar sized prime for which p-1 and p+1 were 2^19-smooth. In targeting higher security levels, our sieve found a 376-bit prime lying between two 2^21-smooth integers, a 384-bit prime lying between two 2^22-smooth integers, and a 512-bit prime lying between two 2^29-smooth integers. Our analysis shows that using previously known methods to find high-security instances subject to these smoothness bounds is computationally infeasible.
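The smallest instance of the parametrization can be played with directly: $a(x) = x(x+3)$ and $b(x) = (x+1)(x+2)$ split into linear factors and differ by the constant $C = 2$, and both are always even, so $a(l)/2$ and $b(l)/2$ are consecutive integers built from small, balanced factors. The Python sketch below searches this family for twin smooth pairs with naive trial division via sympy; the paper uses higher-degree PTE solutions and an actual sieve at cryptographic sizes.

```python
from sympy import factorint

def is_smooth(n, B):
    """True if every prime factor of n is at most B (naive check via factorization)."""
    return n > 1 and max(factorint(n)) <= B

def twin_smooth_candidates(B, search_range):
    """Yield pairs (m, m+1) with m = l*(l+3)/2 and m+1 = (l+1)*(l+2)/2, both B-smooth.
    This is the degree-2 instance of the polynomial parametrization described above."""
    for l in range(1, search_range):
        m, n = l * (l + 3) // 2, (l + 1) * (l + 2) // 2
        if is_smooth(m, B) and is_smooth(n, B):
            yield m, n

if __name__ == "__main__":
    pairs = list(twin_smooth_candidates(B=50, search_range=2000))
    print(f"found {len(pairs)} twin 50-smooth pairs, largest: {pairs[-1]}")
```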
2021
EUROCRYPT
Cryptanalytic Applications of the Polynomial Method for Solving Multivariate Equation Systems over GF(2) 📺
At SODA 2017 Lokshtanov et al. presented the first worst-case algorithms with exponential speedup over exhaustive search for solving polynomial equation systems of degree $d$ in $n$ variables over finite fields. These algorithms were based on the polynomial method in circuit complexity which is a technique for proving circuit lower bounds that has recently been applied in algorithm design. Subsequent works further improved the asymptotic complexity of polynomial method-based algorithms for solving equations over the field $\mathbb{F}_2$. However, the asymptotic complexity formulas of these algorithms hide significant low-order terms, and hence they outperform exhaustive search only for very large values of~$n$. In this paper, we devise a concretely efficient polynomial method-based algorithm for solving multivariate equation systems over $\mathbb{F}_2$. We analyze our algorithm's performance for solving random equation systems, and bound its complexity by about $n^2 \cdot 2^{0.815n}$ bit operations for $d = 2$ and $n^2 \cdot 2^{\left(1 - 1/2.7d\right) n}$ for any $d \geq 2$. We apply our algorithm in cryptanalysis of recently proposed instances of the Picnic signature scheme (an alternate third-round candidate in NIST's post-quantum standardization project) that are based on the security of the LowMC block cipher. Consequently, we show that 2 out of 3 new instances do not achieve their claimed security level. As a secondary application, we also improve the best-known preimage attacks on several round-reduced variants of the Keccak hash function. Our algorithm combines various techniques used in previous polynomial method-based algorithms with new optimizations, some of which exploit randomness assumptions about the system of equations. In its cryptanalytic application to Picnic, we demonstrate how to further optimize the algorithm for solving structured equation systems that are constructed from specific cryptosystems.
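One low-level subroutine that polynomial-method solvers of this kind lean on is evaluating a Boolean polynomial on all $2^n$ points at once via the fast Möbius transform (ANF coefficients to truth table in $n \cdot 2^n$ bit operations). The sketch below shows only that step in isolation; it is not the paper's full algorithm, and the example polynomial is an arbitrary illustration.

```python
def anf_to_truth_table(coeffs, n):
    """Fast Moebius transform over GF(2): coeffs[m] is the ANF coefficient of the monomial
    with variable support m (bit j of m <-> variable x_j); returns all 2^n evaluations."""
    table = list(coeffs)
    for i in range(n):
        step = 1 << i
        for m in range(1 << n):
            if m & step:
                table[m] ^= table[m ^ step]
    return table

if __name__ == "__main__":
    # f(x0, x1, x2) = x0*x1 ^ x2 ^ 1, encoded by its ANF coefficients.
    n = 3
    coeffs = [0] * (1 << n)
    coeffs[0b000] = 1      # constant 1
    coeffs[0b100] = 1      # x2
    coeffs[0b011] = 1      # x0*x1
    tt = anf_to_truth_table(coeffs, n)
    brute = [((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1) ^ 1 for x in range(1 << n)]
    assert tt == brute
    print(tt)
```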
2021
EUROCRYPT
Round-Optimal Blind Signatures in the Plain Model from Classical and Quantum Standard Assumptions 📺
Blind signatures, introduced by Chaum (Crypto'82), allow a user to obtain a signature on a message without revealing the message itself to the signer. Thus far, all existing constructions of round-optimal blind signatures are known to require one of the following: a trusted setup, an interactive assumption, or complexity leveraging. This state of affairs is somewhat justified by the few known impossibility results on constructions of round-optimal blind signatures in the plain model (i.e., without trusted setup) from standard assumptions. However, since all of these impossibility results only hold \emph{under some conditions}, fully (dis)proving the existence of such round-optimal blind signatures has remained open. In this work, we provide an affirmative answer to this problem and construct the first round-optimal blind signature scheme in the plain model from standard polynomial-time assumptions. Our construction is based on various standard cryptographic primitives and also on new primitives that we introduce in this work, all of which are instantiable from \emph{classical and post-quantum} standard polynomial-time assumptions. The main building block of our scheme is a new primitive called a blind-signature-conforming zero-knowledge (ZK) argument system. The distinguishing feature is that the ZK property holds by using a quantum polynomial-time simulator against non-uniform classical polynomial-time adversaries. Syntactically, one can view this as a delayed-input three-move ZK argument with a reusable first message, and we believe it would be of independent interest.
2021
EUROCRYPT
Leakage-resilience of the Shamir Secret-sharing Scheme against Physical-bit Leakages 📺
Efficient Reed-Solomon code reconstruction algorithms, for example, by Guruswami and Wooters (STOC--2016), translate into local leakage attacks on Shamir secret-sharing schemes over characteristic-2 fields. However, Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO--2018) showed that the Shamir secret sharing scheme over prime-fields is leakage resilient to one-bit local leakage if the reconstruction threshold is roughly 0.87 times the total number of parties. In several application scenarios, like secure multi-party multiplication, the reconstruction threshold must be at most half the number of parties. Furthermore, the number of leakage bits that the Shamir secret sharing scheme is resilient to is also unclear. Towards this objective, we study the Shamir secret-sharing scheme's leakage-resilience over a prime-field $F$. The parties' secret-shares, which are elements in the finite field $F$, are naturally represented as $\lambda$-bit binary strings representing the elements $\{0,1,\dotsc,p-1\}$. In our leakage model, the adversary can independently probe $m$ bit-locations from each secret share. The inspiration for considering this leakage model stems from the impact that the study of oblivious transfer combiners had on general correlation extraction algorithms, and the significant influence of protecting circuits from probing attacks has on leakage-resilient secure computation. Consider arbitrary reconstruction threshold $k\geq 2$, physical bit-leakage parameter $m\geq 1$, and the number of parties $n\geq 1$. We prove that Shamir's secret-sharing scheme with random evaluation places is leakage-resilient with high probability when the order of the field $F$ is sufficiently large; ignoring polylogarithmic factors, one needs to ensure that $\log \abs F \geq n/k$. Our result, excluding polylogarithmic factors, states that Shamir's scheme is secure as long as the total amount of leakage $m\cdot n$ is less than the entropy $k\cdot\lambda$ introduced by the Shamir secret-sharing scheme. Note that our result holds even for small constant values of the reconstruction threshold $k$, which is essential to several application scenarios. To complement this positive result, we present a physical-bit leakage attack for $m=1$ physical bit-leakage from $n=k$ secret shares and any prime-field $F$ satisfying $\abs F=1\mod k$. In particular, there are (roughly) $\abs F^{n-k+1}$ such vulnerable choices for the $n$-tuple of evaluation places. We lower-bound the advantage of this attack for small values of the reconstruction threshold, like $k=2$ and $k=3$, and any $\abs F=1\mod k$. In general, we present a formula calculating our attack's advantage for every $k$ as $\abs F\rightarrow\infty.$ Technically, our positive result relies on Fourier analysis, analytic properties of proper rank-$r$ generalized arithmetic progressions, and B\'ezout's theorem to bound the number of solutions to an equation over finite fields. The analysis of our attack relies on determining the ``discrepancy'' of the Irwin-Hall distribution. A probability distribution's discrepancy is a new property of distributions that our work introduces, which is of potential independent interest.
2021
EUROCRYPT
Robust Property-Preserving Hash Functions for Hamming Distance and More 📺
Robust property-preserving hash (PPH) functions, recently introduced by Boyle, Lavigne, and Vaikuntanathan [ITCS 2019], compress large inputs $x$ and $y$ into short digests $h(x)$ and $h(y)$ in a manner that allows for computing a predicate $P$ on $x$ and $y$ while only having access to the corresponding hash values. In contrast to locality-sensitive hash functions, a robust PPH function guarantees to correctly evaluate a predicate on $h(x)$ and $h(y)$ even if $x$ and $y$ are chosen adversarially \emph{after} seeing $h$. Our main result is a robust PPH function for the exact Hamming distance predicate \[ \mathsf{HAM}^t(x, y) = \begin{cases} 1 &\text{if } d(x, y) \geq t \\ 0 & \text{otherwise}\\ \end{cases} \] where $d(x, y)$ is the Hamming distance between $x$ and $y$. Our PPH function compresses $n$-bit strings into $\mathcal{O}(t \lambda)$-bit digests, where $\lambda$ is the security parameter. The construction is based on the q-strong bilinear discrete logarithm assumption. Along the way, we construct a robust PPH function for the set intersection predicate \[ \mathsf{INT}^t(X, Y) = \begin{cases} 1 &\text{if } \vert X \cap Y\vert > n - t \\ 0 & \text{otherwise}\\ \end{cases} \] which compresses sets $X$ and $Y$ of size $n$ with elements from some arbitrary universe $U$ into $\mathcal{O}(t\lambda)$-bit long digests. This PPH function may be of independent interest. We present an almost matching lower bound of $\Omega(t \log t)$ on the digest size of any PPH function for the intersection predicate, which indicates that our compression rate is close to optimal. Finally, we also show how to extend our PPH function for the intersection predicate to more than two inputs.
2021
EUROCRYPT
Alibi: A Flaw in Cuckoo-Hashing based Hierarchical ORAM Schemes and a Solution 📺
There once was a table of hashes
That held extra items in stashes
It all seemed like bliss
But things went amiss
When the stashes were stored in the caches

The first Oblivious RAM protocols introduced the ``hierarchical solution,'' (STOC '90) where the server stores a series of hash tables of geometrically increasing capacities. Each ORAM query would read a small number of locations from each level of the hierarchy, and each level of the hierarchy would be reshuffled and rebuilt at geometrically increasing intervals to ensure that no single query was ever repeated twice at the same level. This yielded an ORAM protocol with polylogarithmic overhead. Future works extended and improved the hierarchical solution, replacing traditional hashing with cuckoo hashing (ICALP '11) and cuckoo hashing with a combined stash (Goodrich et al. SODA '12). In this work, we identify a subtle flaw in the protocol of Goodrich et al. (SODA '12) that uses cuckoo hashing with a stash in the hierarchical ORAM solution. We give a concrete distinguishing attack against this type of hierarchical ORAM that uses cuckoo hashing with a \emph{combined} stash. This security flaw has propagated to at least 5 subsequent hierarchical ORAM protocols, including the recent optimal ORAM scheme, OptORAMa (Eurocrypt '20). In addition to our attack, we identify a simple fix that does not increase the asymptotic complexity. We note, however, that our attack only affects more recent \emph{hierarchical ORAMs}, but does not affect the early protocols that predate the use of cuckoo hashing, or other types of ORAM solutions (e.g. Path ORAM or Circuit ORAM).
2021
EUROCRYPT
VOLE-PSI: Fast OPRF and Circuit-PSI from Vector-OLE 📺
In this work we present a new construction for a batched Oblivious Pseudorandom Function (OPRF) based on Vector-OLE and the PaXoS data structure. We then use it in the standard transformation for achieving Private Set Intersection (PSI) from an OPRF. Our overall construction is highly efficient with $O(n)$ communication and computation. We demonstrate that our protocol can achieve malicious security at only a very small overhead compared to the semi-honest variant. For input sizes $n = 2^{20}$, our malicious protocol needs 6.2 seconds and less than 59 MB communication. This corresponds to under 450 bits per element, which is the lowest number for any published PSI protocol (semi-honest or malicious) to date. Moreover, in theory our semi-honest (resp. malicious) protocol can achieve as low as 219 (resp. 260) bits per element for $n=2^{20}$ at the added cost of interpolating a polynomial over $n$ elements. As a second contribution, we present an extension where the output of the PSI is secret-shared between the two parties. This functionality is generally referred to as Circuit-PSI. It allows the parties to perform a subsequent MPC protocol on the secret-shared outputs, e.g., train a machine learning model. Our circuit PSI protocol builds on our OPRF construction along with another application of the PaXoS data structure. It achieves semi-honest security and allows for a highly efficient implementation, up to 3x faster than previous work.
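The standard OPRF-to-PSI transformation mentioned above is easy to see in code once the oblivious evaluation itself is abstracted away. In the sketch below the OPRF is replaced by a plain keyed hash that the receiver is simply assumed to have evaluated obliviously on its own items, so this is a data-flow illustration only, not a secure protocol and in particular not the VOLE/PaXoS construction of the paper.

```python
import hashlib, hmac, os

def oprf(key, x: bytes) -> bytes:
    """Placeholder PRF.  In the real protocol the receiver learns oprf(key, y) for its own
    items through an oblivious protocol, without ever seeing `key`."""
    return hmac.new(key, x, hashlib.sha256).digest()

def psi_from_oprf(sender_set, receiver_set):
    key = os.urandom(32)                                  # held by the sender only
    # Receiver obtains OPRF evaluations of its items (obliviously in the real protocol).
    receiver_eval = {oprf(key, y): y for y in receiver_set}
    # Sender sends the PRF values of its own items (randomly permuted in practice).
    sender_msgs = {oprf(key, x) for x in sender_set}
    # Receiver intersects the two sets of PRF values and maps back to its items.
    return {receiver_eval[v] for v in sender_msgs & receiver_eval.keys()}

if __name__ == "__main__":
    A = {b"alice", b"bob", b"carol", b"dave"}
    B = {b"bob", b"erin", b"dave"}
    print(psi_from_oprf(A, B))   # {b'bob', b'dave'}
```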
2021
EUROCRYPT
Multi-Source Non-Malleable Extractors and Applications 📺
We introduce a natural generalization of two-source non-malleable extractors (Cheragachi and Guruswami, TCC 2014) called as \textit{multi-source non-malleable extractors}. Multi-source non-malleable extractors are special independent source extractors which satisfy an additional non-malleability property. This property requires that the output of the extractor remains close to uniform even conditioned on its output generated by tampering {\it several sources together}. We formally define this primitive, give a construction that is secure against a wide class of tampering functions, and provide applications. More specifically, we obtain the following results: \begin{itemize} \item For any $s \geq 2$, we give an explicit construction of a $s$-source non-malleable extractor for min-entropy $\Omega(n)$ and error $2^{-n^{\Omega(1)}}$ in the {\it overlapping joint tampering model}. This means that each tampered source could depend on any strict subset of all the sources and the sets corresponding to each tampered source could be overlapping in a way that we define. Prior to our work, there were no known explicit constructions that were secure even against disjoint tampering (where the sets are required to be disjoint without any overlap). \item We adapt the techniques used in the above construction to give a $t$-out-of-$n$ non-malleable secret sharing scheme (Goyal and Kumar, STOC 2018) for any $t \leq n$ in the \emph{disjoint tampering model}. This is the first general construction of a threshold non-malleable secret sharing (NMSS) scheme in the disjoint tampering model. All prior constructions had a restriction that the size of the tampered subsets could not be equal. \item We further adapt the techniques used in the above construction to give a $t$-out-of-$n$ non-malleable secret sharing scheme (Goyal and Kumar, STOC 2018) for any $t \leq n$ in the \emph{overlapping joint tampering model}. This is the first construction of a threshold NMSS in the overlapping joint tampering model. \item We show that a stronger notion of $s$-source non-malleable extractor that is multi-tamperable against disjoint tampering functions gives a single round network extractor protocol (Kalai et al., FOCS 2008) with attractive features. Plugging in with a new construction of multi-tamperable, 2-source non-malleable extractors provided in our work, we get a network extractor protocol for min-entropy $\Omega(n)$ that tolerates an {\it optimum} number ($t = p-2$) of faulty processors and extracts random bits for {\it every} honest processor. The prior network extractor protocols could only tolerate $t = \Omega(p)$ faulty processors and failed to extract uniform random bits for a fraction of the honest processors. \end{itemize}
2021
EUROCRYPT
LogStack: Stacked Garbling with O(b log b) Computation 📺
Secure two party computation (2PC) of arbitrary programs can be efficiently achieved using garbled circuits (GC). Until recently, it was widely believed that a GC proportional to the entire program, including parts of the program that are entirely discarded due to conditional branching, must be transmitted over a network. Recent work shows that this belief is false, and that communication proportional only to the longest program execution path suffices (Heath and Kolesnikov, CRYPTO 20, [HK20a]). Although this recent work reduces needed communication, it increases computation. For a conditional with b branches, the players use O(b^2) computation (traditional GC uses only O(b)). Our scheme LogStack reduces stacked garbling computation from O(b^2) to O(b log b) with no increase in communication over [HK20a]. The cause of [HK20a]'s increased computation is the oblivious collection of garbage labels that emerge during the evaluation of inactive branches. Garbage is collected by a multiplexer that is costly to generate. At a high level, we redesign stacking and garbage collection to avoid quadratic scaling. Our construction is also more space efficient: [HK20a] algorithms require O(b) space, while ours use only O(log b) space. This space efficiency allows even modest setups to handle large numbers of branches. [HK20a] assumes a random oracle (RO). We track the source of this need, formalize a simple and natural added assumption on the base garbling scheme, and remove reliance on RO: LogStack is secure in the standard model. Nevertheless, LogStack can be instantiated with typical GC tricks based on non-standard assumptions, such as free XOR and half-gates, and hence can be implemented with high efficiency. We implemented LogStack (in the RO model, based on half-gates garbling) and report performance. In terms of wall-clock time and for fewer than 16 branches, our performance is comparable to [HK20a]'s; for larger branching factors, our approach clearly outperforms [HK20a]. For example, given 1024 branches, our approach is 31x faster.
2021
EUROCRYPT
Ciminion: Symmetric Encryption Based on Toffoli-Gates over Large Finite Fields 📺
Motivated by new applications such as secure Multi-Party Computation (MPC), Fully Homomorphic Encryption (FHE), and Zero-Knowledge proofs (ZK), the need for symmetric encryption schemes that minimize the number of field multiplications in their natural algorithmic description is apparent. This development has brought forward many dedicated symmetric encryption schemes that minimize the number of multiplications in GF(2^n) or GF(p), with p being prime. These novel schemes have led to new cryptanalytic insights that have broken many of said schemes. Interestingly, to the best of our knowledge, all of the newly proposed schemes that minimize the number of multiplications use those multiplications exclusively in S-boxes based on a power mapping that is typically x^3 or x^{-1}. Furthermore, most of those schemes rely on complex and resource-intensive linear layers to achieve a low multiplication count. In this paper, we present Ciminion, an encryption scheme minimizing the number of field multiplications in large binary or prime fields, while using a very lightweight linear layer. In contrast to other schemes that aim to minimize field multiplications in GF(2^n) or GF(p), Ciminion relies on the Toffoli gate to improve the non-linear diffusion of the overall design. In addition, we have tailored the primitive for use in a Farfalle-like construction in order to minimize the number of rounds of the used primitive, and hence the number of field multiplications, as far as possible.
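The non-linear building block referred to above is simple to state; a minimal sketch of the field version of the Toffoli gate follows (one multiplication, trivially invertible). The surrounding round structure, linear layer, and Farfalle-like mode of Ciminion are not shown, and the prime below is an arbitrary illustrative choice.

```python
p = 2**61 - 1   # an illustrative prime field, standing in for the GF(p) instantiations

def toffoli(a, b, c):
    """Field version of the Toffoli gate used as the non-linear operation:
    (a, b, c) -> (a, b, c + a*b).  It costs a single field multiplication."""
    return a, b, (c + a * b) % p

def toffoli_inv(a, b, c):
    """The inverse only replaces the addition by a subtraction."""
    return a, b, (c - a * b) % p

if __name__ == "__main__":
    x = (123456789, 987654321, 555555555)
    assert toffoli_inv(*toffoli(*x)) == x
    print("Toffoli gate over F_p is invertible:", toffoli(*x))
```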
2021
EUROCRYPT
Efficient Range Proofs with Transparent Setup from Bounded Integer Commitments 📺
We introduce a new approach for constructing range proofs. Our approach is modular, and leads to highly competitive range proofs under standard assumption, using less communication and (much) less computation than the state of the art methods, and without relying on a trusted setup. Our range proofs can be used as a drop-in replacement in a variety of protocols such as distributed ledgers, anonymous transaction systems, and many more, leading to significant reductions in communication and computation for these applications. At the heart of our result is a new method to transform any commitment over a finite field into a commitment scheme which allows to commit to and efficiently prove relations about bounded integers. Combining these new commitments with a classical approach for range proofs based on square decomposition, we obtain several new instantiations of a paradigm which was previously limited to RSA-based range proofs (with high communication and computation, and trusted setup). More specifically, we get: - Under the discrete logarithm assumption, we obtain the most compact and efficient range proof among all existing candidates (with or without trusted setup). Our proofs are 12% to 20% shorter than the state of the art Bulletproof (Bootle et al., CRYPTO'18) for standard choices of range size and security parameter, and are more efficient (both for the prover and the verifier) by more than an order of magnitude. - Under the LWE assumption, we obtain range proofs that improve over the state of the art in a batch setting when at least a few dozen range proofs are required. The amortized communication of our range proofs improves by up to two orders of magnitudes over the state of the art when the number of required range proofs grows. - Eventually, under standard class group assumptions, we obtain the first concretely efficient standard integer commitment scheme (without bounds on the size of the committed integer) which does not assume trusted setup.
2021
EUROCRYPT
Compactness of Hashing Modes and Efficiency beyond Merkle Tree 📺
We revisit the classical problem of designing optimally efficient cryptographically secure hash functions. Hash functions are traditionally designed via applying modes of operation on primitives with smaller domains. The results of Shrimpton and Stam (ICALP 2008), Rogaway and Steinberger (CRYPTO 2008), and Mennink and Preneel (CRYPTO 2012) show how to achieve optimally efficient designs of $2n$-to-$n$-bit compression functions from non-compressing primitives with asymptotically optimal $2^{n/2-\epsilon}$-query collision resistance. Designing optimally efficient and secure hash functions for larger domains ($> 2n$ bits) is still an open problem. To enable efficiency analysis and comparison across hash functions built from primitives of different domain sizes, in this work we propose the new \textit{compactness} efficiency notion. It allows us to focus on asymptotically optimally collision resistant hash function and normalize their parameters based on Stam's bound from CRYPTO 2008 to obtain maximal efficiency. We then present two tree-based modes of operation as a design principle for compact, large domain, fixed-input-length hash functions. \begin{enumerate} \item Our first construction is an \underline{A}ugmented \underline{B}inary T\underline{r}ee (\cmt) mode. The design is a $(2^{\ell}+2^{\ell-1} -1)n$-to-$n$-bit hash function making a total of $(2^{\ell}-1)$ calls to $2n$-to-$n$-bit compression functions for any $\ell\geq 2$. Our construction is optimally compact with asymptotically (optimal) $2^{n/2-\epsilon}$-query collision resistance in the ideal model. For a tree of height $\ell$, in comparison with Merkle tree, the $\cmt$ mode processes additional $(2^{\ell-1}-1)$ data blocks making the same number of internal compression function calls. \item With our second design we focus our attention on the indifferentiability security notion. While the $\cmt$ mode achieves collision resistance, it fails to achieve indifferentiability from a random oracle within $2^{n/3}$ queries. $\cmt^{+}$ compresses only $1$ less data block than $\cmt$ with the same number of compression calls and achieves in addition indifferentiability up to $2^{n/2-\epsilon}$ queries. \end{enumerate} Both of our designs are closely related to the ubiquitous Merkle Trees and have the potential for real-world applicability where the speed of hashing is of primary interest.
2021
EUROCRYPT
Aggregatable Distributed Key Generation 📺
In this paper we introduce a distributed key generation (DKG) protocol with aggregatable and publicly verifiable transcripts. As compared with prior approaches, our DKG reduces the size of the final transcript and the time to verify it from O(n^2) to O(n), where n denotes the number of parties. We also revisit existing DKG security definitions, which are quite strong, and propose new and natural relaxations. As a result, we can prove the security of our aggregatable DKG as well as that of several existing DKGs, including the popular Pedersen variant. We show that, under these new definitions, these existing DKGs can be used to yield secure threshold variants of popular cryptosystems such as El-Gamal encryption and BLS signatures. We also prove that our DKG can be securely combined with a new efficient verifiable unpredictable function (VUF), whose security we prove in the random oracle model. Finally, we experimentally evaluate our DKG and show that the per-party overheads scale linearly and are practical: for 64 parties it takes 71ms to share and 359ms to verify the overall transcript, while these respective costs for 8192 parties are 8s and 42.2s.
2021
EUROCRYPT
Generic Compiler for Publicly Verifiable Covert Multi-Party Computation 📺
Covert security has been introduced as a compromise between semi-honest and malicious security. In a nutshell, covert security guarantees that malicious behavior can be detected by the honest parties with some probability, but in case detection fails all bets are off. While the security guarantee offered by covert security is weaker than full-fledged malicious security, it comes with significantly improved efficiency. An important extension of covert security introduced by Asharov and Orlandi (ASIACRYPT'12) is \emph{public verifiability}, which allows the honest parties to create a publicly verifiable certificate of malicious behavior. Public verifiability significantly strengthens covert security, as the certificate allows punishment via an external party, e.g., a judge. Most previous work on publicly verifiable covert (PVC) security focuses on the two-party case, and the multi-party case has mostly been neglected. In this work, we introduce a novel compiler for multi-party PVC secure protocols with no private inputs. The class of supported protocols includes the preprocessing of common multi-party computation protocols that are designed in the offline-online model. Our compiler leverages time-lock encryption to offer a high probability of cheating detection (often also called the deterrence factor) independent of the number of involved parties. Moreover, in contrast to the only earlier work that studies PVC in the multi-party setting (CRYPTO'20), we provide the first full formal security analysis.
2021
EUROCRYPT
Abuse Resistant Law Enforcement Access Systems 📺
The increased deployment of end-to-end encryption has ignited a debate between technology firms and law enforcement agencies over the need for lawful access to encrypted communications. Unfortunately, existing solutions to this problem suffer from serious technical risks, such as the possibility of operator abuse and theft of escrow key material. In this work we investigate the problem of constructing law enforcement access systems that mitigate the possibility of unauthorized surveillance. We first define a set of desirable properties for an abuse-resistant law enforcement access system (ARLEAS), and motivate each of these properties. We then formalize these definitions in the Universal Composability framework, and present two main constructions that realize this definition. The first construction enables {\em prospective} access, allowing surveillance only if encryption occurs after a warrant has been issued and activated. The second, more powerful construction, allows {\em retrospective} access to communications that occurred prior to a warrant's issuance. To illustrate the technical challenge of constructing the latter type of protocol, we conclude by investigating the minimal assumptions required to realize these systems.
2021
EUROCRYPT
Analysing the HPKE Standard 📺
The Hybrid Public Key Encryption (HPKE) scheme is an emerging standard currently under consideration by the Crypto Forum Research Group (CFRG) of the IETF as a candidate for formal approval. Of the four modes of HPKE, we analyse the authenticated mode HPKE_Auth in its single-shot encryption form as it contains what is, arguably, the most novel part of HPKE. HPKE_Auth’s intended application domain is captured by a new primitive which we call Authenticated Public Key Encryption (APKE). We provide syntax and security definitions for APKE schemes, as well as for the related Authenticated Key Encapsulation Mechanisms (AKEMs). We prove security of the AKEM scheme DH-AKEM underlying HPKE_Auth based on the Gap Diffie-Hellman assumption and provide general AKEM/DEM composition theorems with which to argue about HPKE_Auth’s security. To this end, we also formally analyse HPKE_Auth’s key schedule and key derivation functions. To increase confidence in our results, we use the automatic theorem proving tool CryptoVerif. All our bounds are quantitative and we discuss their practical implications for HPKE_Auth. As an independent contribution, we propose the new framework of nominal groups that allows us to capture abstract syntactical and security properties of practical elliptic curves, including the Curve25519 and Curve448 based groups (which do not constitute cyclic groups).
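To give a feel for the authenticated-KEM idea behind HPKE_Auth, here is a rough Python sketch of a DH-AKEM-style encapsulation using X25519 and HKDF from the `cryptography` package. It deliberately omits HPKE's actual key schedule, labels and context binding, so it only illustrates the ephemeral-static plus static-static DH structure; it is not an RFC-compliant implementation.

```python
# Sketch of a DH-AKEM-style authenticated KEM: the derived key depends on an
# ephemeral-static and a static-static Diffie-Hellman value, binding the sender's
# identity to the shared secret. NOT the HPKE key schedule (labels/context omitted).
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def raw(pk) -> bytes:
    return pk.public_bytes(Encoding.Raw, PublicFormat.Raw)

def kdf(dh: bytes, context: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"toy dh-akem" + context).derive(dh)

def auth_encap(sk_sender, pk_receiver):
    esk = X25519PrivateKey.generate()                     # ephemeral key pair
    enc = raw(esk.public_key())                           # sent on the wire
    dh = esk.exchange(pk_receiver) + sk_sender.exchange(pk_receiver)
    return kdf(dh, enc + raw(pk_receiver)), enc

def auth_decap(enc, sk_receiver, pk_sender):
    epk = X25519PublicKey.from_public_bytes(enc)
    dh = sk_receiver.exchange(epk) + sk_receiver.exchange(pk_sender)
    return kdf(dh, enc + raw(sk_receiver.public_key()))

skS, skR = X25519PrivateKey.generate(), X25519PrivateKey.generate()
k1, enc = auth_encap(skS, skR.public_key())
k2 = auth_decap(enc, skR, skS.public_key())
assert k1 == k2   # both sides derive the same authenticated shared key
```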
2021
EUROCRYPT
Candidate Obfuscation via Oblivious LWE Sampling 📺
We present a new, simple candidate construction of indistinguishability obfuscation (iO). Our scheme is inspired by lattices and learning-with-errors (LWE) techniques, but we are unable to prove security under a standard assumption. Instead, we formulate a new falsifiable assumption under which the scheme is secure. Furthermore, the scheme plausibly achieves post-quantum security. Our construction is based on the recent ``split FHE'' framework of Brakerski, D\"ottling, Garg, and Malavolta (EUROCRYPT '20), and we provide a new instantiation of this framework. As a first step, we construct an iO scheme that is provably secure assuming that LWE holds and that it is possible to obliviously generate LWE samples without knowing the corresponding secrets. We define a precise notion of oblivious LWE sampling that suffices for the construction. It is known how to obliviously sample from any distribution (in a very strong sense) using iO, and our result provides a converse, showing that the ability to obliviously sample from the specific LWE distribution (in a much weaker sense) already implies iO. As a second step, we give a heuristic construction of oblivious LWE sampling. At a very high level, we do this by homomorphically generating pseudorandom LWE samples using an encrypted pseudorandom function.
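For readers unfamiliar with the notation, the short numpy sketch below generates ordinary LWE samples (A, b = As + e mod q) with toy parameters; it is a naive sampler that knows the secret and the noise, which is exactly what the oblivious-sampling notion above must avoid.

```python
# Naive (non-oblivious) LWE sampling: this sampler knows s and e, which is what
# oblivious LWE sampling forbids. Toy parameters, shown only to fix notation.
import numpy as np

rng = np.random.default_rng(0)
n, m, q, sigma = 16, 32, 3329, 2.0            # dimension, #samples, modulus, noise width

s = rng.integers(0, q, size=n)                # secret vector
A = rng.integers(0, q, size=(m, n))           # uniform public matrix
e = np.rint(rng.normal(0, sigma, size=m)).astype(int) % q   # small noise
b = (A @ s + e) % q                           # the LWE samples are the pairs (A, b)
print(A.shape, b.shape)
```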
2021
EUROCRYPT
Automatic Search of Meet-in-the-Middle Preimage Attacks on AES-like Hashing 📺
The Meet-in-the-Middle (MITM) preimage attack is highly effective in breaking the preimage resistance of many hash functions, including but not limited to the full MD5, HAVAL, and Tiger, and reduced SHA-0/1/2. It was also shown to be a threat to hash functions built on block ciphers like AES by Sasaki in 2011. Recently, such attacks on AES hashing modes evolved from merely using the freedom of choosing the internal state to also exploiting the freedom of choosing the message state. However, detecting such attacks, especially the evolved variants, is difficult. In previous works, the search space of the configurations of such attacks was limited so that manual analysis remained practical, which resulted in sub-optimal solutions. In this paper, we remove artificial limitations in previous works, formulate the essential ideas of the construction of the attack in well-defined ways, and translate the problem of searching for the best attacks into optimization problems under constraints in Mixed-Integer-Linear-Programming (MILP) models. The MILP models capture a large solution space of valid attacks, and the objectives of the MILP models are attack configurations with minimized computational complexity. With such MILP models and an off-the-shelf solver, it is efficient to search for the best attacks exhaustively. As a result, we obtain the first attacks against the full (5-round) and an extended (5.5-round) version of Haraka-512 v2, and 8-round AES-128 hashing modes, as well as improved attacks covering more rounds of Haraka-256 v2 and other members of AES and Rijndael hashing modes.
2021
EUROCRYPT
Classical vs Quantum Random Oracles 📺
In this paper, we study the relationship between the security of cryptographic schemes in the random oracle model (ROM) and in the quantum random oracle model (QROM). First, we introduce the notion of a proof of quantum access to a random oracle (PoQRO), which is a protocol for proving to a classical verifier the capability to quantumly access a random oracle. We observe that a proof of quantumness recently proposed by Brakerski et al. (TQC '20) can be seen as a PoQRO. We also give a construction of a publicly verifiable PoQRO relative to a classical oracle. Based on these, we construct digital signature and public key encryption schemes that are secure in the ROM but insecure in the QROM. In particular, we obtain the first examples of natural cryptographic schemes that separate the ROM and QROM under a standard cryptographic assumption. On the other hand, we give lifting theorems from security in the ROM to that in the QROM for certain types of cryptographic schemes and security notions. For example, our lifting theorems are applicable to Fiat-Shamir non-interactive arguments, Fiat-Shamir signatures, and Full-Domain-Hash signatures, etc. We also discuss applications of our lifting theorems to quantum query complexity.
2021
EUROCRYPT
Threshold Garbled Circuits and Ad Hoc Secure Computation 📺
Garbled Circuits (GCs) represent fundamental and powerful tools in cryptography, and many variants of GCs have been considered since their introduction. An important property of garbled circuits is that they can be evaluated securely if and only if exactly one key for each input wire is obtained: no fewer and no more. In this work we study the case when: 1) some of the wire-keys are missing, but we are still interested in computing the output of the garbled circuit and 2) the evaluator of the GC might have both keys for a constant number of wires. We begin by studying this question in terms of non-interactive multi-party computation (NIMPC), which is strongly connected with GCs. In this notion, there is a fixed number of parties (n) that can get correlated information from a trusted setup. Then these parties can send an encoding of their input to an evaluator, which can compute the output of the function. Similarly to the notion of ad hoc secure computation proposed by Beimel et al. [ITCS 2016], we consider the case when fewer than n parties participate in the online phase, and in addition we let these parties collude with the evaluator. We refer to this notion as Threshold NIMPC. In addition, we show that when the number of parties participating in the online phase is a fixed threshold l <= n, then it is possible to securely evaluate any l-input function. We build our result on top of a new secret-sharing scheme (which can be of independent interest) and on the results proposed by Benhamouda, Krawczyk and Rabin [Crypto 2017]. Our protocol can be used to compute any function in NC1 in the information-theoretic setting and any function in P assuming one-way functions. As a second (and main) contribution, we consider a slightly different notion of security in which the number of parties that can participate in the online phase is not specified, and can be any number c above the threshold l (in this case the evaluator cannot collude with the other parties). We solve a question left open by Beimel, Ishai and Kushilevitz [Eurocrypt 2017], showing how to build a secure protocol for the case when c is constant, under the Learning with Errors assumption.
2021
EUROCRYPT
Post-Quantum Multi-Party Computation 📺
We initiate the study of multi-party computation for classical functionalities in the plain model, with security against malicious quantum adversaries. We observe that existing techniques readily give a polynomial-round protocol, but our main result is a construction of *constant-round* post-quantum multi-party computation. We assume mildly super-polynomial quantum hardness of learning with errors (LWE), and quantum polynomial hardness of an LWE-based circular security assumption. Along the way, we develop the following cryptographic primitives that may be of independent interest: 1.) A spooky encryption scheme for relations computable by quantum circuits, from the quantum hardness of (a circular variant of) the LWE problem. This immediately yields the first quantum multi-key fully-homomorphic encryption scheme with classical keys. 2.) A constant-round post-quantum non-malleable commitment scheme, from the mildly super-polynomial quantum hardness of LWE. To prove the security of our protocol, we develop a new straight-line non-black-box simulation technique against parallel sessions that does not clone the adversary's state. This technique may also be relevant to the classical setting.
2021
EUROCRYPT
The Nested Subset Differential Attack: A Practical Direct Attack Against LUOV which Forges a Signature within 210 Minutes 📺
In 2017, Ward Beullens et al. submitted Lifted Unbalanced Oil and Vinegar (LUOV) [4], which is a modification to the Unbalanced Oil and Vinegar scheme by Patarin. Previously, Ding et al. proposed the Subfield Differential Attack [20], which prompted a change of parameters by the authors of LUOV for the second round of the NIST post-quantum standardization competition [3]. In this paper we propose a modification to the Subfield Differential Attack called the Nested Subset Differential Attack, which fully breaks half of the parameter sets put forward. We also show by experimentation that this attack can be carried out in under 210 minutes for the level I security parameters, and is thus not just a theoretical attack. The Nested Subset Differential Attack is a large improvement of the Subfield Differential Attack that can be used in real-world circumstances. Moreover, we only use what is called the "lifted" structure of LUOV, and our attack can be thought of as a development of solving "lifted" quadratic systems.
2021
EUROCRYPT
Black-Box Non-Interactive Non-Malleable Commitments 📺
There has been recent exciting progress in building non-interactive non-malleable commitments from judicious assumptions. All proposed approaches proceed in two steps. First, obtain simple “base” commitment schemes for very small tag/identity spaces based on various sub-exponential hardness assumptions. Next, assuming sub-exponential non-interactive witness indistinguishable proofs (NIWIs), and variants of keyless collision-resistant hash functions, construct non-interactive compilers that convert tag-based non-malleable commitments for a small tag space into tag-based non-malleable commitments for a larger tag space. We propose the first black-box construction of non-interactive non-malleable commitments. Our key technical contribution is a novel implementation of the non-interactive proof of consistency required for tag amplification. Prior to our work, the only known approach to tag amplification without setup and with black-box use of the base scheme (Goyal, Lee, Ostrovsky, and Visconti, FOCS 2012) added multiple rounds of interaction. Our construction satisfies the strongest known definition of non-malleability, i.e., CCA (chosen commitment attack) security. In addition to being black-box, our approach dispenses with the need for sub-exponential NIWIs, which was common to all prior work. Instead of NIWIs, we rely on sub-exponential hinting PRGs, which can be obtained based on a broad set of assumptions such as sub-exponential CDH or LWE.
2021
EUROCRYPT
"Bifurcated Cryptography" Folding Competing Cryptosystems into a Single Scheme: On Accountability vs. Anonymity in Private Signatures 📺
Over the development of modern cryptography, often, alternative cryptographic schemes are developed to achieve goals that in some important respect are orthogonal. Thus, we have to choose either a scheme which achieves the first goal and not the second, or vice versa. This results in two types of schemes that compete with each other. In the basic area of user privacy, specifically in anonymous (multi-use credentials) signing, such an orthogonality exists between anonymity and accountability. The conceptual contribution of this work is to reverse the above orthogonality by design, which essentially typifies the last 25 years or so, and to suggest an alternative methodology where the opposed properties are carefully folded into a single scheme. The schemes will support both opposing properties simultaneously in a bifurcated fashion, where: - First, based on rich semantics expressed over the message's context and content, the user, etc., the relevant property is applied point-wise per message operation depending on a predicate; and - Secondly, at the same time, the schemes provide what we call ``branch-hiding;'' namely, the resulting calculated value hides from outsiders which property has actually been locally applied. Specifically, we precisely define and give the first construction and security proof of a ``Bifurcated Anonymous Signature'' (BiAS): A scheme which supports either absolute anonymity or anonymity with accountability, based on a specific contextual predicate, while being branch-hiding. This novel signing scheme has numerous applications not easily implementable or not considered before, especially because: (i) the conditional traceability does 'not' rely on a trusted authority as it is (non-interactively) encapsulated into signatures; and (ii) signers 'know' the predicate value and can make a conscious choice at each signing time. Technically, we realize BiAS from homomorphic commitments for a general family of predicates that can be represented by bounded-depth circuits. Our construction is generic and can be instantiated in the standard model from lattices and, more efficiently, from bilinear maps. In particular, the signature length is independent of the circuit size when we use commitments with suitable efficiency properties.
2021
EUROCRYPT
Oblivious Transfer is in MiniQCrypt 📺
MiniQCrypt is a world where quantum-secure one-way functions exist, and quantum communication is possible. We construct an oblivious transfer (OT) protocol in MiniQCrypt that achieves simulation-security against malicious quantum polynomial-time adversaries, building on the foundational work of Bennett, Brassard, Crepeau and Skubiszewska (CRYPTO 1991). Combining the OT protocol with prior works, we obtain secure two-party and multi-party computation protocols also in MiniQCrypt. This is in contrast to the classical world, where it is widely believed that OT does not exist in MiniCrypt.
2021
EUROCRYPT
Order-C Secure Multiparty Computation for Highly Repetitive Circuits 📺
Running secure multiparty computation (MPC) protocols with hundreds or thousands of players would allow leveraging large volunteer networks (such as blockchains and Tor) and help justify honest majority assumptions. However, most existing protocols have at least a linear (multiplicative) dependence on the number of players, making scaling difficult. Known protocols with asymptotic efficiency independent of the number of parties (excluding additive factors) require expensive circuit transformations that induce large overheads. We observe that the circuits used in many important applications of MPC such as training algorithms used to create machine learning models have a highly repetitive structure. We formalize this class of circuits and propose an MPC protocol that achieves O(|C|) total complexity for this class. We implement our protocol and show that it is practical and outperforms O(n|C|) protocols for modest numbers of players.
2021
EUROCRYPT
Towards Accountability in CRS Generation 📺
It is well known that several cryptographic primitives cannot be achieved without a common reference string (CRS). These include, for instance, non-interactive zero-knowledge for NP, or maliciously secure computation in fewer than four rounds. The security of these primitives relies heavily upon the assumption that the trusted authority, who generates the CRS, does not misuse the randomness used in the CRS generation. However, we argue that there is no such thing as an unconditionally trusted authority and every authority must be held accountable for any trust to be well-founded. Indeed, a malicious authority can, for instance, recover private inputs of honest parties given transcripts of the protocols executed with respect to the CRS it has generated. While eliminating trust in the trusted authority may not be entirely feasible, can we at least move towards achieving some notion of accountability? We propose a new notion in which, if the CRS authority releases the private inputs of protocol executions to others, we can then provide a publicly-verifiable proof that certifies that the authority misbehaved. We study the feasibility of this notion in the context of non-interactive zero knowledge and two-round secure two-party computation.
2021
EUROCRYPT
TARDIS: A Foundation of Time-Lock Puzzles in UC 📺
Time-based primitives like time-lock puzzles (TLP) are finding widespread use in practical protocols, partially due to the surge of interest in the blockchain space where TLPs and related primitives are perceived to solve many problems. Unfortunately, the security claims are often shaky or plainly wrong since these primitives are used under composition. One reason is that TLPs are inherently not UC secure and time is tricky to model and use in the UC model. On the other hand, just specifying standalone notions of the intended task, let alone correctly using only standalone notions like non-malleable TLPs, might be hard or impossible for the given task. And even when possible, a standalone-secure primitive is harder to apply securely in practice afterwards, as its behavior under composition is unclear. The ideal solution would be a model of TLPs in the UC framework to allow simple modular proofs. In this paper we provide a foundation for proving composable security of practical protocols using time-lock puzzles and related timed primitives in the UC model. We construct UC-secure TLPs based on random oracles and show that using random oracles is necessary. In order to prove security, we provide a simple and abstract way to reason about time in UC protocols. Finally, we demonstrate the usefulness of this foundation by constructing applications that are interesting in their own right, such as UC-secure two-party computation with output-independent abort.
2021
EUROCRYPT
Multi-Party Reusable Non-Interactive Secure Computation from LWE 📺
Motivated by the goal of designing versatile and flexible secure computation protocols that at the same time require as little interaction as possible, we present new multiparty reusable Non-Interactive Secure Computation (mrNISC) protocols. This notion, recently introduced by Benhamouda and Lin (TCC 2020), is essentially two-round Multi-Party Computation (MPC) protocols where the first round of messages serves as a reusable commitment to the private inputs of participating parties. Using these commitments, any subset of parties can later compute any function of their choice on their respective inputs by just sending a single message to a stateless evaluator, conveying the result of the computation but nothing else. Importantly, the input commitments can be computed without knowing anything about other participating parties (neither their identities nor their number) and they are reusable across any number of desired computations. We give a construction of mrNISC that achieves standard simulation security, as classical multi-round MPC protocols achieve. Our construction relies on the Learning With Errors (LWE) assumption with polynomial modulus, and on the existence of a pseudorandom function (PRF) in $\mathsf{NC}^1$. We achieve semi-malicious security in the plain model and malicious security by further relying on trusted setup (which is unavoidable for mrNISC). In comparison, the only previously known constructions of mrNISC were either using bilinear maps or using strong primitives such as program obfuscation. We use our mrNISC to obtain new Multi-Key FHE (MKFHE) schemes with threshold decryption: - In the CRS model, we obtain threshold MKFHE for $\mathsf{NC}^1$ based on LWE with only {\em polynomial} modulus and PRFs in $\mathsf{NC}^1$, whereas all previous constructions rely on LWE with super-polynomial modulus-to-noise ratio. - In the plain model, we obtain threshold levelled MKFHE for $\mathsf{P}$ based on LWE with {\em polynomial} modulus, PRF in $\mathsf{NC}^1$, and NTRU, and another scheme for constant number of parties from LWE with sub-exponential modulus-to-noise ratio. The only known prior construction of threshold MKFHE (Ananth et al., TCC 2020) in the plain model restricts the set of parties who can compute together at the onset.
2021
EUROCRYPT
Constant-Overhead Unconditionally Secure Multiparty Computation over Binary Fields 📺
We study the communication complexity of unconditionally secure multiparty computation (MPC) protocols in the honest majority setting. Despite tremendous efforts in achieving efficient protocols for binary fields under computational assumptions, there are no efficient unconditional MPC protocols in this setting. In particular, there are no n-party protocols with constant overhead admitting communication complexity of O(n) bits per gate. Cascudo, Cramer, Xing and Yuan (CRYPTO 2018) were the first to achieve such an overhead in the amortized setting by evaluating O(log n) copies of the same circuit in the binary field in parallel. In this work, we construct the first unconditional MPC protocol secure against a malicious adversary in the honest majority setting that evaluates just a single boolean circuit with amortized communication complexity of O(n) bits per gate.
2021
EUROCRYPT
On the power of multiple anonymous messages: Frequency Estimation and Selection in the Shuffle Model of Differential Privacy 📺
It is well-known that general secure multi-party computation can in principle be applied to implement differentially private mechanisms over distributed data with utility matching the curator (a.k.a. central) model. In this paper we study the power of protocols running on top of a much weaker primitive: A non-interactive anonymous channel, known as the shuffled model in the differential privacy literature. Such protocols are implementable in a scalable way using known cryptographic methods and are known to enable non-interactive, differentially private protocols with error much smaller than what is possible in the local model. We study fundamental counting problems in the shuffled model and obtain tight, up to poly-logarithmic factors, bounds on the error and communication in several settings. For the problem of frequency estimation for n users and a domain of size B, we obtain: - A nearly tight lower bound of ˜Ω(min(n^(1/4), sqrt(B))) on the error in the single-message shuffled model. This implies that the protocols obtained from the amplification via shuffling work of Erlingsson et al. (SODA 2019) and Balle et al. (Crypto 2019) are essentially optimal for single-message protocols. - Protocols in the multi-message shuffled model with poly(log B, log n) bits of communication per user and poly log B error, which provide an exponential improvement on the error compared to what is possible with single-message algorithms. This implies protocols with similar error and communication guarantees for several well-studied problems such as heavy hitters, d-dimensional range counting, M-estimation of the median and quantiles, and more generally sparse non-adaptive statistical query algorithms. For the related selection problem on a domain of size B, we prove: - A nearly tight lower bound of Ω(B) on the number of users in the single-message shuffled model. This significantly improves on the Ω(B^(1/17)) lower bound obtained by Cheu et al. (Eurocrypt 2019). A key ingredient in the proof is a lower bound on the error of locally-private frequency estimation in the low-privacy (aka high ε) regime. For this we develop new techniques to extend the results of Duchi et al. (FOCS 2013; JASA 2018) and Bassily & Smith (STOC 2015), whose techniques only gave tight bounds in the high-privacy setting.
2021
EUROCRYPT
Unbounded Multi-Party Computation from Learning with Errors 📺
We consider the problem of round-optimal *unbounded MPC*: in the first round, parties publish a message that depends only on their input. In the second round, any subset of parties can jointly and securely compute any function $f$ over their inputs in a single round of broadcast. We do not impose any a priori bound on the number of parties nor on the size of the functions that can be computed. Our main result is a semi-honest two-round protocol for unbounded MPC in the plain model from the hardness of the standard learning with errors (LWE) problem. Prior work in the same setting assumes the hardness of problems over bilinear maps. Thus, our protocol is the first example of unbounded MPC that is post-quantum secure. The central ingredient of our protocol is a new scheme of attribute-based secure function evaluation (AB-SFE) with *public decryption*. Our construction combines techniques from the realm of homomorphic commitments with delegation of lattice basis. We believe that such a scheme may find further applications in the future.
2021
EUROCRYPT
Password Hashing and Preprocessing 📺
How does the cryptanalytic effort needed to compromise t out of m instances of hashed passwords scale with the number of users when arbitrary preprocessing information on the hash function is available? We provide a formal treatment of this problem in the multi-instance setting with auxiliary information. A central contribution of our work is an (arguably simple) transcript-counting argument that allows us to resolve a fundamental question left open by Bellare, Ristenpart, and Tessaro (BRT; CRYPTO 2012) in multi-instance security. We leverage this proof technique to formally justify unrecoverability of hashed salted passwords in the presence of auxiliary information in the random-oracle model. To this end we utilize the recent pre-sampling techniques for dealing with auxiliary information developed by Coretti et al. (CRYPTO 2018). Our bounds closely match those commonly assumed in practice. Besides hashing of passwords through a monolithic random oracle, we consider the effect of iteration, a technique that is used in classical mechanisms, such as bcrypt and PBKDF2, to slow down the rate of guessing. Building on the work of BRT, we formulate a notion of KDF security, also in the presence of auxiliary information, and prove an appropriate composition theorem for it.
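As a concrete instance of the iteration mechanism mentioned above (bcrypt, PBKDF2), the following short example uses Python's standard hashlib; the iteration count is purely illustrative and not a parameter recommendation.

```python
# Iterated, salted password hashing with PBKDF2-HMAC-SHA256 (standard library).
# Iteration slows down every guess; the per-user salt separates the m instances.
import hashlib, hmac, os

def hash_password(password: bytes, iterations: int = 600_000):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return salt, iterations, digest

def verify(password: bytes, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, iters, stored = hash_password(b"correct horse battery staple")
assert verify(b"correct horse battery staple", salt, iters, stored)
assert not verify(b"wrong guess", salt, iters, stored)
```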
2021
EUROCRYPT
Mind the Middle Layer: The HADES Design Strategy Revisited 📺
The HADES design strategy combines the classical SPN construction with the Partial SPN (PSPN) construction, in which at every encryption round, the non-linear layer is applied to only a part of the state. In a HADES design, a middle layer that consists of PSPN rounds is surrounded by outer layers of SPN rounds. The security arguments of HADES with respect to statistical attacks use only the SPN rounds, disregarding the PSPN rounds. This allows the designers not to pose any restriction on the MDS matrix used as the linear mixing operation. In this paper we show that the choice of the MDS matrix significantly affects the security level provided by HADES designs. If the MDS is chosen properly, then the security level of the scheme against differential and linear attacks is significantly higher than claimed by the designers. On the other hand, weaker choices of the MDS allow for extremely large invariant subspaces that pass the entire middle layer without activating any non-linear operation (a.k.a. S-box). We showcase our results on the Starkad and Poseidon instantiations of HADES. For Poseidon, we significantly improve the lower bounds on the number of active S-boxes with respect to both differential and linear cryptanalysis provided by the designers – for example, from 28 to 60 active S-boxes for the t = 6 variant. For Starkad, we show that for any variant with t (i.e., the number of S-boxes in each round) divisible by 4, the cipher admits a huge invariant subspace that passes any number of PSPN rounds without activating any S-box (e.g., a subspace of size 2^1134 for the t = 24 variant). Furthermore, for various choices of the parameters, this invariant subspace can be used to mount a preimage attack on the hash function that breaks its security claims. On the other hand, we show that the problem can be fixed easily by replacing t with any value that is not divisible by 4. Following our paper, the designers of Starkad and Poseidon amended their design by adding requirements that ensure that the MDS matrix is chosen properly.
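To illustrate the structural difference between an SPN and a PSPN round that the analysis above exploits, here is a toy, cryptographically meaningless Python sketch over a state of t 4-bit cells; the S-box and the mixing layer are arbitrary placeholders, and in particular the mixing layer is not an MDS matrix, unlike in Starkad and Poseidon.

```python
# Toy full-SPN vs. partial-SPN (PSPN) round over a state of t 4-bit cells.
# S-box and linear layer are arbitrary placeholders (the linear layer is NOT MDS);
# the only point is that a PSPN round applies the S-box to part of the state.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def mix(state):
    # placeholder linear layer: xor each cell with its right neighbour (cyclically)
    t = len(state)
    return [state[i] ^ state[(i + 1) % t] for i in range(t)]

def add_key(state, key):
    return [x ^ k for x, k in zip(state, key)]

def spn_round(state, key):
    return add_key(mix([SBOX[x] for x in state]), key)      # S-box on the FULL state

def pspn_round(state, key):
    state = [SBOX[state[0]]] + state[1:]                     # S-box on ONE cell only
    return add_key(mix(state), key)

state, key = [1, 2, 3, 4, 5, 6], [7, 7, 7, 7, 7, 7]
print("SPN :", spn_round(state, key))
print("PSPN:", pspn_round(state, key))
```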
2021
EUROCRYPT
Security Analysis of Quantum Lightning 📺
Zhandry recently defined a new cryptographic object called quantum lightning, which has a number of useful applications, including a strong form of quantum money. Further, they proposed a construction of quantum lightning based on superpositions of low-rank matrices. The scheme is unusual, so it is difficult to base the scheme's security on any widespread computational assumptions. So instead, they proposed a new hardness assumption that, if true, could be used to prove security. In this work, we show that the hardness assumption is in fact false, so the proof of security does not hold. However, we note that the proposal for quantum lightning has not been proven insecure. This work is the first step in analyzing the security of Zhandry's proposal and moving toward a scheme that we can prove to be secure.
2021
EUROCRYPT
Non-interactive Distributional Indistinguishability (NIDI) and Non-Malleable Commitments 📺
We introduce non-interactive distributionally indistinguishable arguments (NIDI) to remedy a significant weakness of NIWI proofs: namely, the lack of meaningful secrecy when proving statements about NP languages with unique witnesses. NIDI arguments allow a prover $\mathcal{P}$ to send a single message to a verifier $\mathcal{V}$, given which $\mathcal{V}$ can obtain a sample $d$ from a (secret) distribution $\mathcal{D}$ together with a proof of membership of $d$ in an NP language. The soundness guarantee is that if the sample $d$ obtained by the verifier $\mathcal{V}$ is not in the language, then $\mathcal{V}$ outputs $\bot$. The secrecy guarantee is that secrets about the distribution remain hidden: for every pair of (sufficiently) hard-to-distinguish distributions $\mathcal{D}_0$ and $\mathcal{D}_1$, a NIDI that outputs samples from $\mathcal{D}_0$ with proofs is indistinguishable from one that outputs samples from $\mathcal{D}_1$ with proofs. We build NIDI arguments that satisfy secrecy for sufficiently hard distributions, assuming sub-exponential indistinguishability obfuscation and sub-exponentially secure (variants of) one-way functions. We demonstrate preliminary applications of NIDI and of our techniques to obtaining the first (relaxed) non-interactive constructions in the plain model, from well-founded assumptions, of: -- Commit-and-prove that provably hides the committed message -- CCA-secure commitments against non-uniform adversaries. The commit phase of our commitment schemes consists of a single message from the committer to the receiver, followed by a randomized output by the receiver (that need not be sent to the committer).
2021
EUROCRYPT
Dynamic Ad Hoc Clock Synchronization 📺
Clock synchronization allows parties to establish a common notion of global time by leveraging a weaker synchrony assumption, i.e., local clocks with approximately the same speed. Despite intensive investigation of the problem in the fault-tolerant distributed computing literature, existing solutions do not apply to settings where participation is unknown, e.g., the ad hoc model of Beimel et al. [EUROCRYPT 17], or is dynamically shifting over time, e.g., the fluctuating/sleepy/dynamic-availability models of Garay et al. [CRYPTO 17], Pass and Shi [ASIACRYPT 17] and Badertscher et al. [CCS 18]. We show how to apply and extend ideas from the blockchain literature to devise synchronizers that work in such dynamic ad hoc settings and tolerate corrupted minorities under the standard assumption that local clocks advance at approximately the same speed. We discuss both the setting of honest-majority hashing power and that of a PKI with honest majority. Our main result is a synchronizer that is directly integrated with a new proof-of-stake (PoS) blockchain protocol, Ouroboros Chronos, which we construct and prove secure; to our knowledge, this is the first PoS blockchain protocol to rely only on local clocks, while tolerating worst-case corruption and dynamically fluctuating participation. We believe that this result might be of independent interest.
2021
EUROCRYPT
High-Precision Bootstrapping of RNS-CKKS Homomorphic Encryption Using Optimal Minimax Polynomial Approximation and Inverse Sine Function 📺
Approximate homomorphic encryption with the residue number system (RNS), called the RNS-variant Cheon-Kim-Kim-Song (RNS-CKKS) scheme, is a fully homomorphic encryption scheme that supports arithmetic operations on encrypted real or complex number data. Although the RNS-CKKS scheme is a fully homomorphic encryption scheme, most applications of the RNS-CKKS scheme use it only as a leveled homomorphic encryption scheme because of the impracticality of its bootstrapping operation. One of the crucial problems of the bootstrapping operation is its poor precision. While other basic homomorphic operations ensure sufficiently high precision for practical use, the bootstrapping operation only supports about 20-bit fixed-point precision at best, which has so far not been precise enough for reliable large-depth homomorphic computations. In this paper, we improve the message precision in the bootstrapping operation of the RNS-CKKS scheme. Since the homomorphic modular reduction process is one of the most important steps in determining the precision of the bootstrapping, we focus on the homomorphic modular reduction process. First, we propose a fast algorithm for obtaining the optimal minimax approximate polynomial of the modular reduction function and the scaled sine/cosine function over the union of the approximation regions, called an improved multi-interval Remez algorithm. In fact, this algorithm derives the optimal minimax approximate polynomial of any continuous function over any union of finitely many intervals. Next, we propose the composite function method using the inverse sine function to reduce the difference between the scaling factor used in the bootstrapping and the default scaling factor. With these methods, we reduce the approximation error in the bootstrapping of the RNS-CKKS scheme by 1/1176~1/42 (5.4~10.2-bit precision improvement) for each parameter setting. While the bootstrapping without the composite function method has 27.2~30.3-bit precision at maximum, the bootstrapping with the composite function method has 32.6~40.5-bit precision.
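The short plaintext-only Python sketch below (purely illustrative, ignoring everything about encrypted evaluation and polynomial approximation) shows why composing with the inverse sine helps: near an integer m, (1/2π)·arcsin(sin(2πx)) recovers the offset x − m exactly, whereas the classical scaled sine (1/2π)·sin(2πx) only approximates it up to a cubic error term.

```python
# Scaled sine vs. the composite arcsin(sin(.)) map for modular reduction near an
# integer. Plaintext-only illustration; the scheme evaluates polynomial
# approximations of these maps homomorphically, which is not modelled here.
import math

def frac_via_sine(x):      # classical approach: (1/2π)·sin(2πx) ≈ x - round(x)
    return math.sin(2 * math.pi * x) / (2 * math.pi)

def frac_via_arcsin(x):    # composite approach: (1/2π)·arcsin(sin(2πx))
    return math.asin(math.sin(2 * math.pi * x)) / (2 * math.pi)

m = 12345
for eps in (1e-2, 1e-3, 1e-4):
    x = m + eps
    err_sine = abs(frac_via_sine(x) - eps)
    err_comp = abs(frac_via_arcsin(x) - eps)
    print(f"eps={eps:.0e}  sine error={err_sine:.2e}  arcsin(sin) error={err_comp:.2e}")
```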
2021
EUROCRYPT
On the Compressed-Oracle Technique, and Post-Quantum Security of Proofs of Sequential Work 📺
We revisit the so-called compressed oracle technique, introduced by Zhandry for analyzing quantum algorithms in the quantum random oracle model (QROM). To start off with, we offer a concise exposition of the technique, which easily extends to the parallel-query QROM, where in each query-round the considered algorithm may make several queries to the QROM in parallel. This variant of the QROM allows for a more fine-grained query-complexity analysis. Our main technical contribution is a framework that simplifies the use of (the parallel-query generalization of) the compressed oracle technique for proving query complexity results. With our framework in place, whenever applicable, it is possible to prove quantum query complexity lower bounds by means of purely classical reasoning. More than that, for typical examples the crucial classical observations that give rise to the classical bounds are sufficient to conclude the corresponding quantum bounds. We demonstrate this on a few examples, recovering known results but also obtaining new results. Our main target is the hardness of finding a q-chain with fewer than q parallel queries, i.e., a sequence x_0, x_1, ..., x_q with x_i = H(x_{i-1}) for all 1 \leq i \leq q. The above problem of finding a hash chain is of fundamental importance in the context of proofs of sequential work. Indeed, as a concrete cryptographic application of our techniques, we prove quantum security of the ``Simple Proofs of Sequential Work'' by Cohen and Pietrzak.
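For reference, the q-chain target described above is computed classically by a strictly sequential loop of q hash evaluations, e.g. with SHA-256 standing in for the random oracle H:

```python
# Computing a q-chain x_0, x_1, ..., x_q with x_i = H(x_{i-1}), H = SHA-256.
# Each step needs the previous output, so the q evaluations cannot be parallelised,
# which is the property proofs of sequential work rely on.
import hashlib, os

def hash_chain(x0: bytes, q: int) -> list:
    chain = [x0]
    for _ in range(q):
        chain.append(hashlib.sha256(chain[-1]).digest())   # x_i = H(x_{i-1})
    return chain

x0 = os.urandom(32)
chain = hash_chain(x0, q=1000)
assert len(chain) == 1001
```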
2021
EUROCRYPT
One-way functions and malleability oracles: Hidden shift attacks on isogeny-based protocols 📺
Supersingular isogeny Diffie-Hellman key exchange (SIDH) is a post-quantum protocol based on the presumed hardness of computing an isogeny between two supersingular elliptic curves given some additional torsion point information. Unlike other isogeny-based protocols, SIDH has been widely believed to be immune to subexponential quantum attacks because of the non-commutative structure of the endomorphism rings of supersingular curves. We contradict this commonly believed misconception in this paper. More precisely, we highlight the existence of an abelian group action on the SIDH key space, and we show that for sufficiently \emph{unbalanced} and \emph{overstretched} SIDH parameters, this action can be efficiently computed (heuristically) using the torsion point information revealed in the protocol. This reduces the underlying hardness assumption to a hidden shift problem instance which can be solved in quantum subexponential time. We formulate our attack in a new framework allowing the inversion of one-way functions in quantum subexponential time provided a malleability oracle with respect to some commutative group action. This framework unifies our new attack with earlier subexponential quantum attacks on isogeny-based protocols, and it may be of further interest for cryptanalysis.
2021
EUROCRYPT
On the Power of Expansion: More Efficient Constructions in the Random Probing Model 📺
The random probing model is a leakage model in which each wire of a circuit leaks with a given probability $p$. This model enjoys practical relevance thanks to a reduction to the noisy leakage model, which is admitted as the right formalization for power and electromagnetic side-channel attacks. In addition, the random probing model is much more convenient than the noisy leakage model to prove the security of masking schemes. In a recent work, Ananth, Ishai and Sahai (CRYPTO 2018) introduce a nice expansion strategy to construct random probing secure circuits. Their construction tolerates a leakage probability of $2^{-26}$, which is the first quantified achievable leakage probability in the random probing model. In a follow-up work, Bela\"id, Coron, Prouff, Rivain and Taleb (CRYPTO 2020) generalize their idea and put forward a complete and practical framework to generate random probing secure circuits. The so-called expanding compiler can bootstrap simple base gadgets as long as they satisfy a new security notion called \emph{random probing expandability} (RPE). They further provide an instantiation of the framework which tolerates a $2^{-8}$ leakage probability in complexity $\mathcal{O}(\kappa^{7.5})$ where $\kappa$ denotes the security parameter. In this paper, we provide an in-depth analysis of the RPE security notion. We exhibit the first upper bounds for the main parameter of a RPE gadget, which is known as the \emph{amplification order}. We further show that the RPE notion can be made tighter and we exhibit strong connections between RPE and the \emph{strong non-interference} (SNI) composition notion. We then introduce the first generic constructions of gadgets achieving RPE for any number of shares and with nearly optimal amplification orders and provide an asymptotic analysis of such constructions. Last but not least, we introduce new concrete constructions of small gadgets achieving maximal amplification orders. This allows us to obtain much more efficient instantiations of the expanding compiler: we obtain a complexity of $\mathcal{O}(\kappa^{3.9})$ for a slightly better leakage probability, as well as $\mathcal{O}(\kappa^{3.2})$ for a slightly lower leakage probability.
2021
EUROCRYPT
Breaking the Circuit Size Barrier for Secure Computation under Quasi-Polynomial LPN 📺
In this work we introduce a new (circuit-dependent) homomorphic secret sharing (HSS) scheme for all log/log log-local circuits, with communication proportional only to the width of the circuit, and polynomial computation, assuming the super-polynomial hardness of learning parity with noise (LPN). At the heart of our new construction is a pseudorandom correlation generator (PCG), which allows two parties to locally stretch, from short seeds, pseudorandom instances of an arbitrary log/log log-local additive correlation. Our main application, and the main motivation behind this work, is a generic two-party secure computation protocol for every layered (boolean or arithmetic) circuit of size s with total communication O(s/log log s) and polynomial computation, assuming the super-polynomial hardness of the standard learning parity with noise assumption (a circuit is layered if its nodes can be partitioned into layers, such that any wire connects adjacent layers). This expands the set of assumptions under which the ‘circuit size barrier’ can be broken, for a large class of circuits. The strength of the underlying assumption is tied to the sublinearity factor: we achieve communication O(s/k(s)) under the s^(2^k(s))-hardness of LPN, for any k(s) ≤ (log log s)/4. Previously, the set of assumptions known to imply a PCG for correlations of degree ω(1) or generic secure computation protocols with sublinear communication was restricted to LWE, DDH, and a circularly secure variant of DCR.
2021
EUROCRYPT
Indistinguishability Obfuscation from Simple-to-State Hard Problems: New Assumptions, New Techniques, and Simplification 📺
In this work, we study the question of what set of simple-to-state assumptions suffice for constructing functional encryption and indistinguishability obfuscation ($i\mathcal{O}$), supporting all functions describable by polynomial-size circuits. Our work improves over the state-of-the-art work of Jain, Lin, Matt, and Sahai (Eurocrypt 2019) in multiple dimensions. New Assumption: Previous to our work, all constructions of $i\mathcal{O}$ from simple assumptions required novel pseudorandomness generators involving LWE samples and constant-degree polynomials over the integers, evaluated on the error of the LWE samples. In contrast, Boolean pseudorandom generators (PRGs) computable by constant-degree polynomials have been extensively studied since the work of Goldreich (2000). We show how to replace the novel pseudorandom objects over the integers used in previous works, with appropriate Boolean pseudorandom generators with sufficient stretch, when combined with LWE with binary error over suitable parameters. Both binary error LWE and constant degree Goldreich PRGs have been a subject of extensive cryptanalysis since much before our work and thus we back the plausibility of our assumption with security against algorithms studied in context of cryptanalysis of these objects. New Techniques: we introduce a number of new techniques: - We show how to build partially-hiding public-key functional encryption, supporting degree-2 functions in the secret part of the message, and arithmetic $\mathsf{NC}^1$ functions over the public part of the message, assuming only standard assumptions over asymmetric pairing groups. - We construct single-ciphertext secret-key functional encryption for all circuits with {\em linear} key generation, assuming only the LWE assumption. Simplification: Unlike prior works, our new techniques furthermore let us construct public-key functional encryption for polynomial-sized circuits directly (without invoking any bootstrapping theorem, nor transformation from secret-key to public key FE), and based only on the polynomial hardness of underlying assumptions. The functional encryption scheme satisfies a strong notion of efficiency where the size of the ciphertext grows only sublinearly in the output size of the circuit and not its size. Finally, assuming that the underlying assumptions are subexponentially hard, we can bootstrap this construction to achieve $i\mathcal{O}$.
2021
EUROCRYPT
The More The Merrier: Reducing the Cost of Large Scale MPC 📺
Secure multi-party computation (MPC) allows multiple parties to perform secure joint computations on their private inputs. Today, applications for MPC are growing with thousands of parties wishing to build federated machine learning models or trusted setups for blockchains. To address such scenarios we propose a suite of novel MPC protocols that maximize throughput when run with large numbers of parties. In particular, our protocols have both communication and computation complexity that decrease with the number of parties. Our protocols build on prior protocols based on packed secret-sharing, introducing new techniques to build more efficient computation for general circuits. Specifically, we introduce a new approach for handling \emph{linear attacks} that arise in protocols using packed secret-sharing and we propose a method for unpacking shared multiplication triples without increasing the asymptotic costs. Compared with prior work, we avoid the $\log |C|$ overhead required when generically compiling circuits of size $|C|$ for use in a SIMD computation, and we improve over folklore ``committee-based'' solutions by a factor of $O(s)$, the statistical security parameter. In practice, our protocol is up to $10X$ faster than any known construction, under a reasonable set of parameters.
2021
EUROCRYPT
Three Third Generation Attacks on the Format Preserving Encryption Scheme FF3 📺
Format-Preserving Encryption (FPE) schemes accept plaintexts from any finite set of values (such as social security numbers or birth dates) and produce ciphertexts that belong to the same set. They are extremely useful in practice since they make it possible to encrypt existing databases or communication packets without changing their format. Due to industry demand, NIST had standardized in 2016 two such encryption schemes called FF1 and FF3. They immediately attracted considerable cryptanalytic attention with decreasing attack complexities. The best currently known attack on the Feistel construction FF3 has data and memory complexity of ${O}(N^{11/6})$ and time complexity of ${O}(N^{17/6})$, where the input belongs to a domain of size $N \times N$. In this paper, we present and experimentally verify three improved attacks on FF3. Our best attack achieves the tradeoff curve $D=M=\tilde{O}(N^{2-t})$, $T=\tilde{O}(N^{2+t})$ for all $t \leq 0.5$. In particular, we can reduce the data and memory complexities to the more practical $\tilde{O}(N^{1.5})$, and at the same time, reduce the time complexity to $\tilde{O}(N^{2.5})$. We also identify another attack vector against FPE schemes, the {\em related-domain} attack. We show how one can mount powerful attacks when the adversary is given access to the encryption under the same key in different domains, and show how to apply it to efficiently distinguish FF3 and FF3-1 instances.
2021
EUROCRYPT
Structured Encryption and Dynamic Leakage Suppression 📺
Structured encryption (STE) schemes encrypt data structures in such a way that they can be privately queried. Special cases of STE include searchable symmetric encryption (SSE) and graph encryption. Like all sub-linear encrypted search solutions, STE leaks information about queries against persistent adversaries. To address this, a line of work on leakage suppression was recently initiated that focuses on techniques to mitigate or completely remove the leakage of STE schemes (Kamara et al., CRYPTO ’18 and Kamara and Moataz, Eurocrypt ’19). A notable example is the cache-based compiler which, when combined with the rebuild compiler, transforms any dynamic STE scheme that leaks the query equality into a new scheme that does not. Unfortunately, this compiler can only produce static schemes and it was left as an open problem to design a compiler that could yield dynamic constructions. In this work, we propose a dynamic variant of the cache-based compiler. Our compiler can transform any volume-hiding semi-dynamic or mutable STE scheme that leaks the query equality pattern into a new fully-dynamic construction that does not. Using this compiler, we design three new fully-dynamic STE schemes that are “almost” and fully zero-leakage, which, under natural assumptions about the data and query distributions, are asymptotically more efficient than using black-box ORAM simulation. These are the first constructions of their kind.
2021
EUROCRYPT
A Deeper Look at Machine Learning-Based Cryptanalysis 📺
At CRYPTO’19, Gohr proposed a new cryptanalysis strategy based on the utilisation of machine learning algorithms. Using deep neural networks, he managed to build a neural-based distinguisher that surprisingly surpassed state-of-the-art cryptanalysis efforts on one of the versions of the well-studied NSA block cipher SPECK (this distinguisher could in turn be placed in a larger key recovery attack). While this work opens new possibilities for machine learning-aided cryptanalysis, it remains unclear how this distinguisher actually works and what information the machine learning algorithm is deducing. The attacker is left with a black-box that does not tell much about the nature of the possible weaknesses of the algorithm tested, while hope is thin, as interpretability of deep neural networks is a notoriously difficult task. In this article, we propose a detailed analysis and thorough explanations of the inherent workings of this new neural distinguisher. First, we studied the classified sets and tried to find some patterns that could guide us to better understand Gohr’s results. We show with experiments that the neural distinguisher generally relies on the differential distribution on the ciphertext pairs, but also on the differential distribution in the penultimate and antepenultimate rounds. In order to validate our findings, we construct a distinguisher for the SPECK cipher based on pure cryptanalysis, without using any neural network, that achieves basically the same accuracy as Gohr’s neural distinguisher and with the same efficiency (therefore improving over previous non-neural-based distinguishers). Moreover, as another approach, we provide a machine learning-based distinguisher that strips down Gohr’s deep neural network to a bare minimum. We are able to remain very close to Gohr’s distinguishers’ accuracy using simple standard machine learning tools. In particular, we show that Gohr’s neural distinguisher is in fact inherently building a very good approximation of the Differential Distribution Table (DDT) of the cipher during the learning phase, and using that information to directly classify ciphertext pairs. This result allows a full interpretability of the distinguisher and represents on its own an interesting contribution towards interpretability of deep neural networks. Finally, we propose some methods to improve over Gohr’s work and possible new neural distinguisher settings. All our results are confirmed with experiments we have conducted on the SPECK block cipher.
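Since the explanation above hinges on the Differential Distribution Table (DDT), here is a minimal Python example of what a DDT is, computed for an arbitrary toy 4-bit S-box; SPECK itself is an ARX cipher with no S-box, and the distribution the paper approximates is over ciphertext-pair differences of round-reduced SPECK, so this only fixes the concept.

```python
# Differential Distribution Table (DDT) of a toy 4-bit S-box:
# DDT[a][b] = #{ x : S[x ^ a] ^ S[x] == b }. Illustrative only.
TOY_SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):                       # input difference
        for x in range(n):
            b = sbox[x ^ a] ^ sbox[x]        # output difference
            table[a][b] += 1
    return table

T = ddt(TOY_SBOX)
print(T[0][0])                               # trivial differential: always 16
print(max(max(row) for row in T[1:]))        # best non-trivial differential count
```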
2021
EUROCRYPT
The Rise of Paillier: Homomorphic Secret Sharing and Public-Key Silent OT 📺
We describe a simple method for solving the distributed discrete logarithm problem in Paillier groups, allowing two parties to locally convert multiplicative shares of a secret (in the exponent) into additive shares. Our algorithm is perfectly correct, unlike previous methods with an inverse polynomial error probability. We obtain the following applications and further results. – Homomorphic secret sharing: We construct homomorphic secret sharing for branching programs with negligible correctness error and supporting exponentially large plaintexts, with security based on the decisional composite residuosity (DCR) assumption. – Correlated pseudorandomness: Pseudorandom correlation functions (PCFs), recently introduced by Boyle et al. (FOCS 2020), allow two parties to obtain a practically unbounded quantity of correlated randomness, given a pair of short, correlated keys. We construct PCFs for the oblivious transfer (OT) and vector oblivious linear evaluation (VOLE) correlations, based on the quadratic residuosity (QR) or DCR assumptions, respectively. We also construct a pseudorandom correlation generator (for producing a bounded number of samples, all at once) for OLE, based on a combination of the DCR and learning parity with noise assumptions. – Public-key silent OT/VOLE: We upgrade our PCF constructions to have a public-key setup, where after independently posting a public key, each party can locally derive its PCF key. This allows completely silent generation of an arbitrary amount of OTs or VOLEs, without any interaction beyond a PKI, based on QR and DCR. The public-key setup is based on a novel non-interactive vector OLE protocol which can be seen as a variant of the Bellare-Micali oblivious transfer protocol.
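For background on the algebraic setting only, here is a textbook Paillier sketch in Python with toy parameters, showing the additively homomorphic structure (multiplicative in the ciphertext, additive in the plaintext) that the distributed discrete-logarithm procedure operates on; the paper's share-conversion algorithm itself is not reproduced here.

```python
# Textbook Paillier with toy primes: Enc(m1)*Enc(m2) mod n^2 decrypts to m1+m2.
# Only illustrates the group the paper works in; this is NOT the paper's
# distributed discrete-log / share-conversion procedure, and the primes are tiny.
import math, random

p, q = 104729, 104723                     # toy primes, far too small for real use
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                                 # standard choice: g^m = 1 + m*n (mod n^2)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(41), enc(1)
assert dec((c1 * c2) % n2) == 42          # additive homomorphism on plaintexts
```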
2021
EUROCRYPT
Advanced Lattice Sieving on GPUs, with Tensor Cores 📺
In this work, we study GPU implementations of various state-of-the-art sieving algorithms for lattices (Becker-Gama-Joux 2015, Becker-Ducas-Gama-Laarhoven 2016, Herold-Kirshanova 2017) inside the General Sieve Kernel (G6K, Albrecht et al. 2019). In particular, we extensively exploit the recently introduced Tensor Cores -- originally designed for raytracing and machine learning -- and demonstrate their fitness for the cryptanalytic task at hand. We also propose a new dual-hash technique for efficient detection of `lift-worthy' pairs to accelerate a key ingredient of G6K: finding short lifted vectors. We obtain new computational records, reaching dimension 180 for the SVP Darmstadt Challenge improving upon the previous record for dimension 155. This computation ran for 51.6 days on a server with 4 NVIDIA Turing GPUs and 1.5TB of RAM. This corresponds to a gain of about two orders of magnitude over previous records both in terms of wall-clock time and of energy efficiency.
2021
EUROCRYPT
Efficient Bootstrapping for Approximate Homomorphic Encryption with Non-Sparse Keys 📺
We present a bootstrapping procedure for the full-RNS variant of the approximate homomorphic-encryption scheme of Cheon et al., CKKS (Asiacrypt 17, SAC 18). Compared to the previously proposed procedures (Eurocrypt 18 & 19, CT-RSA 20), our bootstrapping procedure is more precise, more efficient (in terms of CPU cost and number of consumed levels), and is more reliable and 128-bit-secure. Unlike the previous approaches, it does not require the use of sparse secret-keys. Therefore, to the best of our knowledge, this is the first procedure that enables a highly efficient and precise bootstrapping with a low probability of failure for parameters that are 128-bit-secure under the most recent attacks on sparse R-LWE secrets. We achieve this efficiency and precision by introducing three novel contributions: (i) We propose a generic algorithm for homomorphic polynomial-evaluation that takes into account the approximate rescaling and is optimal in level consumption. (ii) We optimize the key-switch procedure and propose a new technique for linear transformations (double hoisting). (iii) We propose a systematic approach to parameterize the bootstrapping, including a precise way to assess its failure probability. We implemented our improvements and bootstrapping procedure in the open-source Lattigo library. For example, bootstrapping a plaintext in C^32768 takes 18 seconds, has an output coefficient modulus of 505 bits, a mean precision of 19.1 bits, and a failure probability of 2^-15.58. Hence, we achieve 14.1x improvement in bootstrapped throughput (plaintext-bit per second), with respect to the previous best results, and we have a failure probability 468x smaller and ensure 128-bit security.
2021
PKC
Two-round n-out-of-n and Multi-Signatures and Trapdoor Commitment from Lattices 📺
Although they have been studied for a long time, distributed signature protocols have garnered renewed interest in recent years in view of novel applications to topics like blockchains. Most recent works, however, have focused on distributed versions of ECDSA or variants of Schnorr signatures; in particular, little attention has been given to constructions based on post-quantum secure assumptions such as the hardness of lattice problems. A few lattice-based threshold signature and multi-signature schemes have been proposed in the literature, but they either rely on hash-and-sign lattice signatures (which tend to be comparatively inefficient), use expensive generic transformations, or only come with incomplete security proofs. In this paper, we construct several lattice-based distributed signing protocols with low round complexity following the Fiat--Shamir with Aborts (FSwA) paradigm of Lyubashevsky (Asiacrypt 2009). Our protocols can be seen as distributed variants of the fast Dilithium-G signature scheme, and the full security proof can be made assuming the hardness of the module SIS and LWE problems. A key step in achieving security (unexplained in some earlier papers) is to prevent the leakage that can occur when parties abort after their first message, which can inevitably happen in the Fiat--Shamir with Aborts setting. We manage to do so using homomorphic commitments. Exploiting the similarities between FSwA and Schnorr-style signatures, our approach makes the most of observations from recent advancements in the discrete-log setting, such as Drijvers et al.'s seminal work on two-round multi-signatures (S&P 2019). In particular, we observe that the use of commitments not only resolves the subtle issue with aborts, but also makes it possible to realize secure two-round n-out-of-n distributed signing and multi-signatures in the plain public-key model by equipping the commitment with a trapdoor feature. The construction of a suitable trapdoor commitment scheme from lattices is a side contribution of this paper.
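For readers unfamiliar with why aborts arise at all in the Fiat-Shamir with Aborts paradigm, the following one-dimensional toy sketch shows the rejection-sampling step that makes the signer's response independent of the secret. It is purely illustrative background, not the paper's lattice scheme or its distributed protocol; the bounds B and T are made-up parameters.

import random

B, T = 1000, 50          # nonce range and worst-case |c*s| bound (toy parameters)

def fswa_response(c: int, s: int):
    """Return z = y + c*s for a fresh nonce y, or None (abort) if z would leak s."""
    assert abs(c * s) <= T
    y = random.randint(-B, B)
    z = y + c * s
    return z if abs(z) <= B - T else None   # abort instead of sending a skewed z

# Accepted responses are uniform on [-(B-T), B-T] regardless of the secret s:
for s in (3, -17):
    zs = [fswa_response(c=2, s=s) for _ in range(100_000)]
    kept = [z for z in zs if z is not None]
    print(f"s={s:4d}: abort rate ~{1 - len(kept)/len(zs):.3f}, "
          f"range [{min(kept)}, {max(kept)}]")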
2021
PKC
On the (In)Security of the Diffie-Hellman Oblivious PRF with Multiplicative Blinding 📺
Oblivious Pseudorandom Function (OPRF) is a protocol between a client holding input x and a server holding key k for a PRF F. At the end, the client learns F_k(x) and nothing else, while the server learns nothing. OPRFs have found diverse applications as components of larger protocols, and the currently most efficient instantiation, with security proven in the UC model, is F_k(x)=H2(x,(H1(x))^k) computed using so-called exponential blinding, i.e., the client sends a=(H1(x))^r for random r, the server responds b=a^k, which the client unblinds as v=b^{1/r} to compute F_k(x)=H2(x,v). However, this protocol requires two variable-base exponentiations on the client, while a more efficient multiplicative blinding scheme replaces one or both client exponentiations with fixed-base exponentiations, decreasing the client's computational cost by a factor of two to six, depending on pre-computation. We analyze the security of the above OPRF with multiplicative blinding, showing surprising weaknesses that offer attack avenues which are not present with exponential blinding. We characterize the security of this OPRF implementation as a "Revised OPRF" functionality, a relaxation of the UC OPRF functionality used in prior work. On the positive side, we show that the Revised OPRF suffices for the security of OPAQUE, the asymmetric PAKE protocol, hence allowing OPAQUE the computational advantages of multiplicative blinding. Unfortunately, we also show examples of other OPRF applications which become insecure when using such blinding. The conclusion is that the use of multiplicative blinding for F_k(x) defined as above, in settings where the correct value g^k (needed for multiplicative blinding) is not authenticated and OPRF inputs are of low entropy, must be carefully analyzed or avoided altogether. We complete the picture by showing a simple and safe alternative definition of the function F_k(x) which offers (full) UC OPRF security using either form of blinding.
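The two blinding modes discussed in this abstract are easy to see in code. The toy Python sketch below works over a small Schnorr group; the group parameters, the hash-to-group shortcut, and all names are illustrative and not taken from the paper or from any standard.

import hashlib, secrets

p, q, g = 2039, 1019, 4                    # toy group: g generates the order-q subgroup

def H1(x: bytes) -> int:                   # toy hash-to-group (real schemes differ)
    e = int.from_bytes(hashlib.sha256(b"H1" + x).digest(), "big") % q
    return pow(g, e if e else 1, p)

def H2(x: bytes, v: int) -> bytes:
    return hashlib.sha256(b"H2" + x + v.to_bytes(2, "big")).digest()

k = secrets.randbelow(q - 1) + 1           # server key; g^k is its public OPRF key
x = b"password"

# Exponential blinding: a = H1(x)^r, b = a^k, v = b^(1/r).
r = secrets.randbelow(q - 1) + 1
a = pow(H1(x), r, p)
b = pow(a, k, p)                           # server's single operation
v_exp = pow(b, pow(r, -1, q), p)

# Multiplicative blinding: a = H1(x) * g^r, b = a^k, v = b * (g^k)^(-r).
# The client needs g^k; the attacks in the abstract arise when g^k is unauthenticated.
gk = pow(g, k, p)
r = secrets.randbelow(q - 1) + 1
a = (H1(x) * pow(g, r, p)) % p
b = pow(a, k, p)
v_mul = (b * pow(gk, q - r, p)) % p        # strip the blinding factor g^(kr)

assert v_exp == v_mul == pow(H1(x), k, p)
print("F_k(x) =", H2(x, v_exp).hex())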
2021
PKC
Compact Zero-Knowledge Proofs for Threshold ECDSA with Trustless Setup 📺
Threshold ECDSA signatures provide a higher level of security for a crypto wallet, since more than t out of n parties are required to sign a transaction. The state-of-the-art bandwidth-efficient threshold ECDSA uses the additively homomorphic Castagnos-Laguillaumie (CL) encryption based on a group G of unknown order, together with a number of zero-knowledge proofs in G. In this paper, we propose compact zero-knowledge proofs for threshold ECDSA to lower the communication bandwidth as well as the computation cost. The proposed zero-knowledge proofs include the discrete-logarithm relation in G and the well-formedness of a CL ciphertext. When applied to two-party ECDSA, we lower the bandwidth of the key generation algorithm by 47%, and the running times of the key generation and signing algorithms are improved by about 35% and 104%, respectively. When applied to threshold ECDSA, our first scheme is optimized for the key generation algorithm (about 70% lower bandwidth and 70% faster computation in key generation, at the cost of 20% larger bandwidth in signing), while our second scheme offers an all-round performance improvement (about 60% lower bandwidth and 27% faster computation in key generation, without additional cost in signing).
2021
PKC
An Efficient and Generic Construction for Signal's Handshake (X3DH): Post-Quantum, State Leakage Secure, and Deniable 📺
The Signal protocol is a secure instant-messaging protocol that underlies the security of numerous applications such as WhatsApp, Skype, and Facebook Messenger, among many others. The Signal protocol consists of two sub-protocols known as the X3DH protocol and the double ratchet protocol, where the latter has recently gained much attention. For instance, Alwen, Coretti, and Dodis (Eurocrypt'19) provided a concrete security model along with a generic construction based on simple building blocks that are instantiable from versatile assumptions, including post-quantum ones. In contrast, as far as we are aware, work focusing on the X3DH protocol has been limited. In this work, we cast the X3DH protocol as a specific type of authenticated key exchange (AKE) protocol, which we call a Signal-conforming AKE protocol, and formally define its security model based on the vast prior work on AKE protocols. We then provide the first efficient generic construction of a Signal-conforming AKE protocol based on standard cryptographic primitives such as key encapsulation mechanisms (KEMs) and signature schemes. Specifically, this results in the first post-quantum secure replacement of the X3DH protocol based on well-established assumptions. Similar to the X3DH protocol, our Signal-conforming AKE protocol offers a strong (or stronger) flavor of security, where the exchanged key remains secure even when all the non-trivial combinations of the long-term secrets and session-specific secrets are compromised. Moreover, our protocol has a weak flavor of deniability, and we further show how to strengthen it using ring signatures. Finally, we provide a full-fledged, generic C implementation of our (weakly deniable) protocol. We instantiate it with several Round 3 candidates (finalists and alternates) to the NIST post-quantum standardization process and compare their performance.