The Module Learning with Errors (M-LWE) problem is one of the most commonly used hardness assumptions in lattice-based cryptography. In its standard version, a matrix $A$ is sampled uniformly at random over a quotient ring $R_q$, as well as noisy linear equations of the form $b = A \cdot s + e \bmod q$, where $s$ is the secret, sampled uniformly at random over $R_q$, and $e$ is the error, coming from a Gaussian distribution. Many previous works have focused on variants of M-LWE where the secret and/or the error are sampled from different distributions. Only a few works have considered different distributions for the matrix $A$. One variant proposed in the literature is to consider matrix distributions where the low-order bits of a uniform $A$ are deleted. This seems a natural approach in order to save bandwidth. We call it truncated M-LWE.
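To make the truncation concrete, here is a minimal sketch of sampling an LWE instance and deleting the low-order bits of the matrix. For brevity it works over the integers modulo $q$ rather than the quotient ring $R_q$ of the paper, and all parameters ($q$, dimensions, Gaussian width, number of truncated bits $d$) are toy values chosen only for illustration.

```python
import numpy as np

# Toy parameters (illustrative only; not the paper's concrete choices).
q = 3329          # modulus
n, m = 4, 8       # secret dimension and number of samples
d = 4             # number of low-order bits deleted from A

rng = np.random.default_rng(0)

# Standard LWE-style sampling: uniform matrix A, uniform secret s,
# small (rounded) Gaussian error e, and noisy equations b = A*s + e mod q.
A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, q, size=n)
e = np.round(rng.normal(0.0, 2.0, size=m)).astype(int)
b = (A @ s + e) % q

# Truncated variant: keep only the high-order bits of A, i.e. zero out
# the d low-order bits. Only the kept bits need to be transmitted,
# saving roughly d bits per matrix entry in bandwidth.
A_trunc = (A >> d) << d
```

The difference `A - A_trunc` is entrywise in $[0, 2^d)$, which is exactly the information discarded by the truncation.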
In this work, we show that the hardness of standard M-LWE implies the hardness of truncated M-LWE, for both the search and decision versions. Prior works only covered the search variant and relied on the (module) NTRU assumption, limitations which we are able to overcome. Overall, we provide two approaches, offering different advantages. The first uses a general Rényi divergence argument, applicable to a wide range of secret/error distributions, but which only works for the search variants of (truncated) M-LWE. The second applies to the decision versions, by going through an intermediate variant of M-LWE in which additional hints on the secret are given to the adversary. However, this reduction makes use of discrete Gaussian distributions.
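As a hedged illustration of the tool behind the first approach, the snippet below computes the Rényi divergence of order $a > 1$, $RD_a(P \| Q) = \big(\sum_x P(x)^a / Q(x)^{a-1}\big)^{1/(a-1)}$, between two discrete Gaussians over a finite support with slightly shifted centers, the kind of distribution pair such arguments typically compare. The support, width, shift, and order are arbitrary toy choices, not parameters from the paper.

```python
import numpy as np

def renyi_divergence(p, q, a=2.0):
    """Renyi divergence of order a > 1 between two discrete distributions
    given as probability vectors over the same support."""
    mask = q > 0  # restrict to the support of q
    return float(np.sum(p[mask] ** a / q[mask] ** (a - 1))) ** (1.0 / (a - 1))

# Two discrete Gaussians on {-50, ..., 50} with a small center shift.
xs = np.arange(-50, 51)
sigma = 3.0
gauss = lambda c: np.exp(-((xs - c) ** 2) / (2 * sigma ** 2))

P = gauss(0.0); P /= P.sum()
Q = gauss(0.5); Q /= Q.sum()

rd = renyi_divergence(P, Q, a=2.0)
```

The divergence is always at least 1 and stays close to 1 when the two distributions are close, which is what lets a Rényi-based reduction transfer success probabilities between neighboring distributions in search problems.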