Thursday, April 21, 2016

Study group: Robustness of the Learning with Errors Assumption


In the paper Robustness of the Learning with Errors Assumption, Goldwasser, Kalai, Peikert and Vaikuntanathan show that the standard LWE assumption implies the hardness of LWE even if the secret is drawn from an arbitrary distribution with sufficient min-entropy (and also given an arbitrary hard-to-invert function of the secret, but we won't focus on this point here since we didn't cover it during the study group). A notable application of this result is a symmetric-key encryption scheme which is provably secure in the presence of leakage, but whose parameters do not have to anticipate a maximum amount of leakage. Leakage-resilient schemes usually guarantee security up to, say, $\lambda$ bits of leakage, and employ dedicated constructions to tolerate such a loss at the cost of some overhead. Now think of the extreme scenario where none of those $\lambda$ bits are actually leaked: the overhead is still there, making the scheme less efficient, yet the constructions add no security, since the unprotected version was secure anyway.

We adopt the following conventions: even though they aren't the most general possible, they make the explanation of the work easier. The Decisional Learning With Errors problem with parameters $n$, $q$ and $\alpha$ ($DLWE_{n,q,\alpha}$) asks to distinguish samples of the form $\langle\mathbf{a}_i,\mathbf{s}\rangle+x_i$ from random, where the $\mathbf{a}_i \leftarrow U(\mathbb{Z}_q^n)$ are known, randomly chosen vectors, $\mathbf{s} \leftarrow U(\mathbb{Z}_q^n)$ is a secret randomly chosen vector and each $x_i \leftarrow \psi_\alpha$ is an error term drawn from the Gaussian distribution over $\mathbb{Z}_q$ with mean 0 and standard deviation $\alpha$. If we collect $m$ such samples, the above can be rewritten using matrices as $$ (\mathbf{A},\mathbf{A}\mathbf{s}+\mathbf{x}) \approx (\mathbf{A},\mathbf{u}) $$ where $\mathbf{A} \leftarrow U(\mathbb{Z}_q^{m\times n})$, $\mathbf{x} \leftarrow \psi_\alpha^m$ and $\mathbf{u}$ is random. We use the symbol $\approx$ to denote (computational or statistical) indistinguishability. Finally, we denote by $DLWE_{n,q,\alpha}(\mathcal{D})$ the same problem, except that the secret $\mathbf{s}$ is drawn from the distribution $\mathcal{D}$ instead of the uniform one.
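To make the matrix form concrete, here is a minimal Python sketch that generates $m$ such samples; the dimensions, modulus and Gaussian parameter are toy values chosen only for illustration and have nothing to do with secure parameters.

```python
# Toy DLWE samples in matrix form: (A, A*s + x mod q) vs (A, u).
import numpy as np

n, m, q = 8, 16, 97          # toy dimension, sample count and modulus (assumed)
alpha = 2.0                  # toy Gaussian parameter (assumed)

rng = np.random.default_rng(0)

A = rng.integers(0, q, size=(m, n))                     # public random matrix
s = rng.integers(0, q, size=n)                          # uniform secret
x = np.rint(rng.normal(0, alpha, size=m)).astype(int)   # rounded Gaussian error

b = (A @ s + x) % q              # the "real" samples
u = rng.integers(0, q, size=m)   # the "random" alternative

# The DLWE problem: given (A, b) or (A, u), decide which case you are looking at.
```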

Then, the main theorem, stated using the above formalism, reads $$ DLWE_{\ell,q,\gamma} \Rightarrow DLWE_{n,q,\beta}(\mathcal{D}) $$ where $\mathcal{D}$ is a distribution over $\{0,1\}^n$ with min-entropy at least $k$, $\ell \leq (k-\omega(\log{n}))/\log{q}$ and $\gamma,\beta > 0$ are such that $\gamma/\beta$ is a negligible function of $n$. The requirement that $\mathcal{D}$ be over $\{0,1\}^n$ instead of over $\mathbb{Z}_q^n$ is made just for simplicity and can be dropped at the cost of a less neat proof. In essence, we want to prove that $$ (\mathbf{A},\mathbf{A}\mathbf{s}+\mathbf{x}) \approx (\mathbf{A},\mathbf{u}) $$ assuming $DLWE_{\ell,q,\gamma}$ and where $\mathbf{A} \leftarrow U(\mathbb{Z}_q^{m\times n})$, $\mathbf{s} \leftarrow \mathcal{D}$ and $\mathbf{x} \leftarrow \psi_\beta^m$.
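As a rough illustration of the constraint on $\ell$, the following hypothetical computation plugs in made-up values for $n$, $q$ and $k$ and uses $(\log n)^2$ as a crude stand-in for the $\omega(\log{n})$ term; none of these numbers come from the paper.

```python
# Hypothetical numbers only: a quick look at l <= (k - omega(log n)) / log q.
import math

n = 256          # secret length (assumed)
q = 2**15        # modulus (assumed)
k = 128          # min-entropy of the secret distribution (assumed)

slack = math.log2(n) ** 2            # crude stand-in for the omega(log n) term
l_max = (k - slack) / math.log2(q)
print(f"l can be at most roughly {l_max:.1f}")
```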

We just sketch the main idea of each step here. First of all, we draw $\mathbf{B} \leftarrow U(\mathbb{Z}_q^{m\times \ell})$, $\mathbf{C} \leftarrow U(\mathbb{Z}_q^{\ell\times n})$ and $\mathbf{Z} \leftarrow \psi_\gamma^{m\times n}$ and define $\mathbf{A}' = \mathbf{B}\mathbf{C} + \mathbf{Z}$. Each column of $\mathbf{A}'$ is of the form $$ \mathbf{B}\mathbf{c}_i + \mathbf{z}_i $$ which can be thought of as a bunch of $DLWE_{\ell,q,\gamma}$ samples. Hence, under that assumption, we can treat $\mathbf{A}' \approx \mathbf{A}$, since $\mathbf{A}$ was a uniformly random matrix.
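A small sketch of this step, again with toy parameters: we build the "lossy" matrix $\mathbf{A}' = \mathbf{B}\mathbf{C} + \mathbf{Z}$, whose columns are, in effect, batches of $DLWE_{\ell,q,\gamma}$ samples.

```python
# Toy construction of A' = B*C + Z; each column B*c_i + z_i is a batch of
# DLWE_{l,q,gamma} samples, so under that assumption A' looks uniform.
import numpy as np

n, m, l, q = 8, 16, 3, 97    # toy dimensions (l << n) and modulus (assumed)
gamma = 1.0                  # small Gaussian parameter for Z (assumed)

rng = np.random.default_rng(1)

B = rng.integers(0, q, size=(m, l))
C = rng.integers(0, q, size=(l, n))
Z = np.rint(rng.normal(0, gamma, size=(m, n))).astype(int)

A_prime = (B @ C + Z) % q    # plays the role of the uniformly random matrix A
```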

The proof then reduces to showing that $$ (\mathbf{A}',\mathbf{A}'\mathbf{s}+\mathbf{x}) \approx (\mathbf{A}',\mathbf{u}) $$ which can be rewritten as $$ (\mathbf{B}\mathbf{C} + \mathbf{Z},\mathbf{B}\mathbf{C}\mathbf{s}+\mathbf{Z}\mathbf{s}+\mathbf{x}) \approx (\mathbf{B}\mathbf{C} + \mathbf{Z},\mathbf{u}). $$ Instead, the authors prove the following, stronger statement: $$ (\mathbf{B},\mathbf{C},\mathbf{Z},\mathbf{B}\mathbf{C}\mathbf{s}+\mathbf{Z}\mathbf{s}+\mathbf{x}) \approx (\mathbf{B},\mathbf{C},\mathbf{Z},\mathbf{u}). $$

The next step is getting rid of the term $\mathbf{Z}\mathbf{s}+\mathbf{x}$ by noticing that, since $\mathbf{Z}$ was drawn from a Gaussian distribution with standard deviation $\gamma$, $\mathbf{s}$ is a binary vector, $\mathbf{x}$ was drawn from a Gaussian distribution with standard deviation $\beta$ and $\gamma \ll \beta$, the term $\mathbf{Z}\mathbf{s}$ has a negligible impact on $\mathbf{x}$. This means that there exists $\mathbf{x}' \leftarrow \psi_\beta^m$ such that $$ (\mathbf{Z},\mathbf{s},\mathbf{Z}\mathbf{s}+\mathbf{x}) \approx (\mathbf{Z},\mathbf{s},\mathbf{x}'). $$ The proof is then reduced to showing that $$ (\mathbf{B},\mathbf{C},\mathbf{B}\mathbf{C}\mathbf{s}+\mathbf{x}') \approx (\mathbf{B},\mathbf{C},\mathbf{u}). $$
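The following toy experiment (a numerical illustration, not a proof) shows the intuition: with $\mathbf{s}$ binary, the entries of $\mathbf{Z}\mathbf{s}$ have magnitude around $\gamma\sqrt{n}$, which is drowned out by the $\beta$-sized entries of $\mathbf{x}$ when $\gamma/\beta$ is negligible. The parameters are made up purely for illustration.

```python
# "Noise flooding" intuition: Z*s + x and a fresh x' have essentially the
# same spread when gamma << beta and s is binary.
import numpy as np

n, m = 8, 100000
gamma, beta = 1.0, 10000.0   # toy values with gamma << beta (assumed)

rng = np.random.default_rng(2)

s = rng.integers(0, 2, size=n)          # binary secret
Z = rng.normal(0, gamma, size=(m, n))
x = rng.normal(0, beta, size=m)
x_fresh = rng.normal(0, beta, size=m)   # an independent x'

shifted = Z @ s + x
print(np.std(shifted), np.std(x_fresh))  # nearly identical spreads
```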

You can probably notice that we are getting closer to a $DLWE$ instance again; eventually we will be able to use our assumption. Before doing so, we note that, by the leftover hash lemma, $$ (\mathbf{C},\mathbf{C}\mathbf{s}) \approx (\mathbf{C},\mathbf{t}) $$ because of the min-entropy of $\mathbf{s}$, where $\mathbf{t} \leftarrow U(\mathbb{Z}_q^\ell)$ is a random vector. This lets us rewrite the statement we want to prove as $$ (\mathbf{B},\mathbf{B}\mathbf{t}+\mathbf{x}') \approx (\mathbf{B},\mathbf{u}). $$
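Here is a toy empirical look at this step, with made-up parameters: for a fixed random $\mathbf{C}$, the value $\mathbf{C}\mathbf{s} \bmod q$ of a binary secret with high min-entropy is distributed almost uniformly.

```python
# Leftover-hash-lemma intuition: C*s mod q for a random binary s looks
# close to uniform over Z_q (here with l = 1 so we can tally a histogram).
import numpy as np

n, l, q, trials = 32, 1, 17, 200000
rng = np.random.default_rng(3)

C = rng.integers(0, q, size=(l, n))      # one fixed random "hash"
counts = np.zeros(q, dtype=int)
for _ in range(trials):
    s = rng.integers(0, 2, size=n)       # binary secret, high min-entropy
    t = (C @ s)[0] % q
    counts[t] += 1

print(counts / trials)   # each value should appear with probability close to 1/q
```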

We have finally obtained a proper $DLWE_{\ell,q,\beta}$ instance, whose hardness follows from our $DLWE_{\ell,q,\gamma}$ assumption by noting that $DLWE_{\ell,q,\gamma} \Rightarrow DLWE_{\ell,q,\beta}$. $\Box$

Thanks to the above result, the authors define a symmetric-key encryption scheme whose parameters are just $q = q(n)$, $\beta = q/poly(n)$ and $m = poly(n)$. As you can see, nothing is said about any anticipated amount of leakage: the scheme does not introduce any overhead just to protect against leakage. Nevertheless, it is proved secure under the $DLWE_{\ell,q,\gamma}$ assumption with respect to the following definition of security.
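To fix ideas, here is a minimal sketch of a textbook LWE-style symmetric-key bit encryption with the same flavour; this is only an illustration under assumed toy parameters, and the scheme in the paper may differ in its details.

```python
# Textbook LWE-style bit encryption (illustration only, not necessarily the
# exact scheme from the paper): the key is the LWE secret, and a ciphertext is
# an LWE sample with the message encoded in the high-order bit.
import numpy as np

n, q = 16, 2**13             # toy dimension and modulus (assumed)
beta = 4.0                   # toy noise parameter (assumed)
rng = np.random.default_rng(4)

def keygen():
    return rng.integers(0, 2, size=n)            # key drawn from a binary distribution

def encrypt(s, bit):
    a = rng.integers(0, q, size=n)
    e = int(np.rint(rng.normal(0, beta)))
    b = (int(a @ s) + e + bit * (q // 2)) % q    # encode the bit in the top half
    return a, b

def decrypt(s, ct):
    a, b = ct
    d = (b - int(a @ s)) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0   # correct as long as |e| < q/4

s = keygen()
assert all(decrypt(s, encrypt(s, b)) == b for b in (0, 1, 1, 0))
```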

We say that a symmetric encryption scheme is CPA secure w.r.t. $k(n)$-weak keys if, for any distribution $\{\mathcal{D}_n\}_{n\in \mathbb{N}}$ with min-entropy $k(n)$, the scheme is CPA secure even if the secret key is chosen according to the distribution $\mathcal{D}_n$.

A couple of concluding remarks.

  • Our discussion was deliberately informal and did not go into too many details. For instance, we didn't mention how the proof affects the parameters involved; for more insight we refer to the paper itself. If you prefer a "less math, more ideas" approach, we refer to a post on the ECRYPT-EU blog;
  • I find it really interesting that during the study group it was pointed out that saying "no overhead is introduced" is not quite correct. Some overhead is indeed introduced, but in the initial parameters (to compensate for a possible loss through leakage) rather than through specific constructions. This has the advantage that, if no leakage occurs, the scheme is simply more secure because the parameters have been raised.
