Homomorphic evaluation of deep neural networks 


Homomorphic encryption

This part is a very general overview on the topic. These concepts are expanded, together with a more technical presentation, in Chapter 3.
A homomorphic encryption scheme is an encryption scheme that admits some kind of computation on encrypted data. In particular, given an input x encrypted as Enc(x), it should be possible to publicly compute Enc(f(x)) for a function f taken from some class of functions. The key point here is “publicly”, meaning that carrying out this computation has to be possible without having access to any secret information.
For a first trivial example, consider the following scenario: let us say that an encryption scheme consists of taking a message and adding a secret number S to it. Then an encryption of m1 will be c1 = m1 + S, while an encryption of m2 will be c2 = m2 + S. Any party that receives these two encryptions can sum them together and obtain c′ = m1 + m2 + 2S, without knowing m1, m2, or S. A party that knows S and receives the result c′ can simply subtract 2S (or take the result modulo S, if some other conditions hold) and recover m1 + m2. The net result is that the operation of adding two numbers has been delegated to another party, without this party knowing the operands or the result.
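As an illustration, this toy scheme can be written in a few lines of Python (the concrete value of S is arbitrary, and the scheme of course offers no real security):

```python
S = 123456  # the secret number shared by encryptor and decryptor (arbitrary value)

def enc(m, s=S):
    # "Encrypt" by adding the secret number to the message.
    return m + s

def dec_sum(c, n_ciphertexts, s=S):
    # The decryptor knows how many ciphertexts were summed,
    # so it subtracts S once per ciphertext.
    return c - n_ciphertexts * s

c1, c2 = enc(10), enc(32)
c_sum = c1 + c2            # computed by a party knowing neither m1, m2 nor S
assert dec_sum(c_sum, 2) == 42
```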
The “scheme” used in this toy example is obviously insecure, but it turns out that many widely-used encryption schemes already have some homomorphic properties. For example, let us consider the famous RSA cryptosystem [RSA78], in which an encryption of a message m is m^e mod N, where e is a public exponent and N is a public modulus. It is easy to see that, given two ciphertexts c1 = m1^e mod N and c2 = m2^e mod N that encrypt messages m1 and m2, we can multiply them together and obtain c′ = (m1 m2)^e mod N, which is an encryption of m1 m2. This means that we can homomorphically multiply the messages: we can take two ciphertexts, perform only public operations on them, and obtain an encryption of the product of whatever was encrypted in the ciphertexts, without knowing any message or any secret key. For this reason we say that the RSA cryptosystem is multiplicatively homomorphic. Another example is the well-known El Gamal encryption scheme [ElG84], which is also multiplicatively homomorphic. Notice that, for these schemes, we do not know how to perform a homomorphic addition, which would amount to computing a ciphertext that decrypts to the sum of the two original plaintexts.
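To make this concrete, here is a minimal Python sketch of textbook RSA with deliberately tiny toy parameters (real RSA uses moduli of thousands of bits, and the padding used in practice destroys this homomorphic property):

```python
# Toy textbook RSA, for illustration only.
p, q = 61, 53
N = p * q                          # public modulus, 3233
e = 17                             # public exponent, coprime to (p-1)(q-1)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, N)

def dec(c):
    return pow(c, d, N)

m1, m2 = 7, 11
c_prod = (enc(m1) * enc(m2)) % N   # public operation on the two ciphertexts
assert dec(c_prod) == (m1 * m2) % N  # decrypts to the product, 77
```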
Instead, let us consider the famous Paillier cryptosystem [Pai99], based on the decisional composite residuosity assumption. In this scheme, an encryption of a message m is of the form c = g^m · r^n mod n², where g and n are public and r is random. In this case, a party that is given two ciphertexts c1, c2 encrypting messages m1, m2 can compute c′ = c1 · c2 = g^(m1+m2) · (r1 r2)^n mod n², which is a Paillier encryption of m1 + m2. This means that we can homomorphically sum two ciphertexts and produce an encryption of the sum of the two original messages, without knowing them or any secret piece of information. We then say that the Paillier cryptosystem is additively homomorphic. Notice, however, that in this case we do not know how to homomorphically multiply two ciphertexts. We call partially homomorphic encryption schemes those schemes that support either addition or multiplication, but not both. Schemes that are both additively and multiplicatively homomorphic are harder to come by. An example of such a scheme is the DGHV encryption scheme [DGHV10], which will be described in detail later in the manuscript (cf. Chapter 6). In most of these encryption schemes, a “noise” term is added during the encryption for security purposes. This noise (sometimes called “error”) grows when performing homomorphic additions and multiplications, and must remain below a certain threshold in order for the ciphertext to be correctly decryptable. In the majority of the homomorphic schemes known so far, the way noise grows with multiplication is more severe than in the case of addition. For this reason, the most important metric when quantifying the hardness of homomorphically evaluating a circuit is its multiplicative depth.
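The additive homomorphism of Paillier can likewise be demonstrated in a few lines of Python. The parameters below are toy values chosen only for illustration, and g = n + 1 is a standard choice of generator:

```python
import random
from math import gcd

# Minimal textbook Paillier with toy parameters (illustration only).
p, q = 17, 19
n = p * q                                      # 323
n2 = n * n
g = n + 1                                      # standard choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # precomputed decryption constant

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

m1, m2 = 40, 2
c_sum = (enc(m1) * enc(m2)) % n2   # homomorphic addition: multiply the ciphertexts
assert dec(c_sum) == (m1 + m2) % n  # decrypts to 42
```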
Another important definition is that of somewhat homomorphic encryption (SHE) schemes. We use this term to refer to encryption schemes that can evaluate a certain number of operations on encrypted inputs, but for which proceeding further would result in losing decryption correctness, meaning that the result will not decrypt to what is expected. Every encryption scheme is instantiated with some parameters, e.g., the size of the primes which are used, the size of the secret key, etc. We say that a homomorphic encryption scheme is leveled if, for any multiplicative depth L fixed a priori, it is possible to find a set of parameters such that the encryption scheme instantiated with those parameters is able to homomorphically evaluate any circuit of depth L. It is easy to see that this is a stronger notion than that of a somewhat homomorphic encryption scheme.
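As a back-of-the-envelope illustration (with an entirely hypothetical noise model: real noise growth depends on the scheme), one can see why larger parameters buy more multiplicative depth in a leveled scheme:

```python
# Hypothetical model: each homomorphic multiplication roughly squares the
# noise magnitude, and decryption succeeds while the noise stays below a
# threshold fixed by the scheme's parameters.

def depth_supported(fresh_noise, threshold):
    # Count how many multiplicative levels fit before the noise
    # would cross the threshold.
    noise, depth = fresh_noise, 0
    while noise * noise < threshold:
        noise *= noise
        depth += 1
    return depth

# A larger threshold (i.e., larger parameters) supports more depth:
assert depth_supported(4, 2 ** 40) == 4
assert depth_supported(4, 2 ** 400) > depth_supported(4, 2 ** 40)
```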
However, the main obstacle that researchers faced for more than 30 years (since the suggestion of [RAD78]) was the following: in any case, the number of operations that can be evaluated is bounded. At some point, the noise level will become too large and it will be impossible to proceed without losing correctness. The goal of constructing a fully homomorphic encryption (FHE) scheme, i.e., a scheme that can homomorphically evaluate an unbounded number of operations, remained a dream for a long time, and some famous researchers claimed it was never going to be reached. The turning point came in 2009 with the breakthrough by Craig Gentry, then a Ph.D. student at Stanford under the supervision of Dan Boneh. In his dissertation [Gen09a], Gentry put forward the first plausible construction for an FHE scheme based on the hardness of some lattice problems, and that of the approximate GCD problem (cf. Chapter 2 for the details). This breakthrough had the effect of reigniting FHE as a topic of research, and since then many important results followed. Although the techniques improved greatly, and the schemes became simpler and more efficient, the original blueprint presented in Gentry’s thesis continues to underlie all known FHE constructions.
The key idea that was proposed in Gentry’s work is that of bootstrapping. By this term, we denote the process of refreshing a ciphertext in order to produce a new ciphertext that encrypts the same message, but with a lower level of noise, so that more homomorphic operations can be evaluated on it. This operation is at the very core of every FHE scheme known to date, and consists of homomorphically evaluating the decryption circuit of the scheme. Roughly speaking, it is like decrypting the ciphertext with the secret key and then re-encrypting the message, with the difference that the secret key is not known and is replaced by an encryption of the secret key, called the bootstrapping key. This requires an additional hardness assumption, called the circular security assumption, meaning that we must assume it is safe to publish an encryption of the secret key under itself. Although this assumption is still not well studied and understood, and is sometimes regarded with suspicion, no attacks that exploit this extra piece of information have been proposed.
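The effect of bootstrapping on an unbounded computation can be sketched with a toy noise model (purely illustrative numbers; in a real scheme the refresh is achieved by homomorphically evaluating the decryption circuit under the bootstrapping key, which is far more involved):

```python
FRESH_NOISE = 4
THRESHOLD = 2 ** 20   # above this, decryption would fail (illustrative value)

class Ct:
    # A "ciphertext" carrying a message and a noise level.
    def __init__(self, msg, noise=FRESH_NOISE):
        self.msg, self.noise = msg, noise

def mul(a, b):
    # Homomorphic multiplication: noise grows multiplicatively.
    return Ct(a.msg * b.msg, a.noise * b.noise)

def bootstrap(c):
    # Stand-in for the real operation: same message, fresh noise.
    return Ct(c.msg, FRESH_NOISE)

c = Ct(3)
for _ in range(100):                       # no a priori bound on the computation
    c = mul(c, Ct(2))
    if c.noise * FRESH_NOISE >= THRESHOLD:
        c = bootstrap(c)                   # refresh before the next multiplication
assert c.msg == 3 * 2 ** 100 and c.noise < THRESHOLD
```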

FHE in the user-server scenario

Fully homomorphic encryption has huge consequences in the world of delegated computations, as it essentially enables a user to safely hand all of his (encrypted) data to an untrusted remote party, and let it process the information, with the guarantee that the remote party will learn neither the input nor the output of the computation. Practical consequences are fairly easy to see; we present here some simple use cases:
• Emails could be stored encrypted, so that the email provider does not know the content of the messages. Emails could also be searchable, without the provider knowing what a user is looking for.
• Pictures could be uploaded to websites offering image processing capabilities, without the site learning the content of the original picture or that of the final picture.
• Medical data could be safely shared in order to extract statistics or make predictions on one’s health condition, without revealing any sensitive information. For example, in the case of estimating the cost of a life insurance, this could be done by running an algorithm on encrypted data (the applicant’s personal information), and returning an encrypted answer (the predicted cost of the insurance policy).
• One could even go so far as to imagine a completely homomorphic search engine, which can process encrypted search queries and return an encrypted list of matches.
The list of potential examples is unsurprisingly long, and the consequences of a large-scale introduction of FHE would be far-reaching, with a strong enhancement of users’ privacy.
However, it might be worth pointing out something which might not be obvious at first glance. While FHE certainly helps protect users’ privacy against a curious server, the fact that the server is no longer able to collect users’ data would necessarily produce a change in the business plan of numerous internet companies. These enterprises normally provide services for free, in exchange for the possibility to harvest people’s information in order to create tailored advertisement campaigns, extract statistics, and so on. And while someone might argue that this is wrong on moral grounds and that anything that brings it to a stop is welcome, it is also true that companies need revenues for their survival. Preventing companies from generating profits from users’ data would likely force them to adopt different strategies, such as making people pay for services that are normally expected to be free (rather, paid for through personal data), like email addresses, storage space, and the like. This means that, just like any major technology, FHE has potentially serious and life-changing consequences, which have to be evaluated carefully, on the basis of the specific application, the privacy requirements, the costs, etc.
What currently keeps FHE from becoming a widespread tool is essentially efficiency. As we said before, all the solutions we know of for achieving the ability to perform unbounded computations are based on the operation of bootstrapping. Although this procedure has been improved and refined over the years, it remains costly and can considerably limit the efficiency of the scheme. A more viable alternative is that of relying on custom-made SHE instantiations, tailored to the specific application that is targeted. This solution is certainly less flexible, as it has to be adapted on a case-by-case basis, but offers the advantage that no bootstrapping is required. For real-world applications, SHE is usually the chosen approach, and more and more examples of practically efficient SHE are surfacing regularly.

User and server: different problems for different players

Since the user-server scenario is a world with two players, it is to be expected that each of them has a different perspective, different security requirements, and different goals. We analyze them in the following.

The user’s point of view

The user is mainly interested in having the privacy of his input data guaranteed by a strong encryption scheme, but is also concerned by the workload he faces in order to encrypt the input and decrypt the output. Another important point regards the communication complexity, or bandwidth usage: the user would like to minimize the amount of data he has to send to or download from the Cloud, which implies making ciphertexts as small as possible.
An emerging and fast-growing application that lies within the boundaries of Cloud computing is that of machine-learning-as-a-service (MLaaS), where a user submits data to a remote server, which applies a previously-trained predictive model (e.g., a neural network) to the data. The challenge, in this scenario, is usually represented by the complexity of the computations involved in the homomorphic evaluation of the predictive model, especially in the case of deep learning, where the depth of the circuit can be particularly large. For this kind of application, the main goal is then to improve efficiency in order to produce accurate results in a fast and reliable way.

The server’s point of view

Although the previous part was mainly devoted to the user’s needs and how to protect his input data, there can be security requirements on the server’s side as well. In fact, a server might provide some data processing capabilities based on proprietary algorithms that represent a key company asset and constitute sensitive intellectual property. A typical requirement is then that of circuit privacy [SYY99; IP07], meaning that the result returned by the server to the user at the end of the computation should leak no information on the algorithm (or circuit) that was applied to the encrypted inputs. In fact, it turns out that the noise term that is added for security purposes, and that we mentioned before, not only grows with homomorphic operations, but changes in a way that somehow reflects (or depends on) the operations that are performed. Therefore, the noise contained in the final ciphertext that the user receives can leak information about how the result was produced, thus jeopardizing the privacy of the algorithm. Instead, we would like the final result to contain no information at all on the computation, and we model this requirement as follows: let x be an input, encrypted as Enc(x), and let f be a function. Homomorphically evaluating f on Enc(x) means performing a certain number of homomorphic operations in order to produce a ciphertext that decrypts to f(x). The intuitive goal of circuit privacy is then for the outcome to “look like” a somewhat fresh encryption of f(x), which in turn guarantees that the result contains no information on f. The rough idea is that we want to keep the correct result, i.e., f(x), while removing all the information on f.
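A caricature with a hypothetical noise model (multiplications multiply noise magnitudes; no real scheme is implied) shows how the noise can betray the circuit even when two circuits compute the same function:

```python
FRESH = 4

class Ct:
    # A "ciphertext" carrying a message and a noise level.
    def __init__(self, msg, noise=FRESH):
        self.msg, self.noise = msg, noise

def mul(a, b):
    return Ct(a.msg * b.msg, a.noise * b.noise)

x = Ct(7)

# Circuit A returns x unchanged; circuit B computes x * 1.
out_a = x
out_b = mul(x, Ct(1))

# Both results decrypt to the same value...
assert out_a.msg == out_b.msg == 7
# ...but the noise reveals which circuit was applied:
assert out_a.noise != out_b.noise
```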


Our results

In this manuscript we propose contributions that improve the previous state of the art on both ends of the user-server use case.

A new framework for homomorphic evaluation of neural networks

With respect to the user’s point of view, we propose a new framework for efficiently evaluating discretized neural networks on encrypted data. In this framework, the user encrypts his input data under a lattice-based encryption scheme and sends the encrypted data to the other party. This party is able to homomorphically evaluate a neural network of arbitrary depth on the input, and finally returns the encrypted result. The substantial difference between our work [BMMP18] and previous works is that the instantiation of the encryption scheme, i.e., the parameters that define it, does not have to depend on the network that has to be evaluated. In previous works, the chosen approach was to tailor a somewhat homomorphic encryption scheme to a specific network, with parameters that were good enough for that application. The problem is that this solution is (1) not flexible, and (2) not suited for evaluating deep neural networks, which would force one to take extremely large and cumbersome parameters for correctness to hold, thus making the scheme completely inefficient and impractical.
On the other hand, our framework heavily relies on a recent and efficient implementation of the bootstrapping operation, through which we can refresh ciphertexts after the evaluation of each layer of the neural network. In turn, this allows us to “keep going with the computations”, meaning that there is no a priori bound on the number of layers that can be evaluated. The disadvantage of this approach is that, in order to make our neural networks “FHE-friendly”, we have to impose some limitations: the inputs and some internal values (weights and biases) have to be discretized. However, we note that this simplified model of neural networks has already been studied in the literature, and is known to achieve near-state-of-the-art performance.

A new technique for circuit privacy

With respect to the server’s point of view, in [BPMW16] we propose a conceptually different approach to the problem of circuit privacy that, unlike previous works, is not based on annihilating the noise term by adding another large noise term to it (this technique is called noise flooding), or by using bootstrapping in order to “wash away” any trace of the previous computations. Instead, we directly analyze the distribution of the noise term and we show a way of removing the information on the circuit from the ciphertext in a simple and efficient way. In particular, we target the GSW FHE encryption scheme for branching program evaluation, and we introduce a modification that achieves circuit privacy essentially without worsening the overall efficiency of the scheme.
This work also gives better insights into the algebraic structure and the noise distribution of the GSW scheme, and provides new tools for analyzing noise randomization that could be of independent interest.
The main downside of our approach is that it is fundamentally limited to the GSW encryption scheme, and it is not clear how to apply our techniques to other FHE schemes. In fact, although GSW is quite ubiquitous in the FHE world and represents the most recent and asymptotically best instantiation of FHE, other schemes can be preferable and more efficient for specific applications. Another limitation is that our technique only works in the honest-but-curious adversary model, i.e., we have to assume the adversary follows the protocol and plays by the rules, but tries to extract as much information as it can from what it sees. This is opposed to the malicious adversary model, where the adversary can deviate in any way from the prescribed execution, including submitting malformed inputs. In the latter case, our technique breaks down, as it relies on well-formed input ciphertexts.
Another contribution of this work is given by [BBB+17] and regards privacy-preserving interactions with a database held by another party. In particular, we devise a protocol for private information retrieval, meaning that a party A can query a database held by a party B, without B learning what A is looking for. We also show that, under appropriate conditions and with some formalization caveats, our protocol also achieves the property that A learns nothing more from the database than the records that match the query, thus achieving a stronger notion known as oblivious transfer. We also propose a C++ implementation of the protocol that shows the practicality of our approach.

Other contributions

Apart from input data privacy, Cloud computing introduces another crucial challenge: can we trust the computation done by the Cloud? Can we be sure that the result is indeed the correct one and not some random or forged value? Outsourcing computations is clearly useless without a certain level of confidence in the correctness of the result, which is why we would like the Cloud to be able to “convince us” that the result is indeed valid, but without having to re-run the entire computation step by step.
In zero-knowledge (ZK) proofs (introduced in [GMR89]), a powerful prover can prove to a weaker verifier that a particular statement is true, without revealing why this is the case. More formally, given an NP language L with corresponding witness relation R, the prover knows a witness w that demonstrates the truth of a certain statement x ∈ L. With a zero-knowledge proof, the prover is able to convince the verifier that the statement is indeed true, without revealing any information on the witness w. Non-interactive zero-knowledge proofs [BFM88] and succinct ZK arguments [Kil92; Mic94] were introduced shortly thereafter, but remained of mainly theoretical interest until more recently, when several breakthroughs showed that these proofs can be used in practical applications (see, e.g., Pinocchio [PHGR13]).
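As a classical illustration of the idea (a Schnorr-style proof of knowledge of a discrete logarithm, with toy parameters; this is not one of the constructions discussed in the text), one round of such an interactive proof can be sketched as follows:

```python
import random

# Toy Schnorr-style interactive ZK proof of knowledge of a discrete log.
p = 467                      # small prime, for illustration only
q = 233                      # prime with q | p - 1
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

w = 101                      # the prover's witness (discrete log)
y = pow(g, w, p)             # public statement: y = g^w mod p

# One round of the protocol:
r = random.randrange(q)
t = pow(g, r, p)             # prover's commitment
e = random.randrange(q)      # verifier's random challenge
s = (r + e * w) % q          # prover's response

# The verifier checks g^s == t * y^e (mod p); the transcript reveals
# nothing about w because s is masked by the uniformly random r.
assert pow(g, s, p) == (t * pow(y, e, p)) % p
```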
A particularly interesting concept is that of a SNARK, i.e., a succinct non-interactive argument of knowledge. By this term, we denote a proof system which is non-interactive (it does not require multiple rounds of communication), complete (all correctly generated proofs are accepted), succinct (the size of the proof is linear in the security parameter), and knowledge-sound (for any prover able to produce a valid proof, there is an algorithm capable of extracting a witness for the statement). Recently, in two companion papers [BISW17; BISW18], Boneh et al. provided the first designated-verifier SNARK constructions based on lattice assumptions.
In [GMNO18] we build zero-knowledge SNARKs from lattice assumptions for square span programs (SSPs), which are a way to characterize NP introduced in [DFGK14]. Compared with previous works, our result achieves zero-knowledge, relies on weaker assumptions, and is simpler and more efficient. Also, for a reasonable set of parameters, we achieve post-quantum security. On the other hand, the size of our proofs is considerably larger than in previous works. Moreover, our construction is designated-verifier, meaning that the verification procedure requires access to some secret piece of information, as opposed to publicly verifiable, where the verification procedure can be run based solely on public information.

Organization of the manuscript

This manuscript is organized as follows: Chapter 2 introduces the notation used in the manuscript, and gives some definitions and general notions; Chapter 3 presents fully homomorphic encryption in detail, and constitutes a sort of survey of the area; Chapter 4 addresses the problem of homomorphic evaluation of deep neural networks, and presents an efficient framework that allows one to evaluate an already-trained predictive model on encrypted inputs; Chapter 5 considers the other side of the coin for outsourced computations, and examines the problem of circuit privacy, resulting in a new and more efficient way of avoiding undesired leakage from the result of homomorphic computations; Chapter 6 presents the design and the concrete implementation of a protocol based on FHE for private information retrieval; finally, Chapter 7 draws some conclusions and outlines several questions that remain open and that can be the topic of research extending this manuscript.

Table of contents:

1 Introduction 
1.1 Computation outsourcing
1.1.1 Homomorphic encryption
1.2 FHE in the user-server scenario
1.3 User and server: different problems for different players
1.3.1 The user’s point of view
1.3.2 The server’s point of view
1.4 Our results
1.4.1 A new framework for homomorphic evaluation of neural networks
1.4.2 A new technique for circuit privacy
1.4.3 A protocol for private information retrieval
1.5 Other contributions
1.6 Organization of the manuscript
Personal Publications
2 Preliminaries 
2.1 Notation and preliminaries
2.1.1 Mathematical notation
2.1.2 Algorithms
2.1.3 Provable security
2.2 Cryptographic primitives
2.3 Lattices
2.3.1 Basic definitions
2.3.2 Computational problems
2.3.3 Worst-case hardness
2.3.4 Gaussians
2.3.5 Short integer solution (SIS)
2.3.6 Learning with errors (LWE)
2.4 Complexity assumptions
3 Fully homomorphic encryption 
3.1 Introduction
3.2 Homomorphic encryption scheme
3.3 Bootstrapping and key-switching
3.4 Three generations of FHE
3.4.1 First generation FHE
3.4.2 Second generation FHE
3.4.3 Third generation FHE
3.4.4 Message packing
3.5 Advanced constructions
3.6 Libraries and practical implementations
3.7 FHE constructions from non-lattice assumptions
4 Homomorphic evaluation of deep neural networks 
4.1 Introduction to the problem
4.2 Refresher on neural networks
4.2.1 Basic definitions
4.2.2 Neural networks’ layers
4.2.3 Activation functions
4.2.4 Perceptrons, multilayer perceptrons, and deep NNs
4.2.5 Training and evaluating neural networks
4.2.6 MNIST: a typical dataset for NNs
4.3 State of the art for privacy-preserving predictions
4.4 TFHE: a framework for efficient bootstrapping
4.4.1 LWE over the torus and related constructions
4.4.2 External product and bootstrapping procedure
4.5 Our contributions
4.5.1 Definition of a discretized neural network
4.5.2 Simple conversion from a traditional NN to a DiNN
4.5.3 Homomorphic evaluation of a DiNN
4.5.4 Refinements of TFHE
4.5.5 Experimental results
4.5.6 Comparison with Cryptonets [DGL+16]
5 Circuit privacy for homomorphic computations 
5.1 Introduction
5.1.1 Our results
5.1.2 Technical overview
5.2 Additional preliminaries
5.2.1 Randomized G−1 (·) algorithm
5.2.2 Probability results
5.2.3 Results on lattices and Gaussian distributions
5.2.4 Entropy and leftover hash lemma
5.2.5 Permutation branching programs
5.3 Core randomization lemma
5.3.1 Proof of randomization lemma
5.3.2 Rerandomizing LWE samples
5.4 Our scheme: circuit-private homomorphic evaluation for GSW
5.4.1 Rerandomizing and scaling GSW ciphertexts
5.4.2 Circuit privacy: definition and main theorem
5.4.3 Modified Eval algorithm for the GSW encryption scheme
5.4.4 Setting the parameters
5.4.5 Extension to arbitrary moduli and trapdoor matrices
5.5 Discussions
6 Private information retrieval through homomorphic encryption
6.1 Introduction
6.1.1 Private information retrieval
6.1.2 Oblivious transfer
6.1.3 Our contributions
6.2 Our protocol
6.3 The DGHV encryption scheme and its extension
6.4 Implementing our protocol
6.4.1 How to choose the random polynomials for conjunction queries
6.4.2 Handling the “false positives”
6.4.3 Concrete parameters and benchmarks
6.5 Discussions
7 Conclusions and open questions 
7.1 Conclusions
7.2 Open questions
7.2.1 Homomorphic evaluation of neural networks
7.2.2 Circuit privacy
7.2.3 Private information retrieval
Notation
Abbreviations
List of Illustrations
Figures
Tables
Bibliography 
