Homomorphic evaluation of support vector machine 

Historical context and motivations

Although already used in ancient times, cryptology – etymologically, the science of secrets – only recently earned its status as a science, with its theoretical foundations laid out by Shannon at the end of the 1940s ([Sha49]) and the advent of public-key cryptography in the 1970s. It can be split into two main areas of study which are opposite in nature but complementary: cryptography and cryptanalysis.
The first is about protecting communications from unauthorized third parties, which requires four properties: confidentiality, authentication, data integrity and non-repudiation. Confidentiality guarantees that only authorized people can access the messages. The purpose of authentication is to certify the identity of the participants, while data integrity is required to ensure that messages were not corrupted by any unauthorized third party. Finally, non-repudiation prevents participants in the communication from falsely denying their involvement. In practice, the participants of a communication need to possess a secret, also called a key, to ensure these properties.
On the other hand, cryptanalysis aims at retrieving information from a secure communication in order to corrupt one, or several, of the previous four properties, without necessarily knowing the key.
Historically, cryptography was mainly about ensuring the confidentiality of messages. The other properties are nonetheless integral to modern cryptography, particularly in the current internet era. Confidentiality of a message is ensured by converting it into an unintelligible text, called a ciphertext, using a key. From there, the message can only be understood by people with knowledge of the key.
During wartime, it is essential to ensure that encrypted messages remain undecipherable to the adversary in the event that they are intercepted. This is why, historically, most progress in encryption techniques and cryptanalysis appeared during conflicts. Among the famous classical encryption methods we can mention the scytale used by the Spartans during the Peloponnesian War, and the Caesar cipher used by the Romans during the Gallic Wars.
More recently, during the two major conflicts of the twentieth century, cryptology was decisive. In 1917, British intelligence intercepted and decrypted a telegram sent by the German foreign secretary, Arthur Zimmermann, to the German ambassador in Mexico, in which Germany offered an alliance to Mexico. The terms of this alliance specified that Mexico should invade the southern United States in order to prevent the Americans from taking action in Europe in case the United States dropped their neutrality. Once the United States became aware of the content of this telegram, they declared war on Germany, which sped up the end of the conflict. During World War II, the Enigma machine, used by the Germans to encipher their communications, was the first mechanical cipher machine in history. Its cryptanalysis by Alan Turing's team in 1940 itself required the construction of some of the first computing machines in history. Once Enigma was broken, the Allies had a significant advantage for the rest of the conflict. Cryptology had just entered its modern era.
Until the second half of the twentieth century, the same key was used to encrypt and decrypt messages: this setting is referred to as symmetric cryptography. However, symmetric cryptography suffers from a major drawback, since it requires that the parties wishing to communicate exchange the cipher key before the communication starts. The problem is therefore to exchange this key in a secure way. This was addressed in the seventies, when W. Diffie and M. Hellman proposed the concept of asymmetric cryptography, also called public key cryptography ([DH76]). One should probably mention that it was also proposed, at roughly the same time, by R. Merkle, even though his work is less well known ([Mer78]).
The principle of public key cryptography is that the person who needs to decrypt messages generates two keys. The first one must remain secret while the second one is made public. The public key is used to encrypt messages by anyone willing to communicate with the owner of the secret key. However, the owner of the secret key is the only one able to decrypt the messages encrypted with the corresponding public key. Of course the two keys are related, but it is ensured that one cannot easily retrieve the secret key from the public key without solving a difficult computational problem.
An analogy for thinking about public key cryptography is a mailbox. Everyone can easily put a letter in a mailbox; however, it is hard to get the letters which are inside the box if you do not have its key. Only the owner of the mailbox can easily take out the letters inside it.
In the context of cryptology, “easy” and “difficult” must be understood in terms of computational complexity. “Easy” means that there exists an algorithm which can solve any instance of the problem in polynomial time, while “difficult” means that it is not easy. While for security reasons it must be difficult to retrieve the secret key from the public key, for efficiency reasons deriving the public key from the secret key should be easy.
As a consequence, public key cryptography relies on mathematical problems which are easy in one direction but difficult in the other. A good example of such a problem is the multiplication/factorization of integers. Multiplying two integers is quite easy, for instance it is not hard to find that 41 × 37 = 1517; however, it is much harder to find out that 2021 is actually the product of 43 and 47.
It is precisely on this problem that, in 1978, R. Rivest, A. Shamir and L. Adleman based their encryption/signature scheme, called RSA, which was the first concrete public key construction in history ([RSA78]). A noteworthy property of RSA is that when multiplying two ciphertexts, encrypting two different messages, one obtains a ciphertext corresponding to the product of the messages. This property, which allows messages to be processed directly through their ciphertexts without having to decrypt them beforehand, is called the homomorphic property.
From there arose the question of whether or not it is possible to construct an encryption scheme generalizing the homomorphic property of RSA. Such an encryption scheme should allow performing not only multiplications, but any kind of computation on messages directly through their ciphertexts ([RAD78]). This would later be referred to as Fully Homomorphic Encryption (FHE).
Whether or not it was possible to construct such a scheme remained open for many years. Knowing that any continuous function can be approximated over a compact domain by a polynomial, in practice the question can be reduced to being able to evaluate polynomials homomorphically. This only requires being able to perform both additions and multiplications homomorphically. Several encryption schemes allowing, like RSA, to perform one of the two operations were built in the ensuing years: for instance ElGamal for multiplication ([ElG85]) and Paillier for addition ([Pai99]).
The first significant step was made in 2005 by D. Boneh, E. Goh and K. Nissim, who constructed the first encryption scheme possessing both homomorphic properties ([BGN05]). Nevertheless, their construction suffers from a major drawback, since it only allows one homomorphic multiplication to be performed. In a major breakthrough, C. Gentry resolved the problem in 2009 by constructing the first fully homomorphic scheme, i.e. one which allows an arbitrary number of additions and multiplications to be performed homomorphically, thereby solving a thirty-year-old problem ([Gen09]).
His construction, based on Euclidean lattices, works in two steps. The first step is the construction of a scheme allowing a limited number of additions and multiplications to be performed. Such restricted schemes, referred to as Somewhat Homomorphic Encryption (SHE), are called “noisy”. In a nutshell, ciphertexts of SHE schemes contain a noise which grows after each homomorphic operation. Unfortunately, one is only able to decrypt the message correctly as long as the noise remains small enough, which is the reason why one can only perform a limited number of operations. The second step of Gentry's construction is a procedure to “refresh” ciphertexts, i.e. to reduce the size of the noise inherent to a ciphertext. This procedure, called “bootstrapping”, roughly consists in running the decryption function homomorphically. At the end of the procedure, we obtain a new ciphertext, encrypting the same message, whose inherent noise has the same size as if one had evaluated the decryption circuit on a “fresh” ciphertext. Noise growth after a multiplication is more significant than after an addition. Therefore, if the SHE scheme allows one multiplication in addition to the evaluation of its decryption procedure, then the scheme is said to be “bootstrappable”, which means that it can be transformed into a fully homomorphic scheme.
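To fix ideas, the following toy noise-budget model (a purely illustrative sketch, not the behaviour of any actual scheme; the thresholds and growth rules are invented for the example) mimics how the noise evolves under homomorphic operations and why bootstrapping restores the budget.

    # Toy noise-budget model: all constants are illustrative assumptions.
    q = 2**30
    threshold = q // 4            # decryption assumed correct while noise < threshold
    fresh_noise = 2**8            # noise of a freshly encrypted ciphertext
    refreshed_noise = 2**16       # noise after homomorphically running decryption

    def add_noise(n1, n2):
        return n1 + n2            # additions grow the noise slowly

    def mul_noise(n1, n2):
        return n1 * n2            # multiplications grow it much faster

    noise = mul_noise(fresh_noise, fresh_noise)   # 2^16: still decryptable
    noise = mul_noise(noise, fresh_noise)         # 2^24: still decryptable
    noise = mul_noise(noise, fresh_noise)         # 2^32 > threshold: decryption fails
    noise = refreshed_noise                       # "bootstrapping" resets the budget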
Nowadays, the potential impact of such schemes on privacy is considerable, in particular with the rise of cloud computing. FHE offers a concrete solution to different problems appearing in cryptography, such as secure multiparty/outsourced computation or e-voting. Unfortunately, although theoretically correct, the parameters required by Gentry's first construction make it unusable in practice.
Hence, in the years following Gentry's breakthrough, the cryptographic community devoted important efforts to improving this first construction in order to make it implementable. This was finally accomplished in 2011 by C. Gentry and S. Halevi ([GH11]). This first implementation quickly became outdated with the so-called second-generation schemes which appeared in the following years. We mention for instance: Brakerski-Gentry-Vaikuntanathan (BGV) ([BGV12]), Brakerski's scale-invariant construction ([Bra12]) and Gentry-Sahai-Waters (GSW) ([GSW13]). The better performance of these schemes is directly related to the underlying problem on which their security is based: Learning With Errors (LWE).
The LWE problem, introduced in 2005 by O. Regev in his seminal work ([Reg05]), is currently the main structure used for constructing homomorphic encryption schemes. Informally, this problem consists in solving a “noisy” linear system. More precisely, let us consider a matrix A and a vector s, both with integer coefficients lying in a large centered interval [-q/2, q/2[. Knowing the matrix A, the “decisional” version of the problem consists in distinguishing whether a vector b has its coefficients sampled uniformly in [-q/2, q/2[ or whether it is such that b = As + e (mod q) for a vector e with “small” coefficients. Equivalently, the search version of the problem consists in finding the vector s from the pair (A, b = As + e (mod q)). Note the importance of the noise/error vector e, without which this problem would reduce to the resolution of a linear system.
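As a concrete illustration, the following sketch generates an LWE sample with toy parameters (n = 8, q = 97, chosen only for readability; real instantiations use much larger values).

    import numpy as np

    n, q = 8, 97
    rng = np.random.default_rng(0)

    s = rng.integers(-(q // 2), q // 2, size=n)        # secret vector s
    A = rng.integers(-(q // 2), q // 2, size=(n, n))   # public uniform matrix A
    e = rng.integers(-2, 3, size=n)                    # "small" error vector e

    b = (A @ s + e) % q                                # LWE sample (A, b = As + e mod q)

    # Without e, recovering s from (A, b) would be plain linear algebra;
    # with e, both the search and decisional versions are believed to be hard.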
Despite offering much better performance than Gentry's first construction, in practice the structure underlying the LWE problem is not completely satisfying. Indeed, in order to reach high enough security levels and to make the scheme bootstrappable, one needs to use matrices and vectors of large dimensions with large coefficients. As a consequence, the time and memory complexities required to manipulate these elements quickly become impractical.
Therefore the cryptographic community started to consider different variants of this problem, in particular its ring version called Ring-Learning With Errors (Ring-LWE), also called Learning With Errors over Rings, which is based on ideal lattices ([LPR10]). In this variant, the matrix A is replaced by an element a lying in the ring Rq = (Z/qZ)[X]/(f), where f ∈ Z[X] is a monic irreducible polynomial of degree n. The vectors s and e are now also considered as elements of Rq. The problem is similar: determine whether a pair of elements (a, b) ∈ Rq × Rq was sampled uniformly or whether there exist s and e such that b = a·s + e. The benefits brought by this variant are twofold:
- a can now be represented by n coefficients instead of n^2 for a matrix, which reduces the memory complexity by a factor n;
- instead of performing matrix-vector products (asymptotic time complexity O(n^2)), one now performs products of elements of Rq, which can be done more efficiently (asymptotic time complexity O(n log n)), especially when f(X) = X^n + 1 with n a power of two.
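The sketch below (toy parameters again, with schoolbook multiplication for clarity; an NTT would reach the quoted O(n log n)) builds a Ring-LWE sample in Rq = (Z/qZ)[X]/(X^n + 1); the function name is illustrative.

    import numpy as np

    def ring_mul(a, b, q):
        """Product of two elements of (Z/qZ)[X]/(X^n + 1), coefficient arrays of length n."""
        n = len(a)
        res = np.zeros(n, dtype=np.int64)
        for i in range(n):
            for j in range(n):
                k = i + j
                if k < n:
                    res[k] += a[i] * b[j]
                else:
                    res[k - n] -= a[i] * b[j]   # X^n = -1, so fold back with a sign flip
        return res % q

    n, q = 8, 97
    rng = np.random.default_rng(1)
    a = rng.integers(0, q, size=n)              # uniform element of Rq
    s = rng.integers(0, 2, size=n)              # small secret
    e = rng.integers(-2, 3, size=n)             # small error
    b = (ring_mul(a, s, q) + e) % q             # Ring-LWE sample (a, b = a*s + e)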
Homomorphic encryption schemes based on the LWE problem can be adapted quite naturally to Ring-LWE, which enhances their performance significantly. The BGV scheme was directly designed to fit both problems, Brakerski's scale-invariant construction was adapted to Ring-LWE by J. Fan and F. Vercauteren and named FV after them ([FV12]), and the adaptation of GSW was named SHIELD ([KGV16]).
Both the LWE and Ring-LWE problems reduce, under some conditions, to hard problems on Euclidean and ideal lattices respectively. So far no algorithm solving Ring-LWE more efficiently than LWE is known, and in practice concrete cryptanalysis of a Ring-LWE instantiation is currently carried out by constructing an LWE instantiation from it. In this case the coefficients of the element a ∈ Rq form the first column of the matrix A, and the other columns are derived from the first one by exploiting the structure of the polynomial f.
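For the illustrative case f(X) = X^n + 1, this column-by-column construction can be sketched as follows (a hypothetical helper, not code from the thesis): column j holds the coefficients of a(X)·X^j mod (X^n + 1, q), i.e. a negacyclic rotation of the first column.

    import numpy as np

    def negacyclic_matrix(a, q):
        n = len(a)
        A = np.zeros((n, n), dtype=np.int64)
        col = np.array(a, dtype=np.int64)
        for j in range(n):
            A[:, j] = col % q
            col = np.roll(col, 1)      # multiply by X ...
            col[0] = -col[0]           # ... and apply X^n = -1
        return A

    a = [3, 1, 4, 1]                   # toy element of Rq for n = 4
    A = negacyclic_matrix(a, 97)
    # With this A, the Ring-LWE relation b = a*s + e becomes the LWE relation
    # b = A s + e on coefficient vectors, which is what concrete cryptanalysis uses.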
Nonetheless, the benefits brought by Ring-LWE are not without potential consequences. Indeed, there are strong suspicions that the additional structure appearing in ideal lattices, which is brought by the polynomial f, could be exploited to solve problems on this kind of lattices more efficiently. Hence, although it is not yet known whether this structure can be exploited or not, people tend to believe that Ring-LWE is less hard than LWE.
In practice, Gentry's bootstrapping is the most cumbersome procedure, hence one should try to avoid using it as much as possible. One way of avoiding it is to select the parameters of an SHE scheme with a targeted application in mind, so that the scheme allows enough operations to evaluate the required functions homomorphically without bootstrapping ([BGV12]).
In a noteworthy work, L. Ducas and D. Micciancio built a scheme called FHEW, based on the Ring-LWE problem, in which the bootstrapping procedure can be performed very efficiently – in less than one second ([DM15]). By considering a variant of Ring-LWE over the real torus, FHEW was improved a few years later by Chillotti et al. (TFHE), reaching an execution time of 0.1 second ([CGGI16]). In their setting, bootstrapping is performed after the evaluation of each logic gate, so that one can homomorphically evaluate as many gates as desired, thereby achieving FHE with very competitive performance.
It is probably important to mention that after years of research and development in academic laboratories, the quantum computer is no longer a fanciful dream. Its impending arrival will confront modern cryptology with the first revolution of its young history.
Indeed, with such a computer, the problems on which the security of current cryptosystems is based (factorization, discrete logarithm) can be solved in polynomial time ([Sho97]). As a direct consequence, all the cryptographic schemes whose security is based on these problems will no longer be safe. Therefore it becomes urgent to develop new constructions based on problems which cannot be solved more efficiently with a quantum computer. The recent call of the National Institute of Standards and Technology (NIST) to lay down the foundations of the future standards of post-quantum cryptography goes in this direction.
In view of this, problems based on lattices (Euclidean or ideal) are good candidates, since so far we do not know of any quantum algorithm that solves them more efficiently than classical algorithms. This is why the medium-term security of homomorphic encryption schemes is not compromised by the quantum computer. However, the substantial time and memory costs required in practice by homomorphic schemes are still the major bottleneck to their deployment in real-life applications.

Contributions and organization of the report

The purpose of this thesis is to optimize the underlying arithmetic of several SHE schemes based on the Ring-LWE problem in order to enhance their performance. In this way, we hope to break some of the barriers which still prevent the widespread use of homomorphic encryption. To this end, we have focused on two main approaches: optimizing the arithmetic used by the primitives of SHE schemes, by considering either the specific operations of these primitives or the operations needed in the more general framework of Ring-LWE; and optimizing the operations required by a concrete and specific application of homomorphic encryption, both at the algorithmic level and in their practical implementation.
Chapter 1 introduces the different notions needed for a correct understanding of the rest of the report. We start by briefly introducing the Ring-LWE problem in the context of cyclotomic fields, highlighting its link with ideal lattices. In a second part we present two of the most widely used homomorphic encryption schemes, BGV and FV, which are both based on the Ring-LWE problem. Finally, we present the different tools used in practice to perform the operations needed by these schemes efficiently.
The first contribution of this thesis ([BEHZ17]), presented in Chapter 2, is a suite of algorithms implementing all the primitives of scale-invariant SHE schemes efficiently within the Residue Number System (RNS) representation. The RNS representation permits efficient arithmetic (additions and multiplications) on big numbers, which is critical for homomorphic encryption. Unfortunately, certain operations, such as comparison, cannot be performed directly in RNS. By overcoming these limitations, we have been able to perform all the computations required by the different procedures of scale-invariant schemes like FV entirely in the RNS representation. This has resulted in a noticeable improvement of the performance of these schemes.
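The idea behind RNS can be sketched as follows with toy moduli (the actual moduli and conversion routines of the full-RNS variant are those of Chapter 2, not shown here): a big integer is split into residues modulo coprime machine-sized moduli, so additions and multiplications become independent word-sized operations.

    from functools import reduce

    moduli = [1049, 1051, 1061]                       # pairwise coprime (toy values)
    M = reduce(lambda x, y: x * y, moduli)

    def to_rns(x):
        return [x % m for m in moduli]

    def from_rns(res):                                # CRT reconstruction
        x = 0
        for r, m in zip(res, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    a, b = 123456, 7890
    ab = [(ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), moduli)]
    assert from_rns(ab) == (a * b) % M
    # Comparison or rounding, by contrast, are not channel-wise operations,
    # which is the limitation the full-RNS procedures have to work around.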
The efficiency of the arithmetic in Rq = (Z/qZ)[X]/(f), which is the ambient space in Ring-LWE, heavily depends on f, in particular because of the reduction modulo this polynomial. This efficiency is maximal when f is a cyclotomic polynomial of the form X^n + 1, but this kind of polynomial suffers from other limitations which, depending on the targeted application, may require considering other kinds of cyclotomic polynomials. Chapter 3 presents a work in which we focused on improving the reduction modulo f when it is a cyclotomic polynomial different from X^n + 1 ([BEH+18]). Since this operation is required for each product of elements in Rq, its optimization has led to a significant improvement of the performance of BGV and FV in these cases.
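As a baseline illustration (a generic long-division sketch with Φ_9(X) = X^6 + X^3 + 1 as a toy modulus, not the optimized method of Chapter 3), every product in Rq is first reduced modulo the cyclotomic polynomial and then modulo q.

    import numpy as np

    q = 97
    f = np.array([1, 0, 0, 1, 0, 0, 1], dtype=np.int64)   # Phi_9 = X^6 + X^3 + 1 (highest degree first)

    def poly_rem(a, f):
        """Remainder of a(X) modulo the monic polynomial f(X), coefficients highest degree first."""
        a = a.copy()
        n = len(f) - 1
        while len(a) > n:
            c = a[0]
            a = a[1:].copy()
            a[:n] -= c * f[1:]          # subtract c * X^(deg a - n) * f(X)
        return a

    def mul_mod(a, b):
        prod = np.polymul(a, b)         # schoolbook product, degree up to 2(n-1)
        return poly_rem(prod, f) % q    # reduce modulo Phi_9, then modulo q

    a = np.array([2, 0, 1, 5, 0, 3], dtype=np.int64)
    b = np.array([1, 4, 0, 0, 2, 7], dtype=np.int64)
    c = mul_mod(a, b)                   # product of two toy elements of Rq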
In the fourth chapter we present a concrete use case of homomorphic encryption for the classification of private data ([BMSZ18]). This classification uses advanced machine learning techniques, in particular the Support Vector Machine (SVM). Evaluating an SVM homomorphically allows data to be classified without revealing it. We start by presenting how SVMs, which are techniques from the supervised learning domain, work. We then describe the algorithmic methods we have devised in order to perform the operations required by SVMs homomorphically. The efficiency of our methods is illustrated by several implementations on different kinds of architectures.
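For reference, the unencrypted SVM decision function being targeted looks like the following sketch; the kernel choice, the variable names and the toy values are illustrative assumptions, the actual homomorphic counterparts are described in Chapter 4.

    import numpy as np

    def svm_decision(x, support_vectors, dual_coefs, intercept, degree=2):
        # Polynomial kernel k(u, v) = (u.v + 1)^degree: only additions and
        # multiplications are needed, which is what makes evaluation with an
        # SHE scheme plausible.
        kernels = (support_vectors @ x + 1.0) ** degree
        return np.sign(dual_coefs @ kernels + intercept)

    # Example with two support vectors in dimension 3 (all values arbitrary).
    sv = np.array([[0.5, -1.0, 2.0], [1.5, 0.0, -0.5]])
    alpha_y = np.array([0.7, -0.7])
    print(svm_decision(np.array([1.0, 0.0, 1.0]), sv, alpha_y, intercept=0.1))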
Finally, in a concluding chapter, we summarize the results of this thesis together with directions for possible future work.

Lattices and ideal lattices

Definition 1.1.21. Let n ≥ 1 be an integer. A lattice L is a discrete additive subgroup of R^n endowed with a norm ‖·‖.
Discrete means that, for a given norm, there exists an n-dimensional ball centered at zero that does not contain any vector of the lattice L except 0. Therefore there exists a non-zero vector u ∈ L which minimizes this norm. This vector is not unique (−u also satisfies this condition) and is called a shortest vector of L. One may notice that if a vector v ∈ L is linearly dependent with u, i.e. such that v = λu, then λ ∈ Z. Indeed, otherwise the non-zero vector v − ⌊λ⌉u ∈ L, where ⌊·⌉ denotes rounding to the nearest integer, would have a smaller norm than u, which would contradict the definition of u. From there we can build, by induction on the degrees of freedom of L, a family of linearly independent vectors that generate L.
Definition 1.1.22. A basis B of a lattice L is a set of d ≤ n linearly independent vectors {b1, …, bd} such that any element of L can be written as a linear combination, with integer coefficients, of elements of B. The lattice generated by a basis B is denoted L(B).

Ring-Learning with Errors problem

As a direct consequence, a lattice of rank d can be seen as a free Z-module of rank d. We can represent it by a matrix B ∈ M_{n,d}(R) formed by the column vectors of the basis. One may notice that any non-trivial unimodular transformation (a matrix with integer coefficients whose determinant is ±1) transforms a basis into another one. Hence a lattice of rank d ≥ 2 has an infinite number of bases, all of the same cardinality. The common cardinality of the bases is called the rank of the lattice, while a full-rank lattice refers to a lattice of rank n. From now on, n = φ(m) will denote the dimension of Km as a Q-vector space.
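The unimodular change-of-basis fact can be illustrated numerically as follows (a toy two-dimensional example with arbitrarily chosen matrices): two bases related by a unimodular matrix generate the same lattice.

    import numpy as np

    B = np.array([[2, 1],
                  [0, 3]])                 # columns b1, b2 of a rank-2 basis
    U = np.array([[1, 4],
                  [1, 5]])                 # unimodular: integer entries, det = 1
    B2 = B @ U                             # another basis of the same lattice

    assert round(np.linalg.det(U)) == 1
    # Any lattice point B @ z (z an integer vector) is also B2 @ (U^{-1} z),
    # and U^{-1} has integer entries, so both bases generate the same point set.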
Definition 1.1.23. A geometric embedding σ : Km ↪ R^n is an injective morphism of additive groups.
Through a geometric embedding, any fractional ideal can be mapped into R^n and can therefore be identified with a full-rank lattice (Remark 1.1.18). However, these lattices inherit some additional structure from the absorption property of their generating integral ideal; in order to differentiate them from other lattices, they are called ideal lattices.
Definition 1.1.24. Let σ be a geometric embedding. Then any fractional ideal I ⊆ Km can be identified, through σ, with a full-rank lattice σ(I) ⊆ R^n, and this lattice is called an ideal lattice.
Remark 1.1.25. Depending on the geometric embedding, the lattice will not be the same. Therefore, one should always specify through which embedding the ideal lattice is considered.
The most intuitive geometric embedding is probably the coefficient embedding, which represents an element a = Σ a_i ζ_m^i ∈ Z[ζ_m] through its coordinates in the power basis, (a_0, …, a_{n−1}) ∈ Z^n. Unfortunately, despite its relative simplicity, the coefficient embedding is not well suited from an algebraic point of view, since it does not preserve multiplication and thus the structure of ideals. The canonical embedding from algebraic number theory, on the other hand, allows multiplication to be performed coordinate-wise and thus preserves this structure. Although the two embeddings are always related by a linear transformation of R^n, the multiplicative property of the canonical embedding makes it better suited for the study of ideal lattices and enables the security reduction of the Ring-LWE problem ([LPR10]), and thus its theoretical foundation.
Each element σ_i ∈ G = Gal(Km/Q) defines an embedding from Km to C given by ζ_m ↦ ζ_m^i for i ∈ (Z/mZ)*. Since any polynomial with real coefficients which vanishes at a complex number x also vanishes at its complex conjugate x̄, the complex embeddings of a number field always come in pairs. Let s_1 be the number of real embeddings and s_2 be the number of pairs of complex embeddings; we have s_1 + 2s_2 = n. We define the space H ⊆ R^{s_1} × C^{2s_2} hereafter:
Remark 1.1.28. General number fields may have real embeddings, corresponding to the real roots of their minimal polynomial, but in our case the minimal polynomial of ζ_m has no real root as soon as m > 2. Since the cases m = 1, 2 are of little interest from a cryptographic point of view, we only consider the case m > 2; thus Km has only complex embeddings and s_1 = 0.
The attentive reader may have noticed that our definition of the canonical embedding does not match the definition of a geometric embedding as stated in Definition 1.1.23, since it embeds into C^n. Elements of H have half of their coordinates determined by complex conjugation; therefore H has dimension n/2 as a C-vector space and thus dimension n as an R-vector space. In order to work with real lattices, we need to separate the real and imaginary parts of the canonical embedding to pull it back to R^n. In the case of cyclotomic fields with m > 2, this can be done through the linear transformation whose matrix A is given below:
Figures 1 and 2 illustrate, on a small example, how different the lattices generated by the coefficient and canonical embeddings are. One can see for instance that one is sparser than the other and that their numbers of short vectors differ. The interested reader can find a more detailed study of the relations between these two embeddings in [Bat15].
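A small numeric comparison of the two embeddings can be sketched as follows for m = 5 (so n = φ(5) = 4); the element a is arbitrary and the variable names are illustrative.

    import numpy as np

    m, n = 5, 4
    a = np.array([1, 2, 0, 1])                        # a = 1 + 2*zeta + zeta^3

    coeff_embedding = a                               # coordinates in the power basis

    units = [i for i in range(1, m) if np.gcd(i, m) == 1]
    roots = [np.exp(2j * np.pi * i / m) for i in units]
    canonical_embedding = np.array(
        [sum(a[k] * r**k for k in range(n)) for r in roots]
    )
    # Under the canonical embedding each coordinate is an evaluation of a at a
    # primitive m-th root of unity, so multiplication of ring elements becomes
    # coordinate-wise; this does not hold for the coefficient embedding.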

Theoretical hardness of Ring-Learning With Errors

Defining the Ring-LWE problem in a formal way would involve notions and considerations on the shape of the error distribution which are beyond the scope of this thesis. For the sake of simplicity, we introduce the Ring-LWE problem and highlight its hardness in an informal way. For further details on the rigorous definition of Ring-LWE, the reader should refer to [LPR10]. We start by introducing the Shortest (Independent) Vector Problem (S(I)VP) in the context of ideal lattices. Unless stated otherwise, from now on we only consider ideal lattices through the canonical embedding.
For a given norm, the length of a shortest vector in a lattice L is called the minimum distance of the lattice and is denoted λ_1(L). This notion can be extended to define the sequence of successive minima. The k-th minimum of the lattice L is defined as the smallest positive real number λ_k(L) such that there exist k linearly independent vectors of L, each of norm at most λ_k(L).
Definition 1.1.31 (SVP and SIVP). Let Km be the m-th cyclotomic number field endowed with a norm ‖·‖ and let γ ≥ 1. The Km-SVP_γ problem in the given norm is: given a fractional ideal I in Km, find some non-zero x ∈ I such that ‖x‖ ≤ γ·λ_1(I). The Km-SIVP_γ problem is defined similarly, where the goal is to find n linearly independent elements in I whose norms are all at most γ·λ_n(I).
Definition 1.1.36 (Ring-LWE, Search). Let q ≥ 2 be an integer, s ∈ Rq and err be a distribution over R. The search version of the Ring-LWE problem is defined as follows: given access to a polynomial number of independent samples from the Ring-LWE distribution A_{s,q,err}, find s.
The error term e is supposed to be sampled according to a discrete n-dimensional centered spherical Gaussian distribution in the embedding space H, with each coordinate sampled independently ([LPR10], [CIV16]). A noteworthy case is when m is a power of two, in which case the geometry of the corresponding cyclotomic field allows the error term e to be sampled, as a scaled centered spherical Gaussian, directly in R. Two main approaches can be highlighted to transfer the error into R for a general m. In [LPR13] the authors explain how to invert the canonical embedding map in quasi-linear time, thereby allowing the error term to be transferred from H to R. The authors of [DD12] proposed an alternative method to generate the error and proved that it does not notably affect the difficulty of the problems. Their approach consists in sampling the error according to a centered spherical Gaussian in Q[X]/(Θ_m(X)), where Θ_m(X) = X^{m/2} + 1 if m is even and Θ_m(X) = X^m − 1 otherwise. From there, the error is reduced to Q[X]/(Φ_m(X)) with a polynomial reduction, and then its coordinates are rounded coefficient-wise to bring it into R.
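In the power-of-two case this amounts to something as simple as the following sketch (m = 16, so n = 8; σ = 3.2 is an illustrative value, not a recommended parameter).

    import numpy as np

    n, sigma = 8, 3.2                      # n = m/2 for m = 16, a power of two
    rng = np.random.default_rng(2)

    # Centered spherical Gaussian on the coefficients, rounded to integers:
    e = np.rint(rng.normal(0.0, sigma, size=n)).astype(np.int64)

    # e can then be used directly as the error term of a Ring-LWE sample
    # b = a*s + e in (Z/qZ)[X]/(X^n + 1), as sketched earlier.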

Practical considerations

The conditions of the reduction of Ring-LWE to S(I)VP ensure asymptotic security; however, in practice we need an idea of the concrete hardness of an instantiation in order to estimate its security. Concrete cryptanalysis of lattice-based cryptography is beyond the scope of this thesis. However, having a basic understanding of the relation between the parameters and the security of a Ring-LWE instantiation is essential to avoid gross errors in parameter selection. We recall that so far it is unknown whether or not the additional structure brought by the defining polynomial in Ring-LWE can be exploited for cryptanalysis. Therefore, the security estimations for Ring-LWE are made through the analysis of an LWE instance.

Table of contents:

Introduction (in French)
1 Historical context and motivations
2 Purpose and organization of the thesis
Introduction
1 Historical context and motivations
1 Preliminaries
1.1 Ring-Learning with Errors problem
1.2 Somewhat Homomorphic Encryption based on Ring-LWE
1.3 Classical arithmetic in cyclotomic rings
2 Full RNS scale-invariant schemes
2.1 Towards a full RNS decryption
2.2 Towards a full RNS homomorphic multiplication
2.3 Last step: the relinearization
2.4 Software implementation
2.5 Conclusion
3 Polynomial arithmetic in general cyclotomic rings
3.1 Preliminaries
3.2 Improving polynomial reduction modulo Φ_m
3.3 Adaptation of BGV and FV to the Montgomery representation
3.4 Experimental results
3.5 Conclusion
4 Homomorphic evaluation of support vector machine 
4.1 Preliminaries
4.2 Novel homomorphic techniques for SVM classification
4.3 Implementation details
4.4 Experimental results
4.5 Conclusion
