## Notation and Preliminaries

**Mathematical Notions**

Integers, Sets, and Modular Arithmetic. We denote by Z the set of integers and by N the set of non-negative integers. If S is a set, then |S | denotes its size.

For two sets D and R, we denote by Fun(D, R) the set of all functions f: D → R with domain D and range R.

For a positive integer N, we denote by (ZN, +) the additive group of integers modulo N and by (ZN, +, •) the ring of integers modulo N; we often abuse notation and simply write ZN for either. Furthermore, we denote by (Z∗N, •), or simply Z∗N, the multiplicative group of invertible elements of the ring ZN. For an integer x ∈ Z, we denote by x mod N the remainder of the Euclidean division of x by N, which can be viewed either as an integer in {0, . . . , N − 1} or as an element of ZN; we thus identify elements of ZN with integers in the set {0, . . . , N − 1}. We recall that additions and multiplications over ZN can be performed efficiently (in time polynomial in the size of N).
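The operations above are directly available in Python; the following is a minimal illustration with a hypothetical modulus N = 35 (the values are ours, not from the text):

```python
from math import gcd

N = 35  # hypothetical composite modulus, for illustration only

# Addition and multiplication mod N are efficient (polynomial in the size of N).
a, b = 29, 12
assert (a + b) % N == 6
assert (a * b) % N == 33

# Z*_N consists exactly of the elements coprime to N; |Z*_35| = phi(35) = 24.
units = [x for x in range(1, N) if gcd(x, N) == 1]
assert len(units) == 24

# Inverses in Z*_N via three-argument pow (Python >= 3.8): 12 * 3 = 36 = 1 mod 35.
inv = pow(12, -1, N)
assert (12 * inv) % N == 1
```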

In this work, N is often a prime integer, then denoted p. Note that, for any prime number p, the ring (Zp, +, •) is in fact a finite field, so that Z∗p = Zp \ {0}.

Cyclic Groups. We make extensive use of cyclic groups in this thesis. We recall that a cyclic group is a finite group generated by a single element; in particular, a cyclic group is commutative (or Abelian). Throughout this manuscript, we denote by a tuple (p, G, g) a multiplicative cyclic group of prime order p generated by an element g, so that G = ⟨g⟩ = {1, g, . . . , g^(p−1)}, where 1 denotes the identity element (corresponding to g^0). We often abuse this notation by simply writing G when the order and the generator are clear from the context. We denote by [x]g, or simply [x] when the generator is clear from the context, the element g^x, for any x ∈ Zp; in particular g = [1]g.

We also assume that the multiplication over G is an efficient operation, so that given g and x ∈ Zp, one can efficiently compute [x]g.
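As a toy instantiation, one can take the prime-order subgroup of Z∗q for a small safe prime q; the parameters below (q = 23, p = 11, g = 4) are illustrative choices of ours, far too small for any real security:

```python
q = 23   # safe prime: q = 2*11 + 1 (toy value)
p = 11   # prime order of the subgroup
g = 4    # 4 generates the order-11 subgroup of Z*_23

# [x]_g = g^x is efficiently computable by square-and-multiply, i.e. pow().
def bracket(x: int) -> int:
    return pow(g, x, q)

# G = <g> = {1, g, ..., g^(p-1)} has exactly p elements.
G = {bracket(x) for x in range(p)}
assert len(G) == p
assert bracket(0) == 1 and bracket(1) == g
# Exponents live in Z_p: g^x only depends on x mod p.
assert bracket(p + 3) == bracket(3)
```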

Linear Algebra. For two vector spaces D and R, we denote by L(D, R) the vector space of linear functions from D to R. Similarly, if D1, . . . , Dn denote n vector spaces, then L(D1 ⊗ · · · ⊗ Dn, R) is the vector space of n-linear functions from D1 × · · · × Dn to R. In this thesis, vector spaces are implicitly supposed to be Zp-vector spaces.

Vectors and Matrices. Vectors are denoted with an arrow (e.g., v⃗) and matrices by bold capital letters (e.g., A). For a vector v⃗, |v⃗| denotes its length and vi denotes its i-th component, so v⃗ = (vi)i=1,...,|v⃗|. For a vector v⃗ ∈ D^n and a function f: D → R, we denote by f(v⃗) the vector (f(vi))i=1,...,n ∈ R^n. Conversely, for a vector of functions f⃗ = (fi)i=1,...,n, we denote by f⃗(x) the vector (f1(x), . . . , fn(x)). For two vectors u⃗, v⃗ of the same length n over a ring R, we denote by ⟨u⃗, v⃗⟩ = ∑_{i=1}^{n} ui • vi their inner product. For a matrix A, we denote by ai,j its entry in the i-th row and j-th column. We sometimes combine both notations by defining vectors of matrices (e.g., A⃗ = (A1, . . . , A|A⃗|)).

For a cyclic group (p, G, g) and a vector a⃗ ∈ Zp^n, we denote by [a⃗]g the vector of group elements ([a1]g, . . . , [an]g) ∈ G^n. Similarly, for a matrix A ∈ Zp^(n×m), we denote by [A]g the matrix of group elements ([ai,j]g)_{i=1,...,n; j=1,...,m} ∈ G^(n×m). As already stated above, we often abuse these notations by not specifying the generator g when it is clear from the context.

Polynomials. We denote by Zp[T1, . . . , Tn] the vector space (and ring) of multivariate polynomials in indeterminates T1, . . . , Tn over Zp, and by Zp[T1, . . . , Tn]≤d its subspace containing only polynomials whose degree in each indeterminate is at most d. We sometimes denote by P(T⃗) the polynomial P(T1, . . . , Tn) ∈ Zp[T1, . . . , Tn], and by P(a⃗) its evaluation at a⃗, obtained by setting T⃗ to a⃗, meaning that we set T1 = a1, . . . , Tn = an.

Assignment. If S is a set, x ←$ S indicates that x is taken uniformly at random from the set S (independently of everything else). We also write x, y ←$ S to indicate that x and y are chosen independently and uniformly at random from S. We often write that an element is “picked at random” to mean “picked independently and uniformly at random”.

**Algorithmic Concepts**

Bitstrings. Binary strings are denoted with lowercase letters (e.g., x). We denote by {0, 1}∗ the set of all bitstrings and by {0, 1}^n the set of all bitstrings of length n. For a binary string x, we denote its length by |x|, so x ∈ {0, 1}^|x|, and by xi its i-th bit, so x = x1 ∥ · · · ∥ x|x|. The exclusive or (XOR) of two bitstrings x and y of the same length is denoted x ⊕ y.
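As a minimal sketch of the XOR notation, bitstrings can be modeled as strings of '0'/'1' characters (an illustrative representation, not the thesis's formalism):

```python
def xor(x: str, y: str) -> str:
    """Bitwise XOR of two bitstrings of the same length."""
    assert len(x) == len(y), "x XOR y is only defined when |x| = |y|"
    return "".join("1" if a != b else "0" for a, b in zip(x, y))

assert xor("1010", "0110") == "1100"
assert xor("1010", "1010") == "0000"              # x XOR x is all-zero
assert xor(xor("1010", "0110"), "0110") == "1010" # XOR is self-inverse
```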

Algorithms and Efficiency. For simplicity, we consider our algorithms as probabilistic Turing machines; that is, we implicitly assume that they can use an additional tape containing random bits (also referred to as random coins). In the rest of this thesis, the term PPT algorithm stands for Probabilistic Polynomial-Time algorithm, and we say that an algorithm is efficient if it is a PPT algorithm.

For an algorithm A, we write y ←$ A(x) to denote running algorithm A on input x with fresh random coins and letting y denote its output. If A is deterministic, we simply write y ← A(x).

**Provable Security**

Negligibility. Let ε be a function from N to [0, 1]. We say that ε is negligible (or that 1 − ε is overwhelming) if for any constant c ∈ N, there exists η ∈ N such that for any κ ≥ η, ε(κ) ≤ 1/κ^c.

Security Parameter. Like most cryptosystems, our constructions only achieve a computational notion of security and can actually be broken with a powerful enough computer. However, it is widely accepted that if 2^128 elementary operations are required to break a cryptosystem with high probability, then this cryptosystem can be considered secure. We say that such a cryptosystem provides 128 bits of security.
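The negligibility condition can be illustrated numerically: the (archetypal negligible) function ε(κ) = 2^(−κ) eventually drops below 1/κ^c for any fixed c. The function and parameter choices below are ours, purely for illustration:

```python
# Check ε(κ) = 2^-κ against the bound 1/κ^c on a window of values of κ
# starting at a candidate threshold η.
def below_inverse_poly(c: int, eta: int, window: int = 100) -> bool:
    return all(2.0 ** (-k) <= k ** (-c) for k in range(eta, eta + window))

# For c = 3, the threshold η = 32 already works: 2^-32 < 32^-3 = 2^-15,
# and the exponential keeps shrinking much faster than the polynomial.
assert below_inverse_poly(c=3, eta=32)
# For small κ the bound can fail, which is why the definition only
# requires it beyond some η: 2^-2 = 0.25 > 1/2^3.
assert not below_inverse_poly(c=3, eta=2, window=1)
```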

Of course, as computers become ever more powerful, performing 2^128 elementary operations might become feasible in the future. We therefore use the notion of security parameter to formalize the security of our constructions. Informally speaking, the security parameter is an integer κ ∈ N that is (sometimes implicitly) fed in unary to all the algorithms of a cryptosystem, such that these algorithms run in time polynomial in κ and such that a specific instantiation of the cryptosystem with security parameter κ provides κ bits of security.

Adversaries. Adversaries are probabilistic Turing machines and are denoted by calligraphic letters (e.g., A, D). Adversaries implicitly take as input a unary representation of the security parameter. We consider two types of adversaries: on the one hand, PPT adversaries, which run in polynomial time (in particular, in time polynomial in the security parameter); on the other hand, unbounded adversaries.

Experiments, Games, Oracles. We often define our security notions or assumptions using experiments, parametrized by the security parameter κ, during which an adversary is called one or several times with various inputs. The adversary may also be provided access to oracles, which are Turing machines; however, the running time of an adversary does not depend on the running time of the oracles, and a query to an oracle counts as a single operation.

Then, an experiment can be seen as a game between an adversary A and an implicit challenger, which provides its input to the adversary as well as some oracle access. This game has an Initialize procedure, procedures to respond to adversary oracle queries, and a Finalize procedure. When the Finalize procedure is not explicitly defined, it is implicitly defined as the procedure that simply outputs its input. To execute a game G with an adversary A, we proceed as follows. First, Initialize is executed and its outputs become the input of A. When A executes, its oracle queries are answered by the corresponding procedures of G. When A terminates, its outputs become the input of Finalize. The output of the latter, denoted G^A, is called the output of the game, and we let “G^A ⇒ 1” denote the event that this output takes the value 1. By convention, the running time of an adversary is the worst-case time of its execution with any of the games defining its security, so that the time of the called game procedures is included.
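The execution flow Initialize → oracle queries → Finalize can be sketched as follows; this is a simplified skeleton of ours (with a hypothetical bit-guessing game as the example), not the thesis's formal machine model:

```python
import secrets

def run_game(game: dict, adversary) -> int:
    """Execute game G with adversary A; return the game output G^A."""
    state, adv_input = game["Initialize"]()
    # Oracle queries are answered by the game's procedures; each query
    # counts as a single operation for the adversary.
    oracles = {name: (lambda q, f=f: f(state, q))
               for name, f in game["Oracles"].items()}
    adv_output = adversary(adv_input, oracles)
    # Finalize defaults to the identity when not explicitly defined.
    finalize = game.get("Finalize", lambda state, out: out)
    return finalize(state, adv_output)

# Toy game: the challenger samples a secret bit; an oracle leaks it; the
# adversary wins (G^A => 1) iff its output equals the bit.
game = {
    "Initialize": lambda: ({"b": secrets.randbelow(2)}, None),
    "Oracles": {"Leak": lambda state, _q: state["b"]},
    "Finalize": lambda state, guess: int(guess == state["b"]),
}
trivial_adversary = lambda _inp, oracles: oracles["Leak"](None)
assert run_game(game, trivial_adversary) == 1
```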

Advantage. The advantage of an adversary A in an experiment Exp is the probability that this adversary outputs 1 in this experiment: Adv^Exp(A, κ) := Pr[Exp(A, κ) = 1].

Similarly, we define the advantage of an adversary D in distinguishing two experiments Exp0 and Exp1 as: Adv^Exp(D, κ) := Pr[Exp1(D, κ) = 1] − Pr[Exp0(D, κ) = 1].

The advantage depends on the security parameter κ. For simplicity, κ is often implicit.

Assumptions, Security, and Indistinguishability. To define an assumption or a security notion, we then define a problem as an experiment, or as the task of distinguishing two experiments. We say that the problem is hard if no PPT algorithm can solve it.

We say that an assumption (or security) computationally holds if the above advantage is negligible for any PPT adversary. We further say that it statistically holds if the above advantage is negligible for any (even unbounded) adversary. We finally say that it perfectly holds if the above advantage is 0 for any (even unbounded) adversary.

Similarly, we say that two games are computationally indistinguishable if the advantage of any PPT adversary in distinguishing them is negligible. We say that two games are statistically indistinguishable (resp. perfectly indistinguishable, or equivalent) if the advantage of any (even unbounded) adversary in distinguishing them is negligible (resp. 0).

Hybrid Arguments. Most of our security proofs are proofs by games (also called hybrid arguments) as formalized by Shoup and others [Sho01; KR01; BR06]: to bound an advantage in some experiment corresponding to some security notion, we construct a sequence of games.

The first game G0 is the experiment itself, while the last game corresponds to some security notion or is such that the adversary simply cannot win. Furthermore, we prove that any two consecutive games are indistinguishable, either perfectly, statistically, or computationally; in other words, we bound the difference of advantages between them by a negligible quantity.

Similarly, to bound an advantage of an adversary in distinguishing two experiments, we construct a sequence of indistinguishable games starting with the first experiment and ending with the second experiment.

### Classical Computational Assumptions

**The Discrete Logarithm Problem**

For cryptographic purposes, we require our cyclic groups (p, G, g) to be such that the reverse operation of exponentiation is hard. This operation, called the discrete logarithm operation, consists, given an element g ∈ G and an element h = g^x ∈ G, in computing the scalar x ∈ Zp such that h = g^x. The scalar x is called the discrete logarithm of h in the basis g.
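In a toy group the discrete logarithm can be recovered by exhaustive search over the p possible exponents, which is exactly why p must be large for the problem to be hard. The parameters below are the same illustrative toy values as above, not a secure instantiation:

```python
# Brute-force discrete logarithm: given h = g^x in <g>, recover x.
# Takes O(p) group operations, hence infeasible for cryptographic p.
def discrete_log_bruteforce(h: int, g: int, p: int, q: int) -> int:
    acc = 1
    for x in range(p):
        if acc == h:
            return x
        acc = (acc * g) % q
    raise ValueError("h is not in <g>")

q, p, g = 23, 11, 4          # toy parameters (illustrative only)
h = pow(g, 7, q)
assert discrete_log_bruteforce(h, g, p, q) == 7
```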

**Classical Discrete-Logarithm-Based Assumptions**

**Search Assumptions**

We define two assumptions, termed the discrete logarithm and the strong discrete logarithm assumptions, that respectively correspond to the hardness of the following search problems.

Discrete Logarithm. We define the advantage of an adversary A against the discrete logarithm (DL) problem in G as: Adv^dl_G(A) := Pr[DL^A_G ⇒ 1], where the probability is over the choices of a ∈ Zp, g ∈ G, and the random coins used by the adversary, and where DL_G is described in Figure 2.1.

Strong Discrete Logarithm. For d ≥ 2, we define the advantage of an adversary A against the d-strong discrete logarithm (d-SDL) problem in G as: Adv^d-sdl_G(A) := Pr[d-SDL^A_G ⇒ 1], where the probability is over the choices of a ∈ Zp, g ∈ G, and the random coins used by the adversary, and where d-SDL_G is described in Figure 2.1.

**Decisional Assumptions**

We first define two assumptions, termed the Decisional Diffie-Hellman and the d-Decisional Diffie-Hellman Inversion assumptions, that respectively correspond to the hardness of the following decisional problems.

DDH. The advantage of an adversary D against the DDH problem in G is defined as: Adv^ddh_G(D) := Pr[DDHReal^D_G ⇒ 1] − Pr[DDHRand^D_G ⇒ 1],

where the probabilities are over the choices of a, z ∈ Zp, g ∈ G, and the random coins used by the adversary, and where DDHRealG and DDHRandG are described in Figure 2.2.
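As a minimal sketch, the standard formulation of the two DDH distributions can be sampled as follows (this follows the textbook presentation with exponents a, b; the exact experiments of Figure 2.2 may differ in detail, and the toy parameters are ours):

```python
import secrets

q, p, g = 23, 11, 4   # toy group parameters; real groups use large primes

def ddh_sample(real: bool):
    """Sample from DDHReal (g^a, g^b, g^ab) or DDHRand (g^a, g^b, g^z)."""
    a = secrets.randbelow(p)
    b = secrets.randbelow(p)
    z = (a * b) % p if real else secrets.randbelow(p)
    return pow(g, a, q), pow(g, b, q), pow(g, z, q)

# Both distributions output triples of elements of G = <g>. In this toy
# group they are trivially distinguishable by brute-forcing discrete logs;
# DDH is only plausible in groups where that is infeasible.
G = {pow(g, x, q) for x in range(p)}
assert all(t in G for t in ddh_sample(real=True))
assert all(t in G for t in ddh_sample(real=False))
```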

DDHI. For d ≥ 1, the advantage of an adversary D against the d-DDHI problem in G is defined as: Adv^d-ddhi_G(D) := Pr[d-DDHIReal^D_G ⇒ 1] − Pr[d-DDHIRand^D_G ⇒ 1], where the probabilities are over the choices of a, z ∈ Zp, g ∈ G, and the random coins used by the adversary, and where d-DDHIReal_G and d-DDHIRand_G are described in Figure 2.2.

We also define the k-Linear assumption as the hardness of the following decisional problem. The particular case k = 2 is usually referred to as the Decisional Linear (DLin) assumption.

k-Lin. For k ≥ 2, the advantage of an adversary D against the k-Lin problem in G is defined as: Adv^k-lin_G(D) := Pr[k-LinReal^D_G ⇒ 1] − Pr[k-LinRand^D_G ⇒ 1], where the probabilities are over the choices of a, z ∈ Zp, g ∈ G, and the random coins used by the adversary, and where k-LinReal_G and k-LinRand_G are described in Figure 2.3.

#### Building DDH-Hard Groups

It is widely accepted that one can build groups in which the DDH assumption holds (and thus, also the DL assumption) from elliptic curves [Mil86; Kob87; Ber06].

Specifically, for a security parameter κ, it is believed that in “reasonably well-chosen” elliptic curves of prime order p, the only way to compute discrete logarithms is to use generic algorithms (such as Shanks’s baby-step giant-step algorithm [Sha71], which runs in time O(√p)). Therefore, with p a 2κ-bit prime number, one can find an elliptic curve of order p that provides κ bits of security for the discrete logarithm problem. Note that, using point compression, elements of such a group can be represented using about 2κ + 1 bits.
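The generic baby-step giant-step algorithm mentioned above can be sketched in a few lines; it trades O(√p) memory for O(√p) time instead of the O(p) of exhaustive search. Again the parameters are toy values of ours:

```python
from math import isqrt

def bsgs(h: int, g: int, p: int, q: int) -> int:
    """Shanks's baby-step giant-step: find x with g^x = h mod q,
    where g has prime order p, in O(sqrt(p)) time and memory."""
    m = isqrt(p) + 1
    # Baby steps: tabulate g^j for j in [0, m).
    table, gj = {}, 1
    for j in range(m):
        table.setdefault(gj, j)
        gj = (gj * g) % q
    # Giant steps: compare h * (g^-m)^i against the table.
    g_inv_m = pow(g, -m, q)          # modular inverse via pow (Python >= 3.8)
    gamma = h
    for i in range(m):
        if gamma in table:
            return (i * m + table[gamma]) % p
        gamma = (gamma * g_inv_m) % q
    raise ValueError("h is not in <g>")

q, p, g = 23, 11, 4                  # toy parameters (illustrative only)
assert bsgs(pow(g, 9, q), g, p, q) == 9
```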

In the sequel, we do not expand further on how groups in which our assumptions hold can be generated. We only assume that this can be done efficiently (in polynomial time) and that there exist small (polynomial-size) representations of the group and its elements. This is sufficient for our purposes.

**Cryptographic Primitives**

**Collision-Resistant Hash Functions**

**Definition 2.3.1** (Collision-Resistant Hash Function). A collision-resistant hash function is defined by a tuple of two polynomial-time algorithms H = (H.Setup, H.Eval) such that:

• H.Setup(1κ) outputs some public parameters pp;

• H.Evalpp(m) is a function that depends on pp, takes as input a bitstring m ∈ {0, 1}∗, and deterministically outputs a bitstring y ∈ {0, 1}^n(κ) called the hash value of m. The size of the output is determined by some polynomial n(•).

Furthermore, we require that no PPT adversary can find two inputs m0 ≠ m1 that have the same hash value, except with negligible probability. Formally, we define the advantage of an adversary A in attacking the collision-resistance security of H as: Adv^cr_H(A) := Pr[Exp^cr(A, κ) = 1], where Exp^cr is defined in Figure 2.4. We say that H is a collision-resistant hash function if the advantage of any PPT adversary in attacking the collision-resistance security of H is negligible.

To ease reading, we often abuse notation by simply writing that H is a collision-resistant hash function and letting H(•) stand for H.Evalpp(•), where the public parameters pp are implicitly sampled in advance as pp ←$ H.Setup(1κ) for the given security parameter.
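The (Setup, Eval) syntax above can be sketched with SHA-256; the random public prefix playing the role of pp is an illustrative choice of ours mirroring the syntax of Definition 2.3.1, not the thesis's construction (SHA-256 itself takes no public parameters):

```python
import hashlib
import os

def H_Setup(kappa: int = 128) -> bytes:
    """Sample public parameters pp (here: a random public prefix)."""
    return os.urandom(kappa // 8)

def H_Eval(pp: bytes, m: bytes) -> bytes:
    """Deterministic, fixed-output-length hash of m under pp."""
    return hashlib.sha256(pp + m).digest()

pp = H_Setup()
y = H_Eval(pp, b"hello")
assert y == H_Eval(pp, b"hello")   # Eval is deterministic
assert len(y) == 32                # here n(kappa) = 256 bits
assert y != H_Eval(pp, b"world")   # distinct inputs collide only negligibly
```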

**Pseudorandom Functions**

**Definition**

**Definition 2.3.2** (Pseudorandom Function). A pseudorandom function is defined by a tuple of two polynomial-time algorithms F = (F.Setup, F.Eval) such that:

• F.Setup(1κ) outputs some public parameters pp;

• F.Evalpp(•, •) is a function that depends on pp and takes as input a secret key K ∈ K and an input x ∈ D and deterministically outputs a value y = F.Evalpp(K, x) ∈ R.

Furthermore, we define the advantage of an adversary D in attacking the pseudorandom function security of F as: Adv^prf_F(D) := Pr[PRFReal^D_F ⇒ 1] − Pr[PRFRand^D_F ⇒ 1], where PRFReal_F and PRFRand_F are described in Figure 2.4. We say that F is a pseudorandom function if the advantage of any PPT adversary in attacking the pseudorandom function security of F is negligible.
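The (Setup, Eval) syntax can likewise be sketched with HMAC-SHA256, a standard PRF candidate; this instantiation is an illustrative choice of ours, not one of the thesis's constructions (here pp is empty, as HMAC needs no public parameters):

```python
import hashlib
import hmac
import os

def F_Setup(kappa: int = 128):
    """No public parameters are needed for this instantiation."""
    return None

def F_Eval(pp, K: bytes, x: bytes) -> bytes:
    """Deterministic keyed evaluation: y = F.Eval_pp(K, x)."""
    return hmac.new(K, x, hashlib.sha256).digest()

K = os.urandom(16)                    # secret key K from the key space
y = F_Eval(None, K, b"input")
assert y == F_Eval(None, K, b"input") # deterministic in (K, x)
assert y != F_Eval(None, K, b"other") # distinct inputs give distinct outputs w.h.p.
```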

**Table of contents:**

Abstract

**1 Introduction **

1.1 Pseudorandom Functions

1.1.1 From One-Way Functions to Pseudorandom Functions

1.1.2 Number-Theoretic Constructions

1.1.3 Extensions of Pseudorandom Functions

1.2 Our Contributions

1.2.1 Extension and Correction of the Bellare-Cash Framework

1.2.2 Algebraic Framework for Pseudorandom Functions

1.2.3 Assumptions

1.2.4 Related-Key Secure Pseudorandom Function for XOR Relations

1.3 Other Contributions

1.3.1 Order-Revealing Encryption

1.3.2 Private Circuits

1.3.3 Functional Encryption and Obfuscation

1.4 Organization

**2 Preliminaries **

2.1 Notation and Preliminaries

2.1.1 Mathematical Notions

2.1.2 Algorithmic Concepts

2.1.3 Provable Security

2.2 Classical Computational Assumptions

2.2.1 The Discrete Logarithm Problem

2.2.2 Classical Discrete-Logarithm-Based Assumptions

2.2.3 Building DDH-Hard Groups

2.3 Cryptographic Primitives

2.3.1 Collision-Resistant Hash Functions

2.3.2 Pseudorandom Functions

2.3.3 Aggregate Pseudorandom Functions

2.3.4 Multilinear Pseudorandom Functions

2.4 Multilinear Maps and the Generic Multilinear Map Model

2.4.1 Multilinear Maps

2.4.2 Generic Multilinear Map Model

**3 Introduction to Related-Key Security **

3.1 Definition and Security Model

3.2 First Impossibility and Feasibility Results

3.2.1 Impossibility Results

3.2.2 Feasibility Results

3.3 The Central Role of Pseudorandom Functions

3.4 The Bellare-Cash Framework, Revisited

3.4.1 Additional Notions

3.4.2 Dealing with Key-Collisions

3.4.3 The (Extended) Framework

3.5 Application: Related-Key Security for Affine Relations

3.5.1 Ingredients

3.5.2 Putting Everything Together

3.6 Further Generalization of the Bellare-Cash Framework

3.6.1 Relaxing the Requirements of the Framework

3.6.2 From Malleability to Unique-Input-Related-Key Security

3.7 Application: Related-Key Security for Affine Relations

**4 A New Family of Assumptions **

4.1 The Matrix-Decisional-Diffie-Hellman Assumptions

4.2 A New Family of Matrix-Diffie-Hellman Assumptions

4.3 Connexion with Standard Assumptions

4.3.1 Summary of Relations

4.3.2 Relation with the DDHI Assumption

4.4 Security in the Generic Multilinear Group Model

4.4.1 Definitions: Monomial Order and Leading Commutative Monomials

4.4.2 Main Lemma

4.4.3 Putting Everything Together

**5 An Algebraic Framework for Pseudorandomness **

5.1 Intuition and Subtleties

5.1.1 Intuition

5.1.2 Procedure for Testing Linear Dependence

5.1.3 Extension to Weaker Assumptions

5.1.4 On the Representation of Multivariate Polynomials

5.2 Formal Security Notion and Main Result

5.2.1 Formal Definition of the Polynomial Linear Pseudorandomness Security

5.2.2 The PLP Theorem

5.2.3 Immediate Corollary: the LIP Theorem

5.3 Proof of Theorem 5.2.2

5.3.1 Decomposition Lemmata

5.3.2 The Main Proof

**6 Applications **

6.1 Applications to Standard Pseudorandom Functions

6.1.1 Extended Number-Theoretic Pseudorandom Functions

6.1.2 Simple Proofs of Security

6.2 Applications to Related-Key Security

6.2.1 Direct Constructions

6.2.2 Constructions From Unique-Input-Related-Key Security, Algebraically

6.2.3 Other Applications to Related-Key Security

6.2.4 Extension to Weaker Assumptions

6.2.5 A Further Generalization of the Framework

6.3 Applications to Aggregate Pseudorandom Functions

6.3.1 Read-Once Formula Aggregation

6.3.2 Impossibility Results

6.3.3 Extension to Weaker Assumptions

6.4 Applications to Multilinear Pseudorandom Functions

6.4.1 Cohen-Holmgren Construction

6.4.2 Symmetric Multilinear Pseudorandom Functions

6.4.3 Skew-Symmetric Multilinear Pseudorandom Function

6.4.4 Extension to Weaker Assumptions

**7 A Provably-Secure Pseudorandom Function For XOR-Relations **

7.1 Additional Material

7.1.1 Related-Key Security for XOR Relations

7.1.2 Multilinear Maps

7.1.3 Straddling Sets

7.2 Our Construction

7.2.1 Intuition

7.2.2 Actual Construction

7.3 Security in the Generic Multilinear Map Model

7.4 Security under Non-Interactive Assumptions

7.4.1 Two Non-Interactive Assumptions

7.4.2 Security of our Construction

7.5 Security of the XY-DDH Assumption in the Generic Multilinear Map Model

7.6 Security of the Sel-Prod Assumption in the Generic Multilinear Map Model

7.6.1 Proof Ingredient: Index Sets and Profiles

7.6.2 Security of the Sel-Prod Assumption

**8 Conclusion and Open Questions **

8.1 Conclusion

8.2 Open Questions

Notation

Abbreviations

List of Illustrations

Figures

Tables

**Bibliography **