
## The Role of Cryptography

Human beings have always valued their privacy and security, the protection of their personal sphere of life. Even if people unconsciously expose themselves to many risks in their online activities, they will always claim their right to privacy.

Alan Westin argued that new technologies alter the balance between privacy and disclosure; he defined privacy as "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others" [Wes67].

We certainly do not want our personal information to be accessible to just anyone at any time, our outsourced data to be altered without us noticing, or our identities to be stolen over the Internet. Meanwhile, it would be preferable to be able to enjoy the benefits of using the latest technologies. Cryptography addresses these concerns and more by designing systems that help us keep our information secret from unintended receivers, prove our identity to digital interlocutors, and communicate securely in the presence of untrusted parties.

By using cryptography, we can dream of regaining the comfort and privacy we seem to lose as an information-intensive society.

## History of Cryptography

Cryptography has a long and fascinating history that begins with the invention of writing. In its early years, the focus was mainly on protecting the confidentiality of information and ensuring secrecy in communications, such as those of spies, military leaders, and lovers. In recent decades, the field has expanded beyond confidentiality to include techniques for message integrity checking, sender/receiver identity authentication, digital signatures, interactive and non-interactive proofs of knowledge, and secure computation, among others.

Early Ages. The earliest forms of secret writing required little more than pen and paper (or their local analogues), since most people could not read. Such simple ciphers never offered much confidentiality against enterprising opponents. An early substitution method was the Caesar cipher, in which each letter of the plaintext is replaced by the letter some fixed number of positions further down the alphabet. It is named after Julius Caesar, who is reported to have used it with a shift of 3 to communicate with his generals during his military campaigns.
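The shift rule described above can be sketched in a few lines of Python; this is only an illustration of the historical cipher, not a secure construction.

```python
def caesar_encrypt(plaintext: str, shift: int = 3) -> str:
    """Replace each letter by the one `shift` positions further down the alphabet."""
    result = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # wrap around the 26-letter alphabet
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return ''.join(result)

def caesar_decrypt(ciphertext: str, shift: int = 3) -> str:
    # decryption is simply a shift in the opposite direction
    return caesar_encrypt(ciphertext, -shift)
```

With Caesar's reported shift of 3, `caesar_encrypt("ATTACK AT DAWN")` yields `"DWWDFN DW GDZQ"`, and decrypting with the same shift recovers the plaintext.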

These methods of secure communication and authentication relied only on ad-hoc constructions, and most of the time on keeping the design of the system secret. Later, the industrial revolution and the advent of machines changed both the possible functionalities and the theoretical treatment of cryptosystems.

Military Cryptography. In 1883, Auguste Kerckhoffs wrote two journal articles on La Cryptographie Militaire [Ker83], in which he stated six design principles for military ciphers. Translated from French, they are:

1. The system must be practical, if not mathematically indecipherable;

2. It should not require secrecy, and it should not be a problem if it falls into enemy hands;

3. It must be possible to communicate and remember the key without using written notes, and correspondents must be able to change or modify it at will;

4. It must be applicable to telegraph communications;

5. It must be portable, and should not require several persons to handle or operate;

6. Lastly, given the circumstances in which it is to be used, the system must be easy to use and should not be stressful to use or require its users to know and comply with a long list of rules.

Some are no longer relevant given the ability of computers to perform complex encryption, but his second axiom, now known as Kerckhoffs's principle, is still critically important.

The development of digital computers and electronics after WWII made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant.

The American mathematician Claude Shannon reformulated Kerckhoffs's principle in 1949 [Sha49] as "the enemy knows the system", i.e., "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them". In that form, it is called Shannon's maxim. In contrast to security through obscurity, it is widely embraced by cryptographers.

Modern Cryptography. Extensive open academic research into cryptography is relatively recent; it began only in the mid-1970s. Modern cryptology is a research field at the intersection of mathematics, computer science and, more recently, quantum physics [HPS08]. The security of most modern cryptosystems relies on hard mathematical problems and on assumptions about the limitations of the machines designed to break them. For these cryptographic techniques, there are no absolute proofs of security, as one can always mount a brute-force attack, a method that consists of trying all possible solutions to a problem until the correct one is found. At best, there are proofs that a technique is secure provided some problem is difficult to solve, given a specific infrastructure.
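The brute-force attack mentioned above is easy to see on the Caesar cipher, whose key space has only 26 possibilities. The sketch below (uppercase letters only, with a hypothetical known word, or "crib", to recognize the plaintext) exhausts the whole key space:

```python
def brute_force_caesar(ciphertext: str, crib: str):
    """Try all 26 possible shifts; return the first (shift, plaintext)
    whose decryption contains the expected word, or None."""
    for shift in range(26):
        candidate = ''.join(
            chr((ord(c) - ord('A') - shift) % 26 + ord('A')) if c.isalpha() else c
            for c in ciphertext)
        if crib in candidate:
            return shift, candidate
    return None
```

For a modern cipher with 128-bit keys, the same loop would need on the order of 2^128 trials, which is why key sizes, not secrecy of the algorithm, carry the security.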

Modern cryptosystems were developed and formalized beginning with the work of Feistel [Fei73] at IBM in the early 1970s and culminating in 1977 with the adoption of the Data Encryption Standard (DES) as a U.S. Federal Information Processing Standard for encrypting unclassified information. DES remained the standard cryptosystem for securing electronic commerce for many financial institutions for decades, before being superseded by its successor, the Advanced Encryption Standard (AES).

Such encryption techniques are called symmetric-key (or secret-key) cryptosystems: they use the same key to encrypt a message and, later on, to decrypt the ciphertext. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely: each distinct pair of communicating parties must, ideally, share a different key. The difficulty of securely establishing a secret key between two communicating parties when no secure channel already exists between them is a considerable practical obstacle.
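The defining property, the same key both encrypts and decrypts, can be illustrated with a deliberately toy XOR cipher (XOR is its own inverse). This sketch is insecure with a repeating key and serves only to show the symmetry:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    The same call both encrypts and decrypts, since x ^ k ^ k == x.
    NOT secure in practice; illustration of shared-key symmetry only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

Alice computes `xor_cipher(message, key)` and Bob recovers the message by applying `xor_cipher` again with the same shared key, which is exactly the key-distribution burden described above.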

The most striking development in the history of cryptography, one that changed the way encryption was perceived, came in 1976, when Diffie and Hellman published "New Directions in Cryptography" [DH76]. This work introduced the revolutionary concept of public-key cryptography and also provided a new and ingenious method for key exchange, which solves the key-management problem mentioned above. The security of this key-exchange protocol is based on the difficulty of solving the discrete logarithm problem (with existing means of computation).
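The mechanics of the Diffie-Hellman exchange fit in a few lines; the sketch below uses textbook-sized parameters (p = 23, g = 5) purely for readability, whereas real deployments use primes of at least 2048 bits:

```python
import secrets

# Toy Diffie-Hellman sketch; parameters are far too small to be secure.
p, g = 23, 5                      # public: a small prime and a generator
a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent

A = pow(g, a, p)                  # Alice sends A = g^a mod p over the open channel
B = pow(g, b, p)                  # Bob sends B = g^b mod p

# Each side combines the other's public value with its own secret;
# both obtain g^(ab) mod p without ever transmitting a or b.
assert pow(B, a, p) == pow(A, b, p)
```

An eavesdropper sees only p, g, A and B; recovering a from A = g^a mod p is the discrete logarithm problem on which the protocol's security rests.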

Although the authors had no practical realization of a public-key encryption scheme at the time, the idea was clear, and it generated extensive interest and activity in the cryptographic community.

Public-key Cryptography. The historian David Kahn, author of the famous book "The Codebreakers" [Kah67], described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".

In public-key cryptography, each party has a pair of keys: a public one and a private (or secret) one. The public one can be published, e.g., on the Internet, and allows anyone to encrypt a message, that can only be decrypted with the corresponding private key. In order to explain this concept, a simple analogy is often used: the public key corresponds to an open lock, whereas the private key corresponds to the lock’s key. Publishing the public key is equivalent to making the open lock available; then anyone can write a message, put it in a box, and close the box with the provided lock. The sealed box is then sent to the recipient, who can open it with the appropriate key.

In 1978 Rivest, Shamir, and Adleman discovered the first practical public-key encryption and signature scheme, now referred to as RSA [RSA78].

The RSA scheme is based on another hard mathematical problem: the intractability of factoring large integers. This application of a hard mathematical problem to cryptography revitalized efforts to find more efficient methods of factoring. The 1980s saw significant advances in this area, but none that rendered the RSA system insecure. Another class of powerful and practical public-key schemes was found by ElGamal in 1984 [ElG84]; these are also based on a problem assumed intractable, the discrete logarithm problem.
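The link between RSA and factoring is visible even in a toy instance. The sketch below uses tiny primes (real keys use primes of roughly 1024 bits or more) and the textbook parameters p = 61, q = 53:

```python
# Toy RSA sketch; the primes are far too small to be secure.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient of n (3120)
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

m = 65                     # a message, encoded as an integer smaller than n
c = pow(m, e, n)           # encryption: c = m^e mod n
assert pow(c, d, n) == m   # decryption with d recovers m
```

Anyone who can factor n back into p and q can recompute phi and hence d, which is why RSA's security rests on the hardness of factoring.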

One of the most significant contributions of public-key cryptography is the digital signature: the digital analogue of a handwritten signature or stamped seal, intended to solve the problem of tampering and impersonation in digital communications. Digital signatures can provide assurances of the origin, identity and status of an electronic document, transaction or message.

In 1991, the first international standard for digital signatures (ISO/IEC 9796) was adopted. It is based on the RSA public-key scheme. In 1994 the U.S. Government adopted the Digital Signature Standard, a mechanism based on the ElGamal public-key scheme. The search for new public-key schemes, improvements to existing cryptographic mechanisms, and proofs of security continue at a rapid pace.

Nowadays, security products are being developed to address the security needs of an information intensive society. Cryptography has become a widely used tool in communications, computer networks, and in computer security generally.

Future of Cryptography. As well as appreciating the fascinating history of the field, the cryptographic community must also sensibly consider hypothetical future developments when designing its systems. For instance, continuous improvements in computer processing power have increased the efficacy of brute-force attacks, so the required key sizes keep growing accordingly. Some cryptographic system designers are already considering the potential effects of quantum computing; the announced imminence of practical implementations of these quantum machines justifies this preemptive caution.

### The Main Characters

To describe the functionality of protocols, the roles of the different parties and the interactions between them, a few fictional characters and scenarios are needed. Alice and Bob are the world's most famous cryptographic couple. Since their invention in 1978 by Ron Rivest, Adi Shamir, and Leonard Adleman in their paper "A method for obtaining digital signatures and public-key cryptosystems" [RSA78], they have become familiar archetypes in the cryptography field. Throughout this thesis, various cryptographic scenarios will be imagined between Alice and Bob: the two may want to secretly exchange messages, authenticate on some platform, sign documents and exchange them, and so on. Moreover, one thing to remember about Alice is that she is an Internet addict who uses various Cloud services: storage for her personal data, outsourced computation, remote software, etc. Occasionally a third party will appear in the story, named Charlie or Oscar. Other remarkable characters in this crypto journey are the villains: Eve is the passive eavesdropper, Mallory the active malicious attacker. Often, an adversary will interfere and complicate the lives of Alice and Bob.

Mallory. A malicious attacker. Unlike passive Eve, Mallory is an active attacker who can modify messages, substitute messages, or replay old messages. Securing a system against Mallory is much harder than securing it against Eve.

Oscar. An opponent, similar to Mallory, but not necessarily malicious. He is not trustworthy; nevertheless, data is delegated to him, and he performs tasks for other parties, such as large computations.

Arthur and Merlin. Arthur asks questions and Merlin provides answers. Merlin has unbounded computational ability (like the wizard Merlin). In interactive proof systems, Merlin claims the truth of a statement, and Arthur (like King Arthur), questions him to verify the claim.

SNARK. "For the Snark's a peculiar creature, that won't
Be caught in a commonplace way.
Do all that you know, and try all that you don't:
Not a chance must be wasted to-day!"

Lewis Carroll

### The Main Security Goals

Cryptographic primitives and protocols are designed to maintain a desired set of security goals even under attempts at making them deviate from the expected functionality. The most common security requirements arising from applications are:

Privacy/Confidentiality. This is the main idea people associate with the term cryptography: keeping information secret from all but those who are authorized to see it. Confidentiality is the protection of transmitted data from passive attacks, preventing the disclosure of information to unauthorized parties. In a nutshell, a cryptographic scheme or protocol between Alice and Bob achieves confidentiality if Alice is able to send messages to Bob in such a way that only Bob can read them, while Eve cannot see the actual content of their communication. Encryption is the main cryptographic primitive providing confidentiality.

Authentication. The process of proving one's identity. This property can refer to both data and user authentication. In the case of user authentication, this functionality ensures that Alice is who she claims to be. For message authentication, the goal is to provide additional information that guarantees to Bob that Alice originated the message he received; in particular, no undesired third party, such as Mallory, should be able to impersonate Alice. Digital Signatures (DS) and Message Authentication Codes (MAC) are the two cryptographic primitives that guarantee data authentication.
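As a concrete sketch of message authentication, Python's standard library provides HMAC-SHA256; the key and message below are hypothetical placeholders:

```python
import hmac
import hashlib

# Minimal MAC sketch: Alice tags a message with a key shared with Bob.
key = b"key shared by Alice and Bob"   # hypothetical shared secret
message = b"pay Bob 100 euros"

tag = hmac.new(key, message, hashlib.sha256).digest()  # Alice attaches this tag

# Bob recomputes the tag with the shared key; a constant-time
# comparison guards against timing attacks.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())

# Without the key, Mallory cannot produce a tag that verifies:
forged = hmac.new(b"wrong key", message, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged)
```

Note that a MAC convinces only the parties holding the shared key; a digital signature, in contrast, is publicly verifiable.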

Integrity. Ensuring that information has not been altered by unauthorized or unknown means. Data integrity provides assurance that data has not been modified in an unauthorized manner after it was created, transmitted or stored: no insertion, deletion or substitution has been performed on it. In a communication between Alice and Bob, one must be able to detect data manipulation by adversarial parties. Cryptographic hash functions are a fundamental building block for integrity.
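A short sketch of the hash-based integrity check, using SHA-256 from the standard library (the file contents here are a placeholder):

```python
import hashlib

# Compute a digest of the data before outsourcing it.
data = b"outsourced file contents"
digest = hashlib.sha256(data).hexdigest()

# Any modification, however small, yields a completely different digest:
tampered = b"outsourced file content!"
assert hashlib.sha256(tampered).hexdigest() != digest

# Recomputing on the unmodified data reproduces the digest exactly:
assert hashlib.sha256(data).hexdigest() == digest
```

Alice keeps the short digest locally; later, recomputing it over the retrieved data detects any insertion, deletion or substitution.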

Non-repudiation. Non-repudiation prevents either Alice or Bob from denying having sent a message. Thus, when Bob sends a message to Alice, he cannot later deny having done so, and Alice can prove that the message was indeed sent by the alleged sender. Digital signatures are the standard primitive for non-repudiation, since only the holder of the private key could have produced a valid signature.

## Outsourcing Computation

Cloud computing is a revolutionary movement in the IT industry that provides storage, computing power, networking and software as an abstraction and as a service on demand over the Internet, enabling clients to access these services remotely from anywhere, at any time, via any terminal equipment. Moreover, clients are no longer restricted to their limited CPU, storage, and bandwidth; they are empowered to use cloud resources to execute large computations.

Since the Cloud has shifted data storage and computation from personal computers to huge data centers, the security and integrity of data have become major concerns that Cloud developers must take into account. While clients are willing to outsource many tasks and resources to Cloud providers for the sake of these benefits, how can they be assured that the Cloud will operate correctly? The business providing the computing services may have a strong financial incentive to return incorrect answers, if such answers require less work and are unlikely to be detected by the clients. On the other side, some services are provided for "free" in exchange for the possibility of harvesting users' information in order to create tailored advertising campaigns, extract statistics or trade data for profit. Clients may not agree to lose control of their personal data or to divulge sensitive information to unknown parties.

Two main challenges for cryptographers arise here: finding solutions that guarantee the integrity of the results of outsourced computations and/or the confidentiality of the client's data. More importantly, while privacy and integrity are long-studied questions in cryptography, the key difference is that cryptographic solutions for the Cloud setting must preserve the benefits of Cloud computing. This means, for example, that these solutions must ensure that users do less work than the task they outsourced to the Cloud. It is also desirable to keep the work performed by the workers as close as possible to the amount of work needed to complete the original task.

These questions motivated the study of secure delegation of computation and further research on proof systems, especially on a specific class of protocols called SNARKs. SNARKs are proof systems that enable users to securely outsource computations to untrusted Cloud providers in a verifiable manner. These protocols elegantly solve the problem of limited computational power and public trust, by giving the worker a way to prove the correctness of its computations such that the client can efficiently verify the result.

The other concern that we highlighted is the privacy of data while outsourcing storage and computation. The user's data can carry sensitive information such as personal health records, financial records, stock trends, or scientific research records, to name just a few. Therefore, we would like to use cryptography, for example, to encrypt the data before outsourcing it to the Cloud server in order to maintain confidentiality. However, traditional encryption schemes would not allow the untrusted machine to perform the desired computation on the encrypted values.

More advanced techniques, called fully-homomorphic encryption, allow a worker to compute arbitrary functions over encrypted data, but they do not suffice to secure outsourced computation. Indeed, fully-homomorphic encryption provides no guarantee that the worker performed the correct computation, and it is very inefficient to combine with a general verification tool such as the above-mentioned SNARKs.

We will examine the currently known approaches and their limitations, and try to find better solutions throughout this thesis.

## Proof Systems in Cryptography

Goldreich argued in [Gol95] that the notion of proof in cryptography is somewhat different from the concept of a proof in the strict mathematical sense, and closer to the dynamic way humans establish proofs through interaction and interpretation. Mathematical proofs are stricter; they have a static and formal nature, as they are either self-evident or derived from earlier rules and axioms.

However, humans tend to use a more intuitive sense of proof, where the soundness of a statement is established through a process. In a similar sense, in a cryptographic proving protocol, instead of presenting a static proof for the claim, the prover tries to convince the verifier by exchanging messages with it, a kind of question-and-answer process.

A proof system, in the cryptographic sense, is an interactive protocol by which one party (called the prover) wishes to convince another party (called the verifier) that a given statement is true. In a zero-knowledge proof, we further require that the proof reveal nothing more than the truth of the statement. At first glance, this sounds counter-intuitive: being able to prove that something is correct without revealing any extra detail.

A very simple day-to-day example shows that it is perfectly possible:

**Example 1.4.1** (Playing card). Imagine that we pick the card A♦ from a complete deck of playing cards and want to prove to an adversary that we hold a red card. We could prove this by revealing more information than the statement requires, simply by showing our card, A♦. Alternatively, we can choose to prove nothing more than the colour of our card by revealing to the adversary all the black cards ♣, ♠ left in the deck. Our opponent is now convinced that we hold a red card, but has learned nothing else about its value.
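The counting argument behind the example can be simulated in a few lines; the card encoding below (rank plus a suit letter, C/S black, H/D red) is an ad-hoc choice for illustration:

```python
# Build a standard 52-card deck: ranks A..K in clubs, spades, hearts, diamonds.
ranks = "A23456789TJQK"
suits = ["C", "S", "H", "D"]          # C, S are black; H, D are red
deck = [r + s for s in suits for r in ranks]

secret = "AD"                          # the prover's hidden card: ace of diamonds
remaining = [card for card in deck if card != secret]

# The prover reveals every black card left in the deck:
revealed = [card for card in remaining if card[-1] in ("C", "S")]

# The verifier counts: all 26 black cards are accounted for, so the
# hidden card must be red -- yet its rank remains unknown.
assert len(revealed) == 26
```

The verifier learns exactly one bit (the card is red) and nothing about which of the 26 red cards it is, which is the zero-knowledge flavour of the example.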

Research in zero-knowledge proofs has been prompted by authentication systems in which one party wants to prove its identity to a second party via some secret information, such as a password, without disclosing anything about that secret to the second party. This is called a zero-knowledge proof of knowledge.

A proof of knowledge is an interactive protocol in which the prover succeeds in "convincing" the verifier that it knows something (a password, the steps of a computation, etc.) associated with the statement. For example, if the statement is "I am Alice", the prover should show knowledge of Alice's secret password; if the statement is "I computed the function f(x) and obtained y", then the prover must convince the verifier that it knows all the steps of the computation that lead to the result y.

What it means for a machine to have knowledge is defined formally in terms of an extractor. Since the prover's program does not necessarily output the knowledge itself (as is the case in zero-knowledge proofs), we invoke another machine, called the knowledge extractor, which, by having access to the prover, can extract this witness (the knowledge).

The next step is the introduction of non-interactive proof systems, which reduce the interaction between the prover and the verifier to a single round. Some non-interactive protocols consist of only one message from the prover to the verifier; others require some setup information, called a common reference string (CRS), that can be made publicly available ahead of time and independently of the statement to be proved later. To enforce security and prevent cheating by the verifier, this CRS is often generated by a trusted third party.

## SNARKs

In the class of non-interactive proofs, a particularly interesting concept for proving integrity of results for large computations is that of SNARK, i.e., succinct non-interactive argument of knowledge. By this term, we denote a proof system which is:

• succinct: the size of the proof is very small compared to the size of the statement or the witness, i.e., the size of the computation itself;

• non-interactive: it does not require rounds of interaction between the prover and the verifier;

• argument: it is considered secure only against provers with bounded computational resources, which means that provers with enough computational power could convince the verifier of a wrong statement;

• knowledge-sound: it is not possible for the prover to construct a proof without knowing a certain so-called witness for the statement; formally, for any prover able to produce a valid proof, there is an extractor capable of extracting a witness ("the knowledge") for the statement.

SNARK systems can be further equipped with a zero-knowledge property that enables the proof to be done without revealing anything about the intermediate steps (the witness). We will call these schemes zk-SNARKs.

A (zk-)SNARK protocol (as any other non-interactive proof system) is described by three algorithms that work as follows:

• Gen is the setup algorithm, which generates a string crs used later in the proving process and a verification key vrs, sometimes assumed to be known only to the verifier. It is typically run by a trusted party.

• Prove is the proving algorithm that takes as input the crs, the statement u and a corresponding witness w and outputs the proof π.

• Verify is the algorithm that takes as input the verification key vrs, the statement u and the proof π, and returns 1 to accept the proof or 0 to reject it.
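The three algorithms above can be summarized as an abstract interface; this is a hypothetical sketch following the names in the text (Gen, Prove, Verify, crs, vrs), with placeholder types and no actual construction behind it:

```python
class SNARK:
    """Abstract interface of a non-interactive argument system.
    The method bodies are placeholders: a concrete scheme (e.g. one
    built from a QAP) would implement all three algorithms."""

    def gen(self, security_parameter: int):
        """Trusted setup: return the pair (crs, vrs)."""
        raise NotImplementedError

    def prove(self, crs, statement, witness):
        """Produce a succinct proof pi for (statement, witness)."""
        raise NotImplementedError

    def verify(self, vrs, statement, proof) -> bool:
        """Return True (1, accept) or False (0, reject)."""
        raise NotImplementedError
```

In the delegation scenario, the client (or a trusted party) runs `gen`, the Cloud worker runs `prove` alongside the outsourced computation, and the client runs the cheap `verify`.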

SNARK schemes can be used for delegating computation in the following way: a server runs a computation for a client and non-interactively proves the accuracy of the result. The client can verify the correctness of the result in time nearly linear in the size of the input (instead of running the entire computation itself).

The goal of this thesis is to study these protocols, try to construct new eﬃcient and secure schemes and broaden their use to more applications.

## Our Contributions

In this manuscript, we actively study SNARK protocols: we propose new constructions, revisit the security analysis of existing schemes, find new applications, and combine SNARKs with other primitives to obtain further functionalities.

We expand below on the main contributions developed in this thesis and outline some of their implications. The results discussed in this manuscript have been published (or are under review) in [GMNO18] (co-authored with Rosario Gennaro, Michele Minelli, Michele Orrù), [FN16] (co-authored with Dario Fiore) and [FNP18] (co-authored with Dario Fiore and David Pointcheval).

Throughout the chapters of this manuscript, we place ourselves in the context of verifiable computation (VC): we study the setting where a party with limited storage and computation resources outsources its data to a powerful server located in the Cloud and asks it to run computation jobs while maintaining verifiable results.

**Table of contents:**

**1 Introduction**

1.1 The Role of Cryptography

1.2 History of Cryptography

1.2.1 The Main Characters

1.2.2 The Main Security Goals

1.3 Outsourcing Computation

1.4 Proof Systems in Cryptography

1.5 SNARKs

1.6 Our Contributions

1.6.1 A Survey on SNARKs

1.6.2 Post-Quantum SNARKs

1.6.3 O-SNARKs

1.6.4 Applications of O-SNARK

1.6.5 SNARKs for Data Privacy

1.6.6 Other Contributions

1.7 Organization of the manuscript

**2 Preliminaries**

2.1 Notation and Preliminaries

2.1.1 Mathematical Notions

2.1.2 Algorithms

2.2 Provable security

2.2.1 Security Reductions

2.2.2 Abstract Models of Computation

2.3 Computational Assumptions

2.3.1 Discrete-Logarithm-Based Assumptions

2.3.2 Assumptions on Bilinear Groups

2.3.3 Falsifiable vs. Non-Falsifiable Assumptions

2.3.4 Knowledge Assumptions

2.3.5 Lattice Assumptions

2.3.6 Learning With Errors

2.4 Cryptographic Primitives

2.4.1 One-Way Functions

2.4.2 Encryption Schemes

2.4.3 Homomorphic Encryption Schemes

2.4.4 Digital Signatures

2.4.5 Commitment Schemes

**3 Succinct Non-Interactive Arguments of Knowledge**

3.1 Introduction to Proofs and Arguments

3.1.1 Interactive Proofs

3.1.2 Interactive Arguments of Knowledge

3.1.3 Non-Interactive Proofs and Arguments

3.2 SNARK: Definitions

3.3 SNARK: Construction from PCP

3.3.1 Probabilistically Checkable Proofs

3.3.2 SNARKs from PCP

3.4 SNARK: Construction from QAP

3.4.1 From Circuits to Quadratic/Square Programs

3.4.2 Encoding Schemes

3.5 SNARK: Construction from LIP

3.5.1 Linear-Only Encoding Schemes

**4 Post-Quantum SNARK**

4.1 Introduction

4.2 New Framework for SNARK from SSP

4.2.1 Boolean Circuit Satisfaction Problems

4.2.2 Square Span Programs

4.2.3 Encoding Schemes

4.2.4 Improved SNARK Framework from SSP

4.3 An Encoding Scheme Based on LWE

4.3.1 Technical Challenges

4.4 Post-Quantum Designated-Verifier zk-SNARK

4.5 Assumptions

4.5.1 Assumptions Comparison

4.6 Proofs of Security

4.6.1 Zero-Knowledge

4.6.2 Knowledge Soundness

4.7 Efficiency and Concrete Parameters

4.7.1 Implementation

**5 O-SNARKs**

5.1 Introduction

5.1.1 Extraction for Proving Knowledge

5.1.2 Extraction in the Presence of Oracles

5.1.3 An Overview of Our Results

5.2 SNARKs with Auxiliary Input

5.3 SNARKs in the Presence of Oracles

5.3.1 O-SNARKs

5.3.2 Non-Adaptive O-SNARKs

5.4 On the Existence of O-SNARKs

5.4.1 O-SNARKs in the Random Oracle Model from Micali’s CS Proofs

5.4.2 Impossibility of O-SNARKs in the Standard Model

5.4.3 O-SNARKs for Signing Oracles from SNARKs

5.4.4 O-SNARKs for (Pseudo)random Oracles from SNARKs

**6 Applications of O-SNARK**

6.1 Introduction

6.2 Homomorphic Signature

6.2.1 Homomorphic Signature Scheme Definition

6.2.2 Homomorphic Signatures from O-SNARKs

6.2.3 Insecurity of HomSig[, ]

6.3 Succinct Functional Signatures

6.3.1 Definition of Functional Signatures

6.3.2 Succinct Functional Signatures from O-SNARKs

6.3.3 Construction of FS

6.3.4 Function Privacy of FS

6.4 SNARKs on Authenticated Data

6.5 Universal Signature Aggregators

6.5.1 Definition

6.5.2 Universal Signatures Aggregators from SNARKs

6.5.3 Security from O-SNARKs

6.5.4 Security for Unique Signatures from SNARKs

**7 SNARKs with Data Privacy**

7.1 Introduction

7.1.1 Ensuring Correctness of Privacy-Preserving Computation

7.1.2 Our Results

7.2 New Bivariate Computational Assumptions

7.3 SNARK for Bivariate Polynomial Evaluation

7.3.1 Knowledge Commitment for Bivariate Polynomials

7.3.2 Relations for Bivariate Polynomial Partial Evaluation

7.3.3 SNARK for Bivariate Polynomial (Partial) Evaluation

7.4 Security Analysis of our CaP − P.SNARK

7.4.1 Correctness

7.4.2 Soundness

7.5 SNARK for Simultaneous Evaluations

7.5.1 Commitment for Multiple Univariate Polynomials

7.5.2 Succinct Proof of Simultaneous Evaluations in a Point k

7.6 Proof Systems for Arithmetic Function Evaluation over Polynomial Rings

7.6.1 Relations for the Two SNARKs

7.6.2 Security Analysis

7.7 Applications to Computing on Encrypted Data

7.7.1 Verifiable Computation

7.7.2 Our VC scheme

7.7.3 Preserving Privacy of the Inputs Against the Verifier

**8 Conclusion**