Computational higher-order logics


Formalization of mathematics

Formalizing mathematical theories inside proof systems requires a lot of work, because proofs must be detailed much more than on paper, but it also increases our confidence in them. Some theorems have proofs so large, or relying on exhaustive computations so complex, that they were not accepted by the mathematical community until they had been completely formalized and checked in a proof system. Such is the case of the 4-color theorem, which states that any planar map can be colored with only 4 colors such that no two adjacent regions have the same color. Stated in the 1850s, it was only proved in 1976, and even then the proof was not universally accepted because it relied on a computer program to carry out large calculations. In 2005, the proof was formalized by Gonthier and Werner in the Coq proof assistant [Gon05, Gon08], which helped eliminate the remaining doubts about the proof. Other notable examples are the Feit–Thompson theorem [GAA+13] and the Kepler conjecture [HHM+10, Hal14]. These formalization efforts are typically accompanied by the development of mathematical libraries [GM10] that contain collections of well-thought-out definitions and lemmas that can be reused in further work.

Software verification

We can also use proof systems to verify programs and software systems, proving that they are free of bugs and that they respect their specification. This is very useful for critical systems where small errors can lead to huge economic or even human losses, for example in industrial, medical, and financial systems. Again, the advantage is that computers are much better than humans at verifying large, complex derivations. The CompCert C compiler [Ler15] is a verified C compiler that has been proved correct in Coq with respect to the C specification, from the source language all the way down to the machine code. Other notable examples include the use of Isabelle/HOL in the formal verification of the seL4 operating system microkernel [KEH+09], the use of HOL Light by Intel in the verification of the design of new chips [Har09], and the use of PVS by NASA to verify airline control systems [Muñ03, MCDB03]. Together with model checking and abstract interpretation, proof systems constitute one of the pillars of formal methods in software verification.

Universality of proofs

Proofs in mathematics are universal, but that is not always the case for proofs in proof systems. Doing proofs on a computer requires a certain level of trust, because we cannot verify most of these proofs ourselves, nor the systems in which they are done. When we write a proof in one system, we would like to be able to check it independently and to reuse it for the benefit of developments in other systems. These two aspects of universality, independent proof checking and proof interoperability, are not always achievable.
The obstacles can be many:
• The proofs are written in high-level proof scripts, partially proved by tactics or automatically proved by automated theorem provers. There is a lot of missing information that requires complex machinery to reconstruct and check.
• The system has complete proof objects but provides no way of accessing them or presenting them in a way that is readable or usable, or does not have a clear specification of their meaning or the logic behind them.
• We can retrieve the proofs but we need to translate them into the language of the other systems in order to reuse them. For n systems, we need a number of translations on the order of n², unless the translations are composable.
• The translations are inefficient and cannot be composed well.
• The logical formalisms are widely different, and even sometimes incompatible.
A solution to these problems is to translate proofs to a common framework called a logical framework, which is used to:
1. Specify the language for writing proofs and the rules of the logic in which the proofs are carried out.
2. Check the correctness of the proof, by verifying that each individual step of reasoning follows the rules of the logic.
The framework must be simple enough to be trustworthy but powerful enough to be able to express the proofs of various logics efficiently. The benefits of doing so are several:
• We develop tools to retrieve the proof objects from various systems and drive the effort of providing a better specification for these proof objects.
• We can recheck the proofs in the logical framework, thus increasing our confidence. Logical frameworks provide a venue for proof systems to produce proof objects that are relatively independent of the implementation of the system itself.
• We need a number of translations that is only linear in the number of systems, instead of being quadratic.
• It gives a level playing field in which to study logics. Expressing, or trying to express, different logics in the same framework gives us better insight on the nature of these logics, their similarities, and their differences.
• It paves the way for proof interoperability. By having the proofs of different systems in a common framework, it becomes easier to combine them to form larger theories, in the same way that libraries written in the C language can be linked with programs written in OCaml or in low-level assembly languages.
In this thesis, we contribute to the design of these translations, called logical embeddings. There are various different logics and logical frameworks, based on a variety of different formalisms. In our work, we focus on those based on type theory. In particular, we use the λΠ-calculus modulo rewriting as our logical framework and we focus on the embedding of logics that allow computational and higher-order reasoning.

Computational reasoning

Computation inside proofs can reduce the size of proofs, sometimes by drastic amounts. Indeed, if a person asserts that the statement 2 + 2 = 4 is true, then we only need to compute the value of 2 + 2 and compare it to the right-hand side. In this case, the result of 2 + 2 is 4 and therefore the statement is true. On the other hand, the statement 2 + 2 = 5 is not true, and indeed the value 4 is not equal to 5. In practice, computation allows us to express the proof of the first statement using a single step:
\[
\dfrac{4 = 4}{2 + 2 = 4}
\]
which is smaller than a non-computational proof that uses the axioms of arithmetic:
\[
\dfrac{2 + 2 = 3 + 1 \qquad \dfrac{3 + 1 = 4 + 0 \qquad 4 + 0 = 4}{3 + 1 = 4}}{2 + 2 = 4}\,.
\]
Computation in proofs is especially relevant when we deal with logics that reason about programs, where these steps correspond to program evaluation. On larger or more complex examples, the gains become significant enough that the proofs of difficult theorems become tractable. This technique, called proof by reflection [BC04], has been famously and successfully used in the proof of the 4-color theorem. To express such proofs in a logical framework, it is thus important to design embeddings that preserve computation.
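To make this concrete, here is a minimal sketch in Dedukti-style syntax; the names (Nat, plus, eq, refl) are illustrative and not taken from any particular library. Addition is defined by rewrite rules, so the statement 2 + 2 = 4 is proved by reflexivity alone, with the type checker carrying out the computation.

```
(; Natural numbers and addition defined by rewrite rules. ;)
Nat : Type.
zero : Nat.
succ : Nat -> Nat.

def plus : Nat -> Nat -> Nat.
[n]   plus zero n     --> n.
[m,n] plus (succ m) n --> succ (plus m n).

(; Equality on Nat, with reflexivity as its only proof constructor. ;)
eq : Nat -> Nat -> Type.
refl : n : Nat -> eq n n.

def two  : Nat := succ (succ zero).
def four : Nat := succ (succ (succ (succ zero))).

(; Accepted because plus two two reduces to four, so the declared type
   eq (plus two two) four is convertible to eq four four. ;)
def two_plus_two : eq (plus two two) four := refl four.
```

A proof in the style of the second derivation would instead spell out each equational step as an explicit proof term, which is exactly the size blow-up that computation avoids.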

Higher-order reasoning

Higher-order reasoning allows additional forms of quantification and adds a lot of expressive power to a logic. In first-order logic, the universal quantification ∀ ranges only over individual objects such as natural numbers. This is already enough to express a lot of mathematics, but it has its limits. For example, the induction principle [P(0) ∧ ∀n. [P(n) ⇒ P(n + 1)]] ⟹ ∀n. P(n) cannot be expressed with a finite number of axioms: it requires one axiom for every predicate P. If we allow ourselves to quantify over propositions and predicates, we can formulate it as a single axiom: ∀P. [[P(0) ∧ ∀n. [P(n) ⇒ P(n + 1)]] ⟹ ∀n. P(n)].
Similarly, it can be shown that we cannot prove 0 ≠ 1 in intuitionistic type theory and the calculus of constructions without the ability to quantify over types [Smi88]. To express such proofs in a logical framework, it is thus important to design embeddings that can deal with higher-order reasoning.
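For illustration, here is how the induction principle can be stated as a single axiom in a Dedukti-style sketch of a higher-order setting; the names (prop, prf, nat_ind) are illustrative. The quantification over the predicate P is simply a dependent product over Nat -> prop.

```
(; A type of propositions and, for each proposition, a type of its proofs. ;)
prop : Type.
prf : prop -> Type.

Nat : Type.
zero : Nat.
succ : Nat -> Nat.

(; Induction as one axiom, quantified over the predicate P : Nat -> prop. ;)
nat_ind : P : (Nat -> prop) ->
          prf (P zero) ->
          (n : Nat -> prf (P n) -> prf (P (succ n))) ->
          n : Nat -> prf (P n).
```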
While computational embeddings and higher-order embeddings have been the subject of much previous work, doing both at the same time has rarely been studied before, and indeed the two are usually incompatible in weak logical frameworks. In this thesis, we design embeddings that allow both computation and higher-order reasoning. We proceed in several stages to achieve our end-goal, which is the translation of the calculus of inductive constructions, the formalism behind the Coq proof system.

Type theory and logical frameworks

Type theory [Rus08] is an alternative to set theory that was proposed at the beginning of the 20th century as a solution to avoid the paradoxes of naive set theory exhibited by Russell. The main idea behind it is to classify mathematical objects into different classes called types (for example the type nat of natural numbers, the type nat → nat of functions, etc.), which ensure that all objects have a meaning and that the range of the universal quantification ∀ is restricted to a single type. This simple but powerful idea has found many applications in both logic and computer science.
After many simplifications, Church [Chu40] reformulated type theory using the λ-calculus, in what is known today as simple type theory (STT). The main innovations of that theory are that propositions are represented as objects of a type o, like any other type, and that we can write functions that construct these objects in the language of the λ-calculus. This means that we can quantify over propositions (by quantifying over objects of type o) and over predicates (by quantifying over functions of type nat → o), which allows higher-order reasoning. For these reasons, this simple yet expressive system is also known as higher-order logic (HOL). It has been implemented in many proof assistants, including HOL4, HOL Light, ProofPower, and Isabelle.
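The following Dedukti-style sketch, with illustrative names (ty, term, arr, all), shows the idea: simple types become objects of a type ty, propositions are terms of the decoded type o, and quantification is available at every simple type, including o itself and nat → o.

```
(; Simple types as objects, with a distinguished type o of propositions. ;)
ty : Type.
def term : ty -> Type.

o   : ty.                   (; the type of propositions ;)
nat : ty.                   (; an example base type     ;)
arr : ty -> ty -> ty.       (; simple function types    ;)
[a,b] term (arr a b) --> term a -> term b.

(; Quantification at any simple type a: taking a := o quantifies over
   propositions, taking a := arr nat o quantifies over predicates. ;)
all : a : ty -> (term a -> term o) -> term o.
```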

The Curry–Howard correspondence

Over the years, type theory has evolved to give rise to many systems, all based more or less on the λ-calculus. At the end of the 1960s, a central idea emerged: the Curry–Howard correspondence [Cur34, dB68, How80], also known as the “propositions-as-types” principle, which highlights an isomorphism between proofs and programs. A proposition A in some logic L corresponds to a type ⟦A⟧ in some calculus λL, and a proof of A in this logic corresponds to a program of type ⟦A⟧ in that calculus. This correspondence can be summarized by the statement: ⊢L A ⇐⇒ ∃M. ⊢λL M : ⟦A⟧
where M is a straightforward encoding of the proof of A. The implication A ⇒ B corresponds to the type of functions A → B, and the various derivation rules of the logic are in one-to-one correspondence with the typing rules of the calculus.
By viewing proofs as programs in a typed functional language, proof checking becomes equivalent to type checking, and a lot of the knowledge and expertise that we have about programming languages can be applied to logic, and vice versa. Proofs are no longer only about programs: they are programs themselves, and can be manipulated as such, quantified over, passed around, and executed. This beautiful duality has since been extended to many logics and systems.
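In its simplest form, the correspondence can be illustrated by the following Dedukti-style sketch, where A and B are placeholder propositions: reading A → B as “A implies B”, a proof of A ⇒ A is the identity function and modus ponens is function application.

```
(; Read A and B as propositions and A -> B as "A implies B". ;)
A : Type.
B : Type.

(; A proof of A => A is the identity program. ;)
def a_implies_a : A -> A := x : A => x.

(; Modus ponens is function application. ;)
def modus_ponens : (A -> B) -> A -> B := f : (A -> B) => x : A => f x.
```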
This correspondence highlights the computational content of proofs, which is important for constructive logics, i.e. logics that avoid classical reasoning and the use of the law of excluded middle (A ∨ ¬A), because it allows the extraction of programs from proofs and allows us to view types as program specifications. Intuitionistic type theory (ITT) was developed by Martin-Löf [ML73, ML84] as a logic based solely on this idea, and was proposed as a foundation for constructive mathematics. It is implemented in the system Agda. The calculus of constructions (CC) was proposed by Coquand and Huet [CH88] in the late 1980s as a powerful calculus that allows impredicative reasoning. Later, the work of Luo [Luo90] and of Coquand and Paulin [CP90] unified ITT and CC in what is known today as the calculus of inductive constructions (CIC). This powerful and expressive theory serves as the basis of several proof assistants, including the well-known Coq system and Matita. In this thesis, we make heavy use of the Curry–Howard correspondence and we treat proof systems as typed λ-calculi.


The logical framework approach

A logical framework in type theory is a system that can define various logics and express proofs in those logics in such a way that provability corresponds to type inhabitation. Instead of using a different calculus λL for every logic L, we use a single calculus LF. A logic L is embedded as a signature ΣL that defines the constructs, axioms, and rules of the logic, in such a way that a proposition A is provable in L if and only if its corresponding type is inhabited in this signature:
⊢L A ⇐⇒ ∃M. ΣL ⊢LF M : ⟦A⟧.
We can thus use the type checker of the framework as a proof checker for different logics. Several logical frameworks with varying degrees of power and expressivity have been proposed over the years. The approach was pioneered by de Bruijn [dB68] in the Automath system in the late 1960s, but it was popularized by Harper, Honsell, and Plotkin [HHP93] with the system LF, a relatively weak but expressive system that consists of a minimal λ-calculus with dependent types, also known as the λΠ-calculus. Through the propositions-as-types principle, it can express a wide variety of logics. It is implemented in the system Twelf, which is a type checker and meta-theorem prover for that calculus. It has been successfully used to embed a wide variety of theories, including some higher-order ones such as HOL and system F.
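As a small illustration, here is what an LF-style signature for minimal propositional logic looks like in Dedukti-style syntax, without any rewriting; the names are illustrative. The connectives and inference rules are ordinary typed constants, and a proposition p is provable exactly when the type prf p is inhabited in this signature.

```
(; Propositions and their proofs. ;)
prop : Type.
prf  : prop -> Type.

(; Implication, with its introduction and elimination rules as constants. ;)
imp : prop -> prop -> prop.
imp_intro : p : prop -> q : prop -> (prf p -> prf q) -> prf (imp p q).
imp_elim  : p : prop -> q : prop -> prf (imp p q) -> prf p -> prf q.

(; The proof of p => p, built from the constants of the signature. ;)
def id_proof : p : prop -> prf (imp p p)
  := p : prop => imp_intro p p (x : prf p => x).
```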
In this thesis, we use the λΠ-calculus modulo rewriting [CD07, BCH12], an extension of the λΠ-calculus with term rewriting. As we will see, the λΠ-calculus is powerful enough to express computational embeddings, such as for first-order logic, and higher-order embeddings, such as for higher-order logic. However, it cannot easily and efficiently express the proofs of theories that are at the same time computational and higher-order, such as intuitionistic type theory or the calculus of constructions. Therefore, a logical framework based on it cannot practically express the proofs of Agda, Coq, and Matita. The λΠ-calculus modulo rewriting, as we will show, can be seen as a good compromise between the λΠ-calculus and Martin-Löf’s logical framework. It is implemented in the type checker Dedukti, which we use in this thesis to check the results of our translations. In Chapter 2, we recall the theory of the λΠ-calculus and discuss its strengths and weaknesses. In Chapter 3, we present the λΠ-calculus modulo rewriting and its metatheory, and show how it alleviates the weaknesses mentioned above.
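By contrast, in the λΠ-calculus modulo rewriting the same connective can be given a shallow treatment: a rewrite rule identifies prf (imp p q) with the framework's own function space, so that object-level proof reduction is performed by the framework's reduction. The following sketch conveys the idea with illustrative names; it is not the exact encoding used later in the thesis.

```
prop : Type.
def prf : prop -> Type.

imp : prop -> prop -> prop.
[p,q] prf (imp p q) --> prf p -> prf q.

(; A proof of p => p is now literally the identity function. ;)
def id_proof : p : prop -> prf (imp p p) := p : prop => x : prf p => x.
```

Compared with the LF-style signature above, the introduction and elimination constants disappear, and with them the administrative steps that would otherwise clutter large, computation-heavy proofs.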

Computational higher-order logics

Pure type systems

In the late 1980s, Berardi and Terlouw [Ber89, Ter89] introduced pure type systems as a general framework for describing various typed λ-calculi. The framework is general enough to include many popular systems such as the simply typed lambda-calculus, system F, the calculus of constructions, and even higher-order logic. Because of its generality, it is the perfect candidate for studying the embedding of various computational higher-order systems. We briefly recall the theory of pure type systems in Chapter 4.
This thesis stems from the work of Cousineau and Dowek [CD07], who presented in 2007 a general embedding of pure type systems in the λΠ-calculus modulo rewriting. We recall and revise their work in Chapter 5, where we show that the embedding preserves typing and computation. Chapter 6 is devoted to proving the conservativity of this embedding, a result which was missing from the original paper. More specifically, we show that an original formula is provable in the embedding only if it is provable in the original system, thus justifying the use of the λΠ-calculus modulo rewriting as a logical framework. Traditional techniques fail in this setting, so we propose a new approach based on reducibility to show that a proof in the embedding reduces to a proof in the original system. This work has been the subject of a publication at TLCA 2015 [Ass15], although the proof we present here is more general and can be adapted to the translations we present later in this thesis.
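To give an idea of its shape, the following Dedukti-style sketch shows the embedding for a single sort s of a pure type system with a product rule (s, s, s); the names U_s, El_s, pi_s are illustrative. The sort becomes a universe of codes, a decoding function maps codes to framework types, and the code for a dependent product decodes to the framework's own dependent product. Axioms (s1, s2) are handled similarly, by a code for U_s1 inside U_s2.

```
(; Universe of codes for the sort s, and its decoding function. ;)
U_s : Type.
def El_s : U_s -> Type.

(; Code for dependent products of the rule (s, s, s), decoded into the
   dependent product of the framework. ;)
pi_s : a : U_s -> (El_s a -> U_s) -> U_s.
[a,b] El_s (pi_s a b) --> x : El_s a -> El_s (b x).
```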
In Chapter 7, we show how we applied these ideas in the implementation of Holide, a tool for the automatic translation of HOL proofs to Dedukti, and present experimental results obtained from translating the OpenTheory standard library. Holide was started as an internship project, but we continued its development during this thesis. In particular, we implemented term sharing, an optimization that proved to be crucial for the efficiency of the translation. This work was presented at PxTP 2015 [AB15].
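The idea behind term sharing, sketched here with illustrative names, is simply to bind a subterm that occurs many times in the generated file to a top-level definition and to refer to it by name, instead of printing a fresh copy at every occurrence.

```
Nat : Type.
zero : Nat.
succ : Nat -> Nat.

pair : Nat -> Nat -> Type.
mk   : n : Nat -> m : Nat -> pair n m.

(; The repeated subterm is emitted once... ;)
def shared_1 : Nat := succ (succ (succ zero)).

(; ...and every later occurrence refers to it by name, in terms and types. ;)
def example : pair shared_1 shared_1 := mk shared_1 shared_1.
```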

Universes and cumulativity

In type theory, a universe is a type whose elements are themselves types. In systems based on the Curry–Howard correspondence, universes allow higher-order quantification (e.g. over propositions or predicates) and are essential to higher-order reasoning. They are already present in pure type systems, where they are called sorts.
While a basic embedding of universes can already be found in the embedding of pure type systems, there are additional features related to universes that are not covered, namely infinite universe hierarchies and universe cumulativity. Universes are usually organized in an infinite hierarchy, where each universe is a member of the next one: Type₀ ∈ Type₁ ∈ Type₂ ∈ ⋯.
This stratification is required to avoid the paradoxes that would arise from Type ∈ Type. It is similar to the distinction between sets (or small sets) and collections (or large sets) in set theory. Cumulativity expresses that each universe is contained in the next one: Type₀ ⊆ Type₁ ⊆ Type₂ ⊆ ⋯.
Infinite hierarchies and cumulativity have become a common part of many logical systems, such as intuitionistic type theory and the calculus of inductive constructions, implemented in Coq and Matita. We present techniques for embedding these two features in the λΠ-calculus modulo rewriting. To this end, we first generalize the framework of pure type systems to cumulative type systems (CTS) [Bar99, Las12], which we present in Chapter 8. We then extend the embedding of pure type systems to cumulative type systems. A major difficulty is that cumulativity in Coq and Matita is expressed implicitly. To handle it, we build on a comparison of two popular formulations of universes in intuitionistic type theory, à la Russell (implicit) and à la Tarski (explicit), and of their interaction with cumulativity. We propose to use explicit coercions to represent cumulativity. In particular, we show that, in order for the embedding to be complete, coercions need to satisfy a condition called full reflection, which ensures that types have a unique representation inside universes. This idea was the subject of a publication at TYPES 2014 [Ass14], although presented in a different light, as a calculus of constructions with explicit subtyping. The embedding of cumulative type systems is the subject of Chapter 9. In Chapter 10, we focus on infinite hierarchies and show how to represent them using a finite set of constants and rewrite rules.
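The flavor of the encoding can be conveyed by the following Dedukti-style sketch, with illustrative names and in the spirit of the constructions of Chapters 9 and 10: universe levels are represented by framework-level naturals, each universe has a code in the next one, and cumulativity is made explicit by a lift coercion whose decoding is transparent.

```
(; Universe levels. ;)
Lvl : Type.
z : Lvl.
s : Lvl -> Lvl.

(; The hierarchy of universes and its decoding function. ;)
U : Lvl -> Type.
def El : i : Lvl -> U i -> Type.

(; Each universe is an element of the next one. ;)
u : i : Lvl -> U (s i).
[i] El _ (u i) --> U i.

(; Cumulativity as an explicit lift coercion from U i to U (s i). ;)
lift : i : Lvl -> U i -> U (s i).
[i,a] El _ (lift i a) --> El i a.
```

In this sketch, the last rule makes a lifted code decode to the same type as the original, which is the kind of property that the full reflection condition of Chapter 9 requires.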
This thesis finally culminates in an embedding of the calculus of inductive constructions, which we have implemented in two automated translation tools: Coqine, which translates the proofs of Coq to Dedukti, and Krajono, which translates the proofs of Matita to Dedukti. We present this translation, along with experimental results obtained from translating various libraries, in Chapter 11.

Table of contents:

Introduction 
1 Introduction 
1.1 Proof systems
1.1.1 Formalization of mathematics
1.1.2 Software verification
1.2 Universality of proofs
1.2.1 Computational reasoning
1.2.2 Higher-order reasoning
1.3 Type theory and logical frameworks
1.3.1 The Curry–Howard correspondence
1.3.2 The logical framework approach
1.4 Computational higher-order logics
1.4.1 Pure type systems
1.4.2 Universes and cumulativity
1.5 Contributions
1.6 Preliminary notions and notations
I Frameworks and embeddings 
2 The λΠ-calculus
2.1 Definition
2.2 As a first-order logic calculus
2.3 As a logical framework
2.4 Limitations of the λΠ-calculus
3 The λΠ-calculus modulo rewriting
3.1 Definition
3.2 Basic properties
3.2.1 Soundness properties
3.2.2 Completeness properties
3.3 Computational higher-order embeddings
II Pure type systems 
4 Pure type systems 
4.1 Definition
4.2 Examples of pure type systems
4.3 Basic properties
4.4 Inter-system properties
4.4.1 Subsystems
4.4.2 Morphisms
5 Embedding pure type systems 
5.1 Translation
5.2 Completeness
5.2.1 Preservation of substitution
5.2.2 Preservation of equivalence
5.2.3 Preservation of typing
5.3 Alternative embeddings
5.3.1 Embedding without in the rewriting at the level of types
5.3.2 Embedding without rewriting at the level of types
6 Conservativity 
6.1 Normalization and conservativity
6.2 Proof of conservativity
6.2.1 Conservativity of equivalence
6.2.2 Conservativity of typing
7 Application: translating HOL to Dedukti 
7.1 HOL
7.2 Translation
7.2.1 Translation of types
7.2.2 Translation of terms
7.2.3 Translation of proofs
7.3 Completeness
7.4 Implementation
7.4.1 Proof retrieval
7.4.2 Holide: HOL to Dedukti
7.4.3 Extensions
III Cumulative and infinite hierarchies 
8 Cumulative type systems 
8.1 Definition
8.1.1 Specification and syntax
8.1.2 Subtyping
8.1.3 Typing
8.2 Basic properties
8.3 Inter-system properties
8.3.1 Subsystems
8.3.2 Morphisms
8.3.3 Closures
8.4 Principal typing
8.4.1 Definition
8.4.2 Derivation rules
8.4.3 Soundness
8.4.4 Completeness
8.5 Examples of CTSs with principal typing
9 Embedding cumulativity 
9.1 Translation
9.2 Completeness
9.2.1 Preservation of substitution
9.2.2 Preservation of equivalence
9.2.3 Preservation of typing
10 Infinite universe hierarchies 
10.1 Predicative universes
10.2 Impredicative universe
10.3 Cumulativity
11 Application: translating CIC to Dedukti 
11.1 Inductive types
11.1.1 Inductive types and eliminators
11.1.2 Eliminators in the λΠ-calculus modulo rewriting
11.2 Other features
11.2.1 Modules
11.2.2 Local let definitions and local fixpoints
11.2.3 Floating universes
11.2.4 Universe polymorphism
11.3 Implementation
11.3.1 Coqine: Coq to Dedukti
11.3.2 Krajono: Matita to Dedukti
Conclusion
12 Conclusion 
12.1 Related work
12.1.1 Logical embeddings
12.1.2 Proof interoperability
12.2 Future work
12.2.1 Improving the translations
12.2.2 Designing new embeddings
12.2.3 Interoperability in Dedukti
Appendix
