Epistemic logic semantics 


The distributed computing setting

The field of fault-tolerant computability studies which concurrent tasks can or cannot be solved in various computational models. In particular, the topological approach that we present here has been developed in order to prove impossibility results. One of the first such results is the “FLP impossibility” theorem [42], which says that in an asynchronous message-passing system with one faulty process, the consensus task is not solvable. Soon thereafter, people started developing powerful mathematical tools based on algebraic topology in order to prove impossibility results [14, 112, 70].
There is a wide variety of distributed computing models that we might want to consider:
– the processes might be synchronous or asynchronous;
– we might consider shared memory or message-passing systems;
– various shared objects or synchronization primitives might be available to the processes;
– there are various kinds of faults, from simple crashes where a faulty process just stops computing, to messages getting lost, or byzantine failures where a process can send false information.
Unlike in sequential computability theory, where Turing machines provide a universal computing model, all of these models are worth studying, and they can vary greatly in terms of computational power.


In the basic setting that we study here, a fixed number n of processes are running concurrently. Initially, each process is given a private input value. After communicating with each other using the communication primitives at their disposal, each process produces an output value. The goal of such a computation is to satisfy some relationship between the inputs and the outputs; this relation is called the task specification. Tasks have been studied since early on in distributed computability [10].
The most well-known task is the consensus task, where the participating processes must agree on one of their input values. For instance, if three processes start with input values (1, 2, 3), their goal is to produce the same output, either (1, 1, 1), (2, 2, 2), or (3, 3, 3). More precisely, if the n processes start the computation with input values (v_1, …, v_n), they should output decision values (d_1, …, d_n) such that the following two properties are satisfied:
– Validity: each decision is among the proposed inputs, i.e., for all i, there exists j such that d_i = v_j.
– Agreement: all processes decide on the same output, i.e., d_1 = ⋯ = d_n.
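As a sanity check, the two conditions translate directly into code. The Python sketch below (function name `valid_consensus` is ours, not from the thesis) tests a single execution against the specification:

```python
def valid_consensus(inputs, outputs):
    """Check one execution of the consensus task.

    Validity:  every decision d_i is one of the proposed inputs.
    Agreement: all processes decide the same value.
    """
    validity = all(d in inputs for d in outputs)
    agreement = len(set(outputs)) == 1
    return validity and agreement

# Three processes proposing (1, 2, 3):
assert valid_consensus((1, 2, 3), (2, 2, 2))       # agree on input 2
assert not valid_consensus((1, 2, 3), (1, 2, 3))   # agreement fails
assert not valid_consensus((1, 2, 3), (7, 7, 7))   # validity fails: 7 not proposed
```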
In fact, to account for the possibility of crashing processes, one should also specify what the expected behavior is when some processes are missing. The fully formal definition of a task will be given in Section 1.3.1.

Processes, programs and protocols

In principle, the simplicial complex approach to distributed computing should be able to model most computational models that we can conceive of. However, in this thesis, we will usually restrict ourselves to the context of asynchronous processes communicating through shared objects. In this section, we discuss the various assumptions about the computational model that we usually make.
A fixed number of processes. A very common assumption in the field of fault-tolerant computability is that we are running a fixed number n of processes. Those n processes start the computation together, and once a process has decided its output value, it does not intervene in the computation anymore. The processes are not allowed to create new threads: so, each process has its own sequential program, and these n programs are running concurrently. A protocol simply consists of one program for each process.
The processes have unique identifiers. Each process has access to its own identifier, or “PID”, and all processes have distinct identifiers. Moreover, all the processes know in advance the names of all the other processes; in particular, they know the number n of participating processes. In practice, this assumption is enforced by the fact that we allow each process to have its own program. This is contrasted with the context of anonymous computation, where all the processes run the same program. In this context, the suitable notion of task is called a colorless task [64].
Processes are asynchronous. We will mostly be interested in asynchronous computation, where an execution consists of an arbitrary interleaving of operations of all the processes. However, in some cases, a round-based structure can be recovered in an asynchronous model, such as in the case of the iterated immediate-snapshot protocol [15].
Protocols are wait-free. We usually require the protocols to be wait-free, meaning that every process must be guaranteed to terminate its program in a bounded number of steps. In the presence of asynchrony, this implies that a process is not allowed to wait until it receives information from any of the other processes, since they may be arbitrarily slow. This assumption makes sense when we consider that any number of processes are allowed to crash during the computation. In contrast, the weaker assumption of t-resilience is also often considered, where at most t processes can crash. Intuitively, this means that a process is allowed to wait until it hears from n − t − 1 other processes before making any progress. We will occasionally mention t-resilience in the rest of the thesis, but the focus will be mostly on wait-free protocols.
Faulty processes do not participate in the execution. The only kind of failure that we consider is when some processes crash before they take any steps in the computation. Thus, in an execution of a protocol, we simply select a subset of the n processes (the non-faulty ones), and we run their programs together. The subset of participating processes is not known in advance by the processes. This assumption is not as restrictive as it seems: indeed, in an asynchronous and wait-free context, a process that crashes in the middle of its execution is indistinguishable from a process that is just very slow. Moreover, other kinds of failures such as messages getting lost, or byzantine failures, can be encoded in the behavior of the communication primitives that we use (see Section 1.1.4).

The (un)importance of crashes

In many cases, the two ingredients that allow us to prove impossibility results are asynchrony and wait-freeness. Once we make those two assumptions, the possibility of crashing processes often becomes irrelevant: this is for example the case for the usual impossibility of solving consensus [63] and set-agreement [112] using read/write registers. Similarly, the impossibility result that we present in Section 1.4 takes place in a setting without any crashes: all the n processes are guaranteed to participate in the execution, and to run until the end of their program.
Of course, the reason why one would study wait-free protocols in the first place is that we want our programs to be correct in the presence of failures. But the point that we want to make here is that there is often no need to explicitly model the crashes in order to obtain impossibility results. However, there is one subtle difference between whether or not we allow the processes to crash. We illustrate it by comparing the consensus task (see Section 1.1.1), and the leader election task. The leader election task is very similar to consensus, except that instead of being given input values, the processes must elect a leader, that is, agree on one of their PIDs. Thus, in a leader election task, every process knows in advance its “input”, as well as the inputs of the other processes.
If there is no possibility of crashes, there is a trivial solution to the leader election task: every process elects the process with the smallest PID, without needing to communicate with the others. This protocol would not work if crashes can happen: indeed, if the process with the smallest PID has crashed (i.e., does not participate in the computation), then this is no longer a valid decision. And in fact, leader election is unsolvable in an asynchronous wait-free setting using read/write registers if crashes are possible.
On the other hand, the consensus task, where the processes do not know their inputs in advance, is unsolvable whether or not there is a possibility of crashes. Indeed, consider a model without crashes, and suppose that a process P starts the computation with input i. Since our model is asynchronous, it might be the case that all the other processes are very slow; and since the model is wait-free, process P must eventually decide on an output after running in isolation. Since there is a possibility that the set of inputs was (i; i; : : : ; i), and because of the validity requirement, the only possible decision is to return i. This is precisely the restriction that we would add in a model with crashes: in a solo execution, a process must return its own input. Here, there is no need to add this requirement in advance, since it can be derived from the wait-free and asynchrony assumptions, and the task specification.
This distinction will be important in Chapter 3, where the epistemic logic definition of task solvability is naturally “without crashes”, but we are still able to prove impossibility results.

Shared objects

As we mentioned before, in our setting, the processes communicate through shared objects. Here, the notion of object should be understood in an abstract sense; in particular, it does not necessarily mean shared memory. An object is seen as a “black box” with which the processes can interact by calling its methods. Examples of objects include:
– shared memory primitives: read/write registers, immediate-snapshot objects,
– synchronization primitives: test-and-set, compare-and-swap,
– concurrent data structures: lists, queues, trees, hashmaps,
– message-passing interfaces: reliable or with failures,
– consensus objects, set-agreement objects, . . .
Chapter 2 will discuss thoroughly the various methods that can be used to specify the behavior of such objects. For the moment, the main point that we want to make is that all these objects can be specified by giving their interface (i.e., a set of methods), and by specifying the behavior of these methods. For instance, lists and queues both have the same interface (with methods push and pop), but depending on the case, the specification of the pop method will say that it returns either the last or the first value that was previously pushed. Similarly, a message-passing interface is usually specified by its methods send and receive, but depending on their specification, we can model reliable or unreliable operations.
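The point about identical interfaces with different specifications can be illustrated with a sequential sketch (the class names below are ours; concurrent specifications, which are the real subject of Chapter 2, are considerably subtler):

```python
from collections import deque

class Stack:
    """push/pop interface where pop returns the *last* value pushed (LIFO)."""
    def __init__(self):
        self._data = []
    def push(self, v):
        self._data.append(v)
    def pop(self):
        return self._data.pop()

class Queue:
    """Same interface, different specification: pop returns the *first* value (FIFO)."""
    def __init__(self):
        self._data = deque()
    def push(self, v):
        self._data.append(v)
    def pop(self):
        return self._data.popleft()

s, q = Stack(), Queue()
for v in (1, 2, 3):
    s.push(v)
    q.push(v)
assert s.pop() == 3   # last pushed
assert q.pop() == 1   # first pushed
```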
Since we are interested in an asynchronous wait-free setting, our main degree of freedom to produce models of varying computational power will be the choice of objects that the processes have access to.

Combinatorial topology

In this section, we recall the definitions from combinatorial topology that we will be using in the rest of the thesis. A more thorough account of these notions can be found in [64].

Simplicial complexes and simplicial maps

The main basic structure that we will use is called a simplicial complex. Combinatorially, it can be seen as a kind of higher-dimensional generalization of a graph: it has vertices and edges, but also higher-dimensional cells such as triangles, tetrahedra, and so on. An n-dimensional cell is called an n-simplex, or just a simplex (plural: simplices) when the dimension is irrelevant or clear from the context. Thus, vertices, edges, triangles and tetrahedra can also be called 0-, 1-, 2- and 3-simplices, respectively. Geometrically, an n-simplex corresponds to the convex hull of n + 1 affinely independent points in ℝ^d, for d ≥ n.
Although this geometric interpretation will be useful when we use pictures to illustrate what is going on in low-dimensional examples, our formalization will rely entirely on the following combinatorial definition. In the literature, the name abstract simplicial complex is often used to differentiate it from other more geometric definitions; here we simply refer to them as simplicial complexes, since it is the only definition that we use.
Definition 1.1. A simplicial complex C = (V, S) consists of a set V of vertices and a set S of non-empty finite subsets of V called simplices, such that:
– for each vertex v ∈ V, {v} ∈ S, and
– S is closed under containment, i.e., if X ∈ S and Y ⊆ X with Y ≠ ∅, then Y ∈ S.
We sometimes abuse notation and write X ∈ C instead of X ∈ S to mean that X is a simplex of C. If Y ⊆ X, we say that Y is a face of X. The simplices that are maximal w.r.t. inclusion are called facets. The dimension of a simplex X ∈ S is dim(X) = |X| − 1, and a simplex of dimension n is called an n-simplex. We usually identify a vertex v ∈ V with the 0-simplex {v} ∈ S. The dimension of the simplicial complex C is dim(C) = sup{dim(X) | X ∈ S}. A simplicial complex is pure if all its facets have the same dimension (which is also the dimension of the complex). We say that a simplicial complex C = (V, S) is a subcomplex of C′ = (V′, S′) when S ⊆ S′, and we write C ⊆ C′.
Example 1.2. The simplicial complex represented below has vertex set V = {a, b, c, d, e, f, g, h, i, j}, and its five facets are {a, b}, {b, c, d}, {c, d, e}, {f}, and {g, h, i, j}. So, the set S of simplices contains all non-empty subsets of these facets. For instance, the 2-dimensional simplex {g, h, j} is a face of {g, h, i, j}. This simplicial complex is of dimension 3 because its largest simplex {g, h, i, j} is a 3-simplex.
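Since a simplicial complex is determined by its facets via downward closure, Example 1.2 can be reconstructed mechanically. The following Python sketch (function name is ours) computes the closure and checks the stated dimensions:

```python
from itertools import combinations

def complex_from_facets(facets):
    """Downward closure: all non-empty subsets of the given facets."""
    S = set()
    for F in facets:
        for k in range(1, len(F) + 1):
            S.update(frozenset(c) for c in combinations(tuple(F), k))
    return S

facets = [{'a', 'b'}, {'b', 'c', 'd'}, {'c', 'd', 'e'}, {'f'}, {'g', 'h', 'i', 'j'}]
S = complex_from_facets(facets)

assert max(len(X) for X in S) - 1 == 3         # dim(C) = 3, from {g, h, i, j}
assert frozenset({'g', 'h', 'j'}) in S         # a 2-dimensional face
assert sum(1 for X in S if len(X) == 1) == 10  # ten vertices
```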
Definition 1.3. A simplicial map f : C → C′ from C = (V, S) to C′ = (V′, S′) is a mapping f : V → V′ between the vertices of the two complexes such that the image of a simplex is a simplex, i.e., for every X ∈ S, we have f(X) := {f(v) | v ∈ X} ∈ S′.
A simplicial map is rigid if the image of each simplex X ∈ S has the same dimension as X, i.e., if dim(f(X)) = dim(X). It is easily checked that the composition of two simplicial maps is a simplicial map, and that rigidity is preserved by composition. If K ⊆ C is a subcomplex of C, and f : C → C′ is a simplicial map, we write f(K) = ⋃_{X ∈ K} f(X), which is a subcomplex of C′.
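Both conditions (being simplicial, being rigid) are directly checkable on finite complexes. The sketch below (names are illustrative) collapses a solid triangle onto an edge, giving a map that is simplicial but not rigid:

```python
from itertools import combinations

def closure(facets):
    """All non-empty faces of the given facets."""
    S = set()
    for F in facets:
        for k in range(1, len(F) + 1):
            S.update(frozenset(c) for c in combinations(tuple(F), k))
    return S

def is_simplicial(f, S, S2):
    """The image of every simplex of S is a simplex of S2."""
    return all(frozenset(f[v] for v in X) in S2 for X in S)

def is_rigid(f, S):
    """The image of every simplex keeps its dimension."""
    return all(len({f[v] for v in X}) == len(X) for X in S)

S  = closure([{'x', 'y', 'z'}])   # a solid triangle
S2 = closure([{'u', 'w'}])        # a single edge

f = {'x': 'u', 'y': 'w', 'z': 'u'}   # collapse the triangle onto the edge
assert is_simplicial(f, S, S2)
assert not is_rigid(f, S)            # e.g. {x, z} (dim 1) maps to {u} (dim 0)
```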
Chromatic simplicial complexes. To model distributed computation, we will often decorate the vertices of our simplicial complexes with various labelings: process names, input and output values, local states. In some cases, it is important that in each simplex, all the vertices are assigned distinct labels. When this is the case, we usually refer to these labels as colors. When a simplicial complex is equipped with such a coloring, we say that it is a chromatic simplicial complex.
In the following, we fix a finite set A whose elements are called colors.
Definition 1.4. A chromatic simplicial complex C = (V, S, χ) consists of a simplicial complex (V, S) equipped with a coloring map χ : V → A such that all vertices of every simplex X ∈ S have different colors, that is, for all X ∈ S and all v, v′ ∈ X, if v ≠ v′ then χ(v) ≠ χ(v′).
On the pictures, we often use the three colors black, gray and white, for printer-friendliness. The example below depicts a pure 2-dimensional chromatic simplicial complex with three colors.
Definition 1.5. A chromatic simplicial map f : C → C′ from C = (V, S, χ) to C′ = (V′, S′, χ′) is a mapping f : V → V′ between the vertices of the two complexes, such that:
– f is a simplicial map on the underlying simplicial complexes, and
– f is color-preserving, i.e., for every v ∈ V, χ′(f(v)) = χ(v).
Note that a chromatic simplicial map is necessarily rigid, and moreover, the composition of two such maps is still a chromatic simplicial map.
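The fact that color preservation forces rigidity can be observed concretely: within a simplex all vertices carry distinct colors, so a color-preserving map cannot identify two of them. A small Python check (names are ours; the coloring χ is encoded as a dictionary `chi`):

```python
from itertools import combinations

def closure(facets):
    """All non-empty faces of the given facets."""
    S = set()
    for F in facets:
        for k in range(1, len(F) + 1):
            S.update(frozenset(c) for c in combinations(tuple(F), k))
    return S

S  = closure([{'x0', 'x1', 'x2'}])                       # one triangle
S2 = closure([{'y0', 'y1', 'y2'}, {'z0', 'y1', 'y2'}])   # two triangles glued on an edge
chi  = {'x0': 0, 'x1': 1, 'x2': 2}
chi2 = {'y0': 0, 'y1': 1, 'y2': 2, 'z0': 0}

f = {'x0': 'z0', 'x1': 'y1', 'x2': 'y2'}

simplicial = all(frozenset(f[v] for v in X) in S2 for X in S)
colorful   = all(chi2[f[v]] == chi[v] for v in f)
rigid      = all(len({f[v] for v in X}) == len(X) for X in S)
assert simplicial and colorful
assert rigid   # forced: distinct colors in a simplex force distinct images
```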
Remark 1.6. An alternative way to define chromatic simplicial complexes is to consider the simplicial complex A = (A, P(A)), where P(A) is the set of non-empty subsets of A. Then, a chromatic simplicial complex is given by a simplicial complex C = (V, S) together with a rigid simplicial map χ : C → A. A chromatic simplicial map f : C → C′ is then a simplicial map from C to C′ that commutes with the colorings, i.e., χ′ ∘ f = χ.
In other words, the category of chromatic simplicial complexes and chromatic simplicial maps is the slice category RigidSC/A, where RigidSC is the category of simplicial complexes and rigid simplicial maps.


Carrier maps

To model distributed computing, we also need another kind of map between simplicial complexes, called a carrier map. A carrier map associates with each simplex X of C a subcomplex Φ(X) of C′; we therefore write such a map Φ : C → 2^{C′}, in order to mimic the usual powerset notation.
Definition 1.7. Let C = (V, S) and C′ = (V′, S′) be two simplicial complexes. A carrier map from C to C′, written Φ : C → 2^{C′}, assigns to each simplex X ∈ S a subcomplex Φ(X) of C′, such that Φ is monotonic, i.e., if Y ⊆ X then Φ(Y) ⊆ Φ(X).
Given two carrier maps Φ, Ψ : C → 2^{C′}, we say that Φ is carried by Ψ, written Φ ⊆ Ψ, when for every simplex X ∈ C, Φ(X) ⊆ Ψ(X). A carrier map Φ is called rigid if for every simplex X ∈ S with dim(X) = d, the subcomplex Φ(X) is pure of dimension d. For a subcomplex K ⊆ C of C, we write Φ(K) = ⋃_{X ∈ K} Φ(X). Then the composition of two carrier maps can be defined as follows: given Φ : C → 2^{C′} and Ψ : C′ → 2^{C″}, their composition Ψ ∘ Φ : C → 2^{C″} is defined by (Ψ ∘ Φ)(X) = Ψ(Φ(X)).
It is easily checked that Ψ ∘ Φ is still a carrier map, and that if both Φ and Ψ are rigid, then so is Ψ ∘ Φ. It is also possible to compose a carrier map with a simplicial map. Let Φ : C → 2^{C′} be a carrier map, and f : C′ → C″ a simplicial map. Then f ∘ Φ : C → 2^{C″}, defined by (f ∘ Φ)(X) = f(Φ(X)), is a carrier map. We could define the precomposition of a carrier map with a simplicial map similarly, but we will only use the postcomposition.
Chromatic carrier maps. Unlike in the case of chromatic simplicial maps, here the rigidity is not implied by the preservation of colors. So, we also require that a chromatic carrier map should be rigid.
Definition 1.8. A chromatic carrier map from C = (V, S, χ) to C′ = (V′, S′, χ′), written Φ : C → 2^{C′}, assigns to each simplex X ∈ S a subcomplex Φ(X) of C′, such that:
– Φ is monotonic, i.e., if Y ⊆ X then Φ(Y) ⊆ Φ(X),
– Φ is rigid, i.e., for every simplex X ∈ S of dimension d, Φ(X) is pure of dimension d,
– Φ is color-preserving, i.e., for every X ∈ S, χ(X) = χ′(Φ(X)), where χ(X) = {χ(v) | v ∈ X} and χ′(Φ(X)) = ⋃_{Z ∈ Φ(X)} χ′(Z).
Given two chromatic carrier maps Φ : C → 2^{C′} and Ψ : C′ → 2^{C″}, and a chromatic simplicial map f : C′ → C″, one can check that the compositions Ψ ∘ Φ and f ∘ Φ are still chromatic carrier maps.

The standard chromatic subdivision

An important operation on chromatic simplicial complexes is called the standard chromatic subdivision. This construction will naturally arise when we model the immediate-snapshot protocol complex in Section 1.3.2. Chromatic subdivisions have been thoroughly studied by distributed computer scientists, due to the fact that they are able to model wait-free computation using read/write registers [64, 54, 80].
We start with an abstract combinatorial description of this construction, without referring to the colors:
Definition 1.9. Let C = (V, S) be a simplicial complex. Its standard chromatic subdivision ChSub(C) has vertices of the form (v, X_v) where v ∈ V is a vertex of C, and X_v ∈ S is a simplex of C that contains v. A set of vertices {(v_0, X_{v_0}), …, (v_k, X_{v_k})} is a k-simplex if:
– it can be indexed so that X_{v_0} ⊆ … ⊆ X_{v_k}, and
– for all i, j, if v_i ∈ X_{v_j}, then X_{v_i} ⊆ X_{v_j}.
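This definition can be tested by brute force on a single simplex: choose, for each vertex v, a simplex X_v containing v, and keep the assignments satisfying the two conditions. In the sketch below (names are ours), the facet counts 3, 13, 75 are the ordered Bell numbers, which match the number of one-round immediate snapshot executions of 2, 3 and 4 processes:

```python
from itertools import combinations, product

def powerset_containing(V, v):
    """All subsets of V that contain the vertex v (as frozensets)."""
    rest = [u for u in V if u != v]
    return [frozenset((v,) + c) for k in range(len(rest) + 1)
            for c in combinations(rest, k)]

def chsub_facets(V):
    """Facets of ChSub of the single simplex on vertex set V, by brute force:
    pick one X_v containing v for each vertex v, then check both conditions."""
    V = tuple(V)
    facets = []
    for Xs in product(*(powerset_containing(V, v) for v in V)):
        X = dict(zip(V, Xs))
        # condition 1: the X_v can be ordered into a chain under inclusion
        chain = all(X[u] <= X[v] or X[v] <= X[u]
                    for u, v in combinations(V, 2))
        # condition 2: if u appears in X_v, then X_u is contained in X_v
        coherent = all(X[u] <= X[v] for u in V for v in V if u in X[v])
        if chain and coherent:
            facets.append(frozenset((v, X[v]) for v in V))
    return facets

assert len(chsub_facets('ab')) == 3     # subdivided edge: 3 edges
assert len(chsub_facets('abc')) == 13   # subdivided triangle: 13 triangles
assert len(chsub_facets('abcd')) == 75  # subdivided tetrahedron: 75 facets
```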
Of course, as the name indicates, the most notable property of this subdivision operator is that when we subdivide a chromatic simplicial complex, we still have a chromatic simplicial complex.
Proposition 1.10. Let C = (V, S, χ) be a chromatic simplicial complex. Then ChSub(C), equipped with the coloring χ̂(v, X_v) := χ(v), is a chromatic simplicial complex.
Proof. Let {(v_0, X_{v_0}), …, (v_k, X_{v_k})} be a simplex of ChSub(C). First, notice that the vertices v_0, …, v_k are distinct: if v_i = v_j for i ≠ j, then since v_i ∈ X_{v_j} and v_j ∈ X_{v_i}, we would have X_{v_i} = X_{v_j}, so the two vertices (v_i, X_{v_i}) and (v_j, X_{v_j}) would coincide. Moreover, if we index the vertices so that X_{v_0} ⊆ … ⊆ X_{v_k}, then X_{v_k} is a simplex of C that contains all the vertices v_0, …, v_k. Since X_{v_k} is properly colored, all the vertices v_i must have distinct colors.

There is an associated carrier map Φ : C → 2^{ChSub(C)}, carrying each simplex of C to its subdivided image in ChSub(C). Formally, a simplex X ∈ C is sent to the subcomplex of ChSub(C) consisting of all the simplices {(v_0, Y_{v_0}), …, (v_k, Y_{v_k})} such that for all i, Y_{v_i} ⊆ X. It is easily checked that when C is a chromatic simplicial complex, Φ is a chromatic carrier map [64].
Finally, the iterated chromatic subdivision of C is obtained by iterating the previous construction, subdividing each simplex of ChSub(C) and so on. We denote by ChSubk(C) the complex obtained by iterating the subdivision operation k times.

Pseudomanifolds with boundary

We now define a class of simplicial complexes with nice topological properties, called pseudomanifolds with boundary, or simply pseudomanifolds. Our main motivation for introducing them is to be able to formulate two important theorems of combinatorial topology, namely Sperner’s lemma (Section 1.2.5) and the Index lemma (Section 1.2.6), both of which involve pseudomanifolds.
Definition 1.11. A pure simplicial complex C = (V, S) of dimension n is strongly connected if any two n-simplices can be connected by a sequence of n-simplices in which two consecutive simplices share an (n − 1)-dimensional face. More formally, for all X, Y ∈ S of dimension n, there exists a sequence of n-simplices X_0, …, X_k ∈ S such that X_0 = X, X_k = Y, and for all i, dim(X_i ∩ X_{i+1}) = n − 1.
Definition 1.12. A simplicial complex C is a pseudomanifold with boundary if:
– it is pure of dimension n,
– it is strongly connected,
– and every (n − 1)-simplex is a face of either one or two n-simplices.
The set of (n − 1)-simplices that are a face of exactly one n-simplex induces a pure (n − 1)-dimensional subcomplex of C, called the boundary of C, and written ∂C.
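On finite complexes, both the pseudomanifold condition on (n − 1)-simplices and the boundary are easy to compute by counting. The sketch below (names are ours; strong connectivity is not checked, but the two-triangle example clearly satisfies it) glues two triangles along an edge:

```python
from itertools import combinations
from collections import Counter

def boundary(facets):
    """For a pure n-dimensional complex given by its facets, count how many
    facets each (n-1)-face belongs to, and return the boundary faces
    (those contained in exactly one facet)."""
    n = len(next(iter(facets))) - 1
    assert all(len(F) == n + 1 for F in facets), "complex is not pure"
    count = Counter()
    for F in facets:
        for face in combinations(sorted(F), n):
            count[frozenset(face)] += 1
    # pseudomanifold condition: each (n-1)-face is in one or two facets
    assert all(c in (1, 2) for c in count.values()), "not a pseudomanifold"
    return {face for face, c in count.items() if c == 1}

# Two triangles glued along the edge {b, c}:
strip = [frozenset('abc'), frozenset('bcd')]
assert boundary(strip) == {frozenset('ab'), frozenset('ac'),
                           frozenset('bd'), frozenset('cd')}
```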
As the name indicates, pseudomanifolds with boundary are the combinatorial counterpart of a manifold in standard topology. However, they are less restricted than the usual manifolds, for two reasons. Firstly, they are allowed to have a boundary, corresponding to the (n − 1)-simplices that are a face of exactly one n-simplex; secondly, they are allowed to have singularities or “pinches”.

Sperner’s lemma

Sperner’s lemma is perhaps the most well-known result of combinatorial topology. It is a discrete version of Brouwer’s fixed-point theorem, which says that a continuous function from the n-dimensional unit ball into itself must have a fixed point. The simplest formulation of Sperner’s lemma deals with colorings of a subdivided triangle (or, in dimension n, a subdivided n-simplex); but in fact, it can be generalized to any pseudomanifold of dimension n. In this section, we just give the statement of this generalized version of Sperner’s lemma. The proof is a straightforward adaptation of the usual proof. Detailed proofs of Sperner’s lemma and its generalization in pseudomanifolds can be found in many places, such as in [79], in chapter 9 of [64], or in chapter 1 of [13].
Sperner’s lemma is traditionally stated in terms of colorings of the vertices of a simplicial complex. However, contrary to Section 1.2.1, several vertices of the same simplex may have the same color; in other words, the simplicial complex is not necessarily chromatic. To mark the difference, we use the colors red, green and blue on the pictures (instead of black, gray and white). In this context, when all the vertices of a simplex have distinct colors, we say that it is properly colored. Sperner’s lemma says that the number of properly-colored simplices inside a pseudomanifold is odd, provided that the colors on its boundary satisfy some conditions.
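In dimension 1, the statement can be checked exhaustively: a subdivided segment is a pseudomanifold whose boundary consists of its two endpoints, and if those endpoints receive the two distinct colors, the number of bichromatic edges is odd whatever the interior coloring. A small illustrative Python check:

```python
from itertools import product

def bichromatic_edges(colors):
    """Number of properly-colored edges (endpoints of distinct colors) along
    a subdivided segment whose vertex colors are listed in order."""
    return sum(1 for a, b in zip(colors, colors[1:]) if a != b)

# Sperner condition in dimension 1: the two boundary vertices get the two
# distinct colors 0 and 1.  The count is then odd for *every* interior coloring.
for interior in product((0, 1), repeat=4):
    colors = (0,) + interior + (1,)
    assert bichromatic_edges(colors) % 2 == 1
```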

Table of contents :

1 Preliminaries 
1.1 The distributed computing setting
1.1.1 Tasks
1.1.2 Processes, programs and protocols
1.1.3 The (un)importance of crashes
1.1.4 Shared objects
1.2 Combinatorial topology
1.2.1 Simplicial complexes and simplicial maps
1.2.2 Carrier maps
1.2.3 The standard chromatic subdivision
1.2.4 Pseudomanifolds with boundary
1.2.5 Sperner’s lemma
1.2.6 The Index lemma
1.3 Distributed computing through combinatorial topology
1.3.1 The task specification
1.3.2 The protocol complex
1.3.3 Definition of solvability
1.4 A case study: solvability of Equality Negation tasks
1.4.1 Solvable cases
1.4.2 Impossibility proof when k is small
1.4.3 Impossibility proof when n − k is odd
1.4.4 Discussion on the number of input values
2 Trace semantics 
2.1 A first approach
2.1.1 Objects specifications
2.1.2 Tasks
2.1.3 Protocols
2.1.4 Protocol Complex
2.1.5 Limits of this approach
2.2 Specifying concurrent objects
2.2.1 Objects vs Tasks
2.2.2 Concurrent specifications
2.2.3 Comparison of linearizability-based techniques
2.3 A computational model
2.3.1 Programs and protocols
2.3.2 Semantics of a protocol
2.4 From trace semantics to geometric semantics
2.4.1 Tasks as one-shot objects
2.4.2 Simplicial tasks
2.4.3 The protocol complex
2.4.4 A generalized asynchronous computability theorem
2.5 Towards game semantics for fault-tolerant protocols
2.5.1 What is game semantics?
2.5.2 Back to our computational model
2.5.3 A game semantics for fault-tolerant protocols
3 Epistemic logic semantics 
3.1 Preliminaries
3.1.1 Syntax
3.1.2 Kripke semantics
3.1.3 Dynamic Epistemic Logic
3.2 Simplicial complex models for Dynamic Epistemic Logic
3.2.1 Simplicial models
3.2.2 Equivalence between simplicial and Kripke models
3.2.3 Soundness and completeness
3.2.4 Relaxing the locality condition
3.2.5 A simplicial product update model
3.3 Distributed computing through Dynamic Epistemic Logic
3.3.1 Protocols as action models
3.3.2 Tasks as action models
3.3.3 DEL definition of task solvability
3.4 Examples
3.4.1 Consensus
3.4.2 Approximate agreement
3.4.3 2-Set agreement
3.5 Limits of the DEL approach
3.5.1 Bisimulation between simplicial models
3.5.2 The equality negation task
3.6 Extended DEL
3.6.1 Unsolvability of equality negation and 2-set-agreement
3.6.2 Perspectives of the Extended DEL approach
3.7 Conclusion and future work
4 Towards directed topological semantics 
4.1 Preliminaries
4.1.1 Higher-dimensional automata
4.1.2 Notions of paths on HDAs
4.2 Relating trace semantics and paths on HDAs
4.2.1 Three simple bijections
4.2.2 Chromatic subdivisions via partial cube chains
4.3 Future work and open questions

