
## Population Protocols

Population Protocols were introduced by Angluin et al. ([2]) as a theoretical model aiming at capturing the behaviour of networks of interacting, resource-limited, passively mobile agents. Agents are identically programmed mobile finite automata capable of wireless communication, interacting in pairs with neighbouring agents as they come across one another. The agents, however, do not control their movements and thus have no control over their changing neighbourhood. Computations are made by having each agent exchange its state information with other agents passing by, and update its state according to some transition rules. A key assumption is that both the memory size and the programming are fixed, depending only on the problem to be solved and not on the population that will run the protocol.

The canonical example introduced in [2] is that of a flock of birds in which a small fever sensor has been attached to each bird. Each fever detector is a simple machine able to detect whether the bird it is attached to has a higher-than-usual body temperature; this information is its input for the computation. Additionally, the fever detectors are assumed to have limited computational power, represented by a fixed-size finite memory (or set of states). They also have the ability to communicate wirelessly with the sensors of other birds passing close by and exchange state information with them. The programming of a sensor allows it to update its state when it exchanges information with another sensor, as a function of the pair of interacting states. The goal of such a system is to detect potential epidemics, that is, whether more than a certain threshold of birds have an elevated temperature, in a fully distributed fashion: after some (undetermined) computation time, an outside observer should only have to check any single sensor to know whether or not there is an epidemic in the entire population.

It is easy to see that if the threshold is some fixed number of diseased birds, say 10, then using a simple counter capable of going up to 10, coupled with an alert state, the computation can be achieved by having a counter start at 1 if the bird carrying the sensor has a temperature, and at 0 otherwise. During interactions, one counter is emptied into the other, and both agents go into alert mode if either counter reaches 10 (or either sensor is already in alert mode). Then, if the rate of encounters is sufficient to ensure information dissemination, every sensor will eventually go into alert mode if 10 or more birds are sick. Is it possible to apply the same approach if the threshold is not a fixed number, but a fixed ratio of the population? If, for example, an epidemic is detected when one in ten birds or more have a temperature? Population Protocols answer this question by allowing us to determine exactly which questions can be answered.
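This fixed-threshold protocol is easy to simulate. The sketch below (Python, with a uniformly random scheduler standing in for the fairness assumption; all names are illustrative) implements exactly the counter-merging rule just described:

```python
import random

THRESHOLD = 10  # fixed number of sick birds triggering the alert

def interact(a, b):
    """Transition rule: one counter is emptied into the other; the alert
    state spreads on contact.  States are 0..THRESHOLD-1 or 'ALERT'."""
    if a == "ALERT" or b == "ALERT":
        return "ALERT", "ALERT"
    if a + b >= THRESHOLD:
        return "ALERT", "ALERT"
    return a + b, 0

def run(sick, healthy, steps=200_000):
    pop = [1] * sick + [0] * healthy   # initial states from the inputs
    for _ in range(steps):
        i, j = random.sample(range(len(pop)), 2)
        pop[i], pop[j] = interact(pop[i], pop[j])
    return pop

random.seed(0)
final = run(sick=12, healthy=38)
print(all(s == "ALERT" for s in final))  # True: 12 >= 10 birds are sick
```

With 12 sick birds out of 50, every sensor eventually ends up in alert mode; with fewer than 10 sick birds, the total counter mass never reaches the threshold, so the alert state is unreachable.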

The seminal work of Angluin et al. [2] defines the formal model of population protocols and lays out the principles of computation, notably of predicates, using this model. It also shows how to compute predicates definable in Presburger arithmetic with population protocols on a complete interaction graph. Article [3] proves that, reciprocally, any predicate computable by such a population protocol is definable in Presburger arithmetic. While these results give a complete characterisation of the computational power of population protocols, they have spawned a large body of work on variations of the formal model, including other forms of computation with similarities to other computational models. While [7] provides a thorough and quite clear survey of the early work on population protocols, and remains a recommended introductory read on the matter, this chapter recalls many of the results presented there as well as additional later work.

**Population protocols and their computational power**

**Definitions**

Definition 1 (Population Protocol, [2]) A population protocol is a 6-tuple (Q, Σ, Y, ι, ω, δ) where

Q is a finite set of possible states for an agent,

Σ is a finite input alphabet,

Y is a (finite) set of possible outputs,

ι is an input map from Σ to Q, where ι(σ) represents the initial state of an agent whose input is σ,

ω is an output map from Q to the output range Y, where ω(q) represents the output value of an agent in state q,

δ ⊆ Q⁴ is a transition relation that describes how pairs of agents can interact and update their states.

In a population of n ≥ 2 agents, a configuration C of the system is the vector of all the agents' states. If C is a configuration on a given population, C(u) denotes the state of agent u in configuration C. Because the agents are assumed to be entirely anonymous and interchangeable, two configurations C and C′ that are identical up to a permutation of the states of some or all agents are indistinguishable. Therefore a configuration can be reduced to a multiset of states, or to a vector of ℕ^Q representing the number of occurrences of each possible state in the configuration. Each agent initially possesses an input value from Σ and its initial state is determined by applying ι to that input value. This provides a population with its initial configuration. As the initial configuration is indistinguishable from any configuration that differs only by a permutation of the agents' states, inputs can similarly be reduced to a vector of ℕ^Σ.

A configuration C′ is said to be reachable in one step from a configuration C if there exist two agents u, v in the population such that (C(u), C(v), C′(u), C′(v)) ∈ δ, and for any other agent w in the population, C′(w) = C(w). In other words, there is a pair of agents u, v whose interaction moves the population from configuration C to C′. We then write C → C′ to say that C′ can be reached from C in one step. The transitive closure of the → relation, denoted →*, means that a configuration C′ is reachable in an arbitrary number of steps from C or, in other words, that there is a sequence C → C₁ → … → C_k → C′ of arbitrary length. A computation is then an infinite sequence of configurations (C_k)_{k∈ℕ} such that for any integer k ∈ ℕ, we have C_k → C_{k+1}.
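Since configurations reduce to multisets of states, one-step reachability can be computed directly on count vectors. A minimal sketch (Python; encoding the transition relation δ ⊆ Q⁴ as a map from ordered state pairs to successor pairs is an illustrative simplification):

```python
from collections import Counter

def step_successors(config, delta):
    """All configurations reachable in one step from `config`.
    `config` is a Counter over states; `delta` maps an ordered pair of
    interacting states to the pair of updated states."""
    succ = []
    for (p, q), (p2, q2) in delta.items():
        needed_p = 2 if p == q else 1  # the pair needs two distinct agents
        if config[p] >= needed_p and config[q] >= 1:
            c = config.copy()
            c[p] -= 1
            c[q] -= 1
            c[p2] += 1
            c[q2] += 1
            c += Counter()  # canonical form: drop zero counts
            succ.append(c)
    return succ

# toy protocol: two leader tokens cancel,  (L, L) -> (L, N)
delta = {("L", "L"): ("L", "N")}
c0 = Counter({"L": 3})
print(step_successors(c0, delta) == [Counter({"L": 2, "N": 1})])  # True
```

Iterating `step_successors` to a fixed point enumerates the →*-reachable configurations of a finite population, which is exactly how reachability questions on small examples can be checked mechanically.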

Definition 2 (Fair computation) A computation (C_k)_{k∈ℕ} is said to be fair if, for any configurations C and C′ such that C → C′, if C appears infinitely often, then C′ must also appear infinitely often.

Equivalently, a computation is fair if any configuration that is always reachable will eventually be reached.

**Computational power.**

Definition 3 (Stable Computation) A protocol (Q, Σ, Y, ι, ω, δ) is said to stably compute a function f : ℕ^Σ → Y if, for any population of any size n, and for any input vector x ∈ ℕ^Σ, any fair computation starting from the associated initial configuration eventually stabilizes to a subset of configurations in which all agents agree on the same output y ∈ Y with y = f(x).

In the case of a binary output alphabet Y = {0, 1}, the function f can be interpreted as a predicate on the counts of the different input symbols in the configuration. One of the main early results on population protocols is a complete characterisation of the predicates that can be stably computed by a population protocol.

Theorem 1 ([2], [3]) The predicates stably computable by population protocols are exactly the semilinear predicates.

A semilinear predicate is a predicate over ℕ^k for some k that can be written as a boolean combination of predicates of the form:

∑_{i=1}^{k} aᵢxᵢ ≥ c, where c and all the aᵢ are (integer) constants;

∑_{i=1}^{k} aᵢxᵢ ≡ c (mod b), where c, b and all the aᵢ are (integer) constants.

Such predicates are called semilinear because they are the characteristic predicates of semilinear subsets of ℕ^k, that is, subsets of ℕ^k that can be written as a finite union of sets of the form {u + λ₁u₁ + … + λ_m u_m : (λ₁, …, λ_m) ∈ ℕ^m}, where u, u₁, …, u_m are fixed vectors of ℕ^k. They are known ([47]) to be exactly those predicates that can be defined in Presburger arithmetic.
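Both building blocks are straightforward to evaluate on a vector of input counts. A small illustrative sketch (function names are not from the literature), which also answers the ratio question from the bird example, since "at least one bird in ten has a fever" over counts (x_fever, x_healthy) is the semilinear predicate 9·x_fever − x_healthy ≥ 0:

```python
def threshold(coeffs, c):
    """Predicate:  sum_i a_i * x_i >= c  on a count vector x."""
    return lambda x: sum(a * xi for a, xi in zip(coeffs, x)) >= c

def modulo(coeffs, c, b):
    """Predicate:  sum_i a_i * x_i == c  (mod b)."""
    return lambda x: sum(a * xi for a, xi in zip(coeffs, x)) % b == c % b

# 9*x_fever - 1*x_healthy >= 0  <=>  fever ratio >= 1/10
epidemic = threshold([9, -1], 0)
# even number of fever inputs (a modulo building block, for illustration)
even_fever = modulo([1, 0], 0, 2)

print(epidemic((5, 45)), epidemic((4, 45)))  # True False
```

Here 5 feverish birds out of 50 is exactly the 1-in-10 ratio, so the predicate holds, while 4 out of 49 falls below it.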

In fact, [2] provides a constructive mechanism to design a population protocol corresponding to any given semilinear predicate described by its decomposition as a boolean combination of predicates of the above types. This construction relies on a mechanism to elect a leader: every agent starts with a "leader" token, and one token is destroyed whenever two token-wielding agents meet. The leader is then used to compute the final result by meeting all other agents and spreading the correct output. Angluin et al. [3] proved that, reciprocally, any computable predicate must be semilinear.
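The token-destruction leader election mentioned above can be sketched in a few lines (Python, random scheduler, illustrative names):

```python
import random

def leader_election(n, steps):
    """Every agent starts with a leader token; whenever two token holders
    meet, one of the tokens is destroyed.  Returns the number of
    surviving tokens after the given number of interactions."""
    tokens = [True] * n
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        if tokens[i] and tokens[j]:
            tokens[j] = False
    return sum(tokens)

random.seed(1)
print(leader_election(50, 100_000))  # 1: a single leader survives
```

Note that the token count can never drop below 1 (destroying a token requires two holders to meet), so under fairness exactly one leader always remains; with a random scheduler the last two holders meet within an expected Θ(n²) interactions.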

The characterization of computable predicates is sufficient to characterize the other functions computed by population protocols. Indeed, if the (assumed finite) output alphabet Y is not binary, then, for any computable function f and any given output y ∈ Y, the predicate f_y defined such that f_y(x) is true if and only if f(x) = y is computable (by simply composing the output function ω with the characteristic function of {y}). Reciprocally, if for some function f : ℕ^k → Y the predicates f_y are computable for all y ∈ Y, then by simply running all of the corresponding protocols in parallel, and taking as output the single value y for which the corresponding predicate stably converges to output 1, one can compute f. Thus a function f : ℕ^k → Y is computable if and only if all the associated predicates f_y, y ∈ Y, are semilinear.

A large number of variations of the model have been developed after the seminal work of Angluin et al. Some variations increase the individual computational power of the agents or restrict the interaction pattern, others study the impact of a uniformly random scheduler, or the tolerance to faulty or Byzantine agents.

### Random Scheduler

The idea of having the interacting pair of agents decided not by a fair scheduler but by picking it uniformly at random amongst all possible pairs of agents is very natural, and was introduced by Angluin et al. [2] as a way to increase the power of population protocols. Indeed, a computation driven by such a random scheduler is fair with probability 1, and thus can compute any semilinear predicate with probability 1. But a randomized scheduler also makes it possible to analyse the time to converge. Indeed, a fair scheduler may still delay a particularly critical interaction between two agents by an arbitrary number of rounds, whereas under a random scheduler such an interaction always has probability of order 1/n² of happening at each step, and thus happens within an expected O(n²) interactions. In fact, Angluin et al. [2] prove that any semilinear predicate can be computed with probability 1 in an expected total number of O(n² log n) interactions.
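The order-1/n² argument is easy to check empirically: under a uniformly random scheduler, a fixed pair of agents meets after an expected n(n−1)/2 interactions, since each interaction hits that pair with probability 2/(n(n−1)). A quick illustrative sketch:

```python
import random

def wait_for_pair(n, u=0, v=1):
    """Number of uniformly random interactions until agents u and v meet."""
    t = 0
    while True:
        t += 1
        i, j = random.sample(range(n), 2)
        if {i, j} == {u, v}:
            return t

random.seed(2)
n = 30
expected = n * (n - 1) / 2          # mean of a geometric with p = 2/(n(n-1))
avg = sum(wait_for_pair(n) for _ in range(500)) / 500
print(abs(avg - expected) / expected < 0.25)  # True: empirical mean matches
```

For n = 30 the expected wait is 435 interactions; averaging 500 trials lands well within a few percent of that value.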

On the other hand, relaxing the probability of success can allow more advanced computation to be achieved with high probability. Indeed, Angluin et al. [2] show how a leader may simulate a clock-like system by issuing a single mark and, after waiting to meet the marked agent k times, assume that Θ(nk) time has passed since the mark was issued. Using this, it can, for example, assume to have met all other agents at least once with high probability. This allows the non-leader agents to be used as a finite number of unary counters, which can simulate a register machine with an O(log n)-bit memory [43] using time polynomial in n.

Assuming the additional prerequisite that one agent is singled out as a leader from the start, Angluin, Aspnes, and Eisenstat [4] prove that such computations can be sped up to use only O(n log⁵ n) time. They first design a language of atomic operations on the registers represented by the non-leader agents, each of which can be executed in time O(n log n).

The unique leader then initiates a phase clock, which uses the spread of a finite number m of epidemics over the population, each taking time Θ(n log n), to determine clock phases, and thereby to detect with high probability when an atomic operation over the registers has been executed. The leader can thus determine with high probability when one atomic operation has had enough time to complete, allowing it to start the next operation of a sequential program and so simulate the virtual register machine. Angluin, Aspnes, and Eisenstat show how to use this virtual register machine to compute addition, subtraction, multiplication, or division using at worst O(log⁴ n) atomic operations, meaning that the actual computation time for division, for example, is O(n log⁵ n), while the other operations use fewer atomic operations.

It is also shown in [4] how to use this machinery to simulate a randomized LOGSPACE Turing machine with a constant number of read-only unary input tapes in d·n log² n interactions (for some constant d), with a probability of failure bounded by n^(−c) for any constant c > 0. Note that this brings the total computation time lower (for n large enough) than the time required to elect a leader by the destruction of initially universally distributed leader tokens.

**Restricted Interactions**

There are mainly two ways of restricting interactions in a population protocol, and they have radically different impacts on the computational power of the protocols considered. The first is to restrict the actual interaction mechanism by only allowing one-way communication, that is, by having only one agent (called the receiver) get the full state of the other agent (called the sender). The other way is to only allow certain pairs of agents to interact by defining a communication graph over the set of agents, the edges of which define legal interaction pairs (the classical model then corresponds to the complete communication graph).

**One-way interaction**

Angluin et al. [5] proposed several ways of restricting the interaction to one-way communication. In the observation model, the sender is not even aware that an interaction is happening (and thus cannot update its state), while in the transmission model, it is asked to transmit its data but has no additional information on the receiver. In addition, either model can be further altered by adding a possible delay in the exchange of information. In this case, instead of direct interactions between agents, there are two types of actions: send, in which a sender adds a new message to a set (an unordered queue) of messages, and receive, in which a receiver takes a message from the queue and updates its state accordingly. In this delayed setting, there is again the distinction between a transmission model, in which the sender can update its state in a send action (for example to update a counter of messages sent), and an observation model, in which it cannot. The receiver, of course, is always able to update its state to take new information into account.

In the delayed observation model, the only computable predicates are boolean combinations of predicates of the form x ≥ 1 that detect whether or not an input symbol is present in the configuration. If the additional restriction that agents may not receive their own messages is considered, then predicates of the form x ≥ 2, detecting multiple occurrences of a symbol, become computable too.
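The kind of presence-detection predicate (x ≥ 1) computable in the delayed observation model can be sketched as follows. This is an illustrative simulation, not the construction from [5]: sends and receives are interleaved at random, and senders never update their own state, as the observation model requires:

```python
import random

class DelayedObservation:
    """Sketch of the delayed observation model: senders append their state
    to an unordered message pool (without updating it); receivers consume
    a random pending message.  The protocol shown detects the presence of
    input symbol 'a', i.e. the predicate  x_a >= 1."""
    def __init__(self, inputs):
        self.states = [x == "a" for x in inputs]  # True iff an 'a' was seen
        self.pool = []                            # in-flight messages

    def send(self, i):
        self.pool.append(self.states[i])  # sender's state is unchanged

    def receive(self, i):
        if self.pool:
            m = self.pool.pop(random.randrange(len(self.pool)))
            self.states[i] = self.states[i] or m

random.seed(4)
pop = DelayedObservation(["a"] + ["b"] * 9)
for _ in range(20_000):
    i = random.randrange(10)
    random.choice([pop.send, pop.receive])(i)
print(all(pop.states))  # True: every agent has detected the 'a'
```

Since a flag, once set, only spreads, every agent eventually learns that an 'a' is present; if no 'a' is in the input, no message ever carries a True flag and all agents output False.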

In the immediate observation model, all predicates of the form x ≥ k for any threshold k become computable, as well as any boolean combination thereof. Again, no other predicates may be computed.
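A standard way to realise x ≥ k with immediate observation is a level-counting scheme: an agent at level i that observes another agent at the same level promotes itself to level i + 1, so levels 1..k all get filled exactly when at least k agents carry the input symbol. The sketch below (illustrative; dissemination of the final answer to all agents is omitted) simulates this idea under a random scheduler:

```python
import random

def io_threshold(num_a, num_other, k, steps=200_000):
    """Immediate-observation sketch for the predicate  x_a >= k.
    Agents holding input 'a' start at level 1, the rest at level 0.
    When an observer at level i (1 <= i < k) sees another agent at the
    same level, it promotes itself to level i + 1; the observed agent
    is unaware and unchanged.  Some agent reaches level k iff x_a >= k."""
    pop = [1] * num_a + [0] * num_other
    for _ in range(steps):
        r, s = random.sample(range(len(pop)), 2)  # r observes s
        if 1 <= pop[r] == pop[s] < k:
            pop[r] += 1
    return max(pop) >= k

random.seed(3)
print(io_threshold(5, 20, k=4), io_threshold(3, 20, k=4))  # True False
```

With only 3 'a'-agents the highest reachable level is 3, so the threshold 4 is provably never reached, while with 5 'a'-agents the pigeonhole principle forces promotions until level 4 is occupied.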

The immediate and delayed transmission models (in which the sender is aware that it is sending a message) were proven to have the same computational power, which is a strict subset of the semilinear predicates but also strictly larger than the simple threshold predicates considered above, as it also includes modulo predicates.

Finally, if agents are allowed to refuse to receive messages when in certain states, in what is called the queued transmission model (senders being aware that they are sending messages), then any two-way interaction protocol can be simulated by a queued transmission protocol.

**Restricted communication graphs**

Restricting the communication graph, that is, only allowing certain pairs of agents to interact, does not reduce the computational power of population protocols as long as the graph of permitted interactions remains connected. It is possible to simulate a protocol designed for the complete graph on a smaller connected graph by having neighbours swap their states to simulate movement in the graph. Conversely, if additional information on the structure of the communication graph is known beforehand, the designer of a protocol can take advantage of this knowledge to perform computations that would be impossible on the complete graph. Angluin et al. [2] proved that a straight-line graph allows the simulation of a linear-space Turing machine by having each agent represent one square of the tape of the Turing machine.

If, however, the graph is not known beforehand, population protocols can be used to decide properties of the underlying graph. A population protocol can be used, for example (see [1]), to determine whether the underlying communication graph of a given population has maximum degree less than some predefined finite value k. Angluin et al. [1] give several other types of graph properties that can be computed using population protocols.

#### Enhanced individual computational power

A line of work on population protocols has focused on the impact of increasing the individual power of agents, generally in keeping with the idea that the individual computational power should remain small compared to the size of the population.

**Unique identifiers**

Guerraoui and Ruppert [31] proposed a model, called community protocols, in which the state space of the agents is extended with an identifier-storing capacity (of size O(log n)). Each agent starts with a unique identifier and can store a fixed finite number of other identifiers. The only operation allowed on these identifiers is testing for equality, and, after an interaction, the set of identifiers stored by the two interacting agents must be a subset of the set of identifiers known to either of them before the interaction.

Such a community protocol can compute a predicate if and only if that same predicate can be computed by a Turing machine with O(n log n) space and is invariant under permutations of the input characters. Since a population of n agents with any kind of memory of size O(log n) could be simulated by a Turing machine with O(n log n) space, this means that restricting the use of the extra memory to identifier storage (with the addition of individual unique identifiers at start) is not restrictive when compared to general-purpose O(log n) memory.

**Passively mobile machines**

Going further, Chatzigiannakis et al. [20] consider using communication-capable Turing machines as agents instead of finite automata. Calling their model passively mobile machines, their main result is that a predicate is computable by passively mobile machines using O(f(n)) space with f(n) = Ω(log n) if and only if it is computable by a non-deterministic Turing machine with O(n f(n)) space and is invariant under permutations of the input characters.

Additionally, Chatzigiannakis et al. proved that if the memory size used by passively mobile machines is f(n) = o(log log n), then the computable predicates are exactly the semilinear predicates, just as in the classical population protocol model.

**Fault tolerance**

Fault tolerance, that is, the ability of a distributed system to compute a correct output despite faulty operation of some of its components, is a major topic of the literature on distributed systems, and it was only to be expected that population protocols would be studied to determine their fault tolerance properties. In general, faults are divided into several categories: crashes, in which an agent ceases to operate completely; transient failures, which corrupt the memory of a single agent, arbitrarily altering its current state; and Byzantine failures, in which an agent does not behave according to specification.

**Byzantine Agents**

The exact behaviour of a Byzantine agent may vary from simple unreliability of its state to, in the worst-case scenario, the malignant design of an agent actively trying to undermine the computation by any means possible (e.g. not interacting in accordance with the fairness property, giving false information, or not updating its state properly), possibly even with information about the entire system that a regular agent could not possibly have. With such vast possibilities for disrupting computation, it should come as no surprise that Byzantine agents can wreak havoc among simple interacting computing machines such as those found in population protocols. Indeed, Guerraoui and Ruppert [31] proved that in the presence of even a single Byzantine agent, only trivial predicates (that is, predicates that are universally true or universally false) can be computed. They proved, however, that the community protocol model, enhancing traditional population protocols with unique identifiers, allows for some level of resistance to Byzantine agents, as long as the Byzantine agents are not allowed to alter identifiers. In this case, community protocols can compute any decision problem in NSPACE(n log n) in the presence of f = O(1) Byzantine agents, given preconditions ensuring the output does not change if up to f of the input characters are changed and up to 3f + 1 of the input characters are deleted.

This means that, in addition to faking their own inputs (which cannot be prevented), f Byzantine agents can, in a sense, prevent up to 3f + 1 agents from being acknowledged by the rest of the population.

**Crashes and Transient Failures**

On the front of crash failures, in which an agent simply stops operating completely, Delporte-Gallet et al. [23] proved a result of a similar flavour: if a function f is computable in a failure-free environment, and if D is a subset of all possible input multisets such that removing up to c + t input symbols from any input multiset I ∈ D and then adding up to t arbitrary symbols does not change the value of f, then f can be computed in an environment tolerating up to c crashes and t transient failures, as long as the input I is in D and the population is sufficiently large.

In other words, c crashes and t transient failures can be tolerated without loss of computational power if the additional precondition is respected that removing c input symbols and altering t more does not change the output value. A better margin cannot be hoped for, since the c + t alterations (or removals) correspond to possible crashes and failures occurring before the computation has even started, in which case no recovery would be possible. Note that the method described in [23] requires the population to be larger than 2((|Y| + 2)(c + 2t) + 2)², where |Y| is the number of output symbols. An algorithm to turn any protocol into such a fault-tolerant protocol is given; the resulting protocol requires O(|Y|(c + 2t)) memory and is thus dependent on the fixed number of faults to be tolerated (transient failures being, in a sense, twice as bad as crashes).

#### Self Stabilization

The problem of making population protocols self-stabilizing, that is, of having a population protocol achieve a desired property even when starting from a bad configuration, has been investigated in several papers [27, 6]. In [6], Angluin et al. show how to make some protocols presented in [2] self-stabilizing, such as distance-2 colouring in a bounded-degree graph, ring orientation on a distance-2 coloured graph, or the computation of a spanning tree in a regular graph of bounded degree D.

The leader election protocol, so central to the computation of semilinear predicates in [2], is however proven impossible to achieve in a self-stabilizing fashion by population protocols on general graphs, but a protocol is given for the special case of odd-sized rings. Fischer and Jiang [27] proved that an eventual leader detector oracle (one that eventually informs the agents if no leader is present in the population) is sufficient to design a self-stabilizing leader election protocol on the complete graph or on the ring.

Finally, Beauquier et al. [12, 11] introduced the idea of having a base station with unlimited capacity, with which the agents may also interact, and used it to design self-stabilizing protocols. [12] shows that even with a base station, the problem of counting the number of (regular) agents is impossible to solve if the agents' memory size is smaller than the number of agents. In [11], they introduce the additional assumption of the existence of cover times for each agent, that is, a bound on the time a given agent may have to wait to be sure to have met all other agents. Such cover times are used to model differences in speed or mobility patterns among the agents in the population. Agents are unaware of their cover times but are able, when meeting, to determine which of them is the faster agent (i.e. the agent with the lower cover time). Using the base station to start the computation and, subsequently, to initiate non-overlapping phases in which the protocol is repeatedly executed, Beauquier et al. show how to construct a self-stabilizing protocol from any non-self-stabilizing one. It is also shown that this non-overlapping repeated execution would be impossible without the base station. In [10], Beauquier et al. show how the base station and cover times model can be used to solve the gathering problem, in which input values distributed in the population must be transferred to the base station exactly once.

**Table of contents**

**Introduction [FR] **

Contexte et Motivations

**Contributions **

**Introduction [EN] **

Context and Motivations

Contributions

**1 Population Protocols **

1.1 Population protocols and their computational power

1.1.1 Definitions

1.1.2 Computational power

1.2 Random Scheduler

1.3 Restricted Interactions

1.3.1 One-way interaction

1.3.2 Restricted communication graphs

1.4 Enhanced individual computational power

1.4.1 Unique identifiers

1.4.2 Passively mobile machines

1.5 Fault tolerance

1.5.1 Byzantine Agents

1.5.2 Crashes and Transient Failures

1.6 Self Stabilization

**I Games and Population Protocols **

**2 Pavlovian Population Protocols **

2.1 Elementary Game Theory

2.2 From Games To Population Protocols

2.3 Main Result

2.4 Threshold Predicates

2.5 Modulo Counting

2.6 Conclusion

**3 Symmetric Games and protocols **

3.1 Symmetric population protocols

3.2 Some simple exclusive Pavlovian protocols

3.2.1 Counting up to 3 exclusively

3.3 Some non-trivial Exclusive Pavlovian Protocols

3.3.1 Leader Election

3.3.2 Majority

3.3.3 Counting up to 2k

3.4 Conclusion

**II Computing with Large Population Protocols **

**4 Large Population Protocols **

4.1 An illustrative example

4.1.1 A General Theorem about Approximation of Diffusions

4.1.2 Proving convergence of our example

4.1.3 Giving a better Asymptotic Development of p(k)

4.2 General Framework

4.3 Computing with LPPs

4.4 Conclusion

**5 The computational power of LPPs **

5.1 Any computable number is algebraic

5.2 Computing Algebraic Numbers

5.2.1 Computing Rationals

5.2.2 Derandomization

5.2.3 Constructing Equilibria

5.2.4 Enforcing Stability

5.3 Conclusion

**Conclusion and Perspectives **

Conclusion

Perspectives