
## The maximum value diffusion seen as a Rumor Spreading problem

Rumor spreading is the problem of spreading a particular piece of local information through a network.

It was introduced in [10] in the context of replicated databases (name servers, yellow pages, etc.) where updates are made locally (on a single database) and then have to be propagated to all other sites. This seminal paper introduced several very useful concepts:

• rumor mongering – the idea that not all updates are of equal importance to the network, so that only the most recent ones may be propagated. This asymmetry is precisely what we encounter when trying to spread the maximal initial value through a network.

• push transmission – information goes from the caller to the called agent. This transmission setup is responsible for the term epidemic algorithms, originally coined by Demers et al. [10].

• pull transmission – information goes from the called agent to the caller.

The push and pull transmission methods are often combined to form push&pull algorithms.
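The three transmission modes can be sketched in a simple synchronous round model (a minimal illustration: the graph, node names, and round structure below are assumptions for the sketch, not the setup of any cited paper):

```python
import random

def push_pull_round(neighbors, informed, rng):
    """One synchronous round of push&pull rumor spreading: every node
    calls one uniformly random neighbor; informed callers push the
    rumor, uninformed callers pull it if the callee is informed."""
    updated = set(informed)
    for node in neighbors:
        partner = rng.choice(neighbors[node])
        if node in informed:        # push: caller -> callee
            updated.add(partner)
        elif partner in informed:   # pull: callee -> caller
            updated.add(node)
    return updated

# Spread a rumor from node 0 over a ring of 8 nodes.
n = 8
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(1)
informed, rounds = {0}, 0
while len(informed) < n:
    informed = push_pull_round(ring, informed, rng)
    rounds += 1
```

Dropping the `elif` branch gives a pure push algorithm, and dropping the `if` branch a pure pull algorithm.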

These algorithms are based on pairwise exchanges between neighbors in the network, which recalls the Random Gossip algorithm [11] introduced for average value computation in an asynchronous context.
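For comparison, the pairwise exchange at the heart of Random Gossip [11] can be sketched as follows (asynchronous wake-ups are emulated by drawing one node at a time; the graph and parameters are illustrative):

```python
import random

def random_gossip(values, neighbors, num_wakeups, seed=0):
    """Asynchronous Random Gossip for averaging: at each wake-up, one
    random node averages its value with a randomly chosen neighbor."""
    rng = random.Random(seed)
    x = dict(values)
    nodes = list(x)
    for _ in range(num_wakeups):
        i = rng.choice(nodes)
        j = rng.choice(neighbors[i])
        # The pairwise average leaves the global sum unchanged.
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Line graph 0-1-2-3; only node 0 starts with a non-zero measurement.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
out = random_gossip({0: 4.0, 1: 0.0, 2: 0.0, 3: 0.0}, line, 200)
```

Since every update replaces two values by their mean, the sum (hence the network average, here 1.0) is invariant at every step; this sum preservation is exactly what the doubly stochastic updates of later chapters formalize.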


Due to i) the wired medium in the original works, which implies that an agent may only call (or be called by) its neighbors one at a time, and ii) the possible congestion and collisions in the network, rumor spreading algorithms are randomized by nature.

#### Rumor spreading algorithms

In terms of communication between agents, most papers dealing with rumor spreading consider push and/or pull transmission methods [12, 13, 14, 15, 16, 17, 18, 19], as they are better suited to wired networks. Some papers [20, 21, 22, 23, 24] consider a radio medium and hence propose broadcast-based algorithms.

Randomized rumor spreading may seem very close to our problem, but the two problems differ on two major points, summarized in Table 2.1. On the one hand, even though the communications of rumor spreading are randomized, many nodes may communicate simultaneously, which leads to collisions. In the WSN setup, communications are asynchronous, so collisions are avoided by construction and only one sensor speaks at a time. On the other hand, in the rumor spreading problem, the nodes know whether they hold the desired rumor and hence can act accordingly. In the maximal value problem, no sensor can know whether it holds the maximum value, so the time spent by each (potentially useless) communication has to be taken into account. These two differences make our problem quite different from classical rumor spreading; in particular, the fact that the sensors cannot know whether they hold the maximal value is a key difference, as it changes the algorithms as well as the proof techniques and calls for different mathematical tools in the analysis.
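To make the second contrast concrete, here is a sketch of an asynchronous pairwise max exchange in the spirit of the Random-Pairwise-Max algorithm analyzed later in this chapter (the graph, wake-up budget, and function names are illustrative assumptions, not the thesis's exact formulation). Note the absence of any stopping test: since no sensor can detect locally that it holds the maximum, the algorithm simply runs for a prescribed number of wake-ups.

```python
import random

def random_pairwise_max(values, neighbors, num_wakeups, seed=0):
    """At each wake-up, one random node exchanges values with a random
    neighbor and both keep the larger one.  There is no local stopping
    rule: a node cannot know whether it already holds the global max."""
    rng = random.Random(seed)
    x = dict(values)
    nodes = list(x)
    for _ in range(num_wakeups):
        i = rng.choice(nodes)
        j = rng.choice(neighbors[i])
        x[i] = x[j] = max(x[i], x[j])
    return x

# Line graph 0-1-2-3; the maximum (9.0) starts at node 1.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
init = {0: 2.0, 1: 9.0, 2: 1.0, 3: 5.0}
out = random_pairwise_max(init, line, 200)
```

Every exchange can only increase a node's value, so the question studied later is not *whether* the maximum spreads but *how long* it takes, i.e. the convergence time.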

**Case of non-doubly-stochastic matrices**

First of all, we will see that the double stochasticity of the updates requires feedback. Therefore, when feedback-free algorithms are of interest, the update matrices will be either row-stochastic or column-stochastic, but not both simultaneously. Hereafter, we inspect existing results under this less restrictive assumption.
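The role of the column sums can be seen on a toy example (the matrices and values below are illustrative): a row-stochastic update still drives the network to a consensus, but only a doubly stochastic one preserves the sum, hence the average.

```python
def apply_matrix(W, x):
    """One synchronous update x <- W x (plain Python, no dependencies)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

x0 = [1.0, 3.0, 8.0]  # initial values; average = 4.0, sum = 12.0

# Doubly stochastic: every row AND every column sums to 1,
# so each update preserves the sum (hence the average).
W_ds = [[0.50, 0.25, 0.25],
        [0.25, 0.50, 0.25],
        [0.25, 0.25, 0.50]]

# Row-stochastic only: rows sum to 1 but columns sum to (1.5, 1.0, 0.5),
# so the sum drifts after a single update.
W_rs = [[1.0, 0.0, 0.0],
        [0.5, 0.5, 0.0],
        [0.0, 0.5, 0.5]]

x_ds = apply_matrix(W_ds, x0)  # sum stays 12.0
x_rs = apply_matrix(W_rs, x0)  # sum becomes 8.5

# Iterating the row-stochastic update still reaches a consensus,
# but on node 0's value (1.0), not on the initial average (4.0).
x = x0
for _ in range(60):
    x = apply_matrix(W_rs, x)
```

This is why feedback-free algorithms with merely row- or column-stochastic updates need extra machinery (such as the Sum-Weight framework of Chapter 3) to recover the true average.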

**Table of contents :**

LIST OF ACRONYMS

GENERAL RULES FOR NOTATION

INTRODUCTION

**1 GENERAL MODEL AND TECHNICAL PRELIMINARIES**

1.1 Network Model

1.1.1 Graph-based network

1.1.2 Node activations

1.1.3 Node measurements

1.2 Non-negative matrices

1.2.1 Definitions

1.2.2 Irreducible non-negative matrices

1.2.3 Primitive non-negative matrices

1.2.4 Stochastic matrices

1.2.5 Stochastic matrices and Markov Chain

1.3 Norms

**2 MAXIMAL VALUE SPREADING**

2.1 Introduction

2.2 The maximum value diffusion seen as a Rumor Spreading problem

2.2.1 The rumor spreading problem

2.2.2 Rumor spreading algorithms

2.2.3 Maximum value gossipping algorithms

2.3 Setup

2.3.1 Precisions about the setup

2.3.2 The convergence time

2.4 Performance Analysis

2.4.1 Random-Walk

2.4.2 Random-Pairwise-Max

2.4.3 Random-Broadcast-Max

2.5 Numerical illustrations

2.5.1 About Random Geometric Graphs (RGGs)

2.5.2 About the tightness of the derived bounds

2.6 Conclusion

**3 DISTRIBUTED AVERAGE CONSENSUS**

3.1 Introduction

3.2 Standard (single-variate) framework

3.2.1 Model

3.2.2 Case of doubly-stochastic matrices

3.2.3 Case of non-doubly-stochastic matrices

3.3 Sum-Weight framework

3.4 Convergence of Sum-Weight-based averaging algorithms

3.4.1 Preliminary results

3.4.2 Analysis of 1(t)

3.4.3 Analysis of 2(t)

3.4.4 Final results

3.5 Proposed algorithm and extensions

3.5.1 Broadcast Weighted Gossip (BWGossip) algorithm

3.5.2 Performance of the BWGossip

3.5.3 Adaptation to smart clock management

3.5.4 Distributed estimation of the sum

3.5.5 Convergence with i.i.d. failures in the communication graph

3.6 Comparison with existing works

3.6.1 Comparison with Kempe’s algorithm

3.6.2 Comparison with Bénézit’s algorithm

3.6.3 Comparison with the single-variate algorithms

3.7 An application of averaging algorithms to cognitive radio

3.7.1 The problem of distributed spectrum sensing in cognitive radio networks

3.7.2 Model

3.7.3 Review on centralized cooperative spectrum sensing

3.7.4 Fully distributed spectrum sensing algorithms

3.7.5 Numerical illustrations

3.8 Conclusion

**4 DISTRIBUTED OPTIMIZATION**

4.1 Introduction

4.2 First order methods

4.2.1 Model

4.2.2 Convergence of the Synchronous Distributed gradient algorithm

4.2.3 Convergence of the Asynchronous distributed gradient algorithm

4.2.4 Extensions

4.3 Distributed Optimization with the ADMM

4.3.1 Proximal methods

4.3.2 Constrained problems and Method of Lagrange multipliers

4.3.3 ADMM

4.3.4 Distributed optimization using the ADMM

4.4 Review on Monotone operators

4.4.1 Monotone operators

4.4.2 The resolvent and the proximal point algorithm

4.4.3 From the proximal point algorithm to the ADMM

4.5 Asynchronous Distributed Optimization using random ADMM

4.5.1 Motivation

4.5.2 Subgraphs and block-variables

4.5.3 Random Gauss-Seidel iterations on the proximal point algorithm

4.5.4 Asynchronous Distributed Optimization with the ADMM

4.6 Numerical Illustrations

4.7 Conclusion

**CONCLUSION AND PERSPECTIVES**

APPENDICES

**APPENDIX A PROOFS RELATED TO CHAPTER 2**

A.1 Proof of Eq. (2.3)

A.2 Proof of Theorem

A.3 Proof of Theorem

A.4 Proof of Theorem

A.5 Proof of Theorem

APPENDIX B PROOFS RELATED TO CHAPTER 3

B.1 Derivations for Eq. (3.38)

B.2 Derivations for Eq. (3.40)

B.3 Computations related to Section 3.7.3-a

RÉSUMÉ EN FRANÇAIS

**BIBLIOGRAPHY**