In 2012, Das et al. [30, 31] showed that some problems can be solved by a LUMINOUS SSYNC robot, but not by an oblivious FSYNC robot. However, it remained an open question whether or not a LUMINOUS SSYNC robot could perform all the tasks an oblivious FSYNC robot could. If this were true, then any problem could simply be studied under the FSYNC model, and then solved under the LUMINOUS ASYNC model.
Theorem 2.1. There exist problems that can be solved under the oblivious FSYNC model and cannot be solved under the LUMINOUS SSYNC model.
Proof. First, we look at the fundamental differences between the FSYNC and SSYNC models.
Property 2.1 (Identical Inputs). Throughout the execution, the sequences of perceived positions and colors of the network are identical for all robots, with respect to their own coordinate systems.
Property 2.2 (Simultaneous Action). Given a configuration C, all robots in C are in the same phase.
Lemma 2.1. Under the full visibility model, properties 2.1 and 2.2 are true for the FSYNC oblivious model, and false for the SSYNC LUMINOUS model.
Proof. By definition of the FSYNC model, robots’ phases are always simultaneous, so properties 2.1 and 2.2 are trivially true.
To show that property 2.1 is false for the SSYNC LUMINOUS model, let us simply consider a configuration in which at least one robot r1 can change its state, i.e. move or change color, at its next activation. Let the scheduler have robot r1 perform a full cycle and not activate any other robot in the network. Let the scheduler now activate another robot r2 to perform a full cycle. Since robot r1 changed its state, the sequence of snapshots of r2 does not include the same snapshots as that of r1. So, as long as the algorithm allows for a change of state, property 2.1 is false.
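The adversarial schedule above can be sketched as a short simulation. This is a minimal illustration, not from the thesis; the class and method names (Robot, snapshot, full_cycle) are assumptions, and any state change (here, a color flip and a move) suffices for the argument.

```python
# Sketch of the SSYNC counterexample: r1 completes a full cycle before r2
# is ever activated, so their recorded snapshot sequences differ.

class Robot:
    def __init__(self, name, pos, color):
        self.name, self.pos, self.color = name, pos, color
        self.history = []  # sequence of perceived network states

    def snapshot(self, network):
        # LOOK phase: perceive positions and colors of all robots
        return tuple((r.pos, r.color) for r in network)

    def full_cycle(self, network):
        # LOOK, COMPUTE, MOVE performed atomically (SSYNC)
        self.history.append(self.snapshot(network))
        self.color = "BLACK"  # any state change suffices for the argument
        self.pos += 1

r1, r2 = Robot("r1", 0, "WHITE"), Robot("r2", 10, "WHITE")
network = [r1, r2]

r1.full_cycle(network)  # the scheduler activates only r1
r2.full_cycle(network)  # r2 now observes the changed state of r1

# r2's first snapshot differs from r1's first: Property 2.1 fails.
assert r1.history[0] != r2.history[0]
```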
Property 2.2 is similarly false for the SSYNC LUMINOUS model whenever the scheduler only activates a subset of robots.
Let us now build a problem that cannot be solved under the LUMINOUS SSYNC model because of property 2.2.
Let us assume a network of three robots with rigid motion. The robot with the smallest angle between the two other robots is the LEADER. The unit vector is defined as the altitude from the opposite base to the LEADER. We define the base opposite to the LEADER as having a y coordinate of 0. So the LEADER has a y coordinate of 1, following the unit vector. The problem is solved if:
1. The y coordinates of NON-LEADER robots are always identical.
2. All robots are eventually located at coordinates y = 1.
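The LEADER definition above can be made concrete with a small geometric sketch. This is an illustration under stated assumptions, not code from the thesis: the helper names (angle_at, leader) are ours, and we only compute which robot subtends the smallest angle.

```python
# Electing the LEADER as the robot whose angle, formed by the two other
# robots, is smallest. Positions are arbitrary 2D points.
import math

def angle_at(p, q, r):
    # angle at vertex p formed by segments p-q and p-r
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (r[0] - p[0], r[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

def leader(pts):
    # index of the robot with the smallest angle between the two others
    angles = [angle_at(pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3])
              for i in range(3)]
    return min(range(3), key=angles.__getitem__)

# A tall isoceles triangle: the apex sees the smallest angle, so it leads;
# the base opposite to it then defines y = 0 and the altitude the unit vector.
pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 5.0)]
assert leader(pts) == 2
```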

Our Algorithm: 2-color Rendezvous

We observe that in the aforementioned problematic execution, there is an instant when both robots are actually gathered, but are later separated because of pending moves. We thus introduce a behavior change in the WHITE state of Viglietta's [84] algorithm to obtain our proposal, presented in figure 3.2 and algorithm 3.2. Our proposal breaks the infinite loop in the problematic execution, as it prevents robot r1 from switching to color BLACK after reaching r2 and forces it to remain WHITE. This implies that activating r2 afterwards actually separates the robots into different colors, and prevents them from going back to both being BLACK.
Let us observe that our new algorithm no longer belongs to class L, since the same observed 2-tuple of colors may yield different outcomes depending on the distance between r1 and r2. In particular, when both robots are observed WHITE, the next color depends on whether the two robots are gathered. So, the assumption of Viglietta [84] that a new color is solely determined by the observed colors no longer holds.
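The departure from class L can be sketched as a tiny transition function. This is an illustrative fragment under our own naming assumptions (next_color, gathered), covering only the WHITE/WHITE case discussed above; the remaining rules of the algorithm are omitted.

```python
# Why the modified algorithm leaves class L: the next color of a WHITE robot
# observing a WHITE peer depends on whether the robots are gathered, i.e. on
# the distance, not on the observed pair of colors alone.

def next_color(my_color, other_color, gathered):
    if my_color == "WHITE" and other_color == "WHITE":
        # behavior change: remain WHITE once gathered, so that a later
        # activation of the peer separates the two robots into colors
        return "WHITE" if gathered else "BLACK"
    return my_color  # other rules unchanged (omitted in this sketch)

# Same observed 2-tuple of colors, two different outcomes:
assert next_color("WHITE", "WHITE", gathered=True) == "WHITE"
assert next_color("WHITE", "WHITE", gathered=False) == "BLACK"
```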

From the System Model to the Verification Model

Implementing the system model we consider as a verification model that can be checked by a model-checker is difficult, as some elements are continuous (the positions of both robots, the pending moves of both robots). This section is dedicated to proving that those problematic elements can be discretized in a way that enables mechanized verification.

Simple vs. Complete Self-Stabilization

This subsection is dedicated to proving that pending moves and pending colors can be removed from the verification model in the case of self-stabilizing Rendezvous algorithms. This is true for all self-stabilizing algorithms under the FSYNC, Centralized, and SSYNC schedulers (lemma 4.2), and true for specific self-stabilizing algorithms under the ASYNC scheduler (theorem 4.2).
Lemma 4.1. Any completely self-stabilizing algorithm is also simply self-stabilizing.
Proof. Since the set of initial configurations allowed for simple self-stabilization is a subset of the one allowed for complete self-stabilization, if all complete initial configurations lead to a successful execution, then so do all executions starting from simple initial configurations.
We now want to prove that every complete execution is simple-reachable. If this is the case, then any complete execution eventually has a common suffix with a simple execution, and we only need to verify simple initial configurations to verify eventual gathering starting from a complete initial configuration.
Lemma 4.2. Under the FSYNC, Centralized and SSYNC schedulers, any simply self-stabilizing rendezvous algorithm is also completely self-stabilizing.
Proof. Under the FSYNC, Centralized and SSYNC schedulers, any complete initial configuration becomes a simple initial configuration after all robots finish their current cycle. Intuitively, it seems that this also holds for the case of ASYNC algorithms: since both robots are oblivious, with the exception of color, it seems logical that the system eventually "forgets" its initial configuration, and becomes reachable from a simple initial configuration. Surprisingly, we show that it is possible, for a well-chosen algorithm, ASYNC scheduling, and complete initial configuration, to create an infinite execution that never becomes simple-reachable.
Theorem 4.1. There exist algorithms for which, under the ASYNC scheduler, there exist complete executions which are not simple-reachable.
For a well-chosen algorithm, scheduler, and complete initial configuration, it is possible for the system to exhibit an emergent property of memory: the current configuration can depend on the initial configuration indefinitely. We prove this theorem for both oblivious and LUMINOUS robots.

Self-Stabilization and Rigidity

The non-rigid assumption is another source of a continuous variable in the model: when a robot targets a point at some distance d > δ > 0, the scheduler may stop the robot anywhere between δ and d. In this section, we explore under which circumstances we can restrict the verification model to rigid moves only without losing generality, and show that completely self-stabilizing Rendezvous algorithms satisfy the condition (theorem 4.4). Using criteria such as (complete) self-stabilization and rigidity, we can define four different settings for Rendezvous algorithms according to the combination of {rigid; non-rigid} and {self-stabilizing; non self-stabilizing}. Studying the literature on Rendezvous algorithms, we were not able to find examples of self-stabilizing algorithms requiring rigid moves that failed with non-rigid moves. We now prove that, in fact, such algorithms cannot exist.
Lemma 4.3. The three types of motion stay put (STAY), move to the midpoint (M2H), and move to the other robot (M2O) are necessary and sufficient to achieve Rendezvous in SSYNC and ASYNC.
Proof. First, these three motions are obviously sufficient since they are the only ones used by both Heriban et al. [53] and Viglietta [84] with two and three colors, respectively. Next, we prove that it is necessary to use all of these motions to achieve Rendezvous. Consider the case where both robots are at distinct positions, anonymous and have the same color. As proven by Viglietta [84] in proposition 4.1: « We may assume that both robots get isometric snapshots at each cycle, so they both turn the same colors, and compute destination points that are symmetric with respect to their midpoint. If they never compute the midpoint and their execution is rigid and fully synchronous, they never gather. » Therefore, Move to Half is necessary.
Similarly, consider now a case where snapshots are not isometric because of different colors. Let us assume that their algorithm makes them target any point between them, but not their own positions.
Because of the case where both robots are activated at the same time, their targets need to be identical for them to gather. We model the target of robot r1 as the point at distance D/x from r1, and the target of robot r2 as the point at distance D(x − 1)/x from r2, with D the distance between r1 and r2 and x a positive real number.
However, if r1 and r2 are activated sequentially for a full cycle and x ≠ 1, then r1 and r2 have different targets. As long as no motion where x = 1 exists, no Rendezvous can happen if r1 and r2 are separated. When x = 1, r1 uses Move to Other and r2 uses Stay, and Rendezvous can be achieved. Therefore, M2O and STAY are both necessary.
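The coincidence of the two targets under simultaneous activation can be checked numerically. This is a sketch under the parameterization discussed above (r1 targets the point at distance D/x, r2 the point at distance D(x − 1)/x); the function name targets is our own.

```python
# Numeric check: with both targets on the segment r1-r2, the point at
# distance D/x from r1 and the point at distance D(x-1)/x from r2 coincide.

def targets(p1, p2, x):
    D = p2 - p1                  # signed distance from r1 to r2
    t1 = p1 + D / x              # r1 moves D/x toward r2
    t2 = p2 - D * (x - 1) / x    # r2 moves D(x-1)/x toward r1
    return t1, t2

p1, p2 = 0.0, 8.0
for x in (1.0, 2.0, 4.0):
    t1, t2 = targets(p1, p2, x)
    assert t1 == t2              # simultaneous activation: common target

# x = 1 is the Move-to-Other / Stay pair: r1 reaches r2, r2 does not move.
t1, t2 = targets(p1, p2, 1.0)
assert t1 == p2 and t2 == p2
```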
Theorem 4.4. Any completely self-stabilizing algorithm that achieves Rendezvous assuming rigid moves also achieves it assuming non-rigid moves.
Proof. Because lemma 4.3 shows STAY, M2H and M2O are necessary and sufficient, we now assume that all ASYNC Rendezvous algorithms use only these three types of motion.
• Stay (STAY).
• Move to the midpoint (M2H).
• Move to the other robot (M2O).


Table of contents:

1 Introduction: Networks of Realistic Robots 
1.1 Distributed Robotics
1.2 Motivation
1.3 The OBLOT Model
1.4 More Realistic Mobile Robots
1.4.1 Sensors
1.4.2 Transparency and Size
1.4.3 Environment
1.4.4 Memory and Communication
1.4.5 Synchronicity
1.4.6 Fairness and Boundedness
1.4.7 Rigid Motion
1.4.8 Faults
1.5 A Realistic Example: Collision Avoiding Blind Robots
1.6 Our Contributions
1.6.1 Published Work
I The Power of Lights 
2 The LUMINOUS Model 
3 Benchmark: Two-Robot Gathering 
3.1 2-color Impossibility ?
3.2 Our Algorithm: 2-color Rendezvous
4 Model Checking Rendezvous Algorithms 
4.1 System Model
4.1.1 Configurations and Executions
4.1.2 Self-Stabilization
4.2 From the System Model to the Verification Model
4.2.1 Simple vs. Complete Self-Stabilization
4.2.2 Self-Stabilization and Rigidity
4.2.3 Proving Rendezvous Algorithms
4.3 Verification Model
4.3.1 Position
4.3.2 Activation and Synchrony
4.3.3 Movement Resolution
4.3.4 State Variables
4.3.5 Activation Phases
4.3.6 The Case of Non-Rigid, Non-Self-Stabilizing Algorithms
4.4 Checking Rendezvous Algorithms
4.4.1 Verified Algorithms
4.4.2 Verification by Model Checking
4.4.3 Performance
4.5 Investigating Lights with Weaker Consistency Guarantees
5 Safe and Unbiased Leader Election with Lights 
5.1 Details of the Model
5.2 Problem Definition
5.3 Leader Election Based on Motion
5.4 Leader Election Based on Lights
5.5 Safe Leader Election
5.6 Unbiased Leader Election
5.7 Safe Unbiased Leader Election
Conclusion: The Power of Lights
II Unreliable Vision 
6 Uncertain Visibility 
6.1 Model Definition and Basic Results
6.2 FSYNC n robots Gathering
6.3 FSYNC Uniform Circle Formation
6.4 FSYNC Leader election
6.5 FSYNC LUMINOUS Rendezvous
7 Obstructed Visibility 
7.1 Model and Problem Definition
7.2 Simplifying the Problem: Line Theorem
7.3 Obstruction Detection for the Line Configuration
7.4 Non-Line Obstruction Detection: a Simple Approach
7.5 Non-Line Obstruction Detection: Using a Token
7.5.1 Difficulty of Creating a Token with Obstructed Visibility
7.5.2 Algorithm Architecture
7.5.3 A Possible Solution
7.5.4 Gathering Information and Transmitting the Token
7.5.5 The Issue of Proving Obstructed Algorithms
7.5.6 Sidenote: Ensuring Token Unicity for a Line
Conclusion: Unreliable Vision
III Real World Performance 
8 Monte-Carlo Simulation of Mobile Robots 
8.1 Motivation
8.2 Overview of the Framework
8.3 Scheduling
8.4 Simulation Conditions
8.5 Existing Simulators
8.6 Limitations of the Simulation
8.6.1 Halting the Simulation: Victory and Defeat Conditions
8.6.2 The Consequences of the Discretized Euclidean Plane
9 Fuel Efficiency in the Usual Settings 
9.1 Rendezvous Algorithms
9.2 Convergence For n Robots
10 Analyzing Algorithms in Realistic Settings 
10.1 Visibility Sensor Errors
10.2 Convergence for n=2 Robots
10.3 Compass Errors
10.4 Geoleader Election
10.5 Errors in Color Perception
11 Improved Convergence and Leader Election 
11.1 Fuel Efficient Convergence
11.2 Error Resilient Geoleader Election
11.2.1 Geoleader Election for Four Robots
11.2.2 Proposed Algorithm
Conclusion: Real World Performance
12 Conclusion: Networks of Realistic Robots 
12.1 Our contributions
12.1.1 Published Work
12.2 Short-Term Perspectives
12.2.1 Analyzing More Models and Algorithms
12.2.2 Gathering of n Robots Using Two Colors
12.3 Long-Term Perspectives
12.3.1 A Proven Simulator
12.3.2 Stronger Simulator Adversaries
12.3.3 Obstruction Detection
12.3.4 Expanding Uncertain Visibility
12.3.5 Robots with Finite Memory Snapshots
A Appendix: Details and Results of the Model Checker 
A.1 Movement Resolution
A.2 Verified Algorithms Written in Promela
A.3 Compile Options
A.4 Output
A.4.1 Vig2Cols in ASYNC (failure)
A.4.2 Her2Cols in ASYNC (Success)
B Appendix: Example of an Instance of the Simulator 
C Appendix: Details of Color Perception Error
List of Acronyms

