Computing realistic and adaptive capacity of D2D contacts 


Spatial centrality (Chapter 2)

First, we take a look at contact graphs from a theoretical standpoint, to familiarize ourselves with the concept and get a broad idea of the recurrent issues found in the literature. In doing so, we notably observe the high computational complexity of commonly used graph metrics. These metrics typically help understand the global (or local) importance of a node within a graph structure. This "importance" may be defined in many different ways, but the core idea is generally the same: the importance depends on how centrally a node is located with respect to the rest of the graph. This metric is thus commonly referred to as centrality. When considering static graphs, even large ones, approximating the centrality values of nodes is considered a viable option, and numerous approximation algorithms have been proposed by the community. However, in a D2D scenario, since the nodes are highly mobile, the centrality of nodes is likely to vary accordingly.
Since the network continually evolves, computational time becomes a critical feature. The central question we address is: is it possible to compute the centrality of nodes at the pace of the topological changes? This question spawns others, such as how frequent the changes must be before computation becomes unfeasible, or whether approximation strategies are accurate enough to capture topological changes.
Let us hypothesize that approximation algorithms converge fast enough and accurately enough to find the most important individual in the entire D2D network, at all times. From a purely theoretical standpoint, this would solve our issue, but in reality, several factors weigh in. First, in a D2D context, the construction of the graph is not instantaneous, as it requires numerous message exchanges between all of the nodes of the actual graph. Even in a scenario where base stations could roughly assume that all nodes within their cellular reach can communicate together, thus simplifying the contact graph construction process, there would still be a non-negligible latency issue. Indeed, since contact graphs may span several base stations, maintaining up-to-date graphs requires inter-BS communications and therefore incurs latency. Second, from a sustainability as well as an economic perspective, the computational cost of maintaining up-to-date centralities, possibly at all times, may far exceed its benefits.
Instead of investigating contact graphs in the traditional way mentioned above, we decided to bypass these questions entirely by taking an original approach that, to the best of our knowledge, has never been researched in the literature. In the D2D context of our study, nodes (i.e., mobile users) are in fact individuals carrying devices and roaming through space. The movements of users in space are arguably never random: not only must they have a destination, but their movements are also restricted by the underlying topography (e.g., a pedestrian must walk on sidewalks, a car must follow roads).
Therefore, the centrality of users could be directly correlated with their geographical location. We seek to use this intuitive observation to estimate the centralities of nodes in contact graphs. Given a reasonable learning period, and the fact that day-to-day mobility does not change significantly, we could approximate the centrality of a node by merely looking up its position in space, thus mitigating the perpetual computation issue, the complexity issue, and the global network knowledge issue.

Our proposal: Geo-centrality

We are looking for a method to estimate the centrality of mobile nodes in a contact graph. With this method, we aim to provide an estimate reasonably close to the node's real centrality without requiring as many calculations. We assign, based on the day of the week and the time of day, a centrality value to an area in space, rather than to a node.
Therefore, when we want to know the centrality of a node at a given moment, we refer to its position in space and take into consideration the time of day as well as whether it is a weekday or a holiday; other features could be considered as well. We exploit the fact that human mobility patterns are redundant depending on the type of day [41], so that our results generalize to days of the same type. We first calculate a centrality map, a database that lists the centralities by point in space and by time of day. This section explains our method for the centrality map calculation. Once a centrality map is obtained, using it to estimate a node's centrality is as easy and fast as a database lookup.
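To make the lookup idea concrete, here is a minimal sketch in Python. The map structure and all names (`centrality_map`, `estimate_centrality`, the key layout) are hypothetical illustrations, not the actual implementation: the point is only that, once the map is built, estimation reduces to a constant-time dictionary lookup.

```python
# Hypothetical sketch: a centrality map keyed by (day type, time slot,
# square), so estimating a node's centrality is a constant-time lookup.
centrality_map = {
    ("weekday", "08:00", (3, 4)): 1,   # top-20% quantile in this square
    ("weekday", "12:00", (3, 4)): 3,
    ("holiday", "08:00", (3, 4)): 5,   # bottom-20% quantile
}

def estimate_centrality(day_type, time_slot, square):
    """Look up the geo-centrality of the square a node falls in.

    Returns None when the map holds no value for that key (e.g., the
    square was empty during the learning period).
    """
    return centrality_map.get((day_type, time_slot, square))

print(estimate_centrality("weekday", "08:00", (3, 4)))  # -> 1
```

The choice of a plain dictionary is only for illustration; any key-value store indexed by day type, time slot, and square position would serve the same purpose.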

Centrality map calculation

Let us rely on Figure 2.3 to explain our method to calculate the geo-centrality (geo-c) and, by extension, the centrality map, the database storing the centrality estimations of all geographical positions.
On the left side of the figure, inside the dashed frame, we have:
• The temporal window W, which represents the temporal granularity of the centrality map.
• W is divided into 4 timestamps, noted t1 to t4.
• A geographic area, represented at each instant t, divided into 9 squares.
• Up to 6 nodes, noted n1 to n6, living in the geographic area. These nodes may join and leave the area during W.
• A red square, noted S, for which we compute the geo-centrality for time window W.
• γ(t, n), our estimated centrality value for node n at time t.
We explain in this section how the geo-centrality of S is computed. The same method will be applied for all the squares of the considered area to obtain the complete centrality map.
Consider that we have a function γ, explained in detail in the following subsection, which computes the estimated centrality value of node n at instant t. Applied to the topology represented in Figure 2.3, the γ function gives the following results:
1. At t1: γ(t1, n1) = 1.
2. At t2: γ(t2, n1) = 1, γ(t2, n2) = 1.
3. At t3: γ(t3, n3) = 1, γ(t3, n6) = 5.
4. At t4: γ(t4, n5) = 5.
These estimated centrality values of nodes living in square S during period W, including the four instants t1 to t4, are used to populate the array γ(W, S), which represents the estimated centrality value of S during W. In our example, this array equals [1, 1, 1, 1, 5, 5] and represents the γ-values of square S during window W. This array gives the histogram of γ for S during W (right-hand side of Figure 2.3).
This histogram allows deducing the geo-c of S during W as the γ value with the highest probability, 1 in the example depicted here.
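This histogram step can be sketched in a few lines of Python. The helper name `geo_c` is hypothetical; the γ value with the highest probability is simply the mode of the array, taken here with `collections.Counter`:

```python
from collections import Counter

def geo_c(gamma_values):
    """Geo-centrality of a square over a window W: the gamma value with
    the highest frequency in the histogram (i.e., the mode)."""
    if not gamma_values:
        return None  # the square saw no node during W
    return Counter(gamma_values).most_common(1)[0][0]

# The example array of Figure 2.3: gamma-values observed in S during W.
print(geo_c([1, 1, 1, 1, 5, 5]))  # -> 1
```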

Calculation of γ

Our approach is centrality-agnostic but, for the sake of illustration, we consider the betweenness centrality in Figure 2.3. We evaluate, however, different centrality metrics in the evaluation section 2.5.
In dynamic graphs, the notion of high or low centrality values quickly becomes blurred because of the constant topological changes and the fluctuating number of nodes. This makes absolute centrality values difficult to interpret [60]. A better approach is to rely on the relative centrality of the nodes by ranking them [16] in decreasing order of centrality. Nevertheless, because such a ranking is still too unstable, we go a step further and consider the quantiles of ranks. We refer to this quantile rank as γ(t, n).
Computing square S's geo-centrality during window W, noted geo-c(W, S), requires obtaining the γ(t, n) of all nodes n inside square S. A value of γ is always calculated on the nodes present inside S at a specific snapshot t ∈ W. To clarify how to obtain γ(t, n) for all nodes n inside S at t, we go through the following steps:
1. We compute the centrality (betweenness centrality in this figure) of all nodes at the specific snapshot t ∈ W. In Figure 2.3, at instant t1, the centrality of node n1 is equal to 23 while the centralities of n2, n3, and n4 are equal to 0. If the square is empty of nodes at snapshot t, we do not compute any γ.
2. We rank all nodes based on their computed centralities. In the considered figure, at time t1, node n1 has the highest centrality and is thus at rank 1; n2, n3, and n4 have the same centrality value (0) and therefore share rank 2.
3. We segment the ranks into N quantiles, with each quantile holding 100/N percent of the ranks. Without loss of generality, we fix N = 5. Hence, we have γ : X ↦ {1, 2, 3, 4, 5}, where the first quantile corresponds to the top 20% of the ranks, the second quantile corresponds to the 20–40% range, and so on. In Figure 2.3, at instant t1, we consider only node n1 since it is the only one in square S; therefore γ(t1, n1) = 1.
Note that because of this segmentation, we require at least 5 nodes with 5 different ranks so that no quantile is empty. We assume this condition always holds, as our datasets always fulfil this requirement.
4. We repeat steps 1–3 for every instant t of window W. We can see, for instance, that the γ values of nodes n3 and n6 are respectively equal to 1 and 5 at instant t3. That means that n3 is among the top 20% and n6 is among the bottom 20%, resulting in γ(t3, n3) = 1 and γ(t3, n6) = 5.
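The ranking and quantile steps above can be sketched as follows. This is one plausible reading of steps 2 and 3, not the thesis's actual implementation: the function name `gamma_snapshot` is hypothetical, ties are handled by counting nodes with strictly higher centrality (so tied nodes share a rank, as n2, n3, and n4 do), and the snapshot centralities are assumed precomputed, keeping the sketch centrality-agnostic as in the text.

```python
import math

def gamma_snapshot(centralities, n_quantiles=5):
    """Map each node's centrality at one snapshot t to its rank
    quantile: 1 = top 20% of ranks, 5 = bottom 20% (for N = 5)."""
    n = len(centralities)
    gamma = {}
    for node, c in centralities.items():
        # Tied nodes share a rank: count nodes strictly above this one.
        above = sum(1 for other in centralities.values() if other > c)
        # Segment into N quantiles by the fraction of nodes ranked above.
        gamma[node] = min(math.floor(above / n * n_quantiles) + 1,
                          n_quantiles)
    return gamma

# Snapshot t1 of Figure 2.3: n1 has betweenness 23, the others 0.
g = gamma_snapshot({"n1": 23, "n2": 0, "n3": 0, "n4": 0})
print(g["n1"])  # -> 1: n1 lands in the first quantile
```

Note that the sketch tolerates fewer than 5 distinct ranks for illustration, whereas the method assumes the datasets always provide at least 5.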


Applying centrality maps to vehicular networks

When observing the centrality value of a node (for instance, in Figure 2.4 we observe the betweenness centrality of a random node over 100 seconds), we noticed that values prior to time t were not evidently correlated with the value at time t. In other words, "predicting" the current centrality value of a node based on its previous values does not seem straightforward.
Our solution proposes to rely on the geographical position as a way to predict the current value of a node. As such, we must verify that the past centrality values of nodes in a given area help predict the current centrality values of nodes within that very area. In short, the general idea is that we aim to guess the present centrality of a node by looking at the past geo-c of the square it falls in. For all time steps t spaced by τ:
1. We compute the geographic centralities geo-c(Wt, S) of all squares S of the map on the sliding window Wt = [t, W + t[.
2. We compute the γ of all nodes at the time W + t + τ.
3. For each node n within a square S, we compare the true centrality quantile of the node, γ(W + t + τ, n), to the one predicted by its geographic position, geo-c(Wt, S), when it exists. If both values are equal, we count the prediction as a success.
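The comparison in step 3 can be sketched as below. All names are hypothetical illustrations: `geo_c_pred` stands for the geo-c(Wt, S) values of the squares, `gamma_true` for the true quantiles γ(W + t + τ, n), and `node_square` maps each node to the square it falls in. Nodes whose square has no geo-c value (e.g., the square was empty during Wt) are skipped, matching the "when it exists" condition.

```python
def prediction_success_rate(geo_c_pred, gamma_true, node_square):
    """Fraction of nodes whose true quantile matches the quantile
    predicted by their square's geo-centrality."""
    successes = total = 0
    for node, true_q in gamma_true.items():
        square = node_square.get(node)
        pred_q = geo_c_pred.get(square)
        if pred_q is None:
            continue  # no geo-c exists for this square: skip the node
        total += 1
        if pred_q == true_q:
            successes += 1
    return successes / total if total else 0.0

# Toy data: n1's square predicts its quantile correctly, n2's does not.
rate = prediction_success_rate(
    geo_c_pred={(0, 0): 1, (1, 0): 3},
    gamma_true={"n1": 1, "n2": 5},
    node_square={"n1": (0, 0), "n2": (1, 0)},
)
print(rate)  # -> 0.5
```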

Evolution over time

In Figure 2.7, we show how the prediction success rate evolves over time, during a day-long period. We plot success and failure rates for the first quantile (i.e., nodes with centrality values among the top 20%). For all four centralities, the plot shows a pattern corresponding to three periods: the morning rush (from 6 a.m. to 11 a.m.), the noon break (12 p.m. to 3 p.m.), and the evening rush (4 p.m. to 8 p.m.).
The degree centrality (Figure 2.7a) shows a jagged pattern. This means that success/failure patterns are not strongly time-correlated in the short term. Predicting degree centrality is quite difficult: there are no long periods where successes outnumber failures (although the total number of successes is still larger than that of failures).
For betweenness and ego-betweenness, represented in Figures 2.7b and 2.7c, a better prediction is achieved during the rush hours, either in the morning or in the evening. At noon, it is difficult to obtain good predictions.
The closeness centrality (Figure 2.7d) leads to better and more regular prediction successes: it is significantly less sensitive to population density most of the time; no matter the population or the hour of the day, there are more successes than failures. Observing these centralities over time helps shed light on their behaviors. They respond differently to population density, with closeness being the least sensitive, as well as providing an excellent prediction for the morning and evening rush-hour periods.
In Figure 2.8, we focus on the morning peak of the rush-hour period; we plot the success rate of the closeness centrality prediction every 5 minutes from 8 a.m. to 10 a.m. Prediction, in this case, is very successful: there is at least an 80% chance of a successful prediction.

Table of contents:

1 Introduction 
1.1 Problem statement and positioning
1.2 Contributions and thesis outline
2 Centrality estimation through spatial positioning 
2.1 Intuition
2.2 Related work
2.3 Our proposal: Geo-centrality
2.4 Applying centrality maps to vehicular networks
2.5 Results
2.6 Influence of parameters
2.7 Use case: Closeness as an epidemic propagation tool
2.8 Conclusion and future work
3 Characterization of D2D throughput through empirical validation
3.1 Related work
3.2 Stock Android High-Speed D2D APIs
3.3 Experimental procedure
3.4 Devices testing
3.5 RSSI for goodput estimation
3.6 Goodput estimation according to distance
3.7 Conclusion and future work
4 Computing realistic and adaptive capacity of D2D contacts 
4.1 Related work
4.2 Definitions and problem formulation
4.3 Empirical reference link characterization
4.4 Fixed vs. adaptive contact characterization
4.5 Contact network capacity computation tool
4.6 Conclusion
5 Conclusion and perspectives 
5.1 Summary and takeaways
5.2 Perspectives
References 
Appendix A Realistic contact computation library: OpportunistiKapacity
A.1 Basic run using wrapper
A.2 Supported traces
A.3 Mobility trace
A.4 Module and classes
