
Forwarding module: design and implementation

In this section, we detail the two-stage name-based LPM algorithm used in the forwarding module of Caesar. Sections 3.3.1 and 3.3.2 describe the prefix Bloom filter (PBF) and the concept of block expansion, respectively; both are used in the first stage to find the length of the longest prefix match. Section 3.3.3 presents our technique to reduce the number of hash value computations. Section 3.3.4 describes the hash table used in the second stage and the optimizations introduced to speed up lookups.

Prefix Bloom Filter

The first stage of our name-based LPM algorithm consists of finding the greatest length of a prefix matching an incoming content name. To perform this lookup, we introduce a novel data structure called the prefix Bloom filter (PBF). The PBF exploits the hierarchical semantics of content prefixes to find the longest prefix match using, most of the time, a single memory access.
Like a standard Bloom filter, the PBF is a space-efficient data structure. It is organized into blocks, each of which is a small Bloom filter whose size is chosen to fit one or a few cache lines of the memory in which it is stored (e.g., DRAM caches, not to be confused with CPU caches, are usually 64-128 MB, while CPU caches are typically smaller than 2 MB). During population of the FIB, a content prefix is stored both in the FIB and in a block chosen from the hash value of its first component. During the lookup of a content name, its first component identifies the unique block that must be loaded from memory into the cache before conducting the membership query. The possible load-balancing issues due to this choice are investigated in Section 3.3.2.
A PBF comprises b blocks of m bits each, queried with k hash functions; the overall size of the PBF is M = b · m bits. The insertion of a content prefix is illustrated in Figure 3.1. Let p = /c1/c2/.../cd be a d-component prefix to be inserted into the PBF. A hash function g(), chosen to be as uniform as possible, with output in the range {0, 1, ..., b-1}, determines the block where p is inserted. Let pi = /c1/c2/.../ci denote the subprefix made of the first i components of p, so that p1 = /c1, p2 = /c1/c2, p3 = /c1/c2/c3, and so on. The block is the one identified by the value g(p1), i.e., the hash computed over the subprefix p1 consisting of the first component alone. This implies that all prefixes starting with the same component are stored in the same block, which enables fast lookups (the possible load-balancing issues are discussed in Section 3.3.2). Once the block is selected, the values h1(p), h2(p), ..., hk(p) are computed over the entire prefix p, yielding k indexes in the range {0, 1, ..., m-1}. Finally, the bits at positions h1(p), h2(p), ..., hk(p) in the selected block are set to 1.
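To make these operations concrete, the following C sketch implements a PBF with one 64-byte block per cache line, together with the insertion of a prefix and the first-stage search for the longest matching length. The block count, the number of hash functions, and the seeded FNV-1a hash are illustrative assumptions; they do not reflect the actual parameters or hash functions used in Caesar.

/* Minimal PBF sketch: cache-line-sized blocks, block selection by the first
 * component, k bits set per prefix. Parameters and the FNV-1a hash are
 * illustrative assumptions, not the actual Caesar implementation. */
#include <stdint.h>
#include <string.h>

#define BLOCK_BITS 512            /* m: one 64-byte cache line per block    */
#define NUM_BLOCKS (1 << 20)      /* b: number of blocks (assumed)          */
#define NUM_HASHES 8              /* k: hash functions per prefix (assumed) */

typedef struct { uint64_t bits[BLOCK_BITS / 64]; } pbf_block_t;

static pbf_block_t pbf[NUM_BLOCKS];          /* M = b * m bits overall      */

/* Seeded FNV-1a over the first len bytes of s; one routine yields k "functions". */
static uint64_t hash_seeded(const char *s, size_t len, uint64_t seed) {
    uint64_t h = 14695981039346656037ULL ^ seed;
    for (size_t i = 0; i < len; i++) { h ^= (uint8_t)s[i]; h *= 1099511628211ULL; }
    return h;
}

/* Byte length of the first component "/c1" of a name such as "/c1/c2/c3". */
static size_t first_component_len(const char *name) {
    const char *next = strchr(name + 1, '/');
    return next ? (size_t)(next - name) : strlen(name);
}

/* g(): block selection from the first component only. */
static pbf_block_t *select_block(const char *name) {
    return &pbf[hash_seeded(name, first_component_len(name), 0) % NUM_BLOCKS];
}

static void block_set(pbf_block_t *b, uint64_t i) { b->bits[i / 64] |= 1ULL << (i % 64); }
static int  block_get(const pbf_block_t *b, uint64_t i) { return (b->bits[i / 64] >> (i % 64)) & 1; }

/* Insertion: pick the block with g(p1), then set the k bits h1(p)..hk(p). */
static void pbf_insert(const char *prefix) {
    pbf_block_t *blk = select_block(prefix);
    for (uint64_t j = 1; j <= NUM_HASHES; j++)
        block_set(blk, hash_seeded(prefix, strlen(prefix), j) % BLOCK_BITS);
}

/* Membership query for the subprefix made of the first len bytes of name. */
static int pbf_query(const char *name, size_t len) {
    pbf_block_t *blk = select_block(name);
    for (uint64_t j = 1; j <= NUM_HASHES; j++)
        if (!block_get(blk, hash_seeded(name, len, j) % BLOCK_BITS)) return 0;
    return 1;
}

/* First stage of LPM: the block of g(p1) holds all subprefixes of the name,
 * so it is fetched once; subprefixes are then tested from longest to
 * shortest and the longest candidate length is returned (0 if none). */
static size_t pbf_longest_match(const char *name) {
    size_t len = strlen(name);
    while (len > 0) {
        if (pbf_query(name, len)) return len;
        while (len > 0 && name[len - 1] != '/') len--;   /* drop last component */
        if (len > 0) len--;                              /* drop trailing '/'   */
    }
    return 0;
}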

Reducing the number of hashing operations

Hashing is a fundamental operation in our name-based LPM algorithm. For the lookup of a given content prefix p = /c1/c2/.../cd with d components and k hash functions in the PBF, a total of kd hash values must be generated for LPM in the worst case, i.e. the running time is O(kd). Longer content names thus have a higher overall impact on the system throughput than shorter names. To reduce this overhead, we propose a hashing scheme with linear O(k + d) running time that generates only k + d - 1 seed hash values, while the remaining (k - 1)(d - 1) values are computed with XOR operations.
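The exact combination of seeds is not detailed above, so the C sketch below shows one arrangement that matches these counts: the k hashes of p1 plus one hash per longer subprefix serve as the k + d - 1 seeds, and each of the remaining (k - 1)(d - 1) values is obtained with a single XOR. The scheme itself and the seeded FNV-1a stand-in are assumptions made for illustration, not necessarily the combination used in Caesar.

/* Sketch of an O(k + d) hashing scheme for all subprefixes of a name.
 * The specific XOR combination below is an illustrative assumption. */
#include <stdint.h>
#include <stddef.h>

#define K     8                   /* k hash functions (assumed)               */
#define D_MAX 32                  /* max components, for sizing out[] (assumed) */

/* Seeded FNV-1a over the first len bytes of s (same stand-in as before). */
static uint64_t hash_seeded(const char *s, size_t len, uint64_t seed) {
    uint64_t h = 14695981039346656037ULL ^ seed;
    for (size_t i = 0; i < len; i++) { h ^= (uint8_t)s[i]; h *= 1099511628211ULL; }
    return h;
}

/*
 * name:    the full content name, e.g. "/c1/c2/c3"
 * plen[i]: byte length of the subprefix p_{i+1} (its first i+1 components)
 * d:       number of components (d <= D_MAX)
 * out[i][j] receives h_{j+1}(p_{i+1})
 *
 * Seed hashes actually computed: k for p1 plus one per longer subprefix,
 * i.e. k + d - 1; the other (k - 1)(d - 1) values cost one XOR each.
 */
static void hash_all_subprefixes(const char *name, const size_t plen[],
                                 int d, uint64_t out[][K]) {
    for (int j = 0; j < K; j++)                          /* k seeds for p1    */
        out[0][j] = hash_seeded(name, plen[0], (uint64_t)j + 1);
    for (int i = 1; i < d; i++) {
        out[i][0] = hash_seeded(name, plen[i], 1);       /* 1 seed per p_i    */
        for (int j = 1; j < K; j++)                      /* XOR-derived       */
            out[i][j] = out[i][0] ^ out[0][j];
    }
}

Note that, with this particular arrangement, the hashes of index j >= 2 for a longer subprefix are those of p1 shifted by a common XOR mask, so some independence between hash functions is traded for speed; any concrete scheme of this kind makes a similar trade-off.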

Hash table design

The PBF performs a membership query on content prefixes to find the longest prefix length stored in the FIB. Once this length is determined, the second stage of our name-based LPM algorithm consists of a hash table lookup, which either fetches the next-hop information or handles a possible false positive.
Our hash table design is shown in Figure 3.3. The hash table used in our forwarding module consists of several buckets to which the prefixes in the FIB are hashed. As with the PBF, the design goal is to minimize memory access latency in order to speed up the overall processing rate. For this purpose, each bucket is restricted to the fixed size of one cache line so that, for well-dimensioned tables, a single memory access suffices to find and fetch an entry. In case of collisions, entries are stored contiguously next to each other, up to the limit imposed by the cache line size. Bucket overflows can be handled by chaining with linked lists, but they are expected to be rare if the number of buckets is large enough.
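The C sketch below illustrates one possible bucket layout under these constraints, assuming 64-byte cache lines and compact 8-byte entries holding a fingerprint of the prefix and a next-hop index. The field widths, table size, and fingerprint check are illustrative assumptions rather than the actual Caesar layout; a production table would also compare the full prefix to rule out fingerprint collisions.

/* Sketch of the second-stage hash table: one cache line per bucket, entries
 * stored contiguously, linked-list chaining only on (rare) overflow.
 * Sizes and fields are illustrative assumptions. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define CACHE_LINE  64
#define NUM_BUCKETS (1 << 22)     /* table size (assumed)                     */

typedef struct {
    uint32_t fingerprint;         /* short hash of the full prefix (0 = free) */
    uint32_t next_hop;            /* index of the outgoing face               */
} entry_t;                        /* 8 bytes                                  */

typedef struct bucket {
    entry_t        slot[7];       /* 7 x 8 B = 56 B of contiguous entries     */
    struct bucket *overflow;      /* 8 B chain pointer, rarely followed       */
} __attribute__((aligned(CACHE_LINE))) bucket_t;  /* one cache line (GCC/Clang) */

static bucket_t *table;

static void ht_init(void) {       /* aligned_alloc is C11                     */
    table = aligned_alloc(CACHE_LINE, NUM_BUCKETS * sizeof(bucket_t));
    memset(table, 0, NUM_BUCKETS * sizeof(bucket_t));
}

/* FNV-1a over the prefix bytes, reused for bucket index and fingerprint. */
static uint64_t hash_prefix(const char *p, size_t len) {
    uint64_t h = 14695981039346656037ULL;
    for (size_t i = 0; i < len; i++) { h ^= (uint8_t)p[i]; h *= 1099511628211ULL; }
    return h;
}

/* Insert into the first free slot of the bucket; chaining omitted for brevity. */
static int ht_insert(const char *prefix, size_t len, uint32_t next_hop) {
    uint64_t h  = hash_prefix(prefix, len);
    uint32_t fp = (uint32_t)(h >> 32) | 1;            /* non-zero fingerprint  */
    bucket_t *b = &table[h % NUM_BUCKETS];
    for (int i = 0; i < 7; i++)
        if (b->slot[i].fingerprint == 0) {
            b->slot[i].fingerprint = fp;
            b->slot[i].next_hop    = next_hop;
            return 0;
        }
    return -1;  /* bucket full: a real table would chain an overflow bucket   */
}

/* Lookup: usually a single cache-line access; returns the next hop, or -1 if
 * the prefix is absent (e.g. the PBF reported a false positive). */
static int64_t ht_lookup(const char *prefix, size_t len) {
    uint64_t h  = hash_prefix(prefix, len);
    uint32_t fp = (uint32_t)(h >> 32) | 1;
    for (const bucket_t *b = &table[h % NUM_BUCKETS]; b != NULL; b = b->overflow)
        for (int i = 0; i < 7; i++)
            if (b->slot[i].fingerprint == fp)
                return b->slot[i].next_hop;
    return -1;
}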

Table of contents:

Acknowledgements
Introduction
1 Background 
1.1 Telecommunication Networks
1.1.1 Circuit switching and packet switching
1.1.2 Computer Networks
1.2 The Internet
1.2.1 The TCP/IP Protocol Stack
1.2.2 Data Plane and Control Plane
1.3 The evolution of the Internet
1.3.1 The behavioral issue
1.3.2 The architectural issue
1.4 Enhancing the Data Plane with ICN
1.4.1 Architecture of NDN
1.4.2 Features of ICN and NDN
1.4.3 Implementation of ICN
1.4.4 Challenges
1.5 The SDN approach for an evolvable Network
1.5.1 Architecture
1.5.2 Implementation of SDN
1.5.3 Features of SDN
1.5.4 Challenges
I Data Plane Enhancement 
2 Introduction to the First Part 
2.1 Design principles
2.2 Design space
2.3 Architecture
2.4 Methodology and testbed
2.4.1 Methodology
2.4.2 Test equipment
2.4.3 Workload
2.5 Contributions
3 Forwarding module 
3.1 Description
3.2 Design space
3.2.1 Related work
3.2.2 Algorithm
3.2.3 Data structure
3.3 Forwarding module: design and implementation
3.3.1 Prefix Bloom Filter
3.3.2 Block Expansion
3.3.3 Reducing the number of hashing operations
3.3.4 Hash table design
3.3.5 Caesar extensions
3.3.6 Implementation
3.4 Evaluation
3.4.1 Experimental setting
3.4.2 Performance evaluation
3.4.3 Distributed Processing
3.4.4 GPU Off-load
3.5 Conclusion
4 PIT module 
4.1 Description
4.2 Design space
4.2.1 Related work
4.2.2 Placement
4.2.3 Data structure
4.2.4 Timer support
4.2.5 Loop detection
4.2.6 Parallel access
4.3 PIT: design and implementation
4.3.1 PIT placement and packet walkthrough
4.3.2 Data structure
4.3.3 PIT operations
4.3.4 Timer support
4.3.5 Loop detection with Bloom filter
4.4 Evaluation
4.4.1 Experimental setting
4.4.2 Memory footprint
4.4.3 Throughput without timer
4.4.4 Throughput with timer
4.5 Conclusion
II Network Verification 
5 Introduction to the Second Part 
5.1 Network Verification
5.2 State of the art
5.3 Contributions
6 Forwarding rule verification through atom computation 
6.1 Model
6.1.1 Definitions
6.1.2 Header Classes
6.1.3 Set representation
6.1.4 Representation of a collection of sets
6.2 Atoms generated by a collection of sets
6.2.1 Representing atoms by uncovered combinations
6.2.2 Overlapping degree of a collection
6.3 Incremental computation of atoms
6.3.1 Computation of atoms generated by a collection of sets
6.3.2 Application to forwarding loop detection
6.4 Theoretical comparison with related work
6.4.1 Related notion of weak completeness
6.4.2 Lower bound for HSA / NetPlumber
6.4.3 Lower bound for VeriFlow
6.4.4 Linear fragmentation versus overlapping degree
6.5 Conclusion
Table of symbols
Conclusion
Glossary
Bibliography 
