Routing and Wireless Resource Allocation in Hybrid Data Center Networks: Overview


Taxonomy of data center network architectures

In this section, we present a taxonomy of existing DCN architectures with a detailed review of each class. In general, several criteria have to be considered to design robust DCNs, namely, high network performance, efficient resource utilization, full use of the available bandwidth, high scalability, easy cabling, etc. To deal with these challenges, a wide range of solutions has been designed. We can mainly distinguish two research directions. In the first one, wired DCN architectures have been upgraded to build advanced, cost-effective topologies able to scale up data centers. The second approach resorts to deploying new networking techniques within the existing DCN so as to handle the challenges encountered in prior architectures. Hereafter, we give a detailed taxonomy of these techniques.

Classification of DCN architectures

With regard to the aforementioned research directions, we can identify three main groups of DCN architectures, namely, switch-centric DCN, server-centric DCN, and enhanced DCN. Each group includes a variety of categories that we will detail hereafter.
• Switch-centric DCN architecture: Switches are mostly responsible for network-related functions, whereas the servers handle processing tasks. The focus of such a design is to improve the topology so as to increase the network scale, reduce oversubscription and speed up flow transmission. Switch-centric architectures can be classified into three main categories according to their structural properties:
1. Traditional tree-based DCN architecture: a specific kind of switch-centric architecture, where switches are linked in a multi-rooted tree form.
2. Hierarchical DCN architecture: a switch-centric DCN where the network components are arranged in multiple layers, each layer handling traffic differently.
3. Flat DCN architecture: compresses the three switch layers into only one or two layers, in order to simplify the management and maintenance of the DCN.
• Server-centric DCN architecture: Servers are enhanced to handle networking functions, whereas switches are used only to forward packets. Basically, servers act simultaneously as end-hosts and as relaying nodes for multi-hop communications. Usually, server-centric DCNs are recursively defined multi-level topologies.
• Enhanced DCN architecture: a DCN tailored for future Cloud computing services. Indeed, this research direction attempts to deploy new networking techniques so as to overcome the limitations of wired DCN designs. Recently, a variety of technologies have been used in this context, namely, optical switching and wireless communications. Accordingly, we distinguish two main classes of enhanced DCN architectures:
1. Optical DCN: makes use of optical devices to speed up communications. It can be either: i) an all-optical DCN (i.e., with only optical devices) or ii) a hybrid optical DCN (i.e., with both optical and Ethernet switches).
2. Wireless DCN: deploys a wireless infrastructure in order to enhance network performance, and may be: i) a fully wireless DCN (i.e., only wireless devices) or ii) a hybrid DCN (i.e., both wireless and wired devices).

Switch-centric DCN architectures overview

Tree-based DCN

The traditional DCN is typically based on a multi-rooted tree architecture. The latter is a three-tier topology composed of three layers of switches. The top level (i.e., the root) represents the core layer, the middle level is the aggregation layer, while the bottom level is known as the access layer. The core devices have high capacities compared with the aggregation and access switches. Typically, the core switches' uplinks connect the data center to the Internet. On the other hand, the access layer switches commonly use 1 Gbps downlink interfaces and 10 Gbps uplink interfaces, while aggregation switches provide 10 Gbps links. Access switches (i.e., ToRs) interconnect the servers within the same rack. The aggregation layer connects the access switches together and forwards their traffic. An illustration of the tree-based DCN architecture is depicted in Figure 2.2.
Unfortunately, traditional DCNs struggle to cope with the increasing traffic demand. First, core switches are prone to bottlenecks as soon as workloads reach their peak. Moreover, in such a DCN, several downlinks of a ToR switch share the same uplink, which limits the available bandwidth. Second, DCN scalability strongly depends on the number of switch ports; therefore, the only way to scale this topology is to increase the number of network devices. However, this solution results in high construction costs and energy consumption. Third, the tree-based DCN suffers from serious resiliency problems. For instance, if a failure hits some of the aggregation switches, servers are likely to lose connectivity with others. In addition, resource utilization is not efficiently balanced. For all these reasons, researchers have put forward alternative DCN topologies.
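To make the uplink-sharing limitation concrete, the following sketch computes the oversubscription ratio of a ToR switch under the link speeds mentioned above; the port counts are illustrative assumptions rather than figures taken from a particular DCN.

```python
# Oversubscription ratio of a ToR switch: total downlink capacity offered to
# servers divided by total uplink capacity towards the aggregation layer.
# Port counts are illustrative assumptions; link speeds follow the text
# (1 Gbps downlinks, 10 Gbps uplinks).

def oversubscription_ratio(num_servers, downlink_gbps, num_uplinks, uplink_gbps):
    """Ratio > 1 means servers cannot all send at line rate simultaneously."""
    return (num_servers * downlink_gbps) / (num_uplinks * uplink_gbps)

# Example: 40 servers at 1 Gbps sharing 2 x 10 Gbps uplinks -> ratio 2.0,
# i.e., each server gets at most ~500 Mbps towards the core under full load.
print(oversubscription_ratio(num_servers=40, downlink_gbps=1,
                             num_uplinks=2, uplink_gbps=10))
```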

Hierarchical DCN architecture

The hierarchical topology arranges the DCN components in multiple layers. The key insight behind this model is to reduce congestion by minimizing the oversubscription of lower-layer switches using the upper-layer devices. In the literature, we find several hierarchical DCN designs, namely, CLOS, Fat-Tree and VL2. Hereafter, we describe each one of them.
CLOS-based DCN: is an advanced tree-based network architecture. It was first introduced by Charles Clos, from Bell Labs, in 1953 to create non-blocking multi-stage topologies able to provide higher bandwidth than a single switch. Typically, CLOS-based DCNs come with three layers of switches: i) the access layer (ingress), composed of the ToR switches directly connected to the servers in each rack, ii) the aggregation layer (middle), formed by aggregation switches referred to as spines and connected to the ToRs, and iii) the core layer (egress), formed by core switches serving as edges to manage traffic in and out of the DCN [26]. The CLOS network has been widely used to build modern IP fabrics, generally referred to as spine-and-leaf topologies. Accordingly, in this kind of DCN, commonly named a folded-CLOS topology, the spine layer gathers the aggregation switches (i.e., spines) while the leaf layer is composed of the ToR switches (i.e., leaves). The spine layer is responsible for interconnecting the leaves. CLOS inhibits the transit of traffic through horizontal links (i.e., inside the same layer). Moreover, the CLOS topology scales up the number of ports and makes it possible to build very large interconnections using only a small number of switches. Indeed, increasing the number of switch ports widens the spine layer and, hence, alleviates network congestion. In general, each leaf switch is connected to all spines. In other words, the number of up (respectively down) ports of each ToR is equal to the number of spines (respectively leaves). Accordingly, in a DCN of n leaves and m spines, there are n × m wired links. The main reason behind this link redundancy is to enable multi-path routing and to mitigate the oversubscription observed with the conventional link-state OSPF routing protocol. In doing so, the CLOS network provides multiple paths over which communications can be switched without being blocked.
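As a minimal illustration of the folded-CLOS wiring described above, the sketch below enumerates the links of a leaf-spine fabric and checks the n × m link count; the device names and sizes are illustrative assumptions.

```python
# Minimal leaf-spine (folded-CLOS) wiring sketch: every leaf (ToR) connects
# to every spine, so a fabric with n leaves and m spines has n * m links.
from itertools import product

def build_leaf_spine(n_leaves, m_spines):
    """Return the list of (leaf, spine) links of a full leaf-spine fabric."""
    leaves = [f"leaf{i}" for i in range(n_leaves)]
    spines = [f"spine{j}" for j in range(m_spines)]
    return [(l, s) for l, s in product(leaves, spines)]

links = build_leaf_spine(n_leaves=8, m_spines=4)
assert len(links) == 8 * 4   # n x m wired links
# Each leaf has m uplinks (one per spine), hence m equal-cost two-hop paths
# (leaf -> spine -> leaf) between any pair of leaves.
print(len(links), "links; 4 equal-cost leaf-to-leaf paths")
```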
The CLOS architecture ensures better scalability and path diversity than conventional tree-based DC topologies. Moreover, this design reduces the bandwidth limitation at the aggregation layer. However, it requires homogeneous switches and deploys a huge number of links.
Fat-Tree DCN: is a special instance of the CLOS-based DCN introduced by Al-Fares et al. [27] in order to remedy the network bottleneck problem of prior tree-based architectures. Specifically, Fat-Tree comes with a new way to interconnect commodity Ethernet switches. Typically, it is organized in k pods, where each pod contains two layers of k/2 switches. Each k-port switch in the lower layer is directly connected to k/2 hosts and to k/2 of the aggregation-layer switches. In addition, there is a total of (k/2)² k-port core switches, each of which has one port connected to each of the k pods. Accordingly, a Fat-Tree built with k-port switches supports k³/4 hosts.
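As a quick sanity check of these counts, the following sketch evaluates them for the classic small example k = 4; the helper function is ours and only restates the formulas above.

```python
# Fat-Tree element counts as functions of the switch port count k
# (k must be even). Formulas follow the description above.

def fat_tree_counts(k):
    assert k % 2 == 0, "a k-port Fat-Tree requires an even k"
    edge = k * (k // 2)          # k pods x k/2 edge (ToR) switches
    aggregation = k * (k // 2)   # k pods x k/2 aggregation switches
    core = (k // 2) ** 2         # (k/2)^2 core switches
    hosts = k ** 3 // 4          # k^3 / 4 servers
    return {"edge": edge, "aggregation": aggregation, "core": core, "hosts": hosts}

# k = 4: 8 edge + 8 aggregation + 4 core switches, 16 hosts.
print(fat_tree_counts(4))
```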
The main advantage of the Fat-Tree topology is its capability to deploy identical cheap switches, which lowers the cost of designing the DCN. Further, it guarantees an equal number of links across the different layers, which prevents communication blockage among servers. In addition, this design can significantly mitigate congestion thanks to the large number of redundant paths available between any two communicating ToR switches. Nevertheless, the Fat-Tree DCN suffers from complex cabling and its scalability is closely tied to the number of switch ports. Moreover, this structure is sensitive to failures of lower-layer devices, which may degrade the DCN performance.
This architecture has been improved by designing new structures based on the Fat-Tree model, namely, ElasticTree [28], PortLand [29] and Diamond [30]. The main advantage of such topologies is to reduce maintenance cost and enhance scalability by reducing the number of switch layers.
Valiant Load Balancing (VLB) DCN architecture: VLB is introduced in order to handle traffic variation and alleviate hotspots when random traffic transits through multiple paths. In the literature, we find mainly two kinds of VLB architectures. First, VL2 is a three-layer CLOS architecture introduced by Microsoft in [16]. Contrary to Fat-Tree, VL2 connects all servers through a virtual 2-layer Ethernet located in the same LAN as the servers. Moreover, VL2 implements the VLB mechanism and OpenFlow to perform routing while enhancing load balancing. To forward data over multiple equal-cost paths, it makes use of the Equal-Cost Multi-Path (ECMP) protocol. The VL2 architecture is characterized by its simple connection and does not require software or hardware modifications. Nevertheless, it still suffers from scalability issues and does not address reliability, since the single-node-failure problem persists.
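To illustrate how ECMP spreads traffic over such equal-cost paths, here is a minimal sketch of per-flow hashing; the 5-tuple fields and the list of spines are illustrative assumptions and do not reproduce VL2's exact implementation.

```python
# Minimal per-flow ECMP sketch: hash the flow 5-tuple and use it to pick one
# of the equal-cost next hops, so all packets of a flow follow the same path
# (avoiding reordering) while different flows are spread across paths.
import hashlib

def ecmp_pick(next_hops, src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Deterministically map a flow to one of the available next hops."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

spines = ["spine0", "spine1", "spine2", "spine3"]
print(ecmp_pick(spines, "10.0.1.2", "10.0.7.9", 40321, 443))
```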
Second, the Monsoon architecture [31] aims to alleviate over-subscription, relying on a 2-layer network that connects the servers and a third layer of core switches/routers. Unfortunately, it is not compatible with existing wired DCN architectures.


Flat DCN architecture

The main idea of flat switch-centric architectures is to flatten the multiple switch layers down to only one or two layers, so as to simplify maintenance and resource management tasks. Several topologies have been proposed for this kind of architecture. First, the authors of [32] conceive the FBFLY architecture to build an energy-aware DCN. Specifically, it considers power consumption proportional to the traffic load, and thus replaces the 40 Gbps links by several lower-capacity links sized according to the requested traffic in each scenario. C-FBFLY [33] is an improved version of FBFLY which makes use of an optical infrastructure in order to reduce cabling complexity while keeping the same control plane. Then, FlatNet [34] is also a 2-layer DCN architecture. Layer 1 consists of a single n-port switch connecting n servers, whereas the second layer is recursively formed by n² 1-layer FlatNets. In doing so, this architecture reduces the number of deployed links and switches by roughly 1/3 compared to the classical 3-layer Fat-Tree topology, while keeping the same performance level. Moreover, FlatNet guarantees fault-tolerance thanks to its 2-layer structure and ensures load balancing through efficient routing protocols.
Discussion: In conclusion, switch-centric architectures relatively enhance traffic load balancing, and most of these structures support multi-path routing. Nevertheless, such a design generally brings at least three layers of switches, which strongly increases cabling complexity and hence limits network scalability. Moreover, the commodity switches commonly deployed in these architectures do not provide the fault-tolerance of high-end switches.

Server-centric DCN architectures overview

In general, these DCN architectures are conceived in a recursive way, where a high-level structure is formed by several low-level structures connected in a specific manner. The key insight behind this design is to avoid single points of failure and to enhance network capacity.
The main server-centric DCN architectures found in the literature include DCell, a recursive architecture built on switches and servers equipped with multiple Network Interface Cards (NICs) [35]; its objective is to increase the number of servers that can be interconnected. Moreover, BCube is a recursive server-centric architecture [17] which makes use of specific topological properties to enable custom routing protocols. Finally, CamCube [36] is a switch-free DCN architecture, specifically modeled as a 3D topology, where each server connects to exactly two other servers in each of the three dimensions.
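As an illustration of how such recursive designs scale, the sketch below computes the number of servers of a level-k DCell built from n-port switches, following the recursion given in the DCell design [35]; it is only meant to show the growth rate of recursive server-centric topologies.

```python
# Server count of a level-k DCell built from n-port switches, using the
# recursion t_0 = n and t_k = t_{k-1} * (t_{k-1} + 1): a DCell_k is made of
# (t_{k-1} + 1) DCell_{k-1} modules, interconnected pairwise at the module
# level. Formula borrowed from the DCell design [35]; shown here only to
# illustrate the doubly-exponential growth of recursive server-centric DCNs.

def dcell_servers(n, k):
    t = n                  # DCell_0: n servers attached to one n-port switch
    for _ in range(k):
        t = t * (t + 1)    # each recursion level multiplies the scale
    return t

# With cheap 4-port switches, three recursion levels already exceed 170,000 servers.
for level in range(4):
    print(f"DCell_{level} with n=4 ports:", dcell_servers(4, level), "servers")
```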

Table of contents:

1 Introduction 
1.1 Data Center Designing
1.1.1 Data Center Network
1.1.2 Data Center Network Challenges
1.2 Data Center Network Solutions
1.3 Problem statement
1.4 Thesis contributions
1.5 Thesis outline
2 Architectures of Data Center Networks: Overview 
2.1 Introduction
2.2 Taxonomy of data center network architectures
2.2.1 Classification of DCN architectures
2.2.2 Switch-centric DCN architectures overview
2.2.2.1 Tree-based DCN
2.2.2.2 Hierarchical DCN architecture
2.2.2.3 Flat DCN architecture
2.2.3 Server-centric DCN architectures overview
2.2.4 Enhanced DCN architectures overview
2.2.4.1 Optical DCN architecture
2.2.4.2 Wireless DCN architecture
2.3 Comparison between DCN architectures
2.4 Proposed HDCN architecture
2.4.1 HDCN architecture based on MSDC model
2.4.1.1 ECMP protocol
2.4.2 60 GHz technology in HDCN
2.4.3 Beamforming technique in HDCN
2.5 Conclusion
3 Routing andWireless Resource Allocation inHybrid Data Center Networks: Overview 
3.1 Introduction
3.2 Routing and wireless channel allocation problematic in HDCN
3.2.1 Routing and wireless channel assignment challenges in HDCN
3.2.2 Routing and wireless channel assignment criteria in HDCN
3.3 Wireless Channel Allocation strategies for one-hop communications in HDCN
3.3.1 Channel allocation problem in wireless networks
3.3.2 Omni-directional antennas based strategies
3.3.3 Beamforming based strategies
3.4 Online Joint Routing and Wireless Channel Allocation strategies in HDCN
3.4.1 Joint routing and channel assignment in Mesh networks
3.4.2 Online joint routing and channel assignment strategies in HDCN
3.5 Joint Batch Routing and Channel Allocation strategies in HDCN
3.6 Summary
3.7 Conclusion
4 Wireless channel allocation for one-hop communications in HDCN 
4.1 Introduction
4.2 Problem Formulation
4.2.1 Hybrid Data Center Network Model
4.2.2 Wireless Channel allocation problem in HDCN
4.3 Proposal: GC-HDCN
4.3.1 Generation of initial solution
4.3.2 Resolution of the relaxed RM-Problem
4.3.3 Resolution of the pricing problem
4.3.3.1 B&C algorithm
4.3.3.2 Pricing Greedy heuristic
4.3.4 Branch and price stage
4.4 Performance evaluation
4.4.1 Simulation Environment and Methodologies
4.4.1.1 Experiment Design
4.4.1.2 Simulation setup
4.4.2 Performance metrics
4.4.3 Simulation Results
4.4.3.1 Omni-Beam scenario
4.4.3.2 Uniform-Load scenario
4.4.3.3 Real-Load scenario
4.5 Conclusion
5 Joint online routing and channel allocation in HDCN 
5.1 Introduction
5.2 Problem Formulation
5.2.1 Hybrid Data Center Network Model
5.2.2 Joint Routing and Channel Assignment problem
5.2.2.1 Edmonds-Szeider Expansion
5.2.2.2 Minimum Weight Perfect Matching formulation
5.3 Proposal: JRCA-HDCN
5.3.1 Initialization stage
5.3.2 Primal operations stage
5.3.3 Dual updates stage
5.4 Performance Evaluation
5.4.1 Simulation Environment and Methodologies
5.4.1.1 Experiment Design
5.4.1.2 Simulation setup
5.4.2 Performance metrics
5.4.3 Simulation Results
5.4.3.1 Close-WTU and Far-WTU scenarios
5.4.3.2 Hotspot scenario
5.4.3.3 Real-Load
5.5 Conclusion
6 Joint batch routing and channel allocation in HDCN 
6.1 Introduction
6.2 Problem Formulation
6.2.1 Hybrid Data Center Network Model
6.2.2 Joint Batch Routing & Channel Assignment (JBRC) problem
6.2.2.1 Graph Expansion
6.2.2.2 Multi-Commodity Flow problem (MCF) formulation
6.3 Heuristic solution: JBH-HDCN
6.3.1 Initialization stage
6.3.2 Cost evaluation and selection stage
6.3.3 Expansion stage
6.4 Approximate solution: SJB-HDCN
6.4.1 Relaxation stage
6.4.2 Evaluating the Lagrangian function and its subgradient
6.4.3 Lagrangian update stage
6.4.3.1 Initialization
6.4.3.2 Update of Lagrangian multipliers
6.4.3.3 Computation of the current Lagrangian function
6.5 Performance Evaluation
6.5.1 Simulation Environment and Methodologies
6.5.1.1 Experiment Design
6.5.1.2 Simulation setup
6.5.2 Performance metrics
6.5.3 Simulation Results
6.5.3.1 Uniform-Load
6.5.3.2 Real-Load
6.6 Conclusion
7 Conclusions 
7.1 Introduction
7.2 Summary of contributions
7.3 Future research directions
7.4 Publications
