Virtual Network Function Placement and Routing 


Hypervisor or Virtual Machine Monitor Architecture

Figure 1.5 depicts the way applications access physical interfaces via the OS. The hardware layer includes the physical resources and interfaces such as the CPU, memory, storage, and Network Interface Card (NIC). The operating system reserves direct access to these interfaces for the kernel space, while the userland (or user space) hosts the running application code, which reaches the hardware only through system calls.
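To make the kernel/userland split concrete, the following minimal Python sketch sends a single UDP datagram; the destination address and port are illustrative placeholders. The application never touches the NIC itself: each call crosses into kernel space via a system call, and the kernel's network stack and NIC driver perform the actual transmission.

```python
import socket

# socket() is a system call: the kernel allocates the communication endpoint.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# sendto() is a system call: the kernel's network stack builds the UDP/IP
# frame and the NIC driver transmits it on the physical interface.
sock.sendto(b"hello", ("192.0.2.1", 9999))  # 192.0.2.1: documentation address

sock.close()  # close() releases the kernel-side resources
```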
With recent software-driven virtualization solutions, we can distinguish two hypervisor architecture modes: the hosted type and the native (or bare-metal) mode.
The hosted mode consists of adding a software application on top of the host's OS layer to act as hypervisor. In each virtual machine, a guest OS can be installed to run any application. Figure 1.6 shows an example of the hosted mode. Its peculiarity is therefore that the hypervisor runs as an application rather than at the OS kernel level.
The hosted mode therefore targets end users rather than industrial-grade virtualization servers. A well-known example of a hosted hypervisor is Oracle VM VirtualBox. Others include VMware Workstation, Microsoft Virtual PC, and Parallels.

The Dawn of Cloud Networking

Executing servers as virtual machines has a direct impact on network operations. The first tangible impact is that virtual machines also need to be addressed from an IP and Ethernet network perspective, transparently with respect to the physical server, by means of their virtual network interfaces. Moreover, when IaaS networks need to be operated, traffic from and between VMs of the same IaaS needs to be isolated from other virtual networks and uniquely identified at the data plane level. Furthermore, virtualization servers introduce novel traffic patterns in the data-center network to support new operations such as VM migration, storage synchronization, and VM redundancy.
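As a concrete illustration of data-plane identification (cloud network overlay protocols are surveyed in Chapter 2), a tenant's frames can be tagged with a 24-bit Virtual Network Identifier (VNI) carried in an encapsulation header. The sketch below builds the 8-byte VXLAN header defined in RFC 7348; the VNI values are arbitrary examples.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header of RFC 7348.

    Flags byte 0x08 sets the I bit, declaring the 24-bit VNI valid; all
    other bits and the remaining bytes are reserved and set to zero.
    """
    assert 0 <= vni < 2 ** 24, "the VNI is a 24-bit identifier"
    return struct.pack("!B3s3sB",
                       0x08,                    # flags: I bit set
                       b"\x00\x00\x00",         # reserved
                       vni.to_bytes(3, "big"),  # tenant VNI
                       0)                       # reserved

# Two tenants' frames stay distinguishable at the data plane via their VNIs.
header_tenant_a = vxlan_header(5001)
header_tenant_b = vxlan_header(5002)
```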
Traditionally, data center network architectures relied on a rather simple tree-like layer-2 Ethernet fabric using the Spanning Tree Protocol (STP) [9], which for hierarchical tree-like topologies can be considered to sufficiently meet the connectivity requirement. With virtualization, the emergence of IaaS virtual networks with many VMs located at many distinct edge servers creates a huge amount of horizontal traffic among edge servers, especially if VM migrations happen across the servers. As explained in the next chapter, novel data-center network topologies have been defined and deployed to meet these new requirements.

Data Center Network Optimization

In the following, we review a selection of the state of the art in data center network optimization algorithms and protocols that see real application in data center networking today. We classify them into virtual network embedding, virtual machine placement, and traffic engineering methods.

Virtual Network Embedding

Virtual Network Embedding (VNE) is a term adopted since the inception of advanced software virtualization in the early 2000s. A generic definition of the virtual network embedding problem is to assign to each of a set of virtual network demands a virtual network built over the same physical network infrastructure, which technically is supposed to offer isolation and possibly deterministic service-level specifications at both nodes and links. In the literature, the term Virtual Data Center (VDC) is also frequently used, often associated with an embedding algorithm. A representation of a virtual network is given in Figure 2.9. Each virtual network is represented by a graph that has to be embedded over a physical network graph that must have sufficient idle resources to serve the demands. A virtual network demand can be rejected in case of resource unavailability.
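To fix ideas, the sketch below gives a deliberately simple greedy embedding (an illustrative baseline, not an algorithm from the surveyed literature): virtual nodes are mapped to physical nodes with enough residual CPU, virtual links to fewest-hop physical paths with enough residual bandwidth, and the demand is rejected as soon as either step fails. All names and the data layout are assumptions made for the example.

```python
from collections import deque

def bfs_path(adj, src, dst, bw, need):
    """Fewest-hop path from src to dst whose every edge has residual bw >= need."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and bw[(u, v)] >= need:
                prev[v] = u
                queue.append(v)
    return None  # no feasible path

def embed(vn_nodes, vn_links, cpu, adj, bw):
    """Greedy VNE sketch: map virtual nodes, then virtual links, or reject.

    vn_nodes: {virtual node: CPU demand}
    vn_links: {(src, dst): bandwidth demand}
    cpu:      {physical node: residual CPU}
    adj/bw:   physical adjacency lists and residual bandwidth per directed edge
    Works on scratch copies, so a rejected demand leaves the caller's state
    untouched; on acceptance the caller deducts resources per the mappings.
    """
    cpu, bw = dict(cpu), dict(bw)
    node_map = {}
    # Place the most demanding virtual nodes first, each on the least-loaded
    # physical node that can still host it (one virtual node per physical node).
    for v, need in sorted(vn_nodes.items(), key=lambda kv: -kv[1]):
        candidates = [n for n in cpu
                      if cpu[n] >= need and n not in node_map.values()]
        if not candidates:
            return None                      # reject: no node capacity left
        host = max(candidates, key=lambda n: cpu[n])
        node_map[v] = host
        cpu[host] -= need
    link_map = {}
    for (vs, vd), need in vn_links.items():
        path = bfs_path(adj, node_map[vs], node_map[vd], bw, need)
        if path is None:
            return None                      # reject: no feasible path
        for u, w in zip(path, path[1:]):
            bw[(u, w)] -= need               # consume bandwidth along the path
        link_map[(vs, vd)] = path
    return node_map, link_map
```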
State-of-the-art VNE algorithms can be classified as static or dynamic. Static approaches do not consider remapping some VNs to improve the embedding cost as a function of changing network states, while the dynamic approach allows it, taking into consideration several metrics such as network resource fragmentation. In fact, over time, some VNs can expire and release their resources, which causes fragmentation; this fragmentation can increase the rejection rate of future VN embedding requests. Hence, the dynamic approach allows better network resource allocation efficiency. In this context, the authors of [46] have shown that most VN request rejections can be due to bottlenecked links.


Reformulation of the optimization problem

Recently, modeling an optical network design problem as a facility location problem, the authors of [104] extended a repeated matching heuristic described in [102, 103] to solve the Single-Source Facility Location Problem (SSFLP) and showed that it can reach good optimality gaps for many large instances of the problem.
Motivated by those results and by basic similarities with the problem in [104], we redesign our DCN optimization problem as a repeated matching problem. With respect to the network context of [104], we have more matching sets, with peculiar constraints due to fundamental differences between optical networks and DCNs.
The double mapping we have to handle in our problem and the multiple capacity constraints we have to respect (on both the link and VM container sides) make this problem more combinatorial than the one in [104], so that, as in the previous applications [102-104], comparison against the optimum is not possible.
In our DCN scope, communications take place between VMs that can be hosted behind the same VM container or behind distant containers interconnected by a DCN path. External communications can be modeled by introducing fictitious VMs and VM containers acting as an egress point, if needed. When multipath is enabled, multiple paths can be used toward the same destination, and when virtual bridging is enabled, a VM container can transit external traffic if the topology supports it. When communicating VMs are not collocated, inter-VM communication involves a pair of containers and at least one DCN path between them. Let a VM container node pair be designated by cp ∈ T_C, so that cp = (c_i, c_j), i.e., a container pair is composed of two containers c_i and c_j. When the source (c_i) and the destination (c_j) of the container pair cp are the same (c_i = c_j), it is called a recursive container. A subset of container node pairs is designated by D_C, so that D_C ⊆ T_C. Let the k-th path from RB r_1 to RB r_2 be designated by rp = (r_1, r_2, k). A set of RB paths is designated by D_R, so that D_R ⊆ T_R.
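For illustration only, the sketch below shows the core operation that a repeated matching heuristic iterates: given a pairwise cost function over the current elements (abstract "kits" standing in for VMs, container pairs, and RB paths), perform one matching round and merge the matched elements. The actual cost matrix construction (detailed in Appendix .1) and set definitions are richer; everything here, including the greedy stand-in for the assignment solve, is a simplified assumption.

```python
import math

def one_matching_round(kits, cost):
    """One round of a repeated matching heuristic (simplified sketch).

    kits:  current elements to be matched.
    cost:  cost(a, b) -> merging cost, or math.inf when pairing a with b
           violates a capacity or topology constraint.
    Greedily merges the cheapest feasible pairs; the full heuristic instead
    solves an assignment problem over the cost matrix and repeats the round
    until no merge improves the objective.
    """
    ranked = sorted(
        (cost(a, b), i, j)
        for i, a in enumerate(kits)
        for j, b in enumerate(kits)
        if i < j
    )
    used, merged = set(), []
    for c, i, j in ranked:
        if c < math.inf and i not in used and j not in used:
            used.update((i, j))
            merged.append((kits[i], kits[j]))  # matched pair becomes one kit
    singles = [k for i, k in enumerate(kits) if i not in used]
    return merged, singles
```

Each merged pair is treated as a single kit in the next round, so successive rounds progressively aggregate VMs onto containers and container pairs onto paths until the matching stabilizes.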

Table of contents :

Remerciements
Abstract
Résumé en Langue Française
Contents
List of Figures
List of Tables
Acronyms
1 Introduction 
1.1 Server Virtualization
1.1.1 Virtualization: historical facts
1.1.2 Hypervisor or Virtual Machine Monitor Architecture
Hosted Mode
Native Mode
1.2 The Dawn of Cloud Networking
1.2.1 Software Defined Networking
1.2.2 Network Functions Virtualization
2 Related Work 
2.1 Data Center Fabrics
2.1.1 Topologies
2.1.2 Ethernet Switching
Enhancements to IEEE 802.1D
Ethernet Carrier Grade
Toward Ethernet Routing
Cloud Network Overlay Protocols
2.2 Data Center Network Optimization
2.2.1 Virtual Network Embedding
2.2.2 Virtual Machine Placement
2.2.3 Traffic Engineering
Explicit routing
2.2.4 Traffic Fairness
3 Virtual Machine Placement 
3.1 Introduction
3.2 Optimization Problem
Enabling multipath capabilities
Enabling virtual bridging
3.3 Heuristic approach
3.3.1 Reformulation of the optimization problem
3.3.2 Matching Problem
3.3.3 Steps of the repeated matching heuristic
3.3.4 Time complexity
3.4 Simulation results
3.4.1 Traffic model
3.4.2 Energy efficiency considerations
Optimization goal sensibility with respect to multipath forwarding features
Optimization goal sensibility with respect to virtual-bridging features
3.4.3 Traffic engineering considerations
Optimization goal sensibility with respect to multipath forwarding features
Optimization goal sensibility with respect to virtual-bridging features
3.5 Summary
4 Traffic Fairness of Data Center Fabrics 
4.1 Introduction
4.2 Problem Formulation
4.2.1 Linear approximation of the objective
4.2.2 On weights wd
4.3 Performance evaluation
4.3.1 Study cases
4.3.2 Results
Throughput allocation
Traffic distribution
Path Diversity
4.4 Summary
5 Virtual Network Function Placement and Routing 
5.1 Introduction
5.2 Background
5.3 Network Model
5.3.1 Problem Statement
5.3.2 Mathematical Formulation
5.3.3 Multi-objective math-heuristic resolution
5.3.4 Possible modeling refinements and customization
5.4 Results
5.4.1 TE vs. TE-NFV objectives sensibility
5.4.2 Sensibility to the latency bound
5.4.3 Bufferized vs bufferless VNF switching mode
5.5 Summary
6 Conclusion and Perspectives 
Publications
I Appendix
.1 Matching cost matrix computation
References 
