Distributed algorithms for the allocation of resources: state of the art and taxonomy


The convergence of telecommunications and computer networks

The first telephone networks, retroactively called the Plain Old Telephone Service (POTS) and introduced at the end of the nineteenth century (1876), relied on analogue transmission. They were initially used for interpersonal voice calls; the most famous image of this is the very first call, made by Alexander Graham Bell to his assistant Thomas Watson. Almost a century later, the first generation of cellular networks (1G) appeared in the late 1970s in Japan; it was also based on analogue transmission between the handsets and the radio towers. Figure 2.1 gives an overview of the evolution of networks.
Interpersonal voice calls remained the main service offered until these networks started to be replaced by networks based on digital transmission, even if other services, such as fax, started to appear. Fixed telecom networks moved to the Integrated Services Digital Network (ISDN) standard, first deployed in 1988 in the United States. ISDN made it easier to offer additional services such as voicemail, conference calls and telephone exchanges for home and enterprise customers.
Mass adoption of cellular networks started in the 1990s when the Global System for Mobile Communications (GSM) standardised the 2nd Generation (2G) of mobile networks, which relies on digital transmission. At launch, 2G allowed interpersonal voice calls using a circuit-switched network similar to fixed networks. In circuit-switched networks, a circuit is dedicated to the communication between the two endpoints.
The idea to interconnect computers arose in the late 1950s and was enabled by the conception of new packet-switched networks. In packet-switched networks the data is grouped in packets, and each packet has a header that allows the identification of the recipient and of the route to it. In the late 1960s the Defense Advanced Research Projects Agency (DARPA) began working on what would eventually become the Internet to interconnect computer networks. The IP and TCP protocols were both described in 1974 and are still the backbone of the Internet. In 2020 most of the Internet still relies on version 4 of the IP protocol (IPv4), which has been around since 1981 [Pos81].
At this point telecom networks, mainly used for voice calls, and computer networks relied on very different technologies. Even when telecom networks moved to digital transmission, they still used a whole set of network protocols different from the TCP/IP stack of computer networks. However, these networks could use common physical layers, and it became more and more common for them to share copper or fibre optic lines. The Digital Subscriber Line (DSL) technology made it possible to use the same copper pair to transmit both ISDN and DSL data, by using different frequencies for each. DSL was introduced in the late 1990s, once its cost allowed it, to transmit data between subscribers, most notably to provide Internet access. Similarly, wireless and fixed telecom networks could share parts of a common backbone: the wireless components of the Radio Access Network are only some of the parts of a cellular network and can be connected to other nodes of the network with landlines.
Efforts to standardise how to make voice calls using computers and computer networks started in the 1990s and resulted in the ITU Telecommunication Standardisation Sector (ITU-T) H.323 [ITU96] and Internet Engineering Task Force (IETF) Session Initiation Protocol (SIP) [HSS99] protocols as the first standards for Voice over IP (VoIP), respectively in 1996 and 1999.
Access to the Internet from cellular networks was introduced with the General Packet Radio Service (GPRS) in 2000 on 2G networks. This introduced an additional packet-switched network in mobile networks, which were only circuit-switched before that. The next major evolution appeared with 4G networks. 4G was the first cellular network technology to introduce VoIP on the packet-switched network, with Voice over LTE (VoLTE). This allowed future telecommunications networks to rely only on packet switching for both voice and data services. The first commercial VoLTE service was launched in 2012 in Dallas (Texas, United States) [Inc12].
Slowly, computer and telecommunications networks have converged to use common network technologies. All communications have moved, or are moving, to IP networks, and circuit-switched networks are becoming an artefact of the past. The latest generation of telecommunication networks, 5G, now tries to consolidate all of these evolutions to enable VoIP communications and access to IP networks from mobile phones as well as other endpoints such as computers and Internet of Things (IoT) sensors.
This evolution can also be seen in the network infrastructures. Even if telecom networks have always included computing elements (processors, memory, etc.), they used to be based on dedicated hardware due to the specificity of the protocols and services that were used only in the telecom world. The main service node of the 2G/3G voice networks is the Mobile Switching Center (MSC). A typical MSC from the 1990s or 2000s was composed of a large number of specialised processors, such as Digital Signal Processors (DSPs) for signal processing. New network functions can for the most part be run on what is referred to as Commercial Off-The-Shelf (COTS) hardware, the same type of servers that are used by other industries requiring computing power. COTS hardware is mostly based on x86 processors.

Evolution of the architecture of services

Since the time when telecommunications and computer networks started to rely on the same IP technologies and COTS hardware, their infrastructures have evolved and started to converge too. In 2012 the European Telecommunications Standards Institute (ETSI), an industry forum composed of network operators, standardised NFV [ISG14] to leverage the recent evolutions in computing and IP networking infrastructures: the virtualisation of computing resources, the programmability and management functions offered by the Cloud Computing paradigm, and Software-Defined Networking (SDN). This evolution was introduced during the lifespan of 4G, but 5G, the fifth generation of mobile networks, is the first to include these evolutions from the start.
Telecommunication operators, or network service providers, host NFV infrastructures that are composed of:
Chains of Virtual Network Functions (VNFs): a Network Function is the unitary component of an NFV network. Common VNFs can be a switch, a router, a firewall or other security equipment. In the case of network service providers these functions are standardised by the 3rd Generation Partnership Project (3GPP); for example, in 3G and 4G networks the Home Subscriber Server (HSS) is the main user database. These functions are called VNFs in the NFV context because they are virtualised.
The technology's capabilities mean that a large number of VNFs can be connected together in an NFV environment. Because these connections are made in software using virtual circuits, they can be set up and torn down as needed with service chain provisioning through the NFV orchestration layer.

SDN

Cloud Computing offers abstractions of computing infrastructures through the exposition of APIs by the Infrastructure as a Service (IaaS) orchestrator. Likewise, SDN offers abstractions of network resources and network services through a NorthBound API exposed by an SDN controller. The origins of SDN lie in the observation [Kre+15; Ope16] that IP networks are vertically integrated by vendors in their hardware middle-boxes (switches, routers, etc.): both the control plane and the forwarding plane (also called data plane, or user plane) are bundled. The control plane is the part of the network component in charge of the service logic. The forwarding plane is the part in charge of forwarding the network packets. The acronym SDN first appeared in a 2009 article [Gre09] to define the works done at Stanford University that led to the OpenFlow protocol, as described by McKeown et al. in their seminal white paper in 2008 [McK+08]. They consider that this strong integration has hindered innovation and the evolution of networking architectures, and has made managing IP networks increasingly complex. Kreutz et al. [Kre+15] cite the very slow transition from IPv4 to IPv6 as an example of these limitations.
The SDN controller is the main component of the control plane. It relies on the SouthBound API to communicate with the forwarding plane, which is composed of physical nodes such as switches. SDN applications can interact with the SDN controller using the NorthBound API. A sample SDN architecture, taken from [Wic+15], is shown in Figure 2.3.
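The control/forwarding split can be sketched in a few lines. The classes and rule format below are hypothetical simplifications, not OpenFlow or any real controller API: the controller holds the service logic and installs match-action rules through a southbound call, while the switch only applies them to packets.

```python
# Minimal sketch of the SDN split (hypothetical classes, not OpenFlow).

class Switch:
    """Forwarding plane: applies rules, holds no service logic."""
    def __init__(self):
        self.flow_table = []                    # (match_fn, action) pairs

    def install_rule(self, match, action):      # "southbound" interface
        self.flow_table.append((match, action))

    def forward(self, packet):
        for match, action in self.flow_table:
            if match(packet):
                return action
        return "drop"                           # default: no matching rule

class Controller:
    """Control plane: decides which rules the switches should apply."""
    def __init__(self, switches):
        self.switches = switches

    def allow_subnet(self, prefix, port):       # "northbound"-style request
        for sw in self.switches:
            sw.install_rule(lambda p, pre=prefix: p["dst"].startswith(pre),
                            f"out:{port}")

sw = Switch()
Controller([sw]).allow_subnet("10.0.", 1)
print(sw.forward({"dst": "10.0.0.7"}))     # out:1
print(sw.forward({"dst": "192.168.1.2"}))  # drop
```

A real controller would of course speak a standard southbound protocol to many heterogeneous switches; the sketch only illustrates where the logic lives.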
Google's WAN is deployed using SDN, based on OpenFlow as the SouthBound API and a modified version of Onix [Kop+10] as the SDN controller. The driver for this solution was that typical WANs are over-provisioned and have a 30 to 40% average utilisation. This over-provisioning allows higher resilience to network failures, but at the cost of two to three times the necessary bandwidth and equipment. Considering the massive traffic Google has to deal with, which grows faster than the traffic on the Internet, they chose this solution to be cost-efficient. Many of the links of this WAN run at near 100% utilisation and all links average 70% utilisation over long time periods, an improvement in efficiency of two to three times.

Network slicing

The 5G standard introduced the concept of network slicing [3GP16]. It is a concept independent of the notion of chains of VNFs introduced above. A slice is a virtual network composed of a set of VNFs built on top of one or several NFV infrastructures. The slicing concept was first introduced by the Next Generation Mobile Networks (NGMN) Alliance in January 2016 [All16]. Since then, several Standards Developing Organisations (SDOs) and industry fora have launched works to analyse the use cases and impact of network slicing. The concept is not completely new: 3GPP first introduced a concept of end-to-end slicing of 4G mobile networks in their Release 13 as DEdicated CORe network (DECOR).
Figure 2.4 is adapted from the architecture introduced by the NGMN for network slicing. On top of a single NFV infrastructure, four slices use different subsets of the available VNFs. Slice 1 uses one VNF (in pale red); slice 2 also uses one VNF (in orange). Slices 3 and 4 both use chains of three VNFs. In turn these slices are used by Service Instances, i.e., instances of services offered to customers. One Service Instance runs on slice 1. Two different Service Instances run on slices 2 and 3. Two different Service Instances, or two instances of the same service, run on slice 4. This allows five different Service Instances to run on a single NFV infrastructure.
The resources are exposed by different APIs (Cloud, SDN, NFV MANO, etc.) that can be used simultaneously by different services or users. This is a shift from traditional telecom networks, where resources are often used by a single service. NFV introduced the possibility for multiple functions to share the same infrastructure, but slicing also allows multiple end-to-end services to share infrastructures.

Multi-domain services

This section first defines the different possible architectures for services that are split across multiple domains and then introduces a recent use case.

Multi-domain architectures

The standard architectures for Cloud, SDN or network slicing focus on infrastructures within a single domain. There are multiple reasons for which infrastructures might need to be deployed across multiple domains. The first is scalability: a single domain has reached its maximum capacity and the only way to add more capacity is to add another domain. Infrastructures can also be split into multiple domains for resilience or security, for example to remove or isolate a compromised domain in case of an incident or an attack and still be able to provide the service to the user. Finally, a service may need to leverage services from multiple providers, each in their own domain.
A domain is a set of resources with its dedicated manager. A provider can have one or more domains. For example a domain in the context of SDN is the topology and set of resources managed by one SDN controller. In the case of a Cloud it is the set of resources managed by an orchestrator, e.g., an instance of OpenStack. For a network it can be a Point of Presence (PoP), a set of VNFs in a small data centre on the border of the network.
An architecture is multi-domain when it involves multiple domains. The domains can all belong to one provider or be split between different providers. Sung et al. [Sun+16] give some insight on networks consisting of multiple domains, stating that large Internet-facing services such as Facebook are often hosted on a network of networks, where each sub-network has unique characteristics. Facebook's network consists of edge PoPs, a backbone, and data centres. The devices, topology and management tasks vary per sub-network. Yet all of them must be configured correctly in order for the entire network to work.
Figure 2.5 shows an example of a multi-domain end-to-end architecture for network slices across three network service providers (NSPs) A, B and C. Provider A is composed of three domains A1, A2 and A3. Providers B and C are both composed of a single domain. A domain here is an NFV infrastructure, either with virtualised computing, storage and network resources orchestrated by a Cloud Orchestrator (e.g., OpenStack) and an SDN Controller (A1, A2 and C1), or with only network resources orchestrated by an SDN controller (A3 and B1). There are two 5G slices. Slice 1 runs 4 VNFs that are hosted on 4 VMs in domains A1, A2 and C1. Slice 2 runs 2 VNFs that are hosted on 2 VMs in domains A2 and C1. Orthogonal to the slices is a management plane with specialised orchestrators that communicate with the orchestrators of the various domains to manage these VNFs.
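The structure of such slices can be captured by a small data model. The sketch below is illustrative: the exact distribution of VMs across domains is an assumption, not a reproduction of Figure 2.5; it only shows how a management plane can derive which domains, and therefore which orchestrators, a slice spans.

```python
# Illustrative model: each slice lists the domain hosting each of its VNFs.
# (The VM-to-domain distribution here is an assumption for the example.)
slices = {
    "slice1": ["A1", "A2", "A2", "C1"],   # 4 VNFs across domains A1, A2, C1
    "slice2": ["A2", "C1"],               # 2 VNFs across domains A2, C1
}

def domains_of(slice_name):
    """Distinct domains a slice spans, in order of first appearance.
    These are the domain orchestrators the management plane must contact."""
    return list(dict.fromkeys(slices[slice_name]))

print(domains_of("slice1"))  # ['A1', 'A2', 'C1']
print(domains_of("slice2"))  # ['A2', 'C1']
```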


A 5G multi-domain use case: eHealth telemedicine

The European Commission launched the 5G Infrastructure Public Private Partnership (5G PPP) [PPP] joint initiative with the European industry. The 5G PPP identified several vertical industry sectors which provided their needs and potential barriers for adoption: automotive, manufacturing, media, energy, health, public safety and smart cities. As part of the projects funded by this initiative, several use cases for these verticals have been detailed. This section focuses on one in particular: eHealth in-ambulance telemedicine [Ale17] from the SliceNet 5G PPP project.
In this use case an ambulance serves as a connection hub for the devices inside. A connected device, glasses in this specific use case, is present in the ambulance and connects to the emergency department team at the destination hospital. The glasses are used by the paramedics present in the ambulance to assist a patient and send a real-time video stream of what they see to the emergency team at the hospital. The requirements expressed for this use case include enhanced Mobile BroadBand (eMBB), to support the bandwidth (around 10 Mbps) required by the high-definition video stream, as well as ultra-Reliable Low-Latency Communications (uRLLC) due to patient safety factors (10 ms peak-to-peak jitter, end-to-end latency below 100 ms, less than 0.05% packet loss and 99.999% reliability).
Figure 2.6 shows that this use case involves four network service providers (NSPs) [Eur18]. Providers NSP1 and NSP3 provide connectivity with their Radio Access Networks (RANs) and host the network services for the ambulance in their Core and Mobile Edge Computing (MEC) platforms. Two providers are necessary to have sufficient coverage of the RANs. Provider NSP2 may be an ISP that provides the Wide-Area Network that connects the other providers; it can also host other VNFs or management functions for the end-to-end use case, e.g., it can configure QoS parameters between the domains. Provider NSP4 is the enterprise domain for the hospital and its computing infrastructure as well as the devices present in the ambulance.

Resource allocation problems in networks

This thesis focuses on the problem of allocating resources in networks for multi-domain use cases. Networks rely on multiple kinds of resources, and there are multiple known problems for allocating these resources. The first section introduces the problem statement and some formalism. Then Section 2.4.2 presents centralised solutions for the allocation of VNFs in NFV. Section 2.4.3 shows with an experiment why centralised solutions are not always suited for multi-domain use cases. The last section introduces the distributed approach that is developed in the next chapters.

Problem statement and model

Resource model The left part of Figure 2.7 shows a system with three network service providers (NSPs). Three types of resources are available: an Intrusion Detection System (IDS), a Firewall (FW) and a Load-Balancer (LB). There are three instances of each type of resource, distributed across the three providers.
In an example chain of VNFs described by the ETSI NFV Working Group [ISG13], packets need to traverse an IDS, a FW and an LB.
In the system introduced above, a request Req1 can be noted Req1 = Req(n1, [c3, c2, c1]): n1 is the requesting node, and three types of resources c1, c2 and c3 are requested. The request order is c3 < c2 < c1, i.e., first c3, then c2 and finally c1.
Allocation of a request The problem addressed is to allocate the requests, i.e., select instances of the requested types of resources in the requested order and allocate them to the requesting node. When all the selected instances are allocated, the requesting node can enter its Critical Section (CS) and start using the resources. Figure 2.8 shows a selection of instances for Req1: the instances on nodes n8, n7 and n5 are selected, and the requesting node n1 can use these resources in that order.
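The request and allocation model above can be sketched as follows. The instance placement is illustrative (it does not reproduce Figure 2.7 exactly), and a real algorithm would optimise the choice of instances and release partial allocations on failure rather than take the first free instance:

```python
from dataclasses import dataclass

@dataclass
class Request:
    node: str         # requesting node, e.g. "n1"
    types: list[str]  # resource types in request order, e.g. ["c3", "c2", "c1"]

# Instances available per resource type, spread across providers
# (illustrative data, not the exact layout of Figure 2.7).
instances = {
    "c1": ["n5", "n6", "n9"],
    "c2": ["n4", "n7", "n8"],
    "c3": ["n2", "n3", "n8"],
}

def allocate(req, instances, busy):
    """Select one free instance per requested type, in request order.
    Returns the selected instances, or None if some type has no free
    instance (the request must then wait)."""
    selected = []
    for rtype in req.types:
        free = [i for i in instances[rtype] if i not in busy]
        if not free:
            return None       # blocked: a real algorithm would also release
        choice = free[0]      # naive choice; heuristics could do better
        busy.add(choice)
        selected.append(choice)
    return selected

req1 = Request("n1", ["c3", "c2", "c1"])
print(allocate(req1, instances, set()))  # ['n2', 'n4', 'n5']
```

Once `allocate` returns a full selection, the requesting node can enter its Critical Section and use the instances in that order.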

Solutions for the resource allocation problem

Telecommunications networks and computing infrastructures often have a management function. For NFV architectures, ETSI introduced the MANagement and Orchestration (MANO) architectural framework [ISG14]. An SDN network relies on an SDN controller. Cloud Computing IaaS relies on an orchestrator. This function is often centralised, i.e., a single instance manages all the resources of the domain.
Among the roles of the MANO is the placement of VNFs in the NFV infrastructure. When managed by a centralised orchestrator, most of these problems are NP-hard variants of the bin packing problem. Bari et al. [Bar+15] proposed an Integer Linear Programming (ILP) formulation to optimise the operational costs and usage while respecting the Service Level Agreements (SLAs). Jia et al. [Jia+16] also try to minimise operational costs and propose an online algorithm, i.e., an algorithm that handles requests as they arrive, to address the problem, which they classify as the multiple-knapsack problem, a problem known to be NP-hard. Carpio et al. [CDJ17] address the problem of placing VNFs with replicas to load-balance the network functions and try three optimisation methods (Linear Programming, a Genetic Algorithm and a Random Fit Placement algorithm).
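To make the bin-packing view concrete, here is a minimal first-fit sketch. It is an illustration, not any of the cited formulations: servers are bins with a CPU capacity, VNFs are items with a demand, and each arriving VNF is placed on the first server with enough residual capacity, as an online algorithm would.

```python
# First-fit placement of VNFs on servers, seen as online bin packing.

def first_fit(capacities, demands):
    """Map each VNF (by index in `demands`) to a server index,
    or None when no server has enough residual capacity."""
    residual = list(capacities)           # remaining capacity per server
    placement = []
    for d in demands:
        for s, r in enumerate(residual):
            if d <= r:                    # first server that fits wins
                residual[s] -= d
                placement.append(s)
                break
        else:
            placement.append(None)        # request cannot be placed
    return placement

# Three servers with 8 CPU cores each; five VNFs with varying demands.
print(first_fit([8, 8, 8], [5, 4, 3, 6, 2]))  # [0, 1, 0, 2, 1]
```

First-fit is a classic approximation for this NP-hard problem; the ILP and metaheuristic approaches cited above trade this simplicity for better (offline) solutions.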
One sub-problem of the placement of VNFs on an NFV infrastructure is the placement of VMs on Cloud infrastructures. Several surveys [JS15; Wid+12] analysed VM placement techniques aiming at improving the way VMs are placed on the baremetal machines that compose Cloud infrastructures. Mills et al. [MFD11] compared 18 VM placement algorithms inspired by the bin-packing literature. They conclude that the choice of the algorithm for VM placement had no significant impact, but that the algorithm for the selection of the cluster, i.e., a subset of servers in the infrastructure with their own properties, has the most significant impact on resource optimisation. The objective of VM placement is usually to reduce cost for a customer; however, Feng et al. [Fen+12] addressed it from the point of view of the provider and considered optimising the revenue as their objective. Mijumbi et al. analyse in [Mij+16] some of the existing projects related to NFV MANO. They concluded that current projects focus on centralised solutions, which pose scalability limitations. Centralised solutions are usually offline, i.e., they do not handle requests as they arrive but compute a solution once a set of requests has been received. This is because either the method used, or the cost and duration of the computation, does not allow the online handling of requests.
All of the above techniques, and many more, fit centralised architectures. Most of them handle requests offline and do not scale to large numbers of resources due to their computation costs. They do not consider the multi-domain architectures described in Section 2.3.

Problems of concurrent resource allocation from multiple providers

Multi-domain use cases require an orchestration of the resources of all domains. In some cases it can be possible to have a centralised orchestrator that manages the resources of all the domains. However, such a solution is not always available or possible.
The rest of this section details an example of one of the problems that can arise when multiple centralised orchestrators are involved, each managing only the resources of its own domain. The example shows the problem for a multi-domain SDN service.
Allocation use case This section introduces a problem faced by clients when allocating resources distributed across multiple SDN providers. The content is adapted from a paper originally published at the 21st Conference on Innovation in Clouds, Internet and Networks and Workshops (ICIN) in 2018 [Fra+18b].
One of the promises of SDN is the programmability of networks through APIs. These APIs allow different users to access the network concurrently, leading to the allocation of dedicated resources by a given domain, and let users integrate the services exposed by domains in their applications.
A user is an application or a person (e.g., system administrator) that uses NorthBound APIs to interact with the SDN domains.
Consider two providers with no prior knowledge of each other that cannot share information. No assumption is made on the architecture of the SDN controllers themselves: they can be either centralised (like NOX [Gud+08]) or distributed (like Onix [Kop+10] or the Open Network Operating System (ONOS) [Ber+14]).
Figure 2.9 illustrates a use case with two users, Alice and Bob. They both want to allocate one resource from each of the two providers. Depending on how their requests are processed, four results are possible.
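A minimal simulation (a hypothetical model, not the paper's formalism) shows how one of these results, a circular wait, can arise when each provider grants its single resource independently to the first requester and users acquire in opposite orders:

```python
# Each provider holds one resource; a second request on a busy provider blocks.

def try_interleaving(steps):
    """Replay a sequence of (user, provider) requests.
    Returns (owner, waiting): who holds each provider's resource,
    and which users ended up blocked."""
    owner = {}        # provider -> user holding its resource
    waiting = set()   # users blocked on a busy provider
    for user, provider in steps:
        if user in waiting:
            continue              # a blocked user issues no more requests
        if provider not in owner:
            owner[provider] = user
        else:
            waiting.add(user)     # provider already allocated: user blocks
    return owner, waiting

# Alice asks X then Y; Bob asks Y then X; the requests interleave.
# Each user gets one resource and blocks on the other: a deadlock.
owner, waiting = try_interleaving(
    [("alice", "X"), ("bob", "Y"), ("alice", "Y"), ("bob", "X")])
print(owner)    # {'X': 'alice', 'Y': 'bob'}
print(waiting)  # {'alice', 'bob'}
```

With other interleavings one user obtains both resources while the other waits, or, if the requests do not overlap, both users succeed in turn; neither provider alone can detect the deadlocked outcome since each only sees its own domain.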

Table of contents:

1 Introduction 
1.1 Context and motivation
1.2 Contributions
1.3 Structure of this manuscript
1.4 Publications
2 Background and problem statement 
2.1 The convergence of telecommunications and computer networks
2.2 Evolution of the architecture of services
2.3 Multi-domain services
2.4 Resource allocation problems in networks
2.5 Conclusion
3 State of the art 
3.1 Definition and model for distributed resource allocation
3.2 Distributed algorithms for the allocation of resources: state of the art and taxonomy
3.3 Performance evaluation and comparison
3.4 Conclusion
4 A distributed algorithm for the allocation of resources 
4.1 Variables of nodes and messages
4.2 Path computation
4.3 Allocation
4.4 Examples
4.5 Heuristics
4.6 Algorithm Complexity
4.7 Conclusion
5 Performance analysis 
5.1 Metrics and reference configuration
5.2 Experimental environment
5.3 Systems with one instance of n types of resources
5.4 System with m instances of n types of resources
5.5 Computing the expected value for the Average Usage Rate
5.6 Conclusion
6 Experimental comparison with state of the art algorithms
6.1 System setup
6.2 Dijkstra’s Incremental algorithm
6.3 Chandy-Misra Drinking Philosophers Problem (DrPP) algorithm
6.4 Rhee’s algorithm
6.5 Bouabdallah-Laforest algorithm
6.6 Summary
6.7 Conclusion
7 Conclusion 
7.1 Contributions
7.2 Limitations and future work

