Offloading computing tasks beyond the edge in a cell 

Mobile Cloud Computing

Computation offloading is a core mechanism that allows mobile users to migrate computation- and resource-intensive tasks from the mobile device to an external provider of computing resources. It addresses the inherent limitations of mobile computing by exploiting the scalable resources of cloud servers without time and space constraints [30]. Mobile devices can thus run compute-intensive applications by relying on the cloud's computing and storage capabilities, a usage that gave rise to mobile cloud computing (MCC).
Since its formal definition and standardization, cloud computing (CC) has evolved dramatically and has been widely deployed over the past decade. Softwarization and virtualization technologies centralize resource management and provide flexible supplies of processing power, storage space, and network resources across distributed locations. CC resources are therefore promising to support the emergence of diverse and sophisticated mobile applications, driven by the progress of cellular networks and the growing capacity of mobile devices. However, the MCC framework differs from the traditional CC paradigm. MCC envisions service provisioning through a collection of mobile-aware computational and information services. Awareness in a mobile computing environment covers various features such as energy consumption evaluation, location, interest awareness, wireless network cost estimation, and trace prediction. The MCC paradigm combines CC with mobile computing to provide cross-platform resource provisioning at a significantly lower cost. Beyond its original purpose of overcoming battery constraints, it brings many advantages to mobile devices. We summarize the main benefits of MCC [31, 32, 33] as follows:
• Extending battery lifetime by offloading the energy-consuming computations of applications to the Cloud.
• Enabling sophisticated applications for mobile users.
• Offering mobility through location-independent computation.
• Providing higher data storage capabilities to the user.
However, the considerable distance between end devices and the remote cloud infrastructure causes several issues:
• High latency: In MCC, data created on end devices must be sent to the Cloud to be processed, and the results must then be returned to the end devices.
• Performance depends on network conditions: The more network hops between the end device and the Cloud, the more the network state degrades, which harms the user experience.
• Network congestion: Billions of Internet-connected devices are already in use and their number keeps growing; the traffic they generate becomes a burden on the network backhaul.
In addition, several factors can reduce the benefit of offloading, especially the limited bandwidth between mobile devices and the distant Cloud and the amount of data transmitted between them. Researchers have proposed many techniques for optimizing offloading strategies to improve computational performance and save energy. For example, users can decide to offload some or all of their computations to cloud servers for data processing instead of using only the mobile device [34, 35]. Barbera et al. proposed that time-sensitive tasks should be executed in the cloudlet, while tasks with more flexible time requirements may be executed in the Cloud [36]. Liu et al. demonstrated that computation-intensive tasks are a better fit for cloud execution, whereas high-volume tasks with lower computing demands are suitable for execution on the user equipment [37]. Despite these improvements, MCC fails to fulfill the low-latency requirements of emerging mobile applications because of this bandwidth limitation. Moreover, the delay induced by the distant Cloud is prohibitive for time-sensitive applications such as online mobile gaming, augmented reality, and face and speech recognition. To solve the high-latency issue, cloud services therefore need to move closer to the end-users. The emerging paradigm answering this need is computing at the edge of the mobile network.
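To make this trade-off concrete, the following Python sketch compares local execution with offloading to a distant cloud using a simple delay model. It is only an illustration under assumed, hypothetical parameters; the CPU speeds, uplink bandwidth, and round-trip time are illustrative values, not figures from the cited works.

def local_delay(cycles, f_local):
    # Time (s) to run the task entirely on the mobile device.
    return cycles / f_local

def cloud_delay(cycles, f_cloud, data_bits, bandwidth_bps, rtt_s):
    # Time (s) to upload the input data, run the task remotely,
    # and account for the round trip to a distant cloud.
    return data_bits / bandwidth_bps + cycles / f_cloud + rtt_s

def should_offload(cycles, data_bits, f_local=1e9, f_cloud=10e9,
                   bandwidth_bps=5e6, rtt_s=0.1):
    # Offload only when the remote path beats local execution.
    return (cloud_delay(cycles, f_cloud, data_bits, bandwidth_bps, rtt_s)
            < local_delay(cycles, f_local))

# A compute-heavy task with little input data benefits from offloading,
# while a data-heavy task with modest computation does not.
print(should_offload(cycles=5e9, data_bits=1e6))   # True: compute-bound
print(should_offload(cycles=5e8, data_bits=5e8))   # False: data-bound

The example reflects the point made above: the larger the data volume relative to the available bandwidth, the smaller the benefit of offloading.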

Multi-access edge computing

As one of the most promising technical approaches to improve the effectiveness of cloud computing, multi-access edge computing (MEC) has attracted considerable attention recently. MEC shifts cloud services to the radio access network and provides cloud-computing capabilities closer to the mobile users [2, 5, 38, 39, 40]. This proximity ensures low-latency connections in the computation offloading process, thus improving the quality of service expected by users [2, 38]. Furthermore, MEC performs computational tasks and supports offloading data from mobile applications and services near the end-user (see Figure 2.1). As a result, a considerable body of research on MEC computation offloading in cellular networks has appeared in recent years. System performance gains, such as reducing energy consumption or system cost by optimizing offloading decisions and allocating resources effectively, are the main target of MEC technology. In addition, recent work enhances MEC with many features and use cases such as connected vehicles, video streaming, video gaming, large-scale data analysis, and healthcare [14, 41, 42].
Various architectures based on similar concepts have been introduced to tackle these challenges. Edge computing has been proposed to provide an intermediate layer between mobile devices and the cloud. This layer is implemented with slight differences depending on the usage scenarios and locations. The literature distinguishes three leading edge-computing architectures: the cloudlet, fog computing, and multi-access edge computing.
The concept of cloudlet was proposed by an academic team from Carnegie Mellon University, and the term “cloudlet” was first coined by Satyanarayanan et al. [43]. A cloudlet is considered an extension of the cloud, as depicted in Figure 2.2. It is a trusted and resource-rich node with stable Internet connectivity that offers computing, access, and storage resources to nearby mobile devices. The cloudlet concept can be described as a small-box data center relying on a virtual-machine-based approach. A cloudlet is usually deployed one wireless hop away from mobile devices, in public places such as hospitals, shopping centers, and office buildings. Cloudlets are presented as a promising way to avoid the wide-area network latency and the cellular energy consumption incurred by reaching the remote cloud over cellular data connections. Nevertheless, a cloudlet relies on a solid Internet connection since it uses technologies such as Wi-Fi and sits one or several hops away at the Internet’s edge. Furthermore, specific security and privacy challenges make consumers reluctant to use privacy-sensitive services, such as e-commerce websites, through it. Finally, since cloudlets can be deployed independently in different locations, managing them is difficult.
On the other hand, the concept of fog computing, also known as “fog networking” or “fogging”, was introduced by Cisco mainly to support Internet of Things applications [44, 45]. Fog computing uses edge devices such as switches and routers to perform offloading functions within wireless networks rather than on separate servers. Moreover, in both the cloudlet and fog models, the edge server can be owned by anyone. Fog computing also has limitations because it relies on wireless connections that need to stay live to perform complex actions. The terms fog computing and MEC are often used interchangeably, but they differ in some respects. We show the main differences between fog computing and MEC in Table 2.1.
Finally, the third approach, multi-access edge computing, is a standardization effort led by the ETSI group. Unlike the first two implementations, the ETSI group defines algorithms and standards that describe how the MEC infrastructure is designed, deployed, and used. The essential components of the MEC architecture are mobile devices (such as smartphones, intelligent sensors, and connected vehicles), MEC servers, and wireless base stations (such as Wi-Fi access points and 3G/4G/5G base stations). Multi-access edge computing provides cloud-computing capabilities to radio access networks and hosts applications at the mobile network edge, improving the user experience. The benefits of the MEC architecture compared to traditional centralized cloud computing are as follows:
• Reduced latency: Communication latency decreases significantly because processing and storage functions are close to the end-users. As a result, the user experience benefits from ultra-low latency and high bandwidth.
• Proximity: MEC benefits latency-sensitive applications such as augmented reality and video analytics by distributing resources closer to the end-user.
• Network context awareness: Applications that provide services based on real-time network information can benefit businesses and events that incorporate MEC into their business model. Such applications can estimate radio cell utilization and network bandwidth from real-time RAN information.
• Energy efficiency: The hardware components of mobile terminals, such as the screen, CPU, and network interfaces, are energy-intensive, and battery life is one of the most crucial issues with today’s mobile devices. Because manufacturers have difficulties fulfilling this rising demand, MEC is a potential alternative: bringing cloud-computing capabilities closer to mobile terminals helps extend battery life.
• On-demand self-service: MEC can provision resources for mobile devices within seconds. However, for some complex applications and services, end-users may be required to reserve the expected computing resources in advance.
MEC is a cost-effective and highly attractive extension of cloud computing. It can dramatically improve the performance of numerous applications and services and enhance the user experience. One of the main benefits of this paradigm is to bring applications such as mobile gaming and mobile augmented reality from prototype to deployment and large-scale use. In addition, MEC provides computing resources everywhere, allowing end-users to run their applications where it works best for them. However, real-world deployment and use of MEC require tackling complex challenges. Integrating a MEC platform into a mobile network environment raises several issues related to service orchestration, primarily because of fluctuating resources and changing radio and network conditions caused by user mobility. Furthermore, a MEC system must support the management of the application lifecycle, i.e., instantiating and terminating an application, either on demand or in response to a request from an authorized third party. Orchestrating a MEC platform in terms of resource allocation and service placement is essential to ensure efficient use of network resources, optimal quality of service, and reliability. Among the use cases relevant to an efficient MEC service orchestration process, computation offloading is considered crucial from the user’s perspective: it enables the execution of sophisticated new applications from mobile devices while reducing their energy consumption.

Offloading to the MEC

Computation offloading from mobile devices to the MCC or the MEC is a well-established concept for supporting mobile devices with limited resources. It is a mechanism in which resource-constrained mobile devices fully or partially offload their computation-intensive tasks to a resource-rich cloud, have them executed in the cloud environment, and obtain the results back. As a result, it saves battery life on mobile devices and speeds up the computation process. Many works have investigated this issue in the literature, introducing new approaches with different offloading policies and algorithms. Computation offloading can be classified along three main aspects:
• Architecture aspect: This aspect defines the architecture of the MEC-enabled application. Work on this aspect defines methods and designs determining how to offload an application and which parts to offload. For example, different parts of an application, such as a task, a thread, or a class, can be offloaded to the MEC. We distinguish two categories of applications supported by MEC: single-task applications, where a single task is offloaded to the MEC, and multi-task applications, which consist of many tasks that can be executed on the MEC or partly locally.
• Resource allocation aspect: A MEC application requires two sorts of resources: one to carry its data volume and the other to process its computation. When offloading, a mobile device must therefore consider both CPU and bandwidth resources, not only the computation side, and reserve a share of the network resources to transmit its data to the MEC. Both radio transmission and computing resources are thus essential for task offloading to the MEC.
• Performance aspect: This aspect covers the measurement of offloading performance. Metrics are critical indicators for offloading decisions. A key metric is the application completion time, which spans from the moment an application starts and includes the transmission and processing time until it finishes; minimizing it is essential for effective computation offloading. Energy consumption is another relevant metric: it measures how much energy a mobile device consumes while the application is running. The goal is to keep energy consumption low in order to prolong battery life.
In sum, these three aspects reflect the primary purpose of exploring computation offloading solutions: minimizing the mobile device’s energy consumption and execution time while respecting a maximum latency limit, or, alternatively, finding the optimal compromise between power consumption and execution time, as sketched below.
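As a minimal sketch of this compromise, the snippet below scores each candidate decision with a weighted sum of energy and delay and rejects decisions that violate the latency limit. The weight, the latency limit, and the candidate figures are assumed for illustration only and are not tied to any of the cited works.

import math

def offloading_cost(energy_j, delay_s, w=0.5, delay_limit_s=1.0):
    # Weighted energy/delay cost; infeasible decisions get infinite cost.
    if delay_s > delay_limit_s:
        return math.inf
    return w * energy_j + (1.0 - w) * delay_s

def best_decision(candidates, w=0.5, delay_limit_s=1.0):
    # Pick the candidate ("local", "mec", ...) with the lowest weighted cost.
    return min(candidates,
               key=lambda c: offloading_cost(c["energy_j"], c["delay_s"],
                                             w, delay_limit_s))

candidates = [
    {"name": "local", "energy_j": 3.0, "delay_s": 0.8},
    {"name": "mec",   "energy_j": 0.9, "delay_s": 0.4},
]
print(best_decision(candidates)["name"])   # "mec" with these illustrative numbers

Setting the weight closer to 1 favors battery savings, while setting it closer to 0 favors responsiveness; the latency limit rules out decisions that miss the application deadline regardless of the weight.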
Recent studies focus on minimizing computation offloading latency. For example, Ren et al. discuss the possibility of offloading multi-user video compression applications to minimize latency [46], also taking partial offloading scenarios into account. Liu et al. use a one-dimensional search algorithm to minimize the execution delay [47]. Their algorithm finds the optimal offloading decision policy according to the mobile device’s current state, such as its buffer state, the available processing resources, and the radio link conditions between the mobile device and the MEC host.
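The sketch below illustrates the spirit of such a one-dimensional search with a plain grid search over the fraction of a task that is offloaded, assuming the task is arbitrarily divisible and that the local and remote parts run in parallel. It is a simplified stand-in, not the algorithm of [47], and all parameter values are hypothetical.

def total_delay(fraction, cycles, data_bits, f_local, f_mec, uplink_bps):
    # Delay when `fraction` of the cycles (and of the input data) is offloaded;
    # the local and remote parts are assumed to execute in parallel.
    local = (1.0 - fraction) * cycles / f_local
    remote = fraction * data_bits / uplink_bps + fraction * cycles / f_mec
    return max(local, remote)

def best_fraction(cycles, data_bits, f_local=1e9, f_mec=8e9,
                  uplink_bps=10e6, steps=100):
    # One-dimensional grid search over the offloading fraction in [0, 1].
    grid = [i / steps for i in range(steps + 1)]
    return min(grid, key=lambda x: total_delay(x, cycles, data_bits,
                                               f_local, f_mec, uplink_bps))

print(best_fraction(cycles=2e9, data_bits=4e6))   # 0.76 with this grid, near the analytic balance point of about 0.755

In a real system the search would instead account for the device state mentioned above (buffer occupancy, available processing resources, radio link quality) when evaluating each candidate decision.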
Various approaches have been proposed to minimize the total energy consumption of mobile devices while satisfying a tolerable latency constraint. Kwak et al. [48] focused on an energy minimization problem for local computation and task offloading in a single-user MEC system. Chen et al. [49] proposed a peer offloading framework for autonomous decision-making, where a pre-computed offline approach and an online learning approach are used to solve the optimization problem. Extensions from single-user to multi-user scenarios have also been investigated [50, 51]. Regarding task offloading, Tong et al. [52] designed a hierarchical edge computing architecture according to the distance between the edge servers and the users and proposed a heuristic offloading scheme aiming to minimize the task completion time.
Several papers [20, 53] investigate the power consumption of mobile devices based on dynamic voltage and frequency scaling (DVFS). Energy harvesting strategies are also used to reduce the energy consumed by computation offloading, both during execution on the mobile device and during data transmission over the radio network [54]. Chen et al. [15] considered joint task offloading and resource allocation in mobile edge computing. To stabilize all task queues at the mobile device and the MEC server, dynamic task offloading and resource allocation policies based on Lyapunov stochastic optimization were proposed in [48, 55]. A rough sketch of the energy model underlying DVFS is given below.
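The following sketch uses the widely adopted approximation that dynamic CPU power grows with the cube of the clock frequency, so the energy spent on a fixed number of cycles grows with its square. The effective-capacitance coefficient and all numbers are illustrative assumptions, not values taken from the cited papers.

def dvfs_energy_and_delay(cycles, freq_hz, kappa=1e-28):
    # Common approximation: dynamic power ~ kappa * f^3, hence
    # energy = kappa * cycles * f^2 and delay = cycles / f.
    # kappa is a chip-dependent effective-capacitance coefficient (assumed here).
    energy_j = kappa * cycles * freq_hz ** 2
    delay_s = cycles / freq_hz
    return energy_j, delay_s

def slowest_feasible_frequency(cycles, deadline_s):
    # Under this model, the lowest frequency that still meets the
    # deadline minimizes the local execution energy.
    return cycles / deadline_s

f = slowest_feasible_frequency(cycles=1e9, deadline_s=0.5)
print(dvfs_energy_and_delay(1e9, f))   # energy and delay at the slowest deadline-feasible frequency

This is why DVFS-based schemes trade execution time against energy: lowering the frequency saves energy quadratically but lengthens the local execution delay, which must still respect the application deadline.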

Graphical view of persistence

Figures 3.4 and 3.5 show the persistence of dense cells in both cities over a period of one hour. Each colored curve represents the dissolution of a set over time. Recall that a new set is formed at each time step and that its initial size corresponds to the total number of vehicles in the cell (black dashed line in the plot).
Let us pick a specific set in each city and describe its dynamics. In Rome, consider the set formed at T = 5:35 am, which initially contains 11 vehicles. With a reduction threshold of 0, the persistence spans from T until all nodes of the set have left the cell; in this case, it is 47 minutes. With a threshold of 0.5, the persistence is 10 minutes, which corresponds to the period between T and the moment when half of the set’s nodes have left the cell. In Rio, let us select the set formed at T = 7:27 am, which initially contains 24 vehicles. Its persistence is 18 minutes with a threshold of 0 and 6 minutes with a threshold of 0.5. Notably, Rio’s persistence is lower than Rome’s despite the larger number of nodes: the curves decrease faster, leading to a shorter persistence. Main outcome: the initial size of a set of nodes is not necessarily related to its persistence time.
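For clarity, the sketch below computes the persistence of one set from the times at which its nodes leave the cell, given a reduction threshold (0 means waiting until every node has left, 0.5 until half of them have). The departure times and the use of a ceiling to count departures are assumptions made for illustration, not values extracted from the datasets.

import math

def persistence_minutes(departure_minutes, threshold=0.0):
    # Minutes, counted from the formation of the set, until the remaining
    # fraction of its initial nodes drops to `threshold`:
    #   threshold = 0.0 -> until the last node leaves the cell,
    #   threshold = 0.5 -> until half of the initial nodes have left.
    n = len(departure_minutes)
    departures_needed = math.ceil((1.0 - threshold) * n)
    return sorted(departure_minutes)[departures_needed - 1]

# Hypothetical departure times (minutes after formation) for a set of 11 nodes.
departures = [2, 4, 6, 8, 9, 10, 20, 30, 40, 45, 47]
print(persistence_minutes(departures, threshold=0.0))   # 47: last node leaves
print(persistence_minutes(departures, threshold=0.5))   # 10: 6 of the 11 nodes have left

Applied to every set of every time step, this computation yields the curves shown in the figures: a curve that drops quickly corresponds to a short persistence regardless of the initial set size.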

Table of contents:

1 Introduction 
1.1 Context and motivations
1.2 Mobile devices: An extra computing resource
1.3 Approach
1.4 Exploitation of real-world datasets
1.5 Challenges and contributions
1.6 Thesis outline
2 Positioning and Related Works 
2.1 Mobile Cloud Computing
2.2 Multi-access edge computing
2.3 Offloading to the MEC
2.4 Discussion
2.5 Conclusion
3 Persistence of Vehicular-Augmented Mobile Edges 
3.1 Persistence of sets of nodes
3.1.1 Vehicular datasets
3.1.2 Graphical view of persistence
3.2 Detailed analyses
3.2.1 Persistence in Rome (Taxis)
3.2.2 Persistence in Rio (Buses)
3.3 Varying the reduction threshold
3.3.1 Influence of the reduction threshold in Rome
3.3.2 Influence of the reduction threshold in Rio
3.4 Impact of cell size on the persistence
3.4.1 Average persistence evolution in Rome using 1 km2 cells
3.4.2 Average persistence evolution in Rio using 1 km2 cells
3.4.3 Variation of cell size in Rome
3.4.4 Variation of cell size in Rio
3.5 Analysis of the ability to accomplish tasks
3.5.1 Case 1: Percentage of successful tasks when at least one node is present in the set
3.5.2 Case 2: Percentage of successful tasks when sets of nodes stay together as formed
3.6 Conclusion
4 Offloading computing tasks beyond the edge in a cell 
4.1 Offloading strategy and assumptions
4.1.1 Task allocation process
4.1.2 Cumulative presence time and completion delay
4.1.3 Formalization
4.2 Pedestrian dataset
4.2.1 Number of users in cells
4.2.2 Average frequency of nodes return
4.2.3 Average presence time of node in a cell
4.3 Varying the completion delay
4.4 Capacities provided by a node in a cell
4.4.1 Case 1: Average number of tasks with a zero completion tolerance delay
4.4.2 Case 2: Average number of tasks with a 24-hour completion tolerance delay
4.4.3 The gain derived from the completion delay
4.5 Conclusion
5 Conclusion and future perspectives 
5.1 Conclusion
5.2 Future perspectives
Appendix A Detailed summary in French 
A.1 Introduction
A.2 Mobile devices: an additional computing resource
A.3 Challenges
A.3.1 Contribution 1: Exploiting mobile devices in a vehicular scenario
A.3.2 Contribution 2: Exploiting mobile devices in a pedestrian scenario
A.4 Conclusion
A.5 Perspectives
Bibliography 
