A Socially-Aware Hybrid Computation Offloading Framework for Mobile Edge Computing


Learning-based computation offloading strategies in MEC:

In view of computation offloading in MEC, one important observation is that the benefits and drawbacks of offloading computation-intensive portions of mobile applications depend on various internal and external factors, such as application requirements, network conditions, and the computing capabilities of mobile and external devices.
Prior studies have primarily focused on core mechanisms for offloading. Yet, adaptive scheduling in such systems is important because offloading effectiveness is influenced by varying network conditions, workload requirements, and the load at the target device. In this thesis, we present a study on the feasibility of applying deep learning techniques to the adaptive scheduling problem in mobile offloading frameworks.

MEC architecture

Mobile edge computing, which equips the radio access networks with cloud computing capabilities and hosts applications at the mobile network edge, has the potential to offer an improved user experience. Fig. 1.1 illustrates the basic architecture of an MEC-enabled cloud computing network. There are three basic components in the MEC architecture: i) mobile devices such as smartphones, smart sensors and intelligent vehicles, ii) MEC servers, and iii) wireless base stations (e.g., 3G/4G/5G macro base stations and Wi-Fi access points) through which the mobile devices connect to the MEC servers.
MEC servers provide computing resources, storage capability and connectivity. They can either handle a user request and respond directly to the user equipment (UE) or forward the request to remote data centers and content distribution networks (CDNs). The server can be deployed in either a high-end or a low-end configuration in the small-cell-based edge cloud network [20]. The high-end deployment is a current focus of the ETSI Mobile-Edge Computing standard, as shown in Fig. 1.2 (b). In this case, typically a high-end standard server is located in the access network at an aggregation point of a set of small cell base stations (SCeNB, i.e., Small Cell e-Node B). The server also acts as a central coordinator that designs the cache placement strategy. In the low-end case, on the other hand, application servers are deployed on low-end devices such as routers, access points, or home-based nodes. Femto-cloud [17, 21–23] is a typical low-end deployment. The idea is to endow small cell base stations with cloud functionalities (computation, storage and serving), thus providing mobile UEs with proximate access to the mobile edge cloud. These novel base stations, called small cell cloud-enhanced e-Node Bs (SCceNB) and illustrated in Fig. 1.2 (a), are equipped with high-capacity storage units but have limited-capacity backhaul links. In this case, popular contents can be cached in a distributed manner among the SCceNBs without a central coordinator. Compared to the high-end deployment, the low-end deployment brings three advantages: i) a strong reduction of latency with respect to centralized clouds, because small cells are the closest points of the mobile network to the mobile users, only one wireless hop away; ii) storage of large amounts of local content, since data can be temporarily stored (local caching) in a nearby SCceNB with very low latency; and iii) no central coordinator is required to collect information about the whole network, which significantly saves signalling overhead.
Compared to traditional centralized cloud computing architecture, the MEC architecture has several advantages listed below:
Reduced latency: Since the processing and storage capabilities are in proximity to the end users, the communication delay can be reduced significantly. The authors in [24] present experimental results from Wi-Fi and 4G LTE networks showing that offloading to cloudlets can improve response times by 51% compared to cloud offloading.
Reduced bandwidth: The authors in [25] showed that deployment at the network edge can save operational cost by up to 67% for bandwidth-hungry and compute-intensive applications. Research results also show backhaul savings of up to 22% when exploiting a proactive caching scheme [26]. Higher gains are possible if the storage capacity of edge servers is increased.
Energy efficiency: Experimental results in [24] showed that the energy consumption of a mobile device can be reduced by up to 42% through offloading to cloudlets compared to cloud offloading.

Key issues in 5G MEC

Emerging mobile collaborative applications: New applications are the main driving force behind the evolution of the network architecture. The emerging applications require mobile devices not only with considerable energy, memory and computational resources, but also with collaboration capacity. For example, mobile collaborative applications require close collaboration between the participants, whose number typically varies between a few and up to a hundred. The collaboration takes place in a decentralized closed group, whereby each participant has comprehensive knowledge of all the others. These new types of applications therefore bring several challenges to edge computing system deployment, particularly in terms of design, implementation, use, and evaluation.
Computation offloading in MEC: One of the main purposes of edge computing is computation offloading, which breaks the limitations of mobile devices in terms of computational capability, battery resources and storage availability. When, where and how to offload computation tasks is a hard problem. Various approaches have been proposed to tackle it under different scenarios, such as the single-user and multi-user cases. In the single-user case, the common design objective is to save energy on the mobile device, whereas in the multi-user case the optimal offloading problem is usually NP-hard. In next-generation heterogeneous networks, computation tasks can be offloaded not only to servers but also to other devices by exploiting D2D communication (i.e., D2D computation offloading). Computation tasks can therefore be offloaded to nearby mobile devices, MEC servers or remote cloud servers in the MEC system, which brings new challenges to offloading decision making.
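As a toy illustration of the single-user trade-off mentioned above, the following sketch compares local and remote execution under a simple energy/latency model. The cost model and every parameter value here are hypothetical simplifications for illustration, not the formulations used in this thesis.

```python
# Illustrative offload-vs-local decision for a single task.
# Local execution costs p_local * cycles / f_local joules; offloading costs the
# uplink transmit energy p_tx * data_bits / rate. All numbers below are made up.

def should_offload(cycles, data_bits, f_local, p_local, rate, p_tx, f_server, deadline):
    """Return True if remote execution saves energy while meeting the deadline."""
    t_local = cycles / f_local                        # local latency (s)
    e_local = p_local * t_local                       # local energy (J)
    t_offload = data_bits / rate + cycles / f_server  # upload + remote compute (s)
    e_offload = p_tx * data_bits / rate               # upload energy (J)
    if t_offload > deadline:
        return False                                  # remote misses the deadline
    if t_local > deadline:
        return True                                   # only remote meets the deadline
    return e_offload < e_local                        # both feasible: pick the cheaper

# 1 Gcycle task with a 1 MB input, 1 GHz CPU at 0.9 W, 10 Mbit/s uplink at 0.3 W,
# 10 GHz edge server, 2 s deadline: offloading is both faster and cheaper here.
decision = should_offload(1e9, 8e6, 1e9, 0.9, 1e7, 0.3, 1e10, 2.0)
```

Even this toy model captures why the decision is sensitive to the uplink rate: dividing the rate by ten makes the same task miss its deadline remotely.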
Edge caching in MEC: Future mobile networks will be heterogeneous due to the dense deployment of different types of base stations, so caches can be deployed at various places in the mobile network. In legacy cellular systems, the content requested by users has to be fetched from an Internet CDN node far away from the mobile network. Caching content at the mobile core network was then introduced; however, the backhaul links remain constrained. In addition, with the evolution of base stations and low-cost storage units, deploying caches at macro and small base stations becomes feasible. In future 5G networks, D2D communication enables the storage units of user devices to be exploited for content sharing according to the social relationships among users [27]. Thus, the storage resources of mobile devices can also be exploited.
Socially-aware MEC: Exploiting the social relationships of mobile users has recently emerged as a new topic for network design and optimization. In an MEC network, a large number of users interact with different operators, social service providers, and application gateways. It is thus extremely important to provide a trustworthy environment. When considering computation offloading in MEC, the trustworthy environment must provide reliable, secure, and personalized connectivity between resource (computation and storage) seekers and resource providers. Trust management allows exhaustive utilization of network services by diverse users with mutual consent over social behaviours, with a particular focus on user privacy and confidentiality. It therefore becomes important to consider trust management for MEC.

Proposed an optimal offloading with caching-enhancement scheme (OOCS) for femto-cloud scenario and mobile edge computing scenario, respectively.

As a first contribution, we propose a fine-grained collaborative computation offloading and caching strategy that optimizes the offloading decisions on the mobile terminal side with caching enhancement. The objective is to minimize the overall execution latency for the mobile users within the network. Most previous works focus either on offloading decisions [10, 21–23, 28–33] or on caching placement strategies [13, 34–36]. To the best of our knowledge, our work is the first of its kind to optimize offloading decisions while considering data caching.
Based on the concept of the call graph [37], we propose in this thesis the concept of the collaborative call graph to model the offloading and caching relationships during Mobile Collaborative Application (MCA) execution, and then compute the delay and energy overhead for each single user. We first evaluate our algorithm in a single-user scenario for both femto-cloud and MEC networks. Moreover, to further reduce the execution delay in the multi-user case, we explore the concept of the coalition formation game for the distributed caching scenario. We compare our approach with six benchmark policies from the literature and discuss the associated gains in terms of the required CPU cycles per byte (cpb) of the applications and the content popularity factor.
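To make the per-user overhead computation concrete, here is a minimal sketch for the special case of a linear component chain: each component runs either locally or on an edge server, and crossing the placement boundary adds a transfer delay. The cost model and all numbers are illustrative simplifications, not the collaborative call graph model proposed in Chapter 3.

```python
def chain_delay(cycles, placement, f_local, f_edge, data, rate):
    """Total delay of a linear component chain under a 0/1 placement vector.

    cycles[i]   : CPU cycles required by component i
    placement[i]: 1 if component i runs on the edge server, 0 if it runs locally
    data[i]     : bits passed from component i to component i + 1
    """
    total = 0.0
    for i, c in enumerate(cycles):
        total += c / (f_edge if placement[i] else f_local)  # execution delay
        if i + 1 < len(cycles) and placement[i] != placement[i + 1]:
            total += data[i] / rate                         # boundary transfer delay
    return total

# Three components on a 1 GHz device with a 10 GHz edge server and a 10 Mbit/s
# link: offloading only the heavy middle component adds two 4 Mbit transfers,
# which in this toy instance outweighs the faster remote execution.
d = chain_delay([1e8, 5e8, 1e8], (0, 1, 0), 1e9, 1e10, [4e6, 4e6], 1e7)
```

A general call graph replaces the chain with a DAG traversal, but the same per-edge transfer/per-node execution accounting applies.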
This contribution is the object of a conference publication in the IEEE International Conference on Communications (ICC) [38] and a journal paper submitted to IEEE Transactions on Vehicular Technology.


Proposed a device-to-device (D2D) multicast-based computation offloading strategy

As a second contribution, we propose in this thesis a flexible D2D multicast computation offloading framework in which the roles of offloadees (servers) and offloaders (users) are negotiated among multiple mobile devices according to their channel states and battery levels.
Our objective is to reduce the overall energy consumption on the mobile terminal side under delay and energy constraints. To this end, we model the optimal offloading framework as a combinatorial optimization problem, which we then solve using concepts from maximum weighted bipartite matching and coalitional game theory.
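The pairing step can be illustrated with a tiny brute-force maximum weighted bipartite matching between offloaders and offloadees. The savings matrix below is made up, and the factorial enumeration is only viable for small clusters; a Hungarian-algorithm solver would replace it at scale. The actual scheme, MWBM-CG, is described in Chapter 4.

```python
from itertools import permutations

# SAVINGS[u][s]: hypothetical energy saved (J) when offloader u is paired with
# offloadee s. A maximum weighted bipartite matching picks one offloadee per
# offloader so that the total savings are maximized.
SAVINGS = [[0.5, 0.2, 0.1],
           [0.3, 0.6, 0.2],
           [0.1, 0.4, 0.7]]

def best_pairing(savings):
    """Exhaustive maximum-weight matching; fine for small device clusters."""
    n = len(savings)
    best = max(permutations(range(n)),
               key=lambda p: sum(savings[u][p[u]] for u in range(n)))
    return list(best), sum(savings[u][best[u]] for u in range(n))

pairs, total = best_pairing(SAVINGS)   # pairs[u] is the offloadee chosen for u
```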
This contribution is the object of a conference publication in the IEEE Global Communications Conference (GLOBECOM) [39].

Proposed a socially-aware hybrid (D2D/MEC) computation offloading (SAHCO) strategy

As a third contribution, we propose a new fine-grained task assignment mechanism for MCA execution in Wireless Distributed Computing (WDC) with social trust consideration. The objective is to minimize the overall energy consumption on the mobile terminal side. Most existing D2D offloading works either lack social-tie considerations [29, 30, 40] or consider social ties only in D2D communications ([41, 42], i.e., D2D data/traffic offloading). To the best of our knowledge, our work is the first of its kind to optimize task assignment while considering social relationships. We propose in this thesis a new framework for socially-aware D2D computation offloading (named SAHCO) and then compute the energy overhead for each application cluster. To enhance performance, we solve the computation offloading problem for all the components of an application at the same time: while sequential processing of the components minimizes the computation energy, it may fail to ensure an efficient, optimized offloading decision. To this end, we propose a new optimal task assignment approach based on Monte-Carlo tree search (MCTS), named TA-MCTS. Our proposed solution achieves an optimized computation offloading policy. We compare our approach with related benchmark policies from the literature and discuss the associated gains in terms of mobile user density, social trust threshold and CPU structure, respectively. This contribution is the object of a journal submission to IEEE Transactions on Mobile Computing.
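To give a flavour of how MCTS applies to task assignment, the following is a minimal UCT sketch over a toy sequential assignment of three tasks to two devices. The state, action and cost definitions of the actual TA-MCTS scheme (Chapter 5) are different, and all costs below are made up.

```python
import math
import random

# COSTS[task][device]: hypothetical energy cost of each task on each device.
COSTS = [[0.9, 0.4], [0.2, 0.7], [0.6, 0.3]]
N_TASKS, N_DEVICES = len(COSTS), 2

class Node:
    def __init__(self, assignment=()):
        self.assignment = assignment   # tuple of device choices made so far
        self.children = {}             # action (device index) -> child Node
        self.visits = 0
        self.value = 0.0               # running mean of the (negative) rollout cost

def rollout(assignment):
    """Complete a partial assignment uniformly at random; return its total cost."""
    full = list(assignment) + [random.randrange(N_DEVICES)
                               for _ in range(N_TASKS - len(assignment))]
    return sum(COSTS[t][d] for t, d in enumerate(full))

def uct_child(node):
    """Pick the child maximizing the UCB1 score (exploitation + exploration)."""
    return max(node.children.values(),
               key=lambda c: c.value + math.sqrt(2 * math.log(node.visits) / c.visits))

def ta_mcts(iterations=2000, seed=0):
    random.seed(seed)
    root = Node()
    for _ in range(iterations):
        node, path = root, [root]
        # 1) selection: descend while the node is fully expanded
        while len(node.assignment) < N_TASKS and len(node.children) == N_DEVICES:
            node = uct_child(node)
            path.append(node)
        # 2) expansion: add one untried action
        if len(node.assignment) < N_TASKS:
            action = random.choice([d for d in range(N_DEVICES)
                                    if d not in node.children])
            node.children[action] = Node(node.assignment + (action,))
            node = node.children[action]
            path.append(node)
        # 3) simulation and 4) backpropagation (reward = -cost, so lower is better)
        reward = -rollout(node.assignment)
        for n in path:
            n.visits += 1
            n.value += (reward - n.value) / n.visits
    # extract the assignment along the most-visited branch
    node, assignment = root, []
    while node.children:
        node = max(node.children.values(), key=lambda c: c.visits)
        assignment.append(node.assignment[-1])
    return assignment

best = ta_mcts()
```

The appeal of MCTS here is that it only needs a cost evaluator for complete assignments, not a closed-form objective, so richer cost models (social trust, CPU structure) plug in without changing the search.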

Proposed a deep learning based low complexity computation offloading strategy

Last but not least, we propose a fine-grained computation offloading framework that minimizes the offloading cost for the MEC network. The offloading actions taken by a mobile user consider the local execution overhead as well as varying network conditions (including the wireless channel condition and the available communication and computation resources). Most previous machine learning-based offloading works either consider coarse-grained computation offloading [43, 44] or assume an infinite amount of available communication or computation resources in the cloudlet.
In this context, we formulate and build a Deep Supervised Learning (DSL) model to obtain an optimal offloading policy for the mobile user. Our method provides a pre-calculated offloading solution which is employed once a certain level of knowledge about the application and network conditions is available. We formulate the continuous offloading decision problem as a multi-label classification problem. This modelling strategy largely benefits from the emerging deep learning methods in the artificial intelligence field. Our method approaches the performance of the optimal solution obtained by the exhaustive strategy within a very small margin. As the number of fine-grained components n of an application grows, the exhaustive strategy suffers from exponential time complexity O(2^n). Fortunately, the complexity of our learning system is only O(mn), where m is the number of neurons in a hidden layer, which indicates the scale of the learning model. In fact, artificial intelligence problems with both large input/output dimensions and a large number of examples, such as image classification, have been shown experimentally feasible using similar techniques [45]. To the best of our knowledge, our work is the first of its kind to achieve an optimal offloading policy through a deep learning approach.
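The exponential baseline referred to above can be sketched as a brute-force enumeration over the 2^n binary offloading vectors of an n-component application. The additive cost model and the numbers below are illustrative only; the DSL model replaces this enumeration with a single forward pass of cost O(mn).

```python
from itertools import product

def exhaustive_best(local_cost, remote_cost):
    """Enumerate all 2^n offloading vectors and return the cheapest one.

    x[i] = 1 means component i is offloaded. With this toy additive cost model
    the optimum is trivial, but the loop still visits 2^n candidates, which is
    exactly the exponential blow-up the learned policy avoids."""
    best_x, best_c = None, float("inf")
    for x in product((0, 1), repeat=len(local_cost)):
        c = sum(remote_cost[i] if xi else local_cost[i] for i, xi in enumerate(x))
        if c < best_c:
            best_x, best_c = x, c
    return best_x, best_c

# Four components with hypothetical local/remote costs: 16 candidates examined.
x, c = exhaustive_best([0.9, 0.2, 0.6, 0.5], [0.4, 0.7, 0.3, 0.5])
```

The multi-label classifier is trained on (state, best_x) pairs produced by exactly this kind of oracle, then answers new states in a single inference step.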
This contribution is the object of a conference publication in the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC) [46].

Table of contents:

1 Introduction 
1.1 Context and Motivations
1.1.1 Multi-user computation offloading with edge caching consideration in MEC:
1.1.2 Socially-aware MEC:
1.1.3 Learning-based computation offloading strategies in MEC:
1.2 MEC architecture
1.3 Key issues in 5G MEC
1.4 Problem statement
1.5 Thesis contributions
1.5.1 Proposed an optimal offloading with caching-enhancement scheme (OOCS) for femto-cloud scenario and mobile edge computing scenario, respectively.
1.5.2 Proposed a device-to-device (D2D) multicast-based computation offloading strategy
1.5.3 Proposed a socially-aware hybrid (D2D/MEC) computation offloading (SAHCO) strategy
1.5.4 Proposed a deep learning based low complexity computation offloading strategy
1.6 Thesis outline
2 Background and Literature Review 
2.1 Introduction
2.2 Computation offloading
2.2.1 Coarse-grained/full offloading
2.2.2 Fine-grained/partial offloading
2.3 Mobile application model
2.3.1 Call graph
2.3.2 Mobile augmented reality applications
2.3.3 Mobile crowd sensing applications
2.4 Edge caching in MEC
2.4.1 Service caching
2.4.2 Data caching
2.5 Summary and discussion
2.6 Conclusion
3 Computation Offloading with Caching-Enhancement for Mobile Edge Computing 
3.1 Introduction
3.2 System Model
3.2.1 Network Model
3.2.2 Application Model
3.2.3 Caching Model
3.2.4 Execution Model
3.2.4.1 Local Execution
3.2.4.2 Low-end Execution
3.2.4.3 High-end Execution
3.3 Proposed collaborative call graph and problem formulation
3.3.1 Collaborative call graph with caching enhancement
3.3.2 Problem formulation for Low-end MEC deployment
3.3.2.1 Single-user Scenario
3.3.2.2 Multi-user scenario
3.3.3 Problem formulation for High-end MEC deployment
3.3.3.1 Single-user Scenario
3.3.3.2 Multi-user scenario
3.4 Proposal: OOCS scheme
3.4.1 Game Formulation
3.4.2 Algorithm Description
3.4.3 Complexity analysis
3.5 Performance evaluation
3.5.1 Performance evaluation of OOCS in single-user scenario
3.5.1.1 Low-end deployment scenario
3.5.1.2 High-end/Low-end performance comparison
3.5.2 Performance evaluation of OOCS in multi-user scenario
3.6 Conclusion
4 D2D-Multicast Based Computation Offloading Frameworks for Mobile Edge Computing 
4.1 Introduction
4.2 A multicast based D2D computation offloading framework
4.2.1 Information collection through LTE uplink channel
4.2.2 Computation result feedback through D2D multicast channel
4.3 Problem formulation
4.4 Proposal: MWBM-CG scheme
4.5 Performance evaluation
4.5.1 Network parameters
4.5.2 Application parameters
4.5.3 Simulation results
4.6 Conclusion
5 A Socially-Aware Hybrid Computation Offloading Framework for Mobile Edge Computing 
5.1 Introduction
5.2 System Model
5.3 A social-aware hybrid computation offloading framework
5.3.1 Description of the proposed offloading framework
5.3.2 D2D Link Model
5.3.3 Social Relationship Model
5.4 Problem formulation
5.4.1 State Space and Action Space
5.4.2 Immediate Cost
5.4.3 Optimal Problem Formulation
5.5 Proposal: TA-MCTS scheme
5.5.1 Selection
5.5.2 Expansion
5.5.3 Simulation
5.5.4 Backpropagation
5.6 Performance Evaluation
5.7 Conclusion
6 Computation Offloading for Multi-Core Devices in 5G Mobile Edge Computing: A Deep Learning Approach 
6.1 Introduction
6.2 Decision making procedure and problem formulation
6.2.1 Decision making procedure
6.2.2 State space and action space
6.2.3 Immediate cost
6.2.4 Optimal problem formulation
6.3 Algorithm design
6.3.1 Initial phase
6.3.2 Training phase
6.3.3 Testing phase
6.4 Performance evaluation
6.4.1 Network parameters
6.4.2 Simulation results
6.5 Conclusion and future work
7 Conclusions 
7.1 Summary of contributions
7.2 Future work
7.3 Publications
References 
