
Related Work

This section presents the previous work that motivates our thesis in several contexts, focusing on studies where CPU utilization was used as a deciding factor for determining performance in clouds.
In [10], the authors performed experiments in multi-tenant and single-tenant clouds to compare their performance. According to their hypothesis, a single-tenant cloud environment should show lower CPU utilization than a multi-tenant one. Although this seems obvious, their results proved otherwise. Their performance tests were based on two web services in different cloud environments, and in most scenarios the single-tenant clouds showed higher CPU utilization than the multi-tenant clouds. Based on this, their future work included further tests on different cloud scenarios with different metrics. Our thesis is similar to their work not only in its metric, CPU utilization, but also in that it includes performance tests in operational cloud and virtualization environments to observe the metric's behavior with respect to guest and host.
Another study in this direction is [11], which provides an analysis and characterization of server resource utilization in a large cloud production environment. The researchers discovered that about 4.54 to 14% of CPU utilization is wasted, which leaves room for resource management techniques to come into the picture. A fine-grained analysis of the correlation between workload types is their potential future work. Similarly, another study measured server utilization in public clouds: the authors of [14] argue that understanding the current utilization level in a cloud environment is a prerequisite for improving it, and they developed a technique to identify the current CPU utilization level of a physical server.
As energy efficiency is a widespread concern, the authors of [13] addressed this issue by consolidating VMs. To this end, they performed experiments on an OpenStack cloud with set optimization goals and guidelines. The results showed that VM consolidation is an effective solution for better management of resources, and we implemented our test cases in the same direction by testing on a multi-tenant cloud.
The paper [12] presents an empirical study of OpenStack's scheduling functionality, aiming to evaluate the OpenStack scheduler with respect to the memory and CPU cores of PMs and VMs and their interactions. Their analysis states that the number of CPU cores assigned to virtual machines is an important aspect of resource allocation in OpenStack schedulers.
From the related research summarized above, we understand the importance of CPU utilization in resource management for better efficiency in cloud infrastructures. Our thesis is a measurement study performed to understand CPU utilization in support of better resource management.

Methodology and underlying technology

The first step is defining a method to reach our goal. In Section 1.1 we identified a research problem: the unknown relationship between host and guest in cloud computing. Section 2 presented similar work done by other researchers to show the research gap. To carry out our method, we need insight into the infrastructure and its underlying technologies.
In the second step, we define the methodology for our research in order to obtain the initial information needed to answer our research questions. The next step is identifying the required parameters and the algorithms that will be used to structure the model. After the model has been created, it is evaluated by conducting experiments on a real cloud infrastructure and on a standalone system. This chapter provides an overview of virtualization and cloud computing technology to specify the infrastructure used in this study; it also contains the methodology and the model designed to conduct the experiments.

Underlying technology

This section provides an overview of the fundamental concepts that help in understanding our thesis work area. Different virtualization techniques and hypervisor types are explained briefly in Sections 3.1.1 and 3.1.2; cloud computing principles and the OpenStack architecture are presented in Section 3.1.3.


Virtualization

Virtualization is the act of creating a virtual version of devices, servers, and networks, including operating systems. These virtual machines are configured and run on physical machines, and multiple VMs can share resources on a single host. There are several virtualization techniques, such as OS virtualization and hardware virtualization.
• OS virtualization
When an OS is partitioned into multiple instances that behave like individual guest systems, it is called OS virtualization. These systems can be run and managed independently. Such virtual servers are provided to customers for high performance and can also be used by operators as virtual servers. A disadvantage of this technique is that all instances share the same kernel version, since they are partitions of the same OS [15].
• Hardware virtualization
In this technique, VMs are created with a complete OS, resembling a physical machine. These VMs are independent compute machines with kernels of their own; moreover, they can run different kernel versions. This technique virtualizes all the hardware components needed to support installation of VMs. There are two types of hardware virtualization:
– Full virtualization with binary translation: when instructions are issued inside the VM, the VM's kernel forwards them to the host, where they are translated at run time by the host machine [15].
– Full virtualization, hardware assisted: the hypervisor that hosts the VMs and acts as a VMM is responsible for the operations on the VM.
• Para-virtualization
In this case, the virtual system is provided with an interface through which the guest OS uses the physical resources via hypercalls. This is visible in storage controllers and network cards [16].
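Whether full hardware-assisted virtualization is available on a given Linux host can be seen from the CPU feature flags. The following is a minimal sketch (the helper name is ours, not from any particular tool): the flag `vmx` indicates Intel VT-x and `svm` indicates AMD-V; without either, a hypervisor must fall back to software techniques such as binary translation.

```python
# Detect hardware virtualization support on a Linux host by scanning
# the CPU feature flags listed in /proc/cpuinfo.
def hw_virt_flags(cpuinfo_text):
    """Return the subset of {'vmx', 'svm'} present in the flags lines."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            support = hw_virt_flags(f.read())
        print("hardware-assisted virtualization:", support or "not available")
    except FileNotFoundError:
        print("/proc/cpuinfo not found (non-Linux host)")
```

KVM refuses to start without one of these flags, which is why cloud compute nodes are provisioned with VT-x or AMD-V enabled in the BIOS.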


Hypervisor

A hypervisor, also called a virtual machine monitor, is the software responsible for creating and administrating the virtual machines. There are two types of hypervisors:
• Type 1 – This type of hypervisor runs directly on the CPU rather than as software on an OS. The administration and monitoring of the VMs is performed by the first VM installed, which runs in ring 0 and is responsible for the creation and provisioning of the VMs. It is also called a bare-metal hypervisor; Xen is an example. As shown in Figure 3.1, a type 1 hypervisor has its own CPU scheduler for the guest virtual machines [17].
• Type 2 – This type of hypervisor is installed as software on a host OS and is responsible for monitoring and creating VMs on the host; KVM is an example. Figure 3.2 represents a type 2 hypervisor, which follows the host kernel's scheduling [17].
KVM is an open-source hypervisor that is extensively used in cloud technologies. It comes paired with a user-space process, QEMU (Quick Emulator); together, KVM and QEMU create virtual instances using hardware virtualization mechanisms. Two popular clouds that use KVM are Joyent and Google Compute Engine [4]. Guests are first created and assigned their VCPUs, which the hypervisor then schedules. The number of VCPUs that can be allotted is limited by the physical CPUs available [18].
Hardware support extends only up to 8 virtual cores per physical core; if this limit is exceeded, QEMU provides software virtualization to KVM. For better performance, each VM can be allocated one virtual core [19].
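The 8:1 limit above amounts to a simple check on the VCPU overcommit ratio. A hedged sketch (the helper name and example numbers are ours; the 8:1 figure is the one quoted in the text):

```python
# Compute the VCPU-to-physical-core overcommit ratio and check it
# against the 8:1 limit, beyond which QEMU falls back to software
# virtualization for KVM guests.
def vcpu_overcommit(total_vcpus, physical_cores, limit=8):
    """Return (ratio, within_hw_support) for a given allocation."""
    ratio = total_vcpus / physical_cores
    return ratio, ratio <= limit

# Example: 24 VCPUs spread over 4 physical cores is a 6:1 overcommit,
# still within hardware-assisted support.
ratio, hw_ok = vcpu_overcommit(total_vcpus=24, physical_cores=4)
print(f"overcommit ratio {ratio:.1f}:1, hardware-assisted: {hw_ok}")
```

Operational clouds typically stay well below such a limit, since heavily overcommitted VCPUs compete for the same physical cores and inflate guest-visible steal time.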

Cloud computing and OpenStack

As stated before, cloud computing is a widespread phenomenon in which physical resources are shared by multiple servers. This technology offers three types of services: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) [1,2].
IaaS is the service that provides infrastructure, that is, compute and network resources, to consumers, who can create and provision their systems as desired. PaaS is the service that provides a platform on which users install and run their applications, and SaaS is the service that provides the required software to the customer for use [1,2].
When such clouds are interconnected, they are called federated clouds, a popular trend in the Future Internet. One project working towards cloud federation is the FI-PPP, the Future Internet Public Private Partnership framework. This programme comprises various other projects; XiFi is a project aiming at building and providing smart infrastructures for the FI-PPP. These infrastructures enable the deployment of several applications and instances in a unified marketplace. BTH is one of the currently operational nodes across Europe [20].
These clouds communicate with each other via networking devices and are highly scalable and secure. They are deployed at multiple locations, and users can choose the desired location for installing their instances or applications; users can also move their instances to other locations when hardware at one location malfunctions.
XiFi nodes utilize OpenStack to build their heterogeneous clouds and provide various tools and generic enablers that facilitate the deployment of applications. They follow the standards of cloud computing, namely on-demand self-service, resource pooling, scalability and security [20].
Another example of such a federated platform is the infrastructure at City Network Hosting AB, a corporation that provides cloud domain services and hosting to its customers. The services are delivered via OpenStack's dashboard, where customers can create and provision their VMs, similar to XiFi. City Network offers storage and other high-quality services to its customers via the dashboard. This web interface is called City Cloud and is similar to XiFi's FIWARE cloud lab; the difference is that City Network regularly upgrades to the latest OpenStack releases, unlike XiFi. Currently, they provide hosting from their datacenters in the UK, Stockholm and Karlskrona.
City Network's architecture serves a considerable number of customers utilizing its services. Identifying and comparing host and guest CPU utilization will be of great value in these operational clouds for better customer service and load balancing [21].
The VMs in the clouds are provided with IP addresses, firewalls and other network functions to enable interconnection between the various VMs. This is called cloud networking, and it is important because a user should be able to access their VM from anywhere. Cloud networking also provides security for the services delivered over the global infrastructure, that is, a federated cloud infrastructure comprising various other cloud architectures. Cloud networking is of two types: CEN and CBN.
CEN, Cloud Enabled Networking, moves management to the cloud while the other networking components, such as routing and switching, remain in the network hardware [22,23].
CBN, Cloud Based Networking, moves control and management, including the network components and functions, to the cloud. The network hardware is still used for connecting the physical devices [22,23].
OpenStack is open-source software and a set of tools used to provide IaaS, that is, to build and manage a cloud computing environment. The basic architecture contains two or three nodes, which run different services as needed; in a two-node architecture, the network services run on the compute node.
The controller, or cloud controller, is the node that controls and manages the other OpenStack components and services, providing the central management system for OpenStack infrastructure-as-a-service deployments. The controller offers many services, namely authentication and authorization, databases, the dashboard and image services. Usually a single controller node is used in a cloud platform, but for high availability two or more controller nodes are needed [24].
Compute in OpenStack is called by its project name, Nova. The compute node, as the name suggests, is the node that hosts virtual instances, thus providing IaaS. Nova comes with OpenStack software components and APIs but has no virtualization software of its own; instead, it is configured to use an existing virtualization mechanism through interfaces and drivers. OpenStack's main storage services are the object storage component, called Swift, and the block storage component, called Cinder [25].
Networking aims at providing network services to the instances and enables communication between the compute nodes. It is not necessary to have a separate node for networking; one of the compute nodes can be utilized for this purpose [26].
OpenStack's core components on each node are pictured in Figure 3.3. Telemetry management is the only optional service shown in the figure; it is used to monitor the OpenStack cloud for billing, statistics and scalability. The Telemetry agent on the compute node collects data for each metric, using different types of agents to do so. It does not provide a full billing solution, but it is able to show resource usage per user and tenant.
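As an illustration of what a metering agent does for the CPU metric, the following minimal sketch samples the aggregate counters in /proc/stat twice and derives the utilization over the interval. This is the same kernel source that mpstat and top read; it is our own simplification, not Telemetry's actual implementation.

```python
import time

def read_cpu_ticks():
    """Read the aggregate 'cpu' line of /proc/stat and return
    (idle_ticks, total_ticks); iowait is counted as idle."""
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]  # "cpu user nice system idle iowait ..."
    ticks = list(map(int, fields))
    idle = ticks[3] + ticks[4]             # idle + iowait
    return idle, sum(ticks)

def cpu_utilization(interval=1.0):
    """Percentage of non-idle CPU time over the sampling interval."""
    idle1, total1 = read_cpu_ticks()
    time.sleep(interval)
    idle2, total2 = read_cpu_ticks()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

if __name__ == "__main__":
    print(f"host CPU utilization: {cpu_utilization():.1f}%")
```

Running such a sampler on both the host and inside a guest is, in essence, what our measurement setup with mpstat and top does at larger scale.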

Table of contents

1 Introduction 
1.1 Background
1.2 Aim and Objectives
1.3 Research question
1.4 Expected contribution
1.5 Outline of a report
2 Related Work 
3 Methodology and underlying technology 
3.1 Underlying technology
3.1.1 Virtualization
3.1.2 Hypervisor
3.1.3 Cloud computing and OpenStack
3.2 Methodology
3.2.1 Measurement tools
3.3 Modeling
3.4 General experiment setup
3.4.1 OpenStack test case
3.4.2 Standalone test case
3.4.3 Single guest with stress
3.4.4 Single guest with stress-ng
4 Results and analysis 
4.1 Results from OpenStack and standalone test case
4.2 Results of generating load with Stress
4.3 Results of generating load with Stress-ng
4.3.1 Mpstat tool
4.3.2 Top tool
4.4 Analysis
5 Conclusions and Future Work 
5.1 Conclusions
5.1.1 Multi guests
5.1.2 Single guest
5.2 Future work

