Understanding the Hurdles and Requirements to Optimize Energy Consumption 


Energy Consumption Variations

By energy variations, we refer to the problem of steadiness and stability when measuring software energy consumption. This means that running the exact same job or application within the exact same environment and configuration can yield different energy measurements, due to many factors [53].
This variation has often been attributed to the manufacturing process [31] of the CPU and its components, but it has also been the subject of many studies considering other aspects that could impact energy consumption across executions. The correlation between processor temperature and energy consumption was one of the most explored paths. Von Kistowski et al. reported that identical processors can exhibit significant energy consumption variation, with no close correlation to processor temperature or performance [63]. The registered differences in energy consumption were most significant for idle and high-load states, with up to 29.6% and 19.5% variation respectively. However, Wang et al. [150] claimed that the processor thermal effect is one of the largest contributors to energy variation, and that the correlation between CPU temperature and energy consumption variation is very tight: the Spearman correlation coefficient between CPU temperature and energy consumption was higher than 0.93 for all their experiments. This makes processor temperature a delicate factor to consider when comparing energy consumption variations.
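The rank correlation used by Wang et al. can be checked on any measurement trace. The sketch below implements Spearman's coefficient with only the standard library (the temperature and energy readings are invented for illustration):

```python
from statistics import mean

def ranks(values):
    """Average ranks (1-based); ties share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 0-based positions i..j, shifted to 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-run trace: CPU temperature (deg C) vs. package energy (J)
temp   = [41.2, 43.5, 45.1, 47.8, 50.0, 52.3, 55.9]
energy = [101.0, 103.2, 104.8, 107.5, 109.1, 112.0, 115.4]
print(round(spearman(temp, energy), 3))  # monotonically related -> 1.0
```

A coefficient above 0.93, as reported in [150], indicates a near-monotonic relationship between temperature and energy across runs.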
The ambient temperature was also discussed in other papers as a candidate factor for the energy variation of a processor. Varsamopoulos et al. [143] claimed that energy consumption may vary due to fluctuations caused by the external environment, which may alter the processor temperature and hence its energy consumption. However, the temperature inside a data center did not show major variations from one node to another. In Diouri et al. [38], the study showed that swapping the locations of two servers does not substantially affect their energy consumption (this was also confirmed by Wang et al. [150], where rack placement and power supply only introduced a maximum of 2.8% variation in the observed energy consumption). Moreover, changing hardware components, such as the hard drive, the memory or even the power supply, does not affect the energy variation of a node, making it mainly related to the processor. The authors also noticed during their experiments that older CPU versions could cause higher variation. Beyond hardware components, the accuracy of power meters has also been questioned. Inadomi et al. [59] used three different power measurement tools: RAPL, Power Insight and BlueGene/Q EMON. The study showed that the variations in energy consumption are not related to the energy monitoring tools: all three tools recorded the same 10% energy variation on the executed benchmarks. The authors related those variations mainly to the manufacturing process.
Mitigating Energy Variations. Acknowledging the energy variation problem on processors, some papers proposed contributions to reduce and mitigate this variation.
Inadomi et al. [59] introduced a low-cost, scalable, variation-aware algorithm that improves application performance under a power constraint by determining module-level (individual processor and associated DRAM) power allocations. The algorithm uses a model to predict the power consumption of applications, based on the assumption that the energy consumption of both CPU and DRAM is proportional to the CPU frequency. Experiments recorded up to a 5.4× speedup in a 1,920-socket environment.
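The frequency-proportional assumption behind such a model can be illustrated with a minimal allocation sketch. This is not the algorithm of [59], only a simplified linear model: each module gets a per-GHz power coefficient (which differs across modules due to manufacturing variation, all values here are invented), and a common frequency is chosen so that the predicted total power fits under the global cap:

```python
def allocate_module_power(coeff_w_per_ghz, global_cap_w, f_max_ghz):
    """Pick the highest common frequency whose predicted total power fits
    under the global cap, assuming power is proportional to frequency for
    each module, then derive the per-module power budgets."""
    f = min(f_max_ghz, global_cap_w / sum(coeff_w_per_ghz))
    return f, [c * f for c in coeff_w_per_ghz]

# Hypothetical coefficients for four CPU+DRAM modules (W per GHz)
coeffs = [30.0, 28.5, 32.0, 29.5]
f, budgets = allocate_module_power(coeffs, global_cap_w=300.0, f_max_ghz=3.0)
print(f, budgets)  # 2.5 GHz; budgets sum to the 300 W cap
```

Because the coefficients differ per module, the resulting budgets are uneven, which is precisely the variation-awareness the authors exploit.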
Acun et al. [4] suggested a way to reduce the energy variation on Ivy Bridge and Sandy Bridge processors by disabling the Turbo Boost feature, which stabilizes the execution time over a set of processors. They also formalized guidelines to reduce this variation, such as replacing old or slow chips with recent ones and load-balancing the workload across CPU cores. They also claimed that the variation between CPU cores is insignificant.
Chasapis et al. [28] introduced an algorithm for parallel systems that reduces energy variation by compensating for the uneven effects of power capping. The algorithm requires a calibration phase: it considers numerous execution segments of the application's parallel execution and associates each segment with a particular power budget and number of active cores per socket. It starts by evenly distributing power and activating all cores, then progressively iterates over a set of available power/#cores configurations and selects the best one. For each configuration, the targeted application runs for a certain amount of time called a monitoring window.
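The calibration loop can be sketched as an exhaustive search over configurations, one monitoring window each. The selection metric (energy-delay product) and the stand-in measurement function below are assumptions for illustration, not the exact procedure of [28]:

```python
import itertools

def calibrate(configs, run_window):
    """Run the application under each (power_cap, active_cores) configuration
    for one monitoring window and keep the configuration with the lowest
    measured energy-delay product."""
    best, best_edp = None, float("inf")
    for cfg in configs:
        energy_j, time_s = run_window(cfg)
        edp = energy_j * time_s
        if edp < best_edp:
            best, best_edp = cfg, edp
    return best

def fake_window(cfg):
    """Toy stand-in for one monitoring window: crude performance scaling
    with core count and power cap (invented model)."""
    power_cap, cores = cfg
    time_s = 100.0 / (cores * min(power_cap, 90) / 90)
    return power_cap * time_s, time_s

configs = list(itertools.product([60, 75, 90], [4, 8, 12]))
best = calibrate(configs, fake_window)
print(best)  # under this toy model: (90, 12)
```

In a real deployment the measurement function would drive the power cap (e.g. via RAPL limits) and read hardware energy counters instead of the toy model.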
Contrary to some earlier results [38], Marathe et al. [93] highlighted an increase of energy variation across recent Intel micro-architectures, by a factor of 4 from Sandy Bridge to Broadwell. Moreover, they noticed a 15% run-to-run variation within the same processor, and an increase of inter-core variation from 2.5% to 5% due to hardware-enforced constraints. The authors suggested recommendations for Broadwell chips to reduce energy consumption variations, such as leaving one hyper-thread per core idle for system processes, or avoiding co-locating high- and low-efficiency processors on the same node in large-scale clusters.

Understanding Software Energy Consumption

In order to promote green software design and reduce software energy consumption, it is important to understand the current situation, including developers' and users' needs on the one hand, and to sensitize and educate them about the subject on the other hand. In the past decade, some researchers have started such investigations to deliver an understanding of how to enhance practitioners' experience with GSD, especially within companies.
First, Pang et al. [114] surveyed 100 people, focusing on programmers' knowledge of software energy consumption. Their results revealed a lack of knowledge of ways and best practices to reduce software energy consumption, especially for desktop computers compared to mobile devices. The paper claims that only 10% of the participants try to measure the energy consumption of their software projects, and only 17% consider reducing software energy consumption as a requirement in software development. Moreover, the authors mention an urgent need for training and education on software energy efficiency.
Pinto et al. presented an empirical study on how programmers understand software energy consumption problems [119]. The authors used more than 300 questions and 550 answers from 800 participants on Stack Overflow (SO) as their primary data source. They stated that practitioners are aware of software energy consumption problems, but have limited knowledge and give vague answers on how to deal with them. The extracted knowledge was divided into 5 themes: energy measurement, general knowledge, code design, context specific and noise. The authors also summarized 7 major causes of, and 8 common solutions to, software energy consumption problems evoked by developers. One of the encouraging insights of the paper is the yearly increase of GSD-related questions and answers on SO, with a peak increase of 180%; 85% of these questions are answered and 45% are correctly answered, with an average of 2.6 answers per question. In a later work, Manotas et al. [91] conducted a mixed quantitative and qualitative study, applied to 464 survey respondents from ABB, Google, IBM and Microsoft, and 18 interviewees from Microsoft. The study starts with the qualitative part: 18 semi-structured interviews of 30-60 minutes each, in order to deeply explore and understand participants' opinions. The interviews were recorded, transcribed and then analyzed using open, axial and selective coding. Then, a quantitative study was conducted with the larger set of participants in order to quantitatively assess the quality of the information learned from the interviews.
The study provided some interesting results, reporting for example that: i) mobile developers are more concerned about software energy consumption problems, ii) energy concerns are largely ignored during maintenance, iii) energy requirements are more often desires than specific targets, iv) developers believe they do not have accurate intuitions about the energy usage of their code and are undecided about whether energy issues are more difficult to fix than performance issues, and v) 93% of the survey participants want to learn about energy issues from profiling and static code analysis.


Reducing Software Energy Consumption

Even if the topic of SEC is not yet mature, it has become an important topic and the subject of much research. The last decade witnessed the emergence of several papers and studies on software energy efficiency. In the rest of this section, we showcase many of these studies, organized at two different levels.

At Execution Environment Level

Several works have been conducted to improve the energy efficiency of the infrastructure and the execution environment. The purpose is to set up and configure an environment where software runs with the least energy consumption. This might include infrastructure sizing, virtual machine and process allocation/placement, system configuration, software execution and scheduling, etc. For instance, many works have been pursued to extend the battery life and reduce the energy consumption of mobile applications. Most of these works only focus on achieving battery life savings, which does not completely reflect the energy efficiency of an application, due to the high network and back-end cost that it induces. Balasubramanian et al. [15] compared the energy consumption of 3 networking technologies (3G, GSM and WiFi) and found that 3G and GSM consume much more energy than WiFi. They also proposed TailEnder, a protocol that reduces SEC by re-scheduling transfers for applications and packets that tolerate some delay (such as emails), achieving up to a 2× improvement in energy consumption.
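The core idea of deferring and batching delay-tolerant transfers, so that the radio's high-power tail is paid once per batch instead of once per request, can be sketched as follows. The flush rule below (a batch is sent when its oldest request would exceed the tolerated delay) is a simplification of TailEnder's scheduling, and the arrival times are invented:

```python
def batch_transfers(arrival_times_s, max_delay_s):
    """Group delay-tolerant transfer requests into batches so the radio
    tail energy is amortized. A batch is flushed as soon as its oldest
    request can no longer wait for the next arrival."""
    batches, current = [], []
    for t in sorted(arrival_times_s):
        if current and t - current[0] > max_delay_s:
            batches.append(current)
            current = []
        current.append(t)
    if current:
        batches.append(current)
    return batches

arrivals = [0, 2, 3, 15, 16, 40]
print(batch_transfers(arrivals, max_delay_s=10))
# -> [[0, 2, 3], [15, 16], [40]]: three radio activations instead of six
```

With 6 requests collapsed into 3 batches, only 3 tail periods are paid, which is where the reported energy gains come from.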
In another example, Othman and Hailes [106] showed through simulation that users' jobs can be transferred from a mobile host to a fixed host (method offloading) to reduce the energy consumption of the mobile application and extend battery lifetime by up to 20%. This offloading reduced both the energy consumption on the mobile device and, in some cases, the response time, as the methods are sent to faster servers with higher performance. However, the global energy consumption when offloading methods does increase, as it includes the energy consumed by the offloaded methods on the servers and the network energy cost.
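The offloading trade-off on the device side reduces to a first-order energy comparison: offload when shipping the data costs less than computing locally. The sketch below uses this standard model with invented power and workload figures, not the simulation parameters of [106]:

```python
def should_offload(cycles, local_speed_hz, p_compute_w,
                   data_bytes, bandwidth_bps, p_radio_w):
    """Return True when offloading saves device energy: local compute
    energy (power x time) exceeds the radio energy needed to transfer
    the method's data."""
    e_local_j = p_compute_w * cycles / local_speed_hz
    e_offload_j = p_radio_w * (8 * data_bytes) / bandwidth_bps
    return e_offload_j < e_local_j

# Heavy computation, small payload: offloading wins on the device
print(should_offload(cycles=5e9, local_speed_hz=1e9, p_compute_w=2.0,
                     data_bytes=50_000, bandwidth_bps=5e6, p_radio_w=1.2))
# -> True (10 J locally vs. ~0.1 J of radio energy)
```

Note that, as the paragraph above points out, this only accounts for the device: the server-side and network energy of the offloaded execution still increase the global consumption.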
Beyond mobile applications, Ribic and Liu presented Aequitas [126], a framework that allows parallel applications to coexist on co-managed power domains, i.e., domains that share the same resources (CPU cores). The cohabitation of applications implies an energy-performance trade-off. Aequitas mainly achieves its purpose with a round-robin algorithm (or another contention policy, such as first-come-first-served, or a policy averaging the CPU frequencies requested by the contending parties) that allows the different applications and their sub-threads to access the underlying hardware power management within their share of time, with a focus on energy consumption. The experiments reported up to 12.9% of potential energy savings against only 2.5% of performance loss. The energy consumption of VM allocation and task placement has also been studied [97]. The authors suggest an algorithm to map tasks to VMs and VMs to physical machines in an energy-efficient way. Based on the resource requirements of each task, the algorithm selects a VM, then a physical machine where the VM can be deployed. Using a cloud simulator, the authors claimed that their allocation algorithm, ETVMC, reduces the number of active physical machines along with the task rejection rate.
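The round-robin contention policy described for Aequitas can be sketched as a simple time-sliced arbitration of the shared power domain. The slice length and application names below are assumptions for illustration:

```python
from collections import deque

def round_robin_schedule(apps, slice_ms, total_ms):
    """Grant each contending application exclusive control of the shared
    power domain for one time slice in turn, cycling until the horizon
    is reached. Returns (start_time_ms, app) pairs."""
    queue = deque(apps)
    timeline = []
    t = 0
    while t < total_ms:
        app = queue.popleft()
        timeline.append((t, app))
        queue.append(app)  # requeue for its next turn
        t += slice_ms
    return timeline

tl = round_robin_schedule(["A", "B", "C"], slice_ms=10, total_ms=50)
print(tl)  # -> [(0, 'A'), (10, 'B'), (20, 'C'), (30, 'A'), (40, 'B')]
```

Within its slice, an application would be free to apply its own frequency/power settings; outside it, it must accept the settings chosen by the current holder, which is the source of the measured performance loss.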
In another study on VMs, Kurpicz et al. discussed the total energy consumption of a VM in a data center while highlighting the static cost [73]. They presented EPAVE, a transparent, reproducible and predictive cost calculator for VM-based environments. EPAVE's role is to attribute a data center's static and dynamic costs to each VM. The dynamic cost includes the dynamic energy consumption of the servers, routers and storage devices, while the static cost aggregates the idle consumption of nodes and routers, air conditioning, power distribution, etc.
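The static/dynamic split can be illustrated with a minimal per-VM cost attribution. The even division of the static cost across VMs is a simplifying assumption made here for the sketch (a model like EPAVE can attribute it more finely), and all figures are invented:

```python
def vm_total_cost(static_total_j, n_vms, dynamic_by_vm_j):
    """Attribute an equal share of the data center's static energy
    (idle nodes, cooling, power distribution, ...) to each VM, and add
    the VM's own measured dynamic consumption."""
    static_share = static_total_j / n_vms
    return {vm: static_share + dyn for vm, dyn in dynamic_by_vm_j.items()}

costs = vm_total_cost(static_total_j=900.0, n_vms=3,
                      dynamic_by_vm_j={"vm1": 120.0, "vm2": 60.0, "vm3": 30.0})
print(costs)  # -> {'vm1': 420.0, 'vm2': 360.0, 'vm3': 330.0}
```

The example makes the paper's point concrete: the static share (300 J per VM here) dominates each VM's bill, so ignoring it badly underestimates the true cost of a VM.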
According to Santos et al. [39], using Docker containers to run software does not add a substantial overhead to energy consumption. Concretely, this empirical study compared the energy consumption of bare-metal applications against Docker-containerized ones. The results reported no substantial difference in energy consumption, except for Docker I/O system calls. The authors advised developers worried about I/O overhead to consider bare-metal deployments over Docker container deployments.

Table of contents :

List of figures
List of tables
Nomenclature
1 Introduction 
1.1 Problem Statement
1.2 Objectives
1.3 Organization
2 State of the Art on Software Energy Consumption 
2.1 Measuring Software Energy Consumption
2.1.1 Hardware tools
2.1.2 Software tools
2.1.3 Energy Consumption Variations
2.1.4 Summary
2.2 Understanding Software Energy Consumption
2.3 Reducing Software Energy Consumption
2.3.1 At Execution Environment Level
2.3.2 At Code Level
2.3.3 Summary
3 Understanding the Hurdles and Requirements to Optimize Energy Consumption 
3.1 Overview
3.2 Methodology
3.2.1 Interviews
3.2.2 Transcription
3.2.3 Analysis
3.3 Observations and Findings
3.3.1 Developers Awareness and Knowledge About Software Energy Consumption
3.3.2 Constraints and Tooling Problems
3.3.3 Promoting SEC
3.4 Implications
3.4.1 For Developers
3.4.2 For Decision Makers
3.4.3 For Tool Makers
3.4.4 For Researchers
3.5 Threats To Validity
3.6 Summary
4 Controlling the Measurement of Energy Consumption 
4.1 Overview
4.2 Research Questions
4.3 Experimental Setup
4.3.1 Hardware Platform
4.3.2 Systems Benchmarks
4.3.3 Measurement Tools and Methodology
4.4 Energy Variation Analysis
4.4.1 RQ 1: Benchmarking Protocol
4.4.2 RQ 2: Processor Features
4.4.3 RQ 3: Processor Generation
4.4.4 RQ 4: Operating System
4.5 Experimental Guidelines
4.6 Threats to Validity
4.7 Summary
5 Measuring and Evaluating the Energy Consumption of JVMs 
5.1 Overview
5.2 Experimental Protocol
5.3 Experiments and Results
5.3.1 Energy Impact of JVM Distributions
5.3.2 Energy Impact of JVM Settings
5.4 Threats to Validity
5.5 Summary
6 Evaluating the Impact of Java Code Refactoring on Energy 
6.1 Overview
6.2 Experimental Protocol
6.2.1 Hardware Environment
6.2.2 Projects Under Study
6.2.3 Methodology and Tools
6.3 Refactoring Impact Analysis
6.3.1 Software Energy Consumption Evolution
6.3.2 Refactoring Rules Impact
6.4 Threats to Validity
6.5 Summary
7 Reducing the Energy Consumption of Java Software I/O 
7.1 Overview
7.2 Methodology
7.2.1 Environment Settings
7.2.2 Experiments Design
7.3 Experiments and Results
7.3.1 Behavior of I/O Methods
7.3.2 Refactoring I/O Methods
7.4 Threats to Validity
7.5 Summary
8 Conclusion 
8.1 Contributions
8.2 Short-Term Perspectives
8.3 Long-Term Perspectives
Bibliography 
