Energy aware routing: An SDN perspective 


Energy efficiency technology for networking devices

The availability of inexpensive, extremely high-capacity optical transmission and improvements in routing technologies for the network infrastructure have made it possible for the Internet to grow at its current, unrelenting rate. This increasing demand and the improvement of ICT technologies are leading to higher energy consumption. According to the European Commission's digital agenda report of 2013, "ICT products and services are currently responsible for 8 to 10% of the EU's electricity consumption and up to 4% of its carbon emissions" [Europa, 2013]. Among the several sectors of ICT, network architectures and devices are among the largest energy consumers.
Figure 2.1 shows the trends in energy consumption of different components of ICT and reveals a tremendous rise in the energy consumption of networking devices compared to the other components [Corcoran and Andrae, ]. This is due to the exponential increase in the number of Internet users and to the fact that most network architectures and resources, such as bandwidth, processing power and memory, are dimensioned for peak-time network usage [Zhang et al., 2010], [Addis et al., 2014].
Even though such dimensioning performs well during peak hours, during off-peak hours it causes redundancy and over-provisioning of resources and consumes unnecessary extra energy. As the flow of data through the network cannot simply be stopped, ICT engineers need to be tactical about how to make the network architecture more energy efficient with respect to traffic. Energy-efficient networking has been in the limelight of researchers for quite a long time. Gupta, one of the pioneers in this field, coined the concept of greening the Internet in his work [Gupta and Singh, 2003], in which possible methods for reducing the carbon footprint of ICT network infrastructure were suggested. The authors in [Rondeau et al., 2015] compare traditional and current metrics for assessing ICT performance and their effects on the environment. In [Drouant et al., 2014], a new approach for designing a green network architecture is introduced.

Numerous works have addressed the problem of energy efficiency in networks, and there are mainly three research directions. The first direction concentrates on the network hardware level, and the second on the network software level. When building an energy-efficient network infrastructure, alongside these two directions there is a third one that focuses on the practical level of green network design and can be identified as the network design level. These three research directions are molded and extended into many branches. At the design level in particular, energy savings are mainly achieved by changing the current topology, and a design-level solution can be online or offline. Online solutions take place during the run time of the system, where many variables, such as the number of demands or the throughput requirement, might be unknown. Offline solutions, on the other hand, provide a solution based on traffic patterns that does not change over time and, more importantly, the traffic pattern is known beforehand. Another important aspect is the approach to solving the problem: a centralized approach can be used, where the state of all the nodes in the network is known by a central controller and decisions can be made globally, whereas in distributed solutions each node only knows the state of its neighbor nodes and any decision has to be made locally. Here, the different energy-saving research directions are discussed in light of different works. As our research scope is bounded to wired networks, all the works described here are also based entirely on wired networks.

Network hardware level

At the network hardware level, energy consumption is reduced by developing more efficient circuits and power-saving techniques. Improving power-draining components such as memory, or modifying the cooling system, is part of efficient device design, while techniques such as Energy Efficient Ethernet (EEE) and Adaptive Link Rate (ALR) belong to the family of power-saving techniques.

Hardware re-engineering:

Initially, most network devices were designed to provide the highest possible level of performance without giving much attention to energy consumption. Therefore, these network devices always consume more energy than they actually require. Each of these devices comprises several components, such as the power supply and cooling device, forwarding engine, switching fabric, control plane, input/output cards, and buffers. Several studies have focused on the consumption of specific components and on how energy savings can be achieved by modifying a particular component without hampering the performance of the device. Figure 2.2 shows the power consumption percentages of the different components of a core router. The values have been taken from [Hinton et al., 2011].
A node can be in several states, such as hibernation or turned off/on. Similarly, in a module-based design of a network device, each component can be considered separately and different energy-saving techniques can be applied to each of them. Research has been done so that both input/output cards and buffers can be put into sleeping mode, saving energy while they are not in use. Figure 2.2 shows that the power supply and cooling are responsible for more than one-third of the total energy consumption of a core router. If the router is in use then cooling is mandatory, but the method of cooling is somewhat controllable. An energy-efficient approach adopted by a few companies is free cooling, proposed by [Pawlish and Aparna, 2010], where outside air is used to cool the network infrastructure. For infrastructure that, unlike the core network, is location independent, such as datacenters, many leading tech companies like Facebook have moved their datacenters to Sweden in order to use the cold climate for sustainable cooling [Orgerie et al., 2014], while another tech giant, Google, uses seawater in Finland to achieve free cooling [Google, 2013].

A second type of re-engineering concerns the memory of the network devices. Ternary content addressable memory (TCAM) has been introduced for the forwarding engine. These memories can be accessed block by block, and the unused blocks can be turned off for energy-saving purposes. TCAM is becoming very popular for providing high throughput at the forwarding engine's end. As the forwarding engine is also responsible for 30% of the overall energy consumption of the core router, many works aim to improve the performance of TCAM and use it more energy efficiently. [Zane et al., 2003] worked on TCAM and proposed constructing a preclassifier to activate TCAM blocks separately, resulting in a significant amount of energy savings. [Li et al., 2017], on the other hand, provided a two-stage scheme for TCAM-based IP routing table lookup which efficiently places the routing table into different TCAM blocks without creating memory holes, guaranteeing a solution that is both energy and memory efficient.
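The intuition behind block-selective TCAM lookup can be illustrated with a small sketch. The partitioning key below (the first few bits of each prefix) is a simplified, hypothetical stand-in for a preclassifier; it is not the exact scheme of [Zane et al., 2003] or [Li et al., 2017].

```python
# Illustrative sketch: partition an IP routing table into TCAM "blocks" so that
# only one block needs to be powered for a given lookup. The grouping key (the
# first PRECLASS_BITS of each prefix) is a simplified stand-in for a
# preclassifier, not the algorithm of [Zane et al., 2003] or [Li et al., 2017].
from collections import defaultdict

PRECLASS_BITS = 4  # hypothetical preclassifier width

def prefix_to_bits(address, length):
    """Return the binary string of the first `length` bits of a dotted address."""
    octets = [int(o) for o in address.split(".")]
    value = sum(o << (24 - 8 * i) for i, o in enumerate(octets))
    return format(value, "032b")[:length]

def build_blocks(routing_table):
    """Group (prefix, length, next_hop) entries into blocks by their first bits."""
    blocks = defaultdict(list)
    for prefix, length, next_hop in routing_table:
        blocks[prefix_to_bits(prefix, PRECLASS_BITS)].append((prefix, length, next_hop))
    return blocks

def lookup(blocks, dst_ip):
    """Activate only the block selected by the preclassifier and perform a
    longest-prefix match inside it; all other blocks stay powered down."""
    key = prefix_to_bits(dst_ip, PRECLASS_BITS)
    best = None
    for prefix, length, next_hop in blocks.get(key, []):
        if prefix_to_bits(dst_ip, length) == prefix_to_bits(prefix, length):
            if best is None or length > best[0]:
                best = (length, next_hop)
    return best[1] if best else None

table = [("10.0.0.0", 8, "A"), ("10.1.0.0", 16, "B"), ("192.168.0.0", 16, "C")]
blocks = build_blocks(table)
print(lookup(blocks, "10.1.2.3"))  # -> "B", with only one block active
```

In a hardware realization, the per-block grouping is what allows the unused TCAM blocks to be kept in a low-power state between lookups.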

Hibernation/Turning off:

A device cannot consume energy if it is not on. Therefore, putting a network device into hibernation or turning it off is one of the most basic yet effective ways to save energy, and it has been considered by many researchers. There is a slight difference between hibernation and turning off: hibernation consumes a little more energy than turning off, but the wake-up time of a network device from hibernation is much shorter than when the device is turned off. Hibernation, or putting the device into sleeping mode, is the key concept for the modeling of energy-efficient routing that will be discussed at the network design level. [Gupta and Singh, 2003] is one of the first works that considered hibernating nodes and links and, if possible, turning devices off to increase the amount of savings. They showed how much energy is consumed by the Internet and also identified the scope for saving energy by putting devices to sleep, analyzing the impact on switch protocols of selectively putting line cards into sleep mode. Their work was one of the preliminary works in this field, where line cards were manually chosen to be put to sleep by analyzing the traffic pattern. The same authors also published similar research in [Gupta et al., 2004] and [Gupta and Singh, 2007a]. In [Gupta et al., 2004], they discussed the energy-saving potential of putting LAN switches to sleep. They examined the traffic of a LAN network and found that there are three time spans in a day when the traffic is lowest and both line cards and switches can be put to sleep. They fixed a predefined time schedule for the sleeping of the devices, thus saving a substantial amount of energy. This can save energy, but in many cases it causes a large amount of packet loss; moreover, this method fails to exploit all the other intermediate periods where energy could be saved. In [Gupta and Singh, 2007a], they proposed a dynamic technique to shut down unused links and turn them back on when required. They designed a dynamic algorithm that uses buffer occupancy, behavioral analysis of previous packets and a maximum bounded delay to make sleeping decisions. However, they used synthetic traffic to evaluate their model, and even though in some cases they can turn off 80% of the links, the maximum traffic load is 5% of the total network capacity, which is unrealistic. So far, all the discussed works are based on putting links and link ports to sleep or turning them off; however, in [Bolla et al., 2009] the authors provided a framework for optimizing the energy consumption of a network device in which both network devices and links can be put into sleep mode. They modeled the network device as a multi-core router and, based on the traffic density, proposed to turn off cores of the router to save energy; if none of the cores are used, the whole router can be turned off.
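The flavor of such sleeping decisions can be sketched as follows. The thresholds, the delay budget and the idleness test below are hypothetical parameters chosen for illustration; this is not the exact algorithm of [Gupta and Singh, 2007a].

```python
# Illustrative sketch of a link-sleeping decision in the spirit of
# [Gupta and Singh, 2007a]: a port sleeps when its buffer is nearly empty and
# recent inter-arrival times suggest idleness, while respecting a maximum
# bounded delay. All numeric values are hypothetical.
from dataclasses import dataclass

@dataclass
class PortState:
    buffer_bytes: int           # current buffer occupancy
    mean_interarrival_s: float  # estimated from recent packets
    wakeup_latency_s: float     # time needed to bring the port back up

MAX_BOUNDED_DELAY_S = 0.010     # hypothetical per-packet delay budget
BUFFER_SLEEP_THRESHOLD = 1500   # sleep only if at most ~one packet is queued

def should_sleep(port: PortState) -> bool:
    """Decide whether the port can sleep without violating the delay bound."""
    nearly_empty = port.buffer_bytes <= BUFFER_SLEEP_THRESHOLD
    # The expected idle time must cover the wake-up latency plus the delay
    # budget, otherwise packets arriving during sleep would be delayed too much.
    idle_long_enough = port.mean_interarrival_s > port.wakeup_latency_s + MAX_BOUNDED_DELAY_S
    return nearly_empty and idle_long_enough

print(should_sleep(PortState(buffer_bytes=0, mean_interarrival_s=0.5, wakeup_latency_s=0.002)))     # True
print(should_sleep(PortState(buffer_bytes=9000, mean_interarrival_s=0.5, wakeup_latency_s=0.002)))  # False
```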

Adaptive Link Rate (ALR):

Implementing ALR raises two main challenges:
1. Defining a mechanism for swift changes in the data rate of the link.
2. Creating a policy for changing the link data rate that maximizes energy savings (that is, maximizes the time spent at a low link data rate) without significantly increasing the packet delay.
Several works have addressed these two challenges. According to the work of [Nedevschi et al., 2008], when the negotiation process is performed during link setup, the maximum link capacity is usually chosen even if a lower capacity would fulfill the requirement. The adaptive link rate mechanism therefore offers an algorithm that automatically adjusts the link speed according to the traffic: when there is a high level of traffic, the link remains at its maximum capacity, but if no significant traffic passes through the network for a certain duration, the network device auto-negotiates down to a lower link capacity. In their work, they discussed the technique of choosing the level of link speed according to the traffic pattern: if the traffic is high, the link speed increases automatically, and vice versa. They showed that this auto-negotiation process takes hundreds of milliseconds to complete, during which time communication is interrupted because the link is down, which can be critical for providing good QoS or QoE. They discussed briefly how greening the network is always a trade-off between QoS and energy savings, and they also showed that choosing the ideal rate for ALR is NP-hard (Non-deterministic Polynomial-time hard).
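A minimal sketch of traffic-driven rate selection is given below; the rate ladder and the utilization target are hypothetical values chosen for illustration, not the scheme of [Nedevschi et al., 2008].

```python
# Minimal sketch of traffic-driven link-rate selection: pick the lowest rate in
# a (hypothetical) rate ladder that keeps utilization below a target. This only
# illustrates the ALR idea; it is not the algorithm of [Nedevschi et al., 2008].
RATES_MBPS = [10, 100, 1000, 10000]   # hypothetical supported link rates
TARGET_UTILIZATION = 0.7              # hypothetical headroom to absorb bursts

def select_rate(offered_load_mbps: float) -> int:
    """Return the lowest link rate whose utilization stays under the target."""
    for rate in RATES_MBPS:
        if offered_load_mbps <= TARGET_UTILIZATION * rate:
            return rate
    return RATES_MBPS[-1]  # saturated: stay at the maximum rate

print(select_rate(3))     # -> 10 (Mbps): light traffic fits the lowest rate
print(select_rate(450))   # -> 1000 (Mbps)
```

The hard part, as the authors point out, is not this selection itself but deciding when to trigger it, given the cost of each rate transition.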
[Blanquicet and Christensen, 2007] proposed a method to select the physical layer devices (PHYs) more quickly. In this method, a frame exchange is used to renegotiate the change of speed, which avoids the need to restart the auto-negotiation process. This makes the speed transition faster, but still not fast enough: adjustments of equalizers, echo cancelers, and timing circuits are needed, and these take time. Researchers are working in this field to make the change even quicker. Nevertheless, there is a great advantage in switching to lower data rates whenever possible. According to them, the main difficulty of implementing ALR is choosing the speed: it is always hard to know when and by how much the speed needs to be lowered.
[Gunaratne and Christensen, 2006] shows that, 90% of the time in a small network, 10 Mbps links are enough to fulfill the needs of the users. The experiment was done in a small network with a low amount of demand, so this value does not depict the full picture; nevertheless, even in large networks it is observed that the lowest possible bandwidth is often enough to handle the demands. They proposed a mechanism for auto-negotiating and switching the link rate. They also proposed a Markov model for packet arrivals following a Poisson distribution, which helped to understand the relation between packet delay and energy savings.
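As a rough illustration of this delay-versus-energy relation, a back-of-the-envelope M/M/1 calculation (a deliberate simplification, not the authors' exact Markov model) shows how operating a link at a lower service rate increases the mean queueing delay:

```python
# Back-of-the-envelope M/M/1 illustration of the delay/energy trade-off when a
# link is operated at a lower rate. Simplified for intuition only; not the
# Markov model of [Gunaratne and Christensen, 2006]. All numbers are hypothetical.
def mm1_mean_delay(arrival_pkts_s: float, service_pkts_s: float) -> float:
    """Mean sojourn time of an M/M/1 queue: W = 1 / (mu - lambda)."""
    assert service_pkts_s > arrival_pkts_s, "queue must be stable"
    return 1.0 / (service_pkts_s - arrival_pkts_s)

arrival = 500.0                        # packets/s offered by the users
high_rate, low_rate = 10000.0, 1000.0  # packets/s served at the high vs low link rate

print(f"delay at high rate: {mm1_mean_delay(arrival, high_rate) * 1e3:.3f} ms")  # ~0.105 ms
print(f"delay at low rate : {mm1_mean_delay(arrival, low_rate) * 1e3:.3f} ms")   # ~2.000 ms
# If link power grows roughly with the operating rate, the low rate saves
# energy at the cost of the (still modest) extra queueing delay computed above.
```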
The authors of [Gunaratne et al., 2008] also proposed a MAC handshake mechanism to set the link rate automatically by following the buffer occupancy. Following the buffer occupancy requires counting bytes all the time, which might increase the complexity of the hardware. To ease this problem, they proposed a heuristic based on a timer-driven switching mechanism. Two time intervals were defined, one for the high link rate and one for the lower link rate. The link must stay at the higher rate for a fixed amount of time before being switched to a lower rate, regardless of the buffer occupancy. The policy itself was adaptive: the duration for staying at the higher rate doubles every time a mandatory switch from the lower to the higher rate occurs before the fixed interval ends.
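The following sketch captures the spirit of such a timer-based dual-rate policy; the thresholds, timings and doubling rule are hypothetical parameters for illustration rather than the exact heuristic of [Gunaratne et al., 2008].

```python
# Illustrative sketch of a timer-based dual-rate policy in the spirit of
# [Gunaratne et al., 2008]: the link must dwell at the high rate for a minimum
# time, and that dwell time doubles whenever the link is forced back up to the
# high rate before the low-rate interval ends. All values are hypothetical.
class DualRatePolicy:
    def __init__(self, base_dwell_s=0.1, low_interval_s=0.2,
                 high_threshold=32_000, low_threshold=2_000):
        self.rate = "high"
        self.dwell_s = base_dwell_s           # minimum time to stay at the high rate
        self.low_interval_s = low_interval_s  # interval the low rate is expected to last
        self.time_in_state_s = 0.0
        self.high_threshold = high_threshold  # queue length forcing a switch to high
        self.low_threshold = low_threshold    # queue length allowing a switch to low

    def step(self, dt_s: float, buffer_bytes: int) -> str:
        self.time_in_state_s += dt_s
        if self.rate == "high":
            # Drop to the low rate only after the dwell timer expires and the queue is small.
            if self.time_in_state_s >= self.dwell_s and buffer_bytes <= self.low_threshold:
                self.rate, self.time_in_state_s = "low", 0.0
        else:
            if buffer_bytes >= self.high_threshold:
                # Mandatory switch back to high; if it happens before the fixed
                # low-rate interval ends, double the high-rate dwell time.
                if self.time_in_state_s < self.low_interval_s:
                    self.dwell_s *= 2
                self.rate, self.time_in_state_s = "high", 0.0
        return self.rate

policy = DualRatePolicy()
for t, occupancy in enumerate([40_000, 10_000, 1_000, 500, 50_000, 1_000]):
    print(t, policy.step(dt_s=0.05, buffer_bytes=occupancy), policy.dwell_s)
```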
All these works have demonstrated the usefulness of ALR for saving a significant amount of energy.


Software defined networking: A paradigm shift in energy savings

In traditional networking, a networking device (i.e., a switch or router) contains both the data plane and the control plane. The control plane provides the rules required for forwarding to the forwarding table, which contains all the information regarding the next hop each packet needs in order to reach its destination. Based on this forwarding table, the data plane forwards packets to their destination. The intelligence in networking, which includes security, forwarding, and load balancing, is implemented at the switch level, and as both planes are strongly coupled in every networking component, making any change in the network architecture is time-consuming and in many cases not feasible, because every component of the network needs to be configured individually. In this regard, a novel paradigm has emerged whose key design principle is the separation of the control plane and the data plane, known as software defined networking (SDN). In contrast to current communication systems, it introduces a controller, a centralized control entity containing the control plane, while the data plane resides at the network devices, providing flexibility in managing the network.

SDN architectures generally have three components or groups of functionality: SDN applications, the controller and the network devices. SDN applications are programs that communicate with the controller and provide outside resources for decision-making purposes. The SDN controller is a logical entity that receives instructions from the application layer and, after processing them, transmits them to the network devices; the controller has the functionality of configuring the forwarding tables (in other words, flow tables) of the switches. Finally, the network devices receive the instructions from the controller and act on them. This approach offers mechanisms allowing network operators to have a global point of view; the centralized approach is therefore suitable for globally monitoring the system according to the network policies and service requirements provided by the application layer. SDN makes the network programmable; any change made by network engineers on the controller can therefore easily be propagated throughout the entire network. The bird's-eye view of SDN is also very effective for identifying any zombie machine in the network; zombie machines are compromised hosts (e.g., datacenter servers) that are connected to the network and are often difficult to identify. The authors of [Hakiri et al., 2014] and [Wibowo et al., 2017] provide an extensive state of the art on SDN research scopes, challenges, opportunities and future directions. Software defined networking provides several benefits, including on-demand provisioning, automated load balancing, and the ability to scale network resources based on application requirements. SDN also helps to create completely virtualized datacenters, where an end-to-end network between users and servers can be created flexibly, with the possibility of terminating the connection at any moment.
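The control/data plane split described above can be modeled with a small conceptual sketch: a logically centralized controller holds the global policy and pushes flow entries into the flow tables of simple switches. This is an abstract illustration only, not the API of any real controller platform (OpenFlow, Ryu, ONOS, ...), and the crude prefix match is purely for demonstration.

```python
# Conceptual sketch of the SDN split: the controller (control plane) installs
# flow entries; switches (data plane) forward purely by table lookup. Not tied
# to any real controller API; the /24-style prefix match is a simplification.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class FlowEntry:
    match_dst: str   # destination prefix to match, e.g. "10.0.1.0/24"
    out_port: int    # port the switch should use for matching packets

@dataclass
class Switch:
    name: str
    flow_table: List[FlowEntry] = field(default_factory=list)

    def forward(self, dst_ip: str) -> Optional[int]:
        # Data plane: purely table-driven, no local routing intelligence.
        for entry in self.flow_table:
            prefix = entry.match_dst.split("/")[0].rsplit(".", 1)[0]  # crude /24 match
            if dst_ip.startswith(prefix + "."):
                return entry.out_port
        return None  # would normally trigger a request to the controller

class Controller:
    """Control plane: knows every switch and installs rules globally."""
    def __init__(self):
        self.switches: Dict[str, Switch] = {}

    def register(self, switch: Switch):
        self.switches[switch.name] = switch

    def install_rule(self, switch_name: str, entry: FlowEntry):
        self.switches[switch_name].flow_table.append(entry)

ctrl = Controller()
s1 = Switch("s1")
ctrl.register(s1)
ctrl.install_rule("s1", FlowEntry("10.0.1.0/24", out_port=2))
print(s1.forward("10.0.1.7"))  # -> 2
```

The important point for energy-aware routing is that the controller's global view makes it possible to recompute and redistribute all flow tables at once when parts of the network are put to sleep.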
According to [Assefa and Özkasap, 2019], energy-efficient approaches using SDN can be classified into three groups: traffic aware, end-system aware and rule placement. Traffic-aware energy-efficiency approaches are mainly designed around the fact that most network components (i.e., forwarding switches/routers) are underutilized; as discussed in the previous section, a large number of devices can be turned off during off-peak hours, and these approaches exploit the full potential of SDN to provide an energy-efficient network. End-system-aware approaches are mainly applicable to datacenter networks (DCN). In a DCN, there are most of the time a few idle servers kept only for fail-safe purposes; using SDN, these unused servers can be shut down to save energy. The last approach is rule placement. When SDN is used, a set of rules is required to establish the connection between the controller and the forwarding switches. Placing these rules is an NP-hard problem, and inefficient rule placement might cause congestion as well as unnecessary energy consumption; the last energy-efficient approach is therefore proper rule placement. Figure 2.3 gives a general outline of the SDN architecture and also identifies the different parts of the architecture on which the energy-saving approaches focus. Using SDN can positively affect the energy consumption of the network: work done at Georgia Southern University showed that using an SDN architecture can decrease energy consumption significantly [Rawat and Bajracharya, 2016]. In this work, the energy consumption of a conventional campus network and of an SDN architecture were compared, and the results reveal that using SDN contributes to a reduction of carbon emissions by 25% or more.
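To make the traffic-aware idea concrete, the sketch below greedily powers off the least-loaded switches as long as the remaining topology stays connected. The utilization threshold and the greedy order are hypothetical choices for illustration, not a specific published algorithm, and a real controller would additionally check residual capacity and rerouting constraints.

```python
# Minimal sketch of a traffic-aware switch-off decision: power off lightly
# loaded switches while keeping the remaining topology connected. Threshold and
# greedy order are hypothetical; capacity constraints are ignored for brevity.
import networkx as nx

def switches_to_power_off(graph: nx.Graph, load: dict, threshold: float) -> list:
    """Return switches that can be turned off without disconnecting the network.

    `load` maps each node to its utilization in [0, 1]; nodes below `threshold`
    are candidates, tried from least to most loaded.
    """
    powered_off = []
    active = graph.copy()
    candidates = sorted((n for n in graph.nodes if load.get(n, 0.0) < threshold),
                        key=lambda n: load.get(n, 0.0))
    for node in candidates:
        trial = active.copy()
        trial.remove_node(node)
        if trial.number_of_nodes() > 0 and nx.is_connected(trial):
            active = trial
            powered_off.append(node)
    return powered_off

g = nx.cycle_graph(5)  # toy topology: 5 switches in a ring
loads = {0: 0.8, 1: 0.05, 2: 0.6, 3: 0.02, 4: 0.7}
print(switches_to_power_off(g, loads, threshold=0.1))  # -> [3]
```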

Table of contents :

Chapter 1 Introduction 
1.1 Problem statement
1.2 Thesis contribution
1.3 Thesis organization
Chapter 2 Green ICT and green networking: state of the art 
2.1 Energy efficiency technology for networking devices
2.1.1 Network hardware level
2.1.2 Network software level
2.1.3 Network design level
2.2 Software defined networking: A paradigm shift in energy savings
2.3 Brown energy vs green energy
2.4 Open issues and discussion
Chapter 3 Analysing the power consumption of the core network: GÉANT 
3.1 Topology and traffic analysis
3.1.1 Day vs night and week vs weekend traffic analysis
3.1.2 Testing bandwidth capacity
3.1.3 Why not brute-force?
3.2 Shared path first algorithm
3.2.1 Network model formalization
3.2.2 Network constraints
3.2.3 Algorithm description
3.3 Result analysis
3.4 Conclusion
Chapter 4 Energy aware routing: An SDN perspective 
4.1 Accepted green policies
4.2 Defining planes: control, data and management plane
4.3 Formalization of the energy model
4.4 Heuristic algorithm: genetic algorithm for solving minimization problem
4.4.1 Algorithm description
4.4.2 Design of experiment
4.4.3 Tuning of the algorithm
4.4.4 Simulation setup
4.5 Result analysis
4.5.1 Tuning outcome
4.5.2 Fitness function analysis
4.6 Conclusion
Chapter 5 Moving towards Sustainability 
5.1 Quality in Sustainability
5.2 Introducing environmental metrics
5.2.1 Carbon emission factor
5.2.2 Non-renewable energy usage percentage
5.3 Energy aware routing vs Pollution aware routing
5.4 Proposed solution
5.4.1 Integration of environmental metrics into objective function
5.4.2 Design of a sustainable system
5.5 Evaluation of the model in terms of QiS
5.5.1 Result analysis
5.5.2 Discussion on sustainability
5.6 Evaluation of the model in terms of QoS
5.6.1 QoS parameters
5.6.2 Result analysis
5.7 Conclusion
Chapter 6 Green-networking and stability 
6.1 Network stability issues
6.1.1 Incoherent data planes
6.1.2 Convergence issue in distributing a data plane
6.1.3 Network instability and its occurrence
6.2 Improving stability of the system
6.2.1 Introducing penalty
6.2.2 Plane filtering: To implement or not
6.2.3 Extension of the model
6.2.4 Case study
6.3 Conclusion
Chapter 7 Conclusion and Perspective 
7.1 Main outcome
7.2 Perspectives
7.2.1 Variability of the static part of energy model
7.2.2 Implementing the solution in LAN context
7.2.3 Applying the solution in end to end network
7.2.4 Temporal analysis for environmental factors
7.2.5 Implementation challenges of SDN platform
7.2.6 Different heuristics for solving the problem
7.2.7 Intelligence defined network for increasing stability and QoE
Appendix A 
Appendix B 
Appendix C List of Publications 
Bibliography
