Over the last decade, novel methods in engineering and electronics, inexpensive construction materials, and enhanced methodologies for large-scale production have arisen, triggering a massive flood of disparate powerful hardware with different abilities. This has made room for new technologies to emerge. With cheaper and more powerful hardware, the idea of virtualizing hardware platforms and general computing resources became popular. The use of virtualization techniques has augmented the number of network resources, real or virtualized, increasing the complexity of network management. With the incorporation of high-speed communication lines, cloud computing has gone one step further. Cloud computing offers services whose processing is transparently performed by combining resources distributed all around the globe. The decoupling of services from underlying resources has opened new horizons for emerging technologies such as Software-Defined Networks (SDN) [140, 64]. SDN allows administrators to manage computer networks independently from the real devices that actually implement low-level network functionalities. Complementarily, Network Functions Virtualization (NFV) has come up with a novel perspective that leverages virtualization technology to consolidate disparate network devices onto a single standard high-performance network platform. These efforts aim at simplifying network management, to which autonomic computing can highly contribute.
The rapid evolution of the hardware industry has also promoted the increasing use of mobile devices such as smartphones and tablets. Indeed, it is expected that the number of mobile-connected devices will exceed the number of people on Earth by the end of 2013. With thousands of applications and services, mobile end users expect seamless service provisioning from the cloud and clean communications on the move. This becomes a challenging problem for network operators as mobile networks keep expanding fast. Even though mobile devices spread quickly, they still lack powerful computation resources. In light of this, a new research domain denominated mobile cloud computing is emerging, which aims at making mobile devices resourceful in terms of computational power, memory, storage, energy, and context-awareness. This makes their management even more complex.
But mobile networks are not the only technological field with resource-constrained devices. Low-Power and Lossy Networks (LLN) is a classification of networking environments with resource-constrained devices, which includes Wireless Sensor Networks (WSN). WSNs are dynamic networks composed of low-power sensors with short wireless range, used for instance in temperature monitoring and home automation, which collaborate with each other to route messages to their destination. However, energy or range limitations of individual sensors may break the communication links passing through them, posing hard management problems including routing, security and interoperability issues. In addition, WSNs use the ISM (industrial, scientific and medical) radio bands to transmit packets across the network. But the ISM bands are also used by other technologies, and therefore the throughput of communication links can be degraded because sensors usually have to wait for available channels to send their packets. To overcome this and many other issues, a novel research area called Cognitive Radio Networks (CRN) is arising. CRNs feature autonomic strategies, making a smart use of the radio-frequency spectrum for opportunistically transmitting information.
All these technologies coexist within an environment that requires proper management. Indeed, the Internet constitutes a target infrastructure where all these technologies can be integrated. With the advent of IPv6, the unique identification of any network device in the world becomes possible. This has given room to a new way of understanding the Internet named the Internet of Things (IoT) [18, 23]. IoT describes the concept of a global network where heterogeneous and ubiquitous devices, from standard computers to cars and appliances with on-board wireless sensors, become interconnected through high-speed communication channels. The revolutionary vision of an Internet integrated by any type of object providing smart services to others over a dense mesh of communication links is, for many scientists, the Internet of the future.
In brief, computer technologies evolve fast and constitute very complex internetworked environments. To make their management sustainable, systems involved in these convoluted networks must expose higher levels of autonomy, as reported in [16, 143]. To that end, autonomic computing provides strong foundations to tackle the management complexity of current and future networks.
Large-scale network management
With the overwhelming avalanche of new devices and technologies, the trend towards the Internet of Things seems indeed natural. However, there exist other influential factors that cannot be ignored, for instance, people's behavior with respect to the use of these technologies. As illustrated in Figure 2.1, people also determine, to some extent, in which way technology can be used, or whether its purpose makes sense. Social networks such as Facebook and LinkedIn constitute an outstanding example of technologies that have greatly influenced people's behavior and vice versa. During the last decade, it has been observed that social interactions mostly involve the exchange and sharing of information and media content. The mechanisms that provide access to this content have dramatically evolved since then. Media aggregators such as YouTube, and photo sharing sites such as Picasa and Flickr, are some examples.
Even though these web-based technologies are used on a daily basis, they are not the only means for accessing content. Dedicated infrastructures designed to share digital material have also arisen, such as Peer-to-Peer (P2P) overlays (e.g. BitTorrent, eMule) and Content Distribution Networks (CDN) (e.g. Akamai, Limelight). The overwhelming consumption of media content over the Internet is currently so strong that several academic and industry entities have proposed radical approaches to deal with the burden of information exchange across the global network. Information-Centric Networking (ICN) is a new paradigm that provides a clean architectural approach to address these new relevant requirements [164, 3]. The main argument is that the Internet, as originally conceived, was designed to exchange data packets between identified machines. However, end users do not really care about who provides the information (node endpoints); they just want to access the exact content they are looking for, no matter where it is stored. This vision drives radical changes in the architectural design of the Internet itself, orienting its organization towards named content and using in-network caching to store popular content so that nearby users can retrieve the same information faster. Content-Centric Networking (CCN) is a well-known approach within the ICN paradigm [139, 36]. Its main goal is to provide a network infrastructure service that is better suited to today's use, particularly content distribution and mobility, and more resilient to disruptions and failures.
These novel information-based network models appear as a response to the inability of current networks to meet end-users' requirements. Therefore, a potential reading of this phenomenon is that the community is trying to recreate or adapt more than thirty years of IP-based technologies to support current needs, which is indeed natural, but also extremely hard. This shows why flexible technologies are important. In that context, the adaptive nature of autonomic approaches may better fit current and future end-user behavior, providing more flexibility and scalability.
In summary, we have illustrated in this and the previous section two different evolving paths that can have a strong impact on the future Internet. The objective is not to make an exhaustive analysis of each current edge technology, but to provide an insight into what is happening within current networks and the Internet, as well as the disparate technologies running on top of them. In order to support their natural evolution, finding new perspectives to manage these evolving networks in a smart and controlled manner constitutes a crucial problem. These perspectives must provide novel ways to deal with network management complexity. With this in mind, we have observed that no matter what technology or trend is considered, a common feature remains key to these challenges: autonomy. In order to cope with the overwhelming demand of end users as well as the fast large-scale deployment of heterogeneous network devices, autonomous mechanisms for managing these networks are essential to achieve scalability and reliability. In the next section, we present an overview of autonomic computing and detail architectural aspects as well as internal design issues.
Autonomic computing overview
The autonomic computing paradigm was born as a response to the evident arising complexity of managing computing systems [91, 57]. In 2001, IBM communicated a manifesto explaining that the management methods used at that moment, including software and hardware installation, configuration, integration, and maintenance, were not suitable to scale with the landscape that was coming. In that context, a radically new way to deal with computer and network management was proposed: autonomic computing. The vision of autonomic computing is strongly inspired by the autonomic nervous system. Every day, we, as human beings, consciously perform several tasks that, to some extent, are the result of following some high-level objective within our lives. We eat when we are hungry, we sleep when we are tired, we take decisions expecting to succeed in our goals, and many other complex tasks we can imagine. However, there are some other tasks, which are as important as eating or sleeping, that are unconsciously and automatically done by our own bodies. Indeed, every time we breathe, we do not do it consciously; we do not think about it. However, it is still a vital function that actually follows a high-level human law: being alive. In the same manner, the autonomic nervous system also governs our heart rate and body temperature, thus freeing our conscious brain from the burden of dealing with these and many other low-level, yet vital, functions. In that context, autonomic computing aims at providing an infrastructure where networks can be managed by establishing high-level objectives. The underlying networking components will then perform in accordance with these rules and the changing environment, without explicit human intervention. This perspective aims at providing strong foundations for developing scalable and flexible infrastructures able to support a demanding and changing technological reality.
Key concepts of autonomics
The first step towards the construction of autonomic solutions is in fact to understand what autonomic actually means. In other words, we should be able to distinguish between autonomic solutions and those that are just automatic. In 2009, NASA published a book explaining how autonomic systems are applied to NASA intelligent spacecraft operations and exploration systems. There, the differences between automation, autonomy and autonomicity are very well discussed. Both automation and autonomy refer to the ability to execute a complete process without human intervention. The main difference is that while an automated solution sticks to the step-by-step process it was built for, an autonomous solution may involve, under certain circumstances, a decision process that affects the way tasks are executed, in order to accomplish its goals or final purpose. Such circumstances are usually related to the environment perceived by the autonomous agent. In other words, automated solutions replace step-by-step processes done by humans. Autonomic solutions, on the other hand, aim at emulating human behavior during the process, by reasoning and taking decisions for the successful operation of the system. According to the dictionary definitions used in that book, autonomous and autonomic solutions mean almost the same, except for a very slight property: spontaneity. While autonomy means self-governance and self-direction, independent and not controlled by external forces, autonomic means self-management that occurs automatically and involuntarily, just as the autonomic nervous system does. For the sake of clarity and simplicity, we will use both terms, autonomous and autonomic, interchangeably in this thesis except where explicitly noted, considering the following definition.
Definition 1 (Autonomic system). An autonomic system is a self-governed entity, able to manage itself without any type of external control, and to perform required tasks without human intervention in order to accomplish the goals it was created for. While the purpose of the autonomic system is defined by high-level objectives, the achievement of this purpose is delegated to the system itself. The internal activities performed by the autonomic system may include environment perception, analysis, reasoning, decision making, planning and execution of actions that must ensure the successful operation of the system.
As an effort to classify existing management solutions and their positioning with respect to the autonomic vision, IBM established what it called the evolutionary path to autonomic computing. This path is represented by five levels, as illustrated in Figure 2.2. The first level depicts the basic approach, where network elements are independently managed by system administrators. Next, the managed level integrates the information collected from disparate network systems into one consolidated view. At the predictive level, new technologies are incorporated for correlating information, predicting optimal configurations and providing advice to system administrators. The ability to automatically take actions based on the available information constitutes the adaptive level. Finally, the autonomic level is achieved when the system is governed by business policies and objectives. These objectives define the purpose of the autonomic system, which is inherently pursued by its logical implementation and supported by its internal know-how as well as its ability to sense the environment. Being governed by policies or high-level objectives is the key difference between autonomic and automated solutions.
Autonomic solutions are usually composed of smaller components called autonomic entities. Autonomic entities are designed to serve a specific purpose, such as monitoring a network device or providing routing services. These entities can then be combined to construct more complex autonomic solutions. In order to organize their self-governing nature, autonomic systems involve a set of functional areas called self-* properties, defined as follows:
– self-configuration, providing mechanisms and techniques for automatically configuring components and services,
– self-optimization, covering methods for monitoring and adapting parameters in order to achieve an optimal operation according to the laws that govern the system,
– self-healing, for automatically detecting, diagnosing and repairing localized software and hardware problems, and
– self-protection, supporting activities for identifying and defending the system against potential threats.
These areas make it possible to classify the mechanisms used for regulating the behavior of autonomic entities. This aspect is discussed in the following subsection.
Behavioral and architectural models
From a high-level perspective, autonomic entities work under closed control loops that govern their behavior. In control theory, a closed control loop refers to a feedback mechanism that controls the dynamic behavior of a system based on the sensed environment as well as its own state (feedback). As shown in Figure 2.3, the resource manager interface provides means for monitoring and controlling the managed element (hardware, software, or others). While its sensor enables the autonomic manager to obtain data from the resource, the effector allows it to perform operations on the resource. The autonomic manager is composed of a cycle of four activities that determines the behavior of the specific autonomic entity. This cycle or control loop includes monitoring its current state, analyzing the available information, planning future actions, and executing the generated plans in compliance with specified high-level goals. In addition, the autonomic manager also exposes a management interface, in the same way the managed element does. This interface allows other autonomic managers to use its services and provides composition capabilities on distributed infrastructures.
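The monitor-analyze-plan-execute cycle described above can be sketched in a few lines of code. The following is a minimal illustrative sketch, not IBM's reference implementation; all names (ManagedElement, AutonomicManager, the goal dictionary, the "set k=v" action format) are hypothetical and chosen only for the example:

```python
from abc import ABC, abstractmethod

class ManagedElement(ABC):
    """Managed resource exposing a sensor and an effector."""
    @abstractmethod
    def sense(self) -> dict: ...            # sensor: read the current state
    @abstractmethod
    def apply(self, action: str) -> None: ...  # effector: change the resource

class AutonomicManager:
    """Runs the closed control loop against one managed element."""
    def __init__(self, element: ManagedElement, goal: dict):
        self.element = element
        self.goal = goal                    # high-level objective

    def step(self) -> list:
        state = self.element.sense()        # monitor
        deviation = self.analyze(state)     # analyze
        plan = self.plan(deviation)         # plan
        for action in plan:                 # execute
            self.element.apply(action)
        return plan

    def analyze(self, state: dict) -> dict:
        # Keep only the parameters that deviate from the goal.
        return {k: v for k, v in state.items() if self.goal.get(k) != v}

    def plan(self, deviation: dict) -> list:
        # Plan one corrective action per deviating parameter.
        return [f"set {k}={self.goal[k]}" for k in deviation]

# Toy managed element: a configuration store that has drifted from the goal.
class ConfigElement(ManagedElement):
    def __init__(self, state: dict):
        self.state = state
    def sense(self) -> dict:
        return dict(self.state)
    def apply(self, action: str) -> None:
        key, value = action.removeprefix("set ").split("=")
        self.state[key] = value

elem = ConfigElement({"mtu": "1400"})
mgr = AutonomicManager(elem, goal={"mtu": "1500"})
actions = mgr.step()   # one iteration of the control loop
```

After one iteration the element converges to the goal, and subsequent iterations produce an empty plan, which mirrors the feedback behavior the control loop is meant to achieve.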
It is important to highlight that a wide range of software and systems could embody, to some extent, autonomic solutions. To achieve this, the involved elements should be adapted to interact with the environment by means of sensors and effectors, as illustrated in Figure 2.3. In addition, they should work guided by rules and policies intended to achieve a specific purpose. From a behavioral perspective, the autonomic manager will continuously monitor the managed elements and perform an analysis of the perceived state. This information is then used to plan and execute the changes required to align the state of the managed elements with the specified high-level objectives. This process describes the internal and finest working level of an autonomic system. However, the management interfaces exposed by these self-governed components make it possible to combine them and obtain simple solutions for complex problems. Indeed, the ability to integrate autonomic solutions into broader autonomic systems provides support for accompanying evolving technological landscapes.
The basic architecture of the autonomic computing approach proposes a layered organization, as illustrated in Figure 2.4. Starting from the bottom, layer 1 contains the managed resources of the IT infrastructure. These resources can be hardware devices such as servers, routers and access points, as well as services and software components. Their management interfaces provide means for accessing and controlling the managed resources. Layer 2 describes autonomic managers for, but not limited to, the four aforementioned categories of self-management denominated self-* properties, i.e., self-configuration, self-optimization, self-healing and self-protection. Each autonomic manager within layer 2 is in charge of controlling a specific group of resources. To do so, these managers involve control loops for each self-* property, which control the state and behavior of the underlying devices. In order to align with the general directives of the autonomic system, an autonomic component is required that organizes the overall activity of each specific self-* property across the whole system. Layer 3 contains autonomic managers that orchestrate other autonomic managers, incorporating control loops that have the broadest view of the overall IT infrastructure. For instance, the self-configuration orchestrator will control the underlying autonomic managers related to self-configuration. This provides a consistent outlook of the system for each self-* property. The self-* orchestrator component is in charge of controlling every specific orchestrator, which is essential to achieve consistency with respect to the laws that govern the autonomic system. The top layer illustrates a manual manager that provides a common system management interface for the IT professional, using an integrated solutions console. The various manual and autonomic manager layers can obtain and share knowledge via knowledge sources.
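This layered organization can be sketched as follows, under the assumption that each layer-2 manager is tagged with the self-* property it serves. All class and attribute names here are illustrative and do not come from IBM's architectural blueprint:

```python
class ResourceManager:
    """Layer-2 autonomic manager controlling one group of resources
    for a single self-* property."""
    def __init__(self, name: str, prop: str):
        self.name, self.prop = name, prop

    def report(self) -> dict:
        # In a real system this would summarize the manager's control loop.
        return {"manager": self.name, "property": self.prop, "status": "ok"}

class PropertyOrchestrator:
    """Layer-3 orchestrator: consolidated view of one self-* property
    across all layer-2 managers."""
    def __init__(self, prop: str, managers: list):
        self.prop = prop
        self.managers = [m for m in managers if m.prop == prop]

    def view(self) -> list:
        return [m.report() for m in self.managers]

class GlobalOrchestrator:
    """Self-* orchestrator: controls every property-specific orchestrator,
    giving the broadest view of the IT infrastructure."""
    def __init__(self, managers: list, props: list):
        self.orchestrators = {p: PropertyOrchestrator(p, managers)
                              for p in props}

    def consolidated_view(self) -> dict:
        return {p: o.view() for p, o in self.orchestrators.items()}

# Two layer-2 managers, each serving a different self-* property.
managers = [ResourceManager("cfg-mgr", "self-configuration"),
            ResourceManager("fw-mgr", "self-protection")]
root = GlobalOrchestrator(managers,
                          ["self-configuration", "self-protection"])
view = root.consolidated_view()
```

The point of the sketch is the containment relation: each property orchestrator only sees the managers of its own self-* property, while the global orchestrator aggregates all of them, matching the consistency role described above.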
Table of contents:
Chapter 1 General introduction
1.1 The context
1.2 The problem
1.3 Organization of the document
1.3.1 Part I: State of the art
1.3.2 Part II: Contributions
1.3.3 Part III: Implementation
Part I Autonomic environments and vulnerability management
Chapter 2 Network management and autonomic computing
2.2 Large-scale network management
2.2.1 Computer networks and the Internet
2.2.2 Technological evolution
2.2.3 End-users behavior
2.3 Autonomic computing overview
2.3.1 Key concepts of autonomics
2.3.2 Behavioral and architectural models
2.4 Security issues in autonomics
2.4.1 Vulnerability management in autonomic environments
Chapter 3 Vulnerability management
3.2 The vulnerability management process
3.2.1 A brief history of vulnerability management
3.2.2 On the organization of vulnerability management activities
3.3 Discovering vulnerabilities
3.3.1 Exploiting testing methods
3.3.2 Using network forensics
3.3.3 Taking advantage of experience
3.4 Describing vulnerabilities
3.5 Detecting vulnerabilities
3.5.1 Analyzing device vulnerabilities
3.5.2 Analyzing network vulnerabilities
3.5.3 Correlating vulnerabilities with threats and attack graphs
3.6 Remediating vulnerabilities
3.6.1 Change management
3.6.2 Risk and impact assessment
3.7 Research challenges
Part II An autonomic platform for managing configuration vulnerabilities
Chapter 4 Autonomic vulnerability awareness
4.2 Integration of OVAL vulnerability descriptions
4.3 OVAL-aware self-configuration
4.3.1 Overall architecture
4.3.2 OVAL to Cfengine translation formalization
4.4 Experimental results
4.4.1 IOS coverage and execution time
4.4.2 Size of generated Cfengine policies for Cisco IOS
Chapter 5 Extension to distributed vulnerabilities
5.2 Specification of distributed vulnerabilities
5.2.1 Motivation, definition, and mathematical modeling
5.2.2 DOVAL, a distributed vulnerability description language
5.3 Assessing distributed vulnerabilities
5.3.1 Extended architecture overview
5.3.2 Assessment strategies
5.4 Performance evaluation
Chapter 6 Support for past hidden vulnerable states
6.2 Modeling past unknown security exposures
6.2.1 Understanding past unknown security exposures
6.2.2 Specifying past unknown security exposures
6.3 Detecting past hidden vulnerable states
6.3.1 Extended architecture overview
6.3.2 Assessment strategy
6.4 Experimental results
Chapter 7 Mobile security assessment
7.2 Background and motivations
7.3 Vulnerability self-assessment
7.3.1 Self-assessment process model
7.3.2 Assessing Android vulnerabilities
7.3.3 Experimental results
7.4 Probabilistic vulnerability assessment
7.4.1 Probabilistic assessment model
7.4.2 Ovaldroid, a probabilistic vulnerability assessment extension
7.4.3 Performance evaluation
Chapter 8 Remediation of configuration vulnerabilities
8.2 Background and motivations
8.3 Remediating device-based vulnerabilities
8.3.1 Vulnerability remediation modeling
8.3.2 The X2CCDF specification language
8.3.3 Extended framework for remediating device-based vulnerabilities
8.3.4 Performance evaluation
8.4 Towards the remediation of distributed vulnerabilities
8.4.1 Modeling vulnerability treatments
8.4.2 DXCCDF, a distributed vulnerability remediation language
8.4.3 A strategy for collaboratively treating distributed vulnerabilities
8.4.4 Performance evaluation
Part III Implementation
Chapter 9 Development of autonomic vulnerability assessment solutions
9.2 Autonomic vulnerability assessment with Ovalyzer
9.2.1 Implementation prototype
9.2.2 OVAL to Cfengine generation example with Ovalyzer
9.3 Extension to past hidden vulnerable states
9.4 Mobile security assessment with Ovaldroid
9.4.1 Implementation prototype
9.4.2 A probabilistic extension
Chapter 10 General conclusion
10.1 Contributions summary
10.1.1 Autonomic vulnerability management
10.1.2 Implementation prototypes
10.2.1 Proactive autonomic defense by anticipating future vulnerable states
10.2.2 Unified autonomic management platform
10.2.3 Autonomic security for current and emerging technologies
10.3 List of publications