

Qualitative Information Flow

Multilevel Security Systems In 1976, Bell and La Padula introduced a security formalism called multilevel security systems, in which all components of a security system were classified into subjects and objects [BLP76]. Objects consisted of passive entities such as files, while subjects were active components such as users or processes. Processes were further divided into trusted and untrusted entities. In this context, the security policy of the system consisted of a set of rules applied to untrusted subjects, where the latter had to follow the “read down and write up” condition, i.e., untrusted processes were only allowed to read from objects of lower or equal security level and to write to objects of greater or equal security level.
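The “read down and write up” discipline can be sketched in a few lines of code; the names (LEVELS, can_read, can_write) are our own illustrative assumptions, not taken from [BLP76]:

```python
# Illustrative sketch of the Bell-LaPadula "read down, write up" rule
# for untrusted subjects. Names and encoding are ours, for illustration only.

LEVELS = {"low": 0, "high": 1}

def can_read(subject_level: str, object_level: str) -> bool:
    # A subject may read only objects at its level or below ("read down").
    return LEVELS[object_level] <= LEVELS[subject_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # A subject may write only to objects at its level or above ("write up").
    return LEVELS[object_level] >= LEVELS[subject_level]

# A low subject cannot read high data, but may write up to it:
assert not can_read("low", "high")
assert can_write("low", "high")
# A high subject can read down, but must not write down:
assert can_read("high", "low")
assert not can_write("high", "low")
```

The point of the discipline is visible in the assertions: no sequence of permitted reads and writes moves data from a high object to a low one.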
This model was further developed to represent discrete event systems in which information was typically classified into different security levels. In these models, all the sensitive information contained in a process entered in the form of discrete inputs labeled with a security level indicating their sensitivity, and left the process in the form of labeled outputs. An event consisted of an input or an output, and the view of a security level l corresponded to the events labeled with a level less than or equal to l. All other events were said to be hidden from l. In this formalism, a trace represented a temporally ordered sequence of events that corresponded to one possible execution of the system. Usually, only two security levels were distinguished: low information corresponded to public, observable data, while high information represented sensitive data that had to be kept secret from observers. The goal of secure information flow analysis was then to prevent programs from leaking their high inputs to their low outputs. In this setting, confidentiality (i.e., ensuring that information is accessible only to the users authorized to access it) corresponded to the absence of flows of information from secret inputs to publicly observable outputs. Similarly, integrity (i.e., ensuring that the data remains valid) meant that there was no flow from a possibly tainted source, low, to an untainted one, high. In this thesis, we will primarily focus on the security goal of confidentiality.
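The notions of trace and view can be sketched as follows; the event encoding as (label, level) pairs and the function name view are our own illustrative choices, not part of the formalism itself:

```python
# Illustrative sketch: a trace is a time-ordered list of (label, level)
# events, and the view of a security level l is the subsequence of events
# whose level is less than or equal to l. Levels are integers here.

def view(trace, l):
    """Events visible at clearance l; all higher-level events are hidden."""
    return [(name, lvl) for (name, lvl) in trace if lvl <= l]

trace = [("in_secret", 1), ("in_pub", 0), ("out_pub", 0), ("out_secret", 1)]

assert view(trace, 0) == [("in_pub", 0), ("out_pub", 0)]  # the low view
assert view(trace, 1) == trace                            # high sees everything
```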
Information flow is still tremendously relevant today, as it addresses, for instance, issues arising in the everyday use of the internet: the goal of spyware (a Trojan horse) is precisely to gain access to high data and to leak this information to an attacker legally restricted to accessing only low data.
Noninterference Even if the Bell-LaPadula model represented a fundamental advance in the protection of systems security, it suffered from several drawbacks. First, it only considered confidentiality as a security goal, and thus enforced neither integrity nor availability. Furthermore, it lacked flexibility because access rules for subjects could not be changed once they had been defined. Last but not least, the model did not prevent the existence of covert channels, i.e., transmissions of information using non-legitimate data transfer mechanisms that were neither designed nor intended for this purpose. Typical examples of covert channels include storage channels and timing channels. With a storage channel, one process (the sender) allocates some specific storage location and the other (the receiver) checks for evidence of this allocation. With a timing channel, the sender influences the timing (e.g., CPU time) of an event visible to the receiver. In terms of access control, a low subject could for instance send an object to a higher compromised subject, who could then reclassify the object to his own (higher) security level or leave it at the lower security level, thus defining two different states that could be observed by the lower user and used to encode one bit. If, for instance, the object was reclassified, then the lower subject would not be allowed to access the object anymore, contrary to the situation in which the object was not reclassified and its access was still permitted to the lower user.
These issues motivated the development of stronger multilevel security systems, mainly based on the enforcement of noninterference, a property stating that security can only be reached if high information does not interfere with low data output from the system. In other words, the low outputs are completely independent of the high inputs and any kind of information leakage is forbidden. Goguen and Meseguer gave an extensive study of noninterference [GM82], which they informally defined as follows:
One group of users, using a certain set of commands, is noninterfering with another group of users if what the first group does with those commands has no effect on what the second group of users can see.
They proposed a new model which strictly distinguished between the system and the security policies applying to it. On the one hand, the system was modelled as a generalized automaton called a capability system. It consisted of both an ordinary state machine component and a capability machine component which kept track of which actions were permitted to which users. On the other hand, security policies were given as a set of noninterference assertions that declared which information flows were forbidden. They were further divided into static policies that always held and dynamic policies that depended on the state of the capability component of a system and were thus handled using conditional noninterference assertions.
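For a deterministic system, the noninterference check can be phrased as a “purge” of the high inputs that must leave the low outputs unchanged. The following sketch uses toy systems of our own invention (run, leaky_run) purely to illustrate the shape of the check:

```python
# Sketch of a Goguen-Meseguer-style noninterference check for deterministic
# systems: purging the high inputs must not change the low outputs.
# The toy systems below are ours, chosen only to illustrate the idea.

def purge(inputs):
    """Remove all high inputs from an input sequence."""
    return [(lvl, v) for (lvl, v) in inputs if lvl == "low"]

def noninterfering(system, inputs):
    return system(inputs) == system(purge(inputs))

def run(inputs):
    # A secure toy system: echoes low inputs as low outputs, ignores highs.
    return [v for (lvl, v) in inputs if lvl == "low"]

def leaky_run(inputs):
    # An insecure toy system: leaks the COUNT of high inputs to low output.
    highs = sum(1 for (lvl, _) in inputs if lvl == "high")
    return [highs] + [v for (lvl, v) in inputs if lvl == "low"]

ins = [("high", "h1"), ("low", "a"), ("high", "h2"), ("low", "b")]
assert noninterfering(run, ins)        # low outputs independent of highs
assert not noninterfering(leaky_run, ins)  # high count visible at low level
```

As the section explains next, this purge-and-compare reading only makes sense when outputs are entirely determined by inputs; it breaks down for nondeterministic systems.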
Nondeducibility Goguen and Meseguer’s model was however restricted to deterministic systems, in which outputs were entirely determined by the inputs and noninterference held if the removal of the high inputs caused no noticeable difference in the outputs. In a nondeterministic system however, removing high inputs and re-executing the protocol could lead to an (unrelated) change in the low outputs due to nondeterminism only, and this change would violate noninterference, albeit not revealing a security flaw. In many situations, the requirement of noninterference actually proved to be too strong for the security level needed in a system. Consider for instance the system illustrated in Figure 1.1, in which a high-level user H1 sends a message m which is XOR-ed with a string k issued by a second high-level user H2 before being transmitted to a low-level receiver L. In this case, noninterference does not hold because high-level information is leaked from the system (in the form of the ciphertext c), even if the low-level user, because of the encryption, cannot learn anything from the message.
Most of the following research was therefore aimed at weakening noninterference and extending it to nondeterministic systems. In 1986, Sutherland proposed a slightly weaker definition of security than noninterference, called nondeducibility [Sut86]. According to this definition, information flow was interpreted as deducibility, and a system was secure if it was impossible for a user to deduce the actions of other users having a higher security status, thus being closer in spirit to the view of security as “nondisclosure of information”. In the previous example of an encrypted high-level message transmitted to a low-level user, nondeducibility was enforced since the output message c did not increase the receiver’s knowledge. In a deterministic system, nondeducibility therefore held whenever a low-level user could not select, from the outputs, a preferred input whose occurrence was more likely than the others. This definition could be generalized to nondeterministic systems by interpreting nondeducibility as the impossibility for a low-level subject to rule out any consistent high input from the low outputs of the system.
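The encrypted-message example of Figure 1.1 makes the gap between noninterference and nondeducibility concrete. In the following sketch (with arbitrary illustrative values for m and k), the observable c clearly depends on the high inputs, so noninterference fails; yet every candidate message remains consistent with c, so nothing can be deduced:

```python
# Sketch of the one-time-pad example of Figure 1.1: the low receiver
# observes c = m XOR k, where both m and k are high-level inputs.
# The concrete values below are arbitrary, chosen for illustration.

m = 0b1011  # high message from H1
k = 0b0110  # high key from H2
c = m ^ k   # low observable (the ciphertext)

# Changing a high input changes the low observable: noninterference fails.
assert (m ^ k) != ((m ^ 1) ^ k)

# But for ANY candidate message m', some key k' explains the same c,
# so the low observer can rule out no message: nondeducibility holds.
for m_candidate in range(16):
    k_candidate = c ^ m_candidate
    assert m_candidate ^ k_candidate == c
```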
Restrictiveness In 1990, McCullough, an expert in network systems, argued that Sutherland’s model was not appropriate for the protection of security in real-life systems, which often show nondeterministic and asynchronous behaviours. In particular, McCullough described examples of nondeterministic systems which satisfied nondeducibility while violating basic security requirements [McC90]. Consider for instance a nondeterministic system which fills in the spaces between messages with strings of random bits to prevent attackers from analyzing the flow of traffic on the system. Nondeducibility would be preserved even if a low-level user could read unencrypted high messages. If the attacker reads, e.g., “We attack tomorrow”, then she might guess that the string belonged to the high message, which might be true with high probability. However, the attacker would not be able to deduce with certainty that the sentence was not generated by chance in the string of random bits, hence enforcing nondeducibility. This criticism could be seen as an early attempt to advocate for the use of a quantitative definition, since the security of the system relied on “how much” an attacker could deduce about the high information given the observables, and nondeducibility held as soon as some information concerning high variables could not be deduced with certainty.
In order to address this issue, McCullough’s first contribution was to define generalized noninterference, a notion of noninterference that could be extended to nondeterministic systems [McC87]. McCullough also pointed out the crucial importance of hook-up security, i.e., composability in multi-user systems, a property which has today become all the more relevant with the increasing complexity of computer systems and will therefore be specifically addressed in this thesis. Composability is a desirable property for several reasons. First, complex systems cannot today be built and maintained effectively in the long run if they cannot be decomposed into smaller components that can be handled separately and added or removed when necessary without perturbing the whole system. This is particularly relevant in open systems, which usually do not have a permanent notion of “entire system” and may evolve quickly. On the other hand, composability makes it much easier to check that a given property holds for the global system if it is sufficient to check it at the scale of the components of the system. With the degree of complexity reached by today’s computer systems, composability has become a necessary requirement for building a modular and flexible system.
McCullough’s contribution was to show that the three main approaches to system security that had been used so far, i.e., Bell-LaPadula (access control), noninterference and nondeducibility, all failed to be composable. Moreover, McCullough recognized that his generalized noninterference was not hook-up secure either, and was thus not yet a satisfactory security property for multi-user systems. The example provided by McCullough in [McC90] to show that generalized noninterference was not composable is illustrated in Figure 1.2.
The system consists of a component A represented on a vertical time axis in Figure 1.2 (case a and case b), which can exchange low-level and high-level inputs and outputs. Low-level inputs and outputs are represented as solid arrows, while high-level inputs and outputs are represented as dashed arrows.
The component is subject to a random number of high-level inputs (from its left) and high-level outputs (to its right), possibly occurring after some delays. At one point in time, the low message stop is output to the right and is immediately followed by another low message odd or even (called the parity message) which is output to the left, leaking the parity of the number of high-level events (inputs and outputs) that have occurred so far (i.e., before the output of stop). Since the high-level outputs may be delayed and the two low outputs can occur at any time one after the other, not all high-level inputs may have been output when the two low outputs occur.
This system is noninterference secure: if there had been an even number of high-level inputs, the output would be either {stop, odd} if an odd number of high-level outputs had occurred so far (case a in Figure 1.2), or {stop, even} if an even number of high-level outputs had occurred so far (case b in Figure 1.2). An odd number of high-level inputs would lead to the symmetrical situation, and therefore a change in the high inputs would not affect the low-level outputs, as required by noninterference (i.e., whatever the high inputs may be, an adversary observing the low outputs of the protocol can never rule out either of the two possibilities {stop, odd} or {stop, even}).
We now consider the component B, which is similar to A except that the high-level outputs occur to the left, the parity messages are output to the right and the stop message is now a low-level input that comes from the left. The component B works similarly to A and it is easy to see that it satisfies noninterference as well (as illustrated by cases c and d in Figure 1.2).
Cases e and f in Figure 1.2 show that it is now possible to connect A to the left-hand side of B so that the outputs to the right of A become inputs to the left of B and the outputs to the left of B become inputs to the right of A. We assume here that B cannot receive any high-level input from the outside world. Under these settings, the combined system does not satisfy noninterference anymore: since the high-level outputs from A are input to B and vice versa, one can easily see that A and B both output the same parity messages as long as no input to A from the outside has occurred (case e in Figure 1.2). However, an adversary can deduce with certainty that some high-level input occurred from the outside if A and B do not output the same parity messages (case f in Figure 1.2). Therefore a change in the high inputs could affect the low-level outputs of the combined system, which violates generalized noninterference.
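A deliberately coarse sketch of this composition argument, abstracting away all timing and nondeterminism and keeping only the event counts (the function names are ours), shows why the mismatch is revealing: every internal message counts as a high event for both A (as output or input) and B, while an external high input counts only for A:

```python
# Coarse sketch of McCullough's composition example (Figure 1.2).
# We only track counts of high events; delays and nondeterminism are ignored.

def parity(n):
    return "even" if n % 2 == 0 else "odd"

def low_outputs(external_highs_to_A, a_to_b, b_to_a):
    # A's high events: its external high inputs plus every internal message
    # (each internal message is an output of one component, input of the other).
    a_msg = parity(external_highs_to_A + a_to_b + b_to_a)
    # B's high events: only the internal traffic (no external high inputs).
    b_msg = parity(a_to_b + b_to_a)
    return a_msg, b_msg

# With no external high input, the two parity messages always agree (case e):
for a_to_b in range(3):
    for b_to_a in range(3):
        pa, pb = low_outputs(0, a_to_b, b_to_a)
        assert pa == pb

# So a mismatch lets a low observer deduce with certainty that an external
# high-level input has occurred (case f):
pa, pb = low_outputs(1, 2, 1)
assert pa != pb
```

Each component in isolation reveals nothing, yet their low outputs are correlated through the hidden internal traffic; it is exactly this correlation that the composed system leaks.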
In order to enforce composability of security in nondeterministic systems, McCullough defined a new security property called restrictiveness, equivalent to noninterference for deterministic systems but extended to nondeterministic systems. The main problem with the previous approach was the need for a machine which behaves the same whether a low-level input was preceded by a low-level input, a high-level input or no input at all. The component B in the previous example, for instance, does not satisfy this property, since it outputs even if no high-level input occurred, and odd otherwise. McCullough modelled the system as a state machine and specified a set of rules that had to be fulfilled by the state machine to satisfy restrictiveness, i.e., to prevent any high-level information from affecting the behavior of the system as viewed by a low-level user. The important notion of view in this model was defined as an equivalence relation on states and input sequences. McCullough could then prove that restrictiveness was composable. In the previous example, the fact that a high-level input followed by stop did not have the same effect as stop alone violated restrictiveness for component B.
Forward correctability In 1988, Johnson and Thayer argued that hook-up security as defined by McCullough was often stronger than needed in typical real-life applications such as basic text editors [JT88]. In order to justify their statement, they first defined a weaker notion of security called n-forward-correctability, based on the notions of perturbation and correction. A perturbation is a sequence of events (which may not necessarily be a valid trace) obtained from a valid trace by inserting or deleting high-level inputs. A correction then consists of a valid trace which is obtained from a sequence of events by inserting or deleting high-level outputs only. An event system is n-forward-correctable iff for any trace α and any perturbation α′ obtained by inserting or deleting a single high-level input before at most n low-level inputs preceding a high-input-free segment γ, there is a correction of α′ supported in γ. Johnson and Thayer proved that McCullough’s hook-up security is equivalent to ω-forward-correctability, i.e., n-forward-correctability for any integer n. In other words, it is possible to correct a perturbation after an arbitrarily long sequence of low-level inputs. As shown in [JT88], this is strictly stronger than simple forward-correctability (n = 1), because of the following relation:
∀m, n s.t. m > n: m-forward correctability ⇒ n-forward correctability (1.1)
This proposition can be illustrated by a simple example in the case n = 0, m = ω, which shows that 0-forward correctability does not imply ω-forward correctability. We consider a low-level monitoring task checking whether messages between two high-level processes have been transferred (without being itself able to read the content of the messages).
In order to comply with the specification saying that any high input (a message released by the sender process) has indeed been output (received by the receiver process), it is required that every low output (an acknowledgment of the monitoring task)
In order to comply with the specification saying that any high input (a message released by the sender process) has indeed been output (received by the receiver pro-cess), it is required that every low output (an aknowledgment of the monitoring task) is preceded by an equal number (possibly zero) of high inputs and high outputs. Any high output oh occurs therefore in a segment of the form i+h[il]o+h, i.e., a finite number of high inputs followed by a finite number of high outputs, with possibly one low input between the two sequences (see the segments a and b in the trace given as example on Figure 1.3). In this figure, high inputs ih (resp. low inputs il) appear as grey (resp. white) squares labeled i (resp. i), while high outputs oh (resp. low outputs ol) appear as grey (resp. white) squares labeled o (resp. o). A perturbation of this system can be performed by inserting or deleting a high input ih closely before a high-level-free final segment γ (i.e., immediately before γ or before a low-level input immediately preceding γ). Then a correction must be applied in case a low output is contained in the final segment in order to restablish the balance between high inputs and high outputs before the low output occurs.
This can be achieved by inserting a high output oh immediately after an ih (case b in Figure 1.4) or after a low input if it occurred right after an ih (case a in Figure 1.4). On the other hand, a perturbation may also come from the deletion of an ih preceding a high-level-free final segment γ, with or without a low-level input preceding γ (case c in Figure 1.4 and case d in Figure 1.5 respectively). Again, if the final segment contains a high output, a correction must be applied, as illustrated in the figure. Since all these corrections are performed inside the sequence γ, the system satisfies 0-forward correctability (case b in Figure 1.4 and case d in Figure 1.5) and 1-forward correctability (cases a and c in Figure 1.4).
On the other hand, it is not McCullough hook-up secure (i.e., ω-forward correctable): consider the perturbation ihililol, where ih has been inserted into the trace ililol (cases e and f in Figure 1.5). The only possible corrections are ihohililol (case f) and ihilohilol (case e). However, these corrections are performed outside the sequence γ, therefore the system is not 2-forward correctable and thus not ω-forward correctable.
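The balance condition underlying this example (every low output must be preceded by an equal number of high inputs and high outputs) can be sketched as a simple trace check; the string encoding of events ("ih", "il", "oh", "ol") is our own:

```python
# Illustrative check of the monitoring specification: a low output "ol" is
# only valid when the numbers of high inputs "ih" and high outputs "oh"
# seen so far are equal. Event encoding is ours, for illustration only.

def satisfies_spec(trace):
    highs_in = highs_out = 0
    for ev in trace:
        if ev == "ih":
            highs_in += 1
        elif ev == "oh":
            highs_out += 1
        elif ev == "ol" and highs_in != highs_out:
            return False  # unbalanced highs before a low output
    return True

assert satisfies_spec(["il", "ol"])                    # zero highs is balanced
assert not satisfies_spec(["ih", "il", "il", "ol"])    # perturbation ihililol
# The two possible corrections re-balance inside the trace:
assert satisfies_spec(["ih", "oh", "il", "il", "ol"])  # ihohililol (case f)
assert satisfies_spec(["ih", "il", "oh", "il", "ol"])  # ihilohilol (case e)
```

The check makes visible why both corrections must insert oh before the two low inputs: once the segment il il ol begins, no high output may appear inside it, so the imbalance cannot be repaired within γ.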
Johnson and Thayer argued that requiring forward correctability is strong enough for a large set of security systems, and they proved that this property is also composable, i.e., two hooked-up systems which are individually forward correctable lead to a combined system which is forward correctable as well.


Table of contents:

1 Introduction 
1.1 Motivations
1.2 Related Work
1.2.1 Information Flow
1.2.1.1 Qualitative Information Flow
1.2.1.2 Quantitative Information Flow
1.2.1.3 Consideration of the attacker model
1.2.2 Anonymity
1.2.2.1 Definition and Protocols
1.2.2.2 Possibilistic versus probabilistic approaches
1.2.2.3 Degrees of anonymity
1.2.2.4 Anonymity Protocols as Noisy Channel
1.2.3 Belief Logics
1.3 Contribution
1.4 Publications
2 Preliminaries 
2.1 Probability spaces
2.2 Probabilistic Automata
2.3 CCS with probabilistic choice
2.3.1 Syntax
2.3.2 Semantics
2.4 Rényi Entropies, Shannon Entropy, and Mutual Information
2.5 Convexity and Corner Points
3 The process calculus approach 
3.1 Introduction
3.2 CCSp with secret and observable actions
3.3 Modeling protocols for information-hiding
3.3.1 Protocols as channels
3.3.2 Process terms as channels
3.4 Inferring the secrets from the observables
3.5 Safe constructs
3.6 A case study: the Dining Cryptographers
3.7 Conclusion and future work
4 The logical approach 
4.1 Introduction
4.2 A quantitative doxastic logic and its interpretation in CCSp
4.2.1 Modal operators for belief and truth
4.2.2 Syntax of DμCEC
4.2.3 CCSp revisited
4.2.4 Interpretation of DμCEC
4.2.5 Relation with standard (KD45) belief
4.3 Application: Dining Cryptographers
4.4 Application: Oblivious Transfer
4.4.1 Oblivious Transfer of one bit only
4.4.1.1 Description
4.4.1.2 Specification
4.4.2 The 1-out-of-2 Oblivious Transfer
4.4.2.1 Description
4.4.2.2 Specification
4.4.2.3 Implementation of the OT12 protocol using a public-key cryptosystem
4.4.2.4 Verification
4.5 Conclusion and Future Work
5 Other notions based on Bayesian risk 
5.1 Introduction
5.2 Mutual Information and Capacity
5.3 Towards a more suitable notion of leakage
5.3.1 Probabilities of a right guess
5.3.2 Leakage and uncertainty
5.3.3 Multiplicative leakage
5.3.4 Additive leakage
5.4 Properties of the multiplicative leakage
5.5 Properties of the additive leakage
5.6 Comparison
5.6.1 Comparison on a specific input distribution
5.6.2 Comparison of the worst cases
5.7 Conclusion
6 Conclusion 
Bibliography

