Temporal Logic Characterization of utcc Processes 


Process Calculi

Process calculi such as CCS [Milner 1989], CSP [Hoare 1985], the process algebra ACP [Bergstra 1985, Baeten 1990] and the π-calculus [Milner 1999, Sangiorgi 2001] are among the most influential formal methods for reasoning about concurrent systems. A common feature of these calculi is that they treat processes much like the λ-calculus treats computable functions. For example, a typical process term is the parallel composition P | Q, which is built from the terms P and Q with the constructor | and represents the process that results from the parallel execution of the processes P and Q. An operational semantics may dictate that if P can reduce to (or evolve into) P′, written P −→ P′, then we can also have the reduction P | Q −→ P′ | Q.
Process calculi in the literature mainly agree in their emphasis upon algebra. The distinctions among them arise from issues such as the process constructions considered (i.e., the language of processes), the methods used for giving meaning to process terms (i.e. the semantics), and the methods to reason about process behavior (e.g., process equivalences or process logics). Some other issues addressed in the theory of these calculi are their expressive power, and analysis of their behavioral equivalences.
The utcc process calculus aims at modeling mobile reactive systems, i.e., systems that interact continuously with the environment and may change their communication structure. In this dissertation mobility is understood as generation and communication of private links or channels much like in the π-calculus [Milner 1999, Sangiorgi 2001], one of the main representative formalisms for mobility in concurrency theory. Mobility is fundamental, e.g., to specify security protocols where nonces (i.e., randomly-generated unguessable items) are transmitted. In the next section we give a brief introduction to the π-calculus. This will help us to better understand the notion of mobility we address in this dissertation and also the applications we describe in Chapters 8 and 9.

The π-calculus

The π-calculus [Milner 1999, Milner 1992b, Sangiorgi 2001] is a process calculus aiming at describing mobile systems whose configuration may change during the computation. Similar to the λ-calculus, the π-calculus is minimal in that it does not contain primitives such as numbers, booleans, data structures, variables, functions, or even the usual flow control statements.
Mobility in the π-calculus. As we said before, mobility in the π-calculus is understood as generation and communication of private names or links. Links between processes can be created and communicated, thus changing the communication structure of the system. This also allows us to consider the location of an agent in an interactive system to be determined by the links it possesses, i.e., by which other agents it has as neighbors.

Names and Actions

Names are the most primitive entities in the π-calculus. The simplicity of the calculus relies on the dual role that they can play as communication channels and variables.
We presuppose a countable set of names (ports, links or channels), ranged over by x, y, …. For each name x, we assume a co-name x̄, thought of as complementary to x, so that the co-name of x̄ is x itself. We shall use l, l′, … to range over names and co-names.
Definition 2.2.1 (π-calculus syntax). Processes in the π-calculus are built from names by the following syntax:

P, Q, … ::= ∑i∈I αi.Pi  |  (νx)P  |  P | Q        α ::= x̄y | x(y)

where I is a finite set of indexes.
We shall recall briefly some notions as well as the intuitive behavior of the constructs above. See [Milner 1999, Milner 1992b, Sangiorgi 2001] for further details.
The construct ∑i∈I αi.Pi represents a process able to perform exactly one of its αi actions and then behave as the corresponding Pi. When |I| = 0 we shall simply write 0 (i.e., the inactive process). The actions prefixing the Pi's can be of two forms: an output x̄y and an input x(y). In both cases x is called the subject and y the object. The action x̄y represents the capability of sending the name y on channel x. The action x(y) represents the capability of receiving a name, say z, on channel x and replacing y with z in the corresponding continuation. Furthermore, in x(y).P the input action binds the name y in P. The other name binder is the restriction (νx)P, which declares the name x private to P, hence bound in P. Given a process Q, we define in the standard way its bound names bn(Q) as the set of names with a bound occurrence in Q, and its free names fn(Q) as the set of names with a non-bound occurrence in Q.
Finally, the process P | Q denotes parallel composition ; P and Q running in parallel.
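The grammar of Definition 2.2.1 can be rendered as a small abstract syntax tree. The following Python sketch is purely illustrative (the class names Out, In, Sum, Par, New and the function free_names are our own choices, not part of any π-calculus library); it implements fn(·) following the two binders described above:

```python
from dataclasses import dataclass

# Illustrative AST for Definition 2.2.1.

@dataclass(frozen=True)
class Out:          # output prefix x̄y
    subj: str       # channel x
    obj: str        # name y being sent

@dataclass(frozen=True)
class In:           # input prefix x(y)
    subj: str       # channel x
    obj: str        # name y, bound in the continuation

@dataclass(frozen=True)
class Sum:          # guarded choice; branches are (action, continuation) pairs
    branches: tuple

@dataclass(frozen=True)
class Par:          # parallel composition P | Q
    left: object
    right: object

@dataclass(frozen=True)
class New:          # restriction (νx)P; binds name in body
    name: str
    body: object

NIL = Sum(())       # the inactive process 0 is the empty sum

def free_names(p):
    """fn(P): input prefixes bind their object, (νx) binds x."""
    if isinstance(p, Sum):
        names = set()
        for act, cont in p.branches:
            names.add(act.subj)
            if isinstance(act, Out):
                names |= {act.obj} | free_names(cont)
            else:  # In binds act.obj in the continuation
                names |= free_names(cont) - {act.obj}
        return names
    if isinstance(p, Par):
        return free_names(p.left) | free_names(p.right)
    if isinstance(p, New):
        return free_names(p.body) - {p.name}

# x(y).ȳz has free names {x, z}: the input binds y.
p = Sum(((In("x", "y"), Sum(((Out("y", "z"), NIL),))),))
assert free_names(p) == {"x", "z"}
```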

Operational Semantics

The above intuition about process behavior in the π-calculus is made precise by the rules in Table 2.1. The reduction relation −→ is the least binary relation on processes satisfying the rules in Table 2.1. These rules are easily seen to realize the above intuition. We shall use −→∗ to denote the reflexive and transitive closure of −→. A reduction P −→ Q basically says that P can evolve, after some communication between its subprocesses, into Q. The reductions are quotiented by the structural congruence relation ≡ which postulates some basic process equivalences.
Definition 2.2.2 (π-calculus Structural Congruence). Let ≡ be the smallest congruence over processes satisfying the following axioms:
1. P ≡ Q if P and Q differ only by a change of bound names (α-conversion).
2. P | 0 ≡ P,  P | Q ≡ Q | P,  P | (Q | R) ≡ (P | Q) | R.
3. If x ∉ fn(P) then (νx)(P | Q) ≡ P | (νx)Q.
4. (νx)0 ≡ 0,  (νx)(νy)P ≡ (νy)(νx)P.

REACT    (x(y).P + M) | (x̄z.Q + N) −→ P[z/y] | Q
PAR      if P −→ P′ then P | Q −→ P′ | Q
RES      if P −→ P′ then (νa)P −→ (νa)P′
STRUCT   if P ≡ P′, P′ −→ Q′ and Q′ ≡ Q then P −→ Q

Table 2.1: Reduction Rules for the π-calculus.
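Axioms 2 say that | is associative and commutative with 0 as its unit; operationally, this amounts to reading a parallel composition as a multiset of threads. The sketch below (our own illustrative representation, using nested ("par", …, …) tuples) checks congruence modulo those axioms by flattening:

```python
# Structural congruence axioms 2 realized by normalization:
# flatten nested parallel compositions into a multiset of threads,
# dropping occurrences of the inactive process "0".

def flatten(p):
    """Normalize ('par', ...) trees modulo associativity, commutativity, unit."""
    if p == "0":
        return ()
    if isinstance(p, tuple) and p[0] == "par":
        return flatten(p[1]) + flatten(p[2])
    return (p,)

def congruent(p, q):
    # Sorting makes the comparison order-insensitive (commutativity).
    return sorted(flatten(p)) == sorted(flatten(q))

assert congruent(("par", "P", "0"), "P")                      # P | 0 ≡ P
assert congruent(("par", "P", "Q"), ("par", "Q", "P"))        # P | Q ≡ Q | P
assert congruent(("par", "P", ("par", "Q", "R")),
                 ("par", ("par", "P", "Q"), "R"))             # associativity
```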

An Example of Mobility

Let us extend the syntax in Definition 2.2.1 with the match process [x = y].P . This process behaves as P if x and y are the same name, otherwise it does nothing.
Consider now the following processes:
P = c1(y).c̄2y
Q = (νz)(c̄1z.c2(z′).[z = z′].R)
Intuitively, if a link y is sent on channel c1, P forwards it on channel c2. Now, Q sends its private link z on c1 and if it gets it back on c2 it executes the process R.
Using the rules in Table 2.1 we can verify the following
P | Q ≡ (νz)(c1(y).c̄2y | c̄1z.c2(z′).[z = z′].R)
      −→ (νz)(c̄2z | c2(z′).[z = z′].R)
      −→ (νz)(([z = z′].R)[z/z′])
      −→ (νz)(R[z/z′])
This means that the parallel composition of P and Q leads to a configuration where the process R is executed. We shall come back to this example in Section 3.6 where we show that the utcc calculus is able to mimic the mobile behavior of the processes above, where the private name z of Q is sent to P .
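The two communications in this derivation can be replayed with a toy scheduler. The sketch below is an assumption-laden simplification: threads are flat lists of prefixes, substitution only touches object positions, and restriction scoping is ignored since the example does not need it; all names (step, run, the tuple tags) are our own.

```python
# A toy scheduler replaying the reductions of the mobility example.
# Each thread is a list of prefixes: ("out", ch, v), ("in", ch, x),
# ("match", a, b) or ("run", name, "").

def step(threads):
    """Apply one REACT step: synchronize an output head with an input head."""
    for i, t in enumerate(threads):
        if t and t[0][0] == "out":
            _, ch, val = t[0]
            for j, u in enumerate(threads):
                if j != i and u and u[0][0] == "in" and u[0][1] == ch:
                    x = u[0][2]
                    # Substitute the received value for x in the continuation
                    # (object positions only -- enough for this example).
                    rest = [(k, a, (val if b == x else b)) for (k, a, b) in u[1:]]
                    new = list(threads)
                    new[i], new[j] = t[1:], rest
                    return new
    return None  # no communication possible

def run(threads):
    while (nxt := step(threads)) is not None:
        threads = nxt
    return threads

# P = c1(y).c̄2y and Q = c̄1z.c2(z′).[z = z′].R, with R left as a token.
P = [("in", "c1", "y"), ("out", "c2", "y")]
Q = [("out", "c1", "z"), ("in", "c2", "zp"), ("match", "z", "zp"), ("run", "R", "")]
final = run([P, Q])
# After both communications the match compares z with itself, so R can run.
```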

Concurrent Constraint Programming

Concurrent Constraint Programming (CCP) [Saraswat 1993, Saraswat 1991] has emerged as a simple but powerful paradigm for concurrency tied to logic. CCP extends and subsumes both concurrent logic programming [Shapiro 1989] and constraint logic programming [Jaffar 1987]. A fundamental feature in CCP is the specification of concurrent systems by means of constraints. A constraint (e.g. x + y ≥ 10) represents partial information about certain variables. During the computation, the current state of the system is specified by a set of constraints called the store. Processes can change the state of the system by telling information to the store (i.e., adding constraints), and synchronize by asking information to the store (i.e., determining whether a given constraint can be inferred from the store).
Like done in process calculi, the language of processes in the CCP model is given by a small number of primitive operators or combinators. A typical CCP process language features the following operators:
• A tell operator adding a constraint to the store.
• An ask operator querying if a constraint can be deduced from the store.
• Parallel Composition combining processes concurrently.
• A hiding operator (also called restriction or locality ) introducing local variables and thus restricting the interface a process can use to interact with others.
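The tell/ask discipline can be illustrated with a toy store over lower-bound constraints of the form x ≥ n. The class below is a sketch under our own naming (Store, tell, ask are not from any CCP implementation); it shows the two key properties: information only grows, and ask is entailment from the accumulated store.

```python
# A toy CCP store over constraints of the form "var >= n".

class Store:
    def __init__(self):
        self.bounds = {}              # variable -> best known lower bound

    def tell(self, var, n):
        """Add the constraint var >= n; the store is monotone, so we keep
        the strongest (largest) lower bound seen so far."""
        self.bounds[var] = max(n, self.bounds.get(var, float("-inf")))

    def ask(self, var, n):
        """Can var >= n be deduced from the current store?"""
        return self.bounds.get(var, float("-inf")) >= n

s = Store()
s.tell("x", 60)            # the store now entails x >= 60
assert s.ask("x", 42)      # x >= 60 entails x >= 42
assert not s.ask("x", 100) # but not x >= 100: an asking process would block
```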

Reactive Systems and Timed CCP

Reactive systems [Berry 1992] are those that react continuously with their environment at a rate controlled by the environment. For example, a controller or a signal-processing system receives a stimulus (input) from the environment, computes an output, and then waits for the next interaction with the environment.
Languages such as Esterel [Berry 1992], Lustre [Halbwachs 1991], Lucid Synchrone [Caspi 1999] and Signal [Benveniste 1991] among others have been proposed in the literature for programming reactive systems. Those languages are based on the hypothesis of Perfect Synchrony : Program combinators are determinate primitives that respond instantaneously to input signals.
The timed CCP calculus (tcc) [Saraswat 1994] extends CCP for reactive systems. The fundamental move in the tcc model is then to extend the standard CCP with delay and time-out operations. The delay operation forces the execution of a process to be postponed to the next time interval. The time-out operation waits during the current time interval for a given piece of information to be present and if it is not, triggers a process in the next time interval.
Time in tcc is conceptually divided into time intervals (or time units). In a particular time interval, a CCP process P gets as input a constraint c from the environment, executes with this input as the initial store, and when it reaches its resting point, outputs the resulting store d to the environment. The resting point also determines a residual process Q, which is then executed in the next time unit. The resulting store d is not automatically transferred to the next time unit. This way, computations during a time unit proceed monotonically, but the outputs of two different time units are not supposed to be related to each other.
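This input/quiescence/output cycle can be sketched as a driver loop. Everything below is schematic and assumes our own convention: a process is a function mapping an input store to a pair (output store, residual process). The example program illustrates the delay operation (the reaction to "go" happens one time unit later) and the fact that stores are not carried across units.

```python
# A schematic tcc-style interaction loop: one iteration per time unit.

def tcc_run(program, inputs):
    outputs = []
    for c in inputs:                 # get input c from the environment
        d, program = program(c)      # run to the resting point: output + residual
        outputs.append(d)            # emit d; d is NOT carried to the next unit
    return outputs

def watcher(c):
    """Wait for 'go'; when seen, become `acker` in the NEXT time unit."""
    if "go" in c:
        return set(c), acker
    return set(c), watcher

def acker(c):
    """Tell 'ack' during this time unit, then go back to watching."""
    return set(c) | {"ack"}, watcher

outs = tcc_run(watcher, [set(), {"go"}, set(), set()])
# 'ack' appears one unit after 'go', and the last store is empty again.
```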
We postpone the presentation of the syntax and the operational semantics of CCP and tcc to Chapter 3 where we introduce the utcc calculus.


First-Order Linear-Time Temporal Logic

Temporal logics were introduced into computer science by Pnueli [Pnueli 1977] and thereafter proven to be a good basis for specification as well as for (automatic and machine-assisted) reasoning about concurrent systems.
In this dissertation, we shall show that utcc is a declarative model for concurrency. More precisely, we shall show that utcc processes can be seen, at the same time, as computing agents and formulae in First-Order Linear-Time Temporal Logic (FLTL) [Manna 1991].
Furthermore, the symbolic semantics that we develop in Chapter 4 makes use of temporal formulae to give a finite representation of a possible infinite number of substitutions in the operational semantics.
For those reasons, in this section we recall the syntax and semantics of FLTL. We refer the reader to [Manna 1991] for further details.
Recall that a signature Σ is a set of constant, function and predicate symbols. A first-order language L is built from the symbols in Σ, a denumerable set of variables x, y, . . ., and the logic symbols ¬, ∧, ∨, ⇒, ⇔, ∃, ∀, true and false.
Definition 2.4.1 (FLTL Syntax). Given a first-order language L, the FLTL formulae we use are given by the syntax:

F, G, … ::= c | F ∧ G | ¬F | ∃x F | ⊖F | ◦F | □F

where c is a predicate symbol in L.
The modalities ⊖F, ◦F and □F state, respectively, that F holds previously, next and always. We shall use ∀xF for ¬∃x¬F, and ✸F as an abbreviation of ¬□¬F. Intuitively, ✸F means that F eventually has to hold.
We say that F is a state formula if F does not have occurrences of temporal modalities.
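The propositional fragment of these modalities can be checked mechanically over a trace of states. The sketch below is a toy: it works over finite traces (which only approximate FLTL's infinite sequences of states), represents a state as a set of atoms, and uses our own tuple encoding of formulae. It does show the definition of ✸F as ¬□¬F literally:

```python
# A toy checker for the propositional modalities over a finite trace.
# A formula is a nested tuple; a state is the set of atoms true in it.

def holds(f, trace, i=0):
    op = f[0]
    if op == "atom":
        return f[1] in trace[i]
    if op == "not":
        return not holds(f[1], trace, i)
    if op == "and":
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "next":                   # ◦F: F holds in the next state
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == "prev":                   # ⊖F: F held in the previous state
        return i > 0 and holds(f[1], trace, i - 1)
    if op == "always":                 # □F: F holds in every remaining state
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(op)

def eventually(f):
    """✸F defined, as in the text, as the abbreviation ¬□¬F."""
    return ("not", ("always", ("not", f)))

trace = [{"p"}, {"p"}, {"p", "q"}]
assert holds(("always", ("atom", "p")), trace)     # p holds in every state
assert holds(eventually(("atom", "q")), trace)     # q holds in the last state
assert not holds(("atom", "q"), trace, 0)
```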

Semantics of FLTL.

As done in Model Theory, the non-logical symbols of L (predicate, function and constant symbols) are given meaning in an underlying L-structure, or L-model, M(L) = (I, D). That is, they are interpreted via I as relations of the corresponding arity over a domain D.
States and Interpretations. A state s is a mapping assigning to each variable x in L a value s[x] in D. This interpretation is extended to L-expressions in the usual way, for example, s[f (x)] = I(f )(s[x]). We write s |=M(L) c if and only if c is true with respect to s in M(L).
The state s is said to be an x-variant of s′ iff s′[y] = s[y] for each y ≠ x. That is, s and s′ are the same except possibly for the value of the variable x.
We shall use σ, σ′, . . . to range over infinite sequences of states. We say that σ is an x-variant of σ′ iff for each i ≥ 0, σ(i) (the i-th state in σ) is an x-variant of σ′(i) .
Flexible and Rigid Variables. The set of variables is partitioned into rigid and flexible variables. Each sequence σ must satisfy the rigidity condition: if x is rigid then s[x] = s′[x] for all states s, s′ in σ. If x is a flexible variable then different states in σ may assign different values to x.

Syntax and Operational Semantics

This chapter introduces the syntax of the utcc calculus as well as its operational semantics. Along the same lines as tcc [Saraswat 1994, Nielsen 2002a], the operational semantics of utcc is given by an internal and an observable transition relation. The first describes the evolution of processes during a time unit. The second describes how, given an input from the environment, a process reacts by outputting the final store obtained from a finite number of internal reductions. This defines the input-output behavior of a process. We shall also discuss some technical problems that the abstraction construct of utcc poses in the operational semantics. Namely, we show that there exist processes that exhibit infinitely many internal reductions, thus never producing an output. We shall deal with these termination problems in the next chapter, where we present a symbolic semantics for utcc.

Constraint Systems

Concurrent Constraint Programming (CCP) based calculi are parametric in a constraint system [Saraswat 1993] that specifies the basic constraints agents can tell or ask during execution. In this section we recall the definition of these systems.
A constraint represents a piece of (partial) information upon which processes may act. A constraint system then provides a signature from which constraints can be built. Furthermore, the constraint system provides an entailment relation (|=) specifying inter-dependencies between constraints. Intuitively, c |= d means that the information d can be deduced from the information represented by c. For example, x > 60 |= x > 42.
Formally, we can set up the notion of constraint system by using First-Order Logic as in [Smolka 1994, Nielsen 2002a]. Let us suppose that Σ is a signature (i.e., a set of constant, function and predicate symbols) and that Δ is a consistent first-order theory over Σ (i.e., a set of sentences over Σ having at least one model). Constraints can be thought of as first-order formulae over Σ. Consequently, the entailment relation |=Δ is defined as follows: c |=Δ d if the implication c ⇒ d is valid in Δ. This gives us a simple and general formalization of the notion of constraint system as a pair (Σ, Δ).
Definition 3.1.1 (Constraint System). A constraint system is a pair (Σ, Δ) where Σ is a signature of constant, function and predicate symbols, and Δ is a first-order theory over Σ (i.e., a set of first-order sentences over Σ having at least one model).
Given a constraint system (Σ, Δ), let L be its underlying first-order language with vari-ables x, y, . . ., and logic symbols ¬, ∧, ∨, ⇒, ⇔, ∃, ∀, true and false. Constraints, denoted by a, b, c, d, . . . are first-order formulae over L.
We say that c entails d in Δ, written c |=Δ d, iff the implication c ⇒ d is valid in Δ (i.e., iff c ⇒ d is true in all models of Δ). We shall omit "Δ" in |=Δ when Δ = ∅. Furthermore, we say that c is equivalent to d, written c ≡ d, iff c |=Δ d and d |=Δ c. Henceforth we shall use the following notation.
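For a decidable fragment, entailment as validity of the implication c ⇒ d can be checked by brute force. The sketch below does this for propositional constraints by enumerating valuations; it is a toy stand-in for a real theory Δ, and the names entails, c, d are illustrative only.

```python
from itertools import product

# Entailment c |= d checked as validity of c => d over all valuations
# of a given finite set of propositional atoms.

def entails(c, d, atoms):
    """c |= d iff every valuation satisfying c also satisfies d."""
    for vals in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        if c(env) and not d(env):
            return False               # a countermodel: c holds but d fails
    return True

# c = a ∧ b entails d = a, but not conversely.
c = lambda e: e["a"] and e["b"]
d = lambda e: e["a"]
assert entails(c, d, ["a", "b"])
assert not entails(d, c, ["a", "b"])
```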

Table of contents:

1 Introduction 
1.1 Contributions and Organization
1.2 Publications from this Dissertation
2 Preliminaries 
2.1 Process Calculi
2.2 The π-calculus
2.2.1 Names and Actions
2.2.2 Operational Semantics
2.2.3 An Example of Mobility
2.3 Concurrent Constraint Programming
2.3.1 Reactive Systems and Timed CCP
2.4 First-Order Linear-Time Temporal Logic
2.4.1 Semantics of FLTL
3 Syntax and Operational Semantics 
3.1 Constraint Systems
3.2 Timed CCP (tcc)
3.3 Abstractions and Universal Timed CCP
3.3.1 Recursion in utcc
3.4 Structural Operational Semantics
3.5 Properties of the Internal Transitions
3.6 Mobility in utcc
3.7 Input-Output Behavior
3.8 Infinite Internal Behavior
3.9 Summary and Related Work
4 Symbolic Semantics 
4.1 Symbolic Intuitions
4.1.1 Future-Free Temporal Formulae as Constraints
4.2 Symbolic Reductions
4.2.1 The Abstracted-Unless Free Fragment
4.2.2 Past-Monotonic Sequences
4.3 Properties of the Symbolic Semantics
4.3.1 Normal Form of Processes
4.3.2 Symbolic Output Invariance
4.3.3 The Monotonic Fragment
4.4 Relating the SOS and the Symbolic Semantics
4.4.1 Elimination of local processes
4.4.2 Local-Free Fragment
4.4.3 Semantic Correspondence
4.5 Summary and Related Work
5 Temporal Logic Characterization of utcc Processes 
5.1 utcc processes as FLTL formulae
5.2 FLTL Correspondence
5.2.1 Symbolic Reductions Correspond to FLTL Deductions
5.2.2 Deductions in FLTL correspond to Symbolic Reductions
5.3 Summary and Related Work
6 Expressiveness of utcc and Undecidability of FLTL 
6.1 Minsky Machines
6.2 Encoding Minsky Machines into utcc
6.2.1 Representation of Numbers in utcc
6.2.2 Encoding of Machine Configurations
6.3 Correctness of the Encoding
6.3.1 Derivations in the Minsky Machine and utcc Observables
6.3.2 Termination and Computations of the Minsky Machine
6.4 Undecidability of monadic FLTL
6.5 Encoding the λ-calculus into utcc
6.5.1 The call-by-name λ-calculus
6.5.2 Encoding the λ-calculus into the π-calculus
6.5.3 Encoding the λ-calculus into utcc
6.5.4 Correctness of the Encoding
6.6 Summary and Related Work
7 Denotational Semantics 
7.1 Symbolic Behavior as Closure Operators
7.1.1 Symbolic Input-Output Relation
7.1.2 Retrieving the Input-Output Behavior
7.2 Denotational Semantics for utcc
7.2.1 Semantic Equation for Abstractions Using ~x-variants
7.3 Full Abstraction
7.3.1 Soundness of the Denotation
7.3.2 Completeness of the Denotation
7.4 Summary and Related Work
8 Closure Operators for Security 
8.1 The modeling language: SCCP
8.1.1 Processes in SCCP
8.2 Dolev-Yao Constraint System
8.3 Modeling a Security Protocol in SCCP
8.4 Closure Operator semantics for SCCP
8.4.1 Closure Properties of SCCP
8.5 Verification of Secrecy Properties
8.6 Reachability Analysis in SCCP
8.7 Summary and Related Work
9 Applications 
9.1 Service Oriented Computing
9.2 A Language for Structured Communication
9.2.1 Operational Semantics of HVK
9.2.2 An Example
9.3 A Declarative Interpretation for Sessions
9.3.1 Operational Correspondence
9.4 HVK-T: A Temporal extension of HVK
9.4.1 Case Study: Electronic booking
9.4.2 Exploiting the Logic Correspondence
9.5 Multimedia Interaction Systems
9.6 Dynamic Interactive Scores
9.6.1 A utcc model for Dynamic Interactive Scores
9.6.2 Verification of the Model
9.7 A Model for Music Improvisation
9.8 Summary and Related Work
10 Abstract Semantics and Static Analysis of utcc Programs 
10.1 Static Analysis and Abstract Interpretation
10.1.1 Static Analysis of Timed CCP Programs
10.2 Constraint Systems as Information Systems
10.2.1 Recursion and Parameter Passing
10.3 A Denotational model for tcc and utcc
10.3.1 Semantic Correspondence
10.4 Abstract Interpretation Framework
10.4.1 Abstract Constraint Systems
10.4.2 Abstract Semantics
10.4.3 Soundness of the Approximation
10.5 Applications
10.5.1 Groundness Analysis
10.5.2 Analysis of Reactive Systems
10.5.3 Analyzing Secrecy Properties
10.5.4 A prototypical implementation
10.6 Summary and Related Work
11 Concluding Remarks
11.1 Overview
11.2 Future Directions
Bibliography 
A Incompleteness of Monadic FLTL: Alternative Proof
A.1 Encoding Minsky Machines
A.2 Encoding of Numbers and Configurations
A.3 Monadic FLTL is Undecidable
Index
