Deriving Labels and Bisimilarity for CCP 


Labeled semantics does two things: it gives the semantics of the raw processes, the computational steps and it helps you reason about the congruences. Reduction semantics just does the first thing and then you have some horrid rules about congruences and you need some nice tools to reason about this.
– Martin Berger.
In this chapter we introduce a labeled semantics for ccp [SR90] and a well-behaved equivalence for this language, namely, a new coarse-grained bisimilarity notion for ccp. We begin by presenting the notion of constraint systems (cs); then we recall the basics of ccp: its syntax, its operational semantics and its observational equivalence. We show that the only existing notion of standard bisimilarity for ccp, one of the central reasoning techniques in concurrency, is not completely satisfactory for ccp since it yields an equivalence that is too fine grained. Therefore, building upon recent foundational investigations, we give a labeled transition semantics and a novel notion of bisimilarity that is fully abstract w.r.t. the typical observational equivalence in ccp. In this way we provide ccp with a new proof technique coherent with the existing ones.

Background

Constraint Systems

The ccp model is parametric in a constraint system specifying the structure and interdependencies of the information that processes can ask and tell. We presuppose a basic knowledge of domain theory (see [AJ94]). Following [SRP91, dBPP95], we regard a constraint system as a complete algebraic lattice structure in which the ordering ⊑ is the reverse of an entailment relation (c ⊑ d means that d contains “more information” than c, hence c can be derived from d). The top element false represents inconsistency, the bottom element true is the empty constraint, and the least upper bound (lub) ⊔ represents the join of information.
Definition 2.1.1 A constraint system C is a complete algebraic lattice (Con, Con0, ⊑, ⊔, true, false) where Con (the set of constraints) is a partially ordered set w.r.t. ⊑, Con0 is the subset of finite elements of Con, ⊔ is the lub operation, and true and false are the least and greatest elements of Con, respectively.
Recall that C is a complete lattice if every subset of Con has a least upper bound in Con. An element c ∈ Con is finite if for any directed subset D of Con, c ⊑ ⊔D implies c ⊑ d for some d ∈ D. C is algebraic if each element c ∈ Con is the least upper bound of the finite elements below c.
Example 2.1.1 We briefly explain the Herbrand cs from [V.A89, SRP91]. This cs captures syntactic equality between terms t, t′, . . . built from a first-order alphabet L with countably many variables x, y, . . ., function symbols, and equality =. The constraints are sets of equalities over the terms of L (e.g., {x = t, y = t} is a constraint). The relation c ⊑ d holds if the equalities in c follow from those in d (e.g., {x = y} ⊑ {x = t, y = t}). The constraint false is the set of all term equalities in L and true is (the equivalence class of) the empty set. The finite elements are the (equivalence classes of) finite sets of equalities. The lub is (the equivalence class of) set union. (See [V.A89, SRP91] for full details). Figure 2.1 represents this kind of constraint lattice for x, y, a, b. Let us stress that the only symbols of L are the constants a and b.
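To make the ordering of Figure 2.1 concrete, here is a small Python sketch (ours, not part of the thesis) of this restricted Herbrand constraint system: constraints are finite sets of equalities over the symbols x, y, a, b, entailment is decided by a union-find closure, and the lub is set union. The names entails, lub and is_false are illustrative.

# Sketch (not from the thesis) of the Herbrand constraint system restricted
# to the constants a, b and the variables x, y. A constraint is a finite set
# of equalities, written as pairs of symbols.

CONSTANTS = {"a", "b"}

def _find_map(constraint):
    """Close a set of equalities under a union-find; return the find function."""
    parent = {}

    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]   # path halving
            t = parent[t]
        return t

    for s, t in constraint:
        parent[find(s)] = find(t)           # merge the two classes
    return find

def is_false(constraint):
    """A constraint is inconsistent iff it equates two distinct constants."""
    find = _find_map(constraint)
    return len({find(c) for c in CONSTANTS}) < len(CONSTANTS)

def entails(d, c):
    """c ⊑ d: every equality of c follows from d (false entails everything)."""
    if is_false(d):
        return True
    find = _find_map(d)
    return all(find(s) == find(t) for s, t in c)

def lub(c, d):
    """⊔ is (the equivalence class of) set union."""
    return frozenset(c) | frozenset(d)

# {x = y} ⊑ {x = a, y = a}, an instance of the entailment mentioned above:
assert entails({("x", "a"), ("y", "a")}, {("x", "y")})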
In order to model hiding of local variables and parameter passing, in [SRP91] the notion of constraint system is enriched with cylindrification operators and diagonal elements, concepts borrowed from the theory of cylindric algebras (see [HT71]).
Let us consider a (denumerable) set of variables Var with typical elements x, y, z, . . . Define ∃Var as the family of operators ∃Var = {∃x | x ∈ Var} (cylindric operators) and DVar as the set DVar = {dxy | x, y ∈ Var} (diagonal elements).
A cylindric constraint system over a set of variables Var is a constraint system whose underlying support set Con ⊇ DVar is closed under the cylindric operators ∃Var and quotiented by Axioms C1 − C4, and whose ordering ⊑ satisfies the remaining axioms, where c, ci, d indicate finite constraints and ∃xc ⊔ d stands for (∃xc) ⊔ d. For our purposes, it is enough to think of the operator ∃x as an existential quantifier and of the constraint dxy as the equality x = y.
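The axioms themselves are not reproduced in this excerpt; for reference, their standard formulation (as in [SRP91] and the subsequent ccp literature; numbering and metavariables may differ slightly from the omitted table) is:

C1. ∃x∃y c = ∃y∃x c
C2. dxx = true
C3. if z ≠ x and z ≠ y, then dxy = ∃z(dxz ⊔ dzy)
C4. ∃x(c ⊔ ∃x d) = ∃x c ⊔ ∃x d
C5. ∃x c ⊑ c
C6. if c ⊑ d, then ∃x c ⊑ ∃x d
C7. if x ≠ y, then c ⊑ dxy ⊔ ∃x(c ⊔ dxy)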
Cylindrification and diagonal elements allow us to model the variable renaming of a formula φ; in fact, by the aforementioned axioms, the formula ∃x(dxy ⊔ φ) can be depicted as the formula φ[y/x], i.e., the formula obtained from φ by replacing all free occurrences of x by y.
We assume notions of free variable and of substitution that satisfy the following conditions, where c[y/x] is the constraint obtained by substituting x by y in c and fv(c) is the set of free variables of c: (1) if y ∉ fv(c) then (c[y/x])[x/y] = c; (2) (c ⊔ d)[y/x] = c[y/x] ⊔ d[y/x]; (3) x ∉ fv(c[y/x]); (4) fv(c ⊔ d) = fv(c) ∪ fv(d).
Example 2.1.2 The Herbrand constraint system in Example 2.1.1 can be extended to be a cylindric constraint system, where ∃x just represents the standard existential quantifier. For instance, consider the alphabet which contains a constant symbol a and a monadic function symbol f. Figure 2.2 represents the part of the cylindric constraint system in which x (and only x) is free. For simplicity we have indicated a set {t = u} by t = u.
Example 2.1.3 (The S Constraint System) Let S = (ω + 1, 0, ∞, =, <, succ) be a first-order structure whose domain of interpretation is ω + 1, defined as ω ∪ {∞}, i.e., the natural numbers extended with a top element ∞. The constant symbols 0 and ∞ are interpreted as zero and infinity, respectively. The symbols =, < and succ are all binary predicates on ω + 1. The symbol = is interpreted as the identity relation. The symbol < is interpreted as the set of pairs (n, m) s.t. n ∈ ω, m ∈ ω + 1 and n is strictly smaller than m. The symbol succ is interpreted as the set of pairs (n, m) s.t. n, m ∈ ω and m = n + 1.
Let Var be an infinite set of variables. Let L be the logic whose formulae φ are given by: φ ::= t | φ1 ∧ φ2 | ∃xφ and t ::= e1 = e2 | e1 < e2 | succ(e1, e2), where e1 and e2 are either 0 or ∞ or variables in Var. Note that formulas like x = n or x < n (for n = 1, 2, . . .) do not belong to L. A useful abbreviation to express them is succn(x, y), defined as ∃y0 . . . ∃yn(succ(y0, y1) ∧ · · · ∧ succ(yn−1, yn) ∧ x = y0 ∧ y = yn). We use x = n as shorthand for succn(0, x) and x < n as shorthand for ∃y(x < y ∧ y = n).
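For instance, unfolding the abbreviation for n = 2 gives succ2(x, y) = ∃y0∃y1∃y2(succ(y0, y1) ∧ succ(y1, y2) ∧ x = y0 ∧ y = y2), so the shorthand x = 2 stands for succ2(0, x).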
A variable assignment is a function μ : Var −→ ω + 1. We use A to denote the set of all assignments, P(X) to denote the powerset of a set X, ∅ the empty set and ∩ the intersection of sets. We use M(φ) to denote the set of all assignments that satisfy the formula φ, where the definition of satisfaction is as expected.
We can now introduce a constraint system as follows: the set of constraints is P(A), and define c ⊑ d iff c ⊇ d. The constraint false is ∅, while true is A. Given two constraints c and d, c ⊔ d is the intersection c ∩ d. By abusing the notation, we will often use a formula φ to denote the corresponding constraint, i.e., the set of all assignments satisfying φ. E.g. we use 1 < x ⊑ 5 < x to mean M(1 < x) ⊑ M(5 < x).
From this structure, let us now define the cylindric constraint system S as follows. We say that an assignment μ′ is an x-variant of μ if ∀y ≠ x, μ(y) = μ′(y). Given x ∈ Var and c ∈ P(A), the constraint ∃xc is the set of assignments μ such that there exists μ′ ∈ c that is an x-variant of μ. The diagonal element dxy is x = y.
We make an assumption that will be pivotal in Section 2.3. Given a partial order (C, ⊑), we say that c is strictly smaller than d (written c ⊏ d) if c ⊑ d and c ≠ d. We say that (C, ⊑) is well-founded if there exists no infinite descending chain · · · ⊏ cn ⊏ · · · ⊏ c1 ⊏ c0. For a set A ⊆ C, we say that an element m ∈ A is minimal in A if there is no a ∈ A such that a ⊏ m. We shall use min(A) to denote the set of all minimal elements of A. Well-founded orders and minimal elements are related by the following result.
Proposition 2.1.1 Let (C, ⊑) be a well-founded order and A ⊆ C. If a ∈ A, then ∃m ∈ min(A) s.t. m ⊑ a.
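Proof sketch (ours, along standard lines). If a itself is minimal in A, take m = a. Otherwise there is some a1 ∈ A with a1 ⊏ a; if a1 is not minimal, there is some a2 ∈ A with a2 ⊏ a1, and so on. Since (C, ⊑) is well-founded, this strictly descending chain cannot be continued forever, so it stops at some ak ∈ A that is minimal in A; by construction and transitivity, ak ⊑ a.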
Although it is a reasonable assumption, well-foundedness of (Con, ⊑) is not usually required in the standard theory of ccp. We require it because the above proposition is fundamental for proving the completeness of the labeled semantics (Lemma 2.3.2).


Syntax

Ccp was proposed in [V.A89] and then refined in [SR90, SRP91]. In this chapter we restrict ourselves to the summation-free fragment of this formalism. The distinctive confluent nature of this fragment (for more details about confluence in ccp see [FGMP94]) is necessary for showing that our notion of bisimilarity coincides with the observational equivalence for infinite ccp processes given in [SRP91].
Remark 2.1.1 The summation-free fragment of ccp is the one in which nondeterministic choice is not included in the syntax of the language.
Definition 2.1.2 Assume a cylindric constraint system C = (Con, Con0, ⊑, ⊔, true, false) over a set of variables Var. The ccp processes are given by the following syntax:

P, Q, . . . ::= tell(c) | ask(c) → P | P | Q | ∃xP | p(⃗z)

where c ∈ Con0, x ∈ Var and ⃗z ∈ Var∗. We use Proc to denote the set of all processes.
Remark 2.1.2 The notions and relations in this and the following chapters assume an underlying cylindric constraint system C.
Finite processes. Intuitively, the tell process tell(c) adds c to the global store. This addition is performed regardless of the generation of inconsistent information. The ask process ask(c) → P may execute P if c is entailed by the information in the store. The process P | Q stands for the parallel execution of P and Q; ∃x is a hiding operator, namely it indicates that in ∃xP the variable x is local to P. The occurrences of x in ∃xP are said to be bound. The bound variables of P, bv(P), are those with a bound occurrence in P, and its free variables, fv(P), are those with an unbound occurrence.
Infinite processes. To specify infinite behavior, ccp provides parametric process definitions. A process p(⃗z) is said to be a procedure call with identifier p and actual parameters ⃗z. We presuppose that for each procedure call p(z1 . . . zm) there exists a unique (possibly recursive) procedure definition of the form p(x1 . . . xm) def= P where fv(P) ⊆ {x1, . . . , xm}. Furthermore, we require recursion to be guarded, i.e., each procedure call within P must occur within an ask process. The behavior of p(z1 . . . zm) is that of P[z1 . . . zm/x1 . . . xm], i.e., P with each xi replaced with zi (applying α-conversion to avoid clashes). We shall use D to denote the set of all procedure definitions.
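As a reading aid (ours, not part of the thesis), the syntax above, extended with procedure calls, can be rendered as a small datatype; constraints are left abstract, and all class and field names below are illustrative.

# Illustrative rendering of the ccp syntax of Definition 2.1.2 as Python
# dataclasses. Constraints are kept abstract (any value); the class and
# field names are ours, chosen only for readability.
from dataclasses import dataclass
from typing import Tuple

class Process:
    """Abstract ccp process (Definition 2.1.2)."""

@dataclass(frozen=True)
class Tell(Process):        # tell(c): add c to the global store
    c: object

@dataclass(frozen=True)
class Ask(Process):         # ask(c) -> P: run P once the store entails c
    c: object
    body: Process

@dataclass(frozen=True)
class Par(Process):         # P | Q: parallel composition
    left: Process
    right: Process

@dataclass(frozen=True)
class Exists(Process):      # ∃x P: the variable x is local to P
    x: str
    body: Process

@dataclass(frozen=True)
class Call(Process):        # p(z1 ... zm): procedure call with actual parameters
    p: str
    args: Tuple[str, ...]

# For instance, ask(42 < x) -> tell(z = 0), which reappears in Example 2.1.4,
# with constraints written as opaque strings:
guard_example = Ask("42 < x", Tell("z = 0"))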
Although we have not yet defined the semantics of processes, we find it instructive to illustrate the above operators with the following example. Recall that we shall use S from Example 2.1.3 as the underlying constraint system in all examples.
Example 2.1.4 Consider the following (family of) process definitions.
upn(x) def= ∃y(tell(y = n) | ask(y = n) → up(x, y))

up(x, y) def= ∃y′(tell(y < x ∧ succ2(y, y′)) | ask(y < x ∧ succ2(y, y′)) → up(x, y′))
Intuitively, upn(x), where n is a natural number, specifies that x should be greater than any natural number (i.e., x = ∞ since x ∈ ω + 1) by telling (adding to the global store) the constraints yi+1 = yi + 2 and yi < x for some y0, y1, . . . with y0 = n. The process up0 (x) | ask(42 < x) → tell(z = 0) can set z = 0 when it infers from the global store that 42 < x. (This inference is only possible after the 22nd call to up.)
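To see where the number 22 comes from (our counting): the values told for y0, y1, y2, . . . are 0, 2, 4, . . ., and the k-th call to up adds yk−1 < x, i.e., 2(k − 1) < x, to the store. The guard 42 < x thus becomes entailed exactly at the 22nd call, which adds 42 < x (since 2 · 21 = 42).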

Reduction Semantics

To describe the evolution of processes, we extend the syntax by introducing a process stop, representing successful termination, and a process ∃x^e P, representing the evolution of a process of the form ∃xP, where the annotation e is the local information (local store) produced during this evolution. The process ∃xP can be seen as a particular case of ∃x^e P: it represents the situation in which the local store is empty, namely ∃xP = ∃x^true P.
A configuration is a pair ⟨P, d⟩ representing the state of a system: d is a constraint representing the global store, and P is a process in the extended syntax. We use Conf with typical elements γ, γ′, . . . to denote the set of configurations. The operational model of ccp can be described formally in the SOS style by means of the relation −→ ⊆ Conf × Conf between configurations, defined in Table 2.1.
Rules R1-R3 and R5 are easily seen to realize the above process intuitions. Rule R4 is somewhat more involved. Intuitively, ∃x^e P behaves like P, except that the variable x possibly present in P must be considered local, and that the information present in e has to be taken into account. It is convenient to distinguish between the external and the internal points of view. From the internal point of view, the variable x, possibly occurring in the global store d, is hidden.

This corresponds to the usual scoping rules: the x in d is global, hence “covered” by the local x. Therefore, P has no access to the information on x in d, and this is achieved by filtering d with ∃x. Furthermore, P can use the information (which may also concern the local x) that has been produced locally and accumulated in e. In conclusion, if the visible store at the external level is d, then the store that is visible internally by P is e ⊔ ∃xd. Now, if P is able to make a step, thus reducing to P′ and transforming the local store into e′, what we see from the external point of view is that the process is transformed into ∃x^e′ P′, and that the information ∃xe present in the global store is transformed into ∃xe′. To show how this works we give an instructive example.
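To fix intuitions, here is a small Python sketch (ours; an illustration rather than a transcription of Table 2.1) that animates one-step reductions for the tell/ask/parallel fragment, reusing the Process, Tell, Ask and Par classes from the sketch in the Syntax subsection. The store is kept abstract: a join function plays the role of ⊔ and entails(d, c) plays the role of c ⊑ d; hiding (rule R4) and procedure calls (rule R5) are omitted to keep the sketch short, and the names step, toy_join and toy_entails are ours.

# Illustrative one-step reduction for the tell/ask/parallel fragment.
# Reuses Process, Tell, Ask, Par from the earlier syntax sketch.
from dataclasses import dataclass

@dataclass(frozen=True)
class Stop(Process):        # successful termination (extended syntax)
    pass

def step(proc, store, join, entails):
    """Return (proc', store') if ⟨proc, store⟩ can make a reduction, else None."""
    if isinstance(proc, Tell):                  # tell(c) adds c to the store
        return Stop(), join(store, proc.c)
    if isinstance(proc, Ask):                   # ask(c) -> P fires if the store entails c
        return (proc.body, store) if entails(store, proc.c) else None
    if isinstance(proc, Par):                   # either component may move
        left = step(proc.left, store, join, entails)
        if left is not None:
            return Par(left[0], proc.right), left[1]
        right = step(proc.right, store, join, entails)
        if right is not None:
            return Par(proc.left, right[0]), right[1]
    return None

# A toy run: the store is a frozenset of opaque constraints, ⊔ is union and
# entailment is set inclusion.
def toy_join(d, c):
    return frozenset(d) | frozenset(c)

def toy_entails(d, c):
    return frozenset(c) <= frozenset(d)

cfg = (Par(Tell({"42 < x"}), Ask({"42 < x"}, Tell({"z = 0"}))), frozenset())
while (nxt := step(*cfg, toy_join, toy_entails)) is not None:
    cfg = nxt
# cfg ends as (Par(Stop(), Stop()), frozenset({"42 < x", "z = 0"}))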

Table of contents:

1 Introduction 
1.1 Bisimilarity for CCP
1.2 Labeled Semantics
1.3 Algorithms
1.4 From Weak to Strong Bisimilarity
1.5 Summary of Contributions and Organization
1.6 Publications from this Dissertation
2 Deriving Labels and Bisimilarity for CCP 
2.1 Background
2.1.1 Constraint Systems
2.1.2 Syntax
2.1.3 Reduction Semantics
2.1.4 Observational Equivalence
2.2 Saturated Bisimilarity for CCP
2.2.1 Saturated Barbed Bisimilarity
2.2.2 Correspondence with Observational Equivalence
2.3 Labeled Semantics
2.4 Strong and Weak Bisimilarity
2.5 Summary of Contributions and Related Work
3 Partition Refinement for Bisimilarity in CCP 
3.1 Background
3.1.1 Partition Refinement
3.2 Irredundant Bisimilarity
3.2.1 Reduction Semantics for Non-deterministic CCP
3.2.2 Labeled Semantics for Non-deterministic CCP
3.2.3 Saturated Bisimilarity in a Non-deterministic CCP Fragment
3.2.4 Soundness and Completeness
3.2.5 Syntactic Bisimilarity and Redundancy
3.2.6 Symbolic and Irredundant Bisimilarity
3.3 Partition Refinement for CCP
3.3.1 Termination
3.3.2 Complexity of the Algorithm
3.4 Summary of Contributions and Related Work
4 Reducing Weak to Strong Bisimilarity in CCP 
4.1 Background
4.1.1 Reducing Weak to Strong Bisimilarity
4.2 Defining a New Saturation Method for CCP
4.2.1 A New Saturation Method
4.2.2 A Remark about our Saturation in CCS
4.2.3 Soundness and Completeness
4.3 Correspondence between ≈˙ sb, ∼˙ sym, ∼˙ I
4.4 Algorithm for the Weak Notion of the CCP Partition Refinement Algorithm
4.5 Summary of Contributions and Related Work
5 Conclusions 
A Implementation and Experiments for the CCP Partition Refinement
A.1 Programming Language
A.2 Abstract Data Types
A.2.1 Implementation Layout
A.2.2 Layout of the Automaton
A.2.3 Layout of the Redundant Class
A.2.4 Layout of a Tool to Verify Bisimilarity in CCP
A.3 Procedures and Functions
A.3.1 Calculating Irredundancy
A.3.2 Strong Prerefinement
A.3.3 Weak Prerefinement
A.3.4 Final Algorithm
A.3.5 Final Strong Algorithm
A.3.6 Saturation
A.3.7 Final Weak Algorithm
A.4 Results and Examples
A.4.1 Transitions/Nodes
A.4.2 Percentage of Same Labels
A.4.3 Percentage of Configurations Satisfying the Same Barb
A.4.4 Percentage of Dominated Transitions
A.4.5 Time vs Branches
A.5 Summary of Contributions and Related Work
