# An abstract language for separation logic


## Semantics of the language

The abstractions of separation-logic formulae can be efficiently implemented because we formalize them as a disjunction of eight simple semantic-interpretation functions, written ⟦·⟧ᵢ. This makes a denotation into a tuple of orthogonal elements.
First, we recall the concrete domains. For simplicity, we work with total functions, so we extend Val with the "out of domain" value ood to define Val′. S′ is the set of total stacks; there is a bijection between normal stacks and total stacks. We define ·̄ : S′ → S by s̄′ ≜ s′ ↾ (dom(s′) ∩ {x | s′(x) ≠ ood}), and ·̄ : S → S′ by s̄ ≜ [x ↦ s(x) if x ∈ dom(s) | x ↦ ood if x ∉ dom(s)].
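The bijection between partial and total stacks can be pictured with a small sketch (illustrative Python, not the thesis's implementation; `OOD`, `VARS`, `totalize`, and `restrict` are names chosen here):

```python
# A sketch of the bijection between partial stacks S and total stacks S',
# using a sentinel object for the "out of domain" value ood.
OOD = object()  # the ood value extending Val to Val'

VARS = {"x", "y", "z"}  # a finite stand-in for the set Var

def totalize(s):
    """S -> S': map every variable in Var, sending undefined ones to OOD."""
    return {x: s.get(x, OOD) for x in VARS}

def restrict(s_total):
    """S' -> S: keep only the variables whose value is not OOD."""
    return {x: v for x, v in s_total.items() if v is not OOD}
```

Restricting a totalized stack gives back the original stack, witnessing the bijection: `restrict(totalize(s)) == s`.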
The domain F should be seen as a stack for auxiliary variables. The abstract language embeds information about the normal variables, represented in the concrete domain by the total stacks in S′ ≜ Var → Val′, and information about the auxiliary variables, represented in the concrete domain by values in F ≜ TVar → 𝒫(Val′). We use TVar → 𝒫(Val′) rather than TVar → Val′ because we want summary nodes, which represent several concrete values.
To lighten the notation, we define the union of a stack in S′ and a stack for auxiliary variables in F, ·+· : (S′ × F) → (VAR → 𝒫(Val′)), by (s+f)(x) ≜ {s(x)} if x ∈ Var and (s+f)(α) ≜ f(α) if α ∈ TVar.
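The combined lookup can be sketched directly (illustrative Python; `stack_union` is a name chosen here):

```python
# A sketch of the stack union s+f: program variables come from the total
# stack s as singleton sets, auxiliary variables from f as sets of values.
def stack_union(s, f):
    """Return the combined lookup (s+f) over Var ∪ TVar."""
    def lookup(v):
        if v in s:        # v ∈ Var: singleton of its stack value
            return {s[v]}
        return f[v]       # v ∈ TVar: the value set of the summary node
    return lookup
```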
For (s, h, f, r) ∈ MFR, s is a (total) stack, h is the heap, f is a stack for the variables in TVar, where a variable can map to a set of values (cf. summary nodes), and r maps each location to its set of reachable locations, thus encoding separation information.
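The separation information carried by r can be illustrated as follows (a sketch under simplified assumptions: the heap maps a location to a list of successor locations, and `reachable`/`separate` are names chosen here): two structures live in separate heap regions exactly when their reachable sets are disjoint.

```python
# Reachability over a heap modelled as {location: [successor locations]}.
def reachable(heap, loc):
    """Set of locations reachable from loc by following heap cells."""
    seen, todo = set(), [loc]
    while todo:
        l = todo.pop()
        if l in seen or l not in heap:
            continue
        seen.add(l)
        todo.extend(heap[l])
    return seen

def separate(heap, l1, l2):
    """True when the regions reachable from l1 and l2 do not overlap."""
    return reachable(heap, l1).isdisjoint(reachable(heap, l2))
```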

### Operations on the language

We now present five key operations and describe how they are computed within the abstract language. All the operations have been proved sound. We first present an extension function that adds an auxiliary variable to a graph; it is used to tune the precision of the union function, presented second. We then present a merging function that merges two nodes to reduce the size of the graph; it is used by the stabilization function presented just after. Finally, we present a function, ast, used to translate the connective ∗. Below, P is a formula of the separation logic with fixpoints defined in Chapter 2, and T(P) ∈ AR is the translation of the formula.
For all the transformations T of elements of the language, we provide theorems of this form: ∀(s, h, f, r) ∈ MFR. ∃g. (s, h, f, r) ∈ ⟦ar⟧′ ⇒ (s, h, g(f), r) ∈ ⟦T(ar)⟧′. The function g is there because we wish to allow changes to auxiliary variables (as in the function merge): the values taken by the auxiliary variables need not be characterised by the same f on both sides of the implication, but may be transformed by a function g on the right-hand side.
To summarise, we want to translate formulae of separation logic with fixpoints into the language AR, with ⟦P⟧ ⊆ ⟦T(P)⟧. For an efficient computation of conjunction, we also define another translation, T′ : AR × BIμν → AR, from an element of the language and a formula to an element of the language, such that T′(ar, P1 ∧ P2) = T′(T′(ar, P1), P2).
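The equation T′(ar, P1 ∧ P2) = T′(T′(ar, P1), P2) says that translating a conjunction threads the abstract element through the conjuncts one by one, i.e. it is a fold. A minimal sketch (the tuple-based formula syntax and the names `translate`/`translate_atom` are illustrative, not the thesis's BIμν syntax):

```python
from functools import reduce

# T' sketched as a fold: a conjunction refines the abstract element ar by
# each conjunct in turn; anything else is handled by a per-atom translation.
def translate(ar, formula, translate_atom):
    """T'(ar, P): conjunctions are folded left to right over ar."""
    if formula[0] == "and":
        return reduce(lambda acc, p: translate(acc, p, translate_atom),
                      formula[1:], ar)
    return translate_atom(ar, formula)
```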
We do the proofs by induction on the syntax of the formula. We carry them out at the level of ⟦·⟧′ and must keep the properties at this level rather than at ⟦·⟧, because ⟦ar1⟧ ⊆ ⟦ar2⟧ ⇏ ⟦ar1⟧′ ⊆ ⟦ar2⟧′. At the end we define T(P) = T′(ari, P). Since we have ∀(s, h, f, r) ∈ MFR. ∃g. (s, h, f, r) ∈ ⟦ari⟧′ ∧ (s̄, h) ∈ ⟦P⟧ ⇒ (s, h, g(f), r) ∈ ⟦T(P)⟧′, and ⟦ari⟧′ = MFR, we get ⟦P⟧ ⊆ ⟦T(P)⟧.
Recall that ⟦·⟧ is the semantics on the state domain S × H, as for formulae; in this semantics the auxiliary variables can be renamed by alpha-renaming. ⟦·⟧′ is on S′ × H × F × R, and there the auxiliary variables cannot be renamed.

#### Stabilization

This stabilization function does not ensure convergence to a single value, but it ensures that the stabilization will only take a finite number of values.
The stabilization operator combines stabilization on the numerical domain with a strategy to bound the number of variables in use. First, the stabilization operator uses the function merge to merge nodes, keeping the number of auxiliary variables bounded; the only remaining possibility for divergence is then the numerical domain, so it applies the numerical stabilization to the numerical component. Typically, a numerical widening is a numerical stabilization, so when computing a least fixpoint we apply the stabilization, with the numerical widening as numerical stabilization, to a chain we already know is increasing; this ensures both convergence and overapproximation.
A numerical narrowing is not in general a numerical stabilization, because we define a stabilization to overapproximate the current element of the chain, while a narrowing overapproximates the intersection of all the previous elements; in the case of a decreasing chain, however, the numerical narrowing behaves like a numerical stabilization. So when computing a greatest fixpoint we apply the stabilization, with the numerical narrowing as numerical stabilization, to a chain we already know is decreasing; this again ensures convergence and overapproximation.
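Widening as a stabilization on an increasing chain can be sketched on an interval domain (the thesis does not fix the numerical domain here; intervals and the names `widen`/`stabilize` are a standard stand-in chosen for illustration): an unstable bound jumps to infinity, so every increasing chain stabilizes in finitely many steps while overapproximating each element.

```python
import math

def widen(old, new):
    """Interval widening: keep stable bounds, push unstable ones to ±inf."""
    (lo1, hi1), (lo2, hi2) = old, new
    lo = lo1 if lo1 <= lo2 else -math.inf
    hi = hi1 if hi1 >= hi2 else math.inf
    return (lo, hi)

def stabilize(chain):
    """Apply widening along an increasing chain until the value is stable."""
    acc = chain[0]
    for nxt in chain[1:]:
        widened = widen(acc, nxt)
        if widened == acc:   # stable: further elements cannot change it
            break
        acc = widened
    return acc
```

On the increasing chain [0,0], [0,1], [0,2], … the upper bound is unstable, so one widening step already yields [0, +∞), which overapproximates every element of the chain.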

List of Figures
Acknowledgements
1 Introduction
1.1 Motivations
1.2 Introduction to separation logic
1.3 History of the project and contributions
1.4 Structure of the manuscript
2 Separation logic with fixpoints
2.1 Commands and basic domains
2.1.1 Command syntax
2.1.2 Semantic domains
2.1.3 Small-step semantics
2.2 BIμν
2.2.1 Syntax of formulae
2.2.2 Semantics of formulae
2.2.3 Interpretation of Triples
2.2.4 Fixpoints and postponed substitution
2.3 Backward analysis
2.4 Forward analysis
2.5 Variations of BIμν
2.6 Appendix
2.6.1 Definitions
2.6.2 Stack Extension Theorem
2.6.3 Variable Renaming Theorem for BIμν
2.6.4 Unfolding theorems
2.6.5 Substitution theorems for BIμν
2.6.6 Substitution theorems for BIμν general
2.6.7 μ and ν coincide
2.6.8 Simplifications on [ / ]
2.6.9 sp’s proofs
2.6.10 wlp’s proofs
2.6.11 Upper-continuous results
2.6.12 Simplification theorems
3 An abstract language for separation logic
3.1 Introduction
3.2 Examples: Introduction to the language, translations of formulae
3.2.1 Full example: tree
3.3 Definition of the language, AR
3.4 Semantics of the language
3.5 Operations on the language
3.5.1 Extension
3.5.2 Union
3.5.3 Merging nodes
3.5.4 Stabilization
3.5.5 ast
3.5.6 Intersection
3.6 Translation of formulae
3.6.1 Properties of the translation
3.6.2 Translation of ∧
3.6.3 Translation of ∨
3.6.4 Translation of ∃
3.6.5 Translation of E1 = E2
3.6.6 Translation of x 7→ E1,E2
3.6.7 Translation of μ and ν
3.7 Conclusion
3.8 Appendix
3.8.1 Replace
3.8.2 Cheap extension proofs
3.8.3 Extension proofs
3.8.4 Union proofs
3.8.5 Merging proofs
3.8.6 Functions on CLeq proofs
3.8.7 Widening proofs
3.8.8 Basic ast proofs
3.8.9 Extra ast proofs
3.8.10 Basic equal proofs
3.8.11 Reach functions proofs
3.8.12 Class proofs
3.8.13 Exists proofs
3.8.14 Why using ⟦·⟧′ ?
4 Implementation
4.1 Introduction
4.2 Software architecture
4.2.2 Computing information
4.2.3 Building executables
4.3 Syntaxes of inputs and data structures
4.3.1 Program syntax
4.3.2 Formula syntax
4.3.3 Abstract data syntax
4.4 The translation of formula into elements of the domain: sl2ar.ml
4.5 Analysis
5 Comparison with other works
5.1 Smallfoot
5.1.1 Smallfoot: Modular Automatic Assertion Checking with Separation Logic (FMCO’05)
5.1.2 A local shape analysis based on separation logic (TACAS’06)
5.1.3 Shape analysis for composite data structures (CAV’07)
5.1.4 Footprint Analysis: A Shape Analysis that Discovers Preconditions (SAS’07)
5.2 TVLA
5.3 others
5.4 Conclusion
5.4.1 Modularity
5.4.2 Expressibility of heap logic
5.4.3 Folding/unfolding
5.4.4 Theorem provers
5.4.5 Auxiliary variables
5.4.6 Analysis versus verification
5.4.7 What is distinctive about our system
6 Conclusion
Bibliography
