Live range non-interference and tiling: a relaxed permutability criterion 

Polyhedral Representation of Programs

In polyhedral compilation, an abstract mathematical representation is used to model programs. Each statement in the program is represented using three pieces of information: an iteration domain, access relations, and a schedule. This representation is first extracted from the program's AST (Abstract Syntax Tree); it is then analyzed and transformed (this is the step where loop transformations are performed). Finally, the polyhedral representation is converted back into an AST.

Iteration Domains

The iteration domain of a statement is a set that contains all the execution instances of that statement. Each execution instance (or dynamic execution) of the statement in the loop is represented individually by an identifier for the statement (the name of the statement) and an iteration vector, which is a tuple of integers that uniquely identifies the execution instance.
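As a running example, consider a loop nest of the following shape (a minimal sketch consistent with the iteration domain given below; the statement body is an assumption, since the original listing is not reproduced here):

    for (int i = 1; i <= 2; i++)
      for (int j = 1; j <= 2; j++)
    S:  A[i][j] = 0;  /* statement S; the body is a placeholder */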
The iteration domain of the statement S, denoted Ds, is the following set: Ds = {S[1, 1], S[1, 2], S[2, 1], S[2, 2]}, where each tuple in the set represents a possible value of the loop iterators i and j. The iteration domain of S can also be written as Ds = {S[i, j] : 1 ≤ i ≤ 2 ∧ 1 ≤ j ≤ 2}. In general, we use Ds to refer to the iteration domain of a statement S.
Figure 1.4 shows a graphical representation of the iteration domain of a statement S1 in a loop of this form.

Access Relations

Access relations are read, write, and may-write relations that capture the memory locations on which statements operate. They map statement execution instances to the array elements that are read or written by those instances. They are used mainly in dependence analysis.
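The relations below describe a single loop of the following shape (a sketch consistent with the access relations; the loop body is an assumption, since the original listing is not included):

    for (int i = 1; i <= 99; i++)
    S:  A[i] = B[i] + 1;  /* statement S reads B[i] and writes A[i] */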
For this loop, the set of read access relations is R = {S[i] → B[i] : i ≥ 1 ∧ i ≤ 99}, which means that the statement S in iteration i reads the array element B[i], and that this holds for each i with i ≥ 1 ∧ i ≤ 99.
The set of write access relations is W = {S[i] → A[i] : i ≥ 1 ∧ i ≤ 99}, and the set of may-write access relations (i.e., writes that may happen) is equal to the set of writes, since every write in this loop is unconditional.

Dependence Relations

Dependence. A statement S′ depends on a statement S (we write δS→S′) if there exist a statement instance S′(i′), a statement instance S(i), and a memory location m such that:
• S′(i′) and S(i) (may) access the same memory location m and at least one of them is a write access;
• i and i′ belong to the iteration domains of S and S′, respectively;
• in the original sequential order, S(i) is executed before S′(i′).
Dependence Relation. Many formalisms can be used to represent dependences; some are precise and some are imprecise. Dependence relations are an example of a precise formalism and are the formalism we use throughout this dissertation. They map statement instances (called source statements) to the statement instances that depend on them for their execution (called sink statements). These dependence relations are derived from the access relations and the original execution order. A dependence relation is represented using a map and thus has the following general form (the same general form as a map):
δS→S′ = {S[i] → S′[i′] : (i, i′) ∈ Z^d1 × Z^d2 ∧ f(i, i′, p)}, where S[i] represents the source statement and S′[i′] represents the sink statement, p ∈ Z^e is a vector of e parameters, and f(i, i′, p) is a Presburger formula that evaluates to true if and only if S′[i′] depends on S[i] for the given parameters p.
In the example sketched below, the value written in iteration (i) is read in the following iteration (i + 1). This dependence can be represented using the following map: δS→S = {S[i] → S[i + 1] : i ≥ 1 ∧ i ≤ 98}.
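A loop with this dependence pattern could look as follows (an assumed sketch; the original listing is not reproduced). Each iteration i writes A[i], and the next iteration reads it:

    for (int i = 1; i <= 99; i++)
    S:  A[i] = A[i - 1] + B[i];  /* S[i] reads A[i-1], which S[i-1] wrote */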
Computing Dependences. Many tests have been designed for checking dependences between statement instances. Some of these tests give an exact solution, while others only provide approximate but conservative solutions (they overestimate data dependences). Examples include the GCD test [8] and the Banerjee test [9], which provide approximate solutions; the Omega test [83], in contrast, is an example of a test that provides exact solutions.
Dependence Graph. The dependence graph Δ = (V, E) is a directed graph where the vertices V are the statements of the program and where the edges E are the dependence relations (maps) between these statements.
Classification of Dependences. Program dependences can be classified into the following kinds (the first three are illustrated in the snippet after this list):
• Flow dependences (also known as true dependences and as read-after-write dependences), in which the write to a memory cell occurs before the read.
• Output dependences (also known as write-after-write dependences), in which both accesses are writes.
• Anti-dependences (also known as write-after-read dependences), in which the read occurs before the write.
• In some contexts (e.g., when discussing data locality), one may also consider read-after-read dependences, which do not constrain parallelization.
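The first three kinds can be seen in the following straight-line snippet (an illustrative example, not taken from the dissertation):

    x = a + b;  /* S1: writes x                                      */
    y = x + c;  /* S2: reads x  -> flow (read-after-write) S1 to S2  */
    x = d;      /* S3: writes x -> anti (write-after-read) S2 to S3,
                   and output (write-after-write) S1 to S3           */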
Some flow dependences are spurious in general. Let us consider a memory cell and an execution instance of a statement that reads it (let us call this instance r). There may be many writes to this cell; the only one on which r really depends is the last one executed before r. The dependence from that write to r is a direct dependence. If the source program is regular, direct dependences can be computed using a technique such as array dataflow analysis [36].

Schedule

While iteration domains define the set of all dynamic instances of a statement, they do not describe the execution order of these instances. In order to define the execution order, we need to assign an execution time (date) to each dynamic instance. This is done by constructing a schedule, a relation between iteration vectors and time-stamp vectors. The schedule determines the relative execution order of the statement instances, and program transformations are performed through modifications to the schedule.
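For instance (an illustrative sketch, not from the original text), the schedule {S[i, j] → [j, i]} assigns to each instance S[i, j] the time stamp (j, i); generating code for this schedule on the 2×2 loop nest used earlier produces an interchanged loop nest:

    /* After applying the schedule {S[i,j] -> [j,i]}, instances execute
       in lexicographic order of (j, i): S[1,1], S[2,1], S[1,2], S[2,2]. */
    for (int j = 1; j <= 2; j++)
      for (int i = 1; i <= 2; i++)
    S:  A[i][j] = 0;  /* statement S; body assumed as before */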

Table of contents:

1 Introduction and Background 
1.1 Introduction
1.2 Background and Notations
1.2.1 Maps and Sets
1.2.2 Additional Definitions
1.2.3 Polyhedral Representation of Programs
1.2.4 Data Locality
1.2.5 Loop Transformations
1.3 Outline
2 Tiling Code With Memory-Based Dependences 
2.1 Introduction and related work
2.2 Sources of False Dependences
2.3 Motivating Example
2.4 Live Range Non-Interference
2.4.1 Live ranges
2.4.2 Construction of live ranges
2.4.3 Live range non-interference
2.4.4 Live range non-interference and tiling: a relaxed permutability criterion
2.4.5 Illustrative examples
2.4.6 Parallelism
2.5 Effect of 3AC on the Tilability of a Loop
2.6 Experimental Evaluation
2.7 Experimental Results
2.7.1 Evaluating the Effect of Ignoring False Dependences on Tiling
2.7.2 Performance evaluation for Polybench-3AC
2.7.3 Reasons for slowdowns
2.7.4 Performance evaluation for Polybench-PRE
2.8 Conclusion and Future Work
3 Scheduling Large Programs
3.1 Introduction
3.2 Definitions
3.3 Example of Statement Clustering
3.4 Clustering Algorithm
3.5 Using Offline Clustering with the Pluto Affine Scheduling Algorithm
3.5.1 Correctness of Loop Transformations after Clustering
3.6 Clustering Heuristics
3.6.1 Clustering Decision Validity
3.6.2 SCC Clustering
3.6.3 Basic-block Clustering
3.7 Experiments
3.8 Related Work
3.9 Conclusion and Future Work
4 The PENCIL Language 
4.1 Introduction
4.2 PENCIL Language
4.2.1 Overview of PENCIL
4.2.2 PENCIL Definition as a Subset of C99
4.2.3 PENCIL Extensions to C99
4.3 Detailed Description
4.3.1 static, const and restrict
4.3.2 Description of the Memory Accesses of Functions
4.3.3 const Function Attribute
4.3.4 Pragma Directives
4.3.5 Builtin Functions
4.4 Examples of PENCIL and non-PENCIL Code
4.4.1 BFS Example
4.4.2 Image Resizing Example
4.4.3 Recursive Data Structures and Recursive Function Calls
4.5 Polyhedral Compilation of PENCIL code
4.6 Related Work
5 Evaluating PENCIL 
5.1 Introduction
5.2 Benchmarks
5.2.1 Image Processing Benchmark Suite
5.2.2 Rodinia and SHOC Benchmark Suites
5.2.3 VOBLA DSL for Linear Algebra
5.2.4 SpearDE DSL for Data-Streaming Applications
5.3 Discussion of the Results
5.4 Conclusion and Future Work
6 Conclusions and Perspectives 
6.1 Contributions
6.2 Future Directions
