Limitation of classical fixed priority assignment algorithms 


Memory Allocation

A RTOS may support static and dynamic memory allocation.
Definition 37 (Static Memory Allocation [37]). Static memory allocation is the allocation of memory at compile or design time. No memory allocation or deallocation actions are performed during execution. When using static memory allocation, the memory sizes of the tasks must be known at compile or design time. As a result, the disadvantages are that the sizes of data structures cannot be varied dynamically and programs cannot be recursive. However, static allocation is also fast and eliminates the possibility of running out of memory [50].
Definition 38 (Dynamic Memory Allocation [37]). Dynamic memory allocation is the allocation of memory at run-time. Dynamic memory allocation is sometimes considered a poor design choice because the spatial and temporal worst cases of allocation and deallocation operations are insufficiently bounded [61]. They lead to unpredictable timing behavior, which is a problem when designing a RTES. Fully static designs do not have those limitations.
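As a brief illustration (the buffer sizes and the function below are hypothetical, not taken from the thesis), the two strategies differ in when the storage of, say, a message queue is reserved:

```c
/* Minimal sketch contrasting the two allocation strategies.
 * The queue dimensions and names are illustrative only. */
#include <stdlib.h>
#include <string.h>

#define MSG_QUEUE_LEN 16
#define MSG_SIZE      64

/* Static allocation: the queue storage is reserved at compile/link time,
 * so its footprint is known before the system starts and no run-time
 * allocation failure is possible. */
static char msg_queue_static[MSG_QUEUE_LEN][MSG_SIZE];

/* Dynamic allocation: the same storage obtained at run time. malloc()
 * may fail, and its execution time is hard to bound, which is why many
 * RTES designs forbid it after system initialization. */
char *alloc_msg_queue_dynamic(size_t len, size_t size)
{
    char *buf = malloc(len * size);
    if (buf != NULL)
        memset(buf, 0, len * size);
    return buf; /* caller must check for NULL and later free() */
}
```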

HARDWARE

This section provides a brief summary of three hardware components of a RTES: the processor, the memory system and the network. Cache memory, which is the focus of this thesis, is detailed in Chapter 2.

Processor

In RTES, processors are usually small and have low power consumption. In essence, they are different from processors used in a workstation, laptop or desktop computer. They can be general purpose processors or application specific instruction-set processors. We can classify a RTES based on the number of processors:
• Uniprocessor: RTES has only one processor.
• Multiprocessor: RTES has more than one processor. From a theoretical point of view, we can distinguish three kinds of multiprocessor RTES [32]; a small code sketch after this list illustrates the difference between them.
– Identical parallel machines: all the processors are identical in the sense that they have the same speed.
– Uniform parallel machines: each processor is characterized by its own speed.
– Unrelated parallel machines: there is an execution rate associated with each job-processor pair.
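As an illustration only (not taken from the thesis), the three machine models differ in how the execution time of a job on a processor is derived:

```c
/* Illustrative sketch: execution time of a job with nominal computation
 * time c under the three multiprocessor models. */

/* Identical: every processor has the same speed, so the execution time
 * is simply the job's nominal computation time. */
double exec_time_identical(double c) {
    return c;
}

/* Uniform: each processor has its own speed; execution time scales with
 * the speed of the processor the job runs on. */
double exec_time_uniform(double c, double processor_speed) {
    return c / processor_speed;
}

/* Unrelated: an execution rate is given per (job, processor) pair,
 * e.g. taken from a matrix rate[job][processor]; no single speed value
 * per processor exists. */
double exec_time_unrelated(double c, double rate_job_on_processor) {
    return c / rate_job_on_processor;
}
```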

Memory System

On embedded systems, memory is often not expandable or very costly to expand. When programming embedded systems, one needs to be aware of the memory needed to complete a task. A memory device can be classified based on several characteristics:
• Accessibility: random access, serial access or block access.
• Persistence of storage: volatile storage or non-volatile storage.
• Read/write speed.
• Size.
• Cost.
• Power consumption.
A memory system needs to meet the following requirements. First, processors are built to expect a random-access memory. Second, this memory must be fast compared to the speed of the processor. If the memory is too slow compared to the processor, a high proportion of the execution time of a program is spent waiting for data to arrive, which is a significant waste of processing power and energy. Third, this memory needs to be large: nowadays, software uses megabytes of code and expects up to gigabytes of storage. Finally, from the consumer point of view, this memory must also be cheap.
It is possible to meet all these requirements with a single memory technology; however, the cost would be very high [50]. Thus, in practice, designers exploit the locality of reference to create a memory hierarchy that meets all the requirements above. The idea is to have multiple levels of storage, each optimized for a specific requirement. This point is discussed in detail in Chapter 2.
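As a small illustration of locality of reference (the array size and function names below are arbitrary), a row-major traversal of a matrix reuses every cache line it fetches, whereas a column-major traversal of the same data jumps by a full row between accesses:

```c
/* Illustrative example of spatial locality. */
#define N 1024
static int matrix[N][N];

long sum_row_major(void)        /* good spatial locality */
{
    long sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += matrix[i][j];
    return sum;
}

long sum_column_major(void)     /* poor spatial locality */
{
    long sum = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += matrix[i][j];
    return sum;
}
```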

SCHEDULING ANALYSIS

Scheduling analysis provides a means to assess the ability of a given RTES to meet its timing constraints, in other words, to check that all tasks will meet their deadlines during the lifetime of the system. It includes analyzing and testing the feasibility and schedulability of several scheduling policies on a specific system model.
In this section, the system model used in this thesis is presented. Then, we explain what feasibility and schedulability mean in the context of RTES, and we present several scheduling policies and the tests applied to them.

System Model

A system model is an abstraction of a system. A system can be described by different models with different levels of abstraction. In this thesis, we assume the following system model:
• A uniprocessor RTES
• There are n independent periodic tasks: τ1, τ2, …, τi, …, τn.
• A task τi is defined by a quintuple (Ci, Ti, Di, Oi, Πi), whose elements are respectively the capacity (or worst case execution time), the period, the deadline, the offset and the priority of the task. Task τi makes its initial request after Oi units of time and then releases periodically every Ti units of time. Each release of a task is called a job. Each job requires Ci units of computation time and must complete within Di units of time. A unique priority level Πi is assigned to each task; the higher the priority value of a task, the higher its priority level.
• The capacity of a task is smaller than or equal to its deadline: Ci ≤ Di.
• The deadline of a task is smaller than or equal to its period: Di ≤ Ti.
• There are no dependencies or shared software resources between tasks.
• The task set can be either synchronous or asynchronous.
In addition, we introduce the following notations used to discuss the system model; a short code sketch of the task model is given after this list.
• Dmax is the largest relative deadline in the task set.
• Omax is the largest offset in the task set.
• A job of τi released at time t = Oi + k · Ti, k ∈ ℕ, is denoted as τi[t].
• hp(i) (respectively lp(i)) is the set of tasks with higher (respectively lower) priority than task τi.
• hep(i) (respectively lep(i)) is the set of tasks with higher (respectively lower) or equal priority to task τi.
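The following C sketch summarizes this model; the task_t structure and the hp_count helper are illustrative names introduced here, not part of the thesis:

```c
/* Sketch of the task model described above; field names follow the
 * quintuple (Ci, Ti, Di, Oi, Πi). Times are in abstract time units. */
typedef struct {
    int c;        /* capacity / worst case execution time Ci */
    int t;        /* period Ti */
    int d;        /* relative deadline Di (Ci <= Di <= Ti) */
    int o;        /* offset Oi of the first release */
    int prio;     /* priority level Πi (higher value = higher priority) */
} task_t;

/* Illustrative counterpart of the hp(i) notation: number of tasks in the
 * set with a strictly higher priority than task i. */
int hp_count(const task_t *tasks, int n, int i)
{
    int count = 0;
    for (int j = 0; j < n; j++)
        if (j != i && tasks[j].prio > tasks[i].prio)
            count++;
    return count;
}
```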

Fixed Priority Preemptive Scheduling

Under fixed priority preemptive (FPP) scheduling, each task is assigned a priority level. Tasks can preempt each other based on the statically assigned priorities. In this section, we first introduce priority assignment algorithms; second, we present schedulability tests applied to FPP scheduling.
Priority Assignment
Two of the best-known priority assignment algorithms are Rate Monotonic (RM) and Deadline Monotonic (DM). RM assigns priority levels to periodic tasks based on their periods: the shorter the period of a task, the higher its priority level. It has been shown that for synchronous periodic tasks with deadlines on request (∀τi : Ti = Di, Oi = 0), RM is optimal.
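As an illustration only (not code from the thesis), RM can be implemented by sorting the task set by period, reusing the hypothetical task_t structure from the sketch above; DM is obtained by comparing relative deadlines instead of periods.

```c
#include <stdlib.h>

/* Rate Monotonic: the shorter the period, the higher the priority level.
 * Priorities 1..n are assigned, n being the highest level. */
static int cmp_period_desc(const void *a, const void *b)
{
    const task_t *x = a, *y = b;
    return y->t - x->t;            /* longest period first */
}

void assign_rm_priorities(task_t *tasks, int n)
{
    qsort(tasks, n, sizeof(task_t), cmp_period_desc);
    for (int i = 0; i < n; i++)
        tasks[i].prio = i + 1;     /* shortest period gets the largest value */
}
/* Deadline Monotonic: compare the d field instead of t. */
```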
DM assigns priorities to tasks based on their relative deadlines: the shorter the relative deadline of a task, the higher its priority level. Leung and Whitehead [55] showed that for synchronous tasks with deadlines less than or equal to their periods (∀τi : Di ≤ Ti, Oi = 0), DM is optimal. Audsley [7] addressed asynchronous periodic tasks with arbitrary deadlines (Ti and Di are not related). Audsley's priority assignment algorithm is optimal in the sense that, for a given RTES model, it provides a feasible priority ordering resulting in a schedulable RTES whenever such an ordering exists. For n tasks, the algorithm performs at most n · (n + 1)/2 schedulability tests and is guaranteed to find a schedulable priority assignment if one exists.
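Audsley's algorithm lends itself to a short sketch: priorities are assigned from the lowest level upwards, and at each level any still unassigned task that passes the schedulability test at that level can be chosen. The is_schedulable_at callback below is a placeholder for whichever OPA-compatible schedulability test is used; the code is an illustration under those assumptions, not the thesis implementation.

```c
#include <stdbool.h>

/* Audsley's priority assignment: assign levels from lowest (1) to highest (n).
 * is_schedulable_at(tasks, n, i, level) must answer whether task i is
 * schedulable at the given level, assuming all unassigned tasks have
 * higher priority. */
bool audsley_opa(task_t *tasks, int n,
                 bool (*is_schedulable_at)(const task_t *tasks, int n,
                                           int i, int level))
{
    bool assigned[64] = { false };         /* this sketch supports <= 64 tasks */
    if (n > 64)
        return false;

    for (int level = 1; level <= n; level++) {
        int chosen = -1;
        for (int i = 0; i < n && chosen < 0; i++)
            if (!assigned[i] && is_schedulable_at(tasks, n, i, level))
                chosen = i;
        if (chosen < 0)
            return false;                  /* no feasible ordering exists */
        tasks[chosen].prio = level;
        assigned[chosen] = true;
    }
    return true;
}
```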


CRPD ANALYSIS FOR FPP SCHEDULING

In this section, we present existing research that accounts for CRPD in scheduling analysis. It is divided into three subjects:
• CRPD analysis for WCRT: extensions that have been made to the WCRT computation equation proposed by Joseph and Pandya [51] to take into account CRPD.
• Limiting CRPD: approaches that can be used to limit CRPD by eliminating CRPD, reducing the CRPD of each preemption or lowering the number of preemptions.
• CRPD analysis for scheduling simulation: approaches used in order to take into account CRPD in scheduling simulation.

CRPD analysis for WCRT

This section presents the extensions that have been made to take the effect of CRPD into account in WCRT computation for FPP scheduling. In the FPP scheduling context, the WCRT Ri of a task τi can be computed and compared against the deadline using the following equation [51]:
\[
R_i = C_i + \sum_{\forall j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil \cdot C_j \qquad (15)
\]

To take CRPD into account, the term γi,j was introduced by [23]. In this case, γi,j refers to the total cost of preemption due to each job of higher priority task τj (τj ∈ hp(i)) executing within the response time of task τi. Then, the worst case response time of task τi can be computed by:

\[
R_i = C_i + \sum_{\forall j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil \cdot (C_j + \gamma_{i,j}) \qquad (16)
\]
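Both recurrences are typically solved by fixed-point iteration, starting from Ri = Ci and iterating until the value stabilizes or exceeds the deadline. The following C sketch illustrates this for Equation (16); the task_t structure comes from the system-model sketch above, and the gamma array holding the γi,j values for the tasks in hp(i) is an assumption of this example (Equation (15) is the special case where every γi,j is zero).

```c
#include <math.h>

/* Fixed-point iteration for Equation (16). hp_tasks lists the tasks in
 * hp(i); gamma[j] is the CRPD cost attributed to each job of hp_tasks[j]
 * within the response time of task i. */
double wcrt_with_crpd(double ci, double di,
                      const task_t *hp_tasks, const double *gamma, int n_hp)
{
    double r = ci, prev;

    do {
        prev = r;
        double interference = 0.0;
        for (int j = 0; j < n_hp; j++)
            interference += ceil(prev / hp_tasks[j].t)
                            * (hp_tasks[j].c + gamma[j]);
        r = ci + interference;
        if (r > di)
            return r;              /* deadline missed: stop iterating */
    } while (r != prev);

    return r;                      /* converged worst case response time */
}
```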

Table of contents:

Introduction
1 real-time embedded system 
1.1 Properties of Real-Time Embedded System
1.1.1 RTES Classification
1.1.2 RTES Architecture
1.2 Software
1.2.1 Task – Unit of Execution
1.2.2 Task Properties
1.2.3 Task Dependencies
1.2.4 Task Types
1.2.5 Task Set Types
1.3 Real-Time Operating Systems
1.3.1 Scheduler
1.3.2 Memory Allocation
1.4 Hardware
1.4.1 Processor
1.4.2 Memory System
1.4.3 Network
1.5 Scheduling Analysis
1.5.1 System Model
1.5.2 Feasibility and Schedulability
1.5.3 Sustainability and Robustness
1.5.4 Fixed Priority Preemptive Scheduling
1.5.5 Dynamic Priority Preemptive Scheduling
1.6 Scheduling Simulation
1.6.1 Feasibility Interval
1.6.2 Scheduling simulator
1.7 Conclusion
2 cache memory and cache-related preemption delay 
2.1 The Need of Cache Memory and Memory Hierarchy
2.1.1 Memory Hierarchy
2.2 Basic Concepts about Cache Memory
2.2.1 Cache classification
2.2.2 Cache organization
2.2.3 Cache operations
2.3 Cache problems in RTES
2.4 CRPD Computation Approaches
2.4.1 Evicting Cache Block
2.4.2 Useful Cache Block
2.5 CRPD Analysis for FPP Scheduling
2.5.1 CRPD analysis for WCRT
2.5.2 Limiting CRPD
2.5.3 CRPD analysis for scheduling simulation
2.6 Conclusion and Thesis Summary
3 crpd-aware priority assignment 
3.1 System model and assumptions
3.2 Limitation of classical fixed priority assignment algorithms
3.2.1 Limitation of RM and DM
3.2.2 Limitation of OPA
3.3 Problem formulation and overview of the approach
3.3.1 Feasibility condition of OPA
3.3.2 Extending the feasibility condition with CRPD
3.4 CRPD interference computation solutions
3.4.1 CPA – ECB
3.4.2 CPA-PT and CPA-PT Simplified
3.4.3 CPA-Tree
3.5 Complexity of the algorithms
3.5.1 CPA-ECB
3.5.2 CPA-PT and CPA-PT-Simplified
3.5.3 CPA-Tree
3.6 Evaluation
3.6.1 Evaluating the impact of CRPD on the original OPA
3.6.2 Efficiency evaluation of CPA solutions
3.6.3 Evaluating the performance of the proposed feasibility test
3.6.4 Combined solution: CPA-Combined
3.7 Conclusions
4 crpd-aware scheduling simulation 
4.1 Definitions
4.2 CRPD computation models
4.2.1 Classical CRPD computation models
4.2.2 Problems with classical models
4.2.3 Fixed sets of UCBs and ECBs with constraint (FSC-CRPD)
4.3 Sustainability analysis
4.3.1 Definitions
4.3.2 CRPD problem in sustainability analysis
4.3.3 Sustainability analysis of scheduling simulation with classical CRPD computation models
4.3.4 Sustainability analysis of FSC-CRPD
4.4 Feasibility interval analysis
4.4.1 Stabilization Time
4.4.2 Periodic Behavior
4.5 Conclusions
5 cache-aware scheduling analysis tool implementation 
5.1 CRPD analysis implemented in Cheddar
5.2 Cheddar Framework
5.2.1 Cheddar ADL model of RTES components
5.2.2 Analysis features in Cheddar scheduling analyzer
5.2.3 Use Process
5.2.4 Development Process
5.3 Cache access profile computation
5.3.1 Extending Cheddar ADL
5.3.2 Implement analysis features: cache access profile computation
5.3.3 Implementation summary
5.3.4 Experiments
5.4 CRPD analysis for WCRT
5.4.1 Extending Cheddar ADL
5.4.2 Implementing analysis features: CRPD analysis for WCRT
5.4.3 Implementation Summary
5.5 CRPD-aware priority assignment algorithm
5.5.1 Extending Cheddar ADL
5.5.2 Implementing analysis feature: CRPD-aware priority assignment algorithm
5.5.3 Implementation Summary
5.6 CRPD-aware scheduling simulation
5.6.1 Extending Cheddar ADL
5.6.2 Implementing analysis feature: CRPD-aware scheduling simulation
5.6.3 Implementation summary
5.6.4 Experiments and evaluation
5.7 Implementation Issues
5.8 Conclusions
6 conclusion 
6.1 Contribution Summary
6.2 Future Work
a algorithm and pseudo code 
a.1 CPA-PT: CRPD potential preemption computation
a.2 CPA-Tree: Tree computation
a.3 Event handlers for scheduling simulation with FS-CRPD
a.4 Event handlers for scheduling simulation with FSC-CRPD
b express schema 
b.1 EXPRESS schema of cache memory
b.2 EXPRESS schema of CFG and cache access profile
c method signature 
c.1 Procedure Compute_Cache_Access_Profile
c.2 Procedure Compute_Response_Time
c.3 Procedure CPA_CRPD
bibliography
