
Deadline Timing and OSEK

Tasks running on an OSEK-compliant RTOS can be prioritized using deadline monotonic analysis. Here's a summary of the mathematics required to do so, and a look at the way the results can be used to improve your software design.

OSEK/VDX is the name of a family of automotive software infrastructure standards whose roots are found in the European automotive industry. Specifically, it includes an RTOS standard (known as OSEK/VDX OS), a standard for communication between tasks and nodes, and a standard for network management. The key concern addressed by OSEK/VDX is portability of automotive applications among different microprocessors. Another motivating concern is to develop and encourage the use of off-the-shelf tools, along with the recruiting and training benefits that arise from the use of common techniques.

The following services are standardized by OSEK/VDX OS (a sketch of the corresponding API calls follows the list):

  • Scheduling
  • Managing ISRs
  • Managing resources
  • Handling events
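
To make these services concrete, here is a minimal sketch of how an application might call the corresponding OSEK OS APIs. The task, resource, and event names are hypothetical, and the header name varies by implementation.

#include "os.h"                  /* implementation-specific OSEK header */

DeclareTask(t2);
DeclareTask(t3);
DeclareResource(mutex);
DeclareEvent(ev_data);

TASK(t1)                         /* scheduling: a basic task */
{
    GetResource(mutex);          /* resource management */
    /* ...access data shared with other tasks... */
    ReleaseResource(mutex);
    SetEvent(t2, ev_data);       /* event handling: wake the extended task */
    TerminateTask();             /* every OSEK task must end this way */
}

TASK(t2)                         /* an extended task blocked on an event */
{
    for (;;) {
        WaitEvent(ev_data);
        ClearEvent(ev_data);
        /* ...process the new data... */
    }
}

ISR(timer_isr)                   /* ISR management: a category-2 ISR */
{
    ActivateTask(t3);            /* ISRs may use a subset of OS services */
}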

The inclusion of an offline configuration language, OIL (OSEK Implementation Language), allows electronic control unit developers to statically define components of their application architecture (for example, tasks, resources, and alarms). This allows RTOS data structures to be placed in ROM, rather than RAM.
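
The following is a hypothetical OIL fragment illustrating this kind of static configuration; object names and attribute details vary between implementations and OIL versions.

CPU ExampleCPU {
    OS ExampleOS {
        STATUS = STANDARD;
    };

    COUNTER SysCounter {
        MAXALLOWEDVALUE = 65535;
        TICKSPERBASE = 1;
        MINCYCLE = 1;
    };

    TASK t1 {
        PRIORITY = 3;        // fixed priority, assigned offline
        SCHEDULE = FULL;     // fully preemptable
        ACTIVATION = 1;
        AUTOSTART = FALSE;
    };

    ALARM a1 {               // periodically activates t1
        COUNTER = SysCounter;
        ACTION = ACTIVATETASK { TASK = t1; };
    };
};

Because all of these objects are known before the system runs, the kernel data structures that describe them can be generated as constant data.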

For more information on OSEK, see “The OSEK/VDX Standard: Operating System and Communication,” (Joseph Lemieux, March 2000, p. 90).

Periodic versus preemptive

Embedded systems have traditionally used a cyclic approach to scheduling, which gives excellent real-time predictability. However, its use typically leads to systems that are hard to maintain and that may contain inefficiencies. Consider the example task set in Table 1.

Table 1: Minimum sample rate and processing time for an application with three tasks using simple cyclic scheduling

Task   Min. required sample rate   Processing time
t1     3ms (333Hz)                 0.50ms
t2     6ms (166Hz)                 0.75ms
t3     14ms (71Hz)                 1.25ms

In the cyclic approach, all tasks run at the rate required by the most demanding task, the one with the shortest sample period. In the example, this means all tasks run once every 3ms. This approach allows a simplistic implementation in which a loop calls each task in turn before delaying to ensure that the overall cycle time remains constant (3ms in this case). The result is over-sampling: tasks execute more frequently than necessary. For example, t2 uses 25% of the available CPU time where only 12.5% is required; t3 uses 42% of the available CPU time where only 9% is required.
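
A minimal sketch of such a loop follows. The task functions and delay_until_next_tick() are hypothetical stand-ins for the application code and whatever timer facility the target provides.

void t1(void);
void t2(void);
void t3(void);
void delay_until_next_tick(void);    /* hypothetical: pad cycle out to 3ms */

int main(void)
{
    for (;;) {
        t1();                        /* required every 3ms */
        t2();                        /* required only every 6ms: over-sampled */
        t3();                        /* required only every 14ms: over-sampled */
        delay_until_next_tick();     /* hold the overall cycle time at 3ms */
    }
}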


Figure 1: Major-minor cyclic scheduling approach

In a major-minor cyclic scheduling approach (illustrated in Figure 1), a 12ms major cycle containing four 3ms minor cycles is defined. Thus, t1 runs once every minor cycle, t2 every two minor cycles (6ms), and t3 every four minor cycles (once every major cycle). Task t3 is still over-sampled, running every 12ms rather than every 14ms (10.4% of CPU time where only 9% is required).
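
A sketch of the corresponding loop, again with hypothetical task and timer functions:

void t1(void);
void t2(void);
void t3(void);
void delay_until_next_tick(void);    /* hypothetical: pad to next 3ms boundary */

int main(void)
{
    unsigned minor = 0;              /* four 3ms minor cycles per 12ms major cycle */
    for (;;) {
        t1();                        /* every minor cycle (3ms) */
        if (minor % 2 == 0) t2();    /* every second minor cycle (6ms) */
        if (minor == 0)     t3();    /* once per major cycle (12ms) */
        minor = (minor + 1) % 4;
        delay_until_next_tick();
    }
}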

In either the simple cyclic or the major-minor cyclic scheduling approach, time must be allocated in each cycle (in each minor cycle in the case of major-minor cyclic scheduling) for all interrupts that could occur within the cycle. For example, a periodic (every 10ms) interrupt could fire in any 3ms cycle. Therefore, the worst case must be assumed: that the interrupt occurs in every cycle. To do otherwise could result in task execution overflowing into the next minor cycle.

New tasks could be added to the system by fitting them into empty “slots” during which no processing takes place. However, a new task t4, with a 14ms sample period and 5ms of processing time, cannot fit into any of the available slots; the user ends up chopping it into pieces (see Figure 2).


Figure 2: Dispersal of added task t4 among available slots

This need to split up tasks to fit into available slots makes an application considerably less maintainable. In the example, any change to t1, t2, or t3 will require t4 to be manually repartitioned.

The alternative is to use a fixed-priority preemptive scheduling policy, such as the one provided by an RTOS. In this approach, each task is assigned a fixed priority based upon its period or deadline. Each task is made ready to run when necessary (with periods of 3ms, 6ms, 14ms, and 14ms for our example system). At any time, the scheduler selects the highest-priority ready task to run. If a lower-priority task is already running and a higher-priority task becomes ready, the higher-priority task immediately preempts it. Harmonic sampling becomes unnecessary and support for sporadic tasks improves, making more efficient use of the CPU. Consider the example in Figure 3.


Figure 3: A fixed-priority preemptive scheduling policy

The maintainability problem has been solved, as t4 is automatically partitioned by the operation of the scheduler. Furthermore, the overall responsiveness of the system is improved. However, determining the response time of each task and interrupt is no longer straightforward.
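
In an OSEK system, this kind of periodic activation is typically driven by alarms. The following is a minimal sketch assuming a 1ms system counter tick and alarm objects (declared in OIL with ACTIVATETASK actions) named after the tasks they activate; all of these names are hypothetical.

#include "os.h"                  /* implementation-specific OSEK header */

DeclareAlarm(AlarmT1);
DeclareAlarm(AlarmT2);
DeclareAlarm(AlarmT3);
DeclareAlarm(AlarmT4);

TASK(init)
{
    /* SetRelAlarm(alarm, first offset, cycle), in counter ticks:
       activate t1 every 3ms, t2 every 6ms, t3 and t4 every 14ms. */
    SetRelAlarm(AlarmT1, 3, 3);
    SetRelAlarm(AlarmT2, 6, 6);
    SetRelAlarm(AlarmT3, 14, 14);
    SetRelAlarm(AlarmT4, 14, 14);
    TerminateTask();
}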

Deadline monotonic analysis

Deadline monotonic analysis (DMA) is a technique to calculate the worst-case response time of tasks. It can be used to assign static priorities to ensure that all tasks will meet their deadlines (in other words, that the system is schedulable). DMA is derived from a similar technique called rate monotonic analysis (RMA). (For more information on how RMA compares to DMA, see “Deadline Monotonic Analysis,” by Ken Tindell, June 2000, pp. 20–38.)

Here is the standard notation:

Ti is the period of task i
Di is the deadline of task i
Ci is the worst-case execution time of task i
Ri is the worst-case response time of task i

Ti and Di can be derived from requirements. Ci can be obtained in either of two ways: by allocating a worst-case execution time budget to the task (assuming support exists to ensure that the task keeps to its budget), or by measuring the execution time of the task running in isolation.

Ri is a result of the DMA technique. Under the rules of DMA, there may be more than one deadline per task. See Figure 4 for an example.


Figure 4: Multiple deadlines per task

Ri is composed of Ci plus the computation time taken by any higher-priority tasks that preempt task i. This preemption time is called the interference, Ii. Therefore:

$$R_i = C_i + I_i$$
If there is only one higher-priority task, k, the interference is the number of times that k can preempt i, multiplied by the execution time of k. See Figure 5.


Figure 5: Interference

The interference from one task is

$$I_i = \left\lceil \frac{R_i}{T_k} \right\rceil C_k$$
The total interference from all higher-priority tasks is therefore

$$I_i = \sum_{k \in hp(i)} \left\lceil \frac{R_i}{T_k} \right\rceil C_k$$
where hp(i) is the set of tasks with priority higher than i. Substituting the interference into the expression for Ri gives:

$$R_i = C_i + \sum_{k \in hp(i)} \left\lceil \frac{R_i}{T_k} \right\rceil C_k$$
Because Ri appears on both sides, this eventually translates into a recurrence relationship, starting with Ri equal to Ci and iterating until two successive values agree:

$$R_i^{n+1} = C_i + \sum_{k \in hp(i)} \left\lceil \frac{R_i^n}{T_k} \right\rceil C_k$$
Looking again at our previous example with four tasks and one interrupt, Table 2 shows the input parameters for DMA. We can use that data to calculate the response time for t4, as shown in Table 3.

Table 2: DMA input parameters

Task        Ti      Ci       Di
interrupt   10ms    0.50ms   3ms
t1          3ms     0.50ms   3ms
t2          6ms     0.75ms   6ms
t3          14ms    1.25ms   14ms
t4          14ms    5.00ms   14ms

Table 3: Response-time iteration for t4 (all times in ms)

n   Rn      In                                     Rn+1
0   5.00    0.5 + 2×0.5 + 0.75 + 1.25              8.50
1   8.50    0.5 + 3×0.5 + 2×0.75 + 1.25            9.75
2   9.75    0.5 + 4×0.5 + 2×0.75 + 1.25            10.25
3   10.25   2×0.5 + 4×0.5 + 2×0.75 + 1.25          10.75
4   10.75   2×0.5 + 4×0.5 + 2×0.75 + 1.25          10.75

The iteration stops when Rn+1 = Rn. Here t4's worst-case response time is 10.75ms, comfortably within its 14ms deadline, so t4 is schedulable.
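
This iteration is easy to automate. The following is a minimal sketch (not any particular vendor's tool) that applies the recurrence to the Table 2 parameters; times are held in microseconds so that the ceiling and the fixed-point test stay exact in integer arithmetic.

#include <stdio.h>

/* Task parameters from Table 2, in descending priority order,
   so hp(i) is simply indices 0..i-1. All times in microseconds. */
#define NTASKS 5
static const long T[NTASKS] = { 10000, 3000, 6000, 14000, 14000 }; /* periods    */
static const long C[NTASKS] = {   500,  500,  750,  1250,  5000 }; /* exec times */
static const long D[NTASKS] = {  3000, 3000, 6000, 14000, 14000 }; /* deadlines  */

/* Worst-case response time of task i via the DMA recurrence. */
static long response_time(int i)
{
    long r = C[i], prev;
    do {
        prev = r;
        long interference = 0;
        for (int k = 0; k < i; k++)    /* every higher-priority task */
            interference += ((prev + T[k] - 1) / T[k]) * C[k]; /* ceil(prev/Tk)*Ck */
        r = C[i] + interference;
    } while (r != prev && r <= D[i]);  /* fixed point, or deadline already blown */
    return r;
}

int main(void)
{
    static const char *name[NTASKS] = { "int", "t1", "t2", "t3", "t4" };
    for (int i = 0; i < NTASKS; i++) {
        long r = response_time(i);
        printf("%-4s R = %5.2fms  (%s)\n", name[i], r / 1000.0,
               r <= D[i] ? "meets deadline" : "misses deadline");
    }
    return 0;
}

Run against Table 2, this reproduces the 10.75ms figure for t4 and confirms that all five response times sit within their deadlines.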

DMA makes a couple of assumptions about how the system behaves:

  • During its execution, a task is never delayed waiting for a lower-priority task to execute.
  • The RTOS performs scheduling and task switching in zero time.

To make the analysis applicable to real-life systems, you can implement an extended form of DMA that does not make these assumptions. The details of these extensions are beyond the scope of this article. In brief, they include the computation of blocking time between tasks (within a preemptive system, a higher-priority task can be blocked by a lower-priority task when interrupts are disabled or when resources are shared), the introduction of jitter information, the buffering of interrupts, and the inclusion of scheduling and task-switching times.
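
For reference, one common form of the extended recurrence (see Burns and Wellings in the further-reading list) adds a blocking term Bi, the longest time task i can be held up by a lower-priority task, and a release jitter term Jk for each higher-priority task:

$$R_i = C_i + B_i + \sum_{k \in hp(i)} \left\lceil \frac{R_i + J_k}{T_k} \right\rceil C_k$$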

Analysis

In our example, the system consists of four tasks and one interrupt. After capturing task and interrupt data, you can create a stimulus for each task in the system. You can then use schedulability analysis to guarantee that all stimulus/response relationships complete within their period and that all tasks meet their deadlines. In practice, developers can model an application's requirements and progress gradually to a detailed implementation.

Deadline monotonic analysis also enables you to determine the effects of changes to the timing behavior of your application, which in turn allows you to investigate the limits of schedulability under various system parameters. This enables developers to identify, among other things, the areas of risk in a design, explore the potential for system modifications, target corrective action, investigate idle time, and select processor clock speeds.

Andrew Coombes is a product manager for LiveDevices. Prior to 1999, he was involved in research and consulting in the automotive and aerospace sectors. He has a BSc and DPhil in Computer Science from the University of York in the area of safety-critical systems. Contact him at .

Further reading

1. Burns, Alan and Andy Wellings. Real-time Systems and Their Programming Languages, Addison Wesley, 1997.

2. Tindell, Ken. “Deadline Monotonic Analysis,” Embedded Systems Programming, June 2000, pp. 20–38.

3. Tracey, Nigel. “Engineering real-time behavior through the development process,” Embedded Systems Conference, Boston 2001.
