
Lower the overhead in RTOS scheduling

Alexander G. Dean

March 03, 2011


Research shows that preemption-threshold scheduling helps to mitigate the deadline-vs.-overhead tradeoff faced by developers of real-time systems.

Engineers creating real-time embedded applications typically use a real-time operating system (RTOS) to develop a system as a collection of independent tasks or threads, while maintaining responsiveness to time-critical events. These threads must run periodically or in response to some event and must complete before a certain deadline. Hard real-time systems demand that deadlines be met with absolute certainty, even under worst-case conditions. However, the preemption used to guarantee those deadlines adds overhead, conflicting with a real-time system's need for efficiency. One technology, preemption-threshold scheduling (PTS), reduces overhead while maintaining the ability to meet deadlines under worst-case conditions. In this article, I explain what PTS is, how it works, and how it addresses the real-time programming challenges facing embedded system designers.

Preemption-threshold scheduling reduces the number of context-switches by inhibiting preemption for a thread-specific range of priorities.1 PTS preserves the ability to meet deadlines 100% of the time, with fewer high-overhead context switches. Developers can elect where and when to use PTS and otherwise allow traditional preemptive scheduling to occur. PTS gives them control over the responsiveness of each system thread and enables them to guarantee that all threads meet their individual deadlines.

PTS also offers other valuable benefits for resource-constrained systems. PTS is optimal among all preemption-limiting methods for minimizing a system's total stack memory requirements—critical for systems with many threads. PTS also provides a useful mechanism for avoiding priority inversion.

Why smarter preemption?
Most RTOSes use fully-preemptive schedulers where a task can preempt any lower-priority task at any time. Preemption allows critical processing to occur sooner than less-critical processing, delaying other processing until time is available. An interrupt service routine (ISR) is an example of such time-critical processing, but the same concept is typically extended to cover all tasks in the system. Preemption reduces the response time of critical processing in the system. A shorter response time can be used to achieve a more responsive system or more sophisticated processing. Alternatively, the processor clock speed can be reduced to save power, energy, or cost (through the use of a less expensive processor).

However, preemption has drawbacks:

  • Each preemption incurs processing overhead. The scheduler must decide which task to run and then save and restore processor state. Modern embedded processors typically have hardware support that reduces this overhead, but does not eliminate it.
  • Since preemptions can occur asynchronously among tasks, the system needs enough RAM to hold all tasks' worst-case stacks simultaneously (see the worked example after this list). This can be prohibitively expensive for systems with little RAM or many tasks. Each stack must be large enough to accommodate worst-case function call nesting, local variable allocation, ISR activity, and possibly the task control block and context. Adding features such as a GUI, an embedded web server, or a TCP/IP stack adds multiple threads, each needing its own stack. Even in systems with larger amounts of RAM, running out of memory is a common problem: embedded systems usually evolve over time, with each new generation of software accreting a new layer onto the existing code base ("the software gas law"), so over a long enough time the data memory requirements fill the available memory, making RAM a precious resource. This makes the use of real-time kernels in low-end embedded systems particularly hard, if not impossible.
  • Systems with caches suffer from cache pollution by other tasks ("cache-related preemption delay") unless precautions are taken; unfortunately, those precautions reduce performance. As the preempting Task B executes, it can evict the data and instructions of the preempted Task A, so when Task A resumes execution, it runs more slowly due to increased cache misses. This greatly complicates worst-case timing analysis, since the preemption can typically occur at any point within Task A, and worst-case timing analysis is critical to ensuring real-time systems will meet their deadlines.
  • Many embedded systems use dynamic voltage and frequency scaling to perform processing with as little energy or power as possible. Each task has its optimal frequency/voltage operating point and requires certain peripheral devices to be powered up. Allowing preemption introduces the possibility that the RTOS must transition to a new operating voltage and frequency, which further increases delays, and preempted tasks may have been using peripherals that must remain powered during the preemption to maintain their state. Guaranteeing that the system will meet its real-time deadlines despite the delays of these power-management state changes makes the analysis even more difficult.
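To make the memory cost concrete, consider a small worked example in C; the five stack sizes are hypothetical, chosen only to illustrate the arithmetic at the two extremes of the preemption spectrum:

#include <stdio.h>

/* Worked example of worst-case stack budgeting; the five stack sizes
 * are hypothetical. With full preemption, every task may be suspended
 * mid-execution, so RAM must hold all worst-case stacks at once; with
 * no preemption, tasks run to completion and one shared stack sized
 * for the largest task suffices. */
int main(void)
{
    unsigned stack_bytes[] = { 512, 768, 1024, 512, 2048 };
    unsigned n = sizeof stack_bytes / sizeof stack_bytes[0];

    unsigned total = 0, largest = 0;
    for (unsigned i = 0; i < n; i++) {
        total += stack_bytes[i];            /* fully preemptive: the sum */
        if (stack_bytes[i] > largest)
            largest = stack_bytes[i];       /* fully nonpreemptive: max  */
    }
    printf("fully preemptive:    %u bytes\n", total);   /* 4864 */
    printf("fully nonpreemptive: %u bytes\n", largest); /* 2048 */
    return 0;
}

Limiting preemption lets a design land between these two bounds, which is exactly the territory PTS explores.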

To summarize, preemption improves system responsiveness at the cost of greater memory requirements and more difficult timing analysis and performance optimization.

Partial preemption

Preemption need not be an all-or-nothing design decision. Many systems can meet all deadlines without full preemption among tasks; in fact, there is a spectrum of possible approaches, ranging from nonpreemptable to fully preemptable. Researchers have been actively investigating this spectrum and determining how to mathematically model the resulting systems in order to better understand the design trade-offs.

PTS is a promising technique for limiting preemptions.2 It minimizes preemptions as much as possible while preserving the system's schedulability. (Real-time analysis techniques let us determine mathematically whether a set of tasks is schedulable, meaning that all of its tasks will meet their deadlines in all possible scheduling situations; schedulability is a fundamental requirement for many embedded systems.)



Researchers have been trying to understand how and when preemptions can be reduced or even eliminated. Baker introduced the Stack Resource Policy (SRP).3 Gai et al. extended it to support PTS, resulting in the stack resource policy with preemption-thresholds (SRPT).4 SRP and SRPT prevent priority inversions and deadlocks and bound the maximum time any task can be blocked.

Preemption-threshold scheduling

Normal fully-preemptive scheduling assigns a single priority to each task. This priority determines two aspects of the task's behavior—whether it can preempt another task and whether it can be preempted by another task. PTS assigns a separate priority to each aspect: The nominal priority determines whether this task can preempt other tasks, and the preemption-threshold sets this task's effective priority while executing.

When the task begins execution, its priority is raised to its preemption-threshold. In this way, all the tasks with priorities less than or equal to the preemption-threshold of the executing task cannot preempt it. PTS effectively creates groups of tasks that are not allowed to preempt each other.

It's easily seen that fully-preemptive and fully-nonpreemptive scheduling policies are special boundary cases of PTS. By assigning the preemption-threshold of each task equal to its priority, PTS simplifies to a fully-preemptive scheduling policy. By assigning the preemption-threshold of each task equal to the system's highest priority, PTS simplifies to a fully-nonpreemptive policy.
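The preemption rule can be sketched in a few lines of C; the structure and field names here are hypothetical rather than from any particular RTOS, and larger numbers denote higher priority:

#include <stdbool.h>

/* Hypothetical task descriptor; larger numbers mean higher priority. */
typedef struct {
    int priority;             /* nominal: governs whom it may preempt */
    int preemption_threshold; /* effective priority while running     */
} task_t;

/* Under PTS, a newly ready task preempts the running task only if its
 * nominal priority exceeds the running task's preemption threshold
 * (i.e., the priority the running task was raised to at dispatch). */
bool should_preempt(const task_t *ready, const task_t *running)
{
    return ready->priority > running->preemption_threshold;
}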

Assigning preemption-thresholds
Preemption between tasks can be limited to occur only when necessary to maintain system schedulability. Tasks that run nonpreemptively with respect to each other can be mapped into the same run-time thread and share the same run-time stack, minimizing the memory requirements and other preemption overheads.
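The grouping test follows directly from the scheduling rule; as a minimal sketch (hypothetical types, same convention as above that larger numbers mean higher priority), two tasks may be mapped to the same stack when neither can ever preempt the other:

#include <stdbool.h>

typedef struct {
    int priority;             /* nominal priority                   */
    int preemption_threshold; /* effective priority while executing */
} task_t;

/* Two tasks can share one run-time stack if neither can ever preempt
 * the other: each task's nominal priority must be at or below the
 * other task's preemption threshold. */
bool can_share_stack(const task_t *a, const task_t *b)
{
    return a->priority <= b->preemption_threshold &&
           b->priority <= a->preemption_threshold;
}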

Preemption-threshold assignment works by starting with a default, fully-preemptive system—each task's preemption-threshold is equal to its preemption level. This is also called the identity assignment.

Starting with the lowest-priority task (setting i to N in Line 2 of Listing 1), each task's preemption-threshold γi is set to a trial value j, and the system is evaluated for schedulability. The threshold j (and therefore γi) is incremented until any further increase would make the system unschedulable. Wang and Saksena developed the heuristic shown in Listing 1; it always finds a feasible preemption-threshold assignment if one exists, with a time complexity of O(N² · q(N)), where q(N) is the complexity of the schedulability-checking function. We call this algorithm the maximal preemption-threshold assignment algorithm (MPTAA) because the assignment it finds is at least as large as any other feasible preemption-threshold assignment.


Listing 1: The maximal preemption-threshold assignment algorithm (MPTAA).
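A C sketch of the heuristic, reconstructed from the description above; the task layout and the is_system_schedulable() routine are assumptions (a concrete fixed-priority check is sketched after the next paragraph), and the real test must evaluate the set under the current threshold assignment:

#include <stdbool.h>

/* Hypothetical task table; priority values run 0 (lowest) to n-1
 * (highest), and tasks[] is sorted from lowest to highest priority. */
typedef struct {
    int priority;
    int threshold;  /* preemption threshold, always >= priority */
} task_t;

/* Supplied elsewhere: schedulability test for the whole task set. */
bool is_system_schedulable(const task_t tasks[], int n);

/* MPTAA: start from the identity assignment, then, from the
 * lowest-priority task upward, raise each threshold until any further
 * increase would make the system unschedulable. Complexity is
 * O(N^2 * q(N)), where q(N) is the cost of one schedulability check. */
bool assign_max_thresholds(task_t tasks[], int n)
{
    for (int i = 0; i < n; i++)
        tasks[i].threshold = tasks[i].priority;   /* identity assignment */

    if (!is_system_schedulable(tasks, n))
        return false;   /* infeasible even when fully preemptive */

    for (int i = 0; i < n; i++) {                 /* lowest priority first */
        while (tasks[i].threshold < n - 1) {
            tasks[i].threshold++;                 /* try one step higher */
            if (!is_system_schedulable(tasks, n)) {
                tasks[i].threshold--;             /* back off last raise */
                break;
            }
        }
    }
    return true;
}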

This algorithm can be used for both fixed-priority and dynamic-priority schemes. The procedure is_task_schedulable() evaluates the schedulability of the system, using level-i busy period analysis for fixed-priority systems or, for dynamic-priority schemes, computing the maximal blocking a task can endure without violating its schedulability.
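As an illustration, here is a minimal fixed-priority version of such a check: the classic response-time (level-i busy period) iteration for a fully-preemptive task set, with hypothetical parameters for three tasks. A PTS-aware version would also add a blocking term for lower-priority tasks whose preemption thresholds reach level i:

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define N 3

/* Hypothetical periodic tasks, indexed from highest (0) to lowest
 * (N-1) priority; worst-case execution time C, period T, and
 * deadline == period. */
static const double C[N] = { 1.0, 2.0,  3.0 };
static const double T[N] = { 5.0, 10.0, 20.0 };

/* Classic response-time analysis for fixed-priority, fully-preemptive
 * scheduling: iterate R = Ci + sum over higher-priority j of
 * ceil(R/Tj)*Cj to a fixed point, then compare against the deadline. */
static bool is_task_schedulable(int i)
{
    double prev, R = C[i];
    do {
        prev = R;
        R = C[i];
        for (int j = 0; j < i; j++)
            R += ceil(prev / T[j]) * C[j];   /* higher-priority demand */
        if (R > T[i])
            return false;                    /* deadline miss          */
    } while (R > prev);                      /* not yet a fixed point  */
    return true;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        printf("task %d schedulable: %s\n", i,
               is_task_schedulable(i) ? "yes" : "no");
    return 0;
}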

The MPTAA was analyzed and was shown to always find the maximal preemption-threshold assignment if one exists. This means the preemption-thresholds will be as large as possible. Since in our case we always start with a feasible assignment (the identity assignment), the maximal preemption-threshold assignment always exists (in the worst-case being equal to the identity assignment) and will always be found by the MPTAA.

PTS reduces number of preemptions

In 1999, Wang and Saksena evaluated how effectively PTS reduces the number of preemptions required to meet deadlines.2 They found that PTS reduced preemptions by 5% to 32% when compared with fully-preemptive scheduling.

They evaluated randomly-generated periodic task sets. Each task set consists of a number of tasks, each with a computation time Ci and a period Ti (which is also its deadline). The period is chosen randomly from the range 1 to MaxPeriod using a uniform probability distribution, and the computation time Ci is then also drawn from a uniform distribution. This simulation approach reveals the behavior of different scheduling approaches across a range of workloads and their sensitivity to different factors.
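A sketch of how such a workload might be generated follows; the article does not fully specify how each Ci is drawn, so the equal utilization split below is purely an assumption to keep the example concrete:

#include <stdlib.h>

#define NTASKS     10
#define MAX_PERIOD 1000.0

typedef struct { double C, T; } task_param_t;

/* Uniform random double in [lo, hi). */
static double uniform(double lo, double hi)
{
    return lo + (hi - lo) * ((double)rand() / ((double)RAND_MAX + 1.0));
}

/* Generate one random periodic task set with total utilization near U.
 * Periods Ti are uniform in [1, MaxPeriod], as in the study; each Ci
 * is then set from an (assumed) equal utilization share U/NTASKS. */
void make_task_set(task_param_t set[], double U)
{
    for (int i = 0; i < NTASKS; i++) {
        set[i].T = uniform(1.0, MAX_PERIOD);
        set[i].C = (U / NTASKS) * set[i].T;   /* Ci = ui * Ti */
    }
}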

Wang and Saksena measured the number of context switches occurring in a 100,000 time-unit execution of each workload. The results, presented in Figure 1, show that PTS significantly reduces the number of preemptions, with the reduction depending on two factors. First, the number of tasks in a workload matters because it determines how long the system runs before a task finishes, allowing another task to run without preempting the first. For constant processor utilization, more tasks means the processor's busy time is broken into smaller pieces, reducing the time to wait. Second, the MaxPeriod parameter matters because a broader range of periods (and hence deadlines) provides more opportunities to limit preemptions while still meeting deadlines; a narrower range reduces the preemption savings because task deadlines are more closely synchronized.


Figure 1: Context switches under PTS versus fully-preemptive scheduling.

In 2004, Jejurikar and Gupta evaluated using PTS in a cache-based system in conjunction with dynamic voltage scaling to reduce system energy requirements.5 They found that PTS significantly reduced the number of context switches required. They used randomly-generated periodic task sets, each with 10 to 20 tasks, with deadlines equal to periods and periods spanning a factor of 10.

The authors counted two types of context switches, actual and effective, and compared them against those of a fully-preemptive scheduling approach. Effective context switches occur when a task begins or ends execution; they are important because they directly affect cache performance by disrupting the locality of memory references.

As Figure 2 shows, Jejurikar and Gupta found that, on average, PTS reduced the number of context switches by 90% compared with the fully-preemptive approach, essentially independent of processor utilization. The number of effective context switches was reduced by 19% to 29%, improving memory system performance by about the same amount.


Figure 2: Context switches under PTS with dynamic voltage scaling versus fully-preemptive scheduling.

PTS reduces stack space requirements
Threads in a nonpreemptive group can share stack space, since their execution is not interleaved; run-to-completion threads and some blocking threads qualify. Baker's Stack Resource Policy calculates the maximum possible blocking times of tasks, allowing a system designer to determine whether deadlines can be met.3 SRP modifies the scheduler so that a task does not start executing unless its preemption level is high enough; because the task does not begin running until all of its resources are available, it will not block partway through.

My own research group and others have evaluated how PTS can reduce stack space requirements. We showed that PTS can be used with any scheduling algorithm (such as rate monotonic and earliest deadline first) and that it always yields the smallest total stack space requirements6 when using the maximal preemption-threshold assignment algorithm described by Wang and Saksena.2 This extends their finding that the algorithm minimizes the number of separate stacks required, which does not necessarily minimize total stack space. Gai extended SRP to support preemption-thresholds and showed that it minimizes total stack space requirements.4

We investigated workload characteristics that affect the stack size reduction achievable through PTS. For example, with PTS the optimal stack utilization for some workloads can be as small as 20% of the stack space required by the fully-preemptive version of the system, while other workloads need 80%. To better understand PTS, we simulated and analyzed 40,000 randomly-generated systems of 10 threads each. The results are summarized below.



We first investigated the effect of the system utilization on the optimal stack utilization required with PTS. Using the MPTAA, the optimal stack space required by each system was computed and normalized to the stack space required by the fully-preemptive version of the system. The average normalized stack utilizations were then plotted as a function of the overall system utilization and the standard deviation in the task periods. The results are shown in Figures 3 and 4 for the fixed-priority and the dynamic-priority schemes, respectively.




First, consider the fixed-priority scheme in Figure 3. At low utilizations, the system's stack space requirements might be less than 30% of those of a fully-preemptive system. However, at higher utilization levels, the variation in the tasks' periods increasingly affects the savings attainable. With σP = 25% the savings are only slightly dependent on the utilization level.

On the other hand, as the standard deviation of the periods increases, the savings attainable decrease significantly at higher utilizations. With constant utilization, tasks with longer periods have proportionally larger worst-case execution times (WCETs), and a long-running task that cannot be preempted blocks higher-priority tasks for longer, so it is much harder to maintain the system's schedulability while minimizing preemptions. This becomes very apparent at high system utilizations, where there is far less slack time.

Second, consider the dynamic-priority scheme in Figure 4. The normalized stack space utilizations in this case are much more uniform across all utilization levels. This is because the dynamic priority scheme (EDF in this case) is much more adaptive to the workload characteristics and has a higher schedulable utilization than a fixed-priority scheme such as RM. This more-efficient scheduling allows more preemption limiting to occur before schedulability is lost.

Another interesting property is the distribution of the 40,000 systems among the different normalized stack space utilization levels. To evaluate this property, we constructed histograms showing the percentage of systems versus the normalized (optimal) stack space utilization achievable by each system. Figures 5 and 6 show this distribution for overall system utilization levels of 60%, 70%, 80%, and 90%. Again we can see that the workloads scheduled with the dynamic-priority scheme depend less on the system utilization level than those in a fixed-priority scheme.






In Part 2 of this series, online at Embedded.com, I show how PTS improves the schedulability of real-time systems while reducing overhead, and cover its other benefits as well.

Alexander G. Dean, PhD, is associate professor in the Department of Electrical and Computer Engineering at North Carolina State University and a member of its Center for Efficient, Scalable and Reliable Computing. He received his BS EE from the University of Wisconsin and MS and PhD ECE degrees at Carnegie Mellon. His research focus in embedded systems includes compiling for concurrency and performance, energy efficient use of COTS processors, memory allocation, and benchmarking for real-time systems and robust embedded system design.


Endnotes:
1. Preemption-Threshold is a trademark of Express Logic, Inc.
2. Wang, Yun, and Manas Saksena. "Scheduling fixed-priority tasks with preemption threshold," Proceedings of the International Conference on Real-Time Computing Systems and Applications (RTCSA '99), 1999. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=811269&isnumber=17590
3. Baker, T. P. "Stack-based scheduling of realtime processes," Proceedings of the 11th Real-Time Systems Symposium, 1990. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=128747&isnumber=3599
4. Gai, Paolo, Giuseppe Lipari, and Marco Di Natale. "Minimizing Memory Utilization of Real-Time Task Sets in Single and Multi-Processor Systems-on-a-Chip," Proceedings of the 22nd IEEE Real-Time Systems Symposium (RTSS 2001), 2001. http://doi.ieeecomputersociety.org/10.1109/REAL.2001.990598
5. Jejurikar, Ravindra, and Rajesh Gupta. "Integrating Preemption Threshold Scheduling and Dynamic Voltage Scaling for Energy Efficient Real-Time Systems," Proceedings of the Ninth International Conference on Real-Time Computing Systems and Applications, 2004. http://dar.ucsd.edu/site/pubs/jerjurikar-rtcsa04.pdf
6. Ghattas, Rony, and Alexander G. Dean. "Preemption Threshold Scheduling: Stack Optimality, Enhancements and Analysis," Proceedings of the 13th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 2007), IEEE, 2007. A version of this article is at www.cesr.ncsu.edu/agdean/Papers/ghattas-Preemption.pdf
