Multitasking alternatives and the perils of preemption

It has come to pass that virtually every commercial real-time operating system employs a priority-based preemptive scheduler at its core. This regularity exists despite the fact that real-time systems vary in their requirements, and real-time scheduling doesn't have to be so uniform. Multitasking and meeting deadlines is certainly not a one-size-fits-all problem.

Though there are good reasons to use priority-based preemption in some applications, preemption also creates a number of problems for embedded software developers. Programming in such an environment necessarily creates excess complexity when the application is not well suited to being coded as a set of tasks that can preempt each other. Sometimes this added complexity results in system failures. It almost always lengthens development and debug cycles.

This article is excerpted from a paper of the same name presented at the Embedded Systems Conference Boston 2006. Used with permission of the Embedded Systems Conference. For more information, please visit www.embedded.com/esc/boston/

The big question is: Are you choosing an RTOS for the right reason?

Priority-based Scheduling
There is a relatively simple explanation for the observed regularity of RTOS designs: academia. In the 1970s and 1980s the concept of "fixed-priority scheduling" was widely studied. This research culminated in the canonical 1991 research paper "Rate Monotonic Analysis for Real-Time Systems" by Sha, Klein, and Goodenough.

There is a technical description of RMA, as it is abbreviated, here: http://www.sei.cmu.edu/str/descriptions/rma_body.html and a concise overview by David Stewart and your author here: http://www.netrino.com/Publications/Glossary/RMA.html

In a nutshell, the rate monotonic algorithm (RMA) is a procedure for assigning fixed priorities to tasks to maximize their "schedulability." (A task set is considered schedulable if all tasks meet all deadlines all the time.) The RMA rule is simple: assign the priority of each task according to its period, so that the shorter a task's period, the higher its priority. RMA, the academics ultimately determined, is the optimal fixed-priority scheduling algorithm. If a specific set of tasks cannot be scheduled to meet all deadlines using RMA, it cannot be scheduled using any fixed-priority algorithm.
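To make the rule concrete, here is a minimal sketch of RMA priority assignment. The task descriptor, the example periods, and the convention that 1 is the highest priority are illustrative assumptions, not any particular RTOS's API:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative task descriptor; real RTOS task structures differ. */
typedef struct {
    const char *name;
    unsigned    period_ms;  /* how often the task must run */
    unsigned    priority;   /* to be assigned; 1 = highest (our convention) */
} task_t;

/* qsort comparator: shorter period sorts first. */
static int by_period(const void *a, const void *b)
{
    const task_t *ta = a, *tb = b;
    return (ta->period_ms > tb->period_ms) - (ta->period_ms < tb->period_ms);
}

int main(void)
{
    task_t tasks[] = {
        { "display",  100, 0 },
        { "sensor",    10, 0 },
        { "logger",  1000, 0 },
    };
    size_t n = sizeof tasks / sizeof tasks[0];

    /* Rate monotonic rule: the shorter the period, the higher the priority. */
    qsort(tasks, n, sizeof tasks[0], by_period);
    for (size_t i = 0; i < n; i++) {
        tasks[i].priority = (unsigned)(i + 1);
        printf("%-8s period=%5u ms -> priority %u\n",
               tasks[i].name, tasks[i].period_ms, tasks[i].priority);
    }
    return 0;
}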

Given this "optimality" of RMA, it was natural for operating system vendors to offer products that provide fixed-priority preemptive scheduling. Once Ready Systems' VRTX and a few other early RTOS players made a space for this technology, the other major market participants quickly followed suit.

The herd behavior had begun: a technique that solves one kind of problem best became the norm. Other kinds of problems would need to be solved by the brute force of fitting a square peg into the round hole of priority-based preemption.

Despite the fact that all our RTOS choices are designed for compatibility with RMA, these facts persist: few engineers are working on problems with hard real-time deadlines; very few folks outside of that group are using RMA to prioritize tasks; and measuring the worst-case execution time of each periodic task and finding a way to calculate the worst-case behavior of aperiodics (e.g., interrupt service routines, or ISRs) needs to be done at each recompile and is so labor intensive it is probably done on fewer than 1 in 10,000 embedded projects that use an RTOS.

Implications of Preemption
Aside from RMA compatibility, the one positive implication of preemption is task responsiveness. A preemptive priority-based scheduler effectively treats software tasks as hardware treats ISRs. As soon as the highest-priority task (ISR) is ready to use the CPU, the scheduler (interrupt controller) makes it so. The latency in response time for the highest-priority ready task is thus minimized to the context switch time.

(For a more thorough introduction to preemption read: http://www.netrino.com/Publications/Glossary/Preemption.html)

By contrast, there are at least ten negative implications of preemption, as indicated in Figure 1 below.

Figure 1 – Ten Negative Implications of Preemption

The figure shows how these implications ripple out of the "Preemption" node. As we talk about what these implications are, you'll see that these are the very problems we encounter over and over in embedded software design. They result from fitting our square peg (a certain set of requirements that are generally soft real-time at worst) into a round hole (a solution meant for a specific set of requirements associated with hard real-time).

Let's start our analysis with the simple stuff. For example, the three yellow bubbles immediately surrounding the Preemption bubble are Schedulable Bound, Multiple Stacks, and Context Switches. Yellow is meant to indicate that, though these are serious issues, they are more annoyances than the orange-colored system failures.

The Schedulable Bound implication refers to the fact that to gain the maximum benefit from RMA, the user must be willing to give up the use of up to 31% of CPU cycles. That is, the CPU-intensive work to meet all the deadlines for all N tasks in your system is limited to 69% (ln 2, to be precise) CPU utilization. Put another way, you'll need to pay for about 45% (31/69) more CPU cycles than you'll actually use.
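Where do 69% and ln 2 come from? The RMA worst-case schedulable bound for n tasks is U(n) = n(2^(1/n) - 1), which starts at 100% for a single task and falls toward ln 2 (about 69.3%) as n grows. A few lines of C make the numbers concrete:

#include <math.h>
#include <stdio.h>

/* RMA worst-case schedulable bound for n tasks: U(n) = n * (2^(1/n) - 1). */
static double rma_bound(int n)
{
    return n * (pow(2.0, 1.0 / n) - 1.0);
}

int main(void)
{
    /* Prints U(1) = 1.000, U(2) = 0.828, U(3) = 0.780, ... */
    for (int n = 1; n <= 5; n++)
        printf("U(%d) = %.3f\n", n, rma_bound(n));
    printf("limit = %.3f (ln 2)\n", log(2.0));
    return 0;
}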

The Multiple Stacks implication goes like this: If your system will have ten tasks and preemption is present, then each of those ten tasks needs to have its own private stack space.

Because stack space must be allocated on a worst-case-ever-needed basis, this requires the use of up to 10 times the amount of RAM that would be required to complete the same set of work in the absence of preemption. And unless you also have a distinct "interrupt stack" for processing worst-case ISR nesting, each of the ten task stacks must be that much larger.
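The cost is easy to picture in code. This sketch uses illustrative sizes and a hypothetical rtos_task_create() call standing in for whatever your RTOS actually provides:

#include <stddef.h>
#include <stdint.h>

#define NUM_TASKS   10
#define STACK_WORDS 256   /* worst-case stack depth per task (illustrative) */

/* One private stack per task: 10 x 1 KB = 10 KB of RAM committed up
 * front, even though only one task is ever running at a given instant.
 * A design without preemption could do the same work on one stack. */
static uint32_t task_stacks[NUM_TASKS][STACK_WORDS];

/* Hypothetical task-creation call; every RTOS spells this differently. */
extern void rtos_task_create(void (*entry)(void), uint32_t *stack, size_t words);

void init_tasks(void (*entries[NUM_TASKS])(void))
{
    for (int i = 0; i < NUM_TASKS; i++)
        rtos_task_create(entries[i], task_stacks[i], STACK_WORDS);
}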

The third immediate resource-wastage implication of preemption is the need to perform a "Context Switch" each time preemption occurs. The process of performing a context switch varies by processor, as some CPUs are set up to handle this rather efficiently.

In the typical case, however, the CPU must effectively push the contents of all its registers and flags into RAM, then pop the same type of information for the next task from another RAM storage area. This wastes CPU cycles in addition to those already out of reach because of RMA's schedulable bound.
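As a mental model only (real context switch code is written in assembly, and the exact register set is CPU-specific), the work amounts to this:

#include <stdint.h>
#include <string.h>

/* Illustrative register frame; the actual registers and flags saved
 * are CPU-specific (this set loosely resembles an ARM-style core). */
typedef struct {
    uint32_t r[13];      /* general-purpose registers */
    uint32_t sp, lr, pc; /* stack pointer, link register, program counter */
    uint32_t psr;        /* processor status flags */
} context_t;

static context_t cpu;    /* stand-in for the live CPU state (really in hardware) */

/* A context switch is, in essence, two block copies through RAM; every
 * word saved and restored is a cycle of pure scheduling overhead. */
void context_switch(context_t *outgoing, const context_t *incoming)
{
    memcpy(outgoing, &cpu, sizeof cpu);  /* save the preempted task's state */
    memcpy(&cpu, incoming, sizeof cpu);  /* restore the next task's state   */
}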

A fourth and more dangerous preemption implication is the ubiquitous Race Condition. In real systems, tasks do not execute in isolation from each other. Real tasks need to share data to get work done. In a system that lacks preemption, there is no risk of tasks corrupting data in the process of sharing it.

A good analogy here is of two co-workers who share the same job and office, but work on alternating days. Though Worker A and Worker B each operate on the same files, there is no chance for the files themselves to be corrupted in the process.

A pair of tasks running in a preemptive RTOS and manipulating the same areas of RAM or hardware registers, though, will create numerous opportunities each second for data corruption. There are at least three second-order implications of race conditions: Interrupt Latency, Reentrant Libraries, and Mutexes.
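Something as innocent as a shared event counter is already a race. The snippet below is a generic illustration, not tied to any particular RTOS:

#include <stdint.h>

static volatile uint32_t shared_count;   /* updated by two different tasks */

/* Looks atomic in C, but compiles to a load, an increment, and a store.
 * If Task B preempts Task A between A's load and A's store, one of the
 * two increments is silently lost. */
void count_event(void)
{
    shared_count++;
}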

Interrupt Latency increases in preemptive systems as a result of the potential for race conditions. The issue here is that there are potential race conditions within the OS itself. For example, in order to select the highest-priority task to run, the scheduler must keep an internal linked list or other data structure with information about each task that wants to use the CPU.

To prevent these internal data structures from being corrupted by interrupts that result in system calls, interrupts must be disabled during every critical section of OS code that touches these data structures. Interrupt latency for the system as a whole increases by the worst-case length of an OS critical section. In a nutshell, systems with an RTOS respond more rapidly to software tasks but more slowly to hardware interrupts.
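Here is the shape of one such OS critical section, sketched with hypothetical disable_interrupts()/enable_interrupts() primitives (real ports name these differently, often as inline assembly):

extern void disable_interrupts(void);
extern void enable_interrupts(void);

typedef struct tcb {
    struct tcb *next;      /* link in the ready list */
    int         priority;  /* larger number = higher priority (illustrative) */
} tcb_t;

static tcb_t *ready_list;

/* Insert a task into the priority-ordered ready list. Interrupts stay
 * masked for the whole traversal; the longest such OS critical section
 * adds directly to the system's worst-case interrupt latency. */
void make_ready(tcb_t *t)
{
    disable_interrupts();
    tcb_t **p = &ready_list;
    while (*p && (*p)->priority >= t->priority)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
    enable_interrupts();
}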

Since the application code in several tasks may call the same shared library routines, such as a driver for a UART, it is possible that task preemption will take place inside one of those shared functions. To prevent race conditions from occurring in the shared library routines, that code must have its critical sections identified and protected. That means that every shared library routine must be made reentrant, meaning both longer (more code space) and slower to run.
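What does non-reentrant code look like? A textbook example (a hypothetical helper, not from any particular library) is a routine that hands out a pointer to a static buffer:

#include <stdio.h>

/* Classic non-reentrant helper: the static buffer is hidden shared
 * state. If Task B preempts Task A inside this function (or while A
 * is still using the returned string), B's call overwrites A's result. */
char *format_reading(int value)
{
    static char buf[16];                      /* one buffer for all callers */
    snprintf(buf, sizeof buf, "val=%d", value);
    return buf;
}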

Finally, as a workaround for race conditions in the tasks and shared library routines (which cannot safely disable interrupts), a new OS primitive is required: the mutex (a.k.a. binary semaphore). Mutexes provide a rather nifty way of protecting shared data from race conditions.

To use them, programmers first identify shared data and the critical sections of code in each task that uses that data; next they create and associate a mutex object with that shared data; finally, they surround each critical section with calls to "take" and "release" the mutex. The only problem with mutexes is that you shouldn't actually use them in practice.
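For the record, the take/release pattern looks like this, with hypothetical mutex_take()/mutex_release() names standing in for whatever your RTOS calls them:

/* Hypothetical RTOS mutex API; actual names and signatures vary. */
typedef struct mutex mutex_t;
extern mutex_t temperature_mutex;      /* created at startup, guards the data */
extern void mutex_take(mutex_t *m);    /* blocks until the mutex is free */
extern void mutex_release(mutex_t *m);

static int temperature_c;              /* datum shared between two tasks */

void store_temperature(int t)          /* called from one task */
{
    mutex_take(&temperature_mutex);
    temperature_c = t;                 /* critical section */
    mutex_release(&temperature_mutex);
}

int read_temperature(void)             /* called from another task */
{
    mutex_take(&temperature_mutex);
    int t = temperature_c;             /* critical section */
    mutex_release(&temperature_mutex);
    return t;
}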

(Despite what you learned in Operating Systems class, the right way to share data between tasks is to use mailboxes to send data from one task to another for the next stage of processing in an always-safe manner. You should always keep redesigning your task interactions until you don't need a single mutex to get the job done.)
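Sketched with a hypothetical queue API, that mailbox style looks like this; each datum has exactly one owner at a time, so there is nothing to lock:

/* Hypothetical RTOS message-queue API; names vary by vendor. */
typedef struct queue queue_t;
extern queue_t reading_queue;   /* created at startup */
extern void queue_send(queue_t *q, const void *msg, unsigned len);
extern void queue_receive(queue_t *q, void *msg, unsigned len);

typedef struct { int channel; int value; } reading_t;

void sensor_task(void)          /* producer: owns each reading until sent */
{
    for (;;) {
        reading_t r = { .channel = 0, .value = 42 /* e.g., read an ADC */ };
        queue_send(&reading_queue, &r, sizeof r);
    }
}

void process_task(void)         /* consumer: works on its own private copy */
{
    for (;;) {
        reading_t r;
        queue_receive(&reading_queue, &r, sizeof r);
        /* use r; nothing is shared in place, so no mutex is needed */
    }
}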

Mutexes do solve the race condition problem quite nicely. However, mutexes are fraught with negative implications of their own: Starvation, Deadlock, and Priority Inversion – all of which are ultimately associated with difficult-to-diagnose product lockups and failures in the field.

Put simply, Starvation is the death of one task. The affected task withers on the vine, unable to make progress because it is never able to obtain a needed mutex. The trouble here is the whole priority-based scheduling paradigm: a second-tier task may never be selected to run if some higher-priority task is always using the processor. It's called starvation if this condition persists long enough to prevent proper operation of the affected task.

Deadlock involves two or more tasks and is also known as a "deadly embrace". Any text about operating systems will tell you more about deadlocks. Suffice it to say that the problem here is circular blocking: Task A holds a mutex Task B is waiting for, and vice versa. Neither task will ever be able to progress, and those tasks can only be restarted via system reboot.
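The classic shape of the embrace, again using hypothetical mutex calls:

/* Same hypothetical mutex API as before. */
typedef struct mutex mutex_t;
extern void mutex_take(mutex_t *m);
extern void mutex_release(mutex_t *m);
extern mutex_t spi_mutex, log_mutex;

void task_a(void)            /* takes the SPI bus first, then the log */
{
    mutex_take(&spi_mutex);
    mutex_take(&log_mutex);  /* blocks forever if Task B holds the log */
    /* ... use both resources ... */
    mutex_release(&log_mutex);
    mutex_release(&spi_mutex);
}

void task_b(void)            /* takes the same two mutexes in the opposite order */
{
    mutex_take(&log_mutex);
    mutex_take(&spi_mutex);  /* blocks forever if Task A holds the SPI bus */
    /* ... */
    mutex_release(&spi_mutex);
    mutex_release(&log_mutex);
}

/* If B preempts A between A's two takes, each task now holds the mutex
 * the other needs; neither will ever run again without a reboot. */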

Finally, there is the implication of Priority Inversion. Priority inversion is more subtle than starvation or deadlock. In short, a medium-priority task prevents a high-priority task from using the CPU by preempting a low-priority task that holds a mutex the high-priority task needs. This is a system failure because it violates the basic assumption of priority-based preemptive scheduling: the highest-priority ready task is NOT using the CPU for the length of the inversion, which may even be unbounded because of potential starvation of the low-priority task.

(For more details on Priority Inversion, including a helpful diagram, check out: http://www.netrino.com/Publications/Glossary/PriorityInversion.html)

Several workarounds to priority inversion exist, but they always result in wastage. For example, under the Priority Ceiling Protocol each shared resource is assigned a priority at least as high as that of the highest-priority task that ever uses it. Unfortunately, this popular workaround results in another violation of the basic assumption of priority-based preemptive scheduling: a medium-priority task may NOT use the CPU because a low-priority task is running and using a resource sometimes used by a high-priority task.
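In code, the ceiling assignment might look like this; the creation call is a hypothetical stand-in, since each RTOS exposes the protocol differently:

/* Hypothetical creation call for a priority-ceiling mutex. */
typedef struct mutex mutex_t;
extern mutex_t *mutex_create_ceiling(unsigned ceiling_priority);

#define PRIO_HIGH 1   /* illustrative convention: 1 = highest priority */

static mutex_t *adc_mutex;

/* The ADC is sometimes used by the high-priority task, so the mutex's
 * ceiling must be PRIO_HIGH. Whenever ANY task holds this mutex, even
 * the lowest-priority one, it temporarily runs at PRIO_HIGH; that is
 * exactly how medium-priority tasks end up locked out of the CPU. */
void init_sharing(void)
{
    adc_mutex = mutex_create_ceiling(PRIO_HIGH);
}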

Conclusions
Many embedded developers vastly underestimate the true costs and skills needed to program with a preemptive RTOS. The truth is that all traditional RTOS mechanisms for managing concurrency, such as semaphores, mutexes, monitors, critical sections, condition variables, event flags, and others, are tricky to use and often lead to subtle bugs that are notoriously hard to reproduce, isolate, and fix.

There are several useful alternatives to priority-based preemptive scheduling, ranging from infinite-loop cyclic executives to run-to-completion processing kernels.
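The simplest of these is worth showing because of how much of the machinery above it deletes; the three task functions here are placeholders for your own application work:

/* Minimal cyclic executive: one loop, one stack, no preemption. Each
 * "task" is a function that runs to completion, so no task can ever
 * catch another one mid-update; no mutexes are needed anywhere. */
extern void poll_sensors(void);
extern void update_control(void);
extern void refresh_display(void);

int main(void)
{
    for (;;) {
        poll_sensors();
        update_control();
        refresh_display();
    }
}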

The "killer app" for the existing crop of RTOSes is a set of tasks that are each similar or identical to their brethren and run without interaction among tasks. In this type of design, the preemptive scheduler serves the function of a load balancer. Examples of such applications are an embedded Web (HTTP) server or any telecom/datacom switch with multiple channels.

Many other types of applications are poorly served by priority-based preemptive scheduling. Trying to fit square-peg problems into the round hole of a commercial RTOS leads to frustration, bugs, and overly complicated designs.

So next time you're trying to figure out why your system just locked up unexpectedly in the lab or in the field, spend some time thinking about why you chose an RTOS. If you aren't in need of the one positive thing from priority-based preemptive scheduling, why are you paying the price ten-fold?

Michael Barr is president of the Netrino Consultants Network. For more than three years he served as editor-in-chief of Embedded Systems Programming magazine.
