Keeping your priorities straight: Part 1 - context switching - Embedded.com

Real-time applications consist of multiple threads, each performing a portion of the system's workload under the management of a real-time operating system (RTOS). In a real-time system, a thread's priority reflects the relative urgency of its work, and RTOSes strive at all times to run the most urgent work that needs to be performed, swapping out lower priority work.

Often, multiple portions of a system's workload are of equal importance, and none warrants a higher priority than another. In such cases, multiple threads operate at the same priority, and run sequentially, in a “round-robin” fashion. Whether due to greater urgency, or a round-robin sequence, whenever one thread gives way to another, the RTOS must perform what is called a “context switch.”

A context switch is a complex procedure in which the RTOS saves all the information being used by the running thread (its “context”) and loads the context of another thread in its place. A thread's context includes its working register set, program counter, and other thread-critical information. This context is saved on the stack, or in a thread control block data structure, from which it gets re-loaded when the RTOS wants to run that thread again.

Context switches are generally the single most time-consuming RTOS operation in a real-time system, often taking hundreds of cycles to execute. The amount of processing varies from RTOS to RTOS, but generally involves the operations shown in Figure 1 below.

Figure 1 – A typical context switch involves a number of operations, each one requiring a number of CPU cycles

As will be seen in the example below, when threads are assigned unique priorities, the order in which they become ready to run determines the number of context switches performed. If the order in which they become ready to run is in ascending priority order, then each time one becomes ready to run, it will immediately preempt the lower-priority thread that is running, resulting in a context switch.

Conversely, if the order of activation is in descending priority order, then each activation does not cause preemption, since the running thread is higher in priority. But when threads are of the same priority, the order in which they become ready has no impact on the number of preemptions, and always results in a consistent, minimal number of context switches.

Because of all the processing it requires, context switch performance is one of the most important measures of real-time performance in embedded systems. While all RTOSes go to great lengths to optimize their context switch operation, the application developer must ensure that a system performs as few context switches as possible.

For a given application, the way priorities are assigned to individual threads can have a significant impact on the number of context switches performed. In particular, by running multiple threads at the same priority, rather than assigning them each a unique priority, the system designer can avoid unnecessary context switches and reduce RTOS overhead.

Assigning multiple threads the same priority also makes it possible to properly address priority inheritance, and to implement round-robin scheduling and time-slicing. Each of these mechanisms is important in a real-time system, and each is difficult, if not impossible, to implement without running multiple threads at the same priority. Each can be used to keep system overhead low and, perhaps more importantly, to keep system behavior understandable.

What's Prioritization About?
Before analyzing the relationship between priority assignment and system performance, it is important to understand what a thread's priority represents, and how it affects the way the RTOS schedules that thread to run.

Most RTOSes employ a priority-based, preemptive scheduler. In this type of scheduler, the highest priority thread that is “ready to run” (i.e., is not waiting for something else to happen) is the one that the RTOS runs on the CPU. A thread's “readiness” may change as the result of an interrupt or the action of another thread.

One simple but common scenario is for a thread to be “waiting” for a message to appear in a message queue; when the message appears, the waiting thread becomes “ready.” The RTOS is responsible for keeping track of which threads are ready and which are waiting, and for recognizing when an event enables a waiting thread to become ready.

When a thread with priority higher than the active thread becomes ready to run (e.g., because the message it was waiting for finally arrived), the RTOS preempts the active thread. This preemption results in a context switch in which the context of the active thread is saved, the context of the higher priority thread is loaded, and the higher priority thread then runs on the CPU.

Because of the complex relationship between priority assignment and context switches, many application developers might not realize how much control they have over the number of context switches an application must perform.

How Priorities Determine Context Switch Count
To illustrate the effect that various methods of priority assignment can have on context switching, consider a system with four threads (Figure 2, below), named A, B, C, and D.

Figure 2 – To measure the impact of priority assignment, we set up two cases – one where all threads have the same priority and one where they each have a unique priority.

In this example, the threads operate in a producer-consumer fashion, with Thread D the producer thread, sending three messages into each of the queues of threads A, B, and C. Let's look at how the priorities of the threads impact the total number of context switches performed by this system.

To do this, we'll examine two cases. In Case-1, all threads are assigned the same priority (4), and will execute in a round-robin fashion. In Case-2, the four threads are assigned unique priorities of 1, 2, 3, and 4. In each case, we'll measure the number of context switches performed, as well as the time it takes to complete an equal amount of work.

To read Part 2, go to: A context switch's operational flow – two examples
To read Part 3, go to: Understanding the implications

William E. Lamie is co-founder and CEO of Express Logic, Inc., and is the author of the ThreadX RTOS. Prior to founding Express Logic, Mr. Lamie was the author of the Nucleus RTOS and co-founder of Accelerated Technology, Inc. Mr. Lamie has over 20 years of experience in embedded systems development, over 15 of which are in the development of real-time operating systems for embedded applications.
