Keeping your priorities straight: Part 2 – Operational flow (Two examples)

As discussed in Part 1 in this series, we need to look closely at how the priorities of threads impact the total number of context switches performed by a system by examining two cases: “Case-1,” in which all four threads are assigned the same priority and execute in round-robin fashion, and “Case-2,” where the four threads are assigned unique priorities of 1, 2, 3, and 4.

Threads A, B, and C check for a message from Thread D in their respective message queues. If one is found, they retrieve it. If not, they wait until one arrives. After retrieving a message from their queue, they return to look for another.

The threads are suspended (i.e., no longer ready to run) if no message is available in the queue, and resumed when a message appears (sent from Thread D). At all times, the highest priority thread that is ready to run is made active.

In Case-1, where all threads have the same priority, the threads run in a round-robin fashion, where each thread runs until it is blocked waiting for another message to appear in its queue. In Case-2, the highest priority thread that is ready to run always runs.
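The selection rule just described can be sketched as a small Python function (an illustrative model only, not any particular RTOS's scheduler; here a lower number means a higher priority, matching the article's Case-2 assignments):

```python
def next_to_run(ready, prio, after):
    """Pick the next thread to run: the highest priority ready thread wins;
    ties are broken round-robin, starting just after `after`.
    `ready` is in creation order; a lower prio value means higher priority."""
    best = min(prio[t] for t in ready)
    candidates = [t for t in ready if prio[t] == best]
    i = ready.index(after) if after in ready else -1
    rotated = ready[i + 1:] + ready[:i + 1]
    return next(t for t in rotated if t in candidates)

# Case-1 style: equal priorities degenerate to plain round-robin
assert next_to_run(["A", "B", "C", "D"], {"A": 4, "B": 4, "C": 4, "D": 4}, "A") == "B"
# Case-2 style: unique priorities -> Thread A (priority 1) wins whenever ready
assert next_to_run(["A", "B", "C", "D"], {"A": 1, "B": 2, "C": 3, "D": 4}, "D") == "A"
```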

To show how priority assignment affects performance, we need to count the number of context switches that are performed in each case between successive iterations of Thread D. This represents one complete “cycle” of the application, in which three messages are sent to, and retrieved by, each of Threads A, B, and C, as depicted in the simplified flow diagram in Figure 3, below:

Figure 3 – Operational flow of example program, showing “cycles” consisting of Thread D sending messages to Threads A, B, and C, and Threads A, B, and C retrieving those messages

The operational code for this simple system would look something like the following (Figure 4, below):

Figure 4 – Pseudo code for example program. Note that Threads A, B, and C continuously loop, retrieving messages from Thread D. Thread D sends a number of messages to each of the other threads, then relinquishes.
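Figure 4's pseudo code is not reproduced in the text, but the structure it describes can be sketched in Python (the names and counts here are illustrative; a real implementation would use the RTOS's queue services, and the consumers would suspend on an empty queue rather than return):

```python
from collections import deque

queues = {name: deque() for name in "ABC"}   # one message queue per consumer

def thread_d_body():
    # Thread D: send three messages to each of Threads A, B, and C
    # (the "QS" events), then relinquish (modeled here as returning)
    for _ in range(3):
        for name in "ABC":
            queues[name].append("msg")

def consumer_body(name):
    # Threads A, B, C: loop retrieving messages (the "QR" events); in a
    # real RTOS the thread would block on an empty queue instead of returning
    received = 0
    while queues[name]:
        queues[name].popleft()
        received += 1
    return received

thread_d_body()
assert [consumer_body(n) for n in "ABC"] == [3, 3, 3]
```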

Case-1: All threads have the same priority (A=4, B=4, C=4, D=4)
In Case-1, we've assigned each thread a priority of 4. When all threads have the same priority, they will run in a round-robin fashion, in the order of their creation (A, B, C, D). The execution event trace in Figure 5, below, shows how the example operates:

Figure 5 – Event trace of Case-1: Equal Priorities. This shows each of Threads A, B, and C retrieving three messages, then Thread D sending each thread three more messages. This cycle is repeated continuously.

As you can see in Figure 5, Threads A, B, and C each read three messages from their message queues. The “QR” (queue_read) event indicates a successful read of one message from the queue.

But, after three messages are retrieved, the queue is empty, and the threads are blocked until Thread D sends more messages. The “IS” (internal suspend) event indicates that the RTOS suspends the thread and returns to its scheduler, which initiates a context switch. Thread D, the “producer” thread, eventually gets to run, and sends more messages to the queues (as shown by the “QS” event).

Note that as each of the first three messages is sent by Thread D (as indicated by the “QS” event), there is an Internal Resume (“IR”) event. This IR event indicates that the thread waiting for that message may now proceed, once it gets its next turn in the round-robin sequence.

After the third message is sent (one to each waiting thread), the subsequent messages do not cause another IR, since those threads have already been resumed by the first message, which has not yet been retrieved.

In this case, there is exactly one context switch each time a thread completes its processing (is blocked waiting for a message to be put on its queue, or, in the case of Thread D, is finished sending messages), allowing the next thread in turn to run. The result is a total of four context switches per cycle between Thread D's nth and (n+1)th “relinquish” operations.
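This accounting reduces to simple arithmetic (a worked check of the count just described):

```python
threads = ["A", "B", "C", "D"]
# Each thread runs until it blocks (A, B, C) or finishes sending and
# relinquishes (D), costing exactly one context switch per thread per cycle.
switches_per_thread = 1
case1_switches_per_cycle = len(threads) * switches_per_thread
assert case1_switches_per_cycle == 4
```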

Note, in the event trace shown in Figure 6, below, the two “RO” events indicating Thread D's Relinquish Operations as it completes sending messages; these form the Start and Stop boundaries of one application cycle.

Figure 6 – Context Switch count for Case-1. Notice that only four context switches are required for a complete cycle of nine messages sent and received. The insert shows the count of various RTOS operations.

Context switches between RO events are numbered, and the total (4) is computed by the “Performance Statistics” display superimposed in the upper-right of Figure 6. In Case-1, nine messages are sent, nine messages are received, and four context switches occur.

Case-2: All threads are given unique priorities
In Case-2, each thread is assigned a unique priority: Thread A=1, Thread B=2, Thread C=3, and Thread D=4. Since all threads have unique priorities, the highest priority thread that is ready to run is the one that the RTOS will run. Figure 7, below, shows the event sequence for Case-2:

Figure 7 – Processing flow in Case-2. Note that after each message is sent by Thread D in this case, a preemption occurs and control is immediately transferred to the thread waiting for that message.

An interesting situation occurs when unique priorities are assigned. Here, as we can see in Figure 7, each message that is sent by Thread D causes an immediate Internal Resume (“IR” event), just as in Case-1. Because Thread A is waiting for a message, when Thread D sends a message to Thread A, it makes Thread A ready to run.

But, since Thread A has a higher priority than Thread D, it becomes the highest priority thread that is ready to run. As a result, the RTOS preempts Thread D and performs a context switch to Thread A (Context Switch #1, as shown in Figure 8, below). This is different from the events of Case-1, where Thread A had the same priority as Thread D, and no preemption occurred when Thread D sent its message to Thread A.

Once Thread A retrieves its message from the queue (“QR” event), it once again goes into a suspended state, waiting for another message (as indicated by the Internal Suspend, or “IS,” event in Figure 8, below), and the RTOS does another context switch (Context Switch #2) back to Thread D. This is repeated for Threads B and C, resulting in Context Switches #3, #4, #5, and #6. The scenario is repeated three times, resulting in Context Switches #7 through #18. This completes a “cycle” of the application (from one Thread D Relinquish to the next).
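This accounting can be checked with a line of arithmetic (each message costs one switch to the preempting receiver and one switch back to Thread D):

```python
messages_per_cycle = 3 * 3     # three messages to each of Threads A, B, and C
switches_per_message = 2       # preempt to the receiver, then back to Thread D
case2_switches_per_cycle = messages_per_cycle * switches_per_message
assert case2_switches_per_cycle == 18
```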

Figure 8 – Context Switches for a complete Cycle in Case-2. Note additional RTOS event counts shown in the insert.

In Case-2, eighteen context switches occur for the same nine messages, a 350% increase in context-switch overhead. The unique priority assignment strategy results in a significantly greater number of context switches than assigning all threads the same priority.
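Both totals, and the 350% figure, can be reproduced with a small cooperative-scheduler simulation (a sketch under the article's stated rules: lower priority numbers are higher priority, and ties run round-robin in creation order; the generator-based thread bodies and event names loosely mirror the traces, but this is not real RTOS code):

```python
from collections import deque

def consumer(name, queues):
    # Threads A, B, C: drain own queue ("QR" events), then suspend ("IS")
    while True:
        while queues[name]:
            queues[name].popleft()
        yield "block"                       # internal suspend

def producer(queues):
    # Thread D: one message to each of A, B, C, three times, then relinquish
    while True:
        for _ in range(3):
            for t in "ABC":
                queues[t].append("msg")     # "QS"; a blocked receiver gets "IR"
                yield "sent"
        yield "relinquish"                  # "RO": the cycle boundary

def switches_per_cycle(prio):
    """Count context switches between Thread D's consecutive "RO" events.
    prio maps thread name -> priority; lower number = higher priority."""
    order = ["A", "B", "C", "D"]            # creation order, for round-robin ties
    queues = {t: deque() for t in "ABC"}
    gens = {t: consumer(t, queues) for t in "ABC"}
    gens["D"] = producer(queues)
    blocked, current, ro_seen, count = set(), "A", 0, 0

    def next_thread():
        ready = [t for t in order if t not in blocked]
        best = min(prio[t] for t in ready)
        i = order.index(current)
        return next(t for t in order[i + 1:] + order[:i + 1]
                    if t in ready and prio[t] == best)

    while True:
        event = next(gens[current])
        if event == "sent":
            for t in "ABC":                 # resume blocked receivers ("IR")
                if t in blocked and queues[t]:
                    blocked.discard(t)
            nxt = next_thread()
            if prio[nxt] < prio[current]:   # preemption (unique priorities only)
                count += ro_seen            # only count inside a full cycle
                current = nxt
        else:
            if event == "relinquish":
                if ro_seen:
                    return count
                ro_seen = 1
            else:
                blocked.add(current)
            nxt = next_thread()
            if nxt != current:
                count += ro_seen
                current = nxt

case1 = switches_per_cycle({"A": 4, "B": 4, "C": 4, "D": 4})
case2 = switches_per_cycle({"A": 1, "B": 2, "C": 3, "D": 4})
assert (case1, case2) == (4, 18)
assert (case2 - case1) * 100 // case1 == 350   # the 350% increase
```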

This can be seen between the two “RO” (Relinquish Operation) events noted in Figure 8, compared to the previous case, where only four context switches occur.

Figure 9 – When tasks are assigned unique priorities, 350% more context switches occurred, resulting in a significantly less efficient system.

As noted in Figure 9, above, through the selection of thread priorities, the exact same application can have almost five times as many context switches as it would have under optimal priority conditions.

Next, in Part 3, we will look at the implications introduced by the use of unique priorities, particularly the additional RTOS overhead that is imposed.

To read Part 1, go to: “The basics of context switching.”

William E. Lamie is co-founder and CEO of Express Logic, Inc., and is the author of the ThreadX RTOS. Prior to founding Express Logic, Mr. Lamie was the author of the Nucleus RTOS and co-founder of Accelerated Technology, Inc. Mr. Lamie has over 20 years of experience in embedded systems development, over 15 of which are in the development of real-time operating systems for embedded applications.
