Effective use of Pthreads in embedded Linux designs: Part 2 –Sharing resources - Embedded.com

As noted in Part 1, one of the primary functions of any operating system is to manage access to global resources by multiple threads/tasks/processes to avoid conflicts. This is a major issue in event-driven systems where threads may compete asynchronously for access to a global resource.

Consider the following scenario. Two threads of the same priority are each running the same code fragment:

   printf ("I am thread %d\n", n);

Let’s assume they’re operating under the SCHED_RR scheduling policy. In the absence of any kind of synchronizing mechanism, the result could be something like “II a amm ThThrreeadad 12”.

What is needed is some way to regulate access to the printer so that only one task can use it at a time. In Pthreads, that mechanism is called a mutex, which is short for “mutual exclusion”. A mutex acts like a key to control access to a resource.

Only the thread that has the key can use the resource. In order to use the resource (in this case a printer) a thread must first acquire the key (mutex) by calling an appropriate Pthreads service. If the key is available, that is, the resource (printer) is not currently in use by someone else, the thread is allowed to proceed. Following its use of the printer, the thread releases the mutex so another thread may use it.

If however, the printer is in use, the thread is blocked until the thread that currently has the mutex releases it. Any number of threads may try to acquire the mutex while it is in use. All of them will be blocked. The waiting tasks are queued in priority order.
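A minimal sketch of serializing the printf() fragment above with a mutex (the function and variable names here are illustrative, not from the article):

```c
#include <pthread.h>
#include <stdio.h>

/* One mutex guards the shared resource (here, stdout). */
static pthread_mutex_t print_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    int n = *(int *)arg;

    pthread_mutex_lock(&print_mutex);    /* acquire the "key" */
    printf("I am thread %d\n", n);       /* exclusive use of the printer */
    pthread_mutex_unlock(&print_mutex);  /* let the next thread in */
    return NULL;
}

/* Start two competing threads and wait for both to finish. */
static void run_demo(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
}
```

With the mutex in place, each line is printed whole; without it, the characters of the two messages may interleave as shown above.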

Mutex API.  The Pthreads mutex API follows much the same pattern as the thread API, as shown below:

   pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
   int pthread_mutex_init (pthread_mutex_t *mutex, const pthread_mutexattr_t *mutex_attr);
   int pthread_mutex_destroy (pthread_mutex_t *mutex);
   int pthread_mutex_lock (pthread_mutex_t *mutex);
   int pthread_mutex_unlock (pthread_mutex_t *mutex);
   int pthread_mutex_trylock (pthread_mutex_t *mutex);

There is a pair of functions to initialize and destroy mutex objects and a set of functions to act on the mutex objects. This also shows an alternate way to initialize statically allocated Pthreads objects. PTHREAD_MUTEX_INITIALIZER provides the same default values as pthread_mutex_init().

Two operations may be performed on a mutex: lock and unlock. The lock operation causes the calling thread to block if the mutex is not available. trylock allows you to test the state of a mutex without blocking. If the mutex is available trylock returns success and locks the mutex. If the mutex is not available it returns EBUSY. Mutex attributes are managed through their own API:
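The trylock pattern lets a thread poll for a resource and do other work when it is busy. A hedged sketch (the helper name is invented for illustration):

```c
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t res_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Try to grab the resource without blocking.  Returns 1 on success,
 * 0 if another thread currently holds it. */
static int try_use_resource(void)
{
    int rc = pthread_mutex_trylock(&res_mutex);
    if (rc == EBUSY)
        return 0;            /* someone else has it; go do other work */
    /* ... use the resource ... */
    pthread_mutex_unlock(&res_mutex);
    return 1;
}
```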

   int pthread_mutexattr_init (pthread_mutexattr_t *attr);
   int pthread_mutexattr_destroy (pthread_mutexattr_t *attr);

   int pthread_mutexattr_getpshared (const pthread_mutexattr_t *attr, int *pshared);
   int pthread_mutexattr_setpshared (pthread_mutexattr_t *attr, int pshared);

   int pthread_mutexattr_getkind_np (pthread_mutexattr_t *attr, int *kind);
   int pthread_mutexattr_setkind_np (pthread_mutexattr_t *attr, int kind);

Mutex attributes follow the same basic pattern as thread attributes. There is a pair of functions to create and destroy a mutex attribute object. We’ll defer discussion of the pshared attribute until later. There are some other attributes we’ll take up shortly.

Linux implements an interesting non-portable extension to the Pthreads mutex. The Pthreads standard explicitly allows non-portable extensions. The only requirement is that any symbol that is non-portable must have "_np" appended to its name.

What happens if a thread should attempt to lock a mutex that it has already locked? Normally the thread would simply hang up. Linux offers a way out of this dilemma. The “kind” attribute alters the behavior of a mutex when a thread attempts to lock a mutex that it has already locked:

Fast (PTHREAD_MUTEX_FAST_NP). This is the default type. If a thread attempts to lock a mutex it already holds, it is blocked and thus effectively deadlocked. The fast mutex does no consistency or sanity checking and so it is faster.

Recursive (PTHREAD_MUTEX_RECURSIVE_NP). A recursive mutex allows a thread to successfully lock a mutex multiple times. It counts the number of times the mutex was locked and requires the same number of calls to the unlock function before the mutex goes to the unlocked state.

Error checking (PTHREAD_MUTEX_ERRORCHECK_NP). If a thread attempts to recursively lock an error checking mutex, the lock function returns immediately with the error code EDEADLK. Furthermore, the unlock function returns an error if it is called by a thread other than the current owner of the mutex.

Note the “_NP” in the constant names.

Priority Inversion. Solving the resource sharing problem with mutexes sometimes leads to other problems. Consider a scenario involving three threads. Threads 1 and 2 each require access to a common resource protected by a mutex. Thread 1 has the highest priority and Thread 2 has the lowest. Thread 3, which has no interest in the resource, has a “middle” priority.

The situation is illustrated graphically in Figure 5 . Thread 2 is currently executing and locks the mutex. So it gets the resource. Next an interrupt occurs which makes Thread 1 ready. Since Thread 1 has higher priority, it preempts Thread 2 and executes until it attempts to lock the resource mutex.

Thread 1 blocks and Thread 2 regains control. So far everything is working as we would expect. Even though Thread 1 has higher priority, it simply has to wait until Thread 2 is finished with the resource.

Figure 5. Priority inversion

The problem arises if Thread 3 should become ready while Thread 2 has the resource locked. Thread 3 preempts Thread 2. This situation is called priority inversion because a lower priority thread (Thread 3) is effectively preventing a higher priority thread (Thread 1) from executing.

There are a couple of possible solutions to this problem. One is called priority inheritance. If a higher priority thread attempts to lock a mutex that is already locked, the priority of the thread owning the mutex is temporarily raised to that of the thread attempting to lock the mutex. This prevents any lower priority thread from preempting the owner until it unlocks the mutex, at which time its priority reverts to its normal value.

The other solution is called priority ceiling. In this approach, when a thread locks a mutex, its priority is immediately raised to that of the highest priority thread that will ever try to lock the mutex. This is considered to be more efficient because it can avoid some unnecessary context switches. Note in Figure 5 that Thread 1 is switched into the processor only to be blocked a short time later when it attempts to lock the mutex. The priority ceiling protocol would prevent this.

These strategies are optional values for a mutex attribute called protocol as shown below:

   int pthread_mutexattr_getprotocol (const pthread_mutexattr_t *attr, int *protocol);
   int pthread_mutexattr_setprotocol (pthread_mutexattr_t *attr, int protocol);

   int pthread_mutexattr_getprioceiling (const pthread_mutexattr_t *attr, int *ceiling);
   int pthread_mutexattr_setprioceiling (pthread_mutexattr_t *attr, int ceiling);
   int pthread_mutex_getprioceiling (const pthread_mutex_t *mutex, int *ceiling);
   int pthread_mutex_setprioceiling (pthread_mutex_t * mutex, int ceiling);

Protocol is an optional attribute for Pthread mutexes. The values for mutex protocol are:

  • PTHREAD_PRIO_NONE – Default. Thread priorities are not modified by locking the mutex.
  • PTHREAD_PRIO_INHERIT – Use the priority inheritance protocol.
  • PTHREAD_PRIO_PROTECT – Use the priority ceiling protocol.

If PTHREAD_PRIO_PROTECT is selected, you can set the priority ceiling value either in the mutex attribute or in the mutex itself.

Conditional Variable
There are many situations where one thread needs to notify another thread about a change in status to a shared resource protected by a mutex. For this we use a conditional variable, or just condition for short. A conditional variable must always have a mutex associated with it.

Threads that wish to be notified of the change of status wait on the conditional variable with the associated mutex locked. A thread that detects the change signals the conditional variable causing one of the waiting threads to wake up. A conditional variable operates much like a semaphore with the additional feature that it solves a potential race condition involving a mutex.

Consider a queue where one thread, Thread 1, reads the queue and another thread, Thread 2, writes it. Clearly each thread requires exclusive access to the queue and so we protect it with a mutex.

Thread 1 will lock the mutex and then see if the queue has any data. If it does, Thread 1 reads the data and unlocks the mutex. However, if the queue is empty, Thread 1 needs to block somewhere until Thread 2 writes some data. It also sets a flag so that the next thread to write to the queue recognizes that someone is blocked on it waiting for data. Thread 1 must unlock the mutex before blocking or else Thread 2 would not be able to write. But there’s a gap between the time Thread 1 unlocks the mutex and the time it blocks. During that time, Thread 2 may execute and not recognize that anyone is blocking on the queue.

The conditional variable solves this problem by waiting (blocking) with the mutex locked. Internally, the conditional wait function unlocks the mutex allowing Thread 2 to proceed. When the conditional wait returns, the mutex is again locked.

There’s an important distinction between mutexes and conditional variables. A mutex is used to guarantee exclusive access to a resource. A conditional variable is used to signal that something has happened.

Condition API. The basic operations on a conditional variable are signal and wait. A thread may execute a timed wait such that if the specified time interval expires before the condition is signaled, the wait returns with an error. A thread may also broadcast a condition. This wakes up all threads waiting on the condition. Figure 7 is an example of using a conditional variable with a queue.

   pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

   int pthread_cond_init (pthread_cond_t *cond, const pthread_condattr_t *cond_attr);
   int pthread_cond_destroy (pthread_cond_t *cond);

   int pthread_cond_wait (pthread_cond_t *cond, pthread_mutex_t *mutex);
   int pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime);
   int pthread_cond_signal (pthread_cond_t *cond);
   int pthread_cond_broadcast (pthread_cond_t *cond);

   int pthread_condattr_init (pthread_condattr_t *attr);
   int pthread_condattr_destroy (pthread_condattr_t *attr);

   int pthread_condattr_getpshared (pthread_condattr_t *attr, int *pshared);
   int pthread_condattr_setpshared (pthread_condattr_t *attr, int pshared);
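The reader/writer queue described above follows a standard pattern. A minimal sketch, with the queue reduced to a simple counter of available items (names are illustrative):

```c
#include <pthread.h>

/* The "queue" is reduced to a counter of available items. */
static pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond  = PTHREAD_COND_INITIALIZER;
static int items = 0;

/* Reader (Thread 1): block until data is available. */
static int get_item(void)
{
    pthread_mutex_lock(&q_mutex);
    while (items == 0)                        /* loop guards against spurious wakeups */
        pthread_cond_wait(&q_cond, &q_mutex); /* mutex released while waiting */
    items--;                                  /* mutex is locked again here */
    pthread_mutex_unlock(&q_mutex);
    return 0;
}

/* Writer (Thread 2): add data and wake one waiting reader. */
static void put_item(void)
{
    pthread_mutex_lock(&q_mutex);
    items++;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_mutex);
}
```

Note that the wait is wrapped in a while loop rather than an if: pthread_cond_wait() may return spuriously, so the predicate must be rechecked with the mutex held.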

Threads, Processes and “Kernel Entities”
   int pthread_attr_getscope (const pthread_attr_t *attr, int *contentionscope);
   int pthread_attr_setscope (pthread_attr_t *attr, int contentionscope);

Besides scheduling policy and parameters, there are a couple of other dimensions to real-time scheduling. Contention scope is a description of how threads compete for resources. The contention scope attribute for a thread has two possible values:

  • PTHREAD_SCOPE_PROCESS: A thread competes for resources only with other threads in the same process. Processes are scheduled by the kernel using the normal scheduling algorithm and some other scheduler manages the scheduling of threads within each process.
  • PTHREAD_SCOPE_SYSTEM: A thread competes for resources with all threads in the system. Thus the kernel is involved in thread scheduling.

Process scope is “cheap” in the sense that thread scheduling need not involve a system call with the corresponding switch to kernel space. The thread context switch can be managed entirely in User Space by standard C library facilities like setjmp() and longjmp(). On the other hand, since the process scope thread scheduler has no visibility into, or control over, thread scheduling in other processes, a high priority thread in one process could be preempted by a low priority thread in another process. System scope may offer lower performance owing to the necessary kernel call, but it is more predictable because all threads are competing on the same basis.

Process scope raises an interesting question. What happens if a User Space thread calls, for example, read()? Normally, a process that calls read() is blocked until the read data is ready. Does this mean that all threads in the process are blocked? In some simple threads implementations the answer is yes. But a fully conforming Posix Threads implementation must not block all threads when one thread calls a blocking system service. In practice this means that the system library must be “thread aware” and make use of features like non-blocking I/O or POSIX 1.b asynchronous I/O to avoid blocking.

Another way to look at the issue of scope is to consider the mapping between threads and “kernel entities”. A kernel entity is the level of abstraction that usually exists between Posix threads and the processor. In a typical Unix/Linux system the kernel entity is a process. User Space threads, sometimes called a “library implementation”, represent a many-to-one mapping in that multiple threads exist within one schedulable kernel entity or process.

By contrast, threads managed under system scope, also called “kernel threads”, typically represent a one-to-one mapping in that each thread is represented to the system by its own kernel entity. This is not to say that each thread is an entirely separate process. Multiple threads may exist within a single process space but, for scheduling purposes, the kernel treats each thread as a separate schedulable “entity”.

Most of the prevalent threads implementations use a one-to-one mapping scheme, making use of the clone() system call to create a kernel schedulable thread entity.

This issue of scope brings us back to the pshared attribute of mutexes and conditions. If a Pthreads implementation supports the _POSIX_THREAD_PROCESS_SHARED option, you can set the pshared attribute of a mutex or condition variable to the value PTHREAD_PROCESS_SHARED. This allows the object to be accessed by threads in different processes. Of course the object must be created in global memory accessible to all processes. The default value for pshared is PTHREAD_PROCESS_PRIVATE.

Thread cancellation. We’ll wrap up our brief look at Posix threads by considering the issue of thread cancellation and “cleaning up,” an example of which is shown below:

   int pthread_cancel (pthread_t thread);
   int pthread_setcancelstate (int state, int *oldstate);
   int pthread_setcanceltype (int type, int *oldtype);
   void pthread_testcancel (void);
   void pthread_cleanup_push (void (*routine)(void *), void *arg);
   void pthread_cleanup_pop (int execute);

Many threads run in an infinite loop. As long as the system is powered up, the thread is running, doing its thing. Some threads start up, do their job and finish. But there are also circumstances where it’s useful to allow one thread to terminate another thread involuntarily. Perhaps the user presses a CANCEL button to stop a long search operation. Maybe the thread is part of a redundant numerical algorithm and is no longer needed because another thread has solved the problem. The Pthreads cancellation mechanism provides for the orderly shutdown of threads that are no longer needed.

But cancellation must be done with care. You don’t just arbitrarily stop a thread at any point in its execution. Suppose it has a mutex locked. If you terminate a thread that has locked a mutex, it can never be unlocked. Pthreads allows each thread to manage its own termination. So when you cancel a thread you’re usually not stopping it immediately, you’re politely asking the thread to terminate itself as soon as it’s safe or convenient.

Pthreads supports three cancellation modes encoded as two bits called cancellation state and cancellation type (Figure 6). A thread may choose to disable cancellation because it is performing an operation that must be completed. The default type is Deferred, meaning that cancellation can only occur at specific points, called cancellation points, where the program tests whether the thread has been requested to terminate.

Figure 6: Cancellation modes

Most functions that can block for an unbounded time, such as waiting on a condition variable or reading or writing a file, should be cancellation points and are defined as such in the Posix specification. The function pthread_testcancel() allows you to create your own cancellation points. It simply returns immediately if the thread has not been requested to terminate.
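A compute-bound loop makes no blocking calls and so would never notice a deferred cancellation request. A sketch of inserting an explicit cancellation point (thread function name is illustrative):

```c
#include <pthread.h>

/* A compute loop with an explicit cancellation point.  Under the
 * default deferred type, a pthread_cancel() against this thread
 * takes effect only at the pthread_testcancel() call. */
static void *crunch(void *arg)
{
    (void)arg;
    for (;;) {
        /* ... one bounded chunk of computation ... */
        pthread_testcancel();   /* safe place to honor a pending cancel */
    }
    return NULL;                /* not reached */
}
```

A thread cancelled this way terminates with the special exit status PTHREAD_CANCELED, visible to whoever joins it.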

While asynchronous cancellation mode might seem like a good idea, it is rarely safe to use. That’s because, by definition, you don’t know what state the thread is in when it gets the cancellation request. It may have just called pthread_mutex_lock(). Is the mutex locked? Don’t know. So while asynchronous cancellation mode is in effect, you can’t safely acquire any shared resources.

Figure 7: Conditional Variable Example

Thread cleanup. When a thread is requested to terminate itself, there may be some things that need to be “cleaned up” before the thread can safely terminate. It may need to unlock a mutex or return a dynamically allocated memory buffer, for example. That’s the role of cleanup handlers. Every thread conceptually has a stack of active cleanup handlers. Handlers are pushed on the stack by pthread_cleanup_push() and executed in reverse order when the thread is terminated or calls pthread_exit(). A cleanup handler takes one void * argument.

The most recently pushed cleanup handler can be popped off the stack with pthread_cleanup_pop(). Often the functionality of a cleanup handler is needed whether or not the thread terminates. The execute argument specifies whether or not a handler is executed when it’s popped. A non-zero value means execute.

Figure 8 shows the read thread (the main function) from Figure 7 with a cleanup handler added.

Figure 8. Cleanup handler example

We assume the default deferred cancellation mode. Note that pthread_cleanup_pop() is used to unlock the mutex rather than the normal mutex unlock function. Is it really necessary to push and pop the cleanup handler on every pass through the while loop? It is if there is a cancellation point in the section called “do something with the data” where the mutex is unlocked.

This thread can only invoke the cleanup handler if it has the mutex locked. If there are no cancellation points while the mutex is unlocked then it’s safe to move the push cleanup call outside the loop. In that case we don’t really need pop cleanup. Note also that pop cleanup can only be called from the same function that called push cleanup.
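A sketch of the push/pop pattern in the spirit of Figure 8, with illustrative names (the real figure is not reproduced here):

```c
#include <pthread.h>

static pthread_mutex_t data_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Cleanup handler: unlock the mutex if the thread is cancelled
 * while holding it. */
static void unlock_handler(void *arg)
{
    pthread_mutex_unlock((pthread_mutex_t *)arg);
}

static void *reader(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&data_mutex);
        pthread_cleanup_push(unlock_handler, &data_mutex);
        /* ... do something with the data; this section may contain
         * cancellation points ... */
        pthread_testcancel();
        pthread_cleanup_pop(1);   /* non-zero: run handler, i.e. unlock */
    }
    return NULL;                  /* not reached */
}
```

If the thread is cancelled at the pthread_testcancel() call, the handler runs and the mutex is released; otherwise pthread_cleanup_pop(1) performs the same unlock on the normal path. Note that pthread_cleanup_push() and pthread_cleanup_pop() are typically macros expanding to a matched { ... } pair, which is why they must appear in the same lexical scope.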

Threads Implementation in Linux
Until kernel version 2.6, the most prevalent threads implementation was LinuxThreads. It has been around since about 1996 and by the time development began on the 2.5 kernel it was generally agreed that a new approach was needed to address the limitations in LinuxThreads.

Among these limitations, the kernel represents each thread as a separate process, giving it a unique process ID, even though many threads exist within one process entity. This causes compatibility problems with other thread implementations. There’s a hard coded limit of 8192 threads per process, and while this may seem like a lot, there are some problems that can benefit from running thousands of threads.

The result of this new development effort is the Native Posix Threading Library or NPTL, which is now the standard threading implementation in 2.6 series kernels. It too is a one-to-one model, but takes advantage of improvements in the kernel that were specifically intended to overcome the limitations in LinuxThreads. The clone() call was extended to optimize thread creation. There’s no longer a limit on the number of threads per process, and the new fixed time scheduler can handle thousands of threads without excessive overhead. A new synchronization mechanism, the fast user-space mutex or “futex”, handles the non-contention case without a kernel call.

In tests on an IA-32, NPTL is reportedly able to start 100,000 threads in two seconds. By comparison, this test under a kernel without NPTL would have taken around 15 minutes.

Both flavors of real-time Linux, RTLinux and RTAI, implement Pthreads as real-time tasks executing in kernel space.

At first glance, the Posix Threads API looks like it was invented by a committee. Well, it was. Nevertheless, a lot of thought went into its design, which becomes evident if you study it carefully. There are more features in Pthreads than we’ve covered here and probably more than you’ll ever use. But used properly and creatively, Pthreads is a powerful tool for building highly reliable, responsive embedded applications.

Read Part 1: The Multitasking Paradigm

The LinuxThreads library. This is an older site. It has a very good FAQ and a number of links to other threads resources. The Pthreads implementation that this site originally described has been integrated into libc 6.

The Portable Threads Library. This is a User Space implementation using a non-preemptive, priority-based scheduler. The API is not Posix Threads but a Posix wrapper is available.

RTAI – the RealTime Application Interface for Linux. RTAI includes a Posix API that runs in kernel space.

Butenhof, David R., Programming with POSIX Threads , Addison-Wesley, 1997. A thorough and well-written guide to Posix threads and asynchronous programming in general.

Doug Abbott is the principal of Intellimetrix, a consulting firm specializing in hardware and software for industrial and scientific data acquisition and embedded product applications. He’s also a popular instructor and seminar leader, teaching classes on Linux and real-time programming in general. Doug has taught the techniques of embedded programming and multi-tasking operating systems to hundreds of professional engineers. This article is based on a paper he presented as part of a class he taught at the Embedded Systems Conference, “Introduction to Pthreads: Asynchronous Programming in the Unix/Linux Environment” (ESC-308).
