
Effective use of Pthreads in embedded Linux designs: Part 2 –Sharing resources

Doug Abbott, Intellimetrix

June 21, 2014


As noted in Part 1, one of the primary functions of any operating system is to manage access to global resources by multiple threads/tasks/processes to avoid conflicts. This is a major issue in event-driven systems where threads may compete asynchronously for access to a global resource.

Consider the following scenario. Two threads of the same priority are each running the same code fragment:

   printf ("I am thread %d\n", n);

Let’s assume they’re operating under the SCHED_RR scheduling policy. In the absence of any kind of synchronizing mechanism, the result could be something like “II a amm ThThrreeadad 12”.

What is needed is some way to regulate access to the printer so that only one task can use it at a time. In Pthreads, that mechanism is called a mutex, which is short for “mutual exclusion”. A mutex acts like a key to control access to a resource.

Only the thread that has the key can use the resource. In order to use the resource (in this case a printer) a thread must first acquire the key (mutex) by calling an appropriate Pthreads service. If the key is available, that is, the resource (printer) is not currently in use by someone else, the thread is allowed to proceed. Following its use of the printer, the thread releases the mutex so another thread may use it.

If however, the printer is in use, the thread is blocked until the thread that currently has the mutex releases it. Any number of threads may try to acquire the mutex while it is in use. All of them will be blocked. The waiting tasks are queued in priority order.
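As a minimal sketch of the idea (the function and variable names here are illustrative, not from the article), the printf fragment above could be serialized with the mutex API described next:

   #include <pthread.h>
   #include <stdio.h>

   static pthread_mutex_t print_mutex = PTHREAD_MUTEX_INITIALIZER;

   void *print_thread (void *arg)
   {
      int n = *(int *) arg;

      pthread_mutex_lock (&print_mutex);     /* acquire the key; block if another thread holds it */
      printf ("I am thread %d\n", n);        /* exclusive use of the shared resource */
      pthread_mutex_unlock (&print_mutex);   /* release the key so another thread can print */
      return NULL;
   }

Each thread's message now comes out intact because only one thread at a time can be between the lock and unlock calls.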

Mutex API. The Pthreads mutex API follows much the same pattern as the thread API, as shown below:

   pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
   int pthread_mutex_init (pthread_mutex_t *mutex, const pthread_mutexattr_t *mutex_attr);
   int pthread_mutex_destroy (pthread_mutex_t *mutex);
   int pthread_mutex_lock (pthread_mutex_t *mutex);
   int pthread_mutex_unlock (pthread_mutex_t *mutex);
   int pthread_mutex_trylock (pthread_mutex_t *mutex);


There is a pair of functions to initialize and destroy mutex objects and a set of functions to act on the mutex objects. This also shows an alternate way to initialize statically allocated Pthreads objects. PTHREAD_MUTEX_INITIALIZER provides the same default values as pthread_mutex_init().

Two operations may be performed on a mutex: lock and unlock. The lock operation causes the calling thread to block if the mutex is not available. trylock allows you to test the state of a mutex without blocking. If the mutex is available, trylock returns success and locks the mutex; if the mutex is not available it returns EBUSY.
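A brief, hedged sketch of a non-blocking attempt, reusing print_mutex from the earlier fragment:

   void *polling_thread (void *arg)
   {
      int n = *(int *) arg;

      if (pthread_mutex_trylock (&print_mutex) == 0) {
         printf ("I am thread %d\n", n);
         pthread_mutex_unlock (&print_mutex);
      } else {
         /* trylock returned EBUSY: another thread holds the mutex,
            so do something else rather than block */
      }
      return NULL;
   }

The mutex attribute API is shown below: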

   int pthread_mutexattr_init (pthread_mutexattr_t *attr);
   int pthread_mutexattr_destroy (pthread_mutexattr_t *attr);

   #ifdef _POSIX_THREAD_PROCESS_SHARED
   int pthread_mutexattr_getpshared (pthread_mutexattr_t *attr, int *pshared);
   int pthread_mutexattr_setpshared (pthread_mutexattr_t *attr, int pshared);
   #endif

   int pthread_mutexattr_getkind_np (pthread_mutexattr_t *attr, int *kind);
   int pthread_mutexattr_setkind_np (pthread_mutexattr_t *attr, int kind);


Mutex attributes follow the same basic pattern as thread attributes. There is a pair of functions to create and destroy a mutex attribute object. We’ll defer discussion of the pshared attribute until later. There are some other attributes we’ll take up shortly.

Linux implements an interesting non-portable extension to the Pthreads mutex. The Pthreads standard explicitly allows non-portable extensions. The only requirement is that any symbol that is non-portable must have “_np” appended to its name.

What happens if a thread should attempt to lock a mutex that it has already locked? Normally the thread would simply hang. Linux offers a way out of this dilemma. The “kind” attribute alters the behavior of a mutex when a thread attempts to lock a mutex that it has already locked:

Fast (PTHREAD_MUTEX_FAST_NP). This is the default type. If a thread attempts to lock a mutex it already holds, it blocks and is thus effectively deadlocked. The fast mutex does no consistency or sanity checking, so it is faster.

Recursive (PTHREAD_MUTEX_RECURSIVE_NP). A recursive mutex allows a thread to successfully lock a mutex multiple times. It counts the number of times the mutex was locked and requires the same number of calls to the unlock function before the mutex goes to the unlocked state.

Error checking (PTHREAD_MUTEX_ERRORCHECK_NP). If a thread attempts to recursively lock an error checking mutex, the lock function returns immediately with the error code EDEADLK. Furthermore, the unlock function returns an error if it is called by a thread other than the current owner of the mutex.

Note the “_NP” in the constant names.
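As a hedged sketch, here is one way to create a recursive mutex using pthread_mutexattr_settype(), the standardized successor to the setkind_np() call (the variable and function names are illustrative):

   #include <pthread.h>

   pthread_mutex_t rec_mutex;

   void init_recursive_mutex (void)
   {
      pthread_mutexattr_t attr;

      pthread_mutexattr_init (&attr);
      pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_RECURSIVE);
      pthread_mutex_init (&rec_mutex, &attr);
      pthread_mutexattr_destroy (&attr);   /* the mutex keeps its own copy of the attributes */
   }

A thread that locks rec_mutex twice must also unlock it twice before the mutex actually becomes available to other threads.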

Priority Inversion. Solving the resource sharing problem with mutexes sometimes leads to other problems. Consider a scenario involving three threads. Threads 1 and 2 each require access to a common resource protected by a mutex. Thread 1 has the highest priority and Thread 2 has the lowest. Thread 3, which has no interest in the resource, has a “middle” priority.

The situation is illustrated graphically in Figure 5. Thread 2 is currently executing and locks the mutex. So it gets the resource. Next an interrupt occurs which makes Thread 1 ready. Since Thread 1 has higher priority, it preempts Thread 2 and executes until it attempts to lock the resource mutex.

Thread 1 blocks and Thread 2 regains control. So far everything is working as we would expect. Even though Thread 1 has higher priority, it simply has to wait until Thread 2 is finished with the resource.


Figure 5. Priority inversion

The problem arises if Thread 3 should become ready while Thread 2 has the resource locked. Thread 3 preempts Thread 2, so Thread 2 cannot finish with the resource and release the mutex, and Thread 1 remains blocked. This situation is called priority inversion because a lower priority thread (Thread 3) is effectively preventing a higher priority thread (Thread 1) from executing.

There are a couple of possible solutions to this problem. One is called priority inheritance. If a higher priority thread attempts to lock a mutex that is already locked, the priority of the thread owning the mutex is temporarily raised to that of the thread attempting to lock the mutex. This prevents any lower priority thread from preempting the owner until it unlocks the mutex, at which time its priority reverts to its normal value.

The other solution is called priority ceiling. In this approach, when a thread locks a mutex, its priority is immediately raised to that of the highest priority thread that will ever try to lock the mutex. This is considered to be more efficient because it can avoid some unnecessary context switches. Note in Figure 5 that Thread 1 is switched into the processor only to be blocked a short time later when it attempts to lock the mutex. The priority ceiling protocol would prevent this.

These strategies are optional values for a mutex attribute called protocol, as shown below:

   #if defined (_POSIX_THREAD_PRIO_PROTECT) || defined  (_POSIX_THREAD_PRIO_INHERIT)
   int pthread_mutexattr_getprotocol (const pthread_mutexattr_t *attr, int *protocol);
   int pthread_mutexattr_setprotocol (pthread_mutexattr_t *attr, int protocol);
   #endif

   #ifdef _POSIX_THREAD_PRIO_PROTECT
   int pthread_mutexattr_getprioceiling (const pthread_mutexattr_t *attr, int *ceiling);
   int pthread_mutexattr_setprioceiling (pthread_mutexattr_t *attr, int ceiling);
   int pthread_mutex_getprioceiling (const pthread_mutex_t *mutex, int *ceiling);
   int pthread_mutex_setprioceiling (pthread_mutex_t * mutex, int ceiling);
   #endif


Protocol is an optional attribute for Pthread mutexes. The values for mutex protocol are:
  • PTHREAD_PRIO_NONE – Default. Thread priorities are not modified by locking the mutex.
  • PTHREAD_PRIO_INHERIT – Use the priority inheritance protocol.
  • PTHREAD_PRIO_PROTECT – Use the priority ceiling protocol.

If PTHREAD_PRIO_PROTECT is selected, you can set the priority ceiling value either in the mutex attribute or in the mutex itself.
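As a hedged sketch (assuming the target defines _POSIX_THREAD_PRIO_INHERIT; the names are illustrative), a priority inheritance mutex could be created like this:

   #include <pthread.h>

   pthread_mutex_t res_mutex;

   void init_pi_mutex (void)
   {
      pthread_mutexattr_t attr;

      pthread_mutexattr_init (&attr);
      pthread_mutexattr_setprotocol (&attr, PTHREAD_PRIO_INHERIT);
      pthread_mutex_init (&res_mutex, &attr);
      pthread_mutexattr_destroy (&attr);
   }

For the priority ceiling protocol you would select PTHREAD_PRIO_PROTECT instead and also call pthread_mutexattr_setprioceiling() with the priority of the highest priority thread that will ever lock the mutex.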

Conditional Variable
There are many situations where one thread needs to notify another thread about a change in the status of a shared resource protected by a mutex. For this we use a conditional variable, or just condition for short. A conditional variable must always have a mutex associated with it.

Threads that wish to be notified of the change of status wait on the conditional variable with the associated mutex locked. A thread that detects the change signals the conditional variable causing one of the waiting threads to wake up. A conditional variable operates much like a semaphore with the additional feature that it solves a potential race condition involving a mutex.

Consider a queue where one thread, Thread 1, reads the queue and another thread, Thread 2, writes it. Clearly each thread requires exclusive access to the queue and so we protect it with a mutex.

Thread 1 will lock the mutex and then see if the queue has any data. If it does, Thread 1 reads the data and unlocks the mutex. However, if the queue is empty, Thread 1 needs to block somewhere until Thread 2 writes some data. It also sets a flag so that the next thread to write to the queue recognizes that someone is blocked on it waiting for data. Thread 1 must unlock the mutex before blocking or else Thread 2 would not be able to write. But there is a gap between the time Thread 1 unlocks the mutex and the time it blocks. During that time, Thread 2 may execute and not recognize that anyone is blocking on the queue.

The conditional variable solves this problem by waiting (blocking) with the mutex locked. Internally, the conditional wait function unlocks the mutex allowing Thread 2 to proceed. When the conditional wait returns, the mutex is again locked.

There’s an important distinction between mutexes and conditional variables. A mutex is used to guarantee exclusive access to a resource. A conditional variable is used to signal that something has happened.

Condition API. The basic operations on a conditional variable are signal and wait. A thread may execute a timed wait such that if the specified time interval expires before the condition is signaled, the wait returns with an error. A thread may also broadcast a condition. This wakes up all threads waiting on the condition. Figure 7 is an example of using a conditional variable with a queue.

   pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

   int pthread_cond_init (pthread_cond_t *cond, const pthread_condattr_t *cond_attr);
   int pthread_cond_destroy (pthread_cond_t *cond);

   int pthread_cond_wait (pthread_cond_t *cond, pthread_mutex_t *mutex);
   int pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime);
   int pthread_cond_signal (pthread_cond_t *cond);
   int pthread_cond_broadcast (pthread_cond_t *cond);

   int pthread_condattr_init (pthread_condattr_t *attr);
   int pthread_condattr_destroy (pthread_condattr_t *attr);

   #ifdef _POSIX_THREAD_PROCESS_SHARED
   int pthread_condattr_getpshared (pthread_condattr_t *attr, int *pshared);
   int pthread_condattr_setpshared (pthread_condattr_t *attr, int pshared);
   #endif
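To make the queue scenario above concrete, here is a hedged sketch in which the queue is reduced to a simple item counter (queue_count and the function names are illustrative, not from the article):

   #include <pthread.h>

   static pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;
   static pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;
   static int queue_count = 0;                   /* items currently in the queue */

   void get_item (void)                          /* Thread 1: the reader */
   {
      pthread_mutex_lock (&q_mutex);
      while (queue_count == 0)                   /* re-test the condition after every wakeup */
         pthread_cond_wait (&q_cond, &q_mutex);  /* atomically unlocks q_mutex, blocks, relocks on return */
      queue_count--;                             /* ...read the data here... */
      pthread_mutex_unlock (&q_mutex);
   }

   void put_item (void)                          /* Thread 2: the writer */
   {
      pthread_mutex_lock (&q_mutex);
      queue_count++;                             /* ...write the data here... */
      pthread_cond_signal (&q_cond);             /* wake one thread waiting on the condition */
      pthread_mutex_unlock (&q_mutex);
   }

The wait is wrapped in a while loop because pthread_cond_wait() can return while the queue is still empty (for example, if another reader got there first), so the condition must be re-checked after each wakeup.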


