Ring buffer basics - Embedded.com

Ring buffer basics

The ring buffer's first-in, first-out data structure is a useful tool for transmitting data between asynchronous processes. Here's how to bit bang one in C, without C++'s Standard Template Library.

The ring buffer is a circular software queue with a first-in-first-out (FIFO) data characteristic. These buffers are quite common and are found in many embedded systems. Usually, developers write these constructs from scratch on an as-needed basis.

The C++ language has the Standard Template Library (STL), which has a very easy-to-use set of class templates. This library enables the developer to create queues and other containers relatively easily. For the purposes of this article, however, I am assuming that we do not have access to the C++ language.

The ring buffer usually has two indices to the elements within the buffer. The distance between the indices can range from zero (0) to the total number of elements within the buffer. The use of dual indices means the queue length can range from zero (empty) to the total number of elements (full). Figure 1 shows the ring structure of the ring buffer (FIFO) queue.

Figure 1: Structure of a ring buffer.

The data gets PUT at the head index, and the data is read from the tail index. In essence, the newest data "grows" from the head index, and the oldest data gets retrieved from the tail index. Figure 2 shows how the head and tail indices vary in time using a linear array of elements for the buffer.

Figure 2: Linear buffer implementation of the ring buffer.

Use cases
Single process to single process
In general, the queue is used to serialize data from one process to another. The serialization allows some elasticity in time between the processes. In many cases, the queue is used as a data buffer in a hardware interrupt service routine. This buffer collects the data so that at some later time another process can fetch it for further processing. This is the single process to single process buffering case.

This use case is typically found as an interface between some very high priority hardware service buffering data to some lower priority service running in some background loop. This simple buffering use case is shown in Figure 3.

Figure 3: A single process to process buffer use case

In many cases, there will be a need for two queues for a single interrupt service. Using multiple queues is quite common for device drivers for serial devices such as RS-232, I2C or USB drivers.
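To make the dual-queue arrangement concrete, here is a minimal sketch of a serial driver's state using two of the ring buffers defined later in Listing 1. The `uartDriver` name and its fields are hypothetical, not part of the article's code.

```c
#include <string.h>

#define RBUF_SIZE 256

/* The ring buffer record, as defined later in Listing 1. */
typedef struct ringBufS
{
    unsigned char buf[RBUF_SIZE];
    int head;
    int tail;
    int count;
} ringBufS;

/* Hypothetical serial driver state: one queue per direction.
   The receive interrupt fills rxQueue; the transmit interrupt
   drains txQueue. */
typedef struct uartDriver
{
    ringBufS rxQueue;
    ringBufS txQueue;
} uartDriver;
```

The background loop only ever gets from `rxQueue` and puts to `txQueue`, while the interrupt service does the opposite, which is exactly the single process to single process pattern applied twice.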

Multiple processes to single process

A little less common is the requirement to serialize many data streams into one receiving stream. This use case is quite common in multi-threaded operating systems. In this case, there are many client threads requesting some type of serialization from some server or broker thread. The requests or messages are serialized into a single queue which is received by a single process. Figure 4 shows this use case.

Figure 4: Multiple processes to process use case.

Single process to multiple processes

The least common use case is the single process to multiple processes case. The difficulty here is determining where to steer the output in real time. Usually, this is done by tagging the data elements in such a way that a broker can steer the data in some meaningful way. Figure 5 shows the single process to multiple processes use case. Since queues can be readily created, it is usually better to solve this use case with multiple queues than with a single queue.

Figure 5: Single process to multiple processes use case.

Figure 6 shows how to reorganize the single process to multiple process use case using a set of cascaded queues. In this case, we have inserted an Rx / Tx Broker Dispatcher service, which will parse the incoming requests to each of the individual process queues.

Figure 6: Single process to multiple process use case using a dispatcher and multiple queues.

Managing overflow
One must be able to handle the case where the queue is full and there is still incoming data. This case is known as the overflow condition. There are two methods for handling it: drop the latest data, or overwrite the oldest data. Either style may be implemented. In our case, I will use the drop-latest method.
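For contrast, the overwrite-oldest style can be sketched as below. This variant is not the one used in the rest of the article; the function name `ringBufS_put_overwrite` is hypothetical, and the record matches the one defined later in Listing 1.

```c
#define RBUF_SIZE 256

typedef struct ringBufS
{
    unsigned char buf[RBUF_SIZE];
    int head;
    int tail;
    int count;
} ringBufS;

/* Hypothetical overwrite-oldest put: when the queue is full, advance
   the tail so the oldest byte is discarded, then store the new byte
   at the head as usual. */
void ringBufS_put_overwrite (ringBufS *_this, const unsigned char c)
{
    if (_this->count >= RBUF_SIZE)
    {
        /* Full: drop the oldest byte by advancing the tail. */
        _this->tail = (_this->tail + 1) % RBUF_SIZE;
        --_this->count;
    }
    _this->buf[_this->head] = c;
    _this->head = (_this->head + 1) % RBUF_SIZE;
    ++_this->count;
}
```

Overwrite-oldest is the better choice when the newest samples matter most (for example, a sensor log); drop-latest, used below, is simpler and keeps the oldest data intact.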

Design features
In this very specific software queue implementation, I shall use the KISS principle to implement a very simple ring buffer queue. The basic purpose here is to create a queue which can handle a stream of bytes into a fixed buffer for the single process to single process use case. This type of ring buffer is very handy to have for a simple buffered serial device. The design features for this simple queue are as follows:

1. The buffer will contain a fixed number of bytes. This number of bytes will be set by a macro definition in the header file.

2. The overflow condition will be managed via the drop-latest method. This means that in the event of an overflow, the latest incoming data will be dropped.

These features lead us to our first listing. Listing 1 is the main ring buffer header file. This file defines all the interfaces and attributes required for our very simple ring buffer implementation.

Listing 1. The interfaces and attributes for a simple ring buffer object.

#ifndef __RINGBUFS_H
#define __RINGBUFS_H

#define RBUF_SIZE    256

typedef struct ringBufS
{
    unsigned char buf[RBUF_SIZE];
    int head;
    int tail;
    int count;
} ringBufS;

#ifdef  __cplusplus
extern  "C" {
#endif

  void  ringBufS_init  (ringBufS *_this);
  int   ringBufS_empty (ringBufS *_this);
  int   ringBufS_full  (ringBufS *_this);
  int   ringBufS_get   (ringBufS *_this);
  void  ringBufS_put   (ringBufS *_this, const unsigned char c);
  void  ringBufS_flush (ringBufS *_this, const int clearBuffer);

#ifdef  __cplusplus
}
#endif

#endif


The simple ring buffer record
Inspection of Listing 1 shows that very little is required to implement the simple ring buffer. Also, the buffer size is fixed via the RBUF_SIZE macro, which in this case happens to be 256 bytes. The following is a list of what is contained within the ringBufS record.

buf: This is the managed buffer. The size of this buffer is set by the RBUF_SIZE macro. In our case, we are managing a 256-byte buffer of unsigned characters.

head: This is the head index. The incoming bytes get written to the managed buffer using this index. The arithmetic for this index is modulo-256, because the RBUF_SIZE macro is defined as 256. Of course, if we defined a different value for the buffer size, then the modulus arithmetic would change accordingly.

tail: This index is used to retrieve the oldest data in the queue. This index follows the same modulus arithmetic as the head index.

count: This field keeps track of the total number of elements currently in the queue. The maximum value of this field is set by the RBUF_SIZE macro.
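One note on the modulus arithmetic: because RBUF_SIZE is 256, a power of two, the modulo operation can be reduced to a bitwise AND. The article's code does not use this trick; the following is a hedged sketch of the common optimization, with `index_inc_masked` being a hypothetical name.

```c
#define RBUF_SIZE 256   /* must be a power of two for the mask trick */

/* When the buffer size is a power of two, (i + 1) % RBUF_SIZE can be
   computed with a bitwise AND, which is cheaper on small MCUs that
   lack a hardware divider. */
static int index_inc_masked (int i)
{
    return (i + 1) & (RBUF_SIZE - 1);
}
```

If the buffer size is not a power of two, a compare-and-reset increment (as used later via modulo_inc) is the portable choice.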

The simple ring buffer methods
There are six methods for the simple unsigned character ring buffer. A brief description of these methods is shown in Table 1.

Table 1: The set of ring buffer queue methods.

  ringBufS_init    Initializes the ring buffer object
  ringBufS_empty   Returns non-zero if the queue is empty
  ringBufS_full    Returns non-zero if the queue is full
  ringBufS_get     Retrieves the oldest byte; returns -1 if the queue is empty
  ringBufS_put     Places a byte into the queue; drops it if the queue is full
  ringBufS_flush   Resets the queue; optionally clears the buffer


Listing 2 shows the implementation of the simple ring buffer initialization process. In this case, we simply clear all of the object memory. This not only flushes the queue but also clears the buffer.

Listing 2: initializing the simple ring buffer.

#include <string.h>
#include "ringBufS.h"

void ringBufS_init (ringBufS *_this)
{
    /*****
      The following clears:
        -> buf
        -> head
        -> tail
        -> count
      and sets head = tail
    ***/
    memset (_this, 0, sizeof (*_this));
}

Listing 3 shows the implementation for checking the empty status of the queue. In this case, we just check whether or not the count field is zero.

Listing 3. Testing the queue empty condition.

#include "ringBufS.h"

int ringBufS_empty (ringBufS *_this)
{
    return (0==_this->count);
}

Of course, we also need to check the queue full condition. Listing 4 shows that this is simply a check of the count field against the buffer size.

Listing 4. Testing the queue full condition.

#include "ringBufS.h"

int ringBufS_full (ringBufS *_this)
{
    return (_this->count>=RBUF_SIZE);
}

In Listing 5 we get a byte from the queue. Notice that the return value of this method is an integer. We use the value of -1 to signal that an attempt was made to retrieve a value from an empty queue.

We introduce the modulo_inc() method here. This method encapsulates the modulus arithmetic required for updating the buffer indices.

Listing 5. Getting a byte from the queue.

#include "modulo.h"
#include "ringBufS.h"

int ringBufS_get (ringBufS *_this)
{
    int c;
    if (_this->count>0)
    {
      c           = _this->buf[_this->tail];
      _this->tail = modulo_inc (_this->tail, RBUF_SIZE);
      --_this->count;
    }
    else
    {
      c = -1;
    }
    return (c);
}
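The modulo.h header itself is not shown in the article. A minimal implementation of modulo_inc consistent with its use in Listings 5 and 6 might look like this; treat it as a sketch rather than the author's actual file.

```c
/* Sketch of modulo.h: increment an index, wrapping it back to zero
   when it reaches the modulus.  A compare-and-reset is used rather
   than the % operator, which avoids a division on small targets. */
static int modulo_inc (const int value, const int modulus)
{
    int my_value = value + 1;
    if (my_value >= modulus)
    {
        my_value = 0;
    }
    return (my_value);
}
```

A matching modulo_dec (wrapping from zero back to modulus - 1) would follow the same pattern if decrementing indices were ever needed.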


Listing 6 shows how we put a byte into the queue. This listing shows that the incoming byte is dropped if the queue is full.

Listing 6. Placing the data into the queue.

#include "modulo.h"
#include "ringBufS.h"

void ringBufS_put (ringBufS *_this, const unsigned char c)
{
    if (_this->count < RBUF_SIZE)
    {
      _this->buf[_this->head] = c;
      _this->head = modulo_inc (_this->head, RBUF_SIZE);
      ++_this->count;
    }
}

Listing 7 shows how to flush the queue. In this case we set the count and the indices to zero. We can optionally clear the buffer to zero if required. In some cases this may be useful for diagnostic purposes.

Listing 7. Flushing the queue.

#include <string.h>
#include "ringBufS.h"

void ringBufS_flush (ringBufS *_this, const int clearBuffer)
{
  _this->count  = 0;
  _this->head   = 0;
  _this->tail   = 0;
  if (clearBuffer)
  {
    memset (_this->buf, 0, sizeof (_this->buf));
  }
}
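Putting the listings together, the following is a consolidated, compilable sketch of the queue with a short exercise of its FIFO behavior. The modulo_inc body is an assumption, since modulo.h is not shown in the article.

```c
#include <string.h>

#define RBUF_SIZE 256

typedef struct ringBufS
{
    unsigned char buf[RBUF_SIZE];
    int head;
    int tail;
    int count;
} ringBufS;

/* Assumed modulo.h behavior: increment with wraparound. */
static int modulo_inc (const int value, const int modulus)
{
    int v = value + 1;
    return (v >= modulus) ? 0 : v;
}

void ringBufS_init (ringBufS *_this)  { memset (_this, 0, sizeof (*_this)); }
int  ringBufS_empty (ringBufS *_this) { return (0 == _this->count); }
int  ringBufS_full  (ringBufS *_this) { return (_this->count >= RBUF_SIZE); }

/* Drop-latest put: the byte is discarded if the queue is full. */
void ringBufS_put (ringBufS *_this, const unsigned char c)
{
    if (_this->count < RBUF_SIZE)
    {
        _this->buf[_this->head] = c;
        _this->head = modulo_inc (_this->head, RBUF_SIZE);
        ++_this->count;
    }
}

/* Returns the oldest byte, or -1 if the queue is empty. */
int ringBufS_get (ringBufS *_this)
{
    int c = -1;
    if (_this->count > 0)
    {
        c = _this->buf[_this->tail];
        _this->tail = modulo_inc (_this->tail, RBUF_SIZE);
        --_this->count;
    }
    return (c);
}
```

A typical exercise: after putting 'a' then 'b', two gets return 'a' then 'b', and a third get returns -1 because the queue is empty again.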

First, the simple ring buffer
I have presented a very simple fixed-size ring buffer first-in-first-out queue and mentioned some of the use cases for the ring buffer. This simple ring buffer can be used in a wide variety of simple serial device interfaces. In fact, I have used this simple construct many times in several projects.

The next step is to go over this simple case and analyze the core design pattern within this implementation. We can extend this simple pattern into a construct that has a bit more capability and can be used in more complex systems. Specifically, the design can be extended to accommodate the following:

1. Queuing up different data types
2. Varying the user buffer and user buffer size
3. Reusing the core design pattern in a type-safe way

Extending this simple ring buffer design to a more reusable and extensible design will be a topic for a future article.

To access the code in this article, download the ringBufS.zip file in Embedded.com's source code library.

Ken Wada is president and owner of Aurium Technologies, an independent product design and consulting firm in California's Silicon Valley. Ken has over 25 years of experience architecting and designing high-tech products and systems, including the FASTRAK vehicle-sensing system for toll roads and bridges. His expertise includes industrial automation, biotechnology, and high-speed optical networks. Ken holds four patents.


14 thoughts on "Ring buffer basics"

  1. I am concerned about this code implementation because the get function and the put function both perform a read-modify-write operation on the count variable. If the get function is called from an interrupt and the put function is called from a background t

  2. I usually sacrifice one byte in queue and get rid of the 'count' variable, using only put and get indexes. Then the interrupt and background processes will write to their own index only.
    Of course, if no OS is used, it is always a good idea to disable the

  3. I entirely agree with HikerBobPA. In my opinion having the count variable negates the most important attribute of a ring buffer, in that so long as the write operation is atomic, the ring buffer can be used without locking out the reader or writer process

  4. No need to sacrifice the byte.
    Empty is (in == out)
    Full is when adding a byte would make the buffer appear to be empty, or ((in+1) == out).
    Note that you must do the full test -before- adding the byte or you corrupt the last byte in the buffer.

  5. No you don't corrupt the last byte in the buffer as it never gets used. The maximum amount of data you can store in the buffer is BUFSIZE-1. When you get to this point writing to the last byte would cause in==out so the byte is never written.
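The index-only variant the commenters describe can be sketched as follows. This is not the article's code: the `ringBufL` name is hypothetical, the count field is dropped, and one slot is sacrificed so that empty is (in == out) and full is ((in + 1) % RBUF_SIZE == out). Because the producer writes only `in` and the consumer writes only `out`, a single-producer/single-consumer pairing needs no locking so long as index writes are atomic on the target processor.

```c
#define RBUF_SIZE 256

/* Index-only ring buffer: no count field, one slot sacrificed.
   The writer touches only 'in'; the reader touches only 'out'. */
typedef struct ringBufL
{
    unsigned char buf[RBUF_SIZE];
    volatile int in;    /* written only by the producer */
    volatile int out;   /* written only by the consumer */
} ringBufL;

int ringBufL_empty (const ringBufL *q)
{
    return (q->in == q->out);
}

int ringBufL_full (const ringBufL *q)
{
    return (((q->in + 1) % RBUF_SIZE) == q->out);
}

/* Drop-latest put: returns 0 on success, -1 if the queue is full.
   Note the full test happens BEFORE the byte is stored. */
int ringBufL_put (ringBufL *q, const unsigned char c)
{
    if (ringBufL_full (q))
    {
        return (-1);
    }
    q->buf[q->in] = c;
    q->in = (q->in + 1) % RBUF_SIZE;
    return (0);
}

/* Returns the oldest byte, or -1 if the queue is empty. */
int ringBufL_get (ringBufL *q)
{
    int c = -1;
    if (!ringBufL_empty (q))
    {
        c = q->buf[q->out];
        q->out = (q->out + 1) % RBUF_SIZE;
    }
    return (c);
}
```

As noted in the thread, the capacity of this variant is RBUF_SIZE - 1 bytes, which is the price paid for eliminating the shared read-modify-write count field.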

  6. All good comments here. Indeed, the code example I presented has a read-modify-write variable. It is important to protect fields such as this by making the modify process atomic in nature. I did not show how to do this in this example because the implement

  7. Can you explain why the indexes need to be protected if the write operation is atomic, which it usually is, and only one process will ever write to a particular index.

  8. It really depends on the processor that you use. On some processors, if you use the int type and the integer native is 16-bits, and the processor happens to be an 8-bit machine, then the index update is not atomic at all.

    The best way to figure this out i

  9. One advantage of the count is that when it exceeds the ring buffer size, you can add a small amount of code to determine the max size that the buffer needed to be during testing. This can be used to help size the buffer correctly.

