3 driver design techniques for microcontrollers

A fundamental skill that embedded software developers need to master is understanding how to write drivers. Within an embedded system, there are typically two types of drivers: microcontroller peripheral drivers and external device drivers for parts connected through an interface like I2C, SPI, or UART. In many cases today, microcontroller vendors provide example drivers for their chips that can be leveraged as-is or that may require modifications for production. External device documentation may include pseudocode, but developers are almost always responsible for writing the driver themselves.

It’s important to realize that there is more than one way to write a driver, and the way that it is written can dramatically affect system performance, energy consumption and many other factors that we like to track as we develop a product. In this post, we will examine several common driver design patterns and how they affect application code. We will start with the basics and work towards more complex patterns.

Technique #1 – The Polled Driver

The first technique, and the most fundamental, is to develop a driver that polls the peripheral (or external device) to see if it is ready to send or receive information. Polling drivers are very easy to implement since they typically do nothing more than poll a flag. For example, an analog-to-digital converter (ADC) driver might start a conversion sequence, block processor execution, and constantly check the ADC complete flag. This code would look something like the following:

Adc_Start();                          /* Kick off the conversion sequence */
while(ADC_COMPLETE_FLAG == FALSE)
{
    /* Block until the hardware sets the conversion-complete flag */
}
AdcResults = Adc_ReadAll();           /* Read back the conversion results */
return AdcResults;

As you can see, the code above constantly polls ADC_COMPLETE_FLAG, which is presumably mapped to a hardware bit, to see when data is available. Testing a hardware bit like this is referred to as polling, and it gives the driver several characteristics that are worth discussing.

First, when we have a driver that uses polling, in most implementations the driver will be a blocking driver. This means that once we call the driver, it will not return until we have the results we need. There are other implementations where the driver could simply check for the result once and then return. In this case, the application is responsible for polling the driver, and we would consider the driver to be non-blocking. From a design standpoint, it’s up to the developer to decide where the polling should take place. Polling in the driver alleviates the application from having to do it, but if the application does it, there is flexibility to perform other activities and poll the driver at a lower rate.
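
As a rough illustration only, a non-blocking variant might look like the following sketch, which reuses the hypothetical Adc_ API and ADC_COMPLETE_FLAG from the snippet above; AdcResults_t is a made-up results type:

#include <stdbool.h>

/* Non-blocking read sketch: fills 'results' and returns true only when
 * the conversion has finished; otherwise it returns immediately so the
 * application can do other work and poll again later. */
bool Adc_TryReadAll(AdcResults_t * const results)
{
    if(ADC_COMPLETE_FLAG == FALSE)
    {
        return false;             /* Not ready yet; the caller polls again */
    }

    *results = Adc_ReadAll();     /* Conversion complete; grab the data */
    return true;
}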

Next, in general, polling is very easy to implement. Usually all that a developer needs to do is watch a few bits in a register and monitor them to decide when to interact with the device. Finally, while it’s easy to implement, polling is generally considered to be inefficient, because other techniques such as using interrupts can notify the CPU only when something actually needs to be done. I often relate polling to a kid in a car on a long trip who is constantly asking “Are we there yet?”. Polling is constantly asking “Are you ready yet? How about now? Now?”.

This brings us to a more efficient, yet slightly more complicated driver implementation which is to use interrupts.

Technique #2 – Interrupt Driven Drivers

Using interrupts in a driver is fantastic because it can dramatically improve the code’s execution efficiency. Instead of constantly checking whether it’s time to do something, an interrupt tells the processor that the driver is now ready, and we jump to handle the interrupt. In general, there are two types of interrupt-driven driver mechanisms we can use: event driven and scheduled. An event-driven driver will fire an interrupt when an event occurs in the peripheral that needs to be handled. For example, we may have a UART driver that fires an interrupt when a new character has been received in the buffer. On the other hand, we might have an ADC driver that uses a timer to schedule starting a sample sequence or processing received data.

Using an interrupt-driven driver, while more efficient, can add implementation complexity to the design. First, the developer needs to enable the appropriate interrupts for use in the driver, such as receive, transmit and buffer full. I’ve generally found that developers struggle to get interrupts to work due to the complexity of modern interrupt controllers. They often require interrupts to be enabled in a general register and at the peripheral level, and sometimes priorities and other settings must be configured as well. Several years ago, I put together a step-by-step guide for configuring interrupts that can be downloaded here.
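
As a minimal sketch, assuming a CMSIS-based Arm Cortex-M part, enabling a UART receive interrupt might look like the following; UART_CONTROL_REGISTER, UART_RX_INTERRUPT_ENABLE and USART1_IRQn are placeholder names that vary by vendor, while the NVIC_ and __enable_irq calls are standard CMSIS:

/* Hypothetical interrupt setup showing the three levels that typically
 * must all be configured before an interrupt will fire. */
void Uart_RxInterruptInit(void)
{
    UART_CONTROL_REGISTER |= UART_RX_INTERRUPT_ENABLE;  /* Peripheral level */
    NVIC_SetPriority(USART1_IRQn, 2u);                  /* Priority setting */
    NVIC_EnableIRQ(USART1_IRQn);                        /* Controller level */
    __enable_irq();                                     /* Global enable    */
}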

Next, the use of interrupts can introduce the need to follow a whole additional set of best practices. For example, it’s good practice to:

  • Keep interrupts short
  • Declare shared variables as volatile
  • Handle high priority items and then offload to the application for processing

You don’t want an interrupt in your driver with thousands of lines of code that execute when the event occurs. Instead, you want to process only the critical task, such as taking a character from the UART buffer and placing it into a circular buffer for the application.
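
For example, a receive interrupt handler that follows these practices might be as short as the sketch below; UART_DATA_REGISTER and the ISR name are hypothetical stand-ins for whatever your part actually provides:

#include <stdint.h>

#define RX_BUFFER_SIZE    (64u)

/* The buffer and index are shared between the ISR and the application,
 * so they are declared volatile. */
static volatile uint8_t  RxBuffer[RX_BUFFER_SIZE];
static volatile uint16_t RxHead = 0u;

/* Do only the time-critical work (pull the character out of the hardware
 * register), then return and leave the processing to the application. */
void Uart_RxIsr(void)
{
    RxBuffer[RxHead] = UART_DATA_REGISTER;       /* Grab the character */
    RxHead = (RxHead + 1u) % RX_BUFFER_SIZE;     /* Advance circularly */
}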

Finally, we also need to worry about issues like interrupts being disabled, interrupt timing and run rates, priorities and whether it’s possible for an interrupt to be missed. While some of these items might make the extra complexity seem not worth the effort, the improvement in execution times can be dramatic. For example, a battery-operated device could go into a deep sleep mode and wake up only to store a character in the buffer before going back to sleep. Huge amounts of energy can be saved by doing this.

There are also situations where using interrupts in the driver is simply the best way to handle peripheral events. For example, you can write a polled I2C driver, but writing one that interrupts on the different events that occur in the transmission sequence, such as ACK and NACK, results in a cleaner, smaller and more efficient driver.
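
To give a feel for why, an interrupt-driven I2C master often reduces to a small state machine in the event ISR. The following is only a rough sketch; every register, event and helper name is a made-up placeholder, and a real driver would handle many more events:

#include <stdint.h>

static uint8_t TxData[8];               /* Bytes queued for transmission */
static uint8_t TxIndex      = 0u;
static uint8_t SlaveAddress = 0x50u;    /* Example 7-bit device address */

/* Each hardware event advances the transfer one step instead of the CPU
 * spinning on status flags (bounds checking omitted for brevity). */
void I2c_EventIsr(void)
{
    switch(I2C_STATUS_REGISTER)
    {
        case I2C_EVENT_START_SENT:
            I2C_DATA_REGISTER = (uint8_t)(SlaveAddress << 1u);  /* Address + write */
            break;
        case I2C_EVENT_BYTE_ACKED:
            I2C_DATA_REGISTER = TxData[TxIndex++];  /* Send the next byte */
            break;
        case I2C_EVENT_NACK_RECEIVED:
            I2c_SendStop();                         /* Abort the transfer */
            break;
        default:
            break;
    }
}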

We will look at the code for an interrupt-driven driver in an upcoming post. For now, let’s look at the third technique we can use for writing a driver which is to utilize the direct memory access (DMA) controller.

Technique #3 – DMA Driven Drivers

There are some drivers that move a large amount of data through the system, such as those for I2S and SDIO. Managing the buffers on these types of interfaces can require constant attention from the CPU. If the CPU falls behind or has to handle another system event, data could be missed or delayed, which can cause issues noticeable to the user, such as an audio skip. Developers concerned with throughput can instead use the DMA controller to move the data around the microcontroller on behalf of the CPU.

The idea behind these drivers is that the DMA controller can move data around the microcontroller in the following ways:

  • Peripheral to memory
  • Memory to memory
  • Memory to Peripheral

The advantage of using DMA is that the CPU can be off doing other things while the DMA channel moves data around for the driver, essentially getting two things done at once.

While using the DMA controller in a driver to reduce the CPU’s workload is highly desirable, most microcontrollers have a limited number of DMA channels available. For this reason, not every driver can be written to use DMA. Instead, developers need to choose the peripherals that will be bandwidth constrained and will benefit most from DMA, such as interfaces for external memory, ADCs and communication channels.

In applications that don’t have I2S or SDIO, developers could use DMA to move incoming UART characters into a circular buffer that is processed once a certain threshold has been reached. This threshold could be monitored either by polling an application structure or by setting an interrupt through the DMA controller. As you can imagine, DMA drivers are the most efficient implementation for a driver, but they can also be complicated to implement depending on the developer’s skill level and whether they have used DMA before. That shouldn’t prevent a developer, though, from attempting to use DMA in their drivers.
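
Here is a minimal sketch of that idea, assuming a hypothetical Dma_ API; these calls are placeholders for whatever your vendor’s HAL actually provides. The channel is configured peripheral-to-memory in circular mode, with a half-transfer interrupt acting as the threshold:

#include <stdint.h>

#define RX_BUFFER_SIZE    (128u)

static uint8_t RxBuffer[RX_BUFFER_SIZE];

/* The DMA controller copies each received character from the peripheral
 * data register into RxBuffer, wraps automatically in circular mode, and
 * interrupts the CPU only when the buffer is half full. */
void Uart_DmaRxInit(void)
{
    Dma_ConfigureChannel(DMA_CHANNEL_UART_RX,      /* Placeholder channel */
                         UART_DATA_REGISTER_ADDR,  /* Source: peripheral  */
                         RxBuffer,                 /* Destination: memory */
                         RX_BUFFER_SIZE);
    Dma_EnableCircularMode(DMA_CHANNEL_UART_RX);
    Dma_EnableHalfTransferInterrupt(DMA_CHANNEL_UART_RX);
    Dma_Start(DMA_CHANNEL_UART_RX);
}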

Conclusions

In this post we have examined three primary techniques that embedded developers can use to write drivers for their microcontroller peripherals and for external devices. To summarize these techniques comparatively, Table 1 below shows each technique that we have discussed along with the relative complexity to implement and the resulting execution efficiency.

Table 1: Relative complexity and efficiency of each driver design technique.

Technique    Complexity    Efficiency
Polling      Low           Low
Interrupt    Medium        Medium
DMA          Medium        High

In general, developers should use an interrupt-driven implementation by default over a polling implementation, unless the peripheral being used is fast, i.e. several Mbps. DMA can be used for any driver, but I’ve generally reserved DMA channels for interfaces that require high throughput, such as external memory or communication interfaces. The option you select will be highly dependent upon the end application.

In the next post, we will be exploring how we can go deeper into these concepts by looking at how we can develop a simple driver for an analog to digital converter.


Jacob Beningo is an embedded software consultant, advisor and educator who currently works with clients in more than a dozen countries to dramatically transform their software, systems and processes. Feel free to contact him at jacob@beningo.com, at his website www.beningo.com, and sign-up for his monthly Embedded Bytes Newsletter.
