Maximizing efficiency in IoT projects


March 27, 2017

Matt Gordon and Thom Denholm

For a developer perusing the datasheets of the latest microcontrollers, it’s easy to assume that efficient use of CPU resources, including memory and clock cycles, is, at most, a minor concern with today’s hardware.  The latest 32-bit MCUs offer flash memory and RAM allocations that were unheard of in the embedded space just a short time ago, and their CPUs are often clocked at speeds once reserved for desktop PCs.  However, as anyone with recent experience developing a product for the IoT knows, these advances in hardware have not occurred in a vacuum; they have been accompanied by pronounced changes in end-user expectations and design requirements.  Accordingly, it is perhaps more important now than ever for developers to ensure that their software runs with the utmost efficiency and that their own time is spent in an efficient manner.

The software running on modern embedded systems tends to come from a variety of sources.  Code written by application developers is often combined with off-the-shelf software components from an RTOS (real-time operating system) provider, and these components may, in turn, utilize driver code originally offered by a semiconductor company.  Each piece of code can be written to optimize efficiency, but this article will focus on efficiency within off-the-shelf software components.  Two components in particular will serve as the foundation for the examination of resource efficiency given herein: a real-time kernel and a transactional file system. 

A Real-Time Kernel: The Heart of an Efficient System
A real-time kernel is the centerpiece of the software running in many of today’s embedded systems.  In simple terms, a kernel is a scheduler; developers who write application code for a kernel-based system divide that code into tasks, and the kernel is responsible for scheduling the tasks.  A kernel, then, is an alternative to the infinite loop in main() that often serves as the primary scheduling mechanism in bare-metal embedded systems. 
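To make the comparison concrete, the bare-metal approach can be sketched as follows. This is a minimal illustration, not code from the article; the function names are hypothetical, and one loop iteration is factored into its own function so the body can be exercised in isolation.

```c
#include <stdbool.h>

/* Hypothetical task functions -- illustrative names only.
   In a real system these would poll hardware, service a protocol
   stack, and so on. The counters just record that each ran. */
static int sensor_runs;
static int comms_runs;

static void read_sensors(void) { sensor_runs++; }
static void handle_comms(void) { comms_runs++; }

/* One pass through the "super loop": every function executes in a
   fixed sequence, whether or not it actually has work to do. */
static void superloop_iteration(void)
{
    read_sensors();
    handle_comms();
}

/* The infinite loop in main() that a kernel replaces. */
static void superloop(void)
{
    for (;;) {
        superloop_iteration();
    }
}
```

Because every function runs on every pass regardless of need, CPU time is spent unconditionally; a kernel's event-driven scheduling avoids exactly this cost.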

Using a real-time kernel delivers major benefits, including improved efficiency.  Developers who choose to base their application code on a kernel can optimize the use of processor resources in their system while achieving more efficient use of their own time.  Not all kernels are created equal, however, and efficiency gains are not guaranteed as a result of simply deciding to adopt a kernel for a new project. 

A key area where kernels may differ and where CPU resources can be used with widely varying degrees of efficiency is scheduling.  By offering an intelligent scheduling mechanism that allows tasks to run in response to events, a kernel helps developers achieve efficiency gains over an infinite loop, in which tasks (or functions, in other words) are executed in a fixed sequence.  The exact efficiency of a kernel-based application depends, in part, on how its scheduler is implemented.  A kernel's scheduler—which is simply the section of code responsible for deciding when each task should run—is ultimately overhead, and this overhead must not nullify the benefits that can be achieved by moving away from a bare-metal system. 

Figure 1: In the µC/OS-II scheduler, each task priority is represented by a bit in an array. (Source: Micrium)

Typically, in a real-time kernel, scheduling is priority-based, meaning that application developers assign priorities (which are oftentimes numbers) to their tasks, and the kernel favors the higher-priority tasks when making scheduling decisions.  Under this scheme, the kernel must maintain some type of data structure that tracks the priorities of an application’s different tasks along with the current state of each of those tasks.  An example, taken from Micrium’s µC/OS-II kernel, is shown in Figure 1.  Within OSRdyTbl[], the 8-entry array (of eight-bit elements) shown here, each bit represents a different task priority, with the least-significant bit in the first element corresponding to the highest priority and the most-significant bit in the last element signifying the lowest priority.  The array’s bit values reflect task state: A value of 1 is used if the task at the associated priority is ready and a 0 is used if the task is not ready. 
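The maintenance of these structures can be sketched in a few lines of C. The following is a simplified re-creation of the data structures described above, not the kernel's actual source; µC/OS-II performs equivalent updates internally (using a small bit-mask table) whenever a task becomes ready or unready.

```c
#include <stdint.h>

typedef uint8_t INT8U;

/* Simplified stand-ins for the uC/OS-II ready structures shown in
   Figure 1: one bit per task priority, one group bit per row. */
static INT8U OSRdyGrp;
static INT8U OSRdyTbl[8];

/* Bit-mask lookup: OSMapTbl[n] has only bit n set. */
static const INT8U OSMapTbl[8] = {
    0x01u, 0x02u, 0x04u, 0x08u, 0x10u, 0x20u, 0x40u, 0x80u
};

/* Mark the task at priority 'prio' (0 = highest, 63 = lowest) ready:
   set its bit in OSRdyTbl[] and flag its row in OSRdyGrp. */
void make_ready(INT8U prio)
{
    OSRdyGrp            |= OSMapTbl[prio >> 3];
    OSRdyTbl[prio >> 3] |= OSMapTbl[prio & 0x07u];
}

/* Mark the task not ready; clear the row's bit in OSRdyGrp only
   when no other task in that row remains ready. */
void make_not_ready(INT8U prio)
{
    if ((OSRdyTbl[prio >> 3] &= (INT8U)~OSMapTbl[prio & 0x07u]) == 0u) {
        OSRdyGrp &= (INT8U)~OSMapTbl[prio >> 3];
    }
}
```

Note that both updates are constant-time bit operations: no loops or searches are needed to keep the ready structures current.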

Accompanying OSRdyTbl[] as part of µC/OS-II’s scheduler is the single eight-bit variable shown in the figure, OSRdyGrp.  Each bit in this variable represents an entire row, or element, in the array: A 1 bit indicates that the corresponding row has at least one ready task, while a 0 bit means that none of the row’s tasks are ready.  By scanning first OSRdyGrp and then OSRdyTbl[] using the code shown in Listing 1, µC/OS-II can determine the highest-priority task that is ready to run at any given time.  As the listing indicates, this operation is highly efficient, requiring just two lines of C code.

y             = OSUnMapTbl[OSRdyGrp];
OSPrioHighRdy = (INT8U)((y << 3u) + OSUnMapTbl[OSRdyTbl[y]]);

Listing 1: Scheduling can be accomplished with just two lines of C code in µC/OS-II.
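The two lines in Listing 1 hinge on OSUnMapTbl[], a 256-entry constant table that maps any byte value to the position of its lowest-order 1 bit, i.e., to the highest ready priority encoded in that byte. The sketch below recreates the table's contents at run time for clarity (the real kernel ships it as a precomputed constant array) and wraps the Listing 1 lookup in a function so it can be exercised:

```c
#include <stdint.h>

typedef uint8_t INT8U;

/* OSUnMapTbl[x] = index of the lowest-order 1 bit in x.
   uC/OS-II declares this as a 256-entry constant table; here the
   entries are computed at startup to make the mapping explicit. */
static INT8U OSUnMapTbl[256];

void build_unmap_tbl(void)
{
    int i, bit;

    OSUnMapTbl[0] = 0u;  /* never consulted when at least one task is ready */
    for (i = 1; i < 256; i++) {
        for (bit = 0; (i & (1 << bit)) == 0; bit++) {
            ;  /* find the lowest set bit */
        }
        OSUnMapTbl[i] = (INT8U)bit;
    }
}

/* The Listing 1 computation, wrapped in a function: 'grp' plays the
   role of OSRdyGrp and 'tbl' the role of OSRdyTbl[]. */
INT8U highest_ready(INT8U grp, const INT8U tbl[8])
{
    INT8U y = OSUnMapTbl[grp];                     /* row with a ready task */
    return (INT8U)((y << 3u) + OSUnMapTbl[tbl[y]]); /* priority within it   */
}
```

Two table lookups, a shift, and an add thus identify the highest-priority ready task in constant time, regardless of how many tasks the application defines.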


