Embedded Systems and Multitasking Software
Multitasking has entered the mainstream of embedded systems design, thanks in large part to the combination of more powerful processors and great advances in operating system software from vendors who focus on the size and performance needs of the embedded market. Multitasking lets designers allocate processing resources among several tasks. If you've ever printed a spreadsheet while editing a word-processing document and dialing in to your ISP at the same time, you've experienced the joys of multitasking on the desktop. In this quick overview we will look at the advantages, the disadvantages, and the risks that multitasking presents to the embedded designer.
Multitasking is a technique for allocating processing time among the various duties or jobs that the overall software program must perform. This usually means dividing the software into tasks, or smaller subsets of the total problem, and creating a run-time environment that provides each task with its own virtual processor. A virtual processor typically includes a register set, a program counter, a stack memory area, and a stack pointer. Note that only the executing task uses the actual processor resources; the other tasks are all using virtual resources. A multitasking run-time environment controls overall task execution. When a higher-priority task needs to execute, the currently running task's registers are saved in memory and the higher-priority task's registers are restored from memory. This process of swapping task execution is commonly called context switching, and context-switching time is a commonly quoted specification for operating systems targeting the real-time or embedded systems market.
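A context switch can be pictured as nothing more than copying one register file out of the CPU and another one in. The sketch below models that in C for a made-up four-register processor; the struct fields and function names are invented for this illustration, and a real kernel would do the same job in a few lines of assembly.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical register file for a small four-register CPU. */
typedef struct {
    uint32_t regs[4];   /* general-purpose registers */
    uint32_t pc;        /* program counter           */
    uint32_t sp;        /* stack pointer             */
} task_context_t;

/* The one real processor; every task context is "virtual" except
 * whichever one is currently loaded here. */
static task_context_t cpu;

/* A context switch in miniature: save the outgoing task's registers
 * into its context block, then restore the incoming task's registers
 * into the CPU. */
void context_switch(task_context_t *save_to, const task_context_t *load_from)
{
    memcpy(save_to, &cpu, sizeof cpu);     /* save outgoing task   */
    memcpy(&cpu, load_from, sizeof cpu);   /* restore incoming one */
}
```

Because the whole operation is a pair of fixed-size copies, its cost is constant, which is why context-switching time can be quoted as a single deterministic number.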
With today's technology, the transfer of processor control to another task is invisible to the application software; the allocation logic is not embedded inside the application, but is assigned by the run-time environment. Exactly how this is done depends on many design goals set by the run-time or operating system vendor, but includes consideration for:
- A task's priority, the importance assigned to the task by the developer, an assignment that may shift during run time.
- The design of the scheduling algorithm, which usually gives the highest-priority tasks first crack at the processor and ranks tasks of equal priority in FIFO order.
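The scheduling rule just described, highest priority first with FIFO ordering among equals, can be sketched as a simple selection function. The struct layout and field names here are assumptions made for illustration; real kernels typically keep a ready queue per priority level rather than scanning an array.

```c
#include <stddef.h>

/* Illustrative ready-list entry: a priority plus the order in which
 * the task became ready (used to break ties FIFO-fashion). */
typedef struct {
    int id;
    int priority;   /* higher number = more important (assumption)  */
    int ready_seq;  /* lower number = became ready earlier          */
} ready_task_t;

/* Pick the task the scheduler would run next: highest priority wins;
 * among equal priorities, the task that has been ready longest wins. */
int pick_next(const ready_task_t *tasks, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++) {
        if (tasks[i].priority > tasks[best].priority ||
            (tasks[i].priority == tasks[best].priority &&
             tasks[i].ready_seq < tasks[best].ready_seq))
            best = i;
    }
    return tasks[best].id;
}
```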
One task may pre-empt another, causing the pre-empted task to be suspended temporarily. This generally happens when an interrupt routine makes a higher-priority task ready.
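The pre-emption decision itself reduces to a single comparison made on the way out of the interrupt handler. This is a minimal sketch with invented names, assuming that an equal priority does not pre-empt, consistent with FIFO ordering among equal-priority tasks.

```c
/* Possible outcomes when an interrupt handler returns. */
typedef enum { RESUME_CURRENT, PREEMPT } exit_action_t;

/* If the ISR made a task ready whose priority beats the running
 * task's, request a context switch; otherwise resume the task that
 * was interrupted. */
exit_action_t on_interrupt_exit(int running_prio, int readied_prio)
{
    return (readied_prio > running_prio) ? PREEMPT : RESUME_CURRENT;
}
```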
Multitasking is particularly useful when a software project must be divided among multiple programmers, because each programmer can concentrate on his particular section of code without worrying too much about how that code will interact with the other sections. Without multitasking, most embedded applications used a control-loop approach to overall control. This has two disadvantages. One is that the overall application's performance is sensitive to the control loop design, and to changes in that design, which often fall victim to the "law of unintended consequences." The other is that the worst-case response time is generally one complete pass through the control loop.

With multitasking, on the other hand, the response time is generally faster and deterministic. Since high-priority tasks can pre-empt low-priority tasks, the worst-case response time is generally the context-switching time. Because context switching is fast and deterministic, code developers can concentrate on the requirements of the task at hand without worrying about what effect each task might have on other tasks or on overall system response times. This is particularly valuable after an embedded application has been deployed and must then be modified or maintained in the field. The multitasking structure tends to isolate effects and make revisions less risky: changes in one section of code are less likely to affect performance in other sections.
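The response-time argument above comes down to back-of-envelope arithmetic: a superloop's worst case is one full pass through every handler, while a preemptive kernel's worst case is roughly one context switch. The handler times below are invented purely for illustration.

```c
#include <stddef.h>

/* Worst case for a control loop: the event fires just after its
 * handler ran, so it waits one complete pass through the loop. */
long superloop_worst_case_us(const long *handler_us, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += handler_us[i];
    return total;
}

/* Worst case under pre-emption: the high-priority task runs as soon
 * as the kernel can switch to it, roughly one context-switch time. */
long preemptive_worst_case_us(long context_switch_us)
{
    return context_switch_us;
}
```

Note that the superloop figure grows every time a handler is added or slowed down, while the preemptive figure stays fixed; that is the determinism the paragraph above describes.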
This isolation effect is particularly noteworthy in situations where a project is divided up among multiple individual programmers since it is not possible for each engineer on a large project to know the exact processing and response time requirements of all the other parts of the application. Multitasking in fact forces the applications designer to break the overall application into multiple pieces, so assigning the multiple pieces to multiple programmers is just the next logical step.
While it is true that multitasking adds to overall system overhead, since multitasking operating environments are bigger and more complex, it is also true that multitasking can eliminate the need for excessive polling within the control loop. Polling is itself a major contributor to system overhead, and eliminating polling via multitasking can actually improve performance.
Perhaps the biggest drawback of multitasking embedded applications is that they always require more memory than simple control-loop run-time environments. The kernel needs a stack, as does each individual task, as do other kernel objects such as queues, mailboxes, and semaphores. And while multitasking has its advantages, solutions with lots of tasks can be complex to fathom, difficult to organize by task priority, and can use up lots of memory for task stacks. In addition, lots of tasks can lead to lots of task switching, which leads to lots of context-switching time, the "hysteresis loss" of multitasking systems. If a system is complicated by a great many tasks, it may not be feasible to understand how the system will operate under all possible conditions. For example, it is conceivable that a low-priority task may never get to run if there are dozens and dozens of higher-priority tasks hogging CPU resources. Finding these kinds of system vagaries is one of the challenges facing embedded systems designers, but one that will become easier with the current generation of multitasking operating systems and debugging support tools coming to market.
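The starvation hazard just mentioned can be demonstrated with a toy tick-based simulation. All names and numbers here are illustrative; the sketch assumes strict priority scheduling, with a group of higher-priority tasks that re-arm every few ticks and each consume one tick of CPU when scheduled.

```c
/* Count how many of `ticks` time slices the lowest-priority task
 * receives when `n_high` higher-priority tasks each become ready
 * once every `period` ticks and each needs one tick of CPU. */
int low_prio_slices(int n_high, int period, int ticks)
{
    int pending = 0;   /* higher-priority work waiting to run */
    int low_ran = 0;
    for (int t = 0; t < ticks; t++) {
        if (t % period == 0)
            pending += n_high;  /* all high tasks become ready again */
        if (pending > 0)
            pending--;          /* a higher-priority ready task wins */
        else
            low_ran++;          /* CPU finally reaches the low task  */
    }
    return low_ran;
}
```

When the high-priority demand fills every tick, the low-priority task never runs at all; the failure produces no crash or error, which is exactly why this kind of vagary is hard to find by inspection.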