Multitasking has entered the mainstream of embedded systems design, thanks in large part to the combination of more powerful processors and great advances in operating system software by vendors who focus on the size and performance needs of the embedded market. Multitasking lets designers allocate processing resources among several tasks. If you've ever printed a spreadsheet while editing a word-processing document and dialing in to your ISP at the same time, you've experienced the joys of multitasking on the desktop. In this quick overview we will look at the advantages that multitasking offers the embedded designer, along with its disadvantages and risks.
Multitasking is a technique for allocating processing time among the various duties or jobs that the overall software program must perform. This usually means dividing the software into tasks, or smaller subsets of the total problem, and creating a run-time environment that provides each task with its own virtual processor. A virtual processor typically includes a register set, a program counter, a stack memory area, and a stack pointer. Note that only the executing task uses the actual processor resources; the other tasks are all using virtual resources. A multitasking run-time environment controls overall task execution. When a higher-priority task needs to execute, the currently running task's registers are saved in memory and the higher-priority task's registers are recovered from memory. The process of swapping the execution of tasks is commonly called context switching, and context-switching time is a commonly quoted specification for operating systems targeting the real-time or embedded systems market.
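The virtual-processor state described above is usually kept in a per-task structure often called a task control block. The C sketch below is a toy simulation of the idea: the struct layout and function names are hypothetical, not taken from any particular RTOS, and a real context switch would be a handful of assembly instructions rather than memcpy calls.

```c
#include <stdint.h>
#include <string.h>

#define NUM_REGS    8
#define STACK_WORDS 64

/* Hypothetical task control block -- one "virtual processor". */
typedef struct {
    uint32_t regs[NUM_REGS];      /* register set        */
    uint32_t pc;                  /* program counter     */
    uint32_t *sp;                 /* stack pointer       */
    uint32_t stack[STACK_WORDS];  /* private stack area  */
} tcb_t;

/* Simulated context switch: save the outgoing task's live state
 * into its TCB, then recover the incoming task's saved state. */
void context_switch(tcb_t *out, const uint32_t live_regs[NUM_REGS],
                    uint32_t live_pc, const tcb_t *in,
                    uint32_t next_regs[NUM_REGS], uint32_t *next_pc)
{
    memcpy(out->regs, live_regs, sizeof(out->regs));  /* save    */
    out->pc = live_pc;
    memcpy(next_regs, in->regs, sizeof(in->regs));    /* restore */
    *next_pc = in->pc;
}
```

The time spent inside a routine like this is exactly the context-switching time that RTOS vendors quote.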
With today's technology, the transfer of processor control to another task is invisible to the application software; the allocation logic is not embedded inside the application, but is assigned by the run-time environment. Exactly how this is done depends on many design goals set by the run-time or operating system vendor, but includes consideration for:
- A task's priority, the importance assigned to the task by the developer, an assignment which may shift during run time as well
- The way the scheduling algorithm is designed, which usually gives the highest-priority task first crack at the processor and runs tasks of equal priority in FIFO order.
One task may pre-empt another, causing the pre-empted task to be suspended temporarily. This generally happens when an interrupt routine makes a higher-priority task ready.
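Putting those two scheduling rules together, the choice of which task runs next can be sketched in a few lines of C. This is a toy model for illustration only (the task_t fields and pick_next are invented, not a real RTOS API): highest priority wins, and a tie goes to the task that became ready first.

```c
/* Minimal ready-list sketch (illustrative, not a real scheduler). */
typedef struct {
    int priority;   /* larger number = more important          */
    int ready;      /* nonzero when the task is runnable       */
    unsigned seq;   /* order in which the task became ready    */
} task_t;

/* Return the index of the task to run next, or -1 if none is
 * ready: highest priority first, FIFO among equal priorities. */
int pick_next(const task_t tasks[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best < 0 ||
            tasks[i].priority > tasks[best].priority ||
            (tasks[i].priority == tasks[best].priority &&
             tasks[i].seq < tasks[best].seq))
            best = i;
    }
    return best;
}
```

An interrupt routine that marks a higher-priority task ready and then causes the kernel to re-run a selection like this models the pre-emption just described.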
Multitasking is particularly useful when a software project has to be divided up among multiple programmers, because each programmer can concentrate on his particular section of code and not worry too much about how his code will interact with the other sections. Without multitasking, most embedded applications used a control-loop approach to overall control. This has two disadvantages. One is that the overall application's performance is sensitive to the control loop design and to changes in the control loop design, which often fall victim to the “law of unintended consequences.” Another is that the worst-case response time is generally one complete pass through the control loop. With multitasking, on the other hand, the response time is generally faster and deterministic. Since high-priority tasks can pre-empt low-priority tasks, the worst-case response time is generally the context-switching time. Because context switching is fast and deterministic, code developers can concentrate on the requirements of the task without worrying about what effect each task might have on other tasks or on overall system response times. This is particularly valuable after an embedded application has been deployed and then must be modified or maintained in the field. The multitasking structure tends to isolate effects and make revisions less risky: changes in one section of code are less likely to affect performance in other sections of code.
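The worst-case-response claim for a control loop can be made concrete with a back-of-envelope calculation. The handler count and execution times below are made-up figures, but the arithmetic is the point: an event can arrive just after its handler returns, so the worst case is one full trip around the loop.

```c
/* Illustrative only: worst-case execution time, in microseconds,
 * of each handler called from a hypothetical control loop. */
enum { N_JOBS = 4 };
static const unsigned wcet_us[N_JOBS] = { 200, 150, 400, 250 };

/* Worst-case response to any single event in a superloop:
 * one complete pass through every handler. */
unsigned worst_case_response_us(void)
{
    unsigned total = 0;
    for (int i = 0; i < N_JOBS; i++)
        total += wcet_us[i];
    return total;
}
```

With pre-emption, the same event would instead wait only for a context switch, typically a few microseconds rather than a full pass.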
This isolation effect is particularly noteworthy in situations where a project is divided up among multiple individual programmers, since it is not possible for each engineer on a large project to know the exact processing and response-time requirements of all the other parts of the application. Multitasking in fact forces the applications designer to break the overall application into multiple pieces, so assigning the multiple pieces to multiple programmers is just the next logical step.
While it is true that multitasking adds some overall system overhead (multitasking operating environments are bigger and more complex), it is also true that multitasking can eliminate the need for excessive polling within the control loop. Polling is itself a major contributor to system overhead, and eliminating polling via multitasking can actually improve performance.
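A rough cost model illustrates the trade. All the figures here are invented for illustration: a loop that polls a device pays for a check on every pass whether or not data arrived, while a task blocked on a queue or semaphore costs only a wakeup per real event.

```c
/* Back-of-envelope sketch (hypothetical numbers, not measurements). */

/* Polling: every loop pass spends cycles checking the device. */
unsigned long polled_cycles(unsigned loop_hz, unsigned cycles_per_check)
{
    return (unsigned long)loop_hz * cycles_per_check;
}

/* Blocking: the task sleeps; only genuine events cost anything. */
unsigned long blocked_cycles(unsigned event_hz, unsigned cycles_per_wakeup)
{
    return (unsigned long)event_hz * cycles_per_wakeup;
}
```

Even though a wakeup (a context switch) costs far more cycles than a flag check, a device that produces 50 events per second against a 10 kHz polling loop still comes out well ahead when blocked.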
Perhaps the biggest downfall of multitasking embedded applications is that they always require more memory than simple control-loop run-time environments. The kernel needs a stack, as do the individual tasks, as well as other programming tools such as queues, mailboxes, and semaphores. And while multitasking has its advantages, solutions with lots of tasks can be complex to fathom, difficult to organize by task priority, and can use up lots of memory for task stacks. In addition, lots of tasks can lead to lots of task switching, which leads to lots of context-switching time, the “hysteresis loss” of multitasking systems. If a system is really complicated by lots of tasks, it may not be feasible to understand how the system will operate under all possible conditions. For example, it is conceivable that a low-priority task may never get to run if there are dozens and dozens of higher-priority tasks hogging CPU resources. Finding these kinds of system vagaries is one of the challenges facing embedded systems designers, but one which will be easier with the current generations of multitasking operating systems and debugging support tools that are coming to market.
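The per-task memory cost is easy to budget. The sketch below tallies RAM for task stacks plus the kernel's own stack; all of the sizes are hypothetical, and a real budget would also add the queues, mailboxes, and semaphores mentioned above.

```c
/* RAM consumed by stacks alone (hypothetical sizes): memory a
 * bare control loop, with its single stack, never has to pay. */
unsigned stack_ram_bytes(unsigned n_tasks, unsigned bytes_per_task_stack,
                         unsigned kernel_stack_bytes)
{
    return n_tasks * bytes_per_task_stack + kernel_stack_bytes;
}
```

Eight tasks at 512 bytes each plus a 1 KB kernel stack already commits 5 KB of RAM, a meaningful slice of a small microcontroller's memory.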