Choosing the best real-time scheduling method for your embedded device

At one time, most embedded systems had modest software requirements — typically, a few thousand source lines of code. Today, however, an embedded system may contain hundreds of thousands or even millions of source lines, and use a large number of software components that interact in complex ways. In fact, research suggests that the code base for the average embedded project is doubling in size about every 10 months.

Typically, the many subsystems, processes, and threads that make up a modern embedded system are developed in parallel with one another. The design is divided among multiple development groups, each with its own performance goals, task-prioritization schemes, and approaches to runtime optimization.

Once these subsystems are integrated into a common runtime environment, all parts of the system must provide adequate response under all operating scenarios, including normal system loading, peak periods, and failure conditions.

Given the parallel development paths, performance issues invariably arise when integrating the subsystems from each development team. In fact, many of these issues emerge only during integration and verification testing, when the cost of software redesign and recoding is at its highest.

Unfortunately, few designers or architects are capable of diagnosing and solving these problems at a system level. Designers must juggle task priorities, possibly change thread behavior across the system, and then retest and refine their modifications. The entire process can easily take several calendar weeks, resulting in increased costs and a delayed product.

Striking a balance
To complicate matters, many embedded systems today are dynamic, network-connected devices that can (or must) support new software functionality throughout their mission life. This upgradability offers numerous advantages, but it also poses a multitude of design challenges.

How, for instance, can a device download and run new software components, without compromising the behavior and realtime performance of existing components? And how can the device perform these upgrades safely, while maintaining continuous availability and a high level of security?

In such cases, it is rarely feasible to add more hardware or processing power to handle the new features. Systems designers must therefore employ advanced techniques to guarantee that every software component, both old and new, can always access the computing resources it needs, including CPU time.

Partitioning as a Solution
Recently, the concept of partitioning has gained mindshare as a way to manage system complexity and to ensure higher system availability. Briefly stated, this approach allows design teams to compartmentalize software into separate partitions, where each partition is allocated a guaranteed portion (or budget) of system resources.

Each partition provides a stable, known runtime environment that development teams can build and verify individually. This partitioning enforces resource budgets, either through hardware or software, to prevent processes in any partition from monopolizing resources needed by processes in other partitions. Systems designers can choose between hardware or software partitioning. Time, cost, and performance considerations will heavily influence which approach they select.

Figure 1 — Comparison of partitioning approaches.

Hardware partitioning
The concept of hardware partitioning isn't new. In fact, most large systems provide some level of hardware separation for different functions. For example, in the networking industry, each card (line card, switch card, control plane processor) within a shelf may have its own dedicated processors. Even with this division of responsibilities, specific cards, such as supervisory processors and control plane processors, are often stretched thin by the sheer volume and complexity of the software that they must host.

Typically, an embedded system has a central processing complex that provides the overall brains for the system. As software complexity increases, it becomes a challenge to integrate all software subsystems into this centralized complex. A hardware partitioning approach employs dedicated hardware to run these various subsystems, with the goal of minimizing resource contention between them. (For a description of such an approach, see "Extreme Partitioning.")

Hardware partitioning uses added hardware to reduce the risk of software development and to minimize overall project-cycle time. It thus offers a feasible alternative for systems that ship in limited volumes and where the deployed system cost is small in comparison to the development cost. For products where price is important, software partitioning is more appropriate.

OS-controlled partitioning
An embedded operating system (OS) is a good candidate to enforce partition budgets since the OS already controls access to underlying computing resources. For instance, some realtime OSs (RTOSs) provide memory protection to prevent coding errors in one process from damaging memory used by other processes or by the OS kernel.

A partitioning OS can also help control CPU time. To accomplish this, the OS implements a partitioning scheduler that guarantees CPU cycles for each partition while still providing the deterministic, realtime response required by embedded systems. Depending on the OS, the scheduler will either provide fixed partition budgets or offer a more dynamic approach that allows partitions under heavy load to access CPU cycles unused by other partitions.

By leveraging OS-controlled partitioning, systems designers can address many of the issues associated with complex design and system integration. For example, the designer can divide the system into partitions that are constructed and tested according to their partition budget.

This approach removes complex CPU contention issues and thereby simplifies system integration. The net effect is improved time to market — provided that the partitioning scheduler can be dropped into the existing design. If developers must redesign their applications to fit with the OS partitioning model, then the costs of using OS-controlled partitioning could easily offset the savings.

Scheduler efficiency is also an issue. An inefficient partitioning scheduler will require additional (or faster, more expensive) hardware to deliver the same level of responsiveness as a traditional priority-based preemptive scheduler. Thus, designers must carefully consider scheduler efficiency when selecting a partitioning-scheduler approach.

Application-level partitioning
If developers are using an OS incapable of enforcing partition budgets, they can still implement application-level partitioning. For instance, the developer could write a monitor application that continually examines the CPU usage of every thread to detect when a thread consumes too much CPU. To succeed, this approach requires techniques to notify the thread and to have the thread "back off" or yield in some way.
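
As a rough sketch of what such a monitor involves, the code below (purely illustrative, not a recommended design) samples each worker thread's CPU time with the POSIX pthread_getcpuclockid() and clock_gettime() calls and raises a cooperative "back off" flag when a thread exceeds an assumed per-second budget. The budget, sampling period, and flag mechanism are all assumptions that a real system would have to tune per thread.

```c
/* cpu_monitor.c: sketch of application-level CPU policing. The per-second
 * budget, sampling period, and cooperative back-off flag are assumptions;
 * a real system would need per-thread tuning and a far more robust policy.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NUM_WORKERS 4
#define PERIOD_SEC  1
#define BUDGET_NS   (200LL * 1000 * 1000)   /* 200 ms of CPU per second */

typedef struct {
    pthread_t   tid;
    long long   last_cpu_ns;    /* CPU time observed at previous sample */
    atomic_bool back_off;       /* each worker is expected to poll this */
} worker_t;

static worker_t workers[NUM_WORKERS];

static long long thread_cpu_ns(pthread_t tid)
{
    clockid_t cid;
    struct timespec ts;

    if (pthread_getcpuclockid(tid, &cid) != 0 || clock_gettime(cid, &ts) != 0)
        return -1;
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Monitor loop: once per period, compare each worker's CPU consumption
 * against the budget and raise a cooperative back-off flag if exceeded. */
static void *monitor(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(PERIOD_SEC);
        for (int i = 0; i < NUM_WORKERS; i++) {
            long long now = thread_cpu_ns(workers[i].tid);
            if (now < 0)
                continue;
            long long used = now - workers[i].last_cpu_ns;
            workers[i].last_cpu_ns = now;
            atomic_store(&workers[i].back_off, used > BUDGET_NS);
            if (used > BUDGET_NS)
                fprintf(stderr, "worker %d over budget: %lld ns\n", i, used);
        }
    }
    return NULL;
}

/* Example worker: does bursts of work but yields when told to back off. */
static void *worker(void *arg)
{
    worker_t *self = arg;
    for (;;) {
        if (atomic_load(&self->back_off)) {
            usleep(100 * 1000);     /* cooperative yield */
            continue;
        }
        /* ... one unit of real work would go here ... */
    }
    return NULL;
}

int main(void)
{
    pthread_t mon;

    for (int i = 0; i < NUM_WORKERS; i++) {
        workers[i].last_cpu_ns = 0;
        atomic_init(&workers[i].back_off, false);
        pthread_create(&workers[i].tid, NULL, worker, &workers[i]);
    }
    pthread_create(&mon, NULL, monitor, NULL);
    pthread_join(mon, NULL);
    return 0;
}
```

Even in this stripped-down form, the scheme works only if every worker cooperates by polling its flag, which is exactly the weakness discussed next.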

This approach uses a complex design that provides only a coarse level of CPU control. It also relies on the correct behavior of all participating threads. Without considerable refinements, this approach can make it hard to differentiate between a runaway error condition and a thread that is simply busy performing its intended function.

Besides spending time and effort to build the monitor application, the developer must also implement behaviors in each participating thread to monitor the thread's CPU use. The cost of such development quickly multiplies as more threads are added. An application-level approach to controlling CPU utilization can also be difficult to maintain and scale. As a result, developers spend valuable design resources on this solution instead of developing core product features.

Figure 1, above, summarizes the merits of each partitioning approach. The case for a partitioning OS becomes even more compelling as we consider the additional benefits it can offer.

The Case for OS-controlled Partitioning
A key aspect of OS-controlled partitioning is the ability to guarantee CPU time. To date, OSs have used multithreading and priority-based scheduling to help control which tasks consume CPU cycles. Systems designers can assign a priority to each thread and then schedule the threads for execution by using a scheduling policy such as round robin or first in, first out (FIFO). Priorities dictate which queue of ready threads will execute first, and the scheduling policy dictates how threads of equal priority share the CPU.

Figure 2 — Priority scheduling ensures that the most critical tasks gain access to the CPU, but it can also cause problems when a high-priority task inadvertently or maliciously consumes all available CPU cycles. For instance, Task A prevents all other tasks from accessing the CPU after T4.

Priority-based scheduling (Figure 2, above) provides an easy method to define the scheduling priority of every task. It does pose a problem, however: If a given task is even one priority level higher than another task, the higher-priority task has the power to completely starve the less-critical task of CPU time. For instance, let's say you have two processes, process A and process B, where A has a slightly higher priority than B. If process A becomes swamped with work or becomes the target of a denial of service attack, it will lock out process B (as well as any other lower-priority process) from accessing the CPU.
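
To make the starvation scenario concrete, here is a minimal sketch that creates two POSIX threads under the SCHED_FIFO policy. The thread roles and priority values are illustrative assumptions; on a single-core target, the busy loop in the higher-priority thread keeps the lower-priority thread off the CPU indefinitely.

```c
/* starvation_demo.c: illustrative sketch only. Two POSIX threads under
 * SCHED_FIFO on a single core; requires privileges for realtime policies
 * (e.g., root on Linux). Thread roles and priority values are arbitrary.
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static void *task_a(void *arg)          /* "navigation": higher priority */
{
    (void)arg;
    for (;;) {
        /* Busy work: on one core, this loop never yields the CPU to any
         * lower-priority SCHED_FIFO thread. */
    }
    return NULL;
}

static void *task_b(void *arg)          /* "MP3 player": lower priority */
{
    (void)arg;
    for (;;) {
        puts("task B got the CPU");     /* stops printing once A starts */
        sleep(1);
    }
    return NULL;
}

static pthread_t spawn(void *(*fn)(void *), int prio)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    pthread_create(&tid, &attr, fn, NULL);
    return tid;
}

int main(void)
{
    pthread_t b = spawn(task_b, 10);    /* lower priority                */
    pthread_t a = spawn(task_a, 11);    /* one level higher: starves B   */

    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```

The point is simply that a one-level priority difference is enough to lock out the lower thread for as long as the higher one stays busy.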

This problem occurs in all fields of embedded software. In the automobile, process A might be the navigation display and process B the MP3 player — if the navigation system consumes too many CPU cycles when performing a route calculation, it can starve the MP3 player and cause MP3s to skip. In a network element, process A might be the TCP/IP stack and routing protocols and process B the SNMP agent. In an industrial control system, process A could be the robot-arm control loop and process B the human machine interface (HMI).

No matter what market an embedded system is targeted for, task or process starvation represents a serious concern. Services provided by lower-priority threads — including diagnostic services that protect the system from software faults or denial-of-service attacks — can be starved of CPU cycles for unbounded periods of time, thereby compromising system availability. The inability to provide guaranteed CPU budgets for lower-priority tasks can also make it difficult to integrate subsystems from multiple development teams, since it allows tasks in one subsystem to starve tasks in other subsystems — a problem that may not become obvious until integration and verification testing. These issues become more acute as system complexity (and the number of threads) increases.

Figure 3 — While fixed partition scheduling will prevent a high-priority task (for instance, task A) from consuming all CPU cycles, it can also sacrifice performance by wasting unused CPU cycles in a given partition (for instance, partition 3).

To address this problem, some RTOSs offer a fixed partition scheduler (Figure 3, above). Using this scheduler, the system designer can divide tasks into groups, or partitions, and allocate a percentage of CPU time to each partition. With this approach, no task in any given partition can consume more than the partition's statically defined percentage of CPU time.

For instance, let's say a partition is allocated 30% of the CPU. If a process in that partition subsequently becomes the target of a denial of service attack, it will consume no more than 30% of CPU time.
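
The fixed-cycle idea can be sketched as a simple table-driven loop. The toy model below is illustrative only; the 100 ms major frame and the partition table are assumptions, not any vendor's implementation. Each partition gets exactly its window in every cycle, no more and no less.

```c
/* fixed_cycle.c: toy model of a fixed partition (cyclic) scheduler. The
 * 100 ms major frame and the partition table are assumptions; a real
 * scheduler works on kernel ready queues, not a printf loop.
 */
#include <stdio.h>

#define FRAME_MS 100                    /* length of one scheduling cycle */

typedef struct {
    const char *name;
    int         budget_pct;             /* fixed share of each frame */
} partition_t;

static const partition_t table[] = {
    { "control",     50 },
    { "networking",  30 },
    { "diagnostics", 20 },
};

int main(void)
{
    /* Every cycle, each partition gets exactly its window: no more, even
     * under attack, and no less, even if the window goes unused. */
    for (int frame = 0; frame < 3; frame++) {
        printf("frame %d:\n", frame);
        for (int i = 0; i < (int)(sizeof table / sizeof table[0]); i++) {
            int window_ms = FRAME_MS * table[i].budget_pct / 100;
            printf("  run '%s' threads for %d ms\n", table[i].name, window_ms);
        }
    }
    return 0;
}
```

The windows never move, which is both the strength and the weakness of this approach, as discussed next.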

Fixed partition schedulers have their drawbacks, however. First, each partition must be defined at system build time and requires code modification to implement and change. Also, because the scheduling algorithm is fixed, a partition can never use CPU cycles allocated to other partitions, even if those partitions haven't used their allotted cycles.

This approach squanders valuable CPU cycles and prevents the system from handling peak demands. Because of this wasted time, fixed cyclical schedulers can achieve only 70% CPU utilization. Manufacturers must, as a result, use more expensive processors, tolerate a slower system, or limit the amount of functionality that the system can support.

Adaptive Partitioning
Another approach, called adaptive partitioning, shown in Figure 4, below, addresses these drawbacks by providing a more dynamic scheduling algorithm. Like static partitioning, adaptive partitioning allows the system designer to reserve CPU cycles for a process or group of processes.

The designer can thus guarantee that the load on one subsystem or partition won't affect the availability of other subsystems. Unlike static approaches, however, adaptive partitioning uses standard priority-based scheduling when the system isn't under full CPU load or attack. As a result, threads in one partition can access any spare CPU cycles unused by threads in any other partition.

This approach, which was pioneered by QNX Software Systems, offers the best of both worlds: it can enforce CPU guarantees when the system runs out of excess cycles (for maximum security and guaranteed availability of lower-priority services) and can dispense free CPU cycles when they become available (for maximum performance).
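
The core scheduling decision can be illustrated with a toy selection function. This is a sketch of the idea under stated assumptions, not QNX's implementation: within the averaging window, run the highest-priority ready thread whose partition still has budget; only when every partition with ready work has exhausted its budget are the otherwise-idle cycles lent out.

```c
/* adaptive_pick.c: toy illustration of an adaptive partitioning decision.
 * Window length, partition table, and field names are assumptions; this is
 * not QNX's algorithm, only the idea of budget enforcement plus borrowing.
 */
#include <stdbool.h>
#include <stdio.h>

#define WINDOW_MS 100               /* budgets are averaged over this window */

typedef struct {
    const char *name;
    int budget_pct;                 /* guaranteed share of the window        */
    int used_ms;                    /* CPU consumed in the current window    */
    int top_ready_prio;             /* highest-priority ready thread, or -1  */
} partition_t;

static bool within_budget(const partition_t *p)
{
    return p->used_ms * 100 < p->budget_pct * WINDOW_MS;
}

/* Return the index of the partition whose thread should run next. */
static int pick_next(partition_t *parts, int n)
{
    int best = -1;

    /* Pass 1: normal case. Highest priority among partitions that are
     * ready and still inside their guaranteed budget. */
    for (int i = 0; i < n; i++)
        if (parts[i].top_ready_prio >= 0 && within_budget(&parts[i]) &&
            (best < 0 || parts[i].top_ready_prio > parts[best].top_ready_prio))
            best = i;
    if (best >= 0)
        return best;

    /* Pass 2: every ready partition is over budget, but the CPU would
     * otherwise sit idle, so the highest-priority ready thread borrows. */
    for (int i = 0; i < n; i++)
        if (parts[i].top_ready_prio >= 0 &&
            (best < 0 || parts[i].top_ready_prio > parts[best].top_ready_prio))
            best = i;
    return best;                    /* -1 means nothing is ready at all */
}

int main(void)
{
    partition_t parts[] = {
        { "drivers",    40, 40, 20 },   /* ready but budget exhausted */
        { "multimedia", 40,  5, 10 },   /* ready, within budget       */
        { "system",     20,  0, -1 },   /* idle                       */
    };
    int next = pick_next(parts, 3);
    printf("next to run: %s\n", next >= 0 ? parts[next].name : "(idle)");
    return 0;
}
```

A real scheduler performs this accounting inside the kernel on every scheduling decision and rolls the window continuously; the sketch only captures the enforce-then-borrow policy.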

Adaptive partitioning offers several advantages, including the ability to:

(1) provide CPU time guarantees when the system is heavily loaded — this ensures that all partitions receive their fair budget of CPU time
(2) use realtime, priority-based scheduling when the system is lightly loaded — this allows systems to use the same scheduling behavior that they do today
(3) make use of free CPU time from partitions that aren't completely busy — this gives other partitions the extra processing time they need to handle peak demands and permits 100% processor utilization
(4) overlay the adaptive partitioning scheduler onto existing systems without code changes — applications and system services can simply be launched in a partition, and the scheduler will ensure that partitions receive their allocated budget
(5) dynamically add and configure partitions at runtime, enabling the system to adjust processor consumption in response to fault conditions or other scenarios
(6) guarantee that recovery operations have the CPU cycles they need to repair system faults, improving mean time to repair for high availability systems
(7) stop malicious code from stealing all the CPU time through a denial of service attack

Partition inheritance
To further enhance performance, adaptive partitioning can provide partition inheritance. When a server process (for instance, a device driver or file system) executes requests on behalf of a client application, the client is billed for the server's time. As a result, server processes can run with minimal CPU budget — only the budget required to perform background tasks. Moreover, systems designers don't have to reengineer the server budget when more clients are added.
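
Conceptually, the billing works as sketched below. The helper functions are hypothetical stand-ins for kernel-level accounting; the point is simply that the server measures the CPU time spent on each request and attributes it to the requesting client's partition rather than to its own.

```c
/* inheritance_billing.c: conceptual sketch of partition inheritance. The
 * helper functions are hypothetical stubs; a real OS performs this
 * accounting inside the kernel, not in application code.
 */
#include <stddef.h>
#include <time.h>

static long long now_cpu_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Hypothetical hooks, stubbed out so the sketch compiles. */
static void handle_request(void *request) { (void)request; /* real work */ }
static int  client_partition_of(int client_id) { return client_id % 4; }
static void account_to_partition(int partition, long long cpu_ns)
{
    (void)partition; (void)cpu_ns;  /* a real OS would update budgets here */
}

/* Called by a server process (driver, file system) per client request. */
static void serve_request(int client_id, void *request)
{
    long long start = now_cpu_ns();

    handle_request(request);

    /* Bill the CPU time to the client's partition, not the server's, so
     * the server itself needs almost no budget of its own. */
    account_to_partition(client_partition_of(client_id),
                         now_cpu_ns() - start);
}

int main(void)
{
    serve_request(7, NULL);
    return 0;
}
```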

Figure 4 — Adaptive partitioning prevents high-priority tasks from consuming more than their assigned CPU percentage unless unused CPU cycles become available in the system. For instance, tasks A and D can run in time allocated to partition 3 because tasks E and F don't require the rest of their budgeted CPU cycles. With a fixed partitioning scheduler, this free time would be wasted.

Faster debugging and testing
Adaptive partitioning offers benefits throughout the product development cycle to speed design, implementation, and testing. Consider a typical design cycle that consists of a high-level design, a subsystem-level design, implementation and unit testing, and system integration and final verification.

At the high-level design stage, systems designers can use adaptive partitioning to define CPU budgets, allowing each development group to implement their own priority schemes and optimizations within their assigned budget.

When subsequently testing their partition, a development group can load a simple program into other partitions to consume CPU. This technique effectively loads the system and causes the adaptive partitioning scheduler to enforce the assigned budgets. The development group can then test the operation and performance of code within their partition under a simulated worst-case load condition.
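
The "simple program" loaded into the other partitions can be as small as a busy loop, for example (an illustrative load generator, not a vendor-supplied tool):

```c
/* cpuhog.c: trivial load generator. Launch one instance in each of the
 * other partitions to force the partitioning scheduler to enforce the
 * budgets under test. */
int main(void)
{
    volatile unsigned long spin = 0;
    for (;;)
        spin++;                     /* burn CPU until killed */
}
```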

Adaptive partitioning can also simplify day-to-day testing and debugging. For example, in the unit testing phase, code defects will occasionally cause runaway conditions that bring debugging to a halt.

In these situations, the system appears to be locked and the developer can recover only through a reset — thereby losing key debug and diagnostic information. But by creating a partition that guarantees CPU time for console login and remote debugging, developers can collect information critical to understanding and correcting these defects.

At system integration time, problems typically arise because of unforeseen runtime interaction and CPU contention. These problems occur largely because there is no effective way to "budget" CPU utilization across different subsystems developed by different design groups. Adaptive partitioning alleviates this problem at the design stage. Also, developers can configure the target system with a "debug tools" partition to ensure that system diagnostic information can be retrieved while the system is under test.

Increased system availability
When a hardware or software subsystem fails in a high availability system, automated recovery functions must return the system to a proper operating state. The faster these functions execute, the lower the mean time to repair (MTTR) and the greater the overall system availability. Adaptive partitioning helps by ensuring that CPU time is available for the fault detection and recovery functions.

In systems that typically run at very high CPU utilization, processes that monitor system health and report errors don't get an opportunity to run in a timely manner. The CPU guarantees provided by adaptive partitioning address this problem and ensure that routine diagnostic functions run as intended. These functions can thus detect and report problems before the problems result in hard failures.

In the most severe cases, the user must intervene to revive a system. In these cases, the system must notify the user of the failure and provide some way of diagnosing the problem. Again, adaptive partitioning helps by ensuring the system has enough CPU cycles to alert the user and to provide guaranteed access to the user interface, be it a system console, remote terminal, or other method.

Minimum risk
Adaptive partitioning doesn't require code changes, nor does it change the programming model or debugging techniques that designers are already familiar with. It thus offers immediate benefits with little associated risk. Nonetheless, a fixed-cycle scheduler may be desirable in some situations.

To address this requirement, a properly implemented adaptive partitioning scheduler will allow the system designer to configure a system with fixed partition budgets and no CPU time "borrowing." System designers are thus free to choose the scheduling behavior that best meets their system requirements.

Partitioning Plus Performance
Embedded software is becoming so complex that, without some form of partitioning, system designers and software engineers will be hard-pressed to satisfy the conflicting demands for performance, security, time to market, innovative features, and system availability.

An OS-controlled approach to partitioning goes a long way toward addressing these requirements by providing each subsystem with a guaranteed portion of CPU cycles, while still delivering the deterministic, realtime response that embedded systems require.

Device manufacturers can, as a result, readily integrate subsystems from multiple software teams, allow new and upgraded components to run without compromising the behavior of existing components, streamline software debugging, and protect their systems from denial of service attacks and other network-based exploits.

If the partitioning model also provides a flexible, efficient scheduler that allows partitions under load to borrow unused CPU time from other partitions, then manufacturers can realize these various benefits without having to incur the cost of faster, more expensive hardware.

Jason Clarke is field applications engineer, Robert Craig, PhD, is senior software developer, Kerry Johnson is senior product marketing manager, and Paul Leroux is technology analyst at QNX Software Systems.

This article is excerpted from a paper of the same name presented at the Embedded Systems Conference Silicon Valley 2006. Used with permission of the Embedded Systems Conference. For more information, please visit www.embedded.com/esc/sv.
