
Deadline Scheduling


Most real-time operating systems are based on static priority scheduling. But how can systems be truly real time without an awareness of deadlines?

I have not yet used a deadline scheduler or implemented one for serious use, and I've only read a fraction of the “relevant literature.” I cannot call myself a real-time scheduling expert. I am, however, an interested and half-committed neophyte. In the last few years, I've spent many weeks with real-time scheduling fanatics in the Real-Time Java Working Group and the Real-Time Java Experts Group. The Real Time Specification for Java has carefully-prepared slots for improved scheduling algorithms. They are there because the experts group wants the specification to be relevant after the real-time community moves away from static priority scheduling. There is a serious possibility this movement will happen.

I started out profoundly skeptical. I have used static priority scheduling for decades and never found a scheduling problem I could not solve. More fundamentally, I, like most real-time programmers, am a control freak. I can think of too many ways a complex, authoritarian scheduler could break a system.

The potential of dynamic priority is closely coupled with “write-once-run-anywhere.” Priorities are all the scheduler needs to know if the only critical fact is the relative ranking of tasks. That implies that only the highest priority task has hard real-time characteristics. If there is more than one real-time task, the scheduler might serve the system better if it knows when each task needs to complete and how long it will take. In the best case, the scheduler can tell you what deadlines it can guarantee, and meet those commitments. This isn't exactly write once run anywhere, but it is the best we can expect in a real-time environment. A priority scheduler cannot make guarantees, and it may fail in unexpected ways if it is moved out of the environment for which it was designed.

This may sound too positive. I am not sure dynamic priority scheduling is ready for widespread use today. It has practical problems starting with the extreme difficulty of putting a strict limit on how much processor time a task will require even in a specified environment, and ending with the NP-completeness of the general scheduling problem.

It is, however, something anyone seriously concerned with real-time systems design needs to track.

Some definitions

We are used to fixed-priority schedulers. The term is a little misleading since priorities are not necessarily fixed. They can be freely changed under program control, but the scheduler is not allowed to change them. Aging (which I will discuss in detail later) pushes this definition, but fixed priority schedulers with aging are still considered fixed priority. This is mainly a non-issue since aging is not common in real-time schedulers.

Priority alterations by priority inheritance or priority ceiling protocols are also considered okay for fixed-priority schedulers. The scheduler is probably involved in the priority changes, but the motivation is different, so these practices do not constitute dynamic priority scheduling. If priorities are assigned at design time (as by rate monotonic analysis, or RMA), that is termed static priority assignment.

Hard real-time tasks must finish at a particular time. Something really bad will happen if they don't.

Soft real-time tasks have a preferred completion time, but the consequences of not completing on time are not totally catastrophic. There is a continuum between not real-time and hard real time. In the world of engineering trade-offs, failure is seldom totally unthinkable.

The target completion time for a task is called its deadline.

Non-schedulable entities use a share of a system resource (like CPU time) but are not controlled by the scheduler. Interrupt service routines are a good example.

A dynamic priority scheduler may never deal with priority. It arranges scheduling queues by execution eligibility, and if tasks want to communicate a metric of relative importance to the scheduler, they call it importance. The scheduler controls the order of the queues, and it may reshuffle them on any significant event: passage of time, completion of a task, change of a lock's state, or appearance of a new runnable task. The effective priority of tasks may be in constant flux, it is invisible to programmers, and they should not be concerned with it (except programmers implementing the scheduler).

Preemption is the interruption of a lower-priority task when a higher-priority task becomes ready to run. Systems that don't support preemption are called run until block because that is what tasks do under those systems. A system that preempts tasks is called preemptive.

“Classical” scheduling

Often priorities do an excellent job of communicating a programmer's desires to the scheduler. Actual deadlines are not clearly articulated, and many of them are soft. The importance of tasks is relative, for example:

  • No matter what is going on, if someone pushes the abort button, deal with it immediately.
  • The power-fail line is just a little bit less important than the abort button.
  • ….
  • When nobody else wants time, record events on a printed log.
  • In any left-over time, run system diagnostics.

For many systems this level of description is complete and useful. It is wonderfully easy to implement a scheduler for it, and the scheduler is efficient to the point of triviality. It may well use only a small constant amount of time for any scheduling event.

Programmers particularly like the fact that they can easily predict and control what task should be running at any moment.

Rate monotonic analysis

This idea is from Liu and Layland.[1] They proved that if a fixed priority preemptive system is to run a set of independent periodic tasks, no better way exists to statically assign priorities to the periodic tasks than to assign them such that tasks with shorter periods get higher priorities.

That thought has been considerably extended, but the fundamental idea remains. Techniques have even been developed to allow RMA scheduling of aperiodic tasks (by squeezing them into periodic allocations), and to analyze systems with interactions between tasks (by bounding the blocking time for the interactions with techniques like priority ceiling protocol).

One particularly important feature of RMA is that a system's feasibility can be analyzed. Using RMA, an architect can gather the facts about a system, then turn the analytical crank until it produces a “yes, schedule it like this” or “no, this cannot be scheduled with a fixed priority scheduler.” Impossibility results are particularly useful.
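The best-known form of that analytical crank is the Liu-Layland utilization bound, which is a sufficient (though not necessary) test for fixed-priority feasibility. A minimal sketch, with invented task parameters for illustration:

```python
# Rate monotonic feasibility check using the Liu-Layland utilization bound.
# Each task is (cost, period). Passing the bound guarantees feasibility;
# failing it does not prove infeasibility (the test is only sufficient).

def rma_feasible(tasks):
    """Return True if the task set passes the Liu-Layland bound."""
    n = len(tasks)
    utilization = sum(cost / period for cost, period in tasks)
    bound = n * (2 ** (1 / n) - 1)  # approaches ln 2, about 0.69, as n grows
    return utilization <= bound

# Two tasks: 1ms of work every 4ms, and 2ms of work every 6ms.
print(rma_feasible([(1, 4), (2, 6)]))  # utilization 0.583, bound 0.828
```

The shrinking bound as n grows is the source of the 69% figure discussed below: for large task sets, the simple test cannot promise feasibility above roughly 69% utilization.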

Unlike some products of the academic real-time community, rate monotonic analysis can be applied to ordinary commercial systems. Every serious real-time operating system has a fixed priority preemptive scheduler suitable for analysis.

Rate monotonic scheduling's limits

Rate monotonic analysis is comparatively simple because it makes simplifying assumptions. The assumptions apply to a substantial number of problems, but RMA is not the universal tool that solves all scheduling problems.

Perhaps most offensive to the standard thrifty embedded engineer is the CPU time that RMA uses. The simple, general version of RMA cannot commit to the feasibility of a system that uses more than 69% of available CPU time. If the architect falls back on old fashioned time-line analysis, RMA can be pushed all the way to 100% utilization, but we're into tedious hand simulation, not elegant algebra.

In some systems, nearly every event is periodic, but most have a significant aperiodic component. The real world is like that. Aperiodic events can be handled in a strictly periodic system by servicing them in periodically scheduled time slots, but that effectively reduces the system to polling. The system designer has to decide how often to poll for aperiodic events, and how long to allow for servicing them.

Then there is the issue of soft real time. RMA can incorporate a measure of importance by bumping a task's priority above the level dictated by its period. This will push failure toward tasks that have not been bumped, but it runs contrary to the whole idea of RMA. The procedure analyzes feasibility, and then guarantees success. It is not designed to work for systems that are infeasible.

Limitations of fixed priority scheduling

The worst problem with fixed priority scheduling is that it is oblivious to deadlines; that is, blind to hard real time. It is like a traffic light that operates on a timer with no traffic sensors. Consider Figure 1. The top timeline (H) represents a high-priority task. The lower timeline (L) represents a lower-priority task. The scheduler executes the higher priority task first, easily completing its computation before its deadline. When task H completes, the scheduler switches to task L, but not enough time is available to complete it before its deadline. If the scheduler were aware of the deadlines, it could have run L before H, and met both deadlines. An experienced real-time programmer could finesse this problem by manipulating priorities as the system runs, or by playing with locks, but at this point the engineer is fighting the scheduler, rather than working with it.
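The Figure 1 scenario can be sketched in a few lines. The release times, costs, and deadlines here are illustrative assumptions, not taken from the figure:

```python
# Sketch of the Figure 1 scenario: under fixed priority, H runs first and
# L misses its deadline; a deadline-aware order meets both deadlines.
# All numbers are invented for illustration.

H = dict(name="H", cost=3, deadline=10)  # high priority, generous deadline
L = dict(name="L", cost=4, deadline=5)   # low priority, tight deadline

def simulate(order):
    """Run tasks back-to-back from time 0; return the names that miss."""
    t, misses = 0, []
    for task in order:
        t += task["cost"]
        if t > task["deadline"]:
            misses.append(task["name"])
    return misses

print(simulate([H, L]))  # fixed priority: H first, L finishes at t=7 > 5
print(simulate([L, H]))  # deadline-aware: L first, both deadlines met
```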

Advantages of fixed priority scheduling

The overwhelming advantage of fixed priority scheduling is that it is entrenched in existing practice:

  • Every mainstream operating system or task switching kernel supports fixed priorities.
  • Real-time engineers have been using it all their professional careers.
  • Books, papers, and professional development courses are available on the use of fixed priority scheduling.

Embedded engineers are not generally risk takers, and real-time engineers are risk-averse in the extreme. For them, the track record bullet may be necessary and sufficient.

Control is another big point. Dynamic priority schedulers tend to be complex and sometimes use heuristics. Embedded engineers want control of every resource. They don't feel comfortable with a scheduler that essentially says “trust me, it will be great.”

Efficiency matters. A fixed priority scheduler needs a few hundred bytes and a tiny share of the CPU time. A dynamic priority scheduler may go way beyond that.

And for many systems, priority is the best abstraction for scheduling.

Solved problems

Over the years, fixed priority scheduling has been well polished. Time sharing, and some real-time systems, need to allow all tasks to make some progress. The notion of aging was developed for this. It gives each task what amounts to an effective priority that is gradually increased while the task waits to execute. Eventually every task will get an effective priority that will give it at least a brief slice of CPU time.

A high-priority task may find itself waiting for a low-priority task to release a resource. This drops the effective priority of the high-priority task below the low-priority one. This can be designed around, but it is pernicious to debug, and it causes trouble for RMA-oriented analysis. Priority inheritance and priority ceiling protocols were invented to solve this problem. They boost the priority of the low-priority task while it holds a resource that a higher-priority task needs.

Until the mid '80s, formal analysis of fixed priority scheduling was either extremely tedious, or impossible. Liu and Layland's RMA gave engineers formal tools to analyze fixed priority systems.

Soft real time

It might not be overstating the case to say that fixed priority schedulers are only suitable for soft real-time systems. The fact that they have been successfully used for numerous hard real-time systems is testament to the determination of engineers who have forced hard real-time constraints into systems that were not designed for them.

How can a system that has no concept of a deadline be said to support meeting them? True, a good real-time OS is designed to stay out of the application's way, but this only means that it does not unnecessarily interfere with hard real-time processing. A bare processor does as well.

Is hard real time an issue? Few systems are rigorously hard real time. Life goes on after a missed deadline. Perhaps a truly hard real-time system is in the same class as truly “non-stop” computing: important, but rare and expensive.

Hard real time may be rare, but applications for the technology are not. Consider a server for streaming media. The world will not end if it fails to deliver packets in time, and it may be okay if it occasionally misses deadlines. The server will be judged partly on how many streams it can serve concurrently, and partly on the quality of those streams. Software that can determine when a server is fully committed with enough certainty that it can commit nearly 100% of the server and still deliver flawless streams is valuable. The same problem/opportunity extends through the network that delivers the stream. Every switch and router can benefit from software that understands deadlines.

Consider a disk file system. The classic system is optimized for throughput, but that may not be appropriate in a time-sharing system. Perhaps the file system would benefit from deadline-scheduling technology applied to scheduling I/O requests. In an embedded system this technology might appear hidden in an intelligent disk drive.

Consider elevator scheduling (for real elevators). What if the elevator were scheduled, not to minimize travel, but to keep users' wait time to a minimum? I'd rather like it if the elevator would give me a time commitment when I pressed the call button. Setting the time commitment when users select a destination might be an interesting exercise in sociology.

Deadline scheduling

Dynamic priority scheduling generally starts with deadlines, or for periodic tasks, sets of deadlines. The scheduler determines the execution order for tasks based on the deadlines.

A scheduler can get a lot of mileage out of deadlines, but to make guarantees it needs cost as well. If a task registers a deadline and the amount of CPU time it will need to reach that deadline, the scheduler can perform feasibility analysis. If the scheduler determines that it can construct a schedule that will meet the new deadline as well as all the deadlines it has already accepted, it will accept the new deadline. Unless the scheduler is allowed to do this type of admission control it cannot make realistic commitments.

If all tasks are well behaved and never attempt to use more CPU time than they requested in their cost metric, admission control is sufficient. If tasks sometimes underestimate their processor requirements, a permissive scheduler will allow the task that had an incorrect cost metric to compromise the deadlines of other tasks. To prevent this, the scheduler may implement an enforcement policy. If a task attempts to exceed its budget, something bad will happen to it. Maybe it will be removed from the system. Maybe it will just be preempted and allowed to resume later. In any case it is likely to miss its deadline, but it will not affect the correctness of the rest of the schedule. This type of policy is familiar to users of old batch computers, which actually used job scheduling policies very like modern deadline scheduling.
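For independent periodic tasks under EDF, admission control can be as simple as keeping total utilization at or below 1.0. A toy sketch (class and method names are my own invention):

```python
# Toy EDF admission controller. A new periodic task is accepted only if
# total utilization (sum of cost/period) stays at or below 1.0, the EDF
# feasibility condition for independent periodic tasks with deadlines
# equal to their periods.

class EdfAdmission:
    def __init__(self):
        self.tasks = []

    def request(self, cost, period):
        """Admit the task and return True, or refuse and return False."""
        new_utilization = sum(c / p for c, p in self.tasks) + cost / period
        if new_utilization <= 1.0:
            self.tasks.append((cost, period))
            return True   # commitment made: the schedule stays feasible
        return False      # refused: would jeopardize accepted deadlines

ctrl = EdfAdmission()
print(ctrl.request(2, 4))   # 50% utilization: accepted
print(ctrl.request(3, 6))   # brings total to exactly 100%: accepted
print(ctrl.request(1, 10))  # would exceed 100%: refused
```

An enforcement policy is what makes the refusal meaningful: without budget enforcement, an admitted task that overruns its declared cost can still wreck the schedule.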


The first hard real-time dynamic priority scheduler is called Earliest Deadline First (EDF). Liu and Layland proved that if a schedule exists that will meet all deadlines, a schedule computed by always executing the task with the earliest uncompleted deadline meets all deadlines.

EDF is attractive. It is efficient and easy to compute. It is easy to reason about. It has formed the basis of a large part of the dynamic priority scheduling research.

EDF has only one serious drawback. The theory only says it is optimal for loads that can be scheduled. It does not address overloads. EDF degrades ungracefully when it is overloaded. This is not an issue that should trouble a serious hard real-time system. If any degradation is catastrophic, there can be no graceful degradation.


EDF scheduling is fairly easy to implement. The execution queue is sorted by the time to the next deadline. When a task becomes active it must be inserted in the execution queue, or, if its deadline is before the deadline of the currently executing task, it must preempt the current task.
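The core mechanism is just a priority queue keyed by absolute deadline. A minimal sketch of the ready queue (task names and deadlines are invented):

```python
import heapq

# Minimal EDF ready queue: a binary heap ordered by absolute deadline.
# A sketch of the mechanism described above, not a full kernel scheduler.

class EdfQueue:
    def __init__(self):
        self._heap = []

    def activate(self, deadline, name):
        """A task becomes runnable; insert it by its absolute deadline."""
        heapq.heappush(self._heap, (deadline, name))

    def pick(self):
        """The task the CPU should run now: earliest uncompleted deadline."""
        return self._heap[0][1] if self._heap else None

    def complete(self):
        """The running task finished; remove it from the queue."""
        return heapq.heappop(self._heap)[1]

q = EdfQueue()
q.activate(30, "log")
q.activate(10, "sensor")
q.activate(20, "display")
print(q.pick())  # sensor: its deadline (10) is the earliest
```

Preemption falls out naturally: when `activate` inserts a deadline earlier than the running task's, the dispatcher switches to the new head of the queue.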

If all tasks are strictly periodic (they only become active because of the passage of time), preemption is not necessary. This simplifies the scheduler and reduces context switching overhead.

Feasibility analysis is simple and O(n), if tasks are allowed to be executed anywhere in their period; that is, early results are acceptable, and tasks are prepared to run at any time. Seen another way, this permits tasks to have jitter roughly equal to their period. Finer control of deadlines puts analysis in the pseudo-polynomial time class. That is only a good class if the alternative is exponential time.

Slack time

Slack time, or laxity, is the amount of time between when a task would complete if it started now and its next deadline. This is the size of the available scheduling window.

Scheduling tasks based on slack time requires cost metrics for scheduling, but it works as well as EDF, degrades a little more gracefully under overload, and extends relatively well to scheduling multiple processors. The algorithm is called Least Laxity First, or LLF. Its operation is well described by its name.
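The selection rule is a one-liner once each task carries a deadline and a remaining-cost estimate. A sketch, with invented task data:

```python
# Least Laxity First selection sketch.
# Laxity = (deadline - now) - remaining_cost: how long the task could be
# delayed from this instant and still meet its deadline.

def pick_llf(now, tasks):
    """tasks: list of (name, deadline, remaining_cost) tuples.
    Returns the task with the smallest laxity: most urgent under LLF."""
    return min(tasks, key=lambda t: (t[1] - now) - t[2])

ready = [("audio", 15, 4),  # laxity 11
         ("net",   12, 2),  # laxity 10  <- least laxity
         ("ui",    30, 5)]  # laxity 25
print(pick_llf(0, ready)[0])
```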

Least laxity first needs to be modified a little to prevent excessive context switching. If multiple tasks have nearly the same laxity, strict LLF would have them context switch every time the scheduler gets control. The usual modification is that the scheduler does not call for a context switch if the switch will not change the deadlines that are met.

Aperiodic tasks

The scheduler would love to schedule everything, but some things are out of its control. Interrupt service, DMA, and tasks that for some reason need to be scheduled at higher “priority” than the scheduler disrupt the pre-planned execution sequence. These are called non-schedulable entities and are anathema to a deadline scheduler. Some analysis techniques allow for non-schedulable entities, but they decrease the efficiency of the scheduler. Systems that are designed for a deadline scheduler make strenuous efforts to reduce the number of non-schedulable entities and the amount of CPU time that they need.

Aperiodic tasks are a challenge for dynamic priority schedulers:

  • Feasibility analysis is generally a demanding task. It is not really suitable for execution while the system is trying to meet deadlines.
  • What if an event arrives, and feasibility analysis says it cannot be serviced?
  • The load of aperiodic events is usually described as a statistical distribution. This is nearly unstudied territory.

Aperiodic tasks/events can be scheduled using priority scheduling at higher priorities than the dynamic priority tasks. This works when the aperiodic events need hard real-time service and the periodic events are relatively soft real time. All the parts of this system are well studied. Its major problem is that the scheduler must view all the aperiodic activity as non-schedulable entities. This makes it inefficient at scheduling periodic tasks since it has to allow for huge non-schedulable overhead.

A periodic server that is scheduled by the dynamic priority scheduler can serve the aperiodic events. This approach brings the aperiodic events under the control of the scheduler. Scheduling theorists have worked hard to make this approach practical. The problem is how to give the aperiodic events good response time without reserving too much time for them. To give the aperiodic events nearly the response time they would have in a priority scheduled system, the server must be scheduled with a short period. To handle peak loads of aperiodic events that happen to arrive in a cluster, the cost of the server must be high. That is a recipe for reserving all the processor time for service of aperiodic events.

If the aperiodic events can be classified as soft real time, the scheduler can do a better job with them. This is not unreasonable. For instance, keyboard input would like to be serviced in about a tenth of a second, but if it occasionally needs two-tenths of a second the world will not end. The scheduler can do clever things like handing unused time forward from period to period, so the aperiodic server can reserve a little more than the average cost per period instead of the maximum cost per period. (See Spuri and Buttazzo's work on an algorithm called the Total Bandwidth Server.[2])
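The heart of the Total Bandwidth Server is its deadline-assignment rule: each aperiodic request is stamped with a deadline far enough out that the server never consumes more than its reserved utilization, and the request then competes under ordinary EDF. A sketch of that rule (class name and parameters are my own):

```python
# Total Bandwidth Server deadline assignment, sketched from Spuri and
# Buttazzo's rule: d_k = max(arrival_k, d_{k-1}) + cost_k / U_s,
# where U_s is the utilization reserved for the aperiodic server.

class TotalBandwidthServer:
    def __init__(self, utilization):
        self.u = utilization       # fraction of the CPU reserved, e.g. 0.25
        self.last_deadline = 0.0

    def assign_deadline(self, arrival, cost):
        """Stamp an aperiodic request with an EDF deadline that caps the
        server's long-run demand at its reserved utilization."""
        self.last_deadline = max(arrival, self.last_deadline) + cost / self.u
        return self.last_deadline

tbs = TotalBandwidthServer(0.25)
print(tbs.assign_deadline(0.0, 1.0))  # 4.0: one unit of work at a 25% share
print(tbs.assign_deadline(2.0, 0.5))  # 6.0: queued behind the first request
```

The `max` with the previous deadline is what handles clustered arrivals: a burst of requests gets deadlines spread out in time rather than all claiming the processor at once.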

How we can get there

How can we bring dynamic priority scheduling out of its academic and niche market backwater into the mainstream of embedded real-time work? The hooks in the Real Time Specification for Java should help. Outside the Department of Defense, few groups can afford to support the construction of a production-quality run-time environment just so they can try dynamic priority scheduling. Java is available almost everywhere, and its strengths (it's easy to program and extremely portable) align well with the strengths of dynamic priority scheduling. They both depend to some extent on cheap processor cycles for success.

Then people have to start talking about dynamic priority scheduling to the general embedded real-time community. A “killer app” would help a lot. Media servers, or offering guaranteed network quality of service might be it. Or it might appear at the other extreme. Maybe the killer app will be guaranteeing hard real time for cyclic engine controller activities while sharing the processor with a varying load of entertainment and environmental control applications.

All the NP-complete and otherwise complex algorithms don't help. Real-time programmers don't like heuristic algorithms, and exponential time requires a compelling justification.


Real time and write once run anywhere are fundamentally at odds. A real-time load that runs successfully on one computer very likely depends on the hardware of that computer and the mix of other activities with which it must share the machine. Change anything, and the program may become infeasible, that is to say, fail.

Dynamic priority scheduling does not give write-once-run-anywhere, but it can let an application know before it starts whether it will be able to complete successfully. This is compelling for distributed and multiprocessor systems, where the program can use feasibility analysis to scale the resources until it becomes feasible. It is useful, and probably the best we can hope for on uniprocessor systems.

Heads up

This article is a heads up. Dynamic priority scheduling may not be the tool you need today, but it has tremendous potential. When it gets here it could be the most important thing ever to hit real-time programming.

Peter Dibble is a senior scientist for Microware and a member of the Real Time Java Experts Group. In addition to his extensive Java and real-time performance research, he has written numerous technical articles and several books. Peter holds a PhD in computer science from the University of Rochester. Contact him at .


1. Liu, C.L. and J.W. Layland, “Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment,” Journal of the ACM, Vol. 20, No. 1, Jan. 1973, pp. 46-61.

2. Spuri, M. and G.C. Buttazzo, “Scheduling Aperiodic Tasks in Dynamic Priority Systems,” Real-Time Systems, Vol. 10, No. 2, 1996.
