Agile development of real-time systems
Where agility meets (and clashes with) real-time
Real-time systems are often developed using a classical development method. The reasons for this are two-fold, and centre on the fact that real-time systems have a strictly defined set of interfaces.
Because the interfaces and the real-time behaviour are defined so strictly, there is on the one hand no need for an agile approach (in that a real-time system has a fixed functionality that has been defined long before the project started), and on the other hand there is little scope for an agile approach (in that it is not easy to implement a subset).
This last point is also a consequence of the fact that many real-time systems are intricately designed software functions and hardware components, where small changes to either may violate the real-time properties, thereby rendering the product useless.
However, we argue that both of those points are invalid: there is both scope to use an agile development method and a benefit to be gained from doing so. Our argument relies on building systems on a predictable architecture, with support from tools that predict and check the timing properties of the system under development.
Predictable architectures
We define an architecture to be predictable when the programmer can reason about timings of a program written for that architecture. Most modern architectures are very unpredictable. Standard architecture items such as caches for program and data, pipelines with pipeline hazards, interrupts and shared memory make it difficult to predict how long a section of code will take to execute.
Each of those features can affect timings by an order of magnitude. Even though a worst case execution time (WCET) prediction may be feasible, it is usually so far off from typical timing behaviour that using the WCET will result in an over-engineered and uneconomical system. Operating systems running in conjunction with the program may worsen predictability by descheduling a process, or by granting exclusive access to a resource to another process.
Predictable architectures, such as the XMOS XCore, enable the programmer to reason about the timings of their code with tight bounds. On a predictable architecture, the difference between best-case and worst-case timing is typically caused only by the data-dependent behaviour of algorithms.
The programmer may need a tool to help them make the prediction, but there is a close relationship between the source code of the program and the time required to execute it. Small, innocent-looking changes to the source code will not lead to a big change in timings on a predictable architecture. This is in contrast with a pipelined or cached architecture, where any change could alter execution time by an order of magnitude.
Since shared memory, interrupts, and process switching cause unpredictable timing, predictable equivalents have to be offered for use by programs on a predictable architecture.
The predictable equivalent of process switching is true concurrency: either in the form of multi-core processing, where processes that run on different cores are truly concurrent and are not scheduled, or in the form of hardware threads that are switched on an instruction-by-instruction basis, which offer a predictable alternative to true concurrency.
Interrupts are traditionally used to serve real-time tasks, but at the same time they make real-time reasoning hard for the remainder of the code. On a predictable architecture the replacement is events, where a program responds to events at well-defined places in the program.
Event driven programming is a method that is decades old, but using it at architecture level offers the flexibility to deal with real-time requests in such a way that it does not randomly interfere with other code segments.
This is shown in Figure 1 above. Traditionally, an interrupt forces a register save and restore, and causes a task to be interrupted (in this example C interrupts A, causing unpredictable behaviour in task A).
In a predictable machine with multiple threads and events, a thread instead waits for an event. This makes it explicit that one of several tasks may run at a known point in the program, avoids the register save and restore, and makes the timing behaviour visible in the source code.
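The event-handling structure can be sketched with Go's `select`, which plays a role analogous to architecture-level events: the thread waits at one well-defined point for whichever event arrives, rather than being interrupted at an arbitrary instruction. The event sources (`sensor`, `timer`) are invented for illustration.

```go
package main

import "fmt"

// serve waits at a single, known program point for one of several events.
// Each event is handled where the source code says it is handled; no code
// elsewhere in the thread can be interrupted by these events.
func serve(sensor, timer <-chan int, log chan<- string, done <-chan struct{}) {
	for {
		select {
		case v := <-sensor: // sensor event, handled here and only here
			log <- fmt.Sprintf("sensor event: %d", v)
		case t := <-timer: // timer event
			log <- fmt.Sprintf("timer event: %d", t)
		case <-done:
			close(log)
			return
		}
	}
}

func main() {
	sensor, timer := make(chan int), make(chan int)
	log := make(chan string)
	done := make(chan struct{})
	go serve(sensor, timer, log, done)
	go func() { sensor <- 42; timer <- 7; close(done) }()
	for line := range log {
		fmt.Println(line)
	}
}
```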
Shared memory is unpredictable because access to it must be regulated. If two processes exchange data through a region of shared memory, then one process must not read the shared data while the other writes to that location; if it does, it may read inconsistent data.
Instead of using shared memory, we assume that all threads use memory private to the thread only, and that threads communicate by means of messages passed over channels.
An obvious timing dependency is that a thread will be blocked if it waits for a message, but this is the only place where the programmer has to worry. Indeed, the programmer knows which thread will be waiting for which, and can build an argument (or even proof) that threads will not unduly wait for each other.
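The channel discipline described above can be sketched in Go, whose channels follow the same model: each goroutine keeps its data private and hands values over by message, so the only timing interaction is the consumer blocking until a message arrives. The producer/consumer pair is an invented example, not code from the article.

```go
package main

import "fmt"

// producer owns its loop state privately and passes each value over the
// channel; no memory is shared with the consumer.
func producer(out chan<- int) {
	for i := 0; i < 3; i++ {
		out <- i * i // blocks only until the consumer takes the value
	}
	close(out)
}

// consumer accumulates into a slice that is private to this thread, then
// hands the finished result over a channel as well.
func consumer(in <-chan int, results chan<- []int) {
	var local []int
	for v := range in {
		local = append(local, v)
	}
	results <- local
}

func main() {
	ch := make(chan int)
	results := make(chan []int)
	go producer(ch)
	go consumer(ch, results)
	fmt.Println(<-results) // [0 1 4]
}
```

Because the only blocking points are the channel operations, the programmer can enumerate exactly where one thread waits for another, which is what makes the argument (or proof) about waiting times possible.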

