Non-real-time vs. soft real-time vs. hard real-time applications

My chum, Rich Quinnell, recently posted a column titled Embedded systems survey uncovers trends & concerns for engineers. Based on this, community member Bob Snyder emailed Rich some very interesting comments and questions.

Rich brought me into the conversation and — with Bob's permission — we decided I should post this blog presenting his comments and questions, and soliciting input from the EE Times and Embedded communities. So here is Bob's message:


I really appreciate the hard work that the folks at UBM do to provide the embedded survey results.

It seems as though non-real-time, soft real-time, and hard real-time applications have very different requirements. I have been closely studying the survey results for many years and trying to understand (or imagine) how the responses to some of the questions might be correlated.

For example, it would be nice to know how the overall movement to 32-bit MCUs (most of which have cache) breaks down by application (e.g., non-real-time, soft real-time, and hard real-time).

Are popular 32-bit MCUs, such as ARM and MIPS, being widely adopted for hard real-time applications where worst-case execution time is at least as important as average-case execution time, and where jitter is often undesirable? If so, do people disable the cache in order to achieve these goals, or simply throw MIPS at the problem and rely upon statistical measures of WCET and jitter?

Performance penalty of completely disabling the cache
Microchip's website explains how to completely disable the cache on a PIC32MZ (MIPS M14K core). The article says that doing this will reduce performance by a factor of ten: “You probably don't want to do this because of lower performance (~10x) and higher power consumption.”

Somebody at the University of Toronto ran a large set of benchmarks comparing various configurations of a specific ARM processor. When they compared a fully-enabled cache configuration to an L2-only configuration, the L2-only setup was six times slower (highlighted in the red rectangles in their results). It seems reasonable to assume that if L2 had also been disabled, performance would have been even worse.

Based upon this data, it seems reasonable to conclude that when the cache is completely disabled on a 32-bit micro, the average performance is roughly ten times worse than with the cache fully enabled.

Why would anyone use a cache-based MCU in a hard real-time application?
The fastest PIC32 processor (the PIC32MZ) runs at 200 MHz. With the cache fully disabled, it would effectively be running at 20 MHz. The 16-bit dsPIC33E family runs at 70 MHz with no cache. Admittedly, the dsPIC will need to execute more instructions if the application requires arithmetic precision greater than 16 bits. But for hard real-time applications that can live with 16-bit precision, the dsPIC33E would seem to be the more appropriate choice.

I am having trouble understanding the rationale for using an ARM or PIC32 in a hard real-time application. These chips are designed with the goal of reducing average-case execution time at the expense of increased worst-case execution time and increased jitter. When the cache is disabled, they appear to have worse performance than devices that are designed without cache.

Atmel's 32-bit AVR UC3 family has neither an instruction cache nor a data cache, so this is not a 32-bit issue per se. But it seems that the majority of 32-bit MCUs do have cache and are targeted at soft real-time applications such as graphical user interfaces and communication interfaces (e.g. TCP/IP, USB) where WCET and jitter are not major concerns.

Breakdown by market segment
It seems to me that there will always be a large segment of the market (e.g., industrial control systems) where hard real-time requirements would militate against the use of a cache-based MCU.

It would be interesting to see the correlation between choice of processor (non-cached vs. cached, or 8/16 vs 32 bits) and the application area (soft real-time vs. hard real-time, or GUI/communications vs. industrial control). I wonder if it would be possible to tease that out of the existing UBM survey data.

Looking at the 2014 results
With regard to the question “Which of the following capabilities are included in your current embedded project?” we see that over 60 percent of projects include real-time capability. The question does not attempt to distinguish between hard and soft real-time. And the 8/16/32-bit MCU question does not distinguish between cached and non-cached processors. Nevertheless, it might be interesting to see how the 8/16/32-bit responses correlate with the real-time and non-real-time responses, or with the signal-processing responses. I find it hard to believe that a large number of projects are using cached 32-bit processors for hard real-time applications.

It is interesting to note that every response for the capabilities question shows a falling percentage between 2010 and 2014. This suggests that other categories may be needed. I suppose it is possible that fewer projects required real-time capabilities in 2014, but it seems more likely that there was an increase in the number of projects that required other capabilities such as Display and Touch, which are not being captured by that question.

Thanks for considering my input, Bob Snyder.

It's certainly true that non-real-time, soft real-time, and hard real-time applications have different requirements, but I'm not sure how best to articulate them. Do you have expertise in this area? Do you know the answers to any of Bob's questions? How about Bob's suggestions as to how we might consider refining our questions for the 2016 survey? Do you have any thoughts here? If so, please post them in the comments below.

11 thoughts on “Non-real-time vs. soft real-time vs. hard real-time applications”

  1. “There are even more aspects to take into consideration than just the level of timeliness. An event occurring once a year might still be hard real time, but the scale is something else than what is required from a software-based PWM controller or serial protocol…”

  2. “Hi Jussi — that's an interesting point that an event may be extremely infrequent but still demands hard real-time response — thanks for sharing.”

  3. “Bob makes a very good point about the effects of cache on obtaining a deterministic response. For those hard real-time cases where you need a fast deterministic response, cache is not appropriate. However, there are a lot of hard real-time industrial…”

  4. “The performance charts look like they were run on a multicore A-series ARM with Linux as the OS. In my book, that's not a very good sample to represent a reference platform for hard real-time applications. Multicore is a no-no for any event-based real-time ap…”

  5. “‘Where I work, we're using 32-bit processors but all the time-critical processing is done in the FPGAs.’ Using a 32-bit processor for non-real-time tasks in conjunction with an FPGA for the time-critical tasks would clearly have some big advantages.”

  6. “Most ‘high end’ FPGAs have processor cores (either hard or soft), so what's available now is the MCU inside the FPGA. The problem with that is that FPGAs generally don't have Flash, so then you need external memory for program storage and the cost goes up…”

  7. “Some MCU architectures have a small dedicated processor for hard real-time handling; for instance, the Freescale MPC5676 and other powertrain control MCUs have the eTPU, an event-driven RISC processor, while the main core is a PowerPC with cache. Some Infine…”

  8. “I had this concern when I tried to answer the survey: do they mean a soft or a hard real-time capability? I also wish they had added a question about using multiple processors/multiple OSes in your design.”

