A different take on asserts - Embedded.com



A few additions to assert macros will make them useful for analyzing execution times.

Expert programmers use the assert() macro to seed their code with debugging statements. So should we all.

Readers unfamiliar with assert() should check out Niall Murphy's columns in the April and May 2001 issues of Embedded Systems Programming. Niall does a great job of describing this debugging construct. Basically, if assert() 's argument is zero—that is, if the asserted condition is false—it flags an error message.

The macro must have no effect on the system—it shouldn't change memory, interrupt state, or I/O, and it should make no function calls. The user must feel free to sprinkle these at will throughout the program. Disabling the asserts in released code must never change the way the program operates.

The macro generally looks something like:

#define assert(s) if (s) {}     \
  else printf("Error at %s %d: ", __FILE__, __LINE__)

It's sort of an odd way to write code—if the condition is true (in which case assert() takes no action), there's a null pair of braces. Why not test the negation of the statement “s”?

The answer lies in my, uh, assertion above: the macro must have no effect on the system. Code it without the else, and poorly written software, perhaps including an unmatched else from another if (this is a debugging construct, after all; its role is to look for dumb mistakes), will cause the assert to change the system's behavior in unpredictable ways.
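That hazard is easy to demonstrate. In this sketch (the macro and function names are mine, not from the column), the negated form lets a stray else silently bind to the macro's hidden if; the null-braces form would turn the same mistake into a compile error instead:

```c
#include <assert.h>

static int error_count = 0;

/* the risky, negated form -- ends in an if with no else */
#define BAD_ASSERT(s) if (!(s)) error_count++

int bad_demo(int x)
{
    int took_else = 0;
    if (x > 0)
        BAD_ASSERT(x < 100);
    else                 /* the programmer meant this for `if (x > 0)`... */
        took_else = 1;   /* ...but it binds to the macro's hidden if */
    return took_else;
}
```

With x = 5, the outer condition is true and the assertion passes, yet the else runs anyway; with x = -1 the else never runs at all. Control flow is exactly backwards from what the indentation promises.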

Embedded issues
Few embedded systems make use of the assert() macro, since we're usually in an environment with neither display console nor printf. Where do the error messages go? Equally problematic, embedding strings ("Error at" and the file name) eats valuable ROM.

One alternative is to replace the printf with a software interrupt. Plenty of processors have a one byte or single word instruction that vectors the execution stream to an interrupt handler. Write code to capture the exception and take some sort of action. Log the error to a data structure, flash an LED, or put the device in a safe mode.
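A minimal sketch of that idea, with the trap replaced by a plain function call so it runs anywhere; the log layout and names are illustrative, not from the column:

```c
#include <assert.h>
#include <stdint.h>

/* hypothetical error log -- size and layout are illustrative */
#define LOG_SIZE 8
static uint16_t err_log[LOG_SIZE];
static unsigned err_count = 0;

/* On real hardware this handler sits behind a one-byte or one-word
   trap instruction (INT3 on x86, BKPT on ARM, etc.); here it just
   logs the failing line and returns. */
static void assert_handler(uint16_t line)
{
    if (err_count < LOG_SIZE)
        err_log[err_count] = line;
    err_count++;
}

#define ASSERT(s) if (s) {} else assert_handler((uint16_t)__LINE__)
```

The handler could equally flash an LED or drop the device into a safe mode; the point is that the action lives in one place, not scattered through the code.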

Initialize your debugging tools to automatically set a breakpoint on the new handler. You'll forget the breakpoint is set. But once in a while, Surprise! The system suddenly stops, the debug window highlighting the first line of the exception routine. You've captured a bug in its early phases, when it's easiest to track.

Systems constrained by small ROM sizes can leave out the __LINE__ and __FILE__ arguments. Pop the stack and log the caller's return address. It's an easy matter to track back to find which assertion threw the exception.
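One way to capture the caller's address, sketched here assuming GCC or Clang (__builtin_return_address is compiler-specific; other toolchains need a line of assembly to pop the stack):

```c
#include <assert.h>

static void *fail_addr = 0;

/* noinline so the return address points at the failing call site */
static void __attribute__((noinline)) assert_failed(void)
{
    /* the address the failing assert returns to; track it back to a
       source line with the linker's map file */
    fail_addr = __builtin_return_address(0);
}

#define ASSERT(s) if (s) {} else assert_failed()
```

No strings land in ROM; the only cost per assertion is the test and a call.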

Most developers embed assert() inside a preprocessor conditional. Flip a compile-time switch and they all disappear. This gives us a highly-debuggable version of the code for our own use and a production release not seeded with the macros.
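A sketch of that compile-time switch; SHIP_IT and assert_handler are hypothetical names standing in for whatever your project uses:

```c
#include <assert.h>

static int assert_failures = 0;
static void assert_handler(int line) { (void)line; assert_failures++; }

#ifdef SHIP_IT                   /* one switch strips every assert */
#define ASSERT(s) ((void)0)      /* expands to a harmless no-op */
#else
#define ASSERT(s) if (s) {} else assert_handler(__LINE__)
#endif
```

Note the disabled form is ((void)0), not an empty expansion, so a trailing semicolon still parses the same way in both builds.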

But I'm not keen on shipping code that differs from our test version. NASA's commandment to “test what you fly and fly what you test” makes an awful lot of sense for building other embedded systems as well as spacecraft. When ROM and CPU cycles aren't in short supply, leave the asserts seeded in the code, enabled, not #defined out. If you believe the system can merrily proceed despite an assertion failure, code the exception handler to log the error, leaving debugging breadcrumbs behind, and return.

Is your system connected to the 'net? Why not send an e-mail message to crash@yourcompany.com logging the error with a snapshot of the stack? Be sure to include the firmware's version number.

This is yet another reason to get and use a version control system. If your code isn't static, if new versions proliferate like politicians promising tax relief, the stack contents will be meaningless unless you can recreate earlier releases.

Embedded systems should use asserts to check housekeeping problems, like stack growth. On an x86 processor, the instruction mov ax,sp grabs the current stack pointer. Embed this or a similar instruction for your CPU into an assert that then compares against a nearly-full stack. Seed every task with the macro.
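A sketch of such a stack assert. On real hardware read_sp() would be that one line of assembly; here it's approximated portably with the address of a local, and the limit value is entirely hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* stand-in for `mov ax,sp` (or an MRS on ARM): the address of a local
   variable is a fair approximation of the current stack pointer */
static uintptr_t read_sp(void)
{
    volatile char marker;
    return (uintptr_t)&marker;
}

/* lowest address the stack may legally reach -- on most parts the
   stack grows downward, so dipping below this means near-overflow */
static uintptr_t stack_limit;
static int stack_alarms = 0;

#define STACK_ASSERT() \
    if (read_sp() > stack_limit) {} else stack_alarms++
```

Set stack_limit during initialization from the linker's stack placement, then drop STACK_ASSERT() into every task.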

Check malloc 's return value in an assert, too, if you're gutsy enough to use it in firmware. If malloc fails, it returns NULL. Ignore such failures at your peril!
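One hedged way to do that is a checked wrapper (names are mine), so every allocation is asserted in one place rather than at each call site:

```c
#include <assert.h>
#include <stdlib.h>

static int alloc_failures = 0;

/* wrap malloc so a NULL return is always caught; a real system might
   also log the caller or drop into a safe mode here */
static void *checked_malloc(size_t n)
{
    void *p = malloc(n);
    if (p) {} else alloc_failures++;
    return p;
}
```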

Real-time asserts
I'm frustrated that so many developers of real-time systems think only in the procedural domain, working diligently to build a sequential series of instructions that implement some algorithm, blithely ignoring the timing of their code. Building real-time code means managing time as seriously as we build if-elses. Track free CPU microseconds as well as available bytes of ROM.

There's surprisingly little interest in managing time. C and C++ compilers give us no help. They should produce a listing that gives us the execution time, in clock cycles, for each statement in our program. We compiler users have no idea if a simple ++i takes a microsecond or a week. More complex statements invoke the compiler's runtime library, truly a black hole of unknown execution times.

Years ago a friend worked on a Navy job using the CMS-2 language. The compiler was so bug-ridden that they adopted a unique coding strategy: write a function, compile it, and see if the assembly output made sense. If clearly wrong, change something in the CMS-2 source—even spacing—and hope a recompile produced better results. In effect, they made random changes to their source in hopes of taming the compiler.

We do the same thing when the compiler doesn't produce timing information. How do we fix a slow snippet? Change something—almost anything—and hope. That's more akin to hacking than to software engineering. Compiler vendors do their customers a horrible disservice by not documenting best, worst, and average execution times.

A cool product, despite its high price, was Applied Microsystems' CodeTest. Among other things, it measured the execution time of each function in a program, seeding the code with special instructions whose actions were captured by a hardware probe. Yet Applied failed, never able to generate much interest in that product. MetroWerks (www.metrowerks.com) sells the device now; I wish them success and hope more of the embedded community seeks tools for managing the time part of their applications.

In lieu of buying timing tools, why not build real-time asserts, macros that help us debug in the time domain?

The oldest timing trick is to drive an output bit high when entering a routine, and then low as we exit. Probe the port with an oscilloscope and measure the time the signal is high.

Rather than dropping a pair of outport() C statements directly into the code, I prefer to define assert-like macros. Never sprinkle low-level I/O into anything other than drivers, except in the most ROM-constrained systems.

Call them RT_set_assert() and RT_unset_assert() —the RT meaning real-time—to clearly indicate that these are not your father's asserts anymore.
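A minimal sketch of the pair, with the output port simulated by a plain variable so there's no hardware dependency; on a real part PORTA would be a volatile pointer to the port register:

```c
#include <assert.h>

/* stand-in for an output port register, e.g.
   #define PORTA (*(volatile unsigned char *)0x4000) on real hardware */
static unsigned char PORTA = 0;

#define SCOPE_BIT 0x01

#define RT_set_assert()   (PORTA |= SCOPE_BIT)                   /* entry: bit high */
#define RT_unset_assert() (PORTA &= (unsigned char)~SCOPE_BIT)   /* exit: bit low  */
```

Drop RT_set_assert() at the top of the routine under test and RT_unset_assert() at every exit, then watch the bit on a scope.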

This brain-dead trick has been around forever. But you can learn a lot about the behavior of a system by extending the idea just a bit. Want to measure min and max execution time of a function or interrupt service routine (ISR)? Set a digital scope to store multiple acquisitions on the screen. Trigger on the I/O bit's positive edge. You'll see the bit go high, a clear space on the screen, and then a lot of hash before it goes low. The clear space is the minimum time. The interval from the positive edge to the end of the hash is the maximum execution time.

I often want to know the total amount of time spent servicing interrupts. Heavy interrupt loads are a sure sign of a system which will be a bear to debug and maintain. Drop the RT_assert pair into every ISR. That is, every ISR toggles a common bit. If the scope trace stays mostly low, there's little loading. Mostly high, and kiss your family goodbye for the duration of the project.

We developers are being shortchanged by our tools; the disappearance of the in-circuit emulator gives us ever less insight into our projects' time domains. But we're lucky in that most high-integration CPUs include plenty of timers. Even simple systems today use RTOSes sequenced by interrupts from a timer. If you're not using an RTOS, do program one of these peripherals to track time for the sake of performance measurements.

Create a simple RT_assert pair to compute elapsed time over a function or region of code. RT_enter_assert() saves the clock; RT_exit_assert() figures time, in tics or microseconds. In a preemptive system it's best to pass an argument with a unique identifier for each code snippet being measured. This is then an index into a data structure, so that multiple instances of the assert don't clobber each other's data.

Logging every instantiation of the code chunk in a real system would generate a flood of data. Instead, have RT_exit_assert() compute the max execution time. Or, if needed, track min and max there.
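One plausible shape for the pair, with the hardware timer simulated by a variable; everything beyond the RT_enter_assert / RT_exit_assert names is my assumption:

```c
#include <assert.h>
#include <stdint.h>

/* now() stands in for reading your free-running hardware timer */
static uint32_t fake_clock = 0;
static uint32_t now(void) { return fake_clock; }

/* one slot per measured snippet, so concurrent measurements in a
   preemptive system don't clobber each other */
#define RT_SLOTS 4
static uint32_t rt_start[RT_SLOTS];
static uint32_t rt_max[RT_SLOTS];

#define RT_enter_assert(id) (rt_start[id] = now())
#define RT_exit_assert(id) do {                     \
        uint32_t t = now() - rt_start[id];          \
        if (t > rt_max[id]) rt_max[id] = t;         \
    } while (0)
```

Only the running maximum is stored, so the macros generate no data flood no matter how often the snippet runs.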

How much accuracy do you need? If the timer is used for context switching, no doubt it only generates an interrupt every handful of milliseconds. Yet the hardware actually counts CPU clocks or CPU clocks divided by some very small number. The RT_assert macros can read OS-maintained time (which counts interrupts) and the timer's hardware. Resolution to a few microseconds is possible.

To “tare” the readings, compensating for the overhead contributed by the RT_asserts themselves, issue the two macros back to back. Subtract the tare value to increase precision.

Of course, multitasking can skew the results. Preempting the code in the middle of a profiling operation then includes the execution time of the preempter. Just as we want to include the overhead contributed by the compiler's runtime routines, most of the time we want to know elapsed time, which includes preemption. Alternatively, disable interrupts during the measurement, though of course that may greatly affect system operation.

Some developers advocate using macros to send unique codes, a start/stop pair per function, to a parallel port. A logic analyzer then acquires the data. This is more trouble than it's worth. The analyzer quickly accumulates thousands or millions of data points; sometimes much more than it can store. It's impossible to make sense of the information without writing a computer program to empty the tool's buffer and convert raw codes and time into meaningful info.

Worse, the analyzer collects data only during the debugging phase, and only then when you're specifically taking performance measurements. I believe in proactive debugging—seed the code and the execution environment with tools and tricks to log data and flag errors all the time.

And that's where the idea of real-time asserts shines. Expand RT_exit_assert() just a bit. Pass a parameter that indicates the maximum allowed time. The macro computes the actual period and errors if it exceeds what we've expected. Maybe you leave this in only during debugging to grab your attention when the system snarls. Perhaps it's deployed with the production release to take some action when insane bottlenecks occur.
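A sketch of that extension, simplified to a single measurement slot; the miss counter stands in for whatever action your system takes when a deadline blows:

```c
#include <assert.h>
#include <stdint.h>

static uint32_t tics = 0;      /* stand-in for the hardware timer */
static uint32_t rt_begin;
static int deadline_misses = 0;

#define RT_enter_assert() (rt_begin = tics)

/* error out (here: just count) when the measured span exceeds the
   maximum time the designer says this code is allowed to take */
#define RT_exit_assert(max_tics) \
    if ((tics - rt_begin) <= (max_tics)) {} else deadline_misses++
```

In a debug build the handler might breakpoint; in a production build it might log the miss and shed load.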

After all, missing a deadline is as much of a bug in many real-time systems as is a complete system crash. If we use asserts to capture errors in the procedural domain, we should do no less in the time domain.

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at .

Your recent article reminded me of a bug I once ran into.

I had some debugging code for checking the range of some input value, a la:

#define IN_RANGE(x,min,max) (((x) >= (min)) && ((x) <= (max)))

One day, I decided that this was only needed during debugging, and added a '#ifdef DEBUG' inside the macro.

Lo and behold, the program doesn't work anymore. It took me a good hour or so to figure out that I was using IN_RANGE() to make sure that a timer for a sleep loop had been initialized. (If the timer wasn't initialized, it was supposed to skip the loop, but since IN_RANGE now always returned one, it was stuck in the loop forever.)

Just another example of why it's good to leave debugging code in there!
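The trap described above can be sketched like this (the macro bodies are my reconstruction, not the reader's actual code): IN_RANGE is an expression used for control flow, so a disabled version that always yields 1 silently passes values the real check would reject.

```c
#include <assert.h>

/* the real check: */
#define IN_RANGE(x, min, max)  (((x) >= (min)) && ((x) <= (max)))

/* what wrapping the body in #ifdef DEBUG effectively produced when
   DEBUG was off -- everything "passes": */
#define IN_RANGE_STUB(x, min, max)  (1)
```

An uninitialized (zero) timer fails the real check and skips the sleep loop; the stub reports it in range and the loop spins forever.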


– Buddy Smith

Here's another approach to measuring time. It only requires one I/O bit and associated hardware, as does your suggestion. The extra resource it requires is RAM: one bit per interrupt.

The value of the I/O bit is (initially) the logical OR of *all* the bits in the status word (snippets 1 & 2 below). So if everything is running smoothly (bit mostly low), you don't need to worry about the status bits. If things are going pear-shaped (bit mostly high), then you can start checking out the status bits individually (by routing them separately to the I/O bit rather than en masse, snippet 3 below) to determine which interrupt is hogging the processor.

/* snippet 1: */
union {
  unsigned int status_word;
  struct {
    unsigned int bit0 : 1;
    unsigned int bit1 : 1;
    unsigned int bit2 : 1;
    unsigned int bit3 : 1;
    /* etc… */
  } status_bits;
} irq_status;

/* snippet 2: */
if (0 == irq_status.status_word) {
  lower_bit (IO_portA, bit0);
} else {
  raise_bit (IO_portA, bit0);
}
/* snippet 3: */
if (0 == irq_status.status_bits.bit0) {
  lower_bit (IO_portA, bit0);
} else {
  raise_bit (IO_portA, bit0);
}

– Paul Tiplady
