
The basics of embedded software testing: Part 2:

Generally the traits that separate embedded software from applications software are:

• Embedded software must run reliably without crashing for long periods of time.

• Embedded software is often used in applications in which human lives are at stake.

• Embedded systems are often so cost-sensitive that the software has little or no margin for inefficiencies of any kind.

• Embedded software must often compensate for problems with the embedded hardware.

• Real-world events are usually asynchronous and nondeterministic, making simulation tests difficult and unreliable.

• Your company can be sued if your code fails.

Because of these differences, testing for embedded software differs from application testing in four major ways.

First, because real-time and concurrency are hard to get right, a lot of testing focuses on real-time behavior. Second, because most embedded systems are resource-constrained real-time systems, more performance and capacity testing are required. Third, you can use some real-time trace tools to measure how well the tests are covering the code. Fourth, you’ll probably test to a higher level of reliability than if you were testing application software.

Dimensions of Integration

Most of our discussion of system integration has centered on hardware and software integration. However, the integration phase really has three dimensions to it: hardware, software, and real-time.

To the best of my knowledge, it’s not common to consider real time to be a dimension of the hardware/software integration phase, but it should be. The hardware can operate as designed, the software can run as written and debugged, but the product as a whole can still fail because of real-time issues.

Some designers have argued that integrating a real-time operating system (RTOS) with the hardware and application software is a distinct phase of the development cycle. If we accept their point of view, then we may further subdivide the integration phase to account for the non-trivial task of creating a board support package (BSP) for the hardware. Without a BSP, the RTOS cannot run on the target platform.

However, if you are using a standard hardware platform in your system, such as one of the many commercially available single-board computers (SBC), your BSP is likely to have already been developed for you. Even with a well-designed BSP, there are many subtle issues to be dealt with when running under an RTOS.

Simon[8] does an excellent job of covering many of the issues related to running an application when an interrupt may occur at any instant. I won’t attempt to cover the same ground as Simon, and I recommend his book as an essential volume in any embedded system developer’s professional library.

Suffice it to say that the RTOS, the hardware, the software, and the real-time environment represent the four most common dimensions of the integration phase of an embedded product. Since the RTOS is such a central element of an embedded product, any discussion of tools demands that we discuss them in the context of the RTOS itself.

A simple example will help to illustrate this point. Suppose you are debugging a C program on your PC or UNIX workstation. For simplicity’s sake, let’s assume that you are using the GNU compiler and debugger, GCC and GDB, respectively. When you stop your application to examine the value of a variable, your computer does not stop.

Only the application being debugged has stopped running; the rest of the machine is running along just fine. If your program crashes on a UNIX platform, you may get a core dump, but the computer itself keeps on going. Now, let’s contrast this with our embedded system.

Without an RTOS, when a program dies, the embedded system stops functioning—time to cycle power or press RESET. If an RTOS is running in the system and the debugging tools are considered to be “RTOS aware,” then it is very likely that you can halt one of the running processes and follow the same debugging procedure as on the host computer.

The RTOS will keep the rest of the embedded system functioning “mostly normally” even though you are operating one of the processes under the control of the debugger. Since this is a difficult task to do and do well, the RTOS vendor is uniquely positioned to supply its customers with finely tuned tools that support debugging in an RTOS environment. We can argue whether or not this is beneficial for the developer; certainly the other tool vendors may cry “foul,” but that’s life in the embedded world.

Thus, we can summarize this discussion by recognizing that the decision to use an RTOS will likely have a ripple effect through the entire design process and will manifest itself most visibly when the RTOS, the application software, and the hardware are brought together. If the tools are well designed, the process can be minimally complex. If the tools are not up to the task, the product may never see the light of day.

Real-Time Failure Modes

What you know about how software typically fails should influence how you select your tests. Because embedded systems deal with a lot of asynchronous events, the test suite should focus on typical real-time failure modes.

At a minimum, the test suite should generate both typical and worst case real-time situations. If the device is a controller for an automotive application, does it lock up after a certain sequence of unforeseen events, such as when the radio, windshield wipers, and headlights are all turned on simultaneously? Does it lock up when those items are turned on rapidly in a certain order? What if the radio is turned on and off rapidly 100 times in a row?

In every real-time system, certain combinations of events (call them critical sequences) cause the greatest delay from an event trigger to the event response. The embedded test suite should be capable of generating all critical sequences and measuring the associated response time, as the sketch below illustrates.
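One way to approach this is a brute-force harness that walks every ordering of a small event set and records the worst-case response time it observes. The inject_event() and response_time_us() hooks below are hypothetical placeholders for whatever stimulus and measurement interface your test fixture actually provides; the stubs only make the sketch self-contained.

/* Sketch: exhaustively exercise orderings of a small event set and record the
 * worst-case response latency.  inject_event() and response_time_us() are
 * hypothetical hooks; the stubs below stand in for a real test fixture. */
#include <stdio.h>
#include <stdlib.h>

#define EVENT_COUNT 3                       /* e.g., radio, wipers, headlights */

static void inject_event(int event_id)      /* stub: replace with real stimulus */
{
    (void)event_id;
}

static unsigned response_time_us(void)      /* stub: replace with real probe */
{
    return (unsigned)(rand() % 1000);
}

static unsigned worst_case_us;

static void run_sequence(const int *order, int n)
{
    for (int i = 0; i < n; i++)
        inject_event(order[i]);

    unsigned latency = response_time_us();
    if (latency > worst_case_us)
        worst_case_us = latency;
}

/* Heap's algorithm: generate every permutation of events[0..k-1]. */
static void permute(int *events, int k, int n)
{
    if (k == 1) {
        run_sequence(events, n);
        return;
    }
    for (int i = 0; i < k; i++) {
        permute(events, k - 1, n);
        int j = (k % 2 == 0) ? i : 0;
        int tmp = events[j]; events[j] = events[k - 1]; events[k - 1] = tmp;
    }
}

int main(void)
{
    int events[EVENT_COUNT] = { 0, 1, 2 };
    permute(events, EVENT_COUNT, EVENT_COUNT);
    printf("worst-case response seen: %u us\n", worst_case_us);
    return 0;
}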

For some real-time tasks, the notion of deadline is more important than latency. Perhaps it's essential that your system perform a certain task at exactly 5:00 P.M. each day. What will happen if a critical event sequence happens right at 5:00 P.M.? Will the deadline task be delayed beyond its deadline?

Embedded system failures caused by missing important timing deadlines are called hard real-time, or time-critical, failures. Likewise, failures due to poor performance, where response is degraded rather than a deadline missed outright, are called soft real-time, or time-sensitive, failures.

Another category of failures is created when the system is forced to run at, or near, full capacity for extended periods. Thus, you might never see a malloc() error when the system is running at one-half load, but when it runs at three-fourths load, malloc() may fail once a day.

Many RTOSs use fixed size queues to track waiting tasks and buffer I/O. It’s important to test what happens if the system receives an unusually high number of asynchronous events while it is heavily loaded. Do the queues fill up? Is the system still able to meet deadlines?
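The sketch below conveys the flavor of such a stress test on a host: a deliberately undersized ring buffer stands in for the RTOS queue, the test bursts events far faster than they are drained, and both dropped events and malloc() failures are counted rather than silently ignored. The queue itself is made up for the example; a real test would target the actual RTOS primitives.

/* Sketch: drive a fixed-size queue well past its nominal load and make the
 * failure modes visible.  The ring buffer stands in for whatever event queue
 * the RTOS or application really uses; nothing drains it, which simulates a
 * stalled consumer under heavy load. */
#include <stdio.h>
#include <stdlib.h>

#define QUEUE_DEPTH 32

static int queue[QUEUE_DEPTH];
static unsigned head, tail, dropped;

static int enqueue(int ev)
{
    unsigned next = (head + 1) % QUEUE_DEPTH;
    if (next == tail) {              /* queue full: count it, don't hide it */
        dropped++;
        return -1;
    }
    queue[head] = ev;
    head = next;
    return 0;
}

int main(void)
{
    /* Burst far more events than the queue can hold, then inspect the damage. */
    for (int i = 0; i < 10 * QUEUE_DEPTH; i++) {
        (void)enqueue(i);

        /* Under heavy load, also verify that allocations still succeed. */
        void *p = malloc(1024);
        if (p == NULL) {
            printf("malloc failed at event %d\n", i);
            break;
        }
        free(p);
    }
    printf("events dropped at full load: %u\n", dropped);
    return 0;
}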

Thorough testing of real-time behavior often requires that the embedded system be attached to a custom hardware/simulation environment. The simulation environment presents a realistic, but virtual, model of the hardware and real world.

Sometimes the hardware simulator can be as simple as a parallel I/O interface that simulates a user pressing switches. Some projects might require a full flight simulator. At any rate, regression testing of real-time behavior won’t be possible unless the real-time events can be precisely replicated.

Unfortunately, budget constraints often prohibit building a simulator. For some projects, it could take as much time to construct a meaningful model as it would to fix all the bugs in all the embedded products your company has ever produced.

Designers do not spend a lot of time developing “throw-away” test software because this test code won’t add value to the product. It will likely be used once or twice and then deleted, so why waste time on it?

A VHDL simulator could be linked to a software driver through a bus functional model of the processor. Conceptually, this could be a good test environment if your hardware team is already using VHDL- or Verilog-based design tools to create custom ASICs for your product.

Because a virtual model of the hardware already exists and a simulator is available to exercise this model, why not take advantage of it to provide a test scaffold for the software team?

This was one of the great promises of co-verification, but many practical problems have limited its adoption as a general-purpose tool. Still, from a conceptual basis, co-verification is the type of tool that could enable you to build a software-test environment without having to deploy actual hardware in a real-world environment.

Measuring Test Coverage

Even if you use both white-box and black-box methodologies to generate test cases, it’s unlikely that the first draft of the test suite will test all the code. The interactions between the components of any nontrivial piece of software are just too complex to analyze fully. As the earlier “shampoo” algorithm hinted, we need some way to measure how well our tests are covering the code and to identify the sections of code that are not yet being exercised.

The following describes several techniques for measuring test coverage. Some are software-based, and some exploit the emulators and integrated development environments (IDEs) that are often available to embedded systems engineers.

Because they involve the least hardware, I'll begin with the software-based methods. Later I'll discuss some less intrusive, but sometimes less reliable, hardware-based methods. Although the hardware-based methods are essentially nonintrusive, they are the less commonly used of the two.

Software Instrumentation

Software-only measurement methods are all based on some form of execution logging. Statement coverage can be measured by inserting trace calls at the beginning of each “basic block” of sequential statements.  

In this context, a basic block is a set of statements with a single entry point at the top and one or more exits at the bottom. Each control structure, such as a goto, return, or decision, marks the end of a basic block. The implication is that after the block is entered every statement in the block is executed.

By placing a simple trace statement, such as a printf(), at the beginning of every basic block, you can track when the block (and, by implication, every statement in the block) is executed. This kind of software-based logging can be an extremely efficient way to measure statement coverage.
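A minimal sketch of this style of instrumentation is shown below; the block numbers are arbitrary labels that a parser, script, or the programmer would assign when inserting the markers.

/* Sketch: printf()-style statement-coverage markers.  Each basic block gets a
 * TRACE_BLOCK() call at its entry point; seeing the marker in the output means
 * every statement in that block executed. */
#include <stdio.h>

#define TRACE_BLOCK(id)  printf("COV block %d\n", (id))

static int clamp(int value, int low, int high)
{
    TRACE_BLOCK(1);                 /* function entry block   */
    if (value < low) {
        TRACE_BLOCK(2);             /* "below range" block    */
        return low;
    }
    if (value > high) {
        TRACE_BLOCK(3);             /* "above range" block    */
        return high;
    }
    TRACE_BLOCK(4);                 /* fall-through block     */
    return value;
}

int main(void)
{
    clamp(5, 0, 10);                /* exercises blocks 1 and 4 only */
    return 0;
}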

Of course, printf() statements slow the system down considerably, which is not exactly a low-intrusion test methodology. Moreover, small, deeply embedded systems might not have any convenient means to display the output (many embedded environments don't include printf() in the standard library).

If the application code is running under an RTOS, the RTOS might supply a low-intrusion logging service. If so, the trace code can call the RTOS at the entry point to each basic block. The RTOS can log the call in a memory buffer in the target system or report it to the host.

An even less-intrusive form of execution logging might be called low-intrusion printf(). A simple memory write is used in place of the printf(). At each basic block entry point, the logging function "marks" a unique spot in excess data memory.

After the tests are complete, external software correlates these marks to the appropriate sections of code. Alternatively, the same kind of logging call can write to a single memory cell, and a logic analyzer (or other hardware interface) can capture the data.

If, upon entry to the basic block, the logging writes the current value of the program counter to a fixed location in memory, then a logic analyzer set to trigger only on a write to that address can capture the address of every logging call as it is executed. After the test suite is completed, the logic analyzer trace buffer can be uploaded to a host computer for analysis.
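The following is a rough sketch of that approach: each basic block performs one volatile write of a block identifier to a single tag variable. On a real target you would place that variable at a known, uncached address (via the linker) so the logic analyzer can trigger on writes to it; the function and block IDs here are purely illustrative.

/* Sketch: low-intrusion coverage tag.  Instead of printf(), each basic block
 * entry performs one volatile write to a single tag location.  A logic
 * analyzer set to trigger on writes to that location captures every tag. */
#include <stdint.h>

/* On a real target, locate this at a known, uncached address via the linker. */
volatile uint32_t coverage_tag;

#define COVERAGE_TAG(block_id) \
    do { coverage_tag = (uint32_t)(block_id); } while (0)

void motor_update(int demand)            /* illustrative function */
{
    COVERAGE_TAG(0x0100);                /* entry block           */
    if (demand > 0) {
        COVERAGE_TAG(0x0101);            /* forward branch        */
        /* ... drive forward ... */
    } else {
        COVERAGE_TAG(0x0102);            /* stop/reverse branch   */
        /* ... stop or reverse ... */
    }
}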

Although conceptually simple to implement, software logging has the disadvantage of being highly intrusive. Not only does the logging slow the system, the extra calls substantially change the size and layout of the code.

In some cases, the instrumentation intrusion could cause a failure to occur in the function testing—or worse, mask a real bug that would otherwise be discovered. Instrumentation intrusion isn’t the only downside to software-based coverage measurements.

If the system being tested is ROM-based and the ROM capacity is close to the limit, the instrumented code image might not fit in the existing ROM. You are also faced with the additional chore of placing this instrumentation in your code, either with a special parser or through conditional compilation.

Coverage tools based on code instrumentation methods cause some degree of code intrusion, but they have the advantage of being independent of on-chip caches. The tags or markers emitted by the instrumentation can be coded as noncachable writes so that they are always written to memory as they occur in the code stream.

However, it’s important to consider the impact of these code markers on the system’s behavior. All these methods of measuring test coverage sacrifice fine-grained tracing for simplicity by assuming that all statements in the basic block will be covered. A function call, for example, might not be considered an exit from a basic block.

If a function call within a basic block doesn’t return to the calling function, all the remaining statements within the basic block are erroneously marked as having been executed.

Perhaps an even more serious shortcoming of measuring statement coverage is that the measurement demonstrates that the actions of an application have been tested but not the reasons for those actions.

You can improve your statement coverage by using two more rigorous coverage techniques: Decision Coverage (DC) and Modified Condition Decision Coverage (MCDC). Both of these techniques require rather extensive instrumentation of the decision points at the source code level and thus might present increasingly objectionable levels of intrusion. Also, implementing these coverage test methods is best left to commercially available tools.

Measuring More than Statement Execution

Decision coverage (DC) takes statement coverage one step further. In addition to capturing entry into the basic blocks, DC also records the outcome of each binary (true/false) decision point in the code. In C or C++, these are the if, for, while, and do/while constructs. DC therefore has the advantage over statement coverage of being able to catch more logical errors. For example, suppose you have an if statement without an else part:

if (condition is true)
{
    <then do these statements>;
}

<code following the else-less if>

You would know whether the TRUE condition is tested because you would see that the then statements were executed. However, you would never know whether the FALSE condition ever occurred. DC would allow you to track the number of times the condition evaluates to TRUE and the number of times it evaluates to FALSE.
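Commercial tools insert this kind of decision instrumentation automatically, but a hand-rolled sketch conveys the idea: wrap each decision in a macro that bumps a TRUE or FALSE counter and still yields the condition's value. The decision IDs and the check_temperature() example are made up for illustration.

/* Sketch: decision-coverage counters.  Each decision point gets a pair of
 * counters; DECISION() records whether the condition evaluated TRUE or FALSE
 * and still yields the condition's value, so it can sit inside the if(). */
#include <stdio.h>

#define MAX_DECISIONS 16

static unsigned long dc_true[MAX_DECISIONS];
static unsigned long dc_false[MAX_DECISIONS];

#define DECISION(id, cond) \
    ((cond) ? (dc_true[(id)]++, 1) : (dc_false[(id)]++, 0))

static int check_temperature(int celsius)
{
    if (DECISION(0, celsius > 85)) {     /* decision point 0 */
        return -1;                       /* over-temperature */
    }
    return 0;
}

int main(void)
{
    check_temperature(90);
    check_temperature(25);
    printf("decision 0: TRUE %lu times, FALSE %lu times\n",
           dc_true[0], dc_false[0]);
    return 0;
}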

MCDC goes one step further than DC. Where DC measures the number of times the decision point evaluates to TRUE or to FALSE, MCDC evaluates the terms that make up the decision criteria. Thus, if the decision statement is:

if (A || B)
{
    <then do these statements>;
}

DC would tell you how many times it evaluates to TRUE and how many times it evaluates to FALSE. MCDC would also show you the logical conditions that lead to the decision outcome.

Because the if statement's condition evaluates to TRUE whenever A is TRUE or B is TRUE, MCDC would also tell you the states of A and B each time the decision was evaluated. Thus, you would know why the decision evaluated to TRUE or FALSE, not just that it did.
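A rough sketch of the idea for the if (A || B) example follows: log the state of each condition on every evaluation so you can see which combinations were actually exercised. Note that passing both conditions through a logging function flattens C's short-circuit evaluation, so this sketch assumes the conditions are side-effect free; real MCDC tools handle this properly.

/* Sketch: MCDC-style logging for the decision "if (A || B)".  Each evaluation
 * records the states of A and B plus the resulting outcome, so you can see
 * which condition combinations were exercised. */
#include <stdio.h>
#include <stdbool.h>

static unsigned long combo_count[2][2];      /* indexed by [A][B] */

static bool decision_a_or_b(bool a, bool b)
{
    combo_count[a][b]++;                     /* log the condition values */
    return a || b;
}

int main(void)
{
    bool sensor_fault = true, watchdog_fault = false;   /* example conditions */

    if (decision_a_or_b(sensor_fault, watchdog_fault)) {
        /* ... handle fault ... */
    }

    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("A=%d B=%d seen %lu times (outcome %s)\n",
                   a, b, combo_count[a][b], (a || b) ? "TRUE" : "FALSE");
    return 0;
}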

Hardware Instrumentation

Emulation memories, logic analyzers, and IDEs are potentially useful for test-coverage measurements. Usually, the hardware functions as a trace/capture interface, and the captured data is analyzed offline on a separate computer. In addition to these three general-purpose tools, special-purpose tools are used just for performance and test coverage measurements.

Emulation Memory. Some vendors include a coverage bit among the attribute bits in their emulation memory. When a memory location is accessed, its coverage bit is set. Later, you can look at the fraction of emulation memory “hits” and derive a percent of coverage for the particular test. By successively “mapping” emulation memory over system memory, you can gather test-coverage statistics.

One problem with this technique is that it can be fooled by microprocessors with on-chip instruction or data caches. If a memory section, called a refill line, is read into the cache but only a fraction of it is actually accessed by the program, the coverage bit test will be overly optimistic in the coverage values it reports. Even so, this is a good upper-limit test and is relatively easy to implement, assuming you have an ICE at your disposal.

Logic Analyzers. Because a logic analyzer also can record memory access activity in real time, it’s a potential tool for measuring test coverage. However, because a logic analyzer is designed to be used in “trigger and capture” mode, it’s difficult to convert its trace data into coverage data. Usually, to use a logic analyzer for coverage measurements, you must resort to statistical sampling.

For this type of measurement, the logic analyzer is slaved to a host computer. The host computer sends trigger commands to the logic analyzer at random intervals. The logic analyzer then fills its trace buffer without waiting for any other trigger conditions.

The trace buffer is uploaded to the host computer, where the memory addresses accessed by the processor while the trace buffer was capturing data are added to a database. For coverage measurements, you only need to know whether each memory location was accessed; you don't care how many times an address was accessed. Thus, the host computer needs to process a lot of redundant data.

For example, when the processor is running in a tight loop, the logic analyzer collects a lot of redundant accesses. If access behavior is sampled over long test runs (the test suite can be repeated to improve sampling accuracy), the sampled coverage begins to converge to the actual coverage.
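The host-side reduction is straightforward in principle, as the sketch below suggests: fold every sampled address into a coverage bitmap so that redundant hits cost nothing. The code-image base and size are placeholders, and real trace samples would replace the hard-coded array.

/* Sketch: host-side reduction of sampled trace data.  Each captured address
 * is folded into a coverage bitmap; hitting the same address a million times
 * sets the same bit once.  CODE_BASE and CODE_SIZE are placeholders. */
#include <stdint.h>
#include <stdio.h>

#define CODE_BASE   0x00400000u     /* placeholder: start of code image */
#define CODE_SIZE   0x00010000u     /* placeholder: 64 KB of code       */

static uint8_t covered[CODE_SIZE / 8];

static void mark_covered(uint32_t addr)
{
    if (addr < CODE_BASE || addr >= CODE_BASE + CODE_SIZE)
        return;                                    /* not in the code image */
    uint32_t offset = addr - CODE_BASE;
    covered[offset / 8] |= (uint8_t)(1u << (offset % 8));
}

static double coverage_percent(void)
{
    unsigned long bits = 0;
    for (size_t i = 0; i < sizeof covered; i++)
        for (int b = 0; b < 8; b++)
            if (covered[i] & (1u << b))
                bits++;
    return 100.0 * (double)bits / (double)CODE_SIZE;
}

int main(void)
{
    /* In practice these addresses come from uploaded trace-buffer samples. */
    uint32_t samples[] = { 0x00400010, 0x00400014, 0x00400014,
                           0x00400120, 0x00404000 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
        mark_covered(samples[i]);
    printf("approximate coverage: %.4f%%\n", coverage_percent());
    return 0;
}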

Of course, memory caches also can distort the data collected by the logic analyzer. On-chip caches can mask coverage holes by fetching refill lines that were only partly executed. However, many logic analyzers record additional information provided by the processor.

With these systems, it’s sometimes possible to obtain an accurate picture of the true execution coverage by post-processing the raw trace. Still, the problem remains that the data capture and analysis process is statistical and might need to run for hours or days to produce a meaningful result.

In particular, it’s difficult for sampling methods to give a good picture of ISR test coverage. A good ISR is fast. If an ISR is infrequent, the probability of capturing it during any particular trace event is correspondingly low.

On the other hand, it’s easy to set the logic analyzer to trigger on ISR accesses. Thus, coverage of ISR and other low-frequency code can be measured by making a separate run through the test suite with the logic analyzer set to trigger and trace just that code.

Software Performance Analyzers. Finally, commercially available hardware-collection tools provide the low-intrusion data collection of hardware assist without the intermittent-sampling disadvantage of a logic analyzer. Many ICE vendors manufacture hardware-based tools specifically designed for analyzing test coverage and software performance.

These are the “Cadillac” tools because they are specifically designed for gathering coverage test data and then displaying it in a meaningful way. By using the information from the linker’s load map, these tools can display coverage information on a function or module basis, rather than raw memory addresses.

Also, they are designed to collect data continuously, so no gaps appear in the data capture as there are with a logic analyzer. Sometimes these tools come bundled into an ICE; others can be purchased as hardware or software add-ons for the basic ICE.

Performance Testing

The last type of testing to discuss in this series is performance testing. It is discussed last because performance testing, and consequently performance tuning, are important not only as part of your functional testing but also as tools for the maintenance and upgrade phase of the embedded life cycle.

Performance testing is crucial for embedded system design and, unfortunately, is usually the one type of software characterization test that is most often ignored. Dave Stewart, in “The Twenty-Five Most Common Mistakes with Real-Time Software Development” [9], considers the failure to measure the execution time of code modules the number one mistake made by embedded system designers.

Measuring performance is one of the most crucial tests you need to make on your embedded system. The typical excuse for skipping it is that the code is "good enough" because the product works to specification.

For products that are incredibly cost sensitive, however, this is an example of engineering at its worst. Why overdesign a system with a faster processor and more and faster RAM and ROM, which adds to the manufacturing costs, lowers the profit margins, and makes the product less competitive, when the solution is as simple as finding and eliminating the hot spots in the code?

On any cost-sensitive embedded system design, one of the most dramatic events is the decision to redesign the hardware because you believe you are at the limit of performance gains from software redesign.

Mostly, this is a gut decision rather than a decision made on hard data. On many occasions, intuition fails. Modern software, especially in the presence of an RTOS, is extremely difficult to fully unravel and understand. Just because you can’t see an obvious way to improve the system throughput by software-tuning does not imply that the next step is a hardware redesign.

Performance measurements made with real tools and with sufficient resources can have tremendous payback and prevent large R&D outlays for needless redesigns.

How to Test Performance

In performance testing, you are interested in the amount of time that a function takes to execute. Many factors come into play here. In general, it's a nondeterministic process, so you must measure it from a statistical perspective. Some factors that can change the execution time each time the function is executed are:

• Contents of the instruction and data caches at the time the function is entered

• RTOS task loading

• Interrupts and other exceptions

• Data-processing requirements in the function

Thus, the best you can hope for is some statistical measure of the minimum, maximum, average, and cumulative execution times for each function that is of interest. Figure 2-2 below shows a performance analysis test tool, which uses software instrumentation to provide the stimulus for the entry-point and exit-point measurements. These tags can be collected via hardware tools or RTOS services.

 

Figure 2-2: Performance analysis tool display showing the minimum, maximum, average, and cumulative execution times for the functions shown in the leftmost column.
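In software-instrumented form, the entry-point and exit-point tags boil down to something like the sketch below, which accumulates minimum, maximum, average, and cumulative times for one function. The read_timestamp() routine is a placeholder for whatever cycle counter, hardware timer, or RTOS tick source the target provides; clock() is used here only so the sketch runs on a host.

/* Sketch: entry/exit performance tags that accumulate the minimum, maximum,
 * average, and cumulative execution time of a function. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

typedef struct {
    uint64_t min, max, total;
    uint64_t calls;
} perf_stats_t;

static uint64_t read_timestamp(void)        /* placeholder time source */
{
    return (uint64_t)clock();
}

static void perf_record(perf_stats_t *s, uint64_t elapsed)
{
    if (s->calls == 0 || elapsed < s->min) s->min = elapsed;
    if (elapsed > s->max)                  s->max = elapsed;
    s->total += elapsed;
    s->calls++;
}

static perf_stats_t filter_stats;

static void filter_samples(void)
{
    uint64_t t0 = read_timestamp();         /* entry tag */
    volatile long acc = 0;                  /* stand-in for real work */
    for (long i = 0; i < 100000; i++) acc += i;
    perf_record(&filter_stats, read_timestamp() - t0);   /* exit tag */
}

int main(void)
{
    for (int i = 0; i < 50; i++) filter_samples();
    printf("filter_samples: min %llu  max %llu  avg %llu  cumulative %llu\n",
           (unsigned long long)filter_stats.min,
           (unsigned long long)filter_stats.max,
           (unsigned long long)(filter_stats.total / filter_stats.calls),
           (unsigned long long)filter_stats.total);
    return 0;
}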

Dynamic Memory Use

Dynamic memory use is another valuable measurement provided by many of the commercial tools. As with coverage, it's possible to instrument the dynamic memory allocation operators, malloc() and free() in C and new and delete in C++, so that the instrumentation tags help uncover memory leaks and fragmentation problems while they are occurring. This is infinitely preferable to dealing with a nonreproducible system lock-up once every two or three weeks. Figure 2-3 below shows one such memory management test tool.

 

Figure 2-3: Memory Management Test Tool.
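As a rough illustration of what such instrumentation does, the sketch below wraps malloc() and free() so that a running count of live blocks and bytes is always available; a nonzero count at a quiescent point is a leak. Real tools also record call sites and watch for fragmentation, and the size-prefix trick here glosses over alignment details.

/* Sketch: minimal instrumentation of malloc()/free() to expose leaks while
 * they happen.  A size prefix is stored ahead of each block so free() can
 * account for the bytes released (alignment padding is ignored here). */
#include <stdio.h>
#include <stdlib.h>

static long live_blocks;
static long live_bytes;

static void *traced_malloc(size_t size)
{
    size_t *p = malloc(size + sizeof(size_t));   /* room for the size prefix */
    if (p == NULL)
        return NULL;
    *p = size;
    live_blocks++;
    live_bytes += (long)size;
    return p + 1;
}

static void traced_free(void *ptr)
{
    if (ptr == NULL)
        return;
    size_t *p = (size_t *)ptr - 1;
    live_blocks--;
    live_bytes -= (long)*p;
    free(p);
}

int main(void)
{
    void *a = traced_malloc(128);
    void *b = traced_malloc(64);
    traced_free(a);
    /* b intentionally not freed yet */
    printf("leak check: %ld blocks, %ld bytes still allocated\n",
           live_blocks, live_bytes);
    traced_free(b);
    return 0;
}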

(Note from the trenches: Performance testing and coverage testing are not entirely separate activities. Coverage testing not only uncovers the amount of code your test is exercising, it also shows you code that is never exercised (dead code) that could easily be eliminated from the product. I’m aware of one situation in which several design teams adapted a linker command file that had originally been written for an earlier product.

The command file worked well enough, so no one bothered to remove some of the extraneous libraries that it pulled in. It wasn’t a problem until they had to add more functionality to the product but were limited to the amount of ROM space they had.

Thus, you can see how coverage testing can provide you with clues about where you can excise code that does not appear to be participating in the program. Although removing dead code probably won’t affect the execution time of the code, it certainly will make the code image smaller. I say probably because on some architectures, the dead code can force the compiler to generate more time-consuming long jumps and branches. Moreover, larger code images and more frequent jumps can certainly affect cache performance. )

Conceptually, performance testing is straightforward. You use the link map file to identify the memory addresses of the entry points and exit points of functions. You then watch the address bus and record the time whenever you have address matches at these points.

Finally, you match the entry points with the exit points, calculate the time difference between them, and that’s your elapsed time in the function. However, suppose your function calls other functions, which call more functions. What is the elapsed time for the function you are trying to measure? Also, if interrupts come in when you are in a function, how do you factor that information into your equation?

Fortunately, the commercial tool developers have built in the capability to unravel even the gnarliest of recursive functions. Hardware-based tools provide an attractive way to measure software performance.

As with coverage measurements, the logic analyzer can be programmed to capture traces at random intervals, and the trace data—including time stamps—can be post-processed to yield the elapsed time between a function’s entry and exit points. Again, the caveat of intermittent measurements applies, so the tests might have to run for an extended period to gather meaningful statistics.

Hardware-only tools are designed to monitor simultaneously a spectrum of function entry points and exit points and then collect time interval data as various functions are entered and exited. In any case, tools such as these provide unambiguous information about the current state of your software as it executes in real time.

Hardware-assisted performance analysis, like other forms of hardware-assisted measurements based on observing the processor’s bus activity, can be rendered less accurate by on-chip address and data caches.

This occurs because the appearance of an address on the bus does not necessarily mean that the instruction at that address will be executed at that point in time, or any other point in time. It only means that the address was transferred from memory to the instruction cache.

Tools based on the instrumentation of code are immune to cache-induced errors but do introduce some level of intrusion because of the need to add extra code to produce an observable tag at the function’s entry points and exit points.

Tags can be emitted sequentially in time from functions, ISRs, and the RTOS kernel itself. With proper measurement software, designers can get a real picture of how their system software is behaving under various system-loading conditions. This is exactly the type of information needed to understand why, for example, a functional test might be failing.

(Note from the trenches: From personal experience, the information that these tools provide to a design team can cause much disbelief among the engineers. During one customer evaluation, the tool being tested showed that a significant amount of time was being spent in a segment of code that none of the engineers on the project could identify as their software.

Upon further investigation, the team realized that in the build process they had inadvertently left on the compiler switch that included all the debug information in the compiled code. Again, this was released code. The tool was able to show that they were taking a 15-percent performance hit due to the debug code being present in the released software. I'm relatively certain that some heads were put on the block because of this, but I wasn't around to watch the festivities.)

Interestingly, semiconductor manufacturers are beginning to place additional resources on-chip for performance monitoring as well as for debugging purposes. Desktop processors are already equipped with performance-monitoring counters, and such architectural features are finding their way into embedded devices as well. These on-chip counters can count elapsed time or other performance parameters, such as the number of cache hits and cache misses.

Another advantage of on-chip performance resources is that they can be used in conjunction with your debugging tools to generate interrupts when error conditions occur. For example, suppose you set one of the counters to count down to zero when a certain address is fetched.

This could be the start of a function. The counter counts down; if it underflows before it’s stopped, it generates an interrupt or exception, and processing could stop because the function took too much time. The obvious advantages of on-chip resources are that they won’t be fooled by the presence of on-chip caches and that they don’t add any overhead to the code execution time. The downside is that you are limited in what you can measure by the functionality of the on-chip resources.
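In outline, the mechanism looks like the sketch below. The hw_counter_arm() and hw_counter_stop() calls and the interrupt hookup are hypothetical stand-ins, since every processor's performance-monitoring and debug units have their own registers and APIs; only the shape of the technique is being shown.

/* Sketch only: arm a hardware countdown at the function's entry point and
 * stop it at the exit point; if it reaches zero first, its overflow interrupt
 * fires and flags a missed deadline.  hw_counter_arm(), hw_counter_stop(),
 * and the ISR wiring are hypothetical placeholders for a given processor's
 * real on-chip performance-monitoring or debug resources. */
#include <stdbool.h>
#include <stdint.h>

#define MOTOR_BUDGET_CYCLES  50000u          /* assumed execution-time budget */

extern void hw_counter_arm(uint32_t countdown);   /* hypothetical */
extern void hw_counter_stop(void);                /* hypothetical */

static volatile bool deadline_missed;

void counter_overflow_isr(void)              /* attached to the counter's interrupt */
{
    deadline_missed = true;                  /* the function ran past its budget */
}

void motor_control_step(void)
{
    hw_counter_arm(MOTOR_BUDGET_CYCLES);     /* entry point of the monitored function */
    /* ... control-loop work ... */
    hw_counter_stop();                       /* exited in time: no interrupt fires */
}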

Maintenance and Testing

Some of the most serious testers of embedded software are not the original designers, the Software Quality Assurance (SWQA) department, or the end users. The heavy-duty testers are the engineers who are tasked with the last phases of the embedded life cycle: maintenance and upgrade.

Numerous studies (studies by Dataquest and EE Times produced similar conclusions) have shown that more than half of the engineers who identify themselves as embedded software and firmware engineers spend the majority of their time working on embedded systems that have already been deployed to customers.

These engineers were not the original designers who did a rotten job the first time around and are busy fixing residual bugs; instead, these engineers take existing products, refine them, and maintain them until it no longer makes economic sense to do so.

One of the most important tasks these engineers must do is understand the system with which they’re working. In fact, they must often understand it far more intimately than the original designers did because they must keep improving it without the luxury of starting over again.

(Note from the trenches: I’m often amused by the expression, “We started with a clean sheet of paper,” because the subtitle could be, “And we didn’t know how to fix what we already had.” When I was an R&D Project Manager, I visited a large telecom vendor that made small-office telephone exchanges (PBXs). The team I visited was charged with maintaining and upgrading one of the company’s core products. Given the income exposure riding on this product, you would think the team would have the best tools available.

Unfortunately, the team had about five engineers and an old, tired PBX box in the middle of the room. In the corner was a dolly with a four-foot high stack of source code listings. The lead engineer said someone wheeled that dolly in the previous week and told the team to “make it 25 percent better.” The team’s challenge was to first understand what they had and, more importantly, what the margins were, and then they could undertake the task of improving it 25 percent, whatever that meant. Thus, for over half of the embedded systems engineers doing embedded design today, testing and understanding the behavior of existing code is their most important task. )

Summary

The end of the product development cycle is where testing usually occurs. It would be better to test in a progressive manner, rather than waiting until the end, but, for practical reasons, some testing must wait.

The principal reason is that you have to bring the hardware and software together before you can do any kind of meaningful testing, and then you still need to have the real-world events drive the system to test it properly. Although some parts of testing must necessarily be delayed until the end of the development cycle, the key decisions about what to test and how to test must not be delayed.

Testability should be a key requirement in every project. With modern SoC designs, testability is becoming a primary criterion in the processor-selection process. Finally, testing isn’t enough. You must have some means to measure the effectiveness of your tests.

As Tom DeMarco [3] once said, “You can’t control what you can’t measure.” If you want to control the quality of your software, you must measure the quality of your testing. Measuring test coverage and performance are important components, but for safety-critical projects, even these aren’t enough.

To read Part 1, go to: The why, when, where and how of testing

Arnold Berger is a Senior Lecturer in the CSS Department of the University of Washington Bothell. He can be reached at ABerger@bothell.washington.edu.

References

1) Hopper, Grace Murray. “The First Bug.” Annals of the History of Computing, July 1981, 285.

2) Horning, Jim. ACM Software Engineering Notes, October 1979, 6.

3) DeMarco, Tom. Controlling Software Projects. New York: Yourdon, 1982.

4) Leveson, Nancy, and Clark S. Turner. “An Investigation of the Therac-25 Accidents.” IEEE Computer, July 1993, 18–41.

5) Main, Jeremy. Quality Wars: The Triumphs and Defeats of American Business. New York: Free Press, 1994.

6) Myers, Glenford J. The Art of Software Testing. New York: Wiley, 1978.

7) Ross, K.J. & Associates. http://www.cit.gu.edu.au/teaching/CIT2162/991005.pdf, p. 43.

8) Simon, David. An Embedded Software Primer. Reading, MA: Addison-Wesley, 1999.

9) Stewart, Dave. “The Twenty-Five Most Common Mistakes with Real-Time Software Development.” Paper presented at the Embedded Systems Conference, San Jose, 26 September 2000.

Additional Reading

1) Barrett, Tom. “Dancing with Devils: Or Facing the Music on Software Quality.” Supplement to Electronic Design, 9 March 1998, 40.

2) Beatty, Sean. “Sensible Software Testing.” Embedded Systems Programming, August 2000, 98.

3) Myers, Glenford J. The Art of Software Testing. New York: Wiley, 1978.

4) Simon, David. An Embedded Software Primer. Reading, MA: Addison-Wesley, 1999.

This article by Arnold Berger is based on material from “Embedded Systems: World Class Design,” edited by Jack Ganssle, used with permission from Newnes, a division of Elsevier. Copyright 2008. For more information about this title and other similar books, please visit www.elsevierdirect.com.
