Testing code that uses an RTOS API
Embedded application software that is designed to work under the control of a real-time operating system presents an interesting debugging and testing challenge. The code is most likely littered with RTOS API (Application Programming Interface) calls, which need to be verified, along with the logic applied to any response received. Ideally, the testing process would involve linking the application to the RTOS and debugging. However, this introduces a number of other unknowns and necessitates a target execution environment (which may or may not be the final hardware). It would be useful if this testing could simply be done on a host computer, as PCs are readily available. This article explores an approach to making progress in testing such code by running it natively on the host and using a "test harness".
The concepts discussed here arose after a talk about OS-aware debuggers at a conference, when someone asked whether there is a good technique for unit testing code for a multi-threaded application. They were considering an environment where a number of engineers were working on an embedded application (using the Nucleus RTOS). Each engineer was developing one or more tasks, which interact both with one another and with tasks written by other engineers. The questioner was wondering how these engineers could make solid progress with testing and debugging ahead of building the complete system.
Obviously, the quickest and easiest way to test some code is to compile and run it on a desktop computer. This is essentially a zero-cost, easy-to-use debug environment. However, it is of limited help when the code being tested interacts with an RTOS via a number of API calls, as the code will not even link with those calls unresolved.
A trivial early step is to comment out API calls or create dummy stubs that just allow the link to complete. This might enable basic program logic to be checked, but really does not allow much progress, unless the program is largely composed of complex algorithms. What is needed are API calls that respond sensibly. Normally, the only way to get "intelligent" responses from API calls is to actually link with an RTOS and run the code on a target system or to use a complex host-based simulation environment. The idea that came out of the discussion was for an RTOS test harness.
The Test Harness Idea
This test harness would simply be a library of functions corresponding to all (or most) of the API calls of the RTOS in use. These functions would accept the correct type and number of parameters, and a call would result in a "sensible" response. Some basic parameter checking may result in an error return; otherwise, a "success" response is likely. Where full API functionality can easily be simulated (e.g. dynamic memory allocation), the API call can be even more intelligent and appear to respond just like the OS. A separate module, containing some global data structures, would enable developers to tune the response of API calls so that they can test their code's handling of API call responses, including various failure scenarios.
This approach has clear limitations compared with running the code on a real RTOS, but would give the opportunity to check much more logic before introducing other complexities.
The questioner went away to consider the implementation of this test harness in his team. (And I was left wondering whether such a "product" would be of interest to a wider body of users. I would be very interested to learn whether this approach does have wide appeal.)