Using requirements traceability with model-driven development

John Thomas and Jared Fry, LDRA

June 11, 2012


Parallel Traces
The central innovation of MDD is the separation of the design process from the coding process. That separation, however, can cause problems for requirements traceability. If design and coding exist as independent activities, establishing traceability for both can be complicated and error-prone. Requirements traceability is meant to be a continual effort spanning the entire software development lifecycle; if implementation and design are not linked, traceability work performed during design must be repeated during implementation.

The best-of-breed modeling tools mitigate this danger somewhat. Many provide mechanisms for establishing automatic connections between model elements and their corresponding code. While that automatic mapping works in theory, in practice modeling tools are often combined with customized build processes, additional hand-written code, and custom code generators that obscure the connections between code and model.

Even when traceability between code and model has been established, disconnects may remain between testing performed on the model and testing of the source code in the target environment. In the MDD paradigm, it is important to verify model behavior against the high-level requirements and to ensure that tests exist to demonstrate that behavior during model execution.

One of the clarifications made by the DO-178C model-based development supplement is to specify the need for model verification as part of the overall software verification process. To achieve this level of verification, modeling tools provide mechanisms to test the model within the modeling environment.

While this testing is useful for verifying the design, it does not necessarily constitute actual executable object-code verification. Because of this distinction between source-code-level testing and model-level testing, traceability connections between requirements and model tests do not automatically count as traceability between requirements and source-code tests. Again, the typical solution to this problem is to create parallel traceability chains for source-code tests and model-based tests, leading to redundant testing and the additional complexity of manually integrating the test results.

It is possible to mitigate the disconnect between model-based design and source-code verification. Requirements traceability tools can pull requirements and traceability evidence from multiple sources. By gathering low-level requirements represented as model elements together with high-level requirements from external documents or databases, traceability links can run directly to the design elements that are linked to source code.

Moreover, results from both source code testing and model-based testing can also be presented together. Requirements traceability tools allow traceability evidence to be organized as one seamless flow, joining requirements, lower-level requirements, design, source code, and tests. This helps to avoid the added complexity of maintaining and integrating two separate traceability efforts (Figure 3 below).


Figure 3 - Directly linking to model elements avoids duplicate links and simplifies the traceability process.
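The single, seamless flow described above can be pictured as a chain of linked items: requirement, model element, source file, test. As a minimal sketch (the class and item names here are hypothetical, not any particular tool's API), a small recursive walk over such a chain is enough to flag items that never reach a test:

```python
from dataclasses import dataclass, field

@dataclass
class TraceLink:
    """One node in a traceability chain (illustrative structure)."""
    item_id: str                 # e.g. "HLR-12", "Model::Speed_Limiter", "limiter.c"
    kind: str                    # "requirement", "model", "source", or "test"
    children: list = field(default_factory=list)

def untested(link: TraceLink) -> list:
    """Return ids of chain endpoints that never reach a test."""
    if link.kind == "test":
        return []
    if not link.children:
        return [link.item_id]    # chain dead-ends before any test: a gap
    gaps = []
    for child in link.children:
        gaps.extend(untested(child))
    return gaps

# One chain: high-level requirement -> model element -> source files -> test.
chain = TraceLink("HLR-12", "requirement", [
    TraceLink("Model::Speed_Limiter", "model", [
        TraceLink("limiter.c", "source", [
            TraceLink("TC-7", "test"),
        ]),
        TraceLink("limiter_init.c", "source"),   # no test yet: reported as a gap
    ]),
])

print(untested(chain))   # -> ['limiter_init.c']
```

Because the model element sits directly in the same chain as the source code, one traversal covers both; there is no second, parallel model-only chain to maintain.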

In addition, duplicate testing effort can be reduced through test-case reuse. This involves isolating the input and output information used in the model-based testing and exporting it in a format that can be read by source-code testing tools. With the proper verification and modeling tools, this process can even be automated.

What is important from the traceability side is that the traceability evidence can be reused. If certain test-case parameters were already shown to trace to a requirement, those parameters can be re-applied at the source-code level, and the same justifications and artifacts used in the requirements-to-model-test linkage can be used in the requirements-to-source-code-test linkage.
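The export step described above can be as simple as flattening each test vector, with its requirement id attached, into a neutral file format that a source-level test harness can replay. The sketch below uses CSV and invented ids purely for illustration; real vectors would come from the modeling tool's own export facility:

```python
import csv
import io

# Model-level test vectors, each already traced to a requirement
# (illustrative data, not from any specific tool).
model_tests = [
    {"req": "LLR-4", "input": "speed=120", "expected": "limit=100"},
    {"req": "LLR-5", "input": "speed=80",  "expected": "limit=80"},
]

def export_vectors(tests, stream):
    """Write vectors as flat CSV rows a source-level harness can replay.
    The requirement id travels with each vector, so the existing
    requirement-to-test evidence is reused rather than rebuilt."""
    writer = csv.DictWriter(stream, fieldnames=["req", "input", "expected"])
    writer.writeheader()
    writer.writerows(tests)

buf = io.StringIO()
export_vectors(model_tests, buf)
print(buf.getvalue())
```

Keeping the requirement id in the exported row is the key design choice: when the source-code test runs, its result links back to the same requirement without any manual re-mapping.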

Despite its popularity, MDD has presented safety-critical applications with several new challenges. One issue of particular importance has been reconciling requirements traceability with a model-driven development approach. MDD blurs many of the traditional lines between design, requirements, and implementation, so it is easy to misplace the model in the traceability process.

In addition, model-based design separates the design process from source-code-level testing, which creates the danger of duplicated and erroneous traceability work. Underlying these new challenges is the question of how to integrate a new method of software development into established safety-critical development practices. The answer, combining innovation with best practices, applies here as well.

The addition of the model-based development and verification supplement in DO-178C shows that work is being done to develop best practices for incorporating models into lifecycle traceability. Modern traceability and testing tools are actively innovating to simplify and automate this process.

Requirements traceability, especially when done with an unclear understanding of the traceability process, can be painful. When done right, requirements traceability can provide verification on all levels of the software development lifecycle. The gains of model-based design cannot come at the cost of safety, but with the proper tools, no trade-off is necessary.

John Thomas is a Field Application Engineer for LDRA Ltd. He graduated from Rutgers University. Before working for LDRA, John worked as a developer for several companies and was previously Project Manager for Castel Enterprises, Ltd. He joined LDRA in 2011 and has worked with LDRA clients on coverage testing, target integration and requirements traceability.

Jared Fry is a Field Application Engineer for LDRA Ltd. He graduated from Western New Mexico University with degrees in mathematics and computer science. His career began in the defense industry working for Lockheed Martin. There, Jared served as a software engineer working on various projects ranging from missile and radar systems, training simulations, and software testing. With LDRA he leverages these experiences as a consultant, assisting clients throughout the development process to produce a quality and certifiable product.
