At the start of a contract, stakeholders set out their vision for what they want from the delivered application. The project team then works to represent that vision as a set of requirements from which development can begin. The requirements should act as a blueprint for development, with each high-level software requirement mapping to a lower-level requirement, design and implementation. However, all too often, the team’s efforts diverge from this blueprint over time, resulting in an application that doesn’t align with the requirements. At best the stakeholder is disappointed. At worst, the development team opens itself up to litigation and costly remedial work.
Additional challenges arise with new applications for connected cars, interactive medical devices or industrial IoT applications that call into question when the development process comes to an end. Any newly discovered vulnerability or system compromise implies an additional requirement to counter it, bringing with it a new emphasis on traceability even into the product-maintenance phase.
This is where the lifelong commitment begins. By constructing trace links between requirements and development components from the very beginning of a project, problems such as missing or non-required functionality can be discovered earlier, making it easier and less costly to remedy. Implementing a comprehensive, strategic and automated requirements traceability process can significantly impact project deadlines and budgets, can avoid gaps between the stakeholder’s vision of the application and what is ultimately delivered, and results in an effective support vehicle for deployed connected systems.
The Art of Requirements
If all stakeholders are to share a common commitment to requirements, then those requirements must be understandable, unambiguous and precise. There are two common approaches to defining requirements, each with advantages and disadvantages:
Narrative, natural-language requirements:

| Advantages | Disadvantages |
|---|---|
| Uses natural language, so requires no special training | Stakeholder may prefer layman's language while the contractor leans towards technical jargon |
| | Language is inherently imprecise and prone to ambiguity |
One approach to overcoming disadvantages is to apply rules when writing requirements in much the same way as the MISRA standards are applied to C and C++ code, for example:
Use paragraph formatting to distinguish requirements from non-requirement text
List only one requirement per paragraph
Use the verb “shall”
Avoid “and” in a requirement by refactoring as multiple requirements or specifying in more general terms
Avoid conditional language such as "unless" or "only if," which is likely to lead to ambiguous interpretation
The use of key words also helps if some members of the development team are less fluent in the chosen requirements language than others.
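Rules like these lend themselves to simple automated checks. As a purely illustrative sketch (the rule set, function name and messages below are invented for this article, not drawn from any standard or tool), a script might flag requirement paragraphs that break them:

```python
import re

# Hypothetical rule checks inspired by the guidelines above;
# a real project would tune these to its own house style.
def check_requirement(text: str) -> list[str]:
    """Return a list of rule violations for one requirement paragraph."""
    findings = []
    lowered = text.lower()
    # Rule: use the verb "shall"
    if "shall" not in lowered:
        findings.append("missing the verb 'shall'")
    # Rule: avoid "and" -- it often hides multiple requirements
    if re.search(r"\band\b", text, re.IGNORECASE):
        findings.append("contains 'and' -- consider splitting into multiple requirements")
    # Rule: avoid ambiguous conditional terms
    for word in ("unless", "only if"):
        if word in lowered:
            findings.append(f"conditional term '{word}' may be ambiguous")
    return findings

req = "The system shall lock the door unless a valid card is swiped."
print(check_requirement(req))  # ["conditional term 'unless' may be ambiguous"]
```

Such a checker cannot judge whether a requirement is *correct*, but it catches wording patterns that experience shows lead to ambiguity, in much the same spirit as a MISRA checker catches risky C constructs.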
Use cases and other graphical representations:

| Advantages | Disadvantages |
|---|---|
| Reduced dependence on natural language is ideal for international teams that do not share a common language | Not everyone involved in the project will understand the nuances of use case diagrams |
| Graphical representation switches the angle of analysis from a line-by-line, itemized list of desired features to a user-focused view of how the system will interact with external elements and what value it will deliver | |
Each use case or user story comprises several scenarios. The first scenario as illustrated in Figure 1 is always the “basic path” or “sunny-day scenario” in which the actor and system interact in a normal, error-free way.
Figure 1 – This example of a “sunny-day” scenario from an “allow authorized access” use case shows how a system is expected to behave when a valid key card is swiped. (Source: LDRA)
Requirements Traceability and Management
Requirements traceability is widely accepted as a development best practice to ensure that all requirements are implemented and that all development artifacts can be traced back to one or more requirements. Standards such as DO-178C, IEC 61508, ISO 26262 and IEC 62304 require bi-directional traceability and put a constant emphasis on the need for the derivation of one development tier from the one above it. Paragraph 5.5 c of DO-178C typifies this:
“Trace data, showing the bi-directional association between low-level requirements and source code, is developed. The purpose of this trace data is to:
Enable verification that no source code implements an undocumented function.
Enable verification of the complete implementation of the low-level requirements.”
The level of traceability required by such standards varies depending on the criticality of the application. For example, less-critical avionics applications designated DO-178C level (or DAL) D are known as "black box," meaning that there is no focus on how the software has been developed. That means there is no need to have any traceability to the source code or software architecture. It is only required that the system software requirements are traced to the high-level requirements and then to the test cases, test procedures and test results.
For the more demanding DO-178C levels B and C, the source code development process is considered significant and so evidence of bi-directional traceability is required from the high-level requirements to the low-level requirements and then to the source code. Level A projects require traceability beyond the source code down to the executable object code.
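The two bi-directional checks that DO-178C describes (no undocumented code, no unimplemented requirement) amount to set operations over trace links. The following sketch uses invented requirement IDs and file names to illustrate the idea; it does not represent any particular tool's data model:

```python
# Hypothetical trace data: low-level requirement IDs mapped to the
# source-code units that claim to implement them.
requirement_to_code = {
    "LLR-001": ["flap_extend.c"],
    "LLR-002": ["flap_retract.c"],
    "LLR-003": [],                      # requirement with no implementation yet
}
all_code_units = {"flap_extend.c", "flap_retract.c", "debug_hook.c"}

traced_code = {unit for units in requirement_to_code.values() for unit in units}

# Purpose 1 (DO-178C 5.5 c): no source code implements an undocumented function.
orphan_code = all_code_units - traced_code

# Purpose 2: every low-level requirement is completely implemented.
unimplemented = {req for req, units in requirement_to_code.items() if not units}

print(orphan_code)      # {'debug_hook.c'}
print(unimplemented)    # {'LLR-003'}
```

Both directions matter: orphan code (`debug_hook.c` here) is invisible to a check that only walks from requirements down, while the unimplemented requirement is invisible to a check that only walks from code up.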
While bi-directional traceability is a laudable principle, last-minute changes of requirements or code made to correct problems identified during test tend to put such ideals in disarray. Many projects fall into a pattern of disjointed software development in which requirements, design, implementation and testing artifacts are produced from isolated development phases, resulting in tenuous links between the requirements stage and the development team.
Processes like the waterfall and iterative examples show each phase flowing into the next, perhaps with feedback to earlier phases. Traceability is assumed to be part of the relationships between phases; however, the mechanism by which trace links are recorded is seldom stated. The reality is that, while each individual phase may be conducted efficiently thanks to investment in up-to-date tool technology, these tools seldom contribute automatically to any traceability between the development tiers. As a result, the links between them become increasingly poorly maintained over the duration of projects.
The answer to this conundrum lies in the “trace data” between development processes that sits at the heart of any project. Whether or not the links are physically recorded and managed, they still exist. For example, a developer creates a link simply by reading a design specification and using that to drive the implementation. The collective relationships between these processes and their associated data artifacts can be viewed as a requirements traceability matrix (RTM). When the RTM becomes the center of the development process, it impacts on all stages of safety critical application development from high-level requirements through to target-based testing.
Figure 2 reflects the importance that should be attached to the RTM. Project managers must place the same priority on RTM construction and maintenance as they do on requirements management, version control, change management, modeling and testing.
Figure 2 – RTM sits at the heart of the project defining and describing the interaction between the design, code, test and verification stages of development. (Source: LDRA)
The RTM must be represented explicitly in any lifecycle model to emphasize its importance as illustrated in Figure 3. With this elevated focus, it becomes the center of the development process, impacting on all stages of design from high-level requirements through to target-based deployment.
Figure 3 – The requirements traceability matrix (RTM) plays a central role in a development lifecycle model. Artifacts at all stages of development are linked directly to the requirements matrix, and changes within each phase automatically update the RTM. (Source: LDRA)
At the highest level, requirements management and traceability tools can initially provide the ability to capture the requirements specified by standards such as DO-178C. These requirements (or objectives) can then be traced to Tier 1 – the application-specific software and system requirements.
These Tier 1 high-level requirements might consist of a definitive statement of the system to be developed (perhaps an aircraft flap control module, for instance) and the functional criteria it must meet (e.g., extending the flap to raise the lift coefficient). This tier may be subdivided depending on the scale and complexity of the system.
Tier 2 describes the design of the system level defined by Tier 1. With our flap example, the low-level requirements might discuss how the flap extension is varied, building on the need to do so established in Tier 1.
Tier 3’s implementation refers to the source/assembly code developed in accordance with Tier 2. In our example, it is clear that the management of the flap extension is likely to involve several functions. Traceability of those functions back to Tier 2 requirements includes many-to-few relationships. It is very easy to overlook one or more of these relationships in a manually managed matrix.
In Tier 4 host-based verification, formal verification begins. Using a test strategy that may be top-down, bottom-up or a combination of both, software simulation techniques help create automated test harnesses and test case generators as necessary. Test cases should be repeatable at Tier 5 if required.
At this stage, we confirm that the example software managing the flap position is functioning as intended within its development environment, even though there is no guarantee it will work when in its target environment. DO-178C acknowledges this and calls for the testing “to verify correct operation of the software in the target computer environment.”
However, testing in the host environment first allows the target test (which is often more time-consuming) to merely confirm that the tests remain sound in the target environment. In our example, we ensure in the host environment that function calls to the software associated with the flap control system return the values required of them in accordance with the requirements they are fulfilling. That information is then updated in the RTM.
Our flap-control system is now retested in the target environment, ensuring that the test results are consistent with those produced on the host. A further RTM layer shows that the tests have been confirmed.
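The host-to-target consistency check described above reduces to comparing per-test outcomes between the two runs. A minimal sketch, using invented test names and a flat pass/fail model rather than any real harness's result format:

```python
# Hypothetical per-test outcomes from host-based and target-based runs.
host_results = {
    "test_extend_flap":   "pass",
    "test_retract_flap":  "pass",
    "test_invalid_angle": "pass",
}
target_results = {
    "test_extend_flap":   "pass",
    "test_retract_flap":  "fail",   # diverges from the host baseline
    "test_invalid_angle": "pass",
}

# Flag any test whose target outcome diverges from the host baseline,
# so the RTM layer for target testing records only confirmed results.
divergent = [name for name, outcome in host_results.items()
             if target_results.get(name) != outcome]

print(divergent)  # ['test_retract_flap']
```

Any divergence points either at a genuine target-environment defect or at a test that is not portable between environments; either way, the affected trace links must not be marked as confirmed in the RTM.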
Maintaining the Requirements Traceability Matrix
The RTM is a best practice whether a standard insists on it or not. However, maintaining an RTM in spreadsheets is a logistical nightmare, fraught with the risk of error and permanently lagging the actual project status.
Constructing the RTM in a suitable tool not only maintains it automatically, but also opens up possibilities for filtering, quality checks, progress monitoring and metrics generation (Figure 4). The RTM is no longer a tedious, time-consuming task reluctantly carried out at the end of a project; instead it is a powerful utility that can contribute to efficiently running the project. The requirements become usable artifacts to drive implementation and testing. Furthermore, many of the trace links may be captured simply by doing the day-to-day work of development, accelerating RTM construction and improving the quality of its contents.
Modern requirements traceability solutions extend the requirements mapping down to the verification tasks associated with the source code. The screenshot in Figure 4 shows an example. Using this type of requirements traceability tool, progress toward the objective of 100% requirements coverage can be clearly measured, no matter how many layers of requirements, design and implementation decomposition are used. This makes monitoring system completion progress an extremely straightforward activity.
Figure 4 – Traceability from high level requirements down to source code and verification tasks. (Source: LDRA)
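Measuring coverage through an arbitrary number of decomposition layers is naturally expressed as a recursive walk of the requirement tree: an artifact is covered only when everything it decomposes into is covered. The tree below is invented for illustration:

```python
# Hypothetical requirement tree: each node lists the child artifacts it
# decomposes into; leaves are verification tasks with a pass/fail status.
children = {
    "HLR-1":   ["LLR-1.1", "LLR-1.2"],
    "LLR-1.1": ["test_a"],
    "LLR-1.2": ["test_b", "test_c"],
}
verified_leaves = {"test_a", "test_b"}   # test_c has not yet passed

def covered(artifact: str) -> bool:
    """An artifact is covered when all of its descendants trace down to
    verified leaves, regardless of decomposition depth."""
    kids = children.get(artifact)
    if kids is None:                     # leaf: a verification task
        return artifact in verified_leaves
    return all(covered(k) for k in kids)

top_level = ["HLR-1"]
coverage = 100 * sum(covered(r) for r in top_level) / len(top_level)
print(covered("LLR-1.1"), covered("HLR-1"), coverage)  # True False 0.0
```

Note how one unverified leaf (`test_c`) propagates all the way up: the high-level requirement stays uncovered until every artifact beneath it is confirmed, which is exactly the roll-up behavior a traceability tool automates.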
Connectivity and the Infinite Development Lifecycle
During the development of a traditional, isolated system, such traceability is clearly valuable in itself. But connectivity demands the ability to respond to vulnerabilities identified in the field, essentially for as long as the product lives. Each newly discovered vulnerability implies a changed or new requirement, and one to which an immediate response is needed—even though the system itself may not have been touched by development engineers for quite some time. In such circumstances, being able to isolate what is needed and automatically test only the functions impacted becomes much more significant.
Connectivity changes the notion of the development process ending when a product is launched, or even when its production is ended. Whenever a new vulnerability is discovered in the field, there is a resulting change of requirement to cater for it, coupled with the additional pressure of knowing that in such circumstances, a speedy response to requirements change has the potential to both save lives and enhance reputations. This lifelong obligation shines a new light on automated requirements traceability.
The delivery of a requirements traceability matrix (RTM) can be contractually imposed or recognized as a best practice for successful projects. However, the creation of a useful and error-free RTM can only happen when the requirements are of sufficient quality and the process is managed to:
Ensure that requirements embrace functional, safety and security-related issues
Accept that requirements will change over the life of the project
Employ a development process that embraces and responds to change
Manage the quality of requirements
Let the requirements drive development
Build an RTM from the start of the project
Use the RTM to manage progress and improve project quality
Use the RTM to respond quickly and effectively to security vulnerabilities after product deployment
The end result will be a project that finishes on time and on budget, that avoids gaps between the stakeholder’s vision and what is delivered, and that results in an effective support vehicle for deployed connected systems.
Mark Pitchford has over 30 years’ experience in software development for engineering applications. He has worked on many significant industrial and commercial projects in development and management, both in the UK and internationally. Since 2001, he has worked with development teams looking to achieve compliant software development in safety and security critical environments, working with standards such as DO-178, IEC 61508, ISO 26262, IIRA and RAMI 4.0. Mark earned his Bachelor of Science degree at Nottingham Trent University, and he has been a Chartered Engineer for over 20 years. He now works as Technical Specialist with LDRA Software Technology.