Hardware/software design requirements planning: Part 4 – Computer software approaches

It is not possible for a functioning system to exist that is entirely computer software because software requires a machine medium within which to function. Systems that include software will always include hardware, a computing instrument as a minimum, and most often will involve people in some way.

Software is to the machine as our thoughts, ideas, and reasoning are to the gray matter making up our mind. While some people firmly believe in out-of-body experiences for people, few would accept a similar situation for software.

A particular business entity may be responsible for creating only the software element of a system and, to them, what they are developing could be construed as a system, but their product can never be an operating reality by itself.

This is part of the difficulty in the development of software; it has no physical reality. It is no wonder then that we might turn to a graphical and symbolic expression as a means to capture its essence.

At the beginning, we face the same problem in software as in hardware. We tend to understand our problems first in the broadest sense. We need some way to capture our thoughts about what the software must be capable of accomplishing and to retain that information while we seek to expand upon the growing knowledge base.

We have developed many techniques to accomplish this end over the period of 50–60 years during which software has been a recognized system component.

The earliest software analytical tool was flow charting, which lays out a stream of processing steps similar to a functional flow diagram (commonly in a vertical orientation rather than horizontal, probably because of the relative ease of printing them on line printers), where the blocks are very specialized functions called computer processes.

Few analysts apply flow diagramming today, having surrendered to data flow diagramming (DFD) used in modern structured analysis, the Hatley-Pirbhai extension of this technique, object-oriented analysis, or the Unified Modeling Language (UML).

Alternative techniques have been developed that focus on the data that the computer processes. The reasonable adherents of the process and data orientation schools of software analysis would today accept that both are required, and some have made efforts to bridge this gap.

All software analysis tools (and hardware-oriented ones as well) involve some kind of graphical symbols (bubbles or boxes) representing data or process entities connected by lines, generally directed ones.

Some of these processes begin with a context diagram formed by a bubble representing the complete software entity connected to a ring of blocks that correspond to external interfaces that provide or receive data.

This master bubble corresponds to the need, or ultimate function, in functional analysis, and its allocation to the thing called a system.

The most traditional technique was developed principally by Yourdon, DeMarco, and Constantine. It involves expansion of the context diagram bubble into lower-tier processing bubbles that represent subprocesses just as in functional analysis.

These bubbles are connected by lines indicating data that must pass from one to the other. Store symbols are used to indicate a need to temporarily store a data element for subsequent use. These stores are also connected to bubbles by lines to show source and destination of the data.

Since the directed lines represent a flow of data between computer processing entities (bubbles), the central diagram in this technique is often referred to as a data flow diagram.
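
As a concrete illustration, the sketch below (Python, with invented process, store, and flow names) represents a small data flow diagram fragment as plain data structures: processing bubbles, stores, and directed data flows between them.

```python
# Minimal sketch of a data flow diagram fragment; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DataFlowDiagram:
    processes: set = field(default_factory=set)   # the "bubbles"
    stores: set = field(default_factory=set)      # temporary data stores
    flows: list = field(default_factory=list)     # directed (source, data, destination)

    def add_flow(self, source, data, destination):
        self.flows.append((source, data, destination))

dfd = DataFlowDiagram()
dfd.processes |= {"P1 Validate Input", "P2 Compute Altitude"}
dfd.stores |= {"S1 Calibration Table"}
dfd.add_flow("P1 Validate Input", "raw_pressure", "P2 Compute Altitude")
dfd.add_flow("S1 Calibration Table", "calibration_curve", "P2 Compute Altitude")

for src, data, dst in dfd.flows:
    print(f"{src} --[{data}]--> {dst}")
```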

In all software analysis techniques, there is a connection between the symbols used in the diagrammatic portrayal and text information that characterizes the requirements for the illustrated processes and data needs.

In the traditional line-and-bubble analysis approach, referred to as data flow diagramming, one writes a process specification for each lowest-tier bubble on the complete set of diagrams and provides a line entry in a data dictionary for each line and store on all diagrams.
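
A data dictionary can be pictured as a simple lookup keyed by the names that appear on the diagrams. The entry fields in the sketch below are illustrative assumptions, not a standard schema.

```python
# Sketch of data dictionary entries keyed by flow/store names from the diagrams.
# Field names and contents are invented for illustration.
data_dictionary = {
    "raw_pressure": {
        "kind": "flow",
        "composition": "sensor_id + pressure_pascals + timestamp",
        "appears_on": ["DFD-1", "DFD-1.2"],
    },
    "S1 Calibration Table": {
        "kind": "store",
        "composition": "{calibration_point}",  # braces denote iteration
        "appears_on": ["DFD-1.2"],
    },
}

def lookup(name):
    entry = data_dictionary[name]
    return f"{name} ({entry['kind']}): {entry['composition']}"

print(lookup("raw_pressure"))
```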

Other diagrams are often used in the process specification to explain the need for controlling influences on the data and the needed data relationships.

All of this information taken together becomes the basis for design decisions: the machine upon which the software will run, a language or languages that will be used, and an organization of the exposed functionality into “physical” modules that will subsequently be implemented in the selected language through programming work.

A good general reference for process and data-oriented software analysis methods is Yourdon’s Modern Structured Analysis. Tom DeMarco’s Structured Analysis and System Specification is another excellent reference for these techniques.

Much of the early software analysis tool work focused on batch information processing because central processors, in the form of large mainframe computers, were in vogue.

More recently, distributed processing on networks and software embedded in systems has played a more prominent role, revealing that some of the earlier analysis techniques were of limited utility in exposing the analyst to needed characteristics.

Derek Hatley and the late Imtiaz Pirbhai offer an extension of the traditional approach in their Strategies for Real-Time System Specification to account for the special difficulties encountered in embedded, real-time software development.

They differentiate between data flow needs and control flow needs and provide a very organized environment for allocation of exposed requirements model content to an architecture model. The specification consists of information derived from the analytical work supporting both of these models.

Fred McFadden and Jeffrey Hoffer have written an excellent book, Modern Database Management, on the development of software for relational databases in general and client-server systems specifically.

With this title, it is understandable that they would apply a data-oriented approach involving entity-relationship (ER) diagrams and a variation on IDEF1X. The latter is explained well in the Department of Commerce Federal Information Processing Standards Publication (FIPS PUB) 184. McFadden and Hoffer also explain a merger between IDEF1X and object-oriented analysis.

The schism between process-oriented analysis and data-oriented analysis, only patched together in earlier analysis methods, has been bridged more effectively in object-oriented analysis (OOA), about which many books have been written.

A series that is useful and readable is by Coad and Yourdon (volumes 1 and 2, Object Oriented Analysis and Object Oriented Design, respectively) and Coad and Nicola (volume 3, Object Oriented Programming).

Two others are James Rumbaugh et al., Object Oriented Modeling and Design, and Grady Booch, Object-Oriented Analysis and Design with Applications.

From this dynamic history emerged the Unified Modeling Language (UML), which has become the software development standard. UML did correct a serious flaw in early OOA by encouraging the application of Sullivan’s notion of “form follows function” to software development again. Figure 1.3-5 below provides a summary view of the UML approach.

Figure 1.3-5. UML overview.

While the context diagram is not part of UML, the author encourages its use to organize the use cases applied in UML. For each terminator of the context diagram we can identify some number of use cases, each of which is then analyzed from a dynamic perspective using some combination of sequence, communication, activity, and state diagrams.

Requirements flow from this dynamic analysis. The use of sequence, communication, and activity diagramming requires the analyst to identify next-lower-tier product entities, which are in turn analyzed from a dynamic perspective. This process continues so as to satisfy the requirements identified from the dynamic analysis.
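
One way to picture the author's suggestion is a simple mapping from context diagram terminators to candidate use cases; the terminator and use case names below are invented for illustration.

```python
# Hypothetical sketch: organizing UML use cases by context diagram terminator.
use_cases_by_terminator = {
    "Pilot": ["Command altitude hold", "Disengage autopilot"],
    "Air Data Computer": ["Provide barometric altitude"],
    "Maintenance Technician": ["Run built-in test"],
}

for terminator, use_cases in use_cases_by_terminator.items():
    for uc in use_cases:
        # each use case would then be analyzed with sequence,
        # communication, activity, and state diagrams
        print(f"{terminator}: {uc}")
```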

Verification requirements
Figure 1.3-1 earlier notes that verification requirements are paired with the requirements classes included on the diagram and thus are not specifically addressed as a separate class of requirements.

The requirements classes illustrated in that figure will normally appear in the requirements section of your specification (Section 3 of a military specification). The verification requirements will normally appear in a quality assurance or verification section of your specification (Section 4 of a military specification) by whatever name.

For every performance requirement or design constraint included in the specification, there should be one or more verification requirements that tell how it will be determined whether or not the design solution satisfies that requirement. The first step in this process is to build a verification traceability matrix listing all of the requirements in the left column by paragraph number, followed by a definition of verification methods.

The latter can be provided by a column for each of the accepted methods, which may include test, analysis, examination, demonstration, and none required. An X is placed in this matrix for the methods that will be applied for each requirement (more than one method may be applied to one requirement). The matrix is completed by a column of verification requirement paragraph numbers. There should be one verification requirement defined for each X in the matrix.
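
A minimal sketch of such a verification traceability matrix follows; paragraph numbers and requirement titles are hypothetical. Each X (a selected method) maps to exactly one Section 4 verification requirement paragraph.

```python
# Sketch of a verification traceability matrix (all entries hypothetical).
# Rows: specification Section 3 paragraphs; columns: verification methods;
# each selected method maps to a Section 4 verification requirement paragraph.
METHODS = ("test", "analysis", "examination", "demonstration", "none")

matrix = {
    "3.2.1 Altitude hold accuracy": {"test": "4.2.1", "analysis": "4.2.2"},
    "3.2.2 Output signal range":    {"test": "4.2.3"},
    "3.3.1 Unit weight":            {"examination": "4.3.1"},
}

for req, methods in matrix.items():
    marks = ["X" if m in methods else " " for m in METHODS]
    print(req, marks, "->", sorted(methods.values()))
```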

We must also identify at what level of system hierarchy the requirement will be verified. For example, if the requirement for an aircraft altitude control unit requires that it maintain aircraft barometric altitude to within 100 feet, we could require at the system level that a flight test demonstrate this capability with actual altitude measured by a separate standard and that it not deviate by more than 100 feet from the altitude commanded.

At the avionics system level, this may be verified through simulation by including the actual altimeter in an avionics system test bench with measured altitude error under a wide range of conditions and situations.

At the black-box level, this may be stated in terms of a test that measures an electrical output signal against a predicted value for given situations. Subsequent flight testing would be used to confirm the accuracy of the predictions.
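
At that level, the check reduces to a tolerance comparison. The sketch below assumes a 100-foot budget and invented sample values; a real procedure would, of course, specify instrumentation, conditions, and data reduction.

```python
# Minimal sketch of the black-box level check described above: compare a
# measured output against a predicted value within the allocated tolerance.
# Values and the 100 ft budget are illustrative, not from a real program.

def verify_altitude_hold(measured_ft, commanded_ft, tolerance_ft=100.0):
    """Return True if the altitude error stays within the budget."""
    return abs(measured_ft - commanded_ft) <= tolerance_ft

samples = [(10_020.0, 10_000.0), (9_950.0, 10_000.0), (10_130.0, 10_000.0)]
results = [verify_altitude_hold(m, c) for m, c in samples]
print(results)  # [True, True, False] -- third sample exceeds 100 ft
```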

The requirements for the tests and analyses corresponding to proving that the design solution satisfies the requirements must be captured in some form and used as the basis for those actions.

In specifications following the military format, Section 4, Quality Assurance, has been used to do this, but in many organizations this section is only very general in nature, with the real verification requirements included in an integrated test plan or procedure.

Commonly, this results in coverage of only the test requirements, with analysis requirements being difficult to find and to manage. The best solution to this problem is to include the verification requirements (test and analysis) in the corresponding specification, to develop them in concert with the performance requirements and constraints, and to use them as the basis for test and analysis planning work that is made traceable to those requirements.

Applicable documents
Requirements come in two kinds when measured with respect to their scope. Most of the requirements we identify through the practices described in this section are specific to the product or process we are seeking to define.

Other requirements apply to that product or process by reference to some documentation source, external to the program, prepared for general use on many programs by a large customer organization, government agency, or industry society.

These external documents commonly take the form of standards and specifications that describe a preferred solution or constrain a solution with preferred requirements and/or values for those requirements.

The benefit of applicable documents is that they offer proven standards, and it is a simple matter to invoke them by reference to the containing document in the program specification. The downside is that one has to be very careful not to import unnecessary requirements through this route.

If a complete document is referenced without qualification, the understanding is that the product must comply with the complete content. There are two ways to limit applicability.

First, we can state the requirement such that it limits the applicability, and therefore the document applies only to the extent covered in the specification statement. The second approach is to tailor the standard using one of two techniques.

The first tailoring technique is to make a legalistic list of changes to the document and include that list in the specification. The second technique is to mark up a copy of the standard and gain customer acceptance of the marked-up version.

The former method is more commonly applied because it is easy to embed the results in contract language, but it can lead to a great deal of difficulty when the number of changes is large and their effect is complex.
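
The legalistic tailoring list lends itself to a simple record structure. Document identifiers, paragraph numbers, and rationales below are invented for illustration.

```python
# Sketch of the "legalistic list" tailoring technique: record each change
# to the referenced standard so it can be embedded in contract language.
tailoring = [
    {"document": "STD-XYZ-123", "paragraph": "4.5.2",
     "action": "delete", "rationale": "requirement not applicable to ground units"},
    {"document": "STD-XYZ-123", "paragraph": "5.1.1",
     "action": "replace", "new_text": "Operating temperature: -20 to +55 C",
     "rationale": "relaxed from -40 C per customer agreement"},
]

for change in tailoring:
    print(f"{change['document']} para {change['paragraph']}: {change['action']}")
```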

Process requirements analysis
The techniques appropriate to product requirements analysis may also be turned inwardly toward our development, production, quality, test, and logistics support processes.

Ideally, we should be performing true cross-functional requirements analysis during the time product requirements are being developed.

We should be optimizing at the true system level, involving not only all product functions but process functions as well. We should terminate this development step with a clear understanding of the product design requirements and the process design requirements.

This outcome is encouraged if we establish our top-level program flow diagram at a sufficiently high level. We commonly make the mistake of drawing a product functional flow diagram focused only on the operational mission of the product.

Our top-level diagram should recognize product development and testing, product manufacture and logistic support, and product disposition at the end of its life. This should truly be a life cycle diagram. Figure 1.3-6 below is an example of such a total process functional flow diagram.

Figure 1.3-6. Life cycle functional flow diagram.
System development (F1), material acquisition and disposition (F2), and integrated manufacturing and quality assurance (F3) functions can be represented by program evaluation and review technique (PERT) or critical path method (CPM) diagrams using a networking and scheduling tool.
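
For readers unfamiliar with CPM mechanics, the sketch below runs a forward pass over a toy three-task network to compute earliest finish times; task names and durations are hypothetical.

```python
# Sketch of a critical path method (CPM) forward pass over a tiny network.
# Task names and durations (in weeks) are invented for illustration.
durations = {"F1": 10, "F2": 4, "F3": 6}
predecessors = {"F1": [], "F2": ["F1"], "F3": ["F1", "F2"]}

earliest_finish = {}
for task in ("F1", "F2", "F3"):  # tasks listed in topological order
    start = max((earliest_finish[p] for p in predecessors[task]), default=0)
    earliest_finish[task] = start + durations[task]

print(earliest_finish)  # {'F1': 10, 'F2': 14, 'F3': 20}
```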

The deployment function (F5) may entail a series of very difficult questions involving gaining stakeholder buy-in as well as identification of technical, product-peculiar problems reflecting back on the design of the product.

At least one intercontinental ballistic missile program was killed because it was not possible to pacify the inhabitants of several Western states where the system would be based. Every community has smaller-scale local examples of this problem in the location of the new sewage treatment plant, dump, or prison. It is referred to as the not-in-my-backyard problem.

The traditional functional flow diagram commonly focuses only on function F6 and often omits the logistics functions related to maintenance and support.

This is an important function and the one that will contribute most voluminously to the identification of product performance and support requirements. Expansion of F6 is what we commonly think of as the system functional flow diagram.

The system disposition function (F7) can also be expanded through a process diagram based on the architecture that is identified in function F1.

During function F1, we must build this model of the system and related processes, expand each function progressively, and allocate observed functionality to specific things in the system architecture and processes to be used to create and support the system.
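
Allocation can be tracked as a simple mapping from exposed functions to architecture items, with a completeness check that nothing is left unallocated. All identifiers below are invented.

```python
# Sketch: allocating functions exposed by flow-diagram expansion to items
# in the system architecture and its supporting processes.
allocations = {
    "F6.1 Maintain barometric altitude": "A11 Altitude Control Unit",
    "F6.2 Display altitude to crew":     "A12 Cockpit Display",
    "F3.4 Acceptance test each unit":    "Factory test equipment (process)",
}

# Simple completeness check: every exposed function must be allocated.
exposed = {"F6.1 Maintain barometric altitude",
           "F6.2 Display altitude to crew",
           "F3.4 Acceptance test each unit"}
unallocated = exposed - set(allocations)
assert not unallocated, f"Unallocated functions: {unallocated}"
```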

All of these functions must be defined and subjected to analysis during the requirements analysis activity, and the results folded into mutually consistent product and process requirements.

Decisions on tooling requirements must be coordinated with loads for the structure. Product test requirements must be coordinated with factory test equipment requirements. Quality assurance inspections must be coordinated with manufacturing process requirements. There are, of course, many, many other coordination needs between product and process requirements.

To read Part 1, go to: Laying down the groundwork.
To read Part 2, go to: Decomposition using structured analysis.
To read Part 3, go to: Performance requirements analysis.

Jeffrey O. Grady is president of JOG System Engineering, Inc. and adjunct professor at the University of California, San Diego, CA. He was a founding member of the International Council on Systems Engineering (INCOSE).

Used with permission from Newnes, a division of Elsevier. Copyright 2011, from “System Verification” by Jeffrey O. Grady. For more information about this title and other similar books, please visit www.elsevierdirect.com.

This article was provided courtesy of Embedded.com and Embedded Systems Design magazine.
