I don’t need no stinkin’ requirements!

Ask a new or a seasoned embedded systems developer (a.k.a. “the coder”) in a casual environment to briefly describe how he or she intends to execute the next project, and the answer will be something like this: “We'll get the requirements, design the system, write the code, and then verify it.” The very same day, that coder can be in a conference room at the formative stages of said next project, and as soon as he gets a whiff of what the project is about, the code is being written (at least in his head) and the architecture has been locked in.

The same coder, a day or a week later, can be in a very similar meeting (perhaps in the same room) lamenting that there are not enough requirements to do his design and that until the requirements stop changing he cannot proceed. What accounts for this seeming contradiction?

Every project has risk, and the greatest risk is that the requirements will either not be well-defined at the start or will change many times before the code is completed. At the same time, engineers are tasked with “getting the job done” within the constraints of the situation (and doing so safely, again within the constraints of the situation). Because most commercial projects have a limited market window, time is often the key constraint, whether the push is to meet a holiday sales cycle, make a key conference or exhibition, beat a competitor to market, or simply hold to an expected schedule. Since time is always limited, requirements will suffer and therefore change or develop throughout the project. I am reminded of a cartoon I saw as a new graduate engineer at my first job over 20 years ago. A manager talking to his coding team says, “You lot start coding. I'll go and see what they want.” I was developing avionics software at the time, and many days it seemed to accurately reflect our work environment.

As a member of the marketing staff, I admit we start projects before the requirements are set, a problem compounded by our evolving understanding of what is possible, shifting customer desires (which are imperfectly divined at best), and a constantly changing competitive landscape (especially true for hot, new technology areas). At the same time, engineers thrive on their work and so often plunge headfirst into a new project.

So, for the engineer as coder, the real challenge is how to successfully begin and complete a project fast enough that the product generates maximum revenue, knowing that until it actually gets to market virtually any requirement is subject to change. Engineers will also want to take into account that if the product is successful, follow-ons and derivatives will be required at an even faster pace.

The only sane strategy is to expect and handle change. This begins by expecting everything to change and then subdividing and separating aspects of the design behind strong interfaces. As long as the interfaces are maintained, the changes underneath will not have an impact on the rest of the design. Incidentally, this type of programming is very deliberate and doesn't happen by chance or in an environment where code is simply cranked out as a reaction to requirements. Certainly requirements are important to designing these interfaces, but you can separate the critical structural ones from the features and, with the following design approach, move forward quickly and adapt quickly. The recipe is:

  • Start with the structure,

  • Establish communications,

  • Build in the timing, and

  • Enable swappable features.

Step 1: Start with the structure
When discussion of a new project begins, the major structural decisions are generally agreed upon, things like what the product is: a toaster, a blade server, a handheld navigation system, or a cell phone. From this “product” definition a system definition emerges where the primary features are fleshed out and the real engineering begins—taking the product and subdividing it into smaller systems. This point is where the structural design of the subsystems is determined and nailed down. High-level product features affect the marketability of a product as do the detailed features, but this system definition, the subsystem partitioning, doesn't have a strong impact on what the customer specifically experiences and is therefore fairly safe from major disruption as features change.

What do you include in the structure definition? This is where the host and peripheral devices are chosen (at least the class of devices). Put another way, these devices are essentially the high-level hardware requirements and represent the first hardware-software partitioning. The structure of an individual subsystem now needs to be defined and designed. The structure is the frame upon which the rest of the application hangs. It doesn't define the features but rather is the framework into which the features will plug in.

Let's use a modern kitchen appliance as our example. Consider a high-end countertop convection oven with digital controls and an LED display.

The high-level structure of this product dictates a heating system (electric heating elements and controls), a fan system (the convection part), a power supply, and a user interface with touch-control buttons and LED display. Looking at the embedded system design, we have a power system (managing the safety, perhaps watching for power loss), the oven system with subsystems controlling the heating elements and the fan using temperature feedback, and there is the user interface for user input and display, which may additionally have menu/recipe storage and presets. This structural definition is pretty stable, with a low risk of change.

What is at high risk of change at this point are things like whether the design consists of one super-MCU doing everything with all possible hardware inside the package (a fantasy) or several MCUs and off-the-shelf peripherals (most likely) whose details are highly subject to change. The structure must transcend the specific hardware device selections and concentrate on the application. Whether the design has one or several MCUs, there will be one main MCU and that is our focus as we define the structure. The structure definition focuses on the end-application, shown in Figure 1.


Figure 1. Structural definition of the end application: input blocks, control logic, and output blocks.

On the left of the diagram are the inputs (blue blocks): temperatures, user interaction, and saved recipes. On the right are the outputs (red blocks): the heating element, the convection fan, and the display to the user. Tying it all together, in the center, is the control logic block where the central decision-making takes place. Because this design will still need to be allocated amongst the lower-level subsystems and devices, there isn't a need to know or decide how the arrows in this diagram are implemented. That doesn't need to happen until after Step 4. Note that even if it is known and rock-solid at this point, it doesn't change what happens during Steps 2 through 4.
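To make the frame concrete, here is a minimal sketch, in C, of how the structural skeleton of Figure 1 might look as a single pass of a main control loop. Every function name here is a hypothetical placeholder for one of the blocks; the bodies live in other modules and are not designed until Steps 2 through 4.

/* Hypothetical structural skeleton: one pass of the control loop.          */
/* Each call stands in for a block in Figure 1; implementations live in     */
/* separate modules so the blocks can change without touching this frame.   */

/* Input blocks (left side of Figure 1), implemented elsewhere */
int getTemperature(void);          /* temperature sensing        */
int getUserRequest(void);          /* user interaction (buttons) */
int getSelectedRecipe(void);       /* saved recipes and presets  */

/* Output blocks (right side of Figure 1), implemented elsewhere */
void setHeaterPower(int percent);  /* heating element            */
void setFanSpeed(int percent);     /* convection fan             */
void updateDisplay(int status);    /* LED display                */

/* Control logic (center of Figure 1) */
void runControlLogic(void)
{
    int temperature = getTemperature();   /* degrees x10 (see Step 2)              */
    int setpoint    = getSelectedRecipe();/* stand-in: recipe provides the setpoint */
    int request     = getUserRequest();   /* e.g., start/stop from the buttons      */

    /* Stand-in decision-making only; the real rules evolve with the requirements. */
    int heating = (request != 0) && (temperature < setpoint);

    setHeaterPower(heating ? 100 : 0);
    setFanSpeed(heating ? 50 : 0);
    updateDisplay(temperature);
}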

Step 2: Establish the communications
Each one of the blocks in Figure 1, at this point, can be considered a distinct subsystem. The communication between them is the interface definition. Nailing down the interfaces is critical to capturing the essence or structure of the product and allowing the requirements in each subsystem to evolve independently (that is, limiting the impact on the project schedule and everything else). The focus of this step is the control logic, the green block—the heart of the product.

Communications refers to the information flowing into and out of the control logic block; this step may involve messaging supported by an RTOS (real-time operating system), a physical communications interface like I2C or SPI, or simple function calls. It's still too early to know all of these details, and the goal is to set the interface so the control logic block never has to know which method has been implemented. In this way, the core application (as implemented in the control logic block) proceeds unaffected by fluctuations in the other blocks.

To establish the communications, identify, from the point of view of the control logic, what is needed from each input or output block. Consider “Temperature sensing,” for instance. From this block, the control logic needs to know the current temperature (assume for the example that there is a single temperature zone and therefore only one temperature to worry about). The format of the temperature is an important aspect of this interface, since changes to it greatly affect the implementation of the control logic. Therefore, set the format and define the function that the control logic block will use: perhaps a C-language function prototyped as int getTemperature(void); where the result is a signed fixed-point integer representing the temperature times 10 (in other words, 353.7 degrees would be returned as 3537). Units also matter: either the unit would be defined as part of the getTemperature interface description, or two different functions would be defined, one for temperature in degrees Fahrenheit and the other in degrees Celsius (getTemperatureF() and getTemperatureC()).
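A minimal sketch of what this interface could look like in C follows. The header is everything the control logic ever sees; the implementation shown is only a hypothetical placeholder built on an assumed low-level driver (readThermocoupleRaw), and it could later be replaced by an I2C sensor driver or an RTOS message without the control logic noticing.

/* temperature_if.h : the interface the control logic sees.                 */
/* Values are signed fixed-point, degrees x10 (353.7 is returned as 3537).  */
int getTemperatureC(void);   /* degrees Celsius x10    */
int getTemperatureF(void);   /* degrees Fahrenheit x10 */

/* temperature_if.c : one possible implementation (hypothetical).           */
extern int readThermocoupleRaw(void);   /* assumed driver, 0.1 degree C steps */

int getTemperatureC(void)
{
    return readThermocoupleRaw();
}

int getTemperatureF(void)
{
    /* F = C * 9/5 + 32; both values carry the x10 scaling. */
    return (getTemperatureC() * 9) / 5 + 320;
}

Whatever lives below this header can change; the prototypes and the times-10 format are the contract.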

Continue defining interfaces for all the blocks (depicted in Figure 2 as blue or red “IF” blocks contained inside the green “Control logic” block).


Figure 2. Interfaces for each block, shown as blue or red “IF” blocks inside the green “Control logic” block.
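As one way to picture those “IF” blocks, the sketch below shows a heater interface whose header never changes while the implementation behind it can be swapped. All of the names, including the assumed pwmSetDuty and spiSendCommand drivers and the 0x10 command code, are illustrative, not a prescribed API.

/* heater_if.h : what the control logic sees; this is the part to keep stable. */
void setHeaterPower(int percent);                 /* 0..100 percent drive       */

/* heater_if_pwm.c : implementation A, heater driven by a local PWM output.     */
extern void pwmSetDuty(int channel, int duty);    /* assumed MCU driver         */
void setHeaterPower(int percent)
{
    pwmSetDuty(0, percent);                       /* heater on PWM channel 0    */
}

/* heater_if_spi.c : implementation B, heater handled by a companion device.    */
extern void spiSendCommand(unsigned char cmd, unsigned char arg);  /* assumed   */
void setHeaterPower(int percent)
{
    spiSendCommand(0x10, (unsigned char)percent); /* 0x10 = "set heater power"  */
}

/* Only one of the two .c files is built into a given product; the control      */
/* logic includes heater_if.h and cannot tell which one it got.                 */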

Step 3: Build in the timing
Timing is a critical design requirement and as such is subject to modification as the project proceeds. At this stage, the most important thing to determine is how the different blocks will be timed. For example, will the temperature block update asynchronously with the control block simply accessing the most recent value, or will the completion of a new update interrupt the control logic and start a new frame? These types of decisions need to be made, and they depend upon several factors, such as how fast something happens and how fast a response is expected.

The simplest design is for each block to operate independently, providing the most current data to the control logic (in the case of inputs) or executing the most current command from the control logic (in the case of outputs). In general, the fewer timing or order-of-execution constraints there are, the better. As a consequence, identify all possible timing dependencies and try to eliminate them. For example, looking at the temperature block, one possible design is for the temperature conversion to be initiated by the control logic block whenever it wants a new temperature. This surely provides the most up-to-date value, but at what cost? Either the control logic will have to wait for a conversion to complete and slow down the whole system, or the device providing this conversion must be fast enough to not slow the system, which will mean a more expensive solution.

Next, take a deeper look and separate the normal operating conditions (for instance, the oven is heating and the new temperature will change the current heating command) from the conditions requiring immediate action (like sensing a dangerously high temperature, perhaps caused by an electrical short). Urgent and unusual situations can be allocated an interrupt and dealt with immediately, while normal sensing proceeds asynchronously to the control logic. By doing a similar situational analysis, you can set the timing boundaries for each block, defining the worst-case timing and ensuring it meets product requirements while giving each functional block maximum room to operate.
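A minimal sketch of that split, under the same assumptions as before (a hypothetical readThermocoupleRaw driver and an assumed 450.0-degree safety limit), might look like the following. Note that the getTemperatureC interface from Step 2 does not change; only the implementation behind it moves from an on-demand conversion to an asynchronous update.

/* Normal updates run asynchronously to the control loop; a dangerous       */
/* over-temperature is flagged from the interrupt for immediate handling.   */
#define OVER_TEMP_LIMIT_X10 4500           /* 450.0 degrees, assumed limit  */

extern int readThermocoupleRaw(void);      /* assumed driver, 0.1 C steps   */

static volatile int latestTempX10;         /* written only by the ISR       */
static volatile int overTempAlarm;         /* set by the ISR, cleared by the safety handler */

void temperatureConversionISR(void)        /* runs when a conversion completes */
{
    int t = readThermocoupleRaw();
    latestTempX10 = t;
    if (t > OVER_TEMP_LIMIT_X10)
        overTempAlarm = 1;                 /* urgent case: handled right away  */
}

int getTemperatureC(void)                  /* same interface as in Step 2      */
{
    /* On an 8-bit MCU, guard this read against the ISR (e.g., briefly     */
    /* disable the interrupt) so the multi-byte value is read atomically.  */
    return latestTempX10;
}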

Step 4: Enable swappable features
From Steps 2 and 3, you can see that maximum flexibility is being designed in for each block. Step 4 concentrates on taking that concept to all aspects of the product. Each feature can be treated in a similar fashion, at least the features affecting the embedded design. Should we use a set of tactile keys for the user inputs or modern capacitive sensors? Should the memory for the recipe database be 8 Kbytes of internal flash or something removable like an SD card that a user could upgrade with new recipes (by removing the card, plugging it into a standard card reader on their computer, and accessing our web site)? Does the convection system use a single fan or multiple fans with several temperature zones?
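As a concrete illustration of keeping one of those choices open, here is a sketch of a recipe-storage interface whose header hides whether the data sits in internal flash or in a file on an SD card. The structure fields and function names are assumptions made for the example, not part of the original design.

/* recipe_if.h : the control logic's view of recipe storage (illustrative). */
typedef struct {
    char name[24];            /* recipe name shown on the display  */
    int  setpointX10;         /* target temperature, degrees x10   */
    int  timeSeconds;         /* cook time                         */
    int  fanPercent;          /* convection level, 0..100          */
} Recipe;

int getRecipeCount(void);
int getRecipe(int index, Recipe *out);    /* returns 0 on success */

/* Behind this header, one build reads a table in internal flash and another   */
/* reads a file from an SD card; the control logic cannot tell the difference. */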

For every feature, define a wrapper that allows the details of the feature to change under the hood while the system sees a consistent interface. Just as in the temperature example, the same “wrapping” approach can be applied to every feature. One way to look at this is to create feature objects that contain the algorithms and details but present the results in a consistent and easy-to-use way to the control logic. For instance, if the display might need to show English, French, Chinese, or Korean prompts, the feature object would have a property for “display language” and a function void setLanguage(char newLang) to change the language displayed. Add more languages by updating the low-level display functions, not the control logic.
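A sketch of such a display feature object, using the setLanguage idea from the paragraph above, might look like this. The single-character language codes, the prompt tables, and the assumed lcdWriteString driver are all illustrative choices.

/* display_if.h : the display feature object as seen by the control logic. */
void setLanguage(char newLang);     /* e.g., 'E', 'F', ... (assumed codes)   */
void showPrompt(int promptId);      /* control logic refers to prompts by ID */

/* display_if.c : adding a language touches only this file.                */
extern void lcdWriteString(const char *s);   /* assumed low-level driver   */

static char currentLang = 'E';
static const char *promptsEnglish[] = { "Preheat", "Ready", "Done" };
static const char *promptsFrench[]  = { "Prechauffage", "Pret", "Termine" };

void setLanguage(char newLang)
{
    currentLang = newLang;
}

void showPrompt(int promptId)
{
    const char **table = (currentLang == 'F') ? promptsFrench : promptsEnglish;
    lcdWriteString(table[promptId]);
}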

What about a major change like going from a simple one-temperature-zone design to multiple zones? Even if such a major change were to occur, the affected pieces in the temperature, heater, and convection blocks would be updated, and perhaps a new control function added, but at least everything wouldn't change. You can go one better than that by holding the “what if” brainstorming session and driving any potentially affected pieces into a feature block. The goal of this step is to create a set of pieces that can undergo radical changes with limited impact on the system overall.

An additional benefit is that feature objects are highly reusable when they're designed to shield the rest of the system from their own changes. The same flexibility of change within the system is provided to future systems. If you have multiple teams working on different but related products, this approach lets them share technology and innovation, especially if the products share a similar structure to begin with.

Object-oriented design without its language
Careful readers will have noticed that by following the four steps, it looks like I have directed this design to become object-oriented—partitioned into objects, each with its specific data and implementation hidden. Guilty as charged.

Proponents of object-oriented design and programming have touted for decades the benefits I assert you will gain by this approach, such as limiting the effect of any single change on the rest of the design and improving reusability. And most new programming languages and variants structurally support object-based design. But an “object-oriented” design does not succeed or fail based upon language selection; no language by itself will turn code churned out without a strong design into a success. Similarly, you can achieve the benefits described with any language, and since C is in common usage in embedded design, I chose to think of this design approach using C. In fact, one of the best object-oriented designs I have worked on used Intel PL/M on an 80C86 processor and was successful due entirely to the design; the language was barely one step up from assembly.
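For readers who want to see what “object-oriented in C” can look like, here is one common pattern: a struct that holds an object's data together with pointers to its operations. This is a generic illustration of the idea, not code from the oven design, and readThermocoupleRaw is again an assumed driver.

/* An "object" in plain C: data plus function pointers in one struct.   */
typedef struct TempSensor {
    int lastReadingX10;                      /* the object's hidden data */
    int (*read)(struct TempSensor *self);    /* the object's "method"    */
} TempSensor;

extern int readThermocoupleRaw(void);        /* assumed driver           */

static int thermocoupleRead(TempSensor *self)
{
    self->lastReadingX10 = readThermocoupleRaw();
    return self->lastReadingX10;
}

TempSensor ovenSensor = { 0, thermocoupleRead };

/* The caller only ever writes ovenSensor.read(&ovenSensor); swapping in  */
/* a different sensor means supplying a different read function, not      */
/* changing the caller.                                                    */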

Although there is no miracle cure for late-cycle requirement changes, the steps I've laid out above will lead toward a successful object-based design that can integrate requirement changes as they occur with the least impact on the overall system and schedule.

Jon Pearson began designing embedded systems in 1986 for commercial aircraft, telecommunications satellite systems, and notebook computers. Jon has been driving marketing efforts at Cypress Semiconductor since 2000, when the first programmable system-on-chip mixed-signal array devices were created. Currently Jon leads the development-tools marketing efforts for PSoC and other programmable devices.
