An architecture for designing reusable embedded systems software, Part 1

The drive to reduce product development cycle times has led to the need for designing reusable code. It's apparent that reusing code across software projects decreases project development time. Nonetheless, software engineers often resist developing reusable code because they're burdened by time-to-delivery commitments.

From design to documentation, reusable code requires that project managers assign additional resources up front. They must decide whether to spend the extra effort initially on designing reusable software modules, a long-term benefit, or to quickly design the software to meet their clients' deadlines and rework the modules for reuse later.

One way to reduce the development time of designing reusable software is to adopt an architectural template that can be applied from project to project. The template defines hardware-independent reusable modules and an interface layer that is hardware dependent, changing when the hardware in the system changes. By applying the architecture template consistently across several program platforms, the goal is to decrease development time from one project to the next while improving the maintainability of the software product.

Let's start by viewing the overall system as an object partitioned into several smaller objects or layers. Some of these objects are written to be reusable while others, by design, depend on the hardware and must change when the hardware changes. We can divide the system into four distinct layers, as shown in Figure 1. The outermost layer is the perceived system behavior: the behavior users expect, typically defined in a functional requirements document. For example, when someone pushes the call button for an elevator, they expect that within some reasonable amount of time the elevator door will open, allowing them to enter and choose a desired floor.

The next layer is the “hardware” layer. It comprises the various sensors and actuators that provide information to, and act on commands from, the core software object. This layer also defines the terminators for the system, which in many cases are dictated by the customer. These terminators are often constraints applied to the design.

For example, let's assume that the project requires a vehicle velocity input. One customer might define the terminator for this input as the vehicle speed, derived from the engine control module and transmitted via a two-wire serial communications link. Another customer, requiring the same input, may specify that the terminator is a wheel speed sensor, which requires processing in order to derive the vehicle speed. In either case, the customer dictates the type of terminator that is available to the system even though the core requirement is vehicle speed.

The “core-software” layer is, by design, hardware independent. This layer determines the appropriate action for a given set of inputs and drives the outputs to a desired state. It executes the algorithms necessary to provide the function the customer requires. The inputs and outputs to this layer (in other words, the system's external terminators) are pre- or post-processed by the interface layer, and the data is typically passed to the layer through random access memory (RAM) variables.
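As a rough sketch of that division (the variable and function names here are hypothetical, not taken from the article), the core layer might look like this:

#include <stdint.h>

/* RAM variables written by the interface layer before the core runs */
int16_t g_vehicle_speed;     /* scaled integer, e.g. 0.01 km/h per bit */
int16_t g_wheel_speed[4];    /* same scaling, one value per wheel */

/* RAM variable read back by the interface layer after the core runs */
int16_t g_desired_pressure;  /* scaled integer, e.g. 1 kPa per bit */

/* Hardware-independent core step: it touches only RAM variables and
   contains no peripheral or register access of any kind. */
void core_control_step(void)
{
    /* ... control algorithm reads g_vehicle_speed and g_wheel_speed,
       computes a result, and writes it to g_desired_pressure ... */
}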

Between the hardware and core software layers is the “interface” layer. This layer links the hardware with the software and, by design, is tied to the hardware, making it reusable for any other system that uses identical hardware. Figure 1 illustrates the layers that compose the entire system.

When applying this model to an antilock braking system (ABS) on an automotive vehicle (see Figure 2), the outer layer is the level at which the user interacts with the system. The main input at this level is the brake pedal, which translates the force applied at the pedal into braking torque at the wheels. The core software determines whether wheel lock-up is imminent and regulates the individual wheel brake pressures to maintain the maximum traction forces at the tire-to-road contact patch. The perceived system behavior is added stability, increased steering ability of the vehicle, and, in most cases, reduced stopping distance.

The hardware layer that interfaces to the core processor consists of four wheel-speed sensors and valves at each wheel to regulate the brake pressure. The hardware layer may also contain a communication interface for system diagnostics. The electronic control unit and the microcontroller(s) make up the processing intelligence for the system. Keep in mind that the interface layer is hardware dependent: it handles the methods for transferring the necessary information to the core software and for converting the core software's calculated output signals into signals usable by the various peripherals that drive the loads.

For example, an accelerometer sensor signal supplying a voltage that is proportional to an acceleration, given in either meters/sec^2 or units of gravity (Gs), would have a specification that translates voltage into acceleration. A one-volt signal might be equivalent to the sensor measuring 1G. The interface layer would convert a voltage from the analog signal to a mathematical unit of measurement that best represents the desired signal. Since many systems are designed around a fixed-point central processing unit, the interface layer should also be responsible for taking the “real” units signal and converting it into a scaled integer value that is easily processed by the microcontroller.
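As a concrete sketch, assume (these numbers are illustrative, not from the article) a 10-bit ADC with a 5 V reference reading a 1 V-per-G accelerometer. The interface layer's conversion, done entirely in integer math so it suits a fixed-point CPU, might look like this:

#include <stdint.h>

/* Hypothetical sensor and ADC characteristics (assumptions):
   10-bit ADC, 5.0 V full scale, sensor output of 1.0 V per G. */
#define ADC_FULL_SCALE_COUNTS  1023U
#define ADC_FULL_SCALE_MV      5000U   /* millivolts at full scale */
#define SENSOR_MV_PER_G        1000U   /* 1 V per G */

/* Convert a raw ADC reading to acceleration in milli-G, using only
   integer math so it runs efficiently on a fixed-point CPU. */
static int16_t accel_milli_g(uint16_t adc_counts)
{
    /* counts -> millivolts, rounded to nearest */
    uint32_t mv = ((uint32_t)adc_counts * ADC_FULL_SCALE_MV
                   + ADC_FULL_SCALE_COUNTS / 2U) / ADC_FULL_SCALE_COUNTS;

    /* millivolts -> milli-G (1000 mV corresponds to 1000 milli-G here) */
    return (int16_t)(mv * 1000U / SENSOR_MV_PER_G);
}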

The same approach can be taken for an output variable calculated by the core software. For this example, let's assume the core software calculates a desired current to flow through a solenoid/load, to be controlled within some tolerance. The output from the core software is the current value desired to flow through the solenoid/load. The hardware-software interface circuit may consist of a pulse-width modulated output port on the microcontroller that regulates the current through the load by modulating the solenoid's on/off time. Therefore, the output from the core software is in units of current (amps), which the interface software converts into the pulse-width modulated signal, in percent on-time or duty cycle, required to achieve that current.
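A sketch of that output-side conversion, again with illustrative numbers (a 5-ohm solenoid on a 12 V supply, and a timer whose period register holds 1,000 counts), might be:

#include <stdint.h>

/* Hypothetical load, supply, and timer characteristics (assumptions) */
#define SOLENOID_MILLIOHMS  5000U    /* 5.0 ohm solenoid resistance */
#define SUPPLY_MV           12000U   /* 12.0 V supply */
#define PWM_PERIOD          1000U    /* timer counts per PWM period */

/* Convert the core software's desired current (in mA) into the timer
   compare value that produces the equivalent average on-time:
   duty = I * R / V, scaled onto the timer period. */
static uint16_t pwm_compare_from_ma(uint16_t desired_ma)
{
    /* mA * milliohm = microvolt; dividing by the supply (mV) yields
       the duty cycle in per-mille (0..1000). */
    uint32_t per_mille = (uint32_t)desired_ma * SOLENOID_MILLIOHMS / SUPPLY_MV;

    if (per_mille > 1000U) {   /* clamp: cannot exceed 100% on-time */
        per_mille = 1000U;
    }
    return (uint16_t)(per_mille * PWM_PERIOD / 1000U);
}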

Table 1 lists the hardware layer, interface layer, and core-software responsibilities for a few example systems.


If the software architecture takes on the structure illustrated in Figure 3, the core software can be designed to be independent of the hardware implementation. It's still necessary to define the input/output ranges and resolution requirements for processing the data. This is especially important when designing a system that is based on a fixed-point processor. When using a floating-point processor, the interface layer is a simple conversion layer that takes a raw measured signal and converts it to a desired unit of measurement.
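For a fixed-point design, that range/resolution definition can be captured directly in the interface specification. A hypothetical entry for one signal might read:

#include <stdint.h>

/* Hypothetical signal specification for a fixed-point core:
   engineering range, resolution (LSB weight), and storage type.
   Range:       0 to 327.67 km/h
   Resolution:  0.01 km/h per bit
   Storage:     int16_t (0.01 km/h per bit spans the range exactly) */
typedef int16_t vehicle_speed_t;

#define VEHICLE_SPEED_LSB_NUM   1U     /* LSB weight = 1/100 km/h */
#define VEHICLE_SPEED_LSB_DEN   100U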


The objective is to develop the interface layer to translate the physical hardware signals into variables that can be used directly by the core software, and vice versa. If the interface layer is designed correctly, it will take on the form of a hardware-software interface specification (HSIS). It should be designed for easy modification in case the system's context diagram is altered (for example, if a sensor in the system is changed or the microcontroller becomes obsolete). The interface layer is the main software component that is modified, so the core software requires minimal changes, if any.

Interface-layer building blocks
Key to the operation of this software structure is the software interface, which has three essential components:

1. Microcontroller specification (ecu_hsis.h).

2. I/O signals interface specification (I/O Signal #1, #2, #n).

3. I/O interface macros (interface.h, interface.c).

One implementation of the interface layer would be to define a separate header file for each of these components. Figure 3 illustrates the partitioning of the software interface layer and how it relates to the overall system.

The ecu_hsis.h file defines the low-level interfaces to the microcontroller that both the interface and signal header files reference. Other microcontroller specifications are also captured in this header file, such as timing parameters used, for example, in driving pulse-width-modulated outputs. The signal and interface header files become microcontroller independent because the ecu_hsis.h file encapsulates the peripheral-level I/O into higher-level references through macros.
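A minimal sketch of what ecu_hsis.h might contain (the register addresses and macro names below are hypothetical, not from the article):

/* ecu_hsis.h -- microcontroller specification */
#ifndef ECU_HSIS_H
#define ECU_HSIS_H

#include <stdint.h>

/* Hypothetical memory-mapped peripheral registers */
#define ADC_RESULT_REG    (*(volatile uint16_t *)0x4000A000u)
#define PWM0_COMPARE_REG  (*(volatile uint16_t *)0x4000B004u)

/* Higher-level references used by signal.h and interface.h, so they
   never touch the peripheral registers directly */
#define HSIS_READ_ADC_CH0()     (ADC_RESULT_REG)
#define HSIS_SET_PWM0(compare)  (PWM0_COMPARE_REG = (uint16_t)(compare))

/* Microcontroller timing parameters, e.g. for PWM generation */
#define HSIS_PWM_PERIOD_COUNTS  1000U

#endif /* ECU_HSIS_H */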

The signal.h file encapsulates the data further by taking the raw sensor specification into account and providing the raw integer signal. It gives interface.h a way to access sensor-independent signal types. The interface.h file modifies the raw signals it receives from signal.h by applying the appropriate integer scaling/resolution and zero-point offset required by the hardware-independent functions. The purpose of the interface.h file is to provide a means of getting data in from, and out to, the real world.
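Continuing the accelerometer example, hypothetical signal.h and interface.h fragments might layer on top of ecu_hsis.h like this (accel_milli_g() and pwm_compare_from_ma() are the conversion routines sketched earlier):

/* signal.h -- wraps the raw peripheral reading with the sensor spec,
   so interface.h sees a sensor-independent raw signal */
#include "ecu_hsis.h"

/* 1 V/G accelerometer assumed to be wired to ADC channel 0 */
#define SIGNAL_ACCEL_RAW()   (HSIS_READ_ADC_CH0())

/* interface.h -- applies scaling/resolution and zero-point offset so
   the core software receives hardware-independent engineering units */
#include "signal.h"

/* Acceleration in milli-G, whatever sensor or ADC is actually fitted */
#define IF_ACCEL_MILLI_G()   accel_milli_g(SIGNAL_ACCEL_RAW())

/* Output direction: core's desired current (mA) -> PWM compare value */
#define IF_SET_SOLENOID_MA(ma)  HSIS_SET_PWM0(pwm_compare_from_ma(ma))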

Ideally, if the microcontroller changes, one would only need to modify the ecu_hsis.h file. If the sensors in the system are changed, only the appropriate signal.h file needs to be modified to account for the new sensor specification.

By explicitly designing an interface layer that is hardware dependent, the core software layer can be engineered to be hardware independent. Both layers are reusable; the interface layer would be reused on other projects that implement the identical hardware. By design, the core software layer, which is independent of hardware, consists of reusable software modules.

The recommendations made in this article are not all-encompassing but should be considered a starting point for software architectures. As a guide for developers, two additional articles in this series will be available online; they will provide more detail on the structure of the building blocks in this architecture, as well as guidelines for its implementation.

Next, in Part 2: The portable code software structure building blocks.

Dinu P. Madau is a software technical fellow with Visteon. He has been developing software for embedded systems for over 22 years. He has an MSE in computer and electrical control systems engineering from Wayne State University and a BSE in computer engineering. Dinu has developed safety-critical software for anti-lock brakes, vehicle stability control, and suspension controls and is currently working in Advanced Cockpit Electronics and Driver Awareness Systems at Visteon, developing systems leveraging vision and radar technologies.
