Your new algorithm does the job on a real-time rapid-prototyping computer, but will it work on your actual target, a highly complex automotive electronic control unit? On-target rapid prototyping, an emerging trend in embedded-systems development, may provide the answer.
Step back 15 years and envision an automotive-powertrain R&D or advanced-production engineer touting a hot new algorithm during a design review. All eyes turn to the grizzled project manager who inevitably grumbles, “Great, but will it drive an engine?” Months later, a hand-coded version of the algorithm appears, courtesy of the software department, and dynamometer or proving-ground tests begin. Eventually, these tests answer the project manager's question but only after significant time and expense.
Now observe the R&D engineer today, touting yet another hot new algorithm with the added twist that it did indeed drive an engine in the lab or on the test track via the ever-present rapid-prototyping computer. All eyes once again turn to the grizzled project manager who inevitably asks, “Great, but will it drive an engine using real production hardware?”
Is this progress? Definitely. One should first assess if it's even feasible for an algorithm to provide the correct behavior for a highly complex system, such as those found in many of today's automotive electronic control units (ECUs). Rapid prototyping on powerful real-time computers helps provide that understanding. However, one also needs to know if the algorithm is practical; that is, will it work on an 8-, 16-, or 32-bit resource-constrained ECU? In this article, I'll discuss how on-target rapid prototyping can lend a hand and how it differs from conventional rapid-prototyping approaches.
It's been more than 50 years since the manipulation of block diagrams was described by T. M. Stout.1 Controls and signal processing engineers have been in love with them ever since. Block diagrams are the preferred way to specify (in other words, model) a complex mathematical algorithm.
A classic model of a feedback controller or DSP algorithm is shown in Figure 1.
Figure 1: Feedback controller model
With the emergence of finite state machines, another modeling capability is now available. When used with block diagrams, finite state machines provide a way to create complete behavioral models for embedded systems containing both event-based and time-based components. Examples of such systems include transmission control modules, gas turbine controllers, and flight management systems. These systems are likely to use block diagrams for specifying the digital processing, filters, and lookup tables, while state machines and flow diagrams provide the mechanisms for modeling fault detection, built-in testing, and mode and shift logic.
If you don't have a sophisticated plant or environmental model, it's straightforward to substitute the plant shown in Figure 1 with a software test harness that uses stimulus input signals and scopes instead of carefully formulated dynamic-system equations. Thus, you can perform bench-type testing from your desktop or laptop.
The model can be as detailed or as high-level as is appropriate for a given development stage. Perhaps your initial objective is to ensure that the system is stable and that performance objectives such as rise time and overshoot are satisfied. So your model might be based on a second- or third-order system using double-precision math. At the end of design, the final objectives may require an extremely detailed model to explicitly define all aspects of the required behavior. You may need to specify data in the form of signals and parameters in fixed point by adding scale factors, bias, overflow, and rounding logic. You may also incorporate robustness code into the model to check and protect against events such as divide-by-zero or array-out-of-bounds accesses. Onboard Diagnostics II and other diagnostic routines are then folded in, often in such large quantities that they eventually constitute the majority of the model, leaving the control logic as a small (but important) player. In effect, these types of details transform the model into a classic software design specification document. But the model is more than that for one important reason: it simulates.
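To give a taste of what those fixed-point details look like in code, the sketch below shows scaling, rounding, saturation, and a divide-by-zero guard for a hypothetical signed 16-bit type with 7 fraction bits. The names and the Q9.7 format are illustrative choices, not output from any particular code generator.

```c
#include <stdint.h>

/* Hypothetical fixed-point type: real_value = stored * 2^-7 (zero bias).
 * The scale factor is an illustrative choice. */
#define FIXPT_FRAC_BITS 7
#define FIXPT_MAX INT16_MAX
#define FIXPT_MIN INT16_MIN

/* Saturate a 32-bit intermediate result into the 16-bit stored type. */
static int16_t fixpt_saturate(int32_t v)
{
    if (v > FIXPT_MAX) return FIXPT_MAX;   /* overflow: clamp high */
    if (v < FIXPT_MIN) return FIXPT_MIN;   /* underflow: clamp low */
    return (int16_t)v;
}

/* Multiply two Q9.7 values with round-to-nearest and saturation. */
int16_t fixpt_mul(int16_t a, int16_t b)
{
    int32_t p = (int32_t)a * (int32_t)b;   /* Q18.14 intermediate    */
    p += 1 << (FIXPT_FRAC_BITS - 1);       /* round to nearest       */
    return fixpt_saturate(p >> FIXPT_FRAC_BITS);
}

/* Divide with an explicit divide-by-zero guard, as the robustness
 * code folded into the model would do. */
int16_t fixpt_div(int16_t num, int16_t den)
{
    if (den == 0) {
        /* Defined fallback instead of a processor exception. */
        return (num >= 0) ? FIXPT_MAX : FIXPT_MIN;
    }
    return fixpt_saturate(((int32_t)num * (1 << FIXPT_FRAC_BITS)) / den);
}
```

Every one of these guard and rounding decisions is something the detailed model must specify explicitly before code like this can be generated.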
Simulating a model enables developers to check that a system's behavioral requirements have indeed been satisfied early in the design process. OEMs and their suppliers who exchange executable models instead of paper specifications note that this process improves overall communication and reduces round-trip iterations to clarify requirements.
It's now a common practice to simulate system models containing block diagrams and state machines, and it's not unusual to see system models containing tens or, in some cases, hundreds of thousands of blocks. These numbers are impressive when you consider that some blocks and state machines themselves represent 10 or more lines of code. The key to success here is to manage the model as you would a formal software specification by developing modeling guidelines, partitioning the model, holding model reviews, and so on.
But not all embedded systems have enough spare memory or possess the computational speed necessary to implement such massive creations. So for smaller (for example, 16-bit) applications with 32K of ROM and 2K of RAM, the modeler should recognize from the start that collections of deeply nested hierarchical state machines or filters made up of complex H-infinity matrix math operations probably just won't fit. The modeler for these applications needs to understand which block constructs yield the best code, and the models should be carefully parameterized and reviewed to ensure maximum code efficiency. With this mindset and today's technology, it's certainly possible to deploy your model in a mass-production, low-cost hardware environment.
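To see why some constructs fit a 32K-ROM/2K-RAM part and others don't, consider how little an efficiently parameterized block can cost. The sketch below implements a first-order low-pass filter in pure 16/32-bit integer math; the Q15 coefficient value and the names are illustrative, not taken from any particular blockset.

```c
#include <stdint.h>

/* First-order low-pass filter, y[n] = y[n-1] + alpha*(x[n] - y[n-1]),
 * using only integer math: two bytes of RAM for state and a handful
 * of instructions per step. ALPHA_Q15 is an illustrative smoothing
 * coefficient in Q15 format (here roughly 0.25). */
#define ALPHA_Q15 8192

typedef struct { int16_t state; } lp_filter_t;

int16_t lp_filter_step(lp_filter_t *f, int16_t x)
{
    int32_t err = (int32_t)x - f->state;           /* tracking error   */
    f->state = (int16_t)(f->state + ((err * ALPHA_Q15) >> 15));
    return f->state;
}
```

A deeply nested state machine or a matrix-heavy filter expands to orders of magnitude more code than this, which is exactly the judgment call the modeler has to make up front.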
High-complexity applications have accelerated the acceptance of Model-Based Design and code-generation technologies. They've also fostered industry guidelines and best practices. Several years ago an automotive leadership council, composed mostly of major OEMs and suppliers, published a production-intent modeling style guidelines document, based on Simulink.2 Many companies in a variety of industries now use this as a starting point for their internal standards. Other documents describing tips and techniques for production-intent modeling topics, such as fixed-point design and code generation, have also guided developers.3
But having a model that executes on a host computer can only offer so much. The key enabler is automatic code generation, which transforms models into C code that can run virtually anywhere with push-button automation.
Getting code from models
It's impossible to list all the development and test activities for which companies use automatically generated C code. The key is simply to realize that each component of the system shown in Figure 1 can manifest itself as software or hardware, or remain a model, and connect to the other components in any combination. With that basic understanding, your process can assume almost any shape or form, not just a waterfall or V diagram.
Some of the more popular activities based on automatically generated code from models include:
- Simulation acceleration:
Code is generated and compiled for both the plant and controller models. It executes on the host computer and runs much faster than interpretive simulation. This is a popular way to do statistical analysis and parameter sweep studies, such as those involving Monte Carlo methods.
- Rapid prototyping:
Code is generated just for the controller model. The code is then cross-compiled and downloaded to a high-speed, floating-point, rapid-prototyping computer where it executes in real time. I/O is typically managed by a memory pod or emulation device that's connected to the rapid-prototyping computer and an existing ECU, perhaps one still residing in a vehicle. Other I/O options include communication via buses, such as controller area network (CAN), or other I/O devices, which may require some custom signal processing and power electronics. The controller parameters are tweaked “on-the-fly” during test drives or in lab tests involving the actual plant (an engine, for example), and new code can be inserted to bypass existing ECU code. Success is declared when performance requirements are met, proving that the new algorithm is feasible.
- On-target rapid prototyping:
As with rapid prototyping, code is generated just for the controller model. However, the code is then cross-compiled and downloaded to the embedded microprocessor or ECU used in production, or perhaps to a close cousin of it configured with a little more memory and I/O. An ECU that uses an integer processor would need a more detailed fixed-point model as opposed to the floating-point processors and models used for rapid prototyping. I/O is managed via standard ECU devices. The host computer then interfaces directly with the on-target rapid-prototyping ECU hardware, perhaps residing in fleet vehicles. Controller parameters are tweaked “on-the-fly.” Success is declared when performance requirements are met, proving that the new algorithm is practical and should indeed work in a production environment.
- Production code generation:
Code is generated for the detailed controller model and downloaded to the actual embedded microprocessor or ECU as part of the production software build. No simulation activity is associated with this step. The key here is to ensure that the final build has fully integrated the automatically generated code with existing legacy code, I/O drivers, and real-time operating system.
- Software-in-the-loop testing:
This step executes the production code for the controller in an instruction-set simulator, debugger, or within the modeling environment itself, exercising the plant model and interacting with the user. This type of testing doesn't execute in real time.
- Processor-in-the-loop testing:
This step is similar to software-in-the-loop in that it executes the production code for the controller. However, real I/O via CAN or serial devices is used to pass data between the production code executing on the processor and a plant model executing in the modeling environment. As with software-in-the-loop testing, processor-in-the-loop testing doesn't execute in real time.
- Hardware-in-the-loop testing:
Code is generated just for the plant model. It runs on a highly deterministic, real-time computer. Sophisticated signal conditioning and power electronics are needed to properly stimulate the ECU inputs (sensors) and receive the ECU outputs (actuator commands). Whereas rapid prototyping is often a development or design activity, hardware-in-the-loop testing serves as more of a final lab-test phase before road or track tests commence. A recent article in Embedded Systems Programming magazine discussed hardware-in-the-loop concepts.4
Comparing rapid prototyping
Figure 2 shows the conventional bypass and on-target rapid-prototyping approaches. Note the similarities. Both configurations use the host machine (which doesn't run in real time), the ECU, and the plant (in this case, an engine). The only major difference is the deployment target. The bypass solution uses a real-time computer to run the new application, plus a way to connect to the ECU (via a memory pod or network), whereas the on-target solution runs the new application directly inside the ECU.
Figure 2: Conventional bypass and on-target rapid prototyping approaches
After looking at these diagrams, it's not a stretch to realize that the automotive industry has envisioned this on-target approach for years. It just took the code-generation technology time to get there. Each approach fits a certain need. The following list compares these two approaches:
Bypass rapid prototyping:
- Uses PC or nontarget-based hardware
- Is used for testing new ideas and green-field research
- Places less emphasis on code efficiency
- Allows hardware to be inside the vehicle or in the trunk; often a single vehicle
- Places less emphasis on accurate modeling of I/O latency
- Works well for new programs
On-target rapid prototyping:
- Uses ECU or near-production hardware
- Is used for developing and refining algorithms and ideas during the development process
- Can be deployed in a production-intent lab environment or used in fleets of test vehicles
- Emphasizes accurate modeling of scheduling and I/O
- Provides quick path to production
- Works well for delta changes to existing programs
- Uses existing ECU hardware, thus less expensive and more convenient to implement
The key point in Figure 2 is that on-target rapid prototyping blurs the distinction between conventional rapid-prototyping and production-code generation. In effect, on-target rapid prototyping offers a refinement of the rapid-prototyping code, enabling it to be easily taken into the final production build.
Getting on target
You have two ways to obtain an on-target rapid prototyping system: build or buy. If you decide to buy, a number of commercial offerings are available. The hardware is either the actual product ECU or a slightly more enhanced version of it. The software consists of target blocksets in Simulink representing peripheral devices and operating systems tasks and resources.
IAV, an automotive engineering company headquartered in Germany, supports its customers with an on-target rapid-prototyping solution called the Universal Control Unit. It is available with several microcontrollers, including the Motorola MPC555, Infineon C167, and Infineon TriCore, as well as an OSEK target. Additional block support for the Universal Control Unit provides interactive tuning over the CAN Calibration Protocol (CCP) using CAN.
Other companies offer combined hardware and software systems for on-target rapid prototyping. In addition, a growing number of companies offer embedded targets that consist of software blocksets and device drivers. The hardware is obtained separately.
If you want to build an embedded target, it would be easier just to use your existing build environment and interface directly to the automatically generated code. This approach, sometimes termed algorithm export, can be implemented with code generators that allow you to specify that a model or subsystem within the model is reusable. The code generated would have the appropriate entry points and function-call signatures that interface with your existing scheduler and build process. However, if the application requires greater flexibility or is already modeled, it's probably worth the effort to develop an embedded target blockset as described in the next section.
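The algorithm-export pattern can be sketched as follows. The function names (controller_initialize, controller_step), the signal structures, and the simple PI law inside are illustrative stand-ins, not the exact signatures any particular code generator emits; the point is that the exported entry points plug cleanly into an existing scheduler and build process.

```c
#include <stdint.h>

/* Illustrative signal structures for an exported controller. A real
 * code generator emits equivalent types from the model interface. */
typedef struct { int16_t speed_cmd; int16_t speed_meas; } controller_in_t;
typedef struct { int16_t throttle; } controller_out_t;

static int32_t integrator;  /* controller state, owned by the module */

/* Entry point called once by the existing scheduler at startup. */
void controller_initialize(void)
{
    integrator = 0;
}

/* Entry point called by the existing scheduler at the model's base
 * rate. The PI law here is purely for illustration. */
void controller_step(const controller_in_t *in, controller_out_t *out)
{
    int32_t error = (int32_t)in->speed_cmd - in->speed_meas;
    integrator += error;
    out->throttle = (int16_t)(2 * error + integrator / 8);
}
```

Your legacy scheduler calls controller_initialize() at boot and controller_step() from its periodic task; nothing else in the build needs to know the code was generated from a model.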
Building a baseline embedded target
The most fundamental requirement for an embedded target is that it generate a real-time executable from a model or subsystem. A number of capabilities are typically included in an embedded target but not necessarily all at once. A good first step might just be to generate code, automate the build process, and download to the target microprocessor using an existing tool chain (compiler/linker/debugger). This is sometimes called a baseline target.
The approach for developing a baseline target depends on the modeling environment you've chosen. The key is to understand the details of the code-generation process and to add hooks into that process. Figure 3 shows an example code-generation process and its entry hooks. These hooks are the points at which the developer can interact with and control the target code environment generated from the model.
Figure 3: Entry hooks into an example code generation process
One of the first steps in the code-generation process shown in Figure 3 is to compile the model into an in-memory representation. At this point, it may be useful to use the target entry or target compiled hooks to add certain consistency checks to ensure that the model is suitable for deployment on your target. Checks might include confirmation that only integer code is used for fixed-point microcontrollers or DSPs. You might also want to check that only discrete time blocks are used.
The next step in the code-generation process is often the actual output of the C code. Here you might use the target make hook to ensure that data is placed into the proper memory segments and that an appropriate main program is created, with single- or multitasking scheduling capabilities. Template-driven technology might be available for accomplishing these tasks. It's also a good idea to invoke source-code analysis or lint tools to verify that the code satisfies in-house coding standards.
Finally, after the code is generated and the make process concludes, an automatic download utility can be invoked using the target exit hook to deploy the executable onto the target. Typically, this is done with a debugger utility. If the debugger supports command script files, this can be straightforward to implement. Another option is to invoke the instruction set simulator (ISS) and execute the code purely on the host. Some customers want to execute the code in an ISS and compare the results with the model that's simulating inside the modeling environment. This co-simulation approach facilitates host/target verification efforts.
Building an advanced embedded target
A more powerful embedded target would not only deploy code generated from a model onto a baseline production processor, but it also would ensure that the target executable could interact with the external world, outside of the debugger.
Some of the more desirable features for a turnkey production target include:
- I/O driver and peripheral device support
- Code generation and targeting options for algorithm export, instruction-set simulation, processor-in-the-loop simulation, or download to target
- User-controlled placement of individual variables in flash, RAM, or other special memory sections
- Support for target interaction and parameter tuning
Device drivers are represented as model blocks that support either hardware I/O capabilities of the target CPU or I/O features of the development board. Driver development can be difficult and time-consuming, given all the different operational modes available in even a single I/O subsystem. However, it can be constrained on a project-by-project basis if you know what modes you plan to use.
Another item to consider is how you want the device driver block to behave during host simulation. One way to simplify development is to create the block with a pass-through option. When pass-through is enabled, the driver block input is passed through to the output during simulation, thus bypassing the driver block itself. This option affects simulation only. The code generated from the block interacts normally with I/O hardware on the target. This approach makes it easy to go from simulating to generating embedded systems code by using the same model.
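One way to picture the pass-through idea at the code level is a build-time switch: the same driver-block function passes its input through in a host build and touches hardware in a target build. The TARGET_BUILD macro and the ADC result-register address below are invented for illustration; in an actual blockset the pass-through behavior lives in the block's simulation logic rather than in the generated code.

```c
#include <stdint.h>

/* Sketch of the function behind an ADC input driver block. The
 * register address 0x40001000 is a made-up example. */
#ifdef TARGET_BUILD
#define ADC_RESULT_REG (*(volatile uint16_t *)0x40001000u)
#endif

uint16_t adc_driver_block(uint16_t sim_input)
{
#ifdef TARGET_BUILD
    (void)sim_input;
    return ADC_RESULT_REG;   /* real hardware read on the target   */
#else
    return sim_input;        /* pass-through during host simulation */
#endif
}
```

Either way, the model upstream and downstream of the driver block is identical in simulation and on target, which is what makes the single-model workflow possible.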
Processor-in-the-loop simulation requires that the plant or test harness model running on the host interact with the code running on the target. One way to do this is with some type of external mode operation that may be provided with the modeling environment. For example, a serial mode interface (RS-232) could be used between the host computer and target since many hosts and targets have a serial port available. The main task here is to develop a serial driver for the target hardware and include external mode support files used by the modeling environment during the build process.
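The framing such a serial driver must implement can be sketched as below. The frame layout (start byte, length, XOR checksum) is invented for illustration; the real external-mode protocol is defined by the modeling environment, and the target driver's job is to move these bytes through the UART.

```c
#include <stdint.h>
#include <stddef.h>

/* Build one frame for a host<->target serial link:
 * [SOF][len][payload...][checksum]. Returns the total byte count
 * to transmit. The layout is illustrative only. */
#define FRAME_SOF 0x7Eu

size_t frame_build(uint8_t *out, const uint8_t *payload, uint8_t len)
{
    uint8_t cksum = 0;
    size_t i;
    out[0] = FRAME_SOF;
    out[1] = len;
    for (i = 0; i < len; i++) {
        out[2 + i] = payload[i];
        cksum ^= payload[i];     /* simple XOR checksum */
    }
    out[2 + len] = cksum;
    return (size_t)len + 3;
}
```

On the target, a function like this would feed the UART transmit routine; the external-mode support files supplied to the build process tell the modeling environment how to parse the same frames on the host side.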
Executable placement in flash memory or RAM is typically controlled by the target's download utility. To support this capability you will likely need to provide multiple linker command files, multiple debugger scripts, and possibly multiple make or project files.
One approach prevalent in the automotive industry to support target interaction and tuning is to use the CAN bus in conjunction with an ASAP2 file and CCP. Several host-based user interfaces connect to a CCP-enabled target and provide data viewing and parameter tuning. Supporting these tools requires the implementation of CAN hardware drivers and CCP for the target, as well as ASAP2 file generation.
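A much-simplified sketch of the target side of CCP is shown below, handling only the SHORT_UP (read memory) command a calibration tool uses to view a variable. Session management, CONNECT, SET_MTA, and the other commands are omitted, and a small calibration-RAM array stands in for the ECU's real address space; consult the CCP specification before relying on any byte layout shown here.

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for the ECU address space: in this sketch, the address in
 * the command simply indexes this array. */
uint8_t ccp_cal_ram[256];

#define CCP_SHORT_UP 0x0Fu  /* read-memory command code            */
#define CCP_DTO_CRM  0xFFu  /* command return message packet ID    */
#define CCP_ACK      0x00u  /* "no error" command return code      */

/* cro/dto are the 8-byte CAN message buffers. Returns 0 on success. */
int ccp_handle_short_up(const uint8_t cro[8], uint8_t dto[8])
{
    uint8_t size = cro[2];                 /* bytes to read, 1..5    */
    uint32_t addr = ((uint32_t)cro[4] << 24) | ((uint32_t)cro[5] << 16) |
                    ((uint32_t)cro[6] << 8) | cro[7];
    if (cro[0] != CCP_SHORT_UP || size == 0 || size > 5)
        return -1;
    if (addr > sizeof ccp_cal_ram - size)  /* bounds check the read  */
        return -1;
    dto[0] = CCP_DTO_CRM;                  /* packet ID              */
    dto[1] = CCP_ACK;                      /* command return code    */
    dto[2] = cro[1];                       /* echo the counter       */
    memcpy(&dto[3], &ccp_cal_ram[addr], size);
    return 0;
}
```

The ASAP2 file generated alongside the code is what maps the tool's symbolic variable names to the raw addresses read through commands like this one.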
The effort required to build a full-featured target is considerable, so check with your vendor to see if documentation is available to serve as a guide. Consultants who have done this type of work previously are another good resource. And with a little luck, you may even find that a nice turnkey target that suits your needs is already available off-the-shelf.
Companies such as Jaguar are actively employing on-target rapid prototyping and code generation. In Jaguar's case, they used the Microgen product from the UK company add2 Limited as their general-purpose ECU hardware, based on the Motorola MPC555 microcontroller. During Engine Management Systems (EMS) development, they ran automatically generated code on the Microgen to simulate the vehicle transmission control unit. This allowed the engine and EMS to be tested through a number of drive cycles. By testing the new features on real hardware, Jaguar improved the quality of the specifications they provide to their suppliers, who code the production ECU. They also reported that increased use of general-purpose, cost-effective ECU hardware enables the simultaneous trial of prototype control modules across a fleet of engineering vehicles, allowing rapid evaluation of proposed features.
Tom Erkkinen is an embedded applications manager for The MathWorks, Inc. His focus is production code-generation products and related technologies. Tom has worked in the aerospace and automotive embedded-systems field for more than 15 years. He has a BS from Boston University and an MS from Santa Clara University.
- Stout, T. M., “A Block-diagram Approach to Network Analysis,” Trans. AIEE, Applications and Industry, Vol. 71, pp. 225-260, 1952.
- MathWorks Automotive Advisory Board (MAAB), “Controller Style Guidelines for Production Intent Development using MATLAB, Simulink, and Stateflow,” 2001, http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=4280&objectType=file.
- Nadarajah, Siva, et al., “Tips for Fixed Point Modeling and Code Generation,” The MathWorks, Inc., 2002, http://www.mathworks.com/company/newsletters/digest/nov02/fixpt_tips_v5_digest.pdf.
- Ledin, Jim, “Simulation Takes off with Hardware,” Embedded Systems Programming , 2002.