Editor’s Note: In Part 1 of a four-part tutorial on modeling tools, Shelley Gretlein of National Instruments provides a brief introduction to the range of modeling methodologies and why you should consider their use in your embedded designs.
Creating a model for your embedded system provides a time-saving and cost-effective approach to the development of dynamic control systems, based on a single model maintained in a tightly integrated software suite. Throughout this series, you will discover:
- Reuse: By using modern modeling software tools, you can design and perform initial validation in off-line simulation. You can then use those models as the basis for all subsequent development stages.
- Quality: Modeling, combined with hardware prototyping, reduces the risk of mistakes and shortens the development cycle by allowing verification and validation testing throughout development. This iterative approach yields better designs, both in terms of performance and reliability. Automatic code generation techniques further reduce development errors and overhead.
- Time savings: With a system model as a basis, design evaluations and predictions can be made much more quickly and reliably. Resource costs drop because models can be reused across design teams, design stages, and projects, reducing dependence on physical prototypes.
These advantages translate to more accurate and robust control designs, shorter time to market, and reduced design cost.
Modeling is a broadly used term casually applied to disparate concepts ranging from behavioral and structural models to more simulation-centric methodologies. The challenge with such broad terms and concepts, of course, is knowing when, where, and how they apply to your own application.
Modeling, in its most abstract sense, is a methodology by which some representation is created to describe and/or communicate an aspect of the system that is not easily or sufficiently captured through system implementation. We will loosely refer to the domain of modeling focused on describing actors or functions and entities, their states, inputs, structure, and views of these over time as “Architectural Modeling.” We will refer to the domain of modeling focused on simulating the behavior of any given system entity as “Simulation Modeling.”
Figure 1 shows a classic statechart diagram. Statecharts were invented by David Harel of the Weizmann Institute of Science in the 1980s. By adding hierarchy, concurrency, and communication to state diagrams, Harel created a more expressive form of the state diagram. He invented the diagram while helping to design a complex avionics system, presumably finding the existing tools for such a system lacking. In the 1990s, statecharts were adopted as a behavioral diagram within the Unified Modeling Language (UML) specification.
The classic state diagram consists of two main constructs: states and transitions. In Figure 1, the state diagram describes a simple vending machine with five states and seven transitions to illustrate how the machine operates. The machine starts in the “idle” state and transitions to the “count coins” state when coins are inserted. The state diagram shows additional states and transitions when the machine waits for a selection, dispenses a soda, and gives change.
In addition to hierarchy and concurrency, statecharts have features that make them valuable for complex embedded systems, as shown in Figure 2.
Statecharts have a concept of history, allowing a superstate to “remember” which substate within it was previously active. For example, consider a superstate that describes a machine that pours a substance and then heats it. A halt event may pause the execution of the machine while it is pouring. When a resume event occurs, the machine remembers to resume pouring.
Both of these diagram types provide expressions of overall system behavior while also visually describing key states and behaviors within the overall system.
Figure 3, by way of contrast, represents a simulation model for a FIFO (first in, first out) element within a system. A FIFO element is a way to organize and manipulate data in the order in which it is acquired. In this case, the FIFO might be used to model communication between two hardware devices via a bus. The FIFO definition provides a basic abstraction and interface supplying basic read and write operations, a FIFO count, and so on. Because the model is written in VHDL (VHSIC hardware description language), the same code could presumably be reused either in the actual system or for system modeling via a VHDL simulator. Designers and implementers would agree on this interface definition and then supply implementations that emulate expected data transmissions through the FIFO over time.
It’s important to note that while the code snippet does not visually convey much architectural information, it does represent a critical abstraction of an element and its interface. That abstraction enables both unit testing at the API boundary and systems that can switch between a simulatable instance of the FIFO and a real-world bus with FIFO mechanics and constraints.
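The switch between a simulated FIFO and a real bus-backed one can be captured by coding against an interface rather than an implementation. A C sketch of this idea follows, under stated assumptions: the `FifoOps` interface, the ring-buffer backend, and all names are hypothetical, not the VHDL interface from Figure 3.

```c
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical FIFO interface: a simulated FIFO and a driver for a real
 * bus FIFO would each supply their own set of these function pointers. */
typedef struct Fifo Fifo;
typedef struct {
    bool   (*write)(Fifo *f, unsigned char byte);
    bool   (*read)(Fifo *f, unsigned char *byte);
    size_t (*count)(const Fifo *f);
} FifoOps;

/* A simple in-memory ring buffer serving as the simulated implementation */
#define FIFO_DEPTH 8
struct Fifo {
    const FifoOps *ops;
    unsigned char buf[FIFO_DEPTH];
    size_t head, tail, n;
};

static bool sim_write(Fifo *f, unsigned char b) {
    if (f->n == FIFO_DEPTH) return false;   /* full: reject the write */
    f->buf[f->head] = b;
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->n++;
    return true;
}

static bool sim_read(Fifo *f, unsigned char *b) {
    if (f->n == 0) return false;            /* empty: nothing to read */
    *b = f->buf[f->tail];
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->n--;
    return true;
}

static size_t sim_count(const Fifo *f) { return f->n; }

static const FifoOps sim_ops = { sim_write, sim_read, sim_count };
```

Application code written purely against `FifoOps` then runs unchanged whether `ops` points at the simulated ring buffer or at a driver for the real hardware, which is exactly the substitution the simulation model is meant to enable.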
Ideally, both the architectural and simulation aspects of your modeling should deliver better design insights, more reuse of design, test, and implementation artifacts, earlier and tighter integration of the test and debug phases, and more rapid iteration between design and implementation.
Simulation modeling enables development phases to begin earlier and run in parallel, in stark contrast with traditional methodologies such as the waterfall approach, which is by design highly sequential.
For engineering and design tasks, especially in embedded systems, you typically use software modeling as the initial approach to roughing-in or framing your overall application design. Software models vary greatly in format, level of detail, and functionality – so much so that some embedded engineers do not realize they are even modeling. Some software models are behavioral, some are simply visual aids for understanding and architecting, while others serve as frameworks for ensuring consistency among similar applications or for facilitating communication among teams of engineers.
In essence, software modeling ranges from sketches on a whiteboard showing functional elements and their relationships to far more complex and rigorous modeling activities and frameworks like UML (Unified Modeling Language). UML is an object modeling and specification language most often used in software engineering-related applications.
Figure 4 – Software modeling ranges from sketches on a whiteboard to far more complex and rigorous activities, languages, and frameworks.
The challenge for the embedded systems designer is to know what type and level of modeling is most appropriate for their unique situation and the problem at hand. In effect, it is about the age-old art of selecting the right tool for the right job. For large, complex, multi-team efforts, formal specification of systems via UML may increase design correctness and the efficiency of communication between teams.
On the other hand, for a single developer or small team working on a fairly simple embedded system, it may prove to be overly heavyweight and generally a drag on team efficiency. The same tradeoffs hold true for simulation modeling. Whether a system needs simulation or not is highly dependent on the nature of the embedded system itself and the nature of the real world elements with which the embedded system interacts.
For example, if the embedded system is a widely available, simple processor-based system with simple digital control of a relay or switch, then developing simulation models for the processor and relay would add little benefit beyond direct implementation on the processor and a simple test harness to exercise the digital control. On the other hand, if the embedded system includes a Field Programmable Gate Array (FPGA) and is controlling an expensive and complex real-world device, then it might make sense both to simulate the control logic on the FPGA, avoiding time-intensive FPGA synthesis, and to simulate the expensive real-world device to avoid damaging or destroying it.
In the rest of this series, we will discuss how designing your embedded system with a well-defined structure, clear visualization of the relationships between components, and well-defined abstractions enables productive and efficient embedded designs.
Shelley Gretlein is Director of Software Product Marketing at National Instruments. Currently focused on growing the application and success of graphical system design globally, Gretlein is responsible for the development strategy and worldwide evangelism of the LabVIEW software platform including LabVIEW Real-Time and LabVIEW FPGA. She joined National Instruments in 2000 and holds a bachelor’s degree in computer science and management systems as well as minors in Mathematics and French from the Missouri University of Science and Technology.
Used with permission, this article is based on material by Shelley Gretlein written for inclusion in “Software Engineering for embedded systems,” edited by Robert Oshana and Mark Kraeling, to be published early in 2013 by Morgan Kaufmann, a division of Elsevier, Copyright 2013. For more information about “Software engineering for embedded systems,” and other similar books, visit www.elsevierdirect.com .