San Jose, Calif. – Many of today's embedded systems designs are studies in contradiction. At the same time that the market is demanding designs that are faster and more complex, it is also demanding lower power, lower cost and reduced space and memory requirements. In addition, all that code has got to be as bug-free as possible. And, oh yes, it has to be ready by yesterday.
Fortunately, each spring the Embedded Systems Conference presents speakers whose aim is to help developers resolve such mind-numbing contradictions. There are classes that help designers use existing tools and languages in new ways, and classes that look at new methodologies to help them deal with the increased complexity of typical designs.
Of the 265 or so presentations and classes given at the conference, several caught my attention as offering especially original or challenging information for developers in both categories.
Classes on the Unified Modeling Language (UML) have long been a staple at the Embedded Systems Conference. But in the past most of the classes have focused on its use in combination with the object-oriented C++ language. When there has been a discussion of C, still the lingua franca of most embedded systems developers, it has usually been couched in terms of object-oriented or object-based modeling.
This time Bruce Powell Douglass, chief UML evangelist in the Telelogic systems and software modeling division, takes a different slant. In this class he focuses on a functional modeling technique that should appeal to developers who are more comfortable with traditional C concepts such as files, functions and variables than they are with classes, operations and attributes.
Douglass deals primarily with the FunctionalC UML Profile, a subset of the UML for the modeling of functionally oriented C-based systems. With this tool developers can program functionally with UML diagrams by using a UML stereotype called a file, which is simply a graphical representation of a source file.
This UML stereotype is capable of containing all the elements that C developers are used to dealing with, including variables, functions and types. The file is added to the diagram and used to partition the design into elements, much in the same way a class is used to partition a program in object-oriented programming.
“The introduction of natural C concepts such as files, functions and variables into the UML as a profile now enables the C developer to receive all the benefits of Model-Driven Architecture while thinking and working the way they are used to,” said Douglass. “Through the process of visualization, it is now possible to incorporate legacy code into the development environment without changing a single line, enabling C developers to reuse their legacy code (IP), either as is or as a starting point.”
Robert Oshana, well-known at the ESC and on Embedded.com for his articles and classes on DSP embedded software development, takes aim at the need for developing correct, complete and testable requirements for software designs using a technique he calls sequence enumeration.
“Trying to define a large multidimensional capability of a complex embedded system within the limitations of the linear two-dimensional structure of a document becomes almost impossible,” he said. “At the other end of the scale, the use of a programming language is too detailed. This is nothing more than 'after-the-fact specification,' which is just documenting what was implemented rather than what was required.”
It can be difficult to specify the total behavior of a complex system because of the sheer number of possible uses of the system. “But this is precisely what needs to be done in order to ensure completeness and consistency in our designs,” Oshana said. In this class, Oshana describes how software developers at TI used sequence enumeration to accomplish this task in several of their projects.
“Sequence enumeration is a way of specifying stimuli and responses of an embedded system,” he said. “This approach considers all permutations of input stimuli.”
Sequence enumerations consist of a list of prior and current stimuli as well as a response for that particular stimulus given the prior history. Equivalent histories are used to map certain responses. This technique maps directly to a state machine implementation.
“The strength of sequence enumerations is that the technique requires the developer to consider the obscure sequences that are usually overlooked,” he said.
In this class, David Kalinsky focuses on how to use the new breed of static analysis tools.
“Dynamic analysis tools have been popular for many years,” he said. “They observe executable code at run-time, and report violations performed by the code as it runs. These tools are excellent at detecting bugs such as dynamic memory corruption and resource leaks.”
But, said Kalinsky, dynamic analysis tools also have downsides: defects are found late in the software development process, since these tools require an operating executable version of your code. The more subtle bugs, the ones involving integrated software units, are only found after software integration is complete.
“And the dynamic analysis itself only covers those test cases that were actually run,” he said. “So if test cases were run for only 95% line coverage under the watchful eye of a dynamic analysis tool, then that tool would likely be failing to identify many, many other defects.”
Static analysis tools, on the other hand, analyze software code bases for bugs and other defects without actually running the programs that are built from the software. They're based on algorithms that go over source code with the finest of fine-toothed combs searching for software faults.
Faults can be found early in the development process, so as to improve software quality as well as save developers time. Such tools can cover the execution paths through a code base in a fully automated way, and identify complex defects, including bugs involving interactions among multiple procedures.
“In the world of embedded software, static analysis tools can identify randomly appearing bugs such as those caused by the interleaving of tasks in a preemptive multitasking environment when a software developer has neglected to properly protect a critical section,” he said.
“Bugs such as those can be identified consistently and repeatably by the defect-searching algorithms of a static analysis tool, even though the bug will not appear consistently or repeatably at application software run-time.”
Taught by David B. Stewart, PhD, director of software engineering at InHand Electronics, Inc., Rockville, Maryland, this class is about as hands-on as you can get, focusing on how hardware tools such as logic analyzers can be used to debug real-time embedded software.
“There exist many powerful techniques to debug software, including use of symbolic debuggers, emulators, and the always popular 'print' statements,” said Stewart. “However, some of the hardest-to-find bugs in an embedded system will never be found using these methods.”
Hard problems to debug include glitches, timing errors, memory corruption, problems with interrupt handlers, and errors in device drivers. When all else fails, a logic analyzer can be used to test and debug the real-time execution. Logic analyzer methods, he said, provide a highly precise window for monitoring the real-time execution of code where other debugging techniques fail.
The drawback to this approach, Stewart admits, is that it requires specialized hardware and more effort than more traditional debugging techniques such as using a symbolic debugger or lots of print statements.
“Consequently, use of a logic analyzer should supplement existing debug methods, not replace them,” he said. “Any time print statements or a symbolic debugger are adequate to resolve a problem efficiently, use them. However, the moment they are unable to provide sufficient information to quickly pinpoint the root of a problem, then consider the methods presented in this class.”
The course looks to be a complete and intensively hands-on introduction to the use of logic analyzers in this supplemental way. Stewart provides side-by-side comparisons between the way a logic analyzer would be used versus, say, print statement debugging, and provides hints on how a software developer should interpret the data.
While the data collected using a logic analyzer may appear more cryptic and require some effort to make use of, he said, the main benefit is that real-time debugging information can be gathered with a resolution finer than a microsecond.
“In comparison, a print statement can easily take 2 or 3 milliseconds if connected through a high-speed link, or tens of milliseconds if through a slow serial connection,” he said.
Other software development classes and presentations worth checking out at the ESC Spring include: (1)