Navigating through New Development Environments

by Jack G. Ganssle


As the software development process grows more complex, system specifications and design are becoming critical features in the development process. New tools are becoming available to help you get your product to market, but are they enough?

“If humans possess one attribute that's enough to cause a migraine, it's that their ability to invent new things almost always exceeds their capacity to make them function correctly.” — Samuel Greengard

Mr. Greengard’s insightful comment summarizes the angst that comes with building embedded systems. Things are so complex that we just do not understand all of the implications of even the simplest decision we make. Change one line of code and suddenly unrelated features break. Alter a seemingly small parameter in the specification and disaster seems to cascade throughout all of the modules.

For many years, embedded systems development seemed to be unrelated to the rest of the computer industry. Practices and techniques that were commonplace in information technology (IT) simply had no place in the microprocessor world. The relatively recent emergence of OOP (object-oriented programming) is now bringing embedded software development back into the mainstream of programming, as tools and techniques from both IT and embedded development converge.

System design
Nowhere is this convergence more complete than in system specification and design. Before the '90s, most firmware developers ultimately resorted to the “just write some code and ship it” philosophy of software design. That technique works pretty well when you're writing less than about 10k lines of code. Formal specification and design become necessary as size increases, real-time constraints surface, and multiple programmers work on one job.

We do know that an error in specification costs one to two orders of magnitude more to correct than an error in coding, so spending time up front clearly saves a lot downstream. Similarly, tools targeted at creating a more correct design are priceless.

Pleas from CASE proponents and methodology gurus for us to adopt better methods have mostly fallen on deaf ears. Now, though, we can see a clear trend toward UML (Unified Modeling Language) as a tool to help us create better systems.

UML itself is nothing more than a notational system for creating models of complex systems. Its “unified” moniker comes from its merging of the earlier Booch, OMT, and OOSE notations into a single standard, but its real appeal is its high level of abstraction. UML diagrams can model the entire system’s specification, including both hardware and software. With the line between code and transistors getting fuzzier, this level of abstraction carries real benefits.

Rational Software's Rose (www.rational.com) was one of the first useful UML tools, and is widely used today. Like most such products, it lets you build your models on-screen. Others include I-Logix's Rhapsody (www.ilogix.com) and ObjectGEODE from Verilog (www.verilogusa.com).

Most of these products compile the UML design directly into C++ code. Developers have always been wary of automatic code generators, wonderful dream though they may be. Do you trust the tool to create correct code? In embedded systems, where ROM may be precious, will the generator perform as well as a skilled programmer? Efficiency constraints may also require intelligent tuning.

UML is based on objects, so C++ is the natural resultant language. But C++ still seems a long way from total acceptance in embedded systems. An informal survey I took of compiler vendors last year suggests that only about 15% of those who buy C++ cross compilers actually use them to create object-oriented code. The rest use C++ as a “super C.” A tool that cranks out only pure OOP C++ might not match your development environment (though some will indeed produce C).
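
To make that distinction concrete, here is a minimal sketch; the timer registers, addresses, and bit names are invented for illustration. The first routine is the “super C” style most firmware still uses; the class below it is the kind of wrapper an object-oriented design, or a UML code generator, tends to produce.

    // Register names and addresses are invented for illustration.
    #define TCTL (*(volatile unsigned short *)0xFFFF8000)  // timer control
    #define TCR  (*(volatile unsigned short *)0xFFFF8002)  // timer compare
    #define TIMER_RUN 0x0001u

    // "Super C": a C++ compiler, but plain procedural code.
    void timer_start(unsigned short period)
    {
        TCR = period;                     // load the compare register
        TCTL |= TIMER_RUN;                // start counting
    }

    // Object-oriented: the same peripheral wrapped in a class, roughly
    // the shape of code a UML tool generates from a class diagram.
    class Timer {
    public:
        explicit Timer(unsigned short period) { TCR = period; }
        void start() { TCTL |= TIMER_RUN; }
        void stop()  { TCTL &= (unsigned short)~TIMER_RUN; }
    };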

But code generation is an optional step. UML's greatest strength is that it's an accepted, powerful way of creating a system spec and design. Create a design using your own pseudo-code abstractions and no one else will understand what you're doing. Use an accepted convention — like UML — and you're suddenly all speaking the same language.

Many developers, therefore, use the UML tools just for specification and design, translating the resulting diagrams into code by hand.

A debate rages among vendors about UML's perceived weaknesses in dealing with real-time issues. Some vendors offer UML extensions for time-based systems. ARTiSAN (www.artisansw.com), for instance, uses its extensions to model any real-time system, including those based on an RTOS.

Though UML does require you to go through a defined sequence of steps to create the model, it is not a methodology or a software process per se. It's a tool you use to implement your process.

The term “process” is a hot issue, but certainly the mother of all processes must be the Capability Maturity Model (CMM), promoted by the Software Engineering Institute (www.sei.cmu.edu). The CMM breaks development strategies into five levels. Level 1 is really the total anarchy typical of most embedded work. At each succeeding level, the organization has mastered a set of skills that leads to more predictable products.

While the CMM is probably one of the most significant things an organization can undertake to improve its code, it's a very difficult process, one requiring total commitment from every level of the organization. An alternative is the Personal Software Process, advocated by Watts Humphrey in his book A Discipline for Software Engineering (Reading, MA: Addison-Wesley, 1995).

UML by itself is inadequate; coupled with a reasonable process, even one as easy to master as Humphrey’s, UML becomes a very powerful system specification language.

Without a process, without adequate specs and system design, your project is doomed.

Firmware tools
Many code generation tools come from the IT and Unix worlds, ported to the embedded arena more by clever marketing than by technology. A few areas of firmware development, though, are uniquely embedded, requiring specialized embedded tools. One of the more frustrating tasks we face is creating startup code, that awful low-level chunk of assembly language that initializes stacks, peripherals, constants, and such. Compiler vendors usually offer startup code models; some come with complete code that will work right out of the box.
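
For the curious, here is a bare-bones sketch of the chores that startup code performs, rendered in C-style C++ rather than assembly; the linker symbol names are invented and vary from toolchain to toolchain.

    // The reset vector (a few lines of assembly) sets the stack pointer,
    // calls this routine, then calls main(). Symbol names are invented;
    // every linker spells them differently.
    extern unsigned char __data_load[];   // initial values of .data, in ROM
    extern unsigned char __data_start[];  // where .data lives in RAM
    extern unsigned char __data_end[];
    extern unsigned char __bss_start[];
    extern unsigned char __bss_end[];

    extern "C" void crt_init()
    {
        // Copy initialized data from ROM to its run-time home in RAM.
        const unsigned char *src = __data_load;
        for (unsigned char *dst = __data_start; dst != __data_end; ++dst, ++src)
            *dst = *src;

        // Zero the uninitialized (.bss) region, as the language requires.
        for (unsigned char *p = __bss_start; p != __bss_end; ++p)
            *p = 0;

        // Chip-specific chores (clocks, watchdog, chip selects) go here;
        // for C++, static constructors must also run before main().
    }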

Other vendors, like Aisys (www.aisys.co.il), complement the startup code with a program that generates complete device drivers. An expert system-like interface leads you through configuration of each peripheral. It then generates C or assembly code to completely implement the driver, insulating you from the error-prone tedium of translating incomplete databook specs to working code.
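
The output of such a generator looks much like the short driver below; the register layout and bit definitions here are invented, but they are exactly the sort of databook details the tool fills in for you.

    // Hypothetical memory-mapped UART; real addresses and bit layouts
    // come from the databook (or the generator's peripheral database).
    #define UART_BASE   0x40001000u
    #define UART_BAUD   (*(volatile unsigned long *)(UART_BASE + 0x00))
    #define UART_CTRL   (*(volatile unsigned long *)(UART_BASE + 0x04))
    #define UART_STATUS (*(volatile unsigned long *)(UART_BASE + 0x08))
    #define UART_DATA   (*(volatile unsigned long *)(UART_BASE + 0x0C))

    #define UART_ENABLE   0x01u
    #define UART_8N1      0x06u
    #define UART_TX_EMPTY 0x20u

    void uart_init(unsigned long clock_hz, unsigned long baud)
    {
        UART_BAUD = clock_hz / (16 * baud);  // divisor per the databook formula
        UART_CTRL = UART_8N1 | UART_ENABLE;  // 8 data bits, no parity, 1 stop
    }

    void uart_putc(char c)
    {
        while (!(UART_STATUS & UART_TX_EMPTY))
            ;                                // wait for room in the transmitter
        UART_DATA = (unsigned long)c;
    }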

Stenkil Software (www.stenkil.se) offers a similar product for a hundred different Hitachi processors. Stenkil's tool also generates a target monitor that exercises each peripheral in-circuit, proving that the device is running properly.

Managing code
Old-timers fondly remember the good old days when an entire embedded application fit nicely into 4K of ROM on board an 8051. Application size, measured in lines of code, has been doubling about every two years, to the point where 150,000 lines is now common. Yet, especially in the eight-bit world, a lot of small applications still exist, and the astonishingly low price of small microcontrollers means that small programs will always be common. But cheap 16- and 32-bitters, DSPs, and consumers' demands for nifty gadgets with ever-more features mean that high-end systems sporting huge hunks of code are flooding into the market.

As a product grows to beyond about 50,000 lines of code, no one engineer can comprehend even his or her own firmware, and when dozens of people contribute to a huge project, most will have only the vaguest idea of which functions do what when.

The sheer size of these projects means memory and good intentions just aren't adequate for tracking and managing the code. A number of vendors have jumped into the fray, offering tools that help manage files, changes, and the relationships between code modules.

Almost back to the beginnings of recorded history, Unix offered a variety of version control systems (VCS), programs that track changes made to the source so you can always rebuild any version of your product. Every development group, even when the entire company is composed of but one engineer, simply must use a VCS of some sort. Even in a single-user environment, the VCS practically eliminates lost files (you back up the server holding the VCS database every night) and gives traceability to the code. Did a user find that feature A no longer works? Using the VCS, you can determine exactly which release caused the problem, and then track how the changes created the trouble.

Once a project grows beyond a single developer, a VCS is the first line of defense against two people making incompatible changes to the same module at the same time. Traditional VCSes restrict write-access to modules, letting only one user at a time change the code; others may get the module for read-access only.

Now some vendors have expanded this concept to weaken the “only-one-user-changes-a-module-at-a-time” philosophy. If the file is large, you can reasonably assume that programmer A will be working on one section of the module while another developer edits another portion. Both Rational Software and Continuus Software (www.continuus.com) offer products that include intelligent conflict detection, which identifies inconsistencies caused by incomplete or conflicting sets of changes prior to the build. Thus, if two people change the same line of code, the tool will not only signal the problem, but help to create a resolution.

Most VCS systems (Rational, Continuus, Microsoft, and Intersolv, for example) now support widely dispersed development teams, managing changes with a code database available via the Internet to developers spread across 24 time zones. Some, like Intersolv's PVCS Tracker, even automatically let developers know when particular changes occur.

Development is becoming like shift work — three shifts a day, workers now checking in electronically rather than physically, all pursuing the time-to-market holy grail.

While VCS tools evolve from version control to change management, other tools give you different ways to look at the source itself. When dealing with hundreds of thousands of lines of code it's awfully hard to find particular functions and variables; even harder is understanding the relationships between different functions, objects, methods, and the like.

TakeFive Software (www.takefive.com), for example, offers its SNIFF+ line of source code engineering tools, which build a database of the code and cross-reference relationships to present them graphically. It's a low-hassle way of interpreting cross-reference trees, call graphs, or class components.

For example, when you change a method you may have to modify the portions of the code that use it, and, because of polymorphism, perhaps only particular instances of the method. Because SNIFF+ understands the source code, it's a much better tool for this than a traditional “global search and replace” editor.
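
A small invented example shows why. If the calibrate() interface changes, only the classes that actually override it need editing; a tool that understands the hierarchy points you at exactly those spots, while a text search drags in every unrelated hit on the word.

    class Sensor {
    public:
        virtual ~Sensor() {}
        virtual int read() = 0;         // every sensor provides its own read()
        virtual void calibrate() {}     // default: nothing to calibrate
    };

    class Thermistor : public Sensor {
    public:
        virtual int read()       { return 27; /* read and linearize the ADC */ }
        virtual void calibrate() { /* thermistor-specific calibration */ }
    };

    class ContactSwitch : public Sensor {
    public:
        virtual int read() { return 1; /* read a port pin */ }
        // No calibrate() here: it inherits Sensor's empty default, so a
        // change to calibrate() never touches this class.
    };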

Debugging
Because different vendors focus on different parts of the development process, source code engineering tools segue between coding and debugging. With the advent of OOP in general, and specifically C++ in embedded systems, traditional debugger vendors now offer tools that also include graphical views of the source.

The GNU gurus at Cygnus Solutions (www.cygnus.com) offer a product called Source Navigator that works with GDB to help debug C++ constructs. Like many other C++ debuggers, it includes a class hierarchy browser that lets you view an entire class tree, or focus on an individual inheritance relationship. Its Include Browser shows complex include relationships graphically, as does a cross-reference browser that helps with navigating refers-to and referred-by relationships.

Green Hills Software's (www.ghs.com) MULTI debugger is one of a number of products highly tuned to the needs of C++ development. Class and source browsing lets you click on the name of an object to display its members in the command pane; double-clicking displays them in a special display window where they can be monitored as you step through your program. When a displayed member is itself compound, double-clicking on the line that contains that member opens another window to display it broken down into its elements.

MULTI also debugs template functions. The compiler generates multiple instances of each template function, one for each combination of argument types actually used in the program. Every time the function is referenced, the compiler determines which of the generated instances to use, depending on the argument types. Several functions, therefore, effectively share the same source code. MULTI reverses this process to display the template definition as the source code for the function. Single-stepping through the program single-steps through the source statements associated with the template definition. If you encounter a procedure call inside the template function, MULTI takes you to the source for the appropriate procedure.
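
Here's the mechanism in miniature (not MULTI-specific, just ordinary C++): one template definition, two generated functions, and the debugger has to map both back to the same few lines of source.

    template <class T>
    T clamp(T value, T low, T high)
    {
        if (value < low)  return low;
        if (value > high) return high;
        return value;
    }

    void limit_readings(int raw_adc, float temp_c)
    {
        int   counts = clamp(raw_adc, 0, 4095);        // instantiates clamp<int>
        float temp   = clamp(temp_c, -40.0f, 125.0f);  // instantiates clamp<float>
        (void)counts; (void)temp;
    }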

Peripheral vision
Much of the debugging technology we use was derived from tools developed for the desktop. But embedded systems have unique debugging challenges that require specialized resources. Peripheral registers, for example, are a constant source of problems for developers. With some complex parts requiring literally hundreds of configuration parameters, we need some run-time aids to show bit settings in a comprehensible form.

Peripherals come in two flavors: those inside high-integration CPUs, and those on other chips on the board. A lot of debuggers now give access to on-chip peripherals, but few offer any help for the other parts. Perhaps someday we'll see libraries of register info we can integrate into our debug environments; until then you're pretty much sentenced to fiddling bits and bytes once you move off-chip.
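
Until then, the workaround is to decode the bits yourself, which is essentially what a debugger's register view does for on-chip parts. A sketch, using an invented off-chip Ethernet controller's status register:

    // Register address and bit assignments are invented for illustration.
    #define ENET_STATUS (*(volatile unsigned short *)0x00800002)

    #define LINK_UP     0x0001u
    #define FULL_DUPLEX 0x0002u
    #define SPEED_100   0x0004u
    #define RX_OVERRUN  0x0100u

    // Poor man's register view: turn raw bits into something readable.
    void show_enet_status(void (*print)(const char *))
    {
        unsigned short s = ENET_STATUS;
        print(s & LINK_UP     ? "link up, "     : "link down, ");
        print(s & FULL_DUPLEX ? "full duplex, " : "half duplex, ");
        print(s & SPEED_100   ? "100 Mbps"      : "10 Mbps");
        if (s & RX_OVERRUN)
            print(", RX overrun!");
    }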

On-chip peripherals are a different story. Vendors who focus on one or two processor families usually include high-level views of internal peripheral registers. Vendors offering a debugger family for a wide range of processors tend to have less register support. Software Development Systems (www.sdsi.com), for instance, has long sold the SingleStep debugger for higher-end Motorola processors. SingleStep allows you to display and modify peripheral registers in a high-level context, without referring to a stack of databooks.

One problem that frustrates developers working with real-time control applications is monitoring variables as they change over time. A control system running a chemical process, for instance, takes in data from distributed sensors and alters factory operation to keep the product's texture, color, composition, and other factors within acceptable limits. The developer must understand how the code transforms basic inputs into control parameters, yet most embedded systems are closed beasts that offer little insight into their operation. SurroundView, an intriguing new product from RTview (www.rtview.com), addresses this need by sending critical variables and data out to a debug terminal in near real time. You add SurroundView's monitor, a low-priority task, to your application. The supplied debugger-like application then displays each selected variable in any of a number of formats. Want to see how an A/D converter's output changes over time? SurroundView will pop up a graph showing counts vs. time.
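
The general idea is easy to sketch. This is not SurroundView's actual interface; the RTOS and UART calls below are invented stand-ins. A low-priority task periodically samples the variables of interest and ships them out a spare serial port, where the host-side tool graphs them.

    extern volatile int adc_counts;        // variables the application updates
    extern volatile int valve_position;

    extern void uart_putc(char c);         // from the board support package
    extern void task_delay_ms(int ms);     // invented RTOS delay call

    static void send_word(int v)
    {
        uart_putc((char)(v >> 8));         // crude big-endian framing
        uart_putc((char)(v & 0xFF));
    }

    void monitor_task(void *)
    {
        for (;;) {
            uart_putc((char)0x7E);         // frame marker for the host tool
            send_word(adc_counts);
            send_word(valve_position);
            task_delay_ms(100);            // ten samples per second is plenty
        }
    }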

RTOSes bring yet another wrinkle to debugging firmware. If you're serious about real time, then you need an environment that caters to time issues. Traditional procedural-only debugging is fine in a static environment, but it gives no insight into what happens when.

An RTOS is an extremely powerful resource for partitioning a problem into tractable small units (tasks). Most include messaging protocols that let you pass data between tasks robustly and reentrantly. Yet running an RTOS means that suddenly it's rather difficult to predict the time sequence of operations in the code.
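
A typical hand-off looks something like the sketch below. It uses the POSIX message-queue calls, which some RTOSes supply directly and most at least resemble; it also assumes the queue was created with a message size equal to sizeof(Reading).

    #include <mqueue.h>   // POSIX message queues; one of many RTOS flavors

    // A reading handed from a sampling task to a control task. Assumes
    // the queue was created with mq_msgsize == sizeof(Reading).
    struct Reading { int channel; int counts; };

    void producer_task(mqd_t q, int channel, int counts)
    {
        Reading r;
        r.channel = channel;
        r.counts  = counts;
        // The queue copies the message, so the sender may reuse its buffer
        // immediately; that copy is what makes the hand-off reentrant.
        mq_send(q, reinterpret_cast<const char *>(&r), sizeof r, 0);
    }

    void consumer_task(mqd_t q)
    {
        Reading r;
        if (mq_receive(q, reinterpret_cast<char *>(&r), sizeof r, 0) == (ssize_t)sizeof r) {
            // act on r.channel and r.counts
        }
    }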

Most serious debugger vendors offer task-aware debugging features designed to show the critical data structures in the RTOS itself (like the task control block), as well as contents of queues, mailboxes, and semaphores. While the RTOS layers a new level of abstraction on the application, a task-aware debugger gives you insight into how this new layer actually operates, both procedurally and over time.

The time view shows which task runs when; some even show why a task got suspended. (If a task is blocked on a resource, for example, that might indicate a priority problem.)

When timing information comes from a hardware tool like a logic analyzer or emulator, you get a totally non-intrusive view of the code's operation. More advanced processors challenge hardware tools: caches defeat logic analyzers and many emulators, while prefetchers and pipelines create subtle differences between what the processor is really doing and what the data bus shows. For high-end systems, emulation and logic analysis are giving way to software-based solutions. Typically you add a debug task to the RTOS to monitor system operation in near real time. Or, as with Applied Microsystems' (www.amc.com) CodeTest, a utility instruments the code, adding statements that echo processor information to the bus at a cost of 5% to 10% in execution time.
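
Boiled down, that instrumentation amounts to writing tag values to a dedicated address, as in the sketch below; the address is invented, and a real tool inserts and interprets such tags automatically. The tagged location must be uncached and externally visible, or the write never reaches the bus for the probe to capture.

    // Invented address; must map to uncached, externally visible bus space.
    #define TAG_PORT (*(volatile unsigned long *)0x00F00000)
    #define TRACE_TAG(id) (TAG_PORT = (unsigned long)(id))

    void pump_control(void)
    {
        TRACE_TAG(0x1001);     // entry tag, captured and timestamped off-chip
        /* ... the real control work ... */
        TRACE_TAG(0x1002);     // exit tag
    }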

With nearly a hundred vendors offering RTOSes, we're faced with an almost infinite number of possible tool combinations. Any one RTOS works with only a restricted set of compilers; the RTOS/compiler duo is then compatible with an even smaller set of debuggers.

This lack of compatibility has spawned a bewildering array of partnerships between tool companies. Suffice it to say that if you're using an RTOS, be sure to select a compatible toolchain: RTOS, debugger, ICE/BDM, and the rest. Talk to the vendors, find out which combinations of products work really well together, and base at least some of your selection criteria on that.

Debuggers are going well beyond the context of the CPU itself. Embedded systems process streams of data from all sorts of sources. Thankfully, some of these are standardized. If you're sending ASCII data over an RS-232 link, hook a PC up to the serial lines to monitor the data. Similarly, if you're working with USB, figure on buying a protocol analyzer designed to convert the USB stream back to intelligible data. Hitex (www.hitex.com), a traditional ICE vendor, now sells USB analyzers that work in conjunction with their emulators.

The CAN serial protocol took Europe by storm and is slowly finding its way into systems here in the U.S. CAN is another complex serial scheme, and Hitex again offers a line of tools designed to help view the data in meaningful form.

All in the process
The lines between tools are blurring. Specification tools generate code and might even come with some debugging capabilities. Compilers integrate with debuggers while debuggers link back to version control and source management utilities. It's clear that the art of software creation remains in flux.

No tool hands you bug-free code that meets budget and schedule. Tools are a necessary adjunct to development, and will indeed help you be more productive, but good code stems first from a disciplined, repeatable software process. Process comes first, tools second.

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars and helps companies with their embedded challenges. His “Break Points” column appears monthly in this magazine.
