
Trends in Hardware/Software Codesign

Larry Mittag

January 01, 1996


Designing hardware and software simultaneously is a key factor in reducing time-to-market. Although some vendors are talking about tools to facilitate the task, we still have a long way to go before codesign becomes a realistic design methodology.

It's a familiar story. News of the product has been leaked to the press or outright announced by the marketing department. Customer inquiries are starting to appear. And the sales force is eager to sell. Practically everyone in the company is ready and looking forward to the introduction of the new product, which is scheduled to take place at the big trade show.

Everyone except for the programmers, who are still trying desperately to get the software ready.

Why is this such a familiar scenario? Well, the fact that the system is behind schedule is probably not a major mystery. The combination of an over-eager marketing department and overly optimistic estimates from the engineers has probably done its damage to the project schedule, especially after that new wonder chip didn't quite work out and the system hardware had to be redesigned. And that new set of requirements that sneaked in with the redesign certainly didn't help either.

The fact that the project isn't quite done is actually fairly easy to understand. These kinds of problems have been common on practically every project I have worked on, and I am relatively certain I am not unique in that respect. The real question is this: why is it always the software that is still being worked on? What is there about software that makes it the fall guy for late projects?

The simple answer to that question is shown in Figure 1. This figure shows a typical generic schedule for an embedded systems project, one that includes both hardware and software development. The most telling point about this schedule is the order of operations. There is a fairly significant amount of overlap at the beginning of the project, but in general there isn't much going on in the software department between the time the initial coding is complete and when software testing begins. This slack time is more than made up at the end of the project, however, where all the work is being done by the software group. In other words, if the project is late it must be the software group's fault, because they were the last ones working on it.

Why do it?
This delay has effects beyond lost sleep for the software staff at the end of a project. The "waiting for hardware" gap can be a significant drag on the completion time of hardware/software projects, a situation that can cause serious concerns for companies developing such products. Not only is the product coming out later, but engineers tend to get restless during periods of inactivity, and restless engineers can mean high turnover. If enough programmers leave before you get to system integration time, there may be no system integration.

In today's competitive marketplace, time-to-market is the overriding priority. This situation means that projects cannot afford gaps like this in the schedule, which is a major force behind the current interest in hardware/software codesign. This concept, simply stated, is to have as much as possible of the hardware and software effort overlap completely, instead of the more linear scheduling that has been traditional. The key factor to efficient scheduling of a project is elimination of dependencies, and the goal of codesign is to eliminate what can be the biggest delay factor in the project--the Software Waiting for Hardware Gap, or SWHG.

Linear project schedule
The typical project schedule shown in Figure 1 was not dictated out of a burning bush or mandated by union regulations, but it is in fact how most embedded projects are being developed. This ubiquity usually means there are some very good reasons behind it.

If we look at very small embedded projects such as single-engineer projects, this schedule makes perfect sense. An engineer skilled in both hardware and software would probably tend to do the majority of the hardware work first, and then write code to use that hardware. If the hardware can be quickly and easily breadboarded, and there is no programming talent sitting idle, there may be little need for an extensive codesign effort.

Things start getting a little more complicated if the hardware can't be easily prototyped. If the design is too complex to wire-wrap, then it may be relatively expensive to generate a printed circuit board for initial testing, a phase of the project where changes are virtually certain to be made.

This problem has already been addressed in the hardware domain through the availability of extensive simulation libraries. These simulation libraries consist of behavioral models of most common logic functions, generally existing as software descriptions in hardware description languages such as Verilog or VHDL.

These descriptions can be wired together by simulators provided by the major electronic design automation (EDA) vendors. This simulation capability allows hardware engineers to create working simulations of entire hardware designs. These simulations can then be tested and debugged just like the wire-wrapped prototype boards that have been so familiar in the past.

This simulation capability has had a couple of effects on the overall project schedule. It now takes longer for the hardware engineers to create the first physical prototype, which means the schedule gap has grown a bit, but when the prototype does show up, it is often in much better shape than it was in the past. In fact, the chances are good that the first prototype will be reasonably bug-free, something that could rarely be said before simulation was available.

But we keep getting back to that pesky gap in the schedule. That gap isn't a real problem in our hypothetical one-person project because that lone engineer can concentrate on getting the hardware working and then do the software, but that model doesn't really work well on larger projects with separate hardware and software groups. So what does?

Alternatives for early target access
There are a few techniques available currently to allow a smoother transition into full system integration when the hardware is available. I have listed some of these techniques below, along with some notes from my own experience.

Port to similar, readily-available hardware first. One option in some circumstances involves doing initial development of the software on a readily-available target system that is as similar as possible to the eventual target system. This situation can allow most of the system-dependent code to be wrung out in a stable environment, making the eventual transition to the "real" target system much less painful.

This technique was used on a telecommunications system I worked on fairly recently. The software effort was anticipated to be extensive, and the company realized early in the project that there would be a significant delay before hardware would be available. We made a conscious decision to model our custom-made hardware after a commercially-available VME board. The commercial board had the same CPU and many of the I/O chips that were going to be in the custom hardware. We put some effort early in the project into creating VSB add-on boards that would support the missing peripherals.

The effort was very worthwhile in this particular instance. We gained some important hardware knowledge early on about one of the new I/O coprocessors they were using, and that knowledge helped us make a successful hardware design. On the software side, we had a stable platform to work on much earlier than we would have otherwise. We used this platform to debug, test, and optimize the code such that it was reasonably solid by the time we got to system integration. This resulted in significant time savings for the overall project.

But this technique is far from a generic cure for the SWHG. The combination of circumstances that allowed this technique to be used in this case included the ready availability of a suitably similar target system, a budget that was large enough to support the overhead expense of building the extra hardware to support the commercial system, and the time to do the eventual port to the actual target hardware. But on a project that can meet these parameters, there can be significant savings in the overall schedule time for the project.

Strictly segregate the software design. This step should be taken on any project, as we are learning with large embedded systems. There is really no excuse for distributing I/O dependencies throughout the application code. Not only does this distribution make the code more difficult to debug before the actual hardware becomes available, but it poses a significant barrier to code reuse when the hardware eventually changes.

Figure 2 represents a software design that distributes detailed knowledge of hardware interfaces throughout the code. This distribution makes it very difficult to debug the interface, because it can get hammered from so many different directions. Also, if the particular interface chip ever changes, this distribution of knowledge will mean a significant rewrite of many sections of code to adapt to the change.

Figure 3 is a much better design. Knowledge of the interface is all contained in one place, which is accessed by the application code through a generic I/O interface. If the details of the interface change over the life of the system, as such details are wont to do, it is relatively simple to make the changes in the software.
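To make the Figure 3 structure more concrete, here is a minimal sketch in C of what a generic I/O interface might look like. The device, register addresses, and function names are all hypothetical; the point is that only the driver module knows about the chip's registers, while the application sees nothing but the generic interface.

/* A minimal sketch of a generic I/O interface, assuming a hypothetical
 * memory-mapped UART-style chip. Only the driver section knows the
 * register addresses; the application works through io_dev alone. */

#include <stddef.h>

/* ---- generic interface seen by application code ---- */
typedef struct io_dev {
    int (*init)(struct io_dev *dev);
    int (*write)(struct io_dev *dev, const char *buf, size_t len);
} io_dev;

io_dev *serial_port(void);   /* supplied by whichever driver is linked in */

/* ---- driver module: the only place that knows the hardware ---- */
#define UART_BASE   0xFFFF8000UL                      /* hypothetical address */
#define UART_STATUS (*(volatile unsigned char *)(UART_BASE + 0))
#define UART_DATA   (*(volatile unsigned char *)(UART_BASE + 1))
#define TX_READY    0x01

static int uart_init(io_dev *dev)
{
    (void)dev;               /* baud rate and mode setup would go here */
    return 0;
}

static int uart_write(io_dev *dev, const char *buf, size_t len)
{
    size_t i;
    (void)dev;
    for (i = 0; i < len; i++) {
        while (!(UART_STATUS & TX_READY))
            ;                /* poll until the transmitter is free */
        UART_DATA = (unsigned char)buf[i];
    }
    return (int)len;
}

static io_dev uart_dev = { uart_init, uart_write };

io_dev *serial_port(void)
{
    return &uart_dev;
}

/* ---- application code: no register knowledge anywhere ---- */
void log_message(const char *msg, size_t len)
{
    io_dev *port = serial_port();
    port->write(port, msg, len);
}

Swapping in a different interface chip then means replacing the driver module; the application code above it is untouched.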

Granted, there is a little more overhead involved in the design shown in Figure 3. Overhead has long been considered a deadly sin in embedded systems, one to be avoided at all costs. This situation was a tradeoff we could afford in the days of slow CPUs and tightly-restricted memory, because the code we were writing for those systems was so much less complex than what we are being called on to create these days. It is still important to write tight, efficient code, but that code must be reusable, robust, and on schedule.

Actually, I suspect most readers out there who are working on medium or large embedded systems are already very familiar with this technique. The recent increased interest in real-time operating systems (RTOSes) represents a more mature approach to software development for embedded systems than we have seen in the past, where RTOS software was either home-grown or nonexistent. Most of the projects I have been involved with over the last few years have had a significant segregation in the software group between the applications programmers and the systems programmers, which allows each to create their part of the system without having to worry as much about the other's turf. The result is a better overall system design, at the cost of some extra overhead.

Do I/O programming on a different platform. This is a fairly specialized technique, but it can be a very good one. It is related to the "similar platform" technique described above, but the final port may involve more work.

I ran into this situation a couple of years ago on a small project where I was tasked to develop device drivers for a couple of interfaces to a Motorola 68000 system, the hardware for which was still being developed. This particular job offered a bonus for on-time completion, so I was especially motivated to meet the ambitious schedule. The budget for this project was too small to allow for special test hardware development, but there were PC boards readily available that had the I/O chips I would be using in the target system. I decided to use these boards to develop and test the device drivers, this being a much more efficient use of my time than bothering the hardware engineers while they did their thing.
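The trick that makes this kind of interim development pay off is keeping the register accesses behind a thin, build-time layer. The sketch below is hypothetical (the chip, offsets, and addresses are invented), but it shows how the same driver source can be exercised on a readily-available PC board and later rebuilt unchanged for the real target.

/* Platform-specific register access, selected when the driver is built.
 * All names, offsets, and addresses here are hypothetical. */

#ifdef BUILD_FOR_PC
/* Interim PC board: the chip sits in I/O port space. These two helpers
 * would wrap whatever port-I/O mechanism the host compiler provides. */
unsigned char pc_port_read(unsigned int port);
void          pc_port_write(unsigned int port, unsigned char val);
#define CHIP_BASE          0x300u
#define REG_READ(off)      pc_port_read(CHIP_BASE + (off))
#define REG_WRITE(off, v)  pc_port_write(CHIP_BASE + (off), (v))
#else
/* Real 68000 target: the same chip is memory-mapped. */
#define CHIP_BASE          0x00FF0000UL
#define REG_READ(off)      (*(volatile unsigned char *)(CHIP_BASE + (off)))
#define REG_WRITE(off, v)  (*(volatile unsigned char *)(CHIP_BASE + (off)) = (v))
#endif

/* Driver code: identical source on both platforms. */
#define REG_STATUS  0x00
#define REG_DATA    0x01
#define RX_READY    0x02

int chip_get_byte(unsigned char *out)
{
    if (!(REG_READ(REG_STATUS) & RX_READY))
        return -1;                    /* nothing received yet */
    *out = REG_READ(REG_DATA);
    return 0;
}

In a setup like this, the final port consists largely of revisiting that one access layer, plus whatever timing and interrupt differences show up once the real hardware arrives.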

As it turned out, the project was one of the more satisfying ones I have worked on. I had time to fine-tune the drivers without the pressure of being on the critical path for the project, and the eventual port to the target was finished well ahead of schedule. And the bonus money was icing on the cake.

As I said, this technique is a somewhat specialized one. But the technique illustrates that there are sometimes options available to inventive programmers, especially now that we are programming in high-level languages to more generic OS interfaces. In most systems the details of working with a specific CPU are becoming a much smaller part of the application, and that can offer chances for us to be productive for a longer part of a shorter project development cycle.

Simulation
More than a few of you are probably wondering about that hardware simulation capability I referred to earlier. The question that may have come to mind is why can't we run our software on that hardware simulation, instead of waiting for the real hardware to pop out the back end?

This same question has occurred to people at some of those EDA companies, people who are very interested in making their libraries of hardware simulations more valuable. In fact, this situation is one of the primary motivations behind Mentor Graphics' acquisition of Microtec Research (MRI). There is still a significant gap between the hardware design tools and those of the software side, but in some respects they are solving very similar problems.

Simulation is a very good example. Some RTOS vendors have been selling simulation software for the last few years, but in my experience it hasn't seen much use. Simulation software is often difficult to use, and the I/O interfaces available are generally weak at best. On top of that, the simulations that do full emulation of the target CPU tend to be extremely slow. By the time I got the memory space defined, simulated a gross approximation of the I/O environment, and got code downloaded into the simulation, I ended up with an unworkably slow and not terribly accurate simulation. It was almost a relief to get onto that buggy first hardware prototype, because at least the thing ran (and failed) in real-time.

But if these extensive system hardware simulations being used by the hardware groups could run actual code reasonably fast, that could be a tremendous boost for systems development. The running of actual code is the goal of these EDA companies, and the purchase of MRI indicates how seriously Mentor takes that goal.

But Mentor isn't the first EDA company to integrate MRI software into EDA libraries. That bell has been rung by a company named Eagle Design Automation, and the resulting product is available today. I discussed this product with Eagle's President Gordon Hoffman while doing background research for this article.

The simulation results that have been achieved by Eagle are fairly impressive. Eagle claims a worst-case simulation speed of about 1,500 instructions per second, with full simulation of the I/O hardware. I say fairly impressive, because there are a few caveats in their approach. They do not provide a full simulation of the CPU; the application's C code runs natively on the simulation host. In place of a CPU model, Eagle provides a set of library routines that mimic the I/O interface of the target CPU and connect into the hardware simulation. This approach to system simulation is shown in Figure 4.

I am not faulting them for this approach, far from it. Theirs is potentially a useful tool that could be a significant improvement on the other early-access schemes discussed above. This tool allows access to simulations of custom hardware much earlier than any of the techniques previously described, and provides that access in a controllable workstation environment.
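To make the idea concrete, here is a rough sketch of host-native execution against simulated I/O. It is strictly illustrative and assumes nothing about Eagle's actual interface: the bus_read8()/bus_write8() names and the toy device model are invented, and a real product would route those calls into the hardware simulation rather than into a hand-written C stand-in.

/* Host-native execution against simulated I/O -- an illustrative sketch only. */
#include <stdio.h>

#define REG_STATUS 0x00
#define REG_DATA   0x01
#define TX_READY   0x01

/* ---- toy behavioral model of the peripheral, running on the host ---- */
static unsigned char sim_device_read(unsigned int reg)
{
    if (reg == REG_STATUS)
        return TX_READY;          /* the model's transmitter is always ready */
    return 0;
}

static void sim_device_write(unsigned int reg, unsigned char val)
{
    if (reg == REG_DATA)
        putchar(val);             /* the model "transmits" to the host console */
}

/* ---- bus-interface layer: the driver calls these instead of dereferencing
 *      target addresses, so the same driver source runs on the host ---- */
static unsigned char bus_read8(unsigned long addr)
{
    return sim_device_read((unsigned int)(addr & 0xFF));
}

static void bus_write8(unsigned long addr, unsigned char v)
{
    sim_device_write((unsigned int)(addr & 0xFF), v);
}

/* ---- driver code under test, unchanged when it later targets hardware ---- */
#define CHIP_BASE 0x00FF0000UL

static void chip_put_byte(unsigned char c)
{
    while (!(bus_read8(CHIP_BASE + REG_STATUS) & TX_READY))
        ;                         /* poll the (simulated) status register */
    bus_write8(CHIP_BASE + REG_DATA, c);
}

int main(void)
{
    const char *msg = "driver exercised against the simulated device\n";
    while (*msg)
        chip_put_byte((unsigned char)*msg++);
    return 0;
}

The payoff is that the driver's control logic can be exercised at the desk long before a board or a cycle-accurate CPU model exists.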

But there are a couple of potential flaws in this approach. For example, it would not be possible to get completely accurate timing information out of the system simulation. To get exact cycle counts for the target CPU, you need to provide a full simulation of the target CPU, one that you can run the actual target code on. Of course, the tradeoff here is that such a full simulation takes much longer to develop for a CPU than the two to four weeks per CPU that Eagle claims for their current simulations. Given the rate that CPUs are being developed these days, it could be a daunting task just to keep up.

But keeping up with full CPU simulation is one of the tasks that Mentor/MRI are looking to take on. I discussed the plans for the future with Vincent Ratford and Callan Carpenter from Mentor and Jim Ready from MRI. I found the discussion quite enlightening.

A primary goal of the merger was to interface the hardware simulation libraries of Mentor with the CPU simulation capability and software interface of MRI's products. This situation could provide complete system simulation, allowing full disclosure of timing and debug information to both the hardware and software engineers.

But this information could be costly in terms of simulation time. Handling a system simulation at this level of detail could strain even the fastest workstation, and it could prove unworkable for anything less. I discussed this problem with the Mentor trio, and they presented me with a series of options for hardware emulation and simulation that could solve that problem. One of these options is shown in Figure 5.

The idea exhibited in this figure is to use an emulator or other "real" hardware to play the part of the CPU in the system. This box is then interfaced to the simulation of the rest of the system, running on the simulation host. Realization of this design could lead to the best of both worlds, a simulation that could gradually be phased over into reality as the pieces are produced in physical form.

A real surprise that came during the discussions with Mentor/MRI was the time frame they were claiming for initial product releases. Mentor/MRI claimed the first integrated products from the merger would be released by mid-1996. When it came to details on which of the possibilities shown here would be present in that initial release, Mentor/MRI was understandably less precise, but they are contending they will produce combined products in a relatively short time frame.

The real question is whether simulation can break out of the niche areas it currently inhabits and become mainstream technology for embedded systems. For this to happen, two things need to occur. Customers must step forth who will pay real money for the solutions that simulation technology represents, and they will step forth only if simulation does a better job of solving test and debug problems than the real hardware--or does it earlier in the development cycle. If the simulation can't be trusted to accurately model the real system, or it takes longer to set up and use than to create a physical incarnation of the real system, then that customer will not pay for the simulation.

As to the second requirement, vendors must achieve a critical mass of accuracy, ease of use, and ubiquity for simulation technology. Accuracy means the desired level of detail of the simulation must be maintained, and that information must not be bogus in any way, shape, or form. Ease of use means that the software cannot be allowed to go out to lunch at random times, and the user must be working primarily within the solution space rather than mucking with the tools to make them work at all.

The last point, ubiquity, means that there must be standard interfaces into several levels of the simulation. As a customer, I don't want to hear about how I don't have access to simulations of a particular CPU unless I use tools from XYZ EDA vendor, or that I need to buy an emulator from a particular vendor because the simulator is hard-wired to the proprietary command set for that vendor's products.

Complete understanding needed
The goal of completely overlapped hardware/software codesign is a difficult one. Ideally, the two sides will eventually work with design tools that completely understand each other. If this understanding takes place, it should be possible one day to produce embedded systems that run successfully the first day they become physically incarnate. If we do reach that ultimate goal someday, I suspect that the work of being a software or hardware engineer will be much more rewarding, albeit somewhat less exciting. But we have a long way to go before that goal is realized. Hardware and software development are similar in concept, but the devil is always in the details.

I discussed the current state of hardware simulation with some hardware engineers, and I came away with the definite impression that there was still work to be done in that world. The simulation tools have an uneven track record, and the good ones are still too expensive for use in normal hardware designs. The good tools tend to be used to model custom ASICs and expensive high-performance custom hardware. In fact, I was told that with many standard designs using current chip sets, the implementation was considered so "goof-proof" that it wasn't worth setting up the simulator. The chances of failure were considered low enough that it was worth turning out actual hardware right away.

Until we reach the point where simulation presents a complete solution, I hope that I have presented you with some options that are workable for you. Remember that the project schedule is not just the problem of management. In today's workplace, we all have more responsibility to produce better products more efficiently.

Larry Mittag is a contributing editor for Embedded Systems Programming. He is also the lead consultant for Mittag Enterprises, a consulting firm that specializes in embedded systems software and communications applications. Mittag has over 18 years of experience in the embedded systems industry, and holds degrees in physics and secondary education.
