Hardware Vs. Software

Jack Ganssle, October 09, 2001
"I think the time is long overdue for the software community to take a hard look at the tools that the chip community is currently using to do the SOC (systems-on-chip) designs," writes embedded systems developer Howard Smith. "Whether the design is described in Verilog or VHDL, there are excellent simulation tools to verify and test the design. These tools are designed to work on three different levels: a behavioral level, a synthesized gate level, and a synthesized gate level with delay information. The designer needs to create a test bench that becomes the test suite. Test benches can be designed for the entire SOC, or a subsystem within the SOC, or even just a simple function. The same tool can do all of the testing, even the integration testing.

"Chip designers don't have the luxury of a quick recompile to fix a problem," continues Smith. "It is more like six to ten weeks to get a respin of the silicon, and very expensive, too. So, they have to 'get it right' or pay a very heavy penalty (late to market and way over budget). To avoid the penalty, they have created software tools that help ensure that the design is correct before it is converted to silicon.

"I think the software community would greatly increase their productivity if they would just step away from the coding exercise and think about the software design process. Then they may be able to see where a better set of tools would be a great help.

"One other chilling thought for the software community: I am seeing SOC designs that are now including hardware implementations of things like TCP/IP stacks. Maybe the hardware guys will solve the problem. Their tools can do really cool things like state machines! And they know that it has to be right the first time!"

Thanks, Howard, for your perspective. My usual response to the "we hardware guys have to get it right" argument is that the firmware generally implements incredibly complex functionality. But that argument is getting less defensible as hardware assumes more and more traditional software functions. The Pentium 4 has 42 million transistors, a hugely complex beast by any measure. Yet it works extraordinarily well. A comparable amount of code (four million lines? Forty million?) would typically be rife with problems.

At the risk of simplifying the issues, I think Howard argues that hardware folks invest much more time and money into building reliable simulation and test environments than do firmware people. Their motivation is the extreme costs of mistakes, since any error means spinning a new piece of silicon.

Contrast that to the firmware world: defects have no obvious cost (we're talking pre-release errors, during the normal debugging cycle). Just keep debugging till the damn thing finally works. Or at least until our tests detect no more mistakes.
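
What would the hardware crowd's discipline look like in firmware? As a minimal sketch, here is a self-checking test in C, in the spirit of an HDL test bench; the checksum routine is an invented stand-in for whatever unit you happen to be testing:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Invented stand-in for the unit under test: a two's-complement checksum. */
    static uint8_t checksum(const uint8_t *buf, int len)
    {
        uint8_t sum = 0;
        while (len-- > 0)
            sum += *buf++;
        return (uint8_t)(~sum + 1);
    }

    int main(void)
    {
        const uint8_t msg[] = { 0x01, 0x02, 0x03 };

        /* Stimulus/response pairs play the role of the HDL test bench:
           the test drives the unit and checks every response itself. */
        assert(checksum(msg, 3) == (uint8_t)(256 - 6));
        assert(checksum(msg, 0) == 0);

        puts("all vectors pass");
        return 0;
    }

The point is that the test suite, not the engineer poking at a debugger, decides pass or fail.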

I've watched a lot of projects go from inception to release (or to the trashcan). It's extremely common to see firmware testing get shortened as deadline pressures intensify. We defer, delete, or destroy tests in an effort to SHIP. That just does not happen when designing chips.

Howard suggests that the superior tools of the chip designer are important. Perhaps. I wonder if the developers' attitudes are more crucial. The cost of failure looms in every engineer's mind -- and seeds his nightmares -- when confronted with the "go to silicon or check the tests" decision. This attitude and, frankly, fear, drives the designers to create near-perfect simulation environments.

That's much less common with any sort of software.

What do you think? Is it meaningless to compare hardware and software development? Or are there lessons we need to learn?

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. He founded two companies specializing in embedded systems. Contact him at jack@ganssle.com. His website is www.ganssle.com.

Reader Feedback

The very issue of handshaking between hardware and software engineers is well addressed in the SEI's (Software Engineering Institute) CMMI (Capability Maturity Model Integration) model. It is always suggested that a systems engineer perform hardware-in-the-loop simulation testing for mission-critical software, based on the process areas defined in the CMMI model.

Many organisations have realised the importance of a disciplined SDLC, as described in the model, and have reaped the fruits in terms of ROI, adherence to project schedule, quality, and the other benefits of reliable software.

Managers need to drive their organisations to adopt models like the CMMI.

Visweswara
Senior Consultant
Satyam Computers


For another viewpoint, let us step back and look again at the software process and at developers. Hardware and electronics grew out of physics and physical processes, so there is a strong tradition of looking to other sciences and outside methods for testing and solutions: X-ray analysis, for example, or chemistry for packaging. Software, by contrast, evolved principally from mathematics, and like mathematics it tries to be self-sufficient, never looking to other sciences for solutions.

For example, there is no equivalent of JTAG or the logic analyser in the software world; you do not get special PCs with hardware modified or adapted for software testing. Even in the embedded field, BIST and DFT are only just growing beyond catch phrases. So I think we should look at testing software by means other than software. After all, if only Euclid had looked around, he would have realized that one of his axioms was no axiom at all. That is, we need independent methodologies for developing and checking software besides executing it. Consider that no gearbox manufacturer checks its gears crystal by crystal, nor runs every one of them hundreds of miles before delivery, yet they work reliably.

Kalpak Dabir
Proprietor
Polar Systems and Devices


Software is really only a hardware abstraction. Computer languages are merely tools to allow a human being to conceptualize massively configurable HW.

Once the compiler and the assembler are finished with your code, you are left with an image that is impressed on a flash or ROM part, which is then HW. A state machine in your processor runs its fingers over a series of bumps in ROM, like the music box of ages ago playing from a cylinder. Sure, someone composed the music on the cylinder, but it is all HW now.

Regardless of whether you write in machine code, assembly, or C++, the result is the same in the end: a mask-ROMable image that is as much HW as any other part of the system.

Software and firmware aren't really real. HW is real.

Jim Cicon
FW Engineer
Hewlett Packard


While I agree that programmers and their processes need to be monitored, the problem as I see it is uninformed management that does not "quite" understand software or the development thereof.

This entity called "software" has the unique distinction of being based more on art than on science, though supported by science; this is because software can easily extend well beyond the logic of today's hardware. (This is not to say that hardware design is inferior, because the software designs of today will be in the hardware designs of tomorrow.)

Too many companies view software design as an abstraction. Software has been allowed to become the "quick fix"; it is far too easy to change. And the complexity of some projects does not lend itself to "band-aids."

I have found that the Software Development Life Cycle (SDLC) model really works to remove a lot of problems at the beginning of a project. The process model is built on a requirements document that must be adhered to; in my experience I have seen only one (1) outside of an SDLC environment. To fulfill the goals of the SDLC, management must support and enforce its policies; otherwise, the quality achievement described in the article will be harder to reach.

Vernon Davis
Software Engineer
Advanced Energy Industries, Inc.


Compare simply the languages used to design. In VHDL you define new (sub)types for nearly every variable, and if you try to assign an invalid value you get an error: if a variable's valid values run from 1 to 5, you can't assign a 6. In C I can assign a pointer to an int without error. VHDL is derived from Ada, which can likewise reject invalid values. The tools are there. Of course this is only a single point, but it is typical of the immaturity of software development that we use the wrong tools for the job. I have done both VHDL and C, and the VHDL boys are the better engineers, measured in terms of quality.

Ingo Knopp
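
Ingo's contrast is easy to demonstrate with a compilable C fragment. The names here are invented, and note that C gives us no way to declare a one-to-five range at all:

    #include <stdio.h>

    int main(void)
    {
        int valve = 6;    /* intended range is 1 to 5, but C cannot say so */
        int *p = &valve;

        /* int bad = p;      accepted with only a warning by most C compilers
                             of the day; VHDL or Ada would reject the
                             equivalent outright */

        unsigned long raw = (unsigned long)p;  /* pointer laundered into an integer */
        printf("valve=%d raw=0x%lx\n", valve, raw);
        return 0;
    }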


It seems to me that the driver in the hardware vs. software debate is COST. Prior to going to silicon, a manager somewhere has to hand over a heap-load of cash to pay for the next spin. Better get it right the first time. Contrast that with the up-front cost of a firmware release (hey, they're paying me anyway, why not spend that time creating a release) and you've got your answer.

We can see the middle ground in FPGA design -- the cost to change a reprogrammable array is somewhere in between the hardware and firmware cost, and engineers see defect rates somewhere in-between hardware spins and firmware -- FPGA designs are mostly right, but some defects usually pop up during hw/sw integration.

Let's face it: a large number of tools exist to help firmware designers, but we either refuse to buy them or don't spend the up-front time to use them correctly. If the cost of firmware defects can be accurately tracked and reported to management, the smart manager will look for a way to reduce that cost, and firmware testing methods will be allowed to mature.

David Cuddihy
Principal Engineer
ATTO Technology Inc.


Hello,

I just wanted to voice my opinion on the subject of hardware versus software. I am a firmware and software programmer, and have been for many years. To me, there are a lot of factors related to this issue. There are the common ones, such as lazy programmers who use debuggers to more or less design the programs as they go. This drives me crazy.

Another issue is time limits. Often I see unrealistic predictions for completion dates on software and firmware. This is due partly to management and marketing, and partly to the programmers themselves. Of course marketing wants the product tomorrow; who doesn't? But the programmers are also to blame, because many will give a time estimate without doing much, or any, analysis of the project. The estimate of "four weeks" becomes "three months" once the programmer realizes they forgot to take some aspect of the program into consideration.

Finally, there is the issue of reusability. This relates directly to the hardware for the most part. Where I work, we have a lot of code with a few bugs. The code itself is used across a lot of our products, with the minor changes all encapsulated in #ifdef blocks. This in itself is not a problem, because the code is designed to run in a specific environment, so if something goes wrong it can easily be tracked down. On the hardware side, the hardware itself is wired to do one very specific task.

Code is often not written to be "application specific" any more, because that is not time- or cost-efficient. There are many abstractions in code, which invariably lead to design flaws, coding flaws, and implementation flaws. For example, in one incident where I work, one of our analog-to-digital converters became obsolete. The hardware was redesigned to accommodate the new chip and worked great, but the code still had to run on both the old and new hardware. The result is even more complex code to fit a small variation in the hardware, as sketched after this letter. This can also introduce bugs because, again, the hardware is designed for one specific task and the firmware must be altered to allow for the new component.

In all, I don't think it comes down to tools, attitude, design philosophy, or any single thing; it is a combination of everything. Let's face it, this stuff is complex, and firmware/software is expected to run on a new target platform with minimal design changes. When was the last time you heard someone say, "Hey, great job designing that new piece of hardware; now let's design all-new firmware for it as well!"? I'll tell you, I haven't heard that one yet. Usually we are looking for ways to fit most of the old code into the new hardware to get the product out on time.

Regards,
Raymond Lee
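
Raymond's forking problem is easy to picture. A hypothetical sketch; the register address, the justification details, and the board-revision macro are all invented for illustration:

    #include <stdint.h>

    /* Invented memory-mapped data register for the converter. */
    #define ADC_DATA (*(volatile uint16_t *)0x4000u)

    uint16_t adc_read(void)
    {
    #ifdef BOARD_REV_B    /* new converter: 12-bit result, left-justified */
        return (uint16_t)(ADC_DATA >> 4);
    #else                 /* old converter: 10-bit result, right-justified */
        return (uint16_t)(ADC_DATA & 0x03FFu);
    #endif
    }

Each such variation is small, but every one adds a build configuration that must be tested separately.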


Jack,

You've hit the nail on the head with your editorial about hardware verification environments. I have kind of a unique perspective on this.

I'm a software engineer by trade and have built embedded hardware and firmware as well as more "traditional" SW applications in C, C++, and Java. However, I've spent the past year or so developing and refining a multithreaded, C++-based API for automating functional testing of RTL and gate-level HDL designs for complex ASICs and SoCs. Among other features, the API allows embedded code to be written and linked into a C++ test, with memory-mapped reads and writes dispatching to "transactors" that wiggle the pins of a bus-functional model standing in for the one or many micros/DSPs in an SoC design. Once the tests pass at this (much faster) behavioral level, the synthesizable RTL for the processor cores can be substituted and the same code cross-compiled and loaded into RTL memory models as machine instructions, making for a co-development and co-verification environment. (A toy sketch of the dispatch idea follows this letter.)

When eliciting requirements for this system from our verification and RTL teams, I was amazed at the excruciating attention to detail and discipline these guys have. A silicon re-spin is the stuff of designers' nightmares all right, particularly for those working in a freelance contract house like Intrinsix. The repeat business and referrals that come from our many satisfied customers are the lifeblood of our business model. The negative consequences of a failure in this industry are the economic corollary to the physical and human consequences of a train wreck or a stray cruise missile. The depth and breadth of system-level testing stand in stark contrast to my past experience designing embedded systems and software, even in the "safety-critical" arena of passenger and freight railroad automation! (Here's a survival tip: make like the bus and stop at *all* railroad grade crossings... :) )

Embedded software engineers have a lot to learn from ASIC verification engineers, particularly as more firmware begins running on SoCs and becomes part of the delivered system platform. But the level of testing will always correlate very strongly with the degree of negative consequences. I've attended your "Building better firmware faster" seminar, and you're absolutely correct when you state that most customers aren't prepared to pay the price of low-defect software. Only in industries where the alternatives are far more costly and visible do we see a real commitment to the steps necessary for creating high-reliability software-based systems.

Eldridge Mount
Software Engineer
Intrinsix Corp.
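
Eldridge's API is C++ and proprietary, so the following is only a toy sketch in C, with every name invented, of the dispatch idea he describes: the embedded code's reads and writes go through a table of transactor functions rather than raw pointers, so the same test source can target a bus-functional model today and the real RTL core later.

    #include <stdint.h>
    #include <stdio.h>

    /* A table of transactor functions; test code calls through this
       table instead of dereferencing hardware addresses directly. */
    typedef struct {
        void     (*write32)(uint32_t addr, uint32_t data);
        uint32_t (*read32)(uint32_t addr);
    } transactor;

    /* Stand-in bus-functional model: in a real environment these calls
       would wiggle the pins of the simulated bus. */
    static void bfm_write32(uint32_t addr, uint32_t data)
    {
        printf("BFM write [0x%08lx] <= 0x%08lx\n",
               (unsigned long)addr, (unsigned long)data);
    }

    static uint32_t bfm_read32(uint32_t addr)
    {
        printf("BFM read  [0x%08lx]\n", (unsigned long)addr);
        return 0;
    }

    static const transactor bus = { bfm_write32, bfm_read32 };

    int main(void)
    {
        bus.write32(0x1000, 0xdeadbeef);  /* "memory-mapped" accesses routed */
        (void)bus.read32(0x1000);         /* to the model, not real hardware */
        return 0;
    }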
