Hardware Vs. Software


“I think the time is long overdue for the software community to take a hard look at the tools that the chip community is currently using to do the SOC (systems-on-chip) designs,” writes embedded systems developer Howard Smith. “Whether the design is described in Verilog or VHDL, there are excellent simulation tools to verify and test the design. These tools are designed to work on three different levels: a behavioral level, a synthesized gate level, and a synthesized gate level with delay information. The designer needs to create a test bench that becomes the test suite. Test benches can be designed for the entire SOC, or a subsystem within the SOC, or even just a simple function. The same tool can do all of the testing, even the integration testing.

“Chip designers don't have the luxury of a quick recompile to fix a problem,” continues Smith. “It is more like six to ten weeks to get a respin of the silicon, and very expensive, too. So, they have to 'get it right' or pay a very heavy penalty (late to market and way over budget). To avoid the penalty, they have created software tools that help ensure that the design is correct before it is converted to silicon.

“I think the software community would greatly increase their productivity if they would just step away from the coding exercise and think about the software design process. Then they may be able to see where a better set of tools would be a great help.

“One other chilling thought for the software community: I am seeing SOC designs that are now including hardware implementations of things like TCP/IP stacks. Maybe the hardware guys will solve the problem. Their tools can do really cool things like state machines! And they know that it has to be right the first time!”

Thanks, Howard, for your perspective. My usual response to the “we hardware guys have to get it right” argument is that the firmware generally implements incredibly complex functionality. But this argument is getting a bit less defensible as hardware in more and more cases assumes traditional software functions. The Pentium 4 has 45 million transistors, a hugely complex beast by any measure. Yet it works extraordinarily well. A similar bit of code of that size (four million lines? 40 million?) would typically be rife with problems.

At the risk of simplifying the issues, I think Howard argues that hardware folks invest much more time and money into building reliable simulation and test environments than do firmware people. Their motivation is the extreme costs of mistakes, since any error means spinning a new piece of silicon.

Contrast that to the firmware world: defects have no obvious cost (we're talking pre-release errors, during the normal debugging cycle). Just keep debugging till the damn thing finally works. Or at least until our tests detect no more mistakes.

I've watched a lot of projects go from inception to release (or to the trashcan). It's extremely common to see firmware testing get shortened as deadline pressures intensify. We defer, delete, or destroy tests in an effort to SHIP. That just does not happen when designing chips.

Howard suggests that the superior tools of the chip designer are important. Perhaps. I wonder if the developers' attitudes are more crucial. The cost of failure looms in every engineer's mind — and seeds his nightmares — when confronted with the “go to silicon or check the tests” decision. This attitude and, frankly, fear, drives the designers to create near-perfect simulation environments.

That's much less common with any sort of software.

What do you think? Is it meaningless to compare hardware and software development? Or are there lessons we need to learn?

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. He founded two companies specializing in embedded systems. Contact him at . His website is .

Reader Feedback

The very issue of handshaking between hardware and software engineers is very well addressed in the SEI's (Software Engineering Institute) CMMI (Capability Maturity Model Integration) model. It is always suggested that a system engineer perform hardware-in-the-loop simulation testing for mission-critical software, based on the process areas defined in the CMMI model.

Many organisations have realised the importance of a disciplined SDLC as described in the model, and have reaped the fruits in terms of ROI, adherence to project schedule, quality, and the other benefits warranted by reliable software.

Managers are required to drive their organisations to adopt models like CMMI (Capability Maturity Model Integration).

Senior Consultant
Satyam Computers

For another viewpoint, let us step back and look again at the software process and its developers. Hardware and electronics grew out of physics and physical processes, so there is a strong tradition of looking to other sciences and outside methods for testing and solutions, such as X-ray analysis, or chemistry for packages. Software, by contrast, evolved principally from mathematics, and like mathematics it tries to be self-sufficient, never looking to other sciences for solutions.

For example, there is no equivalent of JTAG or the logic analyser in the software world; you do not get special PCs with hardware modified or adapted for software testing. Even in the embedded field, BIST and DFT are only just growing beyond catch phrases. So I think we should look at testing software by means other than software; after all, if only Euclid had looked around, he would have realized that one of his axioms was not self-evident. That is, we need to create independent methodologies to develop and check software besides executing it. For example, no gearbox manufacturer checks the gears crystal by crystal, nor runs all of them hundreds of miles before delivery, yet they work reliably.

kalpak dabir
polar systems and devices

Software is really only a hardware abstraction. Computer languages are merely tools to allow a human being to conceptualize massively configurable HW.

Once the compiler and the assembler are finished with your code, you are left with an image that is impressed on a flash or ROM part, which is then HW. A state machine in your processor is running its fingers over a series of bumps in ROM, like the music box of ages ago playing from a cylinder. Sure, someone composed the music on the cylinder, but it is all HW now.

Regardless of whether you write in machine code, assembly, or C++, the result is the same in the end: a mask-ROMable image that is as much HW as any other part of the system.

Software and firmware aren't really real. HW is real.

Jim Cicon
FW Engineer
Hewlett Packard

While I agree that programmers and their processes need to be monitored, the problem as I see it is uninformed management that does not “quite” understand software or its development.

This entity called “software” has the unique distinction of being based more on art and supported by science; this is because software can easily extend well beyond the logic of today's hardware (this is not to say that hardware design is inferior, because the software designs of today will be in the hardware designs of tomorrow).

Too many companies view software design as an abstraction. Software has been allowed to become the “quick fix”; it is far too easy to change. And the complexity of some projects might not lend itself to “band-aids”.

I have found that the Software Development Life Cycle (SDLC) model really works to remove a lot of problems at the beginning of a project. The process model is based on a requirements document that must be adhered to; in my experience I have seen only one such document outside of an SDLC environment. To fulfill the goals of the SDLC, management must support and enforce the policies; otherwise, the quality achievement described in the article will be harder to reach.

Vernon Davis
Software Engineer
Advanced Energy Industries, Inc.

Compare simply the languages used to design. In VHDL you define new (sub)types for nearly every variable. If you try to assign an invalid value, you get an error: if a variable has valid values from 1 to 5, you can't assign a 6. In C, I can assign a pointer to an int without error. VHDL is derived from the Ada language, which can also restrict invalid values. The tools are here. Of course this is only one single point, but it is typical of the immaturity of software development to use the wrong tools for the wrong things. I have done both VHDL and C, and the VHDL boys are the better engineers, measured in terms of quality.

Ingo Knopp
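The constrained-subtype contrast above can be made concrete in C++. The following is a minimal sketch — the `Ranged` class and its behavior are invented here for illustration, not a standard facility — of a VHDL-style subtype that rejects out-of-range values at run time, where plain C would accept them silently:

```cpp
#include <stdexcept>

// Sketch of a VHDL-like constrained subtype in C++. Assigning a value
// outside [Lo, Hi] throws instead of silently succeeding, much as VHDL
// rejects a 6 assigned to a subtype declared with a range of 1 to 5.
template <int Lo, int Hi>
class Ranged {
public:
    Ranged(int v) : value_(check(v)) {}
    Ranged& operator=(int v) { value_ = check(v); return *this; }
    operator int() const { return value_; }

private:
    static int check(int v) {
        if (v < Lo || v > Hi)
            throw std::out_of_range("value outside declared range");
        return v;
    }
    int value_;
};
```

With `Ranged<1, 5> channel = 3;` the later assignment `channel = 6;` fails immediately, while the equivalent plain `int` assignment in C compiles and runs without complaint.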

It seems to me that the driver in the hardware vs. software debate is COST. Prior to going to silicon, a manager somewhere has to hand over a heap-load of cash to pay for the next spin. Better get it right the first time. Contrast that with the up-front cost of a firmware release (hey, they're paying me anyway, why not spend that time creating a release) and you've got your answer.

We can see the middle ground in FPGA design — the cost to change a reprogrammable array is somewhere in between the hardware and firmware cost, and engineers see defect rates somewhere in-between hardware spins and firmware — FPGA designs are mostly right, but some defects usually pop up during hw/sw integration.

Let's face it: a large number of tools exist to help firmware designers, but we either refuse to buy them or don't spend the up-front time to use them correctly. If the cost of firmware defects can be accurately tracked and reported to management, the smart manager will look for a way to reduce that cost, and firmware testing methods will be allowed to mature.

David Cuddihy
Principal Engineer
ATTO Technology Inc.


I just wanted to voice my opinion on the subject of hardware versus software. I am a firmware and software programmer, and have been for many years. To me, there are a lot of factors related to this issue. There are the common ones, such as lazy programmers who use debuggers to more or less design the programs as they go. This drives me crazy.

Other issues are time limits. Often I see unrealistic time predictions for completion dates on software and firmware. This is partially due to management and marketing, and partially to the programmers themselves. Of course marketing wants the product tomorrow (who doesn't?), but the programmers are also to blame, because many will give a time estimate without actually doing much, or any, analysis of the project. The estimate of “four weeks” becomes “three months” once the programmer realizes they forgot to take some aspect of the program into consideration.

Finally, there is the issue of reusability. This relates directly to the hardware for the most part. Where I work, we have a lot of code with a few bugs. The code itself is used across a lot of our products with minor changes, all encapsulated in IFDEF statements. This in itself is not a problem, because the code is designed to run in a specific environment, so if something goes wrong it can easily be tracked down. On the hardware side, the hardware itself is wired to do a very specific task.

Code is often not written “application specific” any more, because it is not time- or cost-efficient. There are many abstractions in code, which invariably lead to design flaws, coding flaws, and implementation flaws. Also, in one incident where I work, one of our analog-to-digital converters became obsolete. The hardware was redesigned to accommodate the new chip and worked great, but the code still had to run on both old and new hardware. The result is even more complex code to fit the needs of a small variation in the hardware. This can also introduce bugs since, again, the hardware is designed for one specific task, and the firmware must be altered to allow for the new component.

In all, I don't think it comes down to the tools, attitude, design philosophies, or anything else; it is a combination of everything. Let's face it, this stuff is complex, and firmware/software is expected to run on a target platform with minimal design changes. When was the last time you heard someone say, “Hey, great job designing that new piece of hardware; now let's design all-new firmware for it as well!”? I'll tell you, I haven't heard that one before. Usually we are looking for ways to fit most of the old code into the new hardware to get the product out on time.

Raymond Lee
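The situation Raymond describes, one firmware image spanning an old and a new ADC, is often handled by confining the revision difference to one small driver interface instead of scattering IFDEFs. A sketch of that pattern follows; the class names, bit widths, and register formats are all hypothetical:

```cpp
#include <cstdint>

// Sketch: hiding an ADC revision change behind one small interface so
// the difference lives in a single driver rather than in conditional
// compilation throughout the code base. All details are hypothetical.
struct AdcDriver {
    virtual ~AdcDriver() = default;
    virtual uint16_t read_sample() = 0;
};

// Old part: assume a 10-bit result, left-justified in a 16-bit register.
struct OldAdc : AdcDriver {
    explicit OldAdc(uint16_t raw) : raw_(raw) {}
    uint16_t read_sample() override { return raw_ >> 6; }
    uint16_t raw_;  // stands in for a real hardware register read
};

// New part: assume a 12-bit result, right-justified.
struct NewAdc : AdcDriver {
    explicit NewAdc(uint16_t raw) : raw_(raw) {}
    uint16_t read_sample() override { return raw_ & 0x0FFF; }
    uint16_t raw_;
};
```

Board-support code would pick the concrete driver once, at startup, from a hardware revision strap or register; the rest of the firmware sees only `AdcDriver`.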


You've hit the nail on the head with your editorial about hardware verification environments. I have kind of a unique perspective on this.

I'm a software engineer by trade, and have had experience building embedded hardware and firmware, as well as more “traditional” SW applications in C, C++, and Java. However, I've spent the past year or so developing and refining a multithreaded C++-based API for automating functional testing of RTL and gate-level HDL designs for complex ASICs and SoCs. Among other features, the API allows embedded code to be written and linked into a C++ test, with memory-mapped reads and writes dispatching to “transactors” that wiggle the pins of a bus-functional model that takes the place of the one or many micros/DSPs in an SoC design. Once the tests pass at this (much faster) behavioral level, the synthesizable RTL for the processor cores can be substituted, and the same code cross-compiled and loaded into RTL memory models as machine instructions, making for a co-development and co-verification environment.
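In rough outline, that kind of memory-mapped dispatch to transactors might look like the sketch below. Every class and function name here is invented for illustration; the actual API is not shown:

```cpp
#include <cstdint>
#include <map>

// Sketch of memory-mapped reads and writes dispatching to "transactors",
// in the spirit of the verification API described above.
struct Transactor {
    virtual ~Transactor() = default;
    virtual void write(uint32_t addr, uint32_t data) = 0;
    virtual uint32_t read(uint32_t addr) = 0;
};

// A trivial transactor that models a register block in memory; a real
// one would instead wiggle the pins of a bus-functional model.
struct FakeRegBlock : Transactor {
    std::map<uint32_t, uint32_t> regs;
    void write(uint32_t addr, uint32_t data) override { regs[addr] = data; }
    uint32_t read(uint32_t addr) override { return regs[addr]; }
};

// Routes each access to the transactor mapped at the largest base
// address at or below it, standing in for the processor core's bus.
class Bus {
public:
    void map_range(uint32_t base, Transactor* t) { ranges_[base] = t; }
    void write(uint32_t addr, uint32_t data) { find(addr)->write(addr, data); }
    uint32_t read(uint32_t addr) { return find(addr)->read(addr); }

private:
    std::map<uint32_t, Transactor*> ranges_;
    Transactor* find(uint32_t addr) {
        auto it = ranges_.upper_bound(addr);
        return (--it)->second;  // assumes addr is at or above some mapped base
    }
};
```

Embedded test code then issues plain reads and writes against `Bus`, and the same test can later run against the real RTL once the behavioral model is swapped out.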

When eliciting requirements for this system from our verification and RTL teams, I was amazed at the excruciating attention to detail and discipline these guys have. A silicon re-spin is the stuff of designers' nightmares all right, particularly for those working in a freelance contract house like Intrinsix. The repeat business and referrals that come from our many satisfied customers are the lifeblood of our business model. The negative consequences of a failure in this industry are the economic corollary to the physical and human consequences of a train wreck or a stray cruise missile. The depth and breadth of system-level testing stand in stark contrast to my past experience designing embedded systems and software, even in the “safety-critical” arena of passenger and freight railroad automation! (Here's a survival tip: make like the bus and stop at *all* railroad grade crossings… 🙂 )

Embedded software engineers have a lot to learn from ASIC verification engineers, particularly as more firmware begins running on SoCs and becomes part of the delivered system platform. But the level of testing will always correlate very strongly with the degree of negative consequences. I've attended your “Building better firmware faster” seminar, and you're absolutely correct when you state that most customers aren't prepared to pay the price of low-defect software. Only in industries where the alternatives are far more costly and visible do we see a real commitment to the necessary steps for creating high-reliability software-based systems.

Eldridge Mount
Software Engineer
Intrinsix Corp.
