
How embedded projects run into trouble: Jack’s Top Ten – Number Three

Jack Ganssle, September 24, 2018

They say trouble comes in threes, and the number three reason on my list of why projects go south indeed leads to a bundle of afflictions.

3 – Poor resource planning

The embedded world differs from all other types of computer work in that we’re normally dealing with a dearth of resources. Underestimate the size of a PC application by an order of magnitude and, well, who cares? The user has 16 GB of memory and a 3 GHz processor. Is storage a problem when 1 TB of disk costs $50?

With firmware, your resource estimates have got to be reasonably accurate from nearly day one of the project. At a very early point we’re choosing a CPU, clock rate, memory sizes and peripherals. Those get codified into circuit boards long before the code has been written. Get them wrong and expensive changes ensue.

But it’s worse than that. Pick an 8-bitter, then max out its capabilities: what then? Changing to a processor, perhaps with a very different architecture, can add months of work. That initial goal of $10 for the cost of goods might swell quite a bit, perhaps even enough to make the product unmarketable.

Digi-Key lists 75,000 distinct part numbers for microcontrollers. Your job: pick the perfect part for the requirements, such as they are, and as they may change some unknown time in the future. Prices range from $0.24 to $884. Your boss wants the former.

The good news is that we generally know something about the I/O requirements, so we can narrow the search by picking some number of SPIs, UARTs, timers, etc. But things change. Tomorrow the sales people may want the thing to talk to a customer’s unexpected CAN bus.

Or there’s memory. Is that part with 384 bytes of flash going to be big enough? It does have 16 bytes of RAM. Can’t get much of a stack in that, though. But even modest memory requirements may force you out of an MCU into a CPU with external flash. The huge extra costs will make the accountants’ hair catch fire.
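One way to find out whether a tiny RAM budget really works is to measure stack use rather than guess at it. Here’s a minimal sketch of the old stack-painting trick, assuming a hypothetical linker-provided symbol (__stack_start) and a made-up 64-byte region; the names and sizes are illustrative, not from any particular part:

```c
/* Stack-painting sketch -- hypothetical symbol and size, not tied to any MCU.
   Fill the stack region with a known pattern at startup, run the worst-case
   paths, then count how many bytes were never overwritten. */
#include <stdint.h>

#define STACK_SIZE  64u                 /* assumed size of the stack region */
extern uint8_t __stack_start[];         /* assumed linker-provided symbol */

static const uint8_t PAINT = 0xA5;

/* Call as early as possible. Real startup code paints only below the live
   stack pointer (often in assembly); shown this way purely for brevity. */
void stack_paint(void)
{
    for (uint32_t i = 0; i < STACK_SIZE; i++)
        __stack_start[i] = PAINT;
}

/* Call after the unit has exercised its worst-case operations. */
uint32_t stack_headroom(void)
{
    uint32_t untouched = 0;
    /* The stack grows down on most parts, so unused paint sits at the low end. */
    while (untouched < STACK_SIZE && __stack_start[untouched] == PAINT)
        untouched++;
    return untouched;                   /* bytes of margin remaining */
}
```

Even a handful of bytes can be made to work if you know, rather than hope, that a couple of nested calls plus an interrupt frame will fit.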

Will the ISA and clock rate permit acceptable response to the real world? Sure, we’d like to put a big honkin’ multi-GHz speed queen on the PCB, but that comes at all sorts of costs: power needs, heat dissipation, dollars, and much more. Generally, we have to settle for a device that runs fast enough, but without much headroom to spare.

One company I have visited on multiple occasions has a rule that their CPUs must operate at 99% processor loading. Crazy, huh? But their volumes are in the billions per year, and extra transistors detract from the bottom line. Their careful analysis shows that, for them, the huge extra engineering cost involved pays off in the long run.

In the 80s I bid on a classified government contract to build a system that had to process real-time data that came very fast. Blithely assuming an interrupt could handle this, I was very unhappy to discover, post-award, that there was no way an interrupt’s overhead could keep up with the firehose of data. I spent three weeks finding the four instructions that could poll fast enough to keep up.
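To put made-up numbers on that kind of problem: if samples arrive at 1 MHz and the CPU runs at 8 MHz, there are only eight cycles per sample, and on a small part the interrupt entry and exit alone can eat several times that budget. The only option is a tight polling loop. Here’s a minimal sketch, with hypothetical register addresses; the idea, not the instruction count or the addresses, is the point:

```c
/* Tight polling loop -- hypothetical status and data registers.
   No interrupt entry/exit, no context save: just spin on the ready
   flag and grab the sample. */
#include <stdint.h>

#define STATUS_REG  (*(volatile uint8_t *)0x4000)   /* assumed: bit 0 = data ready */
#define DATA_REG    (*(volatile uint8_t *)0x4001)   /* assumed: read clears the flag */

void capture(uint8_t *buf, uint32_t count)
{
    while (count--) {
        while ((STATUS_REG & 0x01u) == 0)
            ;                            /* spin until a sample is ready */
        *buf++ = DATA_REG;               /* store it and move on */
    }
}
```

Whether the compiler turns that inner loop into the few instructions you actually have room for is something only the listing file will tell you, which is why the cycle counting has to happen before the board is committed.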

That was a classic example of poor resource planning.

A rule of thumb suggests, when budgets allow, doubling initial resource estimates. Extra memory. Plenty of CPU cycles. Some spare I/O, if only to ease debugging.

What is the proven method of estimating resource needs early in the project? There seems to be only one tried and true approach: experience. How does one gain that? From bitter experience. As Fred Brooks said: “Good judgment comes from experience, and experience comes from bad judgment.”

Next week: Somehow the firmware business hasn’t learned anything from the quality movement. We should.

 

Jack Ganssle (jack@ganssle.com) writes, lectures, and ruminates about embedded systems. His blog is here: www.ganssle.com.
