Is multicore hype or reality?

Jack Ganssle - February 01, 2008

In the supercomputing world, similar dynamics were at work. Gallium arsenide logic and other exotic components drove clock rates high, and liquid cooling kept machines from burning up. But long ago, researchers recognized the futility of making much additional progress by spinning the clock-rate wheel ever higher and started building vastly parallel machines. Most today employ thousands of identical processing nodes, often based on processors used in standard desktop computers. Amazing performance comes from massively parallelizing both the problems and the hardware.

To continue performance gains, desktop CPU vendors co-opted the supercomputer model and today offer a number of astonishing multicore devices, which are just two or more standard processors assembled on a single die. A typical configuration has two CPUs, each with its own L1 cache. Both share a single L2, which connects to the outside world via a single bus. Embedded versions of these parts are available as well and share much with their desktop cousins.

The problem with SMP
Symmetric multiprocessing (SMP) has been defined in a number of different ways. I'll use the term for any design in which multiple identical processors share a memory bus. Thus, the multicore offerings from Intel, AMD, and others are SMP devices.

SMP will yield performance improvements only (at best) insofar as a problem can be parallelized. Santa's work cannot be parallelized (unless he gives each elf a sleigh), but delivering mail-order products keeps a fleet of UPS trucks busy and efficient.

Amdahl's Law gives a sense of the benefit accrued from using multiple processors. In one form, it gives the maximum speedup as:

speedup = 1 / (f + (1 - f)/n)

where f is the part of the computation that can't be parallelized and n is the number of processors. Figure 1 plots the possible speedup (vertical axis) against the percentage of the problem that can't be parallelized (horizontal axis), assuming an infinite number of cores and no other mitigating circumstances.
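
To get a feel for the numbers, here's a minimal sketch in C (my choice of language; the article doesn't include code) that evaluates the formula for a few serial fractions and core counts:

#include <stdio.h>

/* Amdahl's Law: maximum speedup = 1 / (f + (1 - f)/n), where f is the
 * serial (non-parallelizable) fraction and n the number of processors. */
static double amdahl(double f, int n)
{
    return 1.0 / (f + (1.0 - f) / n);
}

int main(void)
{
    const double f[] = { 0.01, 0.05, 0.10, 0.25 }; /* serial fractions   */
    const int    n[] = { 2, 4, 8, 1000000 };       /* 1e6 ~ "infinite"   */

    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++)
            printf("f = %4.2f  n = %7d  speedup = %6.2f\n",
                   f[i], n[j], amdahl(f[i], n[j]));
        printf("\n");
    }
    return 0;
}

Running it makes the Law's bite obvious: with f at just 10%, the speedup never exceeds 10, no matter how many cores you throw at the problem.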

The Law is hardly a death sentence, though: in the class of problems called "embarrassingly parallel," f is nearly zero, and huge numbers of calculations can take place simultaneously. Supercomputers have long found their niche in this domain, which includes problems like weather prediction, nuclear simulations, and the like.
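
As an illustration of the idea, here's a minimal sketch of an embarrassingly parallel loop in C using OpenMP (my choice; the article doesn't name a particular framework). Every iteration is independent of the others, so the work scales across however many cores are available:

#include <stdio.h>

#define N 1000000

static double a[N], b[N], c[N];

int main(void)
{
    for (int i = 0; i < N; i++) {
        a[i] = i;
        b[i] = 2.0 * i;
    }

    /* Each element of c depends only on the matching elements of a and b,
     * so the runtime is free to split this loop across all available cores. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] * b[i];

    printf("c[N-1] = %g\n", c[N - 1]);
    return 0;
}

Build with OpenMP enabled (for example, gcc -fopenmp); without the flag the pragma is simply ignored and the same loop runs serially on one core.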

The crucial question becomes: How much can your embedded application benefit from parallelization? Many problems have at least some amount of work that can take place simultaneously. But most problems have substantial interactions between components that must take place in a sequence. At best it's hard to decide at the outset, when you're selecting the processor, how much benefit you'll get from going multicore.
