My February column in Embedded Systems Design Magazine was an attempt to show that the emperor, at least when talking about multicore technology, has no clothes. Multicore is being hyped as the solution to clock rate stagnation, when it really addresses two problems:
– A handful of "embarrassingly parallel" problems can derive great performance benefits from SMP.
– In many applications one can reduce power consumption by using more processors at slower clock rates.
Actually, there is a third problem that multicore solves: the vendors' need to sell us more transistors as they continue to exploit Moore's Law.
Now a study in IEEE Spectrum shows that even for classic embarrassingly parallel problems like weather simulations, multicore offers little benefit. The curve in that article is priceless: as the number of cores grows from two to 64, performance plummets by a factor of five. Additional processors nullify each other.
Call it the Nulticore Effect.
One might think that more CPUs equals faster systems, but in traditional symmetrical multiprocessing, groups of cores share the same memory bus, a bus that even with a single core is already as congested as Highway 101 at rush hour. Memory simply can't keep up with a single-cycle machine that can swallow a couple of instructions per nanosecond.
We all know this; it's the reason a modern processor is crammed full of complex circuits like pipelines and cache. Every access to the bus entails numerous wait states, which bring the system to a screeching halt. Add more cores, all demanding access to that same bus, and system performance is bound to drop.
Other problems surface. We know that absent scheduling algorithms like RMA (rate monotonic analysis) – which itself is highly problematic – preemptive multitasking is not deterministic. Though most embedded systems use preemptive multitasking, there's no way to ensure the system won't fail from a perfect storm of interrupts and task switches.
And it's hard – really hard in a complex system – to get multitasking right. Add in multiple cores, each of which is constantly blocking the others from memory, and determinism looks about as likely as every school kid's plan to become an NBA star.
Reentrantly sharing memory is tough enough with a single processor; when many share the same data, the demands on developers to produce perfectly locked and reentrant code become overwhelming.
Then there's the little issue of parallelizing programs, an unsolved problem that is to supercomputing what the holy grail is to the Knights Templar – plenty of rumors, lots of speculation, but no hard results.
There are a lot of smart people working on these problems and I've no doubt they will be solved at some point. But today a generally better approach is asymmetric multiprocessing, where each core has its own memory space. More on that later.
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .