Will Moore's Law doom multicore?

Jack Ganssle, March 11, 2013

I have some problems with the paper:

  • The authors assume an Intel/AMD-like CPU architecture. That is, huge, honking processors whose entire zeitgeist is performance. We in the embedded space are already power-constrained and generally use simpler CPUs. It's reasonable to assume a mid-level ARM part will run into the same issues, but perhaps not at 8 nm.
  • They don't discuss memory contention, locks, or interprocessor communication. That's understandable, as their thesis is predicated on power constraints alone. But these issues will make the results even worse in real-world applications. The equations presented assume no bus contention for the shared L2 (and L2 is typically shared on multicore CPUs) and none for main-memory accesses. Given that L1 is tiny (32 to 64 KB), one would expect plenty of L1 misses and thus lots of L2 activity... and therefore plenty of contention.
  • The models analyze applications in which 75% to 99% of the work can be done in parallel. Plenty of embedded systems won't come near 75% (see the Amdahl's-law sketch just after this list for how punishing that is).
  • It appears the analysis assumes cache wait states are constant: three for L1 and 20 for L2. Historically that has not been the case--the 486 had a zero-wait-state cache. It's hard to predict how future caches will behave, but if past trends continue, the outlook will be even worse than the paper's conclusions suggest.
  • The paper assumes a linear relationship between clock frequency and performance, though the authors acknowledge that memory speeds don't support this assumption.
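
To put a number on the parallel-fraction point above, here's a minimal Amdahl's-law sketch in C. The 75% to 99% fractions come from the paper; the core counts are my own illustrative picks, not figures from the paper:

    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / ((1 - f) + f/n), where f is the
     * fraction of the work that parallelizes and n is the core count.
     * The fractions below match the paper's 75%-99% range; the core
     * counts are illustrative assumptions, not from the paper. */
    static double amdahl(double f, int n)
    {
        return 1.0 / ((1.0 - f) + f / (double)n);
    }

    int main(void)
    {
        const double f[] = { 0.75, 0.90, 0.99 };
        const int n[] = { 4, 16, 64, 1024 };

        for (size_t i = 0; i < sizeof f / sizeof *f; i++) {
            for (size_t j = 0; j < sizeof n / sizeof *n; j++)
                printf("f=%.2f, %4d cores: %6.2fx\n",
                       f[i], n[j], amdahl(f[i], n[j]));
            /* The limit as n grows without bound is 1/(1 - f). */
            printf("f=%.2f, upper bound: %6.2fx\n\n",
                   f[i], 1.0 / (1.0 - f[i]));
        }
        return 0;
    }

At 75% parallel, the speedup never exceeds 4x no matter how many cores you add, which is why the paper's lower bound matters so much for embedded work.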


The last point is insanely hard to analyze. Miss rates for L1 and L2 are extremely dependent on the application. SDRAM is very slow for the first access to a block, though succeeding transfers happen very quickly indeed. Any given transaction could take anywhere from three cycles (an L1 hit) to hundreds (a miss that goes all the way to SDRAM). One wonders how much tolerance a typical hard real-time system would have for such uncertainty.
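
To see how wide the spread is, here's a back-of-the-envelope average-memory-access-time (AMAT) calculation using the paper's cache latencies. The miss rates and the 200-cycle SDRAM first-access penalty are assumptions of mine for illustration only; real numbers vary enormously by application:

    #include <stdio.h>

    /* Average memory access time (AMAT) using the paper's cache
     * latencies (3 cycles for L1, 20 for L2). The miss rates and the
     * 200-cycle SDRAM first-access penalty are assumed values; real
     * numbers depend entirely on the application. */
    int main(void)
    {
        const double l1_cycles   = 3.0;    /* from the paper            */
        const double l2_cycles   = 20.0;   /* from the paper            */
        const double dram_cycles = 200.0;  /* assumed first-access cost */

        const double l1_miss[] = { 0.02, 0.05, 0.10 };  /* assumed */
        const double l2_miss[] = { 0.10, 0.30 };        /* assumed */

        for (size_t i = 0; i < sizeof l1_miss / sizeof *l1_miss; i++)
            for (size_t j = 0; j < sizeof l2_miss / sizeof *l2_miss; j++) {
                double amat = l1_cycles
                            + l1_miss[i] * (l2_cycles
                            + l2_miss[j] * dram_cycles);
                printf("L1 miss %2.0f%%, L2 miss %2.0f%%: "
                       "AMAT = %5.1f cycles (worst case %3.0f)\n",
                       100.0 * l1_miss[i], 100.0 * l2_miss[j],
                       amat, l1_cycles + l2_cycles + dram_cycles);
            }
        return 0;
    }

Even with these tame assumptions the average access time varies by a factor of several, and the worst case sits two orders of magnitude above the best--exactly the kind of uncertainty a hard real-time designer must budget for.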

Two conclusions are presented. The pessimistic one is the Chicken Little scenario, in which we hit a computational brick wall. Happily, the paper also addresses a number of more optimistic possibilities, ranging from microarchitecture improvements to unpredictable disruptive technologies. The latter have driven semiconductor technology for decades, and I for one am optimistic that cool and unexpected inventions will continue to drive computer performance along its historical upward trajectory.

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at jack@ganssle.com. His website is www.ganssle.com.
