Taming the Hydra

"When I hear the word 'culture,' I reach for my revolver," deadpanned Hermann Göring (the line actually originated in a Hanns Johst play, but it stuck to Göring), a man who was clearly all business. In the technology business, it's also tempting to shoot the messengers, especially when they're all abuzz over processor cores and dual-core processors. Processor "cores" and their apparent proliferation seem to have captured the imagination of those with little imagination--or recollection of history. Intel's marketing people even cagily trademarked the name Core as the successor to the spectacularly popular Pentium brand. Now any conversation about processor cores automatically involves trademarked name-dropping. Clever marketing, that.
Over in the engineering world, processor cores have been with us since, well, processors. Every microprocessor and microcontroller in the world has a core in the same way that every vertebrate (certain celebrities excepted) has a brain. The core is simply the brain, the engine, the center, of any processor. It's where code gets executed and data gets shuffled. It's the little man in the radio who sings.
You can even buy cores from a number of microprocessor intellectual-property firms, and then design your own chips around them. This has been going on for at least 10 years, and many of those core licensees have put more than one into a chip, creating multicore processors. Chips with two, four, ten, or more processors are commonplace. Published research indicates that the average is 2.3 processor cores per chip. In short, this has all been done before.
The trick is obviously not creating multicore processors, it's programming them. Writing and debugging code for one processor is hard enough. Are you ready to take on two at a time? It can be done (all those multiprocessor chips must be running something) but it's not easy. The two cores in a dual-core processor are usually clocked together and use the same cache and the same bus interfaces. They share an instruction set and memory or I/O resources, but each executes its own stream of code. Like the Hydra, they have separate minds but one body. Who but Hercules can tame such a beast?
In mythology, Hercules got help from his nephew, Iolaus. And so it is with us. To attack this multiheaded monster we'll need help from software-tool vendors. That means new tools that balance software loads among two or more processors. Debuggers will have to evolve, too. This is all tractable work and it's already underway. Expect to see a stream of product announcements on this front over the coming year.
But new compilers and debuggers won't be enough. We may also need new programming languages. Current languages like C don't express parallelism well. Oh, a compiler can identify threads of execution or independent constructs and extract small amounts of parallelism here and there, but the language itself offers no way to express large-scale parallelism. If we're to exploit these new multiprocessor chips, we're going to have to swallow hard, roll up our sleeves, and tackle a truly Herculean task.
Jim Turley is the editor in chief of Embedded Systems Design. You can reach him at email@example.com.
New languages? There are plenty of existing languages that already express large-scale parallelism well. Ada certainly has, since its beginnings, and still does today. Quality compilers for Ada are widely available (including one that is part of the GCC suite). And a set of new capabilities for Ada is entering the home stretch of standardization.
I'm sure that there are other existing languages out there that would also fill the bill (I'm most familiar with Ada because of my job). We don't need to reinvent the wheel here!