Attack the parallel worlds of parallel programming

The announcement that Intel and Microsoft will invest $10M over five years in setting up and running a Parallel Computing Lab at UC Berkeley is excellent news for our industry. My hope is that it will help solve the significant challenge facing the next generation of software developers: efficiently and rapidly deploying software onto multicore silicon platforms.

Professor Kurt Keutzer, one of the key researchers working in the lab, says we can expect to see the first results in the summer of 2010. That's impressive; in research terms, it's a compressed timeline for a problem of this complexity. It's yet another indicator of how important it is to find a productive, industrial-strength solution as soon as possible.

This announcement also signals the continued, important contribution that large corporations make in funding academic research. In today's turbulent economic conditions, it's a credit to these companies that they keep doing so. It takes vision to see beyond the current quarter's P&L and back raw research, with all its inherent risk. Kudos to them.

While thinking about this, I realized that there's a real parallel universe operating alongside all of this research into the ultimate parallel programming paradigm. The customers I deal with on a daily basis are already deploying multithreaded software onto multicore platforms. They're using neither new languages nor new tools. As engineers do, they're simply (or perhaps not so simply) applying their brains to the problem. They'd probably be the first to admit, privately at least, that their implementations are not the most efficient, nor deployed as quickly as they'd like. Yet they're still launching new products into the market at an amazing rate.

A story I've heard from customers several times illustrates the point; it involves video systems. Recognizing that multicore platforms are the only way to deliver the throughput today's video standards demand, a developer new to multicore programming will probably assume that the most efficient implementation is to feed a series of frames into a pipeline of independent processing units, one stage per core. Having partitioned the code that way, the developer quickly discovers that the limiting factor is not the computation but the memory traffic required to move each frame through the stages. In other words, the parallel tasks that make the most sense are not always in the obvious places. Tens of thousands of developers around the world seem to be learning similar lessons, one at a time, in this all-pervasive new multicore world.
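
To make the lesson concrete, here's a minimal sketch in C with pthreads. It's my own illustration, not the customers' code, and the frame size, the two filter stages, and the slice count are all assumptions chosen for brevity. The first function mimics the "obvious" stage-per-core pipeline; the second splits each frame into slices and runs both stages on a slice while it's still cache-resident, instead of streaming the whole frame through memory once per stage.

/* Sketch: two ways to partition a two-stage video filter.
 * Build: gcc -O2 -pthread partition.c -o partition            */
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define WIDTH  1920
#define HEIGHT 1080
#define SLICES 4                 /* worker threads, data-parallel version */

typedef struct { uint8_t px[HEIGHT][WIDTH]; } frame_t;

/* Stage 1: simple brightness lift over rows [y0, y1). */
static void brighten(frame_t *f, int y0, int y1) {
    for (int y = y0; y < y1; y++)
        for (int x = 0; x < WIDTH; x++)
            f->px[y][x] = (uint8_t)(f->px[y][x] / 2 + 64);
}

/* Stage 2: horizontal 3-tap blur (rows are independent, so
 * horizontal slices have no cross-slice dependencies). */
static void blur(frame_t *f, int y0, int y1) {
    for (int y = y0; y < y1; y++)
        for (int x = 1; x < WIDTH - 1; x++)
            f->px[y][x] = (uint8_t)((f->px[y][x-1] + f->px[y][x] + f->px[y][x+1]) / 3);
}

/* "Obvious" stage-per-core pipeline, shown sequentially for brevity;
 * in a real pipeline each stage runs on its own core with frames
 * queued between them. Either way, each stage streams the entire
 * frame through memory, so every frame crosses the memory bus once
 * per stage -- bandwidth, not compute, becomes the limit. */
static void process_pipeline(frame_t *f) {
    brighten(f, 0, HEIGHT);      /* imagine this on core 0            */
    blur(f, 0, HEIGHT);          /* ...and this on core 1, one frame behind */
}

/* Alternative partition: split the frame into slices and run BOTH
 * stages on a slice while it is still cache-hot, one thread per slice. */
typedef struct { frame_t *f; int y0, y1; } slice_t;

static void *slice_worker(void *arg) {
    slice_t *s = arg;
    brighten(s->f, s->y0, s->y1);
    blur(s->f, s->y0, s->y1);
    return NULL;
}

static void process_sliced(frame_t *f) {
    pthread_t tid[SLICES];
    slice_t   job[SLICES];
    int rows = HEIGHT / SLICES;
    for (int i = 0; i < SLICES; i++) {
        job[i] = (slice_t){ f, i * rows,
                            (i == SLICES - 1) ? HEIGHT : (i + 1) * rows };
        pthread_create(&tid[i], NULL, slice_worker, &job[i]);
    }
    for (int i = 0; i < SLICES; i++)
        pthread_join(tid[i], NULL);
}

int main(void) {
    frame_t *f = malloc(sizeof *f);
    if (!f) return 1;
    memset(f, 0x80, sizeof *f);  /* dummy mid-gray frame */
    process_pipeline(f);         /* stage-per-core partition */
    process_sliced(f);           /* slice-per-core partition */
    free(f);
    return 0;
}

Even a proper pipeline implementation, with per-stage threads and queues, would still push every frame across the memory bus once per stage; the sliced partition is what removes that traffic, which is exactly the non-obvious place the useful parallelism turned out to live.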

Don't get me wrong: the academic research around multicore programming is vital. The area is important enough that when next-generation solutions are finally presented to a waiting world, someone will likely uncover and coin a law of parallel programming as ubiquitous as those of Moore and Amdahl. The UC Berkeley Parallel Computing Lab is simply the latest in a line of significant academic institutions putting their considerable reputations on the line by promising to deliver some or all of what the industry needs to properly exploit multicore architectures.
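
As a reminder of the kind of law we already live with, Amdahl's puts a hard ceiling on multicore speedup. If a fraction p of a program's work can be spread across N cores, the best possible speedup is

speedup(N) = 1 / ((1 - p) + p/N)

With p = 0.9, for example, an eight-core device tops out at about 4.7x, and no number of cores can push it past 10x (the figures here are my own illustration, not from the lab's work). The video story above is, loosely speaking, this law in action: the memory bus ends up playing the role of the serial bottleneck.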

In the meantime, let's not forget today's developers, working against the clock and stretching themselves to meet increasingly difficult performance and power-consumption targets on multicore platforms. These engineers are teaching themselves how to use today's languages and methodologies to achieve their goals, and it's not easy. They're making mistakes, learning, making more mistakes, and learning more; but in the end they're getting their jobs done. Shouldn't tool vendors be doing more to help them? Shouldn't our industry be doing more to share the knowledge and experience that already exists, for the benefit of all developers? After all, isn't that how academic research works?

In fact, these observations led me to propose a Multicore Programming Practices (MPP) Working Group under the auspices of the Multicore Association and, when the Association approved the proposal, to co-chair the group with Max Domeika of Intel. The idea is to give those hardworking engineers practical guidelines based on the current practices of engineers who have already learned to write parallel software the hard way.

People may say that multicore software programming with today's languages is just a temporary issue until the real parallel programming solution comes along. But let's not forget that the “gap” in stopgap is often a lot wider than anyone ever thought it would be.

David Stewart is the CEO of Critical Blue. He has over 25 years of experience in the EDA and semiconductor industries. This includes ten years at Cadence, where he started the company's System-on-Chip (SoC) Design facility. David has served on the board of several technology startups and one venture capital firm. He can be reached at .
