Parallel programming critically requires tool support because of its unfamiliarity and intrinsic difficulty. Programming tools are not limited to compilers for efficient executable generation. They should help in systematically finding defects and performance bottlenecks, as well as in debugging and testing programs.
Without these tools, parallelism is likely to become an obstacle that reduces developer and tester productivity and makes development more expensive and of lower quality.
Compilers for high-performance processors have traditionally focused on ILP extraction. This approach has also been followed in embedded computing for signal and media processing. These application domains are characterized by vast amounts of ILP and word-level parallelism, as well as by regular data access patterns.
For this reason, processor architectures for signal and media processing usually feature a large number of parallel execution units and follow architectural templates reminiscent of general-purpose VLIW and vector processors. These processors rely heavily on compiler support to keep execution units busy and achieve massive speedups over single-instruction pipelined execution.
Compilers for data-parallel processors have evolved very rapidly, and they feature aggressive allocation and scheduling algorithms that achieve utilizations comparable to hand-optimized code.
A survey of this interesting field is outside the scope of this chapter, and the reader is referred to one of the many surveys and textbooks available [37, 38].
One interesting point is worth mentioning, however. Even within a single-processor engine, architectures are becoming increasingly interconnect-dominated. More specifically, the communication between execution units and the register file is the bottleneck that limits scalability in the number of functional units that can be successfully operated in parallel.
For this reason, processor architectures are evolving toward clustered organizations, with multiple register files and complex interconnects.
These in-processor interconnects, also called scalar operand networks, have strong similarities with chip-level NoCs, in that they need to be scalable and to provide huge bandwidth.
However, since their main purpose is to deliver instruction operands to functional units and results to register files, their latency is extremely tightly bounded (e.g., one or two clock cycles). Hence, computation and communication cannot be decoupled via a network interface and its connection-oriented services, which would impose unacceptable latency penalties.
Consequently, scalar operand networks are tightly coupled with the instruction set architecture, and they are viewed as explicit resources during compilation. Communication links are allocated and scheduled exactly like functional units and registers.
From the topological viewpoint, scalar operand networks are usually full or partial crossbars, even though multi-hop topologies will most likely appear in the future. As the focus of this book is on chip-level NoCs, we refer the interested reader to the many interesting references in the literature that deal with scalar operand network architecture and scaling trends.
Coarse-grain parallelism extraction
While compilers for ILP extraction are now mature and successful in industrial applications, the massive amount of research work on TLP discovery and extraction has not enjoyed a similar degree of success.
The most common approaches in industrial practice rely on explicit directives, inserted by the programmer to drive the compiler toward good task-level parallelizations. This pragmatic semi-automated approach can be very successful, even though its implementation is quite challenging.
One of the main issues in semi-automated TLP is the difficulty for the programmer of understanding and using program annotations for parallelism when specifying computation in a sequential fashion. In other words, if the programming language has an implicit sequential semantics, the programmer is required to manage continuous shifts between a sequential and a parallel programming model, which ultimately leads to inefficient or incorrect code.
As a result, the learning curve for the programmer is quite steep and parallelization is done as an afterthought, by modifying sequential code. This process is inefficient and often leads to sub-optimal results.
An example of a semi-automated approach is OpenMP. OpenMP provides a set of pragmas that give the compilation system directives on how to automatically generate a multi-threaded execution flow.
The programmer inserts pragmas, such as #pragma omp parallel, immediately before loops that should be parallelized, and the compilation system decides how to split the target loop into parallel threads that are then distributed among the available processors.
Thus, the programmer is relieved from the burden of managing data sharing and synchronization, while the compiler can focus its efforts where the programmer expects the most advantage from parallelization. OpenMP is gaining acceptance in high-performance chip multi-processors, such as IBM's Cell.
Example. Cell uses OpenMP pragmas to guide parallelization decisions. It then uses the pre-existing parallelization infrastructure of the proprietary IBM XL compiler. In the first pass of the compiler, machine-independent optimizations are applied to the functions outlined by the programmer. In the second pass of optimization, the target functions are cloned, then optimized codes are generated for both the master processor (PPE) and the slave data-parallel processors (SPEs).
The master thread runs on the PPE processor and manages work sharing and synchronization. Note that cloning of the call sub-graph of the target functions occurs in the second pass of optimization, after interprocedural analysis. Thus, the compiler can link the appropriate versions of all the library function calls for the SPE and the PPE.
An important issue is the limited-size local memory of the SPEs, which must accommodate both code and data. Hence, there is always a possibility that a single SPE-allocated block will be too large to fit. Since OpenMP is generally used to drive parallelization of tight loop nests, code spilling is not very common. Data spilling is instead a much more critical problem.
Fortunately, data is managed explicitly, and the optimizing compiler can in principle insert all the necessary data manipulation functions (e.g., DMA-accelerated data transfers) in an automated fashion. This is, however, more easily said than done, and in reality sub-optimal data organization choices cannot be fully remedied by background DMA-based memory transfers.
The critical challenge for the success of OpenMP and similar approaches is the efficient implementation of the parallelization step. In an NoC architecture, communication latencies and network topology should be accounted for when deciding the degree of parallelization of a loop (declared by the programmer as “parallelizable'').
In other words, it may be more efficient to parallelize the loop to a limited degree, if the various parallel threads can then be mapped onto a set of topologically close computational units.
Aggressive parallelization may be inefficient if it requires long-range data communication across many network switches. Clearly, a significant amount of research and development work is required to develop NoC-aware (or, more generally, communication-aware) software parallelization tools that can account for NoC communication latencies and can at the same time interact with NoC mapping to influence the way parallel threads are allocated onto the network topology.
At the same time, OpenMP appears somewhat limited in its expressiveness for what concerns the important issue of data allocation and partitioning: in other words, it is based on a uniform memory access abstraction which does not fit well with NoC-based distributed architectures.
Testing and debugging
Program testing and debugging is of the utmost importance. In fact, application parallelism introduces new types of programming errors, in addition to the usual ones in sequential code. Incorrect synchronization can cause data races and livelocks, which are extremely difficult to find and to understand, since they appear non-deterministic and hard to reproduce.
Conventional debugging, based on breakpointing and re-execution, does not work in this context, because error manifestation depends on subtle timing relationships between parallel tasks. The simple insertion of a breakpoint or of instrumentation code changes the timing of a program and may completely hide many serious bugs. Even if the problem can be reproduced, its diagnosis can be extremely difficult, since the state of the system is distributed and extremely difficult to analyze.
Language support for development of correct-by-construction programs is therefore as critical as language support for efficient compilation. Low-level hardware-related features for communication and synchronization should not be exposed to the programmer, not only because of code portability issues, but above all to avoid the creation of intricate program logic which becomes impossible to debug.
Clearly, even though well-designed languages can greatly help in writing correct code, systematic defect detection tools are extremely valuable. One promising approach is to use static program analysis to systematically explore all possible execution traces of a program. In this way it becomes possible to find errors that are hidden very deep in the program logic and are almost impossible to reproduce by standard testing.
Similar techniques are used with an increasing degree of success in hardware verification, where they are known as “formal verification'' techniques. Hardware is highly parallel and complex, and it is notoriously hard to verify, but unfortunately, software verification is even more difficult.
One of the main challenges is the size of the state space (which relates to the number of possible execution traces) of programs. Its size is formidable, especially if we consider parallel programs (remember that the size of the state space of two programs running in parallel is the product of the two components' state spaces). Hence, its complete exploration is hopeless, and in many cases we must limit ourselves to formally checking only some program properties.
Since formal software verification is not a panacea, developers critically require traditional debuggers to explore and understand the complex behavior of their parallel programs. Two main approaches are used in this area.
First, tracing and logging tools aim at keeping trace of, and tagging, messages and shared-data accesses by the various threads, allowing the developer to directly observe a parallel program's partially ordered execution. Important features that require further development are the ability to follow causality trails across threads (such as chains of accesses to distributed objects), to replay messages in queues and reorder them if necessary, and to step through asynchronous call patterns (including callbacks).
The second approach is reverse execution, which permits a programmer to back up in a program's execution history and re-execute some code. This technique is expensive and complex, but it is becoming viable thanks to the increased capabilities of execution cores, which have a large amount of shadow storage and are increasingly used as virtual processors, emulating well-known instruction sets via micro-code.
Software testing will also have to undergo substantial extensions. Parallel programs have non-deterministic behaviors and are therefore much more difficult to test. Basic code coverage metrics (e.g., branch or statement coverage) will have to be extended to take into account multiple concurrently executing threads.
Single-thread coverage is very optimistic, as it completely neglects the notion of global state required by multiple execution threads. Thus, stress tests will have to be augmented by more systematic semi-formal techniques (e.g., model-checking-like assertions).
In embedded systems, system simulation and emulation techniques are extremely useful and relevant for debugging. Simulation relies on the concept of the virtual platform, as exemplified in Part 4, which runs on powerful host machines.
When a parallel application is run on a virtual platform, its execution can be very carefully monitored, stopped and rolled back in ways that are not possible when debugging on the target hardware, and the developer has much more control over fine-grain details of the execution.
The main shortcoming of virtual platforms is that they execute at orders-of-magnitude slower rates than actual hardware, and reproducing complex incorrect behaviors with billion-instruction traces may require a huge amount of time.
Emulation platforms greatly accelerate simulation speed, but they are extremely expensive and traditionally they have been used only for hardware debugging. However, new approaches to affordable MPSoC platform emulation are emerging that show promise.
Software abstractions and software development support for NoCs are still immature, as the corresponding hardware architectures are just now emerging from the laboratories. It should be clear, however, that NoCs will be successful in practice only if they can be efficiently programmed. For this reason we believe that the topics surveyed here will witness significant growth in research interest and results in the next few years.
To read Part 1, go to
To read Part 2, go to SoC objectives and NoC needs.
To read Part 3, go to Once over lightly.
To read Part 4, go to
To read Part 5, go to Task-Level Parallel Programming
To read Part 6, go to Communications-exposed Programming
Used with the permission of the publisher, Newnes/Elsevier, this series of seven articles is based on material from “
Luca Benini is professor at the Department of Electrical Engineering and Computer Science at the University of Bologna, Italy. Giovanni De Micheli is professor and director of the Integrated Systems Center at EPFL in Lausanne, Switzerland.
For additional resources about multi-core NoC and SoC hardware and software design on Embedded.com, go to More on Multicores and Multiprocessors.
F. Boekhorst, “Ambient Intelligence, the Next Paradigm for Consumer Electronics: How will it Affect Silicon?,'' International Solid-State Circuits Conference, Vol. 1, 2002, pp. 28–31.
 G. Declerck, “
W. Weber, J. Rabaey and E. Aarts (Eds.), Ambient Intelligence, Springer, Berlin, Germany, 2005.
 S. Borkar, et al., “
 D. Culler and J. Singh,
J. Hennessy and D. Patterson, Computer Architecture: A Quantitative Approach, 3rd edition, Morgan Kaufmann Publishers, 2003.
 Philips Semiconductor,
 M. Rutten, et al., “
 ARM Ltd,
 B. Ackland, et al., “
G. Strano, S. Tiralongo and C. Pistritto, “OCP/STBUS Plug-in Methodology,'' GSPX Conference 2004.
M. Loghi, F. Angiolini, D. Bertozzi, L. Benini and R. Zafalon, “Analyzing On-Chip Communication in a MPSoC Environment,'' Design and Test in Europe Conference (DATE) 2004, pp. 752–757.
F. Poletti, P. Marchal, D. Atienza, L. Benini, F. Catthoor and J. M. Mendias, “An Integrated Hardware/Software Approach For Run-Time Scratch-pad Management,'' Design Automation Conference, Vol. 2, 2004, pp. 238–243.
 D. Pham, et al., “
 L. Hammond, et al., “
 L. Barroso, et al., “
 Intel Semiconductor,
 Texas Instruments, “
M. Banikazemi, R. Govindaraju, R. Blackmore and D. Panda, “MPI-LAPI: An Efficient Implementation of MPI for IBM RS/6000 SP Systems,'' IEEE Transactions on Parallel and Distributed Systems, Vol. 12, No. 10, 2001, pp. 1081–1093.
W. Lee, W. Dally, S. Keckler, N. Carter and A. Chang, “An Efficient Protected Message Interface,'' IEEE Computer, Vol. 31, No. 11, 1998, pp. 68–75.
U. Ramachandran, M. Solomon and M. Vernon, “Hardware Support for Interprocess Communication,'' IEEE Transactions on Parallel and Distributed Systems, Vol. 1, No. 3, 1990, pp. 318–329.
 H. Arakida et al., “
 F. Gilbert, M. Thul and N. When, “
S. Hand, A. Baghdadi, M. Bonacio, S. Chae and A. Jerraya, “An Efficient Scalable and Flexible Data Transfer Architecture for Multiprocessor SoC with Massive Distributed Memory,'' Design Automation Conference 2004, pp. 250–255.
P. Paulin, C. Pilkington and E. Bensoudane, “StepNP: A System-Level Exploration Platform for Network Processors,'' IEEE Design and Test of Computers, Vol. 19, No. 6, 2002, pp. 17–26.
 P. Stenström, “
 M. Tomasevic, V.M. Milutinovic, “
 I. Tartalja, V.M. Milutinovic, “
P. Kongetira, K. Aingaran and K. Olukotun, “Niagara: A 32-way Multithreaded Sparc Processor,'' IEEE Micro, Vol. 25, No. 2, 2005, pp. 21–29.
 H. Sutter and J. Larus, “
 R. Stephens, “
 E. Lee and D. Messerschmitt, “
 W. Thies, et al., “
M. Bekoij, Constraint Driven Operation Assignment for Retargetable VLIW Compilers, Ph.D. Dissertation, 2004.
 R. Allen, K. Kennedy,
P. Faraboschi, J. Fisher and C. Young, “Instruction Scheduling for Instruction Level Parallel Processors,'' Proceedings of the IEEE, Vol. 89, No. 11, 2001, pp. 1638–1659.
M. Taylor, W. Lee, S. Amarasinghe and A. Agarwal, “Scalar Operand Networks,'' IEEE Transactions on Parallel and Distributed Systems, Vol. 16, No. 2, 2005, pp. 145–162.
 P. Mattson, et al., “
 M. Sato, “
N. Genko, D. Atienza, G. De Micheli, J. Mendias, R. Hermida and F. Catthoor, “
 C. Shin, et al., “
 M. Michael, “