ParaSail: Less is more with multicore

S. Tucker Taft, AdaCore

June 19, 2012


No Race Conditions
A race condition is a situation in which two concurrent computations manipulate the same object without adequate synchronization, resulting in potentially unexpected or undefined behavior. ParaSail eliminates the possibility of race conditions through the compile-time checks described above in the No Parameter Aliasing section, the absence of global variables, and the rules for concurrent objects. Essentially, ParaSail eliminates race conditions by construction.
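As an illustrative sketch of what "by construction" means in practice (the syntax here is approximated from published ParaSail examples, not taken from this article, and Race_Demo is a hypothetical name):

```
func Race_Demo() is
    var X : Univ_Integer := 0;
    // The "||" connector asks for its two statements to be
    // executed in parallel. Both statements update the same
    // non-concurrent variable X, so the compiler rejects this
    // at compile time rather than allowing a race:
    X += 1 || X += 2;   // illegal: conflicting parallel updates of X
end func Race_Demo;
```

To perform such an update legitimately, X would have to be declared as a concurrent object, making the non-determinism explicit.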

A more general definition of race condition might be any computation whose result depends on the relative timing of two sub-computations.

ParaSail does allow for such computations. However, these kinds of intentional race conditions will only occur if the programmer creates one or more concurrent objects, and then manipulates them concurrently from parallel computations.

If no concurrent objects are manipulated in parallel, then the result of the ParaSail program is deterministic. For example, the Word_Count function we showed at the beginning of this article is highly parallel, but its result is independent of the relative rate at which the various parallel sub-computations are performed.
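The Word_Count function itself appears earlier in the article, but the same property can be seen in a hedged sketch of an analogous divide-and-conquer function (the syntax is approximated from published ParaSail examples; Sum is an illustrative name, not the article's code):

```
func Sum(V : Vector<Univ_Integer>) -> Univ_Integer is
    if |V| == 0 then
        return 0;
    elsif |V| == 1 then
        return V[1];
    else
        const Half := |V| / 2;
        // The two recursive calls operate on disjoint slices of V,
        // so the compiler is free to evaluate them in parallel.
        // No concurrent objects are involved, so the result is
        // deterministic: it does not depend on which half finishes first.
        return Sum(V[1 .. Half]) + Sum(V[Half + 1 .. |V|]);
    end if;
end func Sum;
```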

Of course concurrent objects may be essential in a given computation, and concurrent objects are often used to represent the external environment, such as a file system, database, or external process. If interactions with the external environment are performed from parallel sub-computations, then clearly the results will depend on the relative rate of these sub-computations.

This is the very nature of a real-time or interactive application; for such applications, determinism is already compromised by the non-deterministic nature of the real-time or interactive environment.

In ParaSail, the programmer is in control of the amount of non-determinism in the application, and the results will never be undefined due to inadequate synchronization; concurrent access is only permitted to concurrent objects, which have well-defined semantics in the presence of concurrent callers.

No Pointers
We end with a discussion of pointers, perhaps the worst impediment to easy parallel programming. Why are pointers so bad? Because they interfere with the critical divide-and-conquer approach to parallel programming. To divide and conquer a problem, it is necessary to be able to separate one part of the problem from another.

If pointers are used to represent the fundamental data structures, cleanly dividing one half of a data structure from another is not so easy. The programmer may “know” that the structure is a binary tree, but what happens if through some bug or undocumented “feature” there is actually some sharing happening between parts of the tree?

Is it safe to send one thread off to update the left “half” of the tree while simultaneously sending another to update the right “half,” if there is some possibility of them meeting up somewhere later in the middle of their updates?

So how does ParaSail avoid pointers? If we look back at the sections above, we see that ParaSail is a flexible language with a familiar class-and-interface-based object-oriented programming model, yet it has no need for pointers.

The region-based storage-management model, the availability of a null value for any type, and the ability to assign a non-null value into a previously null component (or vice versa) make it possible for objects to grow and shrink over their lifetimes without any explicit use of pointers.
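A hedged sketch of how this might look for a binary tree (the syntax and the use of visible components in an interface are approximations based on published ParaSail examples; Tree and Payload are illustrative names):

```
interface Tree<Element is Assignable<>> is
    var Payload : Element;
    var Left : optional Tree := null;   // "optional" permits a null value
    var Right : optional Tree := null;
end interface Tree;

// A tree object can grow from null and shrink back again,
// with storage managed by the enclosing region:
var Root : optional Tree<Univ_Integer> := null;
Root := (Payload => 42, Left => null, Right => null);
Root.Left := (Payload => 7, Left => null, Right => null);  // grow
Root.Left := null;                                         // shrink
```

Because Left and Right are components rather than pointers, the two subtrees are necessarily disjoint, which is exactly what divide-and-conquer parallelism needs.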

Behind the scenes the ParaSail implementation may well use pointers, but by eliminating them from the semantics of the language, we avoid essentially all of the problems that pointers can create for a parallel program.

Other ParaSail features we have not discussed above, such as the uniform ability to define how the indexing and slicing operations (A[I] and A[I..J]) work for any container-like type, mean that constructs like directed graphs are easily represented in ParaSail without pointers.

The graph would be a container object, a node identifier would be used for indexing into the graph container, and edges in the graph would be represented by pairs of node identifiers. Here is an example of a directed graph interface:

[The DGraph interface appears here as an image in the original article.]
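Since the interface was shown only as an image, here is a hedged reconstruction of what such a DGraph interface might look like; the operation names (Create, Add_Node, Add_Edge, Successors) and declaration details are illustrative approximations based on published ParaSail examples, not the author's exact code:

```
interface DGraph<Element is Assignable<>> is
    type Node_Id is new Integer<>;
        // Nodes are identified by integer ids rather than pointers.

    func Create() -> DGraph;
    func Add_Node(var DGraph; Element) -> Node_Id;
        // Parameter names are omitted here; the type names DGraph
        // and Element identify the corresponding inputs.
    func Add_Edge(var DGraph; From : Node_Id; To : Node_Id);
        // An edge is just a pair of node ids.
    func Successors(DGraph; Node_Id) -> Countable_Set<Node_Id>;
    op "indexing"(ref DGraph; Node_Id) -> ref Element;
        // Enables the G[N] notation on graph objects.
end interface DGraph;
```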

If an “indexing” operator is defined as above, then we can use the A[I] notation with objects of a type defined by the given module. There are other operators that, if defined, enable other pieces of ParaSail syntax.

In general ParaSail uses a syntactic-sugaring approach that turns uses of special syntax such as indexing, slicing, iterators, etc., into a series of calls on specific operators. If the operators are defined for the relevant types, the syntax is allowed.
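As an illustration, assuming a DGraph module along the lines described in the text, with Create, Add_Node, and Add_Edge as hypothetical operation names (the syntax is approximated from published ParaSail examples):

```
var G : DGraph<Univ_String> := Create();
const Start  := Add_Node(G, "start");
const Finish := Add_Node(G, "finish");
Add_Edge(G, From => Start, To => Finish);

// G[Start] is syntactic sugar for a call on the "indexing"
// operator defined by the DGraph module:
G[Start] := "start node, relabeled";
```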

Note that we are not giving parameter names for most of the above operations. Parameter names are generally optional on the specification, and if the type names of the inputs are unique, the type names may be used to refer to the corresponding input, both at the call site and within the body of the operation.

We will leave implementing the DGraph module as an exercise for the reader. We will also post an example implementation on the ParaSail blog [1].

The goal of ParaSail is to make parallel programming easy and safe. The main innovations in ParaSail lie in which impediments have been eliminated in the name of that goal, and in the flexibility and ease of use of what remains.

Programming in ParaSail is a pleasure, both because so many of the nasty debugging problems are eliminated from the start by the fundamental semantics of the language, and because the ability to use parallelism in a simple and efficient way can make the expression of the solution to a problem more natural.

ParaSail’s region-based, pointer-free data structuring model makes it easy to create data structures that naturally support a divide-and-conquer approach, and the checking provided at compile time eliminates many of the run-time failures and race conditions that can dramatically slow down the development and testing of a complex, parallel application.

S. Tucker Taft is currently director of language research at AdaCore, and is also founder and CTO of SofCheck, Inc., which he started in 2002 to focus on providing tools and technology to enhance software development quality and productivity. Prior to that Mr. Taft was a Chief Scientist at Intermetrics, Inc. and its follow-ons for 22 years. He graduated from Harvard College in 1975, Summa Cum Laude in Chemistry.

[1] ParaSail blog:
[2] Scala language:
[3] False sharing:
[4] Cyclone region-based memory management:

