Using open-source GNU, Eclipse & Linux to develop multicore Cell apps: Part 2

At a high level, this series of articles on programming the Cell processor (starting with Part 1), like the book upon which it is based, is just a collection of instructions and recipes for converting human-readable text files into binary executables. This conversion process, called building, is the subject of this part of the series, which focuses on two topics:

1. The SDK tools (ppu-gcc, spu-gcc, ppu-as, spu-as, etc.) that perform the build process

2. The makefiles that direct how the build should be performed

If you're already familiar with the GNU Compiler Collection (GCC) and its tools, you may want to just skim this chapter. ppu-gcc and spu-gcc have the same options as regular GCC tools and are used in the same way. There's also nothing new about the makefiles used to build Cell applications.

But if you're unacquainted with GCC or you've forgotten how to use it, follow this chapter closely. The build process for the Cell isn't hard to understand, but there's nothing more annoying than a mysterious ld error or a misplaced library. It's better to spend time now learning the tools than to lose time later debugging errors.

Getting Started. Most of the components in the Cell Software Development Kit (SDK) were created by IBM, but the basic build tools were developed by Sony. Wisely, Sony chose to base its tools on the GCC. The GCC toolchain has gained legions of developers since its release more than 20 years ago, and it's easy to see why: It supports a broad range of processors, it's released under the GNU General Public License, and its compiling standards are as high as they come.

GCC tools have been ported to run on over 50 different processor architectures. Here we will be concerned with only two: the PowerPC Processor Unit (PPU) and the Synergistic Processor Unit (SPU). The PPU and SPU both reside on the Cell but have different instruction sets. That is, an application compiled to run on the PPU will not be able to run on the SPU, and vice versa. For this reason, the SDK provides separate sets of tools for the two architectures. This part of the series describes both sets in detail.

The eight SPUs perform the brunt of the Cell's computation, but we can only interact with the Cell through its PPU. Therefore, this section describes the PPU development tools first and the SPU tools second. Both are based on GCC, so the difference between the two isn't significant.

Building Apps for the PowerPC Processor Unit (PPU)
The material presented here uses an explanation-demonstration approach. That is, concepts are explained first and then demonstrated with example code. This works well for theory-oriented topics such as matrices and frequency transforms, but when it comes to detail-oriented topics such as GCC usage, the reverse approach is better:

Start with a working example and then explain why the example works. This way, the meaning and importance of the details become clear at the start.

The example code available online is divided into directories named after chapters. Each chapter directory is divided into project directories. A project is a set of files that combine to produce a single application.

In the Chapter3 directory, ppu_project contains a source file called a.c and a directory called head_dir. a.c is a simple C source file, and its code is presented in Listing 3.1 below.

Listing 3.1 Basic PPU Source File: a.c

#include <stdio.h>
#include "x.h"
#include "y.h"
/* Display the values of x and y */
int main() {
    printf("x = %u, y = %u\n", x, y);
    return 0;
}

This source file displays the values of x and y, but doesn't declare either. These variables are declared and initialized in the header files x.h and y.h, both located in head_dir. Listings 3.2 and 3.3 below show the code of both header files.

Listing 3.2 Simple PPU Header File: x.h
/* Declare the value of x */
  unsigned int x = 4;

Listing 3.3 Simple PPU Header File: y.h
/* Declare the value of y */
  unsigned int y = 9;

The goal of this example is to convert these three files into a single executable called a. From a developer's standpoint, this can be performed in three ways:

The long way: Execute ppu-cpp, ppu-gcc, ppu-as, and ppu-ld as separate executables.
The short way: Execute all the executables simultaneously with ppu-gcc.
The right way: Execute the make command, which executes commands listed in a makefile.

Most of this series will use make to build applications, but the long way is the most instructive and is the subject of this discussion.

(Note: On a Cell-based system, the GCC executables are located in /usr/bin. On an x86-based system, the GCC executables are placed in /opt/cell/toolchain.)

Figure 3.1 below depicts the four steps of the PPU build process. For each operation, the text on the left shows the command to be executed, and the text on the right explains what the command accomplishes.

Figure 3.1 The PPU build process

Let's look more closely at the stages that form the development process: preprocessing, compiling, assembling, and linking.
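For reference, here is the whole four-step sequence in one place, as it applies to the example project; each command is examined in the subsections that follow, and the ppu-ld arguments are abbreviated here (the complete link command appears later):

ppu-cpp -Ihead_dir a.c -o a.i     (preprocess: expand the #include and #define directives)
ppu-gcc -S a.i -o a.s             (compile: translate the C code into PPU assembly)
ppu-as a.s -o a.o                 (assemble: produce an ELF object file)
ppu-ld a.o ... -o a               (link: combine objects and libraries into the executable a)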

The PPU Preprocessor, ppu-cpp
Preprocessing is the first and simplest of the four steps. At this stage, only the lines of code starting with the pound sign (#) matter. The statements on these lines are called directives, and the most common C directives are #define and #include.

When the preprocessor encounters #define followed by an identifier and replacement text, it substitutes the replacement text wherever the identifier is found. For example, if the directive is

#define NUM_ROWS 64

the preprocessor will substitute 64 wherever NUM_ROWS appears in the code.
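For instance, with that directive in place, a declaration such as the following (a made-up snippet, not part of the example project) is rewritten before compilation ever begins:

int matrix[NUM_ROWS];     /* the compiler actually sees: int matrix[64]; */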

The most common directive is #include. When this directive precedes the name of a header file, the preprocessor inserts the contents of the header file into the source code. To see how this works, change to the directory containing a.c and execute the following command:

ppu-cpp -Ihead_dir a.c -o a.i

ppu-cpp is the C preprocessor for PPU code. The -o option tells it to place its output in a file called a.i. This file contains the original source code of a.c and the contents of stdio.h, x.h, and y.h.

There are three #include directives in a.c. The first surrounds the header name, stdio.h, in angle brackets, <>. This identifies stdio.h as a system header file, and by default, ppu-cpp will look through /usr/local/include, /usr/include, and /usr/lib/gcc/ppu/x.y.z/include to find it.

The x.h and y.h headers are placed inside double quotes, so by default, ppu-cpp searches only the current directory. If ppu-cpp is called with -I followed by a directory name, that directory will be included in the search. In this case, ppu-cpp is executed with -Ihead_dir, so the preprocessor will find x.h and y.h in the local head_dir directory.

The build process usually removes any files created during this stage, so you probably won't see *.i/*.ii files in your day-to-day builds. But when you encounter bugs related to headers and #define macros, you may find it helpful to look through the preprocessor results.

The PPU Compiler, ppu-gcc
After ppu-cpp finishes preprocessing, ppu-gcc compiles the result. Code compilation is a complex subject and lies beyond the scope of this series, but put simply, the compile operation analyzes the structure of the source code and translates its high-level, machine-independent instructions into low-level, processor-specific instructions.

These instructions are part of the processor's assembly language. To see what the PPU's assembly language looks like, enter the following command:

ppu-gcc -S a.i -o a.s

The -S option tells ppu-gcc to compile the code in a.i and perform no further steps. The -o option tells ppu-gcc to place its assembly output in a.s. If the compiler finds errors in the code structure, it will not produce a.s, but will direct error messages to the console.

If you look at the content of a.s, you'll see a series of barely readable instructions like ld and std, followed by numbers and punctuation. In my book, I explain how to write assembly for the SPUs, but the rest of this series is only concerned with coding in C/C++.

Configurability is one of GCC's chief advantages. There are many ways to tweak and constrain ppu-gcc's operation, and Table 3.1 below lists 12 of the most popular options. Most are identified by a hyphen and a letter, such as -S in the preceding example. You can see the full list of options by running man ppu-gcc.

Table 3.1 Common Compile Options for ppu-gcc

Debugging is the topic of the next part in this series, but you should know that -g tells the compiler to insert debug information into the compiled result. Further, it ensures that each line of code is compiled separately. This way, you can step through the application and see the effect of each individual line of code.

When you optimize compilation with -On, you not only remove the debug information, you also tell the compiler to rearrange statements to improve performance and reduce code size. The optimization level, n, ranges from 0 to 3. The default setting is -O0, which performs no optimization at all. Higher-level optimization tasks include the following:

* -O or -O1: Merge identical constants, attempt to remove branches, optimize jumping
* -O2: Align loops, functions, and variables, remove null-pointer checks, reorder instructions
* -O3: Inline simple functions, rename registers, parse all source before compiling code

Each optimization level performs all the tasks of lower levels. Greater optimization produces faster, smaller executables, but takes more time. An IBM engineer once told me that his team uses -g for applications they intend to debug and -O3 for everything else.
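Following that advice, the example project could be built in two flavors; the output names here are arbitrary:

ppu-gcc -g -Ihead_dir a.c -o a_debug     (debug build: step through the code line by line)
ppu-gcc -O3 -Ihead_dir a.c -o a          (fully optimized build for everything else)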

The -std=standard option is useful if your code needs to meet a specific code standard. Example values include ansi, c99, and c++98. The gnu89 standard is used by default for C code, and gnu++98 is used for C++. You can suppress compiler warnings with -w, but it's better to use -Wall, which tells ppu-gcc to generate a warning for any questionable aspect of code.

By default, ppu-gcc doesn't just compile; it calls all the executables in the build process, from ppu-cpp to ppu-ld. In this example, the -S option tells ppu-gcc to compile, but not assemble, the code in a.i. The -c option compiles and assembles the code, but doesn't call the linker. The -v option tells ppu-gcc to list all the commands it executes as it runs.

The last two compiler options are specific to PowerPC devices, which include the PPU (PowerPC Processor Unit). The PPU supports AltiVec instructions for vector processing, and ppu-gcc has built-in functions for dealing with these instructions. The -maltivec option enables these built-in functions and -mno-altivec disables them.
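Using the option is no different from using any other flag; for example, a vector-enabled build of the example project would look like the following (shown only to illustrate the syntax, since a.c contains no vector code):

ppu-gcc -maltivec -Ihead_dir a.c -o a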

The PPU Assembler, ppu-as
After the high-level code is converted into assembly instructions, the assembler translates the textual assembly code into binary machine instructions. The assembler output is placed in an object file, *.o. These object files are formatted according to the ELF (Executable and Linking Format). To create an object file from a.s, enter the following:

ppu-as a.s -o a.o

It's much simpler to assemble code than compile it, and there are fewer options available to configure the assembly. There are no optimization levels, although -o still identifies the output file.

If you enter man ppu-as, you'll see that most of the options deal with low-level details like bit ordering and instruction set extensions. None of the example code in this series configures or constrains the operation of ppu-as.

The PPU Linker, ppu-ld
The final stage of the build process is linking, and though you'll rarely call the linker directly, it's important to know what it does. ppu-ld performs two main tasks: It searches for the object code needed to construct the output file, and it either links the objects together or makes sure they can be linked together during execution.

In the example project, ppu-ld can't create an executable with a.o alone. To see why, enter nm a.o on the command line. This command lists the symbols in a.o, and the output will look like the following:

0000     D main
              U printf
0000     D x
0004     D y

The U next to printf stands for Undefined, and if you attempt to create an executable with

ppu-ld a.o -o a

you'll receive an error because of the undefined printf reference. You'll also receive a warning because ppu-ld can't find the symbol (_start) that identifies where the application should start in memory.

To handle these problems, three steps need to be performed:

1. Link against the C library. printf is defined in the C library, libc.so, so this library must be included in the link. To do this, use -l followed by the library's abbreviated name. The abbreviated name is formed by removing lib from the start of the library name and the suffix from the end. The abbreviated name of libc.so is just c, so the required option is -lc.

2. Identify the dynamic linker. libc.so is a dynamic library, which means its functions are linked at runtime. The dynamic linker, /lib64/ld64.so.1, handles this operation, so it must be identified with --dynamic-linker /lib64/ld64.so.1.

3. Link initialization files. ppu-ld needs special code to launch PPU applications, and this can be found in three standard initialization files: /usr/lib64/crt1.o, /usr/lib64/crti.o, and /usr/lib64/crtn.o.

Now the executable can be created with the following link command:

ppu-ld a.o /usr/lib64/crt1.o /usr/lib64/crti.o /usr/lib64/crtn.o \
--dynamic-linker /lib64/ld64.so.1 -lc -o a

(Note: The backslash (\) makes it possible to enter a single command across multiple lines.)

To run the executable, enter ./a at the command line. This displays the values of the x and y variables, declared in x.h and y.h, respectively.

By default, ppu-ld searches for libraries in /lib64, /usr/lib64, /usr/local/lib64, and /usr/powerpc-64/lib64. It looks for shared libraries (*.so) first and static libraries (*.a) second. To add another location to its search path, use -L followed by the directory name. You can also name search directories with the environment variable LD_LIBRARY_PATH.
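For example, if a needed library lived outside those directories, either of the following would make it visible; the directory and library names below are hypothetical:

ppu-ld ... -L/opt/mylibs -lmylib -o a
export LD_LIBRARY_PATH=/opt/mylibs:$LD_LIBRARY_PATH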

Much of this series focuses on the SDK libraries, so it's important to know how to identify them for the linker.

Table 3.2 below lists a portion of the options available for ppu-ld. To see the full list, enter man ppu-ld at the command line.

Table 3.2 Common Link Options for ppu-ld

The Short Way: Building PPU Applications with ppu-gcc
In the real world, developers don't call ppu-cpp, ppu-gcc, ppu-as, and ppu-ld separately for each build. It's much easier to let ppu-gcc manage the entire process by itself. For example, the code in ppu_project can be preprocessed, compiled, assembled, and linked with a single statement:

ppu-gcc -Ihead_dir a.c -o a

This command produces the same result as the four commands in Figure 3.1 earlier, but gets rid of the intermediate files (a.i, a.s, a.o). Also, you don't have to specify the initialization object files (crt1.o, crti.o, and crtn.o) or the C library (libc.so).

This is because ppu-gcc already knows the basic settings needed for C/C++ applications. Clearly, this is much more convenient than performing each step of the build separately.

But a complication arises: How do you set options for other tools when you're only running ppu-gcc? For example, how can you make sure an option is directed to the linker but not the compiler?

The answer is simple: Prefix assembler options with -Wa, and linker options with -Wl, joining the prefix and the option with a comma. For example, to get the linker version, enter the following:

ppu-gcc -Ihead_dir -Wl,-v a.c -o a

This directs the -v option to ppu-ld. If the -Wl, prefix is removed, ppu-gcc receives the -v option and prints every step of its build process in addition to the linker version. It's much more convenient to enter one command rather than four, but even the ppu-gcc command can be a burden.

If an application needs multiple libraries and header files from multiple locations, entering the entire ppu-gcc command for each build will quickly become tiresome.

For this reason, developers regularly use make, which executes build commands stored in a special file called a makefile. Makefiles will be described shortly, but first you need to understand how applications are built for the Cell's Synergistic Processor Units, or SPUs.

Building Apps for the Synergistic Processor Unit (SPU)
In addition to the PPU tools, the SDK provides a set of similarly named tools for the SPU. These are spu-cpp, spu-gcc, spu-as, and spu-ld. Change to the spu_project directory in Chapter3 online, and you'll find the same files as were in ppu_project.

There's more to the similarity than just the names; the SPU tools function just like their PPU counterparts. To see what I mean, preprocess a.c with

spu-cpp -Ihead_dir a.c -o a.i

Then compile the result with

spu-gcc -S a.i -o a.s

and assemble the code with

spu-as a.s -o a.o

The SPU link operation is slightly different from that for the PPU. First, crt1.o, crti.o, and crtn.o are in /usr/spu/lib rather than /usr/lib64. Second, the link requires two libraries:

libc.a (-lc) and libgloss.a (-lgloss), both in /usr/spu/lib.

These libraries are mutually dependent, so their flags, -lc and -lgloss, must be surrounded by --start-group and --end-group. The complete link command is

spu-ld a.o /usr/spu/lib/crt1.o /usr/spu/lib/crti.o /usr/spu/lib/crtn.o \
    -o a -L/usr/spu/lib --start-group -lc -lgloss --end-group

The entire build can also be performed with a single call to spu-gcc:

spu-gcc -Ihead_dir a.c -o a

If you enter this command, you'll see an executable that looks exactly like the PPU executable. But the two files are really quite different. The SPU application doesn't really execute independently; the PPU starts the SPU, sends the application to the SPU, receives the SPU's output, and terminates the SPU's operation.

Make and Makefiles
The basic concept behind make and makefiles is simple. The make command looks for a file called Makefile in the current directory. If Makefile exists, make reads its commands and executes them. This provides many important advantages over entering commands on the command line:

* A build command needs to be typed only once (inside the makefile).

* Once the makefile is created, users don't need to think about how the application is built.

* Build commands can be modified and extended with small changes to the makefile instead of retyping the entire command.

* Makefiles can be generalized to build different types of applications in different languages and environments, and can perform nonbuild activities such as installing and archiving.

* Makefiles can be organized in a hierarchy in which a master makefile contains all possible build commands and dependent makefiles specify which commands should be run.

Let's start with a demonstration. Log on to the Cell system and install the netpbm image manipulation library:

yum install netpbm netpbm-devel

Next, go to the /opt/cell/sdk/src directory and look at the group of compressed TAR (tape archive) files. Decompress the archives with

cat *.tar | tar xvi

A series of directories will be created, each containing example code. From within the /opt/cell/sdk/src directory, execute

make

When make starts, it finds Makefile in the current directory and executes its commands. In this case, Makefile does little except call the commands in the master file make.footer, located in /opt/cell/sdk/buildutils. All the makefiles in the SDK rely on make.footer, so it's a good idea to glance at its contents.

If the content of make.footer looks familiar, you can skip the rest of this section. But if you've never seen anything like make.footer before, pay close attention; all the example code in this series requires a solid understanding of make and makefiles. Besides, once you start creating makefiles, you'll never go back to the command line.

This subsection presents makefiles as they should be presented: from the simple to the complex. Simple makefiles are good for specific builds, but as you incorporate more advanced features, your makefiles will become more flexible and better suited for general-purpose development.

Anatomy of a Makefile
Makefiles differ widely depending on the writer and purpose, but most consist of four types of statements:

Dependency lines: Lines that identify a file to be created (target) and the files needed for its creation (dependencies)
Shell lines: Lines that contain the commands that build a target from its dependencies
Variable declarations: Text-substitution statements that function like #define directives in C/C++
Comments: Lines that start with # and provide additional information

This subsection describes each of these statements and then presents an example makefile that incorporates all of them.

Makefile Dependency Lines
When make examines the content of a makefile, it looks for two pieces of information: the name of the file it should build and the names of the files needed to build it. The file to be built is called the target, and the files needed to build the target are called dependencies.

A makefile provides this information with dependency lines. A dependency line contains the target name, a colon, and the names of dependencies separated by spaces. Its basic syntax is given by

target: dependency1 dependency2 …

For example, if you want make to build an application called app using source files src1.c, src2.c, and src3.c, the dependency line is

app: src1.c src2.c src3.c

If the target file already exists, make checks to see when the target and dependency files were last modified. If one of the dependencies is more recent than the target, make rebuilds the target. If the target is up-to-date, make takes no action.

When make is called with no arguments, it processes the first dependency line in the makefile. But if make is called with the name of a target, as in the command make target_name, make searches for a dependency line whose target is target_name.

If a dependency can't be found, make searches for a rule that builds the missing dependency. This way, dependency lines can be chained together and processed recursively.

For example, the previous section showed how a.i is created from a.c, how a.s is created from a.i, and how a.o is created from a.s. These relationships are identified with the following dependency lines:

a.o: a.s
a.s: a.i
a.i: a.c

make attempts to process the first line, but when it can't find a.s, it searches for a line whose target is a.s. The second line has a.s as a target, so make looks for its dependency a.i.

a.i isn't available either, so make tries to process the third line, in which a.i is a target. When make finds a.c, it builds a.i and uses a.i to build a.s. Finally, make uses a.s to build the original target, a.o.

Target and dependency files can be of any type: source code or object code, text or binary. All that matters is that make knows what to build and what files it needs to build it. But dependency lines don't specify how the target should be built. For this, you need to add shell lines.

Shell Lines
Shell lines tell make what steps to perform when building the target identified in the preceding dependency line. Shell lines contain the same type of commands as those you'd enter on a command line. In a makefile, shell lines follow dependency lines, and each shell line must start with a tab. The syntax is given by

target: dependency1 dependency2 …
    command1
    command2

The combination of a dependency line and its following shell lines is called a rule. A complete rule tells make what target to build, what files are necessary, and what commands must be processed to build the target.

As a simple example, the previous section explained how to create the executable a with a single ppu-gcc command. In a makefile, this would be identified with the following rule:

a: a.c
    ppu-gcc -Ihead_dir a.c -o a

Each rule can have multiple shell lines, and the commands don't have to take part in building the target. For example, the following rule tells make to print the working directory (pwd) before executing ppu-gcc and list the contents of the current directory (ls) afterward.

a: a.c
    pwd
    ppu-gcc -Ihead_dir a.c -o a
    ls

make processes shell lines from top to bottom. By default, it prints each shell line as it processes it.

Before leaving this topic, one point needs to be emphasized: Start each shell line with a tab, not spaces. If you use anything other than a single tab, make will give you an incomprehensible error like the following:

*** missing separator. Stop.

You deserve better, so precede each shell command with one tab.

Makefile Variables and Comments
Variables and comments make it easier to modify and read makefiles. A makefile variable is an identifier that represents text, and these identifiers commonly consist of capital letters.

Each variable declaration has the form X=Y, and wherever make encounters $(X) in the makefile, it replaces the reference with its corresponding text, Y.

Let's say your makefile contains the following rules:

long_file_name: long_file_name.c other_file_name.o
    ppu-gcc long_file_name.c other_file_name.o -o long_file_name

other_file_name.o: other_file_name.c
    ppu-gcc -c other_file_name.c -o other_file_name.o

You can make this more readable by replacing each occurrence of long_file_name with $(LONG) and each occurrence of other_file_name with $(OTHER). This is done using variables, as shown in the following lines:

LONG=long_file_name
OTHER=other_file_name

$(LONG): $(LONG).c $(OTHER).o
    ppu-gcc $(LONG).c $(OTHER).o -o $(LONG)
$(OTHER).o: $(OTHER).c
    ppu-gcc -c $(OTHER).c -o $(OTHER).o

This variable usage does more than just increase readability. If a file's name changes, you don't have to modify every occurrence of its name. You just need to alter the variable declaration, and the modification will propagate throughout the file.

Makefile comments are even simpler to understand than variables. Each comment starts with a #, and the comment continues until the end of the line. This works like the C++ comment marker //. The following lines show how comments work:

CFLAGS=-O3 -Wall -v        # Define the build options
PCC=ppu-gcc                # Identify the build tool

# Build the output executable from input.c
output: input.c
    $(PCC) $(CFLAGS) input.c -o output

Makefile syntax can be hard to read and easy to forget, so it's a good idea to insert comments regularly. This is particularly important for large projects that require a hierarchy of makefiles.

A Simple Makefile Example
Now that you have a basic understanding of how makefiles are written, it's time for a simple example. The Chapter3 directory available in the sample code download has a make_basic folder that holds the same source files as were in the ppu_project folder. It also contains the makefile presented in Listing 3.4 below.

Listing 3.4 Basic Makefile: Makefile

When you execute make, it calls ppu-gcc and spu-gcc and builds two executables: ppu_a and spu_a. Most of this makefile is straightforward, and it uses the same rules, macros, and comments as discussed earlier. But the first rule may seem odd: The target, all, isn't a file. Even stranger, the first rule has no shell commands. This is because all is a phony target. Phony targets and other makefile aspects are discussed next.
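The listing's contents aren't reproduced here, but judging from this description and the dependency line quoted later under Phony Targets (all: ppu_$(FILE) spu_$(FILE)), it looks roughly like the following sketch; treat the variable name and options as assumptions rather than the book's exact text:

# Name of the source file to build for both processors
FILE=a

# Phony target: build both executables with one call to make
all: ppu_$(FILE) spu_$(FILE)

# Build the PPU executable (remember that shell lines start with a tab)
ppu_$(FILE): $(FILE).c
	ppu-gcc -Ihead_dir $(FILE).c -o ppu_$(FILE)

# Build the SPU executable
spu_$(FILE): $(FILE).c
	spu-gcc -Ihead_dir $(FILE).c -o spu_$(FILE)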

Advanced Makefile Development
Variables make life easier, but there are three other features that make constructing makefiles even more convenient:

Phony targets: Targets that don't represent actual files
Automatic variables: Makefile variables with predefined meanings
Pattern rules: Rules for building files based on filename patterns

What follows is an explanation of each of these techniques, concluding with an advanced makefile example. Once you understand how these features work, you'll have no problem grasping the makefiles in the example code.

Phony Targets
A phony target is a target that doesn't represent an actual file. Phony targets are generally used in two situations:

* To build multiple, unrelated targets with a single call to make
* To execute commands that don't involve building a real target

In the first usage, the phony target has multiple dependencies but no shell lines. In the second usage, the phony target has one or more shell lines, but no dependencies. The makefile in Listing 3.4 earlier contains the following dependency line:

all: ppu_$(FILE) spu_$(FILE)

make won't find either of the dependencies at first, so it has to build both files as targets.

With dependency lines like this, one call to make can build multiple independent targets. all is the common name for this kind of phony target, but the name isn't important; what's important is that the phony target is the first target in the makefile. make is usually used to perform build tasks, but phony targets make it easy to execute unrelated commands.

These targets are commonly used to remove intermediate files generated during the build process. The following rule shows how this works:

clean:
    echo Remove object/assembly files
    rm *.o *.s

When make clean is executed, make will process both shell lines and remove object and assembly files in the current directory. The clean target has no dependencies because it's not meant to be built.

In this case, a problem arises if make finds an existing file called clean; it may decide clean is up to date and not execute the target's associated commands. For this reason, it's a good idea to formally identify phony targets as phony. This is done by making them dependencies of a target called .PHONY. This is best explained with an example:

.PHONY: clean

clean:
    echo Remove object/assembly files
    rm *.o *.s

The first rule tells make that clean is a phony target. When make clean is called, make will process the shell lines without checking to see whether a file called clean exists.

Automatic Variables
When you write shell lines in a makefile, you can use predefined variables whose text depends on the rule's target and dependencies. Table 3.3 below lists each of these variables and the information they hold.

Table 3.3 Makefile automatic variables
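The table's contents aren't reproduced here, but the automatic variables provided by GNU make include the following; only the first two appear in this series' examples:

$@    the name of the rule's target
$^    the names of all the dependencies, separated by spaces
$<    the name of the first dependency
$?    the names of the dependencies that are newer than the target
$*    the stem matched by % in a pattern rule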

Of these, the $@ and $^ variables are the most commonly used because they take away the need to rewrite the names of targets and dependency files. For example, if a makefile rule is given as:

foo: foo.c bar.o baz.o
           $(CC) foo.c bar.o baz.o -o foo

it can be replaced by

foo: foo.c bar.o baz.o
            $(CC) $^ -o $@

which is harder to read but easier to type. It also makes the makefile more portable, especially if the target and dependency names are also variables. This construction is used throughout the makefiles in the example code.

The text representation of an automatic variable is recomputed for each rule processed, so the value of $@ changes from rule to rule. Automatic variables can only be used in the shell lines, not in the dependency lines above them.

Pattern Rules and Built-In Rules
Pattern rules are like regular rules, but the target is identified by a pattern, not a specific file. Makefile patterns use % as a wildcard, usually followed by a file suffix.

For example, the pattern %.c refers to any file with the .c suffix. The following pattern rule compiles all C source files into object files with the same name (but different suffixes):

%.o: %.c
    $(CC) -c $^ -o $@

This pattern is used so frequently that you don't need to enter it. That is, make knows that object files are created by compiling source files (.c, .cpp) with similar names.

For example, an executable named app depends on foo.o and bar.o, and these object files are compiled from foo.c and bar.c. The corresponding rule might look like the following:

OBJS=foo.o bar.o

app: $(OBJS)
               $(CC) $^ -o $@

Notice that this rule doesn't tell the compiler how to build foo.o and bar.o. It doesn't have to. make will use the default compiler, named by the CC variable, to compile the source files in the current directory. To disable built-in rules, run make with the -r option.
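For the curious, the built-in compile rule behaves roughly like the following pattern rule (a simplification of GNU make's actual definition), which is why setting CC, and optionally CFLAGS, is enough to control how object files are produced:

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@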

An Advanced Makefile
The Chapter3 directory available for download online has a folder called make_adv, which contains the same files as those in make_basic. The difference is that the makefile (Listing 3.5 below) in make_adv uses two phony targets, all and clean, and its shell lines are constructed with automatic variables.

Listing 3.5 Advanced Makefile: Makefile

The first real dependency line identifies the object file a.o as a dependency, but this file isn't in the current directory. Thanks to built-in pattern rules, make compiles the source file a.c into a.o automatically. make uses ppu-gcc to compile because the CC variable is set to ppu-gcc.

The clean target is identified as phony with the following dependency line:

.PHONY: clean

To remove object files after the build, enter make clean.
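As with Listing 3.4, the listing itself isn't shown here; a makefile consistent with this description might look like the sketch below. Only the PPU side is sketched, and the names are assumptions rather than the book's exact text:

# The built-in %.o: %.c rule compiles with $(CC) and $(CFLAGS)
CC=ppu-gcc
CFLAGS=-Ihead_dir
FILE=a

# Neither all nor clean names a real file
.PHONY: all clean

all: ppu_$(FILE)

# a.o has no rule of its own; the built-in pattern rule builds it from a.c
ppu_$(FILE): $(FILE).o
	$(CC) $^ -o $@

clean:
	rm *.o ppu_$(FILE)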

There are many more aspects of makefiles to explore, such as makefile directives and response files.

In fact, the Free Software Foundation (FSF) development process generates makefiles automatically with its Autotools suite (automake, autoconf, libtool). But the features presented in this brief treatment will suffice for most of the Cell development tasks you encounter.

Conclusion
Normally, learning how to program two different processors with two different sets of tools is a difficult process. But PPU and SPU applications can be built with similar GCC-based tools, so the difficulty is significantly reduced.

This part of the series has described the operation of both toolsets, and if any topic wasn't covered thoroughly, there's plenty of freely available information about GCC on the web.

The first section of this article explained each tool in the PPU development chain, from ppu-cpp to ppu-ld. This level of detail is unnecessary for practical usage, and you'll probably never use anything besides ppu-gcc.

However, compile and link errors crop up constantly, especially when you're building large applications with far-flung source files and libraries. When they do, you'll be better able to understand the problems if you understand the tools.

As the previous discussion of makefiles makes clear, the makefile language is unintuitive and unlike any other, so writing these files may seem unwieldy at first. But once your build process consists of a single word (i.e., make), you'll be glad you chose to place your commands in makefiles rather than on the command line.

Next in Part 3: Building Applications for the Cell Processor.
To read Part 1, go to "Introducing the Cell Processor."

Matthew Scarpino lives in the San Francisco Bay area and develops software to interface embedded devices. He has a master's degree in electrical engineering and has spent more than a decade in software development. His experience includes computing clusters, digital signal processors, microcontrollers and field programmable gate arrays and, of course, the Cell Processor.

This series of articles is reproduced from the book "Programming the Cell Processor", Copyright © 2009, by permission of Pearson Education, Inc. Written permission from Pearson Education, Inc. is required for all other uses.
