By their nature many embedded systems teams are small. Many projects require only one software engineer, so a company might have a single programmer who works with engineers from other disciplines but does not deal with other software engineers on a day-to-day basis.
If you are that engineer and you ever move to an organization with a large software team, you will see some changes in work practices. That transition is an interesting journey, and it is the one we will explore in this article.
Being aware of the tools and practices discussed here will help you jump more quickly from a one-person environment into a team environment. It can also be useful at interview time, when you want to show that you understand the kind of processes in place in most large software teams.
Of course these techniques may well be used by a solo software engineer, but in my experience I regularly meet programmers who have not yet needed them, which leads to a learning curve when they move to a team environment.
The first and probably most important tool that allows programmers to work together is a source code control system. It allows multiple programmers to edit the same set of source files, and to prevent, or resolve, any situation where two engineers have modified the same file. It also manages a single central copy of the source, so there is never any debate about which is the latest definitive copy, no matter how many engineers have taken local copies.
I discuss version control in greater depth in 'Control the Source' (see the references). There are many commercial and free tools that provide version control, but the same principles apply to all of them, so experience with any one tool makes it straightforward to move to another.
A very closely related topic is release management. If you work on your own with a single copy of the source then building a release is straightforward. Once there are multiple developers involved, you need to ensure that a release is reproducible.
Different developers may have different tools installed on their PCs, and different environment variables may affect the build. Locally installed scripts might carry modifications that one developer needed for debugging, but those modifications should not be used when building the release.
The release may be checked with a manual checklist which directs the engineer to confirm the compiler version, the scripts being used, and other factors. Better still, a release script can check all files out of version control and build the release in a fresh directory, ensuring that the build is free of any residue from previous local builds.
A log file should record the version built, and also the versions of any compilers and other tools used to generate that build. Recording any generated checksums is also a useful way of ensuring that released software can be traced back to its source.
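A logging step along these lines can be a few lines of script. The following is only a sketch, with illustrative field names; the version string, compiler identification, and checksum would come from your own version control tool, compiler, and checksum utility:

```shell
# Sketch of a build-log step: record enough to trace a release
# back to its exact inputs. All field names are illustrative.
write_build_log() {
    version="$1"        # e.g. the version control tag that was built
    compiler="$2"       # e.g. output of the compiler's --version option
    checksum="$3"       # e.g. checksum of the final binary image
    printf 'version:  %s\n' "$version"
    printf 'compiler: %s\n' "$compiler"
    printf 'checksum: %s\n' "$checksum"
}
```

Redirecting the output of such a function into a log file alongside the release binaries gives a permanent record of what was built and with what.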
When it comes to automating a build in this way, the project files created by an IDE have several disadvantages. One is that cosmetic information, such as window size, and build information, such as the include file search path, are mixed into the same project file.
The second problem is that project files might not be human readable, so comparing project files from one release to the next can be difficult.
To automate a build it is advantageous to control it from the command line. This means that the build can be called from the same script that retrieved all of the files from the source control system. If your build spans several directories, or takes more than a minute or two, then it may be worth investing in make.
This tool monitors dependencies between files, so that it will only recompile files that have changed, or files that require recompilation because a header file they depend on has changed. This can greatly reduce the time required for a partial build, though when doing a full release it is recommended to rebuild all files.
The make process is controlled from a file that contains a set of rules which indicate how to perform the build. This file is called the makefile, and it contains enough information to generate the calls to the compiler. Command line arguments to make can control the options that get passed to the compiler. For example the command 'make' might do a full build while 'make debug' would create an alternative build with some test and debug flags turned on.
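A minimal makefile along these lines might look like the following sketch. The file names, compiler, and flags are illustrative assumptions, and the debug target uses a target-specific variable, which is a GNU make feature:

```make
# Sketch only: file names, compiler, and flags are illustrative.
# Note that recipe lines must begin with a tab character.
CC     = gcc
CFLAGS = -Wall -O2

OBJS = main.o pwm.o display.o

app: $(OBJS)
	$(CC) $(CFLAGS) -o app $(OBJS)

# recompile a .o whenever its .c file or the shared header changes
%.o: %.c project.h
	$(CC) $(CFLAGS) -c $<

# 'make debug' rebuilds with test and debug flags turned on (GNU make)
debug: CFLAGS += -g -DDEBUG
debug: app

clean:
	rm -f app $(OBJS)
```

Here a plain 'make' performs the normal build, while 'make debug' rebuilds the same target with the extra flags added.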
As well as deciding which files to recompile, there may be other activities that you want to manage using make. For example, on one of my graphics projects we had a number of bitmaps in Windows .bmp format.
A utility could convert each bitmap into a large C array and store it in a .c file, which could then be compiled into the embedded application. The makefile was configured to run the bitmap conversion script on any bitmaps that had changed since the last build.
This ensured that the build always had bitmaps that reflected the latest .bmp files. It also removed the chore of calling the bitmap conversion script by hand for multiple files.
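In make terms, that mechanism can be expressed as a pattern rule. The following is a sketch, assuming a hypothetical conversion utility called bmp2c:

```make
# Sketch: bmp2c is a placeholder for the project's conversion utility.
# Regenerate the C array whenever the corresponding .bmp has changed.
%.c: %.bmp
	bmp2c $< > $@
```

With a rule like this in place, make compares the timestamps of each .bmp against its generated .c file and reruns the conversion only when needed.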
To make this type of mechanism automatic, it is important to always have access to utilities from the command line. Unfortunately some tool vendors forget this when providing a GUI front end to their utilities.
Once the number of programmers increases, the need for greater separation between subsystems increases too. Grouping source files into separate directories is one way of managing that separation. Some developers also place all the .h header files into an include directory and the .c source files into a separate one, though I personally prefer the practice of keeping .c and .h files in the same directory.
Be aware that a lot of compilers and other tools do not cope well with searching through multiple directories. You will not hit any problems here with a full featured 32-bit compiler, but many 8-bit tools are geared towards very small projects and so may not be fully tested in an environment where multiple directories are used.
If the compile and link process works, you may later find that the debugger is not able to locate the source files. I worked on one project where it was necessary to copy the source from multiple directories into one temporary directory and point the debugger at that directory to allow the debugger to display the source code. The copying process was tedious so we soon built it into the makefile.
A final tip on breaking the source into multiple directories is to avoid using the space character in the name of a file or directory. Windows has encouraged spaces in directory and file names for several versions now, but many other tools, including make, may not tolerate them.
Another feature of C that tends to be used far more on bigger teams is conditional compilation. When you are sharing your code with multiple programmers, you may need some debug features for your own use, and you may wish to provide other debug features to your co-workers. So, for example, a snippet of code may be surrounded by directives such as:

#ifdef PWM_DEBUG
    // debug output for the PWM subsystem
#endif
In this case the person using your PWM code may want to turn on debugging for that portion, or not, depending on what they are doing. Different developers will have different needs, so each one will have their own set of conditional compilation flags, which will probably be set in their local build script. See 'Conditional Compilation in C' (in the references) for more details on conditional compilation.
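A small sketch of how such a flag is used follows. The PWM_DEBUG name and the pwm_ functions are hypothetical; in a real build the flag would come from the compiler command line (for example -DPWM_DEBUG) or a local build script, rather than being defined in the source as it is here for illustration:

```c
#include <stdio.h>

/* Sketch only: PWM_DEBUG and the pwm_ names are hypothetical. Here the
 * flag is defined in the source for illustration; normally it would be
 * supplied by the build, e.g. cc -DPWM_DEBUG ... */
#define PWM_DEBUG

static unsigned pwm_duty;

void pwm_set_duty(unsigned duty)
{
    pwm_duty = duty;
#ifdef PWM_DEBUG
    /* compiled in only for debug builds; disappears entirely in release */
    printf("PWM duty set to %u\n", duty);
#endif
}

unsigned pwm_get_duty(void)
{
    return pwm_duty;
}
```

Because the debug code vanishes completely when the flag is not defined, it costs nothing in the release build.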
As a project rolls forwards the team leader must maintain a growing list of small and large jobs that must be assigned to engineers on the team. Many of these jobs are bug fixes, some are enhancements, and some are statements of pieces of functionality that are not yet complete.
Small teams start out by maintaining this list in a spreadsheet. However, a spreadsheet is limited, and bug tracking tools are available that allow better task management.
A dedicated tool allows a central repository of issues to be viewed by all developers. Each issue has an owner, and a status which indicates if it is still open, closed, or perhaps deferred until a later release. When viewing the issues, an engineer might choose to only view the issues assigned to him or her, or only view issues opened after a certain date.
A well managed issues list allows the team leader to analyse on a week by week basis how many issues remain to be resolved and how many new issues have been opened. Ideally a decreasing number of open issues will indicate that the number of bugs in the system is under control.
A bug tracking tool may run as an application on the PC, or it may be web based. Some tools require a database to be installed on a central server. Once you start using a bug tracking tool, you will wonder how you ever worked without one.
Different programmers have different styles. If a large body of code is to look consistent, then a single coding standard must be applied by all programmers. Individual programmers sometimes have to put aside their preference for particular placement of the curly braces so that as a team you can produce a consistent product.
As well as controlling the appearance of the code, the coding standard can also apply design rules. For example, multiple inheritance might be forbidden in C++, or malloc might be disallowed in a safety critical C system. Having the rules written down in one central coding standards document makes it easier to educate the team about the rules.
Some typical rules are discussed by Michael Barr in 'Bug-killing standards for firmware coding'. The number of reader comments attached to the on-line copy of that article shows how the choice of rules can become a religious war, and finding a compromise within a team is part of the challenge of working with a diverse group.
A coding standard can also enforce some conventions that are widely used in the C community, but individual programmers might not be aware of them. I remember describing to one graduate programmer how the following lines are used in header files to avoid issues of recursive nested inclusion:
#ifndef MODULE_NAME_H
#define MODULE_NAME_H

// rest of header file

#endif
The graduate explained to me how he had found a much better way of solving the same problem. His solution may well have been better, but that completely misses the point. It is often necessary to conform to the conventional way of doing something, so that other people reading your code can understand it without difficulty.
At this point you might be thinking that coding standards, and working with bigger teams, might stifle your creativity, and restrict the set of solutions that you can apply to your technical challenges. Don't be too disheartened.
There is scope to break coding standard rules and apply unconventional solutions, but if you write something that is going to be more challenging for others to maintain, then be sure that the reward, in either code efficiency or design elegance, is worth the sacrifice in consistency. The purpose of a coding standard is to encourage consistency, not to stifle genius!
As a solo developer you may well have had coding standards, but on a larger team it is more important that the standard is documented, so the entire team knows the rules. It can also be advantageous to automate the checking of some of those rules. Sometimes code checking tools such as lint can help.
In other cases a simple script using a text search tool like grep can perform simple checks. For example, if the rule is that the C library function malloc() is not allowed on the project, then a script could grep each source file and report a non-conformance if the string 'malloc' appears anywhere. This sort of automation is particularly useful if the team has high turnover and it is hard to be certain that all new members are fully up to speed on the project's coding standard.
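Such a check can be a few lines of shell script. This is only a sketch, and the function name is my own invention; a naive string match like this will also flag the word in comments, which may or may not be acceptable on your project:

```shell
# check_no_malloc: hypothetical coding-standard check (a sketch).
# Reports a non-conformance for every listed source file that mentions
# the string 'malloc', and returns non-zero if any file does.
check_no_malloc() {
    result=0
    for f in "$@"; do
        if grep -q 'malloc' "$f"; then
            echo "NONCONFORMANCE: $f uses malloc"
            result=1
        fi
    done
    return $result
}
```

Calling it as 'check_no_malloc *.c' from the build script causes the build to fail as soon as a banned call creeps in.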
An Audience for Your Code
The larger the software team, the greater the number of engineers who will read, and have an opinion on, your code. Some organizations have a formal code review process, which means that others will read and critique your code. Even without code reviews the other engineers maintaining code closely related to yours will have to read and understand your work.
Many issues that might seem trivial when operating solo can become a big deal in shared code. Meaningful variable names reduce the memory workload of the reader. Someone coming fresh to your code must be able to navigate from a function call to the body of that function.
A good IDE can help here, but one is not always available. In C I like to prefix all function names with the name of the file they live in. This makes it a lot easier to track down the body of a function, and also to know which subsystem it belongs to.
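The convention looks like the following sketch, using a hypothetical display module; the function and variable names are illustrative:

```c
#include <assert.h>

/* display.c -- hypothetical module: every externally visible function
 * is prefixed with the file name, so a reader who meets
 * display_set_contrast() elsewhere in the code base knows at once to
 * look in display.c for its body. */

static unsigned display_contrast;   /* module-private state */

void display_init(void)
{
    display_contrast = 50;          /* a mid-range default */
}

void display_set_contrast(unsigned level)
{
    if (level > 100)
        level = 100;                /* clamp to the legal range */
    display_contrast = level;
}

unsigned display_get_contrast(void)
{
    return display_contrast;
}
```

A side benefit is that the prefix acts as a crude namespace, avoiding link-time name clashes between subsystems.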
More readers means that the value of any comments or documentation increases dramatically. You are no longer writing reminders for yourself, but a narrative for the next explorer of the coding landscape that you created.
I recall a friend describing two exceptionally gifted engineers on his team. One wrote high quality but highly complex code; the guy was obviously a genius, but it was a challenge for other engineers to raise their game to his level.
The comment on the second colleague was that reading his code was like reading a book, such was the level of documentation and the consideration shown for the other engineers that had to maintain his code.
It is up to you to decide which of these characters you would seek to emulate; I certainly prefer to work with the more wordy programmer. In many projects the maintainability of the code is a far more important objective than a few saved CPU cycles.
While many other programmers will be reading your code, you will also be reading and editing the code of other engineers. Always be on the lookout for new things you can learn from others. In the case where you have changed job and also changed industry, you may be trying to pick up techniques that are specific to the industry that you just entered.
When it comes to editing code originally written by others, some diplomacy is required. There is a concept called egoless programming, which holds that all of the code belongs to the team and no one should get possessive about any part of it.
In practice this quaint notion rarely applies. The person who wrote a subsystem will remain the main expert in that area, and will always be the ideal person to make any modifications.
If we remove ego from the equation, then we also remove pride in a job well done, and that reduces the engineer's motivation to ensure that his code remains useful and usable.
Having said that, there will always be cases where it makes sense for one engineer to make a change across a number of subsystems, some of which were written by others. The other engineers can see that change appear in the version control history, assuming they look for it.
I often perform the courtesy of e-mailing the engineer responsible for a file if I have made a change to it that might cause surprise. This gives the other engineer the opportunity to let me know if he spots any problems that change might cause. Using e-mail for this type of notification is not a very efficient system, but I have not come across any other tool support for passing this kind of information between engineers.
The rest is attitude
This article has introduced a few techniques that are based on tools and a few suggestions that are based on etiquette. Transitioning from working alone to being part of a team can be challenging and rewarding.
I believe all engineers should be keen to learn, and you will learn far more from the engineers working around you than you will ever learn from books (or even from articles such as this). So part of the reward of working on a team project is that you will be a better engineer by the end of it.
The second part of the reward is that you get to be part of something bigger than you could have created on your own. Becoming a team programmer is just as significant a skill as any of the technologies or programming languages that you learn along the way. Do not underestimate the scale of adjustment necessary when you move from being a solo programmer to a team player.
'Control the Source', Niall Murphy, Embedded Systems Programming, March 2004.
'Introduction to Make', Jennifer Vesperman.
'Conditional Compilation in C', Guy Lecky-Thompson.
'Bug-killing standards for firmware coding', Michael Barr, available at http://www.embedded.com/design/opensource/216200567.
'Introduction to Lint', Nigel Jones.
Niall Murphy has been designing software user interfaces for over 14 years. He is the author of Front Panel: Designing Software for Embedded User Interfaces. He welcomes feedback and can be reached at email@example.com. His web site is www.panelsoft.com.