Start-up companies face the dilemma of trading off time, quality and cost. Consequently, the development environment of a young development team is typically not as mature as that of an established R&D department.
However, there are some quick ways to make significant improvements in productivity by adopting simple and cost-effective tools and methods. In this article we explain what these methods are, how they can be realized under such constraints, and present a novel in-house tool.
The problem we faced
Abilis was the first company to launch a joint tuner and demodulator DVB-T chip, the AS-102. This chip was also the first chip in the market with a fully software configurable receiver.
This gave customers the opportunity to have unique features, but more importantly, it allowed us to adapt or improve the receiver with better algorithms without the need for a costly chip re-spin. Consequently, the software development effort is significantly larger than for any comparable chip.
This chip can be used in DVB-T dongles and in set-top boxes. We can add an external LNA to improve the receiver's sensitivity even further, add an IR diode for remote-control support, and add LEDs for customers wishing to indicate various receiver states visually, to simplify the customer support effort.
In summary, the software needed to be very flexible to accommodate the various customer needs. On the other hand, it is in the interest of the developing team to have one single SW build in order to keep the maintenance burden low. This poses many challenges but they can be overcome by careful design.
This paper is organized as follows. First we'll discuss the approach taken to establish a development environment. Then we introduce each entity of the environment. Finally we conclude this paper with a summary and a roadmap.
The approach to software development we took
Wikipedia provides a good overview under the topics “software development” and “software testing”. In general, we cherry-picked what suited our needs and also adopted well-known ideas.
First we analyzed what tools and methods were in place. Then we needed to understand how to measure the performance and quality of the code base. This is key: otherwise we cannot measure any improvement or determine the impact of changes. Another, parallel, activity was to document and deploy processes capturing best practice. Of course, it is important that the team members embrace, adopt and implement different working methods in order to succeed.
Software version control
It is common practice to manage software versions using a free or commercial tool. We use svn due to in-house expertise.
The selection process should also consider aspects besides the tool support, such as a tool's ability to create a graph of the historical branches. We produced two processes to complement the tool: an svn user guideline and a software release process.
The svn guideline defines how branches are to be named and what the branching structure shall be, i.e. how developers shall use svn. The SW release process describes how a release is generated, step-by-step, including the file naming and the version number convention.
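Such a naming convention can be enforced mechanically at release time. The sketch below uses a hypothetical scheme (an `as102_fw_vX.Y.Z.bin` pattern); the article does not state our actual file-name or version format.

```python
import re

# Hypothetical release-file naming convention for illustration only:
# "as102_fw_v<major>.<minor>.<patch>.bin", e.g. "as102_fw_v1.4.2.bin".
RELEASE_FILE = re.compile(r"^as102_fw_v(\d+)\.(\d+)\.(\d+)\.bin$")

def parse_release_name(name):
    """Return (major, minor, patch) for a conforming name, else None."""
    m = RELEASE_FILE.match(name)
    return tuple(int(g) for g in m.groups()) if m else None

print(parse_release_name("as102_fw_v1.4.2.bin"))  # -> (1, 4, 2)
print(parse_release_name("firmware_final2.bin"))  # -> None
```

A check like this can run as part of the release script, so a non-conforming name aborts the release rather than reaching a customer.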
Further, we also created a release-content spreadsheet, a kind of tick-box list of parameters, to support the release manager, given the large number of possible features that the product can support.
Now that we know where and how the source code is managed, we can move on to the compilation and release-generation aspects.
Again, here we followed common practice and have a script and makefiles that generate the various object, executable and binary files. However, we observed that having one single svn repository for the source code reduces the flexibility for making SW patches. This is illustrated in Figure 1 below.
|Figure 1: Managing source code: single- and multi-entity svn repository.|
The top of Figure 1 shows one single svn repository with different entities, such as firmware or DSP code. The Release is generated based on the latest revision number, #117 in this example.
For every Release, all object files are produced anew and then linked to yield the final executable. A simple patch is therefore not possible: each time, all object files have to be recreated, and only a full Release is generated and made available.
An alternative is to create an svn repository for each entity, shown at the bottom of Figure 1, and to provide interfaces where each entity can be plugged in. This creates the flexibility to provide customers with an updated version of only one specific entity.
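The per-entity structure can be sketched as follows. The entity names are illustrative (revision #117 is taken from Figure 1); the point is that a patch release moves the revision of one entity while the others stay frozen.

```python
# Illustrative sketch: each entity lives in its own repository, and a
# release records which revision of each entity was shipped. Entity
# names and revisions here are assumptions, not our actual layout.
release_r117 = {
    "firmware": 117,
    "dsp":      117,
    "driver":   117,
}

def patch_release(base, entity, new_rev):
    """Revision set for a patch that updates one entity only."""
    patched = dict(base)          # the base release stays untouched
    patched[entity] = new_rev
    return patched

print(patch_release(release_r117, "dsp", 121))
# only the "dsp" entity moves; firmware and driver stay at revision 117
```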
Bug tracking tool
Here we selected another free tool for bug tracking, Bugzilla, because it is widely used and well supported.
First we defined an observation-tracking (bug-tracking) process and documented what best suits our needs. We subsequently made two changes to the tool to adapt it to our workflow. Every week we hold a Bugzilla meeting and review the status of each open entry.
The first change was to re-design the state machine to match our process; the second, built by a summer student, is a system that generates graphs showing the software maturity over time.
Figure 2 below depicts our new state machine. The reporter (Rep) enters an observation which is set to new. The weekly observation meeting changes the state of each entry. The integration and test team (I&T) and the customer support team (FAE) ensure that every fix is tested before we consider an observation being closed.
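A state machine of this kind is essentially a transition table. In the sketch below, only "new" and "closed" are named explicitly in the text; the intermediate states are assumptions for illustration.

```python
# Hedged sketch of a Bugzilla-style state machine. Only "new" and
# "closed" are stated in the article; the other states are assumed.
TRANSITIONS = {
    "new":      {"assigned", "parked"},  # decided at the weekly meeting
    "assigned": {"fixed"},
    "fixed":    {"verified"},            # tested by I&T or FAE
    "verified": {"closed"},
    "parked":   set(),                   # enhancement / won't fix / invalid
    "closed":   set(),
}

def advance(state, new_state):
    """Move an observation to new_state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

Encoding the rules this way means an observation cannot be closed without passing through verification, which is exactly the property the I&T and FAE teams enforce.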
|Figure 2: The modified Bugzilla state machine|
It is also useful to track, over time, the number of observations in each of the states shown in the state machine. Once a week, during the night, an automatic script generates graphs that visualize this.
Figure 3 below shows at a glance the gap between closed bugs and all bugs, and whether the team is catching up with issues. It also shows the number of bugs being worked on, and whether there is a trend in the number of bugs being raised.
|Figure 3: Top-level view of the software maturity based on Bugzilla data.|
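The core of such a graphing script reduces to counting entries per state in each weekly snapshot. A minimal sketch, with field names assumed:

```python
from collections import Counter

# Minimal sketch of the weekly graphing script: count how many
# observations sit in each state in one snapshot of the database.
# The record layout ("id", "state") is an assumption.
def state_counts(entries):
    return Counter(e["state"] for e in entries)

snapshot = [
    {"id": 1, "state": "closed"},
    {"id": 2, "state": "assigned"},
    {"id": 3, "state": "new"},
    {"id": 4, "state": "closed"},
]
print(state_counts(snapshot))
# one such Counter per week, appended over time, yields the trend lines
```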
Figure 4 below shows how many observations have been parked, meaning they are either enhancements, won't-fix, or invalid. This can give clues about the maturity of the requirements. We can also see whether there is a backlog in the validation of the fixes and respond accordingly, or whether bug fixes have not yet been verified.
|Figure 4: Second view on the observation status.|
The regression test system
We developed an in-house tool, called ToT (top of tree), that tests the quality of the code base every night. The results tell us whether the latest code changes have had any detrimental impact on the system and whether the basic receiver performance is maintained.
Moreover, always having a working code base enables us to respond quickly to new demands; otherwise, several release candidates may be needed to arrive at the same level of code maturity.
The whole system is very lean, as shown in Figure 5 below. We use a DVB-T PCI signal-generation card, supporting both VHF and UHF, connected to an AS-102 evaluation board.
|Figure 5: Regression test system set-up.|
The test system is written in Tcl and comprises various scripts. The test coverage includes the API, tune and scan tests, modulation tests, a long TS-packet streaming test and the unit test for the processor unit, for both USB and SPI applications.
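The actual scripts are written in Tcl; the Python sketch below only illustrates the overall shape of such a runner: execute each test group, record pass/fail, and count a crashing test as a failure.

```python
# Structural sketch only -- the real system is a set of Tcl scripts.
# Test names and the suite contents are illustrative assumptions.
def run_nightly(tests):
    """tests: mapping name -> zero-argument callable, True on pass."""
    results = {}
    for name, test in tests.items():
        try:
            results[name] = bool(test())
        except Exception:
            results[name] = False   # a crashing test counts as a failure
    return results

suite = {
    "api":    lambda: True,
    "tune":   lambda: True,
    "stream": lambda: False,   # e.g. a long TS-streaming run that failed
}
print(run_nightly(suite))
```

The per-test results feed directly into the browser view described next, so a failure in any group is visible the following morning.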
The results of the regression test system are presented in a browser (Figure 6 below) and accessible from anywhere. We also show the revision number used for the build and the number of build errors, i.e. cases where an object could not be compiled. Clicking on a number brings up the list of all tests with all test parameters, so we can quickly spot which particular test failed.
|Figure 6: GUI snapshot with results from each daily test run: red are failures and green are passed tests.|
The cumulative sum of test failures can be shown in a separate window (Figure 7 below), where green marks streaming-test failures and red the sum of all other test failures.
|Figure 7. Cumulative sum of test failures|
We can further extract the number of compilation errors from the build data and monitor the number of compilation warnings. Figure 8 below shows the improvement achieved by making compiler warnings visible over time.
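Extracting these counts is a matter of scanning the captured build log. The gcc-style message format assumed below (`file:line: warning:` / `error:`) is an illustration; the real build output may differ.

```python
# Sketch: count warnings and errors in a captured build log.
# The gcc-style diagnostic format is an assumption for illustration.
def build_stats(log_lines):
    warnings = sum(1 for line in log_lines if ": warning:" in line)
    errors   = sum(1 for line in log_lines if ": error:" in line)
    return warnings, errors

log = [
    "demod.c:42: warning: unused variable 'gain'",
    "tuner.c:7: error: 'agc_init' undeclared",
    "main.c:10: warning: implicit declaration of function 'foo'",
]
print(build_stats(log))  # -> (2, 1)
```

Plotting one such pair per nightly build produces exactly the kind of trend shown in Figure 8.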
|Figure 8. Improvements achieved by making compiler warnings visible over time.|
Summary and future work
In this report we explained and showed how a small to medium sized company can put methods and tools in place to improve software quality over time.
We used well-known free tools such as svn and Bugzilla, adapted them to our needs, and developed a simple regression test system that shortens to 12 hours the time between a bug entering the code base and our knowing about it.
We have a second regression test system in place that allows each developer to test their code before committing it into svn and the ToT. In addition to the aforementioned tools, we also deployed processes to ensure continuity and traceability, as well as coding standards and code-review guidelines.
One last piece of advice to developers reading this: derive or develop the method that best suits the business in question, and do not adopt everything or enforce a methodology religiously. It is the circumstances that make a methodology successful, or not.
Going forward, we are considering whether to employ a static code analysis tool to identify risks, e.g. pointer issues or memory that is never de-allocated, or another tool that checks the code against the coding guideline. Also of value would be the ability to monitor the comments in the code, because in a fast-moving industry there is limited time to produce documentation.
Rudolf Tanner is Engineering Manager at Abilis Systems Sarl (www.abiliss.com) in Geneva. He has a degree in electrical engineering from the University of Applied Sciences in Muttenz, Switzerland. In 1999, he was granted a PhD degree from the University of Edinburgh, Scotland, for his work on non-linear receivers for DS-CDMA.
Stephane Martinelli is principal software engineer at Abilis Systems with a total of 15 years of experience in real-time embedded software in the areas of wireless telecommunications and smart cards. He holds a master's degree in Microtechnical Engineering from the Swiss Federal Institute of Technology in Lausanne (EPFL).
Norman Bailey is a Software Test Engineer at Abilis Systems. He has 11 years of experience as a Systems Engineer in the machine automation field, writing machine-control and vision-system software and graphical user interfaces. He holds a Bachelor of Technology degree in Electronics Technology from Northern Montana College in Havre, Montana.
Pierrick Hascoet is a software engineer at Abilis Systems with many years of experience in real-time embedded software development and Linux and Windows driver development.