Maximizing the benefits of continuous integration with simulation

The role of software is becoming more important than ever in embedded systems. It provides an opportunity for competitive differentiation and can significantly accelerate and grow revenues. However, companies are often unwilling to wait a year or two for new and updated software, so there is increased pressure to release faster. The way developers work is evolving fast, regardless of the type of code or system in development. Market observation shows that practices such as Agile development, continuous integration, continuous deployment and cross-functional teams are becoming established in embedded software development. Embedded systems development is clearly very different from developing a web or IT-based application, and target hardware does not always lend itself to adopting these new development methodologies.

Most of the challenges can be categorized as relating to access, collaboration or automation. Let's investigate the first of these: access, or the availability of the hardware required to move to continuous integration and testing. Continuous integration is often considered a useful means to achieve maximum velocity in embedded development. Although it can deliver many benefits, continuous integration can also expose hardware limitations that prevent those benefits from being fully realized.

Lifecycle
Due to market needs, the traditional lifecycle – design, platform development, application development, test and integration, deployment and maintenance – is changing. Test and integration is now a continuous, ongoing activity that happens throughout the product lifecycle: it occurs during development and continues after deployment, when the product is in the maintenance phase.


Figure 1 – Product lifecycle and continuous testing

Software is also often updated in deployed products, so it is continuously tested and integrated, which means the traditional post-development test and integration phase is shrinking and, in some cases, disappearing. It has not been entirely eliminated; rather, it has largely been absorbed into Quality Assurance (QA), which is less about performing integration and basic tests and more about ensuring that proper testing has been done before product shipment. Here there are two opposing views: either production quality is the final step of the continuous test and integration system, so that the end-goal of continuous integration is production quality; or QA is a dedicated phase, with its own activities and equipment, that comes after and is separate from the continuous test and integration system.

Concept
The basic ideas behind continuous testing and integration include: providing faster feedback to developers; building and testing the system piece by piece to avoid big-bang integration and big failures; assessing most changes; reducing the waiting time to test systems; finding errors in the smallest possible context; and finding unique errors at each level. This is often difficult to achieve because access to fragile and scarce target hardware is limited, which leads to long test latency and makes bugs more expensive to fix. When tests take a long time to run, feedback is delayed and regressions creep back in; and using only two levels of test – for example, unit and full-system – increases the likelihood that bugs at intermediate integration levels are not caught.


Figure 2 – Typical time frames of continuous practices
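The staged, fail-fast idea described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular CI product; the level names, directory layout and use of pytest are all assumptions made for the example:

```python
import subprocess

# Hypothetical test levels, ordered from smallest to largest context.
# Each entry maps a level name to the command that runs its suite.
TEST_LEVELS = [
    ("unit",      ["pytest", "tests/unit"]),
    ("subsystem", ["pytest", "tests/subsystem"]),
    ("system",    ["pytest", "tests/system"]),
]

def run_pipeline() -> bool:
    """Run each level in turn; stop at the first failing level so the
    error is reported in the smallest possible context."""
    for name, command in TEST_LEVELS:
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"FAILED at {name} level - fix before promoting the change")
            return False
        print(f"{name} level passed")
    print("all levels passed - change can be integrated")
    return True

if __name__ == "__main__":
    run_pipeline()
```

Running the smallest level first is what delivers the fast feedback: most defects are caught in seconds at the unit level, and the expensive system-level run is only reached by changes that have already survived the earlier stages.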

Target hardware can be too costly or impractical to obtain in sufficient quantity. Developers may have to wait for access to a test lab or specific equipment, wait for results to be reported back from a test lab, or lose time on setup and configuration changes once they finally get lab time, all causing a loss of velocity, thought flow and momentum. This access bottleneck also makes it impossible to scale out test and integration, and it can cause quality problems if a lack of hardware forces developers to reduce their test matrix to what is possible rather than what should be tested. The result can be exactly the late-stage integration problems that continuous integration was supposed to avoid.


Hardware and Software Labs
A typical hardware lab (Figure 3) involves: a system-under-test; a target-provisioning PC that pulls code from a build server and programs and drives the system; and a data generator/real-world model (either a standard PC or specialized hardware) — all coordinated by a test manager server.


Figure 3 – The hardware test lab
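To make these moving parts concrete, here is a hedged sketch of the coordination the test manager performs. The classes stand in for site-specific tooling and are invented for illustration; no real build-server or provisioning API is implied:

```python
class BuildServer:
    """Stand-in for a real build-server API (hypothetical)."""
    def fetch_latest_image(self) -> str:
        return "firmware-build-1234.bin"   # placeholder artifact name

class Provisioner:
    """Stand-in for the target-provisioning PC."""
    def program_target(self, image: str) -> None:
        print(f"flashing {image} onto the system-under-test")
    def drive_tests(self, suite: str) -> dict:
        print(f"running {suite} against the target")
        return {"passed": True}

class DataGenerator:
    """Stand-in for the data generator / real-world model."""
    def start(self) -> None:
        print("starting real-world stimulus")
    def stop(self) -> None:
        print("stopping stimulus, releasing the rig")

def run_hardware_test(suite: str = "smoke") -> dict:
    image = BuildServer().fetch_latest_image()
    provisioner, stimulus = Provisioner(), DataGenerator()
    provisioner.program_target(image)
    stimulus.start()
    try:
        return provisioner.drive_tests(suite)
    finally:
        stimulus.stop()   # always free the shared hardware rig

if __name__ == "__main__":
    print(run_hardware_test())
```

Note the try/finally: because the rig is a scarce shared resource, releasing it reliably is part of the orchestration logic itself.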

The hardware test lab is essential for final QA: it is used to test what is actually shipped, and a simulator cannot replace this hardware. However, it has issues: it is difficult and expensive to maintain; it is a limited resource, so integration tests run less often than they really should; complete automation is hard to achieve; it offers only a limited set of intermediate system configurations, as hardware is typically rigged for system-level or unit tests rather than the steps in between; and it requires various subsystem setups that are often hard to come by. Finally, testing is shaped by the available hardware rather than by what developers ideally want to test.

When simulating (Figure 4), the system-under-test and the real-world model are replaced with a simulation, and the provisioning and test-driving functions are replaced with simulation functions. This transforms the whole test lab into a software program, which can be run on any PC, encapsulated and shared. It turns a hardware management problem into a software management problem, without changing the tests.


Figure 4 – The simulation test lab
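One way to read "without changing the tests" is that the target becomes an interchangeable backend behind a common interface. The following sketch illustrates that idea; the interface and class names are invented for this example and do not come from any particular tool:

```python
from abc import ABC, abstractmethod

class TestTarget(ABC):
    """Common interface so the same tests run on hardware or simulation."""
    @abstractmethod
    def deploy(self, image: str) -> None: ...
    @abstractmethod
    def run_test(self, name: str) -> bool: ...

class HardwareTarget(TestTarget):
    def deploy(self, image: str) -> None:
        print(f"flashing {image} to the physical board")
    def run_test(self, name: str) -> bool:
        print(f"running {name} on hardware")
        return True

class SimulatedTarget(TestTarget):
    def deploy(self, image: str) -> None:
        print(f"loading {image} into the simulator")
    def run_test(self, name: str) -> bool:
        print(f"running {name} on a simulated board")
        return True

def run_suite(target: TestTarget, image: str, tests: list[str]) -> bool:
    """The test logic is identical; only the target object differs."""
    target.deploy(image)
    return all(target.run_test(t) for t in tests)

# Any PC can be a testing target: just construct a SimulatedTarget.
run_suite(SimulatedTarget(), "firmware.bin", ["boot", "io", "network"])
```

The test suite never needs to know which backend it is driving, which is exactly what makes the hardware problem a software problem.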

Simulation provides flexibility – with a simulated test lab, any PC can be any testing target and the test setup is just software. This means that tests can be run as soon as they are available; testing targets can be shifted immediately; test setups can easily be scaled up and down to fit each integration step; old setups are easier to maintain, as they are now just software; and testing can surge on one particular system without leaving other resources standing idle. Additionally, adding a lab setup is as easy as adding a new server, as the sketch below suggests.
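Because a simulated setup is just a process, scaling out a surge of tests becomes ordinary parallel programming. A minimal sketch, assuming each worker launches its own simulated target (the launch itself is a placeholder here; a real setup would start a simulator process instead):

```python
from concurrent.futures import ProcessPoolExecutor

def run_simulated_test(test_name: str) -> tuple[str, bool]:
    """Stand-in for spinning up one simulated target and one test."""
    print(f"spinning up a simulated target for {test_name}")
    return test_name, True   # pretend the test passed

if __name__ == "__main__":
    tests = [f"regression_{i}" for i in range(32)]

    # Every worker gets its own simulated lab; no shared hardware to queue for.
    with ProcessPoolExecutor(max_workers=8) as pool:
        for name, passed in pool.map(run_simulated_test, tests):
            print(name, "PASS" if passed else "FAIL")
```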

World Simulation
For embedded control systems it is particularly important that the simulation factors in the outside world that the system interacts with. This can be achieved by simulating the control computer and running the whole software stack on it, using the same software as on the real system (Figure 5). This software talks to simulated I/O devices using the normal drivers, and the simulated I/O devices talk to a simulation of the controlled system, that is, the system the control computer is part of, which contains simulators for the actuators and sensors. Finally, this system simulation interfaces to a model of the world in which the system operates.


Figure 5 – Building an integrated simulation system
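The layering in Figure 5 can be illustrated with a toy closed loop. Everything here, a thermostat-style controller, a simulated sensor and actuator, and a drifting-temperature world model, is invented purely to show how the layers connect:

```python
class WorldModel:
    """Outside world the system operates in: ambient temperature drifts."""
    def __init__(self):
        self.ambient = 15.0
    def step(self):
        self.ambient += 0.01   # slow drift

class SimulatedPlant:
    """Controlled system, with simulated sensor and actuator."""
    def __init__(self, world: WorldModel):
        self.world = world
        self.temperature = 15.0
        self.heater_on = False
    def step(self):
        self.world.step()
        if self.heater_on:                 # actuator effect
            self.temperature += 0.5
        else:                              # relax toward ambient
            self.temperature += 0.1 * (self.world.ambient - self.temperature)
    def read_sensor(self) -> float:        # simulated sensor
        return self.temperature
    def set_actuator(self, on: bool):      # simulated actuator
        self.heater_on = on

def control_step(plant: SimulatedPlant, setpoint: float = 21.0):
    """Stand-in for the real control software talking to I/O via its drivers."""
    plant.set_actuator(plant.read_sensor() < setpoint)

plant = SimulatedPlant(WorldModel())
for _ in range(100):
    control_step(plant)
    plant.step()
print(f"temperature after 100 steps: {plant.read_sensor():.1f}")
```

In a real setup, control_step would be the unmodified production software stack running on a simulated board, with the drivers talking to simulated I/O devices rather than to a Python function.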

In fact, there could be dozens of system and world models, all interacting with each other and with the simulated computer system. In a typical mechanical system, the system and environment simulations already exist as part of a model-based or simulation-based workflow. In a networking system, the world model tends to be the only model, as there is no mechanical system involved.

Simulating the control computer board is certainly the most difficult part of solving the access problem, as it requires simulating components such as the processor, I/O devices, storage and networking, among other functions. Once one board is simulated, however, it is easy to add multiple boards to a single simulation setup, making simulation ideal for networked, rack-mounted and distributed systems. The simulator also provides the automation features needed to build the virtual lab, which is more than just hardware; it is hardware under simulation control.
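Once one board can be simulated, composing several into a networked setup is mostly a matter of configuration. A hedged sketch of what such a virtual-lab description might look like; the schema below is invented for illustration, and a real simulator such as Simics has its own configuration mechanisms:

```python
# Hypothetical description of a multi-board virtual lab. This dict just
# shows how little it takes to add boards once one can be simulated.
VIRTUAL_LAB = {
    "network": {"name": "backplane", "kind": "ethernet"},
    "boards": [
        {"name": f"node{i}",
         "cpu": "ppc-e500",          # illustrative processor choice
         "ram_mb": 512,
         "image": "firmware.bin",
         "connect_to": "backplane"}
        for i in range(4)            # four identical rack nodes
    ],
}

def describe(lab: dict) -> None:
    for board in lab["boards"]:
        print(f'{board["name"]}: {board["cpu"]}, {board["ram_mb"]} MB, '
              f'on {board["connect_to"]}')

describe(VIRTUAL_LAB)
```

Going from four nodes to forty is a one-character change, which is precisely the kind of scaling that a physical lab cannot offer.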

Removing Limitations
Overall, simulation provides a solution that is more stable, dependable and flexible. By augmenting physical hardware with a simulation-based test lab, it is easy to equip developers with the systems they need, both for spontaneous testing and for continuous-integration-based testing. It saves time, because there is no waiting for access to hardware, and prepared setups and configurations can be used immediately. It removes limitations on lab configurations, which means developers can test what they really need to test and obtain faster feedback. It also makes it possible to increase quality and scale out testing without necessarily increasing costs. Late-stage integration bugs and emergencies can be reduced, and projects and products can be delivered on time and under budget.

Access to physical hardware has traditionally limited the effectiveness of continuous integration and testing, but technology such as Simics can enable these new software practices, because it delivers unlimited target access and a high level of flexibility when continuously testing and integrating embedded system software.
