
2028 – Software quality: economically important & standardized

November 2028 is the 40th anniversary of ESD.

In retrospect, the state of software development for embedded systems in 2008 was not much to be proud of. The cost to develop and test software was high, yet almost all code had bugs lurking within. Some bugs were hugely embarrassing, such as the one in the BMW that locked its passengers inside.

Others were life-threatening, but most were simply expensive annoyances. Software was treated as somehow less critical than the hardware, so many of its risks were underestimated or simply ignored. Software development for embedded systems was still more of a craft than an engineering discipline. The dominant languages, C and C++, were appallingly ill-defined and badly implemented. The defect density of delivered software averaged around five bugs per 1,000 lines.

Attitudes to software quality had already begun to change by 2008 with the emergence of an entire class of static-analysis tools that made it easy to find software bugs. By 2010, their use was considered best practice and they were deployed widely.

However, software developers yearned for additional ways to reduce the risk of bugs. Hardware developers were able to prove that their designs met the specification using model checking, but no such tools were available for software. The use of modeling languages was a step in that direction but didn't completely solve the problem.

The pivotal event that changed the industry was the train crash in Paris in 2012 that killed 60 passengers. Investigators traced the cause back to a software bug in the control software. The software supplier was found to have used sloppy practices during development but under a loophole in EU law could not be held liable for damages.

The outrage this provoked prompted the EU to propose legislation to close the loophole and allow software developers to be sued for damages due to software failures if they couldn't prove that they had used best efforts to attain a defect density of less than five bugs per 10,000 lines. This was a tenfold decrease in typical bug density and was considered very difficult and expensive to achieve using then-current software development practices.

The proponents of the legislation argued that this was the whole point: the industry needed to make drastic changes. The embedded systems industry bellowed protestations, but to no avail. The law passed and was phased in, coming into full effect by 2020.

The passage of this legislation changed the economics of embedded systems development. Whereas previously software was thought of as a low-risk activity that had to be done as cheaply as possible, it became a source of risk that could not be easily ignored. Companies scrambled to adapt.

Thus began the rise to prominence of Correctness by Construction (C by C), the software development methodology most commonly used today in 2028. This was not a new idea, having been around in various forms since the 1980s. The UK company Praxis Critical Systems Ltd had been using such techniques since the early 1990s and had achieved very low defect densities while simultaneously being three times more productive than industry norms.

The basic principle is simple: use mathematical reasoning to determine fitness for purpose. In practice, this meant the use of static-analysis tools to prove properties of programs. Although static-analysis tools were (and still are) useful for finding bugs, they couldn't be used to supply such proofs because the uncertainties in language definitions and the use of unsafe features made it impossible to do so without generating unacceptable numbers of false positives.

The best solution would have been to design a different language from the ground up to have no inconsistencies or ambiguities, but given the industry's enormous investment in existing code and programmer expertise, this was clearly infeasible. The next best thing was to take existing popular languages and to (a) forbid use of the problematic features, and (b) extend the languages to allow programmers to express invariant properties.

The use of language subsets was already well accepted back in 2008. The Motor Industry Software Reliability Association (MISRA) had just extended their popular rules for C into C++. Annotations to express invariants were also widely accepted, as programmers had been using assert directives for years.
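To make the combination concrete, here is a minimal sketch of what such code looked like; the function and names are hypothetical, written in a MISRA-style subset of C, with fixed-width types, explicit unsigned literals, and a plain assert stating the invariant.

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical example: advance a ring-buffer index within a
       restricted subset of C. The assert documents the invariant
       that the result is always a valid buffer index. */
    #define BUF_SIZE 64U

    static uint32_t advance(uint32_t index)
    {
        uint32_t next = (index + 1U) % BUF_SIZE;
        assert(next < BUF_SIZE); /* invariant: next is in bounds */
        return next;
    }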

What was different was that it became possible for tools to automatically prove, with a high degree of confidence, that programs written in those subsets did not have run-time errors and did not violate the invariants. Because this could be done at compile time and without writing test cases, it became easier to write high-quality software.
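As a simplified illustration of the idea, the sketch below uses ACSL, the contract-annotation language of the Frama-C toolset that was already available in 2008; the function itself is hypothetical. The annotations state the preconditions, the postcondition, and a loop invariant, and a prover can check statically that the code satisfies them and is free of run-time errors such as out-of-bounds reads.

    /*@ requires len > 0;
      @ requires \valid_read(a + (0 .. len - 1));
      @ ensures \forall integer k; 0 <= k < len ==> \result >= a[k];
      @*/
    int max_of(const int a[], int len)
    {
        int best = a[0];
        /*@ loop invariant 1 <= i <= len;
          @ loop invariant \forall integer k; 0 <= k < i ==> best >= a[k];
          @ loop assigns i, best;
          @ loop variant len - i;
          @*/
        for (int i = 1; i < len; i++) {
            if (a[i] > best) {
                best = a[i];
            }
        }
        return best;
    }

Because the proof obligations are discharged at compile time, no test case is needed to gain confidence that the invariant holds on every execution path.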

In 2018, a safe dialect of C based on these principles became an ISO standard. Even though the method was grounded in sound mathematical principles, developers were largely insulated from the underlying theory, so it still felt like programming in a familiar language. Now in 2028, C by C is firmly established as the dominant paradigm for software development. Bugs are still with us today and probably always will be. However, we are much less likely to encounter them.

Paul Anderson is vice president of engineering at GrammaTech.
