
Mark Pitchford


Mark Pitchford has over 30 years’ experience in software development for engineering applications. He has worked on many significant industrial and commercial projects in development and management, both in the UK and internationally. Since 2001, he has worked with development teams looking to achieve compliant software development in safety- and security-critical environments, applying standards and frameworks such as DO-178, IEC 61508, ISO 26262, IIRA and RAMI 4.0.

Mark Pitchford's contributions
    • Dozens of tools are designed to tell you whether your C or C++ code violates MISRA rules, but identifying violations flagged by an analysis tool represents only one part of the compliance process for the development team as a whole.
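To make that point concrete, here is a short, purely hypothetical C fragment (not taken from any of the articles) showing the kind of finding such a tool reports; the rule references are to MISRA C:2012:

```c
/* Hypothetical fragment for illustration only: two typical findings a
 * MISRA C:2012 checker would flag. Reporting them is the easy part;
 * deciding whether to fix, formally deviate, or redesign is where the
 * wider team process comes in.
 */
#include <stdint.h>

uint32_t scale(uint16_t raw, int8_t gain)
{
    uint32_t result = 0U;

    if (gain > 0)             /* Rule 15.6: the if body must be a compound statement */
        result = raw * gain;  /* Rule 10.4: '*' mixes essentially unsigned and signed operands */

    return result;
}
```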

    • The vulnerability of connected devices drives the need for a lifelong commitment to building and maintaining the links between requirements and application components.

    • There is an inherent limitation in predicting dynamic behaviour through static analysis. The code itself is not executing; instead, it is used as the basis for a mathematical model. That model can never represent the code precisely, because such a representation is mathematically insoluble for all but the most trivial examples. In other words, the goal of finding every defect in a nontrivial program is unreachable unless approximations are included, which by definition lead to “false positive” warnings. And if those “false positives” are then used as the basis for defensive code, they clearly impact the functionality of the code itself! There is a place for static analysis in checking adherence to coding standards, and sales of the relevant tools suggest that the static analysis of dynamic behaviour has proven worth in some circumstances. Ultimately, however, it cannot replace dynamic analysis. This article expands the argument further: http://www.eetimes.com/design/embedded/4213633/Think-static-analysis-cures-all-ills--Think-again-?cid=NL_Embedded&Ecosystem=embedded
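To illustrate why those approximations matter, here is a minimal, hypothetical C fragment (not taken from the article) in which an analyser that cannot see the caller's contract may report a defect that can never occur at run time:

```c
#include <stddef.h>
#include <stdint.h>

/* Callers guarantee count >= 1 whenever samples is non-NULL, but that
 * contract lives outside this translation unit, so a static model of
 * the code alone cannot rely on it.
 */
int32_t average(const int32_t *samples, size_t count)
{
    int64_t sum = 0;

    for (size_t i = 0U; i < count; i++)
    {
        sum += samples[i];
    }

    /* A tool unable to prove count != 0 may flag a possible division by
     * zero here - a "false positive". Adding defensive code to silence
     * the warning would alter the function's behaviour for a case that,
     * by contract, never happens.
     */
    return (int32_t)(sum / (int64_t)count);
}
```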

    • Hear, hear! I couldn’t agree more, and this was exactly the idea I was trying to get across in my article that Bernie also referenced. No question, static analysis has earned its reputation for helping programmers create quality programs. However, I find, as it seems you did, the definition of "test" implied by the article a little confusing - static analysis is just one aspect of software test, not something which contrasts with it! There are also one or two understandable preconceptions about the capability of modern dynamic test tools; for example, automatically generated test cases can dynamically test a large majority of the code without incurring the expense of user-developed test cases. I agree entirely that dynamic analysis and the static analysis of dynamic behaviour tend to find different (albeit overlapping) subsets of all the bugs within an application, and it would be easy to justify the use of both approaches for safety-critical work purely from a "due diligence" standpoint. Neither dynamic analysis, static analysis, nor the requirements traceability I also referred to “proves” robustness. But each contributes significantly, and the projects that can afford to put their code through all of these disciplines will reap the rewards in far better quality software that requires less maintenance, enjoys greater portability, and avoids costly recalls, litigation, and so on. Far too often, companies look at the immediate development bottom line rather than the long-term cost. However, many development teams in less high-risk environments haven't the luxury of unlimited funds and, under those circumstances, test managers often have to decide between techniques as well as between tools. Then it is important to consider which approaches address most of the issues - that is, the "biggest bang per buck". And at that point, the "No competition" premise in the title doesn't hold so well!
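On the point about automatically generated test cases, here is a minimal sketch (not tied to any particular tool) of the kind of harness such a tool might produce: boundary-value inputs that exercise every branch of a small function and check the results at run time.

```c
#include <assert.h>
#include <stdint.h>

/* Function under test. */
static int16_t clamp(int16_t value, int16_t lo, int16_t hi)
{
    if (value < lo) { return lo; }
    if (value > hi) { return hi; }
    return value;
}

/* Illustrative auto-generated boundary-value cases: together they
 * execute every branch of clamp() without hand-written test code.
 */
int main(void)
{
    assert(clamp(-100, 0, 50) == 0);   /* below range: first branch taken  */
    assert(clamp( 200, 0, 50) == 50);  /* above range: second branch taken */
    assert(clamp(  25, 0, 50) == 25);  /* in range: fall-through           */
    return 0;
}
```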

    • I’m glad to have raised so much debate! Just to clarify, Paul, founder of GrammaTech, seems to assert that I've said modern static analysis tools are useless. I haven’t. I'm saying that if you can’t afford to buy every type of tool, then here is what each type can offer. I looked up the word “test” and found it defined as “to ascertain (the worth, capability or endurance) of (a person or thing) by subjection to certain examinations.” We are not talking about static analysis vs. test, but looking at static analysis as one aspect of test. I do not intend to suggest that “the new generation are no different...” My point is that a combination of all the techniques offers the most comprehensive assurance that programming faults will be found; failing that, it is useful to know what is possible using each capability in isolation. I have no issue with the idea that “These techniques give them the capability to find deep semantic errors in huge code bases....”, and I stand corrected on scalability given recent developments, with the proviso that a “low level” of false positives is a relative term - they are comparatively very rare in dynamic analysis. I agree that “There is no contradiction in observing that [new generation static analysis] ... can also subsume the capabilities of the earlier generation” of STATIC ANALYSIS tools. There IS a contradiction in such a suggestion for dynamic analysis tools. As I suggested, “mutually exclusive benefits could justify the application of both techniques”; there is no “dichotomy” implied. However, when there is a limited budget, I don’t think it irresponsible to seek the biggest bang per buck. As noted, dynamic analysis does not “prove” robustness. Indeed, none of these techniques can “prove” robustness, but each can provide differing evidence. It seems to me that Paul and I share many of the same beliefs about quality programming, and the back and forth of this exchange takes away from the point I was trying to make.