While connected systems have brought easier monitoring, upgrading and enhancement, they’ve also presented more vulnerable attack surfaces. Defending against such attacks can be tough.
Applying multiple levels of security—for instance, secure boot for correct image loads, domain separation, and attack surface reduction—ensures that if one level fails, others remain. While secure application code alone cannot provide sufficient protection in an unsecure environment, it does play a key role in a system designed with security in mind.
This remains true regardless of the preferred development lifecycle, whether embedded development teams increasingly embrace DevOps principles or prefer the V model traditionally associated with functional safety standards such as DO-178C for aerospace, ISO 26262 for automotive, and IEC 62304 for medical devices.
From DevOps to DevSecOps for defense in depth
The DevOps approach integrates development and operations teams and was designed specifically to respond to changing circumstances. DevOps brings clear benefits to many embedded applications. For example, new market demands can be met faster through more integrated product development, and perhaps most important, application patches and updates such as over-the-air (OTA) security for automotive software can be applied much faster than with other approaches.
DevSecOps—short for development, security, and operations—expands on DevOps with a “shift left” principle, designing and testing for security early and continuously in each software iteration.
Defense-in-depth and the process model
Traditionally, the practice for secure embedded code verification has been largely reactive. Code is developed in accordance with relatively loose guidelines and then subjected to performance, penetration, load, and functional testing to identify vulnerabilities.
A more proactive approach ensures code is secure by design. That implies a systematic development process, where the code is written in accordance with secure coding standards, is traceable to security requirements, and is tested to demonstrate compliance with those requirements as development progresses.
One interpretation of this proactive approach integrates security-related best practices into the V-model software development lifecycle that is familiar to developers in the functional safety domain. The resulting secure software development life cycle (SSDLC) represents a shift left for security-focused application developers, ensuring that vulnerabilities are designed out of the system (Figure 1).
Figure 1 The use of security test tools and techniques in the V model is based on the secure software development life cycle (SSDLC) framework. Source: LDRA
Although the context differs between DevSecOps and the SSDLC, shift left implies the same thing for both—an early and ongoing consideration of security (Figure 2).
Figure 2 The DevSecOps process model makes use of security test tools and techniques. Source: LDRA
Shift left: What it means
The concepts behind the “shift left” principle should be familiar to anyone developing safety-critical applications because for many years, functional safety standards have demanded a similar approach. Consequently, the following best practices proven in the functional safety domain apply to security-critical applications as well:
Establish requirements at the outset
Undocumented requirements lead to miscommunication on all sides and create rework, changes, bug fixes, and security vulnerabilities. To ensure smooth project development, all team members must share the same understanding of every part of the product and of its development process. Clearly defined functional and security requirements help ensure they do.
Such requirements are likely to define a complete system for V-model developers, and merely an iteration for those applying DevSecOps. However, the principle remains the same. This is not to say that software can never be used as an “intellectual modeling clay” to create a proof of concept, but the ultimate result of such experimentation should be clearly defined requirements—and production code appropriately developed to fulfill them.
Provide bidirectional traceability
Bidirectional traceability means that traceability paths are maintained both forward and backward, and automation makes them much easier to maintain in a changing project environment (Figure 3).
Figure 3 Automation makes bidirectional traceability much easier to maintain. Source: LDRA
- Forward traceability demonstrates that all requirements are reflected at each stage of the development process, including implementation and test. The consequences of changed requirements or of failed test cases can be assessed by applying impact analysis. The revised implementation can then be retested to present evidence of continued adherence to the principles of bidirectional traceability.
- Backward traceability, which highlights code that fulfills none of the specified requirements, is equally important. Otherwise, oversight, faulty logic, feature creep, and the insertion of malicious backdoor methods can introduce security vulnerabilities or errors.
Any compromise of a secure embedded application demands a changed or new requirement, and one to which an immediate response is needed—often to what source code development engineers have not touched for a long time. In such circumstances, automated traceability can isolate what is needed and enable automatic testing of only the affected functions.
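As a minimal sketch of how such automation can work, the example below models a traceability matrix as a simple table linking requirement IDs to implementing functions and verifying tests, so that a changed requirement isolates exactly the tests that must be re-run. All names here (`SEC-001`, `tests_for_requirement`, and so on) are hypothetical, not drawn from any particular traceability tool.

```c
#include <string.h>

/* Hypothetical traceability record linking a security requirement
 * to the function that implements it and to the test that verifies it. */
struct trace_link {
    const char *req_id;    /* e.g. "SEC-002" */
    const char *function;  /* implementing function */
    const char *test_case; /* verifying test */
};

static const struct trace_link matrix[] = {
    { "SEC-001", "validate_length", "test_validate_length" },
    { "SEC-002", "sanitize_input",  "test_sanitize_input"  },
    { "SEC-002", "log_rejection",   "test_log_rejection"   },
};

/* Impact analysis: given a changed requirement, collect only the
 * tests that must be re-run. Returns the number of tests found. */
int tests_for_requirement(const char *req_id, const char *out[], int max)
{
    int n = 0;
    for (size_t i = 0U; i < sizeof matrix / sizeof matrix[0]; i++) {
        if ((strcmp(matrix[i].req_id, req_id) == 0) && (n < max)) {
            out[n] = matrix[i].test_case;
            n++;
        }
    }
    return n;
}
```

In a real toolchain this mapping is generated from requirements-management data rather than hand-coded, but the principle is the same: forward traceability answers “which tests cover this requirement?” and backward traceability flags code that appears in no link at all.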
Use a secure language subset
For development in C or C++, research shows that roughly 80% of software defects stem from the incorrect usage of about 20% of the language. To address this, developers can use language subsets that improve both safety and security by disallowing problematic constructs.
Two common subsets are MISRA C and Carnegie Mellon Software Engineering Institute (SEI) CERT C, both of which help developers produce secure code. The two standards have similar goals but implement them differently.
In general, development of new code with MISRA C results in fewer coding errors because it has stricter, more decidable rules that are defined on the basis of first principles. The ability to quickly and easily analyze software with reference to MISRA C coding standards can improve code quality and consistency and reduce time to deployment. By contrast, when developers need to apply rules to code retrospectively, CERT C may be a pragmatic choice. Analyzing code against CERT C identifies common programming errors behind most software security attacks.
Applying either MISRA C or CERT C results in more secure code. The manual enforcement of such standards on a code base of any significant size is not practical, so a static analysis tool is required.
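To illustrate the kind of construct both standards target—without quoting any specific rule verbatim—the sketch below replaces a silent narrowing conversion with an explicit, range-checked one. The function name and interface are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Problematic pattern (shown only as a comment):
 *     uint8_t level = raw;    // silently truncates if raw > 255
 * MISRA C restricts implicit narrowing conversions like this, and
 * CERT C requires that out-of-range data be detected and handled. */

/* Compliant sketch: validate the range before converting. */
bool set_level(int32_t raw, uint8_t *level)
{
    if ((raw < 0) || (raw > UINT8_MAX)) {
        return false;          /* reject out-of-range input */
    }
    *level = (uint8_t)raw;     /* explicit, range-checked cast */
    return true;
}
```

A static analysis tool enforcing either standard would flag the commented-out form automatically, which is exactly why automated enforcement scales where manual review does not.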
Adhere to a security-focused process standard
In safety-critical sectors, appropriate standards frequently complement those focused on functional safety. For example, J3061 “Cybersecurity Guidebook for Cyber-Physical Vehicle Systems”—soon to be superseded by ISO/SAE 21434 “Road vehicles – Cybersecurity engineering”—complements the automotive ISO 26262 functional safety standard. Should the need arise, automated development tools can be integrated into developer workflows for security-critical systems and can accommodate functional safety demands concurrently.
Automate SAST (static) and DAST (dynamic) testing processes
Static analysis is a collective name for test regimes that involve the automated inspection of source code. By contrast, dynamic analysis involves the execution of some or all source code. The focus of such techniques on security issues results, respectively, in static application security testing (SAST) and dynamic application security testing (DAST).
There are wide variations within these groupings. For example, penetration, functional, and fuzz tests are all black-box DAST tests that do not require access to source code to fulfill their function. Black-box DAST tests complement white-box DAST tests, which include unit, integration, and system tests to reveal vulnerabilities in application source code through dynamic analysis.
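A concrete example shows how the two approaches complement each other. A SAST tool can flag the unbounded `strcpy` shown in the comment below without ever running the code; the bounds-checked replacement is then something a black-box fuzz test (DAST) can hammer with oversized inputs. The function name `copy_id` is illustrative.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* A SAST tool flags this as a potential buffer overflow at the
 * source level, with no execution environment needed:
 *     strcpy(dest, src);    // overflows dest if src is too long */

/* Bounds-checked replacement: fails safely on oversized input,
 * the kind of behavior fuzz testing would probe at run time. */
bool copy_id(char *dest, size_t dest_len, const char *src)
{
    if ((dest == NULL) || (src == NULL) || (dest_len == 0U)) {
        return false;
    }
    size_t n = strlen(src);
    if (n >= dest_len) {
        return false;          /* reject rather than truncate silently */
    }
    memcpy(dest, src, n + 1U); /* copy includes the terminator */
    return true;
}
```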
Test early and often
All the security-related tools, tests, and techniques described have a place in each life cycle model. In the V model, they are largely analogous and complementary to the processes usually associated with functional safety application development.
Requirements traceability is maintained throughout the development process in the case of the V model, and for each development iteration in the case of the DevSecOps model.
Some SAST tools are used to confirm adherence to coding standards, ensure that complexity is kept to a minimum, and check that code is maintainable. Others are used to check for security vulnerabilities but only to the extent that such checks are possible on source code without the context of an execution environment.
White-box DAST enables compiled and executed code to be tested in the development environment, or better still, on the target hardware. Code coverage facilitates confirmation that all security and other requirements are fulfilled by the code, and that all code fulfills one or more requirements. These checks can even go to the level of object code if the criticality of the system requires it.
Robustness testing can be used within the unit test environment to help demonstrate that specific functions are resilient, whether in isolation or in the context of their call tree. Traditional fuzz and penetration black-box DAST testing techniques remain valuable, but in this context are used to confirm and demonstrate the robustness of a system designed and developed with a foundation of security.
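As a small sketch of what robustness testing means in practice, consider a hypothetical saturating counter helper exercised at and beyond its boundaries, in isolation from its call tree. A unit test drives it with nominal, boundary, and overflow inputs and checks that it saturates rather than wraps.

```c
#include <stdint.h>

/* Hypothetical helper: saturating addition for a 16-bit counter.
 * A wrapping overflow here could silently reset a security-relevant
 * count; saturation keeps the result pinned at the maximum instead. */
uint16_t sat_add_u16(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + (uint32_t)b;
    return (sum > UINT16_MAX) ? (uint16_t)UINT16_MAX : (uint16_t)sum;
}
```

A robustness test suite would assert, for instance, that `sat_add_u16(UINT16_MAX, 1)` returns `UINT16_MAX` rather than wrapping to zero, alongside ordinary in-range cases.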
Paving the way for security with automated tools
Before starting the software development process, developers should have access to automated tools such as testing software that expedites development, certification, and approval processes. Using these tools to support their work through the entire lifecycle while following the best practices associated with the “shift left, test early” approach helps improve the security of connected embedded systems that continue to bring such significant changes to the industry.
>> This article was originally published on our sister site, EDN.
Mark Pitchford is a technical specialist with LDRA Software Technology. Mark has over 30 years’ experience in software development for engineering applications and has worked on many significant industrial and commercial projects in development and management, both in the UK and internationally. Since 2001, he has worked with development teams looking to achieve compliant software development in safety- and security-critical environments, working with standards such as DO-178, IEC 61508, ISO 26262, IIRA and RAMI 4.0. Mark earned his Bachelor of Science degree at Nottingham Trent University, and he has been a Chartered Engineer for over 20 years.