
Device security: when 'how' becomes just as important as 'what'

August 10, 2018

Over the last few years, security in embedded devices has moved from something considered important in a few select industries to a primary consideration in just about every type of device. From the smart home to the automobile, from industrial controllers to medical equipment, securing devices from the outside world is more important today than ever before.

Engineers usually think of security as a collection of features to build into their devices. This is often accomplished simply by integrating well-known security measures (encryption, intrusion detection, etc.). More sophisticated development teams might conduct some type of software security analysis, such as a Threat and Operability Analysis (THROP), to better understand the kinds of threats the device will be subject to and to design protections against those threats.

These techniques, while necessary, do not tell the full story. No matter how much analysis we perform, how many features we add, or how thoroughly we test the device, it will still be subject to attacks that do not exist today and may only come to light at some point during the device's lifetime.

As a result, we need to make the device as difficult as possible to exploit once it’s infiltrated.

Device safety
We can take our cue from the world of device safety. The recognition of software's role in device safety has grown along with the importance of software in these devices. This has led to the formalization of software safety principles in avionics (DO-178C), general industrial (IEC 61508, as seen in Figure 1), automotive (ISO 26262), and many other industry-specific standards.


Figure 1: The IEC 61508-1 (Edition 2) basic overview of a safety development lifecycle. (Source: IEC)

What these standards have in common is that they consider not only the determination of hazards that could lead to safety issues (the what), but also the development lifecycle (the how), with an eye toward verifying that the system does precisely what it was intended to do, thus reducing or eliminating the possibility of unforeseen issues that might prevent the system from being as safe as intended. For security, our goals are no different; instead of concentrating on a safety development lifecycle, we focus on a security development lifecycle.

The security development lifecycle
This concept has not yet been widely adopted in embedded device development. For real-world examples of a secure development lifecycle, look to Microsoft's SDL or the secure development lifecycle maintained by the Open Web Application Security Project (OWASP). Both mostly target web and application developers, but the principles are transferable to any kind of embedded device development.

Another interesting resource comes from the Society of Automotive Engineers (SAE): the J-3061 publication, “Cybersecurity Guidebook for Cyber-Physical Vehicle Systems.” While not technically a standard, it is a collection of practices worth considering when creating a security-focused development lifecycle.

Establishing and following best practices
What does a security-focused development lifecycle entail? It can be broken into a collection of practices that you are either already doing or should be doing. The most important areas to consider are requirements, design, implementation, and testing (Figure 2). The most significant thing to do within each of these areas is to incorporate a verification step into the activity. (The standards mentioned in this article go into great detail on verification; for more information, consult the standard of your choice.)


Figure 2: The software security development lifecycle can be broken down into a collection of best practices. (Source: Mentor, a Siemens business)

First, when a development organization starts a project, it tries to determine what the device will do, and this is translated into the functional requirements engineers traditionally work with. When we add the goal of making the device secure, we also have to think about the environment the device will operate in: what kinds of threats will the device be subject to, and how do we prevent or minimize the damage if those threats materialize? This is determined using some kind of cybersecurity threat and risk assessment, the results of which will make clear the strategies for securing the device. These strategies are then translated into high-level security requirements, which is where the development process begins. Note that at this point the requirements may be physical (“Add anti-tamper tape to the outside of the enclosure”), software (“Support user access control”), or concern any other aspect of the device.

At this point, the requirements (functional, security, safety, etc.) are often stated at a very high level. This is perhaps the biggest issue with any type of embedded development. While we expect requirements to start out as high-level statements (“Add the XYZ feature”, “Protect the device from common DDoS attacks”, etc.), many teams take these requirements and run directly into the design and development stages without first fleshing them out (which seems to happen with software requirements more than in other disciplines). This can lead to several problems: ambiguity in the requirements, which breeds misunderstandings among the stakeholders involved, and an inconsistent understanding of the requirements between teams, resulting in mismatches between development, test, product management, and others about what a requirement actually means.

Avoiding typical pitfalls
Because of the potential pitfalls in devising a security-focused development lifecycle, there are three important concepts to keep in mind:

  • Requirements should be unambiguous. If the requirements that come directly from the threat and risk assessment are unambiguous, fantastic; if not, the high-level requirements we start with need to be decomposed and expanded upon to remove ambiguity. For example, “Protect the device from common DDoS attacks” might be decomposed into specific, testable requirements such as limiting the rate of inbound connection attempts.

  • Requirements should be reviewed by all applicable stakeholders (development, test, documentation, quality assurance, system integration, etc.), and it should be verified that each of these stakeholders has a common understanding of what the requirements mean.

  • Requirements should be approved by the product owner, especially if the decomposition of the requirements has been performed by a function other than the product owner (such as an architect or systems engineer).

Note that it doesn’t matter how the requirements are stated, whether they are traditional waterfall-type requirements or user stories, stated formally or informally – what’s important is that they are clear, commonly understood, implementable, and approved (Figure 3). The same concepts apply to the designs that are derived from this process.


Figure 3: Simplified Software Security Lifecycle based on SAE J-3061. (Source: Mentor, a Siemens business)

Implementation
As the designs are completed and well understood (or at least well underway), we can consider implementation. There are several things to consider when verifying the implementation of a system:

  • All code should be statically analyzed for correctness. Static analysis will not show that the implementation meets its requirements; it will only verify that common coding issues are not present in the implementation. Many open-source and commercial tools are available for this, although their quality varies wildly; the tool should be able to verify conformance to coding standards such as MISRA™ and SEI CERT C, which incorporate development best practices that protect against the most common classes of cybersecurity attacks. (A short sketch of the kind of defect these rules target follows this list.)

  • All code should be peer reviewed, focusing mostly on one basic question: “Does the code satisfy the requirements and designs?” The main goal is to convince other sets of eyes that the code actually does what was intended, serving as a safeguard against any ambiguity or differences of opinion that remain. Too often, code reviews focus on items like avoidance of bad programming practices and adherence to standards, which are better enforced through static analysis; review time should be spent on the most important tasks.
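
As a concrete illustration, here is a minimal C sketch of the kind of defect these standards and analyzers are designed to catch. The function names and buffer size are hypothetical; the unbounded strcpy() violates CERT C rule STR31-C (insufficient storage for string data) and would be flagged by any MISRA- or CERT-aware analyzer, while the snprintf() variant truncates defensively instead of overflowing.

    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 16  /* hypothetical fixed-size name buffer */

    /* Flagged by static analysis: if 'input' is longer than NAME_LEN - 1,
       strcpy() writes past the end of 'name' (CERT C STR31-C), the classic
       remotely exploitable buffer overflow. */
    void set_device_name_unsafe(const char *input)
    {
        char name[NAME_LEN];
        strcpy(name, input);
        printf("name: %s\n", name);
    }

    /* The repaired version: snprintf() never writes more than NAME_LEN bytes
       and always NUL-terminates, so over-length input is truncated rather
       than allowed to overflow the buffer. */
    void set_device_name(const char *input)
    {
        char name[NAME_LEN];
        snprintf(name, sizeof name, "%s", input);
        printf("name: %s\n", name);
    }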

Remember that the compiler should be considered as well: all code should compile without unexpected warnings. These steps are simply general best practices that are doubly important in a cybersecurity context.
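
As a minimal sketch of why compiler warnings matter, consider the following C fragment (the function and values are hypothetical). Built with default options it compiles silently; raising the warning level, for example with gcc -Wall -Wextra -Wconversion -Werror, turns the silent truncation below into a build error that must be addressed.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical sensor scaling: the raw count is 32 bits wide but the
       return type is only 16 bits. With -Wconversion, the compiler warns
       that the result may not fit in a uint16_t. */
    static uint16_t scale_reading(uint32_t raw)
    {
        return raw / 4;  /* warning: conversion may change value */
    }

    int main(void)
    {
        /* 400000 / 4 = 100000, which silently wraps to 34464 in 16 bits:
           exactly the kind of quiet defect an attacker can build on. */
        printf("%u\n", (unsigned)scale_reading(400000u));
        return 0;
    }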

The final step: testing
When it comes to testing the implementation, the same general guidelines that governed the implementation itself should apply. The test design should be reviewed by the relevant stakeholders to make sure that what the testers test matches what the implementers implemented and what the product manager expected. The test cases themselves should also be peer reviewed, just as the application code is.
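
To make this concrete, here is a minimal sketch of a test case traced back to a security requirement. It reuses the hypothetical set_device_name() helper from the implementation sketch, reshaped to write into a caller-supplied buffer so its behavior can be checked; the requirement wording is illustrative.

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 16

    /* The hypothetical helper from the implementation sketch, reshaped to
       write into a caller-supplied buffer so the result can be inspected. */
    static void set_device_name(char dest[NAME_LEN], const char *input)
    {
        snprintf(dest, NAME_LEN, "%s", input);
    }

    /* Traced to the illustrative requirement: "Device names longer than
       NAME_LEN - 1 characters shall be truncated, never allowed to overflow." */
    static void test_overlong_name_is_truncated(void)
    {
        char name[NAME_LEN];
        char attack[128];

        memset(attack, 'A', sizeof attack - 1);
        attack[sizeof attack - 1] = '\0';

        set_device_name(name, attack);

        assert(strlen(name) == NAME_LEN - 1);  /* truncated */
        assert(name[NAME_LEN - 1] == '\0');    /* still NUL-terminated */
    }

    int main(void)
    {
        test_overlong_name_is_truncated();
        puts("all tests passed");
        return 0;
    }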

Conclusion
While there are other aspects of product development to consider when creating a secure device, the three fundamental considerations for success are:

  • Unambiguous, reviewed, and approved requirements that include security requirements from an initial threat review.

  • Development standards that minimize known code issues that could be exploited.

  • Testing that takes the security requirements into account just as it does for product functionality.

Attention to these foundational steps will lead to products that are not only more secure but also higher in quality, something both your customers and your accountants will certainly appreciate. So when we talk about securing a connected device, how the device is created is just as important as what type of device you're planning to create.


Robert Bates is Mentor's chief safety officer, responsible for the safety, quality, and security aspects of Mentor's embedded product portfolio targeting the industrial, automotive, and aerospace markets. In this role, he works closely with customers and certification agencies to facilitate the safety certification of devices to IEC 61508, ISO 26262, and other safety standards. Before moving to Mentor, Robert was a software development director at Wind River, where he was responsible for commercial and safety-certified operating system offerings, as well as both secure and commercial hypervisors. Robert has 25 years of experience in the embedded software field, most of which has been spent delivering operating system and middleware components to device makers across markets and regions.

