Security needs more than checklist compliance
One way to gauge the security of an electronic system is to determine whether the product complies with specific security requirements. Often, though, that determination is treated as a checklist of security capabilities that must be incorporated to achieve compliance for a particular application. Simply adhering to checklists does not ensure security, and can actually create vulnerabilities.
Generally speaking, security is a very broad topic that means different things for different applications. Requirements and use cases can differ drastically from one application to another, so a security architecture that works well for one may not work optimally for another. This is especially true when working with general-purpose microcontrollers that are designed to support a variety of applications. A "security block" can't simply be dropped into the design and be expected to be completely effective.
Figure 1: Third-party Ethernet IP with standard interfaces
Implementing security is very different from integrating a third-party intellectual property (IP) block, such as adding Ethernet to a system-on-chip (SoC) design. The Ethernet block adheres to a specific standard and has a defined external interface such as RMII/MII. The block also has a defined internal interface, such as AXI or AHB, to connect to the system interconnect (Figure 1). Thanks to these standard interfaces, there is very little that can go wrong.
Compare this to security IP, which is typically spread across the chip. As an example, Figure 2 shows some of the IP components a secure system may have, though this is not an exhaustive list. The red lines show some of the sideband signals between various components within the SoC that are not governed by any standard protocol.
Figure 2: Security IP across a system-on-chip
Some of these non-standardized sideband signals include:
- Secure/non-secure and invasive/non-invasive signals from a processor with TrustZone capability (such as the ARM Cortex-A family), routed to the system debug controller or other central controller, if any.
- On-chip cryptography engine memory signals. The engine could have dedicated secure memory, or it may use part of system memory, with additional protection attributes, as its secure memory.
- Tamper signals from on-chip and off-chip sensors to on-chip master key storage in the battery domain (for cases where keys must be retained when the main supply is unavailable or removed).
- Dedicated signals to pass master keys from the battery domain to the crypto engine.
- Sideband signals from the secure JTAG/debug module that block system access based on various security attributes.
- A public-key hash or unique ID per part that gets passed from on-chip fuses, one-time-programmable memory, or some form of on-chip flash to different parts of the security blocks/modules.
Of course, this is just one way to architect security across the chip. Other implementations are possible, but they would still leave security largely distributed across the chip.
This lack of a standard architecture or interfaces reduces the effectiveness of compliance checklists. Checklists can certainly help define high-level requirements, and they can force the use of certain cryptographic algorithms, or of random number generators that meet certain entropy requirements, but they often do not dictate implementation. That gap can open a window to various side-channel attacks. If security is not architected correctly, a design can be vulnerable even though it still meets compliance requirements.
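As a concrete illustration of how implementation opens side channels that a checklist never sees, consider a MAC-tag comparison. A checklist can mandate the algorithm (say, AES-CMAC) without saying how the resulting tag must be compared. The sketch below (function names hypothetical) contrasts a naive compare, whose early exit leaks via timing how many leading bytes of a guess were correct, with a constant-time variant:

```c
#include <stddef.h>
#include <stdint.h>

/* Naive byte-wise compare: returns as soon as a byte differs, so its
 * execution time depends on where the first mismatch occurs. An attacker
 * who can time many attempts can recover a secret tag byte by byte. */
static int naive_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (a[i] != b[i])
            return 0;          /* early exit: timing leaks information */
    }
    return 1;
}

/* Constant-time variant: always touches every byte and accumulates the
 * differences, so execution time is independent of the data compared. */
static int ct_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];   /* no branch on secret data */
    return diff == 0;
}
```

Both functions satisfy a checklist item that reads "verify the MAC before acting on the message"; only the second resists the timing side channel.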
So does that mean compliance requirements and standards should also dictate implementation? Could they reduce side-channel attacks by doing so, making systems more secure?
Opinions may differ, but enforcing specific implementations in standards can have severe implications. Security can become even more of a challenge when there is inflexibility in how certain features get implemented. For example, the automotive SHE (Secure Hardware Extension) specification relies on the notion that the associated security keys are stored in on-chip flash. An SoC designed in a process technology that does not offer flash will violate this specific SHE requirement. Even though there may be other ways to store the keys securely, developers cannot use that technology and remain compliant.
For certain applications, tight control over implementation may provide the perception of higher security, but it can also create holes. If there is a hidden vulnerability in the mandated implementation, for instance, it gets automatically built into every design when there is no choice in how a particular feature is implemented.
Another example is the PCI (Payment Card Industry) PTS specification for point-of-sale terminals. Below is a snippet from that specification (Requirement A3):
The security of the device is not compromised by altering:
- Environmental conditions
- Operational conditions
(An example includes subjecting the device to temperatures or operating voltages outside the stated operating ranges.)
An example of environmental conditions is the device temperature range: if the temperature of the device goes beyond what it is designed for, tamper circuitry can trigger a "tamper event" and wipe out the keys. This seems a reasonable requirement, as it can help avoid cold-boot attacks.
Similarly, an example of operational conditions is the SoC voltage range specified in the datasheet. If the voltage of the coin cell that powers the tamper circuitry goes outside its defined operating range, that should trigger a tamper event. This also seems like an obvious requirement: it keeps an attacker from launching a voltage attack by running the part at a voltage where it is not guaranteed to operate properly, and then extracting the keys.
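The voltage and temperature checks above can be sketched as a simple monitor that zeroizes the master key whenever a reading leaves the datasheet window. This is a minimal illustration, not a real driver: the limits, names, and the software-only zeroization are all assumptions (a real device latches the event in battery-backed hardware and raises a tamper interrupt):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical datasheet limits -- actual values are device-specific. */
#define VCOIN_MIN_MV  2200   /* coin-cell minimum operating voltage */
#define VCOIN_MAX_MV  3600   /* coin-cell maximum operating voltage */
#define TEMP_MIN_C     -40
#define TEMP_MAX_C     105

static uint8_t master_key[32];        /* battery-backed key storage  */
static bool tamper_latched = false;

/* Wipe the key material and latch the event. Note: plain memset can be
 * optimized away by the compiler; production code needs a guaranteed
 * zeroization (volatile writes, memset_s, or a hardware wipe). */
static void tamper_event(void)
{
    memset(master_key, 0, sizeof(master_key));
    tamper_latched = true;
}

/* Called periodically, or from sensor interrupts, with raw readings. */
static void check_environment(int32_t vcoin_mv, int32_t temp_c)
{
    if (vcoin_mv < VCOIN_MIN_MV || vcoin_mv > VCOIN_MAX_MV ||
        temp_c   < TEMP_MIN_C   || temp_c   > TEMP_MAX_C)
        tamper_event();
}
```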
However, the above requirement opens up a grey area: how do clock variations get interpreted as part of "operational conditions"? One can over-clock the device as a potential attack, for instance, and yet still meet the above requirement.
It is up to the PCI lab how this requirement is treated, even though a security microcontroller may already include "no clock" detection or under/over-frequency detection to handle tampering scenarios where the external crystal is manipulated. While the device may include a detection range for under- and over-clocking, the standard does not say what that range should be. Each silicon vendor may therefore use different thresholds for under- and over-clock detection, and that may still leave a window of opportunity for an attacker to launch a sophisticated side-channel attack.
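A clock-window check of the kind just described might look like the sketch below. The nominal frequency and the ±25% margins are purely illustrative assumptions; the point is that because the standard does not pin these numbers down, everything inside a vendor's chosen window, however it is set, goes undetected:

```c
#include <stdbool.h>
#include <stdint.h>

/* Nominal crystal frequency and a hypothetical detection window.
 * The standard does not specify these margins, so each vendor picks
 * its own -- the looser the window, the wider the band in which an
 * attacker can over- or under-clock the part without detection. */
#define F_NOMINAL_HZ  24000000u
#define F_MIN_HZ      (F_NOMINAL_HZ * 3u / 4u)   /* -25% threshold */
#define F_MAX_HZ      (F_NOMINAL_HZ * 5u / 4u)   /* +25% threshold */

/* Returns true if the measured clock should trigger a tamper event.
 * A measurement of 0 covers the "no clock" (stopped crystal) case. */
static bool clock_out_of_window(uint32_t measured_hz)
{
    return measured_hz < F_MIN_HZ || measured_hz > F_MAX_HZ;
}
```

With these assumed thresholds, a crystal pushed from 24 MHz to 28 MHz still sits inside the window, which is exactly the kind of residual gap the standard leaves to vendor judgment.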
Being specific about certain features in a compliance requirement can potentially minimize side-channel attacks. If, however, that forces a specific implementation (one that is perceived as more secure) to be the only choice, the requirement may also create severe issues and adversely affect security. Standards should therefore strike a balance: enforce a particular feature, but keep the implementation flexible for developers and integrators.
Treating security as a checklist is thus a big mistake. It leads people to claim security for things that are not secure just because they seem to meet certain compliance standards. A compliance checklist may be built on top of the most common attack points for a particular application, for instance. While such a checklist is a good baseline for avoiding the most common attacks and vulnerabilities, it cannot guarantee that a system is fully secure. There are also companies that will check a box (to meet specific compliance requirements) and then look for the cheapest way to get a product out the door, opening up new vulnerabilities in the process.
Because standards are constantly evolving, and usually lag the vulnerabilities they address, one must always do a thorough, detailed vulnerability analysis for the targeted application. This should be done for every design, even variations on prior designs. People often think that reusing the security architecture from a previous-generation part will work perfectly for a new design, but newly added features can affect the underlying security architecture, because security is closely tied to how it actually gets implemented. One good example is the debug architecture: it has to adapt for every new design and may affect the security of the silicon.
Creating a secure system is always a challenge, and one must go beyond checklists to implement what is necessary rather than what is minimally required to achieve compliance. A thorough, detailed vulnerability analysis for every design can help reduce the side-channel attacks that checklists do not cover. There is no such thing as a fully secure system, of course, but one can raise the bar enough to make the cost of an attack non-viable, and that is all that is required for a sufficiently secure system.
Continue reading on Embedded's sister site, EDN: "Security needs more than checklist compliance."
The Embedded Systems Conference, EE Times, and Embedded.com are owned by UBM Canon.