
Securing the IoT: Part 2 - Secure boot as root of trust

Yann Loisel and Stephane di Vito, Maxim Integrated

January 11, 2015

Security of electronic devices is a must in today’s interconnected world. There is plenty of evidence [1] that once the security of a device on the IoT is compromised, that device, and by extension the whole IoT, can no longer be trusted. You most certainly cannot rely on a hacked device for secure data exchange, processing, or storage.

In Part 1 of this article, we focused on the identification of security risks and argued that the best security is embedded in electronic devices. We emphasized countermeasures, specifically public key-based algorithms.

In Part 2 we concentrate on secure boot, which is the “root of trust” and the cornerstone of an electronic device’s trustworthiness. Note that this discussion assumes that the reader understands the difference between a private and a public key in cryptography; refer to Part 1, or to the plentiful discussion that a Google search of the terms will turn up. Here we will demonstrate how device security can be implemented conveniently and how devices can even be updated securely in the field. The DeepCover secure microcontrollers will serve as trust-enabling example devices to secure the IoT.

Root of trust starts with trusted software
The only solution to protect against attacks that try to breach the casing (i.e., the hardware) of an electronic device is to use a microcontroller that starts executing software from an internal, immutable memory. (Note that not every microcontroller has this setup and capability.) The software stored in the microcontroller is considered inherently trusted (i.e., the root of trust) because it cannot be modified.

Such impregnable protection can be achieved using read-only memory (ROM). Alternatively, flash (EEPROM) memory internal to the microcontroller can also be used to store the root-of-trust software, if suitable security exists. Either there is a fuse mechanism to make this flash memory non-modifiable (as a ROM) once the software has been written into it, or there is a proper authentication mechanism that allows only authorized persons to write the root-of-trust software in flash memory.

If this early software can be modified without control, trust cannot be guaranteed. "Early" means that it is the first piece of software executed when the microcontroller is powered on. Hence the requirement for inherent trustworthiness of this initial software. If this software is trustworthy, then it can be used for verifying the signature of the application before relinquishing the control of the microcontroller. It is like a castle built on strong foundations.

Booting into a secure state
At power-on, the device’s microcontroller starts running the root-of-trust code from a trusted location (e.g., ROM, trusted internal flash). This code’s primary task is to start the application code after successful verification of its signature. Verification of the signature is done using a public key previously loaded into the microcontroller using Method 1: Self-certification or Method 2: Hierarchical certification (both methods are described below; see also Part 1).
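A minimal sketch in C of this verify-then-jump flow follows. The image layout is illustrative only, and ecdsa_p256_verify(), jump_to(), and halt() are hypothetical placeholders for the ROM’s signature primitive and control transfers, not a real vendor API:

/* Secure-boot sketch: verify the application signature, then run it. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define SIG_LEN 64u                     /* ECDSA P-256 (r,s) signature */

typedef struct {
    uint32_t length;                    /* application size in bytes */
    uint32_t entry;                     /* application entry point */
    uint8_t  sig[SIG_LEN];              /* signature over the image */
} image_header_t;

/* Hypothetical primitives provided by the ROM / crypto engine. */
extern bool ecdsa_p256_verify(const uint8_t pubkey[64],
                              const uint8_t *msg, size_t len,
                              const uint8_t sig[SIG_LEN]);
extern void jump_to(uint32_t entry);    /* never returns */
extern void halt(void);                 /* lock the part in a safe state */

void rom_boot(const image_header_t *hdr, const uint8_t *app,
              const uint8_t cvk[64])    /* code verification key */
{
    if (ecdsa_p256_verify(cvk, app, hdr->length, hdr->sig))
        jump_to(hdr->entry);            /* code is authentic: run it */
    halt();                             /* otherwise refuse to boot */
}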

These protected microcontrollers can still be used in their usual manner by developers for writing software, loading, and executing the software via JTAG, and debugging. A secure microcontroller does not make development harder.

Methods to guarantee a public key
Public key integrity, authenticity, and identity must all be guaranteed. This can be done in different ways:

Method 1: Self-certification. The recipient of the digital content receives the public key from the sender in person, or the sender transmits the public key in a way that leaves no doubt about the legitimate origin and ownership of the public key. Then this public key (also called a root key) can be trusted, as long as it is stored where it cannot be modified by unauthorized persons.

Method 2: Hierarchical certification. In this method a hierarchy of verifiers guarantees the origin of the public key. Public key infrastructures (PKIs) provide the definitions of such hierarchies. The physical association between a public key and the identity of the key owner is a certificate. Certificates are signed by intermediate entities (i.e., certification authorities) of the PKI hierarchy.

Assume that a person wants to have a certified public key. That person generates a pair of keys and keeps the private key in a safe, hidden place. Then in principle, a certification authority meets this person face-to-face and thoroughly verifies the identity of that person. If authenticated, the identity information (name, organization, address, etc.) is attached to the public key and the resulting document is signed by the certification authority’s private key. This permanently binds the identity information to the public key.

The resulting signature is attached to the certificate. If any one element among the identity information, the public key value, or the certificate signature is tampered with, then the certificate’s signature becomes invalid. Therefore, the information contained in that certificate cannot be trusted. The certification authority’s public key can, in turn, be certified by yet another certification authority.

Certificate validity is verified by using the same cryptographic signature verification scheme as for digital content. The signature verification guarantees the integrity and authenticity of the certificate and, consequently, of the information contained in the certificate: the public key and the identity (Figure 3).


Figure 3: A diagram of a digital signature, how it is applied and verified
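The certificate check itself can be expressed compactly. The sketch below reuses the hypothetical ecdsa_p256_verify() placeholder from the earlier boot sketch; the cert_t layout is a simplified stand-in for a real certificate format such as X.509, not a parser for one:

/* Illustrative certificate: the issuer's signature covers the
 * identity and the public key, binding the two together. */
typedef struct {
    uint8_t subject_id[32];   /* identity bound to the key */
    uint8_t pubkey[64];       /* certified public key */
    uint8_t sig[64];          /* issuer's signature over the two fields above */
} cert_t;

/* Returns true when 'issuer_key' vouches for 'cert'. */
bool cert_verify(const cert_t *cert, const uint8_t issuer_key[64])
{
    /* Tampering with the identity, the key, or the signature itself
     * makes this check fail. (Assumes the struct is packed.) */
    return ecdsa_p256_verify(issuer_key,
                             (const uint8_t *)cert,
                             sizeof cert->subject_id + sizeof cert->pubkey,
                             cert->sig);
}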

Releasing the software and signing the code
Once the software is completely finished and tested, and then audited and approved by a certification laboratory or an internal validation authority, it must be released. Releasing software for a secure boot requires one additional, important step: the binary executable code signature. Signing the code, in fact, “seals” the code (it cannot be further modified without those modifications being detected) and authenticates it (the identity of the approver is well established).

The code is sealed because, if modified, the associated signature becomes invalid: the signature verification will fail, exposing the loss of integrity. The code is also authenticated because it was signed by a unique, undisclosed private key guarded zealously by its owner, the persons in charge of signing the code.

Signing the code is an important step for certified software. Once the software has been approved by an external or internal validation authority, it cannot be changed.
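The host-side release step reduces to one signing operation over the approved binary. In this sketch, ecdsa_p256_sign() is a hypothetical placeholder for whatever signing tool or hardware security module the approver actually uses; it reuses the image_header_t type from the boot sketch:

/* Release-time sealing (runs on the approver's host, not the device). */
extern bool ecdsa_p256_sign(const uint8_t privkey[32],
                            const uint8_t *msg, size_t len,
                            uint8_t sig[64]);

bool seal_image(const uint8_t privkey[32], const uint8_t *app,
                image_header_t *hdr)
{
    /* Signing "seals" the binary: any later modification invalidates
     * the signature computed here. The private key never leaves the
     * approver's control. */
    return ecdsa_p256_sign(privkey, app, hdr->length, hdr->sig);
}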

Taking ownership of the device
Taking ownership of a device is done by personalizing the root of trust in the microcontroller, the immutable code that handles the secure boot. But we also need to load into the device the public code verification key owned by the software approver (see Part 1 of this article). Recall that this key is fundamental and must be trustworthy.

There are two schemes to personalize the root of trust. One approach uses a small key hierarchy and the other approach has no key. We will now examine both approaches.

In the first approach (Figure 4) the device’s root of trust already contains a root public key-verification key (inherently trusted as part of the root of trust). We call it the master root key (MRK). Restated simply, that key is hard coded in the root of trust and is used to verify the public code verification key (CVK); see Method 2 above. As a consequence, the public CVK must be signed prior to being loaded into the microcontroller. For this signature operation, the signing entity is the owner of the root of trust (i.e., the silicon manufacturer, who owns the private key matching the public key hardcoded in the root of trust).

Once the public CVK has been loaded and accepted by the root of trust, the root of trust's key is not used any more, except for internally rechecking the CVK at each boot to ensure that it has not been modified or corrupted, or for updating the public CVK. The CVK is now used to verify binary executable code.

This personalization step has a significant benefit: it can be executed in an insecure environment because only the correctly signed public key can be loaded into the device. Nor is this personalization step a big hurdle, as the same CVK can be deployed on all devices. Note that the CVK can be stored in external, unprotected memory since it is systematically reverified before being used.


Figure 4: Code verification key (CVK) is verified by the MRK before being used to verify the executable code and then execute it.
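Putting the two stages together, the boot path becomes MRK verifies CVK, then CVK verifies code. The sketch below combines the earlier hypothetical pieces (cert_t, cert_verify(), ecdsa_p256_verify(), image_header_t); the MRK initializer is a placeholder for the constant baked into the ROM:

/* Two-stage check: only a CVK vouched for by the MRK may verify code. */
static const uint8_t MRK[64] = { 0 /* hard-coded ROM constant (placeholder) */ };

bool boot_with_signed_cvk(const cert_t *cvk_cert,
                          const image_header_t *hdr, const uint8_t *app)
{
    /* 1. Recheck the signed CVK at every boot; it may live in
     *    external, unprotected memory. */
    if (!cert_verify(cvk_cert, MRK))
        return false;
    /* 2. The verified CVK then authenticates the application image. */
    return ecdsa_p256_verify(cvk_cert->pubkey, app, hdr->length, hdr->sig);
}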

A second, simpler approach to personalize the root of trust uses no prior key. Consequently, the public CVK has to be loaded either into internal memory that can be written only by trusted software running from that internal memory, or into non-modifiable memory such as one-time-programmable (OTP) memory or locked flash (EEPROM) in a secure environment. A trusted environment is needed to ensure that the intended public key is not replaced by a rogue key, because the root of trust cannot verify this key. This key also has to be internally protected by a checksum (CRC-32 or hash) to ensure that there is no integrity issue with that key. Alternatively, to save precious OTP space, the key can be stored in unprotected memory, but its checksum value is stored in internal OTP memory.
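The OTP-digest variant of this second approach might look like the following sketch, where sha256() is a placeholder for the device’s hash engine and otp_cvk_digest stands for the reference digest programmed once into OTP:

/* Key-less scheme: the CVK sits in ordinary memory; its digest is
 * recomputed and compared against the reference locked in OTP. */
#include <string.h>

extern void sha256(const uint8_t *msg, size_t len, uint8_t digest[32]);
extern const uint8_t otp_cvk_digest[32];   /* programmed once, then locked */

bool cvk_is_intact(const uint8_t cvk[64])
{
    uint8_t digest[32];
    sha256(cvk, 64, digest);
    /* memcmp is acceptable here: the digest is public, so a timing
     * leak reveals nothing secret. */
    return memcmp(digest, otp_cvk_digest, 32) == 0;
}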

As described here, one can imagine a multiparty signing scheme where multiple entities (e.g., the software approver and an external validation authority) have to sign the executable code. One can also imagine more complex hierarchies with different code verification keys, more intermediate key verification keys, and even multiple root keys. The ultimate process used really depends on the context and the security policies required for the application.

During this personalization step, other microcontroller options can be set permanently, for example, disabling the JTAG interface. While useful during development, JTAG must be disabled on production parts; otherwise, the root of trust can be bypassed.

Downloading app code during manufacturing
One step in the manufacturing process consists of loading the previously sealed/signed binary executable code into the device.

A public key scheme has the following unique advantages:
  • No diversification is needed.
  • No secret is involved.
  • The binary executable code is sealed by the software approver and cannot be modified.

The signed binary executable code, therefore, can be loaded anywhere. Only this code will be loadable and executable by the electronic device; other binary executable codes will be rejected.

Using public key cryptography for this process is extremely valuable as no security constraints are imposed upon the manufacturing process.

Deployment, field maintenance
Secure-boot-based devices are deployed in the field like others. However, to update the executable code in the field, you must sign it with the software approver’s private key and load it into the device by an appropriate means like a local interface or a network link.

If the software approver’s code-signing key is compromised, the associated public CVK can also be replaced in the field, provided that the new key is signed by a public key-verification key previously loaded into the device. (This key-verification key is a ‘revocation-then-update’ key.) A compromise of the root key (MRK), however, cannot be recovered from in the field, because that key cannot be replaced. Proper private key management policies mitigate this risk.
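A sketch of that revocation-then-update path, reusing the earlier hypothetical cert_t and cert_verify() and adding a hypothetical flash_write_key() helper for persisting the accepted key:

/* Field replacement of the CVK: accept the new key only if the
 * revocation-then-update key already in the device vouches for it. */
extern bool flash_write_key(const cert_t *cert);   /* hypothetical helper */

bool install_new_cvk(const cert_t *new_cvk_cert,
                     const uint8_t revocation_key[64])
{
    if (!cert_verify(new_cvk_cert, revocation_key))
        return false;                  /* reject rogue replacement keys */
    return flash_write_key(new_cvk_cert);
}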

 
