Securing the IoT: Part 2 – Secure boot as root of trust

Security of electronic devices is a must in today’s interconnected world. There is plenty of evidence [1] that when the security of a device on the IoT is compromised, you must treat that device, and the IoT it connects to, with caution, even suspicion. You most certainly cannot rely on a hacked device for secure data exchange, processing, or storage.

In Part 1 of this article, we focused on the identification of security risks and argued that the best security is embedded in electronic devices. We emphasized countermeasures, specifically public key-based algorithms.

In Part 2 we concentrate on a secure boot, which is the “root of trust” and the cornerstone of an electronic device’s trustworthiness. Note that this discussion assumes that the reader understands the difference between a private and a public key in cryptography; you can refer to Part 1, or find plenty of discussion in a Google search of the terms. Here we will demonstrate how device security can be implemented conveniently and how devices can even be updated in the field. The DeepCover secure microcontrollers will serve as trust-enabling example devices to secure the IoT.

Root of trust starts with trusted software
The only solution to protect against attacks that try to breach the casing (i.e., the hardware) of an electronic device is to use a microcontroller that starts executing software from an internal, immutable memory. (Note that not every microcontroller has this setup and capability.) The software stored in the microcontroller is considered inherently trusted (i.e., the root of trust) because it cannot be modified.

Such impregnable protection can be achieved using read-only memory (ROM). Alternatively, flash (EEPROM) memory internal to the microcontroller can also be used to store the root-of-trust software, if suitable security exists. Either there is a fuse mechanism to make this flash memory non-modifiable (as a ROM) once the software has been written into it, or there is a proper authentication mechanism that allows only authorized persons to write the root-of-trust software in flash memory.

If this early software can be modified without control, trust cannot be guaranteed. “Early” means that it is the first piece of software executed when the microcontroller is powered on; hence the requirement for inherent trustworthiness of this initial software. If this software is trustworthy, then it can be used to verify the signature of the application before relinquishing control of the microcontroller. It is like a castle built on strong foundations.

Booting into a secure state
At power-on, the device’s microcontroller starts running the root-of-trust code from a trusted location (e.g., ROM, trusted internal flash). This code’s primary task is to start the application code after successful verification of its signature. Verification of the signature is done using a public key previously loaded into the microcontroller using Method 1: Self-certification or Method 2: Hierarchical certification (both methods are described below and in Part 1).
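
To make the flow concrete, here is a minimal sketch of such a boot routine. It is not the DeepCover ROM code: the image header layout, the sha256() and ecdsa_verify() helpers, and the flash address are assumptions standing in for whatever drivers and memory map a real part provides. A production root of trust would also sanity-check the header, enforce rollback protection, and lock down debug interfaces before jumping to the application.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define APP_BASE ((const uint8_t *)0x10000000u)   /* assumed application flash address */

typedef struct {
    uint32_t magic;             /* identifies a valid image header (hypothetical)  */
    uint32_t length;            /* length of the application code in bytes         */
    uint8_t  signature[64];     /* ECDSA-P256 signature over the application code  */
} image_header_t;

/* Placeholders for the device's crypto drivers or hardware accelerator. */
extern void sha256(const uint8_t *data, size_t len, uint8_t digest[32]);
extern bool ecdsa_verify(const uint8_t pubkey[64],
                         const uint8_t digest[32],
                         const uint8_t signature[64]);

extern const uint8_t CVK[64];   /* code verification key, loaded at personalization */

static void secure_boot(void)
{
    const image_header_t *hdr  = (const image_header_t *)APP_BASE;
    const uint8_t        *code = APP_BASE + sizeof(*hdr);
    uint8_t digest[32];

    sha256(code, hdr->length, digest);               /* hash the application image    */
    if (!ecdsa_verify(CVK, digest, hdr->signature))  /* check it against the CVK      */
        for (;;) { }                                 /* refuse to run unsigned code   */

    ((void (*)(void))(uintptr_t)code)();             /* signature OK: jump to the app */
}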

These protected microcontrollers can still be used in their usual manner by developers for writing software, loading, and executing the software via JTAG, and debugging. A secure microcontroller does not make development harder.

Methods to guarantee a public key
Public key integrity, authenticity, and identity must all be guaranteed. This can be done in different ways:

Method 1: Self-certification. The recipient of the digital content receives the public key from the sender in person, or the sender transmits the public key in a way that leaves no doubt about the legitimate origin and ownership of the public key. Then this public key (also called a root key) can be trusted, as long as it is stored where it cannot be modified by unauthorized persons.

Method 2: Hierarchical certification. In this method a hierarchy of verifiers guarantees the origin of the public key. Public key infrastructures (PKIs) provide the definitions of such hierarchies. The physical association between a public key and the identity of the key owner is a certificate. Certificates are signed by intermediate entities (i.e., certification authorities) of the PKI hierarchy.

Assume that a person wants to have a certified public key. That person generates a pair of keys and keeps the private key in a safe, hidden place. Then in principle, a certification authority meets this person face-to-face and thoroughly verifies the identity of that person. If authenticated, the identity information (name, organization, address, etc.) is attached to the public key and the resulting document is signed by the certification authority’s private key. This permanently binds the identity information to the public key.

The resulting signature is attached to the certificate. If any one element among the identity information, the public key value, or the certificate signature is tampered with, then the certified signature becomes invalid. Therefore, the information contained in that certificate cannot be trusted. The certification authority’s public key can, in turn, be certified by yet another certification authority.

Certificate validity is verified by using the same cryptographic signature verification scheme as for digital content. The signature verification guarantees the integrity and authenticity of the certificate and, consequently, of the information contained in the certificate: the public key and the identity (Figure 3).

Figure 3: A diagram of a digital signature, how it is applied and verified
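
As a rough illustration of Method 2, the sketch below walks a small certificate chain. It reuses the hypothetical sha256() and ecdsa_verify() helpers from the boot-flow sketch above and assumes a deliberately simplified fixed-size certificate layout; real X.509 certificates are considerably richer.

typedef struct {
    uint8_t identity[32];     /* owner identity: name, organization, etc. (fixed size here) */
    uint8_t public_key[64];   /* the certified public key                                   */
    uint8_t signature[64];    /* issuer's signature over identity + public_key              */
} cert_t;

/* chain[0] is the end-entity certificate; each certificate is signed by the next
 * one up, and the last one is signed directly by the trusted root key. */
static bool verify_chain(const cert_t *chain, size_t count,
                         const uint8_t trusted_root_key[64])
{
    for (size_t i = 0; i < count; i++) {
        const uint8_t *issuer_key =
            (i + 1 < count) ? chain[i + 1].public_key : trusted_root_key;

        uint8_t digest[32];
        sha256((const uint8_t *)&chain[i],
               sizeof(chain[i].identity) + sizeof(chain[i].public_key), digest);

        if (!ecdsa_verify(issuer_key, digest, chain[i].signature))
            return false;     /* tampered identity, key, or signature breaks the chain */
    }
    return true;
}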

Releasing the software and signing the code
Once the software is completely finished and tested, and then audited and approved by a certification laboratory or an internal validation authority, it must be released. Releasing software for a secure boot requires one additional, important step: the binary executable code signature. Signing the code, in fact, “seals” the code (it cannot be further modified without those modifications being detected) and authenticates it (the identity of the approver is well established).

The code is sealed because, if modified, the associated signature becomes invalid—the integrity check of the digital signature will fail. The code is also authenticated because it was signed with a unique, undisclosed private key guarded zealously by its owner—the persons in charge of signing the code.

Signing the code is an important step for certified software. Once the software has been approved by an external or internal validation authority, it cannot be changed.
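
On the release side, signing is the mirror image of the boot-time check. The sketch below uses the same placeholder helpers as earlier plus a hypothetical ecdsa_sign() primitive; it only illustrates the principle, since in practice the approver’s private key stays inside an HSM or signing tool and never appears in application code.

/* Hypothetical signing primitive, standing in for an HSM or signing-tool API. */
extern bool ecdsa_sign(const uint8_t private_key[32],
                       const uint8_t digest[32],
                       uint8_t signature[64]);

/* Host-side release step: digest the exact bytes being released, then sign the digest. */
static bool sign_release(const uint8_t *code, size_t len,
                         const uint8_t approver_private_key[32],
                         uint8_t signature_out[64])
{
    uint8_t digest[32];
    sha256(code, len, digest);
    return ecdsa_sign(approver_private_key, digest, signature_out);
}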

Taking ownership of the device
Taking ownership of a device is done by personalizing the root of trust in the microcontroller, the immutable code that handles the secure boot. We also need to load into the device the public code verification key owned by the software approver (see Part 1 of this article). Recall that this key is fundamental and must be trusted.

There are two schemes to personalize the root of trust. One approach uses a small key hierarchy and the other approach has no key. We will now examine both approaches.

In the first approach (Figure 4) the device’s root of trust already contains a root public ‘key verification key’ that is inherently trusted as part of the root of trust. We call it the master root key (MRK). Restated simply, that key is hard-coded in the root of trust and is used to verify the public code verification key (CVK) (see Method 2 earlier). As a consequence, the public CVK must be signed prior to being loaded into the microcontroller. For this signature operation, the signing entity is the owner of the root of trust (i.e., the silicon manufacturer who owns the private key matching the public key hard-coded in the root of trust).

Once the public CVK has been loaded and accepted by the root of trust, the root of trust’s key is no longer used, except to internally recheck the CVK at each boot (to ensure that it has not been modified or corrupted) or to update the public CVK. The CVK is now used to verify binary executable code.

This personalization step has a significant benefit: it can be executed in an insecure environment because only a correctly signed public key can be loaded into the device. Nor is the step a big hurdle, as the same CVK can be deployed on all devices. Note that the CVK can be stored in external, unprotected memory since it is systematically reverified before being used.

Figure 4: Code verification key (CVK) is verified by the MRK before being used to verify the executable code and then execute it.
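
A sketch of this first approach might look as follows, again with the hypothetical helpers used earlier. The point is simply that the root of trust accepts a CVK only when it carries a valid signature made with the private key matching the hard-coded MRK, and that the same check can be repeated at every boot.

extern const uint8_t MRK[64];          /* master root key, hard-coded in the root of trust */

typedef struct {
    uint8_t key[64];                   /* public code verification key (CVK)               */
    uint8_t signature[64];             /* signature of 'key' made by the owner of the MRK  */
} signed_cvk_t;

/* Used once at personalization, and again at each boot because the CVK may live
 * in external, unprotected memory. */
static bool cvk_is_valid(const signed_cvk_t *cvk)
{
    uint8_t digest[32];
    sha256(cvk->key, sizeof(cvk->key), digest);
    return ecdsa_verify(MRK, digest, cvk->signature);
}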

A second, simpler approach to personalize the root of trust uses no prior key. Consequently, the public CVK has to be loaded either into internal memory that can be written only by trusted software running from that internal memory, or into non-modifiable memory such as one-time-programmable (OTP) memory or locked flash (EEPROM) in a secure environment. A trusted environment is needed to ensure that the intended public key is not replaced by a rogue key, because the root of trust cannot verify this key. This key also has to be internally protected by a checksum (CRC-32 or hash) to ensure that there is no integrity issue with that key. Alternatively, to save precious OTP space, the key can be stored in unprotected memory, but its checksum value is stored in internal OTP memory.
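
For the second approach, a plain integrity check is enough because the key was loaded in a trusted environment in the first place. The sketch below compares a CRC-32 computed over the key (stored in ordinary flash) against a reference value locked in internal OTP; the symbol names are illustrative, but the CRC-32 routine itself is the standard reflected polynomial.

/* Bitwise CRC-32 (IEEE 802.3, reflected polynomial 0xEDB88320). */
static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

extern const uint8_t  CVK_IN_FLASH[64];   /* key stored in unprotected memory */
extern const uint32_t CVK_CRC_IN_OTP;     /* reference checksum locked in OTP */

static bool cvk_integrity_ok(void)
{
    return crc32(CVK_IN_FLASH, sizeof(CVK_IN_FLASH)) == CVK_CRC_IN_OTP;
}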

As described here, one can imagine a multiparty signing scheme where multiple entities (e.g., the software approver and an external validation authority) have to sign the executable code. One can also imagine more complex hierarchies with different code verification keys, more intermediate key verification keys, and even multiple root keys. The ultimate process used really depends on the context and the security policies required for the application.

During this personalization step, other microcontroller options can be set permanently, for example disabling the JTAG port. While useful during development, the JTAG must be disabled on production parts; otherwise the root of trust can be bypassed.

Downloading app code during manufacturing
One step in the manufacturing process consists of loading the previously sealed/signed binary executable code into the device.

A public key scheme has the following unique advantages:

  • No diversification is needed.
  • No secret is involved.
  • The binary executable code is sealed by the software approver and cannot be modified.

The signed binary executable code, therefore, can be loaded anywhere. Only this code will be loadable and executable by the electronic device; other binary executable codes will be rejected.

Using public key cryptography for this process is extremely valuable as no security constraints are imposed upon the manufacturing process.

Deployment, field maintenance
Secure-boot-based devices are deployed in the field like others. However, to update the executable code in the field, you must sign it with the software approver’s private key and load it into the device by an appropriate means like a local interface or a network link.

If the software approver’s code-signing key is compromised, the associated public CVK can be replaced in the field, provided that the new key is signed by a public key-verification key previously loaded into the device. (This key-verification key is a ‘revocation-then-update’ key.) Compromise of the root key (MRK), however, cannot be remedied in the field because the root key cannot be replaced. Proper private-key management policies mitigate this risk.

Cryptography is not enough
Assume now that appropriate key management and good protection practices are followed. Assume too that manufacturing security, trust, and confidence are guaranteed. Now, finally, assume that good cryptography is chosen: standardized algorithms, long enough keys, high-quality random numbers. Nonetheless, some major security threats still remain and severely expose the device’s assets.

The public key is stored in a location of flash memory that is locked, i.e., it cannot be modified any more. If the integrity of the preprogrammed public key or the secure boot is based on a lock mechanism in flash memory or in OTP memory, then the strength of the integrity depends only on the strength of this lock technology. Any attacker who can defeat this technology can defeat the integrity of the targeted asset itself.

Similarly, a software check of digital signatures should be performed. So, besides compliance with the algorithm, there are several ways to verify the software—some methods robust and others less so. Robustness means resistance to errors, unexpected problems, mistakes, abnormal environmental conditions (e.g., low temperature, bad powering from power glitches), or corrupted bytes. These constraints are usually addressed and managed by the software and hardware validation of the device.

But robustness also means resistance to specific, deliberate, focused attacks. These malicious attacks can be performed randomly, without any real knowledge of the platform (i.e., a ‘black box’ attack). Or the attack might come after serious study of the device (i.e., assaults ranging from ‘grey’ to ‘white box’ attacks, corresponding to the attacker’s level of understanding of the platform). In each instance the attacker is looking for weaknesses or limitations that can be converted into attack paths.

Two simple examples illustrate this idea. Our first example involves software good practices, which require you to check the bounds and lengths of inputted data before processing them. The use of static analysis tools, intended to assess source code quality, is also part of good practices. These actions help developers and auditors to improve and guarantee code quality, and an efficient developer will detect misformatted bunches of bytes. Regrettably, these checks are not regularly implemented in the rush to deliver new software as quickly as possible. Moreover, because these “good practice” controls add development time, test and validation time, and code size, they are often considered less critical or even unnecessary compared to basic functionality. Consequently, buffer overflows (see Figure 5) can emerge in implementations of communication protocols, even those considered theoretically secure such as TLS, or in memory-to-memory copies like the copy from external NAND flash to RAM before running an application. These buffer overflows are well-known attack paths into otherwise normally functioning operational software.

Figure 5: Buffer overflow effects
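
The bounds check that the first example calls for is a one-line guard, as in the sketch below; the buffer size and field names are illustrative. The important point is that a length field received from outside (a network packet, an external flash header) is never trusted before it is compared against the real size of the destination buffer.

#include <string.h>

#define RX_BUF_SIZE 256u

static uint8_t rx_buf[RX_BUF_SIZE];

/* Rejects oversized or absent input instead of overflowing rx_buf. */
static bool copy_payload(const uint8_t *payload, size_t declared_len)
{
    if (payload == NULL || declared_len > RX_BUF_SIZE)
        return false;
    memcpy(rx_buf, payload, declared_len);
    return true;
}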

Another example involves choices of processes and is called a fault attack (Figure 6). It is agreed from the above section that a digital signature check is powerful for detecting any integrity/authenticity failure in data/code. In fact, the check can be performed on the bytes where they are stored, before they are copied to the application’s running memory. But in some other operating scenarios, the byte copy is performed before the check has occurred.

This means that these bytes are almost ready to be used or run, even though they do not match the digital signature. If the attacker is able to skip the check step by triggering a power glitch or any other kind of small, nondestructive fault, the normal process can be disturbed into skipping the check operation, thereby enabling the loaded bytes to run as genuine code.
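
One classic counter-measure, sketched below with the same placeholder helpers, is to verify the bytes where they are stored before copying them, and then to repeat the decision on the copy, so that a single skipped instruction is not enough to get unverified code executed. Real implementations add further redundancy (random delays, double-checked flags), but the ordering is the essential point.

/* Check in place, copy only verified bytes, then re-check the copy. */
static bool load_verified_code(const uint8_t *stored, size_t len,
                               const uint8_t signature[64],
                               uint8_t *exec_ram)
{
    uint8_t digest[32];

    sha256(stored, len, digest);                     /* 1. verify before any copy         */
    if (!ecdsa_verify(CVK, digest, signature))
        return false;

    memcpy(exec_ram, stored, len);                   /* 2. copy only after a valid check  */

    sha256(exec_ram, len, digest);                   /* 3. redundant re-check so that a   */
    if (!ecdsa_verify(CVK, digest, signature))       /*    single glitch cannot bypass it */
        return false;

    return true;
}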

More generally speaking, making a security mechanism rely on only a single implementation mechanism weakens that security and motivates attackers to focus on circumventing the implementation.

Figure 6: Effect of a fault attack: the fault modifies the normal path of the program by changing the test condition result.

Implement the best solution
Today there are secure microcontrollers (such as Maxim’s MAX32590) that feature a root of trust containing a preloaded, immutable root key. The root of trust containing the MRK in these secure microcontrollers is either in ROM, internal OTP, or internal flash that is locked at the factory. Because the memory technology used to store the key is immutable, the integrity of the MRK is guaranteed. Finally, a checksum is still computed over the key to ensure that no glitch happens before that key is actually used.

To initialize the customized preloaded key, customers submit their public CVK to the manufacturer for signature. The manufacturer signs (i.e., certifies) the customer’s public key using their private root key stored in an HSM [2], which is strictly controlled. They then send the signed public key back to the customer. This process is quick and is required only once, before the software is released for the first time. (Note that this secure step is not needed during software development.) A customer can then load and replace the manufacturer’s key with their own key and download their signed binary executable code.

This process is flexible because the same key is programmed onto every part, thus making the personalization process easy. It is even possible for a manufacturer to personalize (i.e., customize) parts with the customer’s key before the part is even manufactured—a major reduction of hassle for the customer. This personalization step can also be done by the customer itself. As an interesting shift of liability, this latter key personalization step allows customers to take ownership of the microcontroller themselves. The ROM code in DeepCover MCUs (such as Maxim’s MAX32550) allows keys to be revoked and replaced in the field with no loss of trust.

It is important to note that this key certification process cannot be bypassed. It is not optional. Even secure parts in development enforce these same principles, with a single difference: they can be provided in limited quantity with test keys activated, to lessen the exposure of the customer’s key during development or parts evaluation.

It is not enough to design a secure solution for today. The best practical solutions will design for future upgrades too. The most trustworthy security supports future-proof RSA (up to 2048 bits) and ECC (up to 521 bits) signature schemes. Moreover, PCI PTS labs have audited their code. In addition, hardware accelerators make the code verification at startup almost invisible, so the security protocols do not put any extra burden on the boot process. The digital content digest is computed on the fly as the code is copied from flash to executable RAM. Then the signature verification process takes very little extra time.

DeepCover devices combine these security mechanisms with other protections such as JTAG-ICE deactivation, which makes code dump, modification, or replacement impossible; no single mechanism can, by itself, secure flexible/programmable parts. All ‘doors’ must either be closed or keyed to give stakeholders full confidence.

Conclusion
We have seen that a secure boot is an inexpensive but critically important security mechanism that can be used for devices on the IoT or in almost any application where assets are to be protected. The overall development and manufacturing processes remain simple and straightforward even with this secure boot. The extra steps involve only loading a public code verification key (CVK) into each device and signing the binary executable code to be loaded into the device.

We cannot allow any security breach of electronic transactions, critical systems such as nuclear plants, or implantable medical devices. A secure boot, the root of trust, is one step to secure the IoT. It makes that trust possible and simple to implement.

Part 1: Public Key Cryptography

References and endnotes
[1] Higgins, Kelly Jackson, “Smart Meter Hack Shuts Off the Lights,” Dark Reading, 10/1/2014, reports on Spanish smart meters that have been hacked. See also Maxim Integrated application note 5537, “Smart Grid Security: Recent History Demonstrates the Dire Need,” and Maxim Integrated application note 5545, “Stuxnet and Other Things that Go Bump in the Night.”

[2] To assist the customer’s production environment, Maxim provides a turnkey key-management solution using an HSM that meets PCI PTS 4.0 constraints.

Yann Loisel is Security Architect at Maxim Integrated. After receiving his degree in Cryptography, Yann began work at the French DoD, rising to the position of Cryptanalysis Team Manager. He then joined SCM Microsystems GmbH, where he managed the security of smart card readers and DVB decoders. He filed five patents there, and participated in standardization bodies such as FINREAD and DVB. He next became Chief Security Officer at Innova Card, a fabless company providing secure IC devices for trusted terminals. He joined Maxim Integrated Products when Maxim acquired Innova Card in 2008. Acting as Security Architect, he now manages all security-related topics at Maxim, including physical protection, cryptography, applications security, and certifications. He has filed five patents in these domains.

Stephane Di Vito has been Security Expert in Maxim Integrated’s Medical, Energy, and Embedded Security business group for the last three years. Stephane has been working in secure embedded software engineering for 14 years. He previously spent five years working at Gemalto, specializing in highly secure firmware embedded into smart card processors. Then he worked at Atmel’s former security ICs department, where he developed embedded security modules for industrial, gaming, health, financial, and privacy protection applications. After Atmel he spent three years at Newsteo, a startup company specializing in wireless industrial measurement, as firmware designer and developer, before joining Maxim Integrated. He is now in charge of designing and developing secure software for secure microcontrollers for industrial applications.
