
Enhance system security with better data-at-rest encryption


Embedded systems designers can protect sensitive data that's on a device's hard drive (data-at-rest) by using encryption techniques.


In 2010, the television network CBS aired a program demonstrating how discarded office copiers are gold mines for private information, trivially harvested from disk drives within the machines.1 From copiers randomly selected from a used copier warehouse, investigators recovered lists of wanted sex offenders, drug-raid targets, architectural design plans, personal identification information (name, address, Social Security number), and medical records—including blood-test results and a cancer diagnosis.

When asked whether this could be prevented, one copier company said that customers could purchase a $500 option that will erase copied images from the hard drive after use. Give the guy who wrote those couple lines of code a bonus!

Another obvious solution to this problem is data-at-rest protection. Data stored on a device rather than in transit, known as data at rest, is either encrypted directly or protected by protocols that include encryption, preventing unauthorized access. The storage media for an embedded system may include hard disk drives, flash memory, and attached USB thumb drives.

Compliance regulations
Medical sector: the Health Insurance Portability and Accountability Act (HIPAA) requires that patient data stored within medical devices be protected.
Financial sector: the Payment Card Industry (PCI) data security standard (PCI DSS) requires the protection of credit card information within financial processing systems.
Government and security-conscious enterprises: Data-at-rest protection within smartphones and tablets is a requirement if handhelds are used for the processing of sensitive information.

As the photocopier story demonstrates, seemingly benign, mundane office equipment is often vulnerable and unprotected. On the other hand, many modern embedded systems do have encrypted storage-protection requirements, driven by intellectual property protection, digital rights management, sensitive customer information, and more. Compliance regulations in certain industries require that sensitive stored data be protected with appropriate data-protection protocols that include encryption. See sidebar for examples.

This article discusses approaches for protecting data-at-rest.

Choosing the storage layer

Figure 1: Data-at-rest protection choices by layer.

As shown in Figure 1, developers may choose from multiple layers in the data-storage stack to apply data-at-rest protection protocols.

Hardware layer: With full-disk encryption (FDE) , the entire medium used for storage is encrypted. All the data that goes on the storage medium is encrypted, including certain hidden files, such as the operating system's temporary files and swap space, so such files are never exposed. However, the master boot record typically remains unencrypted so the platform can boot, leaving that portion of the drive exposed.

When FDE is handled within the medium peripheral itself, it's referred to as a self-encrypting drive (SED) . SEDs are common in the laptop market. The advantage of SEDs for the embedded systems developer is that little or no new software must be written to take advantage of the data-protection facilities. Encryption is performed with specialized hardware within the storage device, offloading the main embedded applications processor. If self-encrypting storage media is feasible, it's an excellent choice due to ease of use, excellent performance, and the ability to hide the storage encryption key from the main applications processor and memory. Unfortunately, many embedded systems will be unable to use the available standalone SED products due to form-factor limitations.

Block manager layer: Encryption can be performed at the next level up, the device-management layer, typically a block-oriented driver. Protection at this level may cover the entire managed device (FDE). The performance implications of this approach vary: if the embedded platform contains a symmetric encryption accelerator, the overhead is likely to be reasonable, while a purely software cryptographic implementation may cause a dramatic loss in performance. Embedded systems developers can architect the encryption facilities such that the device driver calls out to generic medium block encryption routines, ensuring that software is easier to maintain across different generations of the embedded product that may use different types of storage.

File system layer: The next candidate for data-at-rest protection is the file system. The major advantage of implementing storage protection at the file system layer is to provide finer granularity over the choice of information that requires storage confidentiality. This is especially important if encryption is performed in software with minimal or no hardware acceleration. Depending on the file system implementation, developers may be provided options for encryption at the volume level or at the individual file level.

Applications layer: Finally, applications can add their own data protection, either using underlying file-system encryption features or a custom implementation. For example, an audit logging application can encrypt its audit records prior to calling the standard file system output functions.

For volume, file, or application-level data protection, developers can employ separate keys for these groups of data rather than a single key for the entire system. This is a sensible application of “least privilege” principles.

Developers resorting to custom, application-level approaches will also need to design their own key-management system, whereas users of encrypting file systems or SEDs can use the key-management framework provided by the product supplier.
Which encryption algorithm?
Data-at-rest presents some unique challenges for encryption algorithms relative to network security protocols.

For data-at-rest protection, an encryption algorithm must operate without consuming additional storage space: a plaintext media block is encrypted in place, generating a ciphertext block of the same size. The most basic encryption mode, electronic code book (ECB) , conserves space in this way but is not suitable for data-at-rest encryption, since any two identical plaintext blocks encrypt to the same ciphertext, making it easy for an attacker to find patterns in the data and potentially derive information. We must therefore consider other modes, most of which require an initialization vector (IV) . However, to avoid space expansion, the data-protection system must include a means of deriving this IV implicitly.

Implicit IV derivation poses a surprisingly difficult challenge for common encryption modes. Many modes require uniqueness: The same IV must never be reused for a particular key. For example, with counter mode, a predictable counter can be used, but the same number can never be repeated for a given key. For cipher block chaining (CBC) mode, a unique and unpredictable number must be used. Network security protocols have the freedom to generate the IV and send it along as part of the transmitted data; for the Advanced Encryption Standard with CBC (AES-CBC), each transmission can generate a new random number for the IV and transmit this IV to the receiver. But for data-at-rest, we have no room to store the IV for subsequent decryption.

The obvious source for an implicit IV would be the sector number and offset for a particular data block. Using this combination provides every disk block with a unique input value. However, as data is read and written over time, the same sector and offset are reused for the same key. This implies a serious weakness in the applicability of common encryption modes for data-at-rest protection. Numerous other weaknesses of common modes, especially CBC, have been identified when applied to data-at-rest protection protocols. Clemens Fruhwirth has written an excellent paper discussing these weaknesses.2

Tweakable ciphers

Figure 2: Tweakable block cipher overview.

The good news is that cryptographers have worked diligently to address this encryption-mode challenge. Liskov, Rivest, and Wagner introduced the concept of a tweakable block cipher in 2002.3 The basic idea of a tweakable cipher is to apply the IV concept to the single-block cipher itself rather than to a chaining mode built on top of the block cipher. As shown in Figure 2, the block cipher converts a plaintext block to a ciphertext block, using both the traditional key and the tweak as inputs.

What makes tweakable ciphers practical for the data-at-rest protection problem is that reusing a tweak does not compromise the cipher's security; thus, the media sector number and the block offset within the sector provide a perfect fit for tweak selection.

In 2007, IEEE's Security in Storage Working Group (SISWG) published standard P1619.4 The IEEE P1619 standard defines the XTS-AES cipher mode as a result of a thorough study of numerous potential tweak-based algorithms for use in data-at-rest protection.

This choice is further bolstered by NIST in “Special Publication 800-38E”, which approves the XTS-AES cipher mode and references its definition in IEEE P1619-2007.5 NIST has also amended FIPS 140-2 to include XTS-AES as an approved cipher for validation.6

The tweak algorithm found in XTS-AES is based on, and almost identical to, the XEX construction originally created by noted cryptographer Phillip Rogaway.7 In addition to strong security, XEX (and hence XTS-AES) is also designed for efficiency when applied to storage of many sequential data blocks, as is common with file storage.

Figure 3: The XTS-AES data-at-rest encryption cipher.

The XTS-AES block cipher is depicted in Figure 3. Notably, this cipher requires twice the keying material: for 128-bit security, a 256-bit key must be used. The first half of the key is used to process the plaintext; the second half is used to encrypt a 128-bit representation of the sector number, which acts as the primary tweak, as shown in Figure 3. The result of this encryption is fed to a function that performs a Galois field multiplication (implemented as a sequence of shifts and XORs) of the encryption result with a Galois constant derived from the secondary tweak, the numeric index of the data block within the sector. The result of this Galois multiplication is used twice. First, it's added (XOR) to the plaintext block, which is then encrypted with the first key half. Then it's added (XOR) again to that encryption result to create the final ciphertext block.

Decryption is similar; however, while the AES-ECB decryption algorithm is used to process the ciphertext, the tweak cipher remains the same, using the AES-ECB encryption algorithm.

In practice, data is stored to media in sectors. Therefore, the block encryption algorithm shown earlier must be executed in a loop across the entire sector. Note that while XTS-AES handles partial blocks, that part of the algorithm is often unnecessary. For example, the common sector size of 512 bytes will result in 32 block encryptions, and most media-management layers will access a full sector at a time. For such a system, given a function, xts_encrypt, which takes the plaintext block, encryption key, sector number, and block index as input, the simple code sequence in Listing 1 handles the sector encryption.

void sector_encrypt(uint8_t *sector, uint32_t sector_num,
                    uint32_t sector_size, uint8_t key[])
{
    uint32_t i;
    assert((sector_size % AES_BLOCK_SIZE) == 0);       /* 512 % 16 */
    for (i = 0; i < sector_size / AES_BLOCK_SIZE; i++) /* 32 iterations */
        xts_encrypt(sector + i * AES_BLOCK_SIZE, key, sector_num, i);
}

It's also easy to see from this code sequence that XTS-AES is parallelizable. If the embedded system contains an AES hardware accelerator (especially one that has direct support for XTS mode), this implementation should be modified to take advantage of the accelerator's ability to process multiple AES blocks at once. Furthermore, if the media allows for sector size configurability, developers may want to vary the sector size to see if better throughput (potentially at the expense of slightly reduced space efficiency) can be achieved.

When selecting data-at-rest protection products, avoid legacy approaches that use weaker modes (numerous CBC-based implementations have been commercialized). Employ the NIST- and FIPS-approved standards instead.

Managing the key
The primary purpose of data-at-rest protection is to ensure that information residing on lost or stolen media cannot be accessed by unauthorized parties, who must be assumed to have complete physical access to the disk. Thus, the symmetric storage encryption key must never be stored in the clear on the disk. However, it's often necessary to store an encrypted copy of the symmetric key on the disk (or perhaps an attached Trusted Platform Module, if available). The key is unwrapped for active use while the system is executing in an authorized manner. For personal devices such as laptops and smartphones, unwrapping is triggered by successful authentication of the user (such as using a password, smartcard, biometric, or multiple factors).

Generating the key
A typical method of storage encryption key establishment is to convert user credentials into a key using a key derivation function (KDF) . A popular KDF used to convert passwords is the password-based key derivation function, version 2 (PBKDF2). PBKDF2 is defined in the RSA Laboratories' specification PKCS #5 and duplicated in RFC 2898.8,9 PBKDF2 applies a hash function to the password concatenated with a salt (random bitstring). To make password cracking more difficult, the standard recommends that the hash output be rehashed multiple times. The recommended minimum hash iteration count is 1,000, although the number is expected to increase over time. Apple's iOS 4.0 uses 10,000 iterations. In 2010, RIM BlackBerry's encrypted backup service was determined to be vulnerable due to faulty application of PBKDF2. Instead of following the standard, the BlackBerry software used an iteration count of one.10

When the password is used to directly generate the storage encryption key, a change in password changes the encryption key, thereby forcing re-encryption of the entire protected media. To avoid this problem, a permanent, unique encryption key is created when the media is initially provisioned, and the key is wrapped (encrypted) with the password-derived key. With this two-level keying scheme, a periodic password change only requires rewrapping of the encryption key.

The user-authentication approach may be sufficient for limited types of attended embedded systems that can tolerate user intervention whenever the protected volumes must be unlocked. Nevertheless, this approach is not sufficient for large classes of unattended embedded systems. If the embedded system encounters a fault and automatically reboots, the encrypted volumes must be able to get back online without manual credential input.

Remote key provisioning
We can consider two classes of unattended embedded systems: those that have a remote management network interface and those that do not. For the latter, the embedded system lacks any mechanism for dynamic interaction that can unlock an encryption key. In this case, if information value demands data-at-rest protection, the designer is advised to incorporate a cryptographic coprocessor that provides physical tamper-resistant key storage and internal execution of the data encryption algorithm. The device driver sends plaintext to this encryptor and receives ciphertext for storage on disk and similarly requests decryption of disk blocks as needed.

For network-enabled embedded systems, a remote management server holds a database of the provisioned data-encryption keys. A server connection is initiated by the embedded system whenever a data-encryption key must be unlocked (such as at boot time). The embedded system and server mutually authenticate, and the server provides a copy of the embedded system's provisioned data-encryption key over the secured channel.

Key escrow
When implementing a data-at-rest protection system, developers must consider key escrow to guard against the possibility that the authentication information used to unlock the storage encryption key will be lost.

There are situations where the system owner may need to extract the data from storage, such as after a system failure. In most system designs, holding a copy of the data encryption key in an off-site secure location is advisable in order to prevent loss of data when the data encryption key is no longer accessible. If the embedded system lacks a network management interface, the internally-stored key must be exportable onto media for off-site escrow storage (such as in a secure vault). If the system supports network management and remote key provisioning, developers need to ensure that remotely provisioned keys are retained on a secure server or copied to protected offline media.

Advanced threats
The authentication software that runs to unlock the encrypted media must itself be trustworthy and tamper-protected. For example, the embedded operating system may incorporate the authentication function directly. The embedded operating system image (and any preceding boot loaders) is not encrypted; only the rest of the medium, which contains sensitive files, is protected. If the embedded operating system is not trusted (such as at risk of containing malware or vulnerabilities that would permit the loading of malware), the authentication process could be subverted. For example, a key logger could record the user's password, enabling recovery of the storage encryption key and all of the encrypted data.

If we assume the embedded operating system is trustworthy, we still must ensure that anything executing prior to launch of the operating system is trusted. This is a good example of the need for secure boot.

In some cases, the designer may want the embedded operating system image to be encrypted. When FDE is in use and a sophisticated operating system (such as Linux) resides on the encrypted disk, pre-boot authentication may be employed: a small portion of the encrypted disk contains a mini-operating system that is booted for the sole purpose of performing the authentication and unlocking the medium prior to booting the full operating system. If the embedded operating system is a secure microkernel, a separate pre-boot authentication module is not required.

Attacks against pre-boot authenticators have been successfully perpetrated. For example, the system is booted to a malicious operating system (such as an alternative OS booted from an external USB drive) that tampers with the pre-boot code to steal the authentication credentials as they are input.11 Secure boot can prevent this attack as well; the signature of the modified authenticator will fail to match the known good version, aborting the boot process.

Another example of an advanced threat is the cold-boot attack . Unless the embedded system is using a self-encrypting hard drive, where the keys are stored within the media and never exposed to the main processor, disk encryption requires that the storage encryption key be kept in memory (in the clear) while the system is operational, invoking the encryption and decryption algorithm to access data. When the system is turned off, RAM is unavailable, and the only copy of the encryption key is itself encrypted. Or is it? In some systems, RAM is not immediately cleared, and its contents survive a reset long enough to be read. An attacker reboots the system using a malicious operating system that grabs the plaintext key from RAM. This attack has been performed successfully.12

Data-at-rest protection within an embedded system equipped with secure boot and a trusted operating system impervious to remote attack can still be defeated by removing the protected media and booting it on a different computer that lacks this secure environment. Binding the storage encryption key to its intended embedded system platform can prevent this attack. In this case, the permanent storage encryption key is derived (in whole or in combination with user credentials) from a platform-specific key, such as a fused one-time programmable key or TPM key (if applicable). Even if the user's credentials are stolen, the storage encryption key cannot be derived outside of the targeted embedded platform. The downside of this extra level of defense is that a hardware failure that prevents access to the platform credential will render the data permanently inaccessible (unless the derived storage encryption key itself is securely escrowed).

Protect your customers
Embedded systems developers looking to incorporate data-at-rest protection into their next designs are faced with a plethora of design choices and constraints. This article provides designers with an overview of the key issues to consider. Special considerations for data-at-rest protection include the use of government-approved symmetric encryption algorithms designed specifically for such applications and proper management of the long-term keys typically used for this purpose.

Dave Kleidermacher is CTO of Green Hills Software. He writes a column on Embedded.com about security issues and he teaches at the Embedded Systems Conference.


  1. Keteyian, Armen. "Digital Photocopiers Loaded With Secrets." CBSnews.com, April 20, 2010. www.cbsnews.com/2100-18563_162-6412439.html
  2. Fruhwirth, Clemens. "New Methods in Hard Disk Encryption." Institute for Computer Languages, Theory and Logic Group, Vienna University of Technology, July 18, 2005.
  3. Liskov, M., R. Rivest, and D. Wagner. "Tweakable Block Ciphers," 2002. MIT and UC Berkeley. www.cs.berkeley.edu/~daw/papers/tweak-crypto02.pdf
  4. Security in Storage Working Group of the IEEE Computer Society Committee. IEEE P1619, Standard for Cryptographic Protection of Data On Block-Oriented Storage Devices , 2007.
  5. National Institute of Standards and Technology (NIST). "NIST Special Publication 800-38E, Recommendation for Block Cipher Modes of Operation: The XTS-AES Mode for Confidentiality on Storage Devices." January 2010. csrc.nist.gov/publications/nistpubs/800-38E/nist-sp-800-38E.pdf
  6. Information Technology Laboratory, NIST. "FIPS Pub 140-2: Security Requirements For Cryptographic Modules." http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf
  7. Rogaway, Phillip. "Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes OCB and PMAC," September 24, 2004. http://www.cs.ucdavis.edu/~rogaway/papers/offsets.pdf
  8. RSA Laboratories. PKCS #5 v2.0: Password-Based Cryptography Standard. March 25, 1999.
  9. PKCS #5: Password-Based Cryptography Specification Version 2.0; Internet Engineering Task Force, Request for Comments: 2898; September 2000.
  10. NIST National Vulnerability Database, CVE-2010-3741. web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-3741
  11. Turpe, Sven, et al. "Attacking the BitLocker Boot Process," Proceedings of the 2nd International Conference on Trusted Computing (TRUST 2009), Oxford, UK, April 6-8; LNCS 5471, Springer, 2009.
  12. Halderman, J. Alex, et al. "Lest We Remember: Cold Boot Attacks on Encryption Keys," Proceedings of USENIX Security '08, pp. 45-60.

This content is provided courtesy of Embedded.com and Embedded Systems Design magazine.
This material was first printed in April 2012 Embedded Systems Design magazine.
Copyright © 2012
UBM--All rights reserved.
