Security of electronic devices is a must in today’s interconnected world of the Internet of Things (IoT). Electronic devices range from smart connected refrigerators to uranium centrifuge control systems. When the security of a device is compromised, we can no longer rely on it for secure data exchange, processing, or storage. If electronic transactions, critical systems such as nuclear plants, or implantable medical devices are hacked, global trust will be dramatically undermined.
This is the first article in a two-part series on security for the Internet of Things (IoT). In Part 1 we describe how to identify and then assess the security risks for a connected electronic device. We explain how the best, proven security is designed into electronic devices. Our focus is on countermeasures, specifically public key-based algorithms.
In Part 2 we focus on the importance of a secure boot and the “root of trust”, which are the cornerstones of an electronic device’s trustworthiness. We will demonstrate how device security can be implemented conveniently and how devices can be updated in the field. DeepCover secure microcontrollers will serve as example trust-enabling components to secure the IoT.
The connected world reaches out
Our lives are increasingly surrounded by interconnected electronic devices in what is now called the IoT or even the Internet of Everything. The IoT and all secure portable devices, as well as industrial and medical equipment, run software within their hardware. They ease our days, answer our needs, control electrical functions in our households, protect our lives in medical equipment, and provide us utility services (water, gas, electricity) through smart grids or by controlling power plants.
Secure personal devices and the IoT have altered personal behavior for many of us. The technology extends our arms, our wills, and our minds beyond our bodies to help us communicate and consume. Manufacturers and many industries are embracing the IoT for business efficiencies and data tracking (i.e., Industry 4.0). Energy and water utilities are realizing the efficiencies and intelligence that they will gather with data management and data mining from remote access to smart meters on an IoT network. Banks and payment processors now enable fast transactions with smart cards, at any time and any place, using free (or almost free), colorful, touch terminals. Home health with the IoT—ECG monitoring, glucose dispensers, or insulin pumps—is improving lives and saving time and money for both patients and medical facilities. Projections estimate that there will be 88M mobile POS connections in 2018. Clearly, connected electronic devices have definite value, but they have definite vulnerabilities too.
Recognize the security risks
It has become so easy, so comfortable surfing on the web from almost everywhere with our smartphones that we have forgotten about our old 56k modem. But today’s connected devices and the instant accessibility to a bright world also give us a misguided sense of confidence. We should remember a sad but simple truth: the investments, connections, and transactions over the Internet or IoT whet the appetite of hackers.
The security risks come from competitors, lone predators, and criminal organizations. Competitors are more inclined to duplicate/clone technology—the magical smartphones or the ink cartridges—often saving them years of R&D efforts. The others will be more interested in stealing payment cards, PIN codes, or keys in payment terminals, or in blackmailing individuals, perhaps by sabotaging an account or remotely shutting down a portable medical device. We can also imagine terrorist threats by remote hacking of energy smart meters controlling energy distribution at industrial plants or hospitals. There is no need for more examples here. Suffice it to say that the security risks are all around us.
The risks to the stakeholders are numerous:
- Loss of reputation. “The battery that you (manufacturer ‘x’) claimed as genuine has exploded in my laptop.”
- Loss of IP. “The terrific algorithm I’ve developed in my video decoder during the last five years has been copied and duplicated. And I did not patent it to avoid disclosure of my tricks!”
- Loss of money. “Dozens of payment terminals have been hacked in my retail chain store, so fake transactions are performed and/or sensitive cardholder data are stolen. Customers are going to blame me and I will need to identify the hackers.”
- Loss of goods. “I just read about the hack of an energy meter published on the web and already thousands of dishonest subscribers are implementing it to pay a lower bill.”
- Loss of health. “My insulin pump does not dispense any more, or it dispenses too much. Who ordered a change in delivery times?”
- Loss of control of vital infrastructures. “Who turned the lights off in the whole city?”
Obviously, any provider of electronic devices must have two objectives: first, deliver new, powerful and cost-effective devices or services; and, second, be totally committed to the robustness, liability, and security of their product. This is, in fact, the only way for them to keep the confidence of users, stakeholders, and consumers.
Analyze the risks
The above objectives, ambitious to be sure, are the sine qua non conditions for the longevity of a business. But recognizing security risks is only a first step in delivering secure products. Each provider must also employ a strict process of ongoing risk analysis.
Risk analysis, in its most simple form, is a three-step process. It starts by evaluating the assets, the goods to be protected, for their strengths and weaknesses. Then evaluate any potential attackers and profile their possible methods. Finally, examine any possible attack paths. Whenever an asset’s weaknesses, an attacker’s methods, and a viable attack path line up, the device (the asset) is at risk.
Consider a possible scenario. If hacking an energy smart meter with a simple Bluetooth connection saves someone 20% on a monthly bill, there is a high risk of massive, even widespread fraud. Similarly, if it costs very little to acquire the binary code of an application that controls the water and energy usage of household appliances, some dishonest competitors (or suppliers?!) would do it.
The options for action following a risk analysis require case-by-case decisions.
- Take the legal/contractual approach. This avenue is always cost-effective and worth setting up. A device manufacturer can easily ask subcontractors (e.g., manufacturing plants) to sign a nondisclosure agreement (NDA) and to promise to be honest and faithful.
- Implement technical countermeasures. These steps will protect devices against dishonest partners, subcontractors, and outlaw/unreachable attackers. Technical countermeasures guarantee that a device’s expected behavior and functions are controlled, sealed, and sanctioned by the manufacturer. Nothing can then either modify defined operation or access protected functions.
When a manufacturer uses legal contracts with suppliers and technical countermeasures in a device, it is protecting its own assets and safeguarding the device against unauthorized tampering and theft of IP. A manufacturer is also ensuring the safe, reliable operation of the device for a user.
Makes sense so far, but how do you really implement countermeasures in a device? Part of the answer is cryptography in the software. We will now see how cryptography can be used as a toolbox to ensure device security and provide the trust and confidence for both the manufacturer and end user.
A secure boot? We are not going to say a great deal about the secure boot here because it is the major focus of Part 2. Nonetheless, we cannot discuss cryptography without some mention of it.
Electronic devices are composed of a set of electronic components mounted on a printed circuit board (PCB) with usually one (or more) microcontrollers that run embedded software. The software is seen as digital content and stored in memory in a binary, executable format. Enabling trust in the executed software is a fundamental expectation, and this trust is enabled thanks to the secure boot.
A secure boot is a process involving cryptography that allows an electronic device to start executing authenticated and therefore trusted software to operate.
Public key-based signature verification
Existing public key cryptography schemes verify, conveniently and securely, the integrity and authenticity of digital content. Integrity means that the digital content has not been modified since it was created. Authenticity means that the digital content was released by a well-identified entity. These two fundamental characteristics are provided by the digital signature scheme and are required so that the digital content (i.e., the binary executable code) can be trusted by an electronic device.
The integrity of digital content is guaranteed by a mechanism called the “message digest,” i.e., a secure hash algorithm such as SHA-256 or, most recently, SHA-3 (the older SHA-1 is now considered weak). A message digest is like a “super cyclic redundancy check (CRC)” but produces more bytes in the output. For instance, the SHA-256 algorithm produces a 32-byte output; a CRC-32 produces only 4 bytes. There is an important, fundamental property of a secure hash algorithm: it is computationally infeasible to forge digital content that produces a predefined hash value.
The corollary is that two different random digital contents produce two different hash values. (The probability of having two different digital contents producing the same hash value is virtually zero.) Consequently, if some bytes of the digital content are changed, the hash value of the digital content changes. In addition, unlike a CRC, it is not possible to append some bytes to the modified digital content so that the resulting hash value will match the original, non-modified digital content’s hash value. Therefore, with a hash algorithm guarding the digital content, it is not possible to secretly modify that digital content. Lastly, computing a hash is like computing a CRC: no cryptographic keys are involved.
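These digest properties are easy to observe with Python’s standard library:

```python
import hashlib
import zlib

firmware = b"example firmware image"   # stand-in for a binary executable

# SHA-256 produces a 32-byte digest; CRC-32 produces only 4 bytes.
digest = hashlib.sha256(firmware).digest()
crc = zlib.crc32(firmware).to_bytes(4, "big")
print(len(digest))   # 32
print(len(crc))      # 4

# Flip a single bit of the content: the new digest shares essentially
# nothing with the old one (the "avalanche" effect).
tampered = bytes([firmware[0] ^ 0x01]) + firmware[1:]
assert hashlib.sha256(tampered).digest() != digest
```

Note that, unlike a CRC, there is no known way to append bytes to `tampered` to make its SHA-256 digest match the original.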
The authenticity of digital content is guaranteed by the public key-based digital signature scheme itself (i.e., a cryptographic recipe). Public-key cryptography is based on pairs of keys. Anyone can possess a pair of keys: one private key stored secretly (e.g., KPRIV), and one public key (e.g., KPUB) publicly available to anyone. The private key can be used to sign digital content. The issuer of the digital content uses its own secretly held private key to identify itself as the issuer. The public key can be used by anyone to verify a digital content’s signature. Those two keys are tied together. Indeed, signing content with KPRIV produces digital signatures that can be successfully verified by KPUB only. No other public key can work. Conversely, if a signature is successfully verified using KPUB, then it was unquestionably signed by KPRIV and no other private key.
Digital signature generation involves two steps. The first step consists of hashing the digital content and producing a hash value with the properties explained above. In the second step this hash value is “signed” using the uniquely owned, undisclosed private key of the digital content author. This second step produces a value (the “signature”) that is attached to the original digital content.
Now anyone who wants to verify the digital content signature has to perform the two following steps. In the first step the digital content is hashed again, as in the signature generation process. Then in a second step, the resulting reconstructed hash value is used as an input to the signature verification algorithm, together with the signature attached to the digital content and the public key. If the algorithm determines that the signature is authentic, this proves that the digital content is identical to the original digital content (the integrity), and that the author of this digital content is really who it claims to be (the authenticity) (Figure 1).
Public key-based digital signature schemes work because the private key can be used for signing content only by the owner of this private key and no one else. Therefore, the private key has to be kept secret in good hands. Yet the public key need not be confidential because anyone can verify a digital content’s signature. The only fundamental requirement for a public key is trustworthiness.
Please note that “public” here does not mean insecure. The public key is freely accessible because it gives no indication about the private key; one cannot calculate the private key knowing the public key. Moreover, the public key does not allow anyone to perform personally identifiable actions like signing digital content.
Nevertheless, as anyone can generate a pair of keys there must be a mechanism to verify the identity of the public key owner. Suppose that a public key has no strong binding with an identity. Then if you successfully verify the digital signature of a digital content with that public key, you still cannot trust this digital content because you do not know who actually signed this digital content.
Therefore, public key integrity, authenticity, and identity must all be guaranteed. This can be done in different ways.
Method 1: Self-certification. The recipient of the digital content receives the public key from the sender in person, or the sender transmits the public key in a way that leaves no doubt about the legitimate origin and ownership of the public key. Then this public key (also called a root key) can be trusted, as long as it is stored where unauthorized persons cannot modify it.
Method 2: Hierarchical certification. In this method a hierarchy of verifiers guarantees the origin of the public key. Public key infrastructures (PKIs) provide the definitions of such hierarchies. The physical association between a public key and the identity of the key owner is a certificate. Certificates are signed by intermediate entities (i.e., certification authorities) of the PKI hierarchy.
Assume that a person wants to have a certified public key. That person generates a pair of keys and keeps the private key in a safe, hidden place. Then, in principle, a certification authority meets this person face-to-face and thoroughly verifies the identity of that person.
If authenticated, the identity information (name, organization, address, etc.) is attached to the public key and the resulting document is signed by the certification authority’s private key. This permanently binds the identity information to the public key. The resulting signature is attached to the certificate. If any one element among the identity information, the public key value, or the certificate signature is tampered with, then the certificate signature verification fails and the information contained in that certificate cannot be trusted. The certification authority’s public key can, in turn, be certified by yet another certification authority.
Certificate validity is verified by using the same cryptographic signature verification scheme as for digital content. The signature verification of the certificate guarantees the integrity and authenticity of the certificate and, consequently, of the information contained in the certificate: the public key and the identity (Figure 1).
As a result, before using a public key, one must first verify the validity of that public key’s certificate by using the certification authority’s public key. Then make sure that this certification authority’s public key certificate is also valid by using the public key of the parent signing authority, and so on. Therefore, a chain of verifications occurs with successive certification authorities’ public keys until one is reached that is ultimately trusted as a root key. The root key, you will recall, is trusted because it was obtained using Method 1.
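The chain walk can be sketched as follows, using textbook RSA with tiny toy parameters and a drastically simplified certificate structure (real systems use X.509 certificates and full-size keys; all names and key values here are illustrative):

```python
import hashlib

def h(data: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes, d: int, n: int) -> int:
    return pow(h(data, n), d, n)

def verify(data: bytes, sig: int, e: int, n: int) -> bool:
    return pow(sig, e, n) == h(data, n)

# Three toy RSA key pairs, each as (n, e, d) -- illustration only.
root  = (3233, 17, 2753)   # trusted root key, obtained via Method 1
inter = (2773, 17, 157)    # intermediate certification authority
leaf  = (143, 7, 103)      # end entity (e.g., the software approver)

def make_cert(identity: str, subject_key, issuer_key):
    """Bind an identity to a public key, signed by the issuer's private key."""
    n_sub, e_sub, _ = subject_key
    body = f"{identity}|{n_sub}|{e_sub}".encode()
    n_iss, _, d_iss = issuer_key
    return {"body": body, "subject": (n_sub, e_sub), "sig": sign(body, d_iss, n_iss)}

cert_inter = make_cert("Intermediate CA", inter, root)
cert_leaf  = make_cert("Software Approver", leaf, inter)

def verify_chain(certs, trusted_key) -> bool:
    """Walk from the root down: each verified certificate yields the key that
    checks the next one."""
    n, e = trusted_key
    for cert in certs:
        if not verify(cert["body"], cert["sig"], e, n):
            return False
        n, e = cert["subject"]
    return True

assert verify_chain([cert_inter, cert_leaf], (root[0], root[1]))
```

Tampering with any certificate body or signature along the chain makes `verify_chain` return False, which is exactly the guarantee described above.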
Verifying software. When applied to software, this public key-signature technique allows you to trust executable binary code. Now you simply consider the software as digital content. The sender of this digital content is the software approver, the one charged with accepting the software validated for a device. The receiver is the electronic device. The software approver generates a pair of keys and loads the public verification key into the electronic device once during manufacturing. The private key is kept in a safe place, as explained below. The software approver signs the generated code before loading it into the electronic device by using its own private key. Then at power-on, the electronic device can use the preloaded public key to verify the integrity and authenticity of the binary code before running it.
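A minimal sketch of that power-on check, again using textbook RSA with deliberately tiny toy parameters (real devices use 2048-bit RSA or ECC and padded hashes):

```python
import hashlib

# Toy public key (N, E) preloaded into the device at manufacturing.
N, E = 3233, 17
D = 2753      # private exponent: held by the software approver, NEVER in the device

def approver_sign(image: bytes) -> int:
    """Done once, off-device, when the software is released."""
    h = int.from_bytes(hashlib.sha256(image).digest(), "big") % N
    return pow(h, D, N)

def device_boot(image: bytes, signature: int) -> str:
    """What the device does at power-on, before executing any code."""
    h = int.from_bytes(hashlib.sha256(image).digest(), "big") % N
    if pow(signature, E, N) != h:
        return "halt"    # refuse to run untrusted code
    return "run"

firmware = b"application binary"
sig = approver_sign(firmware)
assert device_boot(firmware, sig) == "run"
assert device_boot(firmware, (sig + 1) % N) == "halt"
```

The device stores only the public key, so nothing secret can be extracted from it, yet only code signed with the approver’s private key will run.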
Benefits of ECC vs. RSA cryptography
Public key-based cryptography has been dominated by the RSA algorithm for several decades. But in the last few years elliptic-curve cryptography (ECC) has emerged and spread through the security industry. Elliptic curve-based signature verification performs on the same order of magnitude as RSA, but uses far fewer computational resources. Its key sizes are much smaller, thus reducing the memory footprint. A secure application of RSA now requires keys of at least 2048 bits, i.e., 256 bytes; elliptic-curve keys of equivalent strength are only 224 bits long, or 28 bytes. Elliptic-curve cryptography is, therefore, the preferred choice for securing newer devices.
Caveats about secret keys vs. public keys
As explained above, public-key cryptography is based on pairs of keys. (A key pair is made of a public key and a private key.) The private key is stored secretly because it alone can sign content—only the key owner should be able to sign. Conversely, the public key is available to anyone, because anyone can verify a signature. This is not harmful or risky.
Public keys do not need to be protected against disclosure and, therefore, do not require any of the countermeasures designed to prohibit access to the key value. Unlike a device holding a secret key, a device holding only a public verification key does not need to react to tamper events by deleting/erasing that key. No side-channel countermeasures are required. The only required protection mechanisms must guard against key substitution/modification and modified software behavior.
All this makes the device design simpler. The algorithms involved are not subject to export regulations because they do not include encryption, but merely a digital content digest (i.e., a hash algorithm) and signature verification. Note, finally, that the digital signature verification algorithm (see Figure 1) must still be robust enough to protect against intentional or accidental disturbances: power glitches, badly formatted digital content, and malformed digital signatures.
Limitations of a secret key. Secret key cryptography looks simpler than the public-key system discussed above: there is a single key for both signing and verifying signatures, and no need for certificates.
In fact, secret key-based cryptographic algorithms such as the Advanced Encryption Standard (AES, specified in NIST FIPS 197) are not suitable for protecting software integrity in the field because the same secret key has to be used for signing and for verifying signatures. The secret key must, therefore, also be stored inside the deployed electronic devices. However, protecting secret keys stored in electronic devices against disclosure is not an easy task. Disclosure of the secret key can happen at design time, during manufacturing, and in the field. At design time, insiders could leak the key to an outsider. At manufacturing, third parties could dump the secret keys from the device memory and leak them. In the field, attackers can recover the key (e.g., by memory dump, fault attacks, or power analysis).
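The problem is visible in code. Using HMAC-SHA256 here as a stand-in for an AES-based MAC (the key name and contents are hypothetical), the verifying device necessarily holds the very key that creates valid tags:

```python
import hashlib
import hmac

# Hypothetical shared key: it must exist on BOTH the signer and every device.
SECRET = b"shared-secret-key"

def mac(image: bytes) -> bytes:
    """Compute the authentication tag for a software image."""
    return hmac.new(SECRET, image, hashlib.sha256).digest()

firmware = b"application binary"
tag = mac(firmware)

# The device verifies with the same key it would use to sign:
assert hmac.compare_digest(mac(firmware), tag)

# Consequence: anyone who extracts SECRET from a single device can forge
# tags that every other device will accept.
forged = b"malicious binary"
forged_tag = hmac.new(SECRET, forged, hashlib.sha256).digest()
assert hmac.compare_digest(mac(forged), forged_tag)
```

With public-key signatures, by contrast, the device-side key can only verify, never forge.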
Protecting a secret key. You can mitigate the disclosure of secret keys by key diversification, i.e., by using a different key for each deployed device. This makes sense, but manufacturing becomes much more complex and requires trusted, secure manufacturing plants and huge databases of keys. Beyond manufacturing, you can also build anti-tamper defenses into the device itself. A secure electronic device must detect tamper events and then destroy its secret keys. There must also be countermeasures to resist power disruptions or faults directed against the encryption algorithm. Encryption algorithms are, moreover, subject to export regulations, so these devices can face regulatory issues in international markets.
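Key diversification itself is straightforward; a common sketch derives each device key from a master key and the device’s unique serial number (the key material and serial format here are hypothetical):

```python
import hashlib
import hmac

# Hypothetical master key: kept only inside the secure manufacturing plant.
MASTER_KEY = b"factory-master-key"

def diversify(device_id: bytes) -> bytes:
    """Derive a per-device secret from the master key and the device's unique ID."""
    return hmac.new(MASTER_KEY, device_id, hashlib.sha256).digest()

k1 = diversify(b"serial-000001")
k2 = diversify(b"serial-000002")
assert k1 != k2   # every device ends up with its own key
```

If an attacker extracts `k1` from one device, `k2` and `MASTER_KEY` remain safe; only that single device is compromised. The cost is exactly what the text describes: the plant must hold the master key securely and track which key went into which device.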
Secret key cryptography is conceptually simpler, but it forces manufacturers to expose secrets in the field and to implement expensive, and still imperfect, countermeasures. The countermeasures described above protect a secret key, but they also make the design of a secure device more complex and costly. Secret key cryptography addresses threats to confidentiality, for which its heavy key management can be justified, but it is too risky and expensive when the only goal is verifying software signatures.
Private key management. Obviously, proper management of a private key is a critical requirement for public key-based cryptography. Anyone who steals a private key can sign arbitrary executable code that will then be successfully verified by the electronic device. Therefore, software approvers must store the private key somewhere strongly protected from disclosure; they must also make sure that no one can use the key without authorization (even without disclosing it). Some certifications, such as PCI PTS 4.0, require the use of hardware security modules (HSMs) to manage keys. HSMs are tamper-resistant devices capable of securely generating and using pairs of keys.
While public keys can be exported, the associated private keys remain in the HSM. Using the private keys in the HSM (e.g., for signing digital content) requires prior strong multifactor authentication. The person who needs to sign a binary executable code must use a smart card and a PIN code to unlock the private key in the HSM. Depending on the security policy, two or more persons may be required for that operation. In addition, HSMs have to be kept in safety deposit boxes to maximize security when they are not in use. This process ensures that only trusted persons sign binary executable code.
Public-key cryptography removes some security constraints usually imposed on manufacturing subcontractors, including those loading software into the devices. Since public-key cryptography places no secrets in the deployed devices and yet the authenticity of the loaded software is guaranteed, we get the best of both worlds.
To ensure the integrity of the IoT, we cannot allow any security breach of electronic transactions, critical systems such as nuclear plants, or implantable medical devices. The IoT must secure financial, industrial, and medical devices if broad-based communications are to thrive. Ultimately, IoT devices require trust, and the security mechanisms discussed here make that trust possible and simple to implement.
In our next article, Securing the IoT: Part 2 - A Secure Boot, we will explain that the best way to protect devices on the IoT is a secure boot, also called a root of trust. A secure boot is an unbreakable wall, a hard barrier against attackers trying to breach the casing of an electronic device. We will show how, ultimately, the best defense is to use a microcontroller that starts executing software from an internal, immutable memory.
References and endnotes
1. Generating value from smart meter data, Centre for Sustainable Energy. For more background information, see:
Maxim Integrated application note 5832, Water and Power in the Internet of Everything;
Maxim Integrated application note 5536, Energy Measurement and Security for the Smart Grid—Too Long Overlooked;
Maxim Integrated application note 5144, Environmental Benefits of Smart Meters;
Maxim Integrated application note 5725, Silicon, Security, and the Internet of Things. This article also discusses the Internet of Everything.
2. Tang, Wincey, Mobile Point-Of-Sale Shipments Surge by 50 Percent in 2013, but Many Devices Go Unused, IHS Technology, February 05, 2014.
3. Higgins, Kelly Jackson, Smart Meter Hack Shuts Off the Lights, Dark Reading, 10/1/2014, reports about Spanish smart meters that have been hacked. See also Maxim Integrated application note 5537, Smart Grid Security: Recent History Demonstrates the Dire Need, and Maxim Integrated application note 5545, Stuxnet and Other Things that Go Bump in the Night.
4. Maxim Integrated application note 5486, Securing the Life Cycle in the Smart Grid, and application note 5631, Ensuring the Complete Life-Cycle Security of Smart Meters.
Yann Loisel is Security Architect at Maxim Integrated. After receiving his degree in Cryptography, Yann began work at the French DoD, rising to the position of Cryptanalysis Team Manager. He then joined SCM Microsystems GmbH, where he managed the security of smart card readers and DVB decoders. He filed five patents there and participated in standardization bodies such as FINREAD and DVB. He next became Chief Security Officer at Innova Card, a fabless company providing secure IC devices for trusted terminals. He joined Maxim Integrated Products when Maxim acquired Innova Card in 2008. Acting as Security Architect, he now manages all security-related topics at Maxim, including physical protection, cryptography, applications security, and certifications. He has filed five patents in these domains.
Stephane Di Vito has been Security Expert in Maxim Integrated’s Medical, Energy, and Embedded Security business group for the last three years. Stephane has been working in secure embedded software engineering for 14 years. He previously spent five years working at Gemalto, specializing in highly secure firmware embedded into smart card processors. Then he worked at Atmel’s former security ICs department, where he developed embedded security modules for industrial, gaming, health, financial, and privacy protection applications. After Atmel he spent three years at Newsteo, a startup company specializing in wireless industrial measurement, as firmware designer and developer, before joining Maxim Integrated. He is now in charge of designing and developing secure software for secure microcontrollers for industrial applications.