Heartbleed and its impact on embedded security

Heartbleed, the recently publicized bug in the open source security library OpenSSL, is the worst kind of coding error: not only is the affected code used throughout the infrastructure of the Internet, but no one can fully assess the extent of the damage. That is why Heartbleed is likely to go down in the annals as one of the worst security vulnerabilities of the decade.

However, the majority of publications in the last few weeks have focused on the security of consumers’ usernames and passwords. It’s also worth understanding how Heartbleed affects embedded designs and what it teaches us about embedded security. But first, let’s start with the details of the bug known as Heartbleed and its effect on Internet security.

OpenSSL, TLS, and Encryption
There are many detailed explanations of Heartbleed available on the Internet, but let me briefly summarize:

Heartbleed is a bug in OpenSSL, a widely used open source library for secure communications using the SSL (Secure Sockets Layer) protocol. Newer versions of the SSL protocol go by the name TLS (Transport Layer Security) instead. TLS can be used to add secure communication to a number of Internet services, but its use with web servers is the most common. When used on a web server, it creates a secure communication channel denoted by web addresses starting with “https://”.

TLS for web servers provides two key features:

* First, TLS encrypts the communication channel between each web browser and the web server so that the communications cannot be read if intercepted. This protects all the traffic, but specifically protects sensitive information such as the username and password when a user logs in to a website.

* Second, TLS also provides a way to authenticate that the web server in question is the web server that was requested. This is done by the web server proving it possesses secret data (a private key) that corresponds to a publicly available identity (a certificate).

These two features are crucial to secure Internet communication. The actual means by which this security is accomplished is complex mathematics from the field of cryptography.

Details of Heartbleed
The problem with Heartbleed is that it leaks sensitive information through a coding bug in the OpenSSL library. The bug is sadly simple: a necessary length check was omitted when a feature was added a couple of years ago.

Figure 1. How the TLS Heartbeat Extension functions in OpenSSL normally.

The feature comes from a relatively new specification, the TLS Heartbeat Extension (Figure 1, above), defined in RFC 6520. The TLS Heartbeat Extension was added to help keep TLS connections alive when firewalls and other network devices might otherwise time them out. While it is a worthwhile feature, its introduction also introduced the bug.

The outcome of this rather simple coding error is dire. If a specially crafted malicious network packet is sent to a server with the Heartbleed bug, the attacker can obtain up to 64KB of memory from the OpenSSL process. The malicious packet causes not only the expected Heartbeat data to be sent back, as in the normal case, but adjacent memory as well (Figure 2, below). Remotely obtaining memory from a web server is always bad, but this is far worse: the memory in question belongs to the process running OpenSSL, so it is especially likely to contain sensitive data.

Figure 2. A malicious packet exploiting the Heartbleed bug, returning memory beyond the Heartbeat payload.
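To see how simple the omission was, consider the following illustrative sketch of the flaw. This is not the actual OpenSSL source; the names and structure are simplified, but the essence is the same: the attacker controls the payload length field, and the vulnerable code echoes that many bytes back without checking the claimed length against the record actually received.

```c
#include <string.h>

/* Illustrative sketch (simplified, not the actual OpenSSL code).
 * 'record' holds a received heartbeat message of 'record_len' bytes;
 * the echoed payload is written into 'response'. */
static int handle_heartbeat(const unsigned char *record, size_t record_len,
                            unsigned char *response)
{
    /* Byte 0 is the message type; bytes 1-2 carry the
     * attacker-controlled payload length. */
    size_t payload_len = ((size_t)record[1] << 8) | record[2];
    const unsigned char *payload = record + 3;

#ifdef VULNERABLE
    /* The bug: copy payload_len bytes even if the peer sent fewer,
     * leaking up to 64KB of adjacent process memory per request. */
    memcpy(response, payload, payload_len);
#else
    /* The fix: discard the message when the claimed payload (plus the
     * 1-byte type, 2-byte length, and 16 bytes of minimum padding)
     * does not fit in the record actually received (RFC 6520). */
    if (3 + payload_len + 16 > record_len)
        return 0; /* silently discard */
    memcpy(response, payload, payload_len);
#endif
    return 1;
}
```

The entire fix amounts to one bounds check, which is what makes Heartbleed such a sobering example.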

What sensitive information could be nearby? One possibility is decrypted communications from other web clients to the server; this is the most common thing found when using Heartbleed scanning tools. Usernames and passwords can easily be in the 64KB of memory sent back. So can the contents of web pages another user on the server has retrieved, which could contain personal information like bank balances or credit card numbers. Either way, data sent encrypted to the web server may be leaked.

Worse yet is the possibility that the 64KB of memory contains the web server’s private key. This short piece of secret data is used mathematically to prove that a web client is talking to the right server. An attacker who obtains it can impersonate the web server. It may also be possible to decrypt historical traffic to the web server with the private key, which would be highly useful to intelligence agencies and others trying to intercept secure communications.

Furthermore, an attacker is not limited to 64KB in total: the attack can be repeated without producing any discernible log entries, harvesting another 64KB of memory each time. An attacker is likely to run the attack many times to see what can be gleaned from the web server’s memory. Most likely they will obtain some usernames and passwords, but in the last few weeks attackers have also demonstrated the ability to retrieve a private key.

Ultimately, the maddening thing about Heartbleed for the security community is that we cannot know how bad the breaches have been. It is unclear whether anyone exploited the bug before its disclosure, because exploiting it leaves no discernible trace. The best-case scenario is that no one knew about the bug until recently.

The worst-case scenario is that many knew about the flaw, and that years of Internet traffic have been exposed to nefarious parties. Regardless of what happened, little can be done in practice after the fact. The best approach is to patch OpenSSL, change all web server private keys, and recommend that all users change their passwords.

Impact on embedded systems
So far the focus has been on web servers. How does this translate to embedded systems? First, let’s examine how these problems affect embedded devices that use OpenSSL. Second, let’s investigate some of the underlying causes and what they teach us about embedded design.

First, how are embedded systems with OpenSSL affected? The most pressing question for some is: my firmware contains a version of OpenSSL with the Heartbleed bug (versions 1.0.1 through 1.0.1f); do I need to release a new firmware image? The answer is yes. If your embedded device runs an Internet-facing server using OpenSSL, scanners will eventually find the device, and once found, critical data can be obtained from it by exploiting the Heartbleed bug.
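If you are unsure whether a particular build is affected, the OpenSSL headers make a compile-time check straightforward. Here is a minimal sketch using the standard OPENSSL_VERSION_NUMBER macro; the cutoff constants encode 1.0.1 and the fixed 1.0.1g release, and builds configured with OPENSSL_NO_HEARTBEATS omit the vulnerable code entirely:

```c
#include <openssl/opensslv.h>

/* OPENSSL_VERSION_NUMBER packs major/minor/fix/patch/status into one
 * integer: the 1.0.1 series starts at 0x10001000L and the fixed
 * 1.0.1g release is 0x1000107fL. Fail the build if in between. */
#if OPENSSL_VERSION_NUMBER >= 0x10001000L && \
    OPENSSL_VERSION_NUMBER <  0x1000107fL && \
    !defined(OPENSSL_NO_HEARTBEATS)
# error "OpenSSL 1.0.1 through 1.0.1f is vulnerable to Heartbleed"
#endif
```

Note that for firmware linking OpenSSL dynamically, the headers only tell you what you compiled against; the SSLeay() function reports the version of the library actually loaded at run time.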

However, what if the embedded device instead only uses OpenSSL in its client capacity? Is there cause for concern?

Unfortunately, there is trouble for OpenSSL clients as well. A more complex attack, called Reverse Heartbleed, targets OpenSSL clients that connect to malicious or compromised servers. While this attack is harder to carry out than regular Heartbleed, it’s still a good idea to update any firmware that uses OpenSSL for TLS.

These are just general guidelines; it may be worth having a formal audit as well. My company, among others, offers services to review embedded code and investigate what security risks exist in embedded designs.

The need for embedded security
Beyond the question of whether a design uses OpenSSL, the Heartbleed bug gives us a window into how much impact a simple coding error can have on system security, especially when systems are connected to the Internet. Over the last few weeks, web servers have been quickly patched, certificates have been reissued in a hurry, and users have been promptly told to change their passwords.

However, for embedded systems the timetable for vulnerability remediation is often longer. Testing cycles can take months to ensure regressions aren’t introduced. There may not be a simple way to deliver a patch, and even if there is, some embedded system users may be too conservative to adopt it in a timely manner. And this all assumes the device is reasonably recent; end-of-life devices may have only one patching mechanism available: the trash can.

Since updating embedded systems is more challenging, the need for embedded security is greater: a mistake has far more potential for long-lasting consequences. As an embedded community we need to be vigilant about security issues. It is important to stay educated and to think deeply about system security design.

Security Education
Security starts with a general awareness of computer security issues and theory.

Security goes beyond a computer science niche. When I teach undergraduate computer science, I am often reminded that embedded is one niche among many in the field. Security is often seen as another niche, but I would argue it should not be. Security is a core piece of computer science theory, just like data structures and algorithms. It is essential for the embedded community to keep abreast of the ways of thinking about secure coding and system design.

There are a number of great resources out there. For a book aimed at embedded developers, there is “Embedded Systems Security: Practical Methods for Safe and Secure Software and Systems Development” by David and Mike Kleidermacher. There are also prolific bloggers who keep up to date with security news; personally, I’ve found Bruce Schneier’s blog a good place to start.

Security, unlike basic algorithms, is fluid: new discoveries and new ways of thinking appear constantly. It is therefore always helpful to stay up to date with the latest security theory, vulnerabilities, and techniques.

One problem with some perspectives on computer security, though, is that the picture is often rendered incompletely. Some writers equate security with encryption. This is an incomplete view. Security should instead be seen as a system designed to ensure certain critical properties, for example: confidential communications, restriction to authorized users, and mitigation of the risk posed by faulty code.

Security Design
It’s essential to examine design patterns that help create secure systems. “Embedded Systems Security” suggests a design pattern for secure embedded systems called PHASE (Principles of High Assurance Software Engineering). PHASE consists of the following concepts:

Minimal implementation. Prefer the simplest solution to a problem in terms of code complexity, and keep reworking code to make it simpler.

This is especially true in embedded, where C is the predominant language. The C language offers a lot of power, but it has no built-in language primitives to prevent problems like buffer overruns and buffer overreads. Practicing minimal implementation reduces the amount of complex code, making subtle behavior more apparent.
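As a small illustration of what minimal implementation looks like in C, a bounded-read helper can concentrate every length check in one obvious place. This is a sketch with invented names, not code from any particular library:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical cursor over a received buffer. It tracks how many
 * bytes remain so every read is bounds-checked in one place. */
struct reader {
    const unsigned char *p;
    size_t remaining;
};

/* Copy n bytes out of the buffer, or fail visibly if fewer than
 * n bytes were actually received. */
static int read_bytes(struct reader *r, void *out, size_t n)
{
    if (n > r->remaining)
        return -1;          /* caller must handle the short read */
    memcpy(out, r->p, n);
    r->p += n;
    r->remaining -= n;
    return 0;
}
```

Had the Heartbeat parsing gone through a checked accessor like this, the length check would have had to be bypassed deliberately rather than omitted silently.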

At Green Hills Software, we wanted to build a web server that was immune to hackers. One of my engineering colleagues was able to create one in 300 lines of code. This was impressive, since the popular Apache web server is about 200,000 lines of code. At only 300 lines, we could clearly see how the web server behaved and whether the code was correct.

In the case of OpenSSL, the code size is similar to Apache’s: roughly 175,000 lines of extremely complex C code maintained by four main developers. That complexity was bound to cause maintenance problems at some point.

Component architecture. Build a system out of small components, where each component is well understood by an engineer working on the product, comes from a trusted source, or is used in a way that insulates it from system security. Also make sure these components are adequately separated from one another by the operating system, so that components are abstracted and use well-documented interfaces.

In the case of OpenSSL, the architecture is fairly monolithic: everything is linked into the application’s single process. This means the Heartbeat extension bug could expose not only OpenSSL’s own memory but also the memory of the server application using OpenSSL. On some embedded designs this could be substantially worse; if OpenSSL is linked into a monolithic kernel, it may have access to all memory.
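On a POSIX-style embedded system, one common way to get this separation is to run the protocol parser in its own process and communicate over a narrow channel, so a memory-disclosure bug in the parser exposes only the parser’s own state. The following is a minimal sketch of the pattern, not a description of any particular operating system’s mechanism; a real design would add error handling and a stricter message format:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main(void)
{
    int sv[2];

    /* The socketpair is the only channel between the two components. */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return 1;

    if (fork() == 0) {
        /* Child: the untrusted parser. It holds no secrets, so a
         * memory-disclosure bug here leaks only its own state. */
        char buf[256];
        close(sv[0]);
        ssize_t n = read(sv[1], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            /* ... parse the request here ... */
            write(sv[1], "ok", 2);
        }
        _exit(0);
    }

    /* Parent: holds the private key and other secrets, and hands the
     * child only the bytes it needs to parse. */
    close(sv[1]);
    write(sv[0], "request", 7);
    char reply[8] = {0};
    read(sv[0], reply, sizeof(reply) - 1);
    printf("parser replied: %s\n", reply);
    wait(NULL);
    return 0;
}
```

A separation-enforcing operating system achieves the same effect with kernel-guaranteed address spaces rather than fork(), but the principle, keeping secrets out of the parser’s reach, is the same.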

Least privilege. Components should be given access only to the resources they need. This includes system resources, but also access to other components.

In the case of OpenSSL, as mentioned above, the library has access to all the memory of the process that links it, and in some embedded designs possibly to all memory in the system. The system should be designed to eliminate this kind of unnecessary access.
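On a Unix-like target, one simple expression of least privilege is dropping root permissions as soon as privileged setup (binding a low port, reading a key file) is finished. A sketch, assuming a hypothetical dedicated unprivileged account named tlsproxy exists on the device:

```c
#include <pwd.h>
#include <stdio.h>
#include <unistd.h>

/* Drop from root to a dedicated unprivileged user once privileged
 * startup is complete. Order matters: group first, then uid, and
 * verify the change cannot be undone. */
static int drop_privileges(const char *user)
{
    struct passwd *pw = getpwnam(user);
    if (pw == NULL)
        return -1;
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)
        return -1;
    if (setuid(0) == 0)     /* must fail; success means the drop failed */
        return -1;
    return 0;
}

int main(void)
{
    /* ... bind sockets and load keys while still privileged ... */
    if (drop_privileges("tlsproxy") != 0) {
        fprintf(stderr, "could not drop privileges\n");
        return 1;
    }
    /* ... handle untrusted network input with reduced rights ... */
    return 0;
}
```

The point is that when the network-facing code is later compromised, it has nothing more than the minimum rights it needed to do its job.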

Secure development process. It’s not just about code and architecture, but also process and discipline. The process by which important code is written and validated is critical, and it should become increasingly conservative the more critical a component is to the security of the system.

Having multiple parties at a code review, with a clear understanding of what is being reviewed, is important. Adding tests when new requirements are added is also important, as it helps ensure that new code paths are exercised. Truly critical code should have not only 100% code coverage but also tests covering 100% of its requirements.

In the case of OpenSSL, the sad thing about the Heartbleed bug is that the failures were surprisingly human. Here’s how the code containing the bug made its way into OpenSSL:

1. A standard for the TLS Heartbeat extension (RFC 6520) is created as part of the IETF standards process.

2. One of the people who wrote this standard submits a patch to the OpenSSL maintainers in order to implement the TLS Heartbeat extension in OpenSSL.

3. An OpenSSL maintainer reviews the patch and ultimately commits it to version control, adding it to the next release. This commit was made at 1 AM on New Year’s Day (UK time), a bad time to commit potentially security-critical code.

4. OpenSSL was adopted in source form by a number of different Linux distributions and other products. They were, theoretically, supposed to also review the source change.

5. It is likely that no new tests were written for the TLS Heartbeat extension. The standard is short, and there are very few cases that need to be tested. One section of the standard requires that a HeartbeatMessage whose payload_length is too large MUST be discarded silently; a test of that requirement would likely have exercised and caught the Heartbleed bug, as the sketch below illustrates.
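As an illustration of requirement-driven testing, the following sketch pairs a minimal, fixed heartbeat handler with a test for exactly that clause of RFC 6520. The handler here is a hypothetical stand-in written for this example, not OpenSSL’s code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal stand-in for the code under test: a fixed heartbeat handler
 * enforcing the RFC 6520 length check. Returns the number of response
 * bytes, or -1 when the message is discarded. */
static int process_heartbeat(const unsigned char *rec, size_t rec_len,
                             unsigned char *resp, size_t resp_cap)
{
    if (rec_len < 3)
        return -1;
    size_t payload_len = ((size_t)rec[1] << 8) | rec[2];
    /* Type (1) + length (2) + payload + minimum padding (16) must fit
     * in the record actually received, or discard silently. */
    if (3 + payload_len + 16 > rec_len || payload_len > resp_cap)
        return -1;
    memcpy(resp, rec + 3, payload_len);
    return (int)payload_len;
}

/* Requirement-driven test: a request claiming a 16384-byte payload
 * while carrying only two bytes MUST be silently discarded. */
int main(void)
{
    unsigned char record[] = { 0x01, 0x40, 0x00, 'h', 'i' };
    unsigned char response[64];
    assert(process_heartbeat(record, sizeof(record),
                             response, sizeof(response)) == -1);
    return 0;
}
```

A test like this, derived directly from a MUST clause in the specification, would have failed immediately against the vulnerable code.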

Independent expert validation. Having outside experts review your code and system design is essential. The validation should happen at both the code level and the system level.

First, this validation may come from certifying bodies; we’ve found NIAP’s Common Criteria program helpful for validating the security properties of our INTEGRITY operating system. Programs like IEC 61508 and ISO 26262 help validate the correctness of code as it relates to safety. All these programs force a discipline and formalism that help ensure the system is correct and that haphazard changes are not made.

Second, outside consultants can be used to evaluate system security. When my colleagues and I do such consulting, we usually start with the overall system security architecture, as that is the easiest place to prevent minor coding errors from becoming catastrophic ones. The goal of such engagements is to understand the risks in the system and how to mitigate them through system design, coding, and testing practices.

In the case of OpenSSL, the premise that “many eyes” with access to the source code beget less buggy code failed. Many eyes examining code can be a good thing overall, but it does not guarantee security. There is a discipline employed in formal validation that is not employed in casual code review. The change that introduced the Heartbleed bug did have at least one code reviewer, but a more disciplined, independent validation would have had a higher probability of catching this type of error.

Conclusion
Would some of these recommendations have prevented a bug like Heartbleed? Probably.

A system better insulated against these kinds of failures, in both code and system architecture, would have made the damage less probable. A secure development process, along with outside validation and review, would have made the bug itself less probable as well.

When things are critical to the security of the system, vigilance is required. That vigilance is even more important in embedded systems, which are unlikely to be updated as often. The impact of Heartbleed should cause us to pause and reflect on how we go about securing our embedded designs.

Thomas Cantrell, Engineering Manager, Mobile Security, Green Hills Software, is an expert in networking protocols, embedded security, and a member of Green Hills Software’s IoT Security Advisors team. He presently works on building innovative solutions for Green Hills' mobile virtualization platform. He has also worked on the development of network security protocols and cryptography in various products at Green Hills. He teaches both in industry conferences and in the classroom, serving as an adjunct lecturer for various computer science classes at Westmont College in Santa Barbara. He holds a Bachelor of Science degree in computer science from Westmont College, Santa Barbara.
