In January 2018, computer security researchers disclosed two critical processor vulnerabilities that malicious programs could exploit to leak secure data: Meltdown and Spectre.
The engineering community and the public at large are accustomed to software vulnerabilities requiring frequent app updates or installation of operating system patches. These were different — hardware was the culprit, and hardware is not cheap to update.
The only practical approach is to release new software that, at the cost of making the system slower and less energy efficient, masks vulnerable hardware functions or avoids their use. Meltdown and Spectre sparked a series of investigations into hardware security.
Researchers have since unveiled numerous additional vulnerabilities, including Foreshadow, ZombieLoad, RIDL, and Fallout. These hardware flaws compromise the security of personal computers, smartphones, and even the cloud.
Figure 1: An attacker process extracts secret data from a victim process through a microarchitectural side channel based on a line fill buffer (LFB). Victim and attacker processes are in different security domains. (Source: S. van Schaik et al., RIDL: Rogue In-Flight Data Load)
What about embedded systems?
The common theme around processor vulnerabilities has been that modern, high-end implementations feature advanced performance optimization functions that, as it turns out, may be leveraged for nefarious purposes.
Embedded systems, on the other hand, often use relatively simple processor implementations and, as closed environments, should be more tightly controlled by the vendor. This very question came up during a panel on verification and compliance for open instruction set architectures (ISAs) at the DVCon US 2019 conference. Open ISAs, in particular RISC-V, but also MIPS and others, offer advantages over proprietary architectures and are attracting increasing attention from the semiconductor industry and the embedded devices community.
Embedded processors are used in many networked systems, such as factories, smart homes, internet of things (IoT) devices, medical devices, and consumer electronics, as well as in autonomous vehicles, aircraft, and other safety- and security-critical applications.
Contrary to common belief, embedded platforms run software from multiple and often untrusted sources. As examples, consider platforms that allow users to run third-party apps, or that run large software stacks sourced from multiple vendors and open-source libraries. To maximize hardware utilization and reduce cost, critical and non-critical applications are executed on the same physical processor. An automotive electronic control unit (ECU) could execute infotainment code alongside safety-critical functions on the same processor core, for instance.
Until recently, security efforts have focused mainly on the software stack, with hardware providing low-level functions such as a root of trust. The RISC-V Foundation puts emphasis on enabling implementation of secure platforms and mechanisms that prevent untrusted code from impacting the integrity of critical system functions. These security features are essential, for example, to authenticate software updates. In theory, everything should be fine: untrusted software can run only within its defined envelope, unable to break out or steal secrets from the secure enclave.
Vulnerabilities are not exclusive to high-end processors
Unfortunately, there is a complication.
Recently, computer scientists have unveiled a new type of attack, dubbed the Orc attack, that threatens simple processors commonly used in embedded applications. Crucially, the authors have demonstrated that minor implementation decisions may create or prevent severe hardware vulnerabilities. “The key point here is that even simple design steps, like adding or removing a buffer, can inadvertently introduce covert channel vulnerabilities in pretty much any processor,” says Mo Fadiheh, member of the Kaiserslautern-Stanford team that discovered the Orc attack.
Ultimately, Orc and other side-channel attacks break the isolation between privileged- and user-level domains. Encryption and secure authentication can be circumvented. Malicious agents can infer secret data, including passwords, social security numbers, and medical records. Exposing secret keys used to authenticate firmware updates could allow attackers to load their own code and execute it with higher privileges, or to replace certain functions in the operating system. A backdoor, or a kill switch for a denial-of-service attack, could be added. The possibilities are endless. “Theoretically, a hacker could use an Orc attack to assume control of an autonomous vehicle or to commandeer networked computers on the internet of things,” says team member Subhasish Mitra, professor of electrical engineering and computer science at Stanford University.
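To make the mechanism concrete, the following toy Python model sketches how a timing side channel can leak a secret through shared microarchitectural state. It is purely illustrative: the cache, the latency values, and the victim's access pattern are all invented for this sketch and do not model the Orc attack itself, but the principle (a secret-dependent access leaves a timing-observable trace) is the same one real attacks such as Flush+Reload exploit.

```python
# Toy model of a cache-timing side channel. All names and latencies are
# hypothetical; real attacks measure actual cache hit/miss latencies.

FAST, SLOW = 1, 100  # illustrative access latencies, in "cycles"

class ToyCache:
    """A trivially small shared 'cache' that tracks which lines are resident."""
    def __init__(self):
        self.resident = set()

    def access(self, line):
        """Return a latency: fast on a hit, slow on a miss; the line is
        loaded into the cache either way."""
        latency = FAST if line in self.resident else SLOW
        self.resident.add(line)
        return latency

def victim(cache, secret_bit):
    # The victim's memory access pattern depends on the secret:
    # it touches line 0 or line 1 according to the bit's value.
    cache.access(secret_bit)

def attacker_recover_bit(cache):
    # The attacker times accesses to both candidate lines; the line the
    # victim touched is resident and therefore fast.
    t0 = cache.access(0)
    t1 = cache.access(1)
    return 0 if t0 < t1 else 1

def leak_byte(secret_byte):
    """Recover a full byte, one bit at a time, without ever reading it."""
    recovered = 0
    for i in range(8):
        cache = ToyCache()            # attacker starts from an empty cache
        bit = (secret_byte >> i) & 1
        victim(cache, bit)            # victim runs, leaving a timing trace
        recovered |= attacker_recover_bit(cache) << i
    return recovered
```

Note that the attacker never dereferences the secret: it only measures how long its own accesses take, which is exactly why such channels bypass the architectural access controls described above.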
The industry is aware of these risks and is actively seeking solutions. Infineon, for example, was involved in research that contributed to the discovery of the Orc attack.
A systematic method to prevent hardware vulnerabilities
Proving the absence of microarchitectural side channels is complex. Hardware security verification goes beyond ensuring that ISA security features have been implemented correctly. Developing and analyzing a threat model is also insufficient, as it requires identifying attack scenarios in advance.
The same team that discovered Orc devised a powerful method to detect hardware vulnerabilities during design before mass production and deployment of ICs. Unique program execution checking (UPEC) systematically detects vulnerabilities that can derive from the processor microarchitecture and minor implementation choices.
Figure 2: UPEC creates two instances of the same computing system containing the same data, except for some protected secret data. Formal verification-based analysis reveals if it is possible to construct processes that, despite not being authorized to access the secret data, execute differently in the two systems. (Source: M. R. Fadiheh et al., Processor Hardware Security Vulnerabilities and their Detection by Unique Program Execution Checking.)
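The two-instance idea behind UPEC can be sketched in a few lines of Python. The toy machine, its instruction names, and the "leaky load" flaw below are all hypothetical inventions for illustration; real UPEC operates exhaustively on the RTL model with a formal property checker, whereas this sketch merely simulates two copies of a machine that differ only in the secret and compares what an unprivileged program can observe.

```python
# Minimal sketch of the UPEC two-instance check (hypothetical toy
# machine, not the actual UPEC formalization).

def step(state, instr):
    """One step of a toy two-register machine. 'LEAKY_LOAD' models an
    implementation flaw that copies protected data into observable state."""
    regs = dict(state)
    if instr == "INC":
        regs["r0"] += 1
    elif instr == "MOV":
        regs["r1"] = regs["r0"]
    elif instr == "LEAKY_LOAD":    # the flaw: the secret reaches r1
        regs["r1"] = regs["secret"]
    return regs

OBSERVABLE = ("r0", "r1")  # state an unprivileged process can read

def upec_check(program, secrets=(0, 1)):
    """Return True iff the observable trace is identical for every value
    of the secret, i.e. execution is 'unique' regardless of the secret."""
    traces = []
    for secret in secrets:
        state = {"r0": 0, "r1": 0, "secret": secret}
        trace = []
        for instr in program:
            state = step(state, instr)
            trace.append(tuple(state[r] for r in OBSERVABLE))
        traces.append(trace)
    return all(t == traces[0] for t in traces)
```

Under this sketch, a benign program such as `["INC", "MOV"]` passes the check, while any program containing `"LEAKY_LOAD"` fails it, because the two instances diverge in observable state. The strength of the formal approach is that it proves this for all programs and all secrets at once, rather than simulating examples as done here.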
“Orc demonstrates that serious flaws can result from seemingly innocuous design decisions chip designers make every day,” says Professor Mark D. Hill, a computer architecture expert from the University of Wisconsin-Madison. “With UPEC, designers can be much more confident that they will find and eliminate all potential covert channel flaws in their designs.”
Embedded systems require high-integrity ICs
In embedded systems, it is not possible to fully trust and control all layers of the software stack. Therefore, microarchitectural side-channel attacks leveraging vulnerabilities in the hardware implementation are a real threat to security even when using simple processor cores.
Open-source hardware based on the RISC-V ISA provides opportunities for a greater degree of security scrutiny. However, malicious agents can also perform a detailed analysis of the design and identify low-level vulnerabilities.
Once an embedded device is deployed, it is difficult and costly to replace the processor. The alternative is to prevent side-channel attacks by identifying hardware vulnerabilities before deployment. UPEC is a powerful technology for hardware security verification. Based on formal verification property checking, UPEC systematically identifies vulnerabilities in the hardware register transfer level (RTL) design model without relying on expert knowledge to guess where issues might be.
The current implementation of UPEC was built by leveraging IC integrity assurance products provided by OneSpin Solutions.
Security is a cornerstone of IC integrity, alongside trust, functional correctness, and safety. None are independent. Security vulnerabilities or hardware Trojans can compromise the safety of an autonomous vehicle, for example. IC integrity is of paramount importance to our digital society.
— Raik Brinkmann is a co-founder of OneSpin Solutions, as well as its president and CEO.
>> This article was originally published on our sister site, EE Times: “Side-Channel Attacks on Embedded Processors.”