Safeguard your FPGA system with a secure authenticator

In the 21st century, counterfeiters increasingly target electronic devices because the profit from selling copied electronics can be substantial. Furthermore, certain types of electronic components, like FPGAs and especially their attached peripherals and subsystems, can be relatively easy to copy. Consequently, all too often end users cannot be sure that they are using genuine OEM equipment. This is disappointing when the end user encounters substandard counterfeit devices that break down easily or do not work properly. But more gravely, consider the implications of using a substandard or inaccurately calibrated surgical device. The consequences can be deadly.

What about OEMs who see parts of their products being counterfeited? OEMs face revenue loss when their goods are counterfeited in the market. This phenomenon is not limited to emerging markets. Indeed, the supply chains for many electronic goods stretch worldwide, so counterfeit goods can find their way into virtually any supply chain.

This article outlines the general problems that counterfeiters pose for OEMs and end customers. It then turns to FPGA systems and explains how OEM system designers can address their concerns about the counterfeiting of FPGAs. Designers can secure their FPGA bitstream, protect IP, and prevent attached peripheral counterfeiting through use of secure authentication ICs that implement a SHA-256 or ECDSA algorithm.

Many reasons to counterfeit FPGA embedded devices
FPGAs and their attached components are especially susceptible to counterfeiting. Let’s begin with attached peripherals, subsystems, consumables, and sensors. A common business model is for OEM vendors to sell base equipment (e.g., the printed circuit board with the FPGA) at a very low gross margin or even at a loss; meanwhile they rely on sales of attached peripherals.

Gross margins on the attached peripherals are usually significantly higher and thus the attached components are targeted by counterfeiters. Think about printers and printer cartridges; gaming consoles and controllers; medical devices and attached disposable sensors.

The list goes on and on. For some of these peripherals or consumables, OEMs typically want to protect against reconfiguration or usage beyond the specified product lifetime. End users have been known to refill their printer cartridges, use batteries longer than their intended use, or use one-time-use medical sensors more than once which, in this case, can spread disease.

The FPGA itself is also a target for copying. Low-end FPGAs do not have any protection mechanisms against copying or theft of intellectual property (IP), and the motivations for copying are abundant. In modern supply chains, OEMs might wish to build their end product using third-party contract manufacturing. Subcontractors can be a useful extension of a supply chain; they can manufacture embedded systems efficiently and in a cost-effective manner. Unfortunately, unscrupulous contract manufacturers (CMs) have been known to build more widgets than contracted. They thus produce bootleg products of the same quality and apparent authenticity as the OEM’s. Indeed, by overbuilding, an unscrupulous CM freeloads on all of the R&D and marketing costs that the OEM incurred.

Besides copying, there are other ways that counterfeiters can short-change OEM vendors. Some FPGAs utilize a “soft feature” setting. For example, engineers will initially design and manufacture fully featured FPGA systems. Then, to save time and effort, and to achieve different price points or feature levels, they use the same base equipment but defeature certain aspects in firmware. This approach, however, creates a new problem: a smart customer who needs several fully featured systems could just buy one such unit and several cheaper units with reduced features. Then, by copying the firmware from the full-featured unit into the cheaper units, the customer makes them all behave like the full-featured unit at a lower price. This shortchanges OEM system vendors.

Sometimes companies create and sell reference designs (RDs), but not the physical hardware itself. These RDs can subsequently be licensed to and manufactured by third-party customers. Because the RD itself is not a physical product, its reuse is hard to track. Consequently, the developers of an RD require barriers and secure protection to prevent unauthorized use of their IP. For revenue reasons, these IP owners also want to track and confirm the number of RD uses. The bottom line for the RD developer is straightforward: what’s to stop the customer from underreporting how many widgets are built? How can they secure their IP?

The unprotected FPGA is easy to counterfeit
Before we examine security measures, let us first explain why it is easy to counterfeit an FPGA. Static-RAM-based (SRAM) FPGAs have few safeguards to protect proprietary IP (the configuration data) or the FPGA implementation against illegal copying and theft. Once the configuration data is loaded from the bit file, it is held in SRAM memory cells, which can easily be probed to determine their contents.

In addition, without some type of security mechanism to protect the configuration data or bit file before it is loaded into the chip, that data is open to snooping. Prowling through that data is possible because the bit stream is usually stored in a separate memory chip read by the FPGA at power-up when it loads its configuration pattern. A cloner could simply copy the configuration file and create clones of the original. These lower-end FPGAs do not have built-in encryption or a trusted boot that would otherwise protect the configuration file from copying.

One way to make the SRAM-based FPGAs a little more secure is to leverage multichip packaging and mount the nonvolatile (NV) memory inside a package along with the FPGA. Yet if someone opens the package, the data interface between the memory and the FPGA is exposed and the configuration pattern can be compromised. Multichip packaging can also be an expensive endeavor.

The structure of the configuration bit stream (i.e., the sequence of data elements and how they are coded and identified) is largely undocumented. The obscurity, complexity, and size of the bit stream make the reverse-engineering process difficult and time consuming, although theoretically possible. If successful, even partial reverse engineering of the configuration stream allows a thief to hack a set-top box to steal services or to tamper with powertrain settings in a vehicle, causing liability problems for the original manufacturer. Moreover, third-party consultants are available to provide the reverse engineering services for a fee.

High-end FPGAs protect internal IP with bit-stream encryption and a trusted boot. These protection mechanisms greatly mitigate the risk to the IP on the bit file. Furthermore, these FPGAs do not store sensitive data in insecure SRAM. Implementing these cryptographic security mechanisms can become cumbersome, however. Added manufacturing steps increase cost. More importantly, if these cryptographic keys are programmed during contract manufacturing, then the subcontractor will know the keys and the OEM can no longer be assured that their IP is safe.

How to combat counterfeiting

We have established the reasons for counterfeiting and explained at a high level how it is done. Now it is time to discuss how to prevent counterfeiting of FPGA systems. As we will see, there is a cost-effective solution to each of the problems described above. Designers can include secure challenge-and-response authentication in their FPGA systems.

Shared secrets in symmetric authentication. The basic premise is to embed a hardware authentication IC in the item that needs to be protected. If peripherals need to be protected, embed the authentication IC in the peripheral itself, in the peripheral cabling, or in the connector. If the FPGA itself should be protected, then embed an authentication IC on the FPGA’s PCB. In all instances, the FPGA can be considered the “host” and the authentication IC the “slave.”

Both the host and slave should be preprogrammed with a shared secret. Programming the secret should occur in a trusted factory environment. This is easy for the host side, as FPGA designers can embed the secret in the configuration file and use protections to ensure that the secret remains protected. On the slave side, the authentication IC can be preprogrammed in a trusted factory environment. Think of the secret as a password.

Due to the “magic” of modern cryptography, it is not necessary to expose the secret across the data line from host to slave. Indeed, DeepCover secure authenticators work by storing a secret in protected memory that cannot be read out. However, this secret can still be used as an input to a cryptographic algorithm, and the resulting output of the algorithm—which cannot be reversed to discover the inputs—will be shared with the host.

A test for authenticity takes place with a challenge-and-response sequence (Figure 1). The host sends a challenge to the slave and expects a specific response back from the slave. Then the host compares the actual response to what it expected. If these match, then authenticity is assured.

Figure 1: Testing for authenticity with a challenge-and-response sequence.

For this to work securely, the hash algorithm requires the following properties:

  • Irreversible; it must be computationally infeasible to determine the input by analyzing the output
  • Collision-resistant; it must be computationally impractical to find more than one input message that produces a given output
  • High avalanche effect; any change in input must produce a significant change in the output
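These properties are easy to observe with any standard SHA-256 implementation. The short Python sketch below flips a single input bit and counts how many of the 256 output bits change; the challenge strings are arbitrary illustrations:

```python
import hashlib

# Two challenges differing in a single bit (ASCII '0' = 0x30, '1' = 0x31)
a = hashlib.sha256(b"challenge-0").digest()
b = hashlib.sha256(b"challenge-1").digest()

# Count differing output bits; the avalanche effect makes this roughly
# half of the 256 output bits, even for a one-bit input change
changed = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
print(f"{changed} of 256 output bits changed")
```

Because roughly half the output bits flip for any input change, an attacker learns nothing useful about the secret by observing many challenge/response pairs.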

More detail on how the challenge and response works is shown in Figure 2. The letters (i.e., A, B, C) correspond to data flows. This is an illustration of “symmetric authentication,” which uses an algorithm like SHA-256. Here are the steps:

  1. Generate a random number and send it as a challenge (A) to the secure authenticator.
  2. Instruct the authenticator to compute a hash based on its secret key, the challenge A, its unique ID, and other fixed data. The hash is the output of the algorithm block (B).
  3. The FPGA computes a hash (C), based on the same input and constants used by the secure authenticator and the FPGA's secret key.
  4. Use the hash computed by the secure authenticator (B) as the response and compare it to the expected response (C).

If the expected response and the actual response are identical, then the FPGA knows that the authenticator is genuine. If genuine, your security goals—IP protection, counterfeit protection, etc.—are achieved.
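In software terms, the four steps above can be sketched as follows. This is a minimal illustration only; a real DeepCover authenticator keeps its secret in protected hardware and computes the hash on-chip, and the secret, device ID, and function names here are hypothetical:

```python
import hashlib
import hmac
import os

SHARED_SECRET = b"factory-programmed-secret"  # hypothetical; held by both sides
DEVICE_ID = bytes.fromhex("0102030405060708")  # authenticator's unique ID

def authenticator_response(challenge: bytes) -> bytes:
    """Slave side (step 2): hash over secret, challenge, and unique ID."""
    return hashlib.sha256(SHARED_SECRET + challenge + DEVICE_ID).digest()

def host_authenticates(send_challenge) -> bool:
    """Host side: steps 1, 3, and 4 of the flow in Figure 2."""
    challenge = os.urandom(32)            # step 1: random challenge (A)
    response = send_challenge(challenge)  # step 2: slave's hash (B)
    expected = hashlib.sha256(
        SHARED_SECRET + challenge + DEVICE_ID
    ).digest()                            # step 3: host's own hash (C)
    # step 4: constant-time compare of actual vs. expected response
    return hmac.compare_digest(response, expected)

print(host_authenticates(authenticator_response))  # True for a genuine part
```

Note that the secret itself never crosses the interface; only the challenge and the irreversible hash output are exchanged.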

Figure 2: Challenge-and-response authentication flows in greater detail. This process proves the authenticity of the hash originator—the secure authenticator.

Public key asymmetric algorithms. The security scheme described above is based on a symmetric system in which the FPGA host and authenticator IC slave share a common secret. This system has a potential vulnerability if hackers discover the secret on either side of the system. Secure authenticators are purposefully designed to resist such attacks on the slave side. But what about the host side in which an FPGA holds a secret?

To avoid the need to protect a host-side secret, designers can use a public-key, asymmetric algorithm like ECDSA (Figure 3). Such algorithms utilize a private key on the slave authenticator side of the system and a public key on the host side of the system.

The public key’s only function is to verify that data signed by the private key is authentic. It is computationally infeasible to discover the private key by analyzing the public key, and signed data cannot be generated with the public key alone. Thus, no hardware protections are needed on the host side of the system to protect a key or secret, and public key distribution to host systems is easier and less cumbersome.

Figure 3: Authentication flows for ECDSA. The private key is used by the authenticator to sign data known by both entities. The FPGA host verifies the signed data using the public key.
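The asymmetric flow can be sketched with the third-party Python `cryptography` package. This is an assumption for illustration only: in a real system the private key is provisioned at the factory and the signing happens inside the authenticator IC, never in host software.

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Authenticator side: private key (in hardware it never leaves the IC)
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()  # freely distributed to FPGA hosts

# Host sends a random challenge; the authenticator signs it
challenge = os.urandom(32)
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Host verifies with the public key alone; no secret is stored on the host
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("authenticator is genuine")
except InvalidSignature:
    print("counterfeit detected")
```

Because the host holds only the public key, even a fully compromised host reveals nothing that would let an attacker forge a valid authenticator.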

Counterfeit protection of a peripheral
To implement counterfeit protection for peripherals attached to an FPGA, a specific system configuration should be used (Figure 4). The FPGA and its corresponding security implementation reside in an embedded system. A peripheral device—perhaps a sensor, disposable, consumable, or another separate embedded system—is the object that a designer wishes to secure. A secure authentication IC resides in this peripheral device.

Figure 4: Block diagram for counterfeit protection of peripherals.

If the FPGA itself needs to be protected, then the hardware setup shown in Figure 5 should be used. The FPGA and its corresponding security implementation reside in an embedded system along with the secure authentication IC.

Figure 5: Block diagram for protecting the FPGA itself.

Today there are reference designs optimized specifically for DeepCover secure authenticators and specific FPGAs. We’ve put together these reference designs to shorten time to market and make it easier to incorporate authentication into an FPGA system. Designers can order the hardware: an authentication IC soldered on a Pmod board for easy interfacing with FPGA demo boards. Designers can also download the software files needed to implement the design within the FPGA demo environment. Files include schematics, layout, and perhaps most importantly, the reference firmware.

Designers of FPGA embedded systems face many potential security threats including counterfeiting of peripherals and copying of an FPGA implementation. Using a secure authenticator along with an optimized reference design protects FPGA systems from these potential issues.

Michael D’Onofrio is Embedded Security Business Manager at Maxim Integrated. He graduated from Duke University with a degree in Markets & Management and Public Policy.
