Protecting Binary Executables

Matt Fisher

February 01, 2000


Any company that develops embedded products must make a substantial investment of resources to get to the product stage. This article surveys the techniques available for protecting the resulting binary executable from hackers and/or reverse engineering by potential or current competitors.

Companies in the embedded industry usually make substantial investments of time, money, and intellectual resources to solve typical design problems. The result of a successful project is a product in which the company probably has a longer-term financial interest. Protecting embedded software from unwanted reverse engineering by potential competitors or hackers is generally of the utmost importance.

The purpose of this article is to survey some options for protecting binary executables stored in a product’s EPROM. This is an introductory, high-level treatment of the subject; additional techniques surely exist that I won’t cover in this survey. With this article I intend to foster awareness of the subject and to pique your interest to explore further if your circumstances warrant. The survey of protection techniques follows a brief overview of decompilation. Following the survey I’ll present two scenarios for which some of these techniques may prove worthwhile.

Decompilation

Most ESP readers are familiar with the compilation process. Compilation enables programmers to write software in a high-level language, expressing program flow in a way that is closer to how the human mind works, at a higher level of abstraction than what today’s microprocessors can handle. The high-level language/compiler combination reduces that abstraction to a set of specific instructions the microprocessor can understand.

Decompilation, then, is simply the reverse process: taking the low-level instructions executed by the processor and backing out the higher-level intent of the human programmer. In other words, decompiling generates source-level code from the executable object code embedded in a product’s memory; it is a matter of reverse engineering the software.

Certainly the high-level intent of an application can be at least partially deduced by simply using the product. The user interface, system inputs and outputs, and general product behavior can be readily observed by any end user. Additionally, the architectures and instruction sets for commercial microprocessors are publicly available. Therefore, we can assume that someone who is familiar with your product and the microprocessor it uses already knows a good deal about your application, whether or not they have access to the software. This person is familiar with the processor’s addressing modes, on-chip peripherals, address space, and so forth. So even though real-time interactions and algorithms are “hidden” in the object code, they can be recovered to some degree (maybe to a large degree) by decompiling the executable object code, which is often stored in a read-only memory (ROM) chip. This information could then be used to your competitive disadvantage.

What follows is a survey of possible techniques to discourage the would-be reverse engineer. Almost all of these techniques will only slow down the decompiling effort; they will not keep the reverse engineer from succeeding, given enough time, resources, and desire.

Methods of protection

Use a copyright notice in the binary file. This is certainly not a technical protection scheme — only a legal one. But including an ASCII copyright notice in the binary image at least notifies anyone reading the image that the software is legally protected and should not be tampered with. A copyright notice will not hinder an ethically challenged competitor or hacker, but it only takes up a handful of bytes in ROM, so little reason exists not to include it.
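As a minimal sketch, the notice can simply be a named constant compiled into the image. The text and names here are hypothetical, and depending on your toolchain you may need a linker option or attribute (or a live reference, as below) to keep an unreferenced array from being discarded:

```c
/* Hypothetical notice; the exact wording is up to your legal team. */
static const char copyright[] =
    "Copyright (c) 2000 Example Corp. All rights reserved.";

/* Returning a pointer gives live code a cheap reference, so the
 * linker has a reason to keep the string in the binary image. */
static const char *copyright_notice(void)
{
    return copyright;
}
```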

Use a checksum value over the whole application. This method allows you to detect modifications to the executable image. You’ll need a boot program that stores the correct checksum for your binary image. The boot program itself should reside in nonvolatile, protected memory, preferably inside the microprocessor itself if it has on-board ROM of some type.

The boot program must also be able to calculate a checksum from the ROM image, and compare the result to the stored value. If the two values differ, the boot program inhibits execution of the application. This method offers no protection against reading and decompiling the code; it only protects against putting an altered ROM in the product and having the product still function.

Note that neither the boot program nor the stored checksum should be part of the ROM image that you’re trying to protect. If the boot program can be located and decompiled, then this method offers no protection at all. That’s why locating the boot program within the microprocessor itself is a good idea — the program is much more difficult to retrieve that way. Many of today’s microcontrollers offer some on-board ROM or flash which can be used for this purpose. The checksum can be stored in EEPROM or flash.
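The boot-time check might look like the following sketch, assuming a simple additive 16-bit checksum; the function names are hypothetical, and a real boot program would fetch the stored value from on-chip EEPROM or flash rather than take it as a parameter:

```c
#include <stdint.h>
#include <stddef.h>

/* Sum every byte of the application image. */
static uint16_t rom_checksum(const uint8_t *image, size_t len)
{
    uint16_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += image[i];            /* simple additive checksum */
    return sum;
}

/* Returns nonzero if the image may be executed; the boot program
 * inhibits the jump to the application on a mismatch. */
static int boot_image_ok(const uint8_t *image, size_t len, uint16_t stored)
{
    return rom_checksum(image, len) == stored;
}
```

A stronger function (such as a CRC) can be substituted for the additive sum without changing the structure of the check.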

Encrypt string data. If your product has any sort of human user interface, the software likely includes text strings. Compilers typically store the string data in ASCII format, and by default place this string data into constant data sections in the binary image. Many binary file editors will display the ASCII interpretation of the raw hex data, so visually scanning the binary file and finding the address of each string is a simple matter. Once you know the addresses, you can search the executable code to see where they’re used and draw some conclusions about what each section of code does.

One protection technique, then, is to encode text strings so they don’t show up as readable ASCII in the binary image. (The encoding takes place outside of your application software, so that only the encoded representations appear in the binary image.) On a character-by-character basis, each character in the string can be mathematically manipulated in some reversible way (for example, XORing with a constant eight-bit “key”) before being stored. The character must then be decoded before being sent to an output device. Or the encoding can take place on groups of characters.

In either case, you’ll have to modify your language’s output facilities, or perhaps write your own from scratch. In C, for instance, printf() has no inherent decryption capability, so you might choose to write your own version of printf() that knows how to turn an encoded string into legitimate ASCII data at run time.

If you are going to use string encryption, it’s good practice to store the critical information required to decode strings in the microprocessor’s internal memory. For example, if you XOR each character with an eight-bit key, the key should be stored within the microprocessor, so that it does not appear anywhere in the binary image.
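A sketch of the idea follows, using XOR as the per-character operation because it is reversible (a bitwise AND would destroy information and could not be decoded). The key value and routine names are hypothetical, and in a real design the key would live in on-chip memory rather than in a source-level constant:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical eight-bit key; a real design stores this inside
 * the microprocessor, not in the ROM image. */
#define STRING_KEY 0x5Au

/* XOR is its own inverse, so one routine serves both the
 * build-time encoder and the run-time decoder. */
static char decode_char(uint8_t c)
{
    return (char)(c ^ STRING_KEY);
}

/* Decode an encoded string into a caller-supplied buffer; a
 * custom printf()-style routine would call this before output. */
static void decode_string(const uint8_t *enc, size_t len, char *out)
{
    for (size_t i = 0; i < len; i++)
        out[i] = decode_char(enc[i]);
    out[len] = '\0';
}
```

With this key, “HELLO” would appear in ROM as the bytes 0x12 0x1F 0x16 0x16 0x15, giving a binary file editor no readable ASCII to display.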

Write your own operating system. If your application includes a commercial OS, certain “signatures” appear in the binary image. The OS must include a start-up sequence, which is probably well documented by the OS vendor. Other run-time facilities will show up in the executable, and once these are found their interaction with your application code can be more quickly deciphered. And if the OS code is well designed and well coded, a decompiler program might have little trouble reconstructing the source code for OS functions.

If you use your own OS, a reverse engineer might need more time to figure out how your run-time environment works. However, writing your own run-time environment may not always be an option: project schedules or legacy designs may dictate that you use a commercial OS. A home-grown kernel may only be viable for small projects with modest memory requirements and schedules that can absorb the extra development time.

Scramble address lines through extra logic. Another technique to hamper decompilation is to scramble address and/or data lines on your product’s PC board(s). Instead of routing the address and data lines directly from the microprocessor to the ROM, you can insert some extra logic in between. An EPLD or FPGA can be a good source for such logic. So the address put out by the microprocessor, for example, will not be the physical ROM address to which an access is made. The ROM image will appear nonstandard, since the addresses have been essentially encoded. This fact alone might tip off a reverse engineer that something has been done to protect the code.

This method has other drawbacks, though. If your product can be instrumented with a logic analyzer, a reverse engineer can observe the “real” addresses and data being read or written by the microprocessor. So while a decompiler may have a hard time with the ROM image, instruction sequences can still be determined at run time. In addition, you have the overhead of designing the address/data line scrambling logic yourself, and you may need some type of special utility to create your ROM-able image, since linkers generally won’t support this type of operation.
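The build-time side of this scheme can be sketched as follows, assuming hypothetical glue logic that simply swaps address lines A0 and A1; a real EPLD or FPGA design could permute many more lines. The function names are made up:

```c
#include <stdint.h>

/* Swap address bits A0 and A1, mirroring a swap performed by the
 * glue logic between the CPU and the ROM. A swap is its own
 * inverse, so the same function describes both directions. */
static uint32_t scramble_addr(uint32_t addr)
{
    uint32_t a0 = (addr >> 0) & 1u;
    uint32_t a1 = (addr >> 1) & 1u;
    return (addr & ~3u) | (a0 << 1) | a1;
}

/* Build-time utility: place each byte of the linker's image at the
 * location the scrambled bus will actually present to the ROM.
 * len must be a multiple of 4 so the permutation stays in range. */
static void build_scrambled_image(const uint8_t *in, uint8_t *out, uint32_t len)
{
    for (uint32_t a = 0; a < len; a++)
        out[scramble_addr(a)] = in[a];
}
```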

Replace library functions. This method follows the same philosophy as using your own kernel. Since programming languages have standard libraries of functions, you could choose to not use these functions and instead write your own. This would make searching for common functions, such as printf(), more difficult. But this adds a significant overhead to your design cycle, since you have to reinvent the wheel.
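As a minimal sketch, a replacement might look like the following; the name txt_span() is made up, and the point is simply that an in-house routine lacks the well-known signature of strlen() that pattern-matching tools latch onto:

```c
#include <stddef.h>

/* Hypothetical in-house replacement for strlen(). In a full
 * library, printf() and friends would get the same treatment. */
static size_t txt_span(const char *s)
{
    const char *p = s;
    while (*p)
        p++;
    return (size_t)(p - s);
}
```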

Write lousy code. This option may seem ridiculous, but it might in fact slow down the potential reverse engineer. The more convoluted a software design is, the more difficult it is to figure out, even at the source-code level (especially if it is also poorly documented). There are myriad ways code can be made “lousy,” and I won’t begin to offer suggestions for writing lousy code here.

I’m not really suggesting that this is a viable protection scheme, since it flies in the face of every software methodology written. If the goal is to write well-designed code for easier maintenance, then deliberately producing poorly designed code in an attempt to protect it seems silly. However, critical algorithms may be obfuscated somewhat to hamper their decompilation and interpretation.
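As one small illustration of the idea (not a recommendation), here are two equivalent ways to compute the average of two unsigned values; the second form is correct, and even avoids overflow, but is considerably less obvious at first glance:

```c
/* The transparent version. */
static unsigned avg_clear(unsigned a, unsigned b)
{
    return (a + b) / 2;
}

/* An equivalent but obscure version: the shared bits plus half
 * of the differing bits. Also immune to overflow in a + b. */
static unsigned avg_obscure(unsigned a, unsigned b)
{
    return (a & b) + ((a ^ b) >> 1);
}
```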

Implement critical algorithms in silicon. This is perhaps one of the most realistic options for product protection. If your application contains proprietary algorithms that are crucial to understanding how your product really works, implementing those algorithms in hardware rather than as code may be a good idea. Even if someone successfully decompiles your ROM, key pieces of the logic will be missing. For many applications, this may be enough protection to keep your product from being copied or modified.

Seal the electronics. The circuit board that includes your ROM could be encased in some sort of hard plastic sealant (potting or conformal coating) to make access to the device impossible. A would-be reverse engineer would have to physically destroy the product to get at the software. While this method may offer suitable protection against decompiling, it simultaneously presents a maintenance problem for you—the circuit board cannot be legitimately accessed to repair or replace the ROM and any other parts encased in sealant.

Use a home-grown microprocessor. If you have the time, resources, and need, designing your own microprocessor would provide a high level of protection against code decompilation. Assuming the architecture and instruction set for your processor are not publicly documented, the ROM image should be meaningless to anyone not familiar with your design. But this method is a large commitment, since you not only have to design the processor, you also have to create any compilers, assemblers, linkers, and debuggers to work with it.

However, an alternative does exist. Some microprocessor core companies provide licenses for their cores that include development tools to modify existing instructions, or add your own new instructions. The processor development tools allow you to generate a new core, plus they can generate a compiler for software development on your specialized processor. (Tensilica’s Xtensa Processor Generator is one example of such a configurable processor environment.) As a result, you can create your own unique instructions to implement critical algorithms which need special protection. Burn an FPGA with your modified core and you have a working prototype for software development. The final microprocessor can then be fabricated as an ASIC.

Two scenarios

Let’s discuss two basic scenarios that might warrant the protection of a binary executable. In the first scenario, we must keep someone from modifying the code in a particular device so that service features cannot be obtained without payment. In the second, we need to thwart decompilation itself, to protect intellectual property rights in a competitive environment.

The first scenario is exemplified by various communications services, such as cable and satellite television or the caller ID telephone feature. In both cases the service provider supplies a customer with special equipment. In the cable TV case, the customer needs a cable TV converter box; with caller ID, the customer needs a display unit which indicates who is calling. The service provider often charges relatively little for the hardware itself, since it isn’t the hardware that generates long-term revenue. The hardware simply serves as a gateway for a customer to access information (the cable program or the caller ID), and it is this information for which the customer must pay.

In these two cases, we are interested in protecting the firmware running in the customer’s hardware from being altered in a way that enables the customer to get access to the service without paying for it. But it isn’t necessarily important to keep someone from decompiling the software to understand how it works — no revenue will be lost unless the software is modified in an attempt to circumvent billing.

The manufacturer of the cable converter or caller ID unit could employ the checksum technique to ensure that the embedded software cannot be altered and still operate. If packaging limitations allow, the electronics could be encased in sealant to make access to the ROM difficult or impossible without destroying the unit. Better yet, both methods can be used at the same time to offer increased protection. Even if a hacker were willing to physically damage a unit to gain access to the binary executable, he would be unable to modify the code and still use the unit to obtain service.

The second scenario leans more toward the protection of intellectual property and the threat of bona fide competition in the marketplace, not just casual hacking. Any product, particularly those that are cutting-edge in their specific industries, can be a target of reverse engineering by an unethical competitor. In these cases it may be worth the cost to implement some of the more aggressive protection techniques listed in the survey. In particular, the home-grown microprocessor technique may prove to be a viable option.

Consider a hypothetical example: let’s say your company has developed a new type of portable medical diagnostic device that allows patients to monitor some aspect of their own health in real time. Your company is pursuing a patent on the technology, but you need to get the device to market as quickly as possible. In this case, especially if the potential market is large, you may decide to modify an existing processor core to hide key aspects of your algorithms. In this way potential competitors will not be able to unlock the secrets of your application unless they first get information about your specialized processor and the instruction set it uses.

Weighing the costs

The code protection techniques that aren’t too costly to implement can be somewhat readily overcome by competitors, while those techniques that are more iron-clad may require a substantial investment. It may be that the cost to implement such an iron-clad method overwhelms the “normal” cost of development for the product. The bottom line is, if someone is willing to spend the time and money, little can be done to stop reverse engineering of embedded code. Even if you use custom hardware, there are clever engineers who understand a commercially available CPU well enough to make logical (and often correct) assumptions about the details of your design.

A custom microprocessor with an unpublished instruction set, or physically sealing the electronics in some manner, might satisfactorily protect the software, but these may not be viable solutions for many applications (particularly in smaller organizations with smaller development budgets). Ultimately, the decision of whether or not to put protection schemes in place may in fact be a business decision and not a purely technical one. The company must weigh the cost of implementing code protection vs. the potential risk of intellectual property infringements or loss of service revenue, and decide what is best for its particular situation.

Matt Fisher is a software engineer who has been working with embedded systems for the past five years. He has experience in the defense, telecommunications, and electronics industries, and holds a BS and MEng in electrical engineering from Rensselaer Polytechnic Institute in Troy, NY. He can be reached at
