The hacker’s nightmare cipher

As recently as 20 years ago, many microcontrollers still used OTP memory, and it was uncommon for embedded systems to support firmware updates. Today’s microcontrollers have flash memory, and most have minimal built-in bootloaders that allow easy reprogramming. For a long time, software updates were distributed as plain hex files, as produced by most compiler/linker toolchains. Depending on the microcontroller, these hex files were programmed via UART, CAN or other communication interfaces. There was no encryption or authentication; anyone with access to the communication interface could reprogram the microcontroller.

Today, many embedded systems have wireless interfaces. Sometimes wireless capability is added to systems designed years ago, when no one envisioned that remote access would expose them to attack. As published hacks have shown, once an intruder has gained access at this level (as with internal UART or CAN communications), microcontrollers can easily be reprogrammed with malicious code.

To address this vulnerability, many microcontrollers now feature security peripherals and functions that protect the bootloading/code-loading mechanisms. Cryptographic methods protect the software/firmware load processes with encryption and authentication. We at Embedded Systems Academy have introduced CANcrypt, a security framework for CAN communication.

However, the cryptographic methods commonly used for Internet communication can’t always be easily adapted to microcontrollers that don’t have security peripherals. A pure software solution might require too many resources (code or execution time) to fit an existing system. So what can we do to protect our minimalistic microcontroller devices from getting flashed with malicious code?

We want to protect our code even if the microcontroller has no security peripherals. We need a lightweight solution that is good enough not to be easily hacked. In recent years, I have seen various home-brew bit-shifting and byte-mangling schemes that offer some protection.

However, when it comes to hacking encrypted hex files, such solutions share a downside with more common cryptographic methods like AES: even though the ciphertext is garbled, an attacker has a pretty good idea what the plaintext looks like. It will be either hex format or binary machine code with recognizable patterns. When guessing keys or running brute-force trials, the attacker can tell immediately whether a decoding attempt succeeded, simply by checking whether the output is a valid hex file or contains the expected patterns.

Luckily, it’s easy to make a hacker’s life more difficult: just ensure that both the plaintext and the ciphertext look alike! If the encoded hex file looks just like another hex file with valid code (and can be flashed as such), then to truly determine if a decryption method was successful, an attacker needs to erase and re-flash the target device and give the program a try.

As a result, key guessing or brute force hacking is no longer an interesting option. Not only do the individual tries take too long, there is also the issue of the flash memory’s lifetime. Flash memory typically has a limited number of erase/write cycles so you might not have as many tries as you need for a brute force attempt.

That leaves one question: how do we encode the plaintext, the hex file itself?

Well, it is time for the title of this article to kick in: What would cause the possible attacker the biggest nightmare? Yes of course: BUGS! And more BUGS!

To protect your software, simply introduce real bugs to your code. Weird concept, you say? Not if you do it in an automated, controlled way, based on a shared key only known to the sender and the receiver. It can be so easy and fun to purposely introduce bugs; there is so much room for creativity. I prefer the subtle ones: here and there replace a “branch on greater or equal” with a “branch on greater”. You can also modify branch and jump destination addresses or immediate values or switch registers; the options are endless…
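To make the "branch on greater or equal" example concrete: in the 16-bit Thumb encoding, a conditional branch is 0b1101, followed by a 4-bit condition field and an 8-bit offset, with GE encoded as 0b1010 and GT as 0b1100. A minimal sketch of one such substitution (an illustration, not our actual tool) could look like this:

```c
#include <stdint.h>

#define COND_GE 0xAu   /* "greater or equal" condition code */
#define COND_GT 0xCu   /* "greater" condition code */

/* If 'insn' is a 16-bit Thumb conditional branch (0b1101 cond imm8)
 * with condition GE, rewrite it as the same branch with condition GT,
 * i.e. turn "branch on greater or equal" into "branch on greater".
 * All other instructions pass through unchanged. */
static uint16_t swap_bge_to_bgt(uint16_t insn)
{
    if ((insn & 0xF000u) == 0xD000u &&       /* conditional branch? */
        ((insn >> 8) & 0xFu) == COND_GE)     /* condition field is GE? */
        return (uint16_t)((insn & 0xF0FFu) | (COND_GT << 8));
    return insn;
}
```

Applied to `0xDA05` (BGE with a small forward offset), this yields `0xDC05` (BGT); anything that is not a BGE is left alone, so the surrounding code image stays byte-for-byte identical.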

If you make these changes to a hex file, make sure that all checksums in the file are modified accordingly. You still want the hex file to have the correct format; otherwise it would be easy to spot the substitutions.
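For reference, the Intel HEX record checksum is the two's complement of the sum of all record bytes (length, address high/low, record type and data), so after any substitution it can be recomputed like this:

```c
#include <stdint.h>
#include <stddef.h>

/* Recompute an Intel HEX record checksum. 'record' holds the decoded
 * bytes of one record (length, address high/low, type, data), with
 * the checksum byte itself excluded. The checksum is the two's
 * complement of the 8-bit sum of those bytes. */
static uint8_t ihex_checksum(const uint8_t *record, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += record[i];
    return (uint8_t)(0x100u - sum);
}
```

With the checksum restored, the modified record still parses as a perfectly valid hex record, which is exactly the point: nothing in the file format betrays the substitution.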

For example, assume you use a 32-bit shared key for encoding and decoding. Work your way through the hex file and identify a spot for potential modification, for example, an immediate value or a branch address. Then based on the key and a counter (just counting through the bytes of the hex file), decide if and how to modify the value. Sometimes less is better, as huge changes can be obvious. Compilers generate code in established ways, so someone with good compiler/assembler knowledge might spot the more obvious introduced bugs.
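The key-and-counter decision can be sketched as follows. The mixing function below is an arbitrary illustrative choice, not the scheme we actually ship; any small keyed mixing function will do, as long as encoder and decoder agree on it:

```c
#include <stdint.h>

/* Derive a per-position byte from the 32-bit shared key and a running
 * byte counter. This is an illustrative integer mix (multiply by the
 * 32-bit golden ratio, then xorshift/multiply), chosen only because it
 * spreads consecutive counters across the output range. */
static uint8_t keystream_byte(uint32_t key, uint32_t counter)
{
    uint32_t x = key ^ (counter * 0x9E3779B9u);
    x ^= x >> 16;
    x *= 0x045D9F3Bu;
    x ^= x >> 13;
    return (uint8_t)x;
}

/* Decide whether the candidate spot at this position gets mutated.
 * A low threshold keeps the changes sparse: 13/256 is roughly 5%,
 * i.e. on the order of 50 modifications per kilobyte. */
static int should_mutate(uint32_t key, uint32_t counter)
{
    return keystream_byte(key, counter) < 13u;
}
```

The decoder runs the same function over the same counters, so it knows exactly which spots were touched and how to restore them; no information about the modification positions needs to be stored in the hex file itself.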

But the wonderful thing is: could an attacker who successfully found and removed some bugs ever be sure that all introduced bugs were found?

And if you want to make it even harder for them, use a code obfuscator before running your encoding scheme. Then all relations to compiler-generated code vanish.

At Embedded Systems Academy we have used variations of this method for a while now (primarily ARM Thumb-2 code) for custom CAN/CANopen based bootloaders. The resources required are minimal; the added code overhead to an existing bootloader can be below 100 bytes, and the delay for decoding can be close to unmeasurable. Where needed, the security key can also be combined with a serial number, allowing you to provide device-specific encoded hex files.

Today, whenever we start the utility to generate an encoded hex file, we smile, almost feeling pity for the poor soul who attempts solving this by debugging. Ah well, how bad could it be? Maybe you want to give it a try? We could provide you a few examples. After all, we only introduce about 50 or so really subtle and tiny bugs… per kilobyte of code.

Before happily adapting this method, here are some things you should be aware of:

How much of the code remains real plaintext?
As stated above, our current version introduces some 50 changes per 1000 bytes of code, so about 95% remains unchanged. A hacker who can compare multiple encodings of the same code can find some of the bugs. Here again, an obfuscator can help, ensuring that the differences between file versions are more than subtle.

What happens to strings?
In the version described here, all strings remain unchanged, so if you have secret information in strings, you would need to treat them separately.

What happens if code gets flashed that is not decrypted correctly (for example, using the wrong key)? Will the code be full of bugs?
For any loadable code you should have a higher level of protection (for example, your own bootloader) that only allows the flashed code to execute if it is authentic. This level should perform some non-obvious checksum/hashing test. I say non-obvious so as not to give attackers something to look for when they try to decode an encrypted hex file.
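Such a gate can be as small as hashing the flashed image and comparing against a value stored alongside it. The sketch below uses 32-bit FNV-1a purely as a stand-in for whatever non-obvious check a real bootloader would use; the constants and the image layout are assumptions for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* 32-bit FNV-1a hash over the flashed image. */
static uint32_t fnv1a(const uint8_t *data, size_t len)
{
    uint32_t h = 0x811C9DC5u;            /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x01000193u;                /* FNV prime */
    }
    return h;
}

/* Only report the application as runnable if its hash matches the
 * expected value stored with the image; otherwise the bootloader
 * stays in control and the (possibly bug-riddled) code never runs. */
static int app_is_authentic(const uint8_t *image, size_t len,
                            uint32_t expected)
{
    return fnv1a(image, len) == expected;
}
```

Code decrypted with the wrong key fails this check, so the introduced bugs are never executed on a correctly protected device; they only waste the time of someone analyzing the encoded file offline.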

How are the keys exchanged?
This scheme uses a symmetric key (the same key is used to encode and to decode), and at some point the key needs to get into the embedded device. As with other security solutions, some key management will be required. A simple approach is to make the key part of the bootloader; this way your key management is limited to keeping track of which key is integrated with which bootloader version. For more advanced key management systems, including CAN-based key generation, see my book Implementing Scalable CAN Security with CANcrypt.

Is it perfect?
Of course this method isn’t foolproof. One downside is that you must protect your encryption utility. As soon as hackers have access to it, they can analyze its code or try different inputs to learn exactly what it does. Even so, we sure wish all hackers working on files encoded with this method a good night’s sleep!

Olaf Pfeiffer studied technical computer science at the Cooperative University in Karlsruhe. He is one of the founders of the Embedded Systems Academy. Together with his partners, he wrote the books Embedded Networking with CAN and CANopen and Implementing Scalable CAN Security with CANcrypt. He is a regular speaker at the international CAN Conferences and other events. Olaf is chairman of the CiA 447 standardization group that defines how CANopen is used in special purpose vehicles (including police cars, taxis) and for automotive accessories.
