Generating random numbers

Eric Uner

May 24, 2004

Randomness can be a Good Thing. If your system generates truly random numbers, it can avoid and withstand network packet collisions—just one of many applications. Here's what you need to know about random numbers.

Most embedded systems engineers and developers treat the ability to generate random numbers like the air we breathe: we never notice how badly we need it until it isn't there, and we're far too unconcerned with the quality of what we have. Maybe I'm extending an already stretched metaphor, but embedded systems need good random numbers as much as we need good, clean air. Not being a biologist or chemist, I can't tell you what "good" means pertaining to air, but I can show you what a "good" random number is and how to generate one.

But first let me support my metaphoric comparison by offering a couple of examples.

Going random
It's 3:00 AM, and Mother Nature has unleashed a severe storm on the Midwest. The Chicago suburbs are hit with a massive power outage, and a quarter of a million homes lose power. It's not terribly inaccurate to assume most of those homes have cable TV. At 3:30 AM, crews have restored power, and 250,000 cable set-top boxes are ready to boot up. They begin by turning on at full power and broadcasting to find their network. They can't find their networks, however, because all those devices are draining the cable system and preventing it from providing adequate power to most units. Consequently, boxes begin to fail. This power-up problem and other issues have resulted in the common designs you see today—a box that has to be plugged into a wall outlet and even requires a phone line for out-of-band management. What this system really needed was a good random number generator (RNG). That way, all devices could have spread out their boot-up over the course of some interval in time, hence putting a much smaller strain on the infrastructure.

A more commonly encountered example (unless your power goes out as often as mine) is the random back-off that occurs when network packets collide or the random delay before transmission as part of a collision-avoidance mechanism. The physical network may be wired (Ethernet), wireless (802.11), or any network where multiple subscribers access a shared resource simultaneously.

Let's take a simple example of your desktop computer sending a TCP packet over an Ethernet cable at the same time as your peer's desktop computer. This happens frequently on busy networks, but let's say, as an example, that you and your friend are the only ones with computers on the network, and your computers both transmit at the same time. As Figure 1 shows, this causes a collision, which can result in data loss or corruption.


Figure 1: Network packets colliding

The standard Carrier Sense Multiple Access/Collision Detection (CSMA/CD) protocol used by Ethernet specifies that each computer should now wait for a random time before retransmitting. The way it's supposed to work is that the computers generate random numbers that are different from each other, so eventually one computer transmits before the other. Let's assume that your network cards use an augmented binary exponential back-off algorithm, as many of them do. This back-off algorithm, shown in Equation 1, chooses a random number, r, and waits that number of milliseconds before trying again. If that doesn't work, the algorithm doubles the previous wait, adds a fresh random value, and tries again.

Equation 1:

t(n) = 2t(n-1) + r,  with t(0) = r

Where:
t(n) is the time to wait before trying again
n is the number of tries, n in [0, ∞)
r is some random integer (a fresh value on each try)

Say both computers choose the same value for r. The problem obviously continues. Now let's say neither computer has a good RNG, so they choose the same number again, having locked onto the same algorithm to generate the back-off time. The problem will never go away.

Aside from the networking applications of good random numbers, there are hundreds of other applications, from point-of-sale systems that print random coupons, to games (those incoming aliens need a starting position), to simulation.

Random number generation is also vital to cryptography. At the core of a good crypto implementation is the ability to generate secure random keys, like those your Web browser generates when establishing an SSL connection. As we'll see later, cryptographic applications depend more on pseudo-random number generators, or PRNGs.

There are many different kinds of random numbers. Now that you're sufficiently convinced that you need a good RNG, let's explore the two primary types.

Two generators
Two families of algorithms are used to generate random numbers: linear and nonlinear. And you'll care about only two types of random numbers: truly random and pseudo-random. I can't overstate how important it is to understand the difference between these two random number types.

Truly random
Random number generators (RNGs) generate numbers that are said to be truly random. The mathematical definition of a truly random number is a number that can't be described by any number smaller than itself. Quick, think of a number. Was it five? (That's what I always pick, though seven is most popular.) The number you chose was not a random number per se, because I can describe your choice with a set of probabilities. I never mentioned the range one to ten, but about 99% of readers are likely to assume that range. But what if I ask for another number? Knowing the assumption was false, a sly electrical engineer may pick something like 1.602 × 10^-19. You can see that any statistical model begins to break down quickly once you eliminate artificial boundaries and assumptions.

We're more interested in random sequences and sets than individual numbers. In the network collision and back-off scenario, it's fine if both computers pick the same random number once—they just need to pick different ones for successive values. In any case RNGs generate numbers that no one could possibly predict, like rolling dice or flipping a coin. The lack of predictability is the RNG's primary benefit. When developing a telephony system I once used the time between and the duration of phone calls to create random numbers. The resulting sequences of integers defied any attempt at a mathematical model. Intel uses thermal noise to generate random numbers in the Pentium chip.

RNGs, though, have some pretty potent disadvantages, including:

  • you can never prove mathematically that the numbers will be random
  • you can't reproduce random numbers for tests or analysis
  • the input parameters for the generators can usually be manipulated.
That last one is a real kicker. Take the example of a Linux distribution that used the time between network packets as random data involved in creating random numbers. Flood the box with Internet Control Message Protocol pings, and you have just turned an RNG into a nice, predictable stream of numbers.

Pseudo-random
Enter the pseudo-random number generator (PRNG). PRNGs produce deterministic sequences of numbers that have many significant statistical characteristics in common with true random numbers. Which statistical commonalities you choose to require depends on your application, and we'll discuss some common ones in a moment. In almost every case, however, it's desirable for the numbers to have a small autocorrelation coefficient—that is, the numbers have as little relationship to each other as possible. If you take N successive values Y1, Y2, ..., YN from your PRNG, the autocorrelation coefficient for lag k can be described as:

Equation 2:

r(k) = [ sum over i = 1..N-k of (Y_i - Ymean)(Y_i+k - Ymean) ] / [ sum over i = 1..N of (Y_i - Ymean)^2 ]

where Ymean is the mean of the N samples.

For our purposes, we're only interested in the first coefficient, so k=1 for our test. Before you go off and start writing code for this, try John Walker's excellent ent utility, which you may download for free at www.fourmilab.ch/random. This program takes a file of data and runs several tests important to determining how much "random" is in your random numbers. Utilities such as ent measure many other properties beyond autocorrelation, including:

  • frequency—too many ones or zeros in the bit stream
  • longest runs—oscillations in the bit stream
  • spectrum—periodic features of the bit stream
  • template matching—too many occurrences of an n-bit stream of ones or zeros
  • Χ2 (chi-squared) distribution—how far the PRNG's data varies in distribution from truly random data
The Χ2 distribution deserves slightly more explanation as it's a common test that you'll likely find important. In general—and I'm glossing over a lot of assumptions here—the calculation can be represented as:

Equation 3:

χ² = sum over all possible values i of (Y_i - n·p_i)² / (n·p_i)

where Y_i is the number of times the PRNG generated value i after generating n numbers, and p_i is the probability of value i occurring in a truly random data set. The resulting χ² grows with the gap between the observed and expected counts; a large value means the output is that much easier to guess.

Picking and choosing
With so many tests and so many statistical values to shoot for, how do you know which test to run or which algorithm to choose? The first step is to determine which characteristics of the PRNG are most important, then rank them. Here's a list of characteristics to rule in or out:

  • determinism
  • distribution
  • chaos
  • security
  • performance
Determinism
If you don't need to repeat or model the numbers, or test them (which you almost always will), you can skip the rest of this article and go find an RNG or use a characteristic of your device to make an RNG. Think of anything that's random about your device. If it's a set-top cable box, for example, use the time between channel changes. Radio-frequency devices can use white noise. Keep in mind that if you decide you don't need determinism, you must not, by definition, need any of the other characteristics on the list. This is because you can't empirically test the numbers, since you can never repeat the same series twice or prove that future data will be as "good" as past data.

Truthfully, when you think you don't need determinism—90% of the time you're mistaken.

Distribution
Distribution is the degree to which the numbers are evenly spread over the possible values. Whereas the Χ2 test measures the relationship between distributions, occasionally you'll need to be sure your values are spread out across an arbitrarily sized set. Say your system is trying to generate numbers from zero to eight, possibly to choose a radio-frequency channel in a cordless phone. You want to rotate among the frequencies randomly so as not to get stuck in a loop that always uses the same frequencies in the same order as a neighboring phone, and you want to take advantage of your entire spectrum as much as possible. A simple way to get this is the algorithm shown in Equation 4. It's a modified version of what's called an inverse congruential generator:

Equation 4:

r' = p1·(~r) + (Y mod p2)
Y = (r' >> 24) mod 9

where r is a running static integer that's allowed to overflow (~r is its bitwise complement), p1 and p2 are prime numbers, and Y is the value you're interested in using for your random choice. For some example code see Listing 1.

Listing 1: Inverse congruential PRNG

unsigned int rprime;   /* reset constant; any nonzero value will do */
static unsigned int r;
unsigned int Y = 0;    /* previous output feeds the next update */
.
.
.
/* begin critical section */
if (r == 0 || r == 1 || r == (unsigned int)-1)
    r = rprime;                   /* keep from getting stuck */
r = (9973 * ~r) + (Y % 701);      /* the actual algorithm */
Y = (r >> 24) % 9;                /* choose upper bits and reduce */
/* end critical section */

Notice that I chose p1=9,973 and p2=701. You may have to adjust these values to get your distribution if you need more than one byte of randomness, but they work quite well for outputs in the range 0 to 255. I also used the upper byte of the running value r. This type of PRNG usually yields "more" randomness in the more significant bits.

As shown in the plot from Figure 2, the data from 1,200,000 iterations of this algorithm yields a nice, flat average value, so the numbers are well distributed. The percentages from this run are shown in Table 1.

Figure 2: Inverse congruential PRNG results

Plot of average value from 1,200,000 runs

Results from "ent":

  • Optimum compression would reduce the size of this 1,200,000 byte file by 0%.
  • Chi square distribution for 1,200,000 samples is 4,529.49, and randomly would exceed this value 0.01% of the time.
  • Arithmetic mean value of data bytes is 127.5424 (127.5 = random).
  • Monte Carlo value for Pi is 3.136720000 (error 0.16%).
  • Serial correlation coefficient is -0.006601 (totally uncorrelated = 0.0).

Table 1: Inverse congruential PRNG results

Y   %       Y   %       Y   %
0   11.40   3   11.10   6   10.80
1   11.29   4   11.10   7   11.15
2   11.32   5   11.72   8   11.13

Chaos
You may have noticed I didn't show an initial value for r or a value for r' in Listing 1. I omitted these values to point out the importance of initial conditions. If you're looking for different devices running the same algorithm to generate different random values, you're looking for chaotic behavior. Embedded systems typically don't have a lot about them that is unique, maybe just a software or hardware serial number, and these are usually sequential. So how can a small difference in any initial data you have produce wildly differing sequences of numbers? Just use a chaotic algorithm.

Back before people associated chaos with psychedelic fractal images, Edward Lorenz, a meteorologist working at MIT, began using computers to predict the weather. Lorenz found that very small truncation or rounding errors in his algorithms produced large changes in the resulting predictions. This led to the study of "strange" or chaotic attractors. A Lorenz system can be described as:

Equation 5:

|x1(0) - x2(0)| = d
|x1(n) - x2(n)| >> d  for large n

which can be used as a test for chaotic behavior. Where d is a small constant separating two initial states, the system exhibits chaotic behavior when the final inequality is true: trajectories that start arbitrarily close end up far apart. Chaotic functions are actually fairly hard to come by, which is why I gave you one in this example. The PRNG from Equation 4 is quite chaotic. Figure 3 represents a chart of the difference between two runs of Equation 4, one with r initialized to 9,973 and the other with r initially set to 9,974, a difference of just one.


Figure 3: Inverse congruential PRNG chaos


Figure 4: OpenSSL RAND_bytes() chaos

As you can plainly see, a great deal of difference already occurs in the first 20 iterations. In fact, the two series won't generate values within one byte of each other for at least 100 million iterations. Compare this with Figure 4, which shows the results from the OpenSSL random function RAND_bytes() with initial values just one apart from each other. The OpenSSL function clearly has little chaos in it, so developers would have to use very different initial conditions—a sequential serial number will not do.

Security
The OpenSSL function, however, is much better at producing random numbers than the algorithm in Equation 4 and Listing 1. In the OpenSSL generator, there's less correlation between the numbers, and so the results are more difficult to predict when you don't know the initial seed and the current place in the sequence. The degree to which the values from a PRNG are difficult to guess and back-calculate to the initial value is directly proportional to the security of the algorithm and therefore its usefulness in cryptographic functions.

The best thing to keep in mind when creating a secure algorithm is that binary operators are your friends. Recall that operators like OR don't obey the same postulates of equality as arithmetic operators: given a result and one operand, you can't recover the other, so they aren't reversible. Arithmetic operations like multiplication generally are reversible, which makes it simple to determine a missing term. This means that you can use binary operations to hide the missing terms and make your algorithm more secure.

Overflows can also be an ally, if they don't cause undue interrupts or errors in your system. Clearly when information overflows or is shifted off the upper or lower bits of data, there's no way for someone to calculate the lost data.

Let's examine a simple algorithm that uses both overflow and binary operators to make a reasonably secure PRNG. It has a good distribution and is suitable for generating cryptographic key material, but it's not quite as chaotic as the algorithm in Listing 1. This one gets a touch hairy to define in mathematical notation, so we'll go right to the source code shown in Listing 2.

Listing 2: A reasonably secure PRNG

static unsigned int xorTable[64] = {0x7be9c1bd...0x088aa102};
static unsigned int r = 31468; // your seed
static unsigned int q = 0, n = 0;
unsigned int i, choice = 0;  /* choice selects table entries; initial value is arbitrary */
unsigned char Y;
.
.
.
// begin critical section
for (i = 0; i < 64; i++)   /* retry bound; the exact limit is arbitrary */
{
   q = (r + xorTable[choice & 0x3f]) ^ n*xorTable[n++ & 0x3f];
   if (q==r)
   {
      r += xorTable[choice & 0x1f] ^ choice;
      continue;
   }
   else
   {
      r = q;
      break;
   }
}
n++;
Y = r >> 24;
// end critical section
return(Y);

This algorithm uses a table of 64 integers generated at random using the algorithm from Listing 1 and stored in xorTable. Note that q, r, and n can overflow and that the algorithm uses an exclusive-OR and a pair of ANDs. This simple algorithm makes a remarkably good PRNG that has all the characteristics we've talked about so far. The federal government, however, requires the addition of a secure hash algorithm, SHA-1. See the sidebar below for more detail.

SHA-1
Per Section 513 of the Information Technology Management Reform Act of 1996 and the Computer Security Act of 1987, the National Institute of Standards and Technology (NIST) issues Federal Information Processing Standards (FIPS). FIPS certification is required to operate in certain federal installations, and to get it you must use a FIPS-approved PRNG (visit www.itl.nist.gov/fipspubs). Before you go digging up the laws, all you really need to do is take your algorithm and run it through a secure hash algorithm, SHA-1. According to NIST, running your numbers through SHA-1 makes them more secure: NIST has evaluated the algorithm and found that its use makes back-calculating the seed value computationally infeasible. Source code for SHA-1 is available from a number of vendors and open-source libraries, but use these with care to ensure both correctness and adherence to the distribution license. Secure hashing, along with duplicate-value checks and the required statistical models, applies only in environments that call for FIPS-approved PRNGs to create cryptographic key material. If you're not sure you need FIPS certification, you probably don't.

Performance and real-time concerns
You may be wondering why you wouldn't want to use just any algorithm to generate some seeds and then use SHA-1 to make the resulting data more secure or less biased. The answer is simple: SHA will do to the performance of your system what dragging 10 lead anchors behind your car will do to its speed. SHA is slow and impractical without hardware acceleration.

Three critical characteristics of an embedded system's PRNG define its performance and usability:

  • the number of CPU cycles the PRNG requires to generate a number
  • the determinism of that time
  • the number of cycles that must be inside a critical section
The two algorithms I've given you already in Listing 1 and 2 are lightweight, as long as your processor can handle the multiplies and divides. In both cases, the number must be generated inside a critical section for deterministic results. Removing that protection may not have a significant impact on the algorithm's effectiveness, but you also may not be able to prove that.

Let's look at one final algorithm that uses only binary operations and some addition, is very fast, and works well in a hard real-time environment. Unlike the previous two algorithms, which generated randomness in only the high-order bits, the hard real-time PRNG in Listing 3 produces a full 32 bits' worth of pseudo-random data. Note that it uses a table of 128 random numbers, rather than 64 as in the reasonably secure PRNG in Listing 2.

Listing 3: Hard real-time PRNG

static unsigned int xorTable[128] = {0x7be9c1bd...0x088aa102}; // random data
static unsigned int Y = 31468; // your seed
static unsigned int iterations = 0, lastUpper = 0;
unsigned int r, n, pos1, pos2, choice;
.
.
.
// begin critical section
r = Y;
n = iterations;
choice = lastUpper;
// end critical section

pos1 = (r ^ xorTable[choice & 0x3f]) + xorTable[n & 0x3f];
pos2 = ~pos1;
r += ~r + ~n ^
   (xorTable[(n+r) & 0x7f] << ((r & 0x1f000000) >> 24)) ^
   (xorTable[choice & 0x7f] << ((r & 0x000f0000) >> 16)) ^
   (xorTable[pos2 & 0x7f] << ((r & 0x00001f00) >> 8)) ^
   (xorTable[pos1 & 0x7f] << (r & 0x0000000f)) ^
   (xorTable[(n+pos1) & 0x7f] >> ((r & 0x1f000000) >> 24)) ^
   (xorTable[choice & 0x7f] >> ((r & 0x001f0000) >> 16)) ^
   (xorTable[pos2 & 0x7f] >> ((r & 0x00000f00) >> 8)) ^
   (xorTable[(r+pos2) & 0x7f] >> (r & 0x0000001f));

choice=r>>24;

// begin critical section
Y = r;
n++;
lastUpper = r >> 24;
// end critical section

return(Y);

The downside of this algorithm is that it's slower than the others on processors that have a dedicated instruction unit for handling integer math, as most modern microprocessors do. That said, on an IBM PowerPC 750 running at 366MHz, there was no measurable difference between the reasonably secure PRNG in Listing 2 and the hard real-time PRNG in Listing 3 in the time it took to generate 80 million pseudo-random bytes. You may want to run your own tests on your system to determine which algorithm performs best for you. The good news about the hard real-time PRNG in Listing 3 is that it's not at all sensitive to your seed, unlike the other two, which may take some time to settle into statistically good random streams when started from small seed values.

Options
If these algorithms are too processor-intensive for your needs, try a simple table lookup. Mathematicians consider the digits of pi to be random, so you could simply create a table of those digits. For your convenience, I've posted a file of the hexadecimal notation for the first 500,000 digits of pi at ftp://ftp.embedded.com/pub/2004/06uner. A quick PRNG implementation could load up a table with these digits then pick one based on either a rotating counter or, to improve results, a counter that increments based on the system time, as shown in Listing 4.

Listing 4: Table lookup PRNG

static unsigned int piTable[262144] = {0x0870884d...0x1fdff6e6}; // pi data; 262,144 entries to match the 0x0003ffff mask
static unsigned int n = 0;
.
.
.
// begin critical section
n += sysTimeSecs();
// end critical section
return(piTable[n & 0x0003ffff]);

No single PRNG is good for all situations. If you're not sure what you need, the algorithm in Listing 3 provides the best mix of security, distribution, chaos, and performance.

Feel free to use and abuse the algorithms I offer here, though take care to not play with the numbers beyond the seed values, as I've spent a great deal of time calculating the number of bit shifts and mods to get the best results. Some other popular algorithms that come with full source code are:

  • ISAAC
  • the Mersenne Twister

ISAAC has reasonably good security, and the Mersenne Twister generates a great deal of data before it shows a clear pattern, although my tests show you won't get the randomness you may need in under 20 million bytes of data.

So, there you have it—several algorithms for generating quality random numbers and testing your implementation.

Eric Uner is chief engineer and cryptographic security officer for Bodacion Technologies. Eric has been developing secure embedded systems for 12 years and has a degree in computer engineering from Iowa State University. You can reach him at uner@bodacion.com.


Reader feedback

This is a fascinating article! I've been looking for literature on RNGs that's both approachable (leaving out high level things like Galois Fields) but very informative (LCGs/ICGs), and with example code to boot! Very good!

Bryan Fishell
Power Engineer
Unisys Corporation
