
All about Quine-McCluskey

Jack Crenshaw, August 19, 2004

Using K-maps to simplify logic equations in hardware and software makes a lot of sense, but the Quine-McCluskey method offers a more systematic approach.

If you've been following this column for the last few months, you know that we've been talking about symbolic logic and the hardware circuits used to implement it. It wasn't really my intention to talk much about the hardware, since the focus of this column is on software tools. But I've noticed that most programmers don't know much about symbolic logic unless they've taken courses in digital circuitry. Since I believe that software can sometimes be simplified by using symbolic logic, I set out to teach the fundamentals. One thing led to another, and the next thing I knew, I was describing how to build counters with J-K flip-flops.

One doesn't have to wire up digital chips in order to learn and use symbolic logic. But if the end product of the exercise is a circuit, there's good reason to try to simplify the logic as much as you can before implementing it. After all, chips cost money, as do circuit boards. This fact leads us very naturally into the subject of logic minimization, which was really the point of the series.

I think perhaps the best way to explain the value of logic minimization is by analogy to ordinary algebra. Consider the following function in Equation 1.


Equation 1: A function begging for minimization

Looks pretty daunting, doesn't it? Some of those coefficients, such as the 87 or the 99, don't look too promising. But if you plot the graph of the function, you'll see that it's a simple parabola. Clearly, something is going on to make such a complicated-looking function behave so simply. The function can, in fact, be greatly simplified. Probably the most straightforward approach is to divide the numerator by the denominator, using synthetic division. Or, if you prefer, you can factor both the top and bottom polynomials, to get:


Equation 2

This, of course, immediately simplifies down to:


Equation 3

Which function would you prefer to implement in software, Equation 1 or Equation 3? When the implementation is in hardware, as it is for logic circuits, simplification is very important. It may not seem so important in software, and perhaps it's not. One might argue that, hey, either form works in the software. Yes, Equation 1 is more complicated, but both forms give the same result, and anyway, my CPU is really fast.

But look again. In the factored form of Equation 2, there are two sets of multiple poles; the denominator goes to zero at x = 1 and x = 1/2. Guess what happens when you divide by zero? Simplifying the equation gets rid of the poles, which greatly improves the behavior of the function.

Karnaugh maps revisited
If I've convinced you that it's important to simplify logic expressions before implementing them—either in hardware or software—then I've made my point. The question remains, how do we simplify the logic equations? We have almost all the tools of ordinary algebra—factoring, grouping, and so on—at our disposal, but it would be nice to have some systematic method that's guaranteed to give—or at least has a good chance of yielding—optimally simple forms.

In earlier columns I've shown you the Karnaugh map (K-map), which is among my favorite tools for dealing with logic. Figure 1 shows a typical K-map from one of those columns. The context is not important.


Figure 1: A typical K-map

As you can see, the K-map is an array of cells, each of which represents one possible combination of inputs. We order the rows and columns of the matrix in a rather special way. The order is, in fact, the Gray code, which has the property that, between any two adjacent values, only one bit has changed.
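By the way, that adjacency property is trivial to express in code: two patterns differ in a single bit exactly when their exclusive-OR is a nonzero power of two. Here's a minimal sketch in C++ (the function name is mine):

#include <cstdio>

// Two minterms occupy "adjacent" K-map cells exactly when their bit
// patterns differ in a single bit; that is, when their XOR is a
// nonzero power of two.
bool adjacent(unsigned a, unsigned b)
{
    unsigned diff = a ^ b;
    return diff != 0 && (diff & (diff - 1)) == 0;
}

int main()
{
    // 0111 and 1111 differ only in the leading (A) bit...
    printf("%d\n", adjacent(0x7, 0xF));  // prints 1
    // ...while 0011 and 0101 differ in two bits.
    printf("%d\n", adjacent(0x3, 0x5));  // prints 0
    return 0;
}

Listing 1: A one-line adjacency test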

This is important, because when we see that the same output is desired for two adjacent cells, it means that the value of the one bit that's different doesn't affect the result. For example, the two cells in row 11, columns 01 and 11, differ only in the value of A. This should tell us that the value of A doesn't matter. (If you're wondering, the shaded cells in Figure 1 are "don't care" cells. These combinations of inputs should never occur, so we're free to encroach on these cells if it helps us reduce the logic.) For Figure 1, the equivalent logic equation is:


Equation 4

The K-map works great when the number of input variables is four or less. In a pinch, we can stretch it to five or six. After that, forget it. The relationships between groups get so confusing that the approach quickly loses its value. Even for only four inputs, it's not always easy to decide how best to collect the cells into groups. Especially when there are multiple maps, as in the case of the seven-segment display, it's not at all obvious which terms will work best.

For the cases where we're using logical variables in software, four inputs are almost always enough, so the K-map works fine to help us optimize our software logic. But when we want to get serious about logic minimization, a more systematic approach, and one that can handle larger sets of terms, is called for.

The way of truth
You can express logical relationships in at least three ways. The K-map is one way, logic equations are another. We could even express them as circuit diagrams. Electronic Workbench, which I've mentioned before, not only lets you use multiple forms, but will also transform one form into another.

Of all these ways, though, the most fundamental one is definitely the truth table. The truth table captures, with no possibility of misinterpretation, the expected output for every combination of inputs: one row per combination, together with the desired output. The truth table corresponding to Figure 1, for example, is shown in Table 1.

This truth table gives us exactly the same information as the K-map in Figure 1. The first three columns represent three alternative ways to express the inputs. The second column gives the values of the individual bits, the third is the equivalent logic term, and the first is simply the equivalent decimal integer.

For the record, every row in the truth table has a name—it's called a minterm. Every cell in the K-map is also a minterm. As the name implies, it's the cell of minimum size, and it therefore depends on the values of all the inputs.

The fourth column gives the desired output for each case. The only two required non-zero values are the values for rows 7 and 9 (0111 and 1001; in logic terms, A′BCD and AB′C′D). The x's in the last few cells mean that we don't care what the output is; these combinations should never occur anyway.

From the truth table, we need to build a structure of some kind, suitable for minimization. Up until now, that structure has been the K-map. It's easier to see patterns in the K-map than in the truth table itself. In the past, we would transcribe the desired outputs to a K-map, then seek simplifications.
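If we're heading toward a computerized solution, though, the truth table needs no clever structure at all; two lists of cell numbers capture everything. A sketch (the field names are mine, and the don't-care list is left for you to fill in for your own problem):

#include <vector>

// A truth table, reduced to its essentials: the minterms that must
// produce a 1, and the minterms whose output we don't care about.
// Every other combination is implicitly 0.
struct TruthTable {
    std::vector<unsigned> ones;       // required-1 minterms
    std::vector<unsigned> dontCares;  // outputs free to be 0 or 1
};

// Table 1 in this form: rows 7 and 9 must be 1; the don't-care rows
// are the input combinations that can never occur.
TruthTable table1 = { { 7, 9 }, { /* impossible combinations */ } };

Listing 2: A truth table as two lists of minterms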

The Quine-McCluskey method
Like the K-map, the Quine-McCluskey method (hereafter called Q-M) seeks to collect terms by looking for entries that differ only in a single bit. The only difference from a K-map is that we do it by searching, rather than mapping. The beauty of the Q-M method is that it takes over where the K-map begins to fail. It's capable of minimizing logic relationships for any number of inputs. It also happens to be nicely suited to a computerized solution.

Table 2 shows another truth table (here, I've shown only the desired 1's for clarity).

Now let's build a second table that contains only the minterms that require the output to be true. That table, complete with cell numbers, is shown in Table 3.

Just as with the K-map, we try to minimize the logic by finding "adjacent" cells that can be combined. The only difference is, this time the adjective "adjacent" doesn't refer to physical location but to bit pattern.

This business of matching up entries is the heart of the Q-M method. I'll go through it slowly so nobody will miss the technique.

Consider the first entry, which is number 0, pattern 0000. We ask ourselves, what other entries can we possibly match up to this one? Remember, the two entries can differ only in a single bit. That pretty well narrows down the range; a few lines of code can even enumerate the candidates, as the sketch after this list shows. This pattern can only be matched by:

(1) = 0001
(2) = 0010
(4) = 0100
(8) = 1000
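Enumerating those candidates mechanically is just a matter of flipping each input bit in turn:

#include <cstdio>

int main()
{
    // The only possible single-bit matches for a 4-input minterm are
    // the four patterns produced by flipping each of its bits.
    unsigned m = 0;                         // minterm 0 = 0000
    for (int bit = 0; bit < 4; ++bit)
        printf("(%u)\n", m ^ (1u << bit));  // prints (1), (2), (4), (8)
    return 0;
}

Listing 3: Enumerating the possible matches for a minterm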

The table doesn't contain any of these entries, so pattern number 0 remains unmatched, and we move on to the next entry, 3 = 0011. Again, there are four possible matching patterns, each generated from 0011 by changing a single bit:

(2) = 0010
(1) = 0001
(7) = 0111
(11) = 1011

There are no entries for index 1 or 2, but there are entries for 7 and 11. Entries 3 and 7 differ only in the second bit, which corresponds to input B:

0011 <=> 0111

What is this match telling us? The same thing it would tell us for two adjacent cells in a K-map: For this particular pattern, the output doesn't depend on the second input, which is B. To reflect this fact, we create a new entry:

(3,7) = 0x11

The next obvious question is: Where do we put this new entry? Answer: We create a new column. As it turns out, we also must leave some information behind in the previous column, to indicate which terms we've used. The reason is this: If a term has been checked, it's subsumed by one of the terms in the next column, so we can be assured that its contribution is accounted for. But if it's unchecked (like 0000, for which we found no match), it must be included explicitly in the final expression.
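In code, a natural way to represent a term that may contain x's is as a value plus a mask, with the mask bits flagging the x positions. Here's a sketch of the combining step (the Term layout is my own choice, not anything canonical):

// A term is a value plus a mask; the mask bits mark the x positions.
// Minterm 3 = 0011 is {3, 0, false}; the combined term 0x11 is
// {3, 4, false}, since bit 2 has become an x.
struct Term {
    unsigned value;  // the fixed bits (x positions forced to 0)
    unsigned mask;   // 1 = this bit is an x
    bool used;       // checked off: subsumed by a term in the next column
};

// Try to combine two terms. They must have x's in the same places and
// differ in exactly one of the remaining bits.
bool combine(Term &a, Term &b, Term &out)
{
    if (a.mask != b.mask) return false;
    unsigned diff = a.value ^ b.value;
    if (diff == 0 || (diff & (diff - 1)) != 0) return false; // not one bit
    out.value = a.value & ~diff;  // clear the differing bit...
    out.mask  = a.mask | diff;    // ...and mark it as an x
    out.used  = false;
    a.used = b.used = true;       // both parents are now subsumed
    return true;
}

Listing 4: Combining two terms that differ in a single bit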

Similarly, entries 3 and 11 reduce to x011. At this point, the table should look like Table 4. I've included the combined index numbers for clarity. In practice, we probably wouldn't bother, depending only on the patterns to guide us.

Now let's proceed down the remaining entries, looking for matches. You should find a total of six, as shown in Table 5. Note that the search gets easier as we go, because we need only search below the current position; if a match exists with an earlier entry, it's already been accounted for.

What shall we do next? Why, we repeat the process, looking for matches in the last column. As we do, note that the only thing that can match an x is another x in the same place. This is the case, for example, for entries (3,7) and (11,15). At the end of this search, we have a new column and a new set of check marks. Note that two different combinations (3,7)-(11,15) and (3,11)-(7,15) yield the same result. Clearly, we don't need the same term twice, so we'll delete the duplicate.

No further reductions are possible, so we're done. For this example in Table 6, the final patterns are given by the unchecked entries.
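The whole column-by-column reduction, then, is just a loop wrapped around that combining step. The sketch below assumes the Term struct and combine() from Listing 4; the std::set weeds out duplicates such as (3,7)-(11,15) versus (3,11)-(7,15):

#include <set>
#include <utility>
#include <vector>

// Build column after column; any term never checked off along the
// way is a prime implicant.
std::vector<Term> primeImplicants(std::vector<Term> column)
{
    std::vector<Term> primes;
    while (!column.empty()) {
        std::set<std::pair<unsigned, unsigned>> seen;
        std::vector<Term> next;
        for (size_t i = 0; i < column.size(); ++i)
            for (size_t j = i + 1; j < column.size(); ++j) {
                Term t;
                if (combine(column[i], column[j], t) &&
                    seen.insert({ t.value, t.mask }).second)
                    next.push_back(t);
            }
        for (const Term &t : column)
            if (!t.used)                 // unchecked: keep it
                primes.push_back(t);
        column = next;                   // move on to the next column
    }
    return primes;
}

Listing 5: Collecting the prime implicants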

The last two terms can be further simplified by factoring. Q-M can't help us with that. We must still do the factoring ourselves. The end result, an optimized one, is:


Equation 5

Streamlining the process
You can see that the Q-M method accomplishes the same end result as we get from a K-map. I think you can also see that it's rather tedious, and fraught with error. With Q-M, we trade away a convenient, intuitive method to gain both the capability to handle more input variables and an (almost) guaranteed optimality.

Although there's not much we can do to remove the tedium, one trick enables us to streamline the process. Consider the first minterm of our previous example: 0000. Recalling that any match can vary in only a single bit, it should be obvious that only the patterns with one non-zero bit (0001, 0010, 0100, and 1000) can possibly match it. There's no need to search the remainder of the table; only the terms with a single bit qualify.

Similarly, if a given minterm contains n 1's, it can only be matched by terms containing n-1 or n+1 1's. This suggests that we can gain some efficiency by ordering the terms into groups with 0, 1, 2, 3, or 4 1's. When searching for matches, we need only look in the groups just above and below the current one. Here's another example, to show what I mean.


Equation 6

In Table 7A, I've ordered the terms according to the number of 1's.

Now let's do the reduction as before. You're on your own on this one. See if you get the same thing I do. In Table 7A, I've streamlined the process by eliminating the cell numbers.

Note that, when I sort the entries by their 1's count, it greatly reduces the number of places I have to look for a match. The term (4) = 0100, which has one 1, can only match entries with two 1's (or zero 1's, but no such entry exists).
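In code, that sorting is a single pass with a population count. A sketch (the helper names are mine):

#include <vector>

// Count the 1's in a pattern by clearing the lowest set bit until
// none remain.
int popcount(unsigned v)
{
    int n = 0;
    for (; v; v &= v - 1) ++n;
    return n;
}

// Bucket the minterms by their count of 1's; a term with n 1's can
// only match one with n-1 or n+1 1's, so a search never needs to
// look past the neighboring buckets.
std::vector<std::vector<unsigned>>
groupByOnes(const std::vector<unsigned> &minterms, int bits)
{
    std::vector<std::vector<unsigned>> groups(bits + 1);
    for (unsigned m : minterms)
        groups[popcount(m)].push_back(m);
    return groups;
}

Listing 6: Grouping minterms by their count of 1's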

Now, this is important: the ordering by number of 1's works only when there are no x's. In subsequent columns, we must arrange the terms by both the location of the x's and the number of 1's.
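In code terms, that just means keying the buckets on the pair (mask of x positions, count of 1's) instead of the count alone. A sketch, assuming the Term struct from Listing 4 and popcount() from Listing 6:

#include <map>
#include <utility>
#include <vector>

// In later columns, only terms with x's in the same places can ever
// combine, so we bucket on (mask, number of 1's) rather than on the
// number of 1's alone.
std::map<std::pair<unsigned, int>, std::vector<Term>>
groupColumn(const std::vector<Term> &column)
{
    std::map<std::pair<unsigned, int>, std::vector<Term>> buckets;
    for (const Term &t : column)
        buckets[{ t.mask, popcount(t.value) }].push_back(t);
    return buckets;
}

Listing 7: Grouping a column by x positions and 1's count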

The final terms are the unchecked ones in Table 7B. In the form of a logic equation, we end up with:


Equation 7

Which we can then further factor to:


Equation 8

What implications?
Time for a new term. Each of the terms in Equation 7 represents a grouping of adjacent cells. We could as easily have gotten them from a K-map. These terms have a name: prime implicant. That's simply a fancy name for the groups of a K-map.

Having gone from eight minterms and 31 logical operations to six prime implicants and seven logical operations, you should be feeling pretty good about things. You might think that we've arrived at the optimal representation. But you'd be wrong. The reason you'd be wrong is one that would be obvious from a K-map. In fact, let's just show the K-map so you can see what's going on (remember, Q-M doesn't replace the K-map for four inputs; its forte is solving the larger problems).


Figure 2: Example 2 K-map

From this K-map in Figure 2, it's easy to see the problem: It's not that our solution doesn't give the proper cover; it clearly does. The problem is, it covers too well. Two of the 1's in the region are also covered by other terms—terms that aren't necessary. When we use K-maps, it's easy to see when we've got all the 1's covered. With Q-M, it's not so easy.

What we need is a way to know when we can omit a term. Since the Q-M method is a tabular method rather than a graphical one, it should not surprise you to learn that the solution involves another table.

Let's build a table like Table 8, with the prime implicants in the left-hand column, and the minterms in the row at the top. A check mark in any cell shows that this particular minterm is generated by the corresponding prime implicant.

Now look at the check marks in the columns for 0100 and 1100 (colored blue in the table). These check marks appear only in the bottom row. This means that these particular minterms can only be generated by the prime implicant x10x. It's an essential prime implicant. The adjective, "essential," should tell you all you need to know. We must include this term, or miss the two 1's in the 00 row of the K-map.

But if we must include the x10x prime implicant, we also get, for free, the minterms colored yellow. That, in turn, means we can delete the check marks required for those minterms in other rows. This could possibly free up prime implicants that are no longer needed. It doesn't, in this case, but it could.
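Both the check marks and the hunt for essential prime implicants are equally mechanical in code. A sketch, again building on the Term representation from Listing 4:

#include <cstddef>
#include <set>
#include <vector>

// A prime implicant covers a minterm when the minterm agrees with it
// on every bit that isn't an x.
bool covers(const Term &t, unsigned minterm)
{
    return (minterm & ~t.mask) == t.value;
}

// A prime implicant is essential if some column of the chart holds
// only its check mark. (A set, because one implicant can be the sole
// cover of several minterms, as x10x is for 0100 and 1100.)
std::set<size_t> essentials(const std::vector<Term> &pis,
                            const std::vector<unsigned> &minterms)
{
    std::set<size_t> ess;
    for (unsigned m : minterms) {
        size_t hits = 0, last = 0;
        for (size_t i = 0; i < pis.size(); ++i)
            if (covers(pis[i], m)) { ++hits; last = i; }
        if (hits == 1) ess.insert(last);  // a single check in this column
    }
    return ess;
}

Listing 8: Finding the essential prime implicants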

I think you can see how things go from here. We can use coloring to process the other prime implicants, or we could redraw the table. If we redraw it, we should eliminate the rows and columns that we've already identified. Table 9 shows the reduced matrix.

We have no other columns with a single check mark. We do, however, have two rows with them, and those minterms are also generated by other prime implicants. Operating on the assumption that more is better, we might as well select, for the next prime implicant, one that has more checks in its row. There are three such choices, and not much to recommend one over the others. However, we note with interest that the prime implicants 0x11 and x011 have 11 in the last two bits, which hints at the possibility of factoring. Let's select one of those—0x11—as the next prime implicant. We get the new table, Table 10, by eliminating rows and columns again:

From here, it's not hard to figure out what other prime implicant we should use. It's 10x1, the one that picks up both remaining minterms. We end up with three prime implicants, which cover the desired function:


BC′ + A′CD + AB′D

Equation 9

So we've reduced the prime implicants from six down to three, and the logical operations to six. That's pretty hard to beat.
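If you'd rather have the machine make those judgment calls too, a common tactic, sketched below, is a greedy one: after the essential implicants are taken, keep picking the row with the most remaining check marks. Fair warning: the greedy choice can occasionally miss the true optimum; a guarantee takes something stronger, such as Petrick's method. This assumes Term and covers() from the earlier listings:

#include <cstddef>
#include <vector>

// Greedily cover whatever minterms remain after the essential prime
// implicants have been removed. Every minterm is covered by at least
// one prime implicant by construction, so the loop always terminates.
std::vector<size_t> greedyCover(const std::vector<Term> &pis,
                                std::vector<unsigned> remaining)
{
    std::vector<size_t> picks;
    while (!remaining.empty()) {
        size_t best = 0, bestCount = 0;
        for (size_t i = 0; i < pis.size(); ++i) {
            size_t n = 0;
            for (unsigned m : remaining)
                if (covers(pis[i], m)) ++n;
            if (n > bestCount) { bestCount = n; best = i; }
        }
        picks.push_back(best);              // add this row to the cover
        std::vector<unsigned> left;         // drop its covered columns
        for (unsigned m : remaining)
            if (!covers(pis[best], m)) left.push_back(m);
        remaining = left;
    }
    return picks;
}

Listing 9: A greedy pass over the rest of the chart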

Wrapping up
Now you've seen the Quine-McCluskey algorithm in all its splendor. It may seem tedious, but it's very reliable, and it's really your only option once the number of inputs gets much greater than four. I can't really guarantee, even now, that the results will be optimal. Part of that conclusion depends on your definition of "optimal." It might mean minimizing the number of prime implicants, as we've done above, but it might not. Likewise, it may or may not mean reducing the number of operations. In the end, it could come down to something like where the traces have to run on the circuit board.

Even so, the Q-M method gets us about as close to a fully automated optimization as we are likely to find. It should not surprise you to learn that many software packages exist that do Q-M optimization. Now you know how they work. Use them, if you have the need, but don't forget that sometimes a little human judgement may still be required.

There may be cases where one set of prime implicants is preferred over another. For example, in the seven-segment decoder that we discussed a few months back, we have seven different outputs, and therefore seven different K-maps or Q-M reductions. To get the "global optimum," we should really give preference to prime implicants that also appear in other output expressions.

I'll leave that as an exercise for the student.

Jack Crenshaw is a senior software engineer at Spectrum-Astro and the author of Math Toolkit for Real-Time Programming, from CMP Books. He holds a PhD in physics from Auburn University. E-mail him at jcrens@earthlink.net.
