
It’s only logical

From Aristotle to Boole, Jack traces a history of logic and then explains how to simplify Boolean equations.

It's funny how sometimes when you're in the middle of a revolution, it's hard to notice. Things progress so smoothly and seamlessly, life seems pretty normal. Sometimes in the dark of night, I reflect on how many changes the world has seen in my lifetime and that of my immediate family.

My father was born in 1905, which means that he got to see the coming of the telephone, the telegraph, radio, TV, the car, the airplane, and space flight. He went from chugging along dirt roads at 15mph in the family Model T, to a 30,000 mile-per-year life cruising the Interstate with the cruise control set on a brisk 85 and the air-conditioned cabin awash in FM-soothed luxury. He went from his first flight in a barnstorming Jenny to routinely crossing the country at 600mph.

If you think that's something, consider my grandfather. He died in 1948 at age 80, which means he was born in 1868—only three years after the Civil War. Imagine the differences he saw.

When I was studying physics in college (not long thereafter), we didn't buy a lot of our lab equipment, which was all vacuum tube stuff. We made it. I recall an instrument in the electronics junkroom that was some professor's failed attempt to build a decade counter using tubes. Commercial units had about five tubes per decade and ran $1,000 and up. Now you can buy two decade counters on a chip, for about a buck, quantity one.

In the '60s, several vendors were making logic units for the military market. A single AND-gate, one per edge-connected circuit board, cost $500. I could only watch from the sidelines.

Around 1964, Fairchild got my attention with their line of Resistor-Transistor logic (RTL) ICs in low-cost, epoxy packages. A dual gate cost 80 cents; a single flip-flop, $1.50. I was hooked, big time.

Well, no wonder. From where I sat, it was a thousandfold reduction in cost. But consider that at that price, the 256MB of RAM in today's low-end PC would have cost more than $3 billion! Instead, it costs around 40 bucks—a further reduction by a factor of 80 million. At that rate, your Lexus should have cost, say, the price of a glass of water. Not the cost of the glass, not the price at a restaurant; just of the water itself.

In the early days of computing, the great John von Neumann was often asked why there was such excitement over computers. They were, after all, just like ordinary desk calculators, only faster. Von Neumann formulated his principle: any change in something by a couple of orders of magnitude is not just a change in speed, but a change in kind.

The principle is easy to illustrate. A typical brisk walk is around 5 mph. Two orders of magnitude faster is a jet plane. Two more orders, and you've got an interplanetary mission to Mars. No one in their right mind would ever suggest that going to Mars was like walking, only faster. It's a change in kind.

Even so, something is sometimes lost in the change. Back when bits cost a dollar apiece, we didn't have very many, but we sure knew how they worked. These days, few people do. Hence the motivation for this column. I'm going to teach you a little logic.


Figure 1: A logic inverter

Electronics 101
My starting spot might surprise you. It's a circuit diagram, as shown in Figure 1.

What's that you say? You don't do electronics? Not to worry. This circuit is so simple, even a computer science grad can understand it. The transistor is basically a current amplifier. Though the rules about how transistors work are amazingly simple (and you'd do well to learn them), we don't even need them in this case.

Here, the transistor is used simply as a switch. Inject a little current, Ib, at the base (B), and the transistor is turned on, allowing large currents to flow from collector (C) to emitter (E). Stop the current at the base, and you also stop the current through the collector; the switch is turned off. Simple.

The transistor is connected to its power supply via a resistor, Rc. When the switch is off, almost no current flows through Rc. The voltage drop across it is negligible, and any load at the output “sees” the +5V supply. On the other hand, when the transistor is turned on (conducting), it looks pretty much like a short circuit to the load. The entire supply voltage is dropped across Rc, and the output voltage is low—just barely above ground.

The rules, so far, are ridiculously simple:

  • Current in, Vout low
  • No current in, Vout high

How do we get the current into the base? Simply by applying a positive voltage at Vin. The base itself looks like a short circuit to any positive voltage. Resistor Rb is fairly high, so we can safely apply a voltage to it without frying the transistor. Though the transistor itself is a current device, the circuit works on voltages, according to these new rules:

  • Vin high, Vout low
  • Vin low, Vout high

Now you can see why it's called an inverter. Whatever level the input is, the circuit inverts it.
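
If you find C easier to read than schematics, here's a minimal sketch (purely illustrative; the type and function names are my own) that models the inverter's rules as a function on logic levels:

#include <stdio.h>

/* Behavioral model of the inverter in Figure 1 (illustrative sketch).
   We stay in the high/low vocabulary for now. */
typedef enum { LOW, HIGH } level;

static level inverter(level vin)
{
    /* Vin high -> Vout low; Vin low -> Vout high */
    return (vin == HIGH) ? LOW : HIGH;
}

int main(void)
{
    printf("Vin=low  -> Vout=%s\n", inverter(LOW)  == HIGH ? "high" : "low");
    printf("Vin=high -> Vout=%s\n", inverter(HIGH) == HIGH ? "high" : "low");
    return 0;
}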


Figure 2: An RTL NAND gate

Your first gate
Figure 2 shows a NAND gate (more on the name, later). Here I've connected two inverters in parallel, so that they share a common resistor Rc . For the record, this is precisely the circuit used in those Fairchild RTL logic chips, two gates for 80 cents. Just as with the inverter, this circuit still provides an output that's either high or low. The key difference is that it now responds to either input.

Recall that if a transistor switch is turned on (conducting), it looks like a short circuit to the power supply and to the output, pulling the output low. In Figure 2, that's true of either switch. The output can be high only when both transistors Qa and Qb are off. That only happens when both inputs A and B are low.

It's normal to describe the function of any logical device by a truth table, which gives the expected output for every possible combination of inputs. Table 1 shows you what a truth table looks like.

Table 1: The NAND gate
A B Out
low low high
low high low
high low low
high high low

Table 2: NAND gate truth table
A B Out
0 0 1
0 1 0
1 0 0
1 1 0

No Boole
All logic devices work pretty much like the circuits I've drawn, in that their outputs are either high or low, but never (except during the brief transition) in between. This is the antithesis of fuzzy logic. A thing is either high or low, on or off, true or false. The temptation to assign logical meaning to the output voltage is overwhelming, and it's one we won't resist.

Logic has been around a long time, at least since the time of Aristotle. The kind of logic I described is called Aristotelian logic. No shades of gray, just true or false.

In 1847, mathematician George Boole introduced symbolic logic, which lets us write logical expressions in a manner much like ordinary algebra. Boole chose to assign the numeric symbols 0 and 1 to the two possible states of a logical variable. In electronic circuits, we almost always assign 0 to the low voltage, 1 to the high. In Boolean terms, Table 1 would be written as shown in Table 2.

As with ordinary algebra, we tend to assign values to things that follow Aristotelian logic. If A is a true thing, it has a value of 1. If it's false, the value is 0.

Boole's great contribution was to define mathematical operations on logical variables that look very much like their corresponding algebraic operators: '+' for OR, '*' or '·' for AND.

In symbolic logic, the rules are:

OR                AND
0 + 0 = 0         0 · 0 = 0
0 + 1 = 1         0 · 1 = 0
1 + 0 = 1         1 · 0 = 0
1 + 1 = 1         1 · 1 = 1

Except for the somewhat shocking last formula in the OR column, these rules are exactly what they would be in ordinary algebra. As you can see, the two operations complement each other. The result of an OR operation is true if either input is true; the result of an AND is true only if both inputs are true.

The neatest part is that even the precedence rules of algebra—including the effect of parentheses—still work. This means that I should be able to write down rather complex equations in symbolic logic and simplify them in the usual fashion, by factoring, expanding, and so on.

Even if you never solder a single wire in your life, even if your only contact with embedded systems is the C code that runs in them, you can still get value from the way logical relationships can be simplified.
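
To make that concrete, here's a small, purely illustrative C sketch. It uses the bitwise operators on variables restricted to 0 and 1 ('|' playing the role of '+', '&' the role of '·') to reproduce Boole's rules and to check one familiar factoring identity over every possible input:

#include <stdio.h>

int main(void)
{
    /* Reproduce the OR and AND tables. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d + %d = %d    %d . %d = %d\n",
                   a, b, a | b, a, b, a & b);

    /* Factoring works just as in ordinary algebra:
       A.B + A.C = A.(B + C) for every combination of inputs. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++)
                if (((a & b) | (a & c)) != (a & (b | c)))
                    printf("factoring identity fails!\n");  /* never prints */

    return 0;
}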

To complete Boolean algebra, we need one more operator: the unary operator that corresponds to negation. In Boolean algebra, it's called the NOT, and its symbol is usually a bar over the number or variable. Thus, NOT 0 = 1 and NOT 1 = 0; in bar notation, 0̄ = 1 and 1̄ = 0.

Now, I'll explain the naming convention for the NAND gate. The output of the gate is high (true, 1) only if both input A and input B are not true (false, 0).

We can describe the output as:

Out = (NOT A) AND (NOT B)   (1)

Or, more concisely:

Out = Ā · B̄   (2)

Either way, you should read this, “NOT A AND NOT B.”

Why do we use NAND gates instead of AND gates? Simply because the electronics requires it. As you saw in Figures 1 and 2, each circuit introduces a natural inversion. We've learned to live with this fact. We could add an inverter to each input to straighten things out, and indeed you can buy true AND-gates and OR-gates if you choose. They don't get much use, though, because most designers rightly blanch at the notion of adding delay-producing and power-eating circuitry into electronic products, just to keep from blowing the minds of analysts. Better to make us analysts work a little more. It will do us good; keep our brains nimble.

Perhaps the designers of chips had that in mind when they converted to the newer, transistor-transistor logic (TTL). In TTL circuits, the truth table for a NAND gate is different from RTL. It's shown in Table 3.

Table 3: TTL NAND gate truth table
A B Out
0 0 1
0 1 1
1 0 1
1 1 0

The TTL NAND gate implements the logic:

Out = NOT(A · B)   (3)

which you should read as:

“NOT (A AND B)”   (4)

Confused? Join the club. It wouldn't be confusing, though, if I hadn't started with the ancient history of RTL logic. Most of today's designers have never seen or used an RTL gate anyhow, so they aren't confused at all.
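
If the two flavors of “NAND” still seem murky, a few lines of C may help. This is just an illustration (the function names are mine): rtl_nand() reproduces Table 2, and ttl_nand() reproduces Table 3.

#include <stdio.h>

/* The RTL gate of Figure 2: output is 1 only when both inputs are 0.
   The TTL gate: output is 0 only when both inputs are 1.             */
static int rtl_nand(int a, int b) { return (!a) & (!b); }   /* Equation 2 */
static int ttl_nand(int a, int b) { return !(a & b); }      /* Equation 3 */

int main(void)
{
    printf(" A  B  RTL  TTL\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf(" %d  %d   %d    %d\n", a, b, rtl_nand(a, b), ttl_nand(a, b));
    return 0;
}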


Figure 3: NAND and NOR symbols

The electronics industry has adopted standard symbols for NAND and NOR gates. They're shown in Figure 3, for both RTL and TTL logic (other technologies conform to the TTL rules). The little circles on each connection imply an inversion. Thus, for RTL, the circles are on the inputs. For TTL, they're on the outputs. Back in the days of RTL logic, it was common to use negative logic, in which 1 was represented by a low voltage, and 0 by a high. It seems a little weird, but it was more natural in that logic family.

One final, significant formula from symbolic logic remains. It's called DeMorgan's theorem, and looks like this:

NOT(A · B) = Ā + B̄   and   NOT(A + B) = Ā · B̄   (5)

DeMorgan's theorem gives us a way of transforming expressions to a form that may lead to simplifications. One handy value of DeMorgan's theorem is that it allows us to switch from positive to negative logic and back.

By use of DeMorgan's theorem, we can sometimes modify a subexpression to a form that either is simpler or lets us get rid of an inverter.
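
DeMorgan's theorem is easy to verify by brute force. Here's an illustrative C sketch that checks both forms of Equation 5 over all four input combinations:

#include <stdio.h>

int main(void)
{
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            int lhs1 = !(a & b), rhs1 = (!a) | (!b);   /* NOT(A.B) = NOT A + NOT B */
            int lhs2 = !(a | b), rhs2 = (!a) & (!b);   /* NOT(A+B) = NOT A . NOT B */
            printf("A=%d B=%d   %d == %d   %d == %d\n", a, b, lhs1, rhs1, lhs2, rhs2);
        }
    }
    return 0;
}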


Figure 4: Cross-coupled NAND gates configured as a flip-flop

Memory check
Before moving on with more logic, I'd like to show you one more circuit. In Figure 4, we have two NAND gates, named P and Q, cross-connected together. Note that I'm using the RTL symbols, which have inputs that are active low. The circuit has two inputs; call them A and B.

Now, imagine that both A and B are initially held low. Take a look at the first two lines of Table 2, and you'll see that if one of the inputs is held low, the gate acts like an inverter on the other input. In this case, the two gates feed back into each other, each acting like an inverter. Suppose, for example, that P is low. That value feeds back into the cross-tied input of Q, which forces it high, which “forces” P low, which forces Q high, ad infinitum. The positive feedback assures us that P and Q will always have opposite states, though which one is which is arbitrary.

Next, suppose that we apply a high input to A. Table 2 tells us that P will go low, regardless of the value of Q. That forces Q into the opposite state. The same happens if we apply a high input to B. The one thing we must never do is to make both A and B high at the same time. It won't break the circuit or anything; it will only disable the feedback connections.

This circuit is a flip-flop. Its two outputs are always complementary, as long as A and B are never high at the same time. The flip-flop will stay in its state forever, or at least until someone turns off the power. The state is defined by whichever input was high last. Because outputs P and Q are complementary, convention says we should rename P to Q̄. Similarly, we should rename A to R̄ (for NOT reset) and B to S̄ (for NOT set). We have an R-S flip-flop, the simplest kind.

By now, you have no doubt deduced the value of the R-S flip-flop. If not, the title of this section should give you a clue. It's a memory cell. It remembers whichever input was high last.
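
If you'd like to watch the feedback do its work without picking up a soldering iron, here's an illustrative C sketch. It simulates the two cross-coupled gates of Figure 4, using the Table 2 behavior for each gate and iterating a few times until the outputs settle. The function names and the crude settling loop are my own simplifications, not a timing-accurate model.

#include <stdio.h>

/* Each gate behaves per Table 2: output high only when both inputs are low. */
static int gate(int in1, int in2) { return (!in1) & (!in2); }

/* Gate P sees input A plus Q's output; gate Q sees input B plus P's output. */
static void settle(int A, int B, int *p, int *q)
{
    for (int i = 0; i < 4; i++) {       /* a few passes is plenty here */
        int p_new = gate(A, *q);
        int q_new = gate(B, *p);
        *p = p_new;
        *q = q_new;
    }
}

int main(void)
{
    int p = 0, q = 1;                   /* one of the two stable states */

    settle(1, 0, &p, &q); printf("pulse A high: P=%d Q=%d\n", p, q);
    settle(0, 0, &p, &q); printf("both low:     P=%d Q=%d (state held)\n", p, q);
    settle(0, 1, &p, &q); printf("pulse B high: P=%d Q=%d\n", p, q);
    settle(0, 0, &p, &q); printf("both low:     P=%d Q=%d (state held)\n", p, q);
    return 0;
}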

Back in 1965, I was thrilled to use such gates. I wired up one of my 80-cent Fairchild dual gates in this fashion, on a real, honest-to-gosh, breadboard (a student's drawing board, actually—sort of gives a whole new meaning to the phrase, “Back to the drawing board”). I added circuitry to drive a light bulb and sat there in fascination, as I could turn the light on and off at will.

About that time, a friend walked by and asked what I was doing. “Oh, man, I just created a memory cell,” I bubbled and showed him how I could set the flip-flop into either state at will. “Humph,” he opined. “I can do the same thing with a light switch.”

My friend had no vision. In fact, to my knowledge, he's never owned a computer. But I saw the value of the flip-flop circuit, and I'm now the proud owner of 8,589,934,592 of the little suckers. Plus a few billion more tucked around in various appliances.

One last word on digital electronics. You'll note that, in the description of the flip-flop, I was careful to use terms like “high” and “low,” rather than 1 and 0, to describe the behavior. That's because electronics are not logic diagrams. The symbol for a NAND gate may describe its behavior in general, but the nature of the electronics is driven by the circuitry. Like the circuits of Figures 1 and 2, some real gates have inputs that are active low. No mere symbol can really capture that fact, so when we actually design circuits, we tend to fall back on the high/low terminology.

RTL logic has been obsolete since the '70s, but the circuits are still golden. They've been replaced only for reasons of speed and power, not value. If you need logic behavior in a hurry, you could do worse than a few transistors hooked together RTL-fashion. On the other hand, if you're planning to put together circuits with modern chips, make sure you understand where the little circles on the symbol belong, and design accordingly.

The half-adder
Let's look at another logic pattern that's not only fundamental, but also plays a central role in computer arithmetic. Imagine that we're adding two single binary bits, with a possible carry from a previous addition. We have three inputs and two outputs, and the truth table looks like Table 4. The two outputs are called X and Cy (for carry).

Table 4: Full-adder truth table
A B C Cy X
0 0 0 0 0
0 0 1 0 1
0 1 0 0 1
0 1 1 1 0
1 0 0 0 1
1 0 1 1 0
1 1 0 1 0
1 1 1 1 1

We can write logical equations that capture the rules. To convert this table to an equation, write down every combination that results in X being true. Now OR them together. Ditto for Cy. For X, you'll get:

X = Ā·B̄·C + Ā·B·C̄ + A·B̄·C̄ + A·B·C   (6)

Now let's factor the expression where we can:

X = (Ā·B + A·B̄)·C̄ + (Ā·B̄ + A·B)·C   (7)

We see the pattern in the first pair of parens rather often. It's called the Exclusive-OR, or XOR for short. It's so common that we've even given it its own symbol:

A ⊕ B = Ā·B + A·B̄   (8)

Of course, shortening the expression doesn't change its value. To evaluate A ⊕ B, we'd still need the three gates of the longer expression. Except that we don't, because the chip vendors have nicely provided us with XOR gates.

It would appear that we could implement X with two XOR gates and a NOR. Except that we don't, because we can do tricks with DeMorgan's theorem. Consider the subexpression in the second pair of parens in Equation 7. Applying DeMorgan's theorem gives:

NOT(Ā·B̄ + A·B) = NOT(Ā·B̄) · NOT(A·B) = (A + B)·(Ā + B̄) = A·Ā + A·B̄ + Ā·B + B·B̄

The first and last terms on the right don't matter; A cannot be both true and false at the same time, so that term is always 0. Ditto for B. The equation becomes:

NOT(Ā·B̄ + A·B) = A·B̄ + Ā·B = A ⊕ B   (9)

In short, the term in the rightmost parens in Equation 7 is simply the NOT of the XOR. We can write the equation:

X = (A ⊕ B)·C̄ + NOT(A ⊕ B)·C

Or, even more concisely:

X = (A ⊕ B) ⊕ C   (10)

For reasons that should now be clear, the XOR gate is also called the half-adder. Two half-adders make an adder; does that surprise you?
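
In C, the exclusive-OR of two 0/1 values is just the '^' operator, so Equations 8 and 9 are easy to sanity-check. This little sketch (illustrative only) compares the sum-of-products forms against '^':

#include <stdio.h>

int main(void)
{
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            int xor_sop  = ((!a) & b) | (a & (!b));   /* Equation 8 */
            int xnor_sop = (a & b) | ((!a) & (!b));   /* Equation 9 */
            printf("A=%d B=%d   xor %d == %d   xnor %d == %d\n",
                   a, b, a ^ b, xor_sop, !(a ^ b), xnor_sop);
        }
    }
    return 0;
}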

For completeness, we should also give the equation for Cy, which is:

Cy = A·B·C̄ + A·B̄·C + Ā·B·C + A·B·C   (11)

We can simplify this a bit, and the simplification teaches us some valuable new lessons. As the expression is written, it will require 11 gates to implement. But notice that the first and fourth terms can be collected and factored:

Cy = A·B·(C̄ + C) + A·B̄·C + Ā·B·C   (12)

Now, the term in parens is always true. C can only take on two values; it's either true or it's not. Hence:

C̄ + C = 1

and Equation 12 becomes:

Cy = A·B + A·B̄·C + Ā·B·C

Factoring the last two terms gives our familiar half-adder:


Cy = A·B + (A ⊕ B)·C   (13)

The last XOR is, in fact, the same one computed for deriving X, so we get it for free. The full adder can be implemented with five gates. Compare that to the 22 we would have needed without simplifying.
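
Here's an illustrative C sketch that confirms the bookkeeping: for all eight rows of Table 4, the simplified forms of Equations 10 and 13 agree with the raw sum-of-products forms of Equations 6 and 11.

#include <stdio.h>

int main(void)
{
    for (int a = 0; a <= 1; a++)
      for (int b = 0; b <= 1; b++)
        for (int c = 0; c <= 1; c++) {
            /* Raw forms, read straight off the truth table. */
            int x_raw  = ((!a)&(!b)&c) | ((!a)&b&(!c)) | (a&(!b)&(!c)) | (a&b&c);
            int cy_raw = (a&b&(!c)) | (a&(!b)&c) | ((!a)&b&c) | (a&b&c);

            /* Simplified forms. */
            int x_simple  = (a ^ b) ^ c;                /* Equation 10 */
            int cy_simple = (a & b) | ((a ^ b) & c);    /* Equation 13 */

            printf("A=%d B=%d C=%d   X: %d == %d   Cy: %d == %d\n",
                   a, b, c, x_raw, x_simple, cy_raw, cy_simple);
        }
    return 0;
}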

Simple is good
It's impossible to overemphasize the value of simplifying logic equations. The reasoning is quite simple and practical. When we're working out the Boolean algebra for a given application, we only need to do it once. However, when we build a system with logic gates, we may sell millions of them. If we can work the analysts a little harder at the front end, to save a few gates in production, we'll easily amortize the labor.

You've seen at least one example of the value of simplification. You've also gotten enough of a taste to see that the manipulations we need to do to simplify are not always obvious. A lot seems to depend on how cleverly we can factor and manipulate the terms. When I'm doing simplification by manipulating equations, there's always that nagging feeling that I could have gotten more simplicity by a different approach.

If you're getting the idea that it would be nice to have formal—even automated—rules for simplifying logical expressions, you'd be right. There are at least two powerful manual approaches that pretty much guarantee an optimal simplification, plus a variety of very powerful computer implementations. I'll be talking about them more in my next column.

I should mention that the need for simplification seems to ebb and flow, as the technology changes. Back when we used to implement logic using discrete gates, a colleague of mine did his Ph.D. dissertation on a new, computerized way to minimize the logic. His technique was in great demand—for about a year. That's when EPROMs became cheap.

When you have a cheap EPROM, you don't really need minimization. You can simply use all the inputs as address lines on the EPROM and program it to generate the desired outputs. The EPROM's address decoding takes the place of discrete gates. It's overkill and a terrible waste of gates; but the gates are already there anyway, and the chip is cheap, so who cares? It always seemed to me a great pity that my friend had solved the problem of logic minimization just at the moment in history when people stopped caring.
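
The trick is easy to show in C. The sketch below is only an illustration (the array name is mine): it plays the part of the EPROM for the full adder of Table 4, with the three inputs forming the address and the stored byte supplying both outputs. No Boolean algebra required, just eight bytes of "memory."

#include <stdio.h>

/* "EPROM" contents: address = (A<<2)|(B<<1)|C, data = (Cy<<1)|X. */
static const unsigned char rom[8] = {
    0x0,  /* 000 -> Cy=0 X=0 */
    0x1,  /* 001 -> Cy=0 X=1 */
    0x1,  /* 010 -> Cy=0 X=1 */
    0x2,  /* 011 -> Cy=1 X=0 */
    0x1,  /* 100 -> Cy=0 X=1 */
    0x2,  /* 101 -> Cy=1 X=0 */
    0x2,  /* 110 -> Cy=1 X=0 */
    0x3   /* 111 -> Cy=1 X=1 */
};

int main(void)
{
    for (int addr = 0; addr < 8; addr++)
        printf("A=%d B=%d C=%d  ->  Cy=%d X=%d\n",
               (addr >> 2) & 1, (addr >> 1) & 1, addr & 1,
               (rom[addr] >> 1) & 1, rom[addr] & 1);
    return 0;
}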

Fortunately for him, I suppose, the pendulum has since swung back the other way. Nowadays, the hot chips are programmable array logic (PAL) chips, programmable gate arrays, and custom ASICs, all of which can implement discrete gates. They're back in vogue because the equations we need to implement are orders of magnitude more complex than those we needed in 1976. Saving gates has become important again.

Electronics Workbench
Many computer tools have been designed to help develop logic circuits. A few years ago I worked with one for programming PALs. You could input the desired behavior by entering a set of logic equations, however primitive and poorly organized. The tool would then reduce the equations to a minimized set. If asked, it would then generate the instructions for programming the PAL. The tool included a logic simulator, so you could simulate the behavior of the final product before burning the design into silicon.

One of my favorite tools was one called Electronics Workbench (EW). EW was one of several electronic circuit simulators then available. All that I know of are based on either SPICE or PSPICE, two of the major circuit solvers around. I currently have two called Tina (from DesignSoft) and B2 SPICE A/D 2000 (Beige Bag Software). Both are excellent, but they're geared to analog circuitry. The thing I liked about EW was that it not only let me mix analog and digital simulated circuitry, it would also do a lot of the work for me.

Think of a given set of logic rules. We might represent them in at least three ways: with a truth table, logic equations, or a circuit implementation. All three contain the same information, and one can easily think of cases where it would be nice to translate from one representation to another.

EW enabled you to do just that. Typically, you might input the requirements via a truth table, since that's the most direct way of defining what has to be done. With the click of a mouse, EW would then convert the truth table into a set of minimized logic equations. One more click, and it would realize the equations in terms of gates. Alternatively, if you already had a circuit design, EW would turn it back into truth table or equations. I found the ability to convert from one form to the other extremely useful.
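
One of those translations, truth table to (unminimized) equations, is easy to sketch in C. The example below is illustrative only; real tools go on to minimize the result. It writes out one product term for every 1 in the carry column of Table 4, using '/' to mark a NOT:

#include <stdio.h>

/* Carry output of Table 4, indexed by the three input bits ABC. */
static const int carry_table[8] = { 0, 0, 0, 1, 0, 1, 1, 1 };

int main(void)
{
    const char *name[3] = { "A", "B", "C" };
    int first = 1;

    printf("Cy = ");
    for (int row = 0; row < 8; row++) {
        if (!carry_table[row])
            continue;                                   /* only the 1s contribute */
        if (!first)
            printf(" + ");
        for (int bit = 0; bit < 3; bit++) {
            int value = (row >> (2 - bit)) & 1;         /* A is the high-order bit */
            printf("%s%s", value ? "" : "/", name[bit]);
        }
        first = 0;
    }
    printf("\n");
    return 0;
}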

Was it always perfect? Not at all. For starters, EW didn't use negated inputs on the gates. That is, it used AND instead of NAND gates. Second, it always used 2-input gates, even though gates with more inputs are available. I found that I always needed to hand-optimize the circuit quite a bit to get what I wanted. Even so, the ability to get even a first cut at a circuit realization seemed, to me, a big plus.

Next month, I'll be showing you the two main ways to minimize logic equations, and we'll also look at a couple of classic logic circuits. See you then.

Jack Crenshaw is a senior software engineer at Spectrum-Astro and the author of Math Toolkit for Real-Time Programming, from CMP Books. He holds a PhD in physics from Auburn University.
