# It's only logical

October 8, 2003

From Aristotle to Boole, Jack traces a history of logic and then explains how to simplify Boolean equations.

It's funny how sometimes when you're in the middle of a revolution, it's hard to notice. Things progress so smoothly and seamlessly, life seems pretty normal. Sometimes in the dark of night, I reflect on how many changes the world has seen in my lifetime and that of my immediate family.

My father was born in 1905, which means that he got to see the coming of the telephone, the telegraph, radio, TV, the car, the airplane, and space flight. He went from chugging along dirt roads at 15mph in the family Model T, to a 30,000 mile-per-year life cruising the Interstate with the cruise control set on a brisk 85 and the air-conditioned cabin awash in FM-soothed luxury. He went from his first flight in a barnstorming Jenny to routinely crossing the country at 600mph.

If you think that's something, consider my grandfather. He died in 1948 at age 80, which means he was born in 1868—only three years after the Civil War. Imagine the differences he saw.

When I was studying physics in college (not long thereafter), we didn't buy a lot of our lab equipment, which was all vacuum tube stuff. We made it. I recall an instrument in the electronics junkroom that was some professor's failed attempt to build a decade counter using tubes. Commercial units had about five tubes per decade and ran \$1,000 and up. Now you can buy two decade counters on a chip, for about a buck, quantity one.

In the '60s, several vendors were making logic units for the military market. A single AND-gate, one per edge-connected circuit board, cost \$500. I could only watch from the sidelines.

Around 1964, Fairchild got my attention with their line of Resistor-Transistor logic (RTL) ICs in low-cost, epoxy packages. A dual gate cost 80 cents; a single flip-flop, \$1.50. I was hooked, big time.

Well, no wonder. From where I sat, it was a thousandfold reduction in cost. But consider that at that price, the 256MB of RAM in today's low-end PC would have cost more than \$3 billion! Instead, it costs around 40 bucks—a further reduction by a factor of 80 million. At that rate, your Lexus should have cost, say, the price of a glass of water. Not the cost of the glass, not the price at a restaurant; just of the water itself.

In the early days of computing, the great John von Neumann was often asked why all the excitement over computers. They were, after all, just like ordinary desk calculators, only faster. Von Neumann formulated his principle in response: any change in something by a couple orders of magnitude is not just a change in speed, but a change in kind.

The principle is easy to illustrate. A typical brisk walk is around 5 mph. Two orders of magnitude faster is a jet plane. Two more orders, and you've got an interplanetary mission to Mars. No one in their right mind would ever suggest that going to Mars was like walking, only faster. It's a change in kind.

Even so, something is sometimes lost in the change. Back when bits cost a dollar apiece, we didn't have very many, but we sure knew how they worked. These days, few people do. Hence the motivation for this column. I'm going to teach you a little logic.

Figure 1: A logic inverter

Electronics 101
My starting spot might surprise you. It's a circuit diagram, as shown in Figure 1.

What's that you say? You don't do electronics? Not to worry. This circuit is so simple, even a computer science grad can understand it. The transistor is basically a current amplifier. Though the rules about how transistors work are amazingly simple (and you'd do well to learn them), we don't even need them in this case.

Here, the transistor is used simply as a switch. Inject a little current, Ib, at the base B, and the transistor is turned on, allowing large currents to flow from collector (C) to emitter (E). Stop the current at the base, and you also stop the current through the collector; the switch is turned off. Simple.

The transistor is connected to its power supply via a resistor, Rc. When the switch is off, almost no current flows through Rc. The voltage drop across it is negligible, and any load at the output "sees" the +5V supply. On the other hand, when the transistor is turned on (conducting), it looks pretty much like a short circuit to the load. The entire supply voltage is dropped across Rc, and the output voltage is low—just barely above ground.

The rules, so far, are ridiculously simple:

• Current in, Vout low

• No current in, Vout high

How do we get the current into the base? Simply by applying a positive voltage at Vin. The base itself looks like a short circuit to any positive voltage. Resistor Rb is fairly high, so we can safely apply a voltage to it without frying the transistor. Though the transistor itself is a current device, the circuit works on voltages, according to these new rules:

• Vin high, Vout low

• Vin low, Vout high

Now you can see why it's called an inverter. Whatever level the input is, the circuit inverts it.

Figure 2: An RTL NAND gate

Figure 2 shows a NAND gate (more on the name, later). Here I've connected two inverters in parallel, so that they share a common resistor Rc. For the record, this is precisely the circuit used in those Fairchild RTL logic chips, two gates for 80 cents. Just as with the inverter, this circuit still provides an output that's either high or low. The key difference is that it now responds to either input.

Recall, if a transistor switch is turned on (conducting), it looks like a short to the power supply and output. In Figure 2, that's true of either switch. The output can be high only when both transistors Qa and Qb are off. That only happens when both inputs A and B are low.

It's normal to describe the function of any logical device by a truth table, which gives the expected output for every possible combination of inputs. Table 1 shows you what a truth table looks like.

Table 1: The NAND gate

| A    | B    | Out  |
|------|------|------|
| low  | low  | high |
| low  | high | low  |
| high | low  | low  |
| high | high | low  |

Table 2: NAND gate truth table

| A | B | Out |
|---|---|-----|
| 0 | 0 | 1   |
| 0 | 1 | 0   |
| 1 | 0 | 0   |
| 1 | 1 | 0   |

No Boole
All logic devices work pretty much like the circuits I've drawn, in that their outputs are either high or low, but never (except during the brief transition) in between. This is the antithesis of fuzzy logic. A thing is either high or low, on or off, true or false. The temptation to assign logical meaning to the output voltage is overwhelming, and it's one we won't resist.

Logic has been around a long time, at least since the time of Aristotle. The kind of logic I described is called Aristotelian logic. No shades of gray, just true or false.

In 1847, mathematician George Boole introduced symbolic logic, which lets us write logical expressions in a manner much like ordinary algebra. Boole chose to assign the numeric symbols 0 and 1 to the two possible states of a logical variable. In electronic circuits, we almost always assign 0 to the low voltage, 1 to the high. In Boolean terms, Table 1 would be written as shown in Table 2.

As with ordinary algebra, we tend to assign values to things that follow Aristotelian logic. If A is a true thing, it has a value of 1. If it's false, the value is 0.

Boole's great contribution was to define mathematical operations on logical variables that look very much like their corresponding algebraic operators: '+' for OR, '*' or '.' for AND.

In symbolic logic, the rules are:

| OR        | AND       |
|-----------|-----------|
| 0 + 0 = 0 | 0 . 0 = 0 |
| 0 + 1 = 1 | 0 . 1 = 0 |
| 1 + 0 = 1 | 1 . 0 = 0 |
| 1 + 1 = 1 | 1 . 1 = 1 |

Except for the somewhat shocking last formula in the OR column, these rules are exactly what they would be in ordinary algebra. As you can see, the two operations complement each other. The result of an OR operation is true if either input is true; the result of an AND is true only if both inputs are true.

The neatest part is that even the precedence rules of algebra—including the effect of parentheses—still work. This means that I should be able to write down rather complex equations in symbolic logic and simplify them in the usual fashion, by factoring, expanding, and so on.

Even if you never solder a single wire in your life, even if your only contact with embedded systems is the C code that runs in them, you can still get value from the way logical relationships can be simplified.

To complete Boolean algebra, we need one more operator: the unary operator that corresponds to negation. In Boolean algebra, it's called the NOT, and its symbol is usually a bar over the number or variable. Thus:

NOT 0 = 1
NOT 1 = 0

Now, I'll explain the naming convention for the NAND gate. The output of the gate is high (true, 1) only if both input A and input B are not true (false, 0).

We can describe the output as:

Out = (NOT A) . (NOT B)    (1)

Or, more concisely:

Out = NOT (A + B)    (2)

Either way, you should read this, "NOT A AND NOT B."

Why do we use NAND gates instead of AND gates? Simply because the electronics requires it. As you saw in Figures 1 and 2, each circuit introduces a natural inversion. We've learned to live with this fact. We could add an inverter to each input to straighten things out, and indeed you can buy true AND-gates and OR-gates if you choose. They don't get much use, though, because most designers rightly blanch at the notion of adding delay-producing and power-eating circuitry into electronic products, just to keep from blowing the minds of analysts. Better to make us analysts work a little more. It will do us good; keep our brains nimble.

Perhaps the designers of chips had that in mind when they converted to the newer, transistor-transistor logic (TTL). In TTL circuits, the truth table for a NAND gate is different from RTL. It's shown in Table 3.

Table 3: TTL NAND gate truth table

| A | B | Out |
|---|---|-----|
| 0 | 0 | 1   |
| 0 | 1 | 1   |
| 1 | 0 | 1   |
| 1 | 1 | 0   |
