Encoders provide a sense of place

One of the most exciting embedded systems I've worked on was a huge gauge that measured the thickness of metal in a steel mill. A frighteningly radioactive cesium source, encased in a ton of lead, shot gamma rays through as much as 4 inches of steel. An ion chamber measured just how much of the radiation made it through the steel, providing the raw data for a thickness calculation. Two embedded PDP-11 minicomputers and a handful of Z80s drove the 7-ton sensor assembly back and forth on a railroad track so the gauge could read the steel's thickness at any point across the plate's 14-foot width. Without question, debugging the code that ran the sensor back and forth on the track was the most fun of the project! One bug sent the monstrous assembly through an electronics cabinet, causing no end of recriminations.
A problem we faced on this project was feeding the sensor's position on the railroad track back into the computer. Inaccurate positional information would invalidate all of the thickness data. The end customer was forking over better than $2 million for the system; they expected correct data all the time. In this case the solution was fairly simple: we put a shaft encoder on one of the wheels. The encoder transmitted a 12-bit binary code representing position back to the computer.
Measuring position is important in most factory control environments, the home of a lot of embedded systems. More often than not some sort of encoder provides all of the location data to a computer.
An encoder is a mechanical device coupled to a rotating shaft that provides a digital representation of the shaft's position. Modern encoders almost exclusively use optical techniques to convert the shaft's angle to digital form. A beam of light shines through a disk fixed to the shaft. Photocells detect marks on the disk. Depending on the type of encoder, these marks will represent either an absolute or relative position.
Shaft encoders convert position information into binary code. A 12-bit encoder will output 000 with the shaft at zero degrees and FFF at just shy of 360 degrees. It's pretty easy to compute the shaft's angular position via a formula or lookup table given only the number of encoder bits.
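The formula amounts to one multiply and one divide. Here's a minimal sketch in C; the names and the 12-bit width are mine, chosen for illustration:

```c
#define ENCODER_BITS   12
#define COUNTS_PER_REV (1 << ENCODER_BITS)       /* 4,096 counts per turn */

/* Convert a raw 12-bit encoder code to degrees of shaft rotation. */
double code_to_degrees(unsigned code)
{
    /* Mask to the encoder's width, then scale counts to degrees. */
    return (code & (COUNTS_PER_REV - 1)) * 360.0 / COUNTS_PER_REV;
}
```

A count of 0x800 (half scale) comes back as exactly 180 degrees; 0xFFF lands just shy of 360.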
Sometimes binary is less than ideal. The science of information theory teaches us that straight binary eats up a lot of "channel capacity." As the encoder slowly rotates the binary value increases monotonically, thankfully keeping the software that reads it simple. Unfortunately, with each code change only one bit might be different (say, from 000 to 001) or many bits might change (from 7FF to 800). Trouble lurks when many bits change at the same time; if the cable is long all that switching can induce crosstalk that corrupts the data.
Gray code is one variation of binary that eliminates the problem. A Gray code encoder will change only one bit at a time as the shaft rotates. Table 1 shows the relationship between Gray and binary. Note that for each sequential entry in the table, the Gray code changes by only a single bit. This reduces the demands made on the transmission channel, whether they are simply wires or a radio link.
Table 1: Gray and binary codes
Gray Code    Binary Code
000          000
001          001
011          010
010          011
110          100
111          101
101          110
100          111
While Gray code might be ideal for an encoder's output, it's pretty hard to use in internal computations. Standard practice is to convert Gray to binary using a table translation scheme.
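The translation table is easy to generate, because Gray and binary are linked by a simple shift-and-XOR identity. A sketch in C (the function names are mine; a lookup table for an n-bit encoder can be precomputed by running the conversion over all 2^n codes):

```c
/* Convert a Gray-coded value to straight binary by repeatedly
   XORing in right-shifted copies of the input. */
unsigned gray_to_binary(unsigned gray)
{
    unsigned bin = gray;
    while (gray >>= 1)
        bin ^= gray;
    return bin;
}

/* The inverse: straight binary to Gray code, used to build Table 1. */
unsigned binary_to_gray(unsigned bin)
{
    return bin ^ (bin >> 1);
}
```

For example, binary 100 (4) becomes Gray 110, and converting Gray 110 back yields binary 100, matching Table 1.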
Computing absolute position is easy if we know the encoder's resolution (in distance per revolution). One unknown still exists: what does the "zero" position correspond to?
The zero position of a mechanical system is usually all the way at one extreme of travel or the other. Sure, it's possible to park the system at the zero position and have a technician manually rotate the encoder to output 000. Then it's easy to figure distance simply as an offset from the 000 position.
If anything in the mechanical part of the beast slips, though, the data will be wrong. It's far better to add a limit switch actuated when the system moves to the zero position. Then the software can read the encoder whenever the limit is detected and apply this reading as an offset to all position calculations. It saves the tedious calibration step and reduces errors.
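The software side of this calibration is just a latch and a subtraction. A hypothetical sketch; read_encoder() stands in for the real port read (here it returns a test value):

```c
/* Test stand-in for the hardware: a fake encoder register. */
static unsigned fake_counts;
static unsigned read_encoder(void) { return fake_counts; }

static unsigned zero_offset;        /* encoder reading at the limit switch */

/* Call this when the zero-position limit switch is detected:
   latch the current encoder reading as the system's zero. */
void on_limit_switch(void)
{
    zero_offset = read_encoder();
}

/* All position calculations then apply the latched offset. */
long current_position(void)
{
    return (long)read_encoder() - (long)zero_offset;
}
```

If the encoder happened to read 100 counts at the switch, a later reading of 350 reports a position of 250 counts from zero, regardless of where the encoder was mounted.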
A shaft encoder implicitly contains direction information. The software will know if the shaft is rotating clockwise or counterclockwise by examining the direction of the code change. This is important in bidirectional systems, particularly when the shaft might be controlled by external forces other than the computer.
Whenever moving parts are involved be wary of backlash. High-quality encoders themselves have no inherent backlash. In other words, if the direction of rotation changes there will be no count uncertainty due to mechanical play in the unit. Unfortunately, a perfect encoder might still see backlash from play in the rest of the mechanics. When a motor starts spinning, play in the gearing might make the encoder not see the first few millimeters of travel. Where accurate positioning is important the software might have to make the system always approach a final position from the same direction, thus always working with constant backlash errors. It's better to use low-backlash gears if you can convince the mechanical group to go along with the extra cost.
Never put an encoder on a powered wheel. If the motor's startup makes the wheel slip for an instant before it grabs the track, the encoder position will be wrong.
Another type of encoder gives a pulse stream as the shaft rotates, rather than absolute position data. The software must count the number of pulses and infer position indirectly.
Before today's scribed glass disks, encoders looked rather like toothed gears. The beam of light was interrupted by the rotating teeth, giving rise to the name "toothed encoders." The name stuck even as the technology passed the concept by.
Shaft encoders with binary codes are ideal for some systems but have a number of inherent problems. Their resolution is limited. It's difficult to make an accurate encoder with more than 12 bits of resolution—4,096 counts per revolution, which is just not enough for some applications. A toothed encoder can generate tens of thousands of pulses per revolution.
In other cases the encoder is used not so much to indicate position as to command the software to read an I/O port. A simple analogy is the distributor in a car. At each of four positions per revolution (in a four-cylinder engine) the distributor causes a contact to close, firing off a spark plug. In the world of embedded systems a scanning colorimeter might use a rotating diffraction grating to sweep thousands of colors of light across a sample. The software must read reflected energy at each color. If the grating's shaft is connected to a toothed encoder, then each of the thousands of pulses can interrupt the CPU and make it read the reflected light. In this case we don't care about the shaft's absolute position so much as we need a "read data now" interrupt from the moving pieces.
Remember the bad old days of punched cards? Some card readers had a toothed encoder coupled to the shaft that moved cards through the scanner. One revolution of the shaft corresponded to the length of the whole card. Eighty separate pulses per revolution came from the encoder. Each pulse meant "read a character now."
Most toothed encoders come with twin outputs. One is the pulse stream indicating relative position. The other is a "zero" pulse that is asserted only once per revolution, indicating the start of a rotation. Using the zero and count pulses the program can indeed come up with the same kind of absolute position information output from a shaft encoder. If the encoder is calibrated just like a shaft encoder, the cheaper toothed version will give accurate position information.
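The two channels translate into two tiny interrupt handlers: one counts teeth, the other resets the count once per revolution. A sketch, assuming a hypothetical 1,000-tooth encoder (the count only means anything once the zero pulse has synchronized it):

```c
#define PULSES_PER_REV 1000          /* assumed tooth count, for illustration */

static volatile unsigned pulse_count;    /* teeth seen since the zero pulse */
static volatile int      synchronized;   /* has the zero pulse arrived yet? */

/* ISR for the count channel: one tooth has passed the photocell. */
void count_isr(void)
{
    if (++pulse_count >= PULSES_PER_REV)
        pulse_count = 0;             /* wrap at one full revolution */
}

/* ISR for the zero channel: the once-per-revolution index pulse. */
void zero_isr(void)
{
    pulse_count  = 0;
    synchronized = 1;
}
```

Until zero_isr() fires the count is only relative; afterward pulse_count is an absolute position within the revolution, just like a shaft encoder's output.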
One downside of computing position this way is that a toothed encoder gives no information about the direction of rotation of the shaft. The pulse stream looks the same either way. Further, if the mechanical assembly is moved when the computer is turned off the position will be incorrect. Unless, of course, the computer recalibrates everything on power up.
Most of the systems I've worked on required very fast response to each encoder pulse. Only rarely can one afford the luxury of polling a port to find that a pulse is asserted.
All polled loops are subject to varying degrees of latency. Consider the polling code in Listing 1. The loop will fall through immediately if the bit becomes asserted just before the input instruction is executed. If it goes high just after this same instruction, it will execute the entire loop again, doubling the detection time. This variable latency is sometimes deadly.
Listing 1: A polled loop is subject to variable latency
loop:   in   a,port     ; read pulse port
        and  a,80       ; isolate pulse bit
        jz   loop       ; jump if no pulse
Again consider the case of a car's distributor. Variable latency will make the engine run rough, since the time of each spark plug ignition will dither. In the case of a typical instrument collecting data every 50μs, a 5μs dither represents 10% acquisition uncertainty. In a high-speed encoder system minimizing latency becomes a sort of search for the Holy Grail. I've spent weeks yanking just a few microseconds out of the code to control the dither.
One obvious solution is to connect the pulse stream to the processor's interrupt input. As we all know, an interrupt will immediately stop the CPU and vector off to the interrupt service routine (ISR). Actually, "immediately" is not quite true. The CPU recognizes an interrupt only at an instruction boundary, and different instructions take different amounts of time to execute. The range can exceed an order of magnitude, particularly when a multiply instruction is compared with a NOP. Conventional interrupt handlers are thus no better than a polled loop at minimizing latency.
Worse, interrupts are slow. When handling very fast data the interrupt structure just might not be able to keep up. After all, a vectored interrupt usually requires an acknowledge cycle to get the interrupt source, several pushes to stack a return address and other context information, and an indirect read from memory. All of this takes time—sometimes quite a few microseconds. In C the dither is worse, as many compilers fritter away too much time by pushing all of the registers, not just the few that get modified by the ISR.
Some processors support several types of interrupts. Consider the Z180/Rabbit family, which has an oddball mode left over from its 8080 heritage. On an interrupt in this mode, external hardware can jam an instruction into the execution stream. This bypasses all conventional interrupt processing.
The code in Listing 2 takes advantage of a jammed NOP instruction. The interrupt does nothing but exit the halt condition and execute a NOP. Latency is just the processor's raw interrupt latency, which is usually only one or two machine cycles. With no interrupt servicing overhead, the code runs about as fast as possible.
Listing 2: Taking advantage of a jammed NOP instruction
loop:   halt            ; wait for interrupt
        <process interrupt>
        jmp  loop
The technique can be improved a bit where speed is a real problem. Probably the interrupt-processing code will maintain a count of the number of pulses received and will exit the loop when the count is exceeded. Keep the count in the HL register pair. Then, jam an INC HL instruction instead of a NOP. This single-byte opcode performs some useful work in the code, slightly increasing the system's performance.
A lot of CPUs have low-power sleep modes. Though some wake very slowly indeed, like getting a teenager out of bed for an early morning school call, others leap into action as soon as an interrupt occurs. Put the processor to sleep, wait for an encoder pulse, and let the interrupt immediately start data collection.
Most processors are not so well endowed. The x86 real-mode family, for example, can only handle an interrupt with the conventional slow service routine. Still, options exist even in these cases. Connect external hardware that drives the processor into a permanent WAIT condition when a particular I/O cycle occurs. Then, have the encoder release the WAIT. The code will look like Listing 3.
Listing 3: Encoder releases the WAIT condition on processors with conventional slow service routine
loop:   out  dx,al      ; assert a WAIT
        <process data>  ; come here after encoder pulse
        jmp  loop
Again, the latency will be minimal and execution speed blindingly fast.
I've been discussing conventional toothed encoders where each pulse occurs exactly the same angular distance from the previous one. When rotated at a constant speed, the pulse output will represent one constant frequency.
On one project we rotated a diffraction grating to produce a linear sweep of colors. Unfortunately, the angle versus frequency of a grating is related to the sine-squared of the grating's angular position. For a while we used a hideously expensive and unreliable assembly of specially shaped cams to rotate it at the trigonometric rates, keeping a linear color output. The grating's irregular motion and constant accelerations burned up bearings in days. Then someone had a brilliant idea: why not make a special encoder to linearize the data?
A mechanical engineer removed the cams so the grating would rotate at a constant rate. Then, we had a special encoder made that gave pulses at different rates depending on the position of the shaft. These pulses were carefully spaced to compensate for the grating's sine-squared frequency characteristic. By taking data whenever a pulse arrived, the computer acquired an array which was linear over the spectrum.
The moral is that in the embedded world we have unique opportunities to solve complicated problems creatively. And we can still use assembly language, when appropriate.
Who says engineers don't have more fun?
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at firstname.lastname@example.org.