Extracting clock signals from high-speed communications

One of the tricks to high-speed communication is embedding the clock signal within the data. Getting the clock back out, and using it to recover the data, requires some careful design.

Parallel buses are being replaced by serial buses to save space, and even to improve speed. Done right, a high-speed serial bus can transfer more data more reliably than a traditional parallel bus. Not surprisingly, there are challenges to making this work correctly, and one of those is separating the clock signal from the data. The fact that the signals are distorted by distance and time doesn't make it any easier.

High-speed transmission systems
In recent years as digital transmission systems require more capacity, there's a trend to replace parallel buses with high-speed serial lines. Although the transmission distances may be less than a meter, the design of such a link has many similarities to communication systems that span several meters or even tens of kilometers.

Data transmission systems are made up of three basic elements: a transmitter, a channel, and a receiver. If the system includes a serial link, whether for a communications system or a serial bus, the three elements are usually specified individually so that the entire system, when put together, achieves some overall performance level. In a digital communications system the key specification is bit-error ratio (BER). Transmitters are generally specified to have a certain waveform performance quantified by parameters such as eye-diagram opening, edge speeds, and power or voltage levels.

The receiver is specified a bit differently. In many systems, receiver performance is defined to guarantee that it will be able to always achieve the desired system BER as long as the incoming signal meets a minimum performance level. As might be expected, this minimum acceptable performance level is often described in terms similar to the transmitter specifications. You can understand this better by examining how a high-speed digital receiver operates.

The high-speed receiver
In its most basic form, the receiver is a decision circuit. Its primary function is to decide whether each incoming bit is a logic 1 or a logic 0. The decision is easier, and less error-prone, if the voltage or power separation between the incoming logic levels is as large as possible. This separation partially drives the specification of the transmitter, but it also drives the specification of the receiver. Given that a transmitter has a limit to the signal levels it can reasonably achieve, and that the signal will be degraded going through the channel, there is a tradeoff between how large the transmitted signal must be and how small it can become while still being accurately interpreted by the receiver.
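To make the decision process concrete, here's a minimal Python sketch of a threshold slicer. It isn't the circuit itself, just a model: the sample values and the 0.5 threshold are arbitrary assumptions for illustration.

# Minimal sketch of a receiver decision (slicer) circuit, assuming the
# waveform has already been sampled once per bit period. The sample
# values and the 0.5 threshold are illustrative assumptions.

def slice_bits(samples, threshold=0.5):
    """Decide logic 1 or logic 0 for each per-bit sample."""
    return [1 if s > threshold else 0 for s in samples]

# Noisy samples taken near the center of each bit period:
received = [0.05, 0.92, 0.12, 0.88, 0.81, 0.07]
print(slice_bits(received))   # -> [0, 1, 0, 1, 1, 0]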

Figure 1 shows a basic serial communication system and how a signal may be degraded when it reaches the receiver.


Figure 1: Serial communications system

Ensuring that the decision occurs at an optimal time is equally critical. Timing matters most when a logic 1 is preceded or followed by a logic 0, or vice versa. In these cases, it's usually ideal for the decision to take place at the center of the bit period. For example, if the incoming bit stream is 0, 1, 0, it's critical that the decision not take place close to where the signal is switching between amplitude levels, such as the 0-to-1 transition or signal “edge.” If the decision takes place near the edge, the likelihood of a bit error increases dramatically.

Some form of clock signal must be provided to the decision circuit to time the decision process. If the incoming data stream is at 1 Gbps, then a 1 GHz clock is needed. But not just any 1 GHz signal will suffice. This signal must be highly synchronous with the incoming data stream. Just a small offset in frequency between the data rate and the receiver clock will cause the decision point to slip from its ideal time location and degrade the BER. You can solve this problem by deriving the receiver clock signal directly from the incoming data stream itself.
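To see why synchronism matters so much, consider a quick back-of-the-envelope calculation in Python. The 100 ppm clock offset is an assumed value, not taken from any particular standard:

# Back-of-the-envelope: how quickly a free-running receiver clock drifts
# away from the bit center. The 100 ppm offset is an assumed value.

data_rate = 1e9                          # 1 Gbps -> 1 ns unit interval (UI)
offset_ppm = 100                         # assumed receiver clock offset

ui = 1.0 / data_rate                     # bit period in seconds
drift_per_bit = offset_ppm * 1e-6 * ui   # timing slip accumulated each bit
bits_to_half_ui = 0.5 * ui / drift_per_bit

print(f"drift per bit: {drift_per_bit * 1e15:.0f} fs")
print(f"bits until the sampling point reaches the bit edge: {bits_to_half_ui:.0f}")
# With a 100 ppm offset the sampling point slips half a bit after only
# 5,000 bits -- about 5 microseconds at 1 Gbps.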

A clock-extraction circuit is often based on a phase-locked-loop (PLL) architecture. A voltage-controlled oscillator (VCO) initially runs at a frequency close to the expected data rate. Part of the VCO signal is routed to a phase detector, which compares the phase of the VCO signal with a portion of the incoming data stream. If the VCO is not at the same rate as the data signal, the phase detector produces an error signal proportional to the phase difference between them. This error signal is used as a control signal at the VCO to adjust its frequency and force the VCO to match, or lock to, the incoming data signal. The receiver circuit now has the critical capability of timing its decision circuit at exactly the rate of the data and in the center of the bit period for optimal BER performance.
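If you want to experiment with the idea, the following Python sketch models the loop purely at the phase level. The proportional and integral gains are arbitrary assumptions, not values from any real clock-recovery chip:

import math

# Phase-domain sketch of a PLL-based clock-recovery loop. The data is
# represented only by the phase of its transitions; the proportional and
# integral gains are illustrative assumptions.

def recover_clock(data_phase, kp=0.1, ki=0.01):
    """Track the phase of the incoming data with a proportional-integral loop."""
    vco_phase = 0.0
    integrator = 0.0
    recovered = []
    for target in data_phase:
        error = target - vco_phase            # phase-detector output
        integrator += ki * error              # loop filter, integral path
        vco_phase += kp * error + integrator  # VCO phase update
        recovered.append(vco_phase)
    return recovered

# Data whose phase wanders slowly (low-frequency jitter): the loop follows it.
data_phase = [0.2 * math.sin(2 * math.pi * n / 200) for n in range(1000)]
clock_phase = recover_clock(data_phase)
print(f"final tracking error: {data_phase[-1] - clock_phase[-1]:+.4f} UI")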

Figure 2 shows how a clock signal can be derived from a data stream.


Figure 2: PLL-based clock extraction circuitry

The incoming data stream is likely to have some small fluctuations above and below its ideal rate. These fluctuations are commonly referred to as jitter. The receiver PLL must be able to respond to changes in the data rate so that the decision circuit continuously operates in the center of the bit. The ability of the PLL to tolerate jitter is determined by the bandwidth and gain of the VCO/phase detector feedback loop. The PLL then must have adequate bandwidth or loop-response speed to keep up with the anticipated rates of jitter. Thus, a third important aspect of the PLL design is the control loop.

You can think of the VCO as a flywheel that is kept spinning at the desired rate by incoming bit transitions. In other words, whenever a change occurs from 1 to 0 or from 0 to 1, the VCO has something to synchronize to. It's important for the VCO to stay on-frequency even when there are no transitions in the data bits, such as a long run of 0s or a long run of 1s.

You can keep the VCO on frequency by placing an effective low-pass filter between the output of the phase detector and the control input of the VCO. This filter stabilizes the VCO control signal and, subsequently, the VCO frequency. It would seem that the higher the stability, the better. But the filter also limits how fast the jitter on the incoming data can be while still allowing the VCO to track and follow it. When the data signal has jitter, the error signal at the output of the phase detector is an analog representation of that jitter; the phase detector is effectively a jitter demodulator. Since the error signal controls the VCO, the jitter on the data is transferred to the VCO. Again, this transfer is the result you want: it allows the decision circuit to stay synchronized with the data.
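As a rough model (assuming a simple first-order loop, which a real receiver PLL may not be), the jitter transfer from the data to the recovered clock looks like a single-pole low-pass response. The 100 kHz corner used here is just an assumed value:

import math

# Rough model: a first-order PLL passes jitter from the data to the
# recovered clock like a single-pole low-pass filter. Real loops are
# usually higher order; the 100 kHz corner is an assumed value.

def jitter_transfer(f_jitter_hz, loop_bw_hz=100e3):
    """Fraction of the data jitter that appears on the recovered clock."""
    return 1.0 / math.sqrt(1.0 + (f_jitter_hz / loop_bw_hz) ** 2)

for f in (1e3, 10e3, 100e3, 1e6, 10e6):
    print(f"{f / 1e3:8.0f} kHz jitter -> {jitter_transfer(f):.3f} reaches the VCO")
# Slow jitter is tracked by the VCO; fast jitter is filtered out.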

If the rate of jitter becomes fast enough (relative to the bandwidth of the control loop), the rapidly varying error signal from the PLL phase detector will be suppressed by the loop filter. High-frequency jitter doesn't reach the VCO. Thus the PLL, and subsequently the receiver using this PLL, can only tolerate data with jitter frequencies that are within the bandwidth of the loop filter.

There is an optimal range for the loop bandwidth of the clock-recovery circuit. It must be wide enough to track the expected jitter, but narrow enough that it is not disturbed by data sequences with a low transition density. This requires another look at the specifications of the transmitter.

If there's excessive high-frequency jitter, the receiver won't be able to respond quickly and adjust the sampling time to the center of the bit period. A simple approach to the problem would be to specify that the transmitter jitter can't exceed a magnitude that would be large enough to cause the receiver to make mistakes. While this is technically possible, it would likely lead to an expensive transmitter. A more common approach is to specify that transmitter jitter at high frequencies can't exceed a certain magnitude. This approach recognizes that the receiver PLL should easily tolerate low-frequency jitter. By not penalizing a transmitter for having low-frequency jitter, the cost of the transmitter should be significantly reduced.


Figure 3: Triggering with an extracted clock removes common jitter

You can measure transmitter jitter in many ways, the most common being with an oscilloscope. The eye diagram, such as the one in Figure 3, overlays all the bits of a data stream on a common one-bit-period time axis. If the signal is jitter-free, all the 1-to-0 transitions occur at the same time, and all the 0-to-1 transitions occur at the same time; the eye-diagram crossing point has a very narrow width. If there is signal jitter, the transitions occur at different times due to the varying data rate, and the jitter appears as a thickened crossing point. Note that in this example, there's no way to know how fast the jitter is; you can only know how large it is.
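The folding operation behind an eye diagram is easy to sketch in Python. The waveform below is synthetic (flat logic levels plus noise, no real edges), so only the overlay mechanism is illustrated:

import random

# Sketch of eye-diagram construction: fold every sample of a long
# waveform onto a single one-bit-period time axis (time modulo the bit
# period). The waveform here is synthetic -- flat levels plus noise.

bit_period = 1.0
samples_per_bit = 20
bits = [random.randint(0, 1) for _ in range(200)]

eye = []  # (time within the bit period, amplitude) points to overlay
for i, b in enumerate(bits):
    for k in range(samples_per_bit):
        t = (i + k / samples_per_bit) * bit_period   # absolute time
        amplitude = b + random.gauss(0, 0.02)        # crude noisy logic level
        eye.append((t % bit_period, amplitude))      # fold onto one bit period

print(f"{len(eye)} points overlaid on a single bit period")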

Triggering the scope
One way to filter out the low-frequency jitter and observe only the high-frequency jitter is to use an advanced technique for triggering the oscilloscope. The X-axis of the oscilloscope is time relative to the triggering event; the Y-axis is signal amplitude. Just like the clock used in the communications receiver to determine when to fire the decision circuit, a trigger signal is often a clock that's synchronous to the data being measured. A trigger event is usually defined as starting when the trigger signal crosses a defined amplitude threshold.

If the oscilloscope is triggered with a spectrally pure, jitter-free clock running at the same nominal rate as the data signal, any jitter present on the data is observed as a broad crossing point in the eye diagram. The crossing-point width is a direct indication of the magnitude of the jitter, relative to the jitter-free trigger signal. But what would be observed if the signal triggering the oscilloscope were derived from the data stream being measured? To understand this, recall the discussion on receiver PLLs above. When a clock is extracted from the data, it will include the same jitter that was present in the data, as long as the rate of the jitter was within the loop bandwidth of the PLL. For the moment, assume that all the data jitter is transferred to the VCO. If this VCO output is used to trigger the oscilloscope, no jitter will be observed on the data eye diagram even though the data jitter may be significant. Why is that?

Because the oscilloscope displays the signal relative to the triggering event, if the data signal is retarded slightly due to jitter, so is the extracted-clock trigger. The position of the data signal on the oscilloscope display won't show any delay because its position relative to the trigger (which was delayed by an identical amount) has remained constant. Similarly, if the data signal is advanced due to jitter, it will still be measured as if it were in its ideal location because, once again, the trigger signal has the same jitter as the data. When the jitter on the trigger is the same as the jitter on the signal being measured, that jitter is effectively common-moded out of the displayed waveform.
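A small numerical illustration of this common-mode effect, with made-up jitter values:

# When the trigger carries the same jitter as the data, the displayed
# (trigger-relative) edge position does not move. All values are made up.

edge_jitter    = [+3.0, -2.0, +1.5, -0.5]   # ps, data-edge displacement per acquisition
trigger_jitter = [+3.0, -2.0, +1.5, -0.5]   # same jitter on the extracted-clock trigger

print([d - t for d, t in zip(edge_jitter, trigger_jitter)])   # -> all zeros

# With a clean, jitter-free trigger the full data jitter is visible:
clean_trigger = [0.0, 0.0, 0.0, 0.0]
print([d - t for d, t in zip(edge_jitter, clean_trigger)])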

If it's important to observe the jitter on the signal, it would seem that using a triggering signal derived from the data is a bad idea. However, remember that the most important jitter to observe is the jitter that the receiver decision circuit in the communications system can't tolerate. Recall that this is high-frequency jitter, or jitter that's outside the receiver-loop bandwidth. High-frequency jitter alone can be observed on the oscilloscope if the test system clock-recovery circuitry is similar to that used in the communications system receiver.

Wide-bandwidth oscilloscopes can extract clocks from the signals being observed. Advanced clock-recovery schemes allow the loop bandwidth to be adjusted to control the spectrum of the jitter observed on the waveform. For example, if the loop bandwidth is set to 100 kHz, jitter below 100 kHz is common to both the data and the trigger signal and is not observed. Jitter above 100 kHz is not passed to the recovered-clock trigger; it's present only on the data signal and is displayed. This points to a somewhat counterintuitive property of the loop filter in the clock-recovery PLL: although it is a low-pass filter, from the perspective of the jitter observed on the oscilloscope it has a high-pass effect. Only jitter that's above the bandwidth of the filter is observed. This approach provides a solution to the transmitter test problem I discussed earlier: by using a clock-extraction circuit in the oscilloscope with performance similar to that of the receiver PLL, the jitter observed during transmitter testing is exactly the jitter the system receiver cannot tolerate.
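Continuing the rough first-order model from earlier, the jitter that remains visible on the display is the part the recovered-clock trigger does not share with the data, and that part behaves like a high-pass response:

import math

# Rough model: the jitter seen on the display is the part of the data
# jitter that the extracted-clock trigger does not share. For the
# single-pole loop assumed earlier, that fraction is a high-pass response.

def observed_fraction(f_jitter_hz, loop_bw_hz=100e3):
    x = f_jitter_hz / loop_bw_hz
    return x / math.sqrt(1.0 + x * x)    # |1 - H(f)| for a single-pole loop

for f in (1e3, 10e3, 100e3, 1e6, 10e6):
    print(f"{f / 1e3:8.0f} kHz jitter -> {observed_fraction(f):.3f} visible on screen")
# Jitter well below the 100 kHz loop bandwidth vanishes from the display;
# jitter well above it is shown almost in full.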


Figure 4: Eye diagram using a jitter-free clock trigger

When system compliance test specifications are set for transmitters, they often include a clock-extraction circuit with a specific loop bandwidth. This is sometimes referred to as a golden PLL. Figure 4 shows an eye diagram that was obtained with a jitter-free clock trigger (the left diagram), while the eye diagram on the right shows the same signal observed with a golden PLL trigger. Note how the eye on the right has significantly less jitter and a wider opening than the eye on the left. Again, the right-side eye, triggered with a golden PLL, is a more accurate representation of the signal that a receiver decision circuit would see.

A subtle but important element of the test-system golden PLL is that it needs to maintain a specific loop bandwidth across a variety of data patterns. For consistent test results, the bandwidth should not change even if patterns with low or high transition densities are used. Thus the clock-recovery circuit must monitor the pattern and adjust the loop gain (which affects loop bandwidth) to match the transition density of the data pattern. Also, while the golden PLL solves the compliance-test problem of screening out excessive high-frequency jitter, it can present a problem for someone trying to examine the total jitter of a transmitter. This problem, too, is solved through adjustable loop gain (and therefore bandwidth) in the clock-extraction circuit: reducing the loop bandwidth lets you observe more of the jitter spectrum.
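One plausible way to sketch that transition-density compensation in Python is shown below; the scaling rule and nominal values are assumptions for illustration, not how any particular instrument implements it:

# Sketch of transition-density compensation for a clock-recovery loop.
# The phase detector only yields information on data transitions, so a
# sparse pattern lowers the effective loop gain; scaling the gain by the
# inverse of the measured density is one plausible (assumed) correction.

def transition_density(bits):
    """Fraction of bit boundaries that contain a 0->1 or 1->0 transition."""
    transitions = sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    return transitions / (len(bits) - 1)

def compensated_gain(bits, nominal_gain=0.1, nominal_density=0.5):
    """Scale the loop gain so the loop bandwidth stays roughly constant."""
    return nominal_gain * nominal_density / max(transition_density(bits), 1e-6)

dense_pattern  = [0, 1, 1, 0, 1, 0, 0, 1] * 16   # many transitions
sparse_pattern = ([0] * 7 + [1]) * 16            # long runs, few transitions
print(f"dense pattern gain:  {compensated_gain(dense_pattern):.3f}")
print(f"sparse pattern gain: {compensated_gain(sparse_pattern):.3f}")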

High-speed serial communications can be tricky for the uninitiated, but its usefulness is clear and it's certainly becoming more common. With a little care, any good embedded developer can tame even the fastest serial channels.

Greg D. Le Cheminant is a measurement applications engineer for the Agilent Technologies Digital Signal Analysis division. He holds a BSEET (1983) and MSEE (1984) from Brigham Young University in Provo, Utah. He began work at Hewlett-Packard (now Agilent) in 1985 as a manufacturing engineer. In 1989 he joined a marketing group involved in lightwave instrumentation.
