# Separating jitter into random and deterministic elements for analysis

Jitter is one of the most widely used terms among engineers who design digital data links. Whether you design a new board or validate an old one, you'll have to confront jitter and its components. The problem is how to separate jitter into random and deterministic components; knowing those components can uncover the sources of jitter.

Jitter is simply the “wiggle,” or time variation, that each edge of a digital clock or data signal makes relative to its ideal position. That difference in timing is called TIE (time interval error), and I'll use it as the basis for this discussion.

In an ideal world, rise and fall times would be infinitely fast and every edge of a digital stream would consistently fall the exact same time away from the last. The result would be no jitter, no wiggling, and no eye diagrams that are closed or close to closed. Unfortunately, it is not a perfect world: edges that were once treated as infinitely fast become slower as data rates increase and design tradeoffs accumulate.

A signal often must travel through inexpensive PCB material. That, combined with other factors, causes the signal to lose amplitude. The signal can also couple with other signals, which moves its edges away from their ideal clock positions. That's the “wiggling” that **Figure 1** shows. Because of these real-world conditions, digital eye diagrams close, making it harder for a receiver to distinguish a logic 1 from a logic 0.

*Figure 1. Jitter occurs because the time between edges in a digital data stream will vary.*

Jitter separation lets you learn whether the components of jitter are random or deterministic; that is, whether they are caused by crosstalk, channel loss, or some other phenomenon. Separating jitter enables engineers to better understand the systematic problems in their devices and to quickly find solutions. The jitter-separation concept seems simple enough. In an ideal world, all jitter-separation techniques would work the same and give exactly the same answers; after all, all the tools are looking at the same jitter.

Unfortunately, this is not always the case. In fact, the “answers” from different jitter-separation methods can vary widely. The problem has become prevalent enough that compliance tools now ensure that all designers use standardized separation techniques. What seems like a simple problem is complicated by the fact that a separation tool's answers will vary by test-and-measurement vendor and instrument.

So how does an engineer know which answer is correct? This is where science meets art: you must use the tools available (the science) to decide which answer best represents what you are debugging (the art).

**Jitter separation challenges**

One reason that answers vary from instrument to instrument is that there is often a limited amount of information available to do the separation. A reasonable analogy is that of solving a linear system of equations. It's well known that you must have as many independent equations as there are unknowns. If there are too few equations, or if the equations aren't all linearly independent, then you can't get a unique answer. One way to get around this problem is to impose additional constraints or assumptions on the system, which acts like adding more independent equations; the new system is then uniquely solvable. Solving for the unknown values of the different jitter components is a similar problem, and software packages typically must impose additional constraints, or models, in order to obtain a unique, repeatable answer.

The most fundamental separation is between components that are deterministic (deterministic jitter, or DJ) and those that are random (random jitter, or RJ). Sometimes these categories are defined as components with bounded histograms and those with unbounded histograms, but they are still referred to as DJ and RJ. The concept of determining which jitter components are random versus deterministic may seem simple, but the actual process is difficult. To separate jitter, an algorithm typically needs not only to separate random from deterministic jitter, but also to identify classes of deterministic and random jitter. I'll limit my focus primarily to the separation of deterministic and random jitter.

I've assumed that the jitter-separation algorithms discussed here are based on TIE. There are, of course, other ways to measure the total jitter of a signal. TIE is the deviation of each edge of a clock or data signal from a predetermined ideal clock.

In real-time oscilloscopes, the clock can be a software-recovered clock or an explicit clock (the actual system clock). The clock is set to a specific rate and, being “ideal,” places its edges the same distance from each other across the entire data record. By tracking every edge in a data record and its variation from the ideal clock, the TIE measurement builds up an entire record that forms the basis of jitter separation. Keep in mind that in the last few years, oscilloscopes have added deep memory, making it possible to capture millions of histogram bins in a single acquisition. **Figure 2** shows the capture of a timing waveform with the histogram of the TIE record.
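To make the TIE definition concrete, here is a minimal Python sketch. The function name and the nearest-ideal-edge clock model are my own illustration, not any particular oscilloscope's recovery algorithm, which would typically fit the clock rather than anchor it to the first edge:

```python
import numpy as np

def time_interval_error(edge_times, bit_period):
    """TIE: deviation of each measured edge from an ideal clock whose
    edges are evenly spaced at the nominal period."""
    edge_times = np.asarray(edge_times, dtype=float)
    # Index of the nearest ideal edge for each measured edge
    n = np.round((edge_times - edge_times[0]) / bit_period)
    ideal = edge_times[0] + n * bit_period
    return edge_times - ideal

# 1-GHz clock (1 ns period) with ~20 ps RMS of injected timing error
rng = np.random.default_rng(0)
edges = np.arange(1000) * 1e-9 + rng.normal(0.0, 20e-12, 1000)
tie = time_interval_error(edges, 1e-9)
```

The resulting `tie` array is exactly the record whose histogram **Figure 2** depicts: one timing deviation per captured edge.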

*Figure 2. A histogram of the time-interval error of data captured on an oscilloscope.*

**Separation of data-dependent jitter**

Before separating deterministic jitter from random jitter, it's generally easier and more accurate to first remove any DDJ (data-dependent jitter). DDJ includes DCD (duty-cycle distortion) and ISI (intersymbol interference). The reason for this ordering is that measuring DDJ is often relatively straightforward.
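As an illustration of why DDJ measurement is comparatively straightforward, the sketch below (the function and the pattern-averaging model are hypothetical, not from any specific instrument) estimates DDJ by averaging TIE over all edges that share the same preceding bit pattern, then subtracts that average:

```python
import numpy as np
from collections import defaultdict

def remove_ddj(tie, patterns):
    """Estimate DDJ by averaging TIE over all edges that share the same
    preceding bit pattern, then subtract that average from each edge."""
    groups = defaultdict(list)
    for idx, pattern in enumerate(patterns):
        groups[pattern].append(idx)
    residual = np.array(tie, dtype=float)
    ddj = {}
    for pattern, idxs in groups.items():
        ddj[pattern] = residual[idxs].mean()  # DDJ for this pattern
        residual[idxs] -= ddj[pattern]
    return residual, ddj

# Edges preceded by "01" are consistently late, "10" consistently early
tie = [0.10, -0.10, 0.12, -0.08]              # arbitrary units
residual, ddj = remove_ddj(tie, ["01", "10", "01", "10"])
```

Because DDJ repeats deterministically with the data pattern, averaging over many occurrences of the same pattern cancels the random part, leaving the residual TIE for RJ/DJ separation.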

**Transform approaches: The spectral method**

Of the various methods for separating deterministic jitter from random jitter (RJ/DJ separation), the spectral method is perhaps conceptually the simplest to understand. It begins by computing the FFT (fast Fourier transform) of the TIE record (**Figure 3**).

*Figure 3. Oscilloscope software can show a spectral view of jitter components.*

The FFT shows the jitter components in the frequency domain. The spectral algorithm then chooses a threshold, typically an average of the noise floor, and looks for peaks above that threshold. The peaks are considered to be the periodic components, and everything below the threshold the random components. The amount of RJ (random jitter) can then be quantified as the RMS value of all the random components in the spectrum (**Figure 4**).

*Figure 4. Jitter-separation software shows the threshold chosen for jitter separation.*

While the spectral method is simple in concept, it relies on some basic assumptions. It assumes that all DJ is periodic in nature, and that therefore everything that falls into the noise floor of the FFT must be random. Unfortunately, that's not always the case. The spectral method tends to work very well when no crosstalk is present, because then these assumptions are typically true. When crosstalk occurs, however, the spectral method will often mistake crosstalk components as random rather than recognizing their true deterministic nature. The end result is that the spectral method can over-report total jitter.
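A simplified sketch of the spectral idea, assuming a uniformly sampled TIE record. Real instruments estimate the noise floor much more carefully; here a crude multiple-of-the-median threshold stands in for that step:

```python
import numpy as np

def spectral_rj_dj(tie):
    """Spectral RJ/DJ split: FFT bins above a noise-floor threshold are
    treated as periodic (DJ); the remaining bins are inverse-transformed
    and their RMS value is reported as RJ."""
    spec = np.fft.rfft(tie - np.mean(tie))
    power = np.abs(spec) ** 2
    threshold = 10.0 * np.median(power)        # crude noise-floor proxy
    periodic_bins = power > threshold
    random_spec = np.where(periodic_bins, 0.0, spec)
    random_part = np.fft.irfft(random_spec, n=len(tie))
    return random_part.std(), periodic_bins

# Sinusoidal periodic jitter (amplitude 5) riding on Gaussian RJ (sigma 1)
rng = np.random.default_rng(1)
n = 4096
t = np.arange(n)
tie = 5.0 * np.sin(2 * np.pi * 128 * t / n) + rng.normal(0.0, 1.0, n)
rj_rms, periodic_bins = spectral_rj_dj(tie)
```

With clean periodic DJ, the single spectral peak is easy to find and `rj_rms` lands near the true RJ sigma; broadband crosstalk, by contrast, would hide below the threshold and inflate the RJ estimate, which is exactly the failure mode described above.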

**Histogram approaches: Deconvolution**

Deconvolution is a method for separating a clear signal from random noise, making it possible to distinguish individual components in just about anything. The same idea applies in jitter separation. Each jitter component has its own individual histogram. For instance, if you could look at them separately, the periodic-jitter histogram could look like **Figure 5** and the random-jitter histogram like **Figure 6**.

*Figure 5. Periodic jitter appears at specific time offsets in a histogram.*

*Figure 6. A histogram of random jitter looks Gaussian in shape.*

The histogram of the TIE record is the convolution of the periodic and random histograms. The challenge is separating this TIE histogram into its individual components, which is what deconvolution attempts to do. When there are more than two components of jitter, the TIE histogram is the convolution of all the individual components, but we are still primarily interested in deconvolving the random component out from the rest (that is, doing RJ/DJ separation). Assumptions must often be added to make the problem solvable, and one that is always made here is that the RJ histogram can be modeled by a Gaussian distribution. The effect of RJ can then be quantified as the standard deviation of that distribution. When no crosstalk is present, this number should match the RMS value returned by the spectral method.
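The convolution relationship is easy to demonstrate numerically. In this sketch (amplitudes, sample counts, and 1-ps bins are arbitrary illustration values), the histogram of sinusoidal periodic jitter plus Gaussian random jitter approximately equals the discrete convolution of the two component histograms:

```python
import numpy as np

rng = np.random.default_rng(2)
samples = 100_000
bins = np.arange(-50, 51)                     # 1-ps-wide bins

# Periodic jitter: sinusoidal, giving the classic bimodal histogram
pj = 20.0 * np.sin(2 * np.pi * np.arange(samples) / 1000)
# Random jitter: Gaussian with 5 ps standard deviation
rj = rng.normal(0.0, 5.0, samples)

h_pj, _ = np.histogram(pj, bins=bins, density=True)
h_rj, _ = np.histogram(rj, bins=bins, density=True)
h_tj, _ = np.histogram(pj + rj, bins=bins, density=True)

# With unit-width bins, the discrete convolution of the component
# densities approximates the density of the summed jitter
h_conv = np.convolve(h_pj, h_rj, mode="same")
```

Deconvolution runs this relationship in reverse: given `h_tj` and a Gaussian model for the RJ histogram, it solves for the deterministic part.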

One method that is becoming increasingly popular for jitter separation is the tail-fit method. Tail fitting is based on the observation that the further out you look on the tail of the jitter histogram, the more the shape of the TJ (total jitter) histogram approaches the shape of the RJ histogram. This observation holds up mathematically: you can show that the TJ shape asymptotically converges to the RJ shape. Because of this, you can estimate the RJ histogram by fitting a Gaussian function to the “tail portion” of the histogram. The RJ value is then just the standard deviation of the fitted Gaussian. The *location* of the Gaussian, its mean, is an estimate of the maximum value of deterministic jitter. Thus, by fitting both the left and right tails, the peak-to-peak value of DJ is simply the distance between the peaks of the two Gaussians.
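A toy version of tail fitting can be sketched by exploiting the fact that the logarithm of a Gaussian is a parabola, so a quadratic fit to log(counts) yields the mean and sigma in closed form. The function, its parameters, and the bin-selection rule are my own illustration, not a production algorithm:

```python
import numpy as np

def fit_tail(centers, counts, side="right", frac=0.3):
    """Fit a Gaussian to one tail of a jitter histogram. Since the log
    of a Gaussian is a parabola, a quadratic fit to log(counts) gives
    sigma and mu in closed form."""
    keep = counts >= 10                 # skip bins too noisy for log()
    c, n = centers[keep], counts[keep]
    k = max(3, int(frac * len(c)))
    c, n = (c[-k:], n[-k:]) if side == "right" else (c[:k], n[:k])
    a, b, _ = np.polyfit(c, np.log(n), 2)   # log N(x) = a*x^2 + b*x + c0
    sigma = np.sqrt(-1.0 / (2.0 * a))       # from a = -1/(2*sigma^2)
    mu = -b / (2.0 * a)                     # vertex of the parabola
    return mu, sigma

# Pure Gaussian jitter: the tail fit should recover sigma near 1, mu near 0
rng = np.random.default_rng(3)
counts, edges = np.histogram(rng.normal(0.0, 1.0, 1_000_000),
                             bins=200, range=(-5.0, 5.0))
centers = 0.5 * (edges[:-1] + edges[1:])
mu, sigma = fit_tail(centers, counts, side="right")
```

Even in this idealized case, the recovered values wander as `frac` pushes the fit window toward the sparse outer bins, which previews the precision-versus-accuracy tradeoff discussed next.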

The difficulty of tail fitting is finding the right place to fit the curve to the tail while being limited in the amount of data available. Toward the middle of the histogram, the data is very repeatable because those bins hold most of the data points, so the confidence, or precision, of the algorithm is high; however, the accuracy is low. Toward the end of the tail, precision drops because fewer data points fall in those bins, but the shape of the TJ histogram more closely matches the RJ histogram, so accuracy improves.

The problem is that it's very difficult to fit a curve at the end of a tail with limited data. To truly find a good fit, a tail-fit algorithm may require billions of data points, which isn't practical in today's test-and-measurement equipment because it takes time and processing power, both of which are at a premium. As a result, a tail-fit algorithm must balance the difficult tradeoff of precision/confidence versus accuracy. The biggest disadvantage of the tail-fit method is that it requires large amounts of data to get the best answer; at shallow memory depths, the fitted tails may yield incorrect separation results.

The jitter-separation techniques presented here are only a small portion of the many methods that separate jitter into random and deterministic components. There are polynomial methods, variations of tail fitting, and many others; a web search for jitter separation turns up literally hundreds of techniques. So which is correct? That's where science meets art. All techniques are subject to caveats and cases that can make them less accurate.

As someone who must evaluate jitter as part of your job, you need to know each method's limitations and what the signal being analyzed requires. Knowing this, you can identify the proper separation method, thus ensuring that your designs are correct.

**Brig Asay** manages product planning and strategic marketing for Agilent's high-performance oscilloscope business. Brig joined Agilent Technologies in 2005 as a technical support engineer, in which role he helped solve numerous customer problems. During his time with Agilent, he has also served as marketing operations manager, where he oversaw the marketing budget and managed the technical support and learning products teams. Before Agilent, Brig worked at Micron Technologies, Inc. as a test engineer. Brig graduated with an MBA from Northwest Nazarene University and a BS in electrical engineering from the University of Wyoming. He is a published technical author, and his articles have also appeared in Embedded.com's sister publication, *EDN Network*.