Managing the jitter vs power tradeoff in clock trees
Nearly all electronic systems need multiple clock signals to establish the pacing of the processor(s) and the many associated peripheral ICs. These clock signals are usually derived from quartz crystals and range in frequency from a few MHz to several hundred MHz.
Although these signals are needed by many ICs, it's not practical or desirable to have more than one crystal as a master clock source in most designs. Using multiple independent sources brings problems of synchronizing these clocks at their targets, adds cost, and increases board "real estate" requirements. Instead, designers nearly always choose to use a single master-clock oscillator as the source, which is distributed to components across the entire system.
However, this solution is not without its challenges. To begin with, no clock is perfect: every clock, even a precision crystal in a properly designed oscillator circuit, has some associated jitter, or minute timing variations around its nominal frequency (Figure 1a), which is equivalent to phase noise in the frequency domain (Figure 1b).
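The time-domain and frequency-domain views are linked numerically: integrating the single-sideband phase-noise profile L(f) over a band of offset frequencies yields the RMS phase jitter of the clock. The sketch below shows this standard conversion; the function name, the flat -150 dBc/Hz profile, and the 12 kHz to 20 MHz integration band are illustrative assumptions, not values from this article.

```python
import math

def rms_jitter_from_phase_noise(offsets_hz, lf_dbc_hz, f0_hz):
    """Integrate an SSB phase-noise profile L(f) (dBc/Hz) over the given
    offset band and convert the result to RMS jitter in seconds:
    t_rms = sqrt(2 * A) / (2 * pi * f0), where A is the integrated noise power."""
    # Convert each dBc/Hz point to a linear power ratio per Hz
    linear = [10 ** (l / 10.0) for l in lf_dbc_hz]
    # Trapezoidal integration across the offset frequencies
    area = sum(
        0.5 * (linear[i] + linear[i + 1]) * (offsets_hz[i + 1] - offsets_hz[i])
        for i in range(len(offsets_hz) - 1)
    )
    return math.sqrt(2.0 * area) / (2.0 * math.pi * f0_hz)

# Hypothetical 100 MHz clock with a flat -150 dBc/Hz floor from 12 kHz to 20 MHz
offsets = [12e3, 20e6]
noise = [-150.0, -150.0]
jitter_s = rms_jitter_from_phase_noise(offsets, noise, 100e6)
print(f"{jitter_s * 1e15:.1f} fs RMS")
```

In practice the profile has several slope regions (close-in noise, plateau, floor), so the integration is done over many measured points rather than two, but the arithmetic is the same.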
Figure 1. There are two equally valid ways to look at jitter: A) in the time domain, with a perfect clock (top) and clock with jitter (bottom) showing minute time displacements (phase shifts), and B) in the frequency domain, with the same perfect clock (top) and clock with jitter (bottom), which appears as frequency shifting around the nominal value. [from AN-817, Figures 1 and 2]
Understanding and characterizing jitter
Jitter is a short word but a complex subject with many technical subtleties. Not only are there many types of jitter, but different metrics are also used to assess it. The type and magnitude of jitter have different consequences depending on the specific application (see the Table). Two informative references are AN-815, Understanding Jitter Units, and AN-827, Application Relevance of Clock Jitter.
Table. Matching the jitter specifics to the application requires a deep understanding of the various jitter perspectives, the legitimate ways it is measured, and its impact. [from AN-827, Table 1]
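To make the units question concrete, here is a hedged sketch of two conversions commonly covered in references such as AN-815: expressing RMS jitter as a fraction of a unit interval (UI), and estimating total peak-to-peak random jitter from an RMS value at a target bit-error ratio. The crest factors and the example clock rate are standard illustrative values, not vendor specifications.

```python
# Illustrative jitter-unit conversions; the BER crest factors below are the
# standard Gaussian Q values used for random jitter, not any one vendor's table.
BER_CREST_FACTOR = {1e-10: 6.361, 1e-12: 7.034, 1e-14: 7.651}  # pk-pk ~ 2*N*RMS

def rms_seconds_to_ui(jitter_rms_s, clock_hz):
    """Express RMS jitter as a fraction of one unit interval (one clock period)."""
    return jitter_rms_s * clock_hz

def pk_pk_from_rms(jitter_rms_s, ber=1e-12):
    """Estimate peak-to-peak random (Gaussian) jitter at a target BER."""
    return 2.0 * BER_CREST_FACTOR[ber] * jitter_rms_s

jitter_rms = 1e-12  # 1 ps RMS, hypothetical
print(rms_seconds_to_ui(jitter_rms, 156.25e6), "UI at 156.25 MHz")
print(pk_pk_from_rms(jitter_rms) * 1e12, "ps pk-pk at BER 1e-12")
```

Note that a peak-to-peak figure for random jitter is meaningful only with a stated BER, since Gaussian jitter is unbounded; this is one of the subtleties that makes comparing data-sheet numbers tricky.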
When assessing jitter, it's important that designers perform a "tree analysis" of the many jitter specifications and how they aggregate, to make sure the result is within bounds and the system-level impact of jitter is acceptable. It's also critical that engineers who intend to actually test and confirm their jitter analysis understand the challenges of doing so with the extremely fast clocks and low jitter values of today's designs. Every aspect of the confirming measurement scenario – layout, setup, test equipment, calculations – has sophisticated and subtle facets, and it is easy to perform tests that are inadvertently misleading, yielding numbers that are too optimistic or too pessimistic.
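The aggregation step of such a tree analysis is commonly done by combining uncorrelated random-jitter contributors as a root-sum-of-squares (RSS), while bounded deterministic contributors add linearly. A minimal sketch, with hypothetical component names and values:

```python
import math

# Sketch of jitter aggregation in a clock tree. Uncorrelated random jitter
# combines as root-sum-of-squares; bounded deterministic jitter adds linearly.
# All component names and numbers below are hypothetical examples.
random_ps = {"oscillator": 0.3, "fanout_buffer": 0.15, "pll": 0.5}   # RMS, ps
deterministic_ps = {"duty_cycle_distortion": 2.0, "crosstalk": 1.0}  # pk-pk, ps

total_rj_rms = math.sqrt(sum(v ** 2 for v in random_ps.values()))
total_dj_pkpk = sum(deterministic_ps.values())

print(f"Total random jitter: {total_rj_rms:.3f} ps RMS")
print(f"Total deterministic jitter: {total_dj_pkpk:.1f} ps pk-pk")
```

The RSS step assumes the contributors are statistically independent; correlated sources (for example, a shared supply rail) must be added more conservatively, which is part of why the bounds-checking mentioned above matters.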
To characterize jitter, devices are tested using instruments such as the Keysight Technologies (formerly Agilent) E5502 Phase Noise Measurement Solution (see Figure). This instrument is specifically designed to make extremely low-level phase-noise measurements in both design-evaluation and production situations. It uses a phase detector with a reference source to measure the single-sideband (SSB) phase-noise characteristic of the clock or buffer output. The design of this unit begins with high-precision sources, but it also has an architecture that cancels out many of its internal errors so they do not become part of the final data on the device being tested.
There is no single number that summarizes jitter across all applications. Understanding which specifications are most relevant for a given design, and how they are measured by the clock-component vendor, is critical for making the best choice in a given situation. This allows the designer to properly interpret the data-sheet parameters in the context of the design's priorities and constraints.