
Managing the jitter vs power tradeoff in clock trees

Nearly all electronic systems need multiple clock signals for the processor(s) as well as the many associated peripheral ICs to establish the pacing of the system. These clock signals are usually derived from quartz crystals and can range in frequency from a few MHz to several hundred MHz.

Although these signals are needed by many ICs, it's not practical or desirable to have more than one crystal as a master clock source in most designs. Using multiple independent sources brings problems of synchronizing these clocks at their targets, adds cost, and increases board “real estate” requirements. Instead, designers nearly always choose to use a single master-clock oscillator as the source, which is distributed to components across the entire system.

However, this solution is not without its challenges. To begin with, no clock is perfect: Every clock, even a precision crystal in a properly designed oscillator circuit, has some associated jitter or minute timing variations around its nominal frequency, Figure 1a, which is equivalent to phase noise in the frequency domain, Figure 1b.


[Figure 1A and 1B images]

Figure 1. There are two equally valid ways to look at jitter: A) in the time domain, with a perfect clock (top) and a clock with jitter (bottom) showing minute time displacements (phase shifts), and B) in the frequency domain, with the same perfect clock (top) and clock with jitter (bottom), where the jitter appears as frequency shifting around the nominal value. [from AN-817, Figures 1 and 2]
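To make the time-domain view concrete, here is a minimal sketch in C, using hypothetical edge timestamps (not the data behind Figure 1), of how RMS period jitter can be estimated from a record of measured clock edges:

/* Minimal sketch: estimating RMS period jitter from measured clock-edge
   timestamps. The timestamps and the 100-MHz nominal clock are assumed
   values for illustration only. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical rising-edge timestamps, in seconds, each off by a few ps. */
    const double edges[] = { 0.0, 10.002e-9, 19.998e-9, 30.001e-9, 39.997e-9 };
    const int n = sizeof(edges) / sizeof(edges[0]);
    const double t_ideal = 10e-9;   /* ideal period of a 100-MHz clock */

    double sum_sq = 0.0;
    for (int i = 1; i < n; i++) {
        double err = (edges[i] - edges[i - 1]) - t_ideal;   /* period error */
        sum_sq += err * err;
    }
    double rms_jitter = sqrt(sum_sq / (n - 1));
    printf("RMS period jitter: %.1f ps\n", rms_jitter * 1e12);  /* ~3.4 ps */
    return 0;
}

The same edge record, viewed as the spectrum of its phase error, yields the phase-noise picture of Figure 1B.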

Understanding and characterizing jitter

Jitter is a short word but a complex subject with many technical subtleties. Not only are there many types of jitter; there are also different metrics used to assess it. The type and magnitude of jitter have different consequences depending on the specific application (see Table 1). Two informative references are AN-815, Understanding Jitter Units, and AN-827, Application Relevance of Clock Jitter.


Table 1. Matching the jitter specifics to the application requires an understanding of the different perspectives on jitter, the legitimate ways it is measured, and its impact. [from AN-827, Table 1]
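As a worked example of one relationship covered in those references, RMS phase jitter can be derived from a clock's integrated single-sideband phase noise. The short C sketch below uses an assumed integrated noise power and carrier frequency (hypothetical values, not taken from any datasheet):

/* Sketch: converting integrated SSB phase noise to RMS phase jitter.
   Assumes the phase-noise curve has already been integrated over the
   band of interest, giving a total noise power A in dBc. All values
   are hypothetical. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double a_dbc = -70.0;    /* integrated SSB phase noise, dBc */
    const double f0_hz = 100e6;    /* carrier (clock) frequency, Hz */
    const double pi    = 3.14159265358979323846;

    /* RMS phase deviation in radians; the factor of 2 counts both sidebands. */
    double phi_rms = sqrt(2.0 * pow(10.0, a_dbc / 10.0));

    /* Convert phase (radians) to time jitter (seconds). */
    double t_jitter = phi_rms / (2.0 * pi * f0_hz);

    printf("RMS phase jitter: %.0f fs\n", t_jitter * 1e15);  /* ~712 fs */
    return 0;
}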

When assessing jitter, it's important that designers perform a "tree analysis" of the many jitter specifications and how they aggregate, to make sure the result is within bounds and the system-level impact of jitter is acceptable. It's also critical that engineers who intend to actually test and confirm their jitter analysis understand the challenges of doing so with the extremely fast clocks and low jitter values of today's designs. Every aspect of the confirming measurement scenario – layout, setup, test equipment, calculations – has sophisticated and subtle facets, and it is easy to perform tests that are inadvertently misleading and yield numbers that are too optimistic or too pessimistic.

To characterize jitter, devices are tested using instruments such as the Keysight Technologies (formerly Agilent) E5502 Phase Noise Measurement Solution. This instrument is specifically designed to make extremely low-level phase-noise measurements in both design-evaluation and production situations. It uses a phase-detector technique with a reference source to measure the single-sideband (SSB) phase-noise characteristic of the clock or buffer output. The design of this unit begins with high-precision sources, but it also has an architecture that cancels out many of its internal errors so they do not become part of the final data on the device being tested.

There is no single number that simply summarizes jitter across all applications. Understanding which specifications are most relevant for a given design, and how they are measured by the clock-component vendor, is critical for making the best choice in a given situation. This allows the designer to properly interpret the data-sheet parameters in the context of the design's priorities and constraints.


Dealing with jitter

In dealing with jitter, the inherent shortcoming of the master clock is just the initial issue. The clock oscillator usually lacks the capacity to supply all the loads it must support, as well as drive the circuit board's tracks or cables that allow the clock signal to reach those loads.

To overcome this lack of drive capability, a specialized clock-buffer IC is needed to boost and "fan out" the master clock in a tree-like topology, Figure 2. The buffer is functionally simple and does just one thing: It takes the clock source as its input and provides multiple outputs replicating that clock input as perfectly as possible.


Figure 2. Depending on the clock-tree topology used, there may be one or more clock buffers between the clock source and the ultimate clock load. [from https://www.idt.com/support/clock-tree-design-service]

Although this function is often not viewed as glamorous and thus may not get much attention or respect, it plays an important role in the overall performance, integrity, and consistent reliability of a system and circuit. Buffer ICs are available which can drive two, four, eight, and even more loads, Figure 3, to closely match the needs of the design without additional cost or PC-board real-estate burden.


Figure 3. A clock buffer such as the IDT 5PB11xx provides a fan-out of four (designated a 1:4 buffer); the simple functional diagram does not need to show the internal design subtleties. [from "New LVCMOS Fan Out Buffer Family—Product Introduction Overview", March 2015, page 7]

Using a buffer: simple, but not pain-free

The ideal clock buffer would pass on boosted versions of the input clock without any added jitter, delay, distortion, or other penalty. As with all other components, that ideal part doesn't exist — but some come very, very close. Obviously, buffers take space, use power, and add cost. They also add their own jitter to the clock's inherent jitter. However, there is no practical alternative to using them, so the objective must be to get a clock buffer that is well-matched to the application.

It's important to understand why low jitter in the buffer is critical. The allowable jitter specification in many of today's products, from networking equipment to high-end PCs to instrumentation, is very small. A typical value for many of these products is in the range of about 100 fsec (1 femtosecond is 10⁻¹⁵ second, or one millionth of one billionth of a second) — about an order of magnitude lower than what was considered high performance even just a few years ago.

In general, the buffer's jitter is far less than the clock's jitter, so the primary jitter source is the clock rather than its buffer. The composite jitter seen at any load driven by the buffer's output is the rms (root mean square) combination of the clock and buffer jitter values. For example, a clock with 400 fsec (rms) of jitter followed by a buffer with 50 fsec (rms) yields a combined jitter of √(400² + 50²) ≈ 403 fsec, an effective increase of only about 3 fsec, so the buffer's contribution to the combined rms jitter is quite modest compared to that of the clock. (Note that this is just a top-level characterization of jitter, which is a multidimensional and complex subject, as discussed earlier in the section entitled "Understanding and characterizing jitter.")
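A minimal sketch of that root-sum-square combination, using the example values above, makes the arithmetic explicit; it also previews the equal-jitter case discussed in the next section:

/* Sketch: RSS (root-sum-square) combination of uncorrelated RMS jitter
   sources, using the example values from the text. */
#include <math.h>
#include <stdio.h>

/* Combine two uncorrelated RMS jitter values. */
static double rss2(double a, double b)
{
    return sqrt(a * a + b * b);
}

int main(void)
{
    double clock_fs  = 400.0;   /* clock RMS jitter, fsec */
    double buffer_fs = 50.0;    /* buffer additive RMS jitter, fsec */

    double total = rss2(clock_fs, buffer_fs);
    printf("Combined: %.1f fsec (added %.1f fsec)\n", total, total - clock_fs);

    /* Equal-jitter case: the total grows by a factor of sqrt(2), about 41%. */
    printf("Two equal 100-fsec sources: %.1f fsec\n", rss2(100.0, 100.0));
    return 0;
}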

Jitter expectations have tightened

If the contribution of the buffer to overall jitter is so small, why worry about using better buffers with lower jitter? In brief, today's system clock-speed trends and demands are affecting the required parameters. As the performance and clock speeds of systems increase, clocks must have much lower jitter than previous generations. As a consequence, clock buffers with even lower jitter are important to maintain the required performance at the load.

A basic look at some rough numbers makes this clear: When the jitter of the clock and buffer are equal – which could happen as clock-jitter values continue to shrink – the resultant rms jitter will be √2 (≈1.4) times that of the clock alone, an increase of about 40%. As a result, the performance of the clock signal as seen by the various loads is degraded significantly from its original value at the clock itself. Therefore, as clocks get better and have lower jitter, it's critical to also lower the buffer's jitter even more, so the contribution of the buffer to overall jitter is minimized.

Traditionally, the circuit designer's solution to the problem of excess jitter has required an unpleasant tradeoff among buffer performance parameters. They can select buffers with lower jitter, but to do this, those buffers use more current and/or voltage. In short, lower jitter demands more current. Unfortunately, this means that buffer users are stuck with higher-power, less efficient components, which reduce run time in battery-powered products and increase heat, which must be dissipated regardless of power source or availability.

The good news is that this traditional tradeoff is no longer required. Advanced clock buffers are able to achieve extremely low jitter without a power penalty. For example, IDT's 5PB11xx family of LVCMOS fanout buffers has rms additive phase jitter below 50 fsec over the range from 12 kHz to 200 MHz, yet requires just 15 mA of core current.

The resultant benefit is clear when looking closely at the rms additive phase jitter plots over that range. Figure 4a shows the jitter at the input of the 5PB11xx buffer from a 200-MHz source, while Figure 4b shows the same parameter at the buffer output. Even at this maximum frequency (the most challenging region), the additive jitter is only 31.6 fsec.


[Figure 4A and 4B images]

Figure 4. The additive jitter contributed to the clock signal by the IDT 5PB11xx is only about 32 fsec, even with a 200-MHz input. [from "New LVCMOS Fan Out Buffer Family—Product Introduction Overview," March 2015, page 3: (A) top-right plot, (B) lower-right plot]
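One common way to back a buffer's additive jitter out of this kind of input/output measurement, assuming the two contributions are uncorrelated, is to subtract the input jitter from the output jitter on a root-sum-square basis. The sketch below uses hypothetical numbers, not values read from Figure 4:

/* Sketch: estimating a buffer's additive RMS jitter from measured
   input and output jitter, assuming uncorrelated contributions.
   The numbers are hypothetical, not taken from Figure 4. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double input_fs  = 100.0;   /* jitter measured at buffer input, fsec */
    double output_fs = 105.0;   /* jitter measured at buffer output, fsec */

    double additive = sqrt(output_fs * output_fs - input_fs * input_fs);
    printf("Estimated additive jitter: %.1f fsec\n", additive);  /* ~32 fsec */
    return 0;
}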

The performance of advanced clock buffers such as the IDT 5PB11xx goes beyond their ability to achieve low jitter without a power penalty. Buffers in this class can operate from a supply between 1.8V and 3.3V without performance fall-off, so the same part can be used in different places in a larger design with multiple supply rails, and even across multiple products. Not only does this simplify the BOM (bill of materials), but it reduces the natural risk of added design "surprises" as multiple new parts are introduced.

For the 5PB11xx, the channel-to-channel output skew (the difference in relative timing between output channels) is just 50 picoseconds, which is critical to maintaining synchronization among multiple peripheral loads. Furthermore, advanced clock buffers achieve their performance and functionality in very small footprints. For example, the 5PB11xx is available in a tiny 2 × 2 mm DFN package, needed for leading-edge portable applications, as well as a larger 3.9 × 4.9 mm SOIC for upgrading the performance of older designs or for use in new projects where space is not as cramped.

Of course, while even an advanced device operating at 200 MHz eases the design challenge, it does not eliminate the circuit designer's responsibility. Good design rules still apply at 200 MHz, including a 12- to 18-inch (30 to 45 cm) maximum PC-board trace length between buffer and loads, using balanced lines on the PC board to minimize noise and ground loops, and employing other standard high-frequency practices.

For designers who have been weighing the choice between lower-jitter buffers with a power penalty they can't afford and lower-power units with inferior jitter specifications, advanced clock buffers eliminate that tradeoff. Their availability in smaller packages, with pin-outs that are compatible with existing products, means designers can use these buffers in new designs and also leverage their benefits in existing ones.

Baljit Chandhoke is Product Line Manager for Timing Products at Integrated Device Technology, responsible for new-product definition, product-line management, and interfacing with customers globally to help them meet their design challenges. Prior to joining IDT in 2011, Baljit was a Product Marketing Manager at ON Semiconductor from 2006 to 2011 and a Senior Applications Engineer at Cypress Semiconductor, working on PLL SerDes and video equalizers, from 2003 to 2006. Baljit completed Executive Education on Managing Teams for Innovation and Success at Stanford in 2014, received his Master's in Business Administration (MBA) from Arizona State University in 2009 and his M.S. in Telecommunications from the University of Colorado Boulder in 2003, and earned his Bachelor's in Electronics and Telecommunications Engineering from the University of Mumbai, India, in 2000.
