Using co-design to optimize system interconnect paths
Since the dawn of semiconductors, dies, packages, and the boards they reside on have typically been designed by different teams, each focusing its expertise within predefined boundaries.
This article is based on a class presented at DesignCon 2011.
Most engineers have seen flows where the die design gets thrown over the wall to the package designer, who then designs the package and throws the package footprint over the wall for the printed circuit board (PCB) designer to incorporate into the board design.
As design speeds crept through the low hundreds of megahertz, signal integrity (SI) engineers started to worry about power integrity (PI) as well. Design teams realized that simply connecting the dots is a bad interconnect strategy, and that the sprinkle-and-pray approach to decoupling capacitors was an expensive and risky way to handle power integrity.
As clock speeds reached into the mid- and upper-hundreds of megahertz, second-order effects that could previously be ignored began causing significant problems. For example, package skew had to be properly accounted for in the overall interface timing, and package decoupling that used to help mitigate power integrity issues was no longer effective on its own. These problems quickly became evident even when due diligence for SI and PI had been done at the PCB level.
Today, memory interfaces have single-ended data rates in the 1-GHz-plus range, and serial links run upwards of 10 gigabits per second. The throw-it-over-the-wall approach has become completely ineffective. Precise design, analysis, and rules-based control of each of these signals are required at the die, package, and PCB levels, and the analysis and optimization performed at each interconnect level must be done in a global context.
The problem presents itself in both the electrical and the physical domains. When designs are thrown over the wall, each piece is typically done by someone with little knowledge of the overall constraints of the system.
For example, the package designer may receive a die whose pad ring, bump array (flip chip), or passivation openings (wirebond) are far from optimally located around the die. There have been cases where signals were routed underneath and straight across the die core, where parallel bus interfaces were routed at different lengths and scattered around the package footprint with little regard for return current, and where many differential pairs were poorly routed and controlled.
A poor package design can be mitigated at the PCB level only to a certain extent, and at today's speeds this becomes impossible. When the package is designed without system knowledge, the package pinout and routing flow will most likely follow the path of least resistance for the package alone.
Frequent electrical problems seen with an un-optimized package design include power integrity and noise issues. As previously mentioned, the lack of, or improper use of, package decoupling capacitors could possibly be worked around with decoupling at the PCB level on slower-speed board designs.
However, today's systems require careful planning and design of the power delivery network. Proper use of bulk, mid-tier, package, and die capacitors is necessary. In addition, the placement, connection, and value of each capacitor must be designed together with the voltage planes it connects to, so that the resulting resonant frequencies are compatible with the operating frequencies of the design.
When these individual power delivery components are designed separately, there is no opportunity to optimize the power delivery network as a whole, and once connected together they can produce unexpected results.
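As an illustration of why the tiers have to be planned together, the short Python sketch below estimates the self-resonant frequency of each decoupling tier from assumed capacitance and parasitic inductance (ESL) values. All component values are hypothetical examples, not figures from this article or any specific design.

```python
# Illustrative sketch, not from the article: each decoupling tier is only
# effective up to roughly its self-resonant frequency, f_res = 1/(2*pi*sqrt(L*C));
# above that, the part looks inductive.  Values below are assumed examples.
import math

tiers = [
    # (tier name, capacitance in farads, assumed parasitic inductance in henries)
    ("bulk electrolytic",  100e-6, 5e-9),
    ("mid-tier ceramic",     1e-6, 1e-9),
    ("package capacitor",  100e-9, 100e-12),
    ("on-die capacitance",  10e-9, 1e-12),
]

for name, c, esl in tiers:
    f_res = 1.0 / (2.0 * math.pi * math.sqrt(esl * c))
    print(f"{name:20s} C = {c:8.1e} F  ESL = {esl:8.1e} H  f_res ~ {f_res/1e6:8.2f} MHz")
```

The spread of resonances is the point: no single tier covers the whole frequency band, so capacitor values, mounting parasitics, and the planes they connect to have to be chosen together rather than component by component.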
In addition to power integrity problems, electrical problems also appear in signal quality and timing. When the package design is done with no knowledge of the overall system requirements, signals that ideally would be matched in physical and electrical length may have significant arrival-time differences due to different numbers of vias or other discontinuities in their paths.
Lower-speed designs have enough margin to overcome these effects, but in single-ended interfaces operating at speeds in excess of 2 Gbps, there is essentially no margin left to correct these mismatches at the PCB level. Signal quality, crosstalk, and timing are all significant system-level challenges even with an optimized package, and trying to correct a poorly designed package at current bus speeds is close to impossible.
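To see how little room is left, here is a rough timing-budget sketch. The data rate follows the 2-Gbps figure above, but every penalty value is an assumption chosen only to illustrate the arithmetic, not a number from the article.

```python
# Rough timing-budget sketch with assumed penalty values (not from the article).
# At 2 Gbps a single-ended bit period is 500 ps; after typical deductions,
# only tens of picoseconds remain to absorb die/package/PCB length mismatch.
data_rate_gbps = 2.0
unit_interval_ps = 1e3 / data_rate_gbps          # 500 ps per bit

penalties_ps = {
    "transmitter and receiver setup/hold": 250.0,  # assumed
    "clock and data jitter":               100.0,  # assumed
    "crosstalk and ISI pushout":            80.0,  # assumed
}

margin_ps = unit_interval_ps - sum(penalties_ps.values())
print(f"unit interval:                {unit_interval_ps:.0f} ps")
for name, value in penalties_ps.items():
    print(f"  - {name}: {value:.0f} ps")
print(f"margin left for routing skew: {margin_ps:.0f} ps")
# With FR-4 propagation delay on the order of 6-7 ps/mm, a few millimetres of
# uncompensated mismatch or one extra via can consume most of that margin.
```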
Physical design issues come into play as a full 3D problem. Large ASIC designs are currently limited to a 55-mm-square package size; anything larger becomes challenging due to manufacturing and reliability constraints.
ASIC packages come in two types: ceramic and organic. Ceramic packages provide the benefit of a large number of layers to distribute the die signals, but they are higher cost and single-sourced. For example, ceramic packages typically have 15 to 21 total layers, with 6 to 10 of those dedicated to signals.
However, whenever possible, product management will resist having a single supplier for any device. In contrast, the cheaper organic substrates are typically limited to 6/4/6 layers, and that particular stack-up provides only three effective signal layers to break out the die.
The breakout of a complex die in a 55 mm organic package demands that the I/O pad ring be optimized for breakout on the chosen package stack-up. A more typical organic stack-up is 3/2/3 or 4/2/4, which yields only one or two layers to break out these high-I/O-count devices. The cost advantage of lower-layer-count organics comes at the expense of engineering complexity.
As with the ASIC package, there are limits on the number of layers that can be provided in a PCB stack-up. Engineering teams often find themselves up against the maximum number of layers for a given board thickness. This is especially true when there are high-pin-count ASICs on the board, because escaping their signals from under their shadow consumes many board layers.
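A back-of-the-envelope breakout estimate shows why. The sketch below assumes a full-matrix BGA with a 1.0 mm pitch and typical dogbone via and trace geometries; all of the numbers are hypothetical, and real escape planning depends on the actual ballmap and fabrication rules.

```python
# Back-of-the-envelope escape-routing estimate with assumed geometry
# (hypothetical numbers, not from the article).
import math

pitch_mm   = 1.00   # ball pitch of the hypothetical high-pin-count ASIC
via_pad_mm = 0.50   # dogbone via pad diameter placed between the balls
trace_mm   = 0.10   # routed trace width
space_mm   = 0.10   # required trace-to-copper spacing
rows_deep  = 16     # how many rows of signal balls must be escaped

# Clear channel between two adjacent via pads, and traces that fit through it
channel_mm = pitch_mm - via_pad_mm
traces_per_channel = math.floor((channel_mm - space_mm) / (trace_mm + space_mm))

# Each routing layer can pull out roughly one ball row per trace in the channel
layers_needed = math.ceil(rows_deep / max(traces_per_channel, 1))

print(f"traces per channel: {traces_per_channel}")
print(f"signal layers to escape {rows_deep} rows: about {layers_needed}")
```

Roughly eight signal layers consumed under a single device, before the rest of the board is even routed, is exactly the kind of pressure on layer count and board thickness described above.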
Another area that often gets addressed late in the design cycle is the pinout of the package and the die connectivity to components whose pinouts are already fixed. When the design is driven from the inside out toward such pre-pinned-out components, it is possible to end up with an interface where all the assigned signal pins are the mirror image of what is desired (Figure 1, below).
Connecting components where every rat's-nest line crosses another creates a nightmare for the board layout engineers. Instead of a simple point-to-point connection on a minimal number of layers with a minimal number of vias, every signal on that interface will require a minimum of two vias on the PCB, and routing becomes much more challenging. This drives up the PCB layer count, uses up most if not all of the routing channels, and in general makes the routing of other interfaces all the more challenging.

Figure 1: Un-optimized connectivity between a co-design BGA and PCB connector
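One way to quantify the problem in Figure 1 is to treat the bus as an ordered list of pins on the BGA edge and an ordered list of pins on the connector, then count how many rat's-nest lines cross. The Python sketch below uses hypothetical net names; a matched pinout gives zero crossings, while a mirrored one gives the worst case of n(n-1)/2.

```python
# Illustrative sketch with hypothetical net names (not from the article):
# two nets cross when their order differs between the two components, so the
# rat's-nest crossing count equals the number of inversions in the pin mapping.
from itertools import combinations

def crossings(bga_order, connector_order):
    """Count pairwise crossings of straight-line connections between two pin rows."""
    position = {net: i for i, net in enumerate(connector_order)}
    mapped = [position[net] for net in bga_order]
    return sum(1 for a, b in combinations(range(len(mapped)), 2) if mapped[a] > mapped[b])

nets = [f"DQ{i}" for i in range(8)]
print(crossings(nets, nets))        # matched pinout  -> 0 crossings
print(crossings(nets, nets[::-1]))  # mirrored pinout -> 28 crossings, n*(n-1)/2
```

Every crossing has to be resolved with a layer change, which is where the extra via pairs and the lost routing channels come from.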
At Cisco we started looking at how to optimize this system design challenge several years ago. Early methodologies involved a lot of meetings, emails, and tools such as Visio and Excel to manage the data and communicate with the design teams.
It was tough to keep everything in sync, and even tougher when design changes had to be retrofitted. Fortunately, at about the same time, Cisco learned that Cadence was developing EDA tools focused on bringing die, package, and PCB data together for optimization.
We have been collaborating ever since, and we now anticipate moving to an EDA tool flow that lets us start with fixed components, such as memory chips, that we know will be on our PCB and better account for the natural flow of their data paths in toward the die on our next design.
In addition to optimizing the system interconnect, signal quality, and power delivery network, Cisco also learned of additional benefits from this chip-package-board co-design methodology that we will discuss throughout this article.

