
Using co-design to optimize system interconnect paths

Since the dawn of semiconductors, the dies, the packages, and the boards they reside on have typically been designed by different teams that focus their expertise within predefined boundaries.

This article is from a class presented at DesignCon 2011.

Most have seen those flows where the die design gets thrown over the wall to the package designer, who then designs the package and throws the package footprint over the wall for the Printed Circuit Board (PCB) designer to incorporate into the board design.

As design speeds crept through the low hundreds of megahertz, Signal Integrity (SI) engineers also started to worry about Power Integrity (PI). Design teams began to realize that simply connecting the dots is a bad interconnect strategy, and that the sprinkle-and-pray approach to decoupling capacitors was an expensive and risky way to handle power integrity.

As clock speeds reached into the mid- to upper-hundreds of megahertz, second-order effects that could previously be ignored began causing significant problems. For example, package skew had to be properly accounted for in the overall interface timing, and package decoupling that used to help mitigate power integrity issues was no longer effective. These problems quickly became evident even when due diligence for SI and PI was done at the PCB level.

Today, memory interfaces have single-ended data rates in the 1 GHz-plus range, and serial links are running upwards of 10 gigabits per second. The “throw-it-over-the-wall” approach has become completely ineffective. Precise design, analysis, and rules-based control of each of these signals is required at the die, package, and PCB levels, and the analysis and optimization performed at each of these interconnect levels must be done in a global context.

The problem presents itself in both the electrical and physical domains. When designs are “thrown over the wall,” they are typically handled by someone with little knowledge of the overall constraints of the system.

For example, the package designer may be dealing with a die where the pad ring, bump array (flip chip), or passivation openings (wirebond) are far from optimally located around the die. There have been cases where signals were routed underneath and straight across the die core, parallel bus interfaces were routed at different lengths and scattered around the package footprint with little regard for return current, and many, many differential pairs were poorly routed and controlled.

A poor package design can be mitigated at the PCB level only to a certain extent, and at today's speeds it becomes impossible. When the package is optimized without system knowledge, the package pinout and signal flow will most likely follow the path of least resistance.

Frequent electrical problems seen with an un-optimized package include power integrity and noise issues. As previously mentioned, the lack of, or improper use of, package decoupling capacitors could possibly be worked around with decoupling at the PCB level on slower board designs.

However, today’s systems require careful planning and design of the power delivery network. Proper use of bulk, mid-tier, package, and die capacitors is necessary. In addition, the placement, connection, and value of each capacitor must be coordinated with the voltage plane to which it connects to ensure that resonant frequencies are compatible with the operating frequencies of the design.
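
To make the frequency-compatibility point concrete, here is a minimal sketch (not the authors' method) of the kind of arithmetic a PI engineer might run when choosing decoupling values. The supply voltage, ripple budget, transient current, and mounted inductance figures below are assumed purely for illustration.

```python
import math

# Assumed, illustrative values -- not taken from the article.
vdd = 1.0             # supply voltage (V)
ripple = 0.05         # allowed ripple as a fraction of Vdd
i_transient = 10.0    # worst-case transient current (A)

# Target impedance the PDN must stay below across the band of interest.
z_target = vdd * ripple / i_transient
print(f"Target impedance: {z_target * 1000:.1f} mOhm")

# Self-resonant frequency of a decoupling capacitor, including its
# mounted (ESL + via + pad) inductance: f = 1 / (2*pi*sqrt(L*C)).
def self_resonant_hz(c_farads, l_mounted_henries):
    return 1.0 / (2.0 * math.pi * math.sqrt(c_farads * l_mounted_henries))

for c in (100e-9, 10e-9, 1e-9):      # 100 nF, 10 nF, 1 nF
    f = self_resonant_hz(c, 1e-9)    # ~1 nH mounted inductance assumed
    print(f"{c * 1e9:5.0f} nF -> effective near {f / 1e6:6.1f} MHz")
```
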

When these individual power delivery components are designed separately, there is no opportunity to optimize the power delivery network as a whole, and once connected together they can produce unexpected results.

In addition to power integrity problems, electrical problems are also seen with signal quality and timing. When package design is done with no knowledge of the overall system requirements, signals that ideally would be matched in their physical and electrical length may have significant arrival time differences due to different numbers of vias or other discontinuities in their path.

Lower-speed designs have enough margin to absorb these effects, but in single-ended interfaces operating in excess of 2 Gbps there is essentially no margin left to correct these mismatches at the PCB level. Signal quality, crosstalk, and timing are all significant system-level challenges even with an optimized package, but trying to correct a poorly designed package at current bus speeds is close to impossible.
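
As a rough, hypothetical illustration of why the margin disappears (the per-via delay figure below is assumed, not taken from the article), compare the unit interval of a 2 Gbps single-ended signal with the skew introduced by just a couple of extra vias:

```python
data_rate = 2e9                  # 2 Gbps single-ended interface
unit_interval = 1.0 / data_rate  # 500 ps per bit

via_delay = 15e-12               # assumed ~15 ps added per extra via (illustrative)
extra_vias = 2                   # one signal takes two more vias than its neighbors

skew = extra_vias * via_delay
print(f"UI = {unit_interval * 1e12:.0f} ps, added skew = {skew * 1e12:.0f} ps "
      f"({100 * skew / unit_interval:.0f}% of the UI consumed before any other loss)")
```
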

Physical design issues come into play as a full 3D problem. Large ASIC designs are currently limited to a 55 mm square package size; anything larger would be challenging due to manufacturing and reliability constraints.

ASIC packages come in two types: ceramic and organic. Ceramic packages provide the benefit of a large number of layers to distribute the die signals, but they are higher cost and single-sourced. For example, ceramic packages typically have 15 to 21 total layers, with 6 to 10 of them dedicated to signals.

However, whenever possible, product management will resist having a single supplier for any device. In contrast, the cheaper organic substrates are typically limited to 6/4/6 layers, a stack-up that provides only three effective signal layers to break out the die.

The breakout of a complex 55 mm die in an organic package demands that the I/O pad ring be optimized for breakout with the chosen package stack-up. A more typical organic stack-up is 3/2/3 or 4/2/4, which yields only 1 or 2 layers to break out these high-I/O-count devices. The cost advantage of lower-layer-count organics comes at the expense of engineering complexity.

As with the ASIC package, there are limits on the number of layers that can be provided in a PCB stack-up. Engineering teams often find themselves up against the maximum number of layers for a given board thickness. This is especially true if there are high-pin-count ASICs on the board, because escaping their signals from under their shadow consumes many board layers.

Another area that often gets addressed late in the design cycle is the package pinout and the die connectivity to components whose pinouts are already fixed. When the design is driven from the inside out toward these pre-pinned components, it is possible to end up with an interface where all the assigned signal pins are the mirror image of what is desired (Figure 1, below).

Connecting components where every “rat's nest” line crosses another creates a nightmare for the board layout engineers. Instead of a simple point-to-point connection on a minimal number of layers with a minimal number of vias, every signal on that interface will require a minimum of two vias on the PCB, and routing becomes much more challenging. This drives up the PCB layer count, uses up most if not all of the routing channels, and in general makes routing the other interfaces all the more challenging.

Figure 1: Un-optimized connectivity between a co-design BGA and PCB connector
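
A toy way to see why the mirrored pinout in Figure 1 is so painful is to count rat's-nest crossings between two facing pin rows. The sketch below is illustrative only and assumes the pins sit in two straight, parallel rows.

```python
from itertools import combinations

def count_crossings(assignment):
    """Count rat's-nest crossings between two facing pin rows.

    assignment[i] = j means position i on one row connects to position j
    on the facing row; two connections cross when their left-to-right
    order flips between the rows.
    """
    return sum(
        1
        for (i1, j1), (i2, j2) in combinations(enumerate(assignment), 2)
        if (i1 - i2) * (j1 - j2) < 0
    )

pins = list(range(8))
print(count_crossings(pins))        # aligned pinout: 0 crossings, simple point-to-point routes
print(count_crossings(pins[::-1]))  # mirrored pinout: 28 crossings, every net crosses every other
```
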

At Cisco we started looking at how to optimize this system design challenge several years ago. Early methodologies involved a lot of meetings, emails, and use of tools such as Visio and Excel to manage the data and communicate to design teams.

It was tough not to get out of sync and even tougher when design changes had to be retrofitted. Fortunately, at about the same time, Cisco learned that Cadence was working on developing EDA tools that focused on bringing the die, package, and PCB data together for optimization.

Ever since, we have been collaborating with Cadence, and we now anticipate being able to move to an EDA tool flow that lets us start with fixed components, such as memory chips we know will be on our PCB, and better account for the natural flow of their data path in toward the die on our next design.

In addition to optimizing the system interconnect, signal quality, and power delivery network, Cisco also learned of additional benefits from this chip-package-board co-design methodology that we will discuss throughout this article.

Developing the methodology

The co-design methodology began to evolve at Cisco when we first realized there was a potential solution to our problems. As system engineers, we faced the constant challenge of finding solutions to compensate for poor package pinout and design.

It is always frustrating to untangle signals that could have been optimized in the package the first time around, and it is very difficult to force package changes once the design is finalized.

With the new power and signal requirements on our latest ASICs, we have found that bringing the package design inside Cisco provides many benefits. Focusing on the power delivery network, optimizing the package decoupling capacitors allowed us to reduce the overall capacitor count used at the board level.

This, in turn, reduced the overall product cost. In addition, designing the package and board together gave us control over the package footprint, which had the potential to save our layout engineers a lot of time routing the board.

Traditionally, the package and board use separate design tools, so we struggled to arrive at an optimized overall solution. Large-scale changes, such as moving a bus from the north to the south side of the package, were fairly straightforward.

But without an integrated view, it was difficult to see that a parallel bus could be mirror-imaged between devices, or how it would optimally connect to the components on the PCB.

This was the basis for our requirement to have an integrated board, package, and die environment.

The netlist data and how it is communicated to the different design teams have also evolved out of necessity. Visio and PowerPoint were the main ways we communicated graphical data between teams, and netlist data was managed in Excel.

However, as we were optimizing the package and PCB, it was not uncommon to have change requests coming from the IC design team. Merging their changes into the same Excel spreadsheet, which already held the changes from the package and the PCB, was a manual and error-prone process.

There was a constant concern that something would go wrong in this process, and the only verification step available was manually poring over the data and checking it line by line. This became quite tedious.

While we knew that collaboration among chip, package, and PCB design was necessary to build a reliable and cost-effective product, we quickly came to the conclusion that automation was required. We consulted with our EDA tool providers for chip, package, and PCB design tools.

Fortunately, the EDA companies were aware of the need for each of these design groups to communicate more effectively.

Because Cadence supplies each physical implementation tool, it was a natural step for them to help with this overall solution, which they addressed by developing a full co-design flow in partnership with Cisco.

On the physical side, Cadence had an existing capability to import the I/O pad ring of the chip into the package design tools. They also had a method to bring the package and PCB designs together into a single tool for optimization. However, these were separate steps, and the need arose to integrate them into a seamless flow.

On the logical side, Cadence had developed a connectivity management tool that supported the ability to review changes (ECOs) and either accept or reject those changes. They also had a verification methodology where the netlist could be compared to the physical design (LVS) and anything out of sync could be easily identified and corrected.

These methodologies were extremely useful in getting us past some of the stumbling points we had encountered using our in-house solutions. Unfortunately, the EDA solution was fairly new and not complete, so we continued to evolve our in-house solutions while collaborating with Cadence.

As we continued to evolve our methodology, we worked with our EDA tool suppliers for both implementation and analysis. We currently use multiple EDA suppliers’ tools to extract and analyze the entire system.

A best-in-breed solution is essential for our development cycle. Fortunately, Cadence is core to our process and the other suppliers’ tools communicate with Cadence tools to help the extraction and modeling of the system from the physical database. This allows us to combine system data into a single simulation.

Each iteration of our designs was becoming smoother from a methodology perspective. We eventually found a balance of EDA tools and in-house solutions that got us to a point where we could clearly see the advantage of bringing the package design internal to Cisco. However, we certainly preferred to have a pure EDA flow as it is expensive and risky to maintain in-house tools and software solutions.

Cadence was persistent in their efforts to close the gap between what they could provide and what was needed for a comprehensive design flow. The latest versions of the EDA tools provide netlist management in a system connectivity manager (SCM) and a single-window view of chip, package, and PCB together for easy optimization (Figure 2, below).

We still see a lot of improvements that could be made to the flow, which we will discuss later in this article.

 

Figure 2: Using System Connectivity Manager to author hierarchical IC-Package-PCB netlist

However, without the co-design methodology that Cisco has developed with our EDA tool suppliers, we would be extremely challenged to meet the productivity and profitability goals we have been given by our management.

Target Methodology Objective

As with all design teams, Cisco’s end goal is to deliver a higher-performing product in a shorter amount of time while reducing total development cost. However, given the ever-increasing need to provide more features in less space, we anticipated that it would eventually become impossible to deliver products at all unless we streamlined and automated our design flows.

Only with upfront planning among PCB, package, and ASIC design teams working in parallel are we able to minimize hardware cost, reduce design iterations, and meet performance requirements. This section describes the flows and tools we have found necessary to accomplish these interdependent goals.

Assigning design roles

Our design teams engage in both ASIC design flows and customer-owned tooling (COT) flows. In either case, roles are defined as follows:

*System architect: defines and owns the system logical netlist definition down to the pinout of the die.

*SI engineer and package designer: work together to define the pin assignments between the PCB and package teams and between the package and IC teams using co-design. Define the IC pad ring and bump array in the context of the system. Own signal and power integrity of the data path, covering timing, signal quality, and power integrity throughout the system. Also own the package implementation.

*PCB designer: supports the co-design collaboration with the SI engineer and package designer. Owns the PCB implementation.

*IC designer: supports the co-design collaboration with the SI engineer and package designer. Owns the IC implementation.

In an ASIC flow, a third-party team is usually responsible for the physical implementation of the IC. In a COT flow, the internal design team owns all aspects of the design.

Because each of these teams works together in the co-design process from the beginning of the project, we are able to define optimal pin assignments and I/O placement for the die floorplan very early in the design process. In addition, critical interface areas of the design are identified early. This allows us to bear down in these areas and apply resources to identify and itemize their unique requirements.

System Netlist Management

One major difficulty we have faced in the past is managing the netlist across the system and keeping track of net names as the signal traverses various levels of the design hierarchy. The Cadence System Connectivity Manager (SCM) has been a great help in managing this hierarchical connectivity maze.

SCM provides a hierarchical, table-based system from which we are able to design and manage our complete system netlist down to the ports of the I/O drivers on the die. Its ability to map net names among the PCB, package, and IC environments has proven quite convenient.

However, because SCM is a table-based environment, designers who are accustomed to navigating system connectivity through a graphical netlist will find that it takes some getting used to. Also, at this time the integration of the die pad ring into SCM is still under development.

We define the PCB at the top level of the hierarchy and populate the design with each of the components that make up the PCB (Figure 3, below ). One of those components is, of course, the BGA package and die that will make up the new ASIC being designed.

 

Figure 3: System Connectivity Manager netlist hierarchy and component views, showing both the PCB and package component views.

The information that makes up the ASIC can be captured in SCM in several ways. If a preliminary or shell database of the package exists, it can be read into the SCM environment, and any existing connectivity within the database will automatically be integrated into the SCM hierarchy.

In addition, the new ASIC database can be created by pointing to a standard or custom package footprint library and a die image, and then defining connectivity between the die and the package.

The die image can be imported from a simple text file defining bump names, their x and y coordinates, net names, etc., or by reading in the die DEF and LEF libraries generated by an ASIC design tool such as Cadence Encounter Digital Implementation System (EDIS). The advantage of reading in the DEF/LEF libraries is that it provides full visibility into the die pad ring.
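
As an example of how simple that text-file path can be, the following sketch reads a hypothetical comma-separated bump list (name, x, y, net). The exact file format the tools accept is not specified here, so the parser and the sample record are illustrative only.

```python
import csv
from collections import namedtuple

Bump = namedtuple("Bump", "name x_um y_um net")

def read_bump_file(path):
    """Parse a hypothetical bump list: one 'name, x, y, net' record per line."""
    bumps = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or row[0].strip().startswith("#"):
                continue                      # skip blank lines and comments
            name, x, y, net = (field.strip() for field in row)
            bumps.append(Bump(name, float(x), float(y), net))
    return bumps

# Example record: "BMP_A01, 120.0, 85.0, DDR_DQ0"
```
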

Connectivity of these components is easily defined and maintained in connectivity pane tables. Because this is only a logical tool, we connect the nets to the new ASIC fully knowing that they will be optimized from a physical standpoint later in the flow.

We next descend into the new ASIC package block to instantiate its internal components, which normally would include one or several die and possibly some on-package decoupling capacitors, and then define the connectivity between these components and the BGA pins.

The die footprint and its bump connectivity can be brought into SCM in several ways. Because SCM was designed to take advantage of the implicit hierarchy that exists among the PCB, package, and die, we are able to maintain different name spaces. SCM provides the ability to view multiple name spaces simultaneously (Figure 4, below).

Once the connectivity is defined and components instantiated, we are able to generate a board file from the top level and a package file from the package block, and automatically launch the package or PCB layout tools for physical implementation.

 

Figure 4: System Connectivity Manager component connectivity pane. This illustrates the ability to maintain different name spaces between the PCB, package, and IC environments.

However, before any physical implementation tasks begin, the connectivity flow across the hierarchy must be untangled. The task of optimally untangling the connectivity across multiple levels of hierarchy is a complex one. At this time, SCM implements some simple heuristics to help the user with pin assignments, but it will not guarantee that a physical implementation is possible, let alone optimal.

At any point in the flow, SCM can import the current status of the PCB or package tool database to sync it back with the logical definition. Because the co-design BGA is just that—a co-design component—if pin optimizations to the BGA were done in the package, for example, those swaps would be propagated to the PCB as well. Also, when a physical board or package database is read into SCM, it first does a comparison and reports on the differences between the two.
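
Conceptually, that compare-and-report step behaves like a diff over pin-to-net maps. The sketch below is a stand-in to show the idea only; it is not the tool's actual algorithm or data model.

```python
def diff_pin_maps(logical, physical):
    """Compare two {pin: net} maps, LVS-style, and report the differences.

    Both arguments are plain dicts standing in for the logical netlist
    and the extracted physical database.
    """
    only_logical = sorted(set(logical) - set(physical))
    only_physical = sorted(set(physical) - set(logical))
    mismatched = {
        pin: (logical[pin], physical[pin])
        for pin in set(logical) & set(physical)
        if logical[pin] != physical[pin]
    }
    return only_logical, only_physical, mismatched

logical = {"A1": "VDD", "A2": "DDR_DQ0", "A3": "DDR_DQ1"}
physical = {"A1": "VDD", "A2": "DDR_DQ1", "A4": "GND"}
print(diff_pin_maps(logical, physical))
# (['A3'], ['A4'], {'A2': ('DDR_DQ0', 'DDR_DQ1')})
```
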

If the pinout of the co-design BGA or die changes, then a simple ECO process enables the changes to be propagated through the system. One of the benefits of this simple ECO process is that we don’t have to wait until our netlist is complete to begin floor planning and analysis.

We begin with the most critical part of the design, say a high-speed memory interface, and create the netlist for this interface for the PCB, the package, and the IC pinout with I/O drivers. We use this netlist to quickly define the pinout in the BGA and IC, route the nets, extract parasitics, and run various analyses on the bus to make sure things are working properly. We continue this process, incrementally adding each interface to the design, checking each extraction, and moving on to the next interface.

The “design reuse” capability is a great benefit and is essential to our flows. Being able to take parts of existing designs, such as a memory complex from Design A or a processor complex from Design B and several custom ASICs from Design C, and import them into the current design is tremendously useful.

Each of the previous designs has a lot of technical details and intellectual property built into it. The faster we can leverage that previous effort in the next-generation design, the better the upfront definition and verification of the new product becomes.

SCM is an ideal place to support these types of transactions. Cadence has introduced interface-based design concepts that we think might address this, but we have yet to fully evaluate the technology.

PCB-Package-IC Layout

As we mentioned above, we have found that if we can begin the package and PCB layout and pin assignments before the IC design layout begins, we have a much better chance of meeting our design requirements.

The first major methodology shift for us was to bring the package design in-house so we could drive physical design constraints up and down the design hierarchy. When we know what components will be on the board, we can define the pinout for the ASIC BGA for optimal board routability.

We can then drive that pinout into the package and then into the IC pad ring. This gives us a very clean data path throughout the system. It is easier to route and, therefore, we can focus on optimizing the electrical performance of interfaces such as SerDes or high-speed source-synchronous busses.

Of course, there are cases in which we have an IP block with a fixed pinout in the die. Because we have a robust co-design environment, we can map the pinout up to the board through the BGA to resolve any tangled nets at the PCB level, or we can resolve them in the package.

In many cases, the bumps are not built into the IP block, so we can resolve these types of tangles in the IC by optimizing the signal assignments of the nets from the package, then resolve the tangles in RDL routing from the bumps to the IP block. Because we haven’t invested in final place-and-route in any of the design fabrics, we have the flexibility to resolve these types of routing issues in the fabric of our choice.

 

Figure 5: Single-canvas view including PCB, package, and IC connectivity, shown before optimization.

The second major methodology shift for us has been to view the layout of the PCB, package, and IC pad ring in a single canvas (Figure 5, above). This is still a work in progress, but our early prototype shows that it provides a first-cut mechanism to optimize our data path between the board and the IC through the package BGA (Figure 6, below).

 

Figure 6: Same view as Figure 5, but with connectivity optimized between the PCB, package, and die.

Automatic pin optimization allows for global-level changes of pin assignment from PCB to package to IC, and from IC to package to board. Individual pin-swap capabilities provide for quick fine-tuning of assignments.

Re-ordering of individual I/O drivers allows us to optimize net assignment entering the die with respect to natural locations of the nets on the board (Figure 7, below ).
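
The optimization itself belongs to the tools, but the intent can be illustrated with a toy greedy swapper that keeps exchanging two swappable signals whenever doing so shortens the total Manhattan rat's-nest length. Everything below (coordinates, signal names, and the greedy strategy itself) is assumed for illustration.

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_pin_swaps(ball_xy, target_xy, assignment):
    """Toy pin optimizer: apply pairwise swaps while total length improves.

    ball_xy[ball]     -> (x, y) of a BGA ball
    target_xy[signal] -> (x, y) of the board-side pad that signal must reach
    assignment        -> {ball: signal}, modified in place
    """
    improved = True
    while improved:
        improved = False
        balls = list(assignment)
        for i, p in enumerate(balls):
            for q in balls[i + 1:]:
                before = (manhattan(ball_xy[p], target_xy[assignment[p]]) +
                          manhattan(ball_xy[q], target_xy[assignment[q]]))
                after = (manhattan(ball_xy[p], target_xy[assignment[q]]) +
                         manhattan(ball_xy[q], target_xy[assignment[p]]))
                if after < before:
                    assignment[p], assignment[q] = assignment[q], assignment[p]
                    improved = True
    return assignment

balls = {"A1": (0, 0), "A2": (10, 0)}
targets = {"DQ0": (0, 10), "DQ1": (10, 10)}
print(greedy_pin_swaps(balls, targets, {"A1": "DQ1", "A2": "DQ0"}))
# {'A1': 'DQ0', 'A2': 'DQ1'} -- the crossed pair gets un-crossed
```
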

Once optimizations are complete, we read the results back into SCM, then export individual board and package files for layout. IC results are written from the single-canvas view and then imported directly into Cadence Encounter Digital Implementation System for the IC design. One key advantage of this flow is that the net names for each design space are preserved throughout the flow.

 

Figure 7: Optimized package and co-design die connectivity.

This view includes an abstracted view of the co-design die, including the I/O drivers and core hard macros. Although this flow shows great promise, there are still several missing pieces we need:

1) The ability to manage net assignments to the BGA and die by interface (this is in the Cadence roadmap, but it is yet to be evaluated)

2) Automatic I/O optimization (the environment only provides for automatic pin optimization to the BGA and bump array)

3) The ability to define, create, and manage tiles and bump covers for individual I/O groups on the die

4) The ability to pick and drag tiles on the die canvas

Now that we have tighter integration by doing the package design in-house, we are able to be more proactive in the design of the system power delivery network (PDN).

Delivering power to the chip requires that we manage the path from the voltage regulator module, through the board, and through the package. Proper use of decoupling capacitors throughout the entire path is essential to an optimized PDN.

We have a strategic advantage over having an outside company do the package design: the package de-cap quantity and location can now be optimized along with the system-level PDN. At lower frequencies and higher voltage swings, there was enough margin for us to work around these issues on the PCB using I/O de-caps.

However, today’s high-frequency designs offer far less margin and we need to optimize the PCB and package PDN together. This has resulted in fewer capacitors being used overall and lower overall product cost.

One major benefit of planning the BGA and die I/O pinout is that the package route is easier to complete when the correct flow has been established for the system. By planning our design from the system in and back, we can minimize the crossing of busses, ensure the I/Os are pinned out with the correct signal-to-ground/power ratios, and drive the correct rules into the PDN.

This helps ensure a successful design. For many designs we have needed to use ceramic packages to break out these large, complex ASICs.

With these new tools and co-design methodology, we are excited to take the less expensive organic packages to the next level. The ability to manage our pinouts and thus the routability of the design enables us to make better use of organic packages.

Bringing package design in-house has allowed us to engage much earlier in the ASIC cycle than ever before. This early engagement allows us to explore design tradeoffs of the system in time to influence the design. One great example of this is the last backplane ASIC our team designed.

In parallel with the ASIC design, we had built the entire system-level signal integrity models of the SerDes links across the backplane. Having control and freedom in the package, we were able to study the effects of improvements in the package design that help the SerDes channels. By using these techniques and a low-impedance package, we were able to drastically (10x) increase the minimum eye opening of our system design. Many of these improvements were related to the actual pinout and via optimization of these channels.

One of the biggest resets we face in a new ASIC design is a die size change. Die size increases can really have a negative effect on the project schedule. Each die size increase will move the locations of the I/O placement and require a reroute of the package.

This inevitably happens near the end of the design cycle after much work has already been implemented in the package and board. The co-design flow we have prototyped helps us minimize the impact of increases in die size that are part of the process for us.

By providing tools to facilitate import from the ASIC tools and a remap into the system-in-package tools, we could minimize the amount of time required for this process. However, because our I/Os are neatly optimized to the BGA and PCB, doing a reroute isn’t as painful as it used to be.

Improvement in die fanout and package reroute is needed to help negate the impact of die changes. We are hoping that the integration of Cadence Global Route Environment (GRE) routing features into the new tool sets will help in this area, and that Cadence will improve the flow into and out of the ASIC tools.

A robust co-design and implementation methodology enables us to explore different design solutions in our project and to check the feasibility of different approaches. We have explained how we will implement a specific interface throughout the system to verify that it will work as planned.

Pin optimization on the BGA and the die also allows us to explore different solutions. When we create a bump array in the IC, we can verify that the pitch of the bump array will provide sufficient routing resources to do the RDL routing in the die and to do the die breakout in the package.
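
A quick feasibility check of that kind can be done with back-of-the-envelope arithmetic before any routing is attempted; the pitch, pad, and design-rule numbers below are assumed purely to show the calculation, not taken from the article.

```python
# Illustrative check, not the tool's algorithm: how many RDL traces fit
# between adjacent bumps for an assumed pitch and set of design rules?
bump_pitch_um = 150.0   # assumed flip-chip bump pitch
bump_pad_um   = 90.0    # assumed bump pad (capture) diameter
trace_um      = 10.0    # assumed RDL trace width
space_um      = 10.0    # assumed trace-to-trace / trace-to-pad spacing

gap = bump_pitch_um - bump_pad_um                     # clear span between pads
tracks = int((gap - space_um) // (trace_um + space_um))
print(f"{tracks} RDL trace(s) fit between adjacent bumps")   # -> 2
```
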

A quick package router is needed to allow for quicker iterations inside the package. We have found that if we break out the routing for each of the components in the design and do pin assignment to the codesign objects based on the breakout patterns, we are able to get cleaner routes more quickly.

Conclusion

We believe that given the right tools and methodologies, we can reduce risk while also delivering better performing products in a shorter amount of time and at lower cost than we have in the past.

The early engagement coupled with our evolving co-design methodology will allow us to deliver the high-performance world-class products our customers demand with the potential of reducing layer requirements. We have discussed several process improvements related to the co-design methodology, as well as a few areas of improvement needed in this space. Key takeaways are summarized here:

The Cadence flow facilitates using existing designs with the System Connectivity Manager (SCM) to leverage outside information that helps drive our package design and relates this information to the ASIC teams.

It also enables us to visualize and integrate the die/package and PCB into a consolidated view, which really drives the flow from one area to the next. Our system-level netlist and routing are much smoother with this approach.

Integration early in the design cycle, before final place-and-route, is critical and ensures we have the flexibility we need to truly optimize the system-level interconnects. For improvement we need to see EDA investment in the I/O planning integration of these tool sets.

We need to leverage the information and flexibility from the top-level floor plan early in the design cycle to take this flow to the next level. Smoother integration with the sub-tools would be next on our list. From the customer perspective, the new additions to the cockpit are a huge step in this direction, and we are excited to continue working toward this goal.

Real Pomerleau is a technical lead engineer on the Systems and Silicon Engineering High Speed Design team at Cisco Systems Inc. Real has been with Cisco for over 11 years, working in the areas of signal and power integrity, ASIC package and SiP design, and die-package-board co-design. Prior to Cisco, Real worked for Intel as an individual contributor and for Nortel Networks as a manager and member of the scientific community. Real received his BESc, MBA, and MESc from Western University (London, Ontario), Duke University (Durham, NC), and Oregon Technical Institute (Portland, OR), respectively, and was a PhD candidate at NC State University in Raleigh, NC. Real holds two US patents.

Stephen Scearce is the manager of the Systems and Silicon Engineering High Speed Design team at Cisco Systems Inc. Stephen has worked at Cisco for 10 years in the signal integrity, power integrity, and EMC fields. Prior to Cisco, Stephen worked for NASA LaRC as a research engineer on the Electromagnetic Research Branch HIRF team. Stephen holds 4 current US patents. Stephen received his BSET and MSEE from Old Dominion University in Norfolk, VA.

Tom Whipple is currently a Product Engineering Architect at Cadence Design Systems, Inc., with responsibility for System-in-Package and PCB-Package-IC Co-design Solutions. Tom has 17 years of experience in EDA, covering IC floor planning, synthesis, and now SiP and co-design flows. Prior to Cadence, Tom was an IC design engineer at VLSI Technology, Inc. Tom received a BSEE from Brigham Young University and an MSEE from the University of Arizona.
