Design convergences: approaches to handling the coming tsunami - Embedded.com

Design convergences: approaches to handling the coming tsunami

In a recent article, Rajeev Madhavan, Chairman and CEO of Magma Design Automation, outlined the changes design teams are grappling with as they face multiple challenges in the designs they have to deliver. It is indeed true that designs will have to meet the following requirements: increasing integration of analog and digital components on the same silicon substrate, lower power consumption targets (both standby and operational), and higher performance. It is also true that all of this must be done within compressed design schedules and with smaller design teams spread across the world.

In addition to these challenges, design teams also have to worry about the integration of their chips inside a package (single die, stacked die, MCM, SiP, etc.) and on the board. This integration has to be analyzed for power and signal integrity to ensure robust power delivery and signal transmission in high-performance designs. Additionally, cooling-system design and EMI/EMC compliance are two key system-level design targets that require integrated chip-package-PCB analysis. So not only are designers challenged with chip-level integration, they also have to address chip, package and PCB integration and design convergence.

As engineering management in design houses looks for ways to reduce power, integrate multiple types of IP in their silicon, and ensure the highest performance for their systems, they have to examine how their different design groups (RTL, analog/IP design, SoC physical design, package/PCB design) work together. These groups use different data formats, follow different design and analysis methodologies, and target different goals. Adding to the complexity, they may belong to separate and distinct organizations.

Given the compressed schedules design teams have to work with, they need integrated solutions that not only help them optimize and sign off for these design targets (power, integration and speed) but also help them work together as one design entity, even if they are separated by corporate or data/flow/methodology boundaries. What they need are analysis flows they can use throughout the design cycle, optimizing and verifying the design at every stage and feeding data to the other stages and design flows.

As Rajeev mentioned in his article, power has to be considered throughout the design process. This is true not only for mobile devices, where low-power targets are required, but also for high-performance designs, where performance per Watt is the deciding factor. Most of the optimizations that reduce a design's power are achieved early in the design process, where targeted changes not only have the most impact but are also easier to implement. But netlist changes made at this stage to reduce power can, and often do, introduce unwanted effects on the power integrity of the design later in the design process. Hence, a design-for-power methodology is required in which power analysis and reduction start early in the design process (at the RTL stage). Then, as the design progresses, power consumption at the block and chip level is monitored through regular regressions and checked as a design target. Violations are flagged and resolved as they appear, preventing unwanted surprises later. As one progresses to the physical design stage, information from the RTL simulations is passed along to provide more coverage for gate-level design verification and sign-off.
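As a rough illustration, the per-block power regression described above ultimately reduces to comparing measured numbers against budgets and flagging violations. The block names, budget values, and measurements below are all invented for the sketch; in a real flow the measurements would come from RTL or gate-level power analysis tools.

```python
# Hypothetical sketch of a per-block power regression gate.
# Budgets (mW) are invented; real targets come from the power spec.

POWER_BUDGETS_MW = {
    "cpu_core": 250.0,
    "ddr_ctrl": 120.0,
    "analog_fe": 40.0,
}

def check_power_budgets(measured_mw, budgets_mw=POWER_BUDGETS_MW):
    """Return a list of (block, measured, budget) violations."""
    violations = []
    for block, budget in budgets_mw.items():
        measured = measured_mw.get(block)
        if measured is not None and measured > budget:
            violations.append((block, measured, budget))
    return violations

# Example regression run with made-up measurements:
measured = {"cpu_core": 265.0, "ddr_ctrl": 110.0, "analog_fe": 38.0}
for block, got, limit in check_power_budgets(measured):
    print(f"VIOLATION: {block} draws {got} mW, budget {limit} mW")
```

Run as a nightly regression, a check like this turns power from an end-of-project surprise into a tracked design target.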

As one integrates multiple analog and digital IPs on the same die, especially at the 40/28nm technology nodes, several issues have to be resolved to make the mixed-signal design environment part of the overall chip design flow. Robust power delivery continues to be a challenge, and an increasing one, as the power grid is no longer homogeneous and uniform across the whole design. It is now highly fragmented, supplying power to 40+ domains on the same chip. Additionally, given the custom nature of analog layout and the sensitivity of analog designs to power/ground noise, rigorous analysis becomes a must. Electromigration, for both power and signal lines, is a sign-off requirement, especially at these technology nodes with their shrinking wires and increasing current densities.

On top of this, issues once considered esoteric are becoming mainstream. ESD protection, long the purview of I/O design teams and ESD experts, is now a broader design challenge given the fragmented nature of the power grid (e.g. the ARM core may have its own power supply that never goes near the I/O pads) and the increasing cost of ESD protection and failure. Coupling of noise between the analog and digital blocks cannot be mitigated by over-design of the isolation structures alone; the coupling has to be simulated with the actual structures to understand the benefit of each guard-ring design and configuration. Each of these analyses (IP sign-off, ESD integrity and substrate noise coupling) spans multiple groups in a design team, so an integrated analysis flow is needed that can validate each component by itself and generate models that can be used at the next higher level.
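The electromigration concern above comes down to simple physics: average current density is J = I / (w × t), and shrinking wire cross-sections push J toward the foundry's limit even at constant current. The sketch below illustrates the arithmetic with invented dimensions and an assumed limit; real EM limits are temperature- and layer-dependent and come from the foundry's technology files.

```python
# Hypothetical sketch of a signal-line electromigration screen.
# Wire dimensions, currents, and the J_max limit are invented.

J_MAX_MA_PER_UM2 = 2.0  # assumed EM limit, mA per square micron

def current_density(i_ma, width_um, thickness_um):
    """Average current density J = I / (w * t), in mA/um^2."""
    return i_ma / (width_um * thickness_um)

def em_ok(i_ma, width_um, thickness_um, j_max=J_MAX_MA_PER_UM2):
    """True if the wire's average current density is within the limit."""
    return current_density(i_ma, width_um, thickness_um) <= j_max

# A 0.05 um x 0.1 um wire carrying 0.02 mA exceeds the assumed limit:
j = current_density(0.02, 0.05, 0.1)  # roughly 4.0 mA/um^2
print(j, em_ok(0.02, 0.05, 0.1))
```

Note how halving the wire width alone doubles J, which is exactly why EM moved from a niche concern to a sign-off requirement at these nodes.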

The second aspect of integration that is equally crucial is chip-package-PCB design convergence. This is a separate topic in itself, given its significance and complexity. The quality of the package/PCB power delivery network design can determine whether the design will meet its GHz target. The design of the I/O, along with the design of the signal and power traces in the package and PCB, determines whether the DDR3 interface will meet its jitter spec. Thus, IC engineers cannot afford to ignore or design their parts in isolation. Nor can the package/PCB designers sign off their components using abstract (or no) models of the IC. Given the separation that exists between the design flows, methodologies and teams, a model-based approach is best suited to bridging this divide. For this to work, each of these teams (IC, package and PCB) has to provide relevant and meaningful models that capture the appropriate electrical signature (power, signal, thermal and EMI). Qualifying these models and standardizing their use in the design flows (e.g. an S-parameter model of the package/PCB for on-die simulations, or die models with package/PCB simulations) is a required part of the chip-package-PCB design convergence flow.
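One concrete example of the model qualification step mentioned above is a passivity check: a package or PCB interconnect model must not appear to generate energy, which for a one-port S-parameter model means |S11(f)| ≤ 1 at every frequency point. The sketch below shows the idea for a one-port model with invented sample data; production flows run the equivalent check (on singular values of the full S-matrix) for multi-port models.

```python
# Hypothetical sketch of a one-port S-parameter passivity check,
# a basic qualification gate before a package/PCB model is used
# in on-die simulations.  The sample data points are invented.

def is_passive_1port(s11_by_freq, tol=1e-9):
    """s11_by_freq: iterable of (freq_hz, complex S11) pairs.
    A passive one-port reflects no more energy than it receives."""
    return all(abs(s11) <= 1.0 + tol for _, s11 in s11_by_freq)

# Invented reflection data for a plausible-looking PDN model:
model = [
    (1e6,  0.98 + 0.01j),
    (1e9,  0.40 - 0.55j),
    (5e9, -0.20 + 0.30j),
]
print(is_passive_1port(model))  # True for this sample
```

Rejecting non-passive (or non-causal) models at hand-off is cheap insurance: a bad model can make a time-domain chip-package co-simulation diverge long after the package team has signed off.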

Indeed, chip designers face several challenges going forward. The solutions they need must be relevant not only to a specific design group but comprehensive enough to provide an integrated flow for the entire design team. Only then can they ensure delivery of systems that meet spec for power, functionality and speed.

About the author:
Aveek Sarkar is vice president of product engineering and support at Apache Design Solutions. He joined Apache in 2003 from Sun Microsystems, where he worked on several generations of UltraSparc processors. Mr. Sarkar holds a B.Tech from the Indian Institute of Technology, Kanpur, an MS EE from Oregon State University, and an MBA from Santa Clara University.
