Today's mobile communication systems use sophisticated signal processing to achieve high transmission rates. Leading-edge wireless systems must contend with even more challenges as designs are expected to be multistandard and reconfigurable. Various integration strategies must be evaluated to verify the feasibility of the proposed approach, weighing issues such as performance, cost and risk.
The requirements of existing communication standards differ in terms of center frequency, signal bandwidth, SNR and linearity. These differences considerably affect all radio front-end building blocks. A comprehensive trade-off analysis is also needed to select the most appropriate architecture and meet the individual circuit-block requirements.
The complexity of digital signal processing is also steadily growing. Digital blocks can compensate for some of the signal impairments caused by analog front-end blocks.
To verify complicated digital compensation algorithms and the effect of analog non-idealities such as phase noise, nonlinearity and mismatch, the analog and digital blocks need to be simulated together. A key bottleneck in RF/baseband codesign is the presence of the RF carrier signal at several gigahertz in the RF front end.
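To make the notion of analog non-idealities concrete, the following sketch (illustrative values only, not tied to any particular tool) applies a random-walk phase error and a compressive third-order nonlinearity to an ideal complex-baseband tone in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1e6, 4096                       # sample rate and length (illustrative)
t = np.arange(n) / fs
x = np.exp(2j * np.pi * 10e3 * t)       # ideal complex-baseband tone

# Phase noise modeled as a random-walk (Wiener) phase error
phase_err = np.cumsum(rng.normal(0.0, 1e-3, n))
x_pn = x * np.exp(1j * phase_err)

# Compressive third-order nonlinearity: y = x - a3*|x|^2*x (AM-AM only)
a3 = 0.05
y = x_pn - a3 * np.abs(x_pn) ** 2 * x_pn

# The unit-amplitude tone is compressed to 0.95 of its ideal level
print(round(float(np.max(np.abs(y))), 3))
```

Capturing interactions between impairments like these and the digital compensation logic is exactly what drives the need for combined analog/digital simulation.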
Simulating the bit error rate (BER) or packet error rate (PER) of a complete telecom link at the transistor level, running thousands of cycles of the modulated signal, is very expensive and often impractical.
Aside from this performance verification, where the actual design is validated against specification, another key requirement is the functional verification of the entire chip.
Simple implementation errors at the interface between the digital control circuitry responsible for the various operating modes (e.g., power-up, power-down, receive, transmit and band selection) and the analog front end are often the cause of expensive respins. IC designers typically overcompensate and stick to budget requirements passed down from the system designer.
The IC designer could prove that a more relaxed specification within the IC will still meet system-level requirements. With no way to prove the theory, however, time is spent optimizing circuitry beyond what is actually necessary.
Systems involving baseband and analog/RF portions have traditionally been designed, simulated and verified separately, owing to the different mindsets of the engineers and the tools of the two domains. The goal during system-level design is to find an algorithm and architecture that implement the required functionality while providing adequate performance at minimum cost.
RFIC designers also face several significant challenges during the actual implementation phase. With a large IC such as a wireless transceiver, high-speed requirements make circuits extremely sensitive to parasitic effects, including parasitic inductance, passive-device modeling inaccuracies and noise. Thus, the essence of the RFIC flow is the ability to manage, replicate and control post-layout simulations and effects, and to use this information efficiently at timely points throughout the design process.
RFIC design also requires specialized analysis techniques unique to RF. These are a cross between frequency-domain and time-domain methods, chosen on the basis of circuit type or of designer preference and comfort level. Ultimately, this requires a seamless environment that affords a choice of simulation method.
Integration trends have also affected the RFIC world, which used to be viewed as a separate, almost standalone entity. Today, many RFICs contain at least the ADC, DAC and PLL functions, as well as a digital synthesizer that is created through the digital environment and integrated on chip.
In other cases, RF content is being added to large SoCs as some design groups attempt a single-chip solution. Still others are integrating with system-in-package (SiP) techniques, which results in the same verification issues confronting RFIC and SoC methodologies.
Addressing these challenges requires a complete solution that must:
1) Provide comprehensive links between system-level design and IC implementation;
2) Enable IC verification within a system-level context to leverage existing wireless libraries, models and testbenches;
3) Allow full-chip mixed-level simulation at different abstraction levels (language-neutral);
4) Allow for detailed analysis at the block and chip levels at an optimized simulation time;
5) Manage and facilitate simulation with full parasitics;
6) Contain layout automation that can be used at appropriate points in the design; and
7) Allow for several levels of passive modeling throughout the design process.
These requirements must be met through a single environment that not only facilitates the job of the RFIC designer natively, but also integrates with other domains such as AMS and digital.
This must include both a chip- and a block-level perspective at multiple abstraction levels, where the same design collateral can be passed back and forth, thus facilitating verification and implementation from the environment's point of view, independent of physical integration strategies.
The first place to start describing an RFIC flow is from a more global methodology perspective and context.
The Advanced Custom Design (ACD) methodology in Figure 1, below, describes a process geared toward mixed-signal design. It parallelizes design tasks, maintains a top-level perspective, performs parasitic and analysis functions early and often, and ultimately enables the design to progress with as much information as is available at any given point in time.
|Figure 1. The ACD methodology combines top-down speed with bottom-up accuracy.|
Predictability is the driving force behind the ACD methodology. The need for predictability is driven by two primary concerns: schedule, which must be met from the beginning of the design process and necessitates a fast path to tapeout; and performance requirements, which must be met to achieve first-pass success and require a silicon-accurate methodology.
To meet schedule requirements, RF designers need a fast design process that supports thorough simulation and physical design. The top-down design process, when applied to both simulation and physical design, delivers that speed. The design process comprises many tasks; many of today's chips contain multiple blocks from multiple design domains.
Thus, it is imperative to design in as many of these blocks and perform as many tasks as possible in parallel, leveraging as much of the top-level IP as possible throughout the process. This leads to the concept of design evolution, where all of a design's IP is leveraged as it matures through the design process.
Using this concept, multiple abstraction levels, from high-level design through detailed transistor-level design, are combined to support a mixed-level approach that applies detailed design only at the points needed for a given test. This also enables designers to leverage top-level information for block design, and subsequently to re-verify the blocks in the top-level context.
To achieve the required design performance, RF designers need a silicon-accurate design process. Silicon accuracy relies on base design data, such as device models that support accurate simulation and technology files that support physical verification and analysis.
Test chips, which are often composed of critical structures known from past designs to be highly sensitive, are also used in this process to verify the feasibility of a process and the accuracy of its corresponding process design kit (PDK). Often, a design group will need to add components to the PDK to support a particular design style.
Device models may need to be expanded to combine or add corners, or to facilitate statistical modeling or other approaches the design team requires.
This silicon-accurate data is driven through the design process via detailed transistor-level analysis, including layout extraction. Calibrating these lower-level silicon-accurate results to higher levels of abstraction ensures that designs will meet performance requirements. This constitutes the bottom-up portion of the ACD methodology.
In practice, the top-down and bottom-up processes work in parallel, producing a “meet-in-the-middle” approach. This approach balances the need for a fast design process with silicon accuracy, which ultimately produces a predictable schedule and leads to first-pass silicon success.
The ACD methodology can be applied to a complex integration or to a particular domain area. The methodology for each domain applies the meet-in-the-middle approach, combining top-down speed with bottom-up silicon accuracy.
Figure 2, below, depicts the wireless RFIC flow. The flow targets the RFIC designer and spans system design down to IC implementation, following the meet-in-the-middle approach described earlier.
|Figure 2. The wireless RFIC flow spans system design down to IC implementation, using a meet-in-the-middle approach.|
The design collateral from the system design process is used as the first and highest abstraction level. System-level descriptions become an executable testbench for the top-level chip. Models of the surrounding system can be combined with a high-level model of the chip, producing an executable specification.
System requirements serve as the first specification to drive the chip-level requirements, and ultimately turn into repeatable testbenches and regression simulations. Part of the leveraged system-level content is also the IP that determines the system-relevant figures of merit, such as error vector magnitude (EVM), BER and PER.
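As a reminder of how one such figure of merit is defined, RMS EVM compares received symbols against the ideal constellation. A minimal NumPy sketch (synthetic QPSK data with an illustrative noise level, not from any real testbench) is:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Ideal QPSK symbols, normalized to unit average power
ideal = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)
# "Received" symbols: ideal points plus additive impairment noise
received = ideal + rng.normal(0.0, 0.05, n) + 1j * rng.normal(0.0, 0.05, n)

# RMS EVM: RMS error-vector magnitude relative to the RMS ideal magnitude
evm = np.sqrt(np.mean(np.abs(received - ideal) ** 2) /
              np.mean(np.abs(ideal) ** 2))
print(f"EVM = {100 * evm:.1f}%")
```

In the flow described here, the same metric would be computed by the system-level testbench IP rather than by hand, but the definition is the same.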
Mixed-level simulation allows a natural sharing of information between the system and block designers. To enable the required links from the system environment to the IC environment, it is essential that the underlying multi-mode simulation solution be language-neutral (from system models in C/C++, SystemC and SystemVerilog, to digital/mixed-signal/analog behavioral HDLs, to Spice) and provide different engines and algorithms dedicated to the specific needs of multidomain circuit design.
Successful execution on a complex design is contingent on thorough upfront planning. No design comes together smoothly by accident. With a strong initial plan that specifies which top-level requirements, block-level requirements and mixed-level strategies to use, a meet-in-the-middle approach can drive each block design to ensure full coverage of important design specifications and smoothly allow blocks to have different schedule constraints.
Therefore, the development of a comprehensive simulation strategy, which in turn leads to a modeling plan, is key. After realizing a high-level executable specification, the process continues by identifying particular areas of concern in the design. Plans are then developed for how each area of concern will be verified.
The plans specify how the tests are performed and which blocks are at the transistor level during each test. It is important to resist the temptation to specify and write models that are more complicated than necessary. Start with simple models and only model additional effects as needed.
A formal planning process generally results in more efficient and more comprehensive verification, meaning that more flaws are caught early and there are consequently fewer design iterations. The simulation and test plans are applied initially to the high-level description of the system, where they can be debugged quickly. Once available, they can be applied during the mixed-level simulations of the blocks, reducing the chance that errors will be found late in the design cycle.
The top-down process starts with HDL modeling of the entire RFIC, added to the system-level testbench. This includes all RF blocks along with any analog and/or digital content.
The first step is to behaviorally model the full chip within a top-level testbench, which verifies system tests such as EVM or BER. This first confirms the partitioning, block functionality and ideal performance characteristics of the IC. The behavioral setup then serves as the basis for mixed-level simulations, where blocks can be inserted at the transistor level and verified in a top-level context.
This full-chip and system setup can serve as the regression template for continuous verification as blocks mature, supporting a continuous-evolution approach through the entire design. This is important: any problems are detected at the earliest moment, while time still exists to fix them, and blocks can be designed in parallel to individual schedules.
Looking through the full simulation environment, several views of the same circuit will exist. These will likely consist of a behavioral view, a pre-layout transistor-level view and several views of parasitic information. As blocks mature, it may be necessary to add more transistor-level information to test RF/analog and RF/digital interfaces. This will require a mixed-signal simulator capable of handling analog, digital and RF descriptions, and of mixing behavioral-level with transistor-level abstractions.
Pick the appropriate views of each block or sub-block and manage the runtime-vs.-accuracy trade-offs through simulation options, such as sending the transistors to a FastSpice simulator or keeping them in full Spice mode. This configuration is highly dependent on the circuit and the sensitivity of the interfaces. The ability to manage these configurations effectively is vital, as they are required to be repeatable. This provides an effective mechanism for setting up the continuous regressions that support the ACD methodology.
A preliminary circuit design then takes place, allowing for early circuit exploration and a first-cut look at performance specifications. This early exploration leads to a top-level floorplan, which for RFICs is sensitive to noise concerns and block-level interconnect.
At this stage, it is possible to synthesize passive components such as spiral inductors to spec, and to do an initial placement of them on the chip. This enables two key activities: creating early models for spiral inductors that can be used in simulation before the block-level layouts are complete, and allowing for an initial analysis of mutual inductance between the spirals. Component models of each inductor can be generated within this context for use in these simulations.
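For a rough sense of how a spiral might be sized to spec before any layout exists, a first-order closed-form estimate such as the modified Wheeler formula can be used. The sketch below uses the published square-spiral coefficients; the geometry values are purely illustrative, and a real flow would rely on the synthesis tool and EM-verified models rather than this hand formula:

```python
import math

MU0 = 4 * math.pi * 1e-7          # vacuum permeability (H/m)

def square_spiral_l(n_turns, d_out, d_in):
    """Modified Wheeler estimate for a square spiral inductor (henries)."""
    d_avg = (d_out + d_in) / 2
    rho = (d_out - d_in) / (d_out + d_in)   # fill ratio
    k1, k2 = 2.34, 2.75                      # square-spiral coefficients
    return k1 * MU0 * n_turns ** 2 * d_avg / (1 + k2 * rho)

# Illustrative geometry: 4 turns, 200 um outer / 100 um inner diameter
l_est = square_spiral_l(4, 200e-6, 100e-6)
print(f"estimated L = {l_est * 1e9:.1f} nH")
```

Estimates like this are only a starting point for the early component models mentioned above; EM simulation later replaces them with accurate ones.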
Simulation is performed using the designer-preferred method, in either the frequency or the time domain. This depends on the circuit, the type of simulation, or the amount to be simulated, and is a judgment call by the designer.
A single PDK and associated environment allows for a smooth selection of the desired simulation algorithm. Results are displayed through an appropriate display for the simulation type selected. As circuits are completed at the block level, they are verified within the top-level context with behavioral stimulus and descriptions for the surrounding chip.
Layout automation (automated routing, connectivity-driven layout, design-rule-driven layout, placement) can be used judiciously.
The advantage of layout automation is that it is tied to the schematic and DRC rules, and it allows for productivity gains. Analog-capable routers can help with differential pairs and shielding wires, and allow for manual constraints per line.
This allows for a physical design process that is repeatable, just like the front-end process. It may take some time and overhead to set up the initial tools, but this is made up as iterations are made through the design process. Engineering change orders (ECOs) are performed more effectively if a repeatable layout process is in place.
This is weighed against highly sensitive circuitry, which demands a manual approach. Once layout is complete, electromagnetic (EM) simulation can be used to provide highly accurate models for passive components.
For example, several spiral inductors may be selected as highly critical and targeted for EM simulation; the resulting models can be swapped in for the models created early in the design process, and mixed and matched with existing models. The designer thus has full control over the spiral modeling process, again with the ability to trade off runtime vs. accuracy at their discretion.
Net-based parasitic extraction becomes a key element of the process as layouts emerge. RF design is highly sensitive to parasitic effects. As such, the ability to manage different levels of parasitic information becomes paramount: the designer can specify which areas, lines and blocks have progressively more or less parasitic information associated with them.
Less sensitive interconnects may require RC extraction only, whereas more sensitive lines may require RLC. Lines with spirals attached can be extracted fully with RLC plus the associated inductor component, with substrate effects added for the most sensitive lines.
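A quick back-of-the-envelope check shows why inductance cannot be ignored on the sensitive lines. The numbers below are illustrative rules of thumb, not extracted values from any particular process:

```python
import math

# Illustrative parasitics for roughly 1 mm of on-chip interconnect
r_line = 10.0     # series resistance (ohms)
l_line = 1e-9     # series inductance (~1 nH/mm rule of thumb)
f = 5e9           # operating frequency (5 GHz)

x_l = 2 * math.pi * f * l_line    # inductive reactance at f
print(f"R = {r_line:.0f} ohm, X_L = {x_l:.1f} ohm")
# At RF, the reactance of even 1 nH is several times the resistance,
# so an RC-only extraction would miss most of this line's impedance.
```

This is the kind of judgment that determines which nets get RC, RLC, or full extraction with substrate effects.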
Again, the lines that receive a “full” extraction can be mixed and matched with the component models for passive components created earlier. In addition, as the top-level layout emerges, analysis, especially substrate-noise analysis, is used to ensure that noisy circuits (such as digital logic and perhaps PLLs) are not affecting the highly sensitive RF circuits.
Designers can check for this, and as they flag areas of concern, they can either modify the floorplan accordingly or add guard bands around the noisy circuitry. However, it is often impractical both to simulate the entire design at the transistor level and to include all the parasitic information.
One approach is to extract calibrated behavioral models using the extracted view of the design blocks. But this will not capture the effects of the parasitics on the interconnect between blocks. Thus, hierarchical extraction capabilities, which extract only the parasitics of the interconnect between design blocks, need to be supported.
Calibrated HDL models
Finally, as blocks are completed, the initial behavioral models can be back-annotated with key circuit performance parameters, providing more accurate HDL-level simulation.
While this will not account for every effect, it can add more realistic performance information at little runtime cost, allowing for faster top-level verification and perhaps reducing the amount of full transistor-level verification. In this way, the verification of a block by mixed-level simulation becomes a three-step process.
First, the proposed block functionality is verified by including an idealized model of the block in system-level simulations. Next, the functionality of the block as implemented is verified by replacing the idealized model with the netlist of the block. This also allows the effect of the block's imperfections on system performance to be observed. Finally, the netlist of the block is replaced by an extracted model.
By comparing the results of simulations that involve the netlist and the extracted model, the functionality and accuracy of the extracted model can be verified. From then on, mixed-level simulations of other blocks are made more representative by using the extracted model of the block just verified rather than the idealized model.
When performed properly, bottom-up verification achieves detailed verification of very large systems. Behavioral simulation runs quickly because the details of the implementation are discarded while the details of the behavior are retained. Because the implementation details are discarded, the detailed behavioral models generated in a bottom-up verification process remain useful as blocks mature, and for third-party IP evaluation and reuse.
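The progression from an idealized model to a calibrated one can be sketched with a toy behavioral amplifier. The class, parameter names and values below are hypothetical illustrations, not part of any actual flow; the calibrated parameters stand in for numbers back-annotated from transistor-level characterization:

```python
import numpy as np

class AmpModel:
    """Complex-baseband amplifier: linear gain plus cubic compression.
    gain and a3 stand in for back-annotated parameters, assumed to come
    from transistor-level characterization (values are illustrative)."""

    def __init__(self, gain, a3=0.0):
        self.gain, self.a3 = gain, a3

    def __call__(self, x):
        x = np.asarray(x, dtype=complex)
        return self.gain * x - self.a3 * np.abs(x) ** 2 * x

ideal = AmpModel(gain=10.0)                 # step 1: idealized top-down model
calibrated = AmpModel(gain=9.5, a3=0.2)     # extracted-model stand-in

x = 0.5 * np.exp(1j * 0.3)                  # a single test input sample
print(abs(ideal(x)), round(float(abs(calibrated(x))), 3))
```

Running the same system testbench against both models exposes how much the block's imperfections degrade system-level figures of merit.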
Especially for wireless systems that include RF front ends, bottom-up verification is mandatory when verifying the performance of large systems. As mentioned earlier, RF system simulations at the transistor level (running thousands of cycles of the modulated signal) are often impractical.
The use of advanced envelope analysis techniques instead of traditional transient simulation would only provide a speed-up by a factor of 10-20x. And even bottom-up extraction using traditional passband models, where the RF carrier is still present, won't provide the required speed-up. Only the combination of bottom-up model extraction techniques with so-called complex baseband or low-pass equivalent models (where the carrier signal is suppressed) will lead to simulation times that enable PER analysis at the full-chip level.
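The arithmetic behind that claim is straightforward: a passband simulation must resolve every carrier cycle, while a low-pass equivalent simulation only has to resolve the modulation envelope. With illustrative numbers for a 2.4 GHz radio (the per-cycle sampling factors are assumptions, not tool settings):

```python
fc = 2.4e9      # RF carrier frequency (2.4 GHz, illustrative)
bw = 20e6       # modulation bandwidth
t_sim = 1e-3    # 1 ms of modulated signal to simulate

# Passband: resolve the carrier itself (assume ~10 samples per carrier cycle)
n_passband = int(10 * fc * t_sim)
# Complex baseband: carrier suppressed; sample only the modulation envelope
# (assume ~4 samples per bandwidth interval)
n_baseband = int(4 * bw * t_sim)

print(f"passband:  {n_passband:,} samples")
print(f"baseband:  {n_baseband:,} samples")
print(f"reduction: {n_passband // n_baseband}x fewer time points")
```

The ratio scales with fc/bw, which is why carrier suppression, rather than a faster transient engine, is what makes full-chip PER analysis tractable.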
Generating behavioral models that include the detailed behavior of even simple blocks can be difficult, and it requires a specialized skill not commonly found on the design team. Thus, the team needs automated tools and methodologies that generate detailed behavioral models with verified accuracy, along with an open API to modify existing templates according to specific application and/or technology needs.
Kurt Johnson is Group Director, Cadence Verticals, Cadence Design Systems Inc.