
Rethinking System-on-chip design at 65 nanometers and below

In recent weeks, executives from major IC houses and EDA companies have been talking about the design challenges facing hardware and software developers as system-on-chip designs proceed from 65, to 45, to 32 nanometers.

For example, at the recent Future Electronics Horizons Forum in Budapest, Hungary, both Robert Ober, an executive in AMD's office of strategy and technology, and Wally Rhines, chairman and CEO of Mentor Graphics, talked about the need to work at higher levels of abstraction to create design files that generate both the hardware and the software needed.

Fortunately, they are not alone in recognizing the need for such tools, and a number of system-level design startups, with varying degrees of success, have been moving in this direction. The good news is that we are already, as an industry, more than halfway there, yielding significant improvements in the form of compact architectures, better algorithm implementation, multiprocessing speed, power tradeoffs, memory access, and integration of hardware and software design flows.

While getting to the ideal design environments that Ober and Rhines talk about is not easy, it is achievable.

What it takes to get there
But getting there requires dealing adequately with the information needed, cross-pollinating communication between teams, and a strong development methodology, depending on the nature of the anticipated system, subsystem, or element of a subsystem.

The structure, composition, scale, and focal point of a new or incremental system design reflect the talents and gifts of the designer, working in either a top-down or bottom-up design style. Is a centralized or distributed approach to processing the best way to achieve the best price/power/performance? Is a symmetrical or asymmetrical topology warranted?

How to answer such questions? One starts with a conceptual block diagram, refines the design specification based on integrated simulation results, and makes it available to everyone in the product hierarchy as early as possible.

There are many views of a "design-level" specification, and it can contain one or more conceptual block diagrams. The differences between block diagrams might reflect the level of abstraction and detail represented or, conversely, the type of system. A statistical model of a motherboard might include an abstract model of the processor but a detailed model of the memory hierarchy. After creating a conceptual block diagram, what methodologies are available to evaluate system performance in terms of throughput, power, latency, and resource utilization, as related to cost?

A design-level specification captures a new or incremental approach to improving system throughput, power, latency, utilization, or cost, typically referred to as price-power-performance tradeoffs. At each step in the evolution of a design specification, well-intentioned modifications, or improvements, may occur. What happens to the system design process if a well-intentioned specification change impacts the original conceptual block diagrams such that the design margin for system throughput drops from 20% to 5%? The time required to evaluate a design modification before, or after, the system design process has started can vary dramatically, and a design-level specification will reduce the redesign time.
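To make the point concrete, here is a minimal sketch in Python of how a specification that carries numeric budgets lets a proposed change be re-checked in seconds. The throughput_margin helper and all of the numbers are invented for illustration, not taken from any real project.

def throughput_margin(required_mbps: float, achieved_mbps: float) -> float:
    """Return design margin as a fraction of the requirement."""
    return (achieved_mbps - required_mbps) / required_mbps

# Original specification: 1000 Mbps required, 1200 Mbps predicted by the model.
print(f"before change: {throughput_margin(1000.0, 1200.0):.0%}")   # 20%

# A "small" feature addition raises the requirement and adds bus contention.
print(f"after change:  {throughput_margin(1150.0, 1207.5):.0%}")   # 5%

When the budgets live only in prose, the same comparison requires a meeting and a re-read of the document; when they live in an executable specification, it is a parameter change and a rerun.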

Figure 1: Top-down design provides greater refinement of the design choices at smaller geometries.

Early on, spreadsheets were popular for estimating average throughput, power, latency, utilization, and cost at the system level, once napkin-based designs hit the limits of scalability. As designs became more parallel-processing oriented, spreadsheets ran into trouble handling non-deterministic traffic and concurrent processing, estimating peak system performance, and exposing mistakes buried in the spreadsheet that were not readily apparent. To resolve digital system modeling issues, C/C++ provided internal synchronization in the form of software-generated clocks or events, common resource objects, user-defined classes, and so on.
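To see why simple average-rate arithmetic runs out of steam, consider a back-of-the-envelope calculation of the kind a spreadsheet cell would hold, written here in Python with purely hypothetical figures:

avg_request_rate_hz = 2.0e6      # requests per second reaching a shared bus
avg_service_time_s  = 300e-9     # average time to service one request
utilization         = avg_request_rate_hz * avg_service_time_s

print(f"average bus utilization: {utilization:.0%}")   # 60%

# A 60% average says nothing about the peak: a burst of back-to-back requests
# can still overflow buffers, and a static spreadsheet cannot show that.

Capturing bursts and contention requires simulating events over time, which is what pushed modelers toward hand-written C/C++ simulation code.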

The problems encountered with C/C++ were related to the envisioned "modest" programming effort. Software bugs were more difficult to find in the increasingly complex code that resolved some of the Excel spreadsheet issues. Nonetheless, better performance-modeling results were obtained, at the cost of substantial programming effort. It was difficult to exchange models from one company, or one group, to another. These golden reference models lacked a common frame of reference, sometimes referred to as interoperability. In the early 1990s, the combination of low-cost workstations and modeling tools offering a common frame of reference started to appear in the marketplace.

Several system-level tools, such as BONeS Designer (Block-Oriented Network System Designer), OPNET Modeler, SES Workbench, CACI COMNeT, and Virtual Component Co-Design (VCC), appeared, providing the notion of time-ordered, concurrent system processes; embedded software; algorithms; and data types.

Many of these tools were graphically oriented, which reduced the need for extensive C/C++ coding by replacing hand-written modeling functionality with graphical representations of common functions. If specific functionality was required, the user could create a custom-coded element or use an existing library element, depending on the libraries supported by the tool. The aforementioned tools focused on improving modeling capabilities in terms of performance modeling, ease of use, model creation time, and post-processing of modeling results. One issue with these early system-level modeling tools was that they were suited to specific classes of systems, added their own syntax to graphical modeling, and sometimes lacked sufficient libraries to solve many modeling problems.

System Level Modeling
As SoC designs shift from 65, to 45, to 32 nm, the system-level modeling space becomes much more complex, owing to a 30% to 50% increase in devices. It consists of both methodology-specific and application-specific modeling domains that overlap to some extent.

Table 1: Application-specific modeling domains

Methodology-specific domains consist of discrete-event, cycle-based, synchronous data flow, and continuous time. These models of computation provide a modeling methodology for general classes of modeling problems. The discrete-event model of computation is used for digital portions of a design that may have a strong control-flow component.

A discrete-event model of computation is very efficient at higher levels of abstraction, as the current simulation time advances with both deterministic synchronous and asynchronous events. Discrete-event models provide the user with both time-ordered (asynchronous) and concurrent (synchronous) event modeling capabilities.
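For readers less familiar with the terminology, the sketch below shows the essence of a discrete-event engine in a few lines of Python. It is an illustration only, not the internals of any commercial simulator; the event labels and delays are invented for the example.

import heapq

def run(events, horizon):
    """events: list of (time, label) tuples; process them in time order."""
    heapq.heapify(events)
    while events:
        t, label = heapq.heappop(events)
        if t > horizon:
            break
        print(f"t={t:8.1f} ns  {label}")
        # A real handler would go here and could schedule follow-on events:
        if label == "dma_request":
            heapq.heappush(events, (t + 120.0, "dma_complete"))

run([(0.0, "reset_deasserted"), (500.0, "dma_request"), (10_000.0, "timer_irq")],
    horizon=20_000.0)

Because time jumps from event to event, the long idle stretch between the DMA completion and the timer interrupt costs nothing to simulate, which is exactly why discrete-event engines are efficient at high abstraction levels.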

A cycle-based model of computation is similar to a discrete-event model of computation, with the proviso that the model is clock-driven, executing the simulation engine on each clock cycle. Cycle-based simulators give the user more modeling fidelity, meaning that they are usually used for more detailed modeling of digital systems, including verification of the final design.
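By contrast, a cycle-based engine evaluates the design once per clock. The Python fragment below is a deliberately tiny illustration of that loop, using an invented three-stage pipeline; it does not represent any particular tool.

def simulate(cycles: int) -> int:
    pipeline = [0, 0, 0]          # a 3-stage pipeline, one register per stage
    produced = 0
    for cycle in range(cycles):
        # Every stage is evaluated on every clock cycle, whether or not its
        # inputs changed - slower than discrete-event, but cycle-accurate.
        produced += pipeline[2]
        pipeline[2] = pipeline[1]
        pipeline[1] = pipeline[0]
        pipeline[0] = 1           # a new item enters the pipe every cycle
    return produced

print(simulate(10))               # 7 items drained after a 3-cycle fill latency

The per-cycle evaluation is what gives cycle-based simulation its fidelity, and also what makes it the wrong engine for early, abstract exploration.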

A synchronous data flow model of computation is more DSP-algorithm oriented, meaning the mathematical processing of baseband digital signals, whether vectors, arrays, or complex data types. The internal processing of synchronous data flow models can be simpler than a discrete-event engine: a block fires when the required tokens are present at its inputs and generates new tokens for the subsequent modeling blocks.
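The sketch below illustrates that token-firing rule in Python. The fir4 block and its 4-in/1-out rates are made up for the example; the point is only that a block fires when its input tokens are present, with no global event queue required.

from collections import deque

samples = deque(range(8))         # source tokens (e.g. baseband samples)
fir_out = deque()

def fir4(inputs: deque, outputs: deque):
    """Fires when 4 tokens are available; consumes 4, produces 1."""
    if len(inputs) >= 4:
        window = [inputs.popleft() for _ in range(4)]
        outputs.append(sum(window) / 4.0)   # stand-in for a real FIR tap sum
        return True
    return False

while fir4(samples, fir_out):
    pass
print(list(fir_out))              # [1.5, 5.5]

Because the consumption and production rates are fixed, an SDF scheduler can work out the entire firing order before the simulation ever runs, which is why this model of computation suits regular DSP data paths so well.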

New Thinking needed in shift to 65nm and below
System-level modeling is evolving to solve the original problem cited: how to determine quickly and efficiently the impact of a design specification change on the performance of a proposed system. Is the throughput margin now 15% or 5%, and is the worst-case power still held at 1.2 Watts?

The design specification itself is typically a Word or FrameMaker document with text, block diagrams, and tables. Simulation models contain more detail but are difficult to share with executive staff, marketing, manufacturing, or field support, simply because non-modelers lack the expertise to operate commercial tools. If the system-level, or golden, model could be exchanged among design groups located around the globe as the design specification, then a proposed change might be evaluated by the marketing organization directly.

A number of advanced tools are beginning to emerge that address these issues. At Mirabilis Design Inc., for example, we have developed a tool and methodology, based on the University of California's Ptolemy II, to embed a system-level model into a design specification as a Java applet. Any Internet browser can be used to view and simulate the embedded system-level model within an HTML document. In other words, the design specification can now contain an "executable" system-level model that other people within the organization can run with a web browser.

No additional software or license is required. The executable model is identical to the tool-level model, containing all the parameters and block attributes of the original model. Users can modify parameters to create different operating scenarios and evaluate the functional impact on the specification. To maintain a rigorous qualification process, changes to model connectivity and block operation must still be performed by the original modeling team.

Figure 2: Vision of a platform definition for system specification

Mathematica from Wolfram Research enables researchers to create interactive calculations on the Internet and lets users compute and visualize results directly from a web browser. Mechanical CAD and product lifecycle management solutions have been providing VRML (Virtual Reality Modeling Language) based approaches to render simulation models in custom Web-like viewers. National Semiconductor provides WebBench, a set of online tools for creating an optimized prototype from customer specifications and National's semiconductors.

As we move toward the goals outlined by Ober and Rhines, a number of technologies need to evolve. As Rhines points out, merging the specification process with implementation is an enabling technology. Current behavioral synthesis is focused on DSP algorithm implementation using a narrow coding practice. For behavioral synthesis to be truly valuable, the output must merge data path and control path. The implementation paths for hardware and software are still disjoint; there must be better integration to ensure that the system-level optimization is not lost during the synthesis process.

System specification occurs during the first 30% of the development cycle. Depending on the project schedule and the complexity of the new capabilities, this can vary from three months to several years. During this period, a number of architectures, sometimes hundreds, must be explored. System specification exploration tools must therefore be measured against new metrics that determine their adequacy: modeling time, ease of construction, breadth and depth of high-level modeling libraries, and an open API to integrate legacy knowledge.

In a recent article in EETimes, Ralph von Vignau, director of infrastructure and standards at Philips Semiconductor, suggested that a high-level library of functions is essential to accelerate system modeling. How do you select the high-level libraries? What are the important differentiating features of the high-level modeling libraries used in this new 45 nm environment: sheer numbers, degree of specificity, their quality, or their integration? In what order of importance? All of the above?

Prior generations of graphical modeling tools might advertise 3,000-plus libraries as a sign of modeling-tool maturity and robustness. However, if a new ultra-wideband design came along, those 3,000 library elements would have limited reuse for UWB, since many are prior-generation application-specific libraries or bottom-up component libraries.

SoC libraries: quality rather than quantity
The design methodology used at Mirabilis focuses on the quality and integration of the system-level libraries, such that they have a high likelihood of reuse in a new technology or another system-level model. This approach allows the integration of as many as thirty bottom-up component functions into a single, system-level, easy-to-use, reusable module. Four queue blocks replace 24 prior-generation queue blocks through polymorphic port support and block-level menu attributes, while improving simulation performance.
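The sketch below, written in plain Python rather than in any Mirabilis library format, illustrates the general idea of one parameterized, polymorphic queue block standing in for many single-purpose blocks; the class name, parameters, and token types are invented for the illustration.

from collections import deque
from typing import Any

class QueueBlock:
    def __init__(self, capacity: int = 16, policy: str = "fifo"):
        self.capacity = capacity
        self.policy = policy          # "fifo" or "lifo", chosen by a menu attribute
        self._items = deque()

    def push(self, token: Any) -> bool:
        """Accept any token type (polymorphic port); reject when full."""
        if len(self._items) >= self.capacity:
            return False
        self._items.append(token)
        return True

    def pop(self) -> Any:
        return self._items.popleft() if self.policy == "fifo" else self._items.pop()

q = QueueBlock(capacity=4, policy="lifo")
for token in ("hdr", 42, {"burst": 8}):   # mixed token types on one port
    q.push(token)
print(q.pop())                             # {'burst': 8}

One configurable block that accepts any token type replaces a family of type- and policy-specific blocks, which is the sense in which four queue blocks can do the work of 24.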

The industry is rapidly addressing the gaps in merging the "concept to system specification" design space. The challenges that still need to be tackled are making "concept to system specification" a standard curriculum at universities, increasing industry awareness, and, most importantly, getting early adopters to actually do it.

One possibility is to form an IEEE forum for this emerging top-down design methodology, including the separation and mapping of behavior and architecture, and hierarchical design elements. A more tactical approach has been adopted by Mirabilis Design Inc.: partner with universities and enhance the system design experience in the classroom.

Deepak Shankar is chief executive officer at Mirabilis Design and has over 15 years of experience in the development, sales, and marketing of system-level design tools. Prior to Mirabilis Design, Mr. Shankar was VP of Business Development at both MemCall, a fabless semiconductor company, and SpinCircuit, a supply chain joint venture of HP, Cadence, and Flextronics. Prior to that, he spent many years in product marketing at Cadence Design Systems. Mr. Shankar has an MBA from UC Berkeley, an MS from Clemson University, and a BS from Coimbatore Institute of Technology.
