Using unified modeling methods to reduce embedded hardware/software development costs

Steve Brown

May 11, 2010


Today’s Internet devices are powered by sophisticated electronic circuits driven by multiple layers of software. These circuits are so complex they are called Systems on Chip (SoC) because they contain all the sub-components of a powerful personal computer. SoC development costs continue to grow rapidly, driven by increasing demand for more functionality, device mobility, and improved usability.

These new capabilities demand more sophisticated software executing on multicore hardware and other special-purpose accelerators to meet power and performance requirements. SoC development team productivity cannot keep up with the growth in complexity. The costs of developing the most complex SoCs now threaten to exceed $100M, requiring sales of tens of millions of units to return a profit on that investment (Figure 1, below).

Figure 1 – Product complexity is driving realization costs. (Source: IBS)

Software is widely acknowledged as the driver in system development. Systems companies, and some semiconductor companies, believe they are differentiated by their software. Many report that their software engineering staff outnumbers their hardware engineering staff, in some cases by as much as 10-to-1.

The industry is adjusting by adopting standardized software platforms such as Android, or by tightly integrating the supplier ecosystem, with approaches as dissimilar as Apple's and Intel's. It is often the dependencies between software and hardware that create lengthy critical paths in the development schedule, introducing cost and the risk of missing project deadlines.

Development and integration of the lower layers of “hardware-dependent” software is usually on the critical path in systems projects and, if improved, offers the greatest potential to reduce project costs.

The problem is that in most cases software development does not begin until detailed hardware models are available and verified, models that may not even fully meet system requirements. Hardware-software integration occurs at the end of the project, when changes are expensive and lengthy to implement.

Often the fixes are implemented in software, leaving the hardware sub-optimal. Trying to manage the combination of deferred unknowns and risks drives up resource and schedule costs and jeopardizes quality at a level and scope that affects the entire business.

Functional verification of hardware, software, and their interactions is the other large and growing critical-path component of a systems project. Complexity grows as more functions and software applications are integrated on the SoC, driving functional verification costs up exponentially.

In addition, the design process uses a lower-level RTL model to capture the design, which makes change difficult to implement and slow to verify. (RTL or Register Transfer Level models describe the registers and computations of a digital circuit.) Many difficult bugs are discovered near the end of the schedule, requiring costly iterations to fix and verify the design.

Industry support for the use of transaction-level modeling of hardware is growing as a way to parallelize hardware and software development and to speed the path from system design to silicon. (TLM or Transaction-Level Modeling describes function of the design without restricting the architecture choices.)

This is enabled by using the same TLM model for both hardware and software development (Figure 2 below), eliminating the need to create two models. Today’s SoCs are often 90 percent similar to a previous generation, enabled by interoperable RTL design methodology.
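To make this concrete, below is a minimal sketch of what such a shared model can look like, written against the standard SystemC TLM-2.0 interfaces. The peripheral, its register map, and the 10 ns access time are purely illustrative assumptions, not taken from any particular product; a production model would add error handling, timing detail, and a direct-memory interface.

#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

// Transaction-level model of a hypothetical memory-mapped timer peripheral.
struct SimpleTimer : sc_core::sc_module {
    tlm_utils::simple_target_socket<SimpleTimer> socket;
    uint32_t regs[4] = {0, 0, 0, 0};   // e.g. CTRL, LOAD, VALUE, STATUS

    SC_CTOR(SimpleTimer) : socket("socket") {
        // Every read and write arrives as a blocking transport call.
        socket.register_b_transport(this, &SimpleTimer::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        unsigned index = (trans.get_address() >> 2) & 0x3;
        if (trans.is_write())
            regs[index] = *reinterpret_cast<uint32_t*>(trans.get_data_ptr());
        else
            *reinterpret_cast<uint32_t*>(trans.get_data_ptr()) = regs[index];
        delay += sc_core::sc_time(10, sc_core::SC_NS);   // approximate access time
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

The same b_transport entry point can serve an instruction-set simulator running the software stack today and a functional verification environment exercising the hardware later, which is what eliminates the need to create and maintain two models.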

Figure 2 – Reduce project cost with shorter schedules. (Source: Cadence)

Despite the interest in TLM, advances in methodology are needed that enable the industry to exchange TLM intellectual property (IP). By adopting new approaches to SoC IP development that take into account the new reality of tighter interdependence between hardware and software, semiconductor companies can significantly reduce these costs and risks.

Enabling Early Software Development
Software can only be developed in the environment where it executes – the compute system itself, some early mock-up of the compute system, or an executable model of that compute system.

Executable virtual prototypes of hardware can be created that provide opportunities to develop software, evaluate architectural performance and hardware-software implementation tradeoffs, and begin functional verification of the system.

There is an important need to balance the requirements on the virtual platform: higher abstraction models provide faster execution for software development and can be created with less effort, but they do not have the accuracy needed for detailed hardware-software verification and system analysis, and they cannot be used as the source for automated silicon implementation.

More accurate modeling enables accurate software design and verification, as well as reuse of the models for developing silicon, but at the expense of execution performance.
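Where that balance falls can be seen in how a processor model handles timing. The sketch below assumes the SystemC TLM-2.0 loosely-timed coding style and its quantum-keeper utility; the module name, address, and one-microsecond quantum are hypothetical. Delays are accumulated locally and only synchronized with the simulator when the quantum expires, which keeps software execution fast; a more accurate model would instead synchronize on every access, gaining fidelity at the cost of simulation speed.

#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/tlm_quantumkeeper.h>

// Loosely-timed bus master, e.g. the core of a simple processor model.
struct CpuModel : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<CpuModel> socket;
    tlm_utils::tlm_quantumkeeper qk;    // lets the model run ahead of simulated time

    SC_CTOR(CpuModel) : socket("socket") {
        SC_THREAD(run);
        qk.set_global_quantum(sc_core::sc_time(1, sc_core::SC_US));
        qk.reset();
    }

    void read32(uint64_t addr, uint32_t& data) {
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = qk.get_local_time();
        trans.set_command(tlm::TLM_READ_COMMAND);
        trans.set_address(addr);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        socket->b_transport(trans, delay);

        // Loosely timed: accumulate the delay locally and synchronize only when
        // the quantum expires (fast, sufficient for software development).
        // A more accurate model would call wait(delay) on every access instead.
        qk.set(delay);
        if (qk.need_sync()) qk.sync();
    }

    void run() {
        uint32_t value = 0;
        read32(0x40000000, value);   // hypothetical peripheral address
    }
};

In a virtual prototype this initiator socket would be bound to target models such as the timer sketched earlier, typically through an interconnect model that routes transactions by address.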

With a more abstract virtual prototype, much larger software application systems can be developed. The software development process is complex, with many contributors, and requires revision control, release management, profiling, debugging, and other standard software management processes.

Companies are rapidly staffing software teams to address these complexities, and management's attention can become dominated by the issues that arise. Suddenly, the entire team is focused on software alone.

Most software development has a direct impact on hardware development. Since software is a customer-facing aspect of the system, it strongly influences the requirements for the hardware. Refining the customer requirements into hardware requirements is a key critical path that more-accurate virtual prototypes address directly.

Using a TLM approach for virtual prototypes not only provides the early software development environment, but also connects to the design flow so that changes can be propagated to silicon efficiently.

Virtual prototypes require a model of the processor, and usually there are other IP blocks that are combined with the newly developed hardware models. These models have traditionally been built with proprietary languages and simulation engines to achieve the performance needed for software development.

The IP industry wants to provide its virtual platforms, which include those processor and peripheral models, to customers without the complications of these proprietary restrictions.

A New Approach to Hardware IP Creation
The advantage of a TLM virtual prototype is its connection to the hardware design flow, and that flow is itself more productive. Compared with today's RTL design flow, fewer bugs are introduced, a reduction that correlates with the smaller volume of source code.

Creating RTL is much faster using the automation of a high-level synthesis tool that translates the TLM to RTL and optimizes the RTL to meet the constraints of silicon dimension, clock rate, and power consumption.
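As an illustration of the kind of source such a tool consumes, here is a hypothetical behavioral SystemC module, a four-tap moving-average filter, written without any cycle-by-cycle register-transfer detail. Supported constructs, reset conventions, and interface protocols differ from one high-level synthesis tool to another, so this should be read only as a sketch of the abstraction level, not as input for any specific product.

#include <systemc>

// Behavioral description: the algorithm is written as sequential C++;
// the HLS tool decides the schedule, registers, and datapath resources
// needed to meet the separately supplied area, clock, and power constraints.
SC_MODULE(MovingAverage) {
    sc_core::sc_in<bool>                clk, rst;
    sc_core::sc_in<sc_dt::sc_int<16>>   sample_in;
    sc_core::sc_out<sc_dt::sc_int<16>>  average_out;

    SC_CTOR(MovingAverage) {
        SC_CTHREAD(compute, clk.pos());
        reset_signal_is(rst, true);
    }

    void compute() {
        sc_dt::sc_int<16> window[4] = {0, 0, 0, 0};
        average_out.write(0);
        wait();                          // leave reset
        while (true) {
            for (int i = 3; i > 0; --i)  // shift in the newest sample
                window[i] = window[i - 1];
            window[0] = sample_in.read();

            sc_dt::sc_int<20> sum = 0;
            for (int i = 0; i < 4; ++i)
                sum += window[i];
            average_out.write(sc_dt::sc_int<16>(sum / 4));
            wait();
        }
    }
};

Because the source carries no structural detail, retargeting the same module to a different clock rate or technology becomes a matter of changing constraints and re-running synthesis rather than rewriting RTL.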

Functional verification schedules are reduced because the TLM model runs faster at the higher level of abstraction, so bugs are found earlier in the project. Less effort is required to build a single, reusable, functional verification environment for both TLM and RTL – an environment that describes how to exercise the design to test its quality.

The cost of reusing hardware IP is reduced because the source can be mapped to new architectures by the high-level synthesis tool. Late bug fixes or minor requirement changes can be implemented rapidly with an engineering change order (ECO) capable high-level synthesis product.

High-level synthesis has been available for several years but only recently matured as an enabling technology. It can now support most common hardware structures, making it possible to develop an entire SoC using TLM as the golden source.

By cleanly separating the system constraints of silicon dimension, clock rate, and power consumption from the logic design source, the IP can be reused easily for new architectures by simply changing the constraints. The productivity for creating logic is multiplied by the application of abstraction and automation.

The full benefits of high-level synthesis cannot be achieved with a flow that simply produces RTL and uses it to generate the silicon manufacturing-level description called GDS-II. The TLM implementation flow must optimize the complete process from reading TLM through producing the resulting layout.

Functional verification requires an automated approach to explore corner case behaviors of the design and to increase the productivity of verification engineers specifying the enormous scope of the system operating conditions. The Open Verification Methodology (OVM) is an industry standard verification methodology for both TLM and RTL designs.

By leveraging the OVM, engineers can define an approach to verification from TLM through RTL that minimizes the effort to change the verification environment, reusing code throughout the process.

The use of metrics to measure the functional behaviors of the design enables engineers to focus the corner case exploration on those system behaviors not yet observed, rather than redundantly repeating results. Debugging must be integrated across all abstractions, and ideally be correlated to the TLM source.
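The principle behind such metrics can be sketched in a few lines of plain C++: record which stimulus combinations have actually been observed and report the holes, so that stimulus generation can be steered toward unexercised behavior. This is only an illustration of the idea; in an OVM flow the coverage model would be written with the methodology's own facilities rather than hand-rolled code.

#include <cstdio>
#include <initializer_list>
#include <set>
#include <utility>

enum class Cmd { Read, Write };

// Toy functional-coverage model over (command, burst length) combinations.
class CoverageModel {
    std::set<std::pair<Cmd, unsigned>> seen;
public:
    // Called by the testbench whenever a transaction is observed.
    void sample(Cmd c, unsigned burst) { seen.insert({c, burst}); }

    // List the bins never hit, so corner-case exploration can focus on them
    // instead of redundantly repeating behavior already covered.
    void report_holes() const {
        for (Cmd c : {Cmd::Read, Cmd::Write})
            for (unsigned burst : {1u, 4u, 8u, 16u})
                if (!seen.count({c, burst}))
                    std::printf("uncovered: cmd=%d burst=%u\n",
                                static_cast<int>(c), burst);
    }
};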

To achieve all these benefits, a new IP modeling methodology is required that unifies early software development and hardware design. The methodology must enable creation of TLM models that support early software development, functional verification, and high-level synthesis while integrating the existing RTL methodology infrastructure.

A single model reduces the effort as well as the bugs introduced during coding. As this methodology becomes more widely adopted, it naturally defines new opportunities for IP reuse across the overall enterprise, as well as for transforming today’s third-party IP ecosystem.

Industry Hardware IP Trend
As a first critical step, the industry is standardizing on TLM using SystemC to represent system hardware and enable broad creation and adoption of virtual prototype models.

Closely associated, there are other new standards emerging to use TLM to model hardware intended for synthesis and to define reusable test benches for functional verification across multiple levels of abstraction. The opportunity is to align all of these methodologies and enable creation of a single hardware model that supports early “hardware-with-software” development, as well as higher-productivity system integration and verification.

Several leading-edge companies are embracing the move to TLM for early software development and for higher-productivity synthesis and functional verification. The initial phase as these companies develop their own TLM IP portfolios will be to synthesize new TLM down to RTL and combine that with the rest of their RTL portfolio to create new systems.

The next phase will be to design SoCs predominantly with TLM and incorporate a lesser amount of legacy RTL IP. There are benefits to achieving TLM functional verification productivity and reuse at the RTL level, and the methodology must address incorporating legacy RTL. We are nearing the day when the first company will commit to producing all new IP using TLM. IP reusability is the main driver for a unified methodology.

A few years ago, a unified definition of RTL reuse expanded the opportunities for companies to form around the design and distribution of their RTL IP. Today, the goals of TLM IP reuse are a superset of RTL IP reuse.

TLM IP must support transaction-level virtual prototypes for early software development, high-productivity design flows using high-level synthesis to explore different architectures, and advanced functional verification of the TLM IP as well as the SoC that integrates it.

Third-party IP companies thrive on their ability to provide reusable IP. These companies must determine when they need to start providing IP to serve customers who have switched to TLM as their format of choice. Leading edge IP, such as in consumer and mobility products, probably will be the first to switch to TLM. Third-party IP providers that serve leading systems companies are already doing so. Others will soon follow, or be designed out of next generation projects.

What next?
The methodologies employed by these leading edge companies remain proprietary. They may be based on the TLM 2 standard defined in the Open SystemC Initiative (OSCI), but they are not broad enough in scope to define reuse and interoperability requirements to serve all the interests of virtual prototyping, high-level synthesis, and advanced functional verification.

The industry needs to invest in the definition of a standard methodology, and the best way to drive this process is to lead with IP. Once IP that embodies the needed methodology is available, the other participants will follow.

Processor and peripheral model libraries need to be made available in an open way so they can be combined easily with different virtual prototyping solutions. In addition, the industry needs an easy way to package a virtual prototype and share it with customers as a software development platform or as a means of aligning architectural requirements.

Conclusion
Exploding system development costs and shrinking schedules are driving the industry to a new level of abstraction, transaction-level modeling (TLM), enabling earlier software development and more productive hardware design and implementation.

This approach will enable both early software development and increased software-hardware development productivity, reducing costs and keeping growing schedules under control.

Steve Brown is Product Marketing Director, System Design and Verification, at Cadence Design Systems, Inc.
