Breathing life into hardware and software codesign

From theory to practice, this article comes from one who's done it all. Hardware/software codesign is the goal of every (well, most) embedded systems designer. To have the hardware and the software both spring forth from the same designer's pen is enough to make any manager glad.
As consumers develop an insatiable desire for instant information, embedded systems are here to both satisfy the need and fuel the desire. These systems, from consumer gadgets to safety-critical technology, have permeated our lives almost to the point where we depend on them, directly or indirectly, not just for entertainment but for food, clothing, and shelter. The demand for increased embedding of hardware and software in multifaceted consumer products, coupled with rising design complexity and shrinking time to market, is renewing the emphasis on bringing together the various segments of the embedded system design industry. Codesign of hardware and software is back in vogue. At long last, codesign, now known as embedded system-level design, is starting to mature.
This article describes the evolution of codesign, what went wrong with early codesign methods, and its revival and great hope: the transaction-level method.
Intro to codesign
Hardware/software codesign is a loose term that encompasses a large slice of embedded systems design, trade-off analysis, and optimization starting from the abstract function and architecture specification down to the detailed hardware and software implementation. If the method of using "interchangeable parts" introduced by Eli Whitney in 1799 is the precursor to the industrial revolution, then hardware/software codesign is the enabler of the embedded systems revolution.
Hardware/software codesign involves analysis and trade-offs: analyzing the hardware and software as they work together and discovering what adjustments or trade-offs you need to make to match your parameters. For example, anytime you debug a software driver on the hardware (or a model of the hardware), and you tweak the hardware or the software as a result, that's codesign. To put it more simply, any time you run a compiler you're doing codesign. The compiler tries to finagle the software code (to a degree that depends on your optimization flags) to make it match the fixed-hardware processor. In many cases engineers even use hardware hints (for example, the register keyword in C) in the programming language itself to suggest how the optimization or codesign should be done.
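As a concrete (if dated) illustration of such a hint, here is a minimal C sketch. The `register` keyword asks the compiler to keep a hot variable in a CPU register rather than memory; modern optimizing compilers are free to ignore it, but it captures the spirit of steering the software toward a fixed hardware target:

```c
#include <stdint.h>

/* Illustrative only: a checksum loop where the C 'register' keyword
 * hints that 'sum' should live in a CPU register rather than memory.
 * Modern compilers typically make this decision themselves, but the
 * hint is a small act of codesign: tuning software to fixed hardware. */
uint32_t checksum(const uint8_t *buf, int len)
{
    register uint32_t sum = 0;   /* hardware hint: keep in a register */
    for (int i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}
```

The compiler, armed with knowledge of the target's register file, decides whether honoring the hint actually helps.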
The field of codesign, however, has long been a victim of its own visionary breadth, torn between idealism and realism. Many practitioners have proclaimed it dead because its ideals of top-down embedded systems design starting from high abstraction layers have proven too unwieldy. The other camp—the codesign realists—says the practice is very much alive as bottom-up intellectual property (IP) component assembly.
A version of codesign has now come along that may please both camps. To make sense of it, let's first look at how codesign evolved.
Rise and decline
The codesign concept was born in the 1990s following the widespread adoption of hardware-synthesis tools and rising interest in software synthesis. Until that time, creating software was a manual undertaking for the most part. Microcontrollers were the mainstay of embedded systems, and programming them efficiently within strict performance and memory constraints was an art. Most early applications were control systems used in industrial and automotive domains.
In these applications, the system control—the interaction of the controller with the RTOS and the peripherals—was typically designed first, forming a shell into which the data-path components such as arithmetic logic units, multipliers, and shifters would fit. The software was typically used in such systems to add flexibility through programmability. The software wasn't so much application "feature" software but a soft implementation of the required control functions.
Industry awareness of the toll exacted by unreliable code and long development times led to academic curiosity in software modeling and automated synthesis. Academic research camps were polarized according to the application domain: a majority studied control-intensive applications while others worked on data-dominated applications.
Early on, researchers adapted methods for synthesizing hardware logic (for control) and data path (for data) to simultaneously design hardware/software and their interfaces. The motivating idea was to start from a single system-level specification and automatically generate both the hardware and the software to reduce design time and the time spent evaluating different implementation alternatives. At first, the researchers limited their focus to analyzing the trade-offs of low-level hardware/software implementation and the best methods for cosimulation, but complex target architectures made it increasingly hard to adequately analyze and optimize the system at the low hardware/software level. Newer 32-bit microprocessors and a variety of DSPs tailored for different applications were in greater use, and their advanced memory hierarchies with multilevel caches severely limited the accuracy of the abstract models built on function-control abstractions.
A solution at hand
Researchers focusing on function and architecture codesign in the late 1990s soon rectified this modeling inadequacy through a formal method in which engineers could coordinate (or codesign) the refinement of the function and the abstraction of the architecture at a high level, instead of analyzing trade-offs at the hardware/software implementation level, where options are more limited. Researchers basically realized that algorithmic analysis can deliver higher returns for lower effort if it has knowledge of the available architectural options. The simple example of a hardware resource-aware compiler (how many registers, multipliers, adders, and so forth) also shows that available optimizations can be tuned based on architecture information. This approach permitted a more educated, guided, and constrained codesign in which the architecture sheds light onto the metaphorical "function shadow," helping engineers make an informed choice of the best target implementation for the application at hand, as shown in Figure 1.
Figure 1: The ideal: (computation) function and (communication) architecture trade-off analysis
Automated synthesis promises to increase engineer productivity and enhance system quality by rapidly generating the hardware, software, and necessary interfacing mechanisms (even a dedicated RTOS optimized for the application), all guided by the system-constraint metrics. Those metrics are primarily performance and size, but also power consumption. A typical architecture consisted of a few software partitions, such as one or more microprocessors with an RTOS to manage the multiple software tasks, plus multiple hardware partitions. This design flow is shown in Figure 2. System architects would perform their codesign early, at the high level, where the greatest returns on function or architecture changes could be reaped, and then map the representation down to implementation after hardware/software partitioning. Hardware and software engineers would codesign and cosimulate at the implementation level.
Figure 2: Reality: Codesign at high- and low-abstraction levels with a mapping gap
Division of labor
Several codesign representations that could be mapped to hardware or software have been used. Many of these consist of a control/data flow graph onto which architectural estimate data (for example, abstract instruction timing for software, timing delay for hardware) is annotated for trade-off analysis and codesign. The estimates and the implementation become more accurate and concrete as the design moves down toward the lower abstraction layers.
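To make the idea concrete, here is a hypothetical C sketch of a control/data flow graph node annotated with per-target estimates, along with a crude trade-off rule. All field names, numbers, and the decision heuristic are illustrative assumptions, not taken from any real codesign tool:

```c
#include <stddef.h>

/* Hypothetical sketch: a control/data flow graph node annotated with
 * per-target estimates for trade-off analysis. Field names are
 * illustrative, not from any real tool. */
typedef struct CdfgNode {
    const char      *op;          /* operation, e.g. "mul", "branch"    */
    struct CdfgNode *succ[2];     /* control/data successors            */
    unsigned         sw_cycles;   /* estimated instruction cycles in SW */
    double           hw_delay_ns; /* estimated datapath delay in HW     */
    unsigned         hw_area;     /* estimated gate count if in HW      */
} CdfgNode;

/* Crude partitioning rule (an assumption for illustration): map a node
 * to hardware when its estimated software time exceeds the hardware
 * delay by at least a required speedup factor. */
int map_to_hardware(const CdfgNode *n, double cycle_ns, double min_speedup)
{
    double sw_ns = n->sw_cycles * cycle_ns;
    return sw_ns > min_speedup * n->hw_delay_ns;
}
```

As the design moves down the abstraction layers, the annotated estimates would be replaced by progressively more accurate figures, and the same analysis repeated.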
Although research thrived on codesign modeling and analysis at the high level, the promise of a practical, highly productive design and verification process did not materialize for the industry at large. Codesign tools weren't widely used for system design since mapping the results of high-level algorithmic codesign onto a realistic architecture implementation remained elusive. Conversely, at the lower hardware/software layers coverification was king.
The gap between the system architects and application feature software developers on one hand and the implementers of hardware, middleware, and firmware on the other persisted; it even grew wider as design complexity mounted. High-level modeling just didn't deliver on the links to implementation, and the approaches that did have a path to implementation suffered from poor results. In practice, architects did the high-level modeling and codesign and then handed off the results to the developers to implement the system manually, assisted by partially automated verification. This gap in the modeling spectrum causes design iterations when the specification and the implementation are not consistent. The fissure can also cause miscommunication between software and hardware teams: the software engineers design for an abstract model of the hardware many months before a hardware prototype is available. Best practices at large system houses focused on cosimulation as early as possible, but the codesign process was just not in wide use. The field as a whole seemed doomed to fade out of existence.
Renewed vigor in codesign
The long-sought articulation models for computation and communication codesign are now starting to materialize. One particular "sweet-spot" model that's recently come to the fore is the transaction-level model (TLM), which captures system operation by presenting a timed version of the programmer's model of the system operation. In many ways, transactions have become a middle ground of interoperability for high-level codesign idealists and low-level realists. This middle ground is possible because transactions describe both behavior and communication and can be abstracted and refined to form a continuum of models that bridges the gap between the extremes at the top and bottom of embedded systems modeling. TLM has the potential to form the underpinnings of a realistic design flow that can achieve good-quality results given a set of design constraints.
The shortcoming of previous codesign methods was that a lot of work went into behavioral modeling, with communication aspects hidden within the entrails of the model of computation. TLM changes all that. Concurrency is a reality we're just now coming to terms with, and TLM is the vehicle through which we're revisiting these ideas, filling the breach between concurrent hardware and sequential software modeling.
Anatomy of a transaction
A transaction describes a partial order for a sequence of events. Transactions have labels and a time span. Any one transaction can be thought of as an abstract valuation for a set of signals. In effect, it's a macro value for a set of things that could be design signals or other abstract useful variables and messages.
Related transactions are grouped together in streams. In its simplest form, a stream is one signal where the transaction would be a specific value for that signal. Typically a stream is more like an object that captures the bus transfer type, and transactions on it might be single read, idle, single write, burst transfer, and so on. Streams can have overlapped transactions to model split or concurrent activity.
Figure 3: Transaction elements and relationships
A transaction therefore consists of attributes, such as the several subsignals, messages, or other variables that implement the bus handshake. Transactions can also be composed or decomposed to form aggregations and associations among varied transactions in one-to-one or one-to-many relations. Such relationships include predecessor-successor, parent-child, and the like. These concepts are shown graphically in Figure 3, which displays a trivial example of a generic bus. Tools such as debuggers typically use such views to present the data in an understandable fashion that abstracts and encapsulates the data for codesign and trade-off analysis.
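The anatomy described above can be sketched in a few lines of C. This is a minimal, hypothetical rendering (the type and field names are mine, not from any standard) of a labeled, time-spanning transaction, a stream that groups transactions, and the overlap test that models split or concurrent activity:

```c
/* Illustrative sketch of a transaction record and stream, loosely
 * following the description in the text; names are hypothetical. */
typedef struct {
    const char    *label;      /* e.g. "single_read", "burst_transfer" */
    unsigned long  start, end; /* the transaction's time span          */
    unsigned long  addr;       /* example attribute: bus address       */
    unsigned long  data;       /* example attribute: transferred value */
} Transaction;

typedef struct {
    const char  *name;     /* e.g. "generic_bus" */
    Transaction  txns[64]; /* related transactions grouped on a stream */
    int          count;
} Stream;

/* Two transactions whose time spans intersect are overlapped on the
 * stream, modeling split or concurrent bus activity. */
int txn_overlap(const Transaction *a, const Transaction *b)
{
    return a->start <= b->end && b->start <= a->end;
}
```

A debugger's transaction view is essentially a rendering of such records: labels and spans on streams, with attributes available on demand.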
Languages have not been a successful medium in which to analyze trade-offs because different application domains have different concerns and thus dissimilar modeling notations. Hardware/software codesign starting from a single all-encompassing language is just not realistic because systems are heterogeneous entities. System-level design languages such as SystemC and SystemVerilog, and a host of other hardware-verification languages (such as Temporal e) have come to terms with this realization and identified the TLM abstraction as a suitable bridging notion.
TLM is a description of the observable behavior (whether from a specification or from an actual implementation) that can serve as a cross-team record and documentation of what the final behavior should be; in this sense it's a behavior-trace abstraction. High-level analysis techniques can view it as an abstract, (partially) ordered sequence of data and can use model refinement and composition to manipulate this trace abstraction for trade-off analysis. Low-level models can focus on the concrete manifestation of protocol communication, where validation tools can monitor and analyze such metrics as memory access, caching performance, and bus utilization.
Verification is undoubtedly a central part of design. The TLM abstraction is quite efficient at verifying the different component models; any one block can be swapped with a functional (timed or untimed) model or an implementation model (bus-functional model or RTL). This capability can provide significant speedup in verification techniques such as hardware/software co-verification. This is also where traditional hardware/software partitioning can be performed based on performance evaluation, with a task moved from one partition to another. Different cosimulation techniques are shown in Figure 4, with fast evaluation at the top and more accurate but slower evaluation at the bottom.
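The model-swapping idea can be sketched in C with a common read interface behind which either a fast untimed model or a stand-in for a bus-functional model can be plugged. Everything here is a toy assumption (the names, the shared memory array, the single-call interface); a real bus-functional model would drive handshake signals cycle by cycle:

```c
/* Hedged sketch: swapping a fast functional model for a more detailed
 * one behind a common interface, as in TLM-based co-verification.
 * Both "models" here are illustrative stand-ins. */
typedef unsigned (*bus_read_fn)(unsigned addr);

static unsigned mem[16]; /* toy shared memory both models observe */

/* Untimed functional model: returns the value immediately. */
static unsigned pv_read(unsigned addr)  { return mem[addr & 15]; }

/* Stand-in for a bus-functional model: same observable result, but a
 * real BFM would sequence the bus protocol cycle by cycle. */
static unsigned bfm_read(unsigned addr) { return mem[addr & 15]; }

/* The testbench is written once against the interface; either model
 * can be plugged in without changing the test. */
unsigned run_test(bus_read_fn read_model, unsigned addr, unsigned value)
{
    mem[addr & 15] = value;
    return read_model(addr);
}
```

Because the testbench sees only the interface, the fast model runs in early regressions and the accurate one in sign-off, which is precisely the speedup the text describes.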
Figure 4: Cosimulation techniques, from fast evaluation (top) to accurate but slower evaluation (bottom)
TLM therefore sidesteps the issue of an overall central modeling language and enables different domains to use appropriate modeling constructs. TLM forms a central modeling concept that allows both architects and implementers to quickly explore various functional and architectural trade-offs and alternatives. Indeed, transaction-level modeling stems from representing the required communication among blocks, not from modeling the blocks' behavior or the interface or channel behavior; it's a presentation of the required function (specification) and the current operation (implementation) that focuses on demonstrating proper system operation.
The model itself is a continuum of several TLMs with varying levels of detail. Its three primary sublevels are the programmer's view (PV); the programmer's view with timing (PVT), which typically includes a bus-functional model of the hardware and an instruction-set simulator (ISS) abstraction of the software; and the cycle-accurate (or cycle-callable) level, which involves a mix of bus-functional and RTL model abstractions, as shown in Figure 5. The multitude of levels reduces the mapping effort from one level to the next and provides for a stepped trade-off analysis in which models assist the optimization and mapping process to pick an optimal implementation. The model capitalizes on the fact that design is really a "meet-in-the-middle" process: not purely top-down or bottom-up but a mixed up/down process. Automation tools that can abstract the model upward and refine it downward are mushrooming in the context of TLM design and verification.
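The PV/PVT distinction can be illustrated with the same transfer modeled twice: once untimed, once annotated with an estimated delay. The delay figure and all names below are assumptions for illustration, not values from any standard:

```c
/* Illustrative sketch of PV vs. PVT: the same transfer, once untimed
 * and once carrying an annotated timing estimate. Numbers and names
 * are hypothetical. */
typedef struct {
    unsigned      data;
    unsigned long elapsed_ns; /* 0 at PV, where timing isn't modeled */
} TlmResult;

/* Programmer's view: functional result only, no notion of time. */
TlmResult pv_transfer(unsigned data)
{
    return (TlmResult){ data, 0 };
}

/* Programmer's view with timing: same function, annotated with an
 * estimated per-transfer bus delay for performance analysis. */
TlmResult pvt_transfer(unsigned data)
{
    const unsigned long BUS_DELAY_NS = 25; /* assumed per-transfer cost */
    return (TlmResult){ data, BUS_DELAY_NS };
}
```

Refining PV to PVT thus means attaching estimates to an already-working functional model, which is why the stepped continuum keeps the mapping effort between adjacent levels small.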
Figure 5: Trade-off analysis at the transaction level
Codesign nirvana at last
The once disparate "point-tool" codesign space is coming together. Early visionary research in codesign has found a large degree of validation with current application domains and with the renewed interest in a unified modeling and trade-off analysis level—that of transactions. Design automation tools such as simulators, analyzers, and debuggers that revolve around the TLM can now bridge the gap between the architecture and the implementation teams. This representation is a means for the industry segments to complement each other using the TLM as a medium for integration and cooperation.
Hardware and software are like ice and water: each has its own distinct characteristics yet their essence is the same. Codesign enables us to see beyond a particular hardware and software incarnation of an embedded systems design and analyze it at the core. Codesign is no longer the sole purview of large system houses. Cooperative efforts aimed at defining key analysis and tradeoff points that run across the hardware and software domains such as TLM are breathing renewed life into hardware and software codesign and its practice.
Dr. Bassam Tabbara is architect for research and development at Novas, where he leads the system-level debug and the assertion-based design and verification teams. He has a bachelor's in electrical engineering from UC Riverside and a master's and doctorate from UC Berkeley. His research interests include the optimization, synthesis, verification, debug, and codesign of embedded and hardware/software systems. You can reach him at email@example.com.