
Expanding emulation’s reach with virtual devices

In this Product How-to design article, Jim Kenney discusses the increasing importance of virtual device emulation in hardware/software co-design, using a Veloce customer’s experience with the Mentor Graphics VirtuaLAB’s ability to generate Ethernet traffic that exercises an edge router chip.

With the majority of designs today containing one or more embedded processors, the verification landscape is transforming as more companies grapple with the limitations of traditional verification tools. Comprehensive verification of multi-core SoCs cannot be accomplished without including the software that will run on the hardware.

The increasing prevalence of complex, multifunctional, networked devices and the rising importance of embedded software create a need for faster simulation run times and full system verification early in the design cycle (Figure 1, below). Hardware-assisted verification, or emulation, delivers the capacity and performance required for extremely fast, full SoC testing of both hardware and software. For many, however, a prosaic barrier to the benefits of emulation has proven stubborn and persistent: the high cost of emulators has made them affordable only for companies with deep pockets.



Figure 1. With software driving most functionality, embedded software execution is mandatory for comprehensive SoC verification.

Fortunately, recently introduced virtualization technologies promise to open a door in this wall. By spreading the usability and cost of emulators across many simultaneous users, virtual peripheral devices will make emulators a more common fixture at small and medium-sized companies, as well as larger ones.

The reasons for this begin with the fact that a device driver cannot be fully tested against an SoC unless it has a device to talk to. Because those devices sit outside the SoC, they must be connected to it for the driver to be exercised. Full SoC verification of hardware and software therefore requires connecting the chip to its end environment, including all of its peripheral devices, such as USB, Ethernet, displays, hard drives, and PCI Express. Typically, this has been done with physical hardware and speed bridges or speed adapters.

In emulation, this has been done using in-circuit emulation (ICE), where physical devices are cabled to a speed adapter, which is in turn cabled into the emulator. This setup consumes expensive lab space and takes time to configure and debug.

With all the cables and external hardware, it is the least reliable part of the emulation environment, and it is difficult to debug when something goes wrong. These physical peripheral setups are not only inflexible but also costly to replicate. Each supports only a single user, locking the machine down to a given project or a single SoC. Emulators are expensive enough that they need to support many users, yet ICE limits multi-user flexibility.

Unlocking the door to emulation
The key is to virtualize all of this hardware. Virtual devices offer the same functionality as traditional ICE solutions, but without the additional cables and hardware units. They thus make emulation more readily available to all design teams within a company while increasing the flexibility, visibility, and capacity of emulation environments.

Virtualized solutions add a range of capabilities that promise to redraw the functional verification landscape. These include virtual host and peripheral models (called “virtual devices”) and software debug capabilities enabled by transaction-based co-model channel technology.

In this scenario, hardware-accurate models of the peripherals run on a standard workstation, typically a Linux machine, so that the SoC and the software device drivers running in the emulator can interact with those models just as they would with ICE, except that everything is implemented virtually. Because the virtual lab (Figure 2, below) is entirely in software, a single emulator can support many users. The complete emulation environment can be instantly replicated and reconfigured.



Figure 2. The virtual lab target environment includes virtual peripherals.
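
As a rough, hypothetical illustration of this transaction-based arrangement (the channel and model classes below are stand-ins invented for this sketch, not the actual VirtuaLAB or Veloce co-model interface), the workstation side can be pictured as a model that services transactions issued by the device driver running on the emulated SoC:

# Illustrative sketch only: a workstation-side peripheral model answering
# transactions from the SoC in the emulator. The channel class is a fake,
# queue-based stand-in for the vendor's transaction-based co-model channel.
from queue import Queue

class CoModelChannel:
    """Hypothetical transaction channel: requests arrive from the RTL
    transactor in the emulator, responses travel back the same way."""
    def __init__(self):
        self.requests = Queue()
        self.responses = Queue()

class PeripheralModel:
    """Behavioral stand-in for the hardware-accurate model that would
    normally come from licensed peripheral IP."""
    def __init__(self, channel):
        self.channel = channel
        self.registers = {}

    def run_once(self):
        req = self.channel.requests.get()    # transaction from the DUT's driver
        if req["op"] == "write":
            self.registers[req["addr"]] = req["data"]
            self.channel.responses.put({"ok": True})
        else:                                # register read
            self.channel.responses.put(
                {"ok": True, "data": self.registers.get(req["addr"], 0)})

if __name__ == "__main__":
    ch = CoModelChannel()
    model = PeripheralModel(ch)
    # Stand-in for transactions the device driver on the emulated SoC would issue.
    ch.requests.put({"op": "write", "addr": 0x10, "data": 0xCAFE})
    model.run_once()
    ch.requests.put({"op": "read", "addr": 0x10})
    model.run_once()
    ch.responses.get()                       # discard the write acknowledgement
    print(ch.responses.get())                # {'ok': True, 'data': 51966}

The point of the sketch is simply that the driver in the emulator and the model on the workstation exchange transactions rather than pin wiggles, which is what makes the setup software-configurable and easy to replicate.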

The virtual lab setup enables emulators to be shared around the world and around the clock. The emulation environment is now data center compatible, as opposed to being confined to the role of a lab bench dedicated to a single project. Because of the ease of replication and configuration, engineers from anywhere in the world can take turns using the same emulator.

Each user needs only a single workstation connected to an emulator to verify their entire SoC environment. These workstations are highly reliable, low cost, and very compact. Instead of a mass of cables, speed adapters, and a lab bench full of boards and fixtures, there is only the emulator and a small rack of unit-high workstations.

With the emulation environment now housed in a data center, it has to be reconfigurable by software, because users no longer have physical access to it. With the virtual lab, teams around the world can reconfigure it via software. It can be shared by multiple projects and geographies, and it can be reconfigured for another project with a different set of peripherals in the same amount of time it takes to load a new design into the emulator.

Accurate and Fast
Virtual models are hardware-accurate because they are based on licensed design IP; they are therefore just as accurate as ICE peripherals. Virtual models use the actual, synthesizable RTL that SoC designers license and put into their chips. Because it is synthesizable RTL, the peripheral model can be compiled into the emulator, and the DUT can talk to a full, accurate RTL representation of the peripheral, such as a USB 3.0 controller, inside the emulator (Figure 3, below).



Figure 3. A virtual USB 3.0 mass storage peripheral.

Continuing with the USB 3.0 example, on the workstation side there is a USB software stack and, on top of it, software that implements a mass-storage function client, in effect a USB memory stick. The result is a functionally accurate USB memory stick, built entirely virtually: the RTL synthesized into the emulator is linked over a co-model channel to the software stack and function client running on the workstation. This is why users do not lose any accuracy by going to the virtual approach.
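
As a loose, hypothetical sketch of what the workstation-side function client amounts to (the class and file names below are invented for illustration and are not the actual VirtuaLAB software), a virtual memory stick can be thought of as a block device backed by an ordinary file:

# Illustrative sketch only: a "virtual memory stick" serving block reads and
# writes from a backing file on the workstation. In the real flow, the USB 3.0
# controller RTL sits in the emulator and forwards the driver's commands to
# the function client over the co-model channel.
class VirtualMemoryStick:
    BLOCK_SIZE = 512

    def __init__(self, image_path, num_blocks=2048):
        self.image_path = image_path
        # Create a zero-filled backing image so reads of untouched blocks succeed.
        with open(image_path, "wb") as f:
            f.write(b"\x00" * self.BLOCK_SIZE * num_blocks)

    def read_blocks(self, lba, count):
        with open(self.image_path, "rb") as f:
            f.seek(lba * self.BLOCK_SIZE)
            return f.read(count * self.BLOCK_SIZE)

    def write_blocks(self, lba, data):
        with open(self.image_path, "r+b") as f:
            f.seek(lba * self.BLOCK_SIZE)
            f.write(data)

# The USB driver on the emulated SoC issues the read/write commands; the
# function client answers from the backing file, so the driver sees a
# functionally accurate memory stick.
stick = VirtualMemoryStick("usb_image.bin")
stick.write_blocks(0, b"boot sector".ljust(512, b"\x00"))
print(stick.read_blocks(0, 1)[:11])          # b'boot sector'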

Virtual devices are also very fast. They are not faster than ICE, but they are just as fast. Emulation speed is typically limited by how fast a design can run in the emulator, not by the virtual device or the communication to the workstation.

To demonstrate this, consider a Veloce customer’s experience using the Mentor Graphics VirtuaLAB Ethernet capability to generate Ethernet traffic to exercise an edge router chip. They activated twelve 10G ports on their chip, with a million packets passing through each port, for 12 million packets total. It took 45 seconds to emulate on Veloce. The customer estimated that running a single port, with one million packets, would take about thirty days in simulation. Simulating all twelve ports would have been impractical.
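
A back-of-the-envelope calculation using only the figures quoted above, and assuming the single-port simulation estimate scales linearly across ports, shows why the twelve-port run was never attempted in simulation:

# Rough arithmetic based only on the figures quoted above (not measured data).
packets_per_port = 1_000_000
ports = 12
emulation_seconds = 45            # 12 million packets emulated on Veloce
sim_days_per_port = 30            # customer's estimate for one port in simulation

emu_rate = ports * packets_per_port / emulation_seconds
sim_seconds_all_ports = ports * sim_days_per_port * 24 * 3600

print(f"Emulation throughput: ~{emu_rate:,.0f} packets/s")
print(f"Simulating all 12 ports: ~{ports * sim_days_per_port} days")
print(f"Implied speedup: ~{sim_seconds_all_ports / emulation_seconds:,.0f}x")

Even allowing the estimate a wide margin of error, the gap is several orders of magnitude.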

The customer’s edge router chip was configured by their router control application running on a Linux workstation, just like their routers are configured by the same application. Via the co-model host, they linked the Linux box to the router chip design in the emulator.

It looked exactly like running their chip in the lab, except that it was all done virtually, long before hardware prototypes or silicon were available. The chip was compiled into the emulator, and the virtual Ethernet capability was split between the Ethernet transactors in Veloce, written in RTL and compiled into the emulator, and a traffic generator on the co-model host. This setup allowed them to run all kinds of analysis, make modifications, study performance and bottlenecks, and fix their chip months before silicon.
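
The traffic-generation side of such a setup can be sketched, purely hypothetically (the send interface below stands in for the per-port transactor channels and is not the actual VirtuaLAB API), as a loop that fans packets out across the active ports:

# Illustrative sketch only: a workstation-side traffic generator feeding the
# Ethernet transactors in the emulator through a per-port send function.
import random

def make_packet(port, seq, payload_len=64):
    # Tag each payload so packets can be checked when they come back out.
    header = f"port={port:02d} seq={seq:08d} ".encode()
    return header + bytes(random.getrandbits(8) for _ in range(payload_len))

def generate_traffic(send_fn, ports=12, packets_per_port=1_000_000):
    # Round-robin packets across all active ports of the router design.
    for seq in range(packets_per_port):
        for port in range(ports):
            send_fn(port, make_packet(port, seq))

# Demo with a counting stub in place of the real per-port channel,
# and a small packet count so it runs instantly.
sent = [0] * 12
generate_traffic(lambda port, pkt: sent.__setitem__(port, sent[port] + 1),
                 ports=12, packets_per_port=10)
print(sent)                                   # every port saw 10 packets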

To do the same thing with the classic, physical ICE approach they would need to purchase Ethernet testers, which are fairly expensive. Then they would need a bank of speed bridges to act as the speed adapters between the Ethernet testers and the emulator.

In order to support multiple users, as they could easily do with a virtual lab (Figure 4, below), they would have to duplicate that entire environment of Ethernet testers, speed bridges, and cables. When designs start reaching 24, 48, 96, and even 128 ports, it becomes impractical to do this via ICE. The virtual lab is a much better approach.



Figure 4: Veloce VirtuaLAB setup compared to ICE.

Another benefit of the virtual approach pertains to how simulators and emulators support a capability called save and restore, or checkpoint restart. With checkpoint restart, users do not have to re-emulate their RTOS boot every time they want to exercise the device drivers and applications they wrote. The RTOS only needs to be debugged once, and emulating the RTOS boot just to reach the device drivers can take anywhere from one to ten hours.

However, if physical peripherals are connected to the emulator via the classic in-circuit approach, their state cannot be checkpointed, restored, or restarted. Physical peripherals simply cannot save and restore their register state.

Virtual lab peripherals support save and restore of the complete environment, including the peripherals connected to the SoC as it sits in the emulator. This makes save and restore practical on a full chip, with peripherals, at the emulation level.
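
Conceptually, the workflow looks like the sketch below; the checkpoint functions are hypothetical placeholders rather than an actual emulator API, but they capture the idea of booting the RTOS once and restoring from that point for each subsequent driver or application test:

# Illustrative sketch only: boot once, checkpoint, then restore per test.
def boot_rtos():
    print("Booting RTOS in the emulator (one to ten hours, done once)")

def save_checkpoint(name):
    print(f"Saving checkpoint '{name}': SoC state plus virtual peripherals")

def restore_checkpoint(name):
    print(f"Restoring checkpoint '{name}' instead of re-emulating the boot")

def run_test(test):
    print(f"Running {test} against the booted system")

boot_rtos()
save_checkpoint("post_boot")

for test in ["usb_driver_test", "ethernet_driver_test", "display_driver_test"]:
    restore_checkpoint("post_boot")   # skip the RTOS boot on every iteration
    run_test(test)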

Emulation Around the Clock, Around the World
A virtual lab emulation environment is very cost effective, providing simultaneous access to a single emulator for many software engineers. Virtual solutions provide accelerated simulation and software debug that increase verification productivity and design quality.

By delivering efficient multi-user support and data center compatibility, virtual peripherals will usher in emulation for companies large and small. Because a virtual emulation environment can be shared around the clock and around the world, it is now cost-effective for small and medium-sized firms to use emulation. Indeed, with the virtual lab model, emulation’s time has finally come.

Jim Kenney has over 25 years of experience in hardware emulation and logic simulation and has spent the bulk of his career at Teradyne and Mentor Graphics Corporation. At Mentor Graphics, Jim has held responsibility for analog, digital, and mixed-signal simulation; hardware/software co-verification; and hardware emulation. He is currently the Marketing Director for Mentor’s Emulation Division. Jim holds a BSEE from Clemson University.
