Co-simulation for Zynq-based designs

June 09, 2017

Adam Taylor

Heterogeneous System-on-Chip (SoC) devices like the Xilinx Zynq-7000 and Zynq UltraScale+ MPSoC combine high-performance processing systems with state-of-the-art programmable logic. This combination allows the system to be architected to provide an optimal solution. User interfaces, communication, control, and system configuration can be addressed by the Processing System (PS). Meanwhile, the Programmable Logic (PL) can be used to implement low-latency, deterministic functions and processing pipelines that exploit its parallel nature, such as those used in image processing and industrial applications.

Communication between the PS and the PL is provided by several memory-mapped interfaces. These interfaces use the Advanced eXtensible Interface (AXI) to provide both Master and Slave communications in each direction.

Figure 1. Zynq Architecture Showing AXI Interconnect between the PS and PL (Source: Xilinx)



In cases where configuration and control functions are performed by the PS, the general-purpose AXI Master interface is used from the PS to the PL. This enables the software (SW) to configure registers and hence the desired behavior of IP cores in the PL. In more complex operations, there may be a desire to transfer large amounts of data from the PL into the PS memory space for further processing or communication. These transfers will utilize the high-performance interfaces, which will require considerably more complex software to configure and use.

Verifying interactions between the PS and PL presents challenges to the design team. The 2015 Embedded Markets Survey identified debugging as one of the major design challenges faced by engineering teams and also identified a need for improved debugging tools. While bus functional models can be used initially, these models are often simplified and do not enable verification of the developed SW drivers and application at the same time. Full functional models are available, but these can be prohibitively expensive. When implementing a heterogeneous SoC design, there needs to be a verification strategy that enables both PL and PS elements to be verified together at the earliest possible point.

Traditionally, verification has initially been performed for each element (functional block) of the design in isolation; verifying all the blocks together occurs only when the first hardware arrives. The software engineering team developing the applications to run on the PS needs to ensure the Linux kernel contains all the necessary modules to support its use and has the correct device tree blob; this is normally verified using QEMU (short for Quick Emulator), a free and open-source hosted hypervisor that performs hardware virtualization.
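To make the device-tree dependency concrete, a memory-mapped PL peripheral is usually exposed to the kernel through a node like the one below. The compatible string, label, and address range here are purely illustrative; the real values would match how the core was packaged and where it was placed in the Vivado address map.

```dts
/* Hypothetical device tree node for a custom PL core.
 * Compatible string and address are illustrative only. */
axi_pwm_0: pwm@43c00000 {
    compatible = "acme,axi-pwm-1.0";
    reg = <0x43c00000 0x1000>;   /* base address, size */
    status = "okay";
};
```

If this node is missing or wrong in the device tree blob, the driver never binds, which is exactly the class of error that booting the same kernel image under QEMU can catch early.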

Meanwhile, in order to correctly verify the PL design, the logic verification team is required to generate and sequence commands like those issued by the application software to verify that the logic functions as required. Both of these approaches, however, do not capture the true interaction of the software with the hardware, thereby making errors associated with this interaction very difficult to detect. This delays the development schedule and increases development costs as issues raised later in the development process are always more expensive to address and correct.

It is possible to use a development board as an interim step to verify the HW and SW interaction before the arrival of the final hardware. However, debugging on real hardware can be complicated, requiring additional instrumentation logic to be inserted in the hardware. This insertion takes additional time, as the bit file needs to be regenerated to include the instrumentation logic. Of course, this change in the implementation can also impact the underlying behavior of the design, thereby masking issues or introducing new issues that make themselves apparent only in the debugging builds.

Being able to verify both the SW and the HW designs using co-simulation, therefore, provides several significant benefits. It can be performed earlier in the development cycle and does not require waiting for development hardware to arrive, thereby reducing the cost and impacts of debugging. Furthermore, such an approach also provides more visibility with respect to registers and interactions between the PS and PL, all of which aids in the discovery and removal of bugs earlier in the process.

HW & SW Co-simulation

Co-simulation between SW and HW requires the logic simulation tool used to verify the HW design to be able to interact with a SW emulation environment.

The release of Aldec's Riviera-PRO (2017.10) enables just this HW and SW co-simulation by providing a bridge between Riviera-PRO and QEMU, thereby enabling execution of the developed software for Linux-based Zynq designs.

Figure 2. Bridging the HW and SW verification environments (Source: Aldec)


This bridge has been created using SystemC Transaction Level Modelling (TLM) to define the communication channels between QEMU and Riviera-PRO. The concurrent verification of the SW and HW is facilitated by the bridge's ability to transfer information in both directions.

Within this integrated simulation environment, the engineering team is able to use standard and advanced debug methodologies to address any issues that may arise as the verification proceeds. In the case of Riviera-PRO, this includes such capabilities as setting breakpoints within the HDL, examining data flow, and even analyzing the code coverage and paths that are exercised by the SW application running in QEMU. In the case of QEMU, the SW team can use the GNU Debugger (GDB) to instrument both the kernel and the driver, stepping through the code using breakpoints.

This co-simulation approach not only provides greater visibility and debugging capability within the hardware simulation environment, but also enables the same Linux kernel developed for the target hardware to be used within QEMU. Again, this provides earlier verification that the kernel correctly contains all the required packages and elements to support the application under development.

PWM Example

In order to demonstrate this co-simulation environment, a simple example was created. This example places an IP core within the PL and connects it to the Zynq PS over a general-purpose AXI interface. When enabled by an AXI access to its register space, the IP core will generate a pulse-width modulated (PWM) signal output. The duty cycle of the PWM output is selectable within a range of 0 to 100% and is likewise defined by a register within the IP core's register space. A typical use case for this core, therefore, requires software running in the Zynq PS to enable and configure the IP core. Simply simulating the IP core in isolation will not adequately demonstrate the desired operation of the core. To correctly verify the IP core, we need to be able to enable it and exercise the output pulse width from the PS while running a Linux operating system.

Continue reading on Embedded's sister site, EEWeb: "The Benefits of HW/SW Co-Simulation for Zynq-Based Designs."
