Using emulation to debug software and hardware at the same time

Today’s system-on-chip (SoC) designs aren’t just hardware anymore. In the past, the creation of hardware chips was separate from the creation of the software to be executed on those chips, but today an SoC isn’t complete until you’ve proven that the intended software works – and works well – on the platform. In other words, today the SoC itself is a full-fledged embedded system.
This comes with a benefit. In the past, if there was a hardware problem, the software programmer had to figure out how to code around it because the hardware was already done. By validating the software before declaring an SoC complete, you get an opportunity to fix hardware issues before they’re cast into silicon. But that benefit comes with a challenge: a bug may have a hardware cause or a software cause, and you can’t assume either side is correct.
Hardware debug before tape-out is traditionally done using simulation. Various other hardware validation strategies involving things like formal verification have augmented simulation to increase basic coverage and ensure that corner cases aren’t missed, but for debugging, simulation maintains its central role.
Software debug is traditionally done using debug engines, one per core. These take advantage of hardware features that provide some level of visibility into, and control over, the goings-on inside a processor. While there is a basic set of expected debug capabilities, your ability to diagnose issues is limited by the kind of access the processor provides.
Traditional software debug also typically happens on the actual system, so you’re executing real code on real hardware at target system speeds. This allows you to get through large volumes of code to an offending routine relatively quickly.
These traditional techniques break down when debugging an SoC. Because there is no real hardware, code cannot be executed at true system speed. The hardware can theoretically be simulated as code is executed, with the benefit that you get all of the hardware visibility provided by the simulator: nothing is hidden. The problem is speed: this is an inordinately slow way to debug code.
If your SoC will be running programs on top of Linux, for example, you have to complete the Linux boot – billions of clock cycles – before your software can even begin executing. It is estimated that, at typical simulation speeds (around 10 Hz equivalent), a complete Linux boot would take over 28 years.
This is where emulation becomes critical. With emulation, the SoC design is mapped onto real hardware – typically an FPGA or some other programmable element – giving it much higher speed. On such a system, the Linux boot can be accomplished in as little as 15 minutes, depending on the actual speed achieved.
Critically, however, emulators provide the kind of control and visibility that you get in a hardware debugger. You get the best of both software and hardware debug worlds: breakpoints and waveforms both play a part.
Traditional hardware and software debuggers, however, know nothing about each other. It would be inefficient if the two types of debug always had to be done independently, going back and forth and rerunning the software over and over in an attempt to locate a problem.
Allowing the two to work together, as if they were in sync, is highly preferable, and this can be done on an emulator. There are at least three ways to use hardware and software debuggers together in a manner that gets the most information from a single run.
The first relies on the software debugger as the primary tool. The idea is to run the program to the point where things start going awry, and then bring the hardware debugger into play. So you set two breakpoints in your software, one each for the start and stop points of the waveform dump, and then start executing. At the first breakpoint, everything comes to a stop, and you can set up the hardware debugger to start dumping signals for waveform viewing. Execution then resumes, proceeding to the second breakpoint, where you turn the dumping off (and finish execution if desired).
You can also set up the hardware debugger to trigger on a certain condition – say, when a particular function is reached. This is made possible by “software symbol awareness,” whereby software names can be used in the hardware debugger, and the translation between the software symbol – for instance, from function name to program counter value – happens automatically.
Software symbol awareness requires that the software be compiled with the debug flag on. A variety of tools can then extract those symbols for use by the emulator. Of course, that only helps with static symbols – like global or static variables or functions. Dynamically allocated items, like local variables and parameters, can be located by calculating their positions relative to the stack pointer.
By pre-defining when signal dumping should start and stop, you can start the execution from the software debugger, and the hardware debugger will automatically create the waveform.
The second way of having the two debuggers work together is by instrumenting the hardware. You can use assertions or additional logic to force some sort of activity – print a message, set another value, anything that can be done in hardware or via an assertion.
Both of these approaches use the software debugger to set things up and then see what happens in hardware. The third approach also assumes you run the software up to a point of interest before breaking, but at that point you can use the hardware debugger to force various values in the hardware, and then continue running to see what happens on the software side of things.