HW/SW co-verification basics: Part 1 – Determining what & how to verify

The process of embedded system design generally starts with a set of requirements for what the product must do and ends with a working product that meets all of the requirements. Figure 6.1 below lists the steps in the process and gives a short summary of what happens at each stage of the design.

The requirements and product specification phase documents and defines the required features and functionality of the product. Product requirements can be documented by marketing, sales, engineering, or any other individuals who are experts in the field and understand what customers need and will buy to solve a specific problem.

Capturing the correct requirements gets the project off to a good start, minimizes the chances of future product modifications, and ensures there is a market for the product if it is designed and built. Good products solve real needs, have tangible benefits, and are easy to use.

Figure 6.1: Embedded System Design Process

System Architecture
System architecture defines the major blocks and functions of the system. Interfaces, bus structure, hardware functionality, and software functionality are determined. System designers use simulation tools, software models, and spreadsheets to determine the architecture that best meets the system requirements. System architects provide answers to questions such as, “How many packets/sec can this router design handle?” or “What is the memory bandwidth required to support two simultaneous MPEG streams?”
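
To make the flavor of these architectural questions concrete, the back-of-envelope arithmetic behind the MPEG question might look like the C sketch below. Every number in it (frame size, frame rate, bytes per pixel, memory touches) is an illustrative assumption, not a figure from this article:

#include <stdio.h>

/* Rough sizing of the question "What is the memory bandwidth
 * required to support two simultaneous MPEG streams?"
 * All parameters are illustrative assumptions. */
int main(void)
{
    const double width   = 720.0;   /* assumed frame width (pixels)  */
    const double height  = 480.0;   /* assumed frame height (pixels) */
    const double fps     = 30.0;    /* assumed frame rate            */
    const double bpp     = 1.5;     /* YUV 4:2:0 bytes per pixel     */
    const double streams = 2.0;     /* two simultaneous streams      */
    /* Assume each decoded pixel is written once and read roughly
     * twice (display refresh plus motion-compensation references). */
    const double touches = 3.0;

    double bytes_per_sec = streams * width * height * bpp * fps * touches;
    printf("Approximate DRAM bandwidth: %.1f MB/s\n", bytes_per_sec / 1e6);
    return 0;
}

Under these assumptions the answer comes out near 93 MB/s; an architect would then add margin and compare it against candidate memory systems.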

Microprocessor Selection. One of the most difficult steps in embedded system design can be the choice of the microprocessor. There are an endless number of ways to compare microprocessors, both technical and nontechnical. Important factors include performance, cost, power, software development tools, legacy software, RTOS choices, and available simulation models.

Benchmark data is generally available, though apples-to-apples comparisons are often difficult to obtain. Creating a feature matrix is a good way to sift through the data to make comparisons. Software investment is a major consideration when deciding whether to switch processors. Embedded guru Jack Ganssle says the rule of thumb is to decide if 70% of the software can be reused: if so, don't change the processor.

Most companies will not change processors unless there is something seriously deficient with the current architecture. When in doubt, the best practice is to stick with the current architecture.
Hardware Design. Once the architecture is set and the processor(s) have been selected, the next step is hardware design: component selection, Verilog and VHDL coding, synthesis, timing analysis, and physical design of chips and boards.

The hardware design team will generate some important data for the software team, such as the CPU address map(s) and the register definitions for all software-programmable registers. As we will see, the accuracy of this information is crucial to the success of the entire project.
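
As a hypothetical illustration of that hand-off, the address map and register definitions might be delivered to the software team as a C header like the sketch below. The peripheral, addresses, and bit fields are invented for this example, not taken from any real design:

/* soc_regs.h -- hypothetical SoC address map and register
 * definitions produced by the hardware team for the software team */

#define SOC_SRAM_BASE   0x00000000u   /* on-chip SRAM    */
#define SOC_UART0_BASE  0x10000000u   /* UART controller */
#define SOC_DMA0_BASE   0x10001000u   /* DMA controller  */

/* UART0 register addresses */
#define UART_DATA   (SOC_UART0_BASE + 0x0u)  /* TX/RX data        */
#define UART_STAT   (SOC_UART0_BASE + 0x4u)  /* status, read-only */
#define UART_CTRL   (SOC_UART0_BASE + 0x8u)  /* control           */

/* Register bit fields */
#define UART_STAT_TX_READY  (1u << 0)  /* TX can accept a byte */
#define UART_CTRL_ENABLE    (1u << 0)  /* enable the UART      */

A single wrong offset or bit position in a file like this can cost days in the lab, which is why the accuracy of this hand-off matters so much.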

Software Design. Once the memory map is defined and the hardware registers are documented, work begins to develop many different kinds of software. Examples include boot code to start up the CPU and initialize the system, hardware diagnostics, real-time operating system (RTOS), device drivers, and application software. During this phase, tools for compilation and debugging are selected and coding is done.
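
Continuing the hypothetical register map above, the earliest hardware-dependent software, boot code that brings up a console, might start out as a sketch like this (same invented registers, not from any real design):

#include <stdint.h>

/* Invented register addresses, matching the map sketched earlier */
#define UART_DATA  0x10000000u
#define UART_STAT  0x10000004u
#define UART_CTRL  0x10000008u
#define UART_STAT_TX_READY (1u << 0)
#define UART_CTRL_ENABLE   (1u << 0)

/* Memory-mapped register access */
#define REG32(addr) (*(volatile uint32_t *)(uintptr_t)(addr))

static void uart_init(void)
{
    REG32(UART_CTRL) = UART_CTRL_ENABLE;           /* enable the UART */
}

static void uart_putc(char c)
{
    while (!(REG32(UART_STAT) & UART_STAT_TX_READY))
        ;                                          /* wait for TX ready */
    REG32(UART_DATA) = (uint32_t)c;
}

void boot_main(void)
{
    uart_init();
    uart_putc('!');  /* earliest possible sign of life on the console */
    /* ... hardware diagnostics, RTOS startup, drivers, application ... */
}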

Hardware and Software Integration. The most crucial step in embedded system design is the integration of hardware and software. Somewhere during the project, the newly coded software meets the newly designed hardware. How and when hardware and software will meet for the first time to resolve bugs should be decided early in the project. There are numerous ways to perform this integration. Doing it sooner is better than later, though it must be done smartly to avoid wasted time debugging good software on broken hardware or debugging good hardware running broken software.

Two important concepts of integrating hardware and software are verification and validation. These are the final steps to ensure that a working system meets the design requirements.

Verification: Does It Work?
Embedded system verification refers to the tools and techniques used to verify that a system does not have hardware or software bugs. Software verification aims to execute the software and observe its behavior, while hardware verification involves making sure the hardware performs correctly in response to outside stimuli and the executing software.

The oldest form of embedded system verification is to build the system, run the software, and hope for the best. If by chance it does not work, try to do what you can to modify the software and hardware to get the system to work.

This practice is called testing and it is not as comprehensive as verification. Unfortunately, finding out what is not working while the system is running is not always easy. Controlling and observing the system while it is running may not even be possible.

To cope with the difficulties of debugging the embedded system, many tools and techniques have been introduced to help engineers get embedded systems working sooner and in a more systematic way. Ideally, all of this verification is done before the hardware is built. The earlier in the process problems are discovered, the easier and cheaper they are to correct. Verification answers the question, “Does the thing we built work?”

Validation: Did We Build the Right Thing?
Embedded system validation refers to the tools and techniques used to validate that the system meets or exceeds the requirements. Validation aims to confirm that the requirements in areas such as functionality, performance, and power are satisfied. It answers the question, “Did we build the right thing?” Validation confirms that the architecture is correct and the system is performing optimally.

I once worked on an embedded project that used a common MIPS processor and a real-time operating system (RTOS) for system software. For various reasons, it was decided to change the RTOS for the next release of the product. The new RTOS was well suited to the hardware platform, and the engineers were able to bring it up without much difficulty.

All application tests appeared to function properly and everything looked positive for an on-schedule delivery of the new release. Just before the product was ready to ship, it was discovered that the applications were running about 10 times slower than with the previous RTOS.

Suddenly, panic set in and the project schedule was in danger. The software engineers who wrote the application software struggled to figure out why performance was so much lower, since not much had changed in the application code. Hardware engineers tried to study the hardware behavior, but using logic analyzers, which are better suited to triggering on errors than to providing wide visibility over a long span of time, it was difficult to even decide where to look.

The RTOS vendor provided most of the system software and so there was little source code to study. Finally, one of the engineers had a hunch that the cache of the MIPS processor was not being properly enabled. This indeed turned out to be the case, and after the problem was corrected, system performance was confirmed. This example demonstrates the importance of validation. Like verification, it is best to do this before the hardware is built. Tools that provide good visibility make validation easier.

Human Interaction
Embedded system design is more than a robotic process of executing steps in an algorithm to define requirements, implement hardware, implement software, and verify that it works. There are numerous human aspects that play an important role in the success or failure of a project.

The first place to look is the organizational structure of the project teams. There are two commonly used structures. Figure 6.2 below shows a structure with separate hardware and software teams, whereas Figure 6.3 below shows a structure with one group of combined hardware and software engineers that share a common management team.

Figure 6.2: Management Structure with Separate Engineering Teams
Separate project teams make sense in markets where time-to-market is less critical. Staggering the project teams so that the software team is always one project behind the hardware team can be used to increase efficiency. This way, the software team always has available hardware before they start any software integration phase.

Once the hardware is passed to the software engineers, the hardware engineers can go on to the next project. This structure avoids having the software engineers sitting around waiting for hardware.

A combined project team is most efficient for addressing time-to-market constraints. The best situation to work under is a common management structure that is responsible for project success, not just one area such as hardware engineers or software engineers. Companies that are running most efficiently have removed structural barriers and work together to get the project done. In the end, the success of the project is based on the entire product working well, not just the hardware or software.

Figure 6.3: Management Structure with Combined Engineering Teams
I once worked in a company that totally separated hardware and software engineers. There was no shared management. When the prototypes were delivered and brought up in the lab, the manager of each group would pace back and forth trying to determine what worked and what was broken.

What usually ended up happening was that the hardware engineer would tell his manager that there was something wrong with the software just to get the manager to go away. Most engineers prefer to be left alone during these critical project phases.

There is nothing worse than a status meeting to report that your design is not working when you could be working to fix the problems instead of explaining them. I do not know what the software team was communicating to its management, but I envision it was something about the hardware not working or the inability to get time to use the hardware. At the end of the day, the two managers probably went to the CEO to report that the other group was still working to fix its bugs.

Everybody has a role to play on the project team. Understanding the roles and skills of each person as well as the personalities makes for a successful project as well as an enjoyable work environment. Engineers like challenging technical work.

I have no data to confirm it, but I think more engineers seek new employment because of difficulties with the people they work with or the morale of the group than because they are seeking new technical challenges.

A recent survey into embedded systems projects found that more than 50% of designs are not completed on time. Typically, those designs are 3 to 4 months off the pace, while project cancellations average 11-12%, and average time to cancellation is 4-and-a-half months (Jerry Krasner of Electronics Market Forecasters, June 2001).

Hardware/software co-verification aims to verify that embedded system software executes correctly on a representation of the hardware design. It performs early integration of software with hardware, before any chips or boards are available.

The primary focus here is on system-on-a-chip (SoC) verification techniques. Although all embedded systems with custom hardware can benefit from co-verification, the area of SoC verification is most important because it involves the most risk and is positioned to reap the most benefit. The ARM architecture is the most common microprocessor used in SoC design and serves as a reference to teach many of the concepts discussed here.

The basics of co-verification
Although hardware/software co-verification has been around for many years, over the last few years it has taken on increased importance and has become a verification technique used by more and more engineers. The trend toward greater system integration, driven by the demand for low-cost, high-volume consumer products, has led to the development of the system-on-a-chip (SoC).

The SoC is defined as a single chip that includes one or more microprocessors, application-specific custom logic functions, and embedded system software. Including microprocessors and DSPs inside a chip has forced engineers to consider software as part of the chip's verification process in order to ensure correct operation.

The techniques and methodologies of hardware/software co-verification allow projects to be completed in a shorter time and with greater confidence in the hardware and software. In studies such as those reported in EE Times, a good number of engineers have reported spending more than one-third of their day on software tasks, especially integrating software with new hardware.

This statistic reveals that the days of throwing the hardware over the cubicle wall to the software engineers are gone. In the future, hardware engineers will continue to spend more and more time on software-related issues. This chapter presents an introduction to commonly used co-verification techniques.

Some Co-verification history
Co-verification addresses one of the most critical steps in the embedded system design process, the integration of hardware and software. The alternative to co-verification has always been to simply build the hardware and software independently, try them out in the lab, and see what happens. When the PCI bus began supporting automatic configuration of peripherals without the need for hardware jumpers, the term plug-and-play became popular.

Around the same time, I was working on projects that simply built hardware and software independently and resolved the differences in the lab. This technique became known as plug-and-debug. It is an expensive and very time-consuming effort.

For hardware designs that put off-the-shelf components on a board, it may be possible to do some rework on the board or change some programmable logic if problems with the interaction of hardware and software are found. Of course, there is always the “software workaround” to avoid aggravating hardware problems.

As integration continued to increase, something more was needed to perform integration earlier in the design process. The solution is co-verification. Co-verification has its roots in logic simulation.

The HDL logic simulator has been used since the early 1990s as the standard way to execute the representation of the hardware before any chips or boards are fabricated. As design sizes have increased and logic simulation has not provided the necessary performance, other methods have evolved that involve some form of hardware to execute the hardware design description. Examples of hardware methods include simulation acceleration, emulation, and prototyping. Here we will examine each of these basic execution engines as a method for co-verification.

Co-verification borrows from the history of microprocessor design and verification. In fact, logic simulation history is much older than the products we think of as commercial logic simulators today. The microprocessor verification application is not exactly co-verification, since we normally think of the microprocessor as a known good component that is put into an embedded system design, but nevertheless, microprocessor verification requires a large amount of software testing for the CPU to be successfully verified.

Microprocessor design companies have done this level of verification for many years. Companies designing microprocessors cannot commit to a design without first running many sequences of instructions ranging from small tests of random instruction sequences to booting an operating system like Windows or UNIX. This level of verification requires the ability to simulate the hardware design and have methods available to debug the software sequences when problems occur. As we will see, this is a kind of co-verification.

I became interested in co-verification after spending many hours in a lab trying to integrate hardware and software. I think it was just too many days of logic analyzer probes falling off, failed trigger conditions, making educated guesses about what might be happening, and sometimes just plain trial-and-error. I decided there must be a better way, one that would let me sit in a quiet, air-conditioned cubicle and figure out what was happening. Fortunately for me, there were better ways, and I was lucky enough to get jobs working on some of them.

The first commercial co-verification tools
The first two commercial co-verification tools specifically targeted at solving the hardware/software integration problem for embedded systems were Eaglei from Eagle Design Automation and Seamless CVE from Mentor Graphics. These products appeared on the market within six months of each other in the 1995-1996 time frame, and both were created in Oregon. Eagle Design Automation Inc. was founded in 1994 and located in Beaverton.

The Eagle product was later acquired by Synopsys, became part of Viewlogic, and was finally killed by Synopsys in 2001 due to lack of sales. In contrast, Mentor Seamless produced consistent growth and established itself as the leading co-verification product. Others followed that were based on similar principles, but Seamless has been the most successful of the commercial co-verification tools. Today, Seamless is the only product listed in market share studies for hardware/software co-verification by analysts such as Dataquest.

The first published article about Seamless was in 1996, at the 7th IEEE International Workshop on Rapid System Prototyping (RSP '96). The title of the paper was “Miami: A Hardware Software Co-simulation Environment.” In this paper, Russ Klein documented the use of an instruction set simulator (ISS) co-simulating with an event-driven logic simulator. As we will see in this chapter, the paper also detailed an interesting technique of dynamically partitioning the memory data between the ISS and logic simulator to improve performance.

I was fortunate to meet Russ a few years later in the Minneapolis airport and hear the story of how Seamless (or maybe it's Miami) was originally prototyped. When he first got the idea for a product that combined the ISS (a familiar tool for software engineers) with the logic simulator (a familiar tool for hardware engineers) and used optimization techniques to increase performance from the view of the software, the value of such an idea wasn't immediately obvious.

To investigate the idea in more detail, he decided to create a prototype to see how it worked. Testing the prototype required an instruction set simulator for a microprocessor, a logic simulation of a hardware design, and software to run on the system. He decided to base the prototype on the old CP/M personal computer he had used back in college. CP/M was the operating system that later evolved into DOS back around 1980.

The machine used a Z80 microprocessor; software located in ROM started execution and then moved on to a floppy disk to boot the operating system (much like today's PC BIOS). Of course, none of the source code for the software was available, but Russ was able to extract the data from the ROM and the first couple of tracks of the boot floppy using programs he wrote. From there he was able to get it into a format that could be loaded into the logic simulator.

Working on this home-brew simulation, he performed various experiments to simulate the operation of the PC, and in the end concluded that this was a valid co-simulation technique for testing embedded software running on simulated hardware. Eventually the simulation was able to boot CP/M and used a model of the keyboard and screen to run a Microsoft Basic interpreter that could load Basic programs and execute them. In certain modes of operation, the simulation ran faster than the actual computer!

Russ turned his work into an internal Mentor project that would eventually become a commercial EDA product. In parallel, Eagle produced a prototype of a similar tool. While Seamless started with the premise of using the ISS to simulate the microprocessor internals, Eagle started using native-compiled C programs with special function calls inserted for memory accesses into the hardware simulation environment. At the time, this strategy was thought to be good enough for software development and easier to proliferate, since it did not require a full instruction set simulator for each CPU, only a bus functional model.
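
A minimal sketch of this host-code style is shown below. The software is compiled natively for the workstation, and every hardware access is rewritten as an explicit call that the tool forwards to a bus functional model in the logic simulator. The function names hw_read32 and hw_write32 are invented for illustration; they are not Eaglei's actual API:

#include <stdint.h>

/* Provided by the (hypothetical) co-verification library; each call
 * is forwarded to a bus functional model in the logic simulator. */
uint32_t hw_read32(uint32_t addr);
void     hw_write32(uint32_t addr, uint32_t value);

#define UART_CTRL          0x10000008u
#define UART_STAT          0x10000004u
#define UART_STAT_TX_READY (1u << 0)

static void uart_enable(void)
{
    /* On real hardware this would be a volatile pointer dereference;
     * here it must be rewritten as an explicit simulator call. */
    hw_write32(UART_CTRL, 1u);
}

static int uart_tx_ready(void)
{
    return (hw_read32(UART_STAT) & UART_STAT_TX_READY) != 0;
}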

The founders of Eagle, Gordon Hoffman and Geoff Bunza, were looking for larger EDA companies to market and sell Eaglei (and possibly buy their startup company). After they pitched the product to Mentor Graphics, Mentor was faced with a build-versus-buy decision.

Should they continue with the internal development of Seamless or should they stop development and partner or acquire the Eagle product? According to Russ, the decision was not an easy one and went all the way to Mentor CEO Wally Rhines before Mentor finally decided to keep the internal project alive. The other difficult decision was to decide whether to continue the use of instruction set simulation or follow Eagle into host-code execution when Eagle already had a lead in product development.

In the end, Mentor decided to allow Eagle to introduce the first product into the market and confirmed their commitment to instruction set simulation with the purchase of Microtec Research Inc., an embedded software company known for its VRTX RTOS, in 1996. The decision meant Seamless was introduced six months after Eagle, but Mentor bet that the use of the ISS would be a differentiator that would enable them to win in the marketplace.

Another commercial co-verification tool that took a different road to market was V-CPU. V-CPU was developed inside Cisco Systems about the same time as Seamless. It was engineered by Benny Schnaider, who was working for Cisco as a consultant in design verification, for the purpose of early integration of software running with a simulation of a Cisco router. Details of V-CPU were first published at the 1996 Design Automation Conference in a paper titled “Software Development in a Hardware Simulation Environment.”

As V-CPU was being adopted by more and more engineers at Cisco, the company was starting to worry about having a consultant as the single point of failure on a piece of software that was becoming critical to the design verification environment. Cisco decided to search the marketplace in hope of finding a commercial product that could do the job and be supported by an EDA vendor.

At the time there were two possibilities, Mentor Seamless and Eaglei. After some evaluation, Cisco decided that neither was really suitable since Seamless relied on the use of instruction set simulators and Eaglei required software engineers to put special C calls into the code when they wanted to access the hardware simulation.

In contrast, V-CPU used a technique that automatically captured the software accesses to the hardware design and required little or no change to the software. In the end, Cisco decided to partner with a small EDA company in St. Paul, MN, named Simulation Technologies (Simtech) and gave them the rights to the software in exchange for discounts and commercial support.
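
The article does not detail how V-CPU captured those accesses. One simple way to approximate the effect, assuming drivers already funnel device access through a macro, is a build-time switch like the sketch below, which leaves the driver source itself untouched (sim_bus_read32 and sim_bus_write32 are invented names, not V-CPU's API):

#include <stdint.h>

#ifdef COSIM
/* Co-verification build: route accesses to the simulated hardware.
 * These functions are hypothetical stand-ins for the tool's API. */
uint32_t sim_bus_read32(uint32_t addr);
void     sim_bus_write32(uint32_t addr, uint32_t value);
#define REG_RD(a)     sim_bus_read32(a)
#define REG_WR(a, v)  sim_bus_write32((a), (v))
#else
/* Target build: ordinary memory-mapped access. */
#define REG_RD(a)     (*(volatile uint32_t *)(uintptr_t)(a))
#define REG_WR(a, v)  (*(volatile uint32_t *)(uintptr_t)(a) = (v))
#endif

/* Driver code is identical in both builds: */
#define UART_CTRL 0x10000008u

static void uart_enable(void)
{
    REG_WR(UART_CTRL, 1u);  /* same source on target and in simulation */
}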

Dave Von Bank and I were the two engineers who worked for Simtech and worked with Cisco to take the internal tool and make it into a commercial co-verification tool, which was launched in 1997 at the International Verilog Conference (IVC) in Santa Clara. V-CPU is still in use today at Cisco. Over the years the software has changed hands many times and is now owned by Summit Design.

Defining co-verification
At the most basic level, HW/SW co-verification means verifying that embedded system software executes correctly on embedded system hardware. It means running the software on the hardware to make sure there are no hardware bugs before the design is committed to fabrication.

As we will see here, the goal can be achieved in many different ways, differentiated primarily by the representation of the hardware, the execution engine used, and how the microprocessor is modeled.

But more than this, a true co-verification tool also provides control and visibility for both software and hardware engineers and uses the types of tools they are familiar with, at the level of abstraction they are familiar with. A working definition is given in Figure 6.4 below.

Figure 6.4: Definition of Co-Verification
HW/SW co-verification is the process of verifying that embedded system software runs correctly on the hardware design before the design is committed for fabrication. Co-verification is often called virtual prototyping, since the simulation of the hardware design behaves like the real hardware but is often executed as a software program on a workstation.

This means that for a technique to be considered a co-verification product it must provide at least software debugging using a source code debugger and hardware debugging using waveforms, as shown in Figure 6.5 below.

Figure 6.5: Co-Verification Is about Debugging Hardware and Software
Using the definition given above, running software on any representation of the hardware that is not the final board, chip, or system qualifies as co-verification. This broad definition includes physical prototyping as co-verification as long as the prototype is not the final fabrication of the system and is available earlier in the design process.

A narrower definition of co-verification limits the hardware execution to the context of the logic simulator, but as we will see, there are many techniques that do not involve logic simulation and should be considered co-verification.

Benefits of Co-Verification
Co-verification provides two primary benefits. It allows software that is dependent on hardware to be tested and debugged before a prototype is available. It also provides an additional test stimulus for the hardware design.

This additional stimulus is useful to augment test benches developed by hardware engineers since it is the true stimulus that will occur in the final product. In most cases, both hardware and software teams benefit from co-verification. These co-verification benefits address the hardware and software integration problem and translate into a shorter project schedule, a lower cost project, and a higher quality product. The primary benefits of co-verification are:

1) Early access to the hardware design for software engineers
2) Additional stimulus for the hardware engineers

Project Schedule Savings
For project managers, the primary benefit of co-verification is a shorter project schedule. Traditionally, software engineers suffer because they have no way to execute the software they are developing if it interacts closely with the hardware design.

They develop the software but cannot run it, so they just sit and wait for the hardware to become available. After a long delay, the hardware is finally ready, and management is excited because the project will soon be working, only to find out there are many bugs in the software, since it is brand new and this is the first time it has been executed.

Figure 6.6: Project Schedule without Co-Verification
Co-verification addresses the problem of software waiting for hardware by allowing software engineers to start testing code much sooner. With the trivial bugs already flushed out, the project schedule improves because the amount of time spent in the lab debugging software is much less. Figure 6.6 above shows the project schedule without co-verification, and Figure 6.7 below shows the new schedule with co-verification and early access to the hardware design.

Figure 6.7: Project Schedule with Co-Verification
Co-Verification Enables Learning by Providing Visibility
Another greatly overlooked benefit of co-verification is visibility. There is no substitute for being able to run software in a simulated world and see exactly the correlation between hardware and software. We see what is really happening inside the microprocessor in a nonintrusive way and see what the hardware design is doing.

Not only is this useful for debugging, but it can be even more useful in providing a way to understand how the microprocessor and the hardware work. We will see in future examples that co-verification is an ideal way to really learn how an embedded system works.

Co-verification provides information that can be used to identify such things as bottlenecks in performance using information about bus activity or cache hit rates. It is also a great way to confirm the hardware is programmed correctly and operations are working as expected. When software engineers get into a lab setting and run code, there is really no way for them to see how the hardware is acting. They usually rely on some print statements to follow execution and assume if the system does not crash it must be working.

Co-Verification Improves Communication
For some projects, the real benefit of co-verification has nothing to do with early access to hardware, improved hardware stimulus, or even a shorter schedule. Sometimes the real benefit of co-verification is improved communication between hardware and software teams.

Many companies separate hardware and software teams to the extent that each does not really care about what the other one is doing, a kind of “not my problem” attitude. This results in negative attitudes and finger pointing. It may sound a bit farfetched, but sometimes the introduction of co-verification enables these teams to work together in a positive way and make a positive improvement in company culture. Figure 6.8 below shows what Brian Bailey, one of the early engineers on Seamless, had to say about communication:

Figure 6.8: Brian Bailey on Communication
Co-Verification versus Co-Simulation. A similar term to co-verification is co-simulation. In fact, the first paper published about Seamless used this term in the title. Co-simulation is defined as two or more heterogeneous simulators working together to produce a complete simulation result.

This could be an ISS working with a logic simulator, a Verilog simulator working with a VHDL simulator, or a digital logic simulator working with an analog simulator. Some co-verification techniques involve co-simulation and some do not.

Co-verification versus Codesign. Co-verification is often lumped together with codesign, but they are really two different things. Earlier, verification was defined as the process of determining something works as intended. Design is the process of deciding how to implement a required function of a system. In the context of embedded systems, design might involve deciding if a function should be implemented in hardware or software.

For software, design may involve deciding on a set of software layers to form the software architecture. For hardware, design may involve deciding how to implement a DMA controller on the bus and what programmable registers are needed to configure a DMA channel from software. Design is deciding what to create and how to implement it.

Verification is deciding if the thing that was implemented is working correctly. Some co-verification tools provide profiling and other feedback to the user about hardware and software execution, but this alone does not make them codesign tools since they can do this only after hardware and software have been partitioned.

Is Co-Verification Really Necessary?
After learning the definition of co-verification and its benefits, the next logical question is whether co-verification is really necessary. Theoretically, if the hardware design has no bugs and is perfect according to the requirements and specifications, then it really does not matter what the software does. In this situation, from the hardware engineer's point of view, there is no reason to execute the software before fabricating the design.

Similarly, software engineers may think that early access to hardware is a pain, not a benefit, since it will require extra work to execute software with co-verification. For some software engineers, no hardware equals no work to do. In addition, at these early stages the hardware may be still evolving and have bugs. There is nothing worse for software engineers than to try to run software on buggy hardware, since it makes isolating problems more difficult.

The point is that while individual engineers may think co-verification is not for them, almost every project with custom hardware and software will benefit from co-verification in some way. Most embedded projects do not get the publicity of an Intel microprocessor, but most of us remember the famous (or infamous) Pentium FDIV bug where the CPU did not divide correctly. Hardware always has bugs, software always has bugs, and getting rid of them is good.

To read Part 2, go to “Software-centric co-verification methods.”
To read Part 3, go to “Hardware-centric co-verification methods.”
To read Part 4, go to “Co-verification metrics.”

This series of articles by Jason Andrews is from “Embedded Software: Know It All,” edited by Jack Ganssle, used with permission from Newnes, a division of Elsevier. Copyright 2008. For more information about this title and other similar books, please visit www.elsevierdirect.com.

Jason Andrews, author of Co-Verification of Hardware and Software for ARM SoC Design, has implemented multiple commercial co-verification tools as well as many custom co-verification solutions. His experience in the EDA and embedded marketplace includes software development and product management at Verisity, Axis Systems, Simpod, Summit Design, and Simulation Technologies. He has presented technical papers and tutorials at the Embedded Systems Conference, Communication Design Conference, and IP/SoC, and has written numerous articles related to HW/SW co-verification and design verification. He has a B.S. in electrical engineering from The Citadel, Charleston, S.C., and an M.S. in electrical engineering from the University of Minnesota.
