Modeling embedded systems - Part 4: If prototypes aren’t possible

Shelley Gretlein, National Instruments

November 21, 2012

Editor’s Note: In Part 4 of a series of tutorials on modeling tools, Shelley Gretlein of National Instruments considers situations in which traditional hardware prototyping is not possible and how modeling can be a viable option.

Sometimes you must model because you simply can’t prototype or iterate on your design. Consider a project in which the embedded system doesn’t yet exist, where you are designing for hardware that is not complete or ready, such as the latest chip, and you are designing to specifications instead of silicon. This is a case where modeling, simulation, emulation, and later prototyping are a valuable approach.

One example comes from National Instruments (NI), where an embedded team was designing for unreleased hardware. The team needed to move a significant portion of its core product designs to a new MPU+FPGA+I/O architecture before the integrated silicon was on the market. The engineers saw the value of this new technology early through confidential interactions with the silicon vendor, and this early access allowed them to plan for the technology upgrade in their embedded systems. They worked closely with the vendor throughout the entire process – an important point in such situations.

Throughout these discussions, the vendor did an excellent job of setting up a development platform that the NI embedded team could use as an emulation of the final architecture, including a fixed-personality FPGA intended to behave like the final FPGA fabric in the eventual design.
This development platform (Figure 21) was a valuable board for early prototyping, design, and test. However, it was certainly not an exact stand-in for the final product; there were several subtle and a few substantial differences. These discrepancies were well documented by the vendor, so there were no surprises, but the engineers still needed to understand and optimize around the substantial differences as they developed, so there was a bit of work to do.




Figure 21 - Example of a silicon prototype board (courtesy of LogicBricks) versus a final hardware design (NI CompactRIO courtesy of National Instruments).

The most important shortcoming of the development board was that, instead of the single, high-performance FPGA fabric of the final design, the NI engineers were designing to a system with multiple FPGAs. The communication delay between the devices caused a significant performance degradation: the development board couldn’t replicate an entire system running at 40 MHz, reaching only 10 MHz, which made it unsuitable for accurately timing and testing application performance. This is a good example of how no model, even a physical emulation platform, is 100% accurate, but if you understand its shortcomings, it is still very useful.

To compensate for the shortcomings of the model, the team chose to include a software-based design in the development. They first found off-the-shelf boards with CPU characteristics similar to the final product in terms of performance and architecture (multicore ARM designs close to the final design). This hybrid design approach provided a closer match in terms of floating-point performance.

To accommodate the FPGA design aspects, the team built a software-based environment on the real-time CPU that allowed FPGA code to be deployed to a real-time target and behave similarly, in terms of timing, to the final FPGA fabric. Results from this environment could then be cross-referenced against the slower but more accurate multi-FPGA board from the vendor.
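
The article doesn’t show the team’s emulation code, but a minimal sketch of the underlying idea (running the logic destined for the FPGA fabric inside a fixed-rate, software timed loop on a real-time CPU) might look like the following C++ fragment. All names, the filter logic, and the 10 kHz loop rate are illustrative assumptions, not NI’s implementation:

    // Minimal sketch: emulate a fixed-rate "FPGA loop" as a software timed loop
    // on a real-time CPU. Names, rates, and logic are illustrative, not NI's code.
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <thread>

    // Stand-in for logic that would eventually live in the FPGA fabric;
    // here it just applies a crude low-pass filter to a simulated input.
    static int32_t fpga_cycle(int32_t input)
    {
        static int32_t accumulator = 0;
        accumulator += (input - accumulator) / 4;
        return accumulator;
    }

    int main()
    {
        using clock = std::chrono::steady_clock;

        // Assumed loop rate: a 10 kHz software loop standing in for logic that
        // would run at 40 MHz in the final FPGA fabric.
        constexpr auto period = std::chrono::microseconds(100);

        auto next_wakeup = clock::now();
        unsigned overruns = 0;

        for (int i = 0; i < 10000; ++i) {
            const int32_t output = fpga_cycle(1000);
            (void)output;                       // would drive emulated I/O here

            next_wakeup += period;
            if (clock::now() > next_wakeup) {
                ++overruns;                     // the loop missed its deadline
                next_wakeup = clock::now();     // resynchronize and keep going
            } else {
                std::this_thread::sleep_until(next_wakeup);
            }
        }

        std::cout << "Deadline overruns: " << overruns << "\n";
        return 0;
    }

Counting missed deadlines is one simple way to judge how faithfully the software stand-in reproduces the timing behavior the final fabric is expected to deliver.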

It's important to note that the team chose not to simulate the entire hardware platform using cycle- or even instruction-accurate simulation tools. They deemed “stand-in” hardware platforms with sufficiently similar characteristics to be “good enough” and the most efficient approach for development. The team was confident in this approach because it knew that, once real silicon materialized, the final implementation could leverage the early design and full test frameworks developed on the hardware stand-ins.

If you find yourself in a similar situation, designing for a target that isn’t on the market yet, you can approach your design the same way:
  • Work closely with your vendor to understand what features and capabilities the future platform will have, and to understand the differences in performance and hardware architecture.
  • Select an existing, off-the-shelf platform similar to the future target for early design and development.
  • Once you run into the performance or feature limitations of that existing platform (the areas where the new design will differ and add more value), select a surrogate hardware and software emulation platform for designing to those new, unreleased capabilities. This step requires additional software work to create the simulated or emulated environment.

Even with these well-planned steps, there will be differences, and there will be additional development once you get your final hardware device. But if you focus on proper design techniques and clear abstraction boundaries, you can protect large portions of your algorithms and concentrate on optimizing timing and specific I/O features once you get your first prototypes up and running.
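
The article doesn’t prescribe a particular mechanism for those abstraction boundaries, but one common way to draw them is to put all target-specific I/O behind an interface so the algorithm code never touches the hardware directly. A minimal C++ sketch, with hypothetical names and an analog-input channel chosen purely for illustration:

    // Minimal sketch of an abstraction boundary between algorithm code and
    // target-specific I/O. All names are hypothetical, not from the article.
    #include <iostream>
    #include <memory>

    // The algorithm only ever sees this interface.
    class AnalogInput {
    public:
        virtual ~AnalogInput() = default;
        virtual double read_volts() = 0;
    };

    // Implementation used on the surrogate/emulation platform; it returns
    // canned or simulated data instead of touching real hardware.
    class EmulatedAnalogInput : public AnalogInput {
    public:
        double read_volts() override { return 1.25; }
    };

    // When the final silicon arrives, only a second implementation is added:
    // class TargetAnalogInput : public AnalogInput { ... };

    // Algorithm code is written once, against the interface, and is protected
    // from the hardware swap.
    double average_volts(AnalogInput& in, int samples)
    {
        double sum = 0.0;
        for (int i = 0; i < samples; ++i)
            sum += in.read_volts();
        return sum / samples;
    }

    int main()
    {
        std::unique_ptr<AnalogInput> input = std::make_unique<EmulatedAnalogInput>();
        std::cout << "Average: " << average_volts(*input, 100) << " V\n";
        return 0;
    }

Swapping EmulatedAnalogInput for a hardware-backed implementation is then the only change required when the first real prototypes arrive; the averaging algorithm, and anything built on top of it, stays untouched.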

Beyond unreleased silicon, perhaps you can’t prototype because of the size or cost of the project. If you are creating a control system for a new light rail system, for example, you can’t work on prototypes until late in the game, and you certainly don’t want to experiment on the real thing. These are situations where software models can be very helpful.

But Aren’t All Models Wrong?
“All models are wrong, but some are useful” is an aphorism generally attributed to statistician George Box. Whether you are new to modeling or have been designing embedded systems for decades, this warning pertains to us all. No matter how carefully or completely you model a system, the model will always be less than the reality it represents.

You may remember when Boeing’s 787 Dreamliner was introduced. This aerospace innovation was one of the most exciting technology introductions in modern times. The mid-sized, twin-engine jet airliner developed by Boeing Commercial Airplanes is composed, by weight, of 50% composite (carbon fiber), 20% aluminum, 15% titanium, 10% steel, and 5% other materials; by volume, the aircraft is 80% composite. Each 787 contains approximately 35 tons of carbon fiber reinforced plastic.

What does this have to do with modeling? Paolo Feraboli, an assistant professor at the University of Washington and director of its Automobili Lamborghini Advanced Composite Structures Laboratory (“Is the carbon fiber 787 Dreamliner safe enough to fly?”), was extensively involved in the Dreamliner design and had this to say about modeling the 787: “Unlike homogeneous metals, multi-layered composites are very difficult to simulate accurately on a computer. We don’t currently have the knowledge and the computational power to do a prediction based on purely mathematical models.” Thus new, innovative materials require modeling AND prototyping in order to be designed and tested.


Figure 22 - Boeing's 787 Dreamliner was so innovative in terms of materials that the embedded designers needed to model AND prototype to truly understand the behavior of the design.

This doesn’t mean models are useless. If done well, models can help in all of the situations covered in this series. You just don’t want to rely on models alone for your embedded system design.

From the examples you understand how useful models are, and then we tell you they are all wrong; what are you to do? Embrace another useful idea from author Jim Collins: “the genius of the and.” You must model your system AND combine it with the ‘real world’. Often the best way to combine your theories and requirements with the real world is to create a prototype.

This approach – modeling, simulation, and prototyping – is also critical in complex mechatronic systems such as robotics. Fred Nikgohar, CEO of RoboDynamics, points out the value of the real world. Building robots is a multidisciplinary project involving not just software but also mechanics, electronics, and integration: “Integration is often the after-thought in building robots. It is that final step in the design process where all the disciplines come together and create a robot greater than the sum of its engineered parts. It is also the step where things that worked in isolation often fail. And worse, troubleshooting becomes enormously more difficult because if the robot doesn’t behave as planned, you have to troubleshoot throughout the engineered chain…”.

He points out that a major challenge in robotics is the number of ideas that never come to fruition. “The real world is… very real! Wires come loose, mechanical parts bend, even firmware uploads fail sometimes. By sheer necessity to create robot engineering efficiencies, we have developed testing plans, troubleshooting plans, and even simulation runs to make things go further and smoother. But nothing has been more valuable than the actual experience of building robots.” This applies to all embedded system design. Nothing is more valuable than experiencing your design under real-world constraints and in real-world situations. You must model and prototype to perfect your design.
