What comes after the diagram? - Embedded.com


National Instruments’ realization that most measurement and control systems can be represented well by a data flow diagram has opened doors for many users. But it has created an ever-growing problem for the company’s R&D team: how do you synthesize a real-time system from the diagram? More particularly, how do you synthesize a system you can implement in affordable hardware supported by NI? Answering that question in the face of increasing sample rates and system complexity has led the company down a fascinating technological path.

Initially, NI compiled a software module from the user’s diagram. This was great so long as increasing CPU speeds and clever heuristics could keep the compiled code running fast enough to meet system deadlines, and advances in I/O bus technology could keep up with the data bandwidth between the sensors and actuators and the CPU. But that happy state had to end. NI responded on two fronts: optimizations and FPGAs.

The optimizations began in earnest, according to NI senior vice president of R&D Phil Hester, several years ago with the use of multithreading and multiprocessing to exploit the parallelism explicit in the data flow diagrams. The next step, according to R&D director David Fuller, was to completely reorganize the compilation process to improve optimization.
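The appeal of the approach is easy to see in miniature: in a data flow graph, any nodes whose inputs are all available are independent and can run at the same time. The sketch below (a toy illustration, not NI's implementation; the graph and node names are invented) dispatches ready nodes to a thread pool until the graph is exhausted.

```python
# Toy dataflow executor: nodes whose inputs are ready run concurrently.
# Graph and node names are hypothetical, for illustration only.
from concurrent.futures import ThreadPoolExecutor

# Each node: (function, list of input names).
graph = {
    "scale_a": (lambda a: a * 2.0, ["in_a"]),
    "scale_b": (lambda b: b + 1.0, ["in_b"]),
    "combine": (lambda x, y: x - y, ["scale_a", "scale_b"]),
}

def run(graph, inputs):
    results = dict(inputs)
    remaining = dict(graph)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # All nodes whose inputs are already computed are mutually
            # independent, so they can be submitted in parallel.
            ready = {n: fd for n, fd in remaining.items()
                     if all(d in results for d in fd[1])}
            futures = {n: pool.submit(f, *(results[d] for d in deps))
                       for n, (f, deps) in ready.items()}
            for n, fut in futures.items():
                results[n] = fut.result()
                del remaining[n]
    return results

out = run(graph, {"in_a": 3.0, "in_b": 4.0})
print(out["combine"])  # scale_a = 6.0, scale_b = 5.0, combine = 1.0
```

Here `scale_a` and `scale_b` execute in the same wave; `combine` waits only for its own inputs, which is exactly the parallelism the diagram makes explicit.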

Some time ago NI introduced an intermediate step in reducing the user’s graphic input to a design file: the Data Flow Intermediate Representation. This internally developed standard graph format, Fuller says, allows NI to apply standard optimizations to the data flow graph, and to use a standard set of heuristics to generate code threads.
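The payoff of a standard graph representation is that textbook optimizations become simple traversals. As a toy example (not DFIR itself; the graph shape is invented), dead-node elimination on a dataflow graph is just a reachability walk from the outputs:

```python
# Toy dead-node elimination on a dataflow graph (illustrative only).
# graph maps each node name to the list of nodes it consumes.
def eliminate_dead_nodes(graph, outputs):
    """Keep only nodes that the requested outputs transitively depend on."""
    live, stack = set(), list(outputs)
    while stack:
        n = stack.pop()
        if n in live or n not in graph:  # primary inputs have no entry
            continue
        live.add(n)
        stack.extend(graph[n])
    return {n: deps for n, deps in graph.items() if n in live}

# Hypothetical graph: "unused" feeds nothing the output needs.
g = {"sum": ["a", "b"], "unused": ["a"], "out": ["sum"]}
print(eliminate_dead_nodes(g, ["out"]))  # "unused" is dropped
```

Constant folding, common-subexpression elimination, and the thread-partitioning heuristics Fuller mentions all operate on the same graph structure.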

This year, the company added another level of intermediate representation, generating code not directly for the target CPU, but instead for the Low-Level Virtual Machine, a widely used representation of near machine-level code. The big advantage of LLVM is that it is an open standard, surrounded by a huge accumulation of open-source tools such as optimizers, code generators for various instruction sets, and utilities. Hence, by generating LLVM code from its data flow graphs, NI can employ the latest industry thinking in code optimizations and the best open-source code generators, without having to develop them itself.
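To make the idea concrete, here is what lowering a trivial two-input dataflow node to textual LLVM IR might look like. This sketch just formats the IR as a string; the function name and structure are illustrative, and the real LabVIEW/LLVM integration is of course far more involved.

```python
# Illustrative lowering of a two-input "add" dataflow node to textual
# LLVM IR. The generated IR is valid LLVM assembly for a function that
# returns the sum of two 32-bit integers.
def lower_add(name="add"):
    return "\n".join([
        f"define i32 @{name}(i32 %a, i32 %b) {{",
        "entry:",
        "  %sum = add i32 %a, %b",
        "  ret i32 %sum",
        "}",
    ])

print(lower_add())
```

Once the design is in this form, any of LLVM's existing backends can turn the same IR into optimized x86, Arm, or other machine code, which is precisely the leverage NI is after.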

NI’s second front has been growing use of FPGAs. Putting small FPGAs in its instruments allowed the company to eliminate I/O bus latencies from some critical loops by implementing some data paths and some state machines inside the instrument rather than back in the controller CPU. But the significance goes well beyond that.

“A lot of the code LabVIEW generates is parallel and has hard real-time constraints,” Hester says. “FPGAs provide a very natural environment for implementing these tasks.” This means LabVIEW must in some cases emit not microprocessor code in LLVM form, but rather VHDL for input to Xilinx’s FPGA tool chain.
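In other words, the back end of the compiler becomes a text generator for hardware description language rather than for machine instructions. As a rough illustration (the entity and port names are invented, not LabVIEW output), a code generator targeting an FPGA might emit something like this for a registered adder node:

```python
# Illustrative only: a generator that emits VHDL text for a registered
# adder, standing in for the kind of HDL a dataflow compiler produces.
def emit_vhdl_adder(entity="adder", width=8):
    return f"""\
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity {entity} is
  port (clk  : in  std_logic;
        a, b : in  unsigned({width - 1} downto 0);
        q    : out unsigned({width - 1} downto 0));
end entity;

architecture rtl of {entity} is
begin
  process (clk) begin
    if rising_edge(clk) then
      q <= a + b;  -- one addition per clock, in dedicated hardware
    end if;
  end process;
end architecture;
"""

print(emit_vhdl_adder())
```

The emitted VHDL then goes through the Xilinx synthesis flow like any hand-written design, which is where the long compile times discussed below come from.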

This flow has led to a close relationship between NI and Xilinx, allowing NI near-proprietary access to some features of the FPGAs. For example, as NI exploits larger FPGAs, compile times in the Xilinx tool chain have risen to hours, and sometimes tens of hours. This, while entirely reasonable to an ASIC design team, seems insupportable to LabVIEW users used to near-instant turnaround. To address the problem, NI has interfaced its libraries to Xilinx’s Core Generator utility, in effect shortcutting some of the compilation by use of existing IP. Further, NI is exploring use of Xilinx’s partial reconfiguration capability to essentially assemble pre-compiled IP blocks on the FPGA rather than compiling the entire design. But work in this direction is still far from conclusive.

The relationship may lead in another direction as well. Hester says both performance and cost demands are pushing NI to higher levels of integration. That has already led NI to examine reconfigurable hardware approaches to some designs. With rumors that Xilinx is researching, and perhaps developing, analog capabilities for future FPGAs, there would seem to be the possibility of monolithic analog measurement/control systems on FPGAs in the future. This could prove a great benefit to NI users, but it would once again compound the R&D team’s challenges.
