Algorithms are the heart and soul of any control or monitoring application – without “intelligence” integrated into the design, a device is simply a “dumb” node on the network. Adding signal processing and mathematics to embedded and distributed hardware adds significant value and can give devices entirely new functionality. As a result, the ability and freedom to easily design and iterate on algorithms is critical to innovation.
Engineers increasingly need processing closer to the actual plant or unit under test to speed performance and decentralize processing away from a central node. Many hardware platforms are available – PLCs, industrial PCs, custom controllers, and off-the-shelf devices such as programmable automation controllers (PACs) – but without the ability to program “intelligence” onto each distributed node, you simply have a collection of “dumb” nodes on the network that are slaves to a central server.
Providing intelligence on each distributed node also hardens the system against failures when a component of the network goes down, and it increases processing capability, since each node runs its algorithms independently instead of relying on one central server.
Signal processing and mathematical algorithms that can be programmed onto distributed devices add significant value to distributed and embedded systems, so the ability and freedom to design and iterate on those algorithms is critical to innovation. But how are these algorithms being developed? More importantly, how should they be developed?
Often, algorithms are designed in floating point by experts in a particular field (referred to below as domain experts) who understand the mathematics of an algorithm very well but may not understand the implications of deploying that algorithm to an embedded device. Deployed devices may be floating-point PowerPCs or even x86 processors, but they are more often fixed-point MPUs, DSPs or FPGAs.
Additionally, it is often not the algorithm developer who actually deploys the code onto the device; it is an embedded systems designer who understands the silicon very well but may not have a complete understanding of the algorithm.
This communication gap can create inefficiencies, and even flawed products, if not addressed appropriately. We will discuss how a common interface for interactive algorithm design reduces these inefficiencies and provides a single programming platform to design, prototype and deploy intelligent devices.
Rapid Innovation through Software Iterations
As stated earlier, the interaction between the domain expert designing the algorithm and the embedded systems designer needs to be improved, because the communication gap between the two leads to large inefficiencies. To solve these problems, both groups need a better tool – and ideally, the same tool.
A sufficiently sophisticated graphical programming approach provides intuitive models of computation for designing a variety of algorithms, as well as for implementing those algorithms on a processor.
As an example, let’s look at the process of designing a low-pass filter, similar to the one shown in Figure 1 below, to see how graphical programming combined with interactive tools can empower a domain expert to implement an optimized, high-performance filter without ever having to deal with the numerical complexities of the final fixed-point implementation.
|Figure 1: Frequency response of a low-pass filter|
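A frequency-response curve like the one in Figure 1 can be reproduced for any FIR coefficient set by evaluating the filter’s transfer function around the unit circle. Here is a minimal pure-Python sketch (the function name and example taps are our own illustrations, not part of any toolkit):

```python
import cmath
import math

def freq_response_db(coeffs, freq_hz, fs_hz):
    """Evaluate an FIR filter's magnitude response, in dB, at one frequency."""
    w = 2 * math.pi * freq_hz / fs_hz                 # digital frequency (rad/sample)
    h = sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(coeffs))
    return 20 * math.log10(max(abs(h), 1e-12))        # clamp to avoid log(0)

# A 5-tap moving average is a crude low-pass filter
taps = [0.2] * 5
print(freq_response_db(taps, 0, 1000.0))    # unity gain (0 dB) at DC
print(freq_response_db(taps, 400, 1000.0))  # 400 Hz falls on a null of this filter
```

Sweeping `freq_hz` from 0 to half the sampling rate traces out the kind of magnitude plot shown in the figure.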
In this example, a LabVIEW graphical toolchain guides the user through designing the algorithm in the following steps:
1) Configure the filter specification and visually analyze the filter performance characteristics.
2) Automatically parameterize the filter coefficients for fixed or floating point implementation.
3) If fixed point is needed, generate a fixed-point implementation model of the filter and analyze the characteristics of the parameterized filter in the time and frequency domains.
4) Simulate and compare implementation model to original specification on a graphical interface.
5) Generate efficient and optimized code for implementation for a fixed or floating-point platform.
Following these five steps, the domain expert is guided all the way from a theoretical filter specification to the implementation of the filter on a target platform. The tool assumes the burden of scaling appropriately across the floating- and fixed-point domains, so the embedded systems designer can focus on the desired filter behavior instead of low-level optimizations of the filter implementation. And because the final filter code is generated at the lowest level in the tool’s graphical language, the engineer retains complete freedom to ‘tweak’, further optimize or modify the filter.
|Figure 2: Configuring the low-pass filter and analyzing frequency response|
Figure 2 above shows an example of using the LabVIEW Digital Filter Design Toolkit for configuring and implementing a lowpass digital filter. To characterize the filter, we set the filter parameters using a LabVIEW function (or VI) that allows configuration through a dialog window.
Using the interactive Configure Classical Filter Design VI, we can select filter characteristics such as sampling frequency, passband edge frequency, passband ripple, stopband edge frequency, and stopband attenuation. We can also choose among various design methods, including but not limited to equiripple FIR, elliptic, and Butterworth.
Here we can visualize the filter performance through its frequency response and root-locus plot to verify the expected behavior of the floating-point filter. Once the filter is configured, we can run the VI and observe the magnitude response through a frequency sweep over the filter sampling rate.
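To make the design step concrete, a classical windowed-sinc low-pass design can be sketched in plain Python. The function name, window choice (Hamming) and parameters below are illustrative assumptions, not the toolkit’s API:

```python
import math

def design_lowpass_fir(num_taps, cutoff_hz, fs_hz):
    """Windowed-sinc low-pass FIR design using a Hamming window."""
    fc = cutoff_hz / fs_hz              # normalized cutoff, cycles per sample
    m = num_taps - 1                    # filter order
    taps = []
    for n in range(num_taps):
        k = n - m / 2.0
        # Ideal (sinc) low-pass impulse response; center tap handled separately
        ideal = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        taps.append(ideal * window)
    dc_gain = sum(taps)                 # normalize for 0 dB gain at DC
    return [t / dc_gain for t in taps]

# 31 taps, 100 Hz cutoff, 1 kHz sampling rate
taps = design_lowpass_fir(31, 100.0, 1000.0)
```

A dialog like the one in Figure 2 automates exactly these trade-offs – tap count, window choice and edge frequencies – without hand-coding them.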
After the filter is designed, we need to generate a fixed-point filter model to implement on a fixed-point platform, such as an FPGA. Here we add a function to automatically convert the filter to a fixed-point model.
|Figure 3: Converting floating point coefficients to fixed point and comparing fixed and floating point models in the frequency domain.|
Within the DFD FXP Modeling for CodeGen VI, shown in Figure 3 above, we can set the integer word length (iwl) for the model inputs and outputs and for the filter coefficients. When we run this VI, we can simulate and analyze the fixed-point filter for discrepancies in the frequency domain compared to the floating-point filter, and calculate overflows, underflows and zeros. We can use this interactive structure to find the smallest iwl that still maintains the integrity of the filter in the frequency domain.
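The interactive word-length search can be modeled in a few lines of plain Python. This sketch (illustrative names and a made-up 0.001 tolerance, not the toolkit’s API) quantizes FIR coefficients to a signed fixed-point format and finds the shortest word length whose frequency response stays within tolerance of the floating-point original:

```python
import cmath
import math

def quantize(coeffs, word_length, int_word_length):
    """Round coefficients to a signed fixed-point format: word_length total bits,
    int_word_length of which (sign included) sit before the binary point."""
    frac_bits = word_length - int_word_length
    scale = 1 << frac_bits
    lo = -(1 << (word_length - 1))            # saturation limits, in raw counts
    hi = (1 << (word_length - 1)) - 1
    return [max(lo, min(hi, round(c * scale))) / scale for c in coeffs]

def max_response_error(a, b, points=256):
    """Worst-case magnitude-response difference between two FIR coefficient sets."""
    worst = 0.0
    for i in range(points):
        w = math.pi * i / points
        ha = sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(a))
        hb = sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(b))
        worst = max(worst, abs(abs(ha) - abs(hb)))
    return worst

# Search for the shortest word length keeping the response within tolerance
taps = [0.0078, 0.0645, 0.1664, 0.2500, 0.1664, 0.0645, 0.0078]
for wl in range(8, 25):
    if max_response_error(taps, quantize(taps, wl, 1)) < 0.001:
        print(f"{wl}-bit coefficients are sufficient")
        break
```

The toolkit performs this kind of trade-off interactively, with the added accounting of overflows and underflows on the model’s inputs and outputs.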
The next step is to simulate the quantized fixed-point filter and compare it to the floating-point implementation in the time domain [Figure 4, below]. Here we can further adjust the integer word length for the coefficients until the two filters behave similarly.
|Figure 4: Comparing the fixed point and floating point filter coefficients in the time domain|
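This time-domain comparison can be approximated outside the toolkit as well. The sketch below (the moving-average taps, two-tone stimulus and 12 fractional bits are illustrative choices of ours) runs the same signal through floating-point and quantized coefficient sets and reports the worst-case deviation:

```python
import math

def fir_filter(taps, x):
    """Direct-form FIR: y[i] = sum over k of taps[k] * x[i - k]."""
    y = []
    for i in range(len(x)):
        acc = 0.0
        for k, t in enumerate(taps):
            if i - k >= 0:
                acc += t * x[i - k]
        y.append(acc)
    return y

def quantize(coeffs, frac_bits):
    """Round coefficients to frac_bits fractional bits."""
    scale = 1 << frac_bits
    return [round(c * scale) / scale for c in coeffs]

# Stimulus: 50 Hz tone (passband) plus 400 Hz tone (stopband), fs = 1 kHz
fs = 1000.0
x = [math.sin(2 * math.pi * 50 * i / fs) + math.sin(2 * math.pi * 400 * i / fs)
     for i in range(200)]
taps = [0.2] * 5                              # toy moving-average low-pass filter
y_float = fir_filter(taps, x)
y_fixed = fir_filter(quantize(taps, 12), x)
worst = max(abs(a - b) for a, b in zip(y_float, y_fixed))
print(f"worst-case time-domain deviation: {worst:.6f}")
```

If the deviation is too large for the application, the fractional word length is increased and the comparison repeated – the same iterate-and-check loop Figure 4 shows graphically.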
The last step in developing this filter is to generate the code for the appropriate target – either LabVIEW or C code for implementation on various embedded platforms – directly from the graphical description, as shown below.
|Figure 5: LabVIEW block diagram for a low pass filter|
In this example, we can generate LabVIEW code for implementation on an FPGA, as shown in Figure 6 below.
|Figure 6: LabVIEW code for a low pass filter|
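On a fixed-point target such as an FPGA, the generated filter ultimately reduces to integer multiply-accumulate operations followed by a rescaling shift. As a rough model of that arithmetic (the Q1.14 coefficient format and sample values are assumptions for illustration; this is not the generated code itself):

```python
def fir_fixed_point(coeff_q, x_q, frac_bits):
    """Integer-only direct-form FIR, the way a fixed-point target executes it:
    multiply-accumulate into a wide accumulator, then shift back down."""
    y = []
    for i in range(len(x_q)):
        acc = 0
        for k, c in enumerate(coeff_q):
            if i - k >= 0:
                acc += c * x_q[i - k]   # integer multiply-accumulate
        y.append(acc >> frac_bits)      # rescale the product back to the input format
    return y

FRAC = 14
taps = [round(0.2 * (1 << FRAC))] * 5   # 0.2 expressed in Q1.14 (raw value 3277)
x = [0, 1000, 2000, 1000, 0, -1000]     # already-scaled integer samples
print(fir_fixed_point(taps, x, FRAC))   # → [0, 200, 600, 800, 800, 600]
```

Choosing the accumulator width, the shift amount and the coefficient format by hand is exactly the error-prone bookkeeping the code generator takes on for the designer.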
As you can see, implementing algorithms such as digital filters is as much an art as a science. With this in mind, an interactive and iterative approach is necessary so that you can quickly and efficiently move from design, to simulation, to implementation. This approach also encourages experimentation with various design parameters and can lead to further innovations.
Mike Trimborn is LabVIEW FPGA Product Manager at National Instruments, Inc.