
Using FPGAs to improve your wireless subsystem’s performance

You can realize significant improvements in the performance of signal-processing functions in wireless systems. How? By taking advantage of the flexibility of FPGA fabric and the embedded DSP blocks in current FPGA architectures for operations that can benefit from parallelism.

Common examples of operations found in wireless applications include FIR filtering, Fast Fourier Transforms (FFTs), digital down- and up-conversion, and Forward Error Correction (FEC) blocks.
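
To see why an operation such as FIR filtering maps so well onto FPGA fabric, consider the direct-form filter sketched below in C. On a programmable DSP the inner multiply-accumulate loop executes largely sequentially, whereas in an FPGA each tap can be assigned to its own embedded multiplier/DSP block so that all of the products are formed in parallel every clock cycle. The sketch is a minimal illustration only; the tap count and function name are arbitrary.

```c
#include <stddef.h>

/*
 * Direct-form FIR filter: y[n] = sum_{k=0}^{NTAPS-1} h[k] * x[n-k].
 * On a DSP processor the inner multiply-accumulate loop runs mostly
 * sequentially; in FPGA fabric each tap can map to its own embedded
 * DSP block, so all NTAPS products are computed in parallel.
 */
#define NTAPS 64

void fir_filter(const float h[NTAPS],  /* filter coefficients        */
                const float *x,        /* input samples, length len  */
                float *y,              /* output samples, length len */
                size_t len)
{
    for (size_t n = 0; n < len; n++) {
        float acc = 0.0f;
        for (size_t k = 0; k < NTAPS; k++) {
            /* Guard against reading before the start of the input. */
            if (n >= k)
                acc += h[k] * x[n - k];
        }
        y[n] = acc;
    }
}
```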

By offloading operations that require high-speed parallel processing onto the FPGA and leaving operations that require high-speed serial processing on the processor, overall system performance and cost can be optimized while lowering system requirements.

Partitioning
The FPGA can be used with a digital signal processor (DSP), serving either as an independent pre-processor (or sometimes post-processor) device, or as a co-processor. In a pre-processing architecture, the FPGA sits directly in the data path and is responsible for processing the signals to a point where they can be efficiently and cost-effectively handed off to a DSP processor for further lower-rate processing.

Figure 1: In co-processing architectures, the FPGA sits alongside the DSP, which offloads specific algorithmic functions to the FPGA to be processed at significantly higher speeds than is possible in a DSP processor alone.

In co-processing architectures, the FPGA sits alongside the DSP, which offloads specific algorithmic functions to the FPGA to be processed at significantly higher speeds than is possible in a DSP processor alone. The results are passed back to the DSP or sent to other devices for further processing, transmission or storage (Figure 1, above).
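
The data flow in a co-processing arrangement can be pictured with the hypothetical C sketch below. The functions fpga_send(), fpga_wait() and fpga_recv() are illustrative placeholders for whatever DMA or driver interface the platform provides, not a real vendor API; the point is simply that the DSP hands a block to the FPGA, waits for the accelerated function to complete, and retrieves the result for further low-rate processing.

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Co-processing sketch (hypothetical driver API): the DSP streams a
 * block of samples to the FPGA accelerator, the FPGA performs the
 * offloaded function (e.g. an FFT or Turbo decode), and the result
 * is read back for further processing on the DSP.
 */
extern int fpga_send(const int16_t *buf, size_t n);  /* DMA samples to FPGA */
extern int fpga_wait(void);                          /* block until done    */
extern int fpga_recv(int16_t *buf, size_t n);        /* DMA results back    */

int process_block(const int16_t *in, int16_t *out, size_t n)
{
    if (fpga_send(in, n) != 0)   /* hand the block to the accelerator   */
        return -1;
    if (fpga_wait() != 0)        /* FPGA processes the data in parallel */
        return -1;
    return fpga_recv(out, n);    /* pull results back for low-rate work */
}
```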

Timing margins
The choice of pre-processing, post-processing or co-processing is often governed by the timing margins needed to move data between the processor and FPGA, and how that impinges on the overall latency.

Although a co-processing solution is the topology most often considered by designers, primarily because the DSP is in more direct control of the data hand-off process, this may not always be the best overall strategy.

Figure 2: An LTE example of co-processing data-transfer latency issues.

Consider, for example, the latest specifications for 3GPP Long Term Evolution (LTE), in which the transmission time interval has been reduced to 1ms, down from 2ms for HSDPA and 10ms for W-CDMA. This essentially requires that data be processed from the receiver through to the output of the media access control (MAC) layer in less than 1,000 microseconds.

Figure 2 above shows that using a serial RapidIO port on the DSP running at 3.125Gbit/s, with 8-bit/10-bit encoding and a 200-bit overhead for the Turbo decode function, results in a DSP-to-FPGA transfer delay of 230µs. Taking into account other expected delays, the Turbo codec performance required to meet these system timings is a very demanding 75.8Mbit/s for 50 users.
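
The link arithmetic behind figures like these is straightforward, as the small C sketch below shows. The 500-kbit block size is an assumption chosen purely for illustration, since the exact payload and protocol overhead behind the 230µs figure are not restated here; the point is that 8-bit/10-bit encoding leaves only 80 percent of the raw line rate for payload, and every microsecond spent on the transfer is subtracted from the 1ms budget available to the Turbo decoder.

```c
#include <stdio.h>

/*
 * Back-of-the-envelope arithmetic for a serial RapidIO transfer.
 * The block size is illustrative only; the article's 230us figure
 * depends on the actual payload and protocol overhead of the LTE
 * example, which are not restated here.
 */
int main(void)
{
    const double line_rate    = 3.125e9;                 /* raw lane rate, bit/s        */
    const double payload_rate = line_rate * 8.0 / 10.0;  /* 8b/10b leaves 2.5 Gbit/s    */
    const double block_bits   = 500e3;                   /* assumed transfer size, bits */

    const double xfer_us = block_bits / payload_rate * 1e6;

    printf("effective payload rate   : %.2f Gbit/s\n", payload_rate / 1e9);
    printf("transfer time for %.0f kbit: %.0f us\n", block_bits / 1e3, xfer_us);
    return 0;
}
```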

Using an FPGA to process the Turbo codecs as a largely independent post-processor not only removes DSP latency but saves time, because there's no need to transfer the data at high bandwidth between the DSP and FPGA.

This reduces the required throughput of the Turbo decoder to 47Mbit/s, a decrease that allows the use of more cost-effective devices and reduces system power dissipation.

Another consideration is whether to use soft or hard embedded processor intellectual property (IP) on the FPGA to offload some of the system processing tasks, which in turn offers the possibility of additional cost, power and footprint reduction benefits.

Given such a wide range of signal-processing resources, complex functions such as those found in baseband processing can be partitioned more effectively between the DSP processor, the FPGA's configurable logic blocks, embedded FPGA DSP blocks and the FPGA's embedded processor.

FPGA-embedded processors provide an opportunity to consolidate all non-critical operations into software running on the embedded processors, minimizing the total amount of hardware resources required for the overall system.

Figure 3: An example of a system-level-to-FPGA design flow

Software and IP
A critical issue is how to unlock all of this potential capability. You need to consider both the software needed to abstract the complexity of the problem and the availability of IP focused on key areas where FPGAs can provide an optimal solution (Figure 3, above).

There is also an increasingly important ecosystem of tool providers whose products take development up to the electronic system level (ESL) through C/C++-to-gates design flows. ESL design flows aim to provide an integrated system-level approach to the production and integration of hardware-accelerated functions and the control code for the processors that manage those functions.

Covering critical areas
No single high-level language or software tool is suitable for all of the different elements found in today's complex systems. The choice of language and design flow is governed by the customer and sometimes the individual engineer.

Dave Nicklin is responsible for wireless vertical marketing and partnerships, and Tom Hill is System Generator product manager at Xilinx, Inc.
