Optimize data flow video apps by tightly coupling ARM-based CPUs to FPGA fabrics

Michael Fawcett, iVeia, with Dan Isaacs, Xilinx

May 10, 2011


Image processing and data flow
The iVeia design maximizes the performance potential of the processor and FPGA combination via multiple connections between the devices. Let’s examine how the two processing elements are connected relative to how the product might be deployed in a specific application.

The combination of the FPGA fabric and DSP core is a good match for image processing. Typical applications include transportation systems, where a camera captures video and the system analyzes the input frame by frame, automatically recognizing and classifying traffic. The combination could also be used in security systems to enable facial recognition.

Such image-processing applications require significant processing power and a very efficient data flow. The iVeia design utilizes multiple buses to optimize data flow in such an application. The OMAP IC includes both a dedicated camera input interface and a dedicated display output interface designed to drive two screens. The IC also includes a general purpose bus. iVeia uses all three to link the processor and FPGA.

The iVeia design implements a 12-bit, 75-MHz interconnect that can feed a stream of video frames directly from the FPGA to the OMAP camera interface. A key role for the FPGA in a vision system is preprocessing each video frame. The FPGA can operate on the real-time stream of frames, performing functions such as color-space conversion and noise filtering, and it can correct for camera lens distortion and enhance contrast.
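
To make the preprocessing stage concrete, here is a rough software model (not iVeia's actual IP) of one such per-pixel function, an RGB-to-YCbCr color-space conversion using the BT.601 integer approximation. In the fabric, each multiply would map to a DSP slice and the whole expression would be pipelined so one pixel completes per clock.

```c
#include <stdint.h>

/* Software model of a BT.601 RGB-to-YCbCr conversion, the kind of
 * per-pixel operation an FPGA preprocessing pipeline performs on each
 * frame. Coefficients are fixed-point, scaled by 256; an arithmetic
 * right shift is assumed for the chroma terms. */
typedef struct { uint8_t y, cb, cr; } ycbcr_t;

static ycbcr_t rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b)
{
    ycbcr_t out;
    out.y  = (uint8_t)((( 66 * r + 129 * g +  25 * b + 128) >> 8) + 16);
    out.cb = (uint8_t)(((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128);
    out.cr = (uint8_t)(((112 * r -  94 * g -  18 * b + 128) >> 8) + 128);
    return out;
}
```

A hardware version would simply unroll this arithmetic across three parallel multiply-accumulate chains, one per output component.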

Video analytics
The FPGA can also perform the early stages of video analytics. For example, the FPGA can be used for object and pattern recognition, edge detection, and other image-analysis functions. The sequentially oriented DSP and ARM cores can't perform such functions in real time. The FPGA passes the processed frame to the OMAP IC for storage in a frame buffer using the dedicated camera interface.
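
As an illustration of the edge-detection stage, the sketch below models the classic 3x3 Sobel operator in software. This is not iVeia's implementation, but the kernel maps naturally onto FPGA line buffers and DSP slices, which is why such operators run in real time in fabric while a sequential core struggles.

```c
#include <stdint.h>
#include <stdlib.h>

/* Software model of the 3x3 Sobel edge detector on an 8-bit grayscale
 * frame of width w and height h. Border pixels are left untouched.
 * Gradient magnitude uses the cheap L1 approximation |gx| + |gy|,
 * clamped to 8 bits, as a hardware pipeline typically would. */
static void sobel(const uint8_t *in, uint8_t *out, int w, int h)
{
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int gx = -in[(y-1)*w + x-1] + in[(y-1)*w + x+1]
                   - 2*in[y*w + x-1]    + 2*in[y*w + x+1]
                   -   in[(y+1)*w + x-1] + in[(y+1)*w + x+1];
            int gy = -in[(y-1)*w + x-1] - 2*in[(y-1)*w + x] - in[(y-1)*w + x+1]
                   +  in[(y+1)*w + x-1] + 2*in[(y+1)*w + x] + in[(y+1)*w + x+1];
            int mag = abs(gx) + abs(gy);
            out[y*w + x] = (uint8_t)(mag > 255 ? 255 : mag);
        }
    }
}
```

In hardware, the two 3x3 windows would be fed by line buffers holding the previous two rows, so each output pixel needs no random access to the frame.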

The iVeia design also includes a 16-bit, 96-MHz, bidirectional, general-purpose bus that links the FPGA and OMAP ICs. The bidirectional interconnect provides an additional path for the FPGA to transfer data to the OMAP IC, and a path for the OMAP to manage the operation and configuration of the FPGA.
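
From the software side, a general-purpose bus like this typically appears as a memory-mapped window. The sketch below shows what 16-bit register access might look like; the base address and register offsets are invented for illustration, since a real design would take them from the board support package.

```c
#include <stdint.h>

/* Hypothetical sketch of OMAP-side access to FPGA control registers
 * over the 16-bit general-purpose bus, assuming the FPGA is mapped
 * into the processor's address space. All addresses and offsets below
 * are illustrative, not taken from the iVeia design. */
#define FPGA_BASE   0x08000000u  /* assumed bus window, illustrative */
#define REG_CTRL    0x00u        /* hypothetical control register    */
#define REG_STATUS  0x02u        /* hypothetical status register     */

static inline void fpga_write16(volatile uint16_t *base,
                                uint32_t off, uint16_t val)
{
    base[off / 2] = val;
}

static inline uint16_t fpga_read16(const volatile uint16_t *base,
                                   uint32_t off)
{
    return base[off / 2];
}
```

Driver code would point `base` at the mapped window (e.g., the result of an `mmap` of `FPGA_BASE` under Linux) and use these accessors for both data transfer and FPGA configuration management.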

In our vision system example, the FPGA would send object classification data to the OMAP IC in sync with the transfer of the preprocessed video frames.

The OMAP IC, meanwhile, can use the general-purpose bus to dynamically reconfigure the image-processing blocks in the FPGA. Many people think of an FPGA as a static element that is configured at power-up and performs the same functions continuously. In actuality, the fabric can be reconfigured a hundred thousand times or more per second.

In a typical scenario, a primary set of command-and-control functions remains static in the FPGA, while a technique called dynamic partial reconfiguration allows portions of the fabric to be changed on the fly. For example, the OMAP may change the configuration of the image-processing blocks in the FPGA based on the type of images being captured.

The iVeia architecture also supports video post-processing functions in the FPGA. The design includes a 24-bit, 75-MHz interface that brings real-time data from the OMAP display output into the FPGA. The FPGA can handle functions such as scaling in real time.
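
As a minimal sketch of such a post-processing function (again a software model, not iVeia's IP), the code below implements nearest-neighbor scaling using 16.16 fixed-point stepping. This mirrors how a hardware scaler increments a phase accumulator per output pixel rather than dividing per pixel.

```c
#include <stdint.h>

/* Nearest-neighbor scaler for an 8-bit grayscale frame. Source
 * coordinates advance by a fixed-point (16.16) step per output pixel,
 * so the inner loop contains only adds, shifts, and a table lookup,
 * which is how a fabric implementation would be structured. */
static void scale_nn(const uint8_t *in, int iw, int ih,
                     uint8_t *out, int ow, int oh)
{
    uint32_t xstep = ((uint32_t)iw << 16) / (uint32_t)ow;
    uint32_t ystep = ((uint32_t)ih << 16) / (uint32_t)oh;
    uint32_t ypos = 0;
    for (int y = 0; y < oh; y++, ypos += ystep) {
        const uint8_t *row = in + (ypos >> 16) * iw;
        uint32_t xpos = 0;
        for (int x = 0; x < ow; x++, xpos += xstep)
            out[y * ow + x] = row[xpos >> 16];
    }
}
```

A production scaler on the display path would more likely use bilinear or polyphase filtering, but the accumulator structure is the same.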

Design teams focused on a video-centric application such as our example can accelerate product development using iVeia's Video Development Kit. The kit includes an additional hardware module that provides I/O and a video encoder. More importantly, iVeia supplies a library of IP blocks for the FPGA that are optimized for video applications.
