Building a configurable embedded processor - From Impulse C to FPGA
This "Product How-To" article focuses on how to use a certain product in an embedded system and is written by a company representative.
When most design groups think of the term "configurable processing," their minds immediately go to configurable processors offered by IP vendors. But those designers may be overlooking an obvious and trusted alternative: processor-laden FPGAs. Today FPGA vendors offer high-performance, industry-standard MPU cores on board feature-rich FPGAs.
By using a mix of FPGA vendor and EDA vendor tools and a bit of ingenuity, embedded designers can extend the instruction set of the processors running in these FPGAs to add their own unique functions to their designs. And they can do so without having to go back to school to get a degree in hardware engineering.
Over the last couple of years, FPGA and EDA vendors have made great strides in creating software-to-FPGA tools that allow embedded designers to effectively use FPGAs to increase the performance of their designs, while meeting timing budgets and cutting bill-of-materials costs.
Let's examine a configurable FPGA-based hardware accelerator methodology in which we'll use an auxiliary processing unit (APU) controller interface to integrate co-processing accelerators to vastly speed up a system's overall performance.
In particular, we'll employ this configurable FPGA-based hardware accelerator methodology to increase the performance of a machine vision system that formerly employed an embedded processor.
Traditionally, an application such as a machine vision system requires a substantial amount of computation, far more than a single processor can handle. Because a single processor isn't a viable alternative, some design groups may consider using one or more higher-end DSP devices.
But increasingly designers are employing a hardware-accelerated approach using an FPGA, in which designers can implement part of the application as software running on an embedded processor (or multiple embedded processors) within the FPGA, while they implement performance-critical portions of the application, such as video image filtering, as hardware accelerators within that same FPGA.
Video processing is a major driver of advances in embedded systems and tools, and has become one of the largest areas of growth for embedded computing. Computer vision and security systems demand increasingly high levels of bandwidth to support ever-higher resolutions, faster frame rates and more complex image analysis and conversion.
Near-real-time format conversions have become critical in some applications, as have specialized algorithms such as object recognition and motion estimation.
Today's most advanced video applications include complex, pipelined algorithms that must process data at a high rate. Applications include machine vision, unmanned aerial vehicles (UAVs), medical imaging, and automotive safety.
The only way to achieve the needed performance for these types of algorithms is to use an accelerated computing strategy. Solutions for such applications might include the use of multiple high-end DSP devices, GPUs, or custom ASIC hardware.
Migrating from Discrete CPUs to FPGAs
Why migrate embedded video applications to FPGAs from discrete processors? The two primary reasons are integration and acceleration. Today's FPGAs have high capacities, which allow design teams to move multiple discrete components (the embedded processor and its various peripherals) into a single programmable device.
There are clear cost savings in integration, and also advantages related to flexibility and protection from future device obsolescence. FPGAs also offer acceleration for applications requiring a significant amount of computation, as is typical in image processing, DSP and communications.
FPGAs support acceleration by providing configurable hardware and flexible, on-chip memories. Designers can access these device resources through libraries, through hardware-level programming, or via software-to-hardware compilation.