While having the right tool for the job is undeniably important, it'll take more than a good paintbrush to turn you into Vermeer.
In virtually every field of endeavor, there are tools to help you do your job and methodologies you can apply when using those tools. From a business perspective, the money is in tools rather than methodologies, and we're consequently led to believe that the tools make the difference between success and failure. But the best paintbrush in the world won't turn you into Vermeer. In any endeavor, not only must you know how to use the tool, you also need a methodology, a process, good algorithms.
Jack Ganssle and Michael Barr define algorithm in their Embedded Systems Dictionary as “a step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps.”
The founding fathers of Embedded Systems Programming visualized this publication as a vehicle to serve up algorithms to developers, a tradition that we intend to maintain in the magazine, on Embedded.com, and in the Embedded Systems Conference technical programs. This month, the magazine focuses on digital signal processing, specifically the implementation of DSP algorithms. While processors and tools are necessary to implement the algorithms, they're not the entire solution.
Since DSP development is difficult, digital signal processor suppliers all have strategies to make you successful. They offer tool suites, application support, and even off-the-shelf algorithms. A suite of DSP code development tools gets you partway toward your goal, but it's not the whole solution. This month's lead article, “A DSP Algorithm for Frequency Analysis,” offers a high-performance FFT combined with the ability to specify bandwidth to perform spectrum analysis. It's not the only way to perform this function, but an optimized implementation is one way to boost system performance.
What happens if you still need more performance? One option is to move to a faster processor or to an entirely different architecture. Another is to repartition your system and move critical-path functions into hardware. If you're building an ASIC, your options narrow considerably. With systems shrinking to single chips and non-recurring engineering costs climbing astronomically, you'd better be certain that you're going to meet your performance goals before you tape out.
Another solution is to move algorithms to a field-programmable gate array. FPGAs are getting cheaper and larger, making room for soft processor cores in addition to plenty of gates for the rest of the functions you need to implement. If your system can tolerate the part cost and the power consumption of an FPGA, then when you run out of steam in software, you can offload functions to hardware. In this issue, “Accelerating Algorithms in Hardware” explains how to move algorithms into an FPGA's hardware fabric.
Repartitioning hardware and software is never a walk in the park. Nor is working with FPGAs that incorporate soft processor cores. As important as algorithms are, once they are developed, you'll still need good tools to implement them in the final product.