The quadrature processing techniques employed in spectrum analysis, computer graphics, and digital communications routinely require high-speed determination of the magnitude of a complex number (vector V) given its real and imaginary parts, i.e., the in-phase part I and the quadrature-phase part Q. This magnitude calculation requires a square root operation because the magnitude of V is

|V| = √(I² + Q²)
Assuming that the sum I² + Q² is available, the problem is to efficiently perform the square root operation.
There are several ways to obtain square roots. The optimum technique depends on the capabilities of the available hardware and software. For example, when performing a square root using a high-level software language, we employ whatever software square root function is available. Although accurate, software routines can be very slow.
By contrast, if a system must accomplish a square root operation in 50 nanoseconds, high-speed magnitude approximations are required. Let's look at a neat magnitude approximation scheme that's particularly hardware efficient. There is a technique called the “alpha max plus beta min” algorithm for calculating the magnitude of a complex vector.
It's a linear approximation to the vector magnitude problem that requires the determination of which orthogonal vector, I or Q, has the greater absolute value. If the maximum absolute value of I or Q is designated by Max, and the minimum absolute value of either I or Q is Min, an approximation of |V| using the “alpha max plus beta min” algorithm is expressed as

|V| ≈ Alpha·Max + Beta·Min    (13-6)
There are several pairs for the Alpha and Beta constants that provide varying degrees of vector magnitude approximation accuracy to within 0.1 dB. The “alpha max plus beta min” algorithms determine a vector magnitude at whatever speed it takes a system to perform a magnitude comparison, two multiplications, and one addition.
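As a concrete illustration, the estimate can be sketched in a few lines of C. This is a minimal sketch; the function name and calling convention are ours, not part of the original text:

```c
#include <math.h>

/* Generic "alpha max plus beta min" magnitude estimate: one magnitude
 * comparison, two multiplications, and one addition. Alpha and Beta
 * are supplied by the caller. */
double alpha_max_beta_min(double i, double q, double alpha, double beta)
{
    double ai = fabs(i), aq = fabs(q);
    double maxv = (ai > aq) ? ai : aq;
    double minv = (ai > aq) ? aq : ai;
    return alpha * maxv + beta * minv;
}
```

For example, alpha_max_beta_min(3.0, 4.0, 1.0, 0.5) returns 5.5, versus the true magnitude of 5.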
But those algorithms require, as a minimum, a 16-bit multiplier to achieve reasonably accurate results. If, however, hardware multipliers are not available, all is not lost. By restricting the Alpha and Beta constants to reciprocals of integer powers of two, Equation (13-6) above lends itself well to implementation in binary integer arithmetic.
A prevailing application of the “alpha max plus beta min” algorithm uses Alpha = 1.0 and Beta = 0.5. The 0.5 multiplication operation is performed by shifting the minimum quadrature vector magnitude, Min, to the right by one bit.
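Because Beta = 0.5 is a single right shift, the whole estimate reduces to integer compares, one shift, and one add. A minimal C sketch (the function name is illustrative):

```c
#include <stdlib.h>

/* Alpha = 1, Beta = 1/2 estimate in pure integer arithmetic: the
 * multiply by 0.5 becomes a one-bit right shift of Min, so no
 * hardware multiplier is needed. */
unsigned mag_est_half(int i, int q)
{
    unsigned ai = (unsigned)abs(i);
    unsigned aq = (unsigned)abs(q);
    unsigned maxv = (ai > aq) ? ai : aq;
    unsigned minv = (ai > aq) ? aq : ai;
    return maxv + (minv >> 1);   /* Max + Min/2, the divide by truncation */
}
```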
We can gauge the accuracy of any vector magnitude estimation algorithm by plotting its error as a function of vector phase angle. Let's do that. The “alpha max plus beta min” estimate for a complex vector of unity magnitude, using

|V| ≈ Max + Min/2    (13-7)
over the vector angular range of 0 to 90 degrees, is shown as the solid curve in Figure 13-7 below. (The curves in the figure repeat every 90 degrees.)
An ideal estimation curve for a unity magnitude vector would have a value of one. We'll use this ideal estimation curve as a yardstick to measure the merit of various “alpha max plus beta min” algorithms. Let's make sure we know what the solid curve in Figure 13-7 is telling us. It indicates that a unity magnitude vector oriented at an angle of approximately 26 degrees will be estimated by Equation (13-7) to have a magnitude of 1.118 instead of the correct magnitude of one.
Figure 13-7. Normalized “alpha max plus beta min” estimates for Alpha = 1 and Beta = 1/2, 1/4, and 3/8.
The error then, at 26 degrees, is 11.8%, or 0.97 dB. Analyzing the entire solid curve in Figure 13-7 results in an average error, over the 0 to 90 degree range, of 8.6% (0.71 dB). Figure 13-7 also presents the error curves for the Alpha = 1, Beta = 1/4 and the Alpha = 1, Beta = 3/8 values.
Although the values for Alpha and Beta in Figure 13-7 yield rather accurate vector magnitude estimates, there are other values for Alpha and Beta that deserve our attention because they result in smaller error standard deviations.
Consider the Alpha = 7/8 and Beta = 7/16 pair, whose error curve is the solid curve in Figure 13-8 below. A further improvement can be obtained using Alpha = 15/16 and Beta = 15/32, whose error is shown by the dashed curve in the same figure.
The Alpha = 15/16 and Beta = 15/32 pair gives rather good results when compared to the optimum floating point values of Alpha = 0.96043387 and Beta = 0.397824735, whose error is the dotted curve in Figure 13-8.
Figure 13-8. Estimates for Alpha = 7/8, Beta = 7/16; Alpha = 15/16, Beta = 15/32; and Alpha = 0.96043387, Beta = 0.397824735.
Although using Alpha = 15/16 and Beta = 15/32 appears to require two multiplications and one addition, its digital hardware implementation can be straightforward, as shown in Figure 13-9 below. The diagonal lines, labeled 1 for example, denote a hardwired shift of one bit to the right to implement a divide-by-two operation by truncation.
Likewise, the 4 symbol indicates a right shift by four bits to realize a divide-by-16 operation. The |I|>|Q| control line is TRUE when the magnitude of I is greater than the magnitude of Q, so that Max = |I| and Min = |Q|.
This condition enables the registers to apply the values |I| and |Q|/2 to the adder. When |I|>|Q| is FALSE, the registers apply the values |Q| and |I|/2 to the adder. Notice that the output of the adder is Max + Min/2. Because 15Max/16 + 15Min/32 = (Max + Min/2) − (Max + Min/2)/16, the final Alpha = 15/16, Beta = 15/32 estimate is implemented via the subtraction of (Max + Min/2)/16 from Max + Min/2.
In Figure 13-9 below, all implied multiplications are performed by hardwired bit shifting, and the total execution time is limited only by the delay times associated with the hardware components.
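In software, the same shift-and-add structure can be sketched in C (the function name is illustrative); rewriting 15Max/16 + 15Min/32 as (Max + Min/2) minus one sixteenth of itself is what lets two right shifts replace both multiplications:

```c
#include <stdlib.h>

/* Alpha = 15/16, Beta = 15/32 estimate with shifts and adds only,
 * mirroring the Figure 13-9 hardware:
 *   temp = Max + Min/2;  |V| is approximately temp - temp/16.
 * The right shifts implement the divisions by truncation. */
unsigned mag_est_15_16(int i, int q)
{
    unsigned ai = (unsigned)abs(i);
    unsigned aq = (unsigned)abs(q);
    unsigned maxv = (ai > aq) ? ai : aq;
    unsigned minv = (ai > aq) ? aq : ai;
    unsigned temp = maxv + (minv >> 1);  /* Max + Min/2 */
    return temp - (temp >> 4);           /* subtract temp/16 */
}
```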
One thing to keep in mind: the various |V| estimations can exceed the correct normalized vector magnitude value; that is, some magnitude estimates are greater than one. This means that although the correct magnitude value may be within the system's full-scale word width, an algorithm result may exceed the word width of the system and cause overflow errors. With “alpha max plus beta min” algorithms the user must be certain that no true vector magnitude exceeds the value that will produce an estimated magnitude greater than the maximum allowable word width.
Figure 13-9. Hardware implementation using Alpha = 15/16 and Beta = 15/32.
The penalty we pay for the convenience of having Alpha and Beta as reciprocals of powers of two is the error induced by the division-by-truncation process, and we haven't taken that error into account thus far.
The error curves in Figure 13-7 and Figure 13-8 were obtained using a software simulation with its floating point accuracy, and are useful in evaluating different Alpha and Beta values. However, the true error introduced by the “alpha max plus beta min” algorithm will be somewhat different, due to division errors when truncation is used with finite word widths.
For “alpha max plus beta min” schemes, the truncation errors are a function of the data's word width, the algorithm used, the values of both |I| and |Q|, and the vector's phase angle. (These errors due to truncation compound the errors already inherent in our “alpha max plus beta min” algorithms.)
However, modeling has shown that for an 8-bit system (maximum vector magnitude = 255) the truncation error is less than 1%. As the system word width increases, the truncation errors approach 0%, which means that truncation errors add very little to the inherent “alpha max plus beta min” algorithm errors. The relative performance of the various algorithms is summarized in Table 13-2 below.
Table 13-2. “Alpha max plus beta min” algorithm comparisons.
The last column in Table 13-2 lists the maximum allowable true vector magnitude, as a function of the system's full-scale (F.S.) word width, that avoids overflow errors.
So, the “alpha max plus beta min” algorithm enables high-speed vector magnitude computation without the need for math coprocessor or hardware multiplier chips. Of course, with the recent availability of high-speed floating point multiplier integrated circuits (with their ability to multiply in one or two clock cycles), Alpha and Beta may not always need to be restricted to reciprocals of integer powers of two.
It's also worth mentioning that this algorithm can be nicely implemented in a single hardware integrated circuit (for example, a field programmable gate array), affording high-speed operation.
Used with the permission of the publisher, Prentice Hall, this ongoing series of articles is based on copyrighted material from “Understanding Digital Signal Processing, Second Edition” by Richard G. Lyons. The book can be purchased online.
Richard Lyons is a consulting systems engineer and lecturer with Besser Associates. As a lecturer with Besser and an instructor for the University of California Santa Cruz Extension, Lyons has delivered digital signal processing seminars and training courses at technical conferences as well as at companies such as Motorola, Freescale, Lockheed Martin, Texas Instruments, Conexant, Northrop Grumman, Lucent, Nokia, Qualcomm, Honeywell, National Semiconductor, General Dynamics and Infineon.