The basics of DSP for use in intelligent sensor applications: Part 3
Finite Register Length Effects. Closely related to digitization effects, which concern the inaccuracy introduced by having only a finite number of values available to represent a continuous signal, finite register length effects are the errors that arise when repeated mathematical operations are performed on values represented by a finite number of bits.
The problem is that, because we have a limited number of bits with which to work, repeated mathematical operations can cause the accumulator in the processor to overflow. A simple example will illustrate the effect.
Suppose our application digitizes the input signal into a 16-bit value (0000h to FFFFh), and further suppose that we're using a processor with a 16-bit accumulator. If we try to average two samples that are at three-quarters of the digitization range (BFFFh each), infinite precision in the accumulator would give us the correct value of BFFFh, since 1/2 × (BFFFh + BFFFh) = BFFFh.
However, if we add the two samples together in a 16-bit accumulator, the sum is not 17FFEh but 7FFEh, since the carry out of the most significant bit is lost (we don't have room for a 17th bit in the accumulator). If we then take half of that sum, the average becomes 3FFFh, not the BFFFh we want.
Although we can use larger accumulators (i.e., accumulators with more bits), the problem is inherent to the system and is exacerbated when multiplication operations are included.
By choosing appropriate ways to represent the numeric values internally and by carefully handling cases of overflow and underflow, designers can mitigate the effects of having finite register lengths, but the issue must always be addressed to avoid catastrophic system failures.
Oversampling. As mentioned previously, designers must ensure that DSP-based systems sample the input signal faster than the Nyquist rate (twice the highest frequency present in the input signal) to avoid aliasing.
In reality, input signals should be sampled at least four to five times the highest frequency content in the input signal to account for the differences between real-world A/D performance and the ideal. Doing so spreads the images of the sampled spectrum further apart, minimizing bleed-over from one image to the next.
Another issue with any filter is the delay between the time the input signal enters the filter and the time the filtered version leaves it, and this is true for digital as well as analog filters.
Generally, the more heavily filtered the input signal, the greater the delay through the system. If the delay becomes excessive (something that's application dependent), the filtered output can be worthless since it arrives too late to be of use by the rest of the system.
Oversampling, the practice of sampling the signal much faster than strictly necessary, can be employed to allow strong filtering of signals without introducing an excessive delay through the system.
The oversampled signal can be heavily filtered, yet because the filter's delay is a fixed number of samples and those samples arrive much faster than strictly required, the filtered signal is still available to other system components in a timely manner.
The downside of this approach is that it requires greater processing power to handle the higher data rate, which generally adds to the cost of the system and its power consumption.