
The basics of DSP for use in intelligent sensor applications: Part 3

Earlier in this series, we touched on one problem that can arise when sampling an analog signal, namely the problem of aliasing. There are three other issues with signal sampling to which we now turn our attention: digitization effects, finite register length effects, and oversampling.

So far, we've assumed that all of the signals we're measuring are continuous analog values; i.e., our measurements are completely accurate. Even in the cases in which we have noise, the underlying assumption is that the measurement itself, for example the noisy sensor output voltage, is known precisely.

In reality, at least for a system that employs digital signal processing, that's not really true because the measured analog signals go through a process known as digitization that converts the analog signal to a corresponding numeric value that can be manipulated mathematically by a processor. Figure 2.16 below shows this process (the signal value is sampled at the points shown).

Figure 2.16. Signal Digitization Process Showing Four Successive Samples

The issue that we face with digitization is that within any processing unit we have only a finite number of bits with which to represent the measured signal. For instance, let's assume that we want to sample a signal that varies between 0V and 5V.

If we try to represent the measurement with one bit, we'll have exactly two possible values (0 and 1) that we can use. Designating the measured signal voltage as VS, we might choose to map the lower half of the signal range (0V ≤ VS < 2.5V) to 0 and to map the upper half (2.5V ≤ VS < 5V) to 1.

Figure 2.17. Digitization Error Introduced by Rounding

Unfortunately, that's pretty poor resolution! While we can obviously improve the resolution significantly by using more bits to represent our numeric values, we will always map a range of input values to a particular output value, which means that almost all measured signal values within that range will be in error (the lone exception being the signal value that corresponds exactly to the numeric value).

This digitization error can be viewed as a noise signal that is superimposed on the true value of the measured signal, as shown in Figures 2.17 above and 2.18 below.

Figure 2.18. Digitization Error Introduced by Truncation

Note that, depending upon whether we perform the digitization by rounding the measured value (as in Figure 2.17) or by truncating the measured value (as in Figure 2.18), we will essentially have either a triangular noise signal (rounding) or a sawtooth noise signal (truncation).

Although we can never completely eliminate the issue, we can reduce its significance by ensuring that we use a relatively large number of bits (say 16 to 32, depending on the application) to represent the numeric values in our algorithms.

For instance, if we use 16-bit values, we can represent our signals with an accuracy of 0.0015% (assuming no other sources of digitization noise); using 32-bit values, that resolution improves to 2.3 × 10⁻⁸% (since there are 2³² discrete levels).
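These effects are easy to reproduce numerically. The following C sketch (the 12-bit converter width, 0V–5V span, and sample voltage are illustrative choices, not values from the text) quantizes a voltage by both rounding and truncation and prints the resulting errors, along with the 16-bit resolution figure quoted above:

```c
#include <stdio.h>
#include <math.h>

/* Quantize a 0-5V signal to an n-bit code, by rounding or truncation,
   and return the voltage that the resulting code represents. */
static double quantize(double volts, int bits, int use_rounding)
{
    double levels = pow(2.0, bits);           /* number of discrete codes   */
    double lsb    = 5.0 / levels;             /* volts per code (step size) */
    double code   = use_rounding ? floor(volts / lsb + 0.5)
                                 : floor(volts / lsb);
    if (code > levels - 1) code = levels - 1; /* clamp at full scale */
    return code * lsb;
}

int main(void)
{
    double vs = 3.3725;  /* example sensor voltage */
    printf("rounded:   %.6f V (error %+.6f V)\n",
           quantize(vs, 12, 1), quantize(vs, 12, 1) - vs);
    printf("truncated: %.6f V (error %+.6f V)\n",
           quantize(vs, 12, 0), quantize(vs, 12, 0) - vs);
    printf("16-bit resolution: %.4g%% of full scale\n", 100.0 / 65536.0);
    return 0;
}
```

Note that the rounding error can take either sign but never exceeds half a step, while truncation always reports low by as much as a full step, which is why the two methods produce the triangular and sawtooth noise patterns described above.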

Finite Register Length Effects.
Closely related to digitization effects, which deal with the inaccuracy introduced by having a finite number of values available to represent a continuous signal, finite register length effects refer to the issues caused by performing repeated mathematical operations on values that are represented by a finite number of bits.

The problem is that, because we have a limited number of bits with which to work, repeated mathematical operations can cause the accumulator in the processor to overflow. A simple example will illustrate the effect.

Suppose our application digitizes the input signal into a 16-bit value (0–FFFFh), and further suppose that we're using a processor with a 16-bit accumulator. If we try to average two samples that are at 3/4 of the digitization range (BFFFh), we would get a value of BFFFh if we had infinite precision in the accumulator (1/2 × (BFFFh + BFFFh) = BFFFh).

However, if we add the two samples together in a 16-bit accumulator, the sum is not 17FFEh but 7FFEh, since the most significant bit would be truncated (we don't have space in the accumulator for it). If we then take one-half of the sum, the average becomes 3FFFh, not the BFFFh we want.
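A minimal C program makes the failure concrete; the cast to a 16-bit type plays the role of the 16-bit accumulator:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t a = 0xBFFF, b = 0xBFFF;    /* two samples at 3/4 full scale */
    uint16_t sum = (uint16_t)(a + b);   /* carry out of bit 15 is lost:  */
                                        /* 17FFEh becomes 7FFEh          */
    uint16_t avg = sum >> 1;            /* 7FFEh / 2 = 3FFFh, not BFFFh  */
    printf("sum = %04Xh, avg = %04Xh\n", (unsigned)sum, (unsigned)avg);
    return 0;
}
```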

Although we can use larger accumulators (i.e., accumulators with more bits), the problem is inherent to the system and is exacerbated when multiplication operations are included.

By choosing appropriate ways to represent the numeric values internally and by carefully handling cases of overflow and underflow, designers can mitigate the effects of having finite register lengths, but the issue must always be addressed to avoid catastrophic system failures.
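As a sketch of such handling (the function names are hypothetical, not from the original text), the helpers below show two common tactics in C: performing the arithmetic in a wider type and narrowing only at the end, and saturating on overflow so that an out-of-range sum clips to full scale instead of wrapping around to a small value:

```c
#include <stdint.h>

/* Widen, compute, then narrow: the 32-bit intermediate keeps the carry. */
static uint16_t avg2_u16(uint16_t a, uint16_t b)
{
    return (uint16_t)(((uint32_t)a + b) >> 1);   /* BFFFh, as desired */
}

/* Saturate instead of wrapping: overflow clips to full scale rather
   than producing a wildly wrong small value. */
static uint16_t sat_add_u16(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + b;
    return (sum > 0xFFFFu) ? 0xFFFFu : (uint16_t)sum;
}

int main(void)
{
    uint16_t avg     = avg2_u16(0xBFFF, 0xBFFF);     /* BFFFh, correct */
    uint16_t clipped = sat_add_u16(0xBFFF, 0xBFFF);  /* FFFFh, clipped */
    return (avg == 0xBFFFu && clipped == 0xFFFFu) ? 0 : 1;
}
```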

Oversampling. As mentioned previously, designers have to ensure that systems that use DSP sample the input signals faster than the Nyquist rate (twice the highest frequency in the input signal) to avoid aliasing.

In reality, input signals should be sampled at a rate at least four to five times the highest frequency content in the input signal to account for the differences between real-world A/D performance and the ideal. Doing so spreads the periodic images of the sampled spectrum further apart, minimizing bleed-over from one image to the next.

Another issue with any filter is the delay between the time the input signal enters the filter and the time the filtered version leaves the filter, and this is true for digital as well as analog filters.

Generally, the more heavily filtered the input signal, the greater the delay through the system. If the delay becomes excessive (something that's application dependent), the filtered output can be worthless because it arrives too late to be of use by the rest of the system.

Oversampling, the practice of sampling the signal much faster than strictly necessary, can be employed to allow strong filtering of signals without introducing an excessive delay through the system.

The oversampled signal can be heavily filtered, but because a filter's delay is measured in samples and the samples arrive much faster than strictly necessary, the absolute time delay remains small and the filtered signal is available for use by other system components in a timely manner.
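A quick calculation shows why. A linear-phase FIR filter with N taps delays the signal by (N − 1)/2 samples, and oversampling shrinks the absolute duration of that delay because each sample interval is shorter. The C sketch below uses an illustrative 16-tap filter, a 1 kHz base rate, and an 8× oversampling factor; none of these figures come from the text:

```c
#include <stdio.h>

#define TAPS 16

int main(void)
{
    double base_rate = 1000.0;               /* Hz, near-minimum sampling */
    double over_rate = 8.0 * base_rate;      /* 8x oversampled            */
    double delay = (TAPS - 1) / 2.0;         /* group delay in samples    */

    printf("delay at base rate:  %.2f ms\n", 1000.0 * delay / base_rate);
    printf("delay oversampled:   %.2f ms\n", 1000.0 * delay / over_rate);
    return 0;
}
```

The same 16-tap filter that costs 7.5 ms of delay at the base rate costs less than 1 ms when oversampled by a factor of eight.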

The downside of this approach is that it requires greater processing power to handle the higher data rate, which generally adds to the cost of the system and its power consumption.

How to Analyze a Sensor Signal Application
When analyzing a specific sensor signal-processing application, designers need to understand the following aspects of the system:

1. the physical property to be measured,
2. the relationship between the physical property being measured and the corresponding parameter value to be reported,
3. the expected frequency spectrum of the signal of interest and of any noise sources in the environment,
4. the physical characteristics of the operating environment,
5. any error conditions that may arise and the proper technique for handling them,
6. calibration requirements,
7. user and/or system interface requirements, and
8. maintenance requirements.

Often, the nature of the physical parameter being measured and of the operating environment will help guide the designer in the selection of appropriate signal-processing capabilities to include in the sensor system.

For instance, if one is measuring the temperature of a large metallic mass heated by a relatively small heating element, it's safe to assume that the frequency content of the signal of interest is minimal since the temperature can change only gradually.

This means that the sensor can employ heavy filtering of the input to reduce noise. In contrast, a temperature sensor monitoring a small device being heated by a laser must be capable of reacting to large changes in temperature that can occur very quickly.

In such a situation, noise filtering must be lighter and other processing may be required to address noise that gets through the initial filters.

It's also critical to understand the relationship between the physical property being measured and the corresponding parameter being reported to the user or to the rest of the system.

Does the reported parameter vary linearly with the physical property (as is the case with RTD temperature sensors), or does it have a nonlinear relationship (as do many thermocouples)?

If the relationship is nonlinear, is it possible to segment the relationship into piecewise linear regions to simplify computation? A poor or incorrect understanding of the relationship between physical property and reported parameter can render a sensor system useless.

Finally, designers must always consider that sensor systems are going into the real world, where problems are guaranteed to arise at the worst times. The system must be designed to detect common errors, and the more robust its error detection and handling scheme, the better.

The loss of a sensor on the production floor may stop production for an entire line, so any features that allow quick troubleshooting and easy repair are greatly appreciated by end users.

Of even more importance than maintenance, however, is the ability of the sensor system to detect dangerous conditions that may lead to unsafe operation unless corrected. Sensors that operate in a fail-safe environment must be designed with rigorous attention to fault detection, reporting, and correction.

A General Sensor Signal-processing Framework
We're now ready to set up a general signal-processing framework for sensor applications. Like all good designs, the framework is deceptively simple; the key is to implement it reliably so that it performs all of its required tasks accurately, on time, every time. The framework is shown in Figure 2.19 below.

Figure 2.19. General Sensor Signal-processing Framework


The framework must be constructed as a hard real-time system; i.e., its response to system inputs and events must be deterministic (occur within a fixed time), and all processing for a given input or event must be finished before the next input or event occurs, at least for the critical processing sections.

Less critical sections, such as the communication protocol handler, are important, but they can operate in soft real-time; they must be capable of processing all inputs or events eventually, but they can queue up those inputs or events for processing at a time that's convenient for the application.
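The sketch below (all names and sizes are hypothetical) shows one conventional way to structure that split in C: a bounded-time A/D interrupt handler on the hard real-time side feeds a queue, and the soft real-time communication code drains that queue whenever the main loop gets to it:

```c
#include <stdint.h>
#include <stdbool.h>

#define QLEN 64
static volatile uint16_t q[QLEN];
static volatile uint8_t  q_head, q_tail;

/* Hard real-time: runs at every A/D conversion and performs a small,
   bounded amount of work so it always finishes before the next sample. */
void adc_isr(void)
{
    uint16_t sample = 0;  /* stand-in: read the A/D result register here */
    uint8_t next = (uint8_t)((q_head + 1) % QLEN);
    if (next != q_tail) {             /* drop the sample if queue is full */
        q[q_head] = sample;
        q_head = next;
    }
}

/* Soft real-time: called from the main loop as time permits. */
bool comm_poll(uint16_t *out)
{
    if (q_tail == q_head) return false;    /* nothing queued yet */
    *out = q[q_tail];
    q_tail = (uint8_t)((q_tail + 1) % QLEN);
    return true;
}

int main(void)
{
    uint16_t s;
    adc_isr();                 /* pretend one conversion just completed */
    return comm_poll(&s) ? 0 : 1;
}
```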

Signal Conditioning and Acquisition
The signal conditioning and acquisition section is responsible for performing any required conditioning of the analog input signal to limit the frequency spectrum to a band that can be successfully processed, to amplify the signal level to an appropriate range for digitization, and to digitize the resulting analog input signal.

The output of this section is a stream of sampled data that can then be processed numerically by the rest of the system.

Pre-analysis Filtering. Once the raw physical property signal has been sampled, it's often necessary to apply application-specific filtering to the signal to remove unwanted noise or to somehow shape the signal into a more useful form.

The filtering is typically performed immediately after acquisition so that processing algorithms later in the signal chain are able to use relatively clean data, hopefully yielding better results.
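As one example of what this stage might look like, the C sketch below applies an 8-sample moving average to each new reading. The tap count is an arbitrary illustrative choice; a real design would select the filter from the application's noise spectrum. Note the 32-bit running sum, which sidesteps the accumulator overflow problem discussed earlier:

```c
#include <stdio.h>
#include <stdint.h>

#define AVG_TAPS 8   /* illustrative tap count */

uint16_t prefilter(uint16_t sample)
{
    static uint16_t hist[AVG_TAPS];   /* last AVG_TAPS raw samples */
    static uint8_t  idx;
    static uint32_t running_sum;      /* 32 bits: no overflow here */

    running_sum -= hist[idx];         /* discard the oldest sample */
    hist[idx]    = sample;
    running_sum += sample;            /* add the newest sample     */
    idx = (uint8_t)((idx + 1) % AVG_TAPS);
    return (uint16_t)(running_sum / AVG_TAPS);
}

int main(void)
{
    for (int i = 0; i < 12; i++)           /* ramps up as history fills, */
        printf("%u\n", prefilter(1000u));  /* then settles at 1000       */
    return 0;
}
```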

Signal Linearization. Sometimes the parameter of interest does not vary linearly with the physical property being measured. A common example is a thermocouple signal, which has a complex polynomial relationship between its voltage and the corresponding temperature.

In such cases, the signal often needs to be linearized so that it can be dealt with more easily by the parameter analysis section. The specific linearization technique employed will vary by the type of property being measured.
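One widely used technique is the piecewise-linear lookup mentioned earlier: store a table of breakpoints and interpolate between them. The C sketch below uses a handful of breakpoints that approximate a type K thermocouple; a production design would use the full published reference tables (e.g., the NIST tables) for the chosen sensor type:

```c
#include <stdio.h>

typedef struct { double mv, degc; } bkpt_t;

/* Breakpoints approximating a type K thermocouple, 0-200 degrees C. */
static const bkpt_t table[] = {
    { 0.000,   0.0 },
    { 2.023,  50.0 },
    { 4.096, 100.0 },
    { 6.138, 150.0 },
    { 8.138, 200.0 },
};
#define NPTS (sizeof table / sizeof table[0])

static double linearize(double mv)
{
    size_t i;
    if (mv <= table[0].mv) return table[0].degc;        /* clamp low   */
    for (i = 1; i < NPTS; i++) {
        if (mv <= table[i].mv) {                        /* interpolate */
            double frac = (mv - table[i - 1].mv)
                        / (table[i].mv - table[i - 1].mv);
            return table[i - 1].degc
                 + frac * (table[i].degc - table[i - 1].degc);
        }
    }
    return table[NPTS - 1].degc;                        /* clamp high  */
}

int main(void)
{
    printf("%.3f mV -> %.1f C\n", 3.000, linearize(3.000));
    return 0;
}
```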

Parameter Analysis. The parameter analysis section is also highly application-specific. Although limited only by the designer's imagination, some typical operations are parameter transformation (in which the measured signal is converted mathematically to the desired corresponding parameter value), frequency analysis, and limit comparison. Frequently, this is the most complex aspect of the sensor system and the area in which the most value can be added to the product.
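As a small illustration of two of those operations, the sketch below converts a linearized reading to engineering units with a linear transform and then compares the result against configured alarm limits; the scale, offset, and limit values are hypothetical:

```c
#include <stdio.h>
#include <stdbool.h>

typedef struct { double lo, hi; } limits_t;

/* Parameter transformation: convert a normalized reading (0.0-1.0)
   into engineering units with a linear scale and offset. */
static double to_units(double raw, double scale, double offset)
{
    return raw * scale + offset;
}

/* Limit comparison: is the computed parameter within the alarm band? */
static bool in_limits(double value, limits_t lim)
{
    return value >= lim.lo && value <= lim.hi;
}

int main(void)
{
    limits_t alarm = { 10.0, 90.0 };                /* degrees C */
    double temp = to_units(0.42, 200.0, 0.0);       /* -> 84.0   */
    printf("%.1f C %s\n", temp, in_limits(temp, alarm) ? "OK" : "ALARM");
    return 0;
}
```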

Post-analysis Filtering. Once a parameter value has been computed, it's not uncommon to filter those values to smooth the data for use by other components in the system. As with the pre-analysis filtering, the particular type of filter employed is application-specific.
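A common lightweight choice here is a first-order exponential filter, sketched below in C; the smoothing constant is illustrative, with smaller values giving heavier smoothing at the cost of more delay:

```c
#include <stdio.h>

#define ALPHA 0.2   /* illustrative smoothing constant, 0 < ALPHA <= 1 */

/* First-order exponential smoothing of successive parameter values. */
static double smooth(double new_value)
{
    static double state;
    static int primed;
    if (!primed) { state = new_value; primed = 1; }  /* seed the filter */
    state += ALPHA * (new_value - state);
    return state;
}

int main(void)
{
    double noisy[] = { 25.0, 25.4, 24.7, 25.2, 30.0, 25.1 };
    for (int i = 0; i < 6; i++)
        printf("%.2f\n", smooth(noisy[i]));  /* outliers are damped */
    return 0;
}
```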

Error Detection and Handling. While the parameter analysis section is generally where the most unique value is added to the sensor system, the error detection and handling section can make or break the viability of the system. The ability to detect and to recover from errors can separate a product from its competition, particularly in situations in which the penalty for failure can be catastrophic.

Simple error detection might include checking for the presence of the sensor element and verifying that extracted parameter values are in a reasonable range. More advanced error detection might include diagnostics to alert the user before an actual failure occurs.
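The C sketch below illustrates both levels for a hypothetical temperature sensor: a rail-stuck A/D reading suggests an open or missing element, while a parameter value outside anything the process could physically produce points to a measurement fault. All thresholds are made-up values for illustration:

```c
#include <stdint.h>

typedef enum { SENSOR_OK, SENSOR_OPEN, SENSOR_RANGE } fault_t;

fault_t check_sensor(uint16_t raw_adc, double degc)
{
    /* An open or missing element typically pulls the input to a rail,
       so a reading pinned near 0 or full scale is treated as a fault. */
    if (raw_adc < 0x0010u || raw_adc > 0xFFEFu)
        return SENSOR_OPEN;

    /* A value outside what the process could ever produce indicates a
       measurement fault rather than a real excursion. */
    if (degc < -50.0 || degc > 500.0)
        return SENSOR_RANGE;

    return SENSOR_OK;
}

int main(void)
{
    return (check_sensor(0x8000u, 25.0) == SENSOR_OK) ? 0 : 1;
}
```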

Communication. The final element in the framework is the communication section. It is this section that reports all of the information gathered by an intelligent sensor and that allows the user to configure it for operation, so it is absolutely critical that this interface be robust and reliable.

A wide variety of communication interfaces are available, from RS-232 to Controller Area Network (CAN) to Ethernet to wireless, though not all systems support all interfaces. The designer must select an interface that provides the easiest integration of the product with other elements of the system while staying within the cost and reliability constraints necessary for a particular application.

A Final Word
A thorough knowledge of DSP is invaluable to the development of robust sensor systems, and this treatment has been meant to instill an intuitive, not exhaustive, understanding.

Nevertheless, with this understanding it is possible to develop a general framework for the digital analysis and reporting of sensor information, one that will be useful in your subsequent work designing sensor systems for specific applications.

To read Part 1 in this series, go to Foundational DSP Concepts for Sensors.
To read Part 2 in this series, go to Cleaning Up the Signal – Introducing Filters.

Creed Huddleston is President of Real-Time by Design, LLC, a company located in Raleigh-Durham, North Carolina, that specializes in the design of intelligent sensors.

This series of articles is based on material from Intelligent Sensor Design by Creed Huddleston, used with permission from Newnes, a division of Elsevier. Copyright 2007. For more information about this title and other similar books, please visit www.elsevierdirect.com.
