Choosing sensors: Specsmanship vs. reality

Accuracy and precision are paramount when specifying sensors. The two terms are often used interchangeably, but there are fundamental differences between them. Accuracy, a qualitative concept, indicates the proximity of measurement results to the true value; precision reflects the repeatability or reproducibility of the measurement.

ISO 3534-1:2006 defines precision as the closeness of agreement between independent test results obtained under stipulated conditions, and views the concept of precision as encompassing both repeatability and reproducibility. The standard defines repeatability as precision under repeatability conditions, and reproducibility as precision under reproducibility conditions.

Precision, accuracy, repeatability, reproducibility, variability and uncertainty represent qualitative concepts and thus should be applied with care. The precision of an instrument reflects the number of significant digits in a reading; the accuracy of an instrument reflects how close the reading is to the true value being measured. An accurate instrument is not necessarily precise, and instruments are often precise but far from accurate.
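To make the distinction concrete, the short Python sketch below (with an invented reference value and invented readings) computes the bias of a set of repeated measurements as a measure of accuracy, and their standard deviation as a measure of precision. The instrument it models is precise but not accurate.

```python
# A minimal sketch distinguishing accuracy from precision.
# The true value and the readings are invented for illustration.
import statistics

true_value = 100.0                               # known reference applied to the sensor
readings = [101.2, 101.3, 101.1, 101.2, 101.3]   # repeated measurements

bias = statistics.mean(readings) - true_value    # accuracy: closeness to the true value
spread = statistics.stdev(readings)              # precision: repeatability of the readings

print(f"bias (accuracy error): {bias:+.2f}")     # approx +1.22 -> far from accurate
print(f"std dev (precision):   {spread:.2f}")    # approx  0.08 -> very precise
```

A large bias with a small spread is exactly the “precise but far from accurate” case: calibration can remove the bias, while averaging reduces the random spread but cannot touch the bias.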

Figure 1 (below) illustrates the difference between accuracy and precision, and shows that the precision of a measurement, rather than being fixed, may vary in proportion to the signal level.


Concepts of accuracy

Sensor manufacturers and users employ one of two basic methods to specify sensor performance: parameter specification and the total error band envelope.

Parameter specification quantifies individual sensor characteristics without any attempt to combine them.

The total error band envelope yields a result much nearer to that expected in practice: sensor errors are expressed in the form of a total error band, or error envelope, into which all data points must fit regardless of their origin. As long as the sensor operates within the conditions specified on the data sheet, its data can be relied on. This gives the user confidence that all sensor data acquired will be accurate within the stated error band, avoiding the need for lengthy and error-prone data analysis. Figure 2 illustrates the concept.
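To make the idea concrete, the minimal Python sketch below treats the total error band as a single acceptance test: a reading can be trusted if it deviates from the true value by no more than the band. The full-range output and the 1% band width are assumed figures, not any particular manufacturer’s specification.

```python
# A minimal sketch of a total-error-band check: every reading taken while the
# sensor operates within its stated conditions must fall inside the envelope.
# Both figures below are assumptions for illustration.

FRO = 10.0       # full-range output, e.g. volts (assumed)
BAND_PCT = 1.0   # total error band, % of FRO (assumed)

def within_error_band(reading: float, true_value: float) -> bool:
    """True if the reading deviates from the true value by less than the band."""
    return abs(reading - true_value) <= FRO * BAND_PCT / 100.0

print(within_error_band(5.08, 5.00))   # True:  0.08 V deviation < 0.10 V band
print(within_error_band(5.12, 5.00))   # False: 0.12 V deviation > 0.10 V band
```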



Figure 1. Precision vs. accuracy.


Figure 2. Total error band.


Many manufacturers, however, specify individual error parameters, unless there are legislative pressures compelling them to state the total error band of their sensors. For instance, if products or services are sold by weight, the weighing equipment is subject to legal metrology legislation and comes under the scrutiny of weights and measures authorities around the world.

The International Organization of Legal Metrology requires that load cells used in weighing equipment be accuracy-controlled through strict adherence to an error-band performance specification. Typically, such an error band will include parameters such as nonlinearity, hysteresis, nonrepeatability, creep under load, and thermal effects on both zero and sensitivity. The user of such a sensor can rest assured that its measurement accuracy will be within the total error band specified, provided all the parameters of interest are included.

Unless there is external pressure to comply, manufacturers do not generally specify their products using the error band method, though it yields results that are more representative of how the product will respond in actual use. Instead, commercial pressures lead manufacturers to portray their sensors in the most favorable light vs. the competition.

The commonly used parameter method allows you to make a direct comparison between competing products by examining their specifications as detailed in the product data sheets. When selecting a sensor, carefully examine all performance parameters with respect to the intended application to ensure that the sensor you ultimately choose is suitable for its specific end use.

A typical sensor data sheet will list a number of individual error sources, not all of which affect the device in a given situation. Given the plethora of data provided, you may find it difficult to decide whether a given sensor is sufficiently accurate for your desired application.

Ideally, the mathematical relationship between a change in the measurand and the output of a sensor over the entire compensated temperature and operational range should include all errors due to parameters such as zero offset, span rationalization, nonlinearity, hysteresis, repeatability, thermal effects on zero and span, thermal hysteresis, and long-term stability.

Typically, users will focus on just one or two of these parameters, using them as benchmarks with which to compare products. One of the most commonly selected parameters is nonlinearity, which describes the degree to which the sensor’s output (in response to changes in the measured parameter) departs from a straight-line correlation.

A polynomial expression describing the true performance of the sensor—if manufacturers provided it—would yield accuracy improvements of perhaps an order of magnitude.

Many sensors do, in fact, have a quadratic relationship between sensor output and measured value, with a response that is linear only to a first-order approximation. Thus, if you substitute the quadratic equation y = ax² + bx + c for the manufacturer’s advertised sensitivity data, supplied in the form y = ax + b, you can improve the accuracy. In another example, although many gravity-referenced inertial angular sensors have a sine transfer function (the output varies as the sine of the measured angle), the manufacturer’s data sheet will still list a linear expression, because the relationship between the output and the sine of the angle is linear.
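The following sketch shows the improvement, using invented calibration points for a hypothetical sensor with a mild second-order response: fitting the quadratic form leaves much smaller residuals than the data sheet’s linear form.

```python
# A minimal sketch comparing the data sheet's linear model y = ax + b with a
# quadratic fit y = ax^2 + bx + c. Calibration points are invented to mimic a
# sensor with a slight second-order response.
import numpy as np

x = np.array([0.0, 2.5, 5.0, 7.5, 10.0])        # applied stimulus
y = np.array([0.02, 2.56, 5.15, 7.78, 10.45])   # measured output (invented)

lin = np.polyfit(x, y, 1)    # first-order fit: the advertised sensitivity form
quad = np.polyfit(x, y, 2)   # second-order fit: y = ax^2 + bx + c

lin_resid = y - np.polyval(lin, x)
quad_resid = y - np.polyval(quad, x)

print("max linear residual:   ", np.abs(lin_resid).max())    # noticeably larger
print("max quadratic residual:", np.abs(quad_resid).max())   # much smaller
```

In practice, the quadratic coefficients would come from a multi-point calibration of the individual sensor rather than from invented data as here.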

If the specific thermal effects contributing to both zero and sensitivity errors are stated, then the measurement errors may be minimized by considering the actual errors, rather than the global errors quoted on the sensor data sheet, together with the actual temperature range encountered in the application.

Often, both errors are quoted as a percentage of full-range output (FRO). In reality, sensitivity errors are normally a percentage of reading. Thermal errors may be minimized by actively compensating for temperature through the use of a reference temperature sensor installed on or near the sensor being used. Some manufacturers provide an on-board temperature sensor expressly for this purpose.
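A minimal sketch of such active compensation is shown below. The calibration temperature and the zero and span temperature coefficients are invented; real values would come from the data sheet or from characterizing the individual sensor.

```python
# A minimal sketch of active thermal compensation using a co-located reference
# temperature sensor. All coefficients are assumed values for illustration.

T_CAL = 25.0      # temperature at which the sensor was calibrated, deg C (assumed)
ZERO_TC = 0.002   # thermal zero drift per deg C, in output units (assumed)
SPAN_TC = 0.0005  # thermal sensitivity drift per deg C, fractional (assumed)

def compensate(raw: float, temp_c: float) -> float:
    """Remove predictable thermal zero and span errors from a raw reading."""
    dt = temp_c - T_CAL
    corrected = raw - ZERO_TC * dt       # subtract the thermal zero shift
    corrected /= 1.0 + SPAN_TC * dt      # rescale for the sensitivity drift
    return corrected

print(compensate(5.060, 45.0))   # approx 4.970 after removing predictable drift
```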

It is important to distinguish between the contribution of zero-based and sensitivity errors. Thermal zero errors are absolute errors and are generally quoted as a percentage of full scale (FS). In most cases, sensors are not used to their full-scale capacity; therefore, when expressed as a percentage of reading, errors can become very large indeed. For example, a sensor used at 25 percent FS will have a thermal zero error of four times its data sheet value as a percentage of reading. A similar mistake occurs when users specify sensors with an operating range much higher than that which will be encountered in practice “just to be safe.”
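The arithmetic is worth seeing explicitly. The sketch below, assuming a 0.5% FS thermal zero error taken from a hypothetical data sheet, shows how that fixed absolute error grows as a percentage of reading the further below full scale the sensor is used.

```python
# A minimal sketch of the worked example above: a thermal zero error quoted as
# a percentage of full scale grows as a percentage of reading when the sensor
# is used below full scale. Both figures are assumed for illustration.

full_scale = 100.0        # sensor capacity, e.g. N (assumed)
zero_error_pct_fs = 0.5   # thermal zero error from the data sheet, % FS (assumed)

for usage_pct in (100, 50, 25):
    reading = full_scale * usage_pct / 100.0
    error_abs = full_scale * zero_error_pct_fs / 100.0   # absolute, independent of reading
    error_pct_reading = 100.0 * error_abs / reading
    print(f"used at {usage_pct:3d}% FS -> {error_pct_reading:.1f}% of reading")

# used at 25% FS the error is 2.0% of reading: four times the 0.5% data sheet figure
```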

These examples illustrate that you can improve both accuracy and precision because you can minimize predictable errors mathematically. Stability errors and errors that are unpredictable and nonrepeatable present the largest obstacle to achievable accuracies.

Unpredictable errors, such as long-term stability, thermal hysteresis and nonrepeatability, cannot be treated mathematically to improve accuracy or precision and are far more difficult to deal with. While thermal hysteresis and nonrepeatability can be quantified at the point of manufacture under controlled conditions, long-term stability cannot.

Various statistical tools are available to help define long-term stability, but ultimately you have to make a decision that will depend in part on how critical the measurement is. Routine recalibration may be the only reliable way of eliminating the consequences of long-term deterioration in the sensor’s performance.

Top tips for the specifier:

• Repeatability is the single most important sensor performance parameter; without it no amount of compensation or result correction will be meaningful.

• Consider the environmental temperature range within which the sensor will operate. Thermal errors, particularly those associated with the zero output of the sensor, will dominate.

• Do not overspecify the operating range of the sensor. Manufacturers state the sensor’s safe over-range limits, and these should be sufficient. By overspecifying your sensor, you will reduce its signal magnitude, and zero-based errors will increase as a percentage of the measurement range.

• Do not confuse resolution with accuracy.

• If the sensor is to be used long-term, consider the effect of its long-term stability. Progressive deterioration in sensor characteristics can have disastrous consequences. This emphasizes the need for periodic recalibration. Typically, 12 months is an acceptable recalibration period, but both the operating environment and the consequences of inaccurate data must be considered.

• Calculate the total error that can be expected from the sensor by referring to the data sheet performance parameters, being careful to include only those that are pertinent to the specific application; one way to combine them is shown in the sketch below.
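The article leaves the method of combination open. A common approach, shown in the sketch below with invented data sheet figures, is to root-sum-square (RSS) the error terms that are independent, keeping the straight arithmetic sum as the pessimistic worst case.

```python
# A minimal sketch of a total error budget. The parameter values are invented
# placeholders for figures read from a data sheet; only terms relevant to the
# application should be included. Independent errors are combined root-sum-
# square (RSS); a straight arithmetic sum gives the worst case instead.
import math

errors_pct_fs = {                  # each entry: % of full scale (assumed values)
    "nonlinearity":         0.10,
    "hysteresis":           0.05,
    "nonrepeatability":     0.02,
    "thermal zero (range)": 0.20,
    "thermal span (range)": 0.15,
}

rss = math.sqrt(sum(e * e for e in errors_pct_fs.values()))
worst_case = sum(errors_pct_fs.values())

print(f"RSS total:        {rss:.2f}% FS")         # approx 0.27% FS
print(f"Worst-case total: {worst_case:.2f}% FS")  # 0.52% FS
```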


About the author

Michael Baker is managing director at Sherborne Sensors. He earlier founded Maywood Instruments and was general manager of Schaevitz Sensors, part of U.K.-based Measurement Specialties. He led a management buyout team to form Sherborne Sensors in 2002.

This article provided courtesy of Embedded.com and Embedded Systems Design Magazine. Copyright © 2011 UBM. All rights reserved.
