# Making sense of uncertainty

**All measurements have errors in them; it's a fact of working in the physical world. Even so, you don't have to take it lying down. Fight back with these practical tips for dealing with uncertainty and accounting for those errors via software.**

Consider a carpenter measuring the opening for a new window. He measures the width at one foot, six inches. Does this mean that the window is exactly 18.000000 inches wide? Of course not. All real-world measurements include some amount of error, more precisely termed *uncertainty.* Understanding uncertainty is important for doing useful work based on measurements.

Suppose that the measurement for the window was 18 inches +/- 1 inch. There's a good chance the new window won't fit right because the measurement is not precise enough.

It's possible to be overprecise as well. Suppose the window was measured at 18 inches +/- 0.0001 inches. This is definitely overkill for a window, unless you happen to be building a submarine. The time and cost it takes to achieve that level of precision could be better spent.

We all must strike a balance when taking measurements, but to achieve that balance, we need to know what the uncertainties are and how to find them. To further complicate the matter, most of the quantities that are actually useful are not directly measured but calculated based on other measurements. For instance, the square footage of a room is not measured directly; its length and width are multiplied to compute the area. How do the uncertainties in the length and width combine to affect the uncertainty for the area? And knowing that, do we need a tape measure accurate to +/- 1/8 inch or would a rough estimate to +/- 1 inch be ok?

This article offers a brief explanation of some of the techniques and uses of uncertainty analysis for embedded developers.

**Add it up**

Here's an example scenario culled from real life (mine), but simplified for illustration. Let's say we've just engineered a device that counts pulses from a fluid meter and then calculates a volume based on the pulses and a programmed constant. The device is in final black-box testing.

The test fixture consists of a free-running oscillator gated to an input switch with a pulse counter on it. The device under test has a maximum frequency of 1 MHz at a 50% duty cycle. However, the test counter has a maximum frequency of 5 kHz, also at a 50% duty cycle. Not the optimal design for testing such a system, but it's all we have.

Due to the design of the test fixture, it's possible that the measurement device will count an extra pulse every time the oscillator is gated on or off. This means there is a +/-1 pulse count error for every on/off transition.

Let's look at the first test we run. This one will verify the pulse accumulation routines in the device. In this test, we'll perform 10 on/off transitions. At the end of the test, we expect that the pulse count and, consequently, the volume will be different from what the test fixture indicates. But how can we tell if any observed difference is due to the expected uncertainty from the imperfect test set-up or due to bugs in the tested device's firmware?

This is where uncertainty analysis comes in. It can tell us how much uncertainty to expect in the counts. Then we can see if the test measurements are within the expected tolerance range.

When only addition is involved, as in this scenario, we can use a very simple formula to determine the overall uncertainty:

For y = x₁ + x₂ + ... + xₙ:

δy = δx₁ + δx₂ + ... + δxₙ (1)

where δ*y* is the uncertainty in *y* (in the same units as y itself). In other words, the uncertainty in *y* is equal to the sum of the uncertainties of each of the numbers added together to get *y.*

Since the uncertainty in each group of pulses is +/- 2 pulses (+/- 1 pulse for each transition on and +/- 1 pulse for each transition off), we can say for 10 pulse groups:

δy = 10 × (+/- 2 pulses) = +/- 20 pulses

If the measured number of pulses is 672, an uncertainty of +/- 20 puts the expected value between 652 and 692. If the number of pulses actually observed is 630, which is outside that range, the expected uncertainty in the test fixture cannot explain all of the error. We must declare this test a failure and seek the cause of the discrepancy within the hardware or software under test.
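As a sketch, the additive rule and the pass/fail check can be put into a few lines of Python (the `added_uncertainty` helper is my own illustration, not a library routine):

```python
# Additive uncertainty (Equation 1): the uncertainty of a sum is the sum
# of the uncertainties of its terms. Numbers come from the pulse test:
# 10 on/off transitions at +/-2 pulses each.

def added_uncertainty(uncertainties):
    """delta_y for y = x1 + x2 + ... + xn (Equation 1)."""
    return sum(abs(u) for u in uncertainties)

delta_y = added_uncertainty([2] * 10)   # 10 pulse groups, +/-2 pulses each
expected = 672                          # pulses reported by the test fixture
observed = 630                          # pulses reported by the device

# delta_y is 20, so the acceptable band is 652..692; 630 falls outside it.
in_tolerance = abs(observed - expected) <= delta_y
```

A test harness can apply the same check automatically to every run, flagging only the discrepancies that exceed the fixture's expected uncertainty.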

**Multiply and conquer**

After we squash the software bug uncovered in the previous test and successfully rerun it, we need to run a second test to check the calculated volume.

This time, we have no additions, only multiplication:

*V* = *k* × *y* (2)

where *V* is volume, *k* is a programmable constant (0.37037), and *y* is the number of pulses measured.

The relevant formula for multiplying uncertainties is:

For y = x₁ × x₂ × ... × xₙ:

|δy / y| = |δx₁ / x₁| + |δx₂ / x₂| + ... + |δxₙ / xₙ| (3)

This value |δy / y| is known as the *fractional uncertainty* (since hopefully δy << y). If this value is multiplied by 100, we get what's called the *percent uncertainty.*

Equation 3 says that if a value is a product of measured parameters, then the fractional uncertainty of the resulting value is the sum of the fractional uncertainties of the measured parameters.

Applying this formula to our test:

|δV / V| = |δk / k| + |δy / y| (4)

For this test, we only ran five individual on/off transitions via the test fixture. Then we just start filling in the pieces of the volumetric uncertainty equation (Equation 4). Let's start with δ*y.* We know from the previous example that:

δy = 5 × (+/- 2 pulses) = +/- 10 pulses (5)

In addition, we need to take into account that we're using IEEE single-precision floating-point math here (accurate to about seven significant digits). Therefore, we also have:

|δk / k| = 1 × 10⁻⁷

Combining these (the test fixture counted 258 pulses during this test), we get:

|δV / V| = 1 × 10⁻⁷ + 10 / 258 ≈ 0.0388

Don't forget to calculate the expected volume from the test fixture's pulse count:

V = k × y = 0.37037 × 258 = 95.5555 cm³

Therefore, the volume we expect if things are working right is 95.5555 cm³ +/- 3.70 cm³, or anything between about 91.9 cm³ and 99.3 cm³. If the volume we record is 97.2963 cm³, the test is well within tolerance for the test set-up and is regarded as a success.
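Here's the same calculation as a Python sketch. Note that the fixture's pulse count of 258 is reconstructed from the volumes quoted above (95.5555 / 0.37037); it is an assumption, not a value taken from the test log:

```python
# Fractional uncertainties add for products (Equation 3). The pulse count
# below (258) is back-calculated from the quoted expected volume and is
# therefore an assumption; dk/k reflects single-precision float accuracy.

k, dk_over_k = 0.37037, 1e-7        # programmed constant; ~7-digit float
y, dy = 258, 10                     # pulses; 5 transitions * +/-2 pulses

V = k * y                           # expected volume, cm^3
dV_over_V = dk_over_k + dy / y      # Equation 3 applied to V = k * y
dV = V * dV_over_V                  # absolute uncertainty, cm^3

# V is about 95.56 cm^3 and dV about 3.70 cm^3, so a recorded volume of
# 97.2963 cm^3 falls inside the expected band and the test passes.
```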

**General uncertainty**

So far, we have two formulae to find uncertainties in special cases. Many more exist for various combinations of functions, but all of them are derived from the following.

If you're given a function *y* in *n* parameters:

y = f(x₁, x₂, ..., xₙ) (6)

Then it can be shown that the following is always true:

δy = |∂y/∂x₁| δx₁ + |∂y/∂x₂| δx₂ + ... + |∂y/∂xₙ| δxₙ (7)

This says that the uncertainty of the calculated value is the sum, over all of the parameters, of the absolute value of the partial derivative of *y* with respect to that parameter, multiplied by that parameter's uncertainty.

Furthermore, if we're reasonably confident that all of the parameters are statistically independent of each other, we may use:

δy = √[ (∂y/∂x₁ · δx₁)² + (∂y/∂x₂ · δx₂)² + ... + (∂y/∂xₙ · δxₙ)² ] (8)

This results in a slightly smaller uncertainty value, but is only valid if the parameters are independent of one another. If they're not, use Equation 7.

We can derive the additive and multiplicative formulae from Equation 7. (Remember that a partial derivative is just a derivative, but we pretend that all of the variables in the equation are constants, except for the one we are differentiating with respect to.)

Equations 7 and 8 enable us to find the uncertainty of any calculated value. We can now find the uncertainties of logarithms, exponentials, sines, cosines, and any other differentiable function.
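Equations 7 and 8 also lend themselves to a generic numerical implementation: estimate each partial derivative with a central difference, then combine the contributions. The `propagate` helper below is my own illustrative sketch, not a library routine:

```python
import math

# Numerical error propagation: approximate each partial derivative in
# Equations 7/8 with a central difference, then sum the contributions
# (Equation 7) or combine them in quadrature (Equation 8).

def propagate(f, params, uncertainties, independent=True):
    """Uncertainty of y = f(params) given per-parameter uncertainties."""
    partials = []
    for i, (x, dx) in enumerate(zip(params, uncertainties)):
        h = dx if dx > 0 else 1e-8          # step size for the difference
        hi = list(params); hi[i] = x + h
        lo = list(params); lo[i] = x - h
        partials.append((f(hi) - f(lo)) / (2 * h))
    terms = [abs(p) * dx for p, dx in zip(partials, uncertainties)]
    if independent:
        return math.sqrt(sum(t * t for t in terms))   # Equation 8
    return sum(terms)                                  # Equation 7

# Example: area of a 12 x 10 room measured to +/-0.1 in each dimension.
# Partials are 10 and 12, so Equation 8 gives sqrt(1.0^2 + 1.2^2) ~ 1.56.
area_uncertainty = propagate(lambda p: p[0] * p[1], [12.0, 10.0], [0.1, 0.1])
```

A helper like this is handy for spot-checking hand-derived formulas before committing them to a spreadsheet.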

**For example**

Now we can run a third test, this time to test a new feature of the device that accounts for volumetric changes in the fluid (based on temperature changes from a given reference temperature). The actual modification of the volume is carried out by multiplying a single correction factor (CF) with the raw volume. So we have:

V_c = CF × V (9)

|δV_c / V_c| = |δCF / CF| + |δV / V| (10)

where V_c is the temperature-corrected volume and V is the raw volume.

Through past empirical work, we can calculate CF in the following manner:

CF = exp( A [ (T - T₀) / B ]² ) (11)

The data associated with this calculation is shown in Table 1.

**Table 1: Temperature correction factor test results**

| Data | Value | Comment |
| --- | --- | --- |
| A | 0.0000123 | Constant developed empirically |
| B | 1.00540 | Constant developed empirically |
| T₀ | 25°C | Initial temperature |
| T | 30°C | Measured temperature |
| δA | +/- 0.00000005 | Uncertainty in constant A |
| δB | +/- 0.000005 | Uncertainty in constant B |
| δT₀ | 0 | Uncertainty in reference temperature |
| δT | +/- 0.5°C | Uncertainty in measured temperature |

The uncertainties associated with the constants *A* and *B* are determined by convention. If a value is given without an uncertainty, the uncertainty is taken to be +/- 5 in the decimal place just below the value's last significant digit. So the precision of *A* = 1.23 × 10⁻⁵ implies by convention that δ*A* = +/- 5 × 10⁻⁸. Also, since *B* = 1.00540, we infer δ*B* = +/- 5 × 10⁻⁶, not +/- 5 × 10⁻⁵; the trailing zero implies additional precision.
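This convention is easy to mechanize. The sketch below is my own illustration; the value must arrive as a string so that trailing zeros count toward its precision:

```python
# The "+/-5 in the next decimal place" convention for values quoted
# without an explicit uncertainty. The value is passed as a string so
# trailing zeros (e.g. "1.00540") preserve their implied precision.

def implied_uncertainty(value_str):
    if "." not in value_str:
        return 0.5                     # bare integer: +/-5 in the tenths
    decimals = len(value_str.split(".")[1])
    return 5 * 10 ** -(decimals + 1)

# "0.0000123" has 7 decimal places -> 5e-8, matching delta-A in Table 1;
# "1.00540" has 5 decimal places -> 5e-6, matching delta-B.
```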

We can assume the uncertainty of the reference temperature is zero, since this is the theoretical number around which the model is built. We're not measuring this value, nor was it ever determined by measurement, as it was taken as the theory's starting point. In a similar example, the value of π doesn't have any uncertainty associated with it. We may introduce uncertainty with regard to how it's represented internally in floating-point format, but π in and of itself is not uncertain.

The uncertainty in *T* is due to the published error for the temperature transducer being used in the device (+/- 0.5°C between 0.0°C and 85°C).

Let's now determine the uncertainty associated with CF. First, we need to find the partial derivatives of CF with respect to each of its dependent variables. What follows are the final simplified answers.

You may attempt these derivations on your own. As a quick bit of help, remember that if:

y = e^(u(x)) (12)

Then:

dy/dx = (du/dx) e^(u(x)) (13)

The partial derivatives are:

∂CF/∂A = [(T - T₀)/B]² CF (14)

∂CF/∂B = -(2A(T - T₀)²/B³) CF (15)

∂CF/∂T₀ = -(2A(T - T₀)/B²) CF (16)

∂CF/∂T = (2A(T - T₀)/B²) CF (17)

Since it's reasonable to assume independence between these parameters (and δ*T* _{0} = 0) we are left with the following formula for δCF:

δCF = √[ (∂CF/∂A · δA)² + (∂CF/∂B · δB)² + (∂CF/∂T · δT)² ] (18)

Now we actually calculate the value of δCF. But first we'll find the CF value, displaying several significant digits, with the understanding that many of these may actually be meaningless once we know the uncertainty:

**CF = 1.000304252**

Next, let's calculate the uncertainty of CF:

**δCF = 0.00006**

Therefore, the final measurement of the CF is:

**CF = 1.00030 +/- 0.00006**
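The whole CF calculation fits in a short Python sketch. The code writes the correction factor as exp(A × [(T - T₀)/B]²), the form consistent with the numeric results quoted in this section; the partials follow Equations 14 through 17 and combine per Equation 18 (the δT₀ term is zero and drops out):

```python
import math

# Evaluate CF and its uncertainty from the Table 1 data. The CF form
# here is the one consistent with the numbers quoted in the text.

A, dA = 1.23e-5, 5e-8       # empirical constant and its uncertainty
B, dB = 1.00540, 5e-6       # empirical constant and its uncertainty
T0, T = 25.0, 30.0          # reference and measured temperatures, deg C
dT = 0.5                    # transducer accuracy, deg C

CF = math.exp(A * ((T - T0) / B) ** 2)

dCF_dA = ((T - T0) / B) ** 2 * CF                 # Equation 14
dCF_dB = -2 * A * (T - T0) ** 2 / B ** 3 * CF     # Equation 15
dCF_dT = 2 * A * (T - T0) / B ** 2 * CF           # Equation 17

# Equation 18: combine in quadrature (parameters assumed independent).
dCF = math.sqrt((dCF_dA * dA) ** 2 + (dCF_dB * dB) ** 2 + (dCF_dT * dT) ** 2)

# CF comes out near 1.000304 and dCF near 0.00006,
# giving CF = 1.00030 +/- 0.00006.
```

Dropped into a spreadsheet or script like this, the formula can be re-evaluated instantly for new temperatures or a different transducer.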

**Software considerations**

If we look at the calculation of |(δV/V)| in Equation 4, we can see that one of the terms is much larger than the other, which suggests that the primary source of the uncertainty in the volume is coming from one parameter. This analysis may be used to quantitatively determine which element(s) of a system should be improved to increase the overall accuracy of the system. In our example, the uncertainty in the pulse counters contributed most of the uncertainty to the measurement.

For a moment, assume that you're using a much more accurate pulse counter in the test fixture, one good to +/- 1 pulse over the entire run. Suppose also that a long-term measurement accumulates 100,000,000 pulses (these counts are illustrative):

|δy / y| = 1 / 10⁸ = 1 × 10⁻⁸

then:

|δV / V| = |δk / k| + |δy / y| = 1 × 10⁻⁷ + 1 × 10⁻⁸ ≈ 1.1 × 10⁻⁷
Here it's interesting to note that the majority of the uncertainty now comes from floating-point arithmetic, not from the measurement of the pulses. If the system were going to be used to make long-term measurements with large quantities of pulses, the correct and most effective way to improve it would be to change the internal calculations to double precision. If the device will only be used for smaller volumes, the +/- 1 pulse uncertainty overshadows the uncertainty due to floating-point calculations. In that case, it would be fine to implement the internal calculations in single precision, taking advantage of the performance of **float** over **double** calculations.

Look back at the example where the correction factor was determined. If it's typical of the operating temperatures that will be encountered, can you determine whether single- or double-precision arithmetic is required? The CF is accurate only to about six digits, so single precision should be fine.

Suppose you need to improve the accuracy of CF. It makes no sense to change the internal calculations to double precision. Instead, you should replace the temperature transducer with a more accurate one, since this is the major contributor to the uncertainty in the CF (Equation 18). The term calculated from the uncertainty in *T* is several orders of magnitude greater than all the others, effectively becoming the primary source of error in the CF.

Imagine that you obtain a better transducer, one with an accuracy of δT = +/- 0.01°C. The term associated with *T* now drops to 1.48 × 10⁻¹², which is the same order of magnitude as the term associated with *A*. And should this transducer actually be used, the uncertainty in the CF becomes +/- 0.000002, implying:

**CF = 1.000304 +/- 0.000002**

This is a bit more than one order of magnitude increase in accuracy. Also, there are now seven digits of precision in the CF. After changing to the new transducer module, a switch to double precision might be something to consider. However, you can see this makes sense only after you upgrade the temperature transducer to a much higher accuracy.
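To see the error budget shift, here's a sketch comparing the squared δA and δT contributions to Equation 18 for both transducers. It again assumes CF = exp(A × [(T - T₀)/B]²), the form consistent with the numbers quoted in the text (the tiny δB term is omitted):

```python
import math

# Compare the squared delta-A and delta-T terms of Equation 18 for the
# original (+/-0.5 deg C) and upgraded (+/-0.01 deg C) transducers.

A, dA, B, T0, T = 1.23e-5, 5e-8, 1.00540, 25.0, 30.0

def dCF_terms(dT):
    """Squared (partial * uncertainty) terms for A and T."""
    CF = math.exp(A * ((T - T0) / B) ** 2)
    term_A = (((T - T0) / B) ** 2 * CF * dA) ** 2      # from Equation 14
    term_T = (2 * A * (T - T0) / B ** 2 * CF * dT) ** 2  # from Equation 17
    return term_A, term_T

old_A, old_T = dCF_terms(0.5)    # original transducer: T term dominates
new_A, new_T = dCF_terms(0.01)   # upgraded: T term ~1.5e-12, same order as A

# With the upgrade, sqrt(new_A + new_T) is about 1.7e-6, i.e. the quoted
# CF = 1.000304 +/- 0.000002.
```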

**Certainly valuable**

Uncertainty analysis can be a powerful tool for many things, from determining whether or not a system works as required, to determining if an algorithm is appropriate for a given application. The analysis provides insight into how accurate a value is and where it could be made better.

However, two major drawbacks to this methodology may slow you down. First, it's a very math-intensive process that's not easily rushed. There's no workaround to avoid the intense calculations required. While some software packages can help, they're not readily available to everyone. This means you're on your own to generate any specific uncertainty formulas needed. Once that's done, however, it's a trivial matter to enter the formulas into a spreadsheet for further analysis. A spreadsheet also enables you to tweak the various parameters and easily analyze new data, provided the equations remain unchanged.

Second, this process is not commonly taught or used except haphazardly. I hope this article alleviates the lack of detailed literature and increases knowledge and use of these techniques.

Over the years, this kind of uncertainty analysis has helped me find numerous bugs in software, some of them in decades-old legacy systems. In addition, it's helped address design considerations, such as whether to use single- or double-precision floating-point math and which algorithm generates the least uncertainty in its final result.

**Michael Becker** is a software engineer at Direct Radiography Corporation. He holds a BS in mathematics and a minor in computer science from Gannon University and has worked in a variety of real-time embedded programming areas. Michael welcomes feedback and may be reached at .

**For further reading:** *Manual of Petroleum Measurement Standards,* American Petroleum Institute (API), Chapter 11.1 Volume X (ANSI/ASTM D 1250)(IP 200)(API Std 2540) August 1980 / Reaffirmed, March 1997.

Volumetric measurement and correction equations presented in this article are simplified derivations of the theories and equations presented within this work.

Taylor, John R. *An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements.* University Science Books, 1982.

An excellent introduction and reference book for uncertainty analysis, especially with regards to quantitative techniques. All the uncertainty formulas used in this article (and more) may be found here, along with numerous examples.