How Many Decimal Places Are Enough?

We’ve been asked by customers how many decimal places are enough. It’s either because they want to maximize the resolution of their purchase or because they expect far more resolution from the equipment than is reasonable.

We’ve seen a lot, even a meter being set to read 100,000.001. That’s 100 million counts, though this instrument (i.e., readout) wasn’t even stable to 1 count. The extra 0.001 meant nothing.

Then there’s the other extreme, where some say that rounding is affected. They might be right, depending on what they’re trying to accomplish. Many meters will round based on an invisible digit to the right of the last displayed digit. That means if a meter reads 10,000 counts at a resolution of ±1 count, it is likely rounding based on 10,000.X, where X is a digit from 0 to 9. Thus, an internal 10,000.6 becomes 10,001, and 10,000.3 becomes 10,000.
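
A toy sketch of that hidden-digit behavior is shown below. The display() function and the one-count resolution are illustrative assumptions, and real meters can behave differently, especially on exact .5 ties.
```python
# Illustrative only: round an internal reading to the displayed resolution.
def display(internal_reading, resolution=1):
    return round(internal_reading / resolution) * resolution

print(display(10000.6))  # 10001
print(display(10000.3))  # 10000
```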

Many of us are in favor of taking extra digits if we can. However, are they meaningful?

This article isn’t about formulas or rounding. It’s about determining how many decimal places are enough.

Repeatability and resolution

Let’s start with a situation where we can set the appropriate resolution to display some degree of repeatability between measurements.

Example 1

In this scenario, we record three readings: 10,000.5, 10,000.5, and 10,000.5. Our repeatability is perfect, and the standard deviation is 0. However, the resolution of ±0.5 counts is too coarse to reveal any real variation between readings.

Example 2

Using the same instrument, we set the resolution to ±0.2 counts. We record three readings: 10,000.6, 10,000.4, and 10,000.6. This time, the standard deviation of our measurements is 0.11547.

Example 3

It’s the same instrument, though this time we set the resolution to ±0.1 counts. We record three readings at 10,000.5, 10,000.4, and 10,000.5. The standard deviation of our measurements is 0.057735.
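
The repeatability figures quoted in Examples 1 through 3 can be reproduced with a short Python sketch using the sample standard deviation (n-1 in the denominator):
```python
# Sample standard deviation of the three readings in each example.
from statistics import stdev

examples = {
    "Example 1 (resolution 0.5)": [10000.5, 10000.5, 10000.5],
    "Example 2 (resolution 0.2)": [10000.6, 10000.4, 10000.6],
    "Example 3 (resolution 0.1)": [10000.5, 10000.4, 10000.5],
}

for name, readings in examples.items():
    print(name, round(stdev(readings), 6))
# Example 1 (resolution 0.5) 0.0
# Example 2 (resolution 0.2) 0.11547
# Example 3 (resolution 0.1) 0.057735
```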

Of those three examples, the third (setting the resolution to ±0.1) would result in the best possible scenario for capturing repeatability, which is often required for any measurement uncertainty budget, as is the resolution of the instrument being tested.

Let’s take these examples a bit further. Suppose the instrument wasn’t repeatable when the resolution was set to ±0.5 counts.

Example 4

Let’s say the instrument reads 10,000.5, 10,002.5, and 10,003.5. The standard deviation of these three numbers is 1.527525. This example highlights that the instrument might benefit from a coarser resolution setting, say 1. Maybe our numbers become 10,001, 10,003, and 10,003, which lowers the standard deviation, our repeatability figure, to 1.154701.
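
The same calculation reproduces Example 4 before and after coarsening the display to a resolution of 1. The rounded readings follow the "maybe our numbers become" values above, so they are illustrative rather than measured.
```python
from statistics import stdev

as_read   = [10000.5, 10002.5, 10003.5]  # displayed at a resolution of 0.5
coarsened = [10001.0, 10003.0, 10003.0]  # the same behavior displayed at a resolution of 1

print(round(stdev(as_read), 6))    # 1.527525
print(round(stdev(coarsened), 6))  # 1.154701
```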

One rule for setting an instrument’s resolution

So, how many decimal places are enough? The answer depends on how stable the unit is. Many standards say the actual resolution of an instrument depends on its stability at a zero reading. They often recommend determining the resolution from the range of fluctuation observed when the instrument should read 0, divided by 2. Thus, if we set the resolution to ±0.1 and the readings fluctuate by 0.0, 0.1, and 0.0, we likely have an appropriate resolution because 0.1 > 0.05 (0.1/2).

What if the resolution was 0.1 and the readings were observed to fluctuate among 0.0, 2.0, and 1.2? Then ±0.1 < 1 (2.0/2), so the resolution of the instrument should be set to 1.
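
Below is a minimal sketch of that rule in code. The function name and the reading of "range of fluctuation" as the maximum minus the minimum are my own assumptions; the two scenarios are the ones described above.
```python
def resolution_is_appropriate(resolution, zero_readings):
    """True if the set resolution is at least half the range of fluctuation at zero."""
    fluctuation = max(zero_readings) - min(zero_readings)
    return resolution >= fluctuation / 2

print(resolution_is_appropriate(0.1, [0.0, 0.1, 0.0]))  # True:  0.1 >= 0.05
print(resolution_is_appropriate(0.1, [0.0, 2.0, 1.2]))  # False: 0.1 < 1.0, so set the resolution to 1
```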

There are other rules, though this is the one we use most of the time at Morehouse. It seems sound, and it’s found in many ASTM standards.

Reference standard uncertainty

This one isn’t as obvious, though it is worth considering. If the indicator used for the calibration is specified as 0.002% of the indicated value, does it make sense to record the resolution far past that specification?

Our device reads 10,000, and we know the best we can measure is ±0.002% of that, or ±0.2. Does it make sense to try to measure the extra digit to 0.02?

It wouldn’t make sense if that extra digit isn’t stable per the guidance above. Our measurement uncertainty will not be better than ±0.002%, no matter what we do.

Some may argue that observing the extra digit will help with rounding or other calculations by making them more exact.

When considering the uncertainty of the reference standard and applying the rule of maximum fluctuation divided by 2, the actual difference in the measurement result is often insignificant. In this case, “insignificant” means that the extra digit is very unlikely to change the measurement uncertainty when it is reported to two significant figures.
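
As a rough illustration, assume the reference standard contributes ±0.2 (0.002% of 10,000) and treat the extra displayed digit as an additional ±0.02 component combined by root-sum-of-squares; the combination method is my assumption and is not spelled out in this article.
```python
import math

reference   = 0.002 / 100 * 10000  # 0.2
extra_digit = 0.02

combined = math.sqrt(reference**2 + extra_digit**2)
print(f"{reference:.2g}")  # 0.2
print(f"{combined:.2g}")   # 0.2 -> unchanged at two significant figures
```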

Source: https://www.qualitydigest.com/inside/metrology-article/how-many-decimal-places-are-enough-051324.html