Please open the "FermiLab Quantum foam" article by Don Lincoln linked below.
This article describes how a team calculated the strength of the muon's magnetism and then measured it.
The article is interesting in a number of ways, but we're focused on precision of measurement for now, so that's what we'll look at.
Note the language in the second-to-last paragraph: the measurement "was found to be off from the prediction by 0.1 percent—a significant amount."
The team's measurement was "off from the prediction by 0.1 percent." What, precisely, is 0.1 percent?
"Percent" means literally, "per hundred." One cent is one percent of a dollar. There are 100 cents in a dollar, so a penny is one per hundred-in-a-dollar, or one percent of a dollar. So 0.1 percent would be like 1/10 of a penny in relation to a dollar.
Decimal | Fraction | Exponent |
---|---|---|
0.001 | 1/1,000 | 10⁻³ |
This is like keeping a savings account you both deposit to and withdraw from. You add up the deposits and withdrawals and calculate that you have $1,000. When you call the bank, they say you have $999. $1,000 is your calculation; $999 is the measurement. Your calculation is off by 0.1 percent.
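To make the arithmetic concrete, here is a short Python sketch (using the dollar amounts from the example above) that computes the relative error:

```python
# Compare a calculated balance against a measured one and express
# the difference as a relative (percent) error.
calculated = 1000.00  # what your deposits-minus-withdrawals ledger says
measured = 999.00     # what the bank says

error = abs(calculated - measured) / calculated
print(f"{error:.4f}")          # fractional error: 0.0010
print(f"{error * 100:.1f}%")   # as a percentage: 0.1%
```

The fractional form, 0.001, is exactly the decimal in the table above.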
When scientists have a measurement error of 0.1 percent, they know there's something "significant."
Please open the "Physics Today" article by Gerald Gabrielse linked below.
In this document, find where it says "The uncertainty, in parentheses for the rightmost two digits, is only 2.8 parts in 10¹³." Don't see it? Search ([Ctrl][f]) for the text "2.8 parts"; it's just above the section title "The standard-model calculation".
This document discusses the scientists' predicted and measured values for the magnetic moment of a single electron. The measurement was off from the prediction (they always are) by a tiny amount: "only 2.8 parts in 10¹³." But what, precisely, is "2.8 parts in 10¹³"?
For our purposes, the 2.8 is not significant. We're more concerned with magnitudes than with fractions of magnitudes, so let's call it 1 part in 10¹³.
Here, we begin to need exponential notation because, well...
Decimal | Fraction | Exponent |
---|---|---|
0.0000000000001 | 1/10,000,000,000,000 | 10⁻¹³ |
...all those zeros get difficult to count!
By comparing the language in these two documents, we see that scientists treat an error with a magnitude of 10⁻³ as too much error, but an error with a magnitude of 10⁻¹³ as the mark of a very precise measurement.
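Python's scientific notation does the zero-counting for us. Here is a minimal sketch comparing the two error magnitudes discussed above (the variable names are our own labels for the two examples):

```python
# Write the two error magnitudes in scientific notation and compare
# them, rather than counting zeros by hand.
muon_error = 1e-3        # 0.1 percent, from the muon article
electron_error = 1e-13   # "parts in 10^13", from the electron article

print(f"{electron_error:.15f}")   # 0.000000000000100 -- hard to read as a decimal
print(f"{muon_error / electron_error:.0e}")  # 1e+10
```

The ratio, 10¹⁰, says the electron measurement is ten billion times more precise than a 0.1 percent error.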