“20% off” isn’t the real story, of course. (It’s a quick-and-dirty number to get people off the phone when they call the 800 number for “meter support”.) Here’s the whole picture:
From a single (large) blood sample, used to feed hundreds of “identical” strips into the same brand of meter, the results will vary in (roughly) a bell-shaped curve, which they call a “Normal Distribution”. Once in a while, a test will be off by MUCH more than 20%. Most will be closer to the middle. You describe the width of the bell with a calculated number called “Standard Deviation”: about 68% of the readings will fall within 1 S.D. of the middle of the bell, and about 95% will fall within 2 S.D. of the middle. And so on.
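If you want to see that 68%/95% business in action, here’s a quick Python sketch. (The “true” glucose value and the S.D. are made-up numbers, purely for illustration.)

```python
import random

TRUE_BG = 100.0   # hypothetical "true" glucose value, mg/dL (made up)
SD = 3.5          # hypothetical strip-to-strip S.D., mg/dL (made up)

# Simulate 100,000 "identical" strips reading the same blood sample.
readings = [random.gauss(TRUE_BG, SD) for _ in range(100_000)]

within_1sd = sum(abs(r - TRUE_BG) <= 1 * SD for r in readings) / len(readings)
within_2sd = sum(abs(r - TRUE_BG) <= 2 * SD for r in readings) / len(readings)

print(f"within 1 S.D.: {within_1sd:.1%}")   # comes out near 68%
print(f"within 2 S.D.: {within_2sd:.1%}")   # comes out near 95%
```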
When they speak of a percentage, called the “Coefficient of Variation” or CV, it’s simply the S.D. divided by the mean value. (Both figures are in units of glucose per unit volume, mg/dL or mmol/L. Divided one by the other, the units cancel and it’s just a number. Multiply by 100, and it’s a percentage.)
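In code, the CV arithmetic is one line. (These readings are invented numbers, not real strip data.)

```python
import statistics

readings_mg_dl = [98.0, 101.5, 96.2, 103.1, 99.4]  # invented strip results

mean = statistics.mean(readings_mg_dl)
sd = statistics.stdev(readings_mg_dl)   # sample standard deviation, mg/dL
cv_percent = sd / mean * 100            # units cancel; x100 makes it a percent

print(f"mean = {mean:.1f} mg/dL, S.D. = {sd:.2f} mg/dL, CV = {cv_percent:.1f}%")
```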
Here’s a very important thing: the CV of bG test strips, like the accuracy of Dexcom, is much worse (higher) at low levels of bG. At each particular value of bG, the bell-shaped curve changes. Relative to the reading, it’s “skinny” (showing more consistent measurements) at higher bG values, and it gets “fatter” at low bG values (i.e., less consistent readings).
For example, here’s the data which one study got by testing a large number of One-Touch Ultra strips with identical control samples at three different glucose levels (I’ll turn these into absolute error bands in the sketch after the list):
44 mg/dL, CV of 4.4%
171 mg/dL, CV of 2.6%
366 mg/dL, CV of 2.4%
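Here’s that conversion, turning the study’s CV figures back into absolute error bands (the means and CVs come straight from the list above; the rest is just the arithmetic from earlier):

```python
study = [(44, 4.4), (171, 2.6), (366, 2.4)]  # (mean mg/dL, CV %) from above

for mean, cv in study:
    sd = mean * cv / 100   # S.D. = CV x mean, back in mg/dL
    print(f"{mean:>3} mg/dL: S.D. = {sd:.1f} mg/dL, "
          f"~95% of strips within +/-{2 * sd:.1f} mg/dL")
```

Notice the absolute band actually grows with bG; it’s the band relative to the reading (which is what matters when you’re deciding whether you’re Hypo) that’s worst at the low end.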
When you’re Hypo, strips become much less reliable, and so do CGMS sensors. (BTW, One-Touch Ultra is one of the more reliable strips for accuracy at low bG readings. A few brands are much, much worse.) For me, Dexcom off by more than 10% at 90 mg/dL is unusual, but Dexcom off by more than 20% at 60 mg/dL is commonplace.
OK, that pretty much covers meters versus themselves. But there’s one more “meter accuracy” factor: Reliability versus Accuracy. Even if your meter were totally “reliable” against itself, it would still probably have “accuracy” issues in comparison with values we believe to be truly accurate (such as those generated by a properly calibrated YSI analyzer). The details are messy, but let me summarize by saying that you’d better add another 2-4% of “slop” to your figures for that issue.
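If you want to bolt that extra slop onto the CV figures, here’s a rough sketch of the bookkeeping. (The CV and slop numbers are the same rough ones I’m using in this post, not manufacturer specs.)

```python
def error_band_mg_dl(reading, cv_percent, accuracy_slop_percent):
    """Rough ~95% band: 2 S.D. of strip-to-strip variation, plus an
    extra margin for meter-vs-reference (e.g., YSI) accuracy issues."""
    total_percent = 2 * cv_percent + accuracy_slop_percent
    return reading * total_percent / 100

band = error_band_mg_dl(100, cv_percent=3.5, accuracy_slop_percent=3.0)
print(f"A 100 mg/dL reading: true value likely within +/-{band:.0f} mg/dL")
```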
Finally, we get to the most important part. (And they’re all excluding this stuff from their sloppy “20%” figures, too): improper usage. Strip containers left open too long; strips kept at improper temperatures (even for fairly short periods); or, most common of all, sugar-polluted hands. Wash 'em well!
If you do that, you can relax (a bit) about your fingerstick readings at non-Hypo bG levels. With my meter, each reading should be within 10% of true reality about 95% of the time. (That’s 7% for two standard deviations of strip-to-strip variation, plus 3% for the Reliability-versus-Accuracy issue.) Take two readings, and the variation of their average (versus the “true” average value for the curve) goes down dramatically: they’re extremely unlikely to both be “whacked” by large amounts, and in the same direction.
BTW, that’s why Dexcom asks for two readings at startup.
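And here’s a quick simulation of why two readings help (again, the S.D. is a made-up, illustrative number):

```python
import random
import statistics

TRUE_BG, SD, N = 100.0, 3.5, 100_000   # made-up illustrative numbers

singles = [random.gauss(TRUE_BG, SD) for _ in range(N)]
pairs = [(random.gauss(TRUE_BG, SD) + random.gauss(TRUE_BG, SD)) / 2
         for _ in range(N)]

print(f"S.D. of single readings:    {statistics.stdev(singles):.2f} mg/dL")
print(f"S.D. of two-strip averages: {statistics.stdev(pairs):.2f} mg/dL")
# the average's S.D. shrinks by a factor of sqrt(2), about 29% smaller
```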
Now you and Lisa know everything! It didn’t hurt your head that much, did it? Go to the head of the class.