From my blog, Don’t Fear Diabetes
If there’s one number, one test that strikes fear and loathing into the hearts of diabetics everywhere, it’s the A1c. Not only is it the main number used to diagnose diabetes, it is also the measuring stick wielded
by doctors, CDEs, insurers, and diabetics themselves for evaluating our “control”. But are its days numbered?
A1c, according to the American Diabetes Association, is “a test that measures a person’s average blood glucose level over the past 2 to 3 months. Hemoglobin (HEE-mo-glo-bin) is the part of a
red blood cell that carries oxygen to the cells and sometimes joins
with the glucose in the bloodstream. Also called hemoglobin A1C or
glycosylated (gly-KOH-sih-lay-ted) hemoglobin, the test shows the
amount of glucose that sticks to the red blood cell, which is
proportional to the amount of glucose in the blood.”
So, essentially, it is a measurement of your average blood sugar over a period of 2-3 months.
The biggest problem with this, as any diabetic will tell you, is that blood sugars move around. A lot. A whole big giant ton. So an average blood sugar over a period of as little as an hour could mask a
huge range of values (it would be very easy to have an average blood
sugar over an hour of 100 mg/dl, which sounds like a very good number,
and have a range of 40 mg/dl to 220 mg/dl, which is a lot less desirable).
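To put rough numbers on that, here’s a minimal Python sketch (the readings are invented for illustration, not real CGM data) of two hours of blood sugars that share the same reassuring average:

```python
# Two made-up hours of readings, one every ten minutes. Both average
# 100 mg/dl, but one is rock steady and the other is a roller coaster.
import statistics

steady = [95, 98, 100, 102, 101, 104]    # mg/dl
swings = [40, 70, 160, 220, 65, 45]      # mg/dl

for name, hour in [("steady", steady), ("swings", swings)]:
    print(f"{name}: mean {statistics.mean(hour):.0f} mg/dl, "
          f"range {min(hour)}-{max(hour)} mg/dl")
```

Both hours report the same average; only one of them would feel fine to live through.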
So if one hour’s average can hide so much, how about 2,160 hours (the roughly three months an A1c covers)?
Probably a lot more. So why are we so hung up on A1cs?
Well, humans seem to have a need for standardized comparisons. The world is simply too complex to try to understand it all. In order to make decisions, it’s essential to simplify information. The A1c does
that for diabetes. The problem is, while it simplifies things, it does
it in only a marginally useful way. Sure, we can say that someone with
an A1c of 11 is worse off than someone with an A1c of 7.5. But what about
the difference between 6.5 and 7.5? Conventional wisdom has it that the
6.5 is preferable (using the generally accepted lower=better paradigm).
But that’s not necessarily the case. Karen at Bittersweet wrote a very important post recently on celebrating her higher A1c.
What we all know (but until recently didn’t have the tools to measure) is that Standard Deviation is as important as (or, within a certain range, more important than) A1c. When you only test 6-10 times a day, getting
an accurate Standard Deviation is impossible (especially following the
convention of testing 2 hours after meals, well after most post-prandial
spikes have receded). But with modern CGMS, it’s as easy as pressing a
button. While I only have experience with Dexcom software, I’m sure all CGMS (Abbott Navigator, Medtronic Guardian)
have similar functionality. I can easily get my average (mean and
median) blood sugar, standard deviation, highs, lows, and many other
readings. This gives me a far more complete picture than an A1c ever
could. And if I still want an A1c, it’s easy to convert the average from my Dexcom into an estimate.
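Here’s a minimal sketch of what that looks like. The readings list is invented (a real Dexcom export would have a value every five minutes for weeks on end), and the A1c conversion uses the published ADAG regression, eAG (mg/dl) = 28.7 × A1c − 46.7, solved for A1c; Dexcom’s own software may well use a slightly different formula.

```python
# Rough sketch of the numbers a CGM export makes trivial to compute.
# The readings are invented; a real export has thousands of values.
import statistics

readings = [92, 110, 145, 180, 162, 130, 98, 75, 88, 120, 155, 140]  # mg/dl

mean_bg   = statistics.mean(readings)
median_bg = statistics.median(readings)
sd_bg     = statistics.stdev(readings)        # sample standard deviation
lo, hi    = min(readings), max(readings)

# ADAG regression (eAG mg/dl = 28.7 * A1c - 46.7), solved for A1c.
est_a1c = (mean_bg + 46.7) / 28.7

print(f"mean {mean_bg:.0f}, median {median_bg:.0f}, SD {sd_bg:.0f}, "
      f"range {lo}-{hi} mg/dl, estimated A1c {est_a1c:.1f}%")
```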
Now that CGMS are on the rise, and blood sugar fluctuations are starting to register as more important than blood sugar averages, maybe we will start to see the A1c ushered out as the yardstick
for diabetic care. Instead of hearing “that A1c isn’t where it should
be,” maybe people will start to hear “I noticed that your post-prandials
for weekend dinners are much higher than for any other meal. Let’s look at
your boluses for these meals and see if we can’t tighten those numbers
up a bit.” Wouldn’t that be more helpful?