Maybe this is a dumb question, but in Dexcom Clarity, they calculate a standard deviation, but the distribution of blood sugars is not symmetric around the mean. So if you want to, say, come up with a first guess on how low you can set your target while maintaining a certain percentage of your readings above your LOW threshold, you can’t just use normal distribution statistics, right?

Does anyone know how you’d calculate that?

I don’t know the answer to your question @Tia_G. I never took any statistics courses in college and most everything I’ve learned about statistics, I’ve learned because of diabetes and using a CGM. I learned here that interpreting the standard deviation of a data set in the usual way depends on the data having a normal, or “Gaussian,” distribution.

I have learned to accept that blood sugar data points are not normally distributed, but I’ve chosen to use the SD statistic anyway.

I am interested in the answer to your question, but in the meantime will be happy to use Dexcom’s flawed SD because I find it helps me. Perhaps someone like @Dragan1 can provide an answer to you.

Sometimes I wish there was a way to interact with Dexcom staff about issues like this. There seems to be an impenetrable wall around skilled corporate engineers that prohibits interaction with patients. This is too bad because I think some dialog could be mutually beneficial.

You can calculate the SD of any set of data using the data points themselves (each 5-minute report by the sensor to the receiver). The more “normal” the distribution, and the more data points, the more meaningful the SD will be. The period of time covered, called the range, is determined by whatever calculates the SD; Dexcom would have to give us that information.
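For what it’s worth, the number Clarity reports is just the ordinary standard deviation of the raw readings over the chosen date range. A minimal Python sketch, using made-up readings (real exports would have thousands of 5-minute points):

```python
import statistics

# Ten made-up 5-minute sensor readings, in mg/dL
readings = [102, 118, 95, 140, 165, 130, 88, 110, 152, 124]

mean_bg = statistics.mean(readings)   # the "average glucose" line in Clarity
sd_bg = statistics.pstdev(readings)   # population SD over the date range

print(f"mean = {mean_bg:.1f} mg/dL, SD = {sd_bg:.1f} mg/dL")
```

Whether Clarity uses the population or the sample SD is not documented; with weeks of 5-minute readings the difference is negligible.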

An acceptable control range depends on the SD and what percentage of readings you want to include within the control range. Anything out of range of that is considered a “failure” to control the process. The greater the percent of readings in control, the less the control system goes out of range. Again, that is determined by Dexcom, and whatever settings we may set, like high and low alert glucose levels.

Control ranges are usually determined by what is defined as a failure. If you set 120 as the upper control limit, you will fail more often than if you set 160, for example. The tighter the range of control for the “accepted” number of failures, the better you are considered to be controlling your glucose levels. All this is also related to the A1c readings, though I have no knowledge of how they relate to each other.

davyboy…so here’s the thing. We’ve set a target BG of 120 for our son. His standard deviation lately has been around 45. We’re not willing to tolerate more lows, but we don’t want to get too complacent with this target if it’s too conservative.

So what I’m wondering is, could we take our data and use the calculated standard deviation from Dex to determine when it’s safe to lower our target while still having the same percentage of “low”? For instance, if his standard deviation dropped to 35, could we lower our target to 100 with the same percentage of lows?

If it were a normal distribution, you could easily figure out what percentage of data points would fall below a set threshold, given the mean and the variance. But clearly the data is not normally distributed. So does anyone know what the distribution actually IS? And is there an equally easy equation for an equivalent of the standard deviation, or a way to translate from Dex’s standard deviation to a non-normal distribution?

I wonder if Dexcom would give you an answer if you posed these questions? Perhaps a link to this thread could motivate them to provide the answer.

My advice is just keep trying to get the A1c and SD smaller, while getting the average closer to 100. Your “target” is an imaginary number, not the average around which the SD will be calculated. I can’t access data except through Clarity. The parameters you can set for yourselves are the upper and lower alert limits; Clarity will calculate the average and the SD.

You don’t know the average till you have a range of data to report. I would use two weeks, which is what the system seems to like best. The fewer data points you have in your chosen time range, the less accurate the SD will be. The ideal SD is 0, which is really not possible in practice; even the most accomplished six-sigma products out there don’t achieve zero SD. So your average will always vary from your desired target by some range. For a two-week range, my most recent average was 125, SD was 44, and equivalent A1c was 6.0.

A non-diabetic happens to have a norm of 100 regulated by the body, but will range to around 120 after a meal, and can go as low as 90 or a little lower during lengthy exercise. Those would be ideal numbers for a diabetic to shoot for.
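As an aside, the “equivalent A1c” figures quoted in this thread are consistent with the ADAG study’s linear relationship, eAG (mg/dL) = 28.7 × A1c − 46.7. Whether Clarity uses exactly this formula is an assumption on my part, but inverting it reproduces the 125 mg/dL → 6.0% pairing above:

```python
def estimated_a1c(mean_bg_mg_dl):
    """Invert the ADAG relationship eAG = 28.7 * A1c - 46.7."""
    return (mean_bg_mg_dl + 46.7) / 28.7

# The 125 mg/dL average mentioned above maps to roughly 6.0%
print(round(estimated_a1c(125), 1))  # -> 6.0
```

This is only an estimate of A1c from the CGM average; lab A1c can differ for all sorts of physiological reasons.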

Set the lower alert limit so that your child (you didn’t mention age) will recognize the need to keep a close eye on the receiver or iPhone. Don’t let them go below 70 without eating candy or glucose, whatever you put in their pocket. I always went for orange juice, and still do when I am home; it hits very fast. I carry Brach’s orange chewable candy slices in my pocket, enough to get me through two hypoglycemic reactions.

A good upper alert level, meaning you want to take action, is around 140. After eating, the glucose level rises, and quickly if the meal had a lot of carbs. A slower rise occurs one or two hours later, when the protein calories convert to glucose and get released into the blood. Several hours after you eat, the fat calories start to show up in the glucose. The carb rise is sharp and fast, and harder to control. The fat conversion to blood glucose takes place more slowly and lasts significantly longer, maybe several hours for a lot of fat. For that, if your child has a pump, an extended bolus of several hours will cover it. Knowing how much to bolus for fat is difficult, because we don’t have documented history available (that I know of) for calculating a carb number from fat calories.

Generally, diabetics handle protein better than carbs and fat. Doctors have always advocated a limit on carbs; I don’t know what it would be for a child, especially not knowing their metabolism or level of exercise. I believe there are apps for converting exercise to burnt calories, but how fast that takes effect on the blood glucose is tricky to figure out. When I was a child, I always had a snack before going out to play and often had to eat candy if out a long time. I come from an era when children always played outside after school, which I still believe is the best thing for them, besides learning to do homework and taking on household duties.

Wikipedia discusses how to calculate it and then gives the formulas farther down. https://en.wikipedia.org/wiki/Standard_deviation

When they calculate mean and SD, Dexcom assumes a normal distribution, which is only a simple approximation. Distributions that take into account that BG values cannot be negative, and that the median tends to be less than the mean, can provide better approximations. One such distribution is log-normal. The wiki page shows how one may go between normal and log-normal parameters. As an example, I’ve taken a month of my bg data (around 8500 data points) and fitted a normal distribution (left) and a log-normal distribution (right) to the data. The parameters (mean and SD) of the normal distribution are exactly the same as the values reported by Dexcom, which confirms that they just assume normal distribution. Note how the log-normal distribution better fits the histogram, though the normal distribution does not look too bad either. The differences would be larger if the mean and the median were further apart.
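To make the normal-to-log-normal conversion concrete: given the arithmetic mean m and SD s that Clarity reports, the method-of-moments parameters of the matching log-normal are σ² = ln(1 + s²/m²) and μ = ln(m) − σ²/2. A Python sketch (the 130/45 numbers are borrowed from later in this thread, not from my data set):

```python
import math

def lognormal_params(mean, sd):
    """Method-of-moments fit: parameters (mu, sigma) of the underlying
    normal for a log-normal with the given arithmetic mean and SD."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

mu, sigma = lognormal_params(130.0, 45.0)
median = math.exp(mu)  # log-normal median, always below the mean
print(f"mu={mu:.3f}, sigma={sigma:.3f}, median={median:.1f} mg/dL")
```

The gap between the median (about 123 mg/dL here) and the mean (130) is exactly the right skew being described above.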

Whether you can use log-normal (or similar) distribution to make better decisions on how to set a target, or do anything else, I am not sure. If you set the target to a lower value, risks of going low are going to increase, but I do not think we can tell by how much, regardless of how well we approximate the distribution and its parameters before making the change.

Thanks Dragan1, this is exactly the information I was looking for!

A better way of assessing variability with a skewed or asymmetric distribution would be to use quartiles. Unfortunately, most patients and doctors would not be able to interpret these measures properly. In general, since the SD is often driven by “highs,” assuming a normal distribution can make your risk of lows look more substantial than it really is. In practice you might have a mean of 100 mg/dl and an SD of 30 mg/dl and never experience any serious lows, even though a normal-distribution model would predict serious low events daily.
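For anyone curious, quartiles take nothing more than the standard library to compute; the readings below are made up to be right-skewed, the way BG data usually is:

```python
import statistics

# Made-up, right-skewed readings in mg/dL (one big "high" at the end)
readings = [62, 85, 98, 104, 111, 120, 133, 150, 182, 240]

q1, q2, q3 = statistics.quantiles(readings, n=4)
iqr = q3 - q1  # interquartile range: spread unaffected by extreme highs
print(f"median = {q2} mg/dL, IQR = {iqr} mg/dL")
```

Unlike the SD, the IQR here does not move at all if that single 240 reading were 300 instead, which is the robustness being pointed out above.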

Yes, Brian_BSC, quartiles could be helpful too. Dragan1, do you use R to analyze your data, or some other program?

I use MATLAB. I am pretty sure you can do this and much more in R. Even Excel might work, not sure. Prior to Clarity, Dexcom had Studio (a desktop program), which could generate histograms, but made no attempt to fit distribution curves or do anything more.

BTW, I should say I’ve not found any paper or any other reference saying that log-normal is appropriate to model distribution of BG data. Inspired by your question, I just looked for a distribution that would meet my understanding. I’ve found that log-normal seems to make sense, and that it works really well for my data set. I’d be interested to hear if it works well (or not) for anyone else’s BG data sets.

Following on your comment:

If you’re using Excel, you’d need an add-on like EasyFitXL, XLStat or Risk Solver Pro. Personally, I found XLStat to be the most intuitive of the three.

It’s much easier in R - one approach would be to apply density() and then qqplot(), but there are numerous choices available.

Hi Tia, here is an attempt to quantify how the expected % of time below some BG threshold (70 mg/dL in the example shown below) depends on the average BG and SD values, as reported by Dexcom Clarity. The curves have been generated assuming log-normal distribution of BG data. The % values shown seem to match my own experience reasonably well, but note that this is just a small, highly speculative DIY data analysis experiment, which *should not be used to make any therapeutic decisions*. I’d be happy to share the MATLAB code used to generate the curves, if anyone is interested.
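For anyone who wants to reproduce a point on these curves without MATLAB, here is a Python sketch of the same calculation: convert the reported mean/SD to log-normal parameters by method of moments, then evaluate the log-normal CDF at the threshold. Same caveats as above, i.e. not for therapeutic decisions:

```python
import math
from statistics import NormalDist

def pct_below(threshold, mean, sd):
    """Expected % of readings below `threshold`, assuming BG is
    log-normal with the given arithmetic mean and SD (per Clarity)."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    z = (math.log(threshold) - mu) / math.sqrt(sigma2)
    return 100.0 * NormalDist().cdf(z)

# Example point on the curve: mean 130 mg/dL, SD 45 mg/dL, threshold 70
print(f"{pct_below(70, 130, 45):.1f}% of readings expected below 70")
```

Lowering the mean or raising the SD both push this percentage up, which is the trade-off the curves are meant to show.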

This is really fascinating! Our son has an SD as calculated by Clarity of about 45 and an expected mean of about 130ish, but he spends less than 3 percent of his time below 70 and less than 0.5 percent below 55. So his distribution may not be log-normal – or if it is the skew may be different than what you have… it’s also possible the type of distribution changes as you drop down in average BG, reflecting some physiological differences inherent in insulin processing!

The data does sort of align with what I intuitively guessed, which is that to lower the target by 10 points, we’d need to go down by about 7 to 10 points in SD – unless we’re willing to tolerate more lows. Which we’re not…

And then this tells me that his A1C has a floor, at least until we can reduce variability. We could maybe shave off an additional 0.2 or 0.3, but his average BG is approaching his average target BG at this point, and we can’t really go lower unless we reduce some of those swings.

Thanks for the offer of the MATLAB code. I don’t have the product codes/registration stuff for MATLAB so I’m probably going to try and figure these things out with R.

Yeah, at an average of 130 and SD around 45, the curve above would predict about 4.6% of samples below 70, which is higher than the actual 3% you’ve observed. But, I’d say the prediction is not too bad!? At least, it seems to be somewhat conservative, which is probably better than if it underestimated lows. You may be able to find a better distribution for your son’s bg data. I’d take a look at the Burr distribution.

Entirely possible, we just do not know. With n=1 data, we can only do so much. Dexcom has access to bg data from many users, so they should be able to add all kinds of guideline-worthy results to Clarity - hope they are listening. BTW, thanks for the thought-provoking question, it’s been fun to look into this.