Understanding T1D Research, Clinical Trials, and Papers

Medical papers can be hard to understand without a graduate degree in the life sciences. Worse, a lot of studies seem contradictory, misleading, or even blatantly wrong. This could be due to simple sloppiness, but more often than not, other confounding problems muddy things up, including (but NOT limited to) author affiliations with companies or institutions that aim for particular outcomes. Even those in the medical field have trouble navigating this, and they complain that it's getting worse as online publications make it easier to publish junk science.

All this often leaves people with a bad sense of research itself: that it's all flawed, that corporations are untrustworthy, or other feelings of cynicism. So, I decided to write an article to help readers be more discerning about medical literature so they can make better use of their own attempts to do research. There really is good, solid data out there, but knowing how to read it is key.

The article is titled "Who's the Grand Wizard of T1D Knowledge?"


Excellent article! I hope that everyone takes the time to really READ it and to understand what steps they can take to improve their research and their self-care. Thank you for taking the time to write this article and to share it with us.

If you're not relying upon the medical research for any therapeutic or academic reason and are just a curious person who wants an easily digestible summary, you can copy/paste the report into ChatGPT and ask it to summarize it for you.

As I said, don't rely on this summary for any reason other than to satisfy your curiosity, and be aware that the summary may include inaccuracies that would need to be vetted by your endocrinologist. But it is a way to quickly get a sense of a complicated medical publication when a granular level of accuracy isn't essential.

Thanks so much for writing and being a voice for science. It's frightening how ready people are to dismiss science and scientific institutions because of instances of corruption or error, even though these things are rampant in all of human life, and probably more so in every domain BUT science. If people have no appreciation or understanding of science and the scientific method, no willingness to gain such an appreciation, and no sense of the immense value it has had for bettering people's lives or how dismal life would be without it, then society and people's quality of life will deteriorate accordingly. I'm so grateful people have been willing to be scientific and develop cures, treatments, and tech that save and enhance lives. To think how easily other people trash science and embrace ignorance and bogus medicine is really scary.


While I can appreciate the desire to use AI bots to summarize highly technical information in easy-to-understand language, the study authors already do that in their "Conclusions" section, which is generally a short paragraph of a few sentences. There's also a "Discussion" section near the end of the article that typically does the same, though it is often much longer.

I have been experimenting with AI engines over the past year to see how they're evolving, and so far, the effort is still largely a net negative. The main problem I find is that the facts the algorithms misunderstand produce errors that are disproportionately amplified by their effort to use quick, short, pedestrian language.

I feel that if you're going to bother to search for articles in the first place (or are sent one), you need to understand how to filter for good and bad resources. If you've gone that far, take the extra step to read the "Abstract" (which leads the whole article and is free even if the article itself is behind a paywall) and the Conclusions section. Once you get used to that, experiment with looking for the kind of details I highlighted in my article. And give that Attia article a read on how to read and understand clinical trials. But using AI engines at this point is still a poor idea.