Here is a helpful article from Wired magazine (hat tip: Chaplain Mike) on ways to discern whether a scientific article on Covid-19 is useful or not. Of course, one way to judge an article is whether it lines up with one’s ideological bias. That assumes one’s goal is ideological purity. Ideological purity signals that one is in good standing with one’s in-group or tribe, and that is very important to some people. I’m not being facetious here; holding opinions that go against one’s group of friends can be difficult and stressful.
But if one is willing to drill down and try to approach the truth as closely as possible, the Wired article has some useful advice. Covid-19 is a new disease so studies are coming out of labs and research facilities at a rapid pace:
- Some studies are small and anecdotal and are not rigorously vetted.
- Others are based on bad data or misplaced assumptions.
- Many are released as preprints without peer review.
- Others are hyped up with big press releases that overstate the results.
The article cites hydroxychloroquine, the antimalarial drug that appeared promising in the early stages of the pandemic, as an example. Early anecdotal evidence from China seemed to show the drug might have some benefits, and an early trial in France seemed promising. But once a large-scale, double-blind trial found the drug didn’t hurt patients, but didn’t help them either, the FDA finally revoked its emergency use authorization for the drug on June 15.
So how do readers without a research or medical background, who just want to know what’s going on and how to stay healthy, approach this flood of data? The article cites the Novel Coronavirus Research Compendium (NCRC). The team includes statisticians, epidemiologists, and experts on vaccines, clinical research, and disease modeling, who rapidly review new studies and make reliable information accessible to the public. The key for the non-expert is to look at where a study was published, what data it uses, and how it fits into the larger body of scientific research.
- “First step: Look at where it was published. That can offer clues about things like whether the research is finished or still in revision, if it’s been reviewed by other scientists, or whether it’s rigorous enough to be accepted by top journals like the Journal of the American Medical Association, The Lancet, or The New England Journal of Medicine.”
If a study’s design or data don’t live up to the journal’s standards, the editors might reject it entirely. Then, during peer review, the study is even more closely examined for mistakes and sent back to the authors with suggestions for ways to make the paper stronger. The authors then revise their paper to address those concerns. Sometimes scientists don’t want to wait for the lengthy peer review process to conclude before they publish their research, especially during this pandemic, when their information could help other scientists. So while their article is under review at a journal, the researchers might also publish the paper on a preprint server. These papers are essentially drafts, and readers should be wary about immediately drawing big conclusions from them. If a paper isn’t published in a journal or on a preprint server at all, but instead shows up on a personal website or as a press release without any data attached, that’s a red flag, and those conclusions should be taken with a huge grain of salt.
- Know the Format. If a reader wants to dive into reading primary literature, they shouldn’t approach a study like a book or a news article.
A typical study has six major parts (from the Wired article):
- They generally begin with an abstract, which briefly describes the question the researchers were trying to answer, what data they collected, and what the results were.
- Then the introduction and literature review sections set the stage and tell readers more about the ideas the researchers were exploring and what previous studies have found.
- The methods section explains exactly how the study was conducted, which allows other researchers to repeat the experiment to see if they get the same results.
- The results section presents what the study found in terms of the actual data.
- The discussion section is where the authors interpret what they found and talk about its significance.
- The conclusions make some larger claims and begin hypothesizing about future studies and new work that should be done.
The article notes it can be easy to miss what distinguishes the actual findings from that more speculative part. That speculative part is important—it points to the future and it starts a conversation about how to move the field forward. But it shouldn’t be confused with what a particular study found.
- Go for the Gold Standards. There are a few best practices for medical studies that show the research methods are rigorous. “One is that the study has a control or placebo group. That neutral group doesn’t get the drug or treatment at all and can be directly compared with the groups that did. Another is that the study is a randomized ‘double-blind’ trial, in which neither the test subjects nor the scientists know who received the placebo and who got the active drug.” However, an observational study that includes good data and a big enough sample size can still be useful and informative.
- Beware Shocking Claims. Don’t immediately buy into claims that are wildly inconsistent with what previous research has shown. New, groundbreaking findings make great headlines, but they rarely make good science.
Any one study is not definitive. “Science builds on itself slowly. Findings have to be reviewed by other experts and then replicated in different settings and populations before the community is ready to make any really big claims.” Final money quote from the article:
“Most of it is incremental steps, showing the data moving in the right direction,” says Jason McLellan, a virologist who studies coronaviruses, including MERS, SARS, and the one that causes Covid-19, at the University of Texas at Austin. He advises readers to be cautious about getting too excited about that one study that will answer everything. In science, he says, “there’s never any absolutes.”
Another point I would have added is “beware the cherry-pickers.” A good scientist will weigh all sides and even welcome contrary criticism. But the ideologically minded ignore or minimize the contrary data and cite only the data that supports their ideology.
Well, I hope that helps. I get really annoyed with people who expect scientists to speak in absolutes. A scientist will make a statement and later, as more data emerges, revise that statement, or even contradict it. But the ideologically minded person hears a statement from a scientist and takes that statement as absolute. When the scientist changes their mind with the acquisition of additional data, something good scientists are supposed to do, the ideologically minded person gets all bent out of shape and starts whining about how “you can’t trust experts.” What utter nonsense. All science is provisional.
And I’ve said this before, but it bears repeating: if you are not going to “trust” experts, who are you going to trust? Someone with no education on the subject and no real-life experience working with it? How the hell is that any better? Oh, wait, I know… someone whose biases align with mine. Yeah, that’s how you get to the bottom truth of something! /sarcasm off