By: SARA HARRISON – 07.08.2020 08:00 AM
Confused? Surprised? Wondering where the good parts are? Here are a few tips on reading scientific papers to help those of us following along at home.
EVALUATING THE QUALITY of Covid-19 research is challenging, even for the scientists who study it. Studies are rapidly pouring out of labs and hospitals, but not all of that information is rigorously vetted before it makes its way into the world. Some studies are small and anecdotal. Others are based on bad data or misplaced assumptions. Many are released as preprints without peer review. Others are hyped up with big press releases that overstate the results—but when scientists are finally able to dive into the research, sometimes the study isn’t as groundbreaking as it seemed.
Take hydroxychloroquine, an antimalarial drug that appeared promising in the early stages of the pandemic. Anecdotal evidence from a Chinese hospital performing a clinical trial showed the drug might have some benefits, and an early trial in France seemed promising. The US Food and Drug Administration allowed the medication to be available to Covid-19 patients for emergency use. But then the story got complicated. One trial found the drug increased the death rate among patients, but was later retracted because it relied on data that could not be verified. Finally, a large-scale, double-blind trial found the drug didn’t hurt patients, but didn’t help them, either. The FDA finally revoked its emergency use authorization for the drug on June 15.
The promise of hydroxychloroquine rose and fell in just three short months, a lightning-fast turnaround for scientific research. Keeping up with this flood of information about coronavirus therapies and reviewing all the studies coming out is a daunting task, especially for readers without a research or medical background who just want to know what’s going on and how to stay healthy. “We can’t expect everyone to be able to pick up any research paper and know that it’s high quality,” says Elizabeth Stuart, a professor at the Johns Hopkins Bloomberg School of Public Health.
Stuart is part of a team of colleagues at Johns Hopkins who run the Novel Coronavirus Research Compendium (NCRC). The team includes statisticians, epidemiologists, and experts on vaccines, clinical research, and disease modeling; together they rapidly review new studies and make reliable information accessible to the public. For those of us who don’t have advanced degrees in these areas, distinguishing an inflated headline from a genuinely important discovery can feel impossible. But by looking at where a study was published, what data it uses, and how it fits into the larger body of scientific research, even the armchair experts among us can start to be more savvy science information consumers.
Here are a few things experts say we should all do when evaluating new research.
Check the Source
First step: Look at where it was published. That can offer clues about things like whether the research is finished or still in revision, if it’s been reviewed by other scientists, or whether it’s rigorous enough to be accepted by top journals like the Journal of the American Medical Association, The Lancet, or The New England Journal of Medicine.
Normally scientists submit their studies to scholarly journals. Editors review each one, and if a study’s design or data don’t live up to the journal’s standards, they might reject it entirely. Next, the editor will send the paper out to a group of scientists working in the authors’ field. Those peers examine the study even more closely for mistakes, and send it back to the authors with suggestions for ways to make the paper stronger. The authors then revise their paper to address those concerns. The whole process, from the time authors submit a paper until it’s finally published, generally takes about a year.
Sometimes scientists don’t want to wait for the lengthy peer review process to conclude before they publish their research, especially during this pandemic, when their information could help other scientists. So while their article is under review at a journal, the researchers might also publish the paper on a preprint server, like bioRxiv, founded by the nonprofit research center Cold Spring Harbor Laboratory, or medRxiv, which was founded by Cold Spring Harbor Laboratory, Yale University, and the medical journal BMJ. These sites are meant to make information quickly accessible, and they also allow researchers to get feedback from members of the scientific community beyond their peer reviewers. While preprints are screened for offensive or nonscientific content, anything that might pose a health risk, and plagiarism, they aren’t reviewed or edited before they go up.
Experts say these papers are essentially drafts, and caution that readers should be wary about immediately drawing big conclusions from them. “Part of the function of a preprint is to alert other people in the community to possibilities that they might want to investigate,” says Cassandra Volpe Horii, the founding director of the Center for Teaching, Learning, and Outreach at Caltech. “I would say we have to treat any preprints as an informal part of the conversation.”
And during the hurry to publish during the pandemic, even articles published in well-respected journals should be approached with some skepticism. “Normally with papers, you can sort of trust that there’s been a fairly robust peer review process, where other scientists, experts, have weighed in,” says Stuart, of Johns Hopkins. Those reviewers would build on their expertise and look at the body of evidence already available in the field to help them evaluate a new study. But with Covid-19, Stuart says, the information is new and changing rapidly. Reviewers are swamped and have to rely on their own judgment more than they typically would.
Retraction Watch, a site that follows scientific papers, lists over 20 Covid-19 papers that have been retracted from preprint servers and peer-reviewed journals since February 2020. Some, like this study on the effectiveness of surgical and cotton masks, were retracted because the data analysis in the study wasn’t sound. Others, like the study that found hydroxychloroquine was linked to a higher death rate in Covid-19 patients, were retracted because the authors had relied on a private company for the data and never had access to the raw data themselves. In June, scientists urged PNAS, a high-profile journal published by the National Academy of Sciences, to retract an article about the effectiveness of masks, claiming it relied on poor statistical analysis.
With so many scientists from different fields and specialties jumping into coronavirus research, mistakes sometimes happen when researchers stray outside their own areas of expertise. “Just because someone is expert in one area does not mean they’re expert in another,” says Stuart. If a paper seems strange, she recommends looking up the authors of the study and making sure their expertise lines up with the topics the paper covers.
And if a paper isn’t published in a journal or on a preprint server at all, but instead shows up on a personal website or as a press release without any data attached, that’s probably a bad sign. “I think we just need to have some cautious optimism when we see these press releases without data,” says Kate Grabowski, an epidemiologist at Johns Hopkins who works on the NCRC. Press releases might offer hopeful headlines, but without a peer-reviewed study, they don’t offer any substantive proof that the news is true.
Know the Format
If a reader wants to dive into reading primary literature, Caltech’s Horii says they shouldn’t approach a study like a book or a news article. “Any sort of expert communication is going to have its own efficient ways of conveying information to a peer group,” she says. She compares scientific studies to knitting patterns: They have their own vocabulary and set of rules that readers need to understand in order to be able to read them.
A typical study has six major parts. They generally begin with an abstract, which briefly describes the question the researchers were trying to answer, what data they collected, and what the results were. Then the introduction and literature review sections set the stage and tell readers more about the ideas the researchers were exploring and what previous studies have found. The methods section explains exactly how the study was conducted, which allows other researchers to repeat the experiment to see if they get the same results. Then the results, discussion, and conclusion sections break down what the scientists found and what that might mean. The authors might also bring up any problems or questions they encountered, and suggest avenues for further study. When reading the conclusions, it’s important to understand that the scientists’ data set might support or contradict a hypothesis, but it won’t definitively prove or disprove a hypothesis.
Studies are not meant to be read linearly from start to finish. Instead of being organized chronologically or to create a narrative, the papers are organized by section to make it easier for other scientists to find certain kinds of data or information. For readers who aren’t experts, some sections, like the one outlining methods, can be pretty impenetrable, says Horii.
For the lay reader, she recommends starting by spending some time with the abstract. “Often it’s the most concise and clear articulation of what they were testing, what they actually did, and what they found,” Horii says. When she reads studies, Horii will underline exactly what the researchers say they found and refer back to that claim as she works through the article.
Next, she advises skimming the introduction and literature section to get a sense of the background before skipping to the results, discussion, and conclusion. See what the researchers found, she says, and then compare that with what the press coverage or the abstract is saying. Do the article’s claims actually line up with its results?
If Horii has more detailed questions, then she might dive into the methods section. For instance, if the study claims a drug will be a great treatment for Covid-19 patients, she might look at who was included in the study. Was it tested on a young or old population? On women and men? Was it done in a lab setting, a clinical setting, or out in the world? It also matters how big the study was. Anecdotal evidence is important, but if the paper makes a huge claim and the data only comes from 10 people, that might be a red flag.
Stuart says to watch out for overgeneralizations. “The fundamental challenge there is extrapolating: taking a piece of evidence, a piece of data, which might be perfectly valid, but then assuming that that leads to these much more general conclusions,” she says. What works for very sick patients may not work for those who have less severe cases, and what works for younger patients might not benefit older people, whose immune systems work differently.
Readers should be especially wary of extrapolating the results of studies done in animal models to human populations. Some researchers have struggled to even find the right species for Covid-19 studies, because not all animals react to the pathogen the same way humans do. The NCRC team only reviews preclinical animal studies for vaccines, because they feel that for other interventions the differences between humans and other animals are too significant.
“I think there’s certainly information we can learn from animal studies,” says Grabowski. “But I think it’s always really important to understand how these things work in human populations, particularly when it comes to things that might be considered a behavioral intervention.” She points to examples of studies that try to examine the benefits of wearing a mask, which is a human social behavior that can have many variations. People have to wear the masks properly and in the right situations in order for the intervention to work or even be accurately studied. Those behavioral aspects can’t be measured in animals.
Horii says it’s also important to distinguish between the results and discussion sections—where the authors will reveal what they found and talk about its significance—and the conclusion, which tends to make some larger claims and start to pre-hypothesize about future studies and new work that should be done. “It can be easy to miss what distinguishes the actual findings from that more speculative part,” she says. That speculative part is important—it points to the future and it starts a conversation about how to move the field forward. But it shouldn’t be confused with what a particular study found.
Go for the Gold Standards
It’s hard to set strict gold standards for studies on Covid-19 because the disease is being studied in real time by experts in multiple disciplines. What works in a lab setting for researchers testing out different vaccine candidates won’t necessarily apply to scientists studying nonpharmaceutical interventions like masks out in the real world. But there are a few best practices for medical studies that show the research methods are rigorous.
One is that the study has a control or placebo group. That neutral group doesn’t get the drug or treatment at all and can be directly compared with the groups that do.
Another is that the study is a randomized “double-blind” trial, in which neither the test subjects nor the scientists know who received the placebo and who got the active drug. But, says Grabowski, that’s not always possible. She says an observational study that includes good data and a big enough sample size can still be useful and informative.
Beware Shocking Claims
Above all, experts say, don’t immediately buy into claims that are wildly inconsistent with what previous research has shown. New, groundbreaking findings make great headlines, but they rarely make good science. “Any one study is not definitive. Almost never,” says Stuart. “It’s really a body of evidence emerging.” Science builds on itself slowly. Findings have to be reviewed by other experts and then replicated in different settings and populations before the community is ready to make any really big claims.
“Most of it is incremental steps, showing the data moving in the right direction,” says Jason McLellan, a virologist who studies coronaviruses, including MERS, SARS, and the one that causes Covid-19, at the University of Texas at Austin. He advises readers to be cautious about getting too excited about that one study that will answer everything. In science, he says, “there’s never any absolutes.”