When you pick up a newspaper and read a story about the latest results on breast cancer, autism, depression or other ailments, what are the odds that finding will stand the test of time?
The answer, according to a study in the journal PLOS One, is: flip a coin.
Only about half of the medical findings reported in 199 English-language newspapers actually held up when tested in further studies, the researchers found. And sorry, dear reader, you’re not likely to hear about those refutations.
This is partly the fault of journalists, who are always on the lookout for new and unexpected findings, which science and medical journals happily highlight and promote.
“But I think the fault is also on scientists,” says Estelle Dumas-Mallet, a biologist at the University of Bordeaux in France and lead author of the study. “I think a lot of time they are so excited about their results they get carried away.”
She and her colleagues plucked out several thousand journal studies to examine, then gauged the veracity of those findings by looking for more in-depth follow-up research.
They then dug through the Dow Jones Factiva database of newspaper articles (television broadcasts, web-only sources and NPR stories were not included in the study) and identified 156 stories about a range of medical issues, including autism, depression, Alzheimer’s, breast cancer and glaucoma.
Follow-up studies found that about half of those original, exciting findings didn’t stand up to deeper scrutiny. To determine that, the researchers looked at meta-analyses, which are studies that pull together information from multiple similar studies to draw an overarching conclusion more robust than any individual study.
“As a scientist, I must admit that I was sure that almost everything published in Science and Nature was absolutely true, and I was a bit disappointed when I understood this wasn’t the case,” Dumas-Mallet says.
That’s truly the nature of science. But while scientists eventually set the record straight in the literature, those follow-up studies rarely get covered by the news media.
For example, Dumas-Mallet and her colleagues looked at a study published in 2003 in Science that reported a genetic variant that appeared to make people more prone to depression after stressful life events. That news made the pages of 50 newspapers, they found. But when 11 subsequent studies tried and failed to find the same association, not a single newspaper in their database covered any of them.
That may be because that kind of “negative” result tends to get published in less prominent journals, Dumas-Mallet says. And journalists aren’t regularly patrolling those for news.
Finally, in 2009, scientists published a meta-analysis of these results in JAMA, the prominent journal of the American Medical Association, which showed the initial report didn’t hold up. Even then, only four newspapers covered that finding — and newspapers continued to report the original 2003 finding even after it had been clearly refuted.
In fact, the researchers found that none of the studies they examined linking a gene to depression, schizophrenia or autism panned out.
And when the scientists looked specifically at the 53 studies that were the first to report a new phenomenon and that made it into newspapers, they found that two-thirds were “disconfirmed” by subsequent research.
What’s really going on here is a reminder that science progresses in fits and starts, with plenty of false leads. Any single finding is provisional. Careful journalists will remind readers of that, but the caveat is frequently overlooked.
Dumas-Mallet’s advice to readers is simple: “Keep in mind that when a study is an initial study, even if it’s very exciting and amazing and a breakthrough — or whatever you want to call it — it still needs to be confirmed.”
The adage “Don’t believe everything you read” applies here as well.