Sorry, There Is No Silver Bullet

Drugs were a major component of interventions reviewed. (bennylin0724: Flickr)

If a medical study shows that a treatment has a big effect, how much should you trust it? According to a provocative report published today, not very much.

A group of researchers from across the United States, including Stanford Medical Center, and from Brazil looked at more than 85,000 analyses. (In other words, they reviewed a LOT of research.) They found that just under 10 percent of studies reported a "very large treatment effect," defined as a five-fold difference in outcomes between people who received the intervention and those in the control group.
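In meta-analyses like this one, a "five-fold difference" is typically expressed as an odds ratio of 5 or more. The sketch below shows how that metric is computed from a two-by-two table of outcomes; the counts are invented for illustration and are not from the study.

```python
# Hypothetical sketch of how a "very large" treatment effect might be
# quantified as an odds ratio (the effect-size measure commonly used in
# meta-analyses). All counts below are invented for illustration.

def odds_ratio(events_tx, total_tx, events_ctl, total_ctl):
    """Odds ratio comparing a treatment group with a control group."""
    odds_tx = events_tx / (total_tx - events_tx)    # odds of the event with treatment
    odds_ctl = events_ctl / (total_ctl - events_ctl)  # odds of the event without it
    return odds_tx / odds_ctl

# Invented example: 50 of 100 events with treatment vs. 20 of 120 with control.
or_value = odds_ratio(50, 100, 20, 120)
print(or_value)  # 5.0 -- a "very large" effect under a five-fold cutoff
```

The point of the study is that effects this extreme in an early trial rarely survive replication at the same magnitude.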

But here's the rub: more than 90 percent of the time, those "very large effects" don't hold up under further research.

Dr. John Ioannidis at Stanford led the study, which is published today in the Journal of the American Medical Association. In an interview he told me, "Most of the time … these very large effects largely evaporated, they became substantially smaller. It's not that they necessarily went away completely, but they were much, much smaller than the initial study."

As the researchers say in the first line of their publication, “Most effective interventions in health care confer modest, incremental benefits.”

Of course, a modest benefit is not so bad, as Ioannidis quickly added. “If you have several drugs or types of interventions and each of them could contribute some incremental benefit, that means we may have some room for … progress in getting some better outcomes by using several of those interventions.”

Still, when I pointed out that this news was likely to be a big bummer to lots of Americans used to the search for the silver bullet, he laughed softly and said, “Yeah, too bad.” He says that a belief in the silver bullet “creates a vicious circle. It also creates an environment where claims for a silver bullet thrive against such overwhelming evidence.”

There’s room for more skepticism all around, Ioannidis asserts: from scientists, from patients, from the press and its readers, and from citizens. “Whenever a very big discovery, or effect, or extraordinary finding is described,” Ioannidis says, “there should be some skepticism that it’s not as huge, maybe it’s not completely wrong, but it’s not at that magnitude.”

There was only one case in which Ioannidis and his colleagues identified an intervention that dramatically reduced the risk of death (a treatment for newborns with respiratory failure). Ioannidis acknowledged that some interventions likely to have big effects are never tested in randomized trials, for ethical reasons. For example, if someone is bleeding profusely, doctors will always try to stop the bleeding; scientists cannot randomly assign patients to a no-treatment group to test a hypothesis about stopping bleeding.
