We are frequently told that randomized, double-blind, placebo-controlled clinical trials are the gold standard for high quality drug research. Heck, we’ve even said it ourselves on numerous occasions. We believed this mantra until a recent meta-analysis of statin studies forced us to reconsider the value and validity of RCTs.
The study in question (European Journal of Preventive Cardiology, March 12, 2014) had an impressive title:
“What proportion of symptomatic side effects in patients taking statins are genuinely caused by the drug? Systematic review of randomized placebo-controlled trials to aid individual patient choice.”
According to the authors the answer is:
“Only a small minority of symptoms reported on statins are genuinely due to the statins: almost all would occur just as frequently on placebo.”
The conclusion of the meta-analysis of randomized controlled trials [RCTs] was that statins do not cause muscle aches or other side effects, except possibly a modest increase in new cases of type 2 diabetes. Presumably the muscle aches, fatigue, nerve pain, arthritis symptoms, mental fogginess, sexual dysfunction, etc. are all imaginary, since they were just as likely to occur in patients taking placebos. If you would like to read how patients reacted to these conclusions, here is a link with some powerful stories (and yes…they are anecdotal and not scientific, but they are powerful just the same).
The Reasoning Behind RCTs
The “gold-standard” concept of clinical research, RCTs, was created because of a recognition that patients could be easily influenced by the study organizers. In an unblinded trial, both the patients and the doctors know who is getting the “real” drug and who is not.
In a “single-blinded” trial the patients are in the dark but the doctors know who is getting actual medicine and who is getting placebo. In both cases, expectations can easily influence outcomes. Patients are more likely to get benefit from something if they are told it is the real deal. And if doctors know who is swallowing the medicine instead of the sugar pill, they can influence the results in subtle, sometimes subconscious, ways.
In theory, if neither the doctors, nurses nor subjects know what is real and what is fake, there will be no influence and the outcome will be “pure.” That is the foundation upon which the double-blind, placebo-controlled trial system is built.
It sounds almost foolproof, and for decades health professionals have held up the RCT as the highest standard of research. It is the epitome of “evidence-based medicine,” a phrase that has come to mean scientifically valid. The FDA requires at least two randomized controlled trials demonstrating statistically significant benefit before approving a drug for market.
What’s Wrong With Randomized Controlled Trials?
What very few health professionals have realized is that there are serious flaws with the randomized controlled trial system of drug testing.
Although RCTs are pretty good at establishing statistically significant benefit, they have traditionally not been good at predicting how well a particular treatment will work for any given individual. Many drugs can be proven to be 10-15% better than nothing (placebo). That may be enough to get FDA approval. But it may only mean that one person out of 60 (the number needed to treat, or NNT) will actually get any benefit after five years of therapy. That happens to be the best-case scenario for otherwise healthy people taking a statin to lower cholesterol. For more about NNTs for statins in preventing heart attacks, here is a helpful website link.
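The arithmetic behind the NNT is straightforward: it is the reciprocal of the absolute risk reduction (the event rate without the drug minus the event rate with it). Here is a minimal sketch; the five-year event rates below are illustrative assumptions chosen to land near the NNT of 60 mentioned above, not figures from any specific trial:

```python
def number_needed_to_treat(control_rate, drug_rate):
    """NNT = 1 / absolute risk reduction (control rate minus drug rate)."""
    arr = control_rate - drug_rate  # absolute risk reduction
    return 1 / arr

# Assumed five-year heart attack rates (illustrative only):
# 3.00% of untreated people vs. 1.33% of those on the drug.
nnt = number_needed_to_treat(0.0300, 0.0133)
print(round(nnt))  # prints 60: roughly one person in 60 benefits
```

Notice that the drug in this sketch could still be advertised as cutting risk by more than half (a relative reduction), even though 59 of 60 people treated for five years get no benefit at all.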
What randomized controlled trials are not good at is detecting adverse drug reactions. Here is a link to “the invisible gorilla” video experiment. In this study, “half of the people who watched the video and counted the passes missed the gorilla. It was as though the gorilla was invisible.”
If you watch this video you will say that it’s impossible to miss the gorilla. That’s in part because of the title and because you are prepared. If you were unaware of the nature of the experiment and were totally focused on the white-shirted basketball passers, you too might have missed the gorilla the way the Harvard students did.
The point of the study in the words of the researchers:
“This experiment reveals two things: that we are missing a lot of what goes on around us, and that we have no idea that we are missing so much.”
The same thing could be said of randomized, double-blind, placebo-controlled drug studies. Investigators cannot see what they are not looking for. Unanticipated side effects often go unnoticed.
One of the best examples involves Prozac-like antidepressants. Randomized clinical trials conducted before the drug was marketed revealed that sexual side effects were relatively rare (in the 2-16% range). Among people with depression taking Prozac, reduced libido was reported at a rate of 3% and impotence at a rate of 2%, with neither reported among those on placebo. In a collection of RCTs for a variety of ailments including depression, OCD, bulimia and panic disorders, reduced libido was reported at a rate of 4% on Prozac vs. 1% on placebo. You will find these data in the official prescribing information for Prozac at DailyMed.
Researchers now know that sexual problems with Prozac-like drugs actually range from a low of 30% to a high of 80% of patients (depending upon the study). Bob Temple, one of the FDA experts on clinical trials, admitted to us that SSRI-type antidepressants have a rate of sexual dysfunction above 50%.
People report that drugs like Celexa, Effexor, Lexapro, Paxil, Prozac and Zoloft can reduce libido, interfere with sexual arousal, contribute to erectile dysfunction (ED) and delay or block orgasm. Some people describe a numbness or lack of sensation as “genital anesthesia” and it may persist long after such drugs are discontinued (Open Psychology Journal, Vol. 1, pp 42-50, 2008). The authors concluded:
“Post-market prevalence studies have found that Selective Serotonin Reuptake Inhibitor (SSRI) and Serotonin-Norepinephrine Reuptake Inhibitor (SNRI) sexual side effects occur at dramatically higher rates than initially reported in pre-market trials.”
The bottom line is that double-blind clinical trials of antidepressants were incapable of detecting side effects that they were not looking for. (By the way, most of the impressive placebo-controlled statin studies did not detect type 2 diabetes as a side effect, largely because the investigators did not know it existed and did not look for it.)
The reverse also happens in double-blind, placebo-controlled clinical trials. When researchers know about a specific side effect in advance of a clinical trial, they may ask everyone who participates (both those getting the active drug and those on placebo) whether they have experienced that symptom. This practice can seriously undermine the validity of the side effect data.
Here is an analogy. We no longer allow police detectives to point out potential suspects. That’s because research has demonstrated that doing so can influence a victim’s choice. Instead, witnesses must look at a lineup of similar-looking individuals and, with no prompting from the detective, pick out the suspect. Even with this improved methodology, eyewitnesses frequently misidentify the perpetrator. DNA evidence has repeatedly demonstrated that such subjective identification is flawed.
Here is a clinical example: Topamax (topiramate) is an anti-seizure drug that is also prescribed for migraines. In clinical trials the drug caused fatigue in 15% of those taking a dose of 200-400 mg. People taking placebo “experienced” fatigue 13% of the time. Nausea occurred in 10% of patients on Topamax and 8% of those on placebo. The likely conclusion by clinicians and FDA executives is that the drug actually causes fatigue in only 2% of patients, i.e., the difference between active drug and placebo. That is an easy conclusion to draw, and doctors, pharmacologists and FDA officials have said exactly that to us on repeated occasions.
Here’s another example. The stimulant drug Adderall XR (mixed amphetamines) has official prescribing information that notes that adults taking the drug in a clinical trial experienced “nervousness” 13% of the time. This is a known side effect of amphetamines just as it is with high doses of caffeine. Guess what? The placebo in the Adderall XR study “caused” nervousness 13% of the time too.
Many people, including many FDA executives, might conclude that Adderall XR does not cause nervousness, since the incidence of this symptom was identical in both the placebo arm as well as the active drug arm of the trial.
The reality is likely that because patients were asked whether they experienced fatigue and nausea during the Topamax clinical trial, or nervousness during the Adderall XR trial, many of those on placebo responded affirmatively. In this way the investigators, intentionally or unintentionally, skewed the placebo results in a specific direction, creating a false impression that the actual drugs did not cause such side effects.
In one clinical trial of the statin-type drug Crestor, 12.7% of those taking 40 mg reported myalgia, compared to 12.1% of those on placebo. Myalgia can be defined as muscle pain, though it has been defined in odd ways in some statin studies. In another Crestor trial (the JUPITER trial), 7.6% of the patients on 20 mg of Crestor experienced myalgia vs. 6.6% of those on placebo. Arthralgia (joint pain) occurred in 3.8% of those on Crestor compared to 3.2% of those on placebo. FDA officials would likely say that Crestor caused neither muscle pain nor joint pain, since the rates of myalgia and arthralgia were roughly comparable between drug and placebo.
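Whether a gap like 12.7% vs. 12.1% is even statistically distinguishable depends on how many patients were in each arm. Here is a sketch of the standard two-proportion z-test; the arm sizes of 2,000 are an assumption for illustration, not the actual trial enrollments:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-statistic using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Crestor 40 mg myalgia rates quoted above; arm sizes are assumed
z = two_proportion_z(0.127, 2000, 0.121, 2000)
# |z| comes out well under 1.96, so at these sizes the 0.6-point
# gap would not reach statistical significance (p < 0.05)
```

Either way, the article’s point stands: a “non-significant” difference may simply mean the placebo arm was primed to report the same symptoms, not that the drug is free of them.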
We disagree. A landmark study by renowned Harvard researcher Jerry Avorn, MD, has revealed the Achilles heel of double-blind, randomized controlled trials.
“Adverse effect patterns of the drug group are closely related to adverse effects of the placebo group…Symptom expectations of patients were likely to have been influenced by the consent forms used in the specific trials. Adverse effects mentioned in informed consents might not only increase expectation effects but might also facilitate the perception and reporting of these symptoms…Our results question the basic assumption of clinical trials, namely that all unspecific effects are reflected in the placebo group, while the drug group shows the additive effect of the chemical drug action. Clearly, the adverse effect patterns of placebos reflect, in part, the adverse effects expected for the drug, which complicates the detection of drug-induced adverse effects.”
What Does This Mean For You?
We started this essay with a quote from some very distinguished researchers: “Only a small minority of symptoms reported on statins are genuinely due to the statins: almost all would occur just as frequently on placebo.”
You now know that these very smart scientists likely drew a faulty conclusion from the data. If double-blind, placebo-controlled trials are flawed in the way in which they collect side effect information, then physicians, nurses, pharmacists and other health professionals must reevaluate adverse drug reactions reported in the official prescribing information.
The FDA needs to reconsider the way in which it requires drug companies to collect symptom information in drug trials. To reduce bias, a universal side effect questionnaire (one that could be modified under special circumstances) would provide a far better technique for gathering such information.
Weigh in below. How do you know whether a particular medicine may cause a side effect? Do you trust the official prescribing information? Share your own drug experience in the comment section.