Sunday, March 24, 2013

New Patents Aim to Reduce Placebo Effect

The pharma industry has a big problem on its hands: placebos are getting to be way too effective. Something needs to be done. But what? What can you do about placebo response? The old saying "it is what it is" would seem to hold true in this case.

One answer: come up with low-placebo-response study designs, and patent them if possible. (And yes, it is possible. But we're getting ahead of the story.)

The placebo effect has always been a problem for drug companies, but it's especially a problem for low-efficacy drugs (psych meds, in particular). An example of the problem is provided by Eli Lilly. In a --

In Study HBBI, neither LY2140023 monohydrate, nor the comparator molecule olanzapine [Zyprexa], known to be more effective than placebo, separated from placebo. In this particular study, Lilly observed a greater-than-expected placebo response, which was approximately double that historically seen in schizophrenia clinical trials. [emphasis added]

[PlaceboDoctor.png]

Fast-forward to August 2012: Lilly throws in the towel on mGlu2/3. According to a report in Genetic Engineering & Biotechnology News, "Independent futility analysis concluded H8Y-MC-HBBN, --

press release about disappointing Phase IIb trials of a new antidepressant, Serdaxin, saying: "Results from the study did not demonstrate Serdaxin's efficacy compared to placebo measured by the Montgomery-Asberg Depression Rating Scale (MADRS). All groups showed an approximate 14 point improvement in the protocol defined primary endpoint of MADRS."

In March 2012, AstraZeneca threw in the towel on an adjunctive antidepressant, TC-5214, after the drug failed to beat placebo in Phase III trials. A news account put the cost of the failure at half a billion dollars.

In December 2011, shares of BioSante Pharmaceutical Inc.
slid 77% in a single session after the company's experimental gel for promoting libido in postmenopausal women failed to perform well against placebo in late-stage trials.

The drug companies say these failures are happening not because their drugs are ineffective, but because placebos have recently become more effective in clinical trials. (For evidence on increasing placebo effectiveness, see yesterday's post, where I showed a graph of placebo efficacy in antidepressant trials over a 20-year period.)

Some idea of the desperation felt by drug companies can be glimpsed in this slideshow (alternate link here) by Anastasia Ivanova of the Department of Biostatistics, UNC at Chapel Hill, which discusses tactics for mitigating high placebo response.

The Final Solution? Something called the Sequential Parallel Comparison Design (SPCD). SPCD is a cascading (multi-phase) protocol design. In the canonical two-phase version, you start with a larger-than-usual group of placebo subjects relative to non-placebo subjects. In phase one, you run the trial as usual, but at the end, placebo non-responders are randomized into a second phase of the study (which, like the first phase, uses a placebo control arm and a study arm).

SPCD differs from the usual "placebo run-in" design in that it doesn't actually eliminate placebo responders from the overall study. Instead, it keeps their results, so that when the phase-two placebo group's data are added in, they effectively dilute the higher phase-one placebo results. The assumption, of course, is that placebo non-responders will remain non-responsive to placebo in phase two, after having been identified as non-responders in phase one. In industry argot, there will be carry-over of (non)effect from placebo phase one to placebo phase two.

[PlaceboSPCDProtocol.png]

This bit of chicanery (I don't know what else to call it) seems pointless until you do the math.
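To get a rough feel for that math, here is a back-of-the-envelope sketch in Python. Every number in it (the 40% phase-one placebo response, the 15% carry-over rate, the arm sizes, the drug's assumed 50% response) is an invented assumption for illustration, not data from any actual SPCD trial, and the power figure uses a crude normal approximation rather than the exact methods in Ivanova's slides.

```python
# Hedged sketch of SPCD pooling arithmetic; all rates and arm sizes are
# invented assumptions, not figures from any real trial.
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_prop(p_drug, p_placebo, n_per_arm, z_alpha=1.96):
    """Approximate power of a two-sided two-proportion test (normal approx.)."""
    se = sqrt(p_drug * (1 - p_drug) / n_per_arm
              + p_placebo * (1 - p_placebo) / n_per_arm)
    return norm_cdf((p_drug - p_placebo) / se - z_alpha)

# Phase 1: assume a 40% placebo response in a 300-subject placebo arm.
p1, n1 = 0.40, 300
nonresponders = round(n1 * (1 - p1))      # 180 phase-1 placebo non-responders

# Phase 2: half of the non-responders are re-randomized back to placebo.
# The carry-over assumption: their response rate drops (say, to 15%).
p2, n2 = 0.15, nonresponders // 2         # 90 subjects back on placebo

# Pooling both phases dilutes the headline placebo response rate.
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
print(f"phase-1 placebo response: {p1:.3f}")        # 0.400
print(f"pooled placebo response:  {pooled:.3f}")    # ~0.342

# The diluted rate buys statistical power against a drug assumed to
# produce a 50% response, at the same per-arm enrollment.
print(f"power vs. phase-1 rate: {power_two_prop(0.50, p1, 200):.2f}")
print(f"power vs. pooled rate:  {power_two_prop(0.50, pooled, 200):.2f}")
```

The point of the arithmetic: the drug's response rate never changes, but because the reported placebo rate is a weighted average that includes a second phase made up entirely of screened non-responders, the drug-minus-placebo gap widens, and the same enrollment delivers noticeably more power.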
The Ivanova slideshow explains it in some detail, but basically, if you optimize the ratio of placebo to study-arm subjects properly, you end up increasing the overall power of the study while keeping placebo response minimized. This translates to big bucks for pharma companies, who strive mightily to keep the cost of drug trials down by enrolling only as many subjects as might be needed --

SPCD was first introduced in the literature in a paper by Fava et al. (Psychother Psychosom. 2003 May-Jun;72(3):115-27) with the interesting title "The problem of the placebo response in clinical trials for psychiatric disorders: culprits, possible remedies, and a novel study design approach." The title is interesting in that it paints placebo response as an evil (complete with culprits). In this paper, Maurizio Fava and his colleagues point to possible causes of increasing placebo response that have been considered by others ("diagnostic misclassification, issues concerning inclusion/exclusion criteria, --

them. But then Fava and his coauthors make the baffling statement: "Thus far, there has been no attempt to develop new study designs aimed at reducing the placebo effect." They go on to present SPCD as a more or less revolutionary advance in the quest to squelch the placebo effect.

Up until this point, I don't think there had ever been any discussion, in a scientific paper, of a need to attack the placebo effect as something bothersome, something that interferes with scientific progress, something that needs to be guarded against vigilantly, like swine flu. The whole idea that the placebo effect is getting in the way of producing meaningful results is repugnant, I think, to anyone with scientific training. --

Google Patents. The patents begin with the statement: "A method and system for performing a clinical trial having a reduced placebo effect is disclosed." Incredibly, the whole point of the invention is to mitigate (if not actually defeat) the placebo effect.
I don't know if anybody else sees this as disturbing. To me it's repulsive. If you're interested in licensing the patents, RCT Logic will be happy --

Have antidepressants and other drugs now become so miserably ineffective, so hopelessly useless in clinical trials, that we need to redesign our scientific protocols in such a way as to defeat the placebo effect? Are we now to view the placebo effect as something that needs to be made to go away by protocol-fudging? If so, it puts us in a new scientific era indeed. --

Stephen Senn 4:02 PM
The statisticians involved, at least, ought to know better. 1. They are assuming that placebo response is genuine and not just regression to the mean (the latter is usually grossly underestimated). 2. They are assuming that it is reproducible and --
Agency is not dozing, they will make them market the drug as being only suitable for those who have a proven inability to respond to placebo. That should sort them out.

Anonymous 8:53 PM
Could you expand, please, on why it is repugnant to attempt to lower the placebo response? I've heard this before, and I'm genuinely interested as to why this is a problem. I see it as a signal-detection issue where solving it may have consequences for --
effect (positive and negative) was actually due to biologically induced change, rather than expectation, demand characteristics, etc. Thus, eliminating the placebo effect would be extremely useful here, if we care about knowing the true harms and benefits of the drug in question. --
to see how a drug improves the situation beyond what would normally happen, then it seems appropriate to have as a control what would normally happen. What is labelled placebo response is in most cases just regression to the mean. I regard all this placebo response stuff as just so much junk science. See http://eprints.gla.ac.uk/8107/1/id8107.pdf and also Kienle, G. S. and H. Kiene (1997). "The powerful placebo effect: fact or fiction?"
Journal of Clinical Epidemiology 50(12): 1311-1318. --

Paul Ivsin 12:00 PM
I also am failing to fully grasp the ethical problems of this study design, and of attempting to reduce the placebo response in RCTs in general. Presumably, reductions in placebo response would happen across all arms of the trial equally, which would provide a clearer picture of the specific chemical activity of the drug. The major problem with the design seems to be -- as Stephen points out in the first comment -- that it probably won't reduce the total placebo effect size much. If placebo responders were likely to remain consistent in their response, then we'd already have a simple solution in the placebo run-in period. The problem being, of course, that there's really no evidence that run-in periods do what they're supposed to do. I actually wrote a brief blog post about the SPCD a couple of years ago (here). Then, I asked essentially the same question as now: what evidence do we have that SPCD reduces placebo response compared to more traditional methods? (And a second question for the author: if SPCD is an unethical design, then wouldn't a design featuring a placebo run-in -- which most trials use -- also be unethical?)

Anonymous 12:52 PM
The problem is that there are some people in both treatment and control groups who are placebo-responders -- they will respond no matter what you give them. Responders therefore have to be kept in the control group as the proper baseline for comparison. If they're left out in any way, you're comparing apples and oranges -- a treatment group that includes some unknown placebo responders (whose response to the drug is therefore inflated) is being compared to a control group that excludes placebo responders. Right? --

presented by the author. We have a simple observation: drugs that robustly separated from placebo 20 years ago no longer reliably do so. Did they become less active? Probably not.
Are placebos working better than they used to? Probably, at least within the clinical trial setting. Assuming that is the case, what should we do?

1) We could remove all these drugs from the market, and simply give patients placebos. But it's not really clear whether the placebo effect would continue to operate once word got out that doctors were routinely handing out placebos. In fact, one study found that the magnitude of the placebo effect is proportional to the likelihood of receiving active drug.

2) We could leave the old drugs on the market and not approve the --
patients either.

3) We could attempt to make clinical trials look more like real-life treatment, in which the choices are not between drug and placebo but between drug and no treatment. But this would lead to drugs with no intrinsic efficacy being approved as a matter of course.

4) We can try to understand the placebo effect, and engineer it out of trials, so that we know that approved drugs have greater efficacy than placebo under at least some test conditions.

I would say that choice #4 is probably the best of those that I can come up with. For all the insinuations in the article about how --

Obviously we would all like to have more efficacious drugs whose effects are so robust that the issue of establishing separation from placebo never arises. If the author has any ideas on how such compounds might be produced, I'm sure there are venture capitalists who would be strongly interested to hear them. --

oversight, pre-registration, CONSORT, etc.) has reduced risk of bias, both in relation to trial conduct and reporting, and that this might account for some of the increased placebo effect. I wonder what your view on this is? If I am right, then it follows that older trials overestimated --

Anonymous 7:39 AM
To Jim: you are assuming that the only option is to treat patients with drugs (be it placebo or "real" effective drugs).
However, at least in the case of depression, psychotherapy can be highly effective: take cognitive behavioural therapy as a good example. There are even --

You always get studies with a positive outcome, even if the drug itself does not work. So any study design that tries to lower placebo response is a great idea for the industry, because that is a cost-effective way of creating more studies with a positive outcome, independently of the --

Pro SPCD: If we assume that a psychiatric diagnosis describes a symptom rather than an underlying cause, and that placebo is effective at counteracting one cause and the medication effective at counteracting another, and both are equally strong and independent --
responses for each medication, yet it would still make sense to have that medication. The SPCD would in this case show that the medication is effective for causes where the placebo isn't. An ineffective medication that only works through the placebo effect should still fail SPCD studies. (An additional argument pro SPCD is that, due to the low efficacy of psychiatric drugs, practitioners test patients on one drug and, if it doesn't work, switch them to another one. SPCD mimics this practice by switching patients on whom the placebo didn't work to the medication being tested. Thus it could be said to be more relevant to actual medical practice.)

Contra SPCD: The above argument assumes that the placebo effect works the same no matter whether it's the medication or the actual placebo that causes it. However, that may not be true. In that case, the SPCD may simply funnel into the second phase a higher percentage of subjects who are able to detect placebos (e.g. through the absence of side effects). For these people, the placebo effect of the real drug may be stronger, perhaps because they're noticing side effects that weren't present in the previous placebo phase, so they (correctly) conclude they're now being given "the real thing", making the placebo effect work for them.
--

Didn't you jump the gun with this article, by about a week? "The drug companies say these failures are happening not because their drugs are ineffective, but because placebos have recently become more effective in clinical trials." Hahahaa!!! But wait, the date says March 24!