A Moloch of the Laboratory

Why is it ethical to give half the people in a trial what you know to be an ineffective treatment for a fatal disease?

I’ve never been comfortable with the ethics of randomized controlled trials.

Sometimes, whether the new therapy is better than the standard therapy is just a coin-flip. But sometimes, it’s not: the odds are strongly in favor of the “experimental” arm over the “control” arm. Then it’s very hard to see how withholding the experimental treatment is in the interest of the patient; “consent” to a 50% chance of drawing the short straw is obtained only by refusing any chance of access to the new treatment to anyone who doesn’t consent.

The bioethics community, whose collective conscience is so tender it can find ethical objections to things that mere mortals regard as harmless or beneficial, seems to have no problem with this particular form of coercion. The justification is that, until the results are in, we don’t really know that you’re worse off in the control group. The Rev. Mr. Bayes would not agree.

The NYT reports on a trial of a new melanoma drug where the problem is especially acute. Some of the patients in the control arm are clearly dying, and the question is whether to allow them to “cross over” and get the experimental drug. But doing so would compromise the purity of the RCT, and so far the manufacturer and the physicians running the trial are standing pat.

One physician who sees the problem the same way I do expressed it better than I could:

We can’t let patients on the control arm cross over because we need them to die earlier to prove this point.

Another states what seems to me like a straightforward principle for deciding whether having a control group is ethical. Referring to the standard, grossly ineffective treatment, he said to the Principal Investigator for the trial:

If it was your life on the line, Doctor, would you take dacarbazine?

Clearly, the answer in this case is “no.” One clinician refers to dacarbazine, the current treatment, as “a drug we all hate and would rather never give a dose of again in our lives.”

The alternative to an RCT is to test the new treatment against the historical record. Predict the distribution of life expectancies (the endpoint in this case) for the people you’re treating, assuming you gave them the existing treatment. Then compare the actual distribution under the new treatment to that prediction. Yes, the placebo effect is an issue, though in this case it’s an open trial, not a blind trial, so it would be an issue anyway.
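As a minimal sketch of that kind of comparison - assuming, purely for illustration, that survival under the standard treatment is roughly exponential with a known median - one could test the observed survival times against the predicted distribution. All the numbers below are made up, and a real analysis would need proper survival methods to handle censoring.

    import numpy as np
    from scipy import stats

    # Hypothetical historical median survival (months) under the standard treatment.
    historical_median = 8.0
    scale = historical_median / np.log(2)   # exponential scale implied by that median

    # Hypothetical survival times (months) observed on the new treatment.
    observed = np.array([9.5, 14.0, 7.2, 22.1, 11.8, 16.4, 8.9, 19.3])

    # One-sample Kolmogorov-Smirnov test: does the observed distribution depart
    # from the predicted (historical) one?
    stat, p_value = stats.kstest(observed, "expon", args=(0, scale))
    print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")
    print(f"median on new treatment: {np.median(observed):.1f} months "
          f"vs. predicted {historical_median:.1f} months")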

Yes, it’s second-best. But at some point, we need to stop sacrificing patients to that Idol of the Laboratory, the Randomized Controlled Trial.

Author: Mark Kleiman

Professor of Public Policy at the NYU Marron Institute for Urban Management and editor of the Journal of Drug Policy Analysis. Teaches about the methods of policy analysis and about drug abuse control and crime control policy, working out the implications of two principles: that swift and certain sanctions don't have to be severe to be effective, and that well-designed threats usually don't have to be carried out. Books: Drugs and Drug Policy: What Everyone Needs to Know (with Jonathan Caulkins and Angela Hawken); When Brute Force Fails: How to Have Less Crime and Less Punishment (Princeton, 2009; named one of the "books of the year" by The Economist); Against Excess: Drug Policy for Results (Basic, 1993); Marijuana: Costs of Abuse, Costs of Control (Greenwood, 1989). UCLA Homepage | Curriculum Vitae | Contact: Markarkleiman-at-gmail.com

16 thoughts on “A Moloch of the Laboratory”

  1. I have never found the pronouncements of bioethicists persuasive. I don't see how we can have a society-wide ethical standard without society-wide values, and, to make things more dubious, bioethics seems to spend a lot of time on edge cases which depend upon beliefs where people's moral intuitions aren't very solid. This seems like an example where many bioethicists are unable to make a distinction which a substantial number of people would find important.

  2. I thought it was standard ethical practice to interrupt a double-blind trial when interim data clearly indicate one treatment is much better than the other. But it seems the temptation to sacrifice patients to scientific proof and reputation is sometimes too strong.

    A propos, I wonder if the moral hardness of the clinical-trials system has anything to do with the politics of one of its key founders, Sir Richard Doll. Doll was a regular Communist until the 1950s (I haven't been able to find out exactly when he saw the light). It was a small flaw in a great man, whose proof that tobacco causes cancer has saved hundreds of thousands of lives. Nevertheless, communists practice a particularly hard form of pragmatic utilitarianism; omelettes and eggs and all that. In that perspective, it's quite acceptable that some patients die in trials so that others may live.

  3. How about trials involving mock surgery? Surgery is very dangerous - I used to work in a hospital that received people who had undergone surgery - sometimes minor elective surgery like knee replacement or hernia repair - whose wounds had become infected, and soon they were at death's door.

  4. SR,

    What I was told at least a decade ago is that, while mock surgery trials have been done in the past (arthroscopy of the knee, for example, was no more effective than making an incision and telling the patient you'd done the surgery), it's now considered to be unethical (because of the risks of anesthesia and hospitalization, more than just the lying), and so it was unlikely more such studies would happen. Then again, I seem to recall one did happen, and made the news, within the last few years.

  5. Mark, I am surprised you don't know about data safety monitoring committees, adaptive trials, or the fact that there's no reason you can't have an allocation of arms different from 50/50. Moreover, the condition necessary for a trial to be ethical is equipoise - and where a drug, surgery, or regimen appears to be light years ahead of the standard treatment, as is sometimes the case, the randomized trial will not go forward. If that standard was not met for this drug and it went forward mistakenly, that is not an indictment of RCTs. Anyhow, most drugs offer at best incremental improvement over existing therapies, and an RCT is the only way one can definitively show benefit.

  6. Equipoise is a slippery concept, since it is rare that the prior probability of a treatment's effectiveness is exactly 50%. Prior probabilities are on a continuum. There is a short paper on this topic at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11258… which touches on randomization from a decision analytic perspective. There is lots of literature on this subject, but this two-pager is worth perusing.
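    A toy calculation makes the continuum point concrete; all the numbers are invented for illustration. Unless the prior probability p that the new drug helps sits exactly at the knife-edge value where expected gains and losses cancel, randomizing half the patients away from the presumptively better arm has a nonzero expected cost (or benefit) to the enrollees.

        # Invented numbers: the new treatment either adds `gain` months (probability p)
        # or costs `loss` months (probability 1 - p) relative to standard care.
        gain, loss = 6.0, 2.0

        for p in (0.3, 0.5, 0.7, 0.9):
            all_new = p * gain - (1 - p) * loss   # expected gain if everyone gets the new drug
            randomized = 0.5 * all_new            # only half the enrollees receive it
            print(f"p = {p:.1f}: all-new {all_new:+.1f} mo, randomized {randomized:+.1f} mo")

        # Strict equipoise (indifference) requires p * gain == (1 - p) * loss,
        # i.e. p = loss / (gain + loss) = 0.25 here: a single point, not a range.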

  7. Warren Terra:

    You are correct. As for knee osteoarthritis, there was a randomized trial of arthroscopic lavage in 2002 which used sham lavage, complete with sound effects to produce the illusion of the actual procedure. The sham procedure and the actual lavage had similar results. But the study was criticized for having been done only on older patients at a VA hospital. Then in 2008 another randomized trial was done without a sham procedure; the control patients received optimal physical and medical therapy. Again no advantage of lavage was shown. The 2008 study was done on a wider spectrum of patients who would be typically considered for the arthroscopic procedure. It was published by Kirkley et al. in N Engl J Med 2008; 359(11):1097-107.

  8. "Without the hard proof the trials can provide, doctors are left to prescribe unsubstantiated hope — and an overstretched health care system is left to pay for it. In melanoma, in particular, no drug that looked promising in early trials had ever turned out to prolong lives.

    "PLX4032 shrinks tumors in the right patients, for a limited time. But would those who took it live longer? No one knew for sure.

    " 'I think we have to prove it,' said Dr. Paul B. Chapman, a medical oncologist at Memorial Sloan-Kettering Cancer Center who is leading the trial. 'I think we have to show that we’re actually helping people in the long run.' "

    I'll play the devil's advocate. So, really, how does one "know"? What's the epistemology?

    People are believers and pattern-recognizers and story-tellers. The journalist tells a great story — parallel lives, bonds, tragedy. Post hoc, propter hoc medicine: we attribute a successful outcome [if there is a successful outcome!] to the treatment, because the treatment came before the outcome. And, medical care is full of hopeful motivation to believe.

    The "ethical" issue, here, arises in large part because it is a double-open, not a double-blind trial. We are reading about it in the New York Times because the treating physicians have to know what treatment they are giving and tell the patient. It is the drama of that moral interaction, not the science, that seems to be driving the assertion of an ethical dilemma, and the denial of a scientific one.

    I'm not sure if it's really fair, or even relevant, to bring Bayes in. The doctors know the outcome with the standard treatment — chemotherapy with dacarbazine — and it is not pretty. The doctors have no hope to offer with dacarbazine. An objective observer might cynically wonder if the horrors of dacarbazine chemo did not become the standard therapy, in part to assuage the need of doctors to be able to offer something. The heavy side-effects are a kind of moral proof that doctor and patient are really trying desperately. Now, something else, with hope, and with lesser side-effects comes along. And, it offers the same moral proof that heroic measures are being taken. Is that what Bayes was analyzing — comparative moral proof and narrative potency?

    I think the ethical dilemma is very real, but it is an ethical dilemma that exists in all medical treatment of incurable illness, and especially of incurable, imminently terminal illness. And, even outside of formal trials, it includes the hard reality that the doctor does not "know" — cannot "know" — how well his attempts to manage the disease will work out for the patient. In that context, the physician has as strong an incentive to manage the drama and moral narrative of care as to manage the treatments administered.

    As the practice of medicine has evolved from a personal craft into a hierarchically-administered, rule-driven and controlled technical process, physicians have been asked to balance what they see and experience in the relationship with the patient against the abstract conclusions of mass trials and statistical inference, the scientific generality against the individual particularity.

    It may be that this case is a marker on the road from the dust-bowl empiricism of the body-as-chemical-soup models to something more sophisticated. As theoretical analysis and operational models of how the body works become better, treatment strategies will improve, and fewer trials will be attempted that require vast numbers of patients and a search for a treatment effect better than the placebo, amidst the statistical noise of biological and circumstantial variation. I hope so. I hope we are not being asked to yield to faith in place of process and method.

  9. The correct answer is to keep a comprehensive database of all people treated by the medical system, with their conditions, symptoms, lifestyle, etc., so that patients on new treatments can be compared statistically against something. A souped up version of the system that has greatly increased child cancer survival.

    But our personal-responsibility medical system makes privacy concerns too important for that.

  10. John Worrall has some good papers on this topic (e.g. "Why There's No Cause to Randomize"), including a good succinct account of the early trials of extracorporeal membrane oxygenation (ECMO) for persistent pulmonary hypertension. 80% of patients with PPHS were dying, but the group that developed ECMO was able to save 80% of its patients by using it — *all* of its PPHS patients, so there was no selection bias problem for an RCT to overcome. But the RCT purists insisted on trials.

    The group that had developed ECMO believed their treatment worked, and they couldn't stomach assigning half their patients to a control group. So they tried to come up with a compromise: a randomized-play-the-winner design in which every time one arm of the trial succeeded (or failed) the subsequent probability of assigning that treatment to the next patient rose (or fell); a toy simulation of such a rule is sketched at the end of this comment. The first patient got assigned to ECMO and survived; the second got assigned to the control group and died; and several more then got assigned to ECMO and all survived. After 12 patients the trial was stopped because ECMO had met the predetermined statistical threshold for proven effectiveness. But of course only one baby had been subjected to the control treatment and many purists insisted on future trials. (This is the risk associated with Warren Drugs's non-50/50 strategy.) At least one more RCT was conducted.

    Were the babies in the control groups — both those in these follow-up trials and the one baby assigned to control group in the randomized-play-the-winner trial — sacrificed on the altar of methodological purity?
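    A toy simulation of a randomized-play-the-winner (urn) rule, with made-up urn parameters and survival rates rather than those of the actual ECMO trial, shows how quickly the allocation tilts toward the better arm:

        import random

        def play_the_winner(n_patients, p_survive, seed=0):
            """Urn rule: start with one ball per arm; on a success add a ball for
            that arm, on a failure add a ball for the other arm."""
            random.seed(seed)
            urn = {"ECMO": 1, "control": 1}
            history = []
            for _ in range(n_patients):
                arms, weights = zip(*urn.items())
                arm = random.choices(arms, weights=weights)[0]   # draw proportional to balls
                survived = random.random() < p_survive[arm]
                other = "control" if arm == "ECMO" else "ECMO"
                urn[arm if survived else other] += 1
                history.append((arm, survived))
            return history

        # Hypothetical survival rates: 80% on ECMO, 20% on conventional care.
        for i, (arm, ok) in enumerate(play_the_winner(12, {"ECMO": 0.8, "control": 0.2}), 1):
            print(f"patient {i:2d}: {arm:7s} -> {'survived' if ok else 'died'}")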

  11. I guess the question, per yoyo's and Bruce's comments, is whether care can be so standardized that the outcomes of your experimental treatment can be compared to the outcomes known to result from applying the previous standard of care, meaning that you could dispense with an internal control group. And it may well be that the outcomes of malignant melanoma are sufficiently reproducible (and, of course, there's little controversy about scoring the phenotype if you're measuring survival time). So this case really does seem like one that's particularly well suited to early termination of the control group, or even to not having one in the first place. But it may be an unusual case in that respect.

  12. yoyo: "A souped up version of the system that has greatly increased child cancer survival"

    Can you give some details? Since childhood cancer also differs from adult cancer in that the majority of patients are in randomized trials rather than a tiny minority, I would have used it as an example in the other direction.

    I work in observational studies of medication use, and we have a hard time detecting things like the two-fold increase in heart attacks with rofecoxib. For most new medications with benefits that are much smaller than this I don't see how observational studies could really be trusted.

    In the melanoma study the big problem seems to have been that the control group wasn't crossed over to treatment after the trial was stopped.

  13. A few words about childhood cancer survival and its trial system. First, the majority of kids with cancer are treated on trials because they're all incredibly rare, meaning both that getting every last case into a trial is important for power and that the absolute number of cases is manageable. Clearly, we are not going to enter hundreds of thousands of adult cancer cases onto trials each year. Childhood cancer survival has indeed been the great success of cancer treatment - moving from near 0% in the 1960's to 75% today (and better than 95% for some types). That's partly due to the constant improvement of therapy through clinical trials, but it's mainly due to the fact that children just respond better to treatment.

  14. "First, do no harm."

    Having a control group of sick people who are deliberately and knowingly being denied the best care would seem to violate that principle.

  15. “Having a control group of sick people who are deliberately and knowingly being denied the best care would seem to violate that principle.”

    The key word is “knowingly.” You can harm people by denying them effective care, and also by giving them ineffective care. Once you know which is which, the ethical thing to do becomes clear. Thinking you know when you really don’t can be lethal.

    “Evidence-based medicine” under that name did not take off until the late 1980s. This was soon after everyone was chastened by the results of randomized trials of oral anti-arrhythmic drugs, which were expected to save lives but instead increased deaths in the people who took them. The drugs decreased premature ventricular beats, which were correlated with arrhythmias. They should have worked, and there were some who objected to conducting the randomized trials of drugs that they “knew” were effective.

    Similarly, it seemed clear that the prophylactic use of postmenopausal hormone replacement would prevent heart disease in women. The hormones were known to have favorable effects on blood vessel function and on lipid levels, similar to the statin drugs, which had already been proven effective. Observational studies showed that postmenopausal women who took hormone replacement had lower incidence of heart disease. Hormones looked like a good idea. But it was understood that a randomized trial was needed to show the real effect.

    Then in 2002 the Women’s Health Initiative published a large randomized trial showing that heart disease occurred more often in the women who took hormones than in the women who did not. This experience also demonstrated that you can be misled by your perceptions of what is effective, based on observation and on principles of pharmacology. The observational studies were misleading because women did not take hormones by a chance process, but by a process that had to do with the characteristics of the women and their doctors. On average, women taking hormones had lower blood pressure, less obesity, more physical activity, more education, and better access to health care than women who did not take hormones. Hormone supplementation did not lead to good health; good health led to hormone supplementation.

    These examples, among others, have boosted the respect given to randomized clinical trials. Most such trials have stopping rules; the Women’s Health Initiative trial of hormones was stopped early when it became apparent, part way through the planned study, that the hormones were increasing heart disease, not decreasing it. The “Idol of the Laboratory” has kept women from being sacrificed to the Moloch of the uncontrolled observation.

    The New York Times article does not explain certain facts about the trial, and the protocol on ClinicalTrials.gov is not very elaborate. Mortality is the primary endpoint, but progression-free survival and time to treatment failure are secondary endpoints. The account in the Times does not describe the criteria for stopping the trial early. The effect size seems to be very large and unlikely to be explained by confounding (as happened with the postmenopausal hormone replacement experience). The particular mutation which is required for entry into the trial is present in 40-60% of melanomas (so says the New England Journal article on this drug). So it sounds like a situation in which the protocol should be flexible enough to cross patients over when their disease progresses.

    The point is that these decisions are not all that easy. If they were, there would be “an app for that,” and we could turn the decision over to our iPads.
