August 26th, 2011

This UCLA study surprised me. Medical researchers should seek to identify those interventions that significantly improve a patient’s survival rate (abstracting from quality of life). The benefits of taking a drug hinge on two parameters. Consider a heart medication. First, if I take it, how is my survival rate affected? Intuitively, what would my probability of surviving as a function of my age be given that I take the drug, versus what it would have been had I not taken the drug or had taken some other drug? Second, how much do I value this reduction in risk? Medical research seeks to estimate the first parameter. A huge economics literature seeks to measure the second.

Yet the news release implies that medical research is not pursuing this agenda, as it narrowly focuses on how a medicine affects a specific measurable outcome such as blood pressure. Here is a quote: “Patients and doctors care less about whether a medication lowers blood pressure than they do about whether it prevents heart attacks and strokes or decreases the risk of premature death,” said the study’s lead author, Dr. Michael Hochman. The medical researchers must be implicitly assuming that there is a well-known one-to-one function between elevated blood pressure and nasty outcomes. In that case, knowing how the drug affects blood pressure would be equivalent to knowing how the drug affects “the nasty outcomes” such as elevated death risk. Recall the Chain Rule from calculus: y = f(g(x)). The researchers are studying dg/dx, while patients and clinical doctors care about dy/dx.
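
To spell out the chain-rule point with explicit notation (the labels below are mine, not the study’s): let x denote taking the drug, let b = g(x) be the measured blood pressure, and let y = f(b) be the outcome patients actually care about, such as the risk of premature death. Then

\[
\frac{dy}{dx} = f'(g(x))\, g'(x),
\]

so a trial that estimates only g'(x), the drug’s effect on blood pressure, pins down the patient-relevant dy/dx only if f', the link from blood pressure to hard outcomes, is itself well established.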

8 Responses to “Eyes on the Prize”

  1. Vince says:

    The problem is statistical power. Death is rare (on the order of 1% of people in a study will die), and most beneficial interventions have a small impact on the overall survival rate (on the order of 1% of people who would have died will survive thanks to the intervention). So if you had a study of 20,000 people, half in the control group and half in the intervention group, then you’d expect about 100 deaths in the control group (1% of the 10,000) and 99 deaths in the intervention group (1% of those who would have died are saved by the intervention). In order for there to be a statistically meaningful difference which doesn’t get lost in the noise, you’d need something on the order of a million people to study. [A rough sample-size sketch along these lines appears after the comments.]

    Instead of studying every intervention on a million people, researchers run one big study on millions of people measuring the relationship between blood pressure (for example) and survival rate, and then run a bunch of smaller studies measuring the effects of various interventions on blood pressure.

  2. Ed Whitney says:

    The Institute of Medicine has a new book about this very topic. http://www.iom.edu/Reports/2010/Evaluation-of-Biomarkers-and-Surrogate-Endpoints-in-Chronic-Disease.aspx allows access to read the report online. It has long been known that surrogate outcomes often lack clinical relevance. The explosive growth of “evidence based medicine” as a movement dates back to about 1990, when heart drugs which decreased the numbers of premature ventricular contractions turned out to lead to significantly greater rates of cardiac death in randomized trials. You might think that intermediate endpoints would fall into such disrepute that they would no longer be the subject of major medical studies, but this appears not to be the case.

    Hell, many journals are still reporting p values as if they had something to do with the strength of evidence in support of a scientific hypothesis, even though the flakiness of this assumption was written about as far back as 1919! Damn shame.

  3. “…(abstracting from quality of life).” Why? Consider the introduction of a drug B that has exactly the same effectiveness as drug A but is delivered by a once-a-day pill rather than four-hourly injections. It’s worth having, quite certainly, even though the direct benefits fall entirely on quality of life and lower costs. Via better compliance, these will also feed back into better outcomes, but you are talking about the researchers’ primary criteria. See my argument here that medical researchers are required by medical ethics to consider costs.

  4. Ed Whitney says:

    Vince makes an important point about statistical power for different outcomes. If death is not a common outcome for a particular condition, then extremely large studies will be required to detect mortality differences between distinct interventions.

    In the study which Matthew points us to, death was only one clinical endpoint. Other examples included hospitalization, need to seek medical care, need for a therapeutic intervention (such as a hear catheterization), need to miss work, prevention of unwanted pregnancies, weight loss, and disease episodes (number of migraine headaches in a month, for example). These all have meaning and importance to patients and their families. Not having to go to the emergency room with an asthma attack nearly as often is something that people can relate to, and is a fine outcome for a study to report.

    The surrogate endpoints that the authors were looking at include such things as asymptomatic venous thrombosis diagnosed through ultrasound surveillance (rather than symptomatic thrombosis) and asymptomatic vertebral compression fractures detected by x-ray rather than as the result of clinical symptoms.

    These clinical endpoints are common enough to be detectable by studies using halfway attainable sample sizes. The lament of the authors of this article is that medication trials often fail to report clinically meaningful data.

  5. Ed Whitney says:

    Nix “hear catheterization;” make that “heart catheterization.”

  6. Keith Humphreys says:

    Matthew — did your post get cut off? After “read the rest of this entry” there is only blank space.

  7. hilzoy says:

    Patients may care about lowering their risk of death and heart attack, but the FDA cares about effectiveness in treating a given illness. This, combined with the point about power above, plus the increased cost of continuing a study to death, provides huge built-in incentives to use surrogate endpoints rather than, say, heart attack or death.

  8. Altoid says:

    Not to dismiss the statistical power issue, which makes a lot of sense, but I think a bigger payoff comes about halfway through the article. By sheer coincidence, studies that are funded solely by pharma sponsors are much more likely to report surrogate results. This, the authors point out, makes it easier for the results to look good for the sponsor. And for honest research, studies that use surrogate results could, it seems to me, be a lot cheaper to run and more likely to be funded, whatever the source.

    Say you’re developing and testing a cholesterol-lowering med. Since the real point of lower cholesterol is supposed to be reduced risk of all kinds of bad events, and that’s already an accepted and supposedly valid causal relation, all the developer is really going to care about is demonstrating lower cholesterol; because of the accepted link, the powerful move is to be selling lower cholesterol, and it also happens to be the easier result to demonstrate (as opposed to better real-world survival, say), and certainly easier in the time frame needed for FDA approval. And if you’re running an honest study on this drug, it’s still far easier and cheaper to measure serum cholesterol than it is to run a long-term study examining actual clinical outcomes.

    Hasn’t this been the experience with Lipitor? All its marketing seems to say is that it lowers cholesterol and afaik the most they’ll ever go beyond that is a general statement that lower cholesterol is associated with reduced risk of certain events. So unless somebody has really done a long-term study, we don’t really know whether Lipitor has any effect at all on reducing either the unpleasant events or even overall mortality. It’s been used long enough by now that this information could have been developed, and you’d have to think that any demonstrable effect would be a marketing coup we’d never hear the end of.

    I think I understand the fearsome cost factors and the developer’s interests in limiting scrutiny to surrogates, but I agree that there should be much more attention on ultimate outcomes, and on the value of the surrogates as stand-ins for the desired real-life outcomes. The latter should be particularly in the cross-hairs if development effort is focused on the surrogates.
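
To make the statistical power point from Vince’s first comment concrete, here is a minimal sketch of the standard two-proportion sample-size formula. The function name, the 80% power and 5% significance conventions, and the surrogate-endpoint rates in the second example are illustrative assumptions of mine, not figures from the article; the 1.00% vs. 0.99% mortality rates are the back-of-the-envelope numbers from the comment.

```python
# Rough sample-size arithmetic for a two-sided two-proportion z-test.
# Assumptions (not from the article): 5% significance level, 80% power.
from statistics import NormalDist

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to distinguish p_control from p_treated."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_treated) ** 2

# Hard endpoint: mortality of 1.00% (control) vs. 0.99% (treated), as in the comment.
print(round(n_per_arm(0.0100, 0.0099)))  # on the order of 15 million per arm

# Hypothetical surrogate endpoint with a larger effect: 30% vs. 25% still hypertensive.
print(round(n_per_arm(0.30, 0.25)))      # on the order of 1,250 per arm
```

With the conventional thresholds, the hard-endpoint trial comes out even larger than the million-person figure in the comment, which only strengthens the underlying point: detecting a small shift in a rare outcome like death takes enormous samples, while a surrogate with a bigger, more common signal can be studied with a few thousand patients.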
