More on HTE


I am very grateful to Dr. James Gaulte for pointing out the paper by Kravitz et al. on the vexing problem of heterogeneity of treatment effects (HTE).  The article indeed provides a very clear and comprehensive explanation of the issues, and it is apparent that the proposal by Kent et al. discussed in the last post was aimed at addressing some of the challenges described by Kravitz et al.  I agree with Dr. Gaulte that the article should be recommended reading for medical students (and we’re all medical students…), but I will present here additional comments about the specifics of the paper, as well as the general philosophy that the authors seem to espouse.  Interested readers are encouraged to review the paper first.

The issue of patient preference

Kravitz et al. provide 4 categories that determine HTE: 1) variable baseline risk; 2) variable response to treatment; 3) variable vulnerability to adverse effects; 4) variable patient preferences for possible outcomes (utilities).  I commend the authors for emphasizing the last category, for it is the one most commonly ignored by the EBM movement.  Yet, as the authors put it, “Patient preferences for different health states (utilities) are the ultimate arbiter of treatment success.”  (I wouldn’t necessarily go this far.)

But after reading the entire article, one gets the sense that Kravitz et al. may be paying no more than lip service to the subjective dimension of medicine.  Utilities are mentioned only twice more in the paper.  On page 672, the casual statement that “The EBM community has incorporated patient values and preferences into the general framework of evidence-based medicine (Sackett et al. 2000)” implies that merely acknowledging utilities is sufficient treatment of the issue.  And on page 673, in the graphs that demonstrate the interplay of the 3 other categories of HTE, the authors “assume that utilities have already been incorporated into the scales for measuring the absolute treatment effect and treatment-related harm,” dealing with this rather substantial methodological problem by simple hand-waving.

More complexity

Kravitz et al. do a great job of graphically illustrating the complex interplay between the 4 categories of variability.  Nor do they hide the difficulty that this complexity presents for the application of EBM methods to clinical practice.  But the reality is likely to be even more complex than they describe.  For example, the paper discusses ‘responsiveness to treatment’ and ‘vulnerability to harm’ as independent categories of variability, when in fact they could be dependent on each other.  A high responder to aspirin may also be more vulnerable to bleeding complications, yet not particularly susceptible to allergic reactions.  The complex interplay between responsiveness and vulnerability is unlikely to lend itself easily to quantitative modeling.

Another problematic area is the effect of treatment context.  The authors do identify this issue (top of p. 669 and top of p. 671, with reference to the paper by Horwitz et al.), but don’t seem to properly gauge the difficulty it represents for applying RCT results to the patient at hand.  They also take an optimistic view of how the “genomics era” will help tease out responsiveness and vulnerability issues in individuals (pp. 676, 679).

False dichotomy

Kravitz et al. do an excellent job of identifying and illustrating the problems that blind enthusiasm for RCT results generates.  The nefarious effects of “guideline creep” have already been well documented, as they show (p. 675).  But instead of limiting the discussion to suggesting improvements in study design and interpretation (pp. 677–78), the authors feel compelled to frame it as a conflict between two mutually exclusive approaches:

How should clinicians proceed? First, they should recognize that even compromised knowledge is better than complete ignorance. Thus, in the absence of information on HTE and ITEs, reliance on average effects as measured in good clinical studies is likely to produce better outcomes than is intuition or habit.

And later on, they again stick it to the Luddites:

Not even the most vehement critics of EBM would advocate a return to “opinion-based” practice grounded solely in pathophysiological reasoning and personal clinical experience (Tanenbaum 1993).

It should be obvious to anyone even barely familiar with the debate that the skeptics are not arguing against using evidence from clinical research in clinical practice, but against using this evidence as the primary, if not sole, determinant of medical decisions.  It is not a matter of favoring practice by “intuition” and disregarding RCT results, but a matter of not letting “guideline creep” blind us into cookie-cutter medicine for the masses.  As Hayek might have put it, it’s a matter of defending the value of local and dispersed knowledge against the misguided diktat of a centrally planned healthcare system.

Your AccuWeather Treatment Plan Is…

The paper is an outstanding overview of the parameters that impact treatment effect.  The authors give the reader a clear sense of the various dimensions of the problem and of the interactions among them.  In many ways, the article formalizes the work of therapeutic clinical reasoning and can help clinicians gain a better appreciation of all the parameters involved.

But in the final analysis, Kravitz et al.’s recommendations fail to genuinely “promulgate a spirit of humility,” promoting instead the notion that quantitative decision rules, imperfect as they are, remain a necessary evil in the conduct of modern medicine.  Their vision for the future?

Continuing advances at the nexus of genomics and medical informatics, however, hold promise for the future. The day may not be far off when a practitioner, using a handheld PDA, or personal digital assistant, will be able to calculate a patient’s baseline susceptibility and prognosis using validated clinical prediction rules; assess responsiveness and vulnerability to a therapeutic agent based on genotyping and measurement of biomarkers; and use this information for a discussion with the patient.

In other words, the physician as weatherman.

It is no wonder that 7 years after the publication of this paper, the ascendancy of outcomes research over the formulation of health policy seems complete, while the “philosophical challenge” raised with such foresight by Sandra Tanenbaum 18 years ago has been utterly ignored.

2 Comments

  1. Great analysis of the Kravitz paper. Thank you for putting into words some of the mental uneasiness that I had in regard to a few aspects of the paper but never verbalized. “Physician as weatherman” is a gem.

    Do you have PDF of the Tanenbaum paper?

    James Gaulte
