Why I don’t believe in science


A few days ago, cardiologist and master blogger John Mandrola wrote a piece that caught my attention. More precisely, it was the title of his blog post that grabbed me: “To Believe in Science Is To Believe in Data Sharing.”

Mandrola wrote about a proposal drafted by the International Committee of Medical Journal Editors (ICMJE) that would require authors of clinical research manuscripts to share patient-level data as a condition for publication. The data would be made available to other researchers who could then perform their own analyses, publish their own papers, etc.

The ICMJE proposal is obviously controversial, raising thorny questions about whether “data” are the kinds of things that can be subject to ownership and, if so, whether there are sufficient ethical or utilitarian grounds to demand that data be “forked over,” so to speak, for others to review and analyze.

Now all of that is of great interest, but I’d like to focus attention on the premise that underlies Mandrola’s endorsement of data sharing. And the question I have is this: Should we believe in science?

Mandrola’s belief in science must assume that medical science can reveal durable answers, truths upon which we can base our clinical decisions confidently. He comments:

I often find myself looking at a positive trial and thinking: “That’s a good result, but can I believe it?”…Are the authors, the keepers of the data sets, telling the whole story?

Presumably, science is the way to get the “whole story,” for after weighing the pros and cons of data sharing, he concludes:

Open data would make it easier to believe. And we need to believe in science.

In other words, we give people who are in the midst of a heart attack aspirin because “science has shown” that aspirin in that setting reduces the mortality rate, we screen for colorectal cancer because science has shown that cancer incidence decreases, we give the latest immune system modulator to patients with rheumatoid arthritis because science has shown that symptoms are improved, etc.

But Mandrola, like most doctors, is also perfectly aware that scientific truths change all the time.

John Ioannidis, a Harvard-trained physician and statistician, made a splash a few years ago when he published a paper called “Why Most Published Research Findings Are False” arguing that the claims of most research publications are rapidly contradicted or reversed, at least in medicine. That paper is the most downloaded article from the journal PLoS Medicine to date, and it earned Ioannidis glowing press coverage in The Atlantic and The Economist.

Now, Ioannidis believes—as does Mandrola—that scientific findings are unreliable because of circumstantial or external considerations. These considerations fall into two categories: statistical and human. Statistical deficiencies usually have to do with an insufficient number of subjects (for a given effect size) or perhaps with faulty statistical models. Human deficiencies have to do primarily with bias, which comes in many disguises that Ioannidis describes in his paper.
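To make the sample-size point concrete: the textbook normal-approximation formula for comparing two event rates shows how quickly the required number of subjects grows as the effect being sought shrinks. Here is a minimal sketch in Python, using only the standard library; the event rates in the example lines are invented for illustration and come from no actual trial.

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects needed per arm to detect a difference
    between event rates p1 and p2 with a two-sided z-test."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value of the test
    z_beta = z(power)           # quantile giving the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_group(0.10, 0.06))  # a large effect: roughly 700 per arm
print(n_per_group(0.10, 0.08))  # a smaller effect: over 3,000 per arm
```

A study that enrolls too few patients for the effect it hopes to find is “underpowered,” and underpowered studies are a major source of the unreliable findings Ioannidis describes.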

For Ioannidis, Mandrola, and many other commentators, bias is the most important “correctable” reason for scientific failures. And that’s why all are so keen on promoting some kind of data sharing to improve the reliability of a study result. If the data are open to scrutiny, different teams of researchers can examine the findings and draw their own conclusions. Presumably, the truth will emerge from such a process or, at least, we will get closer to it. Ioannidis advances the notion that while “100% certainty” may not be achieved, we can get close to the true answer if we reform the ways we conduct science.

A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard, the pure “gold” standard is unattainable. However, there are several approaches to improve the post-study probability.

And he goes on, in that paper and in several others he has authored since then, to make specific recommendations about ways to reform medical science in order to improve its veracity.
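For readers who want the arithmetic behind that “post-study probability”: Ioannidis frames it as the positive predictive value of a claimed finding. Setting aside his bias term, the formula in his paper depends on just three quantities, as in this small sketch (the numbers in the example lines are hypothetical, chosen only to show how the formula behaves):

```python
def ppv(alpha, beta, R):
    """Post-study probability that a claimed positive finding is true,
    per Ioannidis (2005), with his bias term set aside.
    alpha: type I error rate; beta: type II error rate;
    R: pre-study odds that the tested relationship is real."""
    return (1 - beta) * R / (R - beta * R + alpha)

print(ppv(0.05, 0.20, 1.0))   # well-powered test, plausible hypothesis: ~0.94
print(ppv(0.05, 0.70, 0.05))  # underpowered test, long-shot hypothesis: ~0.23
```

Even before any bias enters, an underpowered test of an unlikely hypothesis yields a “positive” result that is probably false.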

But Ioannidis may be overlooking an uncomfortable truth about the truths of science. The notion that truth can be approached by way of scientific refinement is not itself necessarily true. In fact, it is an idea that has been subject to considerable controversy over the last 70 years.

Without getting into the various debates that have taken place since the neopositivist dream of establishing science as a unified way of explaining the universe was dashed by Kurt Gödel, Willard Quine, Michael Polanyi, Thomas Kuhn, and others in the mid-twentieth century, we can at least remind ourselves that science has trouble with the concept of a “gold standard,” and that a scientific consensus, however close to 100% it may be, has led us down many blind alleys in the past. This is not very controversial, even among self-confident scientists.

Secondly, even if we confine ourselves to the modest “normal science” of clinical trials that concerns Mandrola and Ioannidis, I’m not sure what importance we should attach to the pursuit of scientific truth. In fact, when it comes to medical care, too strong a belief in science can be problematic.

As a case in point, a few weeks ago the New England Journal of Medicine published the results of the SPRINT trial, which had randomized patients with high blood pressure to one of two treatment protocols: high intensity therapy, aiming for a systolic blood pressure under 120 mmHg, or low intensity therapy, aiming for a more modest reduction (under 140 mmHg).

To the surprise of many, the trial was stopped early because the difference in mortality rates between the two groups was statistically strong enough to trigger an automatic termination. High intensity therapy was superior, and continuing the trial for another few years could have exposed the low intensity group to an excess risk of death.
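For those unfamiliar with such pre-specified termination rules, here is a toy sketch of the logic behind one common family of them, an O'Brien-Fleming-style boundary in rough approximation. The z_final value is illustrative rather than calibrated, real group-sequential designs are worked out with specialized software, and I make no claim that SPRINT used these particular numbers.

```python
import math

def obf_boundary(information_fraction, z_final=2.05):
    """Crude O'Brien-Fleming-style stopping boundary: the z-statistic
    a treatment difference must exceed at an interim look. Early looks
    demand overwhelming evidence; the bar relaxes toward z_final by
    the final analysis. z_final here is illustrative, not calibrated."""
    return z_final / math.sqrt(information_fraction)

# A monitoring board looks at 25%, 50%, 75%, and 100% of the planned data:
for frac in (0.25, 0.50, 0.75, 1.00):
    print(f"At {frac:.0%} of the data, stop only if |z| > {obf_boundary(frac):.2f}")
```

The essential feature is that the thresholds are fixed before any data are seen, so that the decision to stop cannot be steered by the investigators’ hopes, a point worth bearing in mind for the objections discussed later in this post.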

Now, the trial results caught the medical community by complete surprise, not only because the apparent benefit of intense therapy was strong enough to trigger an early termination, but also because the results were contrary to prior clinical trial findings. Those previous trials, however, were smaller than SPRINT and had not addressed the question of treatment intensity as directly and generally as SPRINT did. SPRINT was specifically designed to “put that question to rest.”

[Related post: Blood pressure and the conundrum of medical numerology]

Luckily, SPRINT was an NIH-sponsored trial, so the usual suspicion that the results could have been rigged by profit-motivated pharmaceutical companies could not be raised. In many other ways (size, design, statistical methods, etc.) SPRINT seemed to follow some of the recommendations made by Ioannidis. Nevertheless, many physicians seemed upset about the results, and some of their reactions seemed to betray biases of their own.

As soon as the abrupt trial termination was announced, Eric Topol and Harlan Krumholz, two academic leaders in cardiology, wrote an op-ed in the New York Times in which they demanded that all patient-level data be made promptly available for review by the scientific community. Being NIH-sponsored, it seemed, was not good enough to satisfy the skeptics.

Now, imagine for a moment that the data were made available to the doubting Thomases to examine, analyze, and interpret to their hearts’ content. Imagine further that the reviewers were able to scrub the data squeaky clean and purify them of any possible biasing influence. Suppose, furthermore, that they subjected the data to the best statistical methodology known to man. What would the new results then mean in relation to a truth we can “believe in”? How would we know that we are closer to “100% certainty”?

The answer is we wouldn’t really know, or at least not for very long. Conditions change constantly, so that new therapies, new practice patterns, new epidemiological considerations, new social, economic, and environmental factors, all invariably conspire to render clinical trial results much less relevant, if not completely obsolete within a short period of time. Trial results are short-lived not just because of design or bias. They are short-lived because the world they try to capture quickly outlives them.

And besides, in real life settings, the importance of scientific truths is vastly overstated. From a practical standpoint, how close the SPRINT trial results are to “the truth” is of secondary concern when treating patients. Even if a clinical trial were carried out with utmost probity and satisfied the most fastidious of scientists, we would still not be obligated to apply its results in practice willy-nilly, as Harlan Krumholz himself has reminded us.

Shortly after SPRINT was published, Krumholz wrote a piece in the New York Times cataloging all the reasons why we shouldn’t rush to lower blood pressure intensely in everyone. But his reasons had nothing to do with concerns about the trial design or the possible bias of the investigators. Instead, Krumholz correctly reminded us that numerous patient factors must first be taken into consideration before making a clinical decision.

A trial can tell us that on average certain types of patients may do better with this treatment than that one, but a trial can tell us nothing about how the patient at hand will fare, and this patient invariably has certain characteristics that make him or her different from those patients enrolled in the trial. We are not clones, after all.

The upshot of this inherent limit to the value of clinical trials is that Mandrola, Ioannidis, Topol, Krumholz, myself, and any physician worth the M.D. after their name have to use judgment to take care of patients, and clinical judgment is decidedly “unscientific.” After all, if we are allowed our own respective clinical judgments—and thankfully, so far, we are—there is no scientific explanation for any agreement or discrepancy among us. Judgments are decisions, not “discoveries.”

Now, of course, an immediate retort is to say “Well, Accad, I understand your point about clinical decisions per se not being scientific actions, but isn’t it better to have medical science be least tainted by potential bias, so our clinical decisions can be as efficacious as possible? Surely, you’re not advocating that we leave it up to investigators to advance whatever claims they wish? Let’s not go back to the age of snake oil salesmen!”

Well, one answer to that retort is that scientific “purity” comes at a cost, and determining whether it is worthwhile to purify science is not itself a matter of scientific inquiry. There is a cost to implementing new rules that must be added to an already horrendously expensive and lengthy clinical trial process, and if the aim is to get ever closer to scientific certainty, there is no end to the resources that could be employed to triple-verify and vet everything that goes into a clinical trial.

In fact, clinical research is extremely costly precisely because of perennial calls to make it “more rigorous” and more believable. And these calls are not new: our healthcare system was born out of a desire to put snake oil salesmen out of business by making medicine ever more scientific. But if that desire leads to a boundless commitment of resources and still remains unsatisfied, perhaps it’s time to reconsider its pre-suppositions. Besides, I’m not sure that the academic community, whose living depends in large part on the conduct of research, is sufficiently impartial when it comes to determining the optimum cost of the scientific enterprise.

But there is another, more important cost, one that is not material in nature.

In the aftermath of the SPRINT trial, some physicians and scientists were so upset about the mini “paradigm shift” they were potentially facing that they railed against the decision taken to stop the trial early according to its pre-specified safety termination rule.

In the comment section of a Health News Review blog post discussing the SPRINT trial, Mayo Clinic physician-scientist Victor Montori—a very well-respected champion of patient-centered medicine and “shared decision-making”—expressed a wish that “new pressure be applied to prevent [early trial termination] from ever happening again.”

Alan Cassels, a health policy researcher and the author of the article, agreed with Montori and added that “we should not cheer the decision to stop the trial, and sprint to erroneous conclusions about what it all means.” I have read similar remarks made in other venues online or in print.

Now, it is quite possible that these comments were made off-the-cuff in the excitement of the surprising news. But if Montori, Cassels, and others making these remarks are willing to stand by them, then not only do they demonstrate circular reasoning with regard to scientific validity (termination rules are put in place precisely to remove human bias), but they also implicitly express the view that seeking scientific clarity is worth risking some lives.

This, in my mind, is a troubling position to take, and it points to a more general danger: if we believe in science too strongly, we may end up not believing in patients anymore.

[Related Post: Is medicine a scientific enterprise?]

10 Comments

    1. Thank you, Shawn. That’s the intention. Philosophy, ontology, epistemology are crucially important to clinical care, and I think they’re great fun to think about too.

  1. A great example of the over-reliance upon “science” can be found in the refusal of the medical community to recognize hypothyroidism in the general population based upon flawed “science.”

    Too much reliance upon “science” leads to treating the test and not the symptoms.

    1. You touch on a very important point, Diane, about how diseases are defined. Empirical science by itself cannot offer any clarity in that regard.

  2. I like the point about the constantly changing conditions. Life is change and constant adaptation. No study captures that. Evidence-based medicine has almost become a religion, especially for non-clinicians. What few of them know is that only about 10% of medical knowledge is based on RCTs; the rest is causal inference. What hype…

    1. Thank you, Marc. In a sense, you’re putting your finger on the old debate between the Eleatics on the one hand (no change is possible) and the Heracliteans on the other (nothing stays the same). Plato had an answer, but it was still problematic. Aristotle solved the puzzle, but no one seems to remember or care.

  3. Great article describing the current flaws in the scientific method within the field of medicine. However, I would love to hear you elaborate: how would you like to see it change?

    1. Thank you for your comment, Niklas. You are asking the right question. From my point of view, the first step is to do a good job defining basic concepts like “health” and “disease,” and then to properly identify what the role of the physician should be. Only then can we properly understand how best to use and apply empirical science. At this time, these fundamental concepts remain conveniently vague. (Incidentally, I will be presenting an exploratory paper on these basic questions at the next Austrian Economics Research Conference this coming March. Stay tuned!)

  4. Michel

    Another very thought-provoking post. I am always astounded that you have time to practice medicine. The breadth of reading and thought underlying your posts encourages me to pursue the perennial questions that are never considered by most people, including M.D.s.

    1. Thank you, Russ. I’m encouraged by your reaction (and others’) to write more on philosophy and science, which is a great interest of mine.
