I opened my last post with a question I never got around to actually answering: How should doctors make decisions?
That wasn’t an oversight. To try to provide an answer seemed daunting, plus I wouldn’t have resisted the urge to wax philosophical about praxeology or phronesis. And how sexy is that? Surely my Alexa ranking would have suffered!
Perhaps sensing my predicament, Dr. Saurabh Jha tactfully suggested a book which I have since ordered and read. (And what a great call that was. Thank you, @RogueRad!) The book is Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making by Gary Klein.
According to his Wikipedia entry, Klein is a cognitive psychologist credited with pioneering the field of naturalistic decision-making, a research endeavor in which people’s decisions are examined in real-life settings rather than in contrived laboratory experiments.
The book, published in 2009 by The MIT Press, summarizes the fruits of his research. It’s a captivating work.
Klein and his associates spent decades observing, interviewing, and collecting data on a wide range of human beings in action: pilots, military commanders, firefighters, but also business people, engineers, doctors, and housewives.
Klein’s conclusion, which is the thesis of the book, is that the established wisdom about how experts make decisions is highly defective. Klein shows that instead of relying on analytical decision-making frameworks, real experts frequently operate best on the basis of tacit knowledge, mental shortcuts, and “gut feeling.”
The book is organized around eleven widely believed decision-making claims, which Klein systematically undermines with vivid real-life examples, including decisions made during momentous historical events (9/11, Pearl Harbor) or during hair-raising misadventures in aviation, firefighting, and finance. Of course, he also has occasion to address clinical decisions.
I won’t list all of the claims that Klein undercuts, but the following few are particularly relevant to the topics we discussed in the last two posts and elsewhere in this blog:
- Claim 1: Teaching people procedures helps them perform tasks more skillfully
- Claim 2a: Successful decision-makers rely on logic and statistics instead of intuition
- Claim 3: To make a decision, generate several options and compare them to pick the best one
- Claim 4: We can reduce uncertainty by gathering more information
- Claim 7: To make sense of a situation, we draw inference from the data
Klein does not claim that analytical decision-making is never valuable; rather, he shows that it is mostly valuable in “well ordered” and stable situations, as opposed to “complex and unpredictable” ones.
How often is a medical encounter “well ordered and stable” and how often is it “complex and unpredictable?” I guess that’s at the crux of my contention against guideline medicine and the risk-prediction industry.
Klein endorses analytical decision-making and operational checklists for standard medical procedures (e.g., operating room routines, central venous catheter placements, etc.), an endorsement that may, to a certain degree, appease enthusiasts of the patient safety movement.
He views favorably the well-known story of how the Johns Hopkins Hospital succeeded in eliminating central line infections by rigorously codifying and enforcing procedures to achieve and maintain a sterile field at the bedside.
But Klein also makes the point that analytical decision-making is vastly overvalued in the vast majority of situations that characterize medical care.
Most of the medical examples in the book deal with acutely ill patients, and for good reason. Cases of acutely ill patients exemplify “complex and unpredictable” situations.
As the reader can judge, the best medical decisions in such cases often involve thought processes that are not linear, methodical, or logical. Instead, the successful physician often jumps to conclusions to make decisions on the basis of intuition and tacit knowledge.
The two examples in the last post were decisions of that nature. And although Klein doesn’t explicitly state so, it seems evident that when experts choose to deviate from established guidelines and procedures, and rely instead on intuition or tacit knowledge, they are indeed assuming the risk of their decision (and also becoming accountable for it).
To the extent that the ever-growing checklists, guidelines, and practice standards limit a doctor’s prerogative to use intuition, and to the extent that the fragmentation of care, largely imposed by medical regulations, inhibits a physician’s ability to gain experience and tacit knowledge, acute patient care undoubtedly suffers.
And the same, of course, can be said about limits imposed on other members of the medical team, such as nurses. Nor is the problem confined to acute care: it holds in the outpatient setting as well.
On the surface, the management of weight, LDL-cholesterol level, and blood glucose may seem suitable for the analytic decision rules that are the stuff of population medicine, but that’s only convincing if we are conditioned into viewing patients as stick figures defined by their risk factors.
In reality, any individual offers a rather rich texture of intangible and dynamic characteristics and is far from being “well ordered and stable” enough to satisfy the conditions under which standard risk analysis is more readily beneficial.
If we pay attention to whole persons and base our medical decisions on intuition and experience, might we serve our patients better than if we followed clinical guidelines diligently? In that case, a good doctor would excel where a bad one would not, but the system cannot tell the two apart.
And allowing such freedom of practice and independence of thought is intolerable for bureaucracies predicated on standardization and on the illusion of manageability.
Instead, the health care bureaucracies enjoin us to strive for mediocrity. How? By perverting the definition of excellence into adherence to procedural checklists and population-based clinical guidelines.
Here are a few quotes from the first chapters of the book:
> Most of the [analytical] claims, rooted in well-ordered situations, try to substitute analysis for experience… But in complex and ambiguous situations, there is no substitute for experience.

> We put too much emphasis on reducing errors and not enough on building expertise.

> [O]rganizations often overstate the importance of procedures. During an accident investigation, if someone finds that a procedure wasn’t followed, even if it didn’t directly cause the accident, there is a good chance that “procedural violation” will be trumpeted as one of the contributing factors.

> …procedures are rarely sufficient and often out of date. In many cases, procedures can make performance worse, not better. They can lull us into mindlessness and complacency, and an erosion of expertise. In some cases, procedures can mislead us.
There are many other statements like these, all of them readily applicable to the medical arena.
Now, the perceptive reader may recognize that, for all its persuasiveness, Klein’s empirical research does not amount to a complete theory of the mind on which we can confidently rest our understanding of human decisions. True enough. But for that, my friends, there is always praxeology and phronesis.