How To Think Like A Psychiatrist

March 4, 2012

The cornerstone of any medical intervention is a sound diagnosis.  Accurate diagnosis guides the proper treatment, while an incorrect diagnosis might subject a patient to unnecessary procedures or excessive pharmacotherapy, and it may further obscure the patient’s true underlying condition.  This is true for all medical specialties—including psychiatry.  It behooves us, then, to examine the practice of clinical decision-making, how we do it, and where we might go wrong, particularly in the area of psychiatric diagnosis.

According to Pat Croskerry, a physician at Dalhousie University in Canada, the foundation of clinical cognition is the “dual process model,” first described by the Greek philosophers (and reviewed here).  This model proposes that people solve problems using one of two “processes”:  Type 1 processes involve intuition and are largely automatic, fast, and unconscious (e.g., recognizing a friend’s face).  Type 2 processes are more deliberate, analytical, and systematic (e.g., planning the best route for an upcoming trip).  Doctors use both types when making a diagnosis, but the relative emphasis varies with the setting.  In the ED, quick action based on pattern recognition (i.e., a Type 1 process) is crucial.  Sometimes, however, it may be wrong, particularly if other conditions aren’t evaluated and ruled out (i.e., a Type 2 process).  For instance, a patient with flank pain, nausea, vomiting, and hematuria demonstrates the “pattern” of a kidney stone (common), but may in fact have a dissecting aortic aneurysm (uncommon).

This model is valuable for understanding how we arrive at psychiatric diagnoses (the above figure is from a 2009 article by Croskerry).  When evaluating a patient for the first time, a psychiatrist often looks at “the big picture”:  Does this person appear to have a mood disorder, psychosis, anxiety, a personality disorder?  Have I seen this type of patient before?  What’s my general impression of this person?  In other words, the assessment relies heavily on Type 1 processes, using heuristics and “Gestalt” impressions.  But Type 2 processes are also important.  We must inquire about specific symptoms, treatment history, social background; we might order tests or review old records, which may change our initial perception.

Sound clinical decision-making, therefore, requires both processes.  Unfortunately, both are highly prone to error.  In fact, Croskerry identifies at least 40 cognitive biases, which occur when the processes are not matched to the task at hand.  For instance, we tend to use Type 1 processes more often than we should.  Many psychiatrists, particularly those seeing a large volume of patients for short periods of time, see patterns earlier than is warranted and rush to diagnoses without fully considering all possibilities.  In other words, they fall victim to what psychologist Keith Stanovich calls “dysrationalia,” the inability to think or act rationally despite adequate intelligence.  In the dual process model, this dysrational tendency can “override” Type 2 processes (“I don’t need to do a complete social history, I just know this patient has major depression”), leading to diagnostic failure.

Croskerry calls this the “cognitive miser” function: we rely on processes that consume fewer cognitive resources because we’re cognitively lazy.  The alternative would be to switch to a Type 2 process—a more detailed evaluation, using deductive, analytic reasoning.  But this takes great effort and time.  Moreover, when a psychiatrist switches to a “Type 2” mode, he or she asks questions that are nonspecific in nature (largely owing to the unreliability of some DSM-IV diagnoses), or questions that merely confirm the initial “Type 1” hunch.  In other words, we end up finding what we expect to find.

The contrast between Type 1 and Type 2 processes is most apparent when we observe people operating at either end of the spectrum.  Some psychiatrists see patterns in every patient (e.g., “I could tell he was bipolar as soon as he walked into my office”—a classic error called the representativeness heuristic), even though they rarely ask about specific symptoms, let alone test alternate hypotheses.  On the other hand, medical students and young clinicians often work exclusively in Type 2; they ask very thorough questions, covering every conceivable alternative, and every symptom in the DSM-IV (even irrelevant ones).  As a result, they get frustrated when they can’t determine a precise diagnosis or, alternately, they come up with a diagnosis that might “fit” the data but completely miss the mark regarding the underlying essence of the patient’s suffering.

Croskerry writes that the most accurate clinical decision-making occurs when a physician can switch between Type 1 and Type 2 processes as needed, a skill called metacognition.  Metacognition requires a certain degree of humility, a willingness to re-examine one’s decisions in light of new information.  It also demands that the doctor be able to recognize when he or she is not performing well and be willing to self-monitor and self-criticize.  To do this, Croskerry recommends that we develop “cognitive forcing strategies,” deliberate interventions that force us to think more consciously and analytically about the problem at hand.  This may help us to be more accurate in our assessments:  in other words, to see both the trees for the forest, and the forest for the trees.

This could be a hard sell.  Doctors can be a stubborn bunch.  Clinicians who insist on practicing Type 2, “checklist”-style medicine (e.g., in a clinical trial) may be unwilling to consider the larger context in which specific symptoms arise, or they may not have sufficient understanding of that context to see how it might impact a patient.  On the other hand, clinicians who rush to judgment based on first impressions (a Type 1 process) may be annoyed by any suggestion that they should slow down and be more thorough or methodical.  Not to mention the fact that being more thorough takes more time.  And as we all know, time is money.

I believe that all psychiatrists should heed the dual-process model and ask how it influences their practice.  Are you too quick to label and diagnose, owing to your “dysrational” (Type 1) impulses?  On the other hand, if you use established diagnostic criteria (Type 2), are you measuring anything useful?  Should you use a cognitive forcing strategy to avoid over-reliance on one type of decision-making?  If you continue to rely on pattern recognition (Type 1 process), then what other data (Type 2) should you collect?  Treatment history?  A questionnaire?  Biomarkers?  A comprehensive assessment of social context?  And ultimately, how do you use this information to diagnose a “disorder” in a given individual?

These are just a few questions that the dual process model raises.  There are no easy answers, but anything that challenges us to be better physicians and avoid clinical errors, in my opinion, is well worth our time, attention, and thought.


The Curious Psychology of “Disability”

December 28, 2011

I’ll start this post with a brief clinical vignette:

I have been seeing Frank, a 44-year-old man, on a regular basis for about six months.  He first came to our community clinic with generalized, nonspecific complaints of “anxiety,” feeling “uncomfortable” in public, and getting “angry all the time,” especially toward people who disagreed with him.  His complaints never truly met official criteria for a DSM-IV disorder, but he was clearly dissatisfied with much in his life and he agreed to continue attending biweekly appointments.  Frank once requested Xanax, by name, but I did not prescribe any medication; I never felt it was appropriate for his symptoms, and besides, he responded well to a combined cognitive-interpersonal approach exploring his regret over past activities as a gang member (and related incarcerations), feelings of being a poor father to his four daughters, and efforts to improve his fragile self-esteem.  Even though Frank still has not met criteria for a specific disorder (he currently holds the imprecise and imperfect label of “anxiety NOS”), he has shown significant improvement and a desire to identify and reverse some of his self-defeating behaviors.

Some of the details (including his name) have been changed to preserve Frank’s privacy.  However, I think the general story still gets across:  a man with low self-worth, guilty feelings, and self-denigration from his overidentification with past misdeeds, came to me for help.  We’ve made progress, despite a lack of medications, and the lack of a clear DSM-IV (or, most likely, DSM-5) diagnosis.  Not dramatic, not earth-shattering, but a success nonetheless.  Right?

Not so fast.

Shortly after our appointment last week, I received a request for Frank’s records from the Social Security Administration, along with a letter from a local law firm he hired to help him obtain benefits.  He had apparently applied for SSI disability and the reviewers needed to see my notes.

I should not have been surprised by this request.  After all, our clinic receives several of these requests each day.  In most cases, I don’t do anything; our clinic staff prints out the records, sends them to SSA, and the evaluation process proceeds generally without any further input from us (for a detailed description of the disability evaluation process, see this article).  But for some reason, this particular request was uniquely heartbreaking.  It made me wonder about the impact of the “disability” label on a man like Frank.

Before I go further, let me emphasize that I’m looking at Frank’s case from the viewpoint of a psychiatrist, a doctor, a healer.  I’m aware that Frank’s family is under some significant financial strain—as are many of my patients in this clinic (a topic about which I’ve written before)—and some sort of welfare or financial support, such as SSI disability income, would make his life somewhat easier.  It might even alleviate some of his anxiety.

However, in six months I have already seen a gradual improvement in Frank’s symptoms, an increase in his motivation to recover, and greater compassion for himself and others.  I do not see him as “disabled”; instead, I believe that with a little more effort, he may be able to handle his own affairs with competence, obtain some form of gainful employment, and raise his daughters as a capable father.  He, too, recognizes this and has expressed gratitude for the progress we have made.

There is no way, at this time, for me to know Frank’s motives for applying for disability.  Perhaps he simply saw it as a way to earn some supplementary income.  Perhaps he believes he truly is disabled (although I don’t think he would say this—and if he did, I wish he’d share it with me!).  I also have no evidence to suggest that Frank is trying to “game the system.”  He may be following the suggestions of a family member, a friend, or even another healthcare provider.  All of the above are worthwhile topics to discuss at our next appointment.

But once those records are sent, the evaluation process is out of my hands.  And even if Frank’s request is denied, I wonder about the psychological effect of the “disability” label on Frank’s desire to maintain the gains he has made.  Labels can mean a lot.  Psychiatric diagnoses, for instance, often needlessly and unfairly label people and lead to unnecessary treatment (and it doesn’t look like DSM-5 will offer much improvement).  Likewise, labels like “chronic,” “incurable,” and “disabled” can also have a detrimental impact, a sentiment expressed emphatically in the literature on “recovery” from mental illness.  The recovery movement, in fact, preaches that mental health services should promote self-direction, empowerment, and patient choice.  If, instead, we convey pessimism, hopelessness, and the stigma of “disability,” we may undermine those goals.

As a healer, I believe that my greatest responsibility and most difficult (although most rewarding) task is to instill hope and optimism in my patients.  Even though not all of them will be entirely “symptom-free” and able to function competently in every situation life hands them, and some may require life-long medication and/or psychosocial support (and, perhaps, disability income), I categorically refuse to believe that most are “disabled” in the sense that they will never be able to live productive, satisfying lives.

I would bet that most doctors and most patients agree with me.  With the proper supports and interventions, all patients (or “users” or “consumers,” if you prefer those terms) can have the opportunity to succeed, and potentially extricate themselves from the invisible chains of mental illness.  In Frank’s case, he is almost there.

But we as a society provide an institution called “disability,” which gives benefits to people with a psychiatric diagnosis, requires that they see a psychiatrist, and often requires that they take medication.  This sends a very powerful—and potentially unhealthy—psychological message to those who could otherwise overcome their disability.  To Frank, it directly contradicts the messages of hope and encouragement I try to offer at each visit.  It makes him dependent upon me, rather than upon himself and his own resources and abilities.  In other words, to a man like Frank, disability is anti-recovery.

I don’t have an easy answer to this problem.  For starters, changing the name of “disability” to something like “temporary psychological material support”—a substitute label, nothing more—might be helpful.  Also, rewarding recipients (e.g., by not rescinding their benefits) for meeting predetermined milestones of recovery (part-time work, independent housing, etc.) may help.  But the more I think about the life-affirming and empowering potential of recovery, and about how we allocate our scarce resources, the more I believe that the recovery-based—as opposed to disability-based—practice of psychiatry has much more to offer the future of our patients, our profession, and our nation, than the current status quo.  For the sake of Frank’s recovery, and the recovery of countless other men and women like him, maybe it’s time to make that happen.


Is the Criticism of DSM-5 Misguided?

December 15, 2011

In 2013, the American Psychiatric Association will publish the DSM-5, the next edition of its diagnostic manual.  Public reaction has, thus far, not been favorable.  Critics decry the lowering of diagnostic thresholds in existing criteria; the conception of new diagnoses seemingly “out of thin air”; the radical overhaul of entire sections (like the personality disorders); and the secrecy under which many of the earlier planning stages were held.

Much of the criticism, including that from the DSM-5’s most vocal critic, Allen Frances (lead author of the current edition, the DSM-IV), laments the expansion of diagnostic criteria.  Critics argue that this may increase the number of “mentally ill” individuals and/or pathologize “normal” behavior, exposing thousands—if not millions—of new patients to medications which may cause more harm than good.

The American Psychological Association, the British Psychological Society, and the American Counseling Association have expressed their opposition publicly.  An online petition from the Society for Humanistic Psychology (a division of the American Psychological Association) has garnered nearly 9,000 signatures in fewer than 60 days.

I understand and sympathize with the critics, particularly regarding the DSM‘s emphasis on “user acceptability” over validity (and, in the interest of full disclosure, I did sign the petition).  But I wonder whether the greater outcry against the DSM-5 is somewhat misdirected.  The DSM-5 may very well turn out to be a highly flawed document, but that’s all it will be: a document.  Whether it results in the “overdrugging and overdiagnosing” predicted by critics like Frances is not the primary responsibility of its authors, but of those who will use the book.  And this is where the outrage should be directed.

First of all, let’s just state the obvious:  it is impossible to write a comprehensive, scientifically valid catalog of all mental illnesses (particularly when some argue convincingly that mental illness is itself a false concept).  When we’re talking about conditions that have both biological and sociocultural origins (in fact, this has long been part of the distinction between “neurologic” and “psychiatric” disease), it seems clear that a diagnostic manual will never capture the full spectrum of psychiatric disorders.  Even if we included semi-accurate biological markers in the diagnostic criteria—a Holy Grail we’re far from attaining—mental illness will always, in the end, depend primarily upon the patient’s subjective experience.

Thus, the DSM-5, like all DSMs before it, will be, almost by definition, incomplete or deficient.  It will be a descriptive tool, a taxonomy, a guidebook, featuring the authors’ best guess as to what might constitute a treatable condition.  For example, in real life there is no one thing called “major depressive disorder” as it appears in the DSM-IV (in fact, there are 1,497 variations).  Nonetheless, we use “MDD” to label all of our patients with these combinations of symptoms, because it’s the best fit.  But a good mental health professional doesn’t treat MDD, he or she treats the person with MDD.  Calling it “MDD” is only necessary for insurance billing, for drug companies to get FDA approval for new pharmaceuticals, and for patients and docs to give a name to (and, if necessary, demystify) their condition.

In other words, the danger lies not in the label, but in how we use it.  In fact, one might even argue that a lousy label—or a label that is so nonspecific that it applies to a broad swath of the population, including some in the “normal” part of the spectrum (wherever that may be)—may actually be beneficial, because it will be so meaningless that it will require the clinician to think more deeply about what that label is trying to convey.

As an example, consider “chronic pain,” a label frequently applied to patients and written in their charts.  (Even though it’s often written as a diagnosis, it is really a symptom.)  “Chronic pain” simply implies that the patient experiences pain.  Nothing more.  It says nothing about the origin of the pain, what exacerbates or soothes it, how long the patient has experienced it, or whether it responds to NSAIDs, opioids, acupuncture, yoga, or rest.  When a new patient complains of “chronic pain” to a good pain specialist, the doctor doesn’t just write a script; he or she performs an examination, obtains a detailed history and collateral information, and treats in a manner that relieves discomfort yet minimizes side effects (and cost) to the patient.

Perhaps this is what we can do in psychiatry, even with the reviled new DSM-5 diagnoses like “Attenuated Psychosis Syndrome” or “Disruptive Mood Dysregulation Disorder.”  Each of these new “diagnoses” suggests something about the patient and his or her behavior or experience.  But neither one should predict a course of treatment.  In fact, a vague diagnosis should actually prompt the doctor to probe more deeply into a patient’s symptoms and determine their impact on the patient’s well-being and functional status (which may actually help improve disability evaluations, too).  On a population basis, the heterogeneity of patients given a diagnosis might stimulate further research (neurobiological, psychological, epidemiologic, maybe even anthropological) to determine more specific subtypes of illness.

Will the new diagnoses be “overused”?  Probably.  Will they lead to the “overdrugging” of patients—the outcome that everyone fears?  I guess that’s possible.  But if so, the spotlight should be turned on those who do the overdrugging, not on the document that simply describes the symptoms.  This may turn out to be difficult: official treatment guidelines might come out with recommendations to medicate, insurance companies may require diagnoses (or medications) in order to cover psychiatric services, and drug companies might aggressively market their products for these new indications.  And there will always be doctors who cut corners, arrive at diagnoses too quickly, and are eager to use dangerous medications.  But these, ultimately, are the targets at which anti-DSM-5 efforts should be aimed, not at wholesale rejection of the manual itself.

In the end, one could argue that the DSM-5 is unnecessary, premature, and flawed.  Unfortunately, it simply reflects our understanding of mental illness at this point in time.  But is it a “dangerous public health experiment,” as Allen Frances has warned?  Only if we allow it to override our eyes and ears, our hearts and minds, and what our patients truly need and want from us.  Ultimately, it’s just a book.  What really matters is how we use it.
