My Own Bipolar Kerfuffle

August 5, 2012

I have a confession to make.  I don’t know what “bipolar disorder” is.  And as a psychiatrist, I’ll admit that’s sort of embarrassing.

Okay, maybe I’m exaggerating when I say that I don’t know what bipolar disorder is.  Actually, if you asked me to define it, I’d give you an answer that would probably sound pretty accurate.  I’ve read the DSM-IV, had years of training, taken my Boards, treated people in the midst of manic episodes, and so on.  The problem for me is not the “idea” of bipolar disorder.  It’s what we mean when we use that term.

I recognized this problem only recently—in fact, just last month, as I was putting together the July/August issue of the Carlat Psychiatry Report (now available to subscribers here).  This month’s issue is devoted to the topic of “Bipolar Disorder,” and two contributors, faculty members at prestigious psychiatry departments, made contradictory—yet perfectly valid—observations.  One argued that it’s overdiagnosed; the other advocated for broadening our definition of bipolar disorder—in particular, “bipolar depression.”  The discrepancy was also noted in several comments from our Editorial Board.

Disagreements in science and medicine aren’t necessarily a bad thing.  In fact, when two authorities interpret a phenomenon differently, it creates the opportunity for further experimentation and investigation.  In time, the “truth” can be uncovered.  But in this case, as with much in psychiatry, “truth” seems to depend on whom you ask.

Consider this question.  What exactly is “bipolar depression”?  It seems quite simple:  it’s when a person with bipolar disorder experiences a depressive episode.  But what about when a person comes in with depression but has never had a manic episode or been diagnosed with bipolar disorder?  How about when a person with depression becomes “manic” after taking an antidepressant?  Could those be bipolar depression, too?  I suppose so.  But who says so?  One set of criteria was introduced by Jules Angst, a researcher in Switzerland, and was featured prominently in the BRIDGE study, published in 2011.  His criteria for bipolarity include agitation, irritability, hypomanic symptoms lasting as little as one day, and a family history of mania.  Other experts argue for a “spectrum” of bipolar illness.

(For a critique of the BRIDGE study, see this letter to the editor of the Archives of General Psychiatry, and this detailed—and entertaining—account in David Allen’s blog.)

The end result is rather shocking, when you think about it:  here we have this phenomenon called “bipolar disorder,” which may affect 4% of all Americans, and different experts define it differently.  With the right tweaking, nearly anyone who comes to the attention of a psychiatrist could be considered to have some features suggestive of someone’s definition of bipolar disorder.  (Think I’m kidding?  Check out the questionnaire in the appendix of Angst’s 2003 article.)

Such differences of opinion lead to some absurd situations, particularly when someone is asked to speak authoritatively about this disorder.  At this year’s APA Annual Meeting, for example, David Kupfer (DSM-5 Task Force Chair) gave a keynote address on “Rethinking Bipolar Disorder,” which included recommendations for screening adolescents and for preventive measures (including drugs) to head off early stages of the illness.  Why was it absurd?  Because as Kupfer spoke confidently about this disease entity, I looked around the packed auditorium and realized that each person may very well have his or her own definition of bipolar disorder.  But did anyone say anything?  No, we all nodded in agreement, deferring to the expert.

This problem exists throughout psychiatry.  The criteria for each diagnosis in the DSM-IV can easily be applied in a very loose, general way.  This happens partly because of fatigue, partly because insurance companies require that we give a diagnosis as early as the first visit, partly because we’re so reluctant (even when it’s appropriate) to tell patients that they’re actually healthy and may not have any diagnosis at all, and partly because different factions of psychiatrists use their experience to create their own criteria.  It’s no wonder that, as criteria are loosened, diagnoses are misapplied and the ranks of the “mentally ill” continue to grow.

As editor of a newsletter, I’m faced with another challenge I didn’t quite expect.  I can’t come out and say that bipolar disorder doesn’t exist (which wouldn’t be true anyway—I have actually seen cases of “classic,” textbook-style mania which do respond to medications as our guidelines would predict).  But I also can’t say that several definitions of “bipolar” exist.  That may be perceived as being too equivocal for a respectable publication and, as a result, some readers may have difficulty taking me seriously.

At the risk of sounding grandiose, I may be experiencing what our field’s leadership must experience on a regular basis.  Academic psychiatrists make their living by conducting research, publishing their findings, and, in most cases, specializing in a given clinical area.  It’s in their best interest to assume that the subjects of their research actually exist.  Furthermore, when experts see patients, they do so in a specialty clinic or clinical trial, which reinforces their definitions of disease.

This can become a problem for those of us seeing the complicated “real world” patients on the front lines, especially when we look to the experts for answers to such questions as whether we should use antipsychotics to treat acute mania, or whether antidepressants are helpful for bipolar depression.  If their interpretations of the diagnoses simply don’t pertain to the people in our offices, all bets are off.  Yet this, I fear, is what happens in psychiatry every day.

In the end, I can’t say whether my definition of bipolar disorder is right or not, because even the experts can’t seem to agree on what it is.  As for the newsletter, we decided to publish both articles, in the interest of maintaining a dialogue.  Readers will simply have to use their own definition of “bipolar disorder” and “bipolar depression” (or eschew them altogether)—hopefully in ways that help their patients.  But it has been an eye-opening experience in the futility (and humility) of trying to speak with authority about something we’re still trying desperately to understand.


What Adderall Can Teach Us About Medical Marijuana

June 19, 2012

An article in the New York Times last week described the increasing use of stimulant medications such as Adderall and Ritalin among high-school students.  Titled “The Risky Rise of the Good-Grade Pill,” the article discussed how 15 to 40 percent of students, competing for straight-As and spots in elite colleges, use stimulants for an extra “edge,” regardless of whether they actually have ADHD.  I’ve written about ADHD in this blog before.  It’s a real condition—and medications can help tremendously—but the diagnostic criteria are quite vague.  As with much in psychiatry, anyone “saying the right thing” can get one of these drugs relatively easily, whether they need it or not.

Sure enough, the number of prescriptions for these drugs has risen 26% since 2007.  Does this mean that ADHD is now 26% more prevalent?  No.  In the Times article, some students admitted they “lie to [their] psychiatrists” in order to “get something good.”  In fact, some students “laughed at the ease with which they got some doctors to write prescriptions for ADHD.”  With no objective test (some computerized tests exist, but they are neither widely used nor well validated, and brain scans are similarly suspect) and with diagnostic criteria readily accessible on the internet, anyone who wants a stimulant can basically get one.  And while psychiatric diagnosis is often an imperfect science, in many settings the methodology by which we assess and diagnose ADHD is particularly crude.

Many of my colleagues will disagree with (or hate) me for saying so, but in some sense, the prescription of stimulants has become just like any other type of cosmetic medicine.  Plastic surgeons and dermatologists, for instance, are trained to perform medically necessary procedures, but they often find that “cosmetic” procedures like facelifts and Botox injections are more lucrative.  Similarly, psychiatrists can build successful practices catering to ultra-competitive teens (and their parents) by giving out stimulants.  Who cares if there’s no real disease?  Psychiatry is all about enhancing patients’ lives, isn’t it?  As another blogger wrote last week, some respectable physicians have even argued that “anyone and everyone should have access to drugs that improve performance.”

When I think about “performance enhancement” in this manner, I can’t help but think about the controversy over medical marijuana.  This is another topic I’ve written about, mainly to question the “medical” label on something that is neither routinely accepted nor endorsed by the medical profession.  Proponents of medical cannabis, I wrote, have co-opted the “medical” label in order for patients to obtain an abusable psychoactive substance legally, under the guise of receiving “treatment.”

How is this different from the prescription of psychostimulants for ADHD?  The short answer is, it’s not.  If my fellow psychiatrists and I prescribe psychostimulants (which are abusable psychoactive substances in their own right, as described in the pages of the NYT) on the basis of simple patient complaints—and continue to do so simply because a patient reports a subjective benefit—then this isn’t very different from a medical marijuana provider writing a prescription (or “recommendation”) for medical cannabis.  In both cases, the conditions being treated are ill-defined (yes, in the case of ADHD, it’s detailed in the DSM, which gives it a certain validity, but that’s not saying much).  In both cases, the conditions affect patients’ quality of life but are rarely, if ever, life-threatening.  In both cases, psychoactive drugs are prescribed which could be abused but which most patients actually use quite responsibly.  Last but not least, in both cases, patients generally do well; they report satisfaction with treatment and often come back for more.

In fact, taken one step further, this analogy may turn out to be an argument in favor of medical marijuana.  As proponents of cannabis are all too eager to point out, marijuana is a natural substance, humans have used it for thousands of years, and it’s arguably safer than other abusable (but legal) substances like nicotine and alcohol.  Psychostimulants, on the other hand, are synthetic chemicals (not without adverse effects) and have been described as “gateway drugs” to more or less the same degree as marijuana.  Why one is legal and the other is not appears to be due simply to the psychiatric profession’s “seal of approval” on one but not the other.

If the psychiatric profession is gradually moving away from the assessment, diagnosis, and treatment of severe mental illness and, instead, treating “lifestyle” problems with drugs that could easily be abused, then I really don’t have a good argument for denying cannabis to patients who insist it helps their anxiety, insomnia, depression, or chronic pain.

Perhaps we should ask physicians to take a more rigorous approach to ADHD diagnosis, demanding interviews with parents and teachers, extensive neuropsychiatric testing, and (possibly) neuroimaging before offering a script.  But in a world in which doctors’ reimbursements are dwindling, and the time devoted to patient care is vanishing—not to mention a patient culture which demands a quick fix for the problems associated with the stresses of modern adolescence—it doesn’t surprise me one bit that some doctors will cut corners and prescribe without a thorough workup, in much the same way that marijuana is provided in states where it’s legal.  If the loudest protests against such a practice don’t come from our leadership—but instead from the pages of the New York Times—we only have ourselves to blame when things really get out of hand.


“Patient-Centered” Care and the Science of Psychiatry

May 30, 2012

When asked what makes for good patient care in medicine, a typical answer is that it should be “patient-centered.”  Sure, “evidence-based medicine” and expert clinical guidelines are helpful, but they only serve as the scientific foundation upon which we base our individualized treatment decisions.  What’s more important is how a disorder manifests in the patient and the treatments he or she is most likely to respond to (based on genetics, family history, biomarkers, etc).  In psychiatry, there’s the additional need to target treatment to the patient’s unique situation and context—always founded upon our scientific understanding of mental illness.

It’s almost a cliché to say that “no two people with depression [or bipolar or schizophrenia or whatever] are the same.”  But when the “same” disorder manifests differently in different people, isn’t it also possible that the disorders themselves are different?  Not only does such a question have implications for how we treat each individual, it also impacts how we interpret the “evidence,” how we use treatment guidelines, and what our diagnoses mean in the first place.

For starters, every patient wants something different.  What he or she gets is usually what the clinician wants, which, in turn, is determined by the diagnosis and established treatment guidelines:  lifelong medication treatment, referral for therapy, forced inpatient hospitalization, etc.  Obviously, our ultimate goal is to eliminate suffering by relieving one’s symptoms, but shouldn’t the route we take to get there reflect the patient’s desires?  When a patient gets what he or she wants, shouldn’t this count as good patient care, regardless of what the guidelines say?

For instance, some patients just want a quick fix (e.g., a pill, ideally without frequent office visits), because they have only a limited amount of money (or time) they’re willing to use for treatment.  Some patients need to complete “treatment” to satisfy a judge, an employer, or a family member.  Some patients visit the office simply to get a disability form filled out or satisfy some other social-service need.  Some simply want a place to vent, or to hear from a trusted professional that they’re “okay.”  Still others seek intensive, long-term therapy even when it’s not medically justified.  Patients request all sorts of things, which often differ from what the guidelines say they need.

Sometimes these requests are entirely reasonable, cost-effective, and practical.  But we psychiatrists often feel a need to practice evidence- (i.e., science-) based medicine; thus, we take treatment guidelines (and diagnoses) and try to make them apply to our patients, even when we know they want—or need—something else entirely, or won’t be able to follow through on our recommendations.  We prescribe medications even though we know the patient won’t be able to obtain the necessary lab monitoring; we refer a patient for intensive therapy even though we know their insurance will only cover a handful of visits; we admit a suicidal patient to a locked inpatient ward even though we know the unpredictability of that environment may cause further distress; or we advise a child with ADHD and his family to undergo long-term behavioral therapy in conjunction with stimulants, even though we know this resource may be unavailable.

Guidelines and diagnoses are written by committee, and, as such, rarely apply to the specifics of any individual patient.  Thus, a good clinician uses a clinical guideline simply as a tool—a reference point—to provide a foundation for an individual’s care, just as a master chef knows a basic recipe but alters it according to the tastes he wishes to bring out or the ingredients that happen to be in season.  A good clinician works outside the available guidelines for many practical reasons, not the least of which is the patient’s own belief system—what he or she thinks is wrong and how to fix it.  The same could be said for diagnoses themselves.  In truth, what’s written in the DSM is a model—a “case study,” if you will—by which real-world patients are observed and compared.  No patient ever fits a single diagnosis to a “T.”

Unfortunately, under the pressures of limited time, scarce resources, and the threat of legal action for a poor outcome, clinicians are more inclined to see patients for what they are than for who they are, and therefore adhere to guidelines even more closely than they’d like.  This corrupts treatment in many ways.  Diagnoses are given out which don’t fit (e.g., “parity” diagnoses must be given in order to maintain reimbursement).  Treatment recommendations are made which are far too costly or complex for some patients to follow.  Services like disability benefits are maintained far beyond the period they’re needed (because diagnoses “stick”).  And tremendous resources are devoted to the ongoing treatment of patients who simply want (and would benefit from) only sporadic check-ins, or who, conversely, can afford ongoing care themselves.

The entire situation calls into question the value of treatment guidelines, as well as the validity of psychiatric diagnoses.  Our patients’ unique characteristics, needs, and preferences—i.e., what helps patients to become “well”—vary far more widely than the symptoms upon which official treatment guidelines were developed.  Similarly, what motivates a person to seek treatment differs so widely from person to person that it implies vastly different etiologies.

To provide optimal care to a patient, care must indeed be “patient-centered.”  But truly patient-centered care must not only work around the DSM and established treatment guidelines; frequently, it must ignore diagnoses and guidelines altogether.  What does this say about the validity, relevance, and applicability of the diagnoses and guidelines at our disposal?  And what does this say about psychiatry as a science?


Is The Joke On Me?

May 12, 2012

I recently returned from the American Psychiatric Association (APA) Annual Meeting in Philadelphia.  I had the pleasure of participating on a panel discussing “psychiatrists and the new media” with the bloggers/authors from Shrink Rap, and Bob Hsiung of dr-bob.org.  The panel discussion was a success.  Some other parts of the conference, however, left me with a sense of doubt and unease.  I enjoy being a psychiatrist, but whenever I attend these psychiatric meetings, I sometimes find myself questioning the nature of what I do.  At times I wonder whether everyone else knows something I don’t.  Sometimes I even ask myself:  is the joke on me?

Here’s an example of what I mean.  On Sunday, David Kupfer of the University of Pittsburgh (and task force chair of the forthcoming DSM-5) gave a talk on “Rethinking Bipolar Disorder.”  The room—a cavernous hall at the Pennsylvania Convention Center—was packed.  Every chair was filled, while scores of attendees stood in the back or sat on the floor, listening with rapt attention.  The talk itself was a discussion of “where we need to go” in the management of bipolar disorder in the future.  Dr Kupfer described a new view of bipolar disorder as a chronic, multifactorial disorder involving not just mood lability and extremes of behavior, but also endocrine, inflammatory, neurophysiologic, and metabolic processes that deserve our attention.  He emphasized the fact that in between mood episodes, and even before they develop, there is a range of “dysfunctional symptom domains”—involving emotions, cognition, sleep, physical symptoms, and others—that we psychiatrists should be aware of.  He also introduced a potential way to “stage” the development of bipolar disorder (similar to the way doctors stage tumors), suggesting that people at early stages might benefit from prophylactic psychiatric intervention.

Basically, the take-home message (for me, at least) was that in the future, psychiatrists will be responsible for treating other manifestations of bipolar disorder than those we currently attend to.  We will also need to look for subthreshold symptoms in people who might have a “prodrome” of bipolar disorder.

A sympathetic observer might say that Kupfer is simply asking us to practice good medicine, caring for the entire person rather than just his or her symptoms, and to prevent the development or recurrence of bipolar illness.  On the other hand, a cynic might look at these pronouncements as a sort of disease-mongering, encouraging us to uncover signs of “disease” where they might not exist.  But both of these conclusions overlook a much more fundamental question that, to me, remains unanswered.  What exactly is bipolar disorder anyway?

I realize that’s an extraordinarily embarrassing question for a psychiatrist to ask.  And in all fairness, I do know what bipolar disorder is (or, at least, what the textbooks and the DSM-IV say it is).  I have seen examples of manic episodes in my own practice, and in my personal life, and have seen how they respond to medications, psychotherapy, or the passage of time.  But those are the minority.  Over the years (although my career is still relatively young), I have also seen dozens, if not hundreds, of people given the diagnosis of “bipolar disorder” without a clear history of a manic episode—the defining feature of bipolar disorder, according to the DSM.

As I looked around the room at everyone concentrating on Dr Kupfer’s every word, I wondered to myself, am I the only one with this dilemma?  Are my patients “special” or “unique”?  Maybe I’m a bad psychiatrist; maybe I don’t ask the right questions.  Or maybe everyone else is playing a joke on me.  That’s unlikely; others do see the same sorts of patients I do (I know this for a fact, from my own discussions with other psychiatrists).  But nobody seems to have the same crisis of confidence that I do.  It makes me wonder whether we have reached a point in psychiatry when psychiatrists can listen to a talk like this one (or see patients each day) and accept diagnostic categories without paying any attention to the fact that our nosology says virtually nothing at all about the unique nature of each person’s suffering.  It seems that we accept the words of our authority figures without asking the fundamental question of whether they have any basis in reality.  Or maybe I’m just missing out on the joke.

As far as I’m concerned, no two “bipolar” patients are alike, and no two “bipolar” patients have the same treatment goals.  The same can be said for almost everything else we treat, from “depression” to “borderline personality disorder” to addiction.  In my opinion, lumping all those people together and assuming they’re all alike for the purposes of a talk (or, even worse, for a clinical trial) makes it difficult—and quite foolish—to draw any conclusions about that group of individuals.

What we need to do is to figure out whether what we call “bipolar disorder” is a true disorder in the first place, rather than accept it uncritically and start looking for yet additional symptom domains or biomarkers as new targets of treatment.  To accept the assumption that everyone currently with the “bipolar” label indeed has the same disorder (or any disorder at all) makes a mockery of the diagnostic process and destroys the meaning of the word.  Some would argue this has already happened.

But then again, maybe I’m the only one who sees it this way.  No one at Kupfer’s talk seemed to demonstrate any bewilderment or concern that we might be heading towards a new era of disease management without really knowing what “disease” we’re treating in the first place.  If this is the case, I sure would appreciate it if someone would let me in on the joke.


Depression Tests: When “Basic” Research Becomes “Applied”

April 22, 2012

Anyone with an understanding of the scientific process can appreciate the difference between “basic” and “applied” research.  Basic research, often considered “pure” science, is the study of science for its own sake, motivated by curiosity and a desire to understand.  General questions and theories are tested, often without any obvious practical application.  “Applied” research, on the other hand, is usually done for a specific reason: to solve a real-world problem or to develop a new product, such as a better mousetrap, a faster computer, or a more effective way to diagnose illness.

In psychiatric research, the distinction between “basic” and “applied” research is often blurred.  Two recent articles (and the accompanying media attention they’ve received) provide very good examples of this phenomenon.  Both stories involve blood tests to diagnose depression.  Both are intriguing, novel studies.  Both may revolutionize our understanding of mental illness.  But responses to both have also been blown way out of proportion, seeking to “apply” what is clearly only at the “basic” stage.

The first study, by George Papakostas and his colleagues at Massachusetts General Hospital and Ridge Diagnostics, was published last December in the journal Molecular Psychiatry.  They developed a technique to measure nine proteins in the blood, plug those values into a fancy (although proprietary—i.e., unknown) algorithm, and calculate an “MDDScore” which, supposedly, diagnoses depression.  In their paper, they compared 70 depressed patients with 43 non-depressed people and showed that their assay identifies depression with a specificity of 81% and a sensitivity of 91%.
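(A quick aside on what those numbers do and don’t tell us.  Sensitivity and specificity from a small, roughly balanced research sample say little about how a test would perform in an ordinary clinic, where far fewer people are actually depressed.  The little calculation below is my own back-of-the-envelope sketch, not anything from the paper or from Ridge Diagnostics: it simply applies Bayes’ rule to the reported 91% sensitivity and 81% specificity, first at the study’s own prevalence and then at an assumed, purely hypothetical 10% outpatient prevalence.)

# Illustrative sketch only -- not from the Papakostas paper.  The 91% sensitivity
# and 81% specificity come from the text above; the 10% prevalence is a
# hypothetical assumption chosen for illustration.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: probability of true depression given a positive MDDScore."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# In the study sample, 70 of 113 subjects were depressed (prevalence ~62%).
print(round(positive_predictive_value(0.91, 0.81, 70 / 113), 2))   # ~0.89

# In a hypothetical outpatient population with 10% prevalence, the same test
# looks far less impressive.
print(round(positive_predictive_value(0.91, 0.81, 0.10), 2))       # ~0.35

(Under those admittedly hypothetical assumptions, only about a third of positive results would come from people who are actually depressed, which is one more reason to treat this as “basic” rather than “applied” science for now.)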

The other study, published two weeks ago in Translational Psychiatry by Eve Redei and her colleagues at Northwestern University, purports to diagnose depression in adolescents.  They didn’t measure proteins in patients’ blood, but rather levels of RNA.  (As a quick aside, RNA is the “messenger” molecule inside each cell that tells the cell which proteins to make.)  They studied a smaller number of patients—only 14 depressed teenagers, compared with 14 non-depressed controls—and identified 11 RNA molecules which were expressed differently between the two groups.  These were selected from a much larger number of RNA transcripts on the basis of an animal model of depression: specifically, a rat strain that was bred to show “depressive-like” behavior.

If we look at each of these studies as “basic” science, they offer some potentially tantalizing insights into what might be happening in the bodies of depressed people (or rats).  Even though some of us argue that no two “depressed” people are alike—and we should look instead at person-centered factors that might explain how they are unique—these studies nevertheless might have something to say about the common underlying biology of depression—if such a thing exists.  At the very least, further investigation might explain why proteins that have no logical connection with depression (such as apolipoprotein CIII or myeloperoxidase) or RNA transcripts (for genes like toll-like-receptor-1 or S-phase-cyclin-A-associated protein) might help us, someday, to develop more effective treatments than the often ineffective SSRIs that are the current standard of care.

Surprisingly, though, this is not how these articles have been greeted.  Take the Redei article, for instance.  Since its publication, there have been dozens of media mentions, with such headlines as “Depression Blood Test for Teens May Lead To Less Stigma” and “Depression Researchers May Have Developed First Blood Test For Teens.”  To the everyday reader, it seems as if we’ve gone straight from the bench to the bedside.  Granted, each story mentions that the test is not quite “ready for prime time,” but headlines draw readers’ attention.  Even the APA’s official Twitter feed mentioned it (“Blood test for early-onset #depression promising,” along with the tags “#childrenshealth” and “#fightstigma”), giving it a certain degree of legitimacy among doctors and patients alike.

(I should point out that one of Redei’s co-authors, Bill Gardner, emphasized—correctly—on his own blog that their study was NOT to be seen as a test for depression, and that it required refinement and replication before it could be used clinically.  He also acknowledged that their study population—adolescents—are often targets for unnecessary pharmacological intervention, demanding even further caution in interpreting their results.)

As for the Papakostas article, there was a similar flurry of articles about it when preliminary results were presented last year.  Like Redei’s research, it’s an interesting study that could change the way we diagnose depression.  However, unlike Redei’s study, it was funded by a private, self-proclaimed “neurodiagnostics” company.  (That company, Ridge Diagnostics, has not revealed the algorithm by which they calculate their “MDDScore,” essentially preventing any independent group from trying to replicate their findings.)

Incidentally, the Chairman of the Board of Ridge Diagnostics is David Hale, who also founded—and is Chairman of—Somaxon Pharmaceuticals, a company I wrote about last year when it tried to bring low-dose doxepin to the market as a sleep aid, and then used its patent muscle to issue cease-and-desist letters to people who suggested using the ultra-cheap generic version instead of Somaxon’s name-brand drug.

Ridge Diagnostics has apparently decided not to wait for replication of its findings, and instead is taking its MDDScore to the masses, complete with a Twitter feed, a Facebook Page, and a series of videos selling the MDDScore (priced at a low, low $745!), aimed directly at patients.  At this rate, it’s only a matter of time before the MDDScore is featured on the “Dr Oz Show” or “The Doctors.”  Take a look at this professionally produced video, for instance, posted last month on Youtube:


(Interesting—the host hardly even mentions the word “depression.”  A focus group must have told them that it detracted from his sales pitch.)

I think it’s great that scientists are investigating the basic biology of depression.  I also have no problem when private companies try to get in on the act.  However, when research that is obviously at the “basic” stage (and, yes, not ready for prime time) becomes the focus of a viral video marketing campaign or a major story on the Huffington Post, one must wonder why we’ve been so quick to cross the line from “basic” research into the “applied” uses of those preliminary findings.  Okay, okay, I know the answer is money.  But who has the authority—and the voice—to say, “not so fast” and preserve some integrity in the field of psychiatric research?  Where’s the money in that?


The Well Person

March 21, 2012

What does it mean to be “normal”?  We’re all unique, aren’t we?  We differ from each other in so many ways.  So what does it mean to say someone is “normal,” while someone else has a “disorder”?

This is, of course, the age-old question of psychiatric diagnosis.  The authors of the DSM-5, in fact, are grappling with this very question right now.  Take grieving, for example.  As I and others have written, grieving is “normal,” although its duration and intensity vary from person to person.  At some point, a line may be crossed, beyond which a person’s grief is no longer adaptive but dangerous.  Where that line falls, however, cannot be determined by a book or by a committee.

Psychiatrists ought to know who’s healthy and who’s not.  After all, we call ourselves experts in “mental health,” don’t we?  Surprisingly, I don’t think we’re very good at this.  We are acutely sensitive to disorder but have trouble identifying wellness.  We can recognize patients’ difficulties in dealing with other people but are hard-pressed to describe healthy interpersonal skills.  We admit that someone might be able to live with auditory hallucinations but we still feel an urge to increase the antipsychotic dose when a patient says she still hears “those voices.”   We are quick to point out how a patient’s alcohol or marijuana use might be a problem, but we can’t describe how he might use these substances in moderation.  I could go on and on.

Part of the reason for this might lie in how we’re trained.  In medical school we learn basic psychopathology and drug mechanisms (and, by the way, there are no drugs whose mechanism “maintains normality”—they all fix something that’s broken).  We learn how to do a mental status exam, complete with full descriptions of the behavior of manic, psychotic, depressed, and anxious people—but not “normals.”  Then, in our postgraduate training, our early years are spent with the most ill patients—those in hospitals, locked facilities, or emergency settings.  It’s not until much later in one’s training that a psychiatrist gets to see relatively more functional individuals in an office or clinic.  But by that time, we’re already tuned in to deficits and symptoms, and not to personal strengths, abilities, or resilience-promoting factors.

In a recent discussion with a colleague about how psychiatrists might best serve a large population of patients (e.g., in a “medical home” model), I suggested that perhaps each psychiatrist could be responsible for a defined panel of people (say, 300 or 400 individuals).  Our job would be to see each of these 300-400 people at least once a year, regardless of whether they have a psychiatric diagnosis.  Those who have emotional or psychiatric complaints or who have a clear mental illness could be seen more frequently; the others would get their annual checkup and their clean bill of (mental) health.  It would be sort of like your annual medical visit or a “well-baby visit” in pediatrics:  a way for a person to be seen by a doctor, receive preventive care, and undergo screening to make sure no significant problems go unaddressed.

Alas, this would never fly in psychiatry.  Why not?  Because we’re too accustomed to seeing illness.  We’re too quick to interpret “sadness” as “depression”; to interpret “anxiety” or “nerves” as a cue for a benzodiazepine prescription; or to interpret “inattention” or poor work/school performance as ADHD.  I’ve even experienced this myself.  It is difficult to tell a person “you’re really doing just fine; there’s no need for you to see me, but if you want to come back, just call.”  For one thing, in many settings, I wouldn’t get paid for the visit if I said this.  But another concern, of course, is the fear of missing something:  Maybe this person really is bipolar [or whatever] and if I don’t keep seeing him, there will be a bad outcome and I’ll be responsible.

There’s also the fact that psychiatry is not a primary care specialty:  insurance plans don’t pay for an annual “well-person visit” with a psychiatrist.  Patients who come to a psychiatrist’s office are usually there for a reason.  Maybe the patient deliberately sought out the psychiatrist to ask for help.  Maybe their primary care provider saw something wrong and wanted the psychiatrist’s input.  In the former case, telling the person he or she is “okay” risks losing their trust (“but I just know something’s wrong, doc!“).  In the latter, it risks losing a referral source or professional relationship.

So how do we fix this?  I think we psychiatrists need to spend more time learning what “normal” really is.  There are no classes or textbooks on “Normal Adults.”  For starters, we can remind ourselves that the “normal” people around whom we’ve been living our lives may in fact have features that we might otherwise see as a disorder.  Learning to accept these quirks, foibles, and idiosyncrasies may help us to accept them in our patients.

In terms of using the DSM, we need to become more willing to use the V71.09 code, which means, essentially, “No diagnosis or condition.”  Many psychiatrists don’t even know this code exists.  Instead, we give “NOS” diagnoses (“not otherwise specified”) or “rule-outs,” which eventually become de facto diagnoses because we never actually take the time to rule them out!  A V71.09 should be seen as a perfectly valid (and reimbursable) diagnosis—a statement that a person has, in fact, a clean bill of mental health.  Now we just need to figure out what that means.

It is said that when Pope Julius II asked Michelangelo how he sculpted David out of a marble slab, he replied: “I just removed the parts that weren’t David.”  In psychiatry, we spend too much time thinking about what’s not David and relentlessly chipping away.  We spend too little time thinking about the healthy figure that may already be standing right in front of our eyes.


How To Think Like A Psychiatrist

March 4, 2012

The cornerstone of any medical intervention is a sound diagnosis.  Accurate diagnosis guides the proper treatment, while an incorrect diagnosis might subject a patient to unnecessary procedures or excessive pharmacotherapy, and it may further obscure the patient’s true underlying condition.  This is true for all medical specialties—including psychiatry.  It behooves us, then, to examine the practice of clinical decision-making, how we do it, and where we might go wrong, particularly in the area of psychiatric diagnosis.

According to Pat Croskerry, a physician at Dalhousie University in Canada, the foundation of clinical cognition is the “dual process model,” first described by the Greek philosophers (and reviewed here).  This model proposes that people solve problems using one of two “processes”:  Type 1 processes involve intuition and are largely automatic, fast, and unconscious (e.g., recognizing a friend’s face).  Type 2 processes are more deliberate, analytical, and systematic (e.g., planning the best route for an upcoming trip).  Doctors use both types when making a diagnosis, but the relative emphasis varies with the setting.  In the ED, quick action based on pattern recognition (i.e., a Type 1 process) is crucial.  Sometimes, however, it may be wrong, particularly if other conditions aren’t evaluated and ruled out (i.e., a Type 2 process).  For instance, a patient with flank pain, nausea, vomiting, and hematuria demonstrates the “pattern” of a kidney stone (common), but may in fact have a dissecting aortic aneurysm (uncommon).

This model is valuable for understanding how we arrive at psychiatric diagnoses (Croskerry diagrams it in a 2009 article).  When evaluating a patient for the first time, a psychiatrist often looks at “the big picture”:  Does this person appear to have a mood disorder, psychosis, anxiety, a personality disorder?  Have I seen this type of patient before?  What’s my general impression of this person?  In other words, the assessment relies heavily on Type 1 processes, using heuristics and “Gestalt” impressions.  But Type 2 processes are also important.  We must inquire about specific symptoms, treatment history, and social background; we might order tests or review old records, which may change our initial perception.

Sound clinical decision-making, therefore, requires both processes.  Unfortunately, both are highly prone to error.  In fact, Croskerry identifies at least 40 cognitive biases, which occur when these processes are not adapted to the specific task at hand.  For instance, we tend to use Type 1 processes more frequently than we should.  Many psychiatrists, particularly those seeing a large volume of patients for short periods of time, often see patterns earlier than is warranted, and rush to diagnoses without fully considering all possibilities.  In other words, they fall victim to what psychologist Keith Stanovich calls “dysrationalia,” or the inability to think or act rationally despite adequate intelligence.  In the dual process model, dysrationalia can “override” Type 2 processes (“I don’t need to do a complete social history, I just know this patient has major depression”), leading to diagnostic failure.

Croskerry calls this the “cognitive miser” function: we rely on processes that consume fewer cognitive resources because we’re cognitively lazy.  The alternative would be to switch to a Type 2 process—a more detailed evaluation, using deductive, analytic reasoning.  But this takes great effort and time.  Moreover, when a psychiatrist switches to a “Type 2” mode, he or she asks questions that are nonspecific in nature (largely owing to the unreliability of some DSM-IV diagnoses), or questions that merely confirm the initial “Type 1” hunch.  In other words, we end up finding what we expect to find.

The contrast between Type 1 and Type 2 processes is most apparent when we observe people operating at either end of the spectrum.  Some psychiatrists see patterns in every patient (e.g., “I could tell he was bipolar as soon as he walked into my office”—a classic error called the representativeness heuristic), even though they rarely ask about specific symptoms, let alone test alternate hypotheses.  On the other hand, medical students and young clinicians often work exclusively in Type 2; they ask very thorough questions, covering every conceivable alternative, and every symptom in the DSM-IV (even irrelevant ones).  As a result, they get frustrated when they can’t determine a precise diagnosis or, alternately, they come up with a diagnosis that might “fit” the data but completely miss the mark regarding the underlying essence of the patient’s suffering.

Croskerry writes that the most accurate clinical decision-making occurs when a physician can switch between Type 1 and Type 2 processes  as needed, a process called metacognition.  Metacognition requires a certain degree of humility, a willingness to re-examine one’s decisions in light of new information.  It also demands that the doctor be able to recognize when he or she is not performing well and to be willing to self-monitor and self-criticize.  To do this, Croskerry recommends that we develop “cognitive forcing strategies,” deliberate interventions that force us to think more consciously and deliberately about the problem at hand.  This may help us to be more accurate in our assessments:  in other words, to see both the trees for the forest, and the forest for the trees.

This could be a hard sell.  Doctors can be a stubborn bunch.  Clinicians who insist on practicing Type 2,  “checklist”-style medicine (e.g., in a clinical trial) may be unwilling to consider the larger context in which specific symptoms arise, or they may not have sufficient understanding of that context to see how it might impact a patient.  On the other hand, clinicians who rush to judgment based on first impressions (a Type 1 process) may be annoyed by any suggestion that they should slow down and be more thorough or methodical.  Not to mention the fact that being more thorough takes more time. And as we all know, time is money.

I believe that all psychiatrists should heed the dual-process model and ask how it influences their practice.  Are you too quick to label and diagnose, owing to your “dysrational” (Type 1) impulses?  On the other hand, if you use established diagnostic criteria (Type 2), are you measuring anything useful?  Should you use a cognitive forcing strategy to avoid over-reliance on one type of decision-making?  If you continue to rely on pattern recognition (Type 1 process), then what other data (Type 2) should you collect?  Treatment history?  A questionnaire?  Biomarkers?  A comprehensive assessment of social context?  And ultimately, how do you use this information to diagnose a “disorder” in a given individual?

These are just a few questions that the dual process model raises.  There are no easy answers, but anything that challenges us to be better physicians and avoid clinical errors, in my opinion, is well worth our time, attention, and thought.

