My Own Bipolar Kerfuffle

August 5, 2012

I have a confession to make.  I don’t know what “bipolar disorder” is.  And as a psychiatrist, I’ll admit that’s sort of embarrassing.

Okay, maybe I’m exaggerating when I say that I don’t know what bipolar disorder is.  Actually, if you asked me to define it, I’d give you an answer that would probably sound pretty accurate.  I’ve read the DSM-IV, had years of training, took my Boards, treated people in the midst of manic episodes, and so on.  The problem for me is not the “idea” of bipolar disorder.  It’s what we mean when we use that term.

I recognized this problem only recently—in fact, just last month, as I was putting together the July/August issue of the Carlat Psychiatry Report (now available to subscribers here).  This month’s issue is devoted to the topic of “Bipolar Disorder,” and two contributors, faculty members at prestigious psychiatry departments, made contradictory—yet perfectly valid—observations.  One argued that it’s overdiagnosed; the other advocated for broadening our definition of bipolar disorder—in particular, “bipolar depression.”  The discrepancy was also noted in several comments from our Editorial Board.

Disagreements in science and medicine aren’t necessarily a bad thing.  In fact, when two authorities interpret a phenomenon differently, it creates the opportunity for further experimentation and investigation.  In time, the “truth” can be uncovered.  But in this case, as with much in psychiatry, “truth” seems to depend on whom you ask.

Consider this question.  What exactly is “bipolar depression”?  It seems quite simple:  it’s when a person with bipolar disorder experiences a depressive episode.  But what about when a person comes in with depression but has not had a manic episode or been diagnosed with bipolar disorder?  How about when a person with depression becomes “manic” after taking an antidepressant?  Could those be bipolar depression, too?  I suppose so.  But who says so?  One set of criteria was introduced by Jules Angst, a researcher in Switzerland, and was featured prominently in the BRIDGE study, published in 2011.  His criteria for bipolarity include agitation, irritability, hypomanic symptoms for as short as one day, and a family history of mania.  Other experts argue for a “spectrum” of bipolar illness.

(For a critique of the BRIDGE study, see this letter to the editor of the Archives of General Psychiatry, and this detailed—and entertaining—account in David Allen’s blog.)

The end result is rather shocking, when you think about it:  here we have this phenomenon called “bipolar disorder,” which may affect 4% of all Americans, and different experts define it differently.  With the right tweaking, nearly anyone who comes to the attention of a psychiatrist could be considered to have some features suggestive of someone’s definition of bipolar disorder.  (Think I’m kidding?  Check out the questionnaire in the appendix of Angst’s 2003 article.)

Such differences of opinion lead to some absurd situations, particularly when someone is asked to speak authoritatively about this disorder.  At this year’s APA Annual Meeting, for example, David Kupfer (DSM-5 Task Force Chair) gave a keynote address on “Rethinking Bipolar Disorder,” which included recommendations for screening adolescents and the use of preventive measures (including drugs) to forestall early stages of the illness.  Why was it absurd?  Because as Kupfer spoke confidently about this disease entity, I looked around the packed auditorium and realized that each person may very well have had his or her own definition of bipolar disorder.  But did anyone say anything?  No, we all nodded in agreement, deferring to the expert.

This problem exists throughout psychiatry.  The criteria for each diagnosis in the DSM-IV can easily be applied in a very general way.  This is due partly to fatigue, partly to the fact that insurance companies require that we give a diagnosis as early as the first visit, partly because we’re so reluctant (even when it’s appropriate) to tell patients that they’re actually healthy and may not even have a diagnosis, and partly because different factions of psychiatrists use their experience to create their own criteria.  It’s no wonder that as criteria are loosened, diagnoses are misapplied, and the ranks of the “mentally ill” continue to grow.

As editor of a newsletter, I’m faced with another challenge I didn’t quite expect.  I can’t come out and say that bipolar disorder doesn’t exist (which wouldn’t be true anyway—I have actually seen cases of “classic,” textbook-style mania which do respond to medications as our guidelines would predict).  But I also can’t say that several definitions of “bipolar” exist.  That may be perceived as being too equivocal for a respectable publication and, as a result, some readers may have difficulty taking me seriously.

At the risk of sounding grandiose, I may be experiencing what our field’s leadership must experience on a regular basis.  Academic psychiatrists make their living by conducting research, publishing their findings, and, in most cases, specializing in a given clinical area.  It’s in their best interest to assume that the subjects of their research actually exist.  Furthermore, when experts see patients, they do so in a specialty clinic or clinical trial, which reinforces their definitions of disease.

This can become a problem to those of us seeing the complicated “real world” patients on the front lines, especially when we look to the experts for answers to such questions as whether we should use antipsychotics to treat acute mania, or whether antidepressants are helpful for bipolar depression.  If their interpretations of the diagnoses simply don’t pertain to the people in our offices, all bets are off.  Yet this, I fear, is what happens in psychiatry every day.

In the end, I can’t say whether my definition of bipolar disorder is right or not, because even the experts can’t seem to agree on what it is.  As for the newsletter, we decided to publish both articles, in the interest of maintaining a dialogue.  Readers will simply have to use their own definition of “bipolar disorder” and “bipolar depression” (or eschew them altogether)—hopefully in ways that help their patients.  But it has been an eye-opening experience in the futility (and humility) of trying to speak with authority about something we’re still trying desperately to understand.


What Adderall Can Teach Us About Medical Marijuana

June 19, 2012

An article in the New York Times last week described the increasing use of stimulant medications such as Adderall and Ritalin among high-school students.  Titled “The Risky Rise of the Good-Grade Pill,” the article discussed how 15 to 40 percent of students, competing for straight-As and spots in elite colleges, use stimulants for an extra “edge,” regardless of whether they actually have ADHD.  In this blog, I’ve written about ADHD.  It’s a real condition—and medications can help tremendously—but the diagnostic criteria are quite vague.  As with much in psychiatry, anyone “saying the right thing” can obtain one of these drugs relatively easily, whether they need it or not.

Sure enough, the number of prescriptions for these drugs has risen 26% since 2007.  Does this mean that ADHD is now 26% more prevalent?  No.  In the Times article, some students admitted they “lie to [their] psychiatrists” in order to “get something good.”  In fact, some students “laughed at the ease with which they got some doctors to write prescriptions for ADHD.”  With no objective test (some computerized tests exist, but they are neither widely used nor well validated, and brain scans are similarly unproven) and with diagnostic criteria readily accessible on the internet, anyone who wants a stimulant can basically get one.  And while psychiatric diagnosis is often an imperfect science, in many settings the methodology by which we assess and diagnose ADHD is particularly crude.

Many of my colleagues will disagree with (or hate) me for saying so, but in some sense, the prescription of stimulants has become just like any other type of cosmetic medicine.  Plastic surgeons and dermatologists, for instance, are trained to perform medically necessary procedures, but they often find that “cosmetic” procedures like facelifts and Botox injections are more lucrative.  Similarly, psychiatrists can build successful practices catering to ultra-competitive teens (and their parents) by giving out stimulants.  Who cares if there’s no real disease?  Psychiatry is all about enhancing patients’ lives, isn’t it?  As another blogger wrote last week, some respectable physicians have even argued that “anyone and everyone should have access to drugs that improve performance.”

When I think about “performance enhancement” in this manner, I can’t help but think about the controversy over medical marijuana.  This is another topic I’ve written about, mainly to question the “medical” label on something that is neither routinely accepted nor endorsed by the medical profession.  Proponents of medical cannabis, I wrote, have co-opted the “medical” label in order for patients to obtain an abusable psychoactive substance legally, under the guise of receiving “treatment.”

How is this different from the prescription of psychostimulants for ADHD?  The short answer is, it’s not.  If my fellow psychiatrists and I prescribe psychostimulants (which are abusable psychoactive substances in their own right, as described in the pages of the NYT) on the basis of simple patient complaints—and continue to do so simply because a patient reports a subjective benefit—then this isn’t very different from a medical marijuana provider writing a prescription (or “recommendation”) for medical cannabis.  In both cases, the conditions being treated are ill-defined (yes, in the case of ADHD, it’s detailed in the DSM, which gives it a certain validity, but that’s not saying much).  In both cases, the conditions affect patients’ quality of life but are rarely, if ever, life-threatening.  In both cases, psychoactive drugs are prescribed which could be abused but which most patients actually use quite responsibly.  Last but not least, in both cases, patients generally do well; they report satisfaction with treatment and often come back for more.

In fact, taken one step further, this analogy may turn out to be an argument in favor of medical marijuana.  As proponents of cannabis are all too eager to point out, marijuana is a natural substance, humans have used it for thousands of years, and it’s arguably safer than other abusable (but legal) substances like nicotine and alcohol.  Psychostimulants, on the other hand, are synthetic chemicals (not without adverse effects) and have been described as “gateway drugs” to more or less the same degree as marijuana.  Why one is legal and one is not simply appears to be due to the psychiatric profession’s “seal of approval” on one but not the other.

If the psychiatric profession is gradually moving away from the assessment, diagnosis, and treatment of severe mental illness and, instead, treating “lifestyle” problems with drugs that could easily be abused, then I really don’t have a good argument for denying cannabis to patients who insist it helps their anxiety, insomnia, depression, or chronic pain.

Perhaps we should ask physicians to take a more rigorous approach to ADHD diagnosis, demanding interviews with parents and teachers, extensive neuropsychiatric testing, and (perhaps) neuroimaging before offering a script.  But in a world in which doctors’ reimbursements are dwindling, and the time devoted to patient care is vanishing—not to mention a patient culture which demands a quick fix for the problems associated with the stresses of modern adolescence—it doesn’t surprise me one bit that some doctors will cut corners and prescribe without a thorough workup, in much the same way that marijuana is provided in states where it’s legal.  If the loudest protests against such a practice don’t come from our leadership—but instead from the pages of the New York Times—we only have ourselves to blame when things really get out of hand.


“Patient-Centered” Care and the Science of Psychiatry

May 30, 2012

When asked what makes for good patient care in medicine, a typical answer is that it should be “patient-centered.”  Sure, “evidence-based medicine” and expert clinical guidelines are helpful, but they only serve as the scientific foundation upon which we base our individualized treatment decisions.  What’s more important is how a disorder manifests in the patient and the treatments he or she is most likely to respond to (based on genetics, family history, biomarkers, etc).  In psychiatry, there’s the additional need to target treatment to the patient’s unique situation and context—always founded upon our scientific understanding of mental illness.

It’s almost a cliché to say that “no two people with depression [or bipolar or schizophrenia or whatever] are the same.”  But when the “same” disorder manifests differently in different people, isn’t it also possible that the disorders themselves are different?  Not only does such a question have implications for how we treat each individual, it also impacts how we interpret the “evidence,” how we use treatment guidelines, and what our diagnoses mean in the first place.

For starters, every patient wants something different.  What he or she gets is usually what the clinician wants, which, in turn, is determined by the diagnosis and established treatment guidelines:  lifelong medication treatment, referral for therapy, forced inpatient hospitalization, etc.  Obviously, our ultimate goal is to eliminate suffering by relieving one’s symptoms, but shouldn’t the route we take to get there reflect the patient’s desires?  When a patient gets what he or she wants, shouldn’t this count as good patient care, regardless of what the guidelines say?

For instance, some patients just want a quick fix (e.g., a pill, ideally without frequent office visits), because they have only a limited amount of money (or time) they’re willing to use for treatment.  Some patients need to complete “treatment” to satisfy a judge, an employer, or a family member.  Some patients visit the office simply to get a disability form filled out or satisfy some other social-service need.  Some simply want a place to vent, or to hear from a trusted professional that they’re “okay.”  Still others seek intensive, long-term therapy even when it’s not medically justified.  Patients request all sorts of things, which often differ from what the guidelines say they need.

Sometimes these requests are entirely reasonable, cost-effective, and practical.  But we psychiatrists often feel a need to practice evidence- (i.e., science-) based medicine; thus, we take treatment guidelines (and diagnoses) and try to make them apply to our patients, even when we know they want—or need—something else entirely, or won’t be able to follow through on our recommendations.  We prescribe medications even though we know the patient won’t be able to obtain the necessary lab monitoring; we refer a patient for intensive therapy even though we know their insurance will only cover a handful of visits; we admit a suicidal patient to a locked inpatient ward even though we know the unpredictability of that environment may cause further distress; or we advise a child with ADHD and his family to undergo long-term behavioral therapy in conjunction with stimulants, even when we know this resource may be unavailable.

Guidelines and diagnoses are written by committee, and, as such, rarely apply to the specifics of any individual patient.  Thus, a good clinician uses a clinical guideline simply as a tool—a reference point—to provide a foundation for an individual’s care, just as a master chef knows a basic recipe but alters it according to the tastes he wishes to bring out or which ingredients are in season.  A good clinician works outside the available guidelines for many practical reasons, not the least of which is the patient’s own belief system—what he or she thinks is wrong and how to fix it.  The same could be said for diagnoses themselves.  In truth, what’s written in the DSM is a model—a “case study,” if you will—by which real-world patients are observed and compared.  No patient ever fits a single diagnosis to a “T.”

Unfortunately, under the pressures of limited time, scarce resources, and the threat of legal action for a poor outcome, clinicians are more inclined to see patients for what they are than for who they are, and therefore adhere to guidelines even more closely than they’d like.  This corrupts treatment in many ways.  Diagnoses are given out which don’t fit (e.g., “parity” diagnoses must be given in order to maintain reimbursement).  Treatment recommendations are made which are far too costly or complex for some patients to follow.  Services like disability benefits are maintained far beyond the period they’re needed (because diagnoses “stick”).  And tremendous resources are devoted to the ongoing treatment of patients who simply want (and would benefit from) only sporadic check-ins, or who, conversely, can afford ongoing care themselves.

The entire situation calls into question the value of treatment guidelines, as well as the validity of psychiatric diagnoses.  Our patients’ unique characteristics, needs, and preferences—i.e., what helps patients to become “well”—vary far more widely than the symptoms upon which official treatment guidelines were developed.  Similarly, what motivates a person to seek treatment differs so widely from one person to the next that it implies vastly different etiologies.

To provide optimal care, treatment must indeed be “patient-centered.”  But truly patient-centered care must frequently work outside the DSM and established treatment guidelines, and sometimes ignore diagnoses altogether.  What does this say about the validity, relevance, and applicability of the diagnoses and guidelines at our disposal?  And what does this say about psychiatry as a science?


Is The Joke On Me?

May 12, 2012

I recently returned from the American Psychiatric Association (APA) Annual Meeting in Philadelphia.  I had the pleasure of participating on a panel discussing “psychiatrists and the new media” with the bloggers/authors from Shrink Rap, and Bob Hsiung of dr-bob.org.  The panel discussion was a success.  Some other parts of the conference, however, left me with a sense of doubt and unease.  I enjoy being a psychiatrist, but whenever I attend these psychiatric meetings, I sometimes find myself questioning the nature of what I do.  At times I wonder whether everyone else knows something I don’t.  Sometimes I even ask myself:  is the joke on me?

Here’s an example of what I mean.  On Sunday, David Kupfer of the University of Pittsburgh (and task force chair of the forthcoming DSM-5) gave a talk on “Rethinking Bipolar Disorder.”  The room—a cavernous hall at the Pennsylvania Convention Center—was packed.  Every chair was filled, while scores of attendees stood in the back or sat on the floor, listening with rapt attention.  The talk itself was a discussion of “where we need to go” in the management of bipolar disorder in the future.  Dr Kupfer described a new view of bipolar disorder as a chronic, multifactorial disorder involving not just mood lability and extremes of behavior, but also endocrine, inflammatory, neurophysiologic, and metabolic processes that deserve our attention as well.  He emphasized the fact that in between mood episodes, and even before they develop, there are a range of “dysfunctional symptom domains”—involving emotions, cognition, sleep, physical symptoms, and others—that we psychiatrists should be aware of.  He also introduced a potential way to “stage” development of bipolar disorder (similar to the way doctors stage tumors), suggesting that people at early stages might benefit from prophylactic psychiatric intervention.

Basically, the take-home message (for me, at least) was that in the future, psychiatrists will be responsible for treating other manifestations of bipolar disorder than those we currently attend to.  We will also need to look for subthreshold symptoms in people who might have a “prodrome” of bipolar disorder.

A sympathetic observer might say that Kupfer is simply asking us to practice good medicine, caring for the whole person rather than just his or her symptoms, and trying to prevent the development or recurrence of bipolar illness.  On the other hand, a cynic might look at these pronouncements as a sort of disease-mongering, encouraging us to uncover signs of “disease” where they might not exist.  But both of these conclusions overlook a much more fundamental question that, to me, remains unanswered.  What exactly is bipolar disorder anyway?

I realize that’s an extraordinarily embarrassing question for a psychiatrist to ask.  And in all fairness, I do know what bipolar disorder is (or, at least, what the textbooks and the DSM-IV say it is).  I have seen examples of manic episodes in my own practice, and in my personal life, and have seen how they respond to medications, psychotherapy, or the passage of time.  But those are the minority.  Over the years (although my career is still relatively young), I have also seen dozens, if not hundreds, of people given the diagnosis of “bipolar disorder” without a clear history of a manic episode—the defining feature of bipolar disorder, according to the DSM.

As I looked around the room at everyone concentrating on Dr Kupfer’s every word, I wondered to myself, am I the only one with this dilemma?  Are my patients “special” or “unique”?  Maybe I’m a bad psychiatrist; maybe I don’t ask the right questions.  Or maybe everyone else is playing a joke on me.  That’s unlikely; others do see the same sorts of patients I do (I know this for a fact, from my own discussions with other psychiatrists).  But nobody seems to have the same crisis of confidence that I do.  It makes me wonder whether we have reached a point in psychiatry when psychiatrists can listen to a talk like this one (or see patients each day) and accept diagnostic categories without paying any attention to the fact that our nosology says virtually nothing at all about the unique nature of each person’s suffering.  It seems that we accept the words of our authority figures without asking the fundamental question of whether they have any basis in reality.  Or maybe I’m just missing out on the joke.

As far as I’m concerned, no two “bipolar” patients are alike, and no two “bipolar” patients have the same treatment goals.  The same can be said for almost everything else we treat, from “depression” to “borderline personality disorder” to addiction.  In my opinion, lumping all those people together and assuming they’re all alike for the purposes of a talk (or, even worse, for a clinical trial) makes it difficult—and quite foolish—to draw any conclusions about that group of individuals.

What we need to do is to figure out whether what we call “bipolar disorder” is a true disorder in the first place, rather than accept it uncritically and start looking for yet additional symptom domains or biomarkers as new targets of treatment.  To accept the assumption that everyone currently with the “bipolar” label indeed has the same disorder (or any disorder at all) makes a mockery of the diagnostic process and destroys the meaning of the word.  Some would argue this has already happened.

But then again, maybe I’m the only one who sees it this way.  No one at Kupfer’s talk seemed to demonstrate any bewilderment or concern that we might be heading towards a new era of disease management without really knowing what “disease” we’re treating in the first place.  If this is the case, I sure would appreciate it if someone would let me in on the joke.


Depression Tests: When “Basic” Research Becomes “Applied”

April 22, 2012

Anyone with an understanding of the scientific process can appreciate the difference between “basic” and “applied” research.  Basic research, often considered “pure” science, is the study of science for its own sake, motivated by curiosity and a desire to understand.  General questions and theories are tested, often without any obvious practical application.  “Applied” research, on the other hand, is usually done for a specific reason: to solve a real-world problem or to develop a new product, such as a better mousetrap, a faster computer, or a more effective way to diagnose illness.

In psychiatric research, the distinction between “basic” and “applied” research is often blurred.  Two recent articles (and the accompanying media attention they’ve received) provide very good examples of this phenomenon.  Both stories involve blood tests to diagnose depression.  Both are intriguing, novel studies.  Both may revolutionize our understanding of mental illness.  But responses to both have also been blown way out of proportion, seeking to “apply” what is clearly only at the “basic” stage.

The first study, by George Papakostas and his colleagues at Massachusetts General Hospital and Ridge Diagnostics, was published last December in the journal Molecular Psychiatry.  They developed a technique to measure nine proteins in the blood, plug those values into a fancy (although proprietary—i.e., unknown) algorithm, and calculate an “MDDScore” which, supposedly, diagnoses depression.  In their paper, they compared 70 depressed patients with 43 non-depressed people and showed that their assay identifies depression with a specificity of 81% and a sensitivity of 91%.
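A quick back-of-the-envelope calculation (my own illustration; the prevalence figures below are assumptions, not from the paper) shows why sensitivity and specificity alone can overstate a test’s usefulness: the positive predictive value, i.e., the chance that a positive MDDScore actually indicates depression, falls sharply as the base rate of depression in the tested population drops.

```python
# Positive predictive value (PPV) of a diagnostic test, via Bayes' rule.
# Sensitivity (0.91) and specificity (0.81) are the figures reported for
# the MDDScore; the prevalence values below are illustrative assumptions.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a positive test result reflects true disease."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

for prevalence in (0.05, 0.20, 0.50):
    print(f"prevalence {prevalence:.0%}: PPV = {ppv(0.91, 0.81, prevalence):.2f}")
```

Under these assumptions, a positive result in a sample where half the subjects are depressed (roughly the situation in the study, with 70 depressed patients and 43 controls) would be correct about 83% of the time; but in a general-population screen with 5% prevalence, only about one positive in five would be a true case.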

The other study, published two weeks ago in Translational Psychiatry by Eve Redei and her colleagues at Northwestern University, purports to diagnose depression in adolescents.  They didn’t measure proteins in patients’ blood, but rather levels of RNA.  (As a quick aside, RNA is the “messenger” molecule inside each cell that tells the cell which proteins to make.)  They studied a smaller number of patients—only 14 depressed teenagers, compared with 14 non-depressed controls—and identified 11 RNA molecules which were expressed differently between the two groups.  These were selected from a much larger number of RNA transcripts on the basis of an animal model of depression: specifically, a rat strain that was bred to show “depressive-like” behavior.

If we look at each of these studies as “basic” science, they offer some potentially tantalizing insights into what might be happening in the bodies of depressed people (or rats).  Even though some of us argue that no two “depressed” people are alike—and that we should look instead at person-centered factors that might explain how they are unique—these studies might nevertheless have something to say about the common underlying biology of depression, if such a thing exists.  At the very least, further investigation of proteins with no logical connection to depression (such as apolipoprotein CIII or myeloperoxidase) or of RNA transcripts (for genes like toll-like-receptor-1 or S-phase-cyclin-A-associated protein) might help us, someday, to develop more effective treatments than the often ineffective SSRIs that are the current standard of care.

Surprisingly, though, this is not how these articles have been greeted.  Take the Redei article, for instance.  Since its publication, there have been dozens of media mentions, with such headlines as “Depression Blood Test for Teens May Lead To Less Stigma” and “Depression Researchers May Have Developed First Blood Test For Teens.”  To the everyday reader, it seems as if we’ve gone straight from the bench to the bedside.  Granted, each story mentions that the test is not quite “ready for prime time,” but headlines draw readers’ attention.  Even the APA’s official Twitter feed mentioned it (“Blood test for early-onset #depression promising,” along with the tags “#childrenshealth” and “#fightstigma”), giving it a certain degree of legitimacy among doctors and patients alike.

(I should point out that one of Redei’s co-authors, Bill Gardner, emphasized—correctly—on his own blog that their study was NOT to be seen as a test for depression, and that it required refinement and replication before it could be used clinically.  He also acknowledged that adolescents, their study population, are often targets for unnecessary pharmacological intervention, demanding even further caution in interpreting the results.)

As for the Papakostas article, there was a similar flurry of articles about it when preliminary results were presented last year.  Like Redei’s research, it’s an interesting study that could change the way we diagnose depression.  However, unlike Redei’s study, it was funded by a private, self-proclaimed “neurodiagnostics” company.  (That company, Ridge Diagnostics, has not revealed the algorithm by which they calculate their “MDDScore,” essentially preventing any independent group from trying to replicate their findings.)

Incidentally, the Chairman of the Board of Ridge Diagnostics is David Hale, who also founded—and is Chairman of—Somaxon Pharmaceuticals, a company I wrote about last year when it tried to bring low-dose doxepin to the market as a sleep aid, and then used its patent muscle to issue cease-and-desist letters to people who suggested using the ultra-cheap generic version instead of Somaxon’s name-brand drug.

Ridge Diagnostics has apparently decided not to wait for replication of its findings, and instead is taking its MDDScore to the masses, complete with a Twitter feed, a Facebook page, and a series of videos selling the MDDScore (priced at a low, low $745!), aimed directly at patients.  At this rate, it’s only a matter of time before the MDDScore is featured on the “Dr Oz Show” or “The Doctors.”  Take a look at this professionally produced video, for instance, posted last month on YouTube:


(Interesting—the host hardly even mentions the word “depression.”  A focus group must have told them that it detracted from his sales pitch.)

I think it’s great that scientists are investigating the basic biology of depression.  I also have no problem when private companies try to get in on the act.  However, when research that is obviously at the “basic” stage (and, yes, not ready for prime time) becomes the focus of a viral video marketing campaign or a major story on the Huffington Post, one must wonder why we’ve been so quick to cross the line from “basic” research into the “applied” uses of those preliminary findings.  Okay, okay, I know the answer is money.  But who has the authority—and the voice—to say, “not so fast” and preserve some integrity in the field of psychiatric research?  Where’s the money in that?


The Well Person

March 21, 2012

What does it mean to be “normal”?  We’re all unique, aren’t we?  We differ from each other in so many ways.  So what does it mean to say someone is “normal,” while someone else has a “disorder”?

This is, of course, the age-old question of psychiatric diagnosis.  The authors of the DSM-5, in fact, are grappling with this very question right now.  Take grieving, for example.  As I and others have written, grieving is “normal,” although its duration and intensity vary from person to person.  At some point, a line may be crossed, beyond which a person’s grief is no longer adaptive but dangerous.  Where that line falls, however, cannot be determined by a book or by a committee.

Psychiatrists ought to know who’s healthy and who’s not.  After all, we call ourselves experts in “mental health,” don’t we?  Surprisingly, I don’t think we’re very good at this.  We are acutely sensitive to disorder but have trouble identifying wellness.  We can recognize patients’ difficulties in dealing with other people but are hard-pressed to describe healthy interpersonal skills.  We admit that someone might be able to live with auditory hallucinations but we still feel an urge to increase the antipsychotic dose when a patient says she still hears “those voices.”   We are quick to point out how a patient’s alcohol or marijuana use might be a problem, but we can’t describe how he might use these substances in moderation.  I could go on and on.

Part of the reason for this might lie in how we’re trained.  In medical school we learn basic psychopathology and drug mechanisms (and, by the way, there are no drugs whose mechanism “maintains normality”—they all fix something that’s broken).  We learn how to do a mental status exam, complete with full descriptions of the behavior of manic, psychotic, depressed, and anxious people—but not “normals.”  Then, in our postgraduate training, our early years are spent with the most ill patients—those in hospitals, locked facilities, or emergency settings.  It’s not until much later in one’s training that a psychiatrist gets to see relatively more functional individuals in an office or clinic.  But by that time, we’re already tuned in to deficits and symptoms, and not to personal strengths, abilities, or resilience-promoting factors.

In a recent discussion with a colleague about how psychiatrists might best serve a large population of patients (e.g., in a “medical home” model), I suggested that perhaps each psychiatrist could be responsible for a handful of people (say, 300 or 400 individuals).  Our job would be to see each of these 300-400 people at least once a year, regardless of whether they have a psychiatric diagnosis or not.  Those who have emotional or psychiatric complaints or who have a clear mental illness could be seen more frequently; the others would get their annual checkup and their clean bill of (mental) health.  It would be sort of like your annual medical visit or a “well-baby visit” in pediatrics:  a way for a person to be seen by a doctor, implement preventive measures, and undergo screening to make sure no significant problems go unaddressed.

Alas, this would never fly in psychiatry.  Why not?  Because we’re too accustomed to seeing illness.  We’re too quick to interpret “sadness” as “depression”; to interpret “anxiety” or “nerves” as a cue for a benzodiazepine prescription; or to interpret “inattention” or poor work/school performance as ADHD.  I’ve even experienced this myself.  It is difficult to tell a person “you’re really doing just fine; there’s no need for you to see me, but if you want to come back, just call.”  For one thing, in many settings, I wouldn’t get paid for the visit if I said this.  But another concern, of course, is the fear of missing something:  Maybe this person really is bipolar [or whatever] and if I don’t keep seeing him, there will be a bad outcome and I’ll be responsible.

There’s also the fact that psychiatry is not a primary care specialty:  insurance plans don’t pay for an annual “well-person visit” with a psychiatrist.  Patients who come to a psychiatrist’s office are usually there for a reason.  Maybe the patient deliberately sought out the psychiatrist to ask for help.  Maybe their primary care provider saw something wrong and wanted the psychiatrist’s input.  In the former case, telling the person he or she is “okay” risks losing their trust (“but I just know something’s wrong, doc!“).  In the latter, it risks losing a referral source or professional relationship.

So how do we fix this?  I think we psychiatrists need to spend more time learning what “normal” really is.  There are no classes or textbooks on “Normal Adults.”  For starters, we can remind ourselves that the “normal” people among whom we’ve lived our lives may in fact have features that we might otherwise see as a disorder.  Learning to accept these quirks, foibles, and idiosyncrasies may help us to accept them in our patients.

In terms of using the DSM, we need to become more willing to use the V71.09 code, which means, essentially, “No diagnosis or condition.”  Many psychiatrists don’t even know this code exists.  Instead, we give “NOS” diagnoses (“not otherwise specified”) or “rule-outs,” which eventually become de facto diagnoses because we never actually take the time to rule them out!  A V71.09 should be seen as a perfectly valid (and reimbursable) diagnosis—a statement that a person has, in fact, a clean bill of mental health.  Now we just need to figure out what that means.

It is said that when Pope Julius II asked Michelangelo how he sculpted David out of a marble slab, he replied: “I just removed the parts that weren’t David.”  In psychiatry, we spend too much time thinking about what’s not David and relentlessly chipping away.  We spend too little time thinking about the healthy figure that may already be standing right in front of our eyes.


How To Think Like A Psychiatrist

March 4, 2012

The cornerstone of any medical intervention is a sound diagnosis.  Accurate diagnosis guides the proper treatment, while an incorrect diagnosis might subject a patient to unnecessary procedures or excessive pharmacotherapy, and it may further obscure the patient’s true underlying condition.  This is true for all medical specialties—including psychiatry.  It behooves us, then, to examine the practice of clinical decision-making, how we do it, and where we might go wrong, particularly in the area of psychiatric diagnosis.

According to Pat Croskerry, a physician at Dalhousie University in Canada, the foundation of clinical cognition is the “dual process model,” first described by the Greek philosophers (and reviewed here).  This model proposes that people solve problems using one of two “processes”:  Type 1 processes involve intuition and are largely automatic, fast, and unconscious (e.g., recognizing a friend’s face).  Type 2 processes are more deliberate, analytical, and systematic (e.g., planning the best route for an upcoming trip).  Doctors use both types when making a diagnosis, but the relative emphasis varies with the setting.  In the ED, quick action based on pattern recognition (i.e., a Type 1 process) is crucial.  Sometimes, however, it may be wrong, particularly if other conditions aren’t evaluated and ruled out (i.e., a Type 2 process).  For instance, a patient with flank pain, nausea, vomiting, and hematuria demonstrates the “pattern” of a kidney stone (common), but may in fact have a dissecting aortic aneurysm (uncommon).

This model is valuable for understanding how we arrive at psychiatric diagnoses (the model is illustrated in a 2009 article by Croskerry).  When evaluating a patient for the first time, a psychiatrist often looks at “the big picture”:  Does this person appear to have a mood disorder, psychosis, anxiety, a personality disorder?  Have I seen this type of patient before?  What’s my general impression of this person?  In other words, the assessment relies heavily on Type 1 processes, using heuristics and “Gestalt” impressions.  But Type 2 processes are also important.  We must inquire about specific symptoms, treatment history, social background; we might order tests or review old records, which may change our initial perception.

Sound clinical decision-making, therefore, requires both processes.  Unfortunately, both are highly prone to error.  In fact, Croskerry identifies at least 40 cognitive biases, which occur when the processes are not adapted for the specific task at hand.  For instance, we tend to use Type 1 processes more frequently than we should.  Many psychiatrists, particularly those seeing a large volume of patients for short periods of time, often see patterns earlier than is warranted, and rush to diagnoses without fully considering all possibilities.  In other words, they fall victim to what psychologist Keith Stanovich calls “dysrationalia,” or the inability to think or act rationally despite adequate intelligence.  In the dual process model, dysrationalia lets Type 1 impressions “override” Type 2 processes (“I don’t need to do a complete social history; I just know this patient has major depression”), leading to diagnostic failure.

Croskerry calls this the “cognitive miser” function: we rely on processes that consume fewer cognitive resources because we’re cognitively lazy.  The alternative would be to switch to a Type 2 process—a more detailed evaluation, using deductive, analytic reasoning.  But this takes great effort and time.  Moreover, when a psychiatrist switches to a “Type 2” mode, he or she asks questions that are nonspecific in nature (largely owing to the unreliability of some DSM-IV diagnoses), or questions that merely confirm the initial “Type 1” hunch.  In other words, we end up finding what we expect to find.

The contrast between Type 1 and Type 2 processes is most apparent when we observe people operating at either end of the spectrum.  Some psychiatrists see patterns in every patient (e.g., “I could tell he was bipolar as soon as he walked into my office”—a classic error called the representativeness heuristic), even though they rarely ask about specific symptoms, let alone test alternate hypotheses.  On the other hand, medical students and young clinicians often work exclusively in Type 2; they ask very thorough questions, covering every conceivable alternative, and every symptom in the DSM-IV (even irrelevant ones).  As a result, they get frustrated when they can’t determine a precise diagnosis or, alternatively, they come up with a diagnosis that might “fit” the data but completely misses the mark regarding the underlying essence of the patient’s suffering.

Croskerry writes that the most accurate clinical decision-making occurs when a physician can switch between Type 1 and Type 2 processes as needed, a process called metacognition.  Metacognition requires a certain degree of humility, a willingness to re-examine one’s decisions in light of new information.  It also demands that the doctor be able to recognize when he or she is not performing well and to be willing to self-monitor and self-criticize.  To do this, Croskerry recommends that we develop “cognitive forcing strategies,” deliberate interventions that force us to think more consciously and deliberately about the problem at hand.  This may help us to be more accurate in our assessments:  in other words, to see both the trees for the forest, and the forest for the trees.

This could be a hard sell.  Doctors can be a stubborn bunch.  Clinicians who insist on practicing Type 2, “checklist”-style medicine (e.g., in a clinical trial) may be unwilling to consider the larger context in which specific symptoms arise, or they may not have sufficient understanding of that context to see how it might impact a patient.  On the other hand, clinicians who rush to judgment based on first impressions (a Type 1 process) may be annoyed by any suggestion that they should slow down and be more thorough or methodical.  Not to mention the fact that being more thorough takes more time.  And as we all know, time is money.

I believe that all psychiatrists should heed the dual-process model and ask how it influences their practice.  Are you too quick to label and diagnose, owing to your “dysrational” (Type 1) impulses?  On the other hand, if you use established diagnostic criteria (Type 2), are you measuring anything useful?  Should you use a cognitive forcing strategy to avoid over-reliance on one type of decision-making?  If you continue to rely on pattern recognition (Type 1 process), then what other data (Type 2) should you collect?  Treatment history?  A questionnaire?  Biomarkers?  A comprehensive assessment of social context?  And ultimately, how do you use this information to diagnose a “disorder” in a given individual?

These are just a few questions that the dual process model raises.  There are no easy answers, but anything that challenges us to be better physicians and avoid clinical errors, in my opinion, is well worth our time, attention, and thought.


ADHD: A Modest Proposal

February 1, 2012

I’m reluctant to write a post about ADHD.  It just seems like treacherous ground.  Judging by comments I’ve read online and in magazines, and my own personal experience, expressing an opinion about this diagnosis—or just about anything in child psychiatry—will be met with criticism from one side or another.  But after reading L. Alan Sroufe’s article (“Ritalin Gone Wild”) in this weekend’s New York Times, I feel compelled to write.

If you have not read the article, I encourage you to do so.  Personally, I agree with every word (well, except for the comment about “children born into poverty therefore [being] more vulnerable to behavior problems”—I would remind Dr Sroufe that correlation does not equal causation).  In fact, I wish I had written it.  Unfortunately, it seems that only outsiders or retired psychiatrists can write such stuff about this profession. The rest of us might need to look for jobs someday.

Predictably, the article has attracted numerous online detractors.  For starters, check out this response from the NYT “Motherlode” blog, condemning Dr Sroufe for “blaming parents” for ADHD.  In my reading of the original article, Dr Sroufe did nothing of the sort.  Rather, he pointed out that ADHD symptoms may not entirely (or at all) arise from an inborn neurological defect (or “chemical imbalance”), but rather that environmental influences may be more important.  He also remarked that, yes, ADHD drugs do work; children (and adults, for that matter) do perform better on them, but those successes decline over time, possibly because a drug solution “does nothing to change [environmental] conditions … in the first place.”

I couldn’t agree more.  To be honest, I think this statement holds true for much of what we treat in psychiatry, but it’s particularly relevant in children and adolescents.  Children are exposed to an enormous number of influences as they try to navigate their way in the world, not to mention the fact that their brains—and bodies—continue to develop rapidly and are highly vulnerable.  “Environmental influences” are almost limitless.

I have a radical proposal which will probably never, ever, be implemented, but which might help resolve the problems raised by the NYT article.  Read on.

First of all, you’ll note that I referred to “ADHD symptoms” above, not “ADHD.”  This isn’t a typo.  In fact, this is a crucial distinction.  As with anything else in psychiatry, diagnosing ADHD relies on documentation of symptoms.  ADHD-like symptoms are extremely common, particularly in child-age populations.  (To review the official ADHD diagnostic criteria from the DSM-IV, click here.)  To be sure, a diagnosis of ADHD requires that these symptoms be “maladaptive and inconsistent with developmental level.”  Even so, I’ve often joked with my colleagues that I can diagnose just about any child with ADHD just by asking the right questions in the right way.  That’s not entirely a joke.  Try it yourself.  Look at the criteria, and then imagine you have a child in your office whose parent complains that he’s doing poorly in school, or gets in fights, or refuses to do homework, or daydreams a lot, etc.  When the ADHD criteria are on your mind—remember, you have to think like a psychiatrist here!—you’re likely to ask leading questions, and I guarantee you’ll get positive responses.

That’s a lousy way of making a diagnosis, of course, but it’s what happens in psychiatrists’ and pediatricians’ offices every day.  There are more “valid” ways to diagnose ADHD:  rating scales like the Conners or Vanderbilt surveys, extensive neuropsychiatric assessment, or (possibly) expensive imaging tests.  However, in practice, we often let subthreshold scores on those surveys “slide” and prescribe ADHD medications anyway (I’ve seen it plenty); neuropsychiatric assessments are often wishy-washy (“auditory processing score in the 60th percentile,” etc); and, as Dr Sroufe correctly points out, children with poor motivation or “an underdeveloped capacity to regulate their behavior” will most likely have “anomalous” brain scans.  That doesn’t necessarily mean they have a disorder.

So what’s my proposal?  My proposal is to get rid of the diagnosis of ADHD altogether.  Now, before you crucify me or accuse me of being unfit to practice medicine (as one reader—who’s also the author of a book on ADHD—did when I floated this idea on David Allen’s blog last week), allow me to elaborate.

First, if we eliminate the diagnosis of ADHD, we can still do what we’ve been doing.  We can still evaluate children with attention or concentration problems, or hyperactivity, and we can still use stimulant medications (of course, they’d be off-label now) to provide relief—as long as we’ve obtained the same informed consent that we’ve done all along.  We do this all the time in medicine.  If you complain of constant toe and ankle pain, I don’t immediately diagnose you with gout; instead, I might do a focused physical exam of the area and recommend a trial of NSAIDs.  If the pain returns, or doesn’t improve, or you have other features associated with gout, I may want to check uric acid levels, do a synovial fluid analysis, or prescribe allopurinol.

That’s what medicine is all about:  we see symptoms that suggest a diagnosis, and we provide an intervention to help alleviate the symptoms while paying attention to the natural course of the illness, refining the diagnosis over time, and continually modifying the therapy to treat the underlying diagnosis and/or eliminate risk factors.  With the ultimate goal, of course, of minimizing dangerous or expensive interventions and achieving some degree of meaningful recovery.

This is precisely what we don’t do in most cases of ADHD.  Or in most of psychiatry.  While exceptions definitely exist, often the diagnosis of ADHD—and the prescription of a drug that, in many cases, works surprisingly well—is the end of the story.  Child gets a diagnosis, child takes medication, child does better with peers or in school, parents are satisfied, everyone’s happy.  But what caused the symptoms in the first place?  Can (or should) that be fixed?  When can (or should) treatment be stopped?  How can we prevent long-term harm from the medication?

If, on the other hand, we don’t make a diagnosis of ADHD, but instead document that the child has “problems in focusing” or “inattention” or “hyperactivity” (i.e., we describe the specific symptoms), then it behooves us to continue looking for the causes of those symptoms.  For some children, it may be a chaotic home environment.  For others, it may be a history of neglect, or ongoing substance abuse.  For others, it may be a parenting style or interaction which is not ideal for that child’s social or biological makeup (I hesitate to write “poor parenting” because then I’ll really get hate mail!).  For still others, there may indeed be a biological abnormality—maybe a smaller dorsolateral prefrontal cortex (hey! the DLPFC!) or delayed brain maturation.

ADHD offers a unique platform upon which to try this open-minded, non-DSM-biased approach.  Dropping the diagnosis of “ADHD” would have a number of advantages.  It would encourage us to search more deeply for root causes; it would allow us to be more eclectic in our treatment; it would prevent patients, parents, doctors, teachers, and others from using it as a label or as an “excuse” for one’s behavior; and it would require us to provide truly individualized care.  Sure, there will be those who simply ask for the psychostimulants “because they work” for their symptoms of inattentiveness or distractibility (and those who deliberately fake ADHD symptoms because they want to abuse the stimulant or because they want to get into Harvard), but hey, that’s already happening now!  My proposal would create a glut of “false negative” ADHD diagnoses, but it would also reduce the above “false positives,” which, in my opinion, are more damaging to our field’s already tenuous nosology.

A strategy like this could—and probably should—be extended to other conditions in psychiatry, too.  I believe that some of what we call “ADHD” is truly a disorder—probably multiple disorders, as noted above; the same is probably true with “major depression,” “bipolar disorder,” and just about everything else.  But when these labels start being used indiscriminately (and unfortunately DSM-5 doesn’t look to offer any improvement), the diagnoses become fixed labels and lock us into an approach that may, at best, completely miss the point, and at worst, cause significant harm.  Maybe we should rethink this.


The Unfortunate Therapeutic Myopia of the EMR

January 19, 2012

There’s a lot you can say about an electronic medical record (EMR).  Some of it is good: it’s more legible than a written chart, it facilitates billing, and it’s (usually) readily accessible.  On the other hand, EMRs are often cumbersome and confusing, they encourage “checklist”-style medicine, and they contain a lot of useless or duplicate information.  But a recent experience in my child/adolescent clinic opened my eyes to where an EMR might really mislead us.

David, a 9-year-old elementary school student, has been coming to the clinic every month for the last three years.  He carries a diagnosis of “bipolar disorder,” manifested primarily as extreme shifts in mood, easy irritability, insomnia, and trouble controlling his temper, both in the classroom and at home.  Previous doctors had diagnosed “oppositional defiant disorder,” then ADHD, then bipolar disorder.  He had had a trial of psychostimulants with no effect, as well as some brief behavioral therapy.  Somewhere along the way, a combination of clonidine and Risperdal was started, and those have been David’s meds for the last year.

The information in the above paragraph came from my single interaction with David and his mom.  It was the first time I had seen David; he was added to my schedule at the last minute because the doctor he had been seeing for the last four months—a locum tenens doc—was unavailable.

Shortly before the visit, I had opened David’s EMR record to review his case, but it was not very informative.  Our EMR only allows one note to be open at a time, and I saw the same thing—”bipolar, stable, continue current meds”—and some other text, apparently cut & pasted, in each of his last 3-4 notes.  This was no big surprise; EMRs are full of cut & pasted material, plus lots of other boilerplate stuff that is necessary for legal & billing purposes but can easily be ignored.  The take-home message, at the time, was that David had been fairly stable for at least the last few months and probably just needed a refill.

During the appointment, I took note that David was a very pleasant child, agreeable and polite.  Mom said he had been “doing well.”  But I also noticed that, throughout the interview, David’s mom was behaving strangely—her head bobbed rhythmically side to side, and her arms moved in a writhing motion.  She spoke tangentially and demonstrated some acute (and extreme) shifts in emotion, at one point even crying suddenly, with no obvious trigger.

I asked questions about their home environment, David’s access to drugs and alcohol, etc., and I learned that mom used Vicodin, Soma, and Xanax.  She admitted that they weren’t prescribed to her—she bought them from friends.  Moreover, she reported that she “had just taken a few Xanax to get out the door this morning” which, she said, “might explain why I’m acting like this.”  She also shared with me that she had been sent to jail four years ago on an accusation of child abuse (she had allegedly struck her teenage daughter during an argument), at which time David and his brothers were sent to an emergency children’s shelter for four nights.

Even though I’m not David’s regular doctor, I felt that these details were relevant to his case.  It was entirely possible, in my opinion, that David’s home environment—a mother using prescription drugs inappropriately, a possible history of trauma—had contributed to his mood lability and “temper dysregulation,” something that a “bipolar” label might mask.

But I’m not writing this to argue that David isn’t “bipolar.”  Instead, I wish to point out that I obtained these details simply by observing the interaction between David and his mom over the course of ~30 minutes, and asking a few questions, and not by reading his EMR record.  In fact, after the appointment I reviewed the last 12 months of his EMR record, which showed dozens of psychiatrists’ notes, therapists’ notes, case manager’s notes, demographic updates, and “treatment plans,” and all of it was generally the same:  diagnosis, brief status updates, LOTS of boilerplate mumbo-jumbo, pages and pages of checkboxes, a few mentions of symptoms.  Nothing about David’s home situation or mom’s past.  In fact, nothing about mom at all.  I could not have been the first clinician to have had concerns about David’s home environment, but if such information was to be found in his EMR record, I had no idea where.

Medical charts—particularly in psychiatry—are living documents.  To any physician who has practiced for more than a decade or so, simply opening an actual, physical, paper chart can be like unfolding a treasure map:  you don’t know what you’ll find, but you know that there may be riches to be revealed.   Sometimes, while thumbing through the chart, a note jumps out because it’s clearly detailed or something relevant is highlighted or “flagged” (in the past, I learned how to spot the handwriting of the more perceptive and thorough clinicians).  Devices like Post-It notes or folded pages provide easy—albeit low-tech—access to relevant information.  Also, a thick paper chart means a long (or complicated) history in treatment, necessitating a more thorough review.  Sometimes the absence of notes over a period of time indicates a period of decompensation, a move, or, possibly a period of remission.  All of this is available, literally, at one’s fingertips.

EMRs are far more restrictive.  In David’s case, the EMR was my only source of information—apart from David himself.  And for David, it seemed sterile, bland, just a series of “check-ins” of a bipolar kid on Risperdal.  There was probably more info somewhere in there, but it was too difficult and non-intuitive to access.  Hence, the practice (adopted by most clinicians) of just opening up the patient’s most recent note—and that’s it.

Unfortunately, this leads to a therapeutic myopia that may change how we practice medicine.  EMRs, when used this way, are here-and-now.  They have become the medical equivalent of Facebook.  When I log on to the EMR, I see my patient’s most recent note—a “status update,” so to speak—but not much else.  It takes time and effort to search through a patient’s profile for more relevant historical info—and that’s if you know where to look.  After working with seven different EMRs in the last six years, I can say that they’re all pretty similar in this regard.  And if an electronic chart is only going to be used for its most recent note, there’s no incentive to be thorough.

Access to information is great.  But the “usability” of EMRs is so poor that we have easy access only to what the last clinician thought was important.  Or better yet, what he or she decided to document.  The rest—like David’s home life, the potential impact of his mother’s behavior on his symptoms, and environmental factors that require our ongoing attention, all of which may be far more meaningful than David’s last Risperdal dose—must be obtained “from scratch.”  If it is obtained at all.


Talk Is Cheap

October 9, 2011

I work part-time in a hospital psychiatry unit, overseeing residents and medical students on their inpatient psychiatry rotations.  They are responsible for three to six patients at any given time, directing and coordinating the patients’ care while they are admitted to our hospital.

To an outsider, this may seem like a generous ratio: one resident taking care of only 3-6 patients.  One would think that this should allow for over an hour of direct patient contact per day, resulting in truly “personalized” medicine.  But instead, the absolute opposite is true: sometimes doctors only see patients for minutes at a time, and develop only a limited understanding of patients for whom they are responsible.  I noticed this in my own residency training, when halfway through my first year I realized the unfortunate fact that even though I was “taking care” of patients and getting my work done satisfactorily, I couldn’t tell you whether my patients felt they were getting better, whether they appreciated my efforts, or whether they had entirely different needs that I had been ignoring.

In truth, much of the workload in a residency program (in any medical specialty) is related to non-patient-care concerns:  lectures, reading, research projects, faculty supervision, etc.  But even outside of the training environment, doctors spend less and less time with patients, creating a disturbing precedent for the future of medicine.  In psychiatry in particular, the shrinking “therapy hour” has received much attention, most recently in a New York Times front-page article (which I blogged about here and here).  The responses to the article echoed a common (and growing) lament among most psychiatrists:  therapy has been replaced with symptom checklists, rapid-fire questioning, and knee-jerk prescribing.

In my case, I don’t mean to be simply one more voice among the chorus of psychiatrists yearning for the “glory days” of psychiatry, when prolonged psychotherapy and hour-long visits were the norm.  I didn’t practice in those days, anyway.  Nevertheless, I do believe that we lose something important by distancing ourselves from our patients.

Consider the inpatient unit again.  My students and residents sometimes spend hours looking up background information, old charts, and lab results, calling family members and other providers, and discussing differential diagnosis and possible treatment plans, before ever seeing their patient.  While their efforts are laudable, the fact remains that a face-to-face interaction with a patient can be remarkably informative, sometimes even immediately diagnostic to the skilled eye.  In an era where we’re trying to reduce our reliance on expensive technology and wasteful tests, patient contact should be prioritized over the hours upon hours that trainees spend hunched over computer workstations.

In the outpatient setting, direct patient-care time has been largely replaced by “busy work” (writing notes; debugging EMRs; calling pharmacies to inquire about prescriptions; completing prior-authorization forms; and performing any number of “quality-control,” credentialing, or other mandatory “compliance” exercises required by our institutions).  Some of this is important, but at the same time, an extra ten or fifteen minutes with a patient may go a long way to determining that patient’s treatment goals (which may disagree with the doctor’s), improving their motivation for change, or addressing unresolved underlying issues—matters that may truly make a difference and cut long-term costs.

The future direction of psychiatry doesn’t look promising, as this vanishing emphasis on the patient’s words and deeds is likely to make treatment even less cost-effective.  For example, there is a growing effort to develop biomarkers for diagnosis of mental illness and to predict medication response.  In my opinion, the science is just not there yet (partly because the DSM is still a poor guide by which to make valid diagnoses… what are depression and schizophrenia anyway?).  And even if the biomarker strategy were a reliable one, there’s still nothing that could be learned in a $745+ blood test that couldn’t be uncovered in a good, thorough clinical examination by a talented diagnostician—not to mention the fact that the examination would also uncover a large amount of other information—and establish valuable rapport—which would likely improve the quality of care.

The blog “1boringoldman” recently featured a post called “Ask them about their lives…” in which a particularly illustrative case was discussed.  I’ll refer you there for the details, but I’ll repost the author’s summary comments here:

I fantasize an article in the American Journal of Psychiatry entitled “Ask them about their lives!” Psychiatrists give drugs. Therapists apply therapies. Who the hell interviews patients beyond logging in a symptom list? I’m being dead serious about that…

I share Mickey’s concern, as this is a vital question for the future of psychiatry.  Personally, I chose psychiatry over other branches of medicine because I enjoy talking to people, asking about their lives, and helping them develop goals and achieve their dreams.  I want to help them overcome the obstacles put in their way by catastrophic relationships, behavioral missteps, poor insight, harmful impulsivity, addiction, emotional dysregulation, and– yes– mental illness.

However, if I don’t have the opportunity to talk to my patients (still my most useful diagnostic and therapeutic tool), I must instead rely on other ways to explain their suffering:  a score on a symptom list, a lab value, or a diagnosis that’s been stuck on the patient’s chart over several years without anyone taking the time to ask whether it’s relevant.  Not only do our patients deserve more than that, they usually want more than that, too; the most common complaint I hear from a patient is that “Dr So-And-So didn’t listen to me, he just prescribed drugs.”

This is not the psychiatry of my forefathers.  This is not Philippe Pinel’s “moral treatment,” Emil Kraepelin’s meticulous attention to symptoms and patterns thereof, or Aaron Beck’s cognitive re-strategizing.  No, it’s the psychiatry of HMOs, Wall Street, and an over-medicalized society, and in this brave new world, the patient is nowhere to be found.

