The Well Person

March 21, 2012

What does it mean to be “normal”?  We’re all unique, aren’t we?  We differ from each other in so many ways.  So what does it mean to say someone is “normal,” while someone else has a “disorder”?

This is, of course, the age-old question of psychiatric diagnosis.  The authors of the DSM-5, in fact, are grappling with this very question right now.  Take grieving, for example.  As I and others have written, grieving is “normal,” although its duration and intensity vary from person to person.  At some point, a line may be crossed, beyond which a person’s grief is no longer adaptive but dangerous.  Where that line falls, however, cannot be determined by a book or by a committee.

Psychiatrists ought to know who’s healthy and who’s not.  After all, we call ourselves experts in “mental health,” don’t we?  Surprisingly, I don’t think we’re very good at this.  We are acutely sensitive to disorder but have trouble identifying wellness.  We can recognize patients’ difficulties in dealing with other people but are hard-pressed to describe healthy interpersonal skills.  We admit that someone might be able to live with auditory hallucinations but we still feel an urge to increase the antipsychotic dose when a patient says she still hears “those voices.”   We are quick to point out how a patient’s alcohol or marijuana use might be a problem, but we can’t describe how he might use these substances in moderation.  I could go on and on.

Part of the reason for this might lie in how we’re trained.  In medical school we learn basic psychopathology and drug mechanisms (and, by the way, there are no drugs whose mechanism “maintains normality”—they all fix something that’s broken).  We learn how to do a mental status exam, complete with full descriptions of the behavior of manic, psychotic, depressed, and anxious people—but not “normals.”  Then, in our postgraduate training, our early years are spent with the most ill patients—those in hospitals, locked facilities, or emergency settings.  It’s not until much later in one’s training that a psychiatrist gets to see relatively more functional individuals in an office or clinic.  But by that time, we’re already tuned in to deficits and symptoms, and not to personal strengths, abilities, or resilience-promoting factors.

In a recent discussion with a colleague about how psychiatrists might best serve a large population of patients (e.g., in a “medical home” model), I suggested that perhaps each psychiatrist could be responsible for a defined panel of people (say, 300 or 400 individuals).  Our job would be to see each of these 300-400 people at least once a year, regardless of whether they have a psychiatric diagnosis or not.  Those who have emotional or psychiatric complaints or who have a clear mental illness could be seen more frequently; the others would get their annual checkup and their clean bill of (mental) health.  It would be sort of like your annual medical visit or a “well-baby visit” in pediatrics:  a way for a person to be seen by a doctor, implement preventive measures, and undergo screening to make sure no significant problems go unaddressed.

Alas, this would never fly in psychiatry.  Why not?  Because we’re too accustomed to seeing illness.  We’re too quick to interpret “sadness” as “depression”; to interpret “anxiety” or “nerves” as a cue for a benzodiazepine prescription; or to interpret “inattention” or poor work/school performance as ADHD.  I’ve even experienced this myself.  It is difficult to tell a person “you’re really doing just fine; there’s no need for you to see me, but if you want to come back, just call.”  For one thing, in many settings, I wouldn’t get paid for the visit if I said this.  But another concern, of course, is the fear of missing something:  Maybe this person really is bipolar [or whatever] and if I don’t keep seeing him, there will be a bad outcome and I’ll be responsible.

There’s also the fact that psychiatry is not a primary care specialty:  insurance plans don’t pay for an annual “well-person visit” with a psychiatrist.  Patients who come to a psychiatrist’s office are usually there for a reason.  Maybe the patient deliberately sought out the psychiatrist to ask for help.  Maybe their primary care provider saw something wrong and wanted the psychiatrist’s input.  In the former case, telling the person he or she is “okay” risks losing their trust (“but I just know something’s wrong, doc!“).  In the latter, it risks losing a referral source or professional relationship.

So how do we fix this?  I think we psychiatrists need to spend more time learning what “normal” really is.  There are no classes or textbooks on “Normal Adults.”  For starters, we can remind ourselves that the “normal” people around whom we’ve been living our lives may in fact have features that we might otherwise see as a disorder.  Learning to accept these quirks, foibles, and idiosyncrasies may help us to accept them in our patients.

In terms of using the DSM, we need to become more willing to use the V71.09 code, which means, essentially, “No diagnosis or condition.”  Many psychiatrists don’t even know this code exists.  Instead, we give “NOS” diagnoses (“not otherwise specified”) or “rule-outs,” which eventually become de facto diagnoses because we never actually take the time to rule them out!  A V71.09 should be seen as a perfectly valid (and reimbursable) diagnosis—a statement that a person has, in fact, a clean bill of mental health.  Now we just need to figure out what that means.

It is said that when Pope Julius II asked Michelangelo how he sculpted David out of a marble slab, he replied: “I just removed the parts that weren’t David.”  In psychiatry, we spend too much time thinking about what’s not David and relentlessly chipping away.  We spend too little time thinking about the healthy figure that may already be standing right in front of our eyes.


How To Think Like A Psychiatrist

March 4, 2012

The cornerstone of any medical intervention is a sound diagnosis.  Accurate diagnosis guides the proper treatment, while an incorrect diagnosis might subject a patient to unnecessary procedures or excessive pharmacotherapy, and it may further obscure the patient’s true underlying condition.  This is true for all medical specialties—including psychiatry.  It behooves us, then, to examine the practice of clinical decision-making, how we do it, and where we might go wrong, particularly in the area of psychiatric diagnosis.

According to Pat Croskerry, a physician at Dalhousie University in Canada, the foundation of clinical cognition is the “dual process model,” first described by the Greek philosophers (and reviewed here).  This model proposes that people solve problems using one of two “processes”:  Type 1 processes involve intuition and are largely automatic, fast, and unconscious (e.g., recognizing a friend’s face).  Type 2 processes are more deliberate, analytical, and systematic (e.g., planning the best route for an upcoming trip).  Doctors use both types when making a diagnosis, but the relative emphasis varies with the setting.  In the ED, quick action based on pattern recognition (i.e., a Type 1 process) is crucial.  Sometimes, however, it may be wrong, particularly if other conditions aren’t evaluated and ruled out (i.e., a Type 2 process).  For instance, a patient with flank pain, nausea, vomiting, and hematuria demonstrates the “pattern” of a kidney stone (common), but may in fact have a dissecting aortic aneurysm (uncommon).

This model is valuable for understanding how we arrive at psychiatric diagnoses (the above figure is from a 2009 article by Croskerry).  When evaluating a patient for the first time, a psychiatrist often looks at “the big picture”:  Does this person appear to have a mood disorder, psychosis, anxiety, a personality disorder?  Have I seen this type of patient before?  What’s my general impression of this person?  In other words, the assessment relies heavily on Type 1 processes, using heuristics and “Gestalt” impressions.  But Type 2 processes are also important.  We must inquire about specific symptoms, treatment history, social background; we might order tests or review old records, which may change our initial perception.

Sound clinical decision-making, therefore, requires both processes.  Unfortunately, these are highly prone to error.  In fact, Croskerry identifies at least 40 cognitive biases, which occur when the processes are not adapted for the specific task at hand.  For instance, we tend to use Type 1 processes more frequently than we should.  Many psychiatrists, particularly those seeing a large volume of patients for short periods of time, often see patterns earlier than is warranted, and rush to diagnoses without fully considering all possibilities.  In other words, they fall victim to what psychologist Keith Stanovich calls “dysrationalia,” or the inability to think or act rationally despite adequate intelligence.  In the dual process model, dysrationalia can “override” Type 2 processes (“I don’t need to do a complete social history, I just know this patient has major depression”), leading to diagnostic failure.

Croskerry calls this the “cognitive miser” function: we rely on processes that consume fewer cognitive resources because we’re cognitively lazy.  The alternative would be to switch to a Type 2 process—a more detailed evaluation, using deductive, analytic reasoning.  But this takes great effort and time.  Moreover, when a psychiatrist switches to a “Type 2” mode, he or she asks questions that are nonspecific in nature (largely owing to the unreliability of some DSM-IV diagnoses), or questions that confirm the initial “Type 1” hunch.  In other words, we end up finding what we expect to find.

The contrast between Type 1 and Type 2 processes is most apparent when we observe people operating at either end of the spectrum.  Some psychiatrists see patterns in every patient (e.g., “I could tell he was bipolar as soon as he walked into my office”—a classic error called the representativeness heuristic), even though they rarely ask about specific symptoms, let alone test alternate hypotheses.  On the other hand, medical students and young clinicians often work exclusively in Type 2; they ask very thorough questions, covering every conceivable alternative, and every symptom in the DSM-IV (even irrelevant ones).  As a result, they get frustrated when they can’t determine a precise diagnosis or, alternately, they come up with a diagnosis that might “fit” the data but completely misses the mark regarding the underlying essence of the patient’s suffering.

Croskerry writes that the most accurate clinical decision-making occurs when a physician can switch between Type 1 and Type 2 processes  as needed, a process called metacognition.  Metacognition requires a certain degree of humility, a willingness to re-examine one’s decisions in light of new information.  It also demands that the doctor be able to recognize when he or she is not performing well and to be willing to self-monitor and self-criticize.  To do this, Croskerry recommends that we develop “cognitive forcing strategies,” deliberate interventions that force us to think more consciously and deliberately about the problem at hand.  This may help us to be more accurate in our assessments:  in other words, to see both the trees for the forest, and the forest for the trees.

This could be a hard sell.  Doctors can be a stubborn bunch.  Clinicians who insist on practicing Type 2,  “checklist”-style medicine (e.g., in a clinical trial) may be unwilling to consider the larger context in which specific symptoms arise, or they may not have sufficient understanding of that context to see how it might impact a patient.  On the other hand, clinicians who rush to judgment based on first impressions (a Type 1 process) may be annoyed by any suggestion that they should slow down and be more thorough or methodical.  Not to mention the fact that being more thorough takes more time. And as we all know, time is money.

I believe that all psychiatrists should heed the dual-process model and ask how it influences their practice.  Are you too quick to label and diagnose, owing to your “dysrational” (Type 1) impulses?  On the other hand, if you use established diagnostic criteria (Type 2), are you measuring anything useful?  Should you use a cognitive forcing strategy to avoid over-reliance on one type of decision-making?  If you continue to rely on pattern recognition (Type 1 process), then what other data (Type 2) should you collect?  Treatment history?  A questionnaire?  Biomarkers?  A comprehensive assessment of social context?  And ultimately, how do you use this information to diagnose a “disorder” in a given individual?

These are just a few questions that the dual process model raises.  There are no easy answers, but anything that challenges us to be better physicians and avoid clinical errors, in my opinion, is well worth our time, attention, and thought.


Mental Illness IS Real After All… So What Was I Treating Before?

July 26, 2011

I recently started working part-time on an inpatient psychiatric unit at a large county medical center.  The last time I worked in inpatient psychiatry was six years ago, and in the meantime I’ve worked in various office settings—community mental health, private practice, residential drug/alcohol treatment, and research.  I’m glad I’m back, but it’s really making me rethink my ideas about mental illness.

An inpatient psychiatry unit is not just a locked version of an outpatient clinic.  The key difference—which would be apparent to any observer—is the intensity of patients’ suffering.  Of course, this should have been obvious to me, having treated patients like these before.  But I’ll admit, I wasn’t prepared for the abrupt transition.  Indeed, the experience has reminded me how severe mental illness can be, and has proven to be a “wake-up” call at this point in my career, before I succumb to the conceited (yet naïve) belief that “I’ve seen it all.”

Patients are hospitalized when they simply cannot take care of themselves—or may be a danger to themselves or others—as a result of their psychiatric symptoms.  These individuals are in severe emotional or psychological distress, have immense difficulty grasping reality, or are at imminent risk of self-harm, or worse.  In contrast to the clinic, the illnesses I see on the inpatient unit are more incapacitating, more palpable, and—for lack of a better word—more “medical.”

Perhaps this is because they also seem to respond better to our interventions.  Medications are never 100% effective, but they can have a profound impact on quelling the most distressing and debilitating symptoms of the psychiatric inpatient.  In the outpatient setting, medications—and even psychotherapy—are confounded by so many other factors in the typical patient’s life.  When I’m seeing a patient every month, for instance—or even every week—I often wonder whether my effort is doing any good.  When a patient assures me it is, I think it’s because I try to be a nice, friendly guy.  Not because I feel like I’m practicing any medicine.  (By the way, that’s not humility, I see it as healthy skepticism.)

Does this mean that the patient who sees her psychiatrist every four weeks and who has never been hospitalized is not suffering?  Or that we should just do away with psychiatric outpatient care because these patients don’t have “diseases”?  Of course not.  Discharged patients need outpatient follow-up, and sometimes outpatient care is vital to prevent hospitalization in the first place.  Moreover, people do suffer and do benefit from coming to see doctors like me in the outpatient setting.

But I think it’s important to look at the differences between who gets hospitalized and who does not, as this may inform our thinking about the nature of mental illness and help us to deliver treatment accordingly.  At the risk of oversimplifying things (and of offending many in my profession—and maybe even some patients), perhaps the more severe cases are the true psychiatric “diseases” with clear neurochemical or anatomic foundations, and which will respond robustly to the right pharmacological or neurosurgical cure (once we find it), while the outpatient cases are not “diseases” at all, but simply maladaptive strategies to cope with what is (unfortunately) a chaotic, unfair, and challenging world.

Some will argue that these two things are one and the same.  Some will argue that one may lead to the other.  In part, the distinction hinges upon what we call a “disease.”  At any rate, it’s an interesting nosological dilemma.  But in the meantime, we should be careful not to rush to the conclusion that the conditions we see in acutely incapacitated and severely disturbed hospital patients are the same as those we see in our office practices, just “more extreme versions.”  In fact, they may be entirely different entities altogether, and may respond to entirely different interventions (i.e., not just higher doses of the same drug).

The trick is where to draw the distinction between the “true” disease and its “outpatient-only” counterpart.  Perhaps this is where biomarkers like genotypes or blood tests might prove useful.  In my opinion, this would be a fruitful area of research, as it would help us better understand the biology of disease, design more suitable treatments (pharmacological or otherwise), and dedicate treatment resources more fairly.  It would also lead us to provide more humane and thoughtful care to people on both sides of the double-locked doors—something we seem to do less and less of these days.


Abilify for Bipolar Maintenance: More Hard Questions

May 31, 2011

Much attention has been drawn to a recent PLoS Medicine article criticizing the evidence base for the use of Abilify as maintenance treatment for bipolar disorder.  The major points emphasized by most critics are, first, that the FDA approved Abilify for this purpose in 2005 on the basis of flawed and scanty evidence and, secondly, that the literature since that time has failed to point out the deficiencies in the original study.

While the above may be true, I believe these criticisms miss a more important point.  Instead of lambasting the FDA or lamenting the poor quality of clinical research, we psychiatrists need to use this as an opportunity to take a closer look at what we treat, why we treat, and how we treat.

Before elaborating, let me summarize the main points of the PLoS article.  The authors point out that FDA approval of Abilify was based on only one “maintenance” trial by Keck et al published in 2007.  The trial included only 161 patients (only 7 of whom, or 1.3% of the total 567 who started the study, were followed throughout 26 weeks of stabilization and 74 follow-up weeks of maintenance).  It also consisted of patients who had already been stabilized on Abilify; thus, it was “enriched” for patients who had already shown a good response to this drug.  Furthermore, the “placebo failures” consisted of patients who were abruptly withdrawn from Abilify and placed on placebo; their relapses might thus be attributed to the researchers’ “randomized discontinuation” design rather than the failure of placebo.  (For more commentary, including follow-up from Bristol-Myers Squibb, Abilify’s manufacturer, please see this excellent post on Pharmalot.)

These are all valid arguments.  But as I read the PLoS paper and the ongoing discussion ever since, I can’t help but think, so what??  First of all, most psychiatrists probably don’t know about the PLoS paper.  And even if they did, the major questions for me would be:  would the criticism of the Keck et al. study change the way psychiatrists practice?  Should it?

Let’s think about psychiatric illness for a moment.  Most disorders are characterized by an initial, abrupt onset or “episode.”  These acute episodes are usually treated with medications (plus or minus psychotherapy or other psychosocial interventions), often resulting in rapid symptomatic improvement—or, at the very least, stabilization of those symptoms.

One big, unanswered (and, unfortunately, under-asked) question in psychiatry is, then what?  Once a person is stabilized (which in some cases means nothing more than “he’s no longer a danger to himself or others”), what do we do?  We don’t know how long to treat patients, and there are no guidelines for when to discontinue medications.  Instead we hear the common refrain:  depression, schizophrenia, and bipolar disorder are lifelong illnesses—”just like hypertension or diabetes”—and should be treated as such.

But is that true?  At the risk of sounding like a heretic (and, indeed, I’d be laughed out of residency if I had ever asked this question), are there some cases of bipolar disorder—or schizophrenia, or depression, for that matter—which only require brief periods of psychopharmacological treatment, or none at all?

The conventional wisdom is that, once a person is stabilized, we should just continue treatment.  And why not?  What doctor is going to take his patient off Abilify—or any other mood stabilizer or antipsychotic which has been effective in the acute phase—and risk a repeat mood episode?  None.  And if he does, would he attribute the relapse to the disease, or to withdrawal of the drug?  Probably to the disease.

For another example of what I’m talking about, consider Depakote.  Depakote has been used for decades and is regarded as a “prototypical” mood stabilizer.  Indeed, some of my patients have taken Depakote for years and have remained stable, highly functional, and without evidence of mood episodes.  But Depakote was never approved for the maintenance treatment of bipolar disorder (for a brilliant review of this, which raises some of the same issues as the current Abilify brouhaha, read this article by The Last Psychiatrist).  In fact, the one placebo-controlled study of Depakote for maintenance treatment of bipolar disorder showed that it’s no better than placebo.  So why do doctors use it? Because it works (in the acute phase.)  Why do patients take it?  Again, because it works—oh, and their doctors tell them to continue taking it.  As the old saying goes, “if it ain’t broke, don’t fix it.”

However, what if it is broke[n]?  Some patients indeed fail Depakote monotherapy and require additional “adjunctive” medication (which, BTW, has provided another lucrative market for the atypical antipsychotics).  In such cases, most psychiatrists conclude that the patient’s disease is worsening and they add the second agent.  Might it be, however, that after the patient’s initial “response” to Depakote, the medication wasn’t doing anything at all?

To be sure, the Abilify study may have been more convincing had it been larger, followed patients for a longer time, and included a dedicated placebo arm consisting of patients who had not been on Abilify in the initial stage.  But I maintain that, regardless of the outcome of such an “improved” trial, most doctors would still use Abilify for maintenance treatment anyway, and convince themselves that it works—even if the medication is doing absolutely nothing to the underlying biology of the disease.

The bottom line is that it’s easy to criticize the FDA for approving a drug on the basis of a single, flawed study.  It’s also easy to criticize a pharmaceutical company for cutting corners and providing “flawed” data for FDA review.  But when it comes down to it, the real criticism should be directed at a field of medicine which endorses the “biological” treatment of a disorder (or group of disorders) whose biochemical basis and natural history are not fully understood, which creates post hoc explanations of its successes and failures based on that lack of understanding, and which is unwilling to look itself in the mirror and ask if it can do better.


Here’s A Disease. Do You Have It?

March 29, 2011

I serve as a consultant to a student organization at a nearby university.  These enterprising students produce patient-education materials (brochures, posters, handouts, etc) for several chronic diseases, and their mission—a noble one—is to distribute these materials to free clinics in underserved communities, with a goal to raise awareness of these conditions and educate patients on their proper management.

Because I work part-time in a community mental health clinic, I was, naturally, quite receptive to their offer to distribute some of their handiwork to my patients.  The group sent me several professional-looking flyers and brochures describing the key features of anxiety disorders, depression, PTSD, schizophrenia, and insomnia, and suggested that I distribute these materials to patients in my waiting room.

They do an excellent job at demystifying (and destigmatizing) mental illness, and describe, in layman’s terms, symptoms that may be suggestive of a significant psychiatric disorder (quoting from one, for example: “Certain neurotransmitters are out of balance when people are depressed.  They often feel sad, hopeless, helpless, lack energy, … If you think you may be depressed, talk to a doctor.”)  But just as I was about to print a stack of brochures and place them at the front door, I thought to myself, what exactly is our goal?

Experiencing symptoms of anxiety, depression, or insomnia doesn’t necessarily indicate mental illness or a need for medications or therapy; such symptoms might reflect a stressful period in one’s life or a difficult transition for which one might simply need some support or encouragement.  I feared that the questions posed in these materials may lead people to believe that there might be something “wrong” with them, when they are actually quite healthy.  (The target audience needs to be considered, too, but I’ll write more about that later.)

It led me to the question: when does “raising awareness” become “disease mongering”?

“Disease-mongering,” if you haven’t heard of it, is the (pejorative) term used to describe efforts to lead people to believe they have a disease when they most likely do not, or when the “disease” in question is so poorly defined as to be questionable in and of itself.  Accusations of disease-mongering have been made in the areas of bipolar disorder, fibromyalgia, restless legs syndrome, female sexual arousal disorder, “low testosterone,” and many others, and have mainly been directed toward pharmaceutical companies with a vested interest in getting people on their drugs.  (See this special issue of PLoS Medicine for several articles on this topic.)

Psychiatric disorders are ripe for disease-mongering because they are essentially defined by subjective symptoms, rather than objective signs and tests.  In other words, if I simply recite the symptoms of depression to my doctor, he’ll probably prescribe me an antidepressant; but if I tell him I have an infection, he’ll check my temperature, my WBC count, maybe palpate some lymph nodes, and if all seems normal he probably won’t write me a script for an antibiotic.

It’s true that some patients might deliberately falsify or exaggerate symptoms in order to obtain a particular medication or diagnosis.  What’s far more likely, though, is that they are (unconsciously) led to believe they have some illness, simply on the basis of experiencing some symptoms that are, more or less, a slight deviation from “normal.”  This is problematic for a number of reasons.  Obviously, an improper diagnosis leads to the prescription of unnecessary medications (and to their undesirable side effects), driving up the cost of health care.  It may also harm the patient in other ways; it may prevent the patient from getting health insurance or a job, or—even more insidiously—lead them to believe they have less control over their thoughts or behaviors than they actually do.

When we educate the public about mental illness, and encourage people to seek help if they think they need it, we walk a fine line.  Some people who may truly benefit from professional help will ignore the message, saying they “feel fine,” while others with very minor symptoms which are simply part of everyday life may be drawn in.  (Here is another example, a flyer for childhood bipolar disorder, produced by the NIH; how many parents & kids might be “caught”?)  Mental health providers should never turn away someone who presents for an evaluation or assessment, but we also have an obligation to provide a fair and unbiased opinion of whether a person needs treatment or not.  After all, isn’t that our responsibility as professionals?  To provide our honest input as to whether someone is healthy or unhealthy?

I almost used the words “normal” and “abnormal” in the last sentence.  I try not to use these words (what’s “normal” anyway?), but keeping them in mind helps us to see things from the patient’s perspective.  When she hears constant messages touting “If you have symptom X then you might have disorder Y—talk to your doctor!” she goes to the doctor seeking guidance, not necessarily a diagnosis.

The democratization of medical and scientific knowledge is, in my opinion, a good thing.  Information about what we know (and what we don’t know) about mental illness should indeed be shared with the public.  But it should not be undertaken with the goal of prescribing more of a certain medication, bringing more patients into one’s practice, or doling out more diagnoses.  Prospective patients often can’t tell what the motives are behind the messages they see—magazine ads, internet sites, and waiting-room brochures may be produced by just about anyone—and this is where the responsibility and ethics of the professional are of utmost importance.

Because if the patient can’t trust us to tell them they’re okay, then are we really protecting and ensuring the public good?

(Thanks to altmentalities for the childhood bipolar flyer.)


The Perils of Checklist Psychiatry

March 16, 2011

It’s no secret that doctors in all specialties spend less and less time with patients these days.  Last Sunday’s NY Times cover article (which I wrote about here and here) gave a fairly stark example of how reimbursement incentives have given modern psychiatry a sort of assembly-line mentality:  “Come in, state your problems, and here’s your script.  Next in line!!”  Unfortunately, all the trappings of modern medicine—shrinking reimbursements, electronic medical record systems which favor checklists over narratives, and patients who frequently want a “quick fix”—contribute directly to this sort of practice.

To be fair, there are many psychiatrists who don’t work this way.  But this usually comes with a higher price tag, which insurance companies often refuse to pay.  Why?  Well, to use the common yet frustrating phrase, it’s not “evidence-based medicine.”  As it turns out, the only available evidence is for the measurement of specific symptoms (measured by a checklist) and the prescription of pills over (short) periods of time.  Paradoxically, psychiatry—which should know better—no longer sees patients as people with interesting backgrounds and multiple ongoing social and psychological dynamics, but as collections of symptoms (anywhere in the world!) which respond to drugs.

The embodiment of this mentality, of course, is the DSM-IV, the “diagnostic manual” of psychiatry, which is basically a collection of symptom checklists designed to make a psychiatric diagnosis.  Now, I know that’s a gross oversimplification, and I’m also aware that sophisticated interviewing skills can help to determine the difference between a minor disturbance in a patient’s mood or behavior and a pathological condition (i.e., between a symptom and a syndrome).  But often the time, or those skills, simply aren’t available, and a diagnosis is made on the basis of what’s on the list.  As a result, psychiatric diagnoses have become “diagnoses of inclusion”:  you say you have a symptom, you’ll get a diagnosis.

To make matters worse, the checklist mentality, aided by the Internet, has spawned a small industry of “diagnostic tools,” freely available to clinicians and patients alike, and published in books, magazines, and web sites.  (The bestselling book The Checklist Manifesto may have contributed, too.  In it, author-surgeon Atul Gawande explains how simple checklists are useful in complex situations in which lives are on the line.  He has received much praise, but the checklists he describes help to narrow our focus, when in psychiatry it should be broadened.  In other words, checklists are great for preparing an OR for surgery, or a jetliner for takeoff, but not for identifying the underlying causes of an individual’s suffering.)

Anyway, a quick Google search for any mental health condition (or even a personality trait like shyness, irritability, or anger) will reveal dozens of free questionnaires, surveys, and checklists designed to make a tentative diagnosis.  Most give the disclaimer “this is not meant to be a diagnostic tool—please consult your physician.”

But why?  If the patient has already answered all the questions that the doctor will ask anyway in the 10 to 15 minutes allotted for their appointment, why can’t the patient just email the questionnaire directly to a doc in another state (or another country) from the convenience of their own home, enter their credit card information, and wait for a prescription in the mail?  Heck, why not eliminate the middleman and submit the questionnaire directly to the drug company for a supply of pills?

I realize I’m exaggerating here.  Questionnaires and checklists can be extremely helpful—when used responsibly—as a way to obtain a “snapshot” of a patient’s progress or of his/her active symptoms, and to suggest topics for discussion in a more thorough interview.  Also, people have an innate desire to know how they “score” on some measure—the exercise can even be entertaining—and their results can sometimes reveal things they didn’t know about themselves.

But what makes psychiatry and psychology fascinating is the discovery of alternate, more parsimonious (or potentially more serious) explanations for a patient’s traits and behaviors; or, conversely, informing a patient that his or her “high score” is actually nothing to be worried about.  That’s where the expert comes in.  Unfortunately, experts can behave like Internet surveys, too, and when we do, the “rush to judgment” can be shortsighted, unfair, and wrong.


Getting Inside The Patient’s Mind

March 4, 2011

As a profession, medicine concerns itself with the treatment of individual human beings, but primarily through a scientific or “objective” lens.  What really counts is not so much a person’s feelings or attitudes (although we try to pay attention to the patient’s subjective experience), but instead the pathology that contributes to those feelings or that experience: the malignant lesion, the abnormal lab value, the broken bone, or the infected tissue.

In psychiatry, despite the impressive inroads of biology, pharmacology, and molecular genetics into our field—and despite the bold predictions that accurate molecular diagnosis is right around the corner—the reverse is true, at least from the patient’s perspective.  Patients (generally) don’t care about which molecules are responsible for their depression or anxiety; they do know that they’re depressed or anxious, and they want help.  Psychiatry is getting ever closer to ignoring this essential reality.

Lately I’ve come across a few great reminders of this principle.  My colleagues over at Shrink Rap recently posted an article about working with patients who are struggling with problems that resemble those that the psychiatrist once experienced.  Indeed, a debate exists within the field as to whether providers should divulge details of their own personal experiences, or whether they should remain detached and objective.  Many psychiatrists see themselves in the latter group, simply offering themselves as a sounding board for the patient’s words and restricting their involvement to medications or other therapeutic interventions that have been planned and agreed to in advance.  This may, however, prevent them from sharing information that may be vital in helping the patient make great progress.

A few weeks ago a friend sent me a link to this video produced by the Janssen pharmaceutical company (makers of Risperdal and Invega, two atypical antipsychotic medications).

The video purports to simulate the experience of a person with psychotic symptoms.  While I can’t attest to its accuracy, it certainly is consistent with written accounts of psychotic experiences, and is (reassuringly!) compatible with what we screen for in the evaluation of a psychotic patient.  Much like reading a first-person narrative of mental illness (such as Andrew Solomon’s Noonday Demon, William Styron’s Darkness Visible, or An Unquiet Mind by Kay Redfield Jamison), videos and vignettes like this one may help psychiatrists to understand more deeply the personal aspect of what we treat.

I also stumbled upon an editorial in the January 2011 issue of Schizophrenia Bulletin by John Strauss, a Yale psychiatrist, entitled “Subjectivity and Severe Psychiatric Disorders.” In it, he argues that in order to practice psychiatry as a “human science” we must pay as much attention to a patient’s subjective experience as we do to the symptoms they report or the signs we observe.  But he also points out that our research tools and our descriptors—the terms we use to describe the dimensions of a person’s disease state—fail to do this.

Strauss argues that, as difficult as it sounds, we must divorce ourselves from the objective scientific tradition that we value so highly, and employ different approaches to understand and experience the subjective phenomena that our patients encounter—essentially to develop a “second kind of knowledge” (the first being the textbook knowledge that all doctors obtain through their training) that is immensely valuable in understanding a patient’s suffering.  He encourages role-playing, journaling, and other experiential tools to help physicians relate to the qualia of a patient’s suffering.

It’s hard to quantify subjective experiences for purposes of insurance billing, or for standardized outcomes measurements like surveys or questionnaires, or for large clinical trials of new pharmaceutical agents.  And because these constitute the reality of today’s medical practice, it is hard for physicians to direct their attention to the subjective experience of patients.  Nevertheless, physicians—and particularly psychiatrists—should remind themselves every so often that they’re dealing with people, not diseases or symptoms, and should challenge themselves to know what that actually means.

By the same token, patients have a right to know that their thoughts and feelings are not just heard, but understood, by their providers.  While the degree of understanding will (obviously) not be precise, patients may truly benefit from a clinician who “knows” more than meets the eye.

