Turf Wars

July 6, 2012

The practice of medicine has changed enormously in just the last few years.  While the upcoming implementation of the Affordable Care Act promises even further—and more dramatic—change, one topic that has received little popular attention is the question of exactly who provides medical services.  Throughout medicine, physicians (i.e., those with MD or DO degrees) are being replaced by others, whenever possible, in an attempt to cut costs and improve access to care.

In psychiatry, non-physicians have long been a part of the treatment landscape.  Most commonly today, psychiatrists focus on “medication management” while psychologists, psychotherapists, and others perform “talk therapy.”  But even the med-management jobs—the traditional domain of psychiatrists, with their extensive medical training—are gradually being transferred to so-called “midlevel” providers.

The term “midlevel” (not always a popular term, by the way) refers to someone whose training lies “mid-way” between that of a physician and that of another provider (like a nurse, psychologist, or social worker) but who is still licensed to diagnose and treat patients.  Midlevel providers usually work under the supervision of a physician, although often not direct supervision.  In psychiatry, a number of such midlevel professionals, with designations like PMHNP, PMHCNS, RNP, and APRN, have become increasingly involved in “med management” roles.  This is partly because they tend to command lower salaries and are reimbursed at a lower rate than physicians.  However, many physicians—and not just in psychiatry—have grown increasingly defensive (and, at times, downright angry, if some physician-only online communities are any indication) about this encroachment of “lesser-trained” practitioners onto their turf.

In my own experience, I’ve worked side-by-side with a few RNPs.  They performed their jobs quite competently.  However, their competence speaks less to the depth of their knowledge (which was impressive, incidentally) and more to the changing nature of psychiatry.  Indeed, psychiatry seems to have evolved to such a degree that the typical psychiatrist’s job—or “turf,” if you will—can be readily handled by someone with less (in some cases far less) training.  When you consider that most psychiatric visits comprise a quick interview and the prescription of a drug, it’s no surprise that someone with even just a rudimentary understanding of psychopharmacology and a friendly demeanor can do well 99% of the time.

This trend could spell (or hasten) the death of psychiatry.  More importantly, however, it could present an opportunity for psychiatry’s leaders to redefine and reinvigorate our field.

It’s easy to see how this trend could bring psychiatry to its knees.  Third-party payers obviously want to keep costs low, and with the passage of the ACA, the role of the third-party payer—and of “treatment guidelines” that can be followed more or less blindly—will only grow stronger.  Patients, moreover, increasingly see psychiatry as a medication-oriented specialty, thanks to direct-to-consumer advertising and our medication-obsessed culture.  Taken together, this means that psychiatrists may be passed over in favor of cheaper workers whose main task is to follow guidelines or protocols.  If so, most patients (unfortunately) wouldn’t even know the difference.

On the other hand, this trend could also present an opportunity for a revolution in psychiatry.  The predictions in the previous paragraph are based on two assumptions:  first, that psychiatric care requires medication, and second, that patients see the prescription of a drug as equivalent to a cure.  Psychiatry’s current leadership and the pharmaceutical industry have successfully convinced us that these statements are true.  But they need not be.  Instead, they merely represent one treatment paradigm—a paradigm that, for ever-increasing numbers of people, leaves much to be desired.

Preservation of psychiatry requires that psychiatrists find ways to differentiate themselves from midlevel providers in a meaningful fashion.  Psychiatrists frequently claim that they are already different from other mental health practitioners because they have gone to medical school and, therefore, are “real doctors.”  But this is a specious (and arrogant) argument.  It doesn’t take a “real doctor” to do a psychiatric interview, to compare a patient’s complaints to what’s written in the DSM (or what’s in one’s own memory banks), and to prescribe medication according to a guideline or flowchart.  Yet that’s what most psychiatric care is.  Sure, there are those cases in which successful treatment requires tapping the physician’s knowledge of pathophysiology, internal medicine, or even infectious disease, but these are rare—and most treatment settings don’t even allow the psychiatrist to investigate these dimensions anyway.

Thus, the sad reality is that today’s psychiatrists practice a type of medical “science” that others can grasp without four years of medical school and four years of psychiatric residency training.  So how, then, can psychiatrists provide something different—particularly when appointment lengths continue to dwindle and costs continue to rise?  To me, one answer is to revamp specialty training.  I received my training at two institutions with very different cultures and patient populations, but both shared a common emphasis on teaching medication management.  Did I need four years to learn how to prescribe drugs?  No.  In reality, practical psychopharmacology can be learned in a one-year (maybe even six-month) course; in any case, the most valuable knowledge comes from years of experience, something that only real life (and not a training program) can provide.

Beyond psychopharmacology, psychiatry training programs need to beef up psychotherapy training, something that experts have encouraged for years.  But it goes further than that: psychiatry trainees need hands-on experience with the recovery model, community resources and their delivery, addictive illness and recovery concepts, behavioral therapies, case management, and, yes, how to truly integrate medical care into psychiatry.  Furthermore, it wouldn’t hurt to give psychiatrists lessons in communication and critical-thinking skills, cognitive psychology principles, cultural sensitivity, economics, business management, and alternative medicine (much of which is “alternative” only because the mainstream says so).  And, to address a pet peeve of mine, trainees need greater exposure to the wide, natural variability among human beings in their intellectual, emotional, behavioral, perceptual, and physical characteristics and aptitudes—so we stop labeling everyone who walks in the door as “abnormal.”

One might argue that all of this sounds great, but that psychiatrists don’t get paid for those things.  True, we don’t.  At least not yet.  Nevertheless, a comprehensive approach to human wellness, taken by someone who has invested many years learning how to integrate these perspectives, is, in the long run, far more efficient than the current paradigm of discontinuous care, in which one person manages meds, another provides therapy, and another serves as a case manager—roles that can change abruptly due to systemic constraints and turnover.

If we psychiatrists want to defend our “turf,” we can start by reclaiming some of the turf we’ve given away to others.  But more importantly, we must also identify new turf and make it our own—not to provide duplicate, wasteful care, but instead to create a new treatment paradigm in which the focus is on the patient and the context in which he or she presents, and treatment involves only what is necessary (and what is likely to work for that particular individual).  Only a professional with a well-rounded background can bring this paradigm to light, and psychiatrists—who have invested the time, effort, and expense to devote their lives to the understanding and treatment of mental illness—are uniquely positioned to make it happen.


“Patient-Centered” Care and the Science of Psychiatry

May 30, 2012

When asked what makes for good patient care in medicine, a typical answer is that it should be “patient-centered.”  Sure, “evidence-based medicine” and expert clinical guidelines are helpful, but they only serve as the scientific foundation upon which we base our individualized treatment decisions.  What’s more important is how a disorder manifests in the patient and the treatments he or she is most likely to respond to (based on genetics, family history, biomarkers, etc.).  In psychiatry, there’s the additional need to target treatment to the patient’s unique situation and context—always founded upon our scientific understanding of mental illness.

It’s almost a cliché to say that “no two people with depression [or bipolar or schizophrenia or whatever] are the same.”  But when the “same” disorder manifests differently in different people, isn’t it also possible that the disorders themselves are different?  Not only does such a question have implications for how we treat each individual, it also impacts how we interpret the “evidence,” how we use treatment guidelines, and what our diagnoses mean in the first place.

For starters, every patient wants something different.  What he or she gets is usually what the clinician wants, which, in turn, is determined by the diagnosis and established treatment guidelines:  lifelong medication treatment, referral for therapy, forced inpatient hospitalization, etc.  Obviously, our ultimate goal is to eliminate suffering by relieving the patient’s symptoms, but shouldn’t the route we take to get there reflect the patient’s desires?  When a patient gets what he or she wants, shouldn’t this count as good patient care, regardless of what the guidelines say?

For instance, some patients just want a quick fix (e.g., a pill, ideally without frequent office visits), because they have only a limited amount of money (or time) they’re willing to use for treatment.  Some patients need to complete “treatment” to satisfy a judge, an employer, or a family member.  Some patients visit the office simply to get a disability form filled out or satisfy some other social-service need.  Some simply want a place to vent, or to hear from a trusted professional that they’re “okay.”  Still others seek intensive, long-term therapy even when it’s not medically justified.  Patients request all sorts of things, which often differ from what the guidelines say they need.

Sometimes these requests are entirely reasonable, cost-effective, and practical.  But we psychiatrists often feel a need to practice evidence- (i.e., science-) based medicine; thus, we take treatment guidelines (and diagnoses) and try to make them apply to our patients, even when we know they want—or need—something else entirely, or won’t be able to follow through on our recommendations.  We prescribe medications even though we know the patient won’t be able to obtain the necessary lab monitoring; we refer a patient for intensive therapy even though we know his or her insurance will cover only a handful of visits; we admit a suicidal patient to a locked inpatient ward even though we know the unpredictability of that environment may cause further distress; or we advise a child with ADHD and his family to undergo long-term behavioral therapy in conjunction with stimulants, even when we know this resource may be unavailable.

Guidelines and diagnoses are written by committee, and, as such, rarely apply to the specifics of any individual patient.  Thus, a good clinician uses a clinical guideline simply as a tool—a reference point—to provide a foundation for an individual’s care, just as a master chef knows a basic recipe but alters it according to the tastes he wishes to bring out or the ingredients in season.  A good clinician works outside the available guidelines for many practical reasons, not the least of which is the patient’s own belief system—what he or she thinks is wrong and how to fix it.  The same could be said for diagnoses themselves.  In truth, what’s written in the DSM is a model—a “case study,” if you will—against which real-world patients are observed and compared.  No patient ever fits a single diagnosis to a “T.”

Unfortunately, under the pressures of limited time, scarce resources, and the threat of legal action for a poor outcome, clinicians are more inclined to see patients for what they are than for who they are, and therefore adhere to guidelines even more closely than they’d like.  This corrupts treatment in many ways.  Diagnoses that don’t fit are handed out (e.g., “parity” diagnoses given in order to maintain reimbursement).  Treatment recommendations are made that are far too costly or complex for some patients to follow.  Services like disability benefits are maintained far beyond the period they’re needed (because diagnoses “stick”).  And tremendous resources are devoted to the ongoing treatment of patients who simply want (and would benefit from) only sporadic check-ins, or who, conversely, can afford ongoing care themselves.

The entire situation calls into question the value of treatment guidelines, as well as the validity of psychiatric diagnoses.  Our patients’ unique characteristics, needs, and preferences—i.e., what helps patients to become “well”—vary far more widely than the symptoms upon which official treatment guidelines were developed.  Similarly, what motivates a person to seek treatment differs so widely from person to person as to imply vastly different etiologies.

Optimal care, then, must indeed be “patient-centered.”  But truly patient-centered care must not only look beyond the DSM and established treatment guidelines; it must, at times, ignore them altogether.  What does this say about the validity, relevance, and applicability of the diagnoses and guidelines at our disposal?  And what does this say about psychiatry as a science?


What’s the Proper Place of Science in Psychiatry and Medicine?

April 29, 2012

On the pages of this blog I have frequently written about the “scientific” aspects of psychiatry and questioned how truly scientific they are.   And I’m certainly not alone.  With the growing outcry against psychiatry for its medicalization of human behavior and the use of powerful drugs to treat what’s essentially normal variability in our everyday existence, it seems as if everyone is challenging the evidence base behind what we do—except most of us who do it on a daily basis.

Psychiatrists are unique among medical professionals because we must play two roles at once.  On the one hand, we must be scientists—determining whether there’s a biological basis for a patient’s symptoms.  On the other hand, we must identify environmental or psychological precursors to a patient’s complaints and help to “fix” those, too.  However, today’s psychiatrists often eschew the latter approach—brushing off their patients’ internal or interpersonal dynamics, ignoring environmental and social influences—and rush instead to play the “doctor” card:  labeling, diagnosing, and prescribing.

Why do we do this?  We all know the obvious reasons:  shrinking appointment lengths, the influence of drug companies, psychiatrists’ increasing desire to see themselves as “clinical neuroscientists,” and so on.

But there’s another, less obvious reason, one which affects all doctors.  Medical training is all about science.  There’s a reason why pre-meds have to take a year of calculus, organic chemistry, and physics to get into medical school.  It’s not because doctors solve differential equations and perform redox reactions all day.  It’s because medicine is a science (or so we tell ourselves), and, as such, we demand a scientific, mechanistic explanation for everything from a broken toe to a myocardial infarction to a manic episode.  We do “med checks,” much as we might not want to, because that’s what we’ve been trained to do.  The same holds true in other medical specialties:  little emphasis is placed on talking and listening; instead, it’s all about data, numbers, mechanisms, outcomes, and the right drugs for the job.

Perhaps it’s time to rethink the whole “medical science” enterprise.  In much of medicine, paying more and more attention to biological measures—and the scientific evidence—hasn’t really improved outcomes.  “Evidence-based medicine,” in fact, is really just a way for payers and the government to create guidelines to reduce costs, not a way to improve individual patients’ care.  Moreover, we see examples all the time—in all medical disciplines—of the corruption of scientific data (often fueled by drug company greed) and very little improvement in patient outcomes.  Statins, for instance, are effective drugs for high cholesterol, but their widespread use in people with no other risk factors seems to confer no additional benefit.  Decades of research into appetite and metabolism haven’t eradicated obesity in our society.  A full-scale effort to elucidate the brain’s “reward pathways” hasn’t made a dent in the prevalence of drug and alcohol addiction.

Psychiatry suffers under the same scientific determinism.  Everything we call a “disease” in psychiatry could just as easily be called something else.  I’ve seen lots of depressed people in my office, but I can’t say for sure whether I’ve ever seen one with a biological illness called “Major Depressive Disorder.”  But that’s what I write in the chart.  If a patient in my med-management clinic tells me he feels better after six weeks on an antidepressant, I have no way of knowing whether it was due to the drug.  But that’s what I tell myself—and that’s usually what he believes, too.  My training encourages me to see my patients as objects, as collections of symptoms, and to interpret my “biological” interventions as having a far greater impact on my patients’ health than the hundreds or thousands of other phenomena each patient experiences between appointments with me.  Is this fair?

(This may explain some of the extreme animosity from the anti-psychiatry crowd—and others—against some very well-meaning psychiatrists.  With few exceptions, the psychiatrists I know are thoughtful, compassionate people who entered this field with a true desire to alleviate suffering.  Unfortunately, by virtue of their training, many have become uncritical supporters of the scientific model, making them easy targets for those who have been hurt by that very same model.)

My colleague Daniel Carlat, in his book Unhinged, asks the question: “Why do [psychiatrists] go to medical school? How do months of intensive training in surgery, internal medicine, radiology, etc., help psychiatrists treat mental illness?”  He lays out several alternatives for the future of psychiatric training.  One option is a hybrid approach that combines a few years of biomedical training with a few years of rigorous exposure to psychological techniques and theories.  Whether this would be acceptable to psychiatrists—many of whom wear their MD degrees as scientific badges of honor—or to psychologists—who might feel that their turf is being threatened—is anyone’s guess.

I see yet another alternative.  Rather than taking future psychiatrists out of medical school and teaching them an abbreviated version of medicine, let’s change medical school itself.  Let’s take some of the science out of medicine and replace it with what really matters: learning how to think critically, to communicate with patients (and each other), and to think about our patients in a greater societal context.  Soon the Medical College Admission Test (MCAT) will include more questions about cultural studies and ethics.  Medical education should go one step further and offer more exposure to economics, politics, management, health-care policy, decision-making skills, communication techniques, multicultural issues, patient advocacy, and, of course, how to interpret and critique the science that does exist.

We doctors will need a scientific background to interpret the data we see on a regular basis, but we must also acknowledge that our day-to-day clinical work requires very little science at all.  (In fact, all the biochemistry, physiology, pharmacology, and anatomy we learned in medical school is either (a) irrelevant, or (b) readily available on our iPhones or by a quick search of Wikipedia.)  We need to be cautious not to bring science into a clinical scenario simply because it’s easy or “it’s what we know,” especially when it provides no benefit to the patient.

So we don’t need to take psychiatry out of medicine.  Instead, we should bring a more enlightened, patient-centered approach to all of medicine, starting with formal medical training itself.  This would help all medical professionals to offer care that focuses on the person, rather than on an MRI or CT scan, a receptor profile or genetic polymorphism, a lab value, or a score on a checklist.  It would help us to be more accepting of our patients’ diversity and less likely to rush to a diagnosis.  It might even restore some respect for the psychiatric profession, both within and outside of medicine.  Sure, it might mean that fewer patients are labeled with “mental illnesses” (translating into less of a need for psychiatrists), but for the good of our patients—and for the future of our profession—it’s a sacrifice we ought to be willing to make.


The Problem With Organized Psychiatry

March 27, 2012

Well, it happened again.  I attended yet another professional conference this weekend (specifically, the annual meeting of my regional psychiatric society), and—along with all the talks, exhibits, and networking opportunities—came the call I’ve heard over and over again in venues like this one:  We must get psychiatrists involved in organized medicine.  We must stand up for what’s important to our profession and make our voices heard!

Is this just a way for the organization to make money?  One would be forgiven for drawing that conclusion.  Annual dues are not trivial: membership in the society costs up to $290 per person and also requires APA membership, which ranges from $205 to $565 per year—as much as $855 per year combined.  But money aside, the society firmly believes that we must protect ourselves and our profession, and that the best way to do so is to recruit as many members as possible and encourage them to stand up for our interests.

This raises one important question:  what exactly are we standing up for?  I think most psychiatrists would agree that we’d like to keep our jobs, and we’d like to get paid well, too.  (Oh, and benefits would be nice.)  But that’s about all the common ground that comes to mind.  The fact that we work in so many different settings makes it impossible for us to speak with a single voice or even (gasp!) to unionize.

Consider the five early-career psychiatrists who made up one of the conference’s panel discussions:  an academic psychiatrist, a county mental health psychiatrist, a jail psychiatrist, an HMO psychiatrist, and a cash-only private-practice psychiatrist.  What might they all have in common?  As it turns out, not much.  The HMO psychiatrist has a 9-to-5 job, a stable income, and extraordinary benefits, but a restricted range of services, a very limited medication formulary, and not much flexibility in what she can provide.  The private-practice guy, on the other hand, can do (and charge) essentially whatever he wants (a lot, as it turns out), but he also has to pay his own overhead.  The county psychiatrist wants his patients to have access to additional services (therapy, case management, housing, vocational training, etc.) that might be irrelevant—or wasteful—in other settings.  The academic psychiatrist is concerned about his ability to obtain research funding, to keep his inpatient unit afloat, and to satisfy his department chair.  The jail psychiatrist wants access to substance abuse treatment and other vital services, and to help inmates transition safely back into their communities.

Even within a given practice setting, different psychiatrists might disagree on what they want:  Some might want to see more patients, while delegating services like psychotherapy and case management to other providers.  On the other hand, some might want to spend more time with fewer patients and to be paid to provide these services themselves.  Some might want a more generous medication formulary, while others might argue that the benefits of medication are too exaggerated and want patients to have access to other types of treatment.  Finally, some might lobby for greater access to pharmaceutical companies and the benefits they provide (samples, coupons, lectures, meals, etc), while others might argue that pharmaceutical promotion has corrupted our field.

For most of the history of modern medicine, doctors have had a hard time “organizing” because there has been no entity worth organizing against.  Today, we have some definite targets:  the Affordable Care Act, big insurance companies, hospital employers, pharmacy benefits managers, state and local governments, malpractice attorneys, etc.  But not all doctors see these threats equally.  (Many, in fact, welcome the Affordable Care Act with open arms.)  So even though there are, for instance, several unanswered questions as to how the ACA (aka “Obamacare”) might change the health-care-delivery landscape, its ramifications are, in the eyes of most doctors, too far removed from the day-to-day aspects of patient care to worry about.  Like everything else on the list above, we shrug these threats off as nuisances—the costs of doing business—and devote our attention to our patients instead of agitating for change.

In psychiatry, the conflicts are particularly wide-ranging, and the stakes more poorly defined than elsewhere in medicine, making the targets of our discontent less clear.  One of the panelists put it best when she said:  “There’s a lot of white noise in psychiatry.”  In other words, we really can’t figure out where we’re headed—or even where we want to head.  At one extreme, for instance, are those psychiatrists who argue (sometimes convincingly) that all psychiatry is a farce, that diagnoses are socially constructed entities with no external validity, and that “treatment” produces more harm than good.  At the other extreme are the DSM promoters and their ilk, arguing for greater access to effective treatment, the medicalization of human behavior, and the early recognition and treatment of mental illness—sometimes even before it develops.

Until we psychiatrists determine what we want the future of psychiatric care to look like, it will be difficult for us to jump on any common bandwagon.  In the meantime, the future of our field will be determined by those who do have a well-formed agenda and who can rally around a common goal.  At present, that includes the APA, insurance companies, Big Pharma, and government.  As for the rest of us, we’ll just pick up whatever scraps are left over, and “organize” after we’ve finished our charts, returned our calls, completed the prior authorizations, filed the disability paperwork, paid our bills, and said good-night to our kids.


The Well Person

March 21, 2012

What does it mean to be “normal”?  We’re all unique, aren’t we?  We differ from each other in so many ways.  So what does it mean to say someone is “normal,” while someone else has a “disorder”?

This is, of course, the age-old question of psychiatric diagnosis.  The authors of the DSM-5, in fact, are grappling with this very question right now.  Take grieving, for example.  As I and others have written, grieving is “normal,” although its duration and intensity vary from person to person.  At some point, a line may be crossed, beyond which a person’s grief is no longer adaptive but dangerous.  Where that line falls, however, cannot be determined by a book or by a committee.

Psychiatrists ought to know who’s healthy and who’s not.  After all, we call ourselves experts in “mental health,” don’t we?  Surprisingly, I don’t think we’re very good at this.  We are acutely sensitive to disorder but have trouble identifying wellness.  We can recognize patients’ difficulties in dealing with other people but are hard-pressed to describe healthy interpersonal skills.  We admit that someone might be able to live with auditory hallucinations but we still feel an urge to increase the antipsychotic dose when a patient says she still hears “those voices.”   We are quick to point out how a patient’s alcohol or marijuana use might be a problem, but we can’t describe how he might use these substances in moderation.  I could go on and on.

Part of the reason for this might lie in how we’re trained.  In medical school we learn basic psychopathology and drug mechanisms (and, by the way, there are no drugs whose mechanism “maintains normality”—they all fix something that’s broken).  We learn how to do a mental status exam, complete with full descriptions of the behavior of manic, psychotic, depressed, and anxious people—but not “normals.”  Then, in our postgraduate training, our early years are spent with the most ill patients—those in hospitals, locked facilities, or emergency settings.  It’s not until much later in one’s training that a psychiatrist gets to see relatively more functional individuals in an office or clinic.  But by that time, we’re already tuned in to deficits and symptoms, and not to personal strengths, abilities, or resilience-promoting factors.

In a recent discussion with a colleague about how psychiatrists might best serve a large population of patients (e.g., in a “medical home” model), I suggested that perhaps each psychiatrist could be responsible for a defined panel of people (say, 300 or 400 individuals).  Our job would be to see each of these 300-400 people at least once a year, regardless of whether they have a psychiatric diagnosis.  Those who have emotional or psychiatric complaints, or who have a clear mental illness, could be seen more frequently; the others would get their annual checkup and a clean bill of (mental) health.  It would be sort of like an annual medical visit or a “well-baby visit” in pediatrics:  a way for a person to be seen by a doctor, implement preventive measures, and undergo screening to make sure no significant problems go unaddressed.
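A rough back-of-envelope—my own numbers, assuming a 400-person panel and roughly 48 working weeks per year—suggests the baseline workload would be modest:

$$ \frac{400 \ \text{patients}}{48 \ \text{weeks}} \approx 8 \ \text{well-person visits per week,} $$

or one to two per working day, leaving most of the schedule free for the smaller subset who need frequent follow-up.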

Alas, this would never fly in psychiatry.  Why not?  Because we’re too accustomed to seeing illness.  We’re too quick to interpret “sadness” as “depression”; to interpret “anxiety” or “nerves” as a cue for a benzodiazepine prescription; or to interpret “inattention” or poor work/school performance as ADHD.  I’ve even experienced this myself.  It is difficult to tell a person “you’re really doing just fine; there’s no need for you to see me, but if you want to come back, just call.”  For one thing, in many settings, I wouldn’t get paid for the visit if I said this.  But another concern, of course, is the fear of missing something:  Maybe this person really is bipolar [or whatever] and if I don’t keep seeing him, there will be a bad outcome and I’ll be responsible.

There’s also the fact that psychiatry is not a primary care specialty:  insurance plans don’t pay for an annual “well-person visit” with a psychiatrist.  Patients who come to a psychiatrist’s office are usually there for a reason.  Maybe the patient deliberately sought out the psychiatrist to ask for help.  Maybe the primary care provider saw something wrong and wanted the psychiatrist’s input.  In the former case, telling the person he or she is “okay” risks losing his or her trust (“but I just know something’s wrong, doc!“).  In the latter, it risks losing a referral source or professional relationship.

So how do we fix this?  I think we psychiatrists need to spend more time learning what “normal” really is.  There are no classes or textbooks on “Normal Adults.”  For starters, we can remind ourselves that the “normal” people around whom we’ve been living our lives may in fact have features that we might otherwise see as a disorder.  Learning to accept these quirks, foibles, and idiosyncrasies may help us to accept them in our patients.

In terms of using the DSM, we need to become more willing to use the V71.09 code, which means, essentially, “No diagnosis or condition.”  Many psychiatrists don’t even know this code exists.  Instead, we give “NOS” diagnoses (“not otherwise specified”) or “rule-outs,” which eventually become de facto diagnoses because we never actually take the time to rule them out!  A V71.09 should be seen as a perfectly valid (and reimbursable) diagnosis—a statement that a person has, in fact, a clean bill of mental health.  Now we just need to figure out what that means.

It is said that when Pope Julius II asked Michelangelo how he had sculpted David out of a marble slab, the artist replied: “I just removed the parts that weren’t David.”  In psychiatry, we spend too much time thinking about what’s not David and relentlessly chipping away.  We spend too little time thinking about the healthy figure that may already be standing right in front of our eyes.


Is The Criticism of DSM-5 Misguided? Part II

March 14, 2012

A few months ago, I wrote about how critics of the DSM-5 (led by Allen Frances, chair of the DSM-IV task force) might be barking up the wrong tree.  I argued that many of the problems the critics predict are not the fault of the book, but rather of how people might use it.  Admittedly, this sounds a lot like the “guns don’t kill people, people do” argument against gun control (as one of my commenters pointed out), or like a way for me to shift responsibility to someone else (as another commenter wrote).  But it’s a side of the issue that no one seems to be addressing.

The issue emerges again with the ongoing controversy over the “bereavement exclusion” in the DSM-IV.  Briefly, our current DSM says that grief over the loss of a loved one does not constitute major depression (as long as it lasts no more than two months) and, as such, should not be treated as depression.  However, some have argued that this exclusion should be removed in DSM-5.  According to Sidney Zisook, a UCSD psychiatrist, if we fail to recognize and treat clinical depression simply because it occurs in the two-month bereavement period, we do those people a “disservice.”  Likewise, David Kupfer, chair of the DSM-5 task force, defends the removal of the bereavement exclusion because “if patients … want help, they should not be prevented from getting [it] because somebody tells them that this is what everybody has when they have a loss.”

The NPR news program “Talk of the Nation” featured a discussion of this topic on Tuesday’s broadcast, but the guests and callers described the issue in a more nuanced (translation: “real-world”) fashion.  Michael Craig Miller, editor of the Harvard Mental Health Letter, described the grieving process this way: “The reality is that there is no firm line, and it is always a judgment call…. labels tend not to matter as much as the practical concern, that people shouldn’t feel a sense of shame.  If they feel they need some help to get through something, then they should ask for it.”  The need for treatment during bereavement, therefore, is not a yes/no, either/or proposition, but something individually determined.

This sentiment was echoed in a February 19 editorial in The Lancet by the psychiatrist/anthropologist Arthur Kleinman, who wrote that the experience of loss “is always framed by meanings and values, which themselves are affected by all sorts of things like one’s age, health, financial and work conditions, and what is happening in one’s life and in the wider world.”  Everyone seems to be saying pretty much the same thing:  people grieve in different ways, but those who are suffering should have access to treatment.

So why the controversy?  I can only surmise it’s because the critics of DSM-5 believe that mental health clinicians are unable to determine who needs help and, therefore, have to rely on a book to do so.  Listening to the arguments of Allen Frances et al., one would think that we have no ability to collaborate, empathize, and relate with our patients.  I think that attitude is objectionable to anyone who has made it his or her life’s work to treat the emotional suffering of others, and it underestimates the effort that many of us devote to the people we serve.

But in some cases the critics are right.  Sometimes clinicians do get answers from the book, or from some senseless protocol (usually written by a non-clinician).  One caller to the NPR program said she was handed an antidepressant prescription upon her discharge from the hospital after a stillbirth at 8 months of pregnancy.  Was she grieving?  Absolutely.  Did she need the antidepressant?  No one even bothered to figure that out.  It’s like the clinicians who see “bipolar” in everyone who has anger problems; “PTSD” in everyone who was raised in a turbulent household; or “ADHD” in every child who does poorly in school.

If a clinician observes a symptom and makes a diagnosis simply on the basis of a checklist from a book, or from a single statement by a patient, and not on the basis of his or her full understanding, experience, and clinical assessment of that patient, then the clinician (and not the book) deserves to take full responsibility for any negative outcome of that treatment.  [And if this counts as acceptable practice, then we might as well fire all the psychiatrists and hire high-school interns—or computers!—at a mere fraction of the cost, because they could do this job just as well.]

Could the new DSM-5 be misused?  Yes.  Drug companies could (and probably will) exploit it to develop expensive and potentially harmful drugs.  Researchers will use it to design clinical trials whose subjects, regrettably, may not resemble patients in the “real world.”  Unskilled clinicians will use it to make imperfect diagnoses and give inappropriate labels to their patients.  Insurance companies will use the labels to approve or deny treatment.  Government agencies will use it to determine everything from who’s “disabled” to who gets access to special services in preschool.  And, of course, the American Psychiatric Association will use it as its largest revenue-generating tool, written by authors with extensive drug-industry ties.

To me, those are the places where critics should focus their rage.  But remember:  to most good clinicians, it’s just a book—a field guide, helping us to identify potential concerns and to guide future research into mental illness and its treatment.  What we choose to do with such information depends upon our clinical acumen and our relationship with our patients.  To assume that clinicians will blindly use it to slap the “depression” label and force antidepressants on anyone whose spouse or parent just died “because the book said so” is insulting to those of us who actually care about our patients, and about what we do to improve their lives.


Do I Want A Philosopher As My Surgeon?

February 20, 2012

I recently stumbled upon an article describing upcoming changes to the Medical College Admission Test.  Also known as the MCAT, this is the exam that strikes fear into the hearts of pre-med students nationwide, due to its rigorous assessment of all the hard sciences we despised in college.  The MCAT can make or break an application to a prestigious medical school, and in a very real way, it can be the deciding factor in whether someone even becomes a doctor at all.

According to the article, the AAMC—the organization which administers the MCAT—will “stop focusing solely on biology, physics, statistics, and chemistry, and also will begin asking questions on psychology, ethics, cultural studies, and philosophy.”  The article goes on to say that questions will ask about such topics as “behavior and behavior change, cultural and social differences that influence well-being, and socioeconomic factors, such as access to resources.”

Response has been understandably mixed.  On at least two online physician discussion groups, doctors are denouncing the change.  Medicine is based in science, they argue, and the proposed changes simply encourage mediocrity and “beat the drum for socialized medicine.”  Others express frustration that this shift rewards not those who can practice good medicine, but rather those who can increase “patient satisfaction” scores.  Still others believe the new MCAT is simply a way to recruit a new generation of liberal-minded, government-employed docs (or, excuse me, “providers”) just in time for the roll-out of Obamacare.

I must admit that I can understand the resistance from the older generation of physicians.  In the interest of full disclosure, I was trained under the traditional medical model.  I learned anatomy, biochemistry, pathology, microbiology, etc., independently, and then had to synthesize the material myself, rather than through the “problem-based learning” format of today’s medical schools.  I also have an advanced degree in neuroscience, so I’m inclined to think mechanistically, to be critical of experimental designs, and always to search for alternate explanations of what I observe.

In spite of my own training, however, I think I might actually support the new MCAT format.  Medicine is different today.  Driven by factors that are beyond the control of the average physician, diagnostic tools are becoming more automated and treatment protocols more streamlined, even incorporated into our EMRs.  In today’s medicine, the doctor is no longer an independent, objective authority, but rather someone hired to follow a set of rules or guidelines.  We’re rapidly losing sight of (1) who the patient is, (2) what the patient wants, and (3) what unique skills we can provide to that patient.

Some examples:  The scientifically minded physician sees the middle-aged obese male with diabetes and hypertension as a guy with three separate diseases, each requiring its own treatment, often driven by guidelines that result in disorganized, fractured care.  He sees the 90-year-old woman with kidney failure, brittle osteoporosis, and congestive heart failure as a candidate for nephrology, orthopedics, and cardiology consults, driving up costs and the likelihood of iatrogenic injury.  In reality, the best care might come from, in the first example, a family doc with an emphasis on lifestyle change, and in the second, a geriatrician who understands the woman’s resources, needs, and support system.

Psychiatry presents its own unique challenges.  Personally, I believe we psychiatrists have been overzealous in our redefinition of the wide range of abnormal human behaviors as “illnesses” requiring treatment.  It would be refreshing to have an economist work in a community mental health clinic, helping to redirect scarce resources away from expensive antipsychotics or wasteful “disability” programs and towards job-training or housing services instead.  Maybe a sociologist would be less likely to see an HMO patient as “depressed” and needing meds, and more likely to see her as enduring complicated relationship problems amenable to therapy and to a reassessment of what she aspires to achieve in her life.

This may sound “touchy-feely” to some.  Trust me, ten years ago—at the peak of my enthusiasm for biological psychiatry—I would have said the same thing, and not in a kind way.  But I’ve since learned that psychiatry is touchy-feely.  And in their own unique ways, all specialties of medicine require a sophisticated understanding of human behavior, psychology, and the socioeconomic realities of the world in which we live and practice.  What medicine truly needs is that rare individual who can not only describe a Friedel-Crafts alkylation and define Hardy-Weinberg equilibrium, but who can also understand human learning and motivation and describe—even in a very rough way—what the heck “Obamacare” is all about anyway.

If I needed cardiac bypass surgery, would I want a philosophy major as my surgeon?  I honestly don’t care, as long as he or she has the requisite technical skill to put me under the knife.  But perhaps a philosopher would be just as well—or better—prepared to judge whether I needed the operation in the first place, how to evaluate my other options (if any), and—if I undergo the surgery—how to change my behavior so that I won’t need another one.  Better yet, maybe that philosopher would also want to change conditions so that fewer people suffer from coronary artery disease, or to determine a more equitable way to ensure that anyone who needs such a procedure can get it.

If we doctors continue to see ourselves as scientists first and foremost, we’ll be ordering tests and prescribing meds until we’re bankrupt.  At the other extreme, if we’re too people-friendly, patients will certainly like us, but we may have no impact on their long-term health.  Maybe the new MCAT is a way to encourage docs to bridge this gap, to make decisions based on everything that matters, even those factors that today’s medicine tends to ignore.  It’s not clear whether this will succeed, but it’s worth a try.

