Be Careful What You Wish For

September 2, 2012

Whatever your opinion of the Affordable Care Act, you must admit that it’s good to see the American public talk about reducing health care costs, offering more efficient delivery systems, and expanding health care services to more of our nation’s people.  There’s no easy (or cheap) way to provide health care to all Americans, particularly with the inefficiencies and absurdities that characterize our current health care system, but it’s certainly a goal worth pursuing.

However, there’s more to the story than just expanding coverage to more Americans.  There’s also the issue of improving the quality of that coverage.  If you listen to the politicians and pundits, you might get the impression that the most important goal is to insure more people, when in fact insurance may leave us with worse outcomes in the end.

Take, for example, an Op-Ed by Richard Friedman, MD, published in the New York Times in July.  The title says it all: “Good News For Mental Illness in Health Law.”  Dr Friedman makes the observations that seem de rigueur for articles like this one:  “Half of Americans will experience a major psychiatric disorder,” “mental illnesses are chronic lifelong diseases,” and so forth.  Friedman argues that the Affordable Care Act will—finally!—give these people the help they need.

Sounds good, right?  Well, not so fast.  First of all, there are two strategies in the ACA to insure more patients:  (1) the individual mandate, which requires people to purchase insurance through the state health-insurance exchanges, and (2) expansion of Medicaid, which may add another 11 million people to this public insurance plan.

So more people will be insured.  But where’s the evidence that health insurance—whether private or public—improves outcomes in mental health?  To be sure, in some cases, insurance can be critically important: the suicidal patient can be hospitalized for his safety; the substance-abusing patient can access rehabilitation services; and the patient with bipolar disorder can stay on her mood-stabilizing medication and keep her job, her family, and her life.  But there are many flavors of mental illness (i.e., not everything called “bipolar disorder” is bipolar disorder), and different people have different needs.  That’s the essence of psychiatry: understanding the person behind the illness and delivering treatment accordingly.  Individualized care is a lot harder when millions of people show up for it.

I’ve worked in insurance settings and Medicaid settings.  I’ve seen first-hand the emphasis on rapid treatment, the overwhelming urge to medicate (because that’s generally all we psychiatrists have time—and get paid—to do in such settings), and the underlying “chronic disease” assumption that keeps people persistently dependent on the psychiatric system.  This model does work for some patients.  But whether it “works” for all—or even most—patients seems to be less important than keeping costs low or enrolling as many people as possible for our services.

These demands are not only external; they have become part of the mindset of many psychiatrists.  I spent my last year of residency training, for instance, in a public mental health system, where I was a county employee and all patients were Medicaid recipients.  I walked away with a sense that what mattered was not the quality of care I provided, nor whether I developed treatment plans that incorporated people’s unique needs, nor whether my patients even got better at all.  Instead, what was most important (and what we were even lectured on!) was how to write notes that satisfied the payers, how to choose medications on the basis of a 20- or 30-minute (or shorter) assessment, and how not to exceed the 12 annual outpatient visits each patient was allotted.  To make matters worse, there was no way to discharge a patient without several months of red tape—regardless of whether the patient no longer needed our services, or was actually being harmed by the treatment.  The tide has definitely turned: tomorrow’s psychiatrists will answer to administrators’ rules, not the patients’ needs—and this generation of trainees will unfortunately never even know the difference.

The great irony in this whole debacle is that those who argue loudest for expansion of health care also tend to be those who argue for more humanistic and compassionate treatment.  In a similar vein, some of the most conscientious and compassionate doctors I know—many of them supporters of Obamacare—have deliberately chosen to work outside of insurance or Medicaid/Medicare altogether.  (I can’t say that I blame them, but isn’t that sort of like singing the praises of public education but sending your kids to private school?)  With more people obtaining mental health care through insurance “benefits,” the current model will become more widespread:  we’ll continue overprescribing unnecessary drugs to children and adults, institutionalizing people against their will even when less restrictive options may be more effective, offering lower reimbursements for psychotherapy and complementary services, and inviting practitioners with lesser training and experience (and whose experience is often limited exclusively to offering pills) to become the future face of mental health care.

Do psychiatry’s leaders say anything about these issues?  No.  When they’re not lamenting the lack of new pharmaceutical compounds or attacking those who offer valid critiques of modern-day psychiatry, they’re defending the imperfect DSM-5 and steadfastly preserving our right to prescribe drugs while the pharmaceutical industry is more than happy to create new (and costly) products to help us do so.  One solution may be to train psychiatrists to be cognizant of the extraordinary diversity among individuals who seek psychiatric help, to understand the limitations of our current treatments, and to introduce patients to alternatives.  While this may be more expensive up front, it may actually save money in the future:  for example, thorough diagnostic assessments by more seasoned and experienced providers may direct patients away from expensive office-and-medication-based treatment, and towards community-based services, self-help programs, talk therapy when indicated or desired by the patient, social work services, or any of a number of alternative resources geared towards true recovery.

Alas, no one seems to be offering that as an alternative.  Instead, we’re patting ourselves on the back for expanding health care coverage to more people and developing cost-saving initiatives of dubious benefit.  Somewhere along the way, we seem to have forgotten what “care” really means.  I wonder when we’ll start figuring that one out.


Turf Wars

July 6, 2012

The practice of medicine has changed enormously in just the last few years.  While the upcoming implementation of the Affordable Care Act promises even further—and more dramatic—change, one topic which has received little popular attention is the question of exactly who provides medical services.  Throughout medicine, physicians (i.e., those with MD or DO degrees) are being replaced by others, whenever possible, in an attempt to cut costs and improve access to care.

In psychiatry, non-physicians have long been a part of the treatment landscape.  Most commonly today, psychiatrists focus on “medication management” while psychologists, psychotherapists, and others perform “talk therapy.” But even the med management jobs—the traditional domain of psychiatrists, with their extensive medical training—are gradually being transferred to other so-called “midlevel” providers.

The term “midlevel” (not always a popular term, by the way) refers to someone whose training lies “mid-way” between that of a physician and another provider (like a nurse, psychologist, or social worker) but who is still licensed to diagnose and treat patients.  Midlevel providers usually work under the supervision of a physician, although that supervision is often not direct.  In psychiatry, there are a number of such midlevel professionals, with designations like PMHNP, PMHCNS, RNP, and APRN, who have become increasingly involved in “med management” roles.  This is partly because they tend to command lower salaries and are reimbursed at a lower rate than physicians.  However, many physicians—and not just in psychiatry, by the way—have grown increasingly defensive (and, at times, downright angry, if some physician-only online communities are any indication) about this encroachment of “lesser-trained” practitioners onto their turf.

In my own experience, I’ve worked side-by-side with a few RNPs.  They performed their jobs quite competently.  However, their competence speaks less to the depth of their knowledge (which was impressive, incidentally) and more to the changing nature of psychiatry.  Indeed, psychiatry seems to have evolved to such a degree that the typical psychiatrist’s job—or “turf,” if you will—can be readily handled by someone with less (in some cases far less) training.  When you consider that most psychiatric visits comprise a quick interview and the prescription of a drug, it’s no surprise that someone with even just a rudimentary understanding of psychopharmacology and a friendly demeanor can do well 99% of the time.

This trend could spell (or hasten) the death of psychiatry.  More importantly, however, it could present an opportunity for psychiatry’s leaders to redefine and reinvigorate our field.

It’s easy to see how this trend could bring psychiatry to its knees.  Third-party payers obviously want to keep costs low, and with the passage of the ACA the role of the third-party payer—and “treatment guidelines” that can be followed more or less blindly—will be even stronger.  Patients, moreover, increasingly see psychiatry as a medication-oriented specialty, thanks to direct-to-consumer advertising and our medication-obsessed culture.  Taken together, this means that psychiatrists might be passed over in favor of cheaper workers whose main task will be to follow guidelines or protocols.  If so, most patients (unfortunately) wouldn’t even know the difference.

On the other hand, this trend could also present an opportunity for a revolution in psychiatry.  The predictions in the previous paragraph are based on two assumptions:  first, that psychiatric care requires medication, and second, that patients see the prescription of a drug as equivalent to a cure.  Psychiatry’s current leadership and the pharmaceutical industry have successfully convinced us that these statements are true.  But they need not be.  Instead, they merely represent one treatment paradigm—a paradigm that, for ever-increasing numbers of people, leaves much to be desired.

Preservation of psychiatry requires that psychiatrists find ways to differentiate themselves from midlevel providers in a meaningful fashion.  Psychiatrists frequently claim that they are already different from other mental health practitioners, because they have gone to medical school and, therefore, are “real doctors.”  But this is a specious (and arrogant) argument.  It doesn’t take a “real doctor” to do a psychiatric interview, to compare a patient’s complaints to what’s written in the DSM (or what’s in one’s own memory banks), and to prescribe medication according to a guideline or flowchart.  Yet that’s what most psychiatric care is.  Sure, there are those cases in which successful treatment requires tapping the physician’s knowledge of pathophysiology, internal medicine, or even infectious disease, but these are rare—not to mention the fact that most treatment settings don’t even allow the psychiatrist to investigate these dimensions.

Thus, the sad reality is that today’s psychiatrists practice a type of medical “science” that others can grasp without four years of medical school and four years of psychiatric residency training.  So how, then, can psychiatrists provide something different—particularly when appointment lengths continue to dwindle and costs continue to rise?  To me, one answer is to revamp specialty training.  I received my training in two institutions with very different cultures and patient populations.  But both shared a common emphasis on teaching medication management.  Did I need four years to learn how to prescribe drugs?  No.  In reality, practical psychopharmacology can be learned in a one-year (maybe even six-month) course—not to mention the fact that the most valuable knowledge comes from years of experience, something that only real life (and not a training program) can provide.

Beyond psychopharmacology, psychiatry training programs need to beef up psychotherapy training, something that experts have encouraged for years.  But it goes further than that: psychiatry trainees need hands-on experience in the recovery model, community resources and their delivery, addictive illness and recovery concepts, behavioral therapies, case management, and, yes, how to truly integrate medical care into psychiatry.  Furthermore, it wouldn’t hurt to give psychiatrists lessons in communication and critical thinking skills, cognitive psychology principles, cultural sensitivity, economics, business management, alternative medicine (much of which is “alternative” only because the mainstream says so), and, my own pet peeve, greater exposure to the wide, natural variability among human beings in their intellectual, emotional, behavioral, perceptual, and physical characteristics and aptitudes—so we stop labeling everyone who walks in the door as “abnormal.”

One might argue: that sounds great, but psychiatrists don’t get paid for those things.  True, we don’t.  At least not yet.  Nevertheless, a comprehensive approach to human wellness, taken by someone who has invested many years learning how to integrate these perspectives, is, in the long run, far more efficient than the current paradigm of discontinuous care, in which one person manages meds, another person provides therapy, and another person serves as a case manager—roles which can change abruptly due to systemic constraints and turnover.

If we psychiatrists want to defend our “turf,” we can start by reclaiming some of the turf we’ve given away to others.  But more importantly, we must also identify new turf and make it our own—not to provide duplicate, wasteful care, but instead to create a new treatment paradigm in which the focus is on the patient and the context in which he or she presents, and treatment involves only what is necessary (and which is likely to work for that particular individual).  Only a professional with a well-rounded background can bring this paradigm to light, and psychiatrists—those who have invested the time, effort, expense, and hard work to devote their lives to the understanding and treatment of mental illness—are uniquely positioned to bring this perspective to the table and make it happen.


“Patient-Centered” Care and the Science of Psychiatry

May 30, 2012

When asked what makes for good patient care in medicine, a typical answer is that it should be “patient-centered.”  Sure, “evidence-based medicine” and expert clinical guidelines are helpful, but they only serve as the scientific foundation upon which we base our individualized treatment decisions.  What’s more important is how a disorder manifests in the patient and the treatments he or she is most likely to respond to (based on genetics, family history, biomarkers, etc.).  In psychiatry, there’s the additional need to target treatment to the patient’s unique situation and context—always founded upon our scientific understanding of mental illness.

It’s almost a cliché to say that “no two people with depression [or bipolar or schizophrenia or whatever] are the same.”  But when the “same” disorder manifests differently in different people, isn’t it also possible that the disorders themselves are different?  Not only does such a question have implications for how we treat each individual, it also impacts how we interpret the “evidence,” how we use treatment guidelines, and what our diagnoses mean in the first place.

For starters, every patient wants something different.  What he or she gets is usually what the clinician wants, which, in turn, is determined by the diagnosis and established treatment guidelines:  lifelong medication treatment, referral for therapy, forced inpatient hospitalization, etc.  Obviously, our ultimate goal is to eliminate suffering by relieving one’s symptoms, but shouldn’t the route we take to get there reflect the patient’s desires?  When a patient gets what he or she wants, shouldn’t this count as good patient care, regardless of what the guidelines say?

For instance, some patients just want a quick fix (e.g., a pill, ideally without frequent office visits), because they have only a limited amount of money (or time) they’re willing to use for treatment.  Some patients need to complete “treatment” to satisfy a judge, an employer, or a family member.  Some patients visit the office simply to get a disability form filled out or satisfy some other social-service need.  Some simply want a place to vent, or to hear from a trusted professional that they’re “okay.”  Still others seek intensive, long-term therapy even when it’s not medically justified.  Patients request all sorts of things, which often differ from what the guidelines say they need.

Sometimes these requests are entirely reasonable, cost-effective, and practical.  But we psychiatrists often feel a need to practice evidence- (i.e., science-) based medicine; thus, we take treatment guidelines (and diagnoses) and try to make them apply to our patients, even when we know they want—or need—something else entirely, or won’t be able to follow through on our recommendations.  We prescribe medications even though we know the patient won’t be able to obtain the necessary lab monitoring; we refer a patient for intensive therapy even though we know their insurance will only cover a handful of visits; we admit a suicidal patient to a locked inpatient ward even though we know the unpredictability of that environment may cause further distress; or we advise a child with ADHD and his family to undergo long-term behavioral therapy in conjunction with stimulants, even when we know this resource may be unavailable.

Guidelines and diagnoses are written by committee, and, as such, rarely apply to the specifics of any individual patient.  Thus, a good clinician uses a clinical guideline simply as a tool—a reference point—to provide a foundation for an individual’s care, just as a master chef knows a basic recipe but alters it according to the tastes he wishes to bring out or the ingredients that are in season.  A good clinician works outside the available guidelines for many practical reasons, not the least of which is the patient’s own belief system—what he or she thinks is wrong and how to fix it.  The same could be said for diagnoses themselves.  In truth, what’s written in the DSM is a model—a “case study,” if you will—against which real-world patients are observed and compared.  No patient ever fits a single diagnosis to a “T.”

Unfortunately, under the pressures of limited time, scarce resources, and the threat of legal action for a poor outcome, clinicians are more inclined to see patients for what they are than for who they are, and therefore adhere to guidelines even more closely than they’d like.  This corrupts treatment in many ways.  Diagnoses are given out which don’t fit (e.g., “parity” diagnoses must be given in order to maintain reimbursement).  Treatment recommendations are made which are far too costly or complex for some patients to follow.  Services like disability benefits are maintained far beyond the period they’re needed (because diagnoses “stick”).  And tremendous resources are devoted to the ongoing treatment of patients who simply want (and would benefit from) only sporadic check-ins, or who, conversely, can afford ongoing care themselves.

The entire situation calls into question the value of treatment guidelines, as well as the validity of psychiatric diagnoses.  Our patients’ unique characteristics, needs, and preferences—i.e., what helps patients to become “well”—vary far more widely than the symptoms upon which official treatment guidelines were developed.  Similarly, what motivates a person to seek treatment differs so widely from person to person that it implies vastly different etiologies.

To provide optimal care to a patient, care must indeed be “patient-centered.”  But truly patient-centered care must not only work around the DSM and established treatment guidelines, but also, frequently, ignore them altogether.  What does this say about the validity, relevance, and applicability of the diagnoses and guidelines at our disposal?  And what does this say about psychiatry as a science?


Addiction Psychiatry and The New Medicine

May 21, 2012

I have always believed that addictive disorders can teach us valuable lessons about other psychiatric conditions and about human behavior in general.  Addictions obviously involve behavior patterns, learning and memory processes, social influences, disturbed emotions, and environmental complexities.  Successful treatment of addiction requires attention to all of these facets of the disorder, and the addict often describes the recovery process not simply as being relieved of an illness, but as enduring a transformative, life-changing experience.

“Addiction psychiatry” is the area of psychiatry devoted to the treatment of these complicated disorders.  Certain trends in addiction psychiatry, however, seem to mirror larger trends in psychiatry as a whole.  Their impact on the future treatment of addictive behavior has yet to be determined, so it would be wise to evaluate these trends and determine whether we’re headed in a direction we truly want to go.

Neurobiology:  Addiction psychiatry—like the rest of psychiatry—is slowly abandoning the patient and is becoming a largely neuroscientific enterprise.  While it is absolutely true that neurobiology has something to do with the addict’s repetitive, self-destructive behavior, and “brain reward pathways” are clearly involved, these do not tell the whole story.  Addicts refer to “people, places, and things” as the triggers for drug and alcohol use, not “dopamine, nucleus accumbens, and frontal cortex.”  This isn’t an argument against studying the biology of addiction, but a reminder to keep due focus on the other factors that may affect one’s biology.  Virtually the same thing could also be said for most of what we treat in psychiatry; a multitude of factors might explain the presence of symptoms, but we’ve adopted a bias to think strictly in terms of brain pathways.

Medications:  Researchers in the addiction field (not to mention drug companies) devote much of their effort to discovering medications to treat addictions.  While they may stumble upon some useful adjunctive therapies, a “magic bullet” for addiction will probably never be found.  Moreover, I fear that the promise of medication-based treatments may foster a different sort of “dependence” among patients.  At this year’s APA Annual Meeting, for instance, I frequently heard the phrase “addictions are like other psychiatric disorders and therefore require lifelong treatment” (a statement which, by the way, is probably incorrect on TWO counts).  They weren’t talking about lifelong attendance at AA meetings or relapse-prevention strategies, but rather about the need to take Suboxone or methadone (or the next “miracle drug”) indefinitely to achieve successful recovery.  Thus, as with other psychiatric disorders—many of which might need only short-term interventions but usually result in chronic pharmacological management—the long-term management of addiction may reside not in the maintenance of a strong recovery program but in the taking of a pill.

New Providers:  Once a relatively unpopular subspecialty, addiction psychiatry is now a burgeoning field, thanks to this new focus on neurobiology and medication management—areas in which psychiatrists consider themselves well versed.  For example, a psychiatrist can become an “addiction psychiatrist” by receiving “Suboxone certification” (i.e., taking an 8-hour online course to obtain a special DEA license to prescribe buprenorphine, a partial opioid agonist).  I have nothing against Suboxone: patients who take daily Suboxone are far less likely to use opioids, more likely to remain in treatment, and less likely to suffer the consequences of opioid abuse.  In fact, one might argue that the effectiveness of Suboxone—and methadone, for that matter—for opioid dependence is far greater than that of SSRIs in the treatment of depression.  Many Suboxone prescribers, however, have little exposure to the psychosocial aspects—and hard work—involved in fully treating (or overcoming) an addiction; to them, a pill is simply a substitute for the abused opioid (and one which can itself be abused).  Nevertheless, prescribing a medication at monthly intervals—sometimes with little discussion about progress toward other recovery goals—resembles everything else we do in psychiatry; it’s no wonder that we’re drawn to it.

Patients:  Like many patients who seek psychiatric help, addicts might start to see “recovery” as a simple matter of making an appointment with a doctor and getting a prescription.  To be sure, many patients have used drugs like Suboxone or methadone to help them overcome deadly addictions, just as some individuals with major depression owe their lives to SSRIs or ECT.  But others have been genuinely hurt by these drugs.  Patients who have successfully discontinued Suboxone often say that it was the most difficult drug to stop—worse than any other opioid they had abused in the past.  Patients should always be reminded of the potential risks and dangers of treatment.  More importantly, we providers have an obligation to make patients aware of other ways of achieving sobriety and when to use them.  Strategies that don’t rely so heavily on the medical model might require a lot more work, but the payoffs may be much greater.

——

Addictions involve complex biological, psychological, and social dimensions that differ from person to person.  The response of the psychiatric profession has been to devote more research to the neurobiology of addictions and the development of anti-addiction drugs, potentially at the expense of exploring other aspects that may be more promising.  As expected, psychiatrists, pharmaceutical companies, third-party payers, and the general public are quickly buying into this model.

Psychiatry finds itself in a Catch-22.  On the one hand, psychiatry is often criticized for not being “medical,” and focusing on the biology of addiction is a good way to adhere to the medical model (and, perhaps, lead us to better pharmacotherapies).  On the other hand, psychiatric disorders—and especially addictions—are multifactorial in nature, and successful treatment often requires a comprehensive approach.  Fortunately, it may not yet be too late for psychiatry to retreat from a full-scale embrace of the medical model.  Putting the patient first sometimes means stepping away from the science.  And as difficult and non-intuitive as that may be, sometimes that’s where the healthiest recovery can be found.


“Trainwrecks”

May 15, 2012

One of the highlights of the American Psychiatric Association (APA) Annual Meeting is the Exhibit Hall.  Here, under bright lights and fancy multimedia displays, sponsors get to show off their new wares.  If anyone doubts that modern psychiatry is all about psychopharmacology, one visit to the APA Exhibit Hall would set them straight.  Far and away, the biggest and glitziest displays are those of Big Pharma, promising satisfaction and success—and legions of grateful patients—for prescribing their products.

At the 2012 Annual Meeting last week, I checked out most of the Pharma exhibits, mainly just to see what was in the pipeline.  (Not much, it turns out.)  I didn’t partake in any of the refreshments—lest I be reported to the Feds as the recipient of a $2 cappuccino or a $4 smoothie—but still felt somewhat like an awestruck Charlie Bucket in Willie Wonka’s miraculous Chocolate Factory.

One memorable exchange was at the Nuedexta booth.  Nuedexta, as readers of this blog may recall from a 2011 post, is a combination of dextromethorphan and quinidine, sold by Avanir Pharmaceuticals and approved for the treatment of “pseudobulbar affect,” or PBA.  PBA is a neurological condition found in patients with multiple sclerosis or stroke and characterized by uncontrollable laughing and crying.  While PBA can be a devastating condition, treatment options do exist.  In my blog post I wrote that “a number of medications, including SSRIs like citalopram, and tricyclic antidepressants (TCAs), are effective in managing the symptoms of PBA.”  One year later, Nuedexta still has not been approved by the FDA for any indication other than PBA.

In my discussion with the Avanir salesman, I asked the same question I posed to the Avanir rep one year ago:  “If I had a patient in whom I suspected PBA, I’d probably refer him to his neurologist for management of that condition—so why, as a psychiatrist, would I use this medication?”  The rep’s answer, delivered in that cool, convincing way that can only emerge from the salesman’s anima, was a disturbing insight into the practice of psychiatry in the 21st century:

“Well, you probably have some patients who are real trainwrecks, with ten things going on.  Chances are, there might be some PBA in there, so why not try some Nuedexta and see if it makes a difference?”

I nodded, thanked him, and politely excused myself.  (I also promptly tweeted about the exchange.)  I don’t know if his words comprised an official Nuedexta sales pitch, but the ease with which he shared it (no wink-wink, nudge-nudge here) suggested that it has proven successful in the past.  Quite frankly, it’s also somewhat ugly.

First of all, I refuse to refer to any of my patients as “trainwrecks.”  Doctors and medical students sometimes use this term to refer to patients with multiple problems who, as a result, are difficult to care for.  We’ve all used it, myself included.  But the more I empathize with my patients and try to understand their unique needs and wishes, the more I realize how condescending it is.  (Some might refer to me as a “trainwreck,” too, given certain aspects of my past.)  Furthermore, many of the patients with this label have probably—and unfortunately—earned it as a direct result of psychiatric “treatment.”

Secondly, as any good scientist will tell you, the way to figure out the inner workings of a complicated system is to take it apart and analyze its core features.  If a person presents an unclear diagnostic picture, clouded by a half-dozen medications and no clear treatment goals, the best approach is to take things away and see what remains, not to add something else to the mix and “see if it makes a difference.”

Third, the words of the Avanir rep demonstrate precisely what is wrong with our modern era of biological psychopharmacology.  Because the syndromes and “disorders” we treat are so vague, and because many symptoms can be found in multiple conditions—not to mention everyday life—virtually anything a patient reports could be construed as an indication for a drug, with a neurobiological mechanism to “explain” it.  This is, of course, exactly what I predicted for Nuedexta when I referred to it as a “pipeline in a pill” (a phrase that originally came from Avanir’s CEO).  But the same could be said for just about any drug a psychiatrist prescribes for an “emotional” or “behavioral” problem.  When ordinary complaints can be explained by tenuous biological pathways, it becomes far easier to rationalize the use of a drug, regardless of whether data exist to support it.

Finally, the strategy of “throw a medication into the mix and see if it works” is far too commonplace in psychiatry.  It is completely mindless and ignores any understanding of the underlying biology (if there is such a thing) of the illnesses we treat.  And yet it has become an accepted treatment paradigm.  Consider, for instance, the use of atypical antipsychotics in the treatment of depression.  Not only have the manufacturers of Abilify and Seroquel XR never explained how a dopamine partial agonist or antagonist (respectively) might help treat depression, but look at the way they use the results of STAR*D to help promote their products.  STAR*D, as you might recall, was a large-scale, multi-step study comparing multiple antidepressants which found that no single antidepressant was any better than any other.  (All were pretty poor, actually.)  The antipsychotic manufacturers want us to use their products not because they performed well in STAR*D (they weren’t even in STAR*D!!!) but because nothing else seemed to work very well.

If the most convincing argument we can make for a drug therapy is “well, nothing else has worked, so let’s try it,” this doesn’t bode well for the future of our field.  This strategy is mindless and sloppy, not to mention potentially dangerous.  It opens the floodgates for expensive and relatively unproven treatments which, in all fairness, may work in some patients, but add to the iatrogenic burden—and diagnostic confusion—of others.  It also permits Pharma (and the APA’s key opinion leaders) to maintain the false promise of a neurochemical solution for the human, personal suffering of those who seek our help.

This, in my opinion, is the real “trainwreck” that awaits modern psychiatry.  And only psychiatrists can keep us on the tracks.


Is The Joke On Me?

May 12, 2012

I recently returned from the American Psychiatric Association (APA) Annual Meeting in Philadelphia.  I had the pleasure of participating on a panel discussing “psychiatrists and the new media” with the bloggers/authors from Shrink Rap, and Bob Hsiung of dr-bob.org.  The panel discussion was a success.  Some other parts of the conference, however, left me with a sense of doubt and unease.  I enjoy being a psychiatrist, but whenever I attend these psychiatric meetings, I sometimes find myself questioning the nature of what I do.  At times I wonder whether everyone else knows something I don’t.  Sometimes I even ask myself:  is the joke on me?

Here’s an example of what I mean.  On Sunday, David Kupfer of the University of Pittsburgh (and task force chair of the forthcoming DSM-5) gave a talk on “Rethinking Bipolar Disorder.”  The room—a cavernous hall at the Pennsylvania Convention Center—was packed.  Every chair was filled, while scores of attendees stood in the back or sat on the floor, listening with rapt attention.  The talk itself was a discussion of “where we need to go” in the management of bipolar disorder in the future.  Dr Kupfer described a new view of bipolar disorder as a chronic, multifactorial disorder involving not just mood lability and extremes of behavior, but also endocrine, inflammatory, neurophysiologic, and metabolic processes that deserve our attention.  He emphasized the fact that in between mood episodes, and even before they develop, there is a range of “dysfunctional symptom domains”—involving emotions, cognition, sleep, physical symptoms, and others—that we psychiatrists should be aware of.  He also introduced a potential way to “stage” the development of bipolar disorder (similar to the way doctors stage tumors), suggesting that people at early stages might benefit from prophylactic psychiatric intervention.

Basically, the take-home message (for me, at least) was that in the future, psychiatrists will be responsible for treating other manifestations of bipolar disorder than those we currently attend to.  We will also need to look for subthreshold symptoms in people who might have a “prodrome” of bipolar disorder.

A sympathetic observer might say that Kupfer is simply asking us to practice good medicine, caring for the entire person rather than just his or her symptoms, and to prevent the development or recurrence of bipolar illness.  On the other hand, a cynic might look at these pronouncements as a sort of disease-mongering, encouraging us to uncover signs of “disease” where they might not exist.  But both of these conclusions overlook a much more fundamental question that, to me, remains unanswered.  What exactly is bipolar disorder anyway?

I realize that’s an extraordinarily embarrassing question for a psychiatrist to ask.  And in all fairness, I do know what bipolar disorder is (or, at least, what the textbooks and the DSM-IV say it is).  I have seen examples of manic episodes in my own practice, and in my personal life, and have seen how they respond to medications, psychotherapy, or the passage of time.  But those are the minority.  Over the years (although my career is still relatively young), I have also seen dozens, if not hundreds, of people given the diagnosis of “bipolar disorder” without a clear history of a manic episode—the defining feature of bipolar disorder, according to the DSM.

As I looked around the room at everyone concentrating on Dr Kupfer’s every word, I wondered to myself, am I the only one with this dilemma?  Are my patients “special” or “unique”?  Maybe I’m a bad psychiatrist; maybe I don’t ask the right questions.  Or maybe everyone else is playing a joke on me.  That’s unlikely; others do see the same sorts of patients I do (I know this for a fact, from my own discussions with other psychiatrists).  But nobody seems to have the same crisis of confidence that I do.  It makes me wonder whether we have reached a point in psychiatry when psychiatrists can listen to a talk like this one (or see patients each day) and accept diagnostic categories without paying any attention to the fact that our nosology says virtually nothing at all about the unique nature of each person’s suffering.  It seems that we accept the words of our authority figures without asking the fundamental question of whether they have any basis in reality.  Or maybe I’m just missing out on the joke.

As far as I’m concerned, no two “bipolar” patients are alike, and no two “bipolar” patients have the same treatment goals.  The same can be said for almost everything else we treat, from “depression” to “borderline personality disorder” to addiction.  In my opinion, lumping all those people together and assuming they’re all alike for the purposes of a talk (or, even worse, for a clinical trial) makes it difficult—and quite foolish—to draw any conclusions about that group of individuals.

What we need to do is to figure out whether what we call “bipolar disorder” is a true disorder in the first place, rather than accept it uncritically and start looking for yet additional symptom domains or biomarkers as new targets of treatment.  To accept the assumption that everyone currently with the “bipolar” label indeed has the same disorder (or any disorder at all) makes a mockery of the diagnostic process and destroys the meaning of the word.  Some would argue this has already happened.

But then again, maybe I’m the only one who sees it this way.  No one at Kupfer’s talk seemed to demonstrate any bewilderment or concern that we might be heading towards a new era of disease management without really knowing what “disease” we’re treating in the first place.  If this is the case, I sure would appreciate it if someone would let me in on the joke.


What’s the Proper Place of Science in Psychiatry and Medicine?

April 29, 2012

On the pages of this blog I have frequently written about the “scientific” aspects of psychiatry and questioned how truly scientific they are.   And I’m certainly not alone.  With the growing outcry against psychiatry for its medicalization of human behavior and the use of powerful drugs to treat what’s essentially normal variability in our everyday existence, it seems as if everyone is challenging the evidence base behind what we do—except most of us who do it on a daily basis.

Psychiatrists are unique among medical professionals, because we need to play two roles at once.  On the one hand, we must be scientists—determining whether there’s a biological basis for a patient’s symptoms.  On the other hand, we must identify environmental or psychological precursors to a patient’s complaints and help to “fix” those, too.  However, today’s psychiatrists often eschew the latter approach, brushing off their patients’ internal or interpersonal dynamics and ignoring environmental and social influences, rushing instead to play the “doctor” card:  labeling, diagnosing, and prescribing.

Why do we do this?  We all know the obvious reasons:  shrinking appointment lengths, the influence of drug companies, psychiatrists’ increasing desire to see themselves as “clinical neuroscientists,” and so on.

But there’s another, less obvious reason, one which affects all doctors.  Medical training is all about science.  There’s a reason why pre-meds have to take a year of calculus, organic chemistry, and physics to get into medical school.  It’s not because doctors solve differential equations and perform redox reactions all day.  It’s because medicine is a science (or so we tell ourselves), and, as such, we demand a scientific, mechanistic explanation for everything from a broken toe to a myocardial infarction to a manic episode.  We do “med checks,” as much as we might not want to, because that’s what we’ve been trained to do.  And the same holds true for other medical specialties, too.  Little emphasis is placed on talking and listening.  Instead, it’s all about data, numbers, mechanisms, outcomes, and the right drugs for the job.

Perhaps it’s time to rethink the whole “medical science” enterprise.  In much of medicine, paying more and more attention to biological measures—and the scientific evidence—hasn’t really improved outcomes.  “Evidence-based medicine,” in fact, is really just a way for payers and the government to create guidelines to reduce costs, not a way to improve individual patients’ care.  Moreover, we see examples all the time—in all medical disciplines—of the corruption of scientific data (often fueled by drug company greed) and very little improvement in patient outcomes.  Statins, for instance, are effective drugs for high cholesterol, but their widespread use in people with no other risk factors seems to confer no additional benefit.  Decades of research into understanding appetite and metabolism haven’t eradicated obesity in our society.  A full-scale effort to elucidate the brain’s “reward pathways” hasn’t made a dent in the prevalence of drug and alcohol addiction.

Psychiatry suffers under the same scientific determinism.  Everything we call a “disease” in psychiatry could just as easily be called something else.  I’ve seen lots of depressed people in my office, but I can’t say for sure whether I’ve ever seen one with a biological illness called “Major Depressive Disorder.”  But that’s what I write in the chart.  If a patient in my med-management clinic tells me he feels better after six weeks on an antidepressant, I have no way of knowing whether it was due to the drug.  But that’s what I tell myself—and that’s usually what he believes, too.  My training encourages me to see my patients as objects, as collections of symptoms, and to interpret my “biological” interventions as having a far greater impact on my patients’ health than the hundreds or thousands of other phenomena my patient experiences in between appointments with me.  Is this fair?

(This may explain some of the extreme animosity from the anti-psychiatry crowd—and others—against some very well-meaning psychiatrists.  With few exceptions, the psychiatrists I know are thoughtful, compassionate people who entered this field with a true desire to alleviate suffering.  Unfortunately, by virtue of their training, many have become uncritical supporters of the scientific model, making them easy targets for those who have been hurt by that very same model.)

My colleague Daniel Carlat, in his book Unhinged, asks the question: “Why do [psychiatrists] go to medical school? How do months of intensive training in surgery, internal medicine, radiology, etc., help psychiatrists treat mental illness?”  He lays out several alternatives for the future of psychiatric training.  One option is a hybrid approach that combines a few years of biomedical training with a few years of rigorous exposure to psychological techniques and theories.  Whether this would be acceptable to psychiatrists—many of whom wear their MD degrees as scientific badges of honor—or to psychologists—who might feel that their turf is being threatened—is anyone’s guess.

I see yet another alternative.  Rather than taking future psychiatrists out of medical school and teaching them an abbreviated version of medicine, let’s change medical school itself.  Let’s take some of the science out of medicine and replace it with what really matters: learning how to think critically, communicate with patients (and each other), and think about our patients in a greater societal context.  Soon the Medical College Admission Test (MCAT) will include more questions about cultural studies and ethics.  Medical education should go one step further and offer more exposure to economics, politics, management, health-care policy, decision-making skills, communication techniques, multicultural issues, patient advocacy, and, of course, how to interpret and critique the science that does exist.

We doctors will need a scientific background to interpret the data we see on a regular basis, but we must also acknowledge that our day-to-day clinical work requires very little science at all.  (In fact, all the biochemistry, physiology, pharmacology, and anatomy we learned in medical school is either (a) irrelevant, or (b) readily available on our iPhones or by a quick search of Wikipedia.)  We need to be cautious not to bring science into a clinical scenario simply because it’s easy or “it’s what we know,” particularly—especially—when it provides no benefit to the patient.

So we don’t need to take psychiatry out of medicine.  Instead, we should bring a more enlightened, patient-centered approach to all of medicine, starting with formal medical training itself.  This would help all medical professionals to offer care that focuses on the person, rather than an MRI or CT scan, receptor profile or genetic polymorphism, or lab value or score on a checklist.  It would help us to be more accepting of our patients’ diversity and less likely to rush to a diagnosis.  It might even restore some respect for the psychiatric profession, both within and outside of medicine.  Sure, it might mean that fewer patients are labeled with “mental illnesses” (translating into less of a need for psychiatrists), but for the good of our patients—and for the future of our profession—it’s a sacrifice that we ought to be willing to make.


Skin In The Game

April 8, 2012

We’ve all heard the saying “you get what you pay for.”  But in medicine, where the laws of economics don’t work like they do everywhere else, this maxim is essentially meaningless.  Thanks to our national health-insurance mess, some people pay very little (or nothing) out-of-pocket for a wide range of services, while others have to fork over huge sums of money for even the most basic of care.

Good arguments have been made for health insurance to become more like automobile or homeowners insurance.  Car insurance doesn’t cover oil changes and replacement tires, but it does pay for collisions and mishaps that may result if you don’t perform routine maintenance.  Homeowners insurance doesn’t pay the plumber, but might reimburse you for a flood that results from a blown valve on your water heater.

In medicine, we’ve never really seen this type of arrangement, apart from the occasional high-deductible plan or health savings account.  If you have a typical employer-sponsored health plan, not only do you pay little or nothing for your basic, routine care, but your insurance company has probably added even more services (massage, discounted gym memberships, “healthy eating” classes) in the name of preventive medicine and wellness.  (It’s almost as if your auto insurance paid for exactly what you’d do if you wanted to hang on to your car for 500,000 miles.)  When faced with this smorgasbord of free options, it’s easy to ignore the true underlying cost.  One way to reverse this trend is to ask patients to put some “skin in the game.”

This might happen in Medicaid, the insurance plan for low-income persons.  California Governor Jerry Brown, for instance, proposed that patients receiving Medi-Cal (the California version of Medicaid) should pay higher co-pay amounts for care which is currently free (or nearly so).  A $5 co-payment for an office visit, or a $50 co-pay for an emergency room visit might sound hefty, but it’s a bargain—even for a poor family—if it means the difference between life and death… or even just sickness and health.

Unfortunately, California’s proposal was shot down in February by the Obama administration on legal grounds: the co-pays “are neither temporary nor targeted at a specific population.”  There are other legitimate questions, too, about its feasibility.  Would people forgo routine checkups or neglect to fill prescriptions to save a few dollars, only to cost the system more money down the road?  Would doctors and hospitals even bother to bill people (or send accounts to collections) for such low sums?  Is it fair to charge people money for what some people think is a right and should be free to all?

Without commenting on the moral and political arguments for or against this plan, I believe that this is a proposal worth testing—and psychiatry may be precisely the specialty in which it may have the greatest promise.

Psychiatric illnesses are unique among medical conditions.  Effective treatment involves more than just taking a pill or subjecting oneself to a biological intervention.  It involves the patient wanting to get better and believing in the path he or she is taking to achieve that outcome (even if it violates what the provider thinks is best).  Call it placebo effect, call it “transference,” call it insight, call it what you will—the psychological aspect of the patient’s “buying in” (pardon the pun) to treatment is an important part of successful psychiatric care, just as important as—perhaps more so than—the biological effect of the drugs we prescribe.

Like it or not, part of that “wanting” and “believing” also involves “paying.”  Payment needn’t be extreme, but it should be enough to be noticeable, because only when someone has “skin in the game” does he or she feel motivated to change.  (Incidentally, this doesn’t have to be money; it could be one’s time as well:  agreeing to attend an hour of weekly psychotherapy, going to self-help groups 2 or 3 times a week, or simply driving or taking the bus to the doctor’s office can mean a great deal for one’s recovery.)  It’s more than symbolic; it can mean a lot.

In my own life, I’ll admit, I took medical care for granted.  I was fortunate enough to be a healthy child, and had parents with good jobs that provided excellent health insurance.  It wasn’t until my mid-20s that I actually had to pay for medical care—even my co-payments seemed shocking, since I had never really had to pay anything before then.  Over the years, as I struggled with my own mental health needs (which were, unfortunately, not covered by my insurance), I had to pay ever-larger amounts out of my own pocket.  I honestly believe that this was a major contributor to my successful recovery—for starters, I wanted to get to a point where it didn’t take such a huge bite out of my bank account!

The absence of a “buy-in” is most stark precisely where Governor Brown wants to change it:  in Medicaid patients.  In the community clinics where I have worked, patients can visit the office with zero co-payment (and no penalties for no-shows).  This includes medication and therapy visits.  Prescriptions are often free as well; some patients take 4 or 5 (or more) medications—at zero out-of-pocket cost—which can set the government back hundreds of dollars a month.  At the same time, patients with no health insurance (or even with insurance, like me) can’t access the same drugs because of their prohibitive price tag or byzantine insurance restrictions.  It’s nowhere near a level playing field.

To make matters worse, patients on Medicaid generally tend to be more medically ill and, almost by definition, face significant environmental stressors that detrimentally affect their physical and mental well-being.  In these patients, we give psychiatric diagnoses far too liberally (often simply to give patients the opportunity to keep coming to see us, not because we truly believe there’s a diagnosable “mental illness”), and allow them to keep coming in—for free—to get various forms filled out and to refill medications that cost a fortune and don’t treat anything, perpetuating their dependence on an already overburdened health care system.  In fact, these patients would be much better served if we expected (and helped) them to obtain—and yes, even pay for—counseling or social-work assistance to overcome their environmental stressors, or measures to promote physical and mental wellness.

In the end, the solution seems like common sense.  When you own something—whether a home, an automobile, a major appliance, whatever—you tend to invest much more time and money in it than if you were just renting or borrowing.  The same could be said for your own health.  I don’t think it’s unreasonable to ask people to pony up an investment—even a small one—in their psychological and physical well-being.  Not only does it make good fiscal sense, but the psychological effect of taking responsibility for one’s own health may result in even greater future returns on that investment.  For everyone.


Did The APA Miss A Defining Moment?

April 1, 2012

Sometimes an organization or individual facing a potential public-relations disaster can use the incident as a way to send a powerful message, as well as change the way that organization or individual is perceived.   I wonder whether the American Psychiatric Association (APA) may have missed its opportunity to do exactly that.

Several weeks ago, the CBS news program 60 Minutes ran a story with the provocative argument that antidepressants are no better than placebo.  Reporter Lesley Stahl highlighted the work of Irving Kirsch, a psychologist who has studied the placebo effect for decades.  He has concluded that most, and maybe all, of the benefit of antidepressants can be attributed to placebo.  Simply put, they work because patients (and their doctors) expect them to work.

Since then, the psychiatric establishment has offered several counterarguments.  All have placed psychiatry squarely on the defensive.  One psychiatrist (Michael Thase), interviewed on the CBS program, defended antidepressants, arguing that Kirsch “is confusing the results of studies with what goes on in practice.”  Alan Schatzberg, past APA president and former Stanford chairman, said at a conference last weekend (where he spoke about “new antidepressants”) that the APA executive committee was “outraged” at the story, glibly remarking, “In this nation, if you can attack a psychiatrist, you win a medal.”  The leadership of the APA has mounted an aggressive defense, too.  Incoming APA president and Columbia chairman Jeffrey Lieberman called Kirsch “mistaken and confused, … ideologically based, [and] … just plain wrong.”  Similarly, current APA president John Oldham called the story “irresponsible and dangerous [and] … at odds with common clinical experience.”

These are indeed strong words.  But they raise one very important question:  who or what exactly are these spokesmen defending?  Patients?  Psychiatrists?  Drugs?  It would seem to me that the leadership of a professional medical organization should be defending good patient care, or at the very least, greater opportunities for its members to provide good patient care.  The arguments put forth by APA leadership, however, seem to be defending none of the above.  Instead, they seem to be defending antidepressants.

For the purposes of this post, I won’t weigh in on the question of whether antidepressants work or not.  It’s a complicated issue with no easy answer (we’ll offer some insight in the May issue of the Carlat Psychiatry Report).  However, let’s just assume that the general public now has good reason to believe that current antidepressants are essentially worthless, thanks to the 60 Minutes story (not to mention—just a few weeks earlier—a report on NPR’s “Morning Edition,” as well as a two-part series by Marcia Angell in the New York Review of Books last summer).  Justifiably or not, our patients will be skeptical of psychopharmacology going forward.  If we psychiatrists are hell-bent on defending antidepressants, we’d better have even stronger reasons for doing so than simply “we know they work.”

But why are psychiatrists defending antidepressants in the first place?  If anyone should be defending antidepressants, it should be the drug companies, not psychiatrists.  Why didn’t 60 Minutes interview a Lilly medical expert to explain how the company did its initial studies of Prozac, or a Pfizer scientist to explain why patients should be put on Pristiq?  (Now that would have been fun!!)  I would have loved to hear Michael Thase—or anyone from the psychiatric establishment—say to Lesley Stahl:

“You know, Dr. Kirsch might just be onto something.  His research is telling us that maybe antidepressants really don’t work as well as we once thought.  As a result, we psychiatrists want drug companies to do better studies on their drugs before approval, and stop marketing their drugs so aggressively to us—and to our patients—until they can show us better data.  In the meantime we want to get paid to provide therapy along with—or instead of—medications, and we hope that the APA puts more of an emphasis on non-biological treatments for depression in the future.”

Wouldn’t that have been great?  For those of us (like me) who think the essence of depression is far more than faulty biology to be corrected with a pill, it would have been very refreshing to hear.  Moreover, it would help our field reclaim some of the “territory” we’ve been ceding to others (therapists, psychologists, social workers)—territory that may ultimately prove more relevant for most patients than drugs.  (By the way, I don’t mean to drive a wedge between psychiatry and these other specialties, as I truly believe we can coexist and complement one another.  But as I wrote in my last post, psychiatry really needs to stand up for something, and this would have been a perfect opportunity to do exactly that.)

To his credit, Dr. Oldham wrote an editorial two weeks ago in Psychiatric News (the APA’s weekly newsletter) explaining that he was asked to contribute to the 60 Minutes piece, but CBS canceled his interview at the last minute.  He wrote a response but CBS refused to post it on its website (the official APA response can be found here).  Interestingly, he went on to acknowledge that “good care” (i.e., whatever works) is what our patients need, and also conceded that, at least for “milder forms of depression,” the “nonspecific [placebo] effect dwarfs the specific [drug] effect.”

I think the APA would have a pretty powerful argument if it emphasized this message (i.e., that the placebo effect might be much greater than we believe, and that we should study it more closely—maybe even harness it for the sake of our patients) over what sounds like a knee-jerk defense of drugs.  It’s a message that would demand better science, prioritize our patients’ well-being, and perhaps even reduce treatment costs in the long run.  If, instead, we cry “foul” at anyone who criticizes medications, not only do we send the message that we put our faith in only one form of therapy (out of many), but we also become de facto spokespersons for the pharmaceutical industry.  If the APA wants to change that perception among the general public, this would be a great place to start.


The Problem With Organized Psychiatry

March 27, 2012

Well, it happened again.  I attended yet another professional conference this weekend (specifically, the annual meeting of my regional psychiatric society), and—along with all the talks, exhibits, and networking opportunities—encountered the call I’ve heard over and over again at venues like this one:  We must get psychiatrists involved in organized medicine.  We must stand up for what’s important to our profession and make our voices heard!!

Is this just a way for the organization to make money?  One would be forgiven for drawing this conclusion.  Annual dues are not trivial: membership in the society costs up to $290 per person and also requires APA membership, which ranges from $205 to $565 per year.  But setting the money aside, the society firmly believes that we must protect ourselves and our profession, and that the best way to do so is to recruit as many members as possible and encourage those members to stand up for our interests.

This raises one important question:  what exactly are we standing up for?  I think most psychiatrists would agree that we’d like to keep our jobs, and we’d like to get paid well, too.  (Oh, and benefits would be nice.)  But that’s about all the common ground that comes to mind.  The fact that we work in so many different settings makes it impossible for us to speak with a single voice or even (gasp!) to unionize.

Consider the following:  the conference featured a panel discussion by five early-career psychiatrists:  an academic psychiatrist; a county mental health psychiatrist; a jail psychiatrist; an HMO psychiatrist; and a cash-only private-practice psychiatrist.  What might all of those psychiatrists have in common?  As it turns out, not much.  The HMO psychiatrist has a 9-to-5 job, a stable income, and extraordinary benefits, but a restricted range of services, a very limited medication formulary, and not much flexibility in what she can provide.  The private-practice guy, on the other hand, can do (and charge) essentially whatever he wants (a lot, as it turns out), but he also has to pay his own overhead.  The county psychiatrist wants his patients to have access to additional services (therapy, case management, housing, vocational training, etc) that might be irrelevant—or wasteful—in other settings.  The academic psychiatrist is concerned about his ability to obtain research funding, to keep his inpatient unit afloat, and to satisfy his department chair.  The jail psychiatrist wants access to substance abuse treatment and other vital services, and to help inmates transition back into their communities safely.

Even within a given practice setting, different psychiatrists might disagree on what they want.  Some might want to see more patients, delegating services like psychotherapy and case management to other providers; others might want to spend more time with fewer patients and be paid to provide those services themselves.  Some might want a more generous medication formulary, while others might argue that the benefits of medication are exaggerated and want patients to have access to other types of treatment.  Finally, some might lobby for greater access to pharmaceutical companies and the benefits they provide (samples, coupons, lectures, meals, etc), while others might argue that pharmaceutical promotion has corrupted our field.

For most of the history of modern medicine, doctors have had a hard time “organizing” because there has been no entity worth organizing against.  Today, we have some definite targets: the Affordable Care Act, big insurance companies, hospital employers, pharmacy benefits managers, state and local governments, malpractice attorneys, etc.  But not all doctors see those threats equally.  (Many, in fact, welcome the Affordable Care Act with open arms.)  So even though there are, for instance, several unanswered questions about how the ACA (aka “Obamacare”) might change the health-care-delivery landscape, its ramifications are, in the eyes of most doctors, too far removed from the day-to-day work of patient care to be worth worrying about.  We shrug them off, like everything else on that list, as nuisances—the costs of doing business—and devote our attention to our patients instead of agitating for change.

In psychiatry, the conflicts are particularly wide-ranging, and the stakes more poorly defined than elsewhere in medicine, making the targets of our discontent less clear.  One of the panelists put it best when she said: “there’s a lot of white noise in psychiatry.”  In other words, we really can’t figure out where we’re headed—or even where we want to head.  At one extreme, for instance, are those psychiatrists who argue (sometimes convincingly) that all psychiatry is a farce, that diagnoses are socially constructed entities with no external validity, and that “treatment” produces more harm than good.  At the other extreme are the DSM promoters and their ilk, arguing for greater access to effective treatment, the medicalization of human behavior, and the early recognition and treatment of mental illness—sometimes even before it develops.

Until we psychiatrists determine what we want the future of psychiatric care to look like, it will be difficult for us to jump on any common bandwagon.  In the meantime, the future of our field will be determined by those who do have a well-formed agenda and who can rally around a common goal.  At present, that includes the APA, insurance companies, Big Pharma, and government.  As for the rest of us, we’ll just pick up whatever scraps are left over, and “organize” after we’ve finished our charts, returned our calls, completed the prior authorizations, filed the disability paperwork, paid our bills, and said good-night to our kids.