Turf Wars

July 6, 2012

The practice of medicine has changed enormously in just the last few years.  While the upcoming implementation of the Affordable Care Act promises even further—and more dramatic—change, one topic which has received little popular attention is the question of exactly who provides medical services.  Throughout medicine, physicians (i.e., those with MD or DO degrees) are being replaced by others, whenever possible, in an attempt to cut costs and improve access to care.

In psychiatry, non-physicians have long been a part of the treatment landscape.  Most commonly today, psychiatrists focus on “medication management” while psychologists, psychotherapists, and others perform “talk therapy.”  But even the med management jobs—the traditional domain of psychiatrists, with their extensive medical training—are gradually being transferred to so-called “midlevel” providers.

The term “midlevel” (not always a popular term, by the way) refers to someone whose training lies “mid-way” between that of a physician and that of another provider (like a nurse, psychologist, social worker, etc.) but who is still licensed to diagnose and treat patients.  Midlevel providers usually work under the supervision of a physician, although that supervision is often not direct.  In psychiatry, there are a number of such midlevel professionals, with designations like PMHNP, PMHCNS, RNP, and APRN, who have become increasingly involved in “med management” roles.  This is partly because they tend to command lower salaries and are reimbursed at lower rates than physicians.  However, many physicians—and not just in psychiatry, by the way—have grown increasingly defensive (and, at times, downright angry, if some physician-only online communities are any indication) about this encroachment of “lesser-trained” practitioners onto their turf.

In my own experience, I’ve worked side-by-side with a few RNPs.  They performed their jobs quite competently.  However, their competence speaks less to the depth of their knowledge (which was impressive, incidentally) and more to the changing nature of psychiatry.  Indeed, psychiatry seems to have evolved to such a degree that the typical psychiatrist’s job—or “turf,” if you will—can be readily handled by someone with less (in some cases far less) training.  When you consider that most psychiatric visits comprise a quick interview and the prescription of a drug, it’s no surprise that someone with even just a rudimentary understanding of psychopharmacology and a friendly demeanor can do well 99% of the time.

This trend could spell (or hasten) the death of psychiatry.  More importantly, however, it could present an opportunity for psychiatry’s leaders to redefine and reinvigorate our field.

It’s easy to see how this trend could bring psychiatry to its knees.  Third-party payers obviously want to keep costs low, and with the passage of the ACA the role of the third-party payer—and “treatment guidelines” that can be followed more or less blindly—will be even stronger.  Patients, moreover, increasingly see psychiatry as a medication-oriented specialty, thanks to direct-to-consumer advertising and our medication-obsessed culture.  Taken together, this means that psychiatrists might be passed over in favor of cheaper workers whose main task will be to follow guidelines or protocols.  If so, most patients (unfortunately) wouldn’t even know the difference.

On the other hand, this trend could also present an opportunity for a revolution in psychiatry.  The predictions in the previous paragraph are based on two assumptions:  first, that psychiatric care requires medication, and second, that patients see the prescription of a drug as equivalent to a cure.  Psychiatry’s current leadership and the pharmaceutical industry have successfully convinced us that these statements are true.  But they need not be.  Instead, they merely represent one treatment paradigm—a paradigm that, for ever-increasing numbers of people, leaves much to be desired.

Preservation of psychiatry requires that psychiatrists find ways to differentiate themselves from midlevel providers in a meaningful fashion.  Psychiatrists frequently claim that they are already different from other mental health practitioners, because they have gone to medical school and, therefore, are “real doctors.”  But this is a specious (and arrogant) argument.  It doesn’t take a “real doctor” to do a psychiatric interview, to compare a patient’s complaints to what’s written in the DSM (or what’s in one’s own memory banks) and to prescribe medication according to a guideline or flowchart. Yet that’s what most psychiatric care is.  Sure, there are those cases in which successful treatment requires tapping the physician’s knowledge of pathophysiology, internal medicine, or even infectious disease, but these are rare—not to mention the fact that most treatment settings don’t even allow the psychiatrist to investigate these dimensions.

Thus, the sad reality is that today’s psychiatrists practice a type of medical “science” that others can grasp without four years of medical school and four years of psychiatric residency training.  So how, then, can psychiatrists provide something different—particularly when appointment lengths continue to dwindle and costs continue to rise?  To me, one answer is to revamp specialty training.  I received my training in two institutions with very different cultures and patient populations.  But both shared a common emphasis on teaching medication management.  Did I need four years to learn how to prescribe drugs?  No.  In reality, practical psychopharmacology can be learned in a one-year (maybe even six-month) course—not to mention the fact that the most valuable knowledge comes from years of experience, something that only real life (and not a training program) can provide.

Beyond psychopharmacology, psychiatry training programs need to beef up psychotherapy training, something that experts have encouraged for years.  But it goes further than that: psychiatry trainees need hands-on experience in the recovery model, community resources and their delivery, addictive illness and recovery concepts, behavioral therapies, case management, and, yes, how to truly integrate medical care into psychiatry.  Furthermore, it wouldn’t hurt to give psychiatrists lessons in communication and critical thinking skills, cognitive psychology principles, cultural sensitivity, economics, business management, alternative medicine (much of which is “alternative” only because the mainstream says so), and, my own pet peeve, greater exposure to the wide, natural variability among human beings in their intellectual, emotional, behavioral, perceptual, and physical characteristics and aptitudes—so we stop labeling everyone who walks in the door as “abnormal.”

One might argue that this all sounds great, but psychiatrists don’t get paid for those things.  True, we don’t.  At least not yet.  Nevertheless, a comprehensive approach to human wellness, taken by someone who has invested many years learning how to integrate these perspectives, is, in the long run, far more efficient than the current paradigm of discontinuous care, in which one person manages meds, another person provides therapy, and another person serves as a case manager—roles which can change abruptly due to systemic constraints and turnover.

If we psychiatrists want to defend our “turf,” we can start by reclaiming some of the turf we’ve given away to others.  But more importantly, we must also identify new turf and make it our own—not to provide duplicate, wasteful care, but instead to create a new treatment paradigm in which the focus is on the patient and the context in which he or she presents, and treatment involves only what is necessary (and which is likely to work for that particular individual).  Only a professional with a well-rounded background can bring this paradigm to light, and psychiatrists—those who have invested the time, effort, expense, and hard work to devote their lives to the understanding and treatment of mental illness—are uniquely positioned to bring this perspective to the table and make it happen.


Skin In The Game

April 8, 2012

We’ve all heard the saying “you get what you pay for.”  But in medicine, where the laws of economics don’t work like they do everywhere else, this maxim is essentially meaningless.  Thanks to our national health-insurance mess, some people pay very little (or nothing) out-of-pocket for a wide range of services, while others have to fork over huge sums of money for even the most basic of care.

Good arguments have been made for health insurance to become more like automobile or homeowners insurance.  Car insurance doesn’t cover oil changes and replacement tires, but it does pay for collisions and mishaps that may result if you don’t perform routine maintenance.  Homeowners insurance doesn’t pay the plumber, but might reimburse you for a flood that results from a blown valve on your water heater.

In medicine, we’ve never really seen this type of arrangement, apart from the occasional high-deductible plans and health savings accounts.  If you have a typical employer-sponsored health plan, not only do you pay little or nothing for your basic, routine care, but your insurance company has probably added even more services (massage, discounted gym memberships, “healthy eating” classes) in the name of preventive medicine and wellness.  (It’s almost as if your auto insurance paid for exactly what you’d do if you wanted to hang on to your car for 500,000 miles.)  When faced with this smorgasbord of free options, it’s easy to ignore the true underlying cost.  One way to reverse this trend is to ask patients to put some “skin in the game.”

This might happen in Medicaid, the insurance plan for low-income persons.  California Governor Jerry Brown, for instance, proposed that patients receiving Medi-Cal (the California version of Medicaid) should pay higher co-pay amounts for care which is currently free (or nearly so).  A $5 co-payment for an office visit, or a $50 co-pay for an emergency room visit might sound hefty, but it’s a bargain—even for a poor family—if it means the difference between life and death… or even just sickness and health.

Unfortunately, California’s proposal was shot down in February by the Obama administration on legal grounds: the co-pays “are neither temporary nor targeted at a specific population.”  There are other legitimate questions, too, about its feasibility.  Would people forgo routine checkups or neglect to fill prescriptions to save a few dollars, only to cost the system more money down the road?  Would doctors and hospitals even bother to bill people (or send accounts to collections) for such low sums?  Is it fair to charge people money for what some people think is a right and should be free to all?

Without commenting on the moral and political arguments for or against this plan, I believe that this is a proposal worth testing—and psychiatry may be precisely the specialty in which it holds the greatest promise.

Psychiatric illnesses are unique among medical conditions.  Effective treatment involves more than just taking a pill or subjecting oneself to a biological intervention.  It involves the patient wanting to get better and believing in the path he or she is taking to achieve that outcome (even if it violates what the provider thinks is best).  Call it placebo effect, call it “transference,” call it insight, call it what you will—the psychological aspect of the patient’s “buying in” (pardon the pun) to treatment is an important part of successful psychiatric care, just as important—perhaps more so—as the biological effect of the drugs we prescribe.

Like it or not, part of that “wanting” and “believing” also involves “paying.”  Payment needn’t be extreme, but it should be enough to be noticeable.  Because only when someone has “skin in the game” does he or she feel motivated to change.  (Incidentally, this doesn’t have to be money; it could be one’s time as well: agreeing to attend an hour of weekly psychotherapy, going to self-help groups 2 or 3 times a week, or simply driving or taking the bus to the doctor’s office can mean a great deal for one’s recovery.)  It’s more than symbolic; it can mean a lot.

In my own life, I’ll admit, I took medical care for granted.  I was fortunate enough to be a healthy child, and had parents with good jobs that provided excellent health insurance.  It wasn’t until my mid-20s that I actually had to pay for medical care—even my co-payments seemed shocking, since I had never really had to pay anything before then.  Over the years, as I struggled with my own mental health needs (which were, unfortunately, not covered by my insurance), I had to pay ever-larger amounts out of my own pocket.  I honestly believe that this was a major contributor to my successful recovery—for starters, I wanted to get to a point where it didn’t take such a huge bite out of my bank account!

The absence of a “buy-in” is most stark precisely where Governor Brown wants to change it:  in Medicaid patients.  In the community clinics where I have worked, patients can visit the office with zero co-payment (and no penalties for no-shows).  This includes medication and therapy visits.  Prescriptions are often free as well; some patients take 4 or 5 (or more) medications—at zero out-of-pocket cost—which can set the government back hundreds of dollars a month.  At the same time, patients with no health insurance (or even with insurance, like me) can’t access the same drugs because of their prohibitive price tag or byzantine insurance restrictions.  It’s nowhere near a level playing field.

To make matters worse, patients on Medicaid tend to be more medically ill and, almost by definition, face significant environmental stressors that detrimentally affect their physical and mental well-being.  In these patients, we give psychiatric diagnoses far too liberally (often simply to give patients the opportunity to keep coming to see us, not because we truly believe there’s a diagnosable “mental illness”), and allow them to keep coming in—for free—to get various forms filled out and to refill medications that cost a fortune and don’t treat anything, perpetuating their dependence on an already overburdened health care system.  In fact, these patients would be much better served if we expected (and helped) them to obtain—and yes, even pay for—counseling or social-work assistance to overcome their environmental stressors, or measures to promote physical and mental wellness.

In the end, the solution seems like common sense.  When you own something—whether a home, an automobile, a major appliance, whatever—you tend to invest much more time and money in it than if you were just renting or borrowing.  The same could be said for your own health.  I don’t think it’s unreasonable to ask people to pony up an investment—even a small one—in their psychological and physical well-being.  Not only does it make good fiscal sense, but the psychological effect of taking responsibility for one’s own health may result in even greater future returns on that investment.  For everyone.


The Problem With Organized Psychiatry

March 27, 2012

Well, it happened again.  I attended yet another professional conference this weekend (specifically, the annual meeting of my regional psychiatric society), and—along with all the talks, exhibits, and networking opportunities—came the call I’ve heard over and over again in venues like this one:  We must get psychiatrists involved in organized medicine.  We must stand up for what’s important to our profession and make our voices heard!!

Is this just a way for the organization to make money?  One would be forgiven for drawing this conclusion.  Annual dues are not trivial: membership in the society costs up to $290 per person, and also requires APA membership, which ranges from $205 to $565 per year.  But setting the money aside, the society firmly believes that we must protect ourselves and our profession.  Furthermore, the best way to do so is to recruit as many members as possible, and encourage members to stand up for our interests.

This raises one important question:  what exactly are we standing up for?  I think most psychiatrists would agree that we’d like to keep our jobs, and we’d like to get paid well, too.  (Oh, and benefits would be nice.)  But that’s about all the common ground that comes to mind.  The fact that we work in so many different settings makes it impossible for us to speak as a single voice or even (gasp!) to unionize.

Consider the following:  the conference featured a panel discussion by five early-career psychiatrists:  an academic psychiatrist; a county mental health psychiatrist; a jail psychiatrist; an HMO psychiatrist; and a cash-only private-practice psychiatrist.  What might all of those psychiatrists have in common?  As it turns out, not much.  The HMO psychiatrist has a 9-to-5 job, a stable income, and extraordinary benefits, but a restricted range of services, a very limited medication formulary and not much flexibility in what she can provide.  The private-practice guy, on the other hand, can do (and charge) essentially whatever he wants (a lot, as it turns out); but he also has to pay his own overhead.  The county psychiatrist wants his patients to have access to additional services (therapy, case management, housing, vocational training, etc) that might be irrelevant—or wasteful—in other settings.  The academic psychiatrist is concerned about his ability to obtain research funding, to keep his inpatient unit afloat, and to satisfy his department chair.  The jail psychiatrist wants access to substance abuse treatment and other vital services, and to help inmates make the transition back into their community safely.

Even within a given practice setting, different psychiatrists might disagree on what they want:  Some might want to see more patients, while delegating services like psychotherapy and case management to other providers.  On the other hand, some might want to spend more time with fewer patients and to be paid to provide these services themselves.  Some might want a more generous medication formulary, while others might argue that the benefits of medication are too exaggerated and want patients to have access to other types of treatment.  Finally, some might lobby for greater access to pharmaceutical companies and the benefits they provide (samples, coupons, lectures, meals, etc), while others might argue that pharmaceutical promotion has corrupted our field.

For most of the history of modern medicine, doctors have had a hard time “organizing” because there has been no entity worth organizing against.  Today, we have some definite targets: the Affordable Care Act, big insurance companies, hospital employers, pharmacy benefits managers, state and local governments, malpractice attorneys, etc.  But not all doctors see those threats equally.  (Many, in fact, welcome the Affordable Care Act with open arms.)  So even though there are, for instance, several unanswered questions as to how the ACA (aka “Obamacare”) might change the health-care-delivery landscape, the ramifications are, in the eyes of most doctors, too far-removed from the day-to-day aspects of patient care for any of us to worry about.  Just like everything else in the above list, we shrug them off as nuisances—the costs of doing business—and try to devote attention to our patients instead of agitating for change.

In psychiatry, the conflicts are particularly  wide-ranging, and the stakes more poorly defined than elsewhere in medicine, making the targets of our discontent less clear.  One of the panelists put it best when she said: “there’s a lot of white noise in psychiatry.”  In other words, we really can’t figure out where we’re headed—or even where we want to head.  At one extreme, for instance, are those psychiatrists who argue (sometimes convincingly) that all psychiatry is a farce, that diagnoses are socially constructed entities with no external validity, and that “treatment” produces more harm than good.  At the other extreme are the DSM promoters and their ilk, arguing for greater access to effective treatment, the medicalization of human behavior, and the early recognition and treatment of mental illness—sometimes even before it develops.

Until we psychiatrists determine what we want the future of psychiatric care to look like, it will be difficult for us to jump on any common bandwagon.  In the meantime, the future of our field will be determined by those who do have a well-formed agenda and who can rally around a common goal.  At present, that includes the APA, insurance companies, Big Pharma, and government.  As for the rest of us, we’ll just pick up whatever scraps are left over, and “organize” after we’ve finished our charts, returned our calls, completed the prior authorizations, filed the disability paperwork, paid our bills, and said good-night to our kids.


The Well Person

March 21, 2012

What does it mean to be “normal”?  We’re all unique, aren’t we?  We differ from each other in so many ways.  So what does it mean to say someone is “normal,” while someone else has a “disorder”?

This is, of course, the age-old question of psychiatric diagnosis.  The authors of the DSM-5, in fact, are grappling with this very question right now.  Take grieving, for example.  As I and others have written, grieving is “normal,” although its duration and intensity vary from person to person.  At some point, a line may be crossed, beyond which a person’s grief is no longer adaptive but dangerous.  Where that line falls, however, cannot be determined by a book or by a committee.

Psychiatrists ought to know who’s healthy and who’s not.  After all, we call ourselves experts in “mental health,” don’t we?  Surprisingly, I don’t think we’re very good at this.  We are acutely sensitive to disorder but have trouble identifying wellness.  We can recognize patients’ difficulties in dealing with other people but are hard-pressed to describe healthy interpersonal skills.  We admit that someone might be able to live with auditory hallucinations but we still feel an urge to increase the antipsychotic dose when a patient says she still hears “those voices.”   We are quick to point out how a patient’s alcohol or marijuana use might be a problem, but we can’t describe how he might use these substances in moderation.  I could go on and on.

Part of the reason for this might lie in how we’re trained.  In medical school we learn basic psychopathology and drug mechanisms (and, by the way, there are no drugs whose mechanism “maintains normality”—they all fix something that’s broken).  We learn how to do a mental status exam, complete with full descriptions of the behavior of manic, psychotic, depressed, and anxious people—but not “normals.”  Then, in our postgraduate training, our early years are spent with the most ill patients—those in hospitals, locked facilities, or emergency settings.  It’s not until much later in one’s training that a psychiatrist gets to see relatively more functional individuals in an office or clinic.  But by that time, we’re already tuned in to deficits and symptoms, and not to personal strengths, abilities, or resilience-promoting factors.

In a recent discussion with a colleague about how psychiatrists might best serve a large population of patients (e.g., in a “medical home” model), I suggested that perhaps each psychiatrist could be responsible for a defined panel of people (say, 300 or 400 individuals).  Our job would be to see each of these 300-400 people at least once a year, regardless of whether they have a psychiatric diagnosis or not.  Those who have emotional or psychiatric complaints or who have a clear mental illness could be seen more frequently; the others would get their annual checkup and their clean bill of (mental) health.  It would be sort of like your annual medical visit or a “well-baby visit” in pediatrics: a way for a person to be seen by a doctor, implement preventive measures, and undergo screening to make sure no significant problems go unaddressed.
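As a rough feasibility check on this proposal, here is a back-of-the-envelope sketch in Python.  The 300-400 panel size comes from the paragraph above; the number of clinic weeks, the share of higher-need patients, and their visit frequency are purely assumed for illustration.

```python
# Back-of-the-envelope sketch of the proposed "panel" model.
# Only the panel size comes from the text; every other number is an assumption.

panel_size = 350                 # midpoint of the proposed 300-400 person panel
working_weeks = 48               # assumed clinic weeks per year
high_need_fraction = 0.20        # assumed share of the panel needing frequent visits
extra_visits_per_high_need = 6   # assumed additional visits per year for that group

annual_checkups = panel_size     # everyone is seen at least once a year
extra_visits = panel_size * high_need_fraction * extra_visits_per_high_need
total_visits = annual_checkups + extra_visits

print(f"Total visits per year: {total_visits:.0f}")
print(f"Visits per week: {total_visits / working_weeks:.1f}")
# With these assumptions: (350 + 420) / 48 is roughly 16 visits per week,
# a load that could plausibly sit alongside other clinical duties.
```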

Alas, this would never fly in psychiatry.  Why not?  Because we’re too accustomed to seeing illness.  We’re too quick to interpret “sadness” as “depression”; to interpret “anxiety” or “nerves” as a cue for a benzodiazepine prescription; or to interpret “inattention” or poor work/school performance as ADHD.  I’ve even experienced this myself.  It is difficult to tell a person “you’re really doing just fine; there’s no need for you to see me, but if you want to come back, just call.”  For one thing, in many settings, I wouldn’t get paid for the visit if I said this.  But another concern, of course, is the fear of missing something:  Maybe this person really is bipolar [or whatever] and if I don’t keep seeing him, there will be a bad outcome and I’ll be responsible.

There’s also the fact that psychiatry is not a primary care specialty:  insurance plans don’t pay for an annual “well-person visit” with a psychiatrist.  Patients who come to a psychiatrist’s office are usually there for a reason.  Maybe the patient deliberately sought out the psychiatrist to ask for help.  Maybe their primary care provider saw something wrong and wanted the psychiatrist’s input.  In the former case, telling the person he or she is “okay” risks losing their trust (“but I just know something’s wrong, doc!“).  In the latter, it risks losing a referral source or professional relationship.

So how do we fix this?  I think we psychiatrists need to spend more time learning what “normal” really is.  There are no classes or textbooks on “Normal Adults.”  For starters, we can remind ourselves that the “normal” people around whom we’ve been living our lives may in fact have features that we might otherwise see as a disorder.  Learning to accept these quirks, foibles, and idiosyncrasies may help us to accept them in our patients.

In terms of using the DSM, we need to become more willing to use the V71.09 code, which means, essentially, “No diagnosis or condition.”  Many psychiatrists don’t even know this code exists.  Instead, we give “NOS” diagnoses (“not otherwise specified”) or “rule-outs,” which eventually become de facto diagnoses because we never actually take the time to rule them out!  A V71.09 should be seen as a perfectly valid (and reimbursable) diagnosis—a statement that a person has, in fact, a clean bill of mental health.  Now we just need to figure out what that means.

It is said that when Pope Julius II asked Michelangelo how he sculpted David out of a marble slab, he replied: “I just removed the parts that weren’t David.”  In psychiatry, we spend too much time thinking about what’s not David and relentlessly chipping away.  We spend too little time thinking about the healthy figure that may already be standing right in front of our eyes.


Two Psychiatries

March 12, 2012

A common—and ever-increasing—complaint of physicians is that so many variables interfere with our ability to diagnose and treat disease:  many patients have little or no access to preventive services; lots of people are uninsured; insurance plans routinely deny necessary care; drug formularies are needlessly restrictive; paperwork never ends; and the list goes on and on.  Beneath the frustration (and, perhaps, part of the source of it) is the fact that medical illness, for the most part, has absolutely nothing to do with these external burdens or socioeconomic inequalities.  Whether a patient is rich or poor, black or white, insured or uninsured—a disease is a disease, and everyone deserves the same care.

I’m not so sure whether the same can be said for psychiatry.  Over the last four years, I’ve spent at least part of my time working in community mental health (and have written about it here and here).  Before that, though—and for the majority of my training—I worked in a private, academic hospital setting.  I saw patients who had good health insurance, or who could pay for health care out of pocket.  I encountered very few restrictions in terms of access to medications or other services (including multiple types of psychotherapy, partial hospitalization programs, ECT, rTMS, clinical trials of new treatments, etc).  I was fortunate enough to see patients in specialty referral clinics, where I saw fascinating “textbook” cases of individuals who had failed to respond to years of intensive treatment.  It was exciting, stimulating, thought-provoking, and—for lack of a better word—academic.  (Perhaps it’s not surprising that this is the environment in which textbooks, and the DSM, are written.)

When I started working in community psychiatry, I tried to approach patients with the same curiosity and to employ the same diagnostic strategies and treatment approach.  It didn’t take long for me to learn, however, that these patients had few of the resources I had taken for granted elsewhere.  For instance, psychotherapy was difficult to arrange, and I was not reimbursed for doing it myself.  Access to medications depended upon capricious, unpredictable (and illogical) formularies.  Patients found it difficult to get to regular appointments or to come up with the co-payment, not to mention pay the electric bill or deal with crime in their neighborhood.  It was often hard to obtain a coherent and reliable history, and records obtained from elsewhere were often spotty and unhelpful.

It all made for a very challenging place in which to practice what I (self-righteously) called “true” psychiatry.  But maybe community psychiatry needs to be redefined.  Maybe the social stressors encountered by community psych patients—not the least of which is substandard access to “quality” medical and psychiatric services—result in an entirely different type of mental distress, and demand an entirely different type of intervention.

(I should point out that I did see, at times, examples of the same sort of mental illness I saw in the private hospital, and which did respond to the same interventions that the textbooks predicted.  While this reaffirmed my hope in the validity of [at least some] mental illnesses, this was a small fraction of the patients I saw.)

Should we alter our perceptions and definitions of illness—and of “psychiatry” itself—in public mental health?  Given the obstacles found in community psychiatry settings (absurdly brief appointment times; limited psychotherapy; poor prescription drug coverage; high rates of nonadherence and substance abuse; reliance on ERs for non-emergency care, often resulting in complicated medication regimens, like dangerous combinations of narcotics and benzodiazepines), should we take an entirely different approach?  Does it even make sense to diagnose the same disorders—not to mention put someone on “disability” for these disorders—when there are so many confounding factors involved?

One of my colleagues suggested: just give everyone an “adjustment disorder” diagnosis until you figure everything out.  Good idea, but you won’t get paid for diagnosing “adjustment disorder.”  So a more “severe” diagnosis must be given, followed closely thereafter by a medication (because many systems won’t let a psychiatrist continue seeing a patient unless a drug is prescribed).  Thus, in a matter of one or two office visits (totaling less than an hour in most cases), a Medicaid or uninsured patient might end up with a major Axis I diagnosis and medication(s), while the dozens of stressors that may have contributed to the office visit in the first place go unattended.

Can this change?  I sure hope so.  I firmly believe that everyone deserves access to mental health care.  (I must also point out that questionable diagnoses and inappropriate medication regimens can be found in any demographic.)  But we psychiatrists who work in community settings must not delude ourselves into thinking that what’s written in the textbooks or tested on our Board exams always holds true for the patients we see.  It’s almost as if we’re practicing a “different psychiatry,” one that requires its own diagnostic system, different criteria for “disability” determinations, a different philosophy of “psychotherapy,” and a much more conservative approach to medications.  (It might also help to perform clinical trials with subjects representative of those seen in community psychiatry, but given the complexity of these patients, this is highly unlikely.)

Fortunately, a new emphasis on the concept of “recovery” is taking hold in many community mental health settings.  This involves patient empowerment, self-direction, and peer support, rather than a narrow focus on diagnosis and treatment.  For better or for worse, such an approach relies less on the psychiatrist and more on peers and the patient him- or herself.  It also just seems much more rational, emphasizing what patients want and what helps them to succeed.

Whether psychiatrists—and community mental health as a whole—are able to follow this trend remains to be seen.  Unless we do, however, I fear that we may continue to mislead ourselves into believing that we’re doing good, when in fact we’re perpetuating a cycle of invalid diagnoses, potentially harmful treatment, and, worst of all, over-reliance on a system designed for a distinctly different type of “care” than what these individuals need and deserve.


Disruptive Technology Vs. The Disruptive Physician

February 26, 2012

The technological advances of just the last decade—mobile computing, social networking, blogging, tablet computers—were never thought to be “essential” when first introduced.  But while they started as novelties, their advantages became apparent, and today these are all part of our daily lives.  These are commonly referred to as “disruptive technologies”:  upstart developments that originally found their place in niche markets outside of the mainstream, but gradually “disrupted” the conventional landscape (and conventional wisdom) to become the established ways of doing things.

In our capitalist economy, disruptive technology is considered a very good thing.  It has made our lives easier, more enjoyable, and more productive.  It has created no small number of multimillionaires.  Entrepreneurs worldwide are constantly looking for the next established technologies to disrupt, usurp, and overturn, in hopes of a very handsome payoff.

In medicine, when we talk about “disruption,” the implication is not quite as positive.  In fact, the term “disruptive physician” is an insult, a black mark on one’s record that can be very hard to overcome.  It refers to someone who doesn’t cooperate, doesn’t follow established protocols, yells at people, discriminates against others, might abuse drugs or alcohol, or is generally incompetent.  These are not good things.

Really?  Now, no one would argue that substance abuse, profanity, spreading rumors, degrading one’s peers, or incompetence are good.  But what about the physician who “expresses political views that are disagreeable to the hospital administration”?  How about the physician who speaks out about deficiencies in patient care or patient safety, or who (legitimately) points out the incompetence of others?  How about the physician who prioritizes his own financial and/or business objectives over those of the hospital (when in fact it may be the only way to protect one’s ability to practice)?  All of these have been considered to be “disruptive” behaviors and could be used by highly conservative medical staffs to discipline physicians and preserve the status quo.

Is this fair?  In modern psychiatry, with its shrinking appointment lengths, overreliance on the highly deficient DSM, excessive emphasis on pharmacological solutions, and an increasing ignorance of developmental models and psychosocial interventions among practitioners, maybe someone should stand up and express opinions that the “powers that be” might consider unacceptable.  Someone should speak out on behalf of patient safety.  Someone should point out extravagant examples of waste, incompetence, or abuse of privilege.  Plenty of psych bloggers and a few renegade psychiatrists do express these opinions, but they (we?) are a minority.  I don’t know of any department chairmen or APA officers who are willing to be so “disruptive.”  As a result, we’re stuck with what we’ve got.

That’s not to say there aren’t any disruptive technologies in psychiatry.  What are they?  Well, medications, for instance.  Drug treatment “disrupted” psychoanalysis and psychotherapy, and it represents the foundation of most psychiatric treatment today.  Over the last 30 years, pharmaceutical companies (and prescribers) have earned millions of dollars from SSRIs, SNRIs, second-generation antipsychotics, psychostimulants, and many others.  But are people less mentally ill now than they were in the early 1980s?  Today—just in time for patent expirations!—we’re already seeing the next disruptive medication technologies, like those based on glutamate and glutamine signaling.  According to Stephen Stahl at the most recent NEI Global Congress, “we’ve beaten the monoamine horse sixteen ways to Sunday” (translation: we’ve milked everything we can out of the serotonin and dopamine stories) and glutamate is the next blockbuster drug target to disrupt the marketplace.

Another disruptive technology is the DSM.  I don’t have much to add to what’s already been written about the DSM-5 controversy except to point out what should be obvious:  We don’t need another DSM right now.  Practically speaking, a new DSM is absolutely unnecessary.  It will NOT help me treat patients any better.  But it’s coming, like it or not.  It will disrupt the way we have conducted our practices for the last 10 years (guided by the equally imperfect DSM-IV-TR), and it will put millions more dollars in the coffers of the APA.

And then, of course, there is the electronic medical record (EMR).  As with the DSM-5, I don’t need to have an EMR to practice psychiatry.  But some politicians in Washington, DC, decided (via the 2009 HITECH Act, and in preparation, some would say, for truly nationalized health care) that we should all use EMRs.  They even offered a financial incentive to doctors to do so (and are levying penalties for not doing so).  And despite some isolated benefits (which are more theoretical than practical, frankly), EMRs are disruptive.  Just not in the right way.  They disrupt work flow, the doctor-patient relationship, and, sometimes, common sense.  But they’re here to stay.

Advances in records & database management, in general, are the new disruptive technologies in medicine.  Practice Fusion, a popular (and ad-supported) EMR, has raised tens of millions of dollars in venture capital funding and employs over 150 people.  And what does it do with the data from the 28 million patients it serves?  It sells it to others, of course.  (And it can tell you fun things like which cities are most “lovesick.”  How’s that for ROI?)

There are many other examples of companies competing for your health-care dollar, whose products are often only peripherally related to patient care but which represent that holy grail of the “disruptive technology.”  There are online appointment scheduling services, telepsychiatry services, educational sites heavily sponsored by drug companies, doctor-only message boards (which sell doctors’ opinions to corporations), drug databases (again, sponsored by drug companies), and others.

In the interest of full disclosure, I use some of the above services, and some are quite useful.  I believe telemedicine, in particular, has great potential.  But at the end of the day, these market-driven novelties ignore some of the bigger, more entrenched problems in medicine, which only practicing docs see.  In my opinion, the factors that would truly help psychiatrists take better care of patients are of a different nature entirely:  improving psychiatric training (of MDs and non-MD prescribers); emphasizing recovery and patient autonomy in our billing and reimbursement policies; eliminating heavily biased pharmaceutical advertising (both to patients and to providers); revealing the extensive and unstated conflicts of interest among our field’s “key opinion leaders”; reforming the “disability” system and disconnecting it from Medicaid, particularly among indigent patients; and reallocating health-care resources more equitably.  But, as a physician, if I were to go to my superiors with any ideas to reform the above in my day-to-day work, I run the risk of being labeled “disruptive.”  When in fact, that would be my exact intent:  to disrupt some of the damaging, wasteful practices that occur in our practices almost every day.

I agree that disruption in medicine can be a good thing, and can advance the quality and cost-effectiveness of care.  But when most of the “disruptions” come from individuals who are not actively in the trenches, and who don’t know where needs are the greatest, we may be doing absolutely nothing to improve care.  Even worse, when we fail to embrace the novel ideas of physicians—but instead discipline those physicians for being “disruptive”—we risk punishing creativity, destroying morale, and fostering a sense of helplessness that, in the end, serves no one.


Do What You’re Taught

February 5, 2012

In my mail yesterday was an invitation to an upcoming 6-hour seminar on the topic of “Trauma, Addiction, and Grief.”  The course description included topics such as “models of addiction and trauma/information processing” and using these models to plan treatment; recognizing “masked grief reactions” and manifestations of trauma in clients; and applying several psychotherapeutic techniques to help a patient through addiction and trauma recovery.

Sound relevant?  To any psychiatrist dealing with issues of addiction, trauma, grief, anxiety, and mood—which is pretty much all of us—and interested in integrative treatments for the above, this would seem to be an entirely valid topic to learn.  And, I was pleased to learn that the program offers “continuing education” credit, too.

But upon reading the fine print, I saw that credit is not available for psychiatrists.  Instead, you can get credit if you’re one of the following mental health workers:  counselor, social worker, MFT, psychologist, addiction counselor, alcoholism & drug abuse counselor, chaplain/clergy, nurse, nurse practitioner, nurse specialist, or someone seeking “certification in thanatology” (whatever that is).  But not a psychiatrist.  In other words, psychiatrists need not apply.

Well, okay, that’s not entirely correct: psychiatrists can certainly attend, and, particularly if the program is a good one, my guess is that they would clearly benefit from it.  They just won’t get credit for it.

It’s not the first time I’ve encountered this.  Why do I think this is a big deal?  Well, in all of medicine, “continuing medical education” credit, or CME, is a rough guide to what’s important in one’s specialty.  In psychiatry, the vast majority of available CME credit is in psychopharmacology.  (As it turns out, in the same batch of mail, I received two “throwaway” journals which contained offers of free CME credits for reading articles about treating metabolic syndrome in patients on antipsychotics, and managing sexual side effects of antidepressants.)  Some of the most popular upcoming CME events are the Harvard Psychopharmacology Master Class and the annual Nevada Psychopharmacology Update.  And, of course, the NEI Global Congress in October is a can’t-miss event.  Far more psychiatrists will attend these conferences than a day-long seminar on “trauma, addiction, and grief.”  But which will have the most beneficial impact on patients?

To me, a more important question is: which will have the most beneficial impact on the future of the psychiatrist?  H. Steven Moffic, MD, recently wrote an editorial in Psychiatric Times in which he complained openly that the classical “territory” of the psychiatrist—diagnosis of mental disorder, psychotherapy, and psychopharmacology—has been increasingly ceded to others.  Well, this is a perfect example: a seminar whose content is probably entirely applicable to most psychiatric patients, being marketed primarily to non-psychiatrists.

I’ve always maintained—on this blog and in my professional life—that psychiatrists should be just as (if not more) concerned with the psychological, cultural, and social aspects of their patients’ experience as with their proper psychopharmacological management.  It’s also just good common sense, especially when viewed from the patient’s perspective.  But if psychiatrists (and our leadership) don’t advocate for the importance of this type of experience, then of course others will do this work instead of us.  We’re making ourselves irrelevant.

I’m currently experiencing this irony in my own personal life.  I’m studying for the American Board of Psychiatry and Neurology certification exam (the “psychiatry boards”), while looking for a new job at the same time.  On the one hand, while studying for the test I’m being forced to refresh my knowledge of human development, the history of psychiatry, the theory and practice of psychotherapy, the cognitive and psychological foundations of Axis I disorders, theories of personality, and many other topics.  That’s the “core” subject matter of psychiatry, which is (appropriately) what I’ll be tested on.  Simultaneously, however, the majority of the jobs I’m finding require none of that.  I feel like I’m being hired instead for my prescription pad.

Psychiatry, as the study of human experience and the treatment of a vast range of human suffering, can still be a fascinating field, and one that can offer so much more to patients.  To be a psychiatrist in this classic sense of the word, it seems more and more like one has to blaze an independent trail: obtain one’s own specialized training, recruit patients outside of the conventional means, and—unless one wishes to live on a relatively miserly income—charge cash.  And because no one seriously promotes this version of psychiatry, this individual is rapidly becoming an endangered species.

Maybe I’ll get lucky and my profession’s leadership will advocate more for psychiatrists to be better trained in (and better paid for) psychotherapy, or, at the very least, encourage educators and continuing education providers to emphasize this aspect of our training as equally relevant.  But as long as rank-and-file psychiatrists sit back and accept that our primary responsibility is to diagnose and medicate, and rabidly defend that turf at the expense of all else, then perhaps we deserve the fate that we’re creating for ourselves.


The Curious Psychology of “Disability”

December 28, 2011

I’ll start this post with a brief clinical vignette:

I have been seeing Frank, a 44 year-old man, on a regular basis for about six months.  He first came to our community clinic with generalized, nonspecific complaints of “anxiety,” feeling “uncomfortable” in public, and getting “angry all the time,” especially toward people who disagreed with him.  His complaints never truly met official criteria for a DSM-IV disorder, but he was clearly dissatisfied with much in his life and he agreed to continue attending biweekly appointments.  Frank once requested Xanax, by name, but I did not prescribe any medication; I never felt it was appropriate for his symptoms, and besides, he responded well to a combined cognitive-interpersonal approach exploring his regret over past activities as a gang member (and related incarcerations), feelings of being a poor father to his four daughters, and efforts to improve his fragile self-esteem.  Even though Frank still has not met criteria for a specific disorder (he currently holds the imprecise and imperfect label of “anxiety NOS”), he has shown significant improvement and a desire to identify and reverse some of his self-defeating behaviors.

Some of the details (including his name) have been changed to preserve Frank’s privacy.  However, I think the general story still gets across:  a man with low self-worth, guilty feelings, and self-denigration from his overidentification with past misdeeds, came to me for help.  We’ve made progress, despite a lack of medications, and the lack of a clear DSM-IV (or, most likely, DSM-5) diagnosis.  Not dramatic, not earth-shattering, but a success nonetheless.  Right?

Not so fast.

Shortly after our appointment last week, I received a request for Frank’s records from the Social Security Administration, along with a letter from a local law firm he hired to help him obtain benefits.  He had apparently applied for SSI disability and the reviewers needed to see my notes.

I should not have been surprised by this request.  After all, our clinic receives several of these requests each day.  In most cases, I don’t do anything; our clinic staff prints out the records, sends them to SSA, and the evaluation process proceeds generally without any further input from us (for a detailed description of the disability evaluation process, see this article).  But for some reason, this particular request was uniquely heartbreaking.  It made me wonder about the impact of the “disability” label on a man like Frank.

Before I go further, let me emphasize that I’m looking at Frank’s case from the viewpoint of a psychiatrist, a doctor, a healer.  I’m aware that Frank’s family is under some significant financial strain—as are many of my patients in this clinic (a topic about which I’ve written before)—and some sort of welfare or financial support, such as SSI disability income, would make his life somewhat easier.  It might even alleviate some of his anxiety.

However, in six months I have already seen a gradual improvement in Frank’s symptoms, an increase in his motivation to recover, and greater compassion for himself and others.  I do not see him as “disabled”; instead, I believe that with a little more effort, he may be able to handle his own affairs with competence, obtain some form of gainful employment, and raise his daughters as a capable father.  He, too, recognizes this and has expressed gratitude for the progress we have made.

There is no way, at this time, for me to know Frank’s motives for applying for disability.  Perhaps he simply saw it as a way to earn some supplementary income.  Perhaps he believes he truly is disabled (although I don’t think he would say this—and if he did, I wish he’d share it with me!).  I also have no evidence to suggest that Frank is trying to “game the system.”  He may be following the suggestions of a family member, a friend, or even another healthcare provider.  All of the above are worthwhile topics to discuss at our next appointment.

But once those records are sent, the evaluation process is out of my hands.  And even if Frank’s request is denied, I wonder about the psychological effect of the “disability” label on Frank’s desire to maintain the gains he has made.  Labels can mean a lot.  Psychiatric diagnoses, for instance, often needlessly and unfairly label people and lead to unnecessary treatment (and it doesn’t look like DSM-5 will offer much improvement).  Likewise, labels like “chronic,” “incurable,” and “disabled” can also have a detrimental impact, a sentiment expressed emphatically in the literature on “recovery” from mental illness.  The recovery movement, in fact, preaches that mental health services should promote self-direction, empowerment, and patient choice.  If, instead, we convey pessimism, hopelessness, and the stigma of “disability,” we may undermine those goals.

As a healer, I believe that my greatest responsibility and most difficult (although most rewarding) task is to instill hope and optimism in my patients.  Even though not all of them will be entirely “symptom-free” and able to function competently in every situation life hands them, and some may require life-long medication and/or psychosocial support (and, perhaps, disability income), I categorically refuse to believe that most are “disabled” in the sense that they will never be able to live productive, satisfying lives.

I would bet that most doctors and most patients agree with me.  With the proper supports and interventions, all patients (or “users” or “consumers,” if you prefer those terms) can have the opportunity to succeed, and potentially extricate themselves from the invisible chains of mental illness.  In Frank’s case, he is, or rather was, almost there.

But the fact that we as a society maintain an institution called “disability,” which provides benefits to people with a psychiatric diagnosis, requiring that they see a psychiatrist, and often requiring that they take medication, sends a very powerful—and potentially unhealthy—psychological message to those who can overcome their disability.  To Frank, it directly contradicts the messages of hope and encouragement I try to offer at each visit.  It makes him dependent upon me, rather than upon himself and his own resources and abilities.  In other words, to a man like Frank, disability is anti-recovery.

I don’t have an easy answer to this problem.  For starters, changing the name of “disability” to something like “temporary psychological material support”—a substitute label, nothing more—might be helpful.  Also, rewarding recipients (e.g., not repealing their benefits) for meeting predetermined milestones of recovery (part-time work, independent housing, etc) may also help.  But the more I think about the life-affirming and empowering potential of recovery, and about how we allocate our scarce resources, the more I believe that the recovery-based—as opposed to disability-based—practice of psychiatry has much more to offer the future of our patients, our profession, and our nation, than the current status quo.  For the sake of Frank’s recovery, and the recovery of countless other men and women like him, maybe it’s time to make that happen.


How To Get Rich In Psychiatry

August 17, 2011

Doctors choose to be doctors for many reasons.  Sure, they “want to help people,” they “enjoy the science of medicine,” and they give several other predictable (and sometimes honest) explanations in their med school interviews.  But let’s be honest.  Historically, becoming a doctor has been a surefire way to ensure prestige, respect, and a very comfortable income.

Nowadays, in the era of shrinking insurance reimbursements and increasing overhead costs, this is no longer the case.  If personal riches are the goal, doctors must graze other pastures.  Fortunately, in psychiatry, several such options exist.  Let’s consider a few.

One way to make a lot of money is simply by seeing more patients.  If you earn a set amount per patient—and you’re not interested in the quality of your work—this might be for you.  Consider the following, recently posted by a community psychiatrist to an online mental health discussion group:

Our county mental health department pays my clinic $170 for an initial evaluation and $80 for a follow-up.  Of that, the doctor is paid $70 or $35, respectively, for each visit.  There is a wide range of patients/hour since different doctors have different financial requirements and philosophies of care.  The range is 3 patients/hour to 6 patients/hour.

This payment schedule incentivizes output.  A doctor who sees three patients an hour makes $105/hr and spends 20 minutes with each patient.  A doctor who sees 6 patients an hour spends 10 minutes with each patient and makes $210.  One “outlier” doctor in our clinic saw, on average, 7 patients an hour, spending roughly 8 minutes with each patient and earning $270/hr.  His clinical notes reflected his rapid pace…. [but] Despite his shoddy care of patients, he was tolerated at the clinic because he earned a lot of money for the organization.
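The incentive math in that quote can be made explicit.  Below is a minimal sketch using the quoted figures (the doctor keeps $70 per initial evaluation and $35 per follow-up); the function and the visit mixes are mine, for illustration only.  Note that at a flat $35 per visit, seven patients an hour works out to $245, so the quoted $270 presumably reflects a schedule that includes some higher-paying initial evaluations.

```python
# Doctor's hourly take under the fee split quoted above:
# $70 per initial evaluation, $35 per follow-up visit.
FOLLOW_UP_FEE = 35
INITIAL_EVAL_FEE = 70

def hourly_earnings(patients_per_hour, initial_evals_per_hour=0):
    """Hourly pay, assuming all visits beyond the initial evals are follow-ups."""
    follow_ups = patients_per_hour - initial_evals_per_hour
    return follow_ups * FOLLOW_UP_FEE + initial_evals_per_hour * INITIAL_EVAL_FEE

for rate in (3, 6, 7):
    minutes_per_patient = 60 / rate
    print(f"{rate} patients/hour -> {minutes_per_patient:.1f} min each, "
          f"${hourly_earnings(rate)}/hr on follow-ups alone")
# Output: 3/hr -> $105, 6/hr -> $210, 7/hr -> $245.
# Pay rises only with volume, never with time spent per patient; that is the incentive.
```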

If this isn’t quite your cup of tea, you can always consider working in a more “legit” capacity, like the Department of Corrections.  You may recall the Bloomberg report last month about the prison psychiatrist who raked in over $800,000 in one year—making him the highest-paid state employee in California.  As it turns out, that was a “data entry error.”  (Bloomberg issued a correction.)  Nevertheless, the cat was out of the bag: prison psychiatrists make big bucks (largely for prescribing Seroquel and benzos).  With seniority and “merit-based increases,” one prison shrink in California was able to earn over $600,000—and that’s for a shrink who was found to be “incompetent.”  Maybe they pay the competent ones even more?

Another option is to be a paid drug speaker.  I’m not referring to the small-time local doc who gives bland PowerPoint lectures to his colleagues over a catered lunch of even blander ham-and-cheese sandwiches.  No sir.  I’m talking about the psychiatrists hired to fly all around the country to give talks at the nicest five-star restaurants in the nation’s biggest drug markets (er, cities).  The advantage here is that you don’t even have to be a great doc.  You just have to own a suit, follow a script, speak well, and enjoy good food and wine.

As most readers of this blog know, ProPublica recently published a list of the sums paid by pharmaceutical companies to doctors for these “educational programs.”  Some docs walked away with checks worth tens—or hundreds—of thousands of dollars.  And, not surprisingly, psychiatrists were the biggest offenders, er, earners.  I guess there is gold in explaining the dopamine hypothesis or the mechanism of neurotransmitter reuptake inhibition to yet another doctor.

Which brings me to perhaps the most tried-and-true way to convert one’s medical education into cash:  become an entrepreneur.  Discovering a new drug or unraveling a new disease process might revolutionize medical care and improve the lives of millions.  And throughout the history of medicine, numerous physician-researchers have converted their groundbreaking discoveries (or luck) into handsome profits.

Unfortunately, in psychiatry, paradigm shifts of the same magnitude have been few and far between.  Instead, the road to riches has been paved by the following formula: (1) “Buy in” to the prevailing disease model (regardless of its biological validity); (2) Develop a drug that “fits” into the model; (3) Find some way to get the FDA to approve it; (4) Promote it ruthlessly; (5) Profit.

In my residency program, for example, several faculty members founded a biotech company whose sole product was a glucocorticoid receptor antagonist which, they believed, might treat psychotic depression (the rationale being that stress hormones run high in depression).  The drug didn’t work (rendering their stock options worth only millions instead of tens of millions).  But that didn’t stop them.  They simply searched for other ways to make their compound relevant.  As I write, they’re looking at it as a treatment for Cushing’s syndrome (a more logical—if far less profitable—indication).

The psychiatry blogger 1boringoldman has written a great deal about the legions of esteemed academic psychiatrists who have gotten caught up in the same sort of rush (no pun intended) to bring new drugs to market.  His posts are definitely worth a read.  Frankly, I see no problem with psychiatrists lending their expertise to a commercial enterprise in the hopes of capturing some of the windfall from a new blockbuster drug.  Everyone else in medicine does it, so why not us?

The problem, as mentioned above, is that most of our recent psychiatric meds are not blockbusters.  Or, to be more accurate, they don’t represent major improvements in how we treat (or even understand) mental illness.  They’re largely copycat solutions to puzzles that may have very little to do with the actual pathology—not to mention psychology—of the conditions we treat.

To make matters worse, when huge investments in new drugs don’t pay off, investigators (including the psychiatrists expecting huge dividends) look for back-door ways to capture market share, rather than going back to the drawing board to refine their initial hypotheses.  Take, for instance, RCT Logic, a company whose board includes the ubiquitous Stephen Stahl and Maurizio Fava, two psychiatrists with extensive experience in clinical drug trials.  But the stated purpose of this company is not to develop novel treatments for mental illness; they have no labs, no clinics, no scanners, and no patients.  Instead, their mission is to develop clinical trial designs that “reduce the detrimental impact of the placebo response.”

Yes, that’s right: the new way to make money in psychiatry is not to find better ways to treat people, but to find ways to make relatively useless interventions look good.

It’s almost embarrassing that we’ve come to this point.  Nevertheless, as someone who has decidedly not profited (far from it!) from what I consider to be a dedicated, intelligent, and compassionate approach to my patients, I’m not surprised that docs who are “in it for the money” have exploited these alternate paths.  I just hope that patients and third-party payers wake up to the shenanigans of colleagues who are simply looking for the easiest payoff.

But I’m not holding my breath.

Footnote: For even more ways to get rich in psychiatry, see this post by The Last Psychiatrist.


Antidepressants: The New Candy?

August 9, 2011

It should come as no surprise to anyone paying attention to health care (not to mention modern American society) that antidepressants are very heavily prescribed.  They are, in fact, the second most widely prescribed class of medicine in America, with 253 million prescriptions written in 2010 alone.  Whether this means we are suffering from an epidemic of depression is another matter.  Indeed, a recent article questions whether we’re suffering from much of anything at all.

In the August issue of Health Affairs, Ramin Mojtabai and Mark Olfson present evidence that doctors are prescribing antidepressants at ever-higher rates.  Between 1996 and 2007, the percentage of all office visits to non-psychiatrists that included an antidepressant prescription rose from 4.1% to 8.8%.  The rates were even higher for primary care providers: from 6.2% to 11.5%.

But there’s more.  The investigators also found that in the majority of cases, antidepressants were given even in the absence of a psychiatric diagnosis.  In 1996, 59.5% of the antidepressant recipients lacked a psychiatric diagnosis.  In 2007, this number had increased to 72.7%.

In other words, nearly 3 out of 4 patients who visited a nonpsychiatrist and received a prescription for an antidepressant were not given a psychiatric diagnosis by that doctor.  Why might this be the case?  Well, as the authors point out, antidepressants are used off-label for a variety of conditions—fatigue, pain, headaches, PMS, irritability.  None of these uses, mind you, is supported by good data.

It’s possible that nonpsychiatrists might add an antidepressant to someone’s medication regimen because the patient “seems” depressed or anxious.  It is also true that primary care providers do sometimes manage mental illness, particularly in areas where psychiatrists are in short supply.  But remember, in the majority of cases the doctors did not even give a psychiatric diagnosis, which suggests that even if they performed a “psychiatric evaluation,” it was likely quick and haphazard.

And then, of course, there were probably some cases in which the primary care docs just continued medications that were originally prescribed by a psychiatrist—in which case perhaps they simply didn’t report a diagnosis.

But is any of this okay?  Some, like a psychiatrist quoted in a Wall Street Journal article on this report, argue that antidepressants are safe.  They’re unlikely to be abused, often effective (if only as a placebo), and dirt cheap (well, at least the generic SSRIs and TCAs are).  But some patients have had very real problems discontinuing them, or have suffered particularly troublesome side effects.

The increasingly indiscriminate use of antidepressants might also open the door to the (ab)use of other, more costly drugs with potentially more devastating side effects.  I continue to be amazed, for example, by the number of primary care docs who prescribe Seroquel (an antipsychotic) for insomnia while ignoring multiple other pharmacologic and nonpharmacologic options.  In my experience, in the vast majority of these cases, the (well-known) risks of increased appetite and elevated blood sugar were never discussed with the patient.  And then there are other antipsychotics, like Abilify and Seroquel XR, which are increasingly being used in primary care to “augment” antidepressants and will probably soon be prescribed as freely as the antidepressants themselves.  (Case in point: a senior medical student was shocked when I told her a few days ago that Abilify is an antipsychotic.  “I always thought it was an antidepressant,” she remarked, “after seeing all those TV commercials.”)

For better or for worse, the increased use of antidepressants in primary care may prove to be yet another blow to the foundation of biological psychiatry.  Doctors prescribe—and continue to prescribe—these drugs because they “work.”  It’s probably more accurate, however, to say that doctors and patients think they work.  And this may have nothing to do with biology.  As the saying goes, it’s the thought that counts.

Anyway, if this is true—and you consider the fact that these drugs are prescribed on the basis of a rudimentary workup (remember, no diagnosis was given 72.7% of the time)—then the use of an antidepressant probably has no more justification than the addition of a multivitamin, the admonition to eat less red meat, or the suggestion to “get more fresh air.”

The bottom line: If we’re going to give out antidepressants like candy, then let’s treat them as such.  Too much candy can be a bad thing—something that primary care doctors can certainly understand.  So if our patients ask for candy, then we need to find a substitute—something equally soothing and comforting—or provide them instead with a healthy diet of interventions to address the real issues, rather than masking those problems with a treat to satisfy their sweet tooth and bring them back for more.