If Medications Don’t Work, Why Do I Prescribe Them Anyway?

January 4, 2013

I have a confession to make.  I don’t think what I do each day makes any sense.

Perhaps I should explain myself.  Six months ago, I started my own private psychiatry practice (one of the reasons why I haven’t posted much to this blog, but I hope to pick up the pace again!).  I made this decision after working for several years in various community clinics, county mental health systems, and three academic institutions.  I figured that an independent practice would permit me to be a more effective psychiatrist, as I wouldn’t be encumbered by the restrictions and regulations of most of today’s practice settings.

My experience has strengthened my long-held belief that people are far more complicated than diagnoses or “chemical imbalances”—something I’ve written about on this blog and with which most psychiatrists would agree.  But I’ve also made an observation that seems incompatible with one of the central dogmas of psychiatry.  To put it bluntly, I’m not sure that psychiatric medications work.

Before you jump to the conclusion that I’m just another disgruntled, anti-medication psychiatrist who thinks we’ve all been bought and misled by the pharmaceutical industry, please wait.  The issue here is, to me, a deeper one than saying that we drug people who request a pill for every ill.  In fact, it might even be a stretch to say that medications never work.  I’ve seen antidepressants, antipsychotics, mood stabilizers, and even interventions like ECT give results that are actually quite miraculous.

But here’s my concern: For the vast majority of my patients, when a medication “works,” a simple discussion may reveal numerous other plausible explanations for the clinical response.  And when you consider the fact that no two people “benefit” in quite the same way from the same drug, it becomes even harder to say what’s really going on.  There’s nothing scientific about this process whatsoever.

And then, of course, there are the patients who just don’t respond at all.  This happens so frequently I sometimes wonder whether I’m practicing psychiatry wrong, or whether my patients are playing a joke on me.  But no, as far as I can tell, I’m doing things right: I prescribe appropriately, I use proper doses, and I wait long enough to see a response.  My training is up-to-date; I’ve even been invited to lecture at national conferences about psychiatric meds.  I can’t be that bad at psychiatry, can I?

Probably not.  So if I assume that I’m not a complete nitwit, and that I’m using my tools correctly, I’m left to ask a question I never thought I’d ask:  is psychopharmacology just one big charade?  **

Maybe I feel this way because I’m not necessarily looking for medications to have an effect in the first place.  I want my patients to get better, no matter what that entails.  I believe that treatment is a process, one in which the patient (not just his or her chemistry) is central.  When drugs “work,” several factors might explain why, and by the same token, when drugs don’t work, it might mean that something else needs to be treated instead—rather than simply switching to a different drug or changing the dose.  Indeed, over the course of several sessions with a patient, many details inevitably emerge:  persistent anxiety, secretive substance abuse, a history of trauma, an ongoing conflict with a spouse, or a medical illness.  These often deserve just as much attention as the initial concern, if not more.

Although our understanding of the pathophysiology of mental illness is pure conjecture, prescribing a medication (at least at present) is an acceptable intervention.  What happens next is much more important.  I believe that prescribers should continue to collect evidence and adjust their hypotheses accordingly.  Unfortunately, most psychopharmacologists rarely take the time to discuss issues that can’t be explained by neurochemistry (even worse, they often try to explain all issues in terms of unproven neurochemistry), and dwindling appointment times mean that those who actually want to explore other causes don’t have the chance to do so.

So what’s a solution?  This may sound extreme, but maybe psychiatry should reject the “biochemical model” until it’s truly “biochemical”—i.e., until we have ways of diagnosing, treating, and following illnesses as we do in most of the rest of medicine.  In psychiatry, the use of medications and other “somatic” treatments is based on interview, gut feeling, and guesswork—not biology.  That doesn’t mean we can’t treat people, but we shouldn’t profess to offer a biological solution when we don’t know the nature of the problem.  We should admit our ignorance.

It would also help to allow (if not require) more time with psychiatric patients.  This is important.  If I only have 15-20 minutes with a patient, I don’t have time to ask about her persistent back pain, her intrusive brother-in-law, or her cocaine habit.  Instead, I must restrict my questions to those that pertain to the drug(s) I prescribed at the last visit.  This, of course, creates the perfect opportunity for confirmation bias—where I see what I expect to see.

We should also make an effort to educate doctors and patients alike about how little we actually know.  The subjects in trials to obtain FDA approval do NOT resemble real-world patients and are not evaluated or treated like real-world patients (and this is unlikely to change anytime soon because it works so well for the drug companies).  Patients should know this.  They should also know that the reliability of psychiatric diagnosis is poor in the first place, and that psychiatric illnesses have no established biochemical basis with which to guide treatment.

Finally, I should say that even though I call myself a psychiatrist and I prescribe drugs, I do not believe I’m taking advantage of my patients by doing so.  All of my patients are suffering, and they deserve treatment.  For some, drugs may play a key role in their care.  But when I see my entire profession move towards a biochemical approach—without any good evidence for such a strategy, and without a fair assessment of alternative explanations for behavior—and see, in my own practice, how medications provide no real benefit (or, frequently, harm) compared with other treatments, I have to wonder whether we’ve gone WAY beyond what psychopharmacology can truly offer, and whether there’s any way to put some logic back into what we call psychiatric treatment.

** There are areas in which psychopharmacology is most definitely not a “charade.”  These would include the uses of benzodiazepines, psychostimulants, and opioids like methadone and Suboxone.  With each of these agents, the expected effect is quite predictable, and they can be very effective drugs for many people.  Unfortunately, each of these can have an effect even in the absence of a diagnosis, and—probably not coincidentally—each has a potential for abuse.


Explain To Me Again Why Psychologists Can’t Prescribe Meds?

November 25, 2012

Consider the following two clinical scenarios:


A.  William, a 62-year-old accountant, has been feeling “depressed” since his divorce 5 years ago.  His practice, he says, is “falling apart,” as he has lost several high-profile clients and he believes it’s “too late” for his business to recover.  His adult son and daughter admire him greatly, but his ex-wife denigrates him and does everything she can to keep their children from seeing him.  William spends most of his days at his elderly parents’ house, a two-hour drive away, where he sleeps in the room (and bed) he occupied in his childhood.

William has been seeing Dr Moore every 1-2 weeks for the last 2 years.  Dr Moore has tried to support William’s ill-fated attempts to build up his practice, spend more time with his children, and engage in more productive activities, including dating and other social endeavors.  But William persistently complains that it’s “of no use,” he’ll “never meet anyone,” and his practice is “doomed to fail.”  At times, Dr Moore has feared that William may in fact attempt suicide, although to this point no attempt has been made.

B.  Claudia is a 68-year-old Medicare recipient with a history of major depression, asthma, diabetes, peripheral neuropathy, chronic renal failure, low back pain, and—for the last year—unexplained urinary incontinence.  She sees Dr Smith approximately every four weeks.  At each visit (which typically lasts about 20 minutes), Dr Smith must manage all of Claudia’s complaints and concerns, and while Dr Smith has made referrals to the appropriate medical specialists, Claudia’s condition has not improved.  In fact, Claudia now worries that she’s a “burden” on everyone else, especially her family, and “just wants to die.”  She and her daughter ask Dr Smith to “do something” to help.

Each of these scenarios is an actual case from my practice (with details changed to maintain anonymity).  Both William and Claudia are in emotional distress, and a case could be made for a trial of a psychiatric medication in each of them.

The problem, however, lies in the fact that only one of these “doctors” is a medical doctor: in this case, Dr Smith.  As a result, despite whatever experience or insight Dr Moore may have in the diagnosis of mental illness, he’s forbidden from prescribing a drug to treat it.

I recently gave a presentation to a Continuing Education program sponsored by the California School of Professional Psychology.  My audience was a group of “prescribing psychologists”—licensed psychologists who have taken over 500 hours of psychopharmacology course work in addition to the years required to obtain their psychology PhDs.  By virtue of their core training, these psychologists do not see patients as “diseases” or as targets for drugs.  Although they do receive training in psychiatric diagnosis (and use the same DSM as psychiatrists), neuroanatomy, and testing/assessment, their interventions are decidedly not biological.  Most of them see psychotherapy as a primary intervention, and, more importantly, they are well versed in determining when and how medications can be introduced as a complement to the work done in therapy.  Most states, however (including my own, California), do not permit psychologists to obtain prescribing privileges, resulting in a division of labor that ultimately affects patient care.

Let’s return to the scenarios: in scenario “A,” Dr Moore could not prescribe William any medication, although he followed William through two brief antidepressant trials prescribed by William’s primary care physician (with whom, incidentally, Dr Moore never spoke).  When Dr Moore referred William to me, I was happy to see him but didn’t want to see myself as just a “prescriber.”  Thus, I had two long phone conversations with Dr Moore to hear his assessment, and decided to prescribe one of the drugs that he recommended.  William still sees both Dr Moore and me.  It’s arguably a waste of time (and money), since each visit is followed by a telephone call to Dr Moore to make sure I’m on the right track.

Claudia’s case was a very different story.  Because Claudia complained of being a “burden” and “wanting to die”—complaints also found in major depression—Dr Smith, her primary care physician, decided to prescribe an antidepressant.  He prescribed Celexa, and about one month later, when it had had no obvious effect, he gave Claudia some samples of Abilify, an antipsychotic sometimes used for augmentation of antidepressants.  (In fact, Dr Smith told Claudia to take Abilify three times daily, with the admonishment “if you want to stop crying, you need to take this Abilify three times a day, but if you stop taking it, you’ll start crying again.”)  Like it or not, this counts as “mental health care” for lots of patients.

Some would argue that the only ones qualified to prescribe medications are medical doctors.  They would claim that Dr Moore, a psychologist, might have crossed a professional boundary by “suggesting” an antidepressant for William, while Dr Smith, a physician, has the full ability to assess interactions among medications and to manage complex polypharmacy, even without consulting a psychiatrist.  In reality, however, Dr Smith’s “training” in psychotropic drugs most likely came from a drug rep (and his use of samples was a telltale sign), not from advanced training in psychopharmacology.  When one considers that the majority of psychotropic medication is prescribed by non-psychiatrists like Dr Smith, it’s fairly safe to say that much use of psychiatric drugs is motivated by drug ads, free samples, and “educational dinners” by “key opinion leaders,” and provided without much follow-up.

Furthermore, Dr Smith’s training in mental health most likely pales in comparison to that of Dr Moore.  Psychologists like Dr Moore have five or more years of postgraduate training, 3000 or more hours of clinical supervision, research experience, and have passed a national licensing exam.  But they’re forbidden from using medications that have been FDA-approved for precisely the conditions that they are extraordinarily well-equipped to evaluate, diagnose, and treat.

A satisfactory alternative would be an integrated behavioral health/primary care clinic in which professionals like Dr Moore can consult with a psychiatrist (or another “psychiatric prescriber”) when medications are indicated.  This arrangement has been shown to work in many settings.  It also allows for proper follow-up and limits the number of prescribers.  Indeed, pharmaceutical companies salivate at the prospect of more people with prescribing authority—it directly expands the market for their drugs—but the fact is that most of those drugs simply don’t work as well as advertised and cause unwanted side effects.  (More about that in a future post.)

The bottom line is that there are ways of delivering mental health care in a more rational fashion, by people who know what they’re doing. As it currently stands, however, anyone with an MD (or DO, or NP) can prescribe a drug, even if others may possess greater experience or knowledge, or provide higher-quality care.  As an MD, I’m technically licensed to perform surgery, but trust me, you don’t want me to remove your appendix.  By the same token, overworked primary care docs whose idea of treating depression is handing out Pristiq samples every few months are probably not the best ones to treat depression in the medically ill.  But they do, and maybe it’s time for that to change.


Be Careful What You Wish For

September 2, 2012

Whatever your opinion of the Affordable Care Act, you must admit that it’s good to see the American public talk about reducing health care costs, offering more efficient delivery systems, and expanding health care services to more of our nation’s people.  There’s no easy (or cheap) way to provide health care to all Americans, particularly with the inefficiencies and absurdities that characterize our current health care system, but it’s certainly a goal worth pursuing.

However, there’s more to the story than just expanding coverage to more Americans.  There’s also the issue of improving the quality of that coverage.  If you listen to the politicians and pundits, you might get the impression that the most important goal is to insure more people, when in fact insurance may leave us with worse outcomes in the end.

Take, for example, an Op-Ed by Richard Friedman, MD, published in the New York Times in July.  The title says it all: “Good News For Mental Illness in Health Law.”  Dr Friedman makes the observations that seem de rigueur for articles like this one:  “Half of Americans will experience a major psychiatric disorder,” “mental illnesses are chronic lifelong diseases,” and so forth.  Friedman argues that the Affordable Care Act will—finally!—give these people the help they need.

Sounds good, right?  Well, not so fast.  First of all, there are two strategies in the ACA to insure more patients:  (1) the individual mandate, which requires people to purchase insurance through the state health-insurance exchanges, and (2) expansion of Medicaid, which may add another 11 million people to this public insurance plan.

So more people will be insured.  But where’s the evidence that health insurance—whether private or public—improves outcomes in mental health?  To be sure, in some cases, insurance can be critically important: the suicidal patient can be hospitalized for his safety; the substance-abusing patient can access rehabilitation services; and the patient with bipolar disorder can stay on her mood stabilizing medication and keep her job, her family, and her life.  But there are many flavors of mental illness (i.e., not everything called “bipolar disorder” is bipolar disorder), and different people have different needs.  That’s the essence of psychiatry: understanding the person behind the illness and delivering treatment accordingly.  Individualized care is a lot harder when millions of people show up for it.

I’ve worked in insurance settings and Medicaid settings.  I’ve seen first-hand the emphasis on rapid treatment, the overwhelming urge to medicate (because that’s generally all we psychiatrists have time—and get paid—to do in such settings), and the underlying “chronic disease” assumption that keeps people persistently dependent on the psychiatric system.  This model does work for some patients.  But whether it “works” for all—or even most—patients seems to be less important than keeping costs low or enrolling as many people as possible for our services.

These demands are not only external; they have become part of the mindset of many psychiatrists.  I spent my last year of residency training, for instance, in a public mental health system, where I was a county employee and all patients were Medicaid recipients.  I walked away with a sense that what mattered was not the quality of care I provided, nor whether I developed treatment plans that incorporated people’s unique needs, nor whether my patients even got better at all.  Instead, what was most important (and what we were even lectured on!) was how to write notes that satisfied the payers, how to choose medications on the basis of a 20- or 30-minute (or shorter) assessment, and how not to exceed the 12 annual outpatient visits each patient was allotted.  To make matters worse, there was no way to discharge a patient without several months of red tape—regardless of whether the patient no longer needed our services, or was actually being harmed by the treatment.  The tide has definitely turned: tomorrow’s psychiatrists will answer to administrators’ rules, not the patients’ needs—and this generation of trainees will unfortunately never even know the difference.

The great irony in this whole debacle is that those who argue loudest for expansion of health care also tend to be those who argue for more humanistic and compassionate treatment.  In a similar vein, some of the most conscientious and compassionate doctors I know—many of them supporters of Obamacare—have deliberately chosen to work outside of insurance or Medicaid/Medicare altogether.  (I can’t say that I blame them, but isn’t that sort of like singing the praises of public education but sending your kids to private school?)  With more people obtaining mental health care through insurance “benefits,” the current model will become more widespread:  we’ll continue overprescribing unnecessary drugs to children and adults, institutionalizing people against their will even when less restrictive options may be more effective, offering lower reimbursements for psychotherapy and complementary services, and inviting practitioners with lesser training and experience (and whose experience is often limited exclusively to offering pills) to become the future face of mental health care.

Do psychiatry’s leaders say anything about these issues?  No.  When they’re not lamenting the lack of new pharmaceutical compounds or attacking those who offer valid critiques of modern-day psychiatry, they’re defending the imperfect DSM-5 and steadfastly preserving our right to prescribe drugs while the pharmaceutical industry is more than happy to create new (and costly) products to help us do so.  One solution may be to train psychiatrists to be cognizant of the extraordinary diversity among individuals who seek psychiatric help, to understand the limitations of our current treatments, and to introduce patients to alternatives.  While this may be more expensive up front, it may actually save money in the future:  for example, thorough diagnostic assessments by more seasoned and experienced providers may direct patients away from expensive office-and-medication-based treatment, and towards community-based services, self-help programs, talk therapy when indicated or desired by the patient, social work services, or any of a number of alternative resources geared towards true recovery.

Alas, no one seems to be offering that as an alternative.  Instead, we’re patting ourselves on the back for expanding health care coverage to more people and developing cost-saving initiatives of dubious benefit.  Somewhere along the way, we seem to have forgotten what “care” really means.  I wonder when we’ll start figuring that one out.


My Own Bipolar Kerfuffle

August 5, 2012

I have a confession to make.  I don’t know what “bipolar disorder” is.  And as a psychiatrist, I’ll admit that’s sort of embarrassing.

Okay, maybe I’m exaggerating when I say that I don’t know what bipolar disorder is.  Actually, if you asked me to define it, I’d give you an answer that would probably sound pretty accurate.  I’ve read the DSM-IV, had years of training, taken my Boards, treated people in the midst of manic episodes, and so on.  The problem for me is not the “idea” of bipolar disorder.  It’s what we mean when we use that term.

I recognized this problem only recently—in fact, just last month, as I was putting together the July/August issue of the Carlat Psychiatry Report (now available to subscribers here).  This month’s issue is devoted to the topic of “Bipolar Disorder,” and two contributors, faculty members at prestigious psychiatry departments, made contradictory—yet perfectly valid—observations.  One argued that it’s overdiagnosed; the other advocated for broadening our definition of bipolar disorder—in particular, “bipolar depression.”  The discrepancy was also noted in several comments from our Editorial Board.

Disagreements in science and medicine aren’t necessarily a bad thing.  In fact, when two authorities interpret a phenomenon differently, it creates the opportunity for further experimentation and investigation.  In time, the “truth” can be uncovered.  But in this case, as with much in psychiatry, “truth” seems to depend on whom you ask.

Consider this question.  What exactly is “bipolar depression”?  It seems quite simple:  it’s when a person with bipolar disorder experiences a depressive episode.  But what about when a person comes in with depression but has not had a manic episode or been diagnosed with bipolar disorder?  How about when a person with depression becomes “manic” after taking an antidepressant?  Could those be bipolar depression, too?  I suppose so.  But who says so?  One set of criteria was introduced by Jules Angst, a researcher in Switzerland, and was featured prominently in the BRIDGE study, published in 2011.  His criteria for bipolarity include agitation, irritability, hypomanic symptoms for as short as one day, and a family history of mania.  Other experts argue for a “spectrum” of bipolar illness.

(For a critique of the BRIDGE study, see this letter to the editor of the Archives of General Psychiatry, and this detailed—and entertaining—account in David Allen’s blog.)

The end result is rather shocking, when you think about it:  here we have this phenomenon called “bipolar disorder,” which may affect 4% of all Americans, and different experts define it differently.  With the right tweaking, nearly anyone who comes to the attention of a psychiatrist could be considered to have some features suggestive of someone’s definition of bipolar disorder.  (Think I’m kidding?  Check out the questionnaire in the appendix of Angst’s 2003 article.)

Such differences of opinion lead to some absurd situations, particularly when someone is asked to speak authoritatively about this disorder.  At this year’s APA Annual Meeting, for example, David Kupfer (DSM-5 Task Force Chair) gave a keynote address on “Rethinking Bipolar Disorder,” which included recommendations for screening adolescents and the use of preventive measures (including drugs) in the early stages of the illness.  Why was it absurd?  Because as Kupfer spoke confidently about this disease entity, I looked around the packed auditorium and realized that each person may very well have his or her own definition of bipolar disorder.  But did anyone say anything?  No, we all nodded in agreement, deferring to the expert.

This problem exists throughout psychiatry.  The criteria for each diagnosis in the DSM-IV can easily be applied in a very general way.  This is due partly to fatigue, partly to the fact that insurance companies require that we give a diagnosis as early as the first visit, partly because we’re so reluctant (even when it’s appropriate) to tell patients that they’re actually healthy and may not even have a diagnosis, and partly because different factions of psychiatrists use their experience to create their own criteria.  It’s no wonder that as criteria are loosened, diagnoses are misapplied, and the ranks of the “mentally ill” continue to grow.

As editor of a newsletter, I’m faced with another challenge I didn’t quite expect.  I can’t come out and say that bipolar disorder doesn’t exist (which wouldn’t be true anyway—I have actually seen cases of “classic,” textbook-style mania which do respond to medications as our guidelines would predict).  But I also can’t say that several definitions of “bipolar” exist.  That may be perceived as being too equivocal for a respectable publication and, as a result, some readers may have difficulty taking me seriously.

At the risk of sounding grandiose, I may be experiencing what our field’s leadership must experience on a regular basis.  Academic psychiatrists make their living by conducting research, publishing their findings, and, in most cases, specializing in a given clinical area.  It’s in their best interest to assume that the subjects of their research actually exist.  Furthermore, when experts see patients, they do so in a specialty clinic or clinical trial, which reinforces their definitions of disease.

This can become a problem to those of us seeing the complicated “real world” patients on the front lines, especially when we look to the experts for answers to such questions as whether we should use antipsychotics to treat acute mania, or whether antidepressants are helpful for bipolar depression.  If their interpretations of the diagnoses simply don’t pertain to the people in our offices, all bets are off.  Yet this, I fear, is what happens in psychiatry every day.

In the end, I can’t say whether my definition of bipolar disorder is right or not, because even the experts can’t seem to agree on what it is.  As for the newsletter, we decided to publish both articles, in the interest of maintaining a dialogue.  Readers will simply have to use their own definition of “bipolar disorder” and “bipolar depression” (or eschew them altogether)—hopefully in ways that help their patients.  But it has been an eye-opening experience in the futility (and humility) of trying to speak with authority about something we’re still trying desperately to understand.


Is James Holmes Mentally Ill? Does It Matter?

July 25, 2012

Last Saturday’s early-morning massacre at a crowded movie theater in Aurora, Colorado, stands as one of the most horrific rampages in American history.  But before anyone had any clue as to James Holmes’ motive for such a heinous act—even before the bodies had been removed from the site of the carnage—websites and social media were abuzz with suspicions that the gunman was either mentally ill or under the influence of psychotropic medications—one (or both) of which may have contributed to his crime.

As of this writing, any statement about Holmes’ psychiatric history is pure conjecture (although as I post this, I see a Fox News report claiming that Holmes mailed a notebook to “a psychiatrist” detailing his plan—more will surely be revealed).  Acquaintances quoted in the media have described him as a shy person, but have reported no erratic or unusual behaviors to arouse suspicion of an underlying mental illness.  Until recently, he was even enrolled in a graduate neuroscience program.  Some reports suggest that Holmes had spent weeks engineering an elaborate and complex scheme, hinting at some highly organized—albeit deadly—motive.  Nevertheless, the fact remains that we simply don’t know about any diagnosis, medication, or other psychiatric condition or treatment, which might shed light on Holmes’ frame of mind.

Those who focus on Holmes’ mental state at the time of the murders seem to fall in one of two camps.  Some argue that medications (if he were under the influence of any) may have enabled or facilitated this horrific act.  Others say that if Holmes had been diagnosed with a psychiatric illness, then this catastrophe serves as proof that we need more aggressive treatment—including medications—and services for the mentally ill.

It will be some time before we get answers in Holmes’ case.  And to me, that’s just as well.  Determining whether he has a mental illness, or was under the influence of psychotropic drugs last weekend, unfortunately reframes the question in such a way that further propagates the rift between these two factions, and fails to address how we should handle future cases like Holmes’ more humanely.  If, for example, Holmes is found to suffer from untreated schizophrenia, our society’s (and profession’s) reaction will be to lobby for more aggressive treatment, greater access to medication, more widespread screening programs, and, perhaps, a lower threshold by which to hospitalize psychotic individuals we deem as potentially “dangerous to others.”  If, on the other hand, a toxicology report reveals that Holmes had an antidepressant or antipsychotic in his bloodstream at the time of the murders, our inclination will be to restrict our use of these drugs, remove them from our formularies, and the outcries against psychopharmacology will grow ever louder.

Whether Holmes has a mental illness or not is irrelevant.  He was in crisis—probably for quite some time before last weekend—and that’s what matters.  There was no one—and no way—to reach out to him and meet his needs in such a way as to prevent this tragedy, and that, in my opinion, transcends whether he is “mentally ill” or not.

How can we fix this?  In his column in Monday’s New York Times, David Brooks almost provides a solution.  He writes, correctly, that prevention of such catastrophic events occurs through “relationships”—relatives or neighbors, for instance, who might recognize a change in someone’s behavior and encourage him to get help.  Admittedly, establishing a caring relationship with someone who has a history of trauma, grief over a recent loss, poor self-esteem, pathological narcissism, or acute psychosis may be difficult.  But relatives and neighbors are indeed often the first to notice questionable behavior and are well positioned to help those in need.  Perhaps in Holmes’ case, too, we’ll soon learn of some classmates or coworkers who felt something was amiss.

Brooks goes on to argue that it’s the responsibility of that neighbor or relative to “[get] that person treatment before the barbarism takes control.”  He doesn’t define what sort of “treatment” he has in mind.  But he does say that killers are “the product of psychological derangements, not sociological ones,” so the treatment he endorses presumably means more aggressive psychological (or psychiatric) intervention.  But to expect a neighbor or relative to help an individual access treatment is precisely a sociological phenomenon.  It puts the onus on our culture at large to pay attention to how our neighbors think and act, and to offer a helping hand or a safe environment (or a locked psychiatric unit, if it has progressed that far) to those of us who think and behave differently or who are suffering a crisis.

That, unfortunately, is not what Brooks is arguing for.  (After all, the title of his essay is “More Treatment Programs.”)  If mass murderers suffer from psychological problems, which is what Brooks seems to believe, the solution “has to start with psychiatry.”  But this introduces the longstanding problem of defining that arbitrary border between “normal” and “abnormal”—a virtually impossible task.  And, of course, once we pathologize the “abnormal,” we’re then obligated to provide treatments (antipsychotic medication, involuntary hospitalization, assisted outpatient treatment, forced drugging) which, yes, might decrease the likelihood of further dangerousness, but which also compromise patients’ civil rights and do not always enable them to recover.

Brooks is right on one point.  Relationships are part of the answer.  Relationships can provide compassion and support in one’s most difficult times.  One take-home message from the Aurora tragedy should be that people like Holmes—regardless of whether they are “mentally ill” at all—need the security and comfort of safe, trustworthy individuals who are looking out for their (and society’s) best interests and who can intervene at a much earlier stage and in a much less aggressive way, perhaps even avoiding conventional psychiatric treatment altogether.

Getting to that point, unfortunately, requires a sea change in how we deal more compassionately with those in our society who are different from the rest of us—a change that our nation may be unwilling, or unable, to make.  If we fail to make it, we’ll be stuck with the never-ending debate over the validity of psychiatric diagnoses, the effectiveness of psychiatric drugs, the ethics of forced treatment, and the dilemma of defining when antisocial behavior becomes a “disease.”  In the meantime, troubled souls like James Holmes will continue to haunt us, left to their own devious plans until psychiatric treatment—or worse—is the only available option.


Turf Wars

July 6, 2012

The practice of medicine has changed enormously in just the last few years.  While the upcoming implementation of the Affordable Care Act promises even further—and more dramatic—change, one topic which has received little popular attention is the question of exactly who provides medical services.  Throughout medicine, physicians (i.e., those with MD or DO degrees) are being replaced by others, whenever possible, in an attempt to cut costs and improve access to care.

In psychiatry, non-physicians have long been a part of the treatment landscape.  Most commonly today, psychiatrists focus on “medication management” while psychologists, psychotherapists, and others perform “talk therapy.” But even the med management jobs—the traditional domain of psychiatrists, with their extensive medical training—are gradually being transferred to other so-called “midlevel” providers.

The term “midlevel” (not always a popular term, by the way) refers to someone whose training lies “mid-way” between that of a physician and another provider (like a nurse, psychologist, social worker, etc) but who is still licensed to diagnose and treat patients.  Midlevel providers usually work under the supervision of a physician, though often not direct supervision.  In psychiatry, there are a number of such midlevel professionals, with designations like PMHNP, PMHCNS, RNP, and APRN, who have become increasingly involved in “med management” roles.  This is partly because they tend to demand lower salaries and are reimbursed at a lower rate than physicians.  However, many physicians—and not just in psychiatry, by the way—have grown increasingly defensive (and, at times, downright angry, if some physician-only online communities are any indication) about this encroachment of “lesser-trained” practitioners onto their turf.

In my own experience, I’ve worked side-by-side with a few RNPs.  They performed their jobs quite competently.  However, their competence speaks less to the depth of their knowledge (which was impressive, incidentally) and more to the changing nature of psychiatry.  Indeed, psychiatry seems to have evolved to such a degree that the typical psychiatrist’s job—or “turf,” if you will—can be readily handled by someone with less (in some cases far less) training.  When you consider that most psychiatric visits comprise a quick interview and the prescription of a drug, it’s no surprise that someone with even just a rudimentary understanding of psychopharmacology and a friendly demeanor can do well 99% of the time.

This trend could spell (or hasten) the death of psychiatry.  More importantly, however, it could present an opportunity for psychiatry’s leaders to redefine and reinvigorate our field.

It’s easy to see how this trend could bring psychiatry to its knees.  Third-party payers obviously want to keep costs low, and with the passage of the ACA the role of the third-party payer—and “treatment guidelines” that can be followed more or less blindly—will be even stronger.  Patients, moreover, increasingly see psychiatry as a medication-oriented specialty, thanks to direct-to-consumer advertising and our medication-obsessed culture.  Taken together, this means that psychiatrists might be passed over in favor of cheaper workers whose main task will be to follow guidelines or protocols.  If so, most patients (unfortunately) wouldn’t even know the difference.

On the other hand, this trend could also present an opportunity for a revolution in psychiatry.  The predictions in the previous paragraph are based on two assumptions:  first, that psychiatric care requires medication, and second, that patients see the prescription of a drug as equivalent to a cure.  Psychiatry’s current leadership and the pharmaceutical industry have successfully convinced us that these statements are true.  But they need not be.  Instead, they merely represent one treatment paradigm—a paradigm that, for ever-increasing numbers of people, leaves much to be desired.

Preservation of psychiatry requires that psychiatrists find ways to differentiate themselves from midlevel providers in a meaningful fashion.  Psychiatrists frequently claim that they are already different from other mental health practitioners, because they have gone to medical school and, therefore, are “real doctors.”  But this is a specious (and arrogant) argument.  It doesn’t take a “real doctor” to do a psychiatric interview, to compare a patient’s complaints to what’s written in the DSM (or what’s in one’s own memory banks) and to prescribe medication according to a guideline or flowchart. Yet that’s what most psychiatric care is.  Sure, there are those cases in which successful treatment requires tapping the physician’s knowledge of pathophysiology, internal medicine, or even infectious disease, but these are rare—not to mention the fact that most treatment settings don’t even allow the psychiatrist to investigate these dimensions.

Thus, the sad reality is that today’s psychiatrists practice a type of medical “science” that others can grasp without four years of medical school and four years of psychiatric residency training.  So how, then, can psychiatrists provide something different—particularly when appointment lengths continue to dwindle and costs continue to rise?  To me, one answer is to revamp specialty training.  I received my training in two institutions with very different cultures and patient populations.  But both shared a common emphasis on teaching medication management.  Did I need four years to learn how to prescribe drugs?  No.  In reality, practical psychopharmacology can be learned in a one-year (maybe even six-month) course—not to mention the fact that the most valuable knowledge comes from years of experience, something that only real life (and not a training program) can provide.

Beyond psychopharmacology, psychiatry training programs need to beef up psychotherapy training, something that experts have encouraged for years.  But it goes further than that: psychiatry trainees need hands-on experience in the recovery model, community resources and their delivery, addictive illness and recovery concepts, behavioral therapies, case management, and, yes, how to truly integrate medical care into psychiatry.  Furthermore, it wouldn’t hurt to give psychiatrists lessons in communication and critical thinking skills, cognitive psychology principles, cultural sensitivity, economics, business management, alternative medicine (much of which is “alternative” only because the mainstream says so), and, my own pet peeve, greater exposure to the wide, natural variability among human beings in their intellectual, emotional, behavioral, perceptual, and physical characteristics and aptitudes—so we stop labeling everyone who walks in the door as “abnormal.”

One might argue that this sounds great, but psychiatrists don’t get paid for those things.  True, we don’t.  At least not yet.  Nevertheless, a comprehensive approach to human wellness, taken by someone who has invested many years learning how to integrate these perspectives, is, in the long run, far more efficient than the current paradigm of discontinuous care, in which one person manages meds, another person provides therapy, another person serves as a case manager—roles which can change abruptly due to systemic constraints and turnover.

If we psychiatrists want to defend our “turf,” we can start by reclaiming some of the turf we’ve given away to others.  But more importantly, we must also identify new turf and make it our own—not to provide duplicate, wasteful care, but instead to create a new treatment paradigm in which the focus is on the patient and the context in which he or she presents, and treatment involves only what is necessary (and which is likely to work for that particular individual).  Only a professional with a well-rounded background can bring this paradigm to light, and psychiatrists—those who have invested the time, effort, expense, and hard work to devote their lives to the understanding and treatment of mental illness—are uniquely positioned to bring this perspective to the table and make it happen.


What Adderall Can Teach Us About Medical Marijuana

June 19, 2012

An article in the New York Times last week described the increasing use of stimulant medications such as Adderall and Ritalin among high-school students.  Titled “The Risky Rise of the Good-Grade Pill,” the article discussed how 15 to 40 percent of students, competing for straight-As and spots in elite colleges, use stimulants for an extra “edge,” regardless of whether they actually have ADHD.  In this blog, I’ve written about ADHD.  It’s a real condition—and medications can help tremendously—but the diagnostic criteria are quite vague.  As with much in psychiatry, anyone “saying the right thing” can relatively easily get one of these drugs, whether they need it or not.

Sure enough, the number of prescriptions for these drugs has risen 26% since 2007.  Does this mean that ADHD is now 26% more prevalent?  No.  In the Times article, some students admitted they “lie to [their] psychiatrists” in order to “get something good.”  In fact, some students “laughed at the ease with which they got some doctors to write prescriptions for ADHD.”  In the absence of an objective test (some computerized tests exist but are neither widely used nor well validated, and brain scans are similarly unproven), and with diagnostic criteria readily accessible on the internet, anyone who wants a stimulant can basically get one.  And while psychiatric diagnosis is often an imperfect science, in many settings the methodology by which we assess and diagnose ADHD is particularly crude.

Many of my colleagues will disagree with (or hate) me for saying so, but in some sense, the prescription of stimulants has become just like any other type of cosmetic medicine.  Plastic surgeons and dermatologists, for instance, are trained to perform medically necessary procedures, but they often find that “cosmetic” procedures like facelifts and Botox injections are more lucrative.  Similarly, psychiatrists can have successful practices in catering to ultra-competitive teens (and their parents) and giving out stimulants.  Who cares if there’s no real disease?  Psychiatry is all about enhancing patients’ lives, isn’t it?  As another blogger wrote last week, some respectable physicians have even argued that “anyone and everyone should have access to drugs that improve performance.”

When I think about “performance enhancement” in this manner, I can’t help but think about the controversy over medical marijuana.  This is another topic I’ve written about, mainly to question the “medical” label on something that is neither routinely accepted nor endorsed by the medical profession.  Proponents of medical cannabis, I wrote, have co-opted the “medical” label in order for patients to obtain an abusable psychoactive substance legally, under the guise of receiving “treatment.”

How is this different from the prescription of psychostimulants for ADHD?  The short answer is, it’s not.  If my fellow psychiatrists and I prescribe psychostimulants (which are abusable psychoactive substances in their own right, as described in the pages of the NYT) on the basis of simple patient complaints—and continue to do so simply because a patient reports a subjective benefit—then this isn’t very different from a medical marijuana provider writing a prescription (or “recommendation”) for medical cannabis.  In both cases, the conditions being treated are ill-defined (yes, in the case of ADHD, it’s detailed in the DSM, which gives it a certain validity, but that’s not saying much).  In both cases, the conditions affect patients’ quality of life but are rarely, if ever, life-threatening.  In both cases, psychoactive drugs are prescribed which could be abused but which most patients actually use quite responsibly.  Last but not least, in both cases, patients generally do well; they report satisfaction with treatment and often come back for more.

In fact, taken one step further, this analogy may turn out to be an argument in favor of medical marijuana.  As proponents of cannabis are all too eager to point out, marijuana is a natural substance, humans have used it for thousands of years, and it’s arguably safer than other abusable (but legal) substances like nicotine and alcohol.  Psychostimulants, on the other hand, are synthetic chemicals (not without adverse effects) and have been described as “gateway drugs” to more or less the same degree as marijuana.  Why one is legal and the other is not appears to come down simply to the psychiatric profession’s “seal of approval.”

If the psychiatric profession is gradually moving away from the assessment, diagnosis, and treatment of severe mental illness and, instead, treating “lifestyle” problems with drugs that could easily be abused, then I really don’t have a good argument for denying cannabis to patients who insist it helps their anxiety, insomnia, depression, or chronic pain.

Perhaps we should ask physicians to take a more rigorous approach to ADHD diagnosis, demanding interviews with parents and teachers, extensive neuropsychiatric testing, and (perhaps) neuroimaging before offering a script.  But in a world in which doctors’ reimbursements are dwindling, and the time devoted to patient care is vanishing—not to mention a patient culture which demands a quick fix for the problems associated with the stresses of modern adolescence—it doesn’t surprise me one bit that some doctors will cut corners and prescribe without a thorough workup, in much the same way that marijuana is provided in states where it’s legal.  If the loudest protests against such a practice don’t come from our leadership—but instead from the pages of the New York Times—we only have ourselves to blame when things really get out of hand.


The Evidence of the Anecdote

June 8, 2012

The foundation of medical decision-making is “evidence-based medicine.”  As most readers know, this is the effort to use the best available evidence (using the scientific method) to make decisions and recommendations about how to treat individual patients.  “Evidence” is typically rated on four levels (1 to 4).  Level 1 represents high-quality evidence—usually the results of randomized clinical trials—while level 4 typically consists of case studies, uncontrolled observations, and anecdotal reports.

Clinical guidelines and drug approvals typically rely more heavily (or even exclusively) on level-1 evidence.  It is thought to be more valid, more authoritative, and less affected by variations among individuals.  For example, knowing that an antidepressant works (i.e., it gives a “statistically significant effect” vs placebo) in a large, controlled trial is more convincing to the average prescriber than knowing that it worked for a single depressed guy in Peoria.

But is it, really?  Not always (especially if you’re the one treating that depressed guy in Peoria).  Clinical trials can be misleading, even if their results are “significant.”  As most readers know, some investigators, after analyzing data from large industry-funded clinical trials, have concluded that antidepressants may not be effective at all—a story that has received extensive media coverage.  But lots of individuals insist that they do work, based on personal experience.  One such depression sufferer—who benefited greatly from antidepressants—wrote a recent post on the Atlantic Online, and quoted Peter Kramer: “to give the impression that [antidepressants] are placebos is to cause needless suffering” because many people do benefit from them.  Jonathan Leo, on the other hand, argues that this is a patently anti-scientific stance.  In a post this week on the website Mad In America, Leo points out (correctly) that there are people out there who will give recommendations and anecdotes in support of just about anything.  That doesn’t mean they work.

Both sides make some very good points.  We just need to find a way to reconcile them—i.e., to make the “science” more reflective of real-world cases, and use the wisdom of individual cases to influence our practice in a more scientifically valid way.  This is much easier said than done.

While psychiatrists often refer to the “art” of psychopharmacology, make no mistake:  they (we) take great pride in the fact that it’s supposedly grounded in hard science.  Drug doses, mechanisms, metabolites, serum levels, binding coefficients, polymorphisms, biomarkers, quantitative outcome measures—these are the calling cards of scientific investigation.  But when medications don’t work as planned (which is often), we improvise, and when we do, we quickly enter the world of personal experience and anecdote.  In fact, in the absence of objective disease markers (which we may never find, frankly), psychiatric treatment is built almost exclusively on anecdotes.  When a patient says a drug “worked” in some way that the data don’t support, or they experience a side effect that’s not listed in the PDR, that becomes the truth, and it happens far more frequently than we like to admit.

It’s even more apparent in psychotherapy.  When a therapist asks a question like “What went through your mind when that woman rejected you?” the number of possible responses is infinite, unlike a serum lithium level or a blood pressure.  A good therapist follows the patient’s story and individualizes treatment based on the individual case (and only loosely on some theory or therapeutic modality).  The “proof” is the outcome with that particular patient.  Sure, the “N” is only 1, but it’s the only one that counts.

Is there any way to make the science look more like the anecdotal evidence we actually see in practice?  I think not.  Most of us don’t even stop to think about how UN-convincing the “evidence” truly is.  In his book Pharmageddon, David Healy describes the example of the parachute:  no one needs to do a randomized, controlled trial to show that a parachute works.  It just does.   By comparison, to show that antidepressants “work,” drug companies must perform large, expensive trials (and often multiple trials at that) and even then, prove their results through statistical measures or clever trial designs.  Given this complexity, it’s a wonder that we believe clinical trials at all.

On the other side of the coin, there’s really no way to subject the anecdotal report, or case study, to the scientific method.  By definition, including more patients and controls (i.e., increasing the “N”) automatically introduces heterogeneity.  Whatever factor(s) led a particular patient to respond to Paxil “overnight” or to develop a harsh cough on Abilify are probably unique to that individual.

But maybe we can start looking at anecdotes through a scientific lens.  When we observe a particular response or effect, we ought to look not just at the most obvious cause (e.g., a new medication) but at the context in which it occurred, and entertain any and all alternative hypotheses.  Similarly, when planning treatment, we need to think not just about FDA-approved drugs, but also patient expectations, treatment setting, home environment, costs, other comorbidities, the availability of alternative therapies, and other data points or “independent variables.”  To use a crude but common analogy, it is indeed true that every person becomes his or her own laboratory, and should be viewed as such.

The more we look at patients this way, the further we get from clinical trials and the less relevant clinical trials become.  This is unfortunate, because—for better or for worse (I would vote for “worse”)—clinical trials have become the cornerstone of evidence-based psychiatry.  But a re-emphasis on anecdotes and individual cases is important.  Because in the end, it’s the individual who counts.  The individual resembles an N of 1 much more closely than he or she resembles an N of 200, and that’s probably the most important evidence we need to keep in mind.


“Patient-Centered” Care and the Science of Psychiatry

May 30, 2012

When asked what makes for good patient care in medicine, a typical answer is that it should be “patient-centered.”  Sure, “evidence-based medicine” and expert clinical guidelines are helpful, but they only serve as the scientific foundation upon which we base our individualized treatment decisions.  What’s more important is how a disorder manifests in the patient and the treatments he or she is most likely to respond to (based on genetics, family history, biomarkers, etc).  In psychiatry, there’s the additional need to target treatment to the patient’s unique situation and context—always founded upon our scientific understanding of mental illness.

It’s almost a cliché to say that “no two people with depression [or bipolar or schizophrenia or whatever] are the same.”  But when the “same” disorder manifests differently in different people, isn’t it also possible that the disorders themselves are different?  Not only does such a question have implications for how we treat each individual, it also impacts how we interpret the “evidence,” how we use treatment guidelines, and what our diagnoses mean in the first place.

For starters, every patient wants something different.  What he or she gets is usually what the clinician wants, which, in turn, is determined by the diagnosis and established treatment guidelines:  lifelong medication treatment, referral for therapy, forced inpatient hospitalization, etc.  Obviously, our ultimate goal is to eliminate suffering by relieving one’s symptoms, but shouldn’t the route we take to get there reflect the patient’s desires?  When a patient gets what he or she wants, shouldn’t this count as good patient care, regardless of what the guidelines say?

For instance, some patients just want a quick fix (e.g., a pill, ideally without frequent office visits), because they have only a limited amount of money (or time) they’re willing to use for treatment.  Some patients need to complete “treatment” to satisfy a judge, an employer, or a family member.  Some patients visit the office simply to get a disability form filled out or satisfy some other social-service need.  Some simply want a place to vent, or to hear from a trusted professional that they’re “okay.”  Still others seek intensive, long-term therapy even when it’s not medically justified.  Patients request all sorts of things, which often differ from what the guidelines say they need.

Sometimes these requests are entirely reasonable, cost-effective, and practical.  But we psychiatrists often feel a need to practice evidence- (i.e., science-) based medicine; thus, we take treatment guidelines (and diagnoses) and try to make them apply to our patients, even when we know they want—or need—something else entirely, or won’t be able to follow through on our recommendations.  We prescribe medications even though we know the patient won’t be able to obtain the necessary lab monitoring; we refer a patient for intensive therapy even though we know his or her insurance will cover only a handful of visits; we admit a suicidal patient to a locked inpatient ward even though we know the unpredictability of that environment may cause further distress; or we advise a child with ADHD and his family to undergo long-term behavioral therapy in conjunction with stimulants, even though we know this resource may be unavailable.

Guidelines and diagnoses are written by committee, and, as such, rarely apply to the specifics of any individual patient.  Thus, a good clinician uses a clinical guideline simply as a tool—a reference point—to provide a foundation for an individual’s care, just as a master chef knows a basic recipe but alters it according to the tastes he wishes to bring out or the ingredients in season.  A good clinician works outside the available guidelines for many practical reasons, not the least of which is the patient’s own belief system—what he or she thinks is wrong and how to fix it.  The same could be said for diagnoses themselves.  In truth, what’s written in the DSM is a model—a “case study,” if you will—against which real-world patients are observed and compared.  No patient ever fits a single diagnosis to a “T.”

Unfortunately, under the pressures of limited time, scarce resources, and the threat of legal action for a poor outcome, clinicians are more inclined to see patients for what they are than for who they are, and therefore adhere to guidelines even more closely than they’d like.  This corrupts treatment in many ways.  Diagnoses are given out which don’t fit (e.g., “parity” diagnoses must be given in order to maintain reimbursement).  Treatment recommendations are made which are far too costly or complex for some patients to follow.  Services like disability benefits are maintained far beyond the period they’re needed (because diagnoses “stick”).  And tremendous resources are devoted to the ongoing treatment of patients who simply want (and would benefit from) only sporadic check-ins, or who, conversely, can afford ongoing care themselves.

The entire situation calls into question the value of treatment guidelines, as well as the validity of psychiatric diagnoses.  Our patients’ unique characteristics, needs, and preferences—i.e., what helps patients to become “well”—vary far more widely than the symptoms upon which official treatment guidelines were developed.  Similarly, what motivates a person to seek treatment differs so widely from person to person as to imply vastly different etiologies.

To provide optimal care, treatment must indeed be “patient-centered.”  But truly patient-centered care must not only sidestep the DSM and established treatment guidelines but, frequently, ignore them altogether.  What does this say about the validity, relevance, and applicability of the diagnoses and guidelines at our disposal?  And what does this say about psychiatry as a science?


Addiction Psychiatry and The New Medicine

May 21, 2012

I have always believed that addictive disorders can teach us valuable lessons about other psychiatric conditions and about human behavior in general.  Addictions obviously involve behavior patterns, learning and memory processes, social influences, disturbed emotions, and environmental complexities.  Successful treatment of addiction requires attention to all of these facets of the disorder, and the addict often describes the recovery process not simply as being relieved of an illness, but as enduring a transformative, life-changing experience.

“Addiction psychiatry” is the area of psychiatry devoted to the treatment of these complicated disorders.  Certain trends in addiction psychiatry, however, seem to mirror larger trends in psychiatry as a whole.  Their impact on the future treatment of addictive behavior has yet to be seen, so it is worth evaluating these trends to determine whether we’re headed in a direction we truly want to go.

Neurobiology:  Addiction psychiatry—like the rest of psychiatry—is slowly abandoning the patient and is becoming a largely neuroscientific enterprise.  While it is absolutely true that neurobiology has something to do with the addict’s repetitive, self-destructive behavior, and “brain reward pathways” are clearly involved, these do not tell the whole story.  Addicts refer to “people, places, and things” as the triggers for drug and alcohol use, not “dopamine, nucleus accumbens, and frontal cortex.”  This isn’t an argument against the need to study the biology of addiction, but rather a plea to keep due focus on the other factors that may affect one’s biology.  Virtually the same thing could be said for most of what we treat in psychiatry; a multitude of factors might explain the presence of symptoms, but we’ve adopted a bias to think strictly in terms of brain pathways.

Medications:  Researchers in the addiction field (not to mention drug companies) devote much of their effort to discovering medications to treat addictions.  While they may stumble upon some useful adjunctive therapies, a “magic bullet” for addiction will probably never be found.  Moreover, I fear that the promise of medication-based treatments may foster a different sort of “dependence” among patients.  At this year’s APA Annual Meeting, for instance, I frequently heard the phrase “addictions are like other psychiatric disorders and therefore require lifelong treatment” (a statement which, by the way, is probably incorrect on TWO counts).  They weren’t talking about lifelong attendance at AA meetings or relapse prevention strategies, but rather about the need to take Suboxone or methadone (or the next “miracle drug”) indefinitely to achieve successful recovery.  Thus, as with other psychiatric disorders (many of which might need only short-term interventions but usually end up under chronic pharmacological management), the long-term management of addiction may come to reside not in the maintenance of a strong recovery program but in the taking of a pill.

New Providers:  Once a relatively unpopular subspecialty, addiction psychiatry is now a burgeoning field, thanks to this new focus on neurobiology and medication management—areas in which psychiatrists consider themselves well versed.  For example, a psychiatrist can become an “addiction psychiatrist” by receiving “Suboxone certification” (i.e., taking an 8-hour online course to obtain a special DEA license to prescribe buprenorphine, a partial opioid agonist).  I have nothing against Suboxone: patients who take daily Suboxone are far less likely to use opioids, more likely to remain in treatment, and less likely to suffer the consequences of opioid abuse.  In fact, one might argue that the effectiveness of Suboxone—and methadone, for that matter—for opioid dependence is far greater than that of SSRIs in the treatment of depression.  Many Suboxone prescribers, however, have little exposure to the psychosocial aspects—and hard work—involved in fully treating (or overcoming) an addiction, and in their hands a pill becomes simply a substitute for opioids (one which can itself be abused).  Nevertheless, prescribing a medication at monthly intervals—sometimes with little discussion about progress toward other recovery goals—resembles everything else we do in psychiatry; it’s no wonder that we’re drawn to it.

Patients:  Like many patients who seek psychiatric help, addicts might start to see “recovery” as a simple matter of making an appointment with a doctor and getting a prescription.  To be sure, many patients have used drugs like Suboxone or methadone to help them overcome deadly addictions, just as some individuals with major depression owe their lives to SSRIs or ECT.  But others have been genuinely hurt by these drugs.  Patients who have successfully discontinued Suboxone often say that it was the most difficult drug to stop—worse than any other opioid they had abused in the past.  Patients should always be reminded of the potential risks and dangers of treatment.  More importantly, we providers have an obligation to make patients aware of other ways of achieving sobriety and when to use them.  Strategies that don’t rely so heavily on the medical model might require a lot more work, but the payoffs may be much greater.

——

Addictions involve complex biological, psychological, and social dimensions that differ from person to person.  The response of the psychiatric profession has been to devote more research to the neurobiology of addictions and the development of anti-addiction drugs, potentially at the expense of exploring other aspects that may be more promising.  As expected, psychiatrists, pharmaceutical companies, third-party payers, and the general public are quickly buying into this model.

Psychiatry finds itself in a Catch-22.  On the one hand, psychiatry is often criticized for not being “medical,” and focusing on the biology of addiction is a good way to adhere to the medical model (and, perhaps, lead us to better pharmacotherapies).  On the other hand, psychiatric disorders—and especially addictions—are multifactorial in nature, and successful treatment often requires a comprehensive approach.  Fortunately, it may not yet be too late for psychiatry to retreat from a full-scale embrace of the medical model.  Putting the patient first sometimes means stepping away from the science.  And as difficult and non-intuitive as that may be, sometimes that’s where the healthiest recovery can be found.