If Medications Don’t Work, Why Do I Prescribe Them Anyway?

January 4, 2013

I have a confession to make.  I don’t think what I do each day makes any sense.

Perhaps I should explain myself.  Six months ago, I started my own private psychiatry practice (one of the reasons why I haven’t posted much to this blog, but I hope to pick up the pace again!).  I made this decision after working for several years in various community clinics, county mental health systems, and three academic institutions.  I figured that an independent practice would permit me to be a more effective psychiatrist, as I wouldn’t be encumbered by the restrictions and regulations of most of today’s practice settings.

My experience has strengthened my long-held belief that people are far more complicated than diagnoses or “chemical imbalances”—something I’ve written about on this blog and with which most psychiatrists would agree.  But I’ve also made an observation that seems incompatible with one of the central dogmas of psychiatry.  To put it bluntly, I’m not sure that psychiatric medications work.

Before you jump to the conclusion that I’m just another disgruntled, anti-medication psychiatrist who thinks we’ve all been bought and misled by the pharmaceutical industry, please wait.  The issue here is, to me, a deeper one than saying that we drug people who request a pill for every ill.  In fact, it might even be a stretch to say that medications never work.  I’ve seen antidepressants, antipsychotics, mood stabilizers, and even interventions like ECT give results that are actually quite miraculous.

But here’s my concern: For the vast majority of my patients, when a medication “works,” there are numerous other potential explanations, and a simple discussion may reveal multiple other hypotheses for the clinical response.  And when you consider the fact that no two people “benefit” in quite the same way from the same drug, it becomes even harder to say what’s really going on. There’s nothing scientific about this process whatsoever.

And then, of course, there are the patients who just don’t respond at all.  This happens so frequently I sometimes wonder whether I’m practicing psychiatry wrong, or whether my patients are playing a joke on me.  But no, as far as I can tell, I’m doing things right: I prescribe appropriately, I use proper doses, and I wait long enough to see a response.  My training is up-to-date; I’ve even been invited to lecture at national conferences about psychiatric meds.  I can’t be that bad at psychiatry, can I?

Probably not.  So if I assume that I’m not a complete nitwit, and that I’m using my tools correctly, I’m left to ask a question I never thought I’d ask:  is psychopharmacology just one big charade?  **

Maybe I feel this way because I’m not necessarily looking for medications to have an effect in the first place.  I want my patients to get better, no matter what that entails.  I believe that treatment is a process, one in which the patient (not just his or her chemistry) is central.  When drugs “work,” several factors might explain why, and by the same token, when drugs don’t work, it might mean that something else needs to be treated instead—rather than simply switching to a different drug or changing the dose.  Indeed, over the course of several sessions with a patient, many details inevitably emerge:  persistent anxiety, secretive substance abuse, a history of trauma, an ongoing conflict with a spouse, or a medical illness.  These often deserve just as much attention as the initial concern, if not more.

Although our understanding of the pathophysiology of mental illness is pure conjecture, prescribing a medication (at least at present) is an acceptable intervention.  What happens next is much more important.  I believe that prescribers should continue to collect evidence and adjust their hypotheses accordingly.  Unfortunately, most psychopharmacologists rarely take the time to discuss issues that can’t be explained by neurochemistry (even worse, they often try to explain all issues in terms of unproven neurochemistry), and dwindling appointment times mean that those who actually want to explore other causes don’t have the chance to do so.

So what’s a solution?  This may sound extreme, but maybe psychiatry should reject the “biochemical model” until it’s truly “biochemical”—i.e., until we have ways of diagnosing, treating, and following illnesses as we do in most of the rest of medicine.  In psychiatry, the use of medications and other “somatic” treatments is based on interview, gut feeling, and guesswork—not biology.  That doesn’t mean we can’t treat people, but we shouldn’t profess to offer a biological solution when we don’t know the nature of the problem.  We should admit our ignorance.

It would also help to allow (if not require) more time with psychiatric patients.  This is important.  If I only have 15-20 minutes with a patient, I don’t have time to ask about her persistent back pain, her intrusive brother-in-law, or her cocaine habit.  Instead, I must restrict my questions to those that pertain to the drug(s) I prescribed at the last visit.  This, of course, creates the perfect opportunity for confirmation bias—where I see what I expect to see.

We should also make an effort to educate doctors and patients alike about how little we actually know.  The subjects in trials to obtain FDA approval do NOT resemble real-world patients and are not evaluated or treated like real-world patients (and this is unlikely to change anytime soon because it works so well for the drug companies).  Patients should know this.  They should also know that the reliability of psychiatric diagnosis is poor in the first place, and that psychiatric illnesses have no established biochemical basis with which to guide treatment.

Finally, I should say that even though I call myself a psychiatrist and I prescribe drugs, I do not believe I’m taking advantage of my patients by doing so.  All of my patients are suffering, and they deserve treatment.  For some, drugs may play a key role in their care.  But when I see my entire profession move towards a biochemical approach—without any good evidence for such a strategy, and without a fair assessment of alternative explanations for behavior—and see, in my own practice, how medications provide no real benefit (or, frequently, harm) compared with other treatments, I have to wonder whether we’ve gone WAY beyond what psychopharmacology can truly offer, and whether there’s any way to put some logic back into what we call psychiatric treatment.

** There are areas in which psychopharmacology is most definitely not a “charade.”  These would include the uses of benzodiazepines, psychostimulants, and opioids like methadone and Suboxone.  With each of these agents, the expected effect is quite predictable, and they can be very effective drugs for many people.  Unfortunately, each of these can have an effect even in the absence of a diagnosis, and—probably not coincidentally—each has a potential for abuse.

Explain To Me Again Why Psychologists Can’t Prescribe Meds?

November 25, 2012

Consider the following two clinical scenarios:


A.  William, a 62-year-old accountant, has been feeling “depressed” since his divorce 5 years ago.  His practice, he says, is “falling apart,” as he has lost several high-profile clients and he believes it’s “too late” for his business to recover.  His adult son and daughter admire him greatly, but his ex-wife denigrates him and does everything she can to keep their children from seeing him.  William spends most of his days at his elderly parents’ house, a two-hour drive away, where he sleeps in the room (and bed) he occupied in his childhood.

William has been seeing Dr Moore every 1-2 weeks for the last 2 years.  Dr Moore has tried to support William’s ill-fated attempts to build up his practice, spend more time with his children, and engage in more productive activities, including dating and other social endeavors.  But William persistently complains that it’s “of no use,” he’ll “never meet anyone,” and his practice is “doomed to fail.”  At times, Dr Moore has feared that William may in fact attempt suicide, although to this point no attempt has been made.

B.  Claudia is a 68-year-old Medicare recipient with a history of major depression, asthma, diabetes, peripheral neuropathy, chronic renal failure, low back pain, and—for the last year—unexplained urinary incontinence.  She sees Dr Smith approximately every four weeks.  At each visit (which typically lasts about 20 minutes), Dr Smith must manage all of Claudia’s complaints and concerns, and while Dr Smith has made referrals to the appropriate medical specialists, Claudia’s condition has not improved.  In fact, Claudia now worries that she’s a “burden” on everyone else, especially her family, and “just wants to die.”  She and her daughter ask Dr Smith to “do something” to help.

Each of these scenarios is an actual case from my practice (with details changed to maintain anonymity).  Both William and Claudia are in emotional distress, and a case could be made for a trial of a psychiatric medication in each of them.

The problem, however, lies in the fact that only one of these “doctors” is a medical doctor: in this case, Dr Smith.  As a result, despite whatever experience or insight Dr Moore may have in the diagnosis of mental illness, he’s forbidden from prescribing a drug to treat it.

I recently gave a presentation to a Continuing Education program sponsored by the California School of Professional Psychology.  My audience was a group of “prescribing psychologists”—licensed psychologists who have completed over 500 hours of psychopharmacology coursework on top of the years spent earning their psychology PhDs.  By virtue of their core training, these psychologists do not see patients as “diseases” or as targets for drugs.  Although they do receive training in psychiatric diagnosis (and use the same DSM as psychiatrists), neuroanatomy, and testing/assessment, their interventions are decidedly not biological.  Most of them see psychotherapy as a primary intervention, and, more importantly, they are well versed in determining when and how medications can be introduced as a complement to the work done in therapy.  Most states, however (including my own, California), do not permit psychologists to obtain prescribing privileges, resulting in a division of labor that ultimately affects patient care.

Let’s return to the scenarios: in scenario “A,” Dr Moore could not prescribe William any medication, although he followed William through two brief antidepressant trials prescribed by William’s primary care physician (with whom, incidentally, Dr Moore never spoke).  When Dr Moore referred William to me, I was happy to see him but didn’t want to see myself as just a “prescriber.”  Thus, I had two long phone conversations with Dr Moore to hear his assessment, and decided to prescribe one of the drugs that he recommended.  William still sees both Dr Moore and me.  It’s arguably a waste of time (and money), since each visit is followed by a telephone call to Dr Moore to make sure I’m on the right track.

Claudia’s case was a very different story.  Because Claudia complained of being a “burden” and “wanting to die”—complaints also found in major depression—Dr Smith, her primary care physician, decided to prescribe an antidepressant.  He prescribed Celexa, and about one month later, when it had had no obvious effect, he gave Claudia some samples of Abilify, an antipsychotic sometimes used for augmentation of antidepressants.  (In fact, Dr Smith told Claudia to take Abilify three times daily, with the admonishment “if you want to stop crying, you need to take this Abilify three times a day, but if you stop taking it, you’ll start crying again.”)  Like it or not, this counts as “mental health care” for lots of patients.

Some would argue that the only ones qualified to prescribe medications are medical doctors.  They would claim that Dr Moore, a psychologist, might have crossed a professional boundary by “suggesting” an antidepressant for William, while Dr Smith, a physician, has the full ability to assess interactions among medications and to manage complex polypharmacy, even without consulting a psychiatrist.  In reality, however, Dr Smith’s “training” in psychotropic drugs most likely came from a drug rep (and his use of samples was a telltale sign), not from advanced training in psychopharmacology.  When one considers that the majority of psychotropic medication is prescribed by non-psychiatrists like Dr Smith, it’s fairly safe to say that much use of psychiatric drugs is motivated by drug ads, free samples, and “educational dinners” by “key opinion leaders,” and provided without much follow-up.

Furthermore, Dr Smith’s training in mental health most likely pales in comparison to that of Dr Moore.  Psychologists like Dr Moore have five or more years of postgraduate training, 3000 or more hours of clinical supervision, research experience, and have passed a national licensing exam.  But they’re forbidden from using medications that have been FDA-approved for precisely the conditions that they are extraordinarily well-equipped to evaluate, diagnose, and treat.

A satisfactory alternative would be an integrated behavioral health/primary care clinic in which professionals like Dr Moore can consult with a psychiatrist (or another “psychiatric prescriber”) when medication is indicated.  This arrangement has been shown to work in many settings.  It also allows for proper follow-up and limits the number of prescribers.  Indeed, pharmaceutical companies salivate at the prospect of more people with prescribing authority—it directly expands the market for their drugs—but the fact is that most of those drugs simply don’t work as well as advertised and cause unwanted side effects.  (More about that in a future post.)

The bottom line is that there are ways of delivering mental health care in a more rational fashion, by people who know what they’re doing. As it currently stands, however, anyone with an MD (or DO, or NP) can prescribe a drug, even if others may possess greater experience or knowledge, or provide higher-quality care.  As an MD, I’m technically licensed to perform surgery, but trust me, you don’t want me to remove your appendix.  By the same token, overworked primary care docs whose idea of treating depression is handing out Pristiq samples every few months are probably not the best ones to treat depression in the medically ill.  But they do, and maybe it’s time for that to change.


Be Careful What You Wish For

September 2, 2012

Whatever your opinion of the Affordable Care Act, you must admit that it’s good to see the American public talk about reducing health care costs, offering more efficient delivery systems, and expanding health care services to more of our nation’s people.  There’s no easy (or cheap) way to provide health care to all Americans, particularly with the inefficiencies and absurdities that characterize our current health care system, but it’s certainly a goal worth pursuing.

However, there’s more to the story than just expanding coverage to more Americans.  There’s also the issue of improving the quality of that coverage.  If you listen to the politicians and pundits, you might get the impression that the most important goal is to insure more people, when in fact insurance may leave us with worse outcomes in the end.

Take, for example, an Op-Ed by Richard Friedman, MD, published in the New York Times in July.  The title says it all: “Good News For Mental Illness in Health Law.”  Dr Friedman makes the observations that seem de rigueur for articles like this one:  “Half of Americans will experience a major psychiatric disorder,” “mental illnesses are chronic lifelong diseases,” and so forth.  Friedman argues that the Affordable Care Act will—finally!—give these people the help they need.

Sounds good, right?  Well, not so fast.  First of all, there are two strategies in the ACA to insure more patients:  (1) the individual mandate, which requires people to purchase insurance through the state health-insurance exchanges, and (2) expansion of Medicaid, which may add another 11 million people to this public insurance plan.

So more people will be insured.  But where’s the evidence that health insurance—whether private or public—improves outcomes in mental health?  To be sure, in some cases, insurance can be critically important: the suicidal patient can be hospitalized for his safety; the substance-abusing patient can access rehabilitation services; and the patient with bipolar disorder can stay on her mood-stabilizing medication and keep her job, her family, and her life.  But there are many flavors of mental illness (i.e., not everything called “bipolar disorder” is bipolar disorder), and different people have different needs.  That’s the essence of psychiatry: understanding the person behind the illness and delivering treatment accordingly.  Individualized care is a lot harder when millions of people show up for it.

I’ve worked in insurance settings and Medicaid settings.  I’ve seen first-hand the emphasis on rapid treatment, the overwhelming urge to medicate (because that’s generally all we psychiatrists have time—and get paid—to do in such settings), and the underlying “chronic disease” assumption that keeps people persistently dependent on the psychiatric system.  This model does work for some patients.  But whether it “works” for all—or even most—patients seems to be less important than keeping costs low or enrolling as many people as possible for our services.

These demands are not only external; they have become part of the mindset of many psychiatrists.  I spent my last year of residency training, for instance, in a public mental health system, where I was a county employee and all patients were Medicaid recipients.  I walked away with a sense that what mattered was not the quality of care I provided, nor whether I developed treatment plans that incorporated people’s unique needs, nor whether my patients even got better at all.  Instead, what was most important (and what we were even lectured on!) was how to write notes that satisfied the payers, how to choose medications on the basis of a 20- or 30-minute (or shorter) assessment, and how not to exceed the 12 annual outpatient visits each patient was allotted.  To make matters worse, there was no way to discharge a patient without several months of red tape—regardless of whether the patient no longer needed our services, or was actually being harmed by the treatment.  The tide has definitely turned: tomorrow’s psychiatrists will answer to administrators’ rules, not the patients’ needs—and this generation of trainees will unfortunately never even know the difference.

The great irony in this whole debacle is that those who argue loudest for expansion of health care also tend to be those who argue for more humanistic and compassionate treatment.  In a similar vein, some of the most conscientious and compassionate doctors I know—many of them supporters of Obamacare—have deliberately chosen to work outside of insurance or Medicaid/Medicare altogether.  (I can’t say that I blame them, but isn’t that sort of like singing the praises of public education but sending your kids to private school?)  With more people obtaining mental health care through insurance “benefits,” the current model will become more widespread:  we’ll continue overprescribing unnecessary drugs to children and adults, institutionalizing people against their will even when less restrictive options may be more effective, offering lower reimbursements for psychotherapy and complementary services, and inviting practitioners with lesser training and experience (and whose experience is often limited exclusively to offering pills) to become the future face of mental health care.

Do psychiatry’s leaders say anything about these issues?  No.  When they’re not lamenting the lack of new pharmaceutical compounds or attacking those who offer valid critiques of modern-day psychiatry, they’re defending the imperfect DSM-5 and steadfastly preserving our right to prescribe drugs while the pharmaceutical industry is more than happy to create new (and costly) products to help us do so.  One solution may be to train psychiatrists to be cognizant of the extraordinary diversity among individuals who seek psychiatric help, to understand the limitations of our current treatments, and to introduce patients to alternatives.  While this may be more expensive up front, it may actually save money in the future:  for example, thorough diagnostic assessments by more seasoned and experienced providers may direct patients away from expensive office-and-medication-based treatment, and towards community-based services, self-help programs, talk therapy when indicated or desired by the patient, social work services, or any of a number of alternative resources geared towards true recovery.

Alas, no one seems to be offering that as an alternative.  Instead, we’re patting ourselves on the back for expanding health care coverage to more people and developing cost-saving initiatives of dubious benefit.  Somewhere along the way, we seem to have forgotten what “care” really means.  I wonder when we’ll start figuring that one out.


My Own Bipolar Kerfuffle

August 5, 2012

I have a confession to make.  I don’t know what “bipolar disorder” is.  And as a psychiatrist, I’ll admit that’s sort of embarrassing.

Okay, maybe I’m exaggerating when I say that I don’t know what bipolar disorder is.  Actually, if you asked me to define it, I’d give you an answer that would probably sound pretty accurate.  I’ve read the DSM-IV, had years of training, took my Boards, treated people in the midst of manic episodes, and so on.  The problem for me is not the “idea” of bipolar disorder.  It’s what we mean when we use that term.

I recognized this problem only recently—in fact, just last month, as I was putting together the July/August issue of the Carlat Psychiatry Report (now available to subscribers here).  This month’s issue is devoted to the topic of “Bipolar Disorder,” and two contributors, faculty members at prestigious psychiatry departments, made contradictory—yet perfectly valid—observations.  One argued that it’s overdiagnosed; the other advocated for broadening our definition of bipolar disorder—in particular, “bipolar depression.”  The discrepancy was also noted in several comments from our Editorial Board.

Disagreements in science and medicine aren’t necessarily a bad thing.  In fact, when two authorities interpret a phenomenon differently, it creates the opportunity for further experimentation and investigation.  In time, the “truth” can be uncovered.  But in this case, as with much in psychiatry, “truth” seems to depend on whom you ask.

Consider this question.  What exactly is “bipolar depression”?  It seems quite simple:  it’s when a person with bipolar disorder experiences a depressive episode.  But what about when a person comes in with depression but has not had a manic episode or been diagnosed with bipolar disorder?  How about when a person with depression becomes “manic” after taking an antidepressant?  Could those be bipolar depression, too?  I suppose so.  But who says so?  One set of criteria was introduced by Jules Angst, a researcher in Switzerland, and was featured prominently in the BRIDGE study, published in 2011.  His criteria for bipolarity include agitation, irritability, hypomanic symptoms lasting as little as one day, and a family history of mania.  Other experts argue for a “spectrum” of bipolar illness.

(For a critique of the BRIDGE study, see this letter to the editor of the Archives of General Psychiatry, and this detailed—and entertaining—account in David Allen’s blog.)

The end result is rather shocking, when you think about it:  here we have this phenomenon called “bipolar disorder,” which may affect 4% of all Americans, and different experts define it differently.  With the right tweaking, nearly anyone who comes to the attention of a psychiatrist could be considered to have some features suggestive of someone’s definition of bipolar disorder.  (Think I’m kidding?  Check out the questionnaire in the appendix of Angst’s 2003 article.)

Such differences of opinion lead to some absurd situations, particularly when someone is asked to speak authoritatively about this disorder.  At this year’s APA Annual Meeting, for example, David Kupfer (DSM-5 Task Force Chair) gave a keynote address on “Rethinking Bipolar Disorder,” which included recommendations for screening adolescents and the use of preventive measures (including drugs) to forestall the early stages of the illness.  Why was it absurd?  Because as Kupfer spoke confidently about this disease entity, I looked around the packed auditorium and realized that each person may very well have had his or her own definition of bipolar disorder.  But did anyone say anything?  No, we all nodded in agreement, deferring to the expert.

This problem exists throughout psychiatry.  The criteria for each diagnosis in the DSM-IV can easily be applied in a very general way.  This is due partly to fatigue, partly to the fact that insurance companies require that we give a diagnosis as early as the first visit, partly because we’re so reluctant (even when it’s appropriate) to tell patients that they’re actually healthy and may not even have a diagnosis, and partly because different factions of psychiatrists use their experience to create their own criteria.  It’s no wonder that as criteria are loosened, diagnoses are misapplied, and the ranks of the “mentally ill” continue to grow.

As editor of a newsletter, I’m faced with another challenge I didn’t quite expect.  I can’t come out and say that bipolar disorder doesn’t exist (which wouldn’t be true anyway—I have actually seen cases of “classic,” textbook-style mania which do respond to medications as our guidelines would predict).  But I also can’t say that several definitions of “bipolar” exist.  That may be perceived as being too equivocal for a respectable publication and, as a result, some readers may have difficulty taking me seriously.

At the risk of sounding grandiose, I may be experiencing what our field’s leadership must experience on a regular basis.  Academic psychiatrists make their living by conducting research, publishing their findings, and, in most cases, specializing in a given clinical area.  It’s in their best interest to assume that the subjects of their research actually exist.  Furthermore, when experts see patients, they do so in a specialty clinic or clinical trial, which reinforces their definitions of disease.

This can become a problem to those of us seeing the complicated “real world” patients on the front lines, especially when we look to the experts for answers to such questions as whether we should use antipsychotics to treat acute mania, or whether antidepressants are helpful for bipolar depression.  If their interpretations of the diagnoses simply don’t pertain to the people in our offices, all bets are off.  Yet this, I fear, is what happens in psychiatry every day.

In the end, I can’t say whether my definition of bipolar disorder is right or not, because even the experts can’t seem to agree on what it is.  As for the newsletter, we decided to publish both articles, in the interest of maintaining a dialogue.  Readers will simply have to use their own definition of “bipolar disorder” and “bipolar depression” (or eschew them altogether)—hopefully in ways that help their patients.  But it has been an eye-opening experience in the futility (and humility) of trying to speak with authority about something we’re still trying desperately to understand.


Is James Holmes Mentally Ill? Does It Matter?

July 25, 2012

Last Saturday’s early-morning massacre at a crowded movie theater in Aurora, Colorado, stands as one of the most horrific rampages in American history.  But before anyone had any clue as to James Holmes’ motive for such a heinous act—even before the bodies had been removed from the site of the carnage—websites and social media were abuzz with suspicions that the gunman was either mentally ill or under the influence of psychotropic medications—one (or both) of which may have contributed to his crime.

As of this writing, any statement about Holmes’ psychiatric history is pure conjecture (although as I post this, I see a Fox News report claiming that Holmes mailed a notebook to “a psychiatrist” detailing his plan—more will surely be revealed).  Acquaintances quoted in the media have described him as a shy person, but have reported no erratic or unusual behaviors to arouse suspicion of an underlying mental illness.  Until recently, he was even enrolled in a graduate neuroscience program.  Some reports suggest that Holmes had spent weeks engineering an elaborate and complex scheme, hinting at some highly organized—albeit deadly—motive.  Nevertheless, the fact remains that we simply don’t know about any diagnosis, medication, or other psychiatric condition or treatment, which might shed light on Holmes’ frame of mind.

Those who focus on Holmes’ mental state at the time of the murders seem to fall in one of two camps.  Some argue that medications (if he were under the influence of any) may have enabled or facilitated this horrific act.  Others say that if Holmes had been diagnosed with a psychiatric illness, then this catastrophe serves as proof that we need more aggressive treatment—including medications—and services for the mentally ill.

It will be some time before we get answers in Holmes’ case.  And to me, that’s just as well.  Determining whether he has a mental illness, or was under the influence of psychotropic drugs last weekend, unfortunately reframes the question in such a way that further propagates the rift between these two factions, and fails to address how we should handle future cases like Holmes’ more humanely.  If, for example, Holmes is found to suffer from untreated schizophrenia, our society’s (and profession’s) reaction will be to lobby for more aggressive treatment, greater access to medication, more widespread screening programs, and, perhaps, a lower threshold by which to hospitalize psychotic individuals we deem as potentially “dangerous to others.”  If, on the other hand, a toxicology report reveals that Holmes had an antidepressant or antipsychotic in his bloodstream at the time of the murders, our inclination will be to restrict our use of these drugs, remove them from our formularies, and the outcries against psychopharmacology will grow ever louder.

Whether Holmes has a mental illness or not is irrelevant.  He was in crisis—probably for quite some time before last weekend—and that’s what matters.  There was no one—and no way—to reach out to him and meet his needs in such a way as to prevent this tragedy, and that, in my opinion, transcends whether he is “mentally ill” or not.

How can we fix this?  In his column in Monday’s New York Times, David Brooks almost provides a solution.  He writes, correctly, that prevention of such catastrophic events occurs through “relationships”—relatives or neighbors, for instance, who might recognize a change in someone’s behavior and encourage him to get help.  Admittedly, establishing a caring relationship with someone suffering from a history of trauma, grief over a recent loss, poor self-esteem, pathological narcissism, or acute psychosis may be difficult.  But relatives and neighbors are indeed often the first to notice questionable behavior and are well positioned to help those in need.  Perhaps in Holmes’ case, too, we’ll soon learn of some classmates or coworkers who felt something was amiss.

Brooks goes on to argue that it’s the responsibility of that neighbor or relative to “[get] that person treatment before the barbarism takes control.”  He doesn’t define what sort of “treatment” he has in mind.  But he does say that killers are “the product of psychological derangements, not sociological ones,” so the aggressive treatment options he endorses presumably include more aggressive psychological (or psychiatric) treatment.  But to expect a neighbor or relative to help an individual access treatment is precisely a sociological phenomenon.  It puts the onus on our culture at large to pay attention to how our neighbors think and act, and to offer a helping hand or a safe environment (or a locked psychiatric unit, if it has progressed that far) to those of us who think and behave differently or who are suffering a crisis.

That, unfortunately, is not what Brooks is arguing for.  (After all, the title of his essay is “More Treatment Programs.”)  If mass murderers suffer from psychological problems, which is what Brooks seems to believe, the solution “has to start with psychiatry.”  But this introduces the longstanding problem of defining that arbitrary border between “normal” and “abnormal”—a virtually impossible task.  And, of course, once we pathologize the “abnormal,” we’re then obligated to provide treatments (antipsychotic medication, involuntary hospitalization, assisted outpatient treatment, forced drugging) which, yes, might decrease the likelihood of further dangerousness, but which also compromise patients’ civil rights and do not always enable them to recover.

Brooks is right on one point.  Relationships are part of the answer.  Relationships can provide compassion and support in one’s most difficult times.  One take-home message from the Aurora tragedy should be that people like Holmes—regardless of whether they are even “mentally ill” at all—need the security and comfort of safe, trustworthy individuals who are looking out for their (and society’s) best interests and who can intervene at a much earlier stage and in a much less aggressive way, perhaps even avoiding conventional psychiatric treatment altogether.

Getting to that point, unfortunately, requires a sea change in how we deal more compassionately with those in our society who are different from the rest of us—a change that our nation may be unwilling, or unable, to make.  If we fail to make it, we’ll be stuck with the never-ending debate over the validity of psychiatric diagnoses, the effectiveness of psychiatric drugs, the ethics of forced treatment, and the dilemma of defining when antisocial behavior becomes a “disease.”  In the meantime, troubled souls like James Holmes will continue to haunt us, left to their own devious plans until psychiatric treatment—or worse—is the only available option.


Turf Wars

July 6, 2012

The practice of medicine has changed enormously in just the last few years.  While the upcoming implementation of the Affordable Care Act promises even further—and more dramatic—change, one topic which has received little popular attention is the question of exactly who provides medical services.  Throughout medicine, physicians (i.e., those with MD or DO degrees) are being replaced by others, whenever possible, in an attempt to cut costs and improve access to care.

In psychiatry, non-physicians have long been a part of the treatment landscape.  Most commonly today, psychiatrists focus on “medication management” while psychologists, psychotherapists, and others perform “talk therapy.” But even the med management jobs—the traditional domain of psychiatrists, with their extensive medical training—are gradually being transferred to other so-called “midlevel” providers.

The term “midlevel” (not always a popular term, by the way) refers to someone whose training lies “mid-way” between that of a physician and that of another provider (like a nurse, psychologist, social worker, etc) but who is still licensed to diagnose and treat patients.  Midlevel providers usually work under the supervision of a physician, although that supervision is often not direct.  In psychiatry, there are a number of such midlevel professionals, with designations like PMHNP, PMHCNS, RNP, and APRN, who have become increasingly involved in “med management” roles.  This is partly because they tend to command lower salaries and are reimbursed at a lower rate than physicians.  However, many physicians—and not just in psychiatry, by the way—have grown increasingly defensive (and, at times, downright angry, if some physician-only online communities are any indication) about this encroachment of “lesser-trained” practitioners onto their turf.

In my own experience, I’ve worked side-by-side with a few RNPs.  They performed their jobs quite competently.  However, their competence speaks less to the depth of their knowledge (which was impressive, incidentally) and more to the changing nature of psychiatry.  Indeed, psychiatry seems to have evolved to such a degree that the typical psychiatrist’s job—or “turf,” if you will—can be readily handled by someone with less (in some cases far less) training.  When you consider that most psychiatric visits comprise a quick interview and the prescription of a drug, it’s no surprise that someone with even just a rudimentary understanding of psychopharmacology and a friendly demeanor can do well 99% of the time.

This trend could spell (or hasten) the death of psychiatry.  More importantly, however, it could present an opportunity for psychiatry’s leaders to redefine and reinvigorate our field.

It’s easy to see how this trend could bring psychiatry to its knees.  Third-party payers obviously want to keep costs low, and with the passage of the ACA the role of the third-party payer—and “treatment guidelines” that can be followed more or less blindly—will be even stronger.  Patients, moreover, increasingly see psychiatry as a medication-oriented specialty, thanks to direct-to-consumer advertising and our medication-obsessed culture.  Taken together, this means that psychiatrists might be passed over in favor of cheaper workers whose main task will be to follow guidelines or protocols.  If so, most patients (unfortunately) wouldn’t even know the difference.

On the other hand, this trend could also present an opportunity for a revolution in psychiatry.  The predictions in the previous paragraph are based on two assumptions:  first, that psychiatric care requires medication, and second, that patients see the prescription of a drug as equivalent to a cure.  Psychiatry’s current leadership and the pharmaceutical industry have successfully convinced us that these statements are true.  But they need not be.  Instead, they merely represent one treatment paradigm—a paradigm that, for ever-increasing numbers of people, leaves much to be desired.

Preservation of psychiatry requires that psychiatrists find ways to differentiate themselves from midlevel providers in a meaningful fashion.  Psychiatrists frequently claim that they are already different from other mental health practitioners, because they have gone to medical school and, therefore, are “real doctors.”  But this is a specious (and arrogant) argument.  It doesn’t take a “real doctor” to do a psychiatric interview, to compare a patient’s complaints to what’s written in the DSM (or what’s in one’s own memory banks) and to prescribe medication according to a guideline or flowchart. Yet that’s what most psychiatric care is.  Sure, there are those cases in which successful treatment requires tapping the physician’s knowledge of pathophysiology, internal medicine, or even infectious disease, but these are rare—not to mention the fact that most treatment settings don’t even allow the psychiatrist to investigate these dimensions.

Thus, the sad reality is that today’s psychiatrists practice a type of medical “science” that others can grasp without four years of medical school and four years of psychiatric residency training.  So how, then, can psychiatrists provide something different—particularly when appointment lengths continue to dwindle and costs continue to rise?  To me, one answer is to revamp specialty training.  I received my training in two institutions with very different cultures and patient populations.  But both shared a common emphasis on teaching medication management.  Did I need four years to learn how to prescribe drugs?  No.  In reality, practical psychopharmacology can be learned in a one-year (maybe even six-month) course—not to mention the fact that the most valuable knowledge comes from years of experience, something that only real life (and not a training program) can provide.

Beyond psychopharmacology, psychiatry training programs need to beef up psychotherapy training, something that experts have encouraged for years.  But it goes further than that: psychiatry trainees need hands-on experience in the recovery model, community resources and their delivery, addictive illness and recovery concepts, behavioral therapies, case management, and, yes, how to truly integrate medical care into psychiatry.  Furthermore, it wouldn’t hurt to give psychiatrists lessons in communication and critical thinking skills, cognitive psychology principles, cultural sensitivity, economics, business management, alternative medicine (much of which is “alternative” only because the mainstream says so), and, my own pet peeve, greater exposure to the wide, natural variability among human beings in their intellectual, emotional, behavioral, perceptual, and physical characteristics and aptitudes—so we stop labeling everyone who walks in the door as “abnormal.”

One might argue that all of this sounds great, but psychiatrists don’t get paid to do those things.  True, we don’t.  At least not yet.  Nevertheless, a comprehensive approach to human wellness, taken by someone who has invested many years learning how to integrate these perspectives, is, in the long run, far more efficient than the current paradigm of discontinuous care, in which one person manages meds, another person provides therapy, and another person serves as a case manager—roles which can change abruptly due to systemic constraints and turnover.

If we psychiatrists want to defend our “turf,” we can start by reclaiming some of the turf we’ve given away to others.  But more importantly, we must also identify new turf and make it our own—not to provide duplicate, wasteful care, but instead to create a new treatment paradigm in which the focus is on the patient and the context in which he or she presents, and treatment involves only what is necessary (and which is likely to work for that particular individual).  Only a professional with a well-rounded background can bring this paradigm to light, and psychiatrists—those who have invested the time, effort, expense, and hard work to devote their lives to the understanding and treatment of mental illness—are uniquely positioned to bring this perspective to the table and make it happen.


What Adderall Can Teach Us About Medical Marijuana

June 19, 2012

An article in the New York Times last week described the increasing use of stimulant medications such as Adderall and Ritalin among high-school students.  Titled “The Risky Rise of the Good-Grade Pill,” the article discussed how 15 to 40 percent of students, competing for straight-As and spots in elite colleges, use stimulants for an extra “edge,” regardless of whether they actually have ADHD.  In this blog, I’ve written about ADHD.  It’s a real condition—and medications can help tremendously—but the diagnostic criteria are quite vague.  As with much in psychiatry, anyone “saying the right thing” can relatively easily get one of these drugs, whether they need it or not.

Sure enough, the number of prescriptions for these drugs has risen 26% since 2007.  Does this mean that ADHD is now 26% more prevalent?  No.  In the Times article, some students admitted they “lie to [their] psychiatrists” in order to “get something good.”  In fact, some students “laughed at the ease with which they got some doctors to write prescriptions for ADHD.”  In the absence of an objective test (some computerized tests exist but are neither widely used nor well validated, and brain scans are similarly suspect), and with diagnostic criteria readily accessible on the internet, anyone who wants a stimulant can basically get one.  And while psychiatric diagnosis is often an imperfect science, in many settings the methodology by which we assess and diagnose ADHD is particularly crude.

Many of my colleagues will disagree with (or hate) me for saying so, but in some sense, the prescription of stimulants has become just like any other type of cosmetic medicine.  Plastic surgeons and dermatologists, for instance, are trained to perform medically necessary procedures, but they often find that “cosmetic” procedures like facelifts and Botox injections are more lucrative.  Similarly, psychiatrists can have successful practices in catering to ultra-competitive teens (and their parents) and giving out stimulants.  Who cares if there’s no real disease?  Psychiatry is all about enhancing patients’ lives, isn’t it?  As another blogger wrote last week, some respectable physicians have even argued that “anyone and everyone should have access to drugs that improve performance.”

When I think about “performance enhancement” in this manner, I can’t help but think about the controversy over medical marijuana.  This is another topic I’ve written about, mainly to question the “medical” label on something that is neither routinely accepted nor endorsed by the medical profession.  Proponents of medical cannabis, I wrote, have co-opted the “medical” label in order for patients to obtain an abusable psychoactive substance legally, under the guise of receiving “treatment.”

How is this different from the prescription of psychostimulants for ADHD?  The short answer is, it’s not.  If my fellow psychiatrists and I prescribe psychostimulants (which are abusable psychoactive substances in their own right, as described in the pages of the NYT) on the basis of simple patient complaints—and continue to do so simply because a patient reports a subjective benefit—then this isn’t very different from a medical marijuana provider writing a prescription (or “recommendation”) for medical cannabis.  In both cases, the conditions being treated are ill-defined (yes, in the case of ADHD, it’s detailed in the DSM, which gives it a certain validity, but that’s not saying much).  In both cases, the conditions affect patients’ quality of life but are rarely, if ever, life-threatening.  In both cases, psychoactive drugs are prescribed which could be abused but which most patients actually use quite responsibly.  Last but not least, in both cases, patients generally do well; they report satisfaction with treatment and often come back for more.

In fact, taken one step further, this analogy may turn out to be an argument in favor of medical marijuana.  As proponents of cannabis are all too eager to point out, marijuana is a natural substance, humans have used it for thousands of years, and it’s arguably safer than other abusable (but legal) substances like nicotine and alcohol.  Psychostimulants, on the other hand, are synthetic chemicals (not without adverse effects) and have been described as “gateway drugs” to more or less the same degree as marijuana.  Why one is legal and one is not simply appears to be due to the psychiatric profession’s “seal of approval” on one but not the other.

If the psychiatric profession is gradually moving away from the assessment, diagnosis, and treatment of severe mental illness and, instead, treating “lifestyle” problems with drugs that could easily be abused, then I really don’t have a good argument for denying cannabis to patients who insist it helps their anxiety, insomnia, depression, or chronic pain.

Perhaps we should ask physicians to take a more rigorous approach to ADHD diagnosis, demanding interviews with parents and teachers, extensive neuropsychiatric testing, and (perhaps) neuroimaging before offering a script.  But in a world in which doctors’ reimbursements are dwindling, and the time devoted to patient care is vanishing—not to mention a patient culture which demands a quick fix for the problems associated with the stresses of modern adolescence—it doesn’t surprise me one bit that some doctors will cut corners and prescribe without a thorough workup, in much the same way that marijuana is provided in states where it’s legal.  If the loudest protests against such a practice come not from our leadership but from the pages of the New York Times, we have only ourselves to blame when things really get out of hand.

