
ADHD: A Modest Proposal

February 1, 2012

I’m reluctant to write a post about ADHD.  It just seems like treacherous ground.  Judging by comments I’ve read online and in magazines, and my own personal experience, expressing an opinion about this diagnosis—or just about anything in child psychiatry—will be met with criticism from one side or another.  But after reading L. Alan Sroufe’s article (“Ritalin Gone Wild”) in this weekend’s New York Times, I feel compelled to write.

If you have not read the article, I encourage you to do so.  Personally, I agree with every word (well, except for the comment about “children born into poverty therefore [being] more vulnerable to behavior problems”—I would remind Dr Sroufe that correlation does not equal causation).  In fact, I wish I had written it.  Unfortunately, it seems that only outsiders or retired psychiatrists can write such stuff about this profession. The rest of us might need to look for jobs someday.

Predictably, the article has attracted numerous online detractors.  For starters, check out this response from the NYT “Motherlode” blog, condemning Dr Sroufe for “blaming parents” for ADHD.  In my reading of the original article, Dr Sroufe did nothing of the sort.  Rather, he pointed out that ADHD symptoms may not entirely (or at all) arise from an inborn neurological defect (or “chemical imbalance”), but rather that environmental influences may be more important.  He also remarked that, yes, ADHD drugs do work; children (and adults, for that matter) do perform better on them, but those successes decline over time, possibly because a drug solution “does nothing to change [environmental] conditions … in the first place.”

I couldn’t agree more.  To be honest, I think this statement holds true for much of what we treat in psychiatry, but it’s particularly relevant in children and adolescents.  Children are exposed to an enormous number of influences as they try to navigate their way in the world, not to mention the fact that their brains—and bodies—continue to develop rapidly and are highly vulnerable.  “Environmental influences” are almost limitless.

I have a radical proposal which will probably never, ever, be implemented, but which might help resolve the problems raised by the NYT article.  Read on.

First of all, you’ll note that I referred to “ADHD symptoms” above, not “ADHD.”  This isn’t a typo.  In fact, this is a crucial distinction.  As with anything else in psychiatry, diagnosing ADHD relies on documentation of symptoms.  ADHD-like symptoms are extremely common, particularly in child-age populations.  (To review the official ADHD diagnostic criteria from the DSM-IV, click here.)  To be sure, a diagnosis of ADHD requires that these symptoms be “maladaptive and inconsistent with developmental level.”  Even so, I’ve often joked with my colleagues that I can diagnose just about any child with ADHD just by asking the right questions in the right way.  That’s not entirely a joke.  Try it yourself.  Look at the criteria, and then imagine you have a child in your office whose parent complains that he’s doing poorly in school, or gets in fights, or refuses to do homework, or daydreams a lot, etc.  When the ADHD criteria are on your mind—remember, you have to think like a psychiatrist here!—you’re likely to ask leading questions, and I guarantee you’ll get positive responses.

That’s a lousy way of making a diagnosis, of course, but it’s what happens in psychiatrists’ and pediatricians’ offices every day.  There are more “valid” ways to diagnose ADHD:  rating scales like the Conners or Vanderbilt surveys, extensive neuropsychiatric assessment, or (possibly) expensive imaging tests.  However, in practice, we often let subthreshold scores on those surveys “slide” and prescribe ADHD medications anyway (I’ve seen it plenty); neuropsychiatric assessments are often wishy-washy (“auditory processing score in the 60th percentile,” etc.); and, as Dr Sroufe correctly points out, children with poor motivation or “an underdeveloped capacity to regulate their behavior” will most likely have “anomalous” brain scans.  That doesn’t necessarily mean they have a disorder.

So what’s my proposal?  My proposal is to get rid of the diagnosis of ADHD altogether.  Now, before you crucify me or accuse me of being unfit to practice medicine (as one reader—who’s also the author of a book on ADHD—did when I floated this idea on David Allen’s blog last week), allow me to elaborate.

First, if we eliminate the diagnosis of ADHD, we can still do what we’ve been doing.  We can still evaluate children with attention or concentration problems, or hyperactivity, and we can still use stimulant medications (of course, they’d be off-label now) to provide relief—as long as we’ve obtained the same informed consent that we’ve done all along.  We do this all the time in medicine.  If you complain of constant toe and ankle pain, I don’t immediately diagnose you with gout; instead, I might do a focused physical exam of the area and recommend a trial of NSAIDs.  If the pain returns, or doesn’t improve, or you have other features associated with gout, I may want to check uric acid levels, do a synovial fluid analysis, or prescribe allopurinol.

That’s what medicine is all about:  we see symptoms that suggest a diagnosis, and we provide an intervention to help alleviate the symptoms while paying attention to the natural course of the illness, refining the diagnosis over time, and continually modifying the therapy to treat the underlying diagnosis and/or eliminate risk factors.  With the ultimate goal, of course, of minimizing dangerous or expensive interventions and achieving some degree of meaningful recovery.

This is precisely what we don’t do in most cases of ADHD.  Or in most of psychiatry.  While exceptions definitely exist, often the diagnosis of ADHD—and the prescription of a drug that, in many cases, works surprisingly well—is the end of the story.  Child gets a diagnosis, child takes medication, child does better with peers or in school, parents are satisfied, everyone’s happy.  But what caused the symptoms in the first place?  Can (or should) that be fixed?  When can (or should) treatment be stopped?  How can we prevent long-term harm from the medication?

If, on the other hand, we don’t make a diagnosis of ADHD, but instead document that the child has “problems in focusing” or “inattention” or “hyperactivity” (i.e., we describe the specific symptoms), then it behooves us to continue looking for the causes of those symptoms.  For some children, it may be a chaotic home environment.  For others, it may be a history of neglect, or ongoing substance abuse.  For others, it may be a parenting style or interaction which is not ideal for that child’s social or biological makeup (I hesitate to write “poor parenting” because then I’ll really get hate mail!).  For still others, there may indeed be a biological abnormality—maybe a smaller dorsolateral prefrontal cortex (hey! the DLPFC!) or delayed brain maturation.

ADHD offers a unique platform upon which to try this open-minded, non-DSM-biased approach.  Dropping the diagnosis of “ADHD” would have a number of advantages.  It would encourage us to search more deeply for root causes; it would allow us to be more eclectic in our treatment; it would prevent patients, parents, doctors, teachers, and others from using it as a label or as an “excuse” for one’s behavior; and it would require us to provide truly individualized care.  Sure, there will be those who simply ask for the psychostimulants “because they work” for their symptoms of inattentiveness or distractibility (and those who deliberately fake ADHD symptoms because they want to abuse the stimulant or because they want to get into Harvard), but hey, that’s already happening now!  My proposal would create a glut of “false negative” ADHD diagnoses, but it would also reduce the above “false positives,” which, in my opinion, are more damaging to our field’s already tenuous nosology.

A strategy like this could—and probably should—be extended to other conditions in psychiatry, too.  I believe that some of what we call “ADHD” is truly a disorder—probably multiple disorders, as noted above; the same is probably true of “major depression,” “bipolar disorder,” and just about everything else.  But when these labels start being used indiscriminately (and unfortunately DSM-5 doesn’t look likely to offer any improvement), the diagnoses become fixed labels and lock us into an approach that may, at best, completely miss the point, and at worst, cause significant harm.  Maybe we should rethink this.


The Unfortunate Therapeutic Myopia of the EMR

January 19, 2012

There’s a lot you can say about an electronic medical record (EMR).  Some of it is good: it’s more legible than a written chart, it facilitates billing, and it’s (usually) readily accessible.  On the other hand, EMRs are often cumbersome and confusing, they encourage “checklist”-style medicine, and they contain a lot of useless or duplicate information.  But a recent experience in my child/adolescent clinic opened my eyes to where an EMR might really mislead us.

David, a 9-year-old elementary school student, has been coming to the clinic every month for the last three years.  He carries a diagnosis of “bipolar disorder,” manifested primarily as extreme shifts in mood, easy irritability, insomnia, and trouble controlling his temper, both in the classroom and at home.  Previous doctors had diagnosed “oppositional defiant disorder,” then ADHD, then bipolar.  He had had a trial of psychostimulants with no effect, as well as some brief behavioral therapy.  Somewhere along the way, a combination of clonidine and Risperdal was started, and those have been David’s meds for the last year.

The information in the above paragraph came from my single interaction with David and his mom.  It was the first time I had seen David; he was added to my schedule at the last minute because the doctor he had been seeing for the last four months—a locum tenens doc—was unavailable.

Shortly before the visit, I had opened David’s EMR record to review his case, but it was not very informative.  Our EMR only allows one note to be open at a time, and I saw the same thing—“bipolar, stable, continue current meds”—and some other text, apparently cut & pasted, in each of his last 3-4 notes.  This was no big surprise; EMRs are full of cut & pasted material, plus lots of other boilerplate stuff that is necessary for legal & billing purposes but can easily be ignored.  The take-home message, at the time, was that David had been fairly stable for at least the last few months and probably just needed a refill.

During the appointment, I took note that David was a very pleasant child, agreeable and polite.  Mom said he had been “doing well.”  But I also noticed that, throughout the interview, David’s mom was behaving strangely—her head bobbed rhythmically side to side, and her arms moved in a writhing motion.  She spoke tangentially and demonstrated some acute (and extreme) shifts in emotion, at one point even crying suddenly, with no obvious trigger.

I asked questions about their home environment, David’s access to drugs and alcohol, etc., and I learned that mom used Vicodin, Soma, and Xanax.  She admitted that they weren’t prescribed to her—she bought them from friends.  Moreover, she reported that she “had just taken a few Xanax to get out the door this morning” which, she said, “might explain why I’m acting like this.”  She also shared with me that she had been sent to jail four years ago on an accusation of child abuse (she had allegedly struck her teenage daughter during an argument), at which time David and his brothers were sent to an emergency children’s shelter for four nights.

Even though I’m not David’s regular doctor, I felt that these details were relevant to his case.  It was entirely possible, in my opinion, that David’s home environment—a mother using prescription drugs inappropriately, a possible history of trauma—had contributed to his mood lability and “temper dysregulation,” something that a “bipolar” label might mask.

But I’m not writing this to argue that David isn’t “bipolar.”  Instead, I wish to point out that I obtained these details simply by observing the interaction between David and his mom over the course of ~30 minutes, and asking a few questions, and not by reading his EMR record.  In fact, after the appointment I reviewed the last 12 months of his EMR record, which showed dozens of psychiatrists’ notes, therapists’ notes, case manager’s notes, demographic updates, and “treatment plans,” and all of it was generally the same:  diagnosis, brief status updates, LOTS of boilerplate mumbo-jumbo, pages and pages of checkboxes, a few mentions of symptoms.  Nothing about David’s home situation or mom’s past.  In fact, nothing about mom at all.  I could not have been the first clinician to have had concerns about David’s home environment, but if such information was to be found in his EMR record, I had no idea where.

Medical charts—particularly in psychiatry—are living documents.  To any physician who has practiced for more than a decade or so, simply opening an actual, physical, paper chart can be like unfolding a treasure map:  you don’t know what you’ll find, but you know that there may be riches to be revealed.  Sometimes, while thumbing through the chart, a note jumps out because it’s clearly detailed or something relevant is highlighted or “flagged” (in the past, I learned how to spot the handwriting of the more perceptive and thorough clinicians).  Devices like Post-It notes or folded pages provide easy—albeit low-tech—access to relevant information.  Also, a thick paper chart means a long (or complicated) history in treatment, necessitating a more thorough review.  Sometimes the absence of notes over a period of time indicates a period of decompensation, a move, or, possibly, a period of remission.  All of this is available, literally, at one’s fingertips.

EMRs are far more restrictive.  In David’s case, the EMR was my only source of information—apart from David himself.  And for David, it seemed sterile, bland, just a series of “check-ins” of a bipolar kid on Risperdal.  There was probably more info somewhere in there, but it was too difficult and non-intuitive to access.  Hence, the practice (adopted by most clinicians) of just opening up the patient’s most recent note—and that’s it.

Unfortunately, this leads to a therapeutic myopia that may change how we practice medicine.  EMRs, when used this way, are here-and-now.  They have become the medical equivalent of Facebook.  When I log on to the EMR, I see my patient’s most recent note—a “status update,” so to speak—but not much else.  It takes time and effort to search through a patient’s profile for more relevant historical info—and that’s if you know where to look.  After working with seven different EMRs in the last six years, I can say that they’re all pretty similar in this regard.  And if an electronic chart is only going to be used for its most recent note, there’s no incentive to be thorough.

Access to information is great.  But the “usability” of EMRs is so poor that we have easy access only to what the last clinician thought was important.  Or better yet, what he or she decided to document.  The rest—like David’s home life, the potential impact of his mother’s behavior on his symptoms, and environmental factors that require our ongoing attention, all of which may be far more meaningful than David’s last Risperdal dose—must be obtained “from scratch.”  If it is obtained at all.


Talk Is Cheap

October 9, 2011

I work part-time in a hospital psychiatry unit, overseeing residents and medical students on their inpatient psychiatry rotations.  They are responsible for three to six patients at any given time, directing and coordinating the patients’ care while they are admitted to our hospital.

To an outsider, this may seem like a generous ratio: one resident taking care of only 3-6 patients.  One would think that this should allow for over an hour of direct patient contact per day, resulting in truly “personalized” medicine.  But instead, the absolute opposite is true: sometimes doctors only see patients for minutes at a time, and develop only a limited understanding of patients for whom they are responsible.  I noticed this in my own residency training, when halfway through my first year I realized the unfortunate fact that even though I was “taking care” of patients and getting my work done satisfactorily, I couldn’t tell you whether my patients felt they were getting better, whether they appreciated my efforts, or whether they had entirely different needs that I had been ignoring.

In truth, much of the workload in a residency program (in any medical specialty) is related to non-patient-care concerns:  lectures, reading, research projects, faculty supervision, etc.  But even outside of the training environment, doctors spend less and less time with patients, creating a disturbing precedent for the future of medicine.  In psychiatry in particular, the shrinking “therapy hour” has received much attention, most recently in a New York Times front-page article (which I blogged about here and here).  The responses to the article echoed a common (and growing) lament among most psychiatrists:  therapy has been replaced with symptom checklists, rapid-fire questioning, and knee-jerk prescribing.

In my case, I don’t mean to be simply one more voice among the chorus of psychiatrists yearning for the “glory days” of psychiatry, when prolonged psychotherapy and hour-long visits were the norm.  I didn’t practice in those days, anyway.  Nevertheless, I do believe that we lose something important by distancing ourselves from our patients.

Consider the inpatient unit again.  My students and residents sometimes spend hours looking up background information, old charts, and lab results, calling family members and other providers, and discussing differential diagnosis and possible treatment plans, before ever seeing their patient.  While their efforts are laudable, the fact remains that a face-to-face interaction with a patient can be remarkably informative, sometimes even immediately diagnostic to the skilled eye.  In an era where we’re trying to reduce our reliance on expensive technology and wasteful tests, patient contact should be prioritized over the hours upon hours that trainees spend hunched over computer workstations.

In the outpatient setting, direct patient-care time has been largely replaced by “busy work” (writing notes; debugging EMRs; calling pharmacies to inquire about prescriptions; completing prior-authorization forms; and performing any number of “quality-control,” credentialing, or other mandatory “compliance” exercises required by our institutions).  Some of this is important, but at the same time, an extra ten or fifteen minutes with a patient may go a long way to determining that patient’s treatment goals (which may disagree with the doctor’s), improving their motivation for change, or addressing unresolved underlying issues—matters that may truly make a difference and cut long-term costs.

The future direction of psychiatry doesn’t look promising, as this vanishing emphasis on the patient’s words and deeds is likely to make treatment even less cost-effective.  For example, there is a growing effort to develop biomarkers for diagnosis of mental illness and to predict medication response.  In my opinion, the science is just not there yet (partly because the DSM is still a poor guide by which to make valid diagnoses… what are depression and schizophrenia anyway?).  And even if the biomarker strategy were a reliable one, there’s still nothing that could be learned in a $745+ blood test that couldn’t be uncovered in a good, thorough clinical examination by a talented diagnostician, not to mention the fact that the examination would also uncover a large amount of other information—and establish valuable rapport—which would likely improve the quality of care.

The blog “1boringoldman” recently featured a post called “Ask them about their lives…” in which a particularly illustrative case was discussed.  I’ll refer you there for the details, but I’ll repost the author’s summary comments here:

I fantasize an article in the American Journal of Psychiatry entitled “Ask them about their lives!” Psychiatrists give drugs. Therapists apply therapies. Who the hell interviews patients beyond logging in a symptom list? I’m being dead serious about that…

I share Mickey’s concern, as this is a vital question for the future of psychiatry.  Personally, I chose psychiatry over other branches of medicine because I enjoy talking to people, asking about their lives, and helping them develop goals and achieve their dreams.  I want to help them overcome the obstacles put in their way by catastrophic relationships, behavioral missteps, poor insight, harmful impulsivity, addiction, emotional dysregulation, and—yes—mental illness.

However, if I don’t have the opportunity to talk to my patients (still my most useful diagnostic and therapeutic tool), I must instead rely on other ways to explain their suffering:  a score on a symptom list, a lab value, or a diagnosis that’s been stuck on the patient’s chart over several years without anyone taking the time to ask whether it’s relevant.  Not only do our patients deserve more than that, they usually want more than that, too; the most common complaint I hear from a patient is that “Dr So-And-So didn’t listen to me, he just prescribed drugs.”

This is not the psychiatry of my forefathers.  This is neither Philippe Pinel’s “moral treatment,” Emil Kraepelin’s meticulous attention to symptoms and patterns thereof, nor Aaron Beck’s cognitive re-strategizing.  No, it’s the psychiatry of HMOs, Wall Street, and an over-medicalized society, and in this brave new world, the patient is nowhere to be found.


Psychopharm R&D Cutbacks II: A Response to Stahl

August 28, 2011

A lively discussion has emerged on the NEI Global blog and on Daniel Carlat’s psychiatry blog about a recent post by Stephen Stahl, NEI chairman, pop(ular) psychiatrist, and promoter of psychopharmaceuticals.  The post pertains to the exodus of pharmaceutical companies from neuroscience research (something I’ve blogged about too), and the changing face of psychiatry in the process.

Dr Stahl’s post is subtitled “Be Careful What You Ask For… You Just Might Get It” and, as one might imagine, it reads as a scathing (some might say “ranting”) reaction against several of psychiatry’s detractors: the “anti-psychiatry” crowd, the recent rules restricting pharmaceutical marketing to doctors, and those who complain about Big Pharma funding medical education.  He singles out Dr Carlat, in particular, as an antipsychiatrist, implying that Carlat believes mental illnesses are inventions of the drug industry, medications are “diabolical,” and drugs exist solely to enrich pharmaceutical companies.  [Not quite Carlat’s point of view, as  a careful reading of his book, his psychopharmacology newsletter, and, yes, his blog, would prove.]

While I do not profess to have the credentials of Stahl or Carlat, I have expressed my own opinions on this matter in my blog, and wanted to weigh in on the NEI post.

With respect to Dr Stahl (and I do respect him immensely), I think he must re-evaluate his influence on our profession.  It is huge, and not always in a productive way.  Case in point: for the last two months I have worked in a teaching hospital, and I can say that Stahl is seen as something of a psychiatry “god.”  He has an enormous wealth of knowledge, his writing is clear and persuasive, and the materials produced by NEI present difficult concepts in a clear way.  Stahl’s books are directly quoted—unflinchingly—by students, residents, and faculty.

But there’s the rub.  Stahl has done such a good job of presenting his (i.e., the psychopharmacology industry’s) view of things that it is rarely challenged or questioned.  The “pathways” he suggests for depression, anxiety, psychosis, cognition, insomnia, obsessions, drug addiction, medication side effects—basically everything we treat in psychiatry—are accompanied by theoretical models for how some new pharmacological agent might (or will) affect these pathways, when in fact the underlying premises or the proposed drug mechanisms—or both—may be entirely wrong.  (BTW, this is not a criticism of Stahl, this is simply a statement of fact; psychiatry as a neuroscience is decidedly still in its infancy.)

When you combine Stahl’s talent with his extensive relationships with drug companies, it makes for a potentially dangerous combination.  To cite just two examples, Stahl has written articles (in widely distributed “throwaway” journals) making compelling arguments for the use of low-dose doxepin (Silenor) and L-methylfolate (Deplin) in insomnia and depression, respectively, when the actual data suggest that their generic (or OTC) equivalents are just as effective.  Many similar Stahl productions are included as references or handouts in drug companies’ promotional materials or websites.

How can this be “dangerous”?  Isn’t Stahl just making hypotheses and letting doctors decide what to do with them?  Well, not really.  In my experience, if Stahl says something, it’s no longer a hypothesis, it becomes the truth.

I can’t tell you how many times a student (or even a professor of mine) has explained to me “Well, Stahl says drug A works this way, so it will probably work for symptom B in patient C.”  Unfortunately, we don’t have the follow-up discussion when drug A doesn’t treat symptom B; or patient C experiences some unexpected side effect (which was not predicted by Stahl’s model); or the patient improves in some way potentially unrelated to the medication.  And when we don’t get the outcome we want, we invoke yet another Stahl pathway to explain it, or to justify the addition of another agent.  And so on and so on, until something “works.”  Hey, a broken clock is still correct twice a day.

I don’t begrudge Stahl for writing his articles and books; they’re very well written, and the colorful pictures are fun to look at—it makes psychiatry almost as easy as painting by numbers.  I also (unlike Carlat) don’t get annoyed when doctors do speaking gigs to promote new drugs.  (When these paid speakers are also responsible for teaching students in an academic setting, however, that’s another issue.)  Furthermore, I accept the fact that drug companies will try to increase their profits by expanding market share and promoting their drugs aggressively to me (after all, they’re companies—what do we expect them to do??), or by showing “good will” by underwriting CME, as long as it’s independently confirmed to be without bias.

The problem, however, is that doctors often don’t ask for the data.  We don’t  ask whether Steve Stahl’s models might be wrong (or biased).  We don’t look closely at what we’re presented (either in a CME lesson or by a drug rep) to see whether it’s free from commercial influence.  And, perhaps most distressingly, we don’t listen enough to our patients to determine whether our medications actually do what Stahl tells us they’ll do.

Furthermore, our ignorance is reinforced by a diagnostic tool (the DSM) which requires us to pigeonhole patients into a small number of diagnoses that may have no biological validity; a reimbursement system that encourages a knee-jerk treatment (usually a drug) for each such diagnosis; an FDA approval process that gives the illusion that diagnoses are homogeneous and that all patients will respond the same way; and only the most basic understanding of what causes mental illness.  It creates the perfect opportunity for an authority like Stahl to come in and tell us what we need to know.  (No wonder he’s a consultant for so many pharmaceutical companies.)

As Stahl writes, the departure of Big Pharma from neuroscience research is unfortunate, as our existing medications are FAR from perfect (despite Stahl’s texts making them sound pretty darn effective).  However, this “breather” might allow us to pay more attention to our patients and think about what else—besides drugs—we can use to nurse them back to health.  Moreover, refocusing our research efforts on the underlying psychology and biology of mental illness (i.e., research untainted by the need to show a clinical drug response or to get FDA approval) might open new avenues for future drug development.

Stahl might be right that the anti-pharma pendulum has swung too far, but that doesn’t mean we can’t use this opportunity to make great strides forward in patient care.  The paychecks of some docs might suffer.  Hopefully our patients won’t.


Do Antipsychotics Treat PTSD?

August 23, 2011

Do antipsychotics treat PTSD?  It depends.  That seems to be the best response I can give, based on the results of two recent studies on this complex disorder.  A better question, though, might be: what do antipsychotics treat in PTSD?

One of these reports, a controlled, double-blinded study of the atypical antipsychotic risperidone (Risperdal) for the treatment of “military service-related PTSD,” was featured in a New York Times article earlier this month.  The NYT headline proclaimed, somewhat unceremoniously:  “Antipsychotic Use is Questioned for Combat Stress.”  And indeed, the actual study, published in the Journal of the American Medical Association (JAMA), demonstrated that a six-month trial of risperidone did not improve patients’ scores on a scale of PTSD symptoms, when compared to placebo.

But almost simultaneously, another paper was published in the online journal BMC Psychiatry, stating that Abilify—a different atypical antipsychotic—actually did help patients with “military-related PTSD with major depression.”

So what are we to conclude?  Even though there are some key differences between the studies (which I’ll mention below), a brief survey of the headlines might leave the impression that the two reports “cancel each other out.”  In reality, I think it’s safe to say that neither study contributes very much to our treatment of PTSD.  But it’s not because of the equivocal results.  Instead, it’s a consequence of the premises upon which the two studies were based.

PTSD, or post-traumatic stress disorder, is an incredibly complicated condition.  The diagnosis was first given to Vietnam veterans who, for years after their service, experienced symptoms of increased physiological arousal, avoidance of stimuli associated with their wartime experience, and continual re-experiencing (in the form of nightmares or flashbacks) of the trauma they experienced or observed.  It’s essentially a re-formulation of conditions that were, in earlier years, labeled “shell shock” or “combat fatigue.”

Since the introduction of this disorder in 1980 (in DSM-III), the diagnostic umbrella of PTSD has grown to include victims of sexual and physical abuse, traumatic accidents, natural disasters, terrorist attacks (like the September 11 massacre), and other criminal acts.  Some have even argued that poverty or unfortunate psychosocial circumstances may also qualify as the “traumatic” event.

Not only are the types of stressors that cause PTSD widely variable, but so are the symptoms that ultimately develop.  Some patients complain of minor but persistent symptoms, while others experience infrequent but intense exacerbations.  Similarly, the neurobiology of PTSD is still poorly understood, and may vary from person to person.  And we’ve only just begun to understand protective factors for PTSD, such as the concept of “resilience.”

Does it even make sense to say that one drug can (or cannot) treat such a complex disorder?  Take, for instance, the scale used in the JAMA article to measure patients’ PTSD symptoms.  The PTSD score they used as the outcome measure was the Clinician-Administered PTSD Scale, or CAPS, considered the “gold standard” for PTSD diagnosis.  But the CAPS includes 30 items, ranging from sleep disturbances to concentration difficulties to “survivor guilt.”

It doesn’t take a cognitive psychologist or neuroscientist to recognize that these 30 domains—all features of what we consider “clinical” PTSD—could be explained by just as many, if not more, neural pathways, and may be experienced in entirely different ways, depending upon one’s psychological makeup and the nature of one’s past trauma.

In other words, saying that Risperdal is “not effective” for PTSD is like saying that acupuncture is not effective for chronic pain, or that a low-carb diet is not an effective way to lose weight.  Statistically speaking, these interventions might not help most patients, but in some, they may indeed play a crucial role.  We just don’t understand the disorders well enough.

[By the way, what about the other study, which reported that Abilify was helpful?  Well, this study was a retrospective review of patients who were prescribed Abilify, not a randomized, placebo-controlled trial.  And it did not use the CAPS, but the PCL-M, a shorter survey of PTSD symptoms.  Moreover, it only included 27 of the 123 veterans who agreed to take Abilify, and I cannot, for the life of me, figure out why the other 96 were excluded from their analysis.]

Anyway, the bottom line is this:  PTSD is a complicated, multifaceted disorder—probably a combination of disorders, similar to much of what we see in psychiatry.  To say that one medication “works” or another “doesn’t work” oversimplifies the condition almost to the point of absurdity.  And for the New York Times to publicize such a finding only gives more credence to the misconception that a prescription medication is (or has the potential to be) the treatment of choice for all patients with a given diagnosis.

What we need is not another drug trial for PTSD, but rather a better understanding of the psychological and neurobiological underpinnings of the disease: a comprehensive analysis of which symptoms respond to which drugs, which aspects of the disorder are not amenable to medication management, and how individuals differ in their experience of the disorder and in the tools (pharmacological and otherwise) they can use to overcome their despair.  Anything else is a failure to recognize the human aspects of the disease, and an issuance of false hope to those who suffer.


Antidepressants: The New Candy?

August 9, 2011

It should come as no surprise to anyone paying attention to health care (not to mention modern American society) that antidepressants are very heavily prescribed.  They are, in fact, the second most widely prescribed class of medicine in America, with 253 million prescriptions written in 2010 alone.  Whether this means we are suffering from an epidemic of depression is another question.  In fact, a recent article asks whether we’re suffering from much of anything at all.

In the August issue of Health Affairs, Ramin Mojtabai and Mark Olfson present evidence that doctors are prescribing antidepressants at ever-higher rates.  Over a ten-year period (1996-2007), the percentage of all office visits to non-psychiatrists that included an antidepressant prescription rose from 4.1% to 8.8%.  The rates were even higher for primary care providers: from 6.2% to 11.5%.

But there’s more.  The investigators also found that in the majority of cases, antidepressants were given even in the absence of a psychiatric diagnosis.  In 1996, 59.5% of the antidepressant recipients lacked a psychiatric diagnosis.  In 2007, this number had increased to 72.7%.

In other words, nearly 3 out of 4 patients who visited a nonpsychiatrist and received a prescription for an antidepressant were not given a psychiatric diagnosis by that doctor.  Why might this be the case?  Well, as the authors point out, antidepressants are used off-label for a variety of conditions—fatigue, pain, headaches, PMS, irritability—none of which, mind you, has good data supporting such use.

Nonpsychiatrists might add an antidepressant to someone’s medication regimen because the patient “seems” depressed or anxious.  It is also true that primary care providers sometimes do manage mental illness, particularly in areas where psychiatrists are in short supply.  But remember, in the majority of cases the doctors did not even give a psychiatric diagnosis, which suggests that even if they did a “psychiatric evaluation,” the evaluation was likely quick and haphazard.

And then, of course, there were probably some cases in which the primary care docs just continued medications that were originally prescribed by a psychiatrist—in which case perhaps they simply didn’t report a diagnosis.

But is any of this okay?  Some, like a psychiatrist quoted in a Wall Street Journal article on this report, argue that antidepressants are safe.  They’re unlikely to be abused, often effective (if only as a placebo), and dirt cheap (well, at least the generic SSRIs and TCAs are).  But others have had very real problems discontinuing them, or have suffered particularly troublesome side effects.

The increasingly indiscriminate use of antidepressants might also open the door to the (ab)use of other, more costly drugs with potentially more devastating side effects.  I continue to be amazed, for example, by the number of primary care docs who prescribe Seroquel (an antipsychotic) for insomnia, when multiple other pharmacologic and nonpharmacologic options are ignored.  In my experience, in the vast majority of these cases, the (well-known) risks of increased appetite and blood sugar were never discussed with the patient.  And then there are other antipsychotics like Abilify and Seroquel XR, which are increasingly being used in primary care as drugs to “augment” antidepressants and will probably be prescribed as freely as the antidepressants themselves.  (Case in point: a senior medical student was shocked when I told her a few days ago that Abilify is an antipsychotic.  “I always thought it was an antidepressant,” she remarked, “after seeing all those TV commercials.”)

For better or for worse, the increased use of antidepressants in primary care may prove to be yet another blow to the foundation of biological psychiatry.  Doctors prescribe—and continue to prescribe—these drugs because they “work.”  It’s probably more accurate, however, to say that doctors and patients think they work.  And this may have nothing to do with biology.  As the saying goes, it’s the thought that counts.

Anyway, if this is true—and you consider the fact that these drugs are prescribed on the basis of a rudimentary workup (remember, no diagnosis was given 72.7% of the time)—then the use of an antidepressant probably has no more justification than the addition of a multivitamin, the admonition to eat less red meat, or the suggestion to “get more fresh air.”

The bottom line: If we’re going to give out antidepressants like candy, then let’s treat them as such.  Too much candy can be a bad thing—something that primary care doctors can certainly understand.  So if our patients ask for candy, then we need to find a substitute—something equally soothing and comforting—or provide them instead with a healthy diet of interventions to address the real issues, rather than masking those problems with a treat to satisfy their sweet tooth and bring them back for more.


Mental Illness IS Real After All… So What Was I Treating Before?

July 26, 2011

I recently started working part-time on an inpatient psychiatric unit at a large county medical center.  The last time I worked in inpatient psychiatry was six years ago, and in the meantime I’ve worked in various office settings—community mental health, private practice, residential drug/alcohol treatment, and research.  I’m glad I’m back, but it’s really making me rethink my ideas about mental illness.

An inpatient psychiatry unit is not just a locked version of an outpatient clinic.  The key difference—which would be apparent to any observer—is the intensity of patients’ suffering.  Of course, this should have been obvious to me, having treated patients like these before.  But I’ll admit, I wasn’t prepared for the abrupt transition.  Indeed, the experience has reminded me how severe mental illness can be, and has proven to be a “wake-up” call at this point in my career, before I develop the conceited (yet naïve) belief that “I’ve seen it all.”

Patients are hospitalized when they simply cannot take care of themselves—or may be a danger to themselves or others—as a result of their psychiatric symptoms.  These individuals are in severe emotional or psychological distress, have immense difficulty grasping reality, or are at imminent risk of self-harm, or worse.  In contrast to the clinic, the illnesses I see on the inpatient unit are more incapacitating, more palpable, and—for lack of a better word—more “medical.”

Perhaps this is because they also seem to respond better to our interventions.  Medications are never 100% effective, but they can have a profound impact on quelling the most distressing and debilitating symptoms of the psychiatric inpatient.  In the outpatient setting, medications—and even psychotherapy—are confounded by so many other factors in the typical patient’s life.  When I’m seeing a patient every month, for instance—or even every week—I often wonder whether my effort is doing any good.  When a patient assures me it is, I think it’s because I try to be a nice, friendly guy.  Not because I feel like I’m practicing any medicine.  (By the way, that’s not humility; I see it as healthy skepticism.)

Does this mean that the patient who sees her psychiatrist every four weeks and who has never been hospitalized is not suffering?  Or that we should just do away with psychiatric outpatient care because these patients don’t have “diseases”?  Of course not.  Discharged patients need outpatient follow-up, and sometimes outpatient care is vital to prevent hospitalization in the first place.  Moreover, people do suffer and do benefit from coming to see doctors like me in the outpatient setting.

But I think it’s important to look at the differences between who gets hospitalized and who does not, as this may inform our thinking about the nature of mental illness and help us to deliver treatment accordingly.  At the risk of oversimplifying things (and of offending many in my profession—and maybe even some patients), perhaps the more severe cases are the true psychiatric “diseases” with clear neurochemical or anatomic foundations, and which will respond robustly to the right pharmacological or neurosurgical cure (once we find it), while the outpatient cases are not “diseases” at all, but simply maladaptive strategies to cope with what is (unfortunately) a chaotic, unfair, and challenging world.

Some will argue that these two things are one and the same.  Some will argue that one may lead to the other.  In part, the distinction hinges upon what we call a “disease.”  At any rate, it’s an interesting nosological dilemma.  But in the meantime, we should be careful not to rush to the conclusion that the conditions we see in acutely incapacitated and severely disturbed hospital patients are the same as those we see in our office practices, just “more extreme versions.”  In fact, they may be entirely different entities altogether, and may respond to entirely different interventions (i.e., not just higher doses of the same drug).

The trick is where to draw the distinction between the “true” disease and its “outpatient-only” counterpart.  Perhaps this is where biomarkers like genotypes or blood tests might prove useful.  In my opinion, this would be a fruitful area of research, as it would help us better understand the biology of disease, design more suitable treatments (pharmacological or otherwise), and dedicate treatment resources more fairly.  It would also lead us to provide more humane and thoughtful care to people on both sides of the double-locked doors—something we seem to do less and less of these days.
