Getting Inside The Patient’s Mind

March 4, 2011

As a profession, medicine concerns itself with the treatment of individual human beings, but primarily through a scientific or “objective” lens.  What really counts is not so much a person’s feelings or attitudes (although we try to pay attention to the patient’s subjective experience), but instead the pathology that contributes to those feelings or that experience: the malignant lesion, the abnormal lab value, the broken bone, or the infected tissue.

In psychiatry, despite the impressive inroads of biology, pharmacology, and molecular genetics into our field—and despite the bold predictions that accurate molecular diagnosis is right around the corner—the reverse is true, at least from the patient’s perspective.  Patients (generally) don’t care about which molecules are responsible for their depression or anxiety; they know that they’re depressed or anxious, and they want help.  Psychiatry is getting ever closer to ignoring this essential reality.

Lately I’ve come across a few great reminders of this principle.  My colleagues over at Shrink Rap recently posted an article about working with patients who are struggling with problems that resemble those the psychiatrist once experienced.  Indeed, a debate exists within the field as to whether providers should divulge details of their own personal experiences, or whether they should remain detached and objective.  Many psychiatrists place themselves in the latter group, simply offering themselves as a sounding board for the patient’s words and restricting their involvement to medications or other therapeutic interventions that have been planned and agreed to in advance.  This detachment, however, may prevent them from sharing information that could be vital in helping the patient make real progress.

A few weeks ago a friend sent me a link to this video produced by the Janssen pharmaceutical company (makers of Risperdal and Invega, two atypical antipsychotic medications).

The video purports to simulate the experience of a person experiencing psychotic symptoms.  While I can’t attest to its accuracy, it certainly is consistent with written accounts of psychotic experiences, and is (reassuringly!) compatible with what we screen for in the evaluation of a psychotic patient.  Almost like reading a narrative of someone with mental illness (like Andrew Solomon’s Noonday Demon, William Styron’s Darkness Visible, or An Unquiet Mind by Kay Redfield Jamison), videos and vignettes like this one may help psychiatrists to understand more deeply the personal aspect of what we treat.

I also stumbled upon an editorial in the January 2011 issue of Schizophrenia Bulletin by John Strauss, a Yale psychiatrist, entitled “Subjectivity and Severe Psychiatric Disorders.” In it, he argues that in order to practice psychiatry as a “human science” we must pay as much attention to a patient’s subjective experience as we do to the symptoms they report or the signs we observe.  But he also points out that our research tools and our descriptors—the terms we use to describe the dimensions of a person’s disease state—fail to do this.

Strauss argues that, as difficult as it sounds, we must divorce ourselves from the objective scientific tradition that we value so highly, and employ different approaches to understand and experience the subjective phenomena that our patients encounter—essentially to develop a “second kind of knowledge” (the first being the textbook knowledge that all doctors obtain through their training) that is immensely valuable in understanding a patient’s suffering.  He encourages role-playing, journaling, and other experiential tools to help physicians relate to the qualia of a patient’s suffering.

It’s hard to quantify subjective experiences for purposes of insurance billing, or for standardized outcomes measurements like surveys or questionnaires, or for large clinical trials of new pharmaceutical agents.  And because these constitute the reality of today’s medical practice, it is hard for physicians to draw their attention to the subjective experience of patients.  Nevertheless, physicians—and particularly psychiatrists—should remind themselves every so often that they’re dealing with people, not diseases or symptoms, and to challenge themselves to know what that actually means.

By the same token, patients have a right to know that their thoughts and feelings are not just heard, but understood, by their providers.  While the degree of understanding will (obviously) not be precise, patients may truly benefit from a clinician who “knows” more than meets the eye.

The Mythology of “Treatment-Resistant” Depression

February 27, 2011

“Treatment-resistant depression” is one of those clinical terms that has always been a bit unsettling to me.  Maybe I’m a pessimist, but when I hear this phrase, it reminds me that despite all the time, energy, and expense we have invested in understanding this all-too-common disease, we still have a long way to go.  Perhaps more troubling, the phrase also suggests an air of resignation or abandonment:  “We’ve tried everything, but you’re resistant to treatment, and there’s not much more we can do for you.”

But “everything” is a loaded term, and “treatment” takes many forms.  The term “treatment-resistant depression” first appeared in 1974 and has been used widely in the literature ever since.  (Incidentally, despite appearing over 20 times in the APA’s 2010 revised treatment guidelines for major depression, it is never actually defined.)  The phrase is often used to describe patients who have failed to respond to a certain number of antidepressant trials (typically two, each from a different class), each of a reasonable (6-12 week) duration, although many other definitions have emerged over the years.

Failure to respond to “adequate” trials of appropriate antidepressant medications does indeed suggest that a patient is resistant to those treatments, and the clinician should think of other ways to approach that patient’s condition.  In today’s psychiatric practice, however, “treatment-resistant” is often a code word for simply adding another medication (like an atypical antipsychotic) or considering somatic treatments (such as electroconvulsive therapy [ECT] or transcranial magnetic stimulation [TMS]).

Seen this way, it’s a fairly narrow view of “treatment.”  The psychiatric literature—not to mention years and years of anecdotal data—suggests that a broad range of interventions can be helpful in the management of depression, such as exercise, dietary supplements, mindfulness meditation, acupuncture, light therapy, and literally dozens of different psychotherapeutic approaches.  Call me obsessive, or pedantic, but to label someone’s depression as “treatment resistant” without an adequate trial of all of these approaches seems premature at best, and fatalistic at worst.

What if we referred to someone’s weight problem as “diet-resistant obesity”?  Sure, there are myriad “diets” out there, and some obese individuals have tried several and simply don’t lose weight.  But perhaps these patients simply haven’t found the right one for their psychological/endocrine makeup and motivational level; there are also some genetic and biochemical causes of obesity that prevent weight loss regardless of diet.  If we label someone as “diet-resistant” it means that we may overlook some diets that would work, or ignore other ways of managing this condition.

Back to depression.   I recognize there’s not much of an evidence base for many of the potentially hundreds of different “cures” for depression in the popular and scientific literature.  And it would take far too much time to try them all.  Experienced clinicians will have seen plenty of examples of good antidepressant response to lithium, thyroid hormone, antipsychotics (such as Abilify), and somatic interventions like ECT.  But they have also seen failures with the exact same agents.

Unfortunately, our “decision tree” for assigning patients to different treatments is more like a dartboard than an evidence-based flowchart.  “Well, you’ve failed an SSRI and an SNRI, so let’s try an atypical,” goes the typical dialogue (not to mention the typical TV commercial or magazine ad), when we really should be trying to understand our patients at a deeper level in order to determine the ideal therapy for them.

Nevertheless, the “step therapy” requirements of insurance companies, as well as the large multicenter NIH-sponsored trials (like the STAR*D trial) which primarily focus on medications (yes, I am aware that STAR*D had a cognitive therapy component, although this has received little attention and was not widely chosen by study participants), continue to bias the clinician and patient in the direction of looking for the next pill or the next biological intervention, instead of thinking about patients as individuals with biological, genetic, psychological, and social determinants of their conditions.

Because in the long run, nobody is “treatment resistant,” they’re just resistant to what we’re currently offering them.

To Treat Depression, Just Give ‘Em What They Want

February 23, 2011

A doctor’s chief task is to determine the cause of a patient’s suffering and to develop a course of treatment.  In psychiatry, the task is no different: examine the patient, determine a diagnosis, and initiate treatment.  However, “treatment” comes in many forms, and what works for one patient may not work for another.  A good psychiatrist tries to figure out which approach is ideal for the patient in his office, rather than reflexively reaching for the prescription pad and the latest drug option.

How to determine what’s the best course of action for a patient?  Recent research suggests one potentially foolproof way:  Ask him.

A paper in this month’s Psychotherapy and Psychosomatics by Mergl and colleagues shows that patient preference (that is, whether the patient prefers medications or psychotherapy) predicts how effective a treatment will be.  In their study, patients who expressed a preference for medications at the beginning of treatment had a better response to Zoloft than to group therapy, while patients who preferred therapy showed the exact opposite response.

In an even larger study published in 2009 by James Kocsis and colleagues at Weill-Cornell in New York (comparing nefazodone, an antidepressant, with a cognitive therapy approach called CBASP), a similar result was obtained:  patients with chronic major depression who entered the study expressing a preference for drug treatment had higher remission rates when receiving medication than when receiving psychotherapy, and vice versa.

The numbers were quite shocking:

Patients who preferred medication:

Treatment received | Remission rate | Avg. HAM-D score at end of study (higher = more depressed)
Meds               | 45.5%          | 11.6
Therapy            | 22.2%          | 21.0

Patients who preferred therapy:

Treatment received | Remission rate | Avg. HAM-D score at end of study
Meds               | 7.7%           | 18.3
Therapy            | 50.0%          | 12.1

(Baseline HAM-D scores were approximately 26-27 for all patients, consistent with major depression, and patients in this study had been depressed for over two years.)

Thus, if a depressed patient wanted therapy but got medications instead, their chances of “remitting” (i.e., having a full therapeutic response to nefazodone) were less than 1 in 12.  But if they did get therapy, those chances improved to 1 in 2.  Interestingly, patients who preferred therapy and got combination treatment (meds and therapy) actually did worse than with therapy alone (a remission rate of only 38.9%), leading the authors to conclude that “few patients who stated a preference for psychotherapy benefited much from the addition of medication.”
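To make the arithmetic concrete, here is a small sketch using the remission rates from the study; the preference/treatment labels are my own shorthand, not terms from the paper. It converts each rate into a rough "1 in N" chance:

```python
# Remission rates from the Kocsis study, keyed by (stated preference, treatment received).
# The key labels are my own shorthand, not terminology from the study itself.
remission_rates = {
    ("prefers meds", "meds"): 0.455,
    ("prefers meds", "therapy"): 0.222,
    ("prefers therapy", "meds"): 0.077,
    ("prefers therapy", "therapy"): 0.500,
}

def one_in_n(rate: float) -> float:
    """Express a probability as 'roughly 1 in N'."""
    return 1.0 / rate

for (preference, treatment), rate in remission_rates.items():
    print(f"{preference}, received {treatment}: roughly 1 in {one_in_n(rate):.0f}")
```

A 7.7% remission rate works out to roughly 1 in 13 (a chance smaller than 1 in 12), while 50% is the 1 in 2 quoted above.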

It’s not surprising, at first glance, that people who “get what they want” do better.  After all, a depressed patient who insists on taking meds probably won’t get much better if he’s dragged into psychotherapy against his will, and the patient who believes that a weekly session with a therapist is exactly what she needs, will probably have some resistance to just getting a pill.

But then again, isn’t depression supposed to be a hard-wired biological illness?  Shouldn’t a medication have a more profound effect, regardless of whether the patient “wants” it or not?

Apparently not.  The fact that people responded to the treatment they preferred means one of two things.  There may be two different types of depression, one that’s biological and one that’s more behavioral or “exogenous,” and people just happen to choose the appropriate treatment for their type due to some predisposition or innate tendency (self-knowledge?).  Alternatively, the “biological” basis of depression is not all it’s cracked up to be.

One question raised by these results is, why don’t we listen more to our patients and give them what they say they want?  If half the people who want therapy actually get better with therapy, doesn’t that make it hard to justify meds for this population?  Conversely, when we talk about “treatment-resistant depression,” or “depression that doesn’t respond to antidepressants alone,” could it be that the people who don’t respond to pills are simply those who would rather engage in psychotherapy instead?

I believe the implications of these findings may be significant.  For one thing, insurers are becoming less likely to pay for therapy, while they spend more and more money on antidepressant medications.  These studies say that this is exactly what we don’t want to do for a large number of patients (and these patients are easy to identify—they’re the ones who tell us they don’t want meds!).  Furthermore, trials of new antidepressant treatments should separate out the self-described “medication responders” and “therapy responders” and determine how each group responds.  [Note:  in the large STAR*D trial, which evaluated “switching” strategies, patients were given the opportunity to switch from meds to therapy or from one med to a different one of their choosing, but there was no group of patients who didn’t have the option to switch.]  If the “therapy responders” routinely fail to respond to drugs, we need to seriously revamp our biological theories of depression.  Its chemical basis may be something entirely different from how our current drugs are thought to work, or maybe depression isn’t “biological” at all in some people.  This will also keep us from wasting money and resources on treatments that are less likely to work.

While it’s often risky to ask a patient what he or she wants (and to give it to them), depression may be just the opportunity to engage the patient in a way that respects their desires.  These data show that the patient may know more than the doctor what “works” and what doesn’t, and maybe it’s time we pay closer attention.

The Placebo Effect: It Just Gets Better and Better

February 13, 2011

The placebo response is the bane of clinical research.  Placebos, by definition, are inert, inactive compounds that should have absolutely no effect on a patient’s symptoms, although they very frequently do.  Researchers compare new drugs to placebos so that any difference in outcome between drug and placebo can be attributed to the drug rather than to any unrelated factor.

In psychiatry, placebo effects are usually quite robust.  Trials of antidepressants, antianxiety medications, mood stabilizers, and other drugs typically show large placebo response rates.  A new paper by Bruce Kinon and his colleagues in this month’s Current Opinion in Psychiatry, however, reports that placebo responses are also seen in schizophrenia.  Moreover, placebos seem to have become more effective over the last 20 years!

Now, if there’s any mental illness in which you would not expect to see a placebo response, it’s schizophrenia.  Other psychiatric disorders, one might argue, involve cognitions, beliefs, expectations, feelings, etc.—all of which could conceivably improve when a patient believes an intervention (yes, even a placebo pill) might make him feel better.  But schizophrenia, by definition, is characterized by a distorted sense of reality, impaired thought processes, an inability to grasp the differences between the external world and the contents of one’s mind, and, frequently, the presence of bizarre sensory phenomena that can only come from the aberrant firing of the schizophrenic’s neurons.  How could these symptoms, which almost surely arise from neurochemistry gone awry, respond to a sugar pill?

Yet respond they do.  Kinon and his colleagues summarized placebo response rates from various antipsychotic trials since 1993 and found a clear, gradual increase in placebo response over the last 15-20 years.

Very mysterious stuff.  Why would patients respond better to placebo today than in years past?  Well, as it turns out (and as explored in more detail in this article), the answer may lie not in schizophrenics being magically cured by a placebo, but rather in patients having greater expectations for improvement now than in the past (although this is hard to believe for schizophrenia), or in clinical researchers having greater incentives to include patients in trials, and therefore inadequately screening their subjects.

In support of the latter argument, Kinon and his colleagues showed that in a recent antidepressant trial (in which some arbitrary minimum depression score was required for subjects to be included), researchers routinely rated their subjects as more depressed than the subjects rated themselves at the beginning of the trial—the “screening phase.”  Naturally, then, subjects showed greater improvement at the end of the trial, regardless of whether they received an antidepressant or placebo.
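That screening-inflation effect can be illustrated with a toy simulation; the numbers below are entirely hypothetical and are not data from the trial. If raters systematically score subjects a few points higher at screening than their true severity, scores will appear to "improve" by roughly that amount at endpoint even when there is no treatment effect at all:

```python
import random

random.seed(42)

N = 1000
TRUE_MEAN, TRUE_SD = 18.0, 3.0  # hypothetical "true" depression severity
INFLATION = 4.0                 # hypothetical rater inflation at screening
NOISE_SD = 2.0                  # rating noise at endpoint

true_severity = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
# Screening scores are inflated, helping subjects clear the entry threshold...
screening = [s + INFLATION for s in true_severity]
# ...but endpoint ratings revert to true severity (zero treatment effect).
endpoint = [random.gauss(s, NOISE_SD) for s in true_severity]

mean = lambda xs: sum(xs) / len(xs)
apparent_improvement = mean(screening) - mean(endpoint)
print(f"Apparent improvement with zero true effect: {apparent_improvement:.1f} points")
```

The "improvement" here is simply the inflation washing out, and it accrues to drug and placebo arms alike.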

A more cynical argument for why antipsychotic drugs don’t “separate from placebo” is because they really aren’t that much better than placebo (for an excellent series of posts deconstructing the trials that led to FDA approval of Seroquel, and showing how results may have been “spun” in Seroquel’s favor, check out 1BoringOldMan).

This is an important topic that deserves much more attention.  Obviously, researchers and pharmaceutical companies want their drugs to look as good as possible, and want placebo responses to be nil (or worse than nil).  In fact, Kinon and his colleagues are all employees of Eli Lilly, manufacturer of Zyprexa and other drugs they’d like to bring to market, so they have a clear interest in this phenomenon.

Maybe researchers do “pad” their studies to include as many patients as they can, including some whose symptoms are not severe.  Maybe new antipsychotics aren’t as effective as we’d like to believe them to be.  Or maybe schizophrenics really do respond to a “placebo effect” the same way a depressed person might feel better simply by thinking they’re taking a drug that will help.  Each of these is a plausible explanation.

For me, however, a much bigger question arises: what exactly are we doing when we evaluate a schizophrenic patient and prescribe an antipsychotic?  When I see a patient whom I think may be psychotic, do I (unconsciously) ask questions that lead me to that diagnosis?  Do I look for symptoms that may not exist?  Does it make sense for me to prescribe an antipsychotic when a placebo might do just as well?  (See my previous post on the “conscious” placebo effect.)  If a patient “responds” to a drug, why am I (and the patient) so quick to attribute it to the effect of the medication?

I’m glad that pharmaceutical companies are paying attention to this issue and developing ways to tackle these questions.  Unfortunately, because their underlying goal is to make a drug that looks as different from placebo as possible (to satisfy the shareholders, you know) I question whether their solutions will be ideal.  As with everything in medicine, though, it’s the clinician’s responsibility to evaluate the studies critically—and to evaluate their own patients’ responses to treatment in an unbiased fashion—and not to give credit where credit isn’t due.

“Decision Support” in Psychiatry

January 28, 2011

I’ve long believed that, just as no two psychiatric patients are identical, there is no– and never will be a– “one size fits all” approach to psychiatric care.  However, much work has been done in the last several years to develop “algorithms” to guide treatment and standardize care.  At the same time, the adoption of electronic health record (EHR) systems– which are emphasized in the new U.S. health care legislation– has introduced the possibility that computerized decision-support systems will help guide practitioners to make the right choices for their patients.  It is my opinion that such approaches will not improve psychiatric care, and, in fact, will interfere with the human aspect that is the essence of good psychiatric practice.

“Clinical decision support,” or CDS, is the idea that an algorithm can help a provider give the right kind of care.  For a busy doctor, it makes sense that getting a quick reminder to prescribe aspirin to patients with coronary artery disease, or to give diet and exercise recommendations to patients at risk for obesity or diabetes, helps to ensure good care.  Several years ago, I actually helped to develop a CDS system designed to remind primary care doctors to avoid opiate painkillers (or use them with caution) in patients who had a history of substance abuse or other relative contraindications to narcotics.  At the time, I thought this was a great idea.  Why not harness the ability of a computer to gather all the data on a given patient– something that even the best doctor cannot do with absolute accuracy– and suggest the most advisable plan of action?
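A rule-based CDS of the kind described above can be sketched in a few lines; the record fields, rule wording, and drug list here are all hypothetical, not taken from any real CDS product:

```python
# A minimal, hypothetical rule-based CDS: each rule inspects a patient record
# (a plain dict here) and may emit a reminder for the provider to review.

OPIATES = {"oxycodone", "hydrocodone", "morphine", "fentanyl"}

def cds_reminders(patient: dict) -> list[str]:
    """Return advisory messages for a (hypothetical) patient record."""
    reminders = []
    diagnoses = set(patient.get("diagnoses", []))
    history = set(patient.get("history", []))
    medications = set(patient.get("medications", []))
    orders = set(patient.get("proposed_orders", []))

    # Rule 1: aspirin reminder for coronary artery disease.
    if "coronary artery disease" in diagnoses and "aspirin" not in medications:
        reminders.append("Consider aspirin for coronary artery disease.")

    # Rule 2: flag opiate orders for patients with a substance abuse history.
    if "substance abuse" in history and orders & OPIATES:
        reminders.append("Caution: opiate proposed for patient with substance abuse history.")

    return reminders

example = {
    "diagnoses": ["coronary artery disease"],
    "history": ["substance abuse"],
    "proposed_orders": ["oxycodone"],
}
print(cds_reminders(example))  # both rules fire for this record
```

The appeal is obvious: rules like these never forget a contraindication.  The trouble, as the rest of this post argues, is what happens when the rules and the clinician disagree.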

Now that I spend most of my time actually practicing medicine, and using two different EHR systems, I’m having second thoughts.  While I appreciate the ability to enter patient data (and my notes) into a system that is instantly accessible by any provider in my office at any time, and write prescriptions with a few clicks of my mouse, I’ve begun to resent the ways in which EHRs tell me how to practice, particularly when (a) they give recommendations that I would employ anyway (thereby wasting my time), or (b) they give recommendations that deviate from what I believe is right for the patient.

Obviously, the latter complaint is particularly relevant in psychiatry, where each patient presents a different background of symptoms, stressors, preferences, and personal history.  When anyone asks me “who is your ideal patient for drug X?” or “what is your first choice of drug for depression?” I find it hard to give an answer.  Treatment choices come down to a feeling, a gestalt, incorporating both observable data and intuition; it’s hard to describe and impossible to quantify.

One example of a psychiatric CDS is based on the Texas Medication Algorithm Project (TMAP).  The TMAP was developed to help providers determine what medications to use in the treatment of mood disorders; the first version of TMAP for depression was designed in 1999 and implemented in a computerized CDS in 2004.  A pilot study involving four primary care providers, published in 2009, showed that depression outcomes were slightly better (i.e., scores on the HAM-D were lower) in the group using the CDS.  (This may have been due to the setting; in a busy primary care clinic, any guidance to address depression symptoms may improve outcomes relative to no guidance at all.)  However, a follow-up study by the same group found that it was much harder to implement the CDS on a more widespread scale in mental health clinics, due to technical problems, poor IT support, billing & coding problems, formulary issues, recommendations that providers disagreed with, lack of time, and impact on workflow.

That may have been for the better.  A new study in this month’s Archives of Internal Medicine by Romano and Stafford shows that CDSs may just be a waste of time and money.  They evaluated over 330 million ambulatory care patient visits involving EHRs from 2005 to 2007, 57% of which involved at least one CDS, and found that, on 20 quality-of-care indicators, using a CDS contributed to improvements in treatment (i.e., treatment concordant with established guidelines) on only one measure.  (Two measures involved psychiatric conditions– one was for the treatment of depression, and the other was to remind providers not to use benzodiazepines alone for depression treatment.  Neither of these measures showed improvement when a CDS was used, relative to no CDS.)

So despite all the resources devoted to electronic medical records and clinical decision support systems to improve care, the evidence seems to indicate that they don’t.  Either doctors ignore CDSs and provide “practice as usual” anyway, or the CDSs give recommendations that doctors already follow.

This may be good news for psychiatry, where treatment guidelines (thankfully) offer a great deal of latitude, but CDSs, by their very nature, may restrict our options.  In the future, then, when we believe that the patient sitting in front of us is a good candidate for Effexor, or Seroquel, or interpersonal therapy with no meds at all, we may no longer need to explain to a computer program why we’re ignoring its recommendation to try Prozac or Haldol first.

In my opinion, anything that preserves the integrity of the physician-patient interaction– and prevents the practice of medicine from turning into a checklist-and-formula-based recipe– preserves the identity of the patient, and improves the quality of care.

Addendum:  See also a related post today on
