
Biomarker Envy I: Cortical Thickness

May 13, 2011

In the latest attempt to look for biological correlates or predictors of mental illness, a paper in this month’s Archives of General Psychiatry shows that children with major depressive disorder (MDD) have thinner cortical layers than “healthy” children, or children with obsessive-compulsive disorder (OCD).  Specifically, researchers performed brain MRI scans on 78 children with or without a diagnosis, and investigated seven specific areas of the cerebral cortex.  Results showed four areas which were thinner in children with MDD than in healthy children, two which were thicker, and one that did not vary.

These results add another small nugget of data to our (admittedly scant) understanding of mental illness—particularly in children, before the effects of years of continuous medication treatment.  They also reflect the bias towards imaging studies in psychiatry, whose findings—even if statistically significant—are not always that reliable or meaningful.  (But I digress…)

An accompanying press release, however, was unrealistically enthusiastic.  It suggested that this study “offers an exciting new way to identify more objective markers of psychiatric illness in children.”  Indeed, the title of the paper itself (“Distinguishing between MDD and OCD in children by measuring regional cortical thickness”) might suggest a way to use this information in clinical practice right away.  But it’s best not to jump to these conclusions just yet.

For one, there was tremendous variability in the data, as shown in the figure at left.  While on average the children with MDD had a thinner right superior parietal gyrus (one of the cortical regions studied) than healthy children or children with OCD, no individual measurement was predictive of anything.

Second, the statement that we can “distinguish between depression and OCD” based on a brain scan reflects precisely the type of biological determinism and certainty (and hype?) that psychiatry has been striving for, but may never achieve (just look at the figure again).  Lay readers—and, unfortunately, many clinicians—might read the headline and believe that “if we just order an MRI for Junior, we’ll be able to get the true diagnosis.”  The positive predictive value of any test must be high enough to warrant its use in a larger population, and so far, the predictive value of most tests in psychiatry is poor.
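
To make the point about predictive value concrete, here is a rough back-of-the-envelope sketch.  The sensitivity, specificity, and prevalence figures below are purely hypothetical assumptions for illustration, not numbers from the Archives study:

```python
# Hypothetical positive predictive value (PPV) calculation.
# Sensitivity, specificity, and prevalence are illustrative assumptions,
# not figures from the cortical-thickness study.

def positive_predictive_value(sensitivity, specificity, prevalence):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Even a fairly good test (80% sensitive, 90% specific), applied to a
# population in which ~5% of children have MDD:
ppv = positive_predictive_value(sensitivity=0.80, specificity=0.90, prevalence=0.05)
print(f"PPV = {ppv:.0%}")  # ~30%: roughly 7 of every 10 "positive" scans are false alarms
```

Even under those generous assumptions, most “positive” scans would be false positives, which is exactly why an imperfect marker for a relatively uncommon condition makes a poor screening test.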

Third, there is no a priori reason why there should be a difference between the brains (or anything else, for that matter) of patients with depression and patients with OCD, when you consider the overlap between these—and other—psychiatric conditions.  There are many shades of grey between “depression” and “OCD”:  some depressed children will certainly have OCD-like traits, and vice versa.  Treating the individual (and not necessarily the individual’s brain scan) is the best way to care for a person.

To be fair, the authors of the study, Erin Fallucca and David Rosenberg from Wayne State University in Detroit, do not state anywhere in their paper that this approach represents a “novel new diagnostic method” or make any other such sweeping claims about their findings.  In fact, they write that the differences they observed “merit further investigation” and highlight the need to look “beyond the frontal-limbic circuit.”  In other words, our current hypotheses about depression are not entirely supported by their findings (true), so we need to investigate further (also true).  And this, admittedly, is how science should proceed.

However, the history of psychiatry is dotted with tantalizing neurobiological theories or findings which find their way into clinical practice before they’ve been fully proven, or even shown any great clinical relevance.  Pertinent examples are the use of SPECT scans to diagnose ADHD, championed by Daniel Amen; quantitative EEG to predict response to psychotropics; genotyping for metabolic enzymes; and the use of SSRIs to treat depression.  (Wait, did I say that???)

The quest to identify “biomarkers” of psychiatric illness may similarly lead us to believe we know more about a disease than we do.  A biomarker is a biological feature (an endocrine or inflammatory measure, a genotype, a biochemical response to a particular intervention) that distinguishes a person with a condition from one without.  Biomarkers are used throughout medicine for diagnosis, risk stratification, and monitoring treatment response.  A true biomarker for mental illness would represent a significant leap ahead in personalized treatment.  Or would it?  What if a person’s clinical presentation differs from what the marker predicts?  “I’m sorry Mrs. Jones, but even though Katie compulsively washes her hands and counts to twelve, hundreds of times a day, her right superior parietal gyrus is too thin for a diagnosis of OCD.”

Other fields of medicine don’t experience this dilemma.  If you have an elevated hsCRP and high LDL, even though you “feel fine,” you are still at elevated risk for cardiovascular disease and really ought to take preventive measures (exercise, diet, etc).  (However, see this recent editorial in the BMJ about “who should define disease.”)  But if your brain scan shows cortical thinning and you have no symptoms of depression, do you need to be treated?  Are you even at risk?

Some day (hopefully) these questions will be answered, as we gain a greater understanding of the biology of mental illness.  But until then, let’s keep research separate from clinical practice, because we don’t yet know what we’re doing.  Psychiatry doesn’t have to be like other fields of medicine.  Patients suffer and come to us for help; let’s open our eyes and ears before sending them off to the scanner or the lab.  In doing so, we might learn something important.


What Can Cymbalta Teach Us About Pain?

April 29, 2011

You’ve probably noticed widespread TV advertisements lately for Cymbalta, Eli Lilly’s blockbuster antidepressant.  However, these ads say nothing about depression.  Sure, some of the actors may look a little depressed (the guy at right, from the Cymbalta web site, sure looks bummed), but the ads are instead promoting Cymbalta for the treatment of chronic musculoskeletal pain, an indication it received in August 2010.  That approval strengthens Cymbalta’s position as the “Swiss Army knife” of psychiatric meds.  (I guess that makes Seroquel the “blunt hammer” of psych meds?)

Cymbalta (duloxetine) had already been approved for diabetic neuropathy and fibromyalgia, two other pain syndromes.  It’s a “dual-action” agent, i.e., an inhibitor of the reuptake of serotonin and norepinephrine.  Other SNRIs include Effexor, Pristiq, and Savella.  Of these, only Savella has a pain [fibromyalgia] indication.

When you consider how common the complaint of “pain” is, this approval is a potential gold mine for Eli Lilly.  The vagueness of the complaint is something they will likely capitalize on as well.  To be sure, there are distinct types of pain—e.g., neuropathic, visceral, musculoskeletal—and a proper pain workup can determine the exact nature of the pain and guide treatment accordingly.  But in reality, overworked primary care clinicians (not to mention psychiatrists, for whom hearing the word “pain” is often the extent of the physical exam) often hear the “pain” complaint and prescribe something the patient says they haven’t tried yet.  Cymbalta is looking to capture part of that market.

The analgesic mechanism of Cymbalta is (as with much in psychiatry) unknown.  Some have argued it works by relieving the depression and anxiety experienced by patients in pain.  It has also been proposed that it activates “descending” pathways from the brain, helping to dampen “ascending” pain signals from the body.  It might also block NMDA receptors or sodium channels, or enhance the body’s own endorphin system.  (See the figure above for other potential mechanisms, from a recent article by Dharmshaktu et al., 2011.)

But the more important question is:  does it work?  There does seem to be decent evidence for Cymbalta’s effect in fibromyalgia and diabetic neuropathy, across several outcome measures and in a variety of 12-week trials summarized in a recent Cochrane review.

The evidence for musculoskeletal pain is less convincing.  In order to obtain approval, Lilly performed two studies of Cymbalta in osteoarthritis (OA) and three studies in chronic low back pain (CLBP).  All three CLBP studies showed benefit in “24-hour pain severity,” but only one of the OA studies showed improvement.  The effects were not tremendous, even though they were statistically significant (see example above).  The FDA panel expressed concern “regarding the homogeneity of the study population and the heterogeneity of CLBP presenting to physicians in clinical practice.”  In fact, the advisory committee’s enthusiasm for the expanded indication was somewhat muted:

Even though the committee also complained of the “paucity of sound data regarding the pharmacological mechanisms of many analgesic drugs … and the paucity of sound data regarding the underlying pathophysiology,” they ultimately voted to approve Cymbalta for “as broad an indication as possible,” in order for “the well-informed prescriber [to] have the option of trying out an analgesic product approved for one painful condition in a patient with a similar painful condition.”

Incidentally, they essentially ignored the equivocal results in the OA trials, reasoning instead that it was OK to “extrapolate the finding [of efficacy in CLBP] to other similar musculoskeletal conditions.”

In other words, it sounds like the FDA really wanted to get Cymbalta in the hands of more patients and more doctors.
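
To illustrate the earlier point that the effects were statistically significant but “not tremendous,” here is a quick sketch with invented numbers (nothing below comes from the actual Lilly trials).  With enough subjects per arm, even a modest difference in pain scores clears the significance bar:

```python
# Invented numbers for illustration only; not data from the Cymbalta trials.
import math

n = 250                                  # subjects per arm (assumed)
mean_drug, mean_placebo = 4.6, 5.2       # average 24-hour pain severity, 0-10 scale (assumed)
sd = 2.5                                 # common standard deviation (assumed)

cohens_d = (mean_placebo - mean_drug) / sd    # effect size: ~0.24, conventionally "small"
se = sd * math.sqrt(2 / n)                    # standard error of the difference in means
z = (mean_placebo - mean_drug) / se           # ~2.7, i.e., p < 0.01

print(f"Cohen's d = {cohens_d:.2f}, z = {z:.1f}")  # a small clinical effect, yet "significant"
```

A difference of about half a point on a ten-point pain scale can be real without being something a patient would actually notice, and that distinction tends to get lost once the word “significant” appears.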

As much as I dislike the practice of prescribing drugs simply because they’re available and they might work, the truth of the matter is that this is surely how Cymbalta will be used.  (In reality, it explains a lot of what we do in psychiatry, unfortunately.)  But pain is a complex entity.  We have to be careful not to jump to conclusions—like we frequently do in psychiatry—if and when we see a “success story” with Cymbalta.

To the body, 60 mg of duloxetine is 60 mg of duloxetine, whether it’s being ingested for depression or for pain.  If a patient’s fibromyalgia or low back pain is miraculously “cured” by Cymbalta, there’s no a priori reason to think that it’s doing anything different in that person than what it does in a depressed patient (even though that is entirely conceivable).  The same mechanism might be involved in both.

The same can be said for some other medications with multiple indications.  For example, we can’t necessarily posit alternate mechanisms for Abilify in a bipolar patient versus Abilify in a patient with schizophrenia.  At roughly equivalent doses, its efficacy in the two conditions might be better explained by a biochemical similarity between the two conditions.  (Or maybe everything really is bipolar!  My apologies to Hagop Akiskal.)

Or maybe the medication is not the important thing.  Maybe the patient’s perceived need for the medication matters more than the medication itself, and 60 mg of duloxetine for pain truly is different from 60 mg duloxetine for depression.  However, if our explanations rely on perceptions and not biology, we’re entering the territory of the placebo effect, in which case we’re better off skipping duloxetine (and its side effect profile and high cost), and just using an actual placebo.

Bottom line:  We tend to lock ourselves into what we think we know about the biology of the condition we’re treating, whether pain, depression, schizophrenia, ADHD, or whatever.  When we have medications with multiple indications, we often infer that the medication must work differently in each condition.  Unless the doses are radically different (e.g., doxepin for sleep vs depression), this isn’t necessarily true.  In fact, it may be more parsimonious to say that disorders are more fundamentally alike than they are different, or that our drugs are being used for their placebo effect.

We can now add chronic pain to the long list of conditions responsive to psychoactive drugs.  Perhaps it’s also time to start looking at pain disorders as variants of psychiatric disorders, or treating pain complaints as symptoms of mental disorders.  Cymbalta’s foray into this field may be the first attempt to bridge this gap.

Addendum:  I had started this article before reading the PNAS article on antidepressants and NSAIDs, which I blogged about earlier this week.  If the article’s conclusion (namely, that antidepressants lose their efficacy when given with pain relievers) is correct, this could have implications for Cymbalta’s use in chronic pain.  Since chronic pain patients will most likely be taking regular analgesic medications in addition to Cymbalta, the efficacy of Cymbalta might be diminished.  It will be interesting to see how this plays out.


Antidepressants and “Stress” Revisited

April 13, 2011

If you have even the slightest interest in the biology of depression (or if you’ve spent any time treating depression), you’ve heard about the connection between stress and depressive illness.  There does seem to be a biological—maybe even a causative—link, and in many ways, this seems intuitive:  Stressful situations make us feel sad, hopeless, helpless, etc—many of the features of major depression—and the physiological changes associated with stress probably increase the likelihood that we will, in fact, become clinically depressed.

To cite a specific example, a steroid hormone called cortisol is elevated during stress, and—probably not coincidentally—is also usually elevated in depression.  Some researchers have attempted to treat depression by blocking the effects of cortisol in the brain.  Although we don’t (yet) treat depression this way, it is a tantalizing hypothesis, if only because it makes more intuitive sense than the “serotonin hypothesis” of depression, which has little evidence to back it up.

A recent article in Molecular Psychiatry (pdf here) adds another wrinkle to the stress hormone/depression story.  Researchers from King’s College London, led by Christoph Anacker, show that antidepressants actually promote the growth and development of new nerve cells in the hippocampus, and both processes depend on the stress hormone receptor (also known as the glucocorticoid receptor or GR).

Specifically, the group performed their experiments in a cell culture system using human hippocampal progenitor cells (this avoids some of the complications of doing such experiments in animals or humans).  They found that neither sertraline (Zoloft) alone, nor stress steroids (in this case, dexamethasone or DEX) alone, caused cells to proliferate, but when given together, proliferation occurred—in other words, the hippocampal progenitor cells started to divide rapidly.  [see figure above]

Furthermore, when they continued to incubate the cells with Zoloft, the cells “differentiated”—i.e., they turned into cells with all the characteristics of mature nerve cells.  But in this case, differentiation was inhibited by dexamethasone. [see figure at right]

To make matters more complicated, the differentiation process was also inhibited by RU486, a blocker of the receptor for dexamethasone (and other stress hormones).  What’s amazing is that RU486 prevented Zoloft-induced cell differentiation even in the absence of stress hormones.  (However, it did prevent the damaging effects of dexamethasone, consistent with what we might predict.) [see figure at left]

The take-home message here is that both antidepressants and dexamethasone (i.e., stress hormones) are required for cell proliferation (first figure), but only antidepressants cause cell differentiation and maturation (second figure).  Furthermore, both processes can be inhibited by RU486, a stress hormone antagonist (third figure).

All in all, this research makes antidepressants look “good.”  (Incidentally, the researchers also got the same results with amitriptyline and clomipramine, two tricyclic antidepressants, so the effect is not unique to SSRIs like Zoloft.)  However, it raises serious questions about the relationship between stress hormones and depression.  If antidepressants work by promoting the growth and development of hippocampal neurons, then this research also says that stress hormones (like dexamethasone) might be required, too—at least for part of this process (i.e., they’re required for growth/proliferation, but not for differentiation).

This also raises questions about the effects of RU486.  Readers may recall the enthusiasm surrounding RU486 a few years ago as a potential treatment for psychotic depression, promoted by Alan Schatzberg and his colleagues at Corcept Pharmaceuticals.  Their argument (a convincing one, at the time) was that if we could block the unusually high levels of cortisol seen in severe, psychotic depression, we might treat the disease more effectively.  However, clinical trials of their drug Corlux (= RU486) were unsuccessful.  The experiments in this paper show one possible explanation why:   Instead of simply blocking stress hormones, RU486 blocks the stress hormone receptor, which seems to be the key intermediary for the positive effects of antidepressants (see the third figure).

The Big Picture:   I’m well aware that this is how science progresses:  we continually refine our hypotheses as we collect new data, and sometimes we learn how medications work only after we’ve been using them successfully for many years.  (How long did it take to learn the precise mechanism of aspirin and the willow-bark salicylates from which it is derived?  More than two millennia, at least.)  But here we have a case in which antidepressants seem to work in a fashion quite different from what we originally thought (incidentally, the word “serotonin” is used only three times in their 13-page article!!).  Moreover, the new mechanism (making new brain cells!!) is quite significant.  And the involvement of stress hormones in this new mechanism doesn’t seem very intuitive or “clean” either.

It makes me wonder (yet again) what the heck these drugs are doing.  I’m not suggesting we call a moratorium on the further use of antidepressants until we learn exactly how they work, but I do suggest that we practice a bit of caution when using them.  At the very least, we need to change our “models” of depression.  Fast.

Overall, I’m glad this research is being done so that we can learn more about the mechanisms of antidepressant action (and develop new, more specific ones… maybe ones that target the glucocorticoid receptor).  In the meantime, we ought to pause and recognize that what we think we’re doing may be entirely wrong.  Practicing a little humility is good every once in a while, especially for a psychopharmacologist.


Getting Inside The Patient’s Mind

March 4, 2011

As a profession, medicine concerns itself with the treatment of individual human beings, but primarily through a scientific or “objective” lens.  What really counts is not so much a person’s feelings or attitudes (although we try to pay attention to the patient’s subjective experience), but instead the pathology that contributes to those feelings or that experience: the malignant lesion, the abnormal lab value, the broken bone, or the infected tissue.

In psychiatry, despite the impressive inroads of biology, pharmacology, and molecular genetics into our field—and despite the bold predictions that accurate molecular diagnosis is right around the corner—the reverse is true, at least from the patient’s perspective.  Patients (generally) don’t care about which molecules are responsible for their depression or anxiety; they do know that they’re depressed or anxious and want help.  Psychiatry is getting ever closer to ignoring this essential reality.

Lately I’ve come across a few great reminders of this principle.  My colleagues over at Shrink Rap recently posted an article about working with patients who are struggling with problems that resemble those the psychiatrist once experienced.  Indeed, a debate exists within the field as to whether providers should divulge details of their own personal experiences, or whether they should remain detached and objective.  Many psychiatrists see themselves in the latter group, simply offering themselves as a sounding board for the patient’s words and restricting their involvement to medications or other therapeutic interventions that have been planned and agreed to in advance.  That detachment, however, may prevent them from sharing information that could be vital in helping the patient make real progress.

A few weeks ago a friend sent me a link to this video produced by the Janssen pharmaceutical company (makers of Risperdal and Invega, two atypical antipsychotic medications).

The video purports to simulate the experience of a person experiencing psychotic symptoms.  While I can’t attest to its accuracy, it certainly is consistent with written accounts of psychotic experiences, and is (reassuringly!) compatible with what we screen for in the evaluation of a psychotic patient.  Almost like reading a narrative of someone with mental illness (like Andrew Solomon’s Noonday Demon, William Styron’s Darkness Visible, or An Unquiet Mind by Kay Redfield Jamison), videos and vignettes like this one may help psychiatrists to understand more deeply the personal aspect of what we treat.

I also stumbled upon an editorial in the January 2011 issue of Schizophrenia Bulletin by John Strauss, a Yale psychiatrist, entitled “Subjectivity and Severe Psychiatric Disorders.” In it, he argues that in order to practice psychiatry as a “human science” we must pay as much attention to a patient’s subjective experience as we do to the symptoms they report or the signs we observe.  But he also points out that our research tools and our descriptors—the terms we use to describe the dimensions of a person’s disease state—fail to do this.

Strauss argues that, as difficult as it sounds, we must divorce ourselves from the objective scientific tradition that we value so highly, and employ different approaches to understand and experience the subjective phenomena that our patients encounter—essentially to develop a “second kind of knowledge” (the first being the textbook knowledge that all doctors obtain through their training) that is immensely valuable in understanding a patient’s suffering.  He encourages role-playing, journaling, and other experiential tools to help physicians relate to the qualia of a patient’s suffering.

It’s hard to quantify subjective experiences for purposes of insurance billing, or for standardized outcomes measurements like surveys or questionnaires, or for large clinical trials of new pharmaceutical agents.  And because these constitute the reality of today’s medical practice, it is hard for physicians to turn their attention to the subjective experience of patients.  Nevertheless, physicians—and particularly psychiatrists—should remind themselves every so often that they’re dealing with people, not diseases or symptoms, and challenge themselves to know what that actually means.

By the same token, patients have a right to know that their thoughts and feelings are not just heard, but understood, by their providers.  While the degree of understanding will (obviously) not be precise, patients may truly benefit from a clinician who “knows” more than meets the eye.


The Mythology of “Treatment-Resistant” Depression

February 27, 2011

“Treatment-resistant depression” is one of those clinical terms that has always been a bit unsettling to me.  Maybe I’m a pessimist, but when I hear this phrase, it reminds me that despite all the time, energy, and expense we have invested in understanding this all-too-common disease, we still have a long way to go.  Perhaps more troubling, the phrase also suggests an air of resignation or abandonment:  “We’ve tried everything, but you’re resistant to treatment, and there’s not much more we can do for you.”

But “everything” is a loaded term, and “treatment” takes many forms.  The term “treatment-resistant depression” first appeared in the literature in 1974 and has been used widely ever since.  (Incidentally, despite appearing over 20 times in the APA’s 2010 revised treatment guidelines for major depression, it is never actually defined.)  The phrase is often used to describe patients who have failed to respond to a certain number of antidepressant trials (typically two, each from a different class), each of a reasonable (6-12 week) duration, although many other definitions have emerged over the years.

Failure to respond to “adequate” trials of appropriate antidepressant medications does indeed suggest that a patient is resistant to those treatments, and the clinician should think of other ways to approach that patient’s condition.  In today’s psychiatric practice, however, “treatment-resistant” is often a code word for simply adding another medication (like an atypical antipsychotic) or moving on to somatic treatment options (such as electroconvulsive therapy, ECT, or transcranial magnetic stimulation, TMS).

Seen this way, it’s a fairly narrow view of “treatment.”  The psychiatric literature—not to mention years and years of anecdotal data—suggests that a broad range of interventions can be helpful in the management of depression, such as exercise, dietary supplements, mindfulness meditation, acupuncture, light therapy, and literally dozens of different psychotherapeutic approaches.  Call me obsessive, or pedantic, but to label someone’s depression as “treatment resistant” without an adequate trial of all of these approaches seems premature at best, and fatalistic at worst.

What if we referred to someone’s weight problem as “diet-resistant obesity”?  Sure, there are myriad “diets” out there, and some obese individuals have tried several and simply don’t lose weight.  But perhaps these patients simply haven’t found the right one for their psychological/endocrine makeup and motivational level; there are also some genetic and biochemical causes of obesity that prevent weight loss regardless of diet.  If we label someone as “diet-resistant” it means that we may overlook some diets that would work, or ignore other ways of managing this condition.

Back to depression.   I recognize there’s not much of an evidence base for many of the potentially hundreds of different “cures” for depression in the popular and scientific literature.  And it would take far too much time to try them all.  Experienced clinicians will have seen plenty of examples of good antidepressant response to lithium, thyroid hormone, antipsychotics (such as Abilify), and somatic interventions like ECT.  But they have also seen failures with the exact same agents.

Unfortunately, our “decision tree” for assigning patients to different treatments is more like a dartboard than an evidence-based flowchart.  “Well, you’ve failed an SSRI and an SNRI, so let’s try an atypical,” goes the typical dialogue (not to mention the typical TV commercial or magazine ad), when we really should be trying to understand our patients at a deeper level in order to determine the ideal therapy for them.

Nevertheless, the “step therapy” requirements of insurance companies, as well as the large multicenter NIH-sponsored trials (like the STAR*D trial) which primarily focus on medications (yes, I am aware that STAR*D had a cognitive therapy component, although this has received little attention and was not widely chosen by study participants), continue to bias the clinician and patient in the direction of looking for the next pill or the next biological intervention, instead of thinking about patients as individuals with biological, genetic, psychological, and social determinants of their conditions.

Because in the long run, nobody is “treatment resistant,” they’re just resistant to what we’re currently offering them.


To Treat Depression, Just Give ‘Em What They Want

February 23, 2011

A doctor’s chief task is to determine the cause of a patient’s suffering and to develop a course of treatment.  In psychiatry, the task is no different: examine the patient, determine a diagnosis, and initiate treatment.  However, “treatment” comes in many forms, and what works for one patient may not work for another.  A good psychiatrist tries to figure out which approach is ideal for the patient in his office, rather than reflexively reaching for the prescription pad and the latest drug option.

How to determine what’s the best course of action for a patient?  Recent research suggests one potentially foolproof way:  Ask him.

A paper in this month’s Psychotherapy and Psychosomatics by Mergl and colleagues shows that patient preference (that is, whether the patient prefers medications or psychotherapy) predicts how effective a treatment will be.  In their study, patients who expressed a preference for medications at the beginning of treatment had a better response to Zoloft than to group therapy, while patients who preferred therapy showed the exact opposite response.

In an even larger study published in 2009 by James Kocsis and colleagues at Weill-Cornell in New York (comparing nefazodone, an antidepressant, with a cognitive therapy approach called CBASP), a similar result was obtained:  patients with chronic major depression who entered the study expressing a preference for drug treatment had higher remission rates when receiving medication than when receiving psychotherapy, and vice versa.

The numbers were quite shocking:

Patients who preferred medication:

Treatment received | Remission rate | Avg. HAM-D score at end of study (higher = more depressed)
Meds | 45.5% | 11.6
Therapy | 22.2% | 21.0

Patients who preferred therapy:

Treatment received | Remission rate | Avg. HAM-D score at end of study
Meds | 7.7% | 18.3
Therapy | 50.0% | 12.1

(original HAM-D scores were approximately 26-27 for all patients, constituting major depression, and patients in this study had been depressed for over two years)

Thus, if a depressed patient wanted therapy but got medications instead, their chances of “remitting” (i.e., having a fully therapeutic response to nefazodone) were less than 1 in 12.  But if they did get therapy, those chances improved to 1 in 2.  Interestingly, patients who preferred therapy and got combination treatment (meds and therapy) actually did worse than with therapy alone (remission rate was only 38.9%), leading the authors to conclude that “few patients who stated a preference for psychotherapy benefited much from the addition of medication.”

It’s not surprising, at first glance, that people who “get what they want” do better.  After all, a depressed patient who insists on taking meds probably won’t get much better if he’s dragged into psychotherapy against his will, and the patient who believes that a weekly session with a therapist is exactly what she needs, will probably have some resistance to just getting a pill.

But then again, isn’t depression supposed to be a hard-wired biological illness?  Shouldn’t a medication have a more profound effect, regardless of whether the patient “wants” it or not?

Apparently not.  The fact that people responded to the treatment they preferred means one of two things.  There may be two different types of depression, one that’s biological and one that’s more behavioral or “exogenous,” and people just happen to choose the appropriate treatment for their type due to some predisposition or innate tendency (self-knowledge?).  Alternatively, the “biological” basis of depression is not all it’s cracked up to be.

One question raised by these results is, why don’t we listen more to our patients and give them what they say they want?  If half the people who want therapy actually get better with therapy, doesn’t that make it hard to justify meds for this population?  Conversely, when we talk about “treatment-resistant depression,” or “depression that doesn’t respond to antidepressants alone,” could it be that the people who don’t respond to pills are simply those who would rather engage in psychotherapy instead?

I believe the implications of these findings may be significant.  For one thing, insurers are becoming less likely to pay for therapy, while they spend more and more money on antidepressant medications.  These studies say that this is exactly what we don’t want to do for a large number of patients (and these patients are easy to identify—they’re the ones who tell us they don’t want meds!).  Furthermore, trials of new antidepressant treatments should separate out the self-described “medication responders” and “therapy responders” and determine how each group responds.  [Note:  in the large STAR*D trial, which evaluated “switching” strategies, patients were given the opportunity to switch from meds to therapy or from one med to a different one of their choosing, but there was no group of patients who didn’t have the option to switch.]  If the “therapy responders” routinely fail to respond to drugs, we need to seriously revamp our biological theories of depression.  Its chemical basis may be something entirely different from how our current drugs are thought to work, or maybe depression isn’t “biological” at all in some people.  This will also keep us from wasting money and resources on treatments that are less likely to work.

While it’s often risky to ask a patient what he or she wants (and to give it to them), depression may be just the opportunity to engage the patient in a way that respects their desires.  These data show that the patient may know better than the doctor what “works” and what doesn’t, and maybe it’s time we paid closer attention.


The Placebo Effect: It Just Gets Better and Better

February 13, 2011

The placebo response is the bane of clinical research.  Placebos, by definition, are inert, inactive compounds that should have absolutely no effect on a patient’s symptoms, although they very frequently do.  Researchers compare new drugs to placebos so that any difference in outcome between drug and placebo can be attributed to the drug rather than to any unrelated factor.

In psychiatry, placebo effects are usually quite robust.  Trials of antidepressants, antianxiety medications, mood stabilizers, and other drugs typically show large placebo response rates.  A new paper by Bruce Kinon and his colleagues in this month’s Current Opinion in Psychiatry, however, reports that patients with schizophrenia also show some improvement on placebo.  Moreover, placebos seem to have become more effective over the last 20 years!

Now, if there’s any mental illness in which you would not expect to see a placebo response, it’s schizophrenia.  Other psychiatric disorders, one might argue, involve cognitions, beliefs, expectations, feelings, etc.—all of which could conceivably improve when a patient believes an intervention (yes, even a placebo pill) might make him feel better.  But schizophrenia, by definition, is characterized by a distorted sense of reality, impaired thought processes, an inability to grasp the differences between the external world and the contents of one’s mind, and, frequently, the presence of bizarre sensory phenomena that can only come from the aberrant firing of the schizophrenic’s neurons.  How could these symptoms, which almost surely arise from neurochemistry gone awry, respond to a sugar pill?

Yet respond they do.  And not only do subjects in clinical trials get better with placebo, but the placebo response has been steadily improving over the last 20 years!  Kinon and his colleagues summarized placebo response rates from various antipsychotic trials since 1993 and found a very clear and gradual improvement in scores over the last 15-20 years.

Very mysterious stuff.  Why would patients respond better to placebo today than in years past?  Well, as it turns out (and is explored in more detail in this article), the answer may lie not in the fact that schizophrenics are being magically cured by a placebo, but rather in the possibility that they have greater expectations for improvement now than in the past (although this is hard to believe for schizophrenia), or that clinical researchers have greater incentives for including patients in trials and therefore screen their subjects inadequately.

In support of the latter argument, Kinon and his colleagues showed that in a recent antidepressant trial (in which some arbitrary minimum depression score was required for subjects to be included), researchers routinely rated their subjects as more depressed than the subjects rated themselves at the beginning of the trial—the “screening phase.”  Naturally, then, subjects showed greater improvement at the end of the trial, regardless of whether they received an antidepressant or placebo.
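
One way to see how inflated screening scores turn into apparent improvement is a simple regression-to-the-mean simulation.  Every number below (the enrollment threshold, the score distributions, the amount of rater inflation) is invented for illustration and has nothing to do with the trial Kinon and colleagues analyzed:

```python
# Toy simulation of baseline score inflation; all numbers are invented.
import random
from statistics import mean

random.seed(0)
THRESHOLD = 20        # hypothetical minimum rating required for enrollment
N = 10_000

screening_scores, true_severities = [], []
for _ in range(N):
    true_severity = random.gauss(17, 4)              # patient's underlying average severity
    screening = true_severity + random.gauss(2, 3)   # rater inflation plus measurement noise
    if screening >= THRESHOLD:                       # only high scorers get enrolled
        screening_scores.append(screening)
        true_severities.append(true_severity)

# At endpoint, ratings drift back toward each patient's underlying severity,
# so the enrolled group "improves" even with no treatment effect at all.
endpoint_scores = [t + random.gauss(0, 3) for t in true_severities]

print(f"screening mean: {mean(screening_scores):.1f}")
print(f"endpoint mean:  {mean(endpoint_scores):.1f}")   # lower, purely from regression to the mean
```

Both arms of a trial benefit equally from this artifact, which is one reason placebo “response” can look so impressive.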

A more cynical argument for why antipsychotic drugs don’t “separate from placebo” is that they really aren’t that much better than placebo (for an excellent series of posts deconstructing the trials that led to FDA approval of Seroquel, and showing how results may have been “spun” in Seroquel’s favor, check out 1BoringOldMan).

This is an important topic that deserves much more attention.  Obviously, researchers and pharmaceutical companies want their drugs to look as good as possible, and want placebo responses to be nil (or worse than nil).  In fact, Kinon and his colleagues are all employees of Eli Lilly, manufacturer of Zyprexa and other drugs they’d like to bring to market, so they have a clear interest in this phenomenon.

Maybe researchers do “pad” their studies to include as many patients as they can, including some whose symptoms are not severe.  Maybe new antipsychotics aren’t as effective as we’d like to believe them to be.  Or maybe schizophrenics really do respond to a “placebo effect” the same way a depressed person might feel better simply by thinking they’re taking a drug that will help.  Each of these is a plausible explanation.

For me, however, a much bigger question arises: what exactly are we doing when we evaluate a schizophrenic patient and prescribe an antipsychotic?  When I see a patient whom I think may be psychotic, do I (unconsciously) ask questions that lead me to that diagnosis?  Do I look for symptoms that may not exist?  Does it make sense for me to prescribe an antipsychotic when a placebo might do just as well?  (See my previous post on the “conscious” placebo effect.)  If a patient “responds” to a drug, why am I (and the patient) so quick to attribute it to the effect of the medication?

I’m glad that pharmaceutical companies are paying attention to this issue and developing ways to tackle these questions.  Unfortunately, because their underlying goal is to make a drug that looks as different from placebo as possible (to satisfy the shareholders, you know), I question whether their solutions will be ideal.  As with everything in medicine, though, it’s the clinician’s responsibility to evaluate the studies critically—and to evaluate their own patients’ responses to treatment in an unbiased fashion—and not to give credit where credit isn’t due.

