The Evidence of the Anecdote

June 8, 2012

The foundation of medical decision-making is “evidence-based medicine.”  As most readers know, this is the effort to use the best available evidence (gathered via the scientific method) to make decisions and recommendations about how to treat individual patients.  “Evidence” is typically rated on four levels (1 to 4).  Level 1 represents high-quality evidence—usually the results of randomized clinical trials—while level 4 typically consists of case studies, uncontrolled observations, and anecdotal reports.

Clinical guidelines and drug approvals typically rely more heavily (or even exclusively) on level-1 evidence.  It is thought to be more valid, more authoritative, and less affected by variations among individuals.  For example, knowing that an antidepressant works (i.e., it gives a “statistically significant effect” vs placebo) in a large, controlled trial is more convincing to the average prescriber than knowing that it worked for a single depressed guy in Peoria.

But is it, really?  Not always (especially if you’re the one treating that depressed guy in Peoria).  Clinical trials can be misleading, even if their results are “significant.”  As most readers know, some investigators, after analyzing data from large industry-funded clinical trials, have concluded that antidepressants may not be effective at all—a story that has received extensive media coverage.  But lots of individuals insist that they do work, based on personal experience.  One such depression sufferer—who benefited greatly from antidepressants—wrote a recent post on the Atlantic Online, and quoted Peter Kramer: “to give the impression that [antidepressants] are placebos is to cause needless suffering” because many people do benefit from them.  Jonathan Leo, on the other hand, argues that this is a patently anti-scientific stance.  In a post this week on the website Mad In America, Leo points out (correctly) that there are people out there who will give recommendations and anecdotes in support of just about anything.  That doesn’t mean they work.

Both sides make some very good points.  We just need to find a way to reconcile them—i.e., to make the “science” more reflective of real-world cases, and use the wisdom of individual cases to influence our practice in a more scientifically valid way.  This is much easier said than done.

While psychiatrists often refer to the “art” of psychopharmacology, make no mistake:  they (we) take great pride in the fact that it’s supposedly grounded in hard science.  Drug doses, mechanisms, metabolites, serum levels, binding coefficients, polymorphisms, biomarkers, quantitative outcome measures—these are the calling cards of scientific investigation.  But when medications don’t work as planned (which is often), we improvise, and when we do, we quickly enter the world of personal experience and anecdote.  In fact, in the absence of objective disease markers (which we may never find, frankly), psychiatric treatment is built almost exclusively on anecdotes.  When a patient says a drug “worked” in some way that the data don’t support, or they experience a side effect that’s not listed in the PDR, that becomes the truth, and it happens far more frequently than we like to admit.

It’s even more apparent in psychotherapy.  When a therapist asks a question like “What went through your mind when that woman rejected you?” the number of possible responses is infinite, unlike a serum lithium level or a blood pressure.  A good therapist follows the patient’s story and individualizes treatment based on the individual case (and only loosely on some theory or therapeutic modality).  The “proof” is the outcome with that particular patient.  Sure, the “N” is only 1, but it’s the only one that counts.

Is there any way to make the science look more like the anecdotal evidence we actually see in practice?  I think not.  Most of us don’t even stop to think about how UN-convincing the “evidence” truly is.  In his book Pharmageddon, David Healy describes the example of the parachute:  no one needs to do a randomized, controlled trial to show that a parachute works.  It just does.   By comparison, to show that antidepressants “work,” drug companies must perform large, expensive trials (and often multiple trials at that) and even then, prove their results through statistical measures or clever trial designs.  Given this complexity, it’s a wonder that we believe clinical trials at all.

On the other side of the coin, there’s really no way to subject the anecdotal report, or case study, to the scientific method.  By definition, including more patients and controls (i.e., increasing the “N”) automatically introduces heterogeneity.  Whatever factor(s) led a particular patient to respond to Paxil “overnight” or to develop a harsh cough on Abilify are probably unique to that individual.

But maybe we can start looking at anecdotes through a scientific lens.  When we observe a particular response or effect, we ought to look not just at the most obvious cause (e.g., a new medication) but at the context in which it occurred, and entertain any and all alternative hypotheses.  Similarly, when planning treatment, we need to think not just about FDA-approved drugs, but also patient expectations, treatment setting, home environment, costs, other comorbidities, the availability of alternative therapies, and other data points or “independent variables.”  To use a crude but common analogy, it is indeed true that every person becomes his or her own laboratory, and should be viewed as such.

The more we look at patients this way, the further we get from clinical trials and the less relevant clinical trials become.  This is unfortunate, because—for better or for worse (I would vote for “worse”)—clinical trials have become the cornerstone of evidence-based psychiatry.  But a re-emphasis on anecdotes and individual cases is important.  Because in the end, it’s the individual who counts.  The individual resembles an N of 1 much more closely than he or she resembles an N of 200, and that’s probably the most important evidence we need to keep in mind.


Did The APA Miss A Defining Moment?

April 1, 2012

Sometimes an organization or individual facing a potential public-relations disaster can use the incident to send a powerful message and to change the way that organization or individual is perceived.   I wonder whether the American Psychiatric Association (APA) may have missed its opportunity to do exactly that.

Several weeks ago, the CBS news program 60 Minutes ran a story with the provocative argument that antidepressants are no better than placebo.  Reporter Lesley Stahl highlighted the work of Irving Kirsch, a psychologist who has studied the placebo effect for decades.  He has concluded that most, and maybe all, of the benefit of antidepressants can be attributed to placebo.  Simply put, they work because patients (and their doctors) expect them to work.

Since then, the psychiatric establishment has offered several counterarguments.  All have placed psychiatry squarely on the defensive.  One psychiatrist (Michael Thase), interviewed on the CBS program, defended antidepressants, arguing that Kirsch “is confusing the results of studies with what goes on in practice.”  Alan Schatzberg, past APA president and former Stanford chairman, said at a conference last weekend (where he spoke about “new antidepressants”) that the APA executive committee was “outraged” at the story, glibly remarking, “In this nation, if you can attack a psychiatrist, you win a medal.”  The leadership of the APA has mounted an aggressive defense, too.  Incoming APA president and Columbia chairman Jeffrey Lieberman called Kirsch “mistaken and confused, … ideologically based, [and] … just plain wrong.”  Similarly, current APA president John Oldham called the story “irresponsible and dangerous [and] … at odds with common clinical experience.”

These are indeed strong words.  But they raise one very important question:  who or what exactly are these spokesmen defending?  Patients?  Psychiatrists?  Drugs?  It would seem to me that the leadership of a professional medical organization should be defending good patient care, or at the very least, greater opportunities for its members to provide good patient care.  The arguments put forth by APA leadership, however, seem to be defending none of the above.  Instead, they seem to be defending antidepressants.

For the purposes of this post, I won’t weigh in on the question of whether antidepressants work or not.  It’s a complicated issue with no easy answer (we’ll offer some insight in the May issue of the Carlat Psychiatry Report).  However, let’s just assume that the general public now has good reason to believe that current antidepressants are essentially worthless, thanks to the 60 Minutes story (not to mention—just a few weeks earlier—a report on NPR’s “Morning Edition,” as well as a two-part series by Marcia Angell in the New York Review of Books last summer).  Justifiably or not, our patients will be skeptical of psychopharmacology going forward.  If we psychiatrists are hell-bent on defending antidepressants, we’d better have even stronger reasons for doing so than simply “we know they work.”

But why are psychiatrists defending antidepressants in the first place?  If anyone should be defending antidepressants, it should be the drug companies, not psychiatrists.  Why didn’t 60 Minutes interview a Lilly medical expert to explain how they did the initial studies of Prozac, or a Pfizer scientist to explain why patients should be put on Pristiq?  (Now that would have been fun!!)  I would have loved to hear Michael Thase—or anyone from the psychiatric establishment—say to Lesley Stahl:

“You know, Dr. Kirsch might just be onto something.  His research is telling us that maybe antidepressants really don’t work as well as we once thought.  As a result, we psychiatrists want drug companies to do better studies on their drugs before approval, and stop marketing their drugs so aggressively to us—and to our patients—until they can show us better data.  In the meantime we want to get paid to provide therapy along with—or instead of—medications, and we hope that the APA puts more of an emphasis on non-biological treatments for depression in the future.”

Wouldn’t that have been great?  For those of us (like me) who think the essence of depression is far more than faulty biology to be corrected with a pill, it would have been very refreshing to hear.  Moreover, it would help our field to reclaim some of the “territory” we’ve been abdicating to others (therapists, psychologists, social workers)—territory that may ultimately be shown to be more relevant for most patients than drugs.  (By the way, I don’t mean to drive a wedge between psychiatry and these other specialties, as I truly believe we can coexist and complement each other.  But as I wrote in my last post, psychiatry really needs to stand up for something, and this would have been a perfect opportunity to do exactly that.)

To his credit, Dr. Oldham wrote an editorial two weeks ago in Psychiatric News (the APA’s weekly newsletter) explaining that he was asked to contribute to the 60 Minutes piece, but CBS canceled his interview at the last minute.  He wrote a response but CBS refused to post it on its website (the official APA response can be found here).  Interestingly, he went on to acknowledge that “good care” (i.e., whatever works) is what our patients need, and also conceded that, at least for “milder forms of depression,” the “nonspecific [placebo] effect dwarfs the specific [drug] effect.”

I think the APA would have a pretty powerful argument if it emphasized this message (i.e., that the placebo effect might be much greater than we believe, and that we should study this more closely—maybe even harness it for the sake of our patients) over what sounds like a knee-jerk defense of drugs.  It’s a message that would demand better science, prioritize our patients’ well-being, and perhaps even reduce treatment costs in the long run.  If, instead, we call “foul” on anyone who criticizes medications, not only do we send the message that we put our faith in only one form of therapy (out of many), but we also become de facto spokespersons for the pharmaceutical industry.  If the APA wants to change that perception among the general public, this would be a great place to start.


Whatever Works?

January 29, 2012

My iPhone’s Clock Radio app wakes me each day to the live stream of National Public Radio.  Last Monday morning, I emerged from my post-weekend slumber to hear Alix Spiegel’s piece on the serotonin theory of depression.  In my confused, half-awake state, I heard Joseph Coyle, professor of psychiatry at Harvard, remark: “the ‘chemical imbalance’ is sort of last-century thinking; it’s much more complicated than that.”

Was I dreaming?  It was, admittedly, a surreal experience.  It’s not every day that I wake up to the voice of an Ivy League professor lecturing me in psychiatry (those days are long over, thank God).  Nor did I ever expect a national news program to challenge existing psychiatric dogma.  As I cleared my eyes, though, I realized this was the real deal.  And it was refreshing, because this is what many of us have been thinking all along.  The serotonin hypothesis of depression is kaput.

Understandably, this story has received lots of attention (see here and here and here and here and here).  I don’t want to jump on the “I-told-you-so” bandwagon, but instead to offer a slightly different perspective.

A few disclaimers:  first and foremost, I agree that the “chemical imbalance” theory has overrun our profession and has commandeered the public’s understanding of mental illness—so much so that the iconic image of the synaptic cleft containing its neurotransmitters has become ensconced in the national psyche.  Secondly, I do prescribe SSRIs (serotonin-reuptake inhibitors), plus lots of other psychiatric medications, which occasionally do work.  (And, in the interest of full disclosure, I’ve taken three of them myself.  They did nothing for me.)

To the extent that psychiatrists talk about “chemical imbalances,” I can see why this could be misconstrued as “lying” to patients.  Ronald Pies’ eloquent article in Psychiatric Times last summer describes the chemical-imbalance theory as “a kind of urban legend,” which no “knowledgeable, well-trained psychiatrist” would ever believe.  But that doesn’t matter.  Thanks to us, the word is out there.  The damage has already been done.  So why, then, do psychiatrists (even the “knowledgeable, well-trained” ones) continue to prescribe SSRI antidepressants to patients?

Because they work.

Okay, maybe not 100% of the time.  Maybe not even 40% of the time, according to antidepressant drug trials like STAR*D.  Experience shows, however, that they work often enough for patients to come back for more.  In fact, when discussed in the right context, with their potential side effects described in detail, and when prescribed by a compassionate, apparently intelligent, and trusted professional, antidepressants probably “work” far more often than they do in the drug trials.

But does that make it right to prescribe them?  Ah, that’s an entirely different question.  Consider the following:  I may not agree with the serotonin theory, but if I prescribe an SSRI to a patient with depression, and they report a benefit, experience no obvious side effects, pay only $4/month for the medication, and (say) $50 for a monthly visit with me, is there anything wrong with that?  Plenty of doctors would say, no, not at all.  But what if my patient (justifiably so) doesn’t believe in the serotonin hypothesis and I prescribe anyway?  What if my patient experiences horrible side effects from the drug?  What if the drug costs $400/month instead of $4?  What if I charge the patient $300 instead of $50 for each return visit?  What if I decide to “augment” my patient’s SSRI with yet another serotonin agent, or an atypical antipsychotic, costing hundreds of dollars more and potentially causing yet more side effects?  Those are the aspects we don’t often think of, and they constitute the unfortunate “collateral damage” of the chemical-imbalance theory.

Indeed, something’s “working” when a patient reports success with an antidepressant; exactly why and how it “works” is less certain.  In my practice, I tell my patients that, at individual synapses, SSRIs probably increase extracellular serotonin levels (at least in the short-term), but we don’t know what that means for your whole brain, much less for your thoughts or behavior.  Some other psychiatrists say that “a serotonin boost might help your depression” or “this drug might act on pathways important for depression.”   Are those lies?  Jeffrey Lacasse and Jonathan Leo write that “telling a falsehood to patients … is a serious violation of informed consent.”  But the same could be said for psychotherapy, religion, tai chi, ECT, rTMS, Reiki, qigong, numerology, orthomolecular psychiatry, somatic re-experiencing, EMDR, self-help groups, AA, yoga, acupuncture, transcendental meditation, and Deplin.  We recommend these things all the time, not knowing exactly how they “work.”

Most of those examples are rather harmless and inexpensive (except for Deplin—it’s expensive), but, like antidepressants, all rest on shaky ground.  So maybe psychiatry’s problem is not the “falsehood” itself, but the consequences of that falsehood.  This faulty hypothesis has spawned an entire industry capitalizing on nothing more than an educated guess, costing our health care system untold millions of dollars, saddling huge numbers of patients with bothersome side effects (or possibly worse), and—most distressingly to me—giving people an incorrect and ultimately dehumanizing solution to their emotional problems.  (What’s dehumanizing about getting better, you might ask?  Well, nothing, except when “getting better” means giving up one’s ability to manage one’s own situation and instead attributing one’s success to a pill.)

Dr Pies’ article in Psychiatric Times closed with an admonition from psychiatrist Nassir Ghaemi:  “We must not be drawn into a haze of promiscuous eclecticism in our treatment; rather, we must be guided by well-designed studies and the best available evidence.”  That’s debatable.  If we wait for “evidence” for all sorts of interventions that, in many people, do help, we’ll never get anywhere.  A lack of “evidence” certainly hasn’t eliminated religion—or, for that matter, psychoanalysis—from the face of the earth.

Thus, faulty theory or not, there’s still a place for SSRI medications in psychiatry, because some patients swear by them.  Furthermore—and with all due respect to Dr Ghaemi—maybe we should be a bit more promiscuous in our eclecticism.  Medication therapy should be offered side-by-side with competent psychosocial treatments including, but not limited to, psychotherapy, group therapy, holistic-medicine approaches, family interventions, and job training and other social supports.  Patients’ preferences should always be respected, along with safeguards to protect patient safety and guard against excessive cost.  We may not have good scientific evidence for certain selections on this smorgasbord of options, but if patients keep coming back, report successful outcomes, and enter into meaningful and lasting recovery, that might be all the evidence we need.


Antidepressants: The New Candy?

August 9, 2011

It should come as no surprise to anyone paying attention to health care (not to mention modern American society) that antidepressants are very heavily prescribed.  They are, in fact, the second most widely prescribed class of medicine in America, with 253 million prescriptions written in 2010 alone.  Whether this means we are suffering from an epidemic of depression is another question.  In fact, a recent article questions whether we’re suffering from much of anything at all.

In the August issue of Health Affairs, Ramin Mojtabai and Mark Olfson present evidence that doctors are prescribing antidepressants at ever-higher rates.  Over a ten-year period (1996-2007), the percentage of all office visits to non-psychiatrists that included an antidepressant prescription rose from 4.1% to 8.8%.  The rates were even higher for primary care providers: from 6.2% to 11.5%.

But there’s more.  The investigators also found that in the majority of cases, antidepressants were given even in the absence of a psychiatric diagnosis.  In 1996, 59.5% of the antidepressant recipients lacked a psychiatric diagnosis.  In 2007, this number had increased to 72.7%.

In other words, nearly 3 out of 4 patients who visited a nonpsychiatrist and received a prescription for an antidepressant were not given a psychiatric diagnosis by that doctor.  Why might this be the case?  Well, as the authors point out, antidepressants are used off-label for a variety of conditions—fatigue, pain, headaches, PMS, irritability.  None of which have any good data supporting their use, mind you.
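For readers who want to check the arithmetic behind these figures, here is a quick Python sketch.  Only the percentages come from the Health Affairs article; the helper function and the rounding are just illustration.

```python
# Sanity-checking the Mojtabai & Olfson figures cited above.
# Only the published percentages are used; the article's underlying
# visit counts are not reproduced here.

def pct_change(start, end):
    """Relative increase between two percentages, in percent."""
    return (end - start) / start * 100

# Share of non-psychiatrist office visits that included an antidepressant
print(f"All non-psychiatrists: {pct_change(4.1, 8.8):.0f}% relative increase")   # ~115%
print(f"Primary care:          {pct_change(6.2, 11.5):.0f}% relative increase")  # ~85%

# Share of antidepressant recipients with no psychiatric diagnosis (2007)
no_dx = 72.7
print(f"{no_dx}% is roughly {round(no_dx / 25)} out of every 4 patients")
```

In other words, the rate of antidepressant prescribing by non-psychiatrists more than doubled over the decade, and the 72.7% figure is where the “nearly 3 out of 4” above comes from.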

It’s possible that nonpsychiatrists might add an antidepressant to someone’s medication regimen because they “seem” depressed or anxious.  It is also true that primary care providers do manage mental illness sometimes, particularly in areas where psychiatrists are in short supply.  But remember, in the majority of cases the doctors did not even give a psychiatric diagnosis, which suggests that even if they did a “psychiatric evaluation,” the evaluation was likely quick and haphazard.

And then, of course, there were probably some cases in which the primary care docs just continued medications that were originally prescribed by a psychiatrist—in which case perhaps they simply didn’t report a diagnosis.

But is any of this okay?  Some, like a psychiatrist quoted in a Wall Street Journal article on this report, argue that antidepressants are safe.  They’re unlikely to be abused, often effective (if only as a placebo), and dirt cheap (well, at least the generic SSRIs and TCAs are).  But others have had very real problems discontinuing them, or have suffered particularly troublesome side effects.

The increasingly indiscriminate use of antidepressants might also open the door to the (ab)use of other, more costly drugs with potentially more devastating side effects.  I continue to be amazed, for example, by the number of primary care docs who prescribe Seroquel (an antipsychotic) for insomnia, when multiple other pharmacologic and nonpharmacologic options are ignored.  In my experience, in the vast majority of these cases, the (well-known) risks of increased appetite and blood sugar were never discussed with the patient.  And then there are other antipsychotics like Abilify and Seroquel XR, which are increasingly being used in primary care as drugs to “augment” antidepressants and will probably be prescribed as freely as the antidepressants themselves.  (Case in point: a senior medical student was shocked when I told her a few days ago that Abilify is an antipsychotic.  “I always thought it was an antidepressant,” she remarked, “after seeing all those TV commercials.”)

For better or for worse, the increased use of antidepressants in primary care may prove to be yet another blow to the foundation of biological psychiatry.  Doctors prescribe—and continue to prescribe—these drugs because they “work.”  It’s probably more accurate, however, to say that doctors and patients think they work.  And this may have nothing to do with biology.  As the saying goes, it’s the thought that counts.

Anyway, if this is true—and you consider the fact that these drugs are prescribed on the basis of a rudimentary workup (remember, no diagnosis was given 72.7% of the time)—then the use of an antidepressant probably has no more justification than the addition of a multivitamin, the admonition to eat less red meat, or the suggestion to “get more fresh air.”

The bottom line: If we’re going to give out antidepressants like candy, then let’s treat them as such.  Too much candy can be a bad thing—something that primary care doctors can certainly understand.  So if our patients ask for candy, then we need to find a substitute—something equally soothing and comforting—or provide them instead with a healthy diet of interventions to address the real issues, rather than masking those problems with a treat to satisfy their sweet tooth and bring them back for more.


The Painful Truth of Antidepressants

April 25, 2011

In a study published today, scientists at Rockefeller University proclaim that SSRI antidepressants (like Prozac and Celexa) may lose their efficacy when given with anti-inflammatory drugs like ibuprofen and aspirin.  Considering the high prevalence of depression and the widespread use of both SSRIs and anti-inflammatory medications, this result is bound to receive much attention.  As a matter of fact, it’s tantalizing to jump to the conclusion (as has been done in the Fox News and WSJ reports on this study) that the reason SSRIs may be so ineffective is because so many people with depression also use non-steroidal anti-inflammatory drugs (NSAIDs).

By my read of the data, it may be a bit too early to draw this conclusion.  Nevertheless, the study, by Paul Greengard, Jennifer Warner-Schmidt, and their colleagues, and published online in the Proceedings of the National Academy of Sciences, does propose some interesting mechanisms by which anti-inflammatory agents may affect antidepressant action.

The majority of the work was performed in mice, for which there are valid “models” of depression that are routinely used in preclinical studies.  In past work, Greengard’s group has shown that the expression of a small protein called p11 (which is associated with the localization and function of serotonin receptors) is correlated with “antidepressant-like” responses in mice, and probably in humans, too.  In the present study, they demonstrate that the antidepressants Prozac and Celexa cause an increase in expression of p11 in the frontal cortex of mice, and, moreover, that p11 expression is mediated by the ability of these antidepressants to cause elevations in interferon-gamma (IFN-γ) and tumor necrosis factor-alpha (TNF-α).  In other words, antidepressants enhance neural expression of these cytokines, which, in turn, increases p11 activity.

However, when mice are given NSAIDs or an analgesic (i.e., ibuprofen, naproxen, aspirin, or Tylenol), this prevents the increase in p11, as well as the increase in IFN-γ and TNF-α.  NSAIDs also prevent the “antidepressant-like” behavioral responses elicited by Celexa (as well as other antidepressants like Wellbutrin, Parnate, and TCAs) in mouse models of depression.

The group went one step further and even created a p11 “knockout” mouse.  These mice had no response to Celexa, nor did they have antidepressant-like responses to injections of IFN-γ or TNF-α.  However, the p11 knockout mice did respond to desipramine, an antidepressant that works mainly on norepinephrine, thus emphasizing the significance of serotonin in the p11-mediated response.

What does this mean for humans?  To answer this question, the group analyzed data from STAR*D, a huge multicenter antidepressant trial.  In the first stage of STAR*D, all patients (total of approximately 1500 individuals) took Celexa for a 12-week period.  The remission rate for patients who took an NSAID at any time during this 12-week period was only 45%, while those who took no NSAID remitted at a rate of 55%.
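The post quotes only the remission rates, not the sizes of the two groups, so whether a 45%-vs-55% gap is statistically meaningful depends on numbers we don’t have here.  As a rough illustration only, the sketch below assumes a hypothetical split of 500 NSAID users and 1000 non-users (the actual split in the PNAS analysis may differ) and runs a standard two-proportion z-test.

```python
# Hypothetical illustration: the group sizes (500 vs 1000) are assumed,
# not taken from the study. Only the 45% and 55% remission rates come
# from the text above.
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(0.45, 500, 0.55, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Under these assumed group sizes the difference would easily reach conventional significance, but with a much smaller NSAID group the same 10-point gap could be unremarkable, which is one reason the raw rates alone tell us little.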

So does this mean that people taking antidepressants should avoid NSAIDs, and just deal with their pain?  Probably not. (In fact, one might ask the opposite question:  should people with chronic pain avoid SSRIs?  Unfortunately, the study did not look at whether SSRIs inhibited the pain-relieving effects of NSAIDs.)

In my opinion, some of the mouse data need to be interpreted carefully.  For instance, the mice received extremely high doses of NSAIDs (e.g., ibuprofen at 70 mg/kg/d, which corresponds to 4200 mg/d for a 60-kg man, or 21 Advil pills per day; similarly, the mice drinking aspirin received 210 mg/kg/d, or 12,600 mg = ~39 pills of regular-strength aspirin per day for a typical human).  Also, in the behavioral studies the mice received NSAIDs for an entire week but received only a single injection of Celexa (20 mg/kg, or about 1200 mg, 60 pills) immediately before the behavioral experiments.
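The pill counts above come from simple linear mg/kg scaling; here is a short Python sketch reproducing that arithmetic (straight scaling to a 60-kg adult with no allometric correction, matching the post, and assuming the usual pill strengths: 200-mg Advil, 325-mg regular-strength aspirin, 20-mg Celexa tablets).

```python
# Back-of-the-envelope mouse-to-human dose conversion, as in the text:
# linear mg/kg scaling to a 60-kg person (no allometric correction).

WEIGHT_KG = 60

def human_equivalent(mg_per_kg_per_day, pill_mg):
    """Return (total mg for a 60-kg person, number of pills that implies)."""
    total_mg = mg_per_kg_per_day * WEIGHT_KG
    return total_mg, total_mg / pill_mg

ibu_mg, ibu_pills = human_equivalent(70, 200)    # ibuprofen; Advil = 200 mg
asa_mg, asa_pills = human_equivalent(210, 325)   # aspirin; regular strength = 325 mg
cel_mg, cel_pills = human_equivalent(20, 20)     # citalopram; Celexa tablet = 20 mg

print(f"Ibuprofen: {ibu_mg:.0f} mg/day, about {ibu_pills:.0f} pills")   # 4200 mg, ~21 pills
print(f"Aspirin:   {asa_mg:.0f} mg/day, about {asa_pills:.0f} pills")   # 12600 mg, ~39 pills
print(f"Celexa:    {cel_mg:.0f} mg,     about {cel_pills:.0f} pills")   # 1200 mg, ~60 pills
```

Worth noting: linear mg/kg scaling tends to overstate human-equivalent doses for mice (body-surface-area scaling would shrink these numbers), but even with that caveat the doses used in the study were far above anything a patient would take.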

The human data, of course, are equally suspect.  Patients in the STAR*D study were counted as “NSAID users” if they described using NSAIDs even once in the first 12 weeks of the study.  It’s hard to see how the use of ibuprofen once or twice in a three-month period might interfere with someone’s daily Celexa.  (Not to mention the fact that the “remission” data from STAR*D have come under some scrutiny themselves – see here and here).  Moreover, as the authors point out, it’s quite likely that patients with more severe forms of depression also had concurrent pain syndromes and used NSAIDs more frequently.  In other words, NSAID use might not attenuate SSRI activity, but may be a sign of depression that is more resistant to SSRIs.

In the end, however, I find the study to be quite provocative.  Certainly the correlation of antidepressant effect with expression of the p11 protein and with TNF-α and IFN-γ activity suggests a novel mechanism of antidepressant action—as well as new markers for antidepressant activity.  Moreover, the potential roles of NSAIDs in reducing antidepressant effects (or, in some cases, enhancing them) need to be explored.

But it raises even more unanswered questions.  For one, how do we reconcile the fact that antidepressant effects are associated with increased TNF-α and IFN-γ activity in the brain, while increases in these cytokines in the periphery are thought to cause depression?  Also, how can we explain the fact that other analgesic compounds, such as tramadol and buprenorphine, might actually have an antidepressant effect?  Finally, what does this mean for our treatment of pain symptoms in depression?  Should we avoid SSRIs and use other types of antidepressants instead?  Do NSAIDs inhibit the effects of SNRIs like Cymbalta, which has recently been FDA-approved for the treatment of chronic musculoskeletal pain (and whose users are most certainly also taking medications like NSAIDs)?

It’s great that the interface between mental illness and physical syndromes is receiving some well-deserved attention.  It’s also exciting to see that the neuroscience and pharmacology of depression and pain may overlap in critical ways that influence how we will treat these disorders in the future.  It may also explain our failures up to now.  With future work in this area, studies like these will help us develop more appropriate antidepressant strategies for the “real world.”

[Finally, a “hat tip,” of sorts, to Fox News, which first alerted me to this article.  Unfortunately, the story, written by Dr. Manny Alvarez, was fairly low on substance but high on the “wow” factor.  It drew some broad conclusions and—my biggest pet peeve—did not refer the reader to any site or source to get more detailed information.  Alas, such is the case with much public science and medicine reporting: Alarm first, ask questions later.]


The Mythology of “Treatment-Resistant” Depression

February 27, 2011

“Treatment-resistant depression” is one of those clinical terms that has always been a bit unsettling to me.  Maybe I’m a pessimist, but when I hear this phrase, it reminds me that despite all the time, energy, and expense we have invested in understanding this all-too-common disease, we still have a long way to go.  Perhaps more troubling, the phrase also suggests an air of resignation or abandonment:  “We’ve tried everything, but you’re resistant to treatment, and there’s not much more we can do for you.”

But “everything” is a loaded term, and “treatment” takes many forms.  The term “treatment-resistant depression” first appeared in the literature in 1974 and has been used widely ever since.  (Incidentally, despite appearing over 20 times in the APA’s 2010 revised treatment guidelines for major depression, it is never actually defined.)  The phrase is often used to describe patients who have failed to respond to a certain number of antidepressant trials (typically two, each from a different class), each of a reasonable (6-12 week) duration, although many other definitions have emerged over the years.

Failure to respond to “adequate” trials of appropriate antidepressant medications does indeed suggest that a patient is resistant to those treatments, and the clinician should think of other ways to approach that patient’s condition.  In today’s psychiatric practice, however, “treatment-resistant” is often a code word for simply adding another medication (like an atypical antipsychotic) or considering somatic treatment options, such as electroconvulsive therapy (ECT) or transcranial magnetic stimulation (TMS).

Seen this way, it’s a fairly narrow view of “treatment.”  The psychiatric literature—not to mention years and years of anecdotal data—suggests that a broad range of interventions can be helpful in the management of depression, such as exercise, dietary supplements, mindfulness meditation, acupuncture, light therapy, and literally dozens of different psychotherapeutic approaches.  Call me obsessive, or pedantic, but to label someone’s depression as “treatment resistant” without an adequate trial of all of these approaches seems premature at best, and fatalistic at worst.

What if we referred to someone’s weight problem as “diet-resistant obesity”?  Sure, there are myriad “diets” out there, and some obese individuals have tried several and simply don’t lose weight.  But perhaps these patients simply haven’t found the right one for their psychological/endocrine makeup and motivational level; there are also some genetic and biochemical causes of obesity that prevent weight loss regardless of diet.  If we label someone as “diet-resistant,” we may overlook diets that would work, or ignore other ways of managing the condition.

Back to depression.  I recognize there’s not much of an evidence base for many of the potentially hundreds of different “cures” for depression in the popular and scientific literature.  And it would take far too much time to try them all.  Experienced clinicians will have seen plenty of examples of good antidepressant response to lithium, thyroid hormone, antipsychotics (such as Abilify), and somatic interventions like ECT.  But they have also seen failures with the exact same agents.

Unfortunately, our “decision tree” for assigning patients to different treatments is more like a dartboard than an evidence-based flowchart.  “Well, you’ve failed an SSRI and an SNRI, so let’s try an atypical,” goes the typical dialogue (not to mention the typical TV commercial or magazine ad), when we really should be trying to understand our patients at a deeper level in order to determine the ideal therapy for them.

Nevertheless, the “step therapy” requirements of insurance companies, as well as large multicenter NIH-sponsored trials (like the STAR*D trial) that focus primarily on medications, continue to bias clinicians and patients toward the next pill or the next biological intervention, instead of thinking about patients as individuals with biological, genetic, psychological, and social determinants of their conditions.  (Yes, I am aware that STAR*D had a cognitive therapy component, but it received little attention and was not widely chosen by study participants.)

Because in the long run, nobody is “treatment resistant”; they’re just resistant to what we’re currently offering them.


Viva Viibryd?

January 25, 2011

Well, what do you know… I turn my back for one second and now the FDA has gone ahead and approved another antidepressant.

This new one is vilazodone, made by the Massachusetts-based company Clinical Data, Inc.  It will be sold under the name Viibryd (which I have absolutely no idea how to pronounce, but I’m sure someone will tell me soon).

At first glance, vilazodone seems promising. It’s not exactly a “me-too” drug, a molecule similar in structure and function to something that already exists. Instead, it’s a “dual-action” antidepressant: a selective serotonin reuptake inhibitor and a partial agonist at serotonin 1A receptors. In other words, it does two things: it blocks the reuptake of serotonin into neurons (much like existing SSRIs such as Prozac, Zoloft, and Lexapro), and it acts as a partial agonist at a particular type of serotonin receptor called “1A.” A partial agonist is a molecule that binds to a receptor and activates it, but only partially: it produces a weaker response than a full agonist (like serotonin itself) would, while also preventing the full agonist from exerting its complete effect.

(Note: don’t let the name fool you. “Dual-action” agents are not “twice as effective” as other agents, and sometimes work just the same.)

If you buy the serotonin hypothesis of depression (closely derived from the “monoamine hypothesis”), then depression is caused by a deficiency in serotonin. SSRIs increase the amount of serotonin in the synapse between two cells. However, the higher levels of serotonin also act as “negative feedback” on the first-order (presynaptic) cell, throttling further release in order to keep the system in balance. (Our bodies do this all the time. If I keep yelling at you for no clear reason, you’ll rapidly “downregulate” your attention so that you don’t listen to me anymore. Neurons work this way, too.) The idea behind a 1A partial agonist is that it does only “part” of the work serotonin would do at that receptor: in effect, it blunts serotonin’s negative feedback, allowing serotonin release to increase even more.

Remember– that’s only if you agree that low serotonin is responsible for depression. And there are plenty of respectable people who just don’t buy this. After all, no one has convincingly shown a serotonin deficit in depression, and when SSRIs do work (which they do, remarkably well sometimes), they may be acting by a totally different mechanism we just don’t understand yet. Nevertheless, vilazodone did show a significant effect as early as the first week, an effect that persisted through the full eight weeks.

Specifically, a phase III trial of 410 adults with depression showed significantly greater decreases in MADRS and HAM-D scores relative to placebo, as well as improvements on the CGI-I, CGI-S, and HAM-A scales, with a decrease in MADRS score from a mean of 30.8 at baseline to about 18 at the 8-week timepoint (the placebo group showed a decrease of about 10 points). A similar decrease was seen in the HAM-D. As is typical with these studies, the phase III trial did not compare vilazodone to an existing drug. However, unpublished phase II trials did compare it to fluoxetine (Prozac) and citalopram (Celexa), and to placebo, and results show that the drugs were comparable (and placebo response rates were high, as high as 40% in some trials). Incidentally, 9.3% of patients in the phase III trial dropped out due to adverse effects, mainly diarrhea.
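The arithmetic behind those numbers is worth making explicit, because what matters clinically is not the drug’s improvement from baseline but the drug–placebo difference. Here is a minimal sketch of that calculation using the approximate figures quoted above (the ~10-point placebo change is my rounding of the trial’s reported data, not an exact published value):

```python
# Rough calculation of the drug-placebo difference in MADRS change,
# using the approximate phase III figures quoted above.
baseline_madrs = 30.8     # mean MADRS at baseline
drug_endpoint = 18.0      # approximate mean MADRS on vilazodone at 8 weeks
placebo_change = 10.0     # approximate 8-week improvement on placebo

drug_change = baseline_madrs - drug_endpoint        # ~12.8 points
drug_vs_placebo = drug_change - placebo_change      # ~2.8 points

print(f"Improvement on drug:     {drug_change:.1f} MADRS points")
print(f"Improvement on placebo:  {placebo_change:.1f} MADRS points")
print(f"Drug-placebo difference: {drug_vs_placebo:.1f} MADRS points")
```

In other words, most of the observed improvement occurred in the placebo arm as well; the statistically significant drug effect is a difference of only a few MADRS points, which is why high placebo response rates (up to 40% in some trials) matter so much when interpreting these results.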

So is a blockbuster in the works? Well, it’s not quite as “new” as one would think. SSRIs have been in widespread use for years, and there’s already a serotonin 1A partial agonist available called BuSpar (generic = buspirone) which is sort of a “ho-hum” drug– effective for some, but nothing to get too excited about. It seems that one could make “homemade” vilazodone by combining buspirone with an SSRI. (Kids, don’t try this at home. Please consult an expert.) This is a fairly common combination, although most psychiatrists have been underwhelmed by buspirone’s efficacy (one of my teachers called it “holy water”). Maybe vilazodone will convince me otherwise.

To go back to my original question, do we really need this? My gut reaction is no, as it seems too similar to what we already have available. There may be a small group of treatment-resistant depressed patients for whom vilazodone will be a wonder drug, a true lifesaver. In an attempt to discover this small group, the manufacturer is simultaneously studying “biomarkers that may predict treatment response.” In other words, they’re looking for genetic “fingerprints” that might predict patients who will respond to their drug (or who will get side effects). They have no “hits” yet (one of the markers they studied in phase III proved to have no predictive value in a follow-up trial), but it’s appealing to think that we might get more data on how to use– or avoid– this new drug more wisely.

While it’s good to have more tools in our toolkit, I sincerely hope this doesn’t turn into yet another in a long line of medications that we give to depressed patients in the trial-and-error process that unfortunately characterizes a lot of depression management. What’s truly needed is not just another serotonin agent, but a guideline (like a genetic test) to predict who’s likely to respond, or, better yet, a more sophisticated understanding of what’s happening in the minds of “depressed” patients. (And the differences among depressed patients far outweigh their similarities.) Until then, we’ll just be making incremental progress toward an elusive goal.

